Nov 28 06:52:24 crc systemd[1]: Starting Kubernetes Kubelet...
Nov 28 06:52:24 crc restorecon[4811]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 06:52:24 crc restorecon[4811]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 06:52:24 crc restorecon[4811]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 28 06:52:24 crc restorecon[4811]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 28 06:52:24 crc restorecon[4811]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:24 crc restorecon[4811]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:24 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 06:52:25 crc restorecon[4811]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 
06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:52:25 crc 
restorecon[4811]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 06:52:25 crc restorecon[4811]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Nov 28 06:52:25 crc restorecon[4811]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 28 06:52:25 crc restorecon[4811]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 06:52:25 crc restorecon[4811]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 
06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:52:25 crc restorecon[4811]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:52:25 crc restorecon[4811]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
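Note on the long run of "not reset as customized by admin" messages above: this is expected restorecon behavior, not an error. Types listed in the loaded policy's customizable_types file (container_file_t among them on RHEL-family systems with container-selinux) are treated as deliberate admin customizations and are skipped unless a forced relabel is requested. A minimal shell sketch for inspecting this, assuming the usual targeted-policy paths; the kubelet directory is the one from the log:

# List the SELinux types restorecon treats as admin customizations (targeted policy):
cat /etc/selinux/targeted/contexts/customizable_types
# Dry-run a forced relabel: -F also resets customizable types, -n changes nothing,
# -R recurses, -v reports what would change:
restorecon -RFnv /var/lib/kubelet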
Nov 28 06:52:25 crc kubenswrapper[4946]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 28 06:52:25 crc kubenswrapper[4946]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Nov 28 06:52:25 crc kubenswrapper[4946]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 28 06:52:25 crc kubenswrapper[4946]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 28 06:52:25 crc kubenswrapper[4946]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Nov 28 06:52:25 crc kubenswrapper[4946]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.775255 4946 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
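The deprecation warnings above all point at the same fix: moving these flags into the KubeletConfiguration file passed via --config. A minimal sketch of that migration, assuming kubelet v1.27+ (when containerRuntimeEndpoint became a config field); the file path, socket, taint, and resource values below are illustrative placeholders, not values taken from this node:

# Hypothetical config-file equivalents of the deprecated flags
# (field names per the kubelet.config.k8s.io/v1beta1 API; values are examples):
cat <<'EOF' > /etc/kubernetes/kubelet-config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec
registerWithTaints:
- key: node-role.kubernetes.io/master
  effect: NoSchedule
systemReserved:
  cpu: "500m"
  memory: "1Gi"
# --minimum-container-ttl-duration has no direct config field; the warning
# recommends eviction thresholds instead:
evictionHard:
  memory.available: "100Mi"
EOF
kubelet --config /etc/kubernetes/kubelet-config.yaml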
Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782286 4946 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782326 4946 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782337 4946 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782348 4946 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782358 4946 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782367 4946 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782377 4946 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782388 4946 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782400 4946 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782411 4946 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782420 4946 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782437 4946 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782446 4946 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782455 4946 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782497 4946 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782506 4946 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782515 4946 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782523 4946 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782531 4946 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782540 4946 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782548 4946 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782556 4946 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782564 4946 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782573 4946 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782581 4946 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782589 4946 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782598 4946 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782606 4946 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782614 4946 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782623 4946 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782631 4946 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782639 4946 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782647 4946 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782656 4946 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782665 4946 feature_gate.go:330] 
unrecognized feature gate: GCPClusterHostedDNS Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782673 4946 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782681 4946 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782693 4946 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782702 4946 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782711 4946 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782719 4946 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782731 4946 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782741 4946 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782752 4946 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782761 4946 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782771 4946 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782782 4946 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782793 4946 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782805 4946 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782817 4946 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782827 4946 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782836 4946 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782844 4946 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782853 4946 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782861 4946 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782870 4946 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782878 4946 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782886 4946 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782894 4946 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782902 4946 feature_gate.go:330] unrecognized feature gate: Example Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782910 4946 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782919 4946 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782927 4946 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782940 4946 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782949 4946 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782958 4946 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782968 4946 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782976 4946 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782985 4946 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.782993 4946 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.783001 4946 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.783557 4946 flags.go:64] FLAG: --address="0.0.0.0" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.783579 4946 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.783595 4946 flags.go:64] FLAG: --anonymous-auth="true" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.783608 4946 flags.go:64] FLAG: --application-metrics-count-limit="100" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.783620 4946 flags.go:64] FLAG: --authentication-token-webhook="false" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.783631 4946 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.783643 4946 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.783656 4946 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.783669 4946 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.783682 4946 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.783695 4946 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.783708 4946 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.783724 4946 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.783738 4946 flags.go:64] FLAG: --cgroup-root="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.783749 4946 flags.go:64] FLAG: --cgroups-per-qos="true" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.783761 4946 flags.go:64] FLAG: --client-ca-file="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.783773 4946 flags.go:64] FLAG: --cloud-config="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.783785 4946 flags.go:64] FLAG: --cloud-provider="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.783798 4946 flags.go:64] FLAG: --cluster-dns="[]" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.783814 4946 flags.go:64] FLAG: --cluster-domain="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.783826 4946 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.783839 4946 flags.go:64] FLAG: --config-dir="" Nov 28 06:52:25 
crc kubenswrapper[4946]: I1128 06:52:25.783851 4946 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.783864 4946 flags.go:64] FLAG: --container-log-max-files="5" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.783881 4946 flags.go:64] FLAG: --container-log-max-size="10Mi" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.783893 4946 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.783906 4946 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.783918 4946 flags.go:64] FLAG: --containerd-namespace="k8s.io" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.783930 4946 flags.go:64] FLAG: --contention-profiling="false" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.783943 4946 flags.go:64] FLAG: --cpu-cfs-quota="true" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.783956 4946 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.783969 4946 flags.go:64] FLAG: --cpu-manager-policy="none" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.783981 4946 flags.go:64] FLAG: --cpu-manager-policy-options="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.783997 4946 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784009 4946 flags.go:64] FLAG: --enable-controller-attach-detach="true" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784023 4946 flags.go:64] FLAG: --enable-debugging-handlers="true" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784035 4946 flags.go:64] FLAG: --enable-load-reader="false" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784048 4946 flags.go:64] FLAG: --enable-server="true" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784061 4946 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784076 4946 flags.go:64] FLAG: --event-burst="100" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784090 4946 flags.go:64] FLAG: --event-qps="50" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784102 4946 flags.go:64] FLAG: --event-storage-age-limit="default=0" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784114 4946 flags.go:64] FLAG: --event-storage-event-limit="default=0" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784126 4946 flags.go:64] FLAG: --eviction-hard="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784153 4946 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784165 4946 flags.go:64] FLAG: --eviction-minimum-reclaim="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784178 4946 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784196 4946 flags.go:64] FLAG: --eviction-soft="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784213 4946 flags.go:64] FLAG: --eviction-soft-grace-period="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784225 4946 flags.go:64] FLAG: --exit-on-lock-contention="false" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784237 4946 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784249 4946 
flags.go:64] FLAG: --experimental-mounter-path="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784261 4946 flags.go:64] FLAG: --fail-cgroupv1="false" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784272 4946 flags.go:64] FLAG: --fail-swap-on="true" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784283 4946 flags.go:64] FLAG: --feature-gates="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784297 4946 flags.go:64] FLAG: --file-check-frequency="20s" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784308 4946 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784321 4946 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784332 4946 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784343 4946 flags.go:64] FLAG: --healthz-port="10248" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784355 4946 flags.go:64] FLAG: --help="false" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784366 4946 flags.go:64] FLAG: --hostname-override="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784376 4946 flags.go:64] FLAG: --housekeeping-interval="10s" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784388 4946 flags.go:64] FLAG: --http-check-frequency="20s" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784399 4946 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784411 4946 flags.go:64] FLAG: --image-credential-provider-config="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784422 4946 flags.go:64] FLAG: --image-gc-high-threshold="85" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784434 4946 flags.go:64] FLAG: --image-gc-low-threshold="80" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784445 4946 flags.go:64] FLAG: --image-service-endpoint="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784456 4946 flags.go:64] FLAG: --kernel-memcg-notification="false" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784505 4946 flags.go:64] FLAG: --kube-api-burst="100" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784517 4946 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784529 4946 flags.go:64] FLAG: --kube-api-qps="50" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784541 4946 flags.go:64] FLAG: --kube-reserved="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784552 4946 flags.go:64] FLAG: --kube-reserved-cgroup="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784581 4946 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784593 4946 flags.go:64] FLAG: --kubelet-cgroups="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784605 4946 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784617 4946 flags.go:64] FLAG: --lock-file="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784630 4946 flags.go:64] FLAG: --log-cadvisor-usage="false" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784643 4946 flags.go:64] FLAG: --log-flush-frequency="5s" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784656 4946 flags.go:64] FLAG: --log-json-info-buffer-size="0" Nov 28 
06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784675 4946 flags.go:64] FLAG: --log-json-split-stream="false" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784686 4946 flags.go:64] FLAG: --log-text-info-buffer-size="0" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784701 4946 flags.go:64] FLAG: --log-text-split-stream="false" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784712 4946 flags.go:64] FLAG: --logging-format="text" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784724 4946 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784737 4946 flags.go:64] FLAG: --make-iptables-util-chains="true" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784748 4946 flags.go:64] FLAG: --manifest-url="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784760 4946 flags.go:64] FLAG: --manifest-url-header="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784775 4946 flags.go:64] FLAG: --max-housekeeping-interval="15s" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784787 4946 flags.go:64] FLAG: --max-open-files="1000000" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784802 4946 flags.go:64] FLAG: --max-pods="110" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784813 4946 flags.go:64] FLAG: --maximum-dead-containers="-1" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784826 4946 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784838 4946 flags.go:64] FLAG: --memory-manager-policy="None" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784849 4946 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784861 4946 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784873 4946 flags.go:64] FLAG: --node-ip="192.168.126.11" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784886 4946 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784912 4946 flags.go:64] FLAG: --node-status-max-images="50" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784924 4946 flags.go:64] FLAG: --node-status-update-frequency="10s" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784936 4946 flags.go:64] FLAG: --oom-score-adj="-999" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784948 4946 flags.go:64] FLAG: --pod-cidr="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784959 4946 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784978 4946 flags.go:64] FLAG: --pod-manifest-path="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.784989 4946 flags.go:64] FLAG: --pod-max-pids="-1" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.785006 4946 flags.go:64] FLAG: --pods-per-core="0" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.785017 4946 flags.go:64] FLAG: --port="10250" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.785030 4946 flags.go:64] FLAG: --protect-kernel-defaults="false" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.785041 4946 flags.go:64] FLAG: 
--provider-id="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.785052 4946 flags.go:64] FLAG: --qos-reserved="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.785065 4946 flags.go:64] FLAG: --read-only-port="10255" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.785077 4946 flags.go:64] FLAG: --register-node="true" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.785089 4946 flags.go:64] FLAG: --register-schedulable="true" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.785101 4946 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.785156 4946 flags.go:64] FLAG: --registry-burst="10" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.785168 4946 flags.go:64] FLAG: --registry-qps="5" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.785180 4946 flags.go:64] FLAG: --reserved-cpus="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.785191 4946 flags.go:64] FLAG: --reserved-memory="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.785210 4946 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.785222 4946 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.785234 4946 flags.go:64] FLAG: --rotate-certificates="false" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.785246 4946 flags.go:64] FLAG: --rotate-server-certificates="false" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.785257 4946 flags.go:64] FLAG: --runonce="false" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.785270 4946 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.785282 4946 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.785295 4946 flags.go:64] FLAG: --seccomp-default="false" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.785306 4946 flags.go:64] FLAG: --serialize-image-pulls="true" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.785318 4946 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.785331 4946 flags.go:64] FLAG: --storage-driver-db="cadvisor" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.785345 4946 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.785358 4946 flags.go:64] FLAG: --storage-driver-password="root" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.785372 4946 flags.go:64] FLAG: --storage-driver-secure="false" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.785386 4946 flags.go:64] FLAG: --storage-driver-table="stats" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.785399 4946 flags.go:64] FLAG: --storage-driver-user="root" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.785412 4946 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.785425 4946 flags.go:64] FLAG: --sync-frequency="1m0s" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.785438 4946 flags.go:64] FLAG: --system-cgroups="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.785455 4946 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.785522 4946 flags.go:64] FLAG: 
--system-reserved-cgroup="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.785536 4946 flags.go:64] FLAG: --tls-cert-file="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.785548 4946 flags.go:64] FLAG: --tls-cipher-suites="[]" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.785565 4946 flags.go:64] FLAG: --tls-min-version="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.785578 4946 flags.go:64] FLAG: --tls-private-key-file="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.785590 4946 flags.go:64] FLAG: --topology-manager-policy="none" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.785603 4946 flags.go:64] FLAG: --topology-manager-policy-options="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.785615 4946 flags.go:64] FLAG: --topology-manager-scope="container" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.785627 4946 flags.go:64] FLAG: --v="2" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.785644 4946 flags.go:64] FLAG: --version="false" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.785660 4946 flags.go:64] FLAG: --vmodule="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.785675 4946 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.785687 4946 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786008 4946 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786032 4946 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786046 4946 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786060 4946 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786072 4946 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786082 4946 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786093 4946 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786103 4946 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786114 4946 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786124 4946 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786135 4946 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786146 4946 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786156 4946 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786172 4946 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
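The flags.go:64 block above is the kubelet enumerating every registered command-line flag with its effective value, defaults included, which is why runtime-irrelevant entries such as --containerd and the cadvisor --storage-driver-* flags appear alongside the handful actually set on this host. The kubelet registers its flags through the spf13/pflag fork, but the pattern is the same as the standard library's; a minimal sketch:

    package main

    import (
        "flag"
        "fmt"
    )

    func main() {
        // A couple of stand-in flags; the kubelet registers hundreds.
        addr := flag.String("address", "0.0.0.0", "bind address")
        port := flag.Int("port", 10250, "serving port")
        flag.Parse()
        _, _ = addr, port

        // VisitAll walks every registered flag, set or not, so the
        // startup log captures effective values, not only overrides.
        flag.VisitAll(func(f *flag.Flag) {
            fmt.Printf("FLAG: --%s=%q\n", f.Name, f.Value.String())
        })
    }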
Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786187 4946 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786200 4946 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786212 4946 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786224 4946 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786242 4946 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786253 4946 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786267 4946 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786278 4946 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786289 4946 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786301 4946 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786313 4946 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786323 4946 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786334 4946 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786344 4946 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786354 4946 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786365 4946 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786375 4946 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786385 4946 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786396 4946 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786406 4946 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786416 4946 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786427 4946 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786437 4946 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786447 4946 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786458 4946 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786519 4946 feature_gate.go:330] unrecognized feature gate: 
SigstoreImageVerification Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786532 4946 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786544 4946 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786555 4946 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786566 4946 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786577 4946 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786587 4946 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786599 4946 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786609 4946 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786620 4946 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786631 4946 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786644 4946 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786656 4946 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786667 4946 feature_gate.go:330] unrecognized feature gate: Example Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786677 4946 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786688 4946 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786703 4946 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786714 4946 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786725 4946 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786735 4946 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786745 4946 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786755 4946 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786770 4946 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786784 4946 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786795 4946 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786810 4946 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. 
It will be removed in a future release. Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786823 4946 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786836 4946 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786847 4946 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786859 4946 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786870 4946 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.786882 4946 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.786912 4946 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.802775 4946 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.802851 4946 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803100 4946 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803131 4946 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803146 4946 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803160 4946 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803174 4946 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803185 4946 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803196 4946 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803207 4946 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803217 4946 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803228 4946 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803238 4946 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803249 4946 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803259 4946 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803270 4946 
feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803280 4946 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803291 4946 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803301 4946 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803312 4946 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803323 4946 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803334 4946 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803345 4946 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803356 4946 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803367 4946 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803377 4946 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803388 4946 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803404 4946 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803423 4946 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803455 4946 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
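Each pass over the gate list ends in a feature_gate.go:386 summary like the one above, which is just Go's %v rendering of the effective map: fifteen known gates here, with the four warned-about overrides (CloudDualStackNodeIPs, DisableKubeletCloudCredentialProviders, KMSv1, ValidatingAdmissionPolicy) set to true. If you need that effective set programmatically, scraping the summary line back into a map is straightforward; a hedged sketch:

    package main

    import (
        "fmt"
        "strings"
    )

    // parseGates recovers a map from the "feature gates: {map[K:v ...]}"
    // summary format seen at feature_gate.go:386.
    func parseGates(line string) map[string]bool {
        start := strings.Index(line, "map[")
        end := strings.LastIndex(line, "]")
        if start < 0 || end < start {
            return nil
        }
        gates := map[string]bool{}
        for _, field := range strings.Fields(line[start+len("map["):end]) {
            name, val, ok := strings.Cut(field, ":")
            if ok {
                gates[name] = val == "true"
            }
        }
        return gates
    }

    func main() {
        line := `feature gates: {map[CloudDualStackNodeIPs:true KMSv1:true NodeSwap:false]}`
        fmt.Println(parseGates(line))
    }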
Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803499 4946 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803511 4946 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803521 4946 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803532 4946 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803541 4946 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803551 4946 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803561 4946 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803572 4946 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803583 4946 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803594 4946 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803604 4946 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803614 4946 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803626 4946 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803637 4946 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803649 4946 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803659 4946 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803670 4946 feature_gate.go:330] unrecognized feature gate: Example Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803680 4946 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803690 4946 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803701 4946 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803716 4946 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803729 4946 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803741 4946 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803752 4946 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803764 4946 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803780 4946 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803794 4946 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803806 4946 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803818 4946 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803830 4946 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803840 4946 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803850 4946 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803861 4946 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803871 4946 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803881 4946 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803907 4946 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803918 4946 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803930 4946 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803941 4946 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803951 4946 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803961 4946 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803971 4946 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.803981 4946 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.804000 4946 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false 
ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.804512 4946 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.804534 4946 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.804547 4946 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.804563 4946 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.804580 4946 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.804593 4946 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.804604 4946 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.804615 4946 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.804626 4946 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.804637 4946 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.804647 4946 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.804657 4946 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.804668 4946 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.804679 4946 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.804693 4946 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.804706 4946 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.804717 4946 feature_gate.go:330] unrecognized feature gate: Example Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.804732 4946 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.804743 4946 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.804754 4946 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.804764 4946 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.804775 4946 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.804785 4946 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.804795 4946 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.804807 4946 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.804826 4946 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.804840 4946 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.804869 4946 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.804882 4946 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.804893 4946 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.804906 4946 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.804918 4946 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.804929 4946 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.804940 4946 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.804951 4946 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.804962 4946 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.804974 4946 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.804985 4946 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.804996 4946 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.805008 4946 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 28 06:52:25 crc kubenswrapper[4946]: 
W1128 06:52:25.805023 4946 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.805036 4946 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.805047 4946 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.805057 4946 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.805069 4946 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.805079 4946 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.805090 4946 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.805101 4946 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.805111 4946 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.805149 4946 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.805160 4946 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.805170 4946 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.805180 4946 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.805190 4946 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.805200 4946 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.805210 4946 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.805221 4946 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.805232 4946 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.805242 4946 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.805252 4946 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.805262 4946 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.805273 4946 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.805284 4946 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.805311 4946 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.805321 4946 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.805332 4946 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS 
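Just below, the kubelet reports client certificate rotation: the current client certificate expires 2026-02-24, the certificate manager picks a jittered rotation deadline of 2025-12-17 (Kubernetes certificate managers schedule rotation at roughly 70-90% of the validity window, leaving margin before expiry), and then waits ~458h for that deadline. To inspect the same certificate by hand, a stdlib sketch reading the path that certificate_store.go logs:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        // Path reported by certificate_store.go in the log below.
        data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
        if err != nil {
            log.Fatal(err)
        }
        // The file holds key and cert blocks; pick out the certificate.
        for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
            if block.Type != "CERTIFICATE" {
                continue
            }
            cert, err := x509.ParseCertificate(block.Bytes)
            if err != nil {
                log.Fatal(err)
            }
            fmt.Printf("expires %s (in %s)\n",
                cert.NotAfter.Format(time.RFC3339),
                time.Until(cert.NotAfter).Round(time.Hour))
        }
    }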
Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.805344 4946 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.805355 4946 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.805365 4946 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.805376 4946 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.805386 4946 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.805402 4946 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.805961 4946 server.go:940] "Client rotation is on, will bootstrap in background"
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.814331 4946 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.814582 4946 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.815606 4946 server.go:997] "Starting client certificate rotation"
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.815650 4946 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.815881 4946 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-17 09:24:49.188668632 +0000 UTC
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.816015 4946 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 458h32m23.37265874s for next certificate rotation
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.823119 4946 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.826321 4946 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.838219 4946 log.go:25] "Validated CRI v1 runtime API"
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.859650 4946 log.go:25] "Validated CRI v1 image API"
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.862664 4946 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.866915 4946 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-11-28-06-42-44-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.866996 4946 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.894950 4946 manager.go:217] Machine: {Timestamp:2025-11-28 06:52:25.892904806 +0000 UTC m=+0.270969987 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654120448 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:99c18e26-a017-426b-bf43-1a9a83f35f94 BootID:c74aee61-da50-44cb-be0e-1b7c62358c2a Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:28:a6:75 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:28:a6:75 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:11:5f:6c Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:85:ea:06 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:7c:5e:89 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:ce:ca:b9 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:71:b2:43 Speed:-1 Mtu:1496} {Name:ens7.44 MacAddress:52:54:00:ba:58:35 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:ce:d0:d7:59:a8:bc Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:ca:45:23:30:f9:e5 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654120448 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.895337 4946 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.895875 4946 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.896410 4946 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.896820 4946 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.896885 4946 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.897287 4946 topology_manager.go:138] "Creating topology manager with none policy"
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.897308 4946 container_manager_linux.go:303] "Creating device plugin manager"
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.897663 4946 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.897735 4946 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.898258 4946 state_mem.go:36] "Initialized new in-memory state store"
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.898408 4946 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.899384 4946 kubelet.go:418] "Attempting to sync node with API server"
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.899417 4946 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.899459 4946 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.899518 4946 kubelet.go:324] "Adding apiserver pod source"
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.899542 4946 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.906864 4946 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.907063 4946 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.2:6443: connect: connection refused
Nov 28 06:52:25 crc kubenswrapper[4946]: E1128 06:52:25.907237 4946 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.2:6443: connect: connection refused" logger="UnhandledError"
Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.907341 4946 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.2:6443: connect: connection refused
Nov 28 06:52:25 crc kubenswrapper[4946]: E1128 06:52:25.907558 4946 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.2:6443: connect: connection refused" logger="UnhandledError"
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.907597 4946 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.908602 4946 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.909639 4946 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.909687 4946 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.909702 4946 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.909716 4946 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.909739 4946 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.909753 4946 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.909766 4946 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.909789 4946 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.909805 4946 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.909821 4946 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.909840 4946 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.909854 4946 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.910112 4946 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.910880 4946 server.go:1280] "Started kubelet"
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.911278 4946 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.2:6443: connect: connection refused
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.911310 4946 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.911316 4946 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Nov 28 06:52:25 crc systemd[1]: Started Kubernetes Kubelet.
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.913366 4946 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.914419 4946 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.914498 4946 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.915214 4946 server.go:460] "Adding debug handlers to kubelet server"
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.915332 4946 volume_manager.go:287] "The desired_state_of_world populator starts"
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.915362 4946 volume_manager.go:289] "Starting Kubelet Volume Manager"
Nov 28 06:52:25 crc kubenswrapper[4946]: E1128 06:52:25.916059 4946 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.2:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187c19183cb5adb4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 06:52:25.910840756 +0000 UTC m=+0.288905907,LastTimestamp:2025-11-28 06:52:25.910840756 +0000 UTC m=+0.288905907,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Nov 28 06:52:25 crc kubenswrapper[4946]: E1128 06:52:25.915224 4946 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.917838 4946 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.2:6443: connect: connection refused
Nov 28 06:52:25 crc kubenswrapper[4946]: E1128 06:52:25.917947 4946 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.2:6443: connect: connection refused" logger="UnhandledError"
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.915691 4946 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.914845 4946 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 06:48:37.823259949 +0000 UTC
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.919636 4946 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 599h56m11.903639868s for next certificate rotation
Nov 28 06:52:25 crc kubenswrapper[4946]: E1128 06:52:25.920224 4946 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" interval="200ms"
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.923805 4946 factory.go:55] Registering systemd factory
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.923896 4946 factory.go:221] Registration of the systemd container factory successfully
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.932960 4946 factory.go:153] Registering CRI-O factory
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.933047 4946 factory.go:221] Registration of the crio container factory successfully
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.933339 4946 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.933427 4946 factory.go:103] Registering Raw factory
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.933560 4946 manager.go:1196] Started watching for new ooms in manager
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.935165 4946 manager.go:319] Starting recovery of all containers
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.942891 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.943018 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.943045 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.943068 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.943090 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.943111 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.943131 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.943150 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.943174 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.943195 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.943216 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.943237 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.943258 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.943284 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.943305 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.943326 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.943349 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.943371 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.943391 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.943438 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.943458 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.943503 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.943523 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.943544 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.943564 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.943584 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.943609 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.943632 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.943677 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.943698 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.943720 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.943741 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.943764 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.943786 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.943810 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.943834 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.943857 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.943879 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.943900 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.943922 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.943943 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.943965 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.943984 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.944005 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.944026 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.944047 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.944070 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.944090 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.944113 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.944133 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.944154 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.944175 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.944203 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.944257 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.944281 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.944303 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.944326 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.944349 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.944370 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.944390 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.944413 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.944437 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.944457 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.944502 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.944523 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.944543 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.944562 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.944584 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.944605 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.944627 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.944654 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.944679 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.944706 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.944741 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.944769 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.944797 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.944819 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.945946 4946 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.945996 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.946022 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.946047 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.946073 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.946101 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.946122 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.946143 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.946166 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.946191 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.946211 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.946229 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.946251 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.946275 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.946297 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.946317 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.946342 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.946361 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.946382 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.946402 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.946421 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.946441 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.946491 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.946511 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.946530 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.946554 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.946573 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.946593 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.946622 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.946646 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.946666 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.946727 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.946753 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.946774 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.946796 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.946816 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.946841 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.946863 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.946885 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.946908 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.946927 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.946947 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.946967 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.946990 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.947011 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.947031 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.947052 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.947070 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.947093 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.947113 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.947136 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.947205 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d"
volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.947226 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.947246 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.947266 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.947286 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.947305 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.947325 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.947343 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.947366 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.947388 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.947408 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.947428 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.947447 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.947497 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.947516 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.947534 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.947553 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.947572 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.948108 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.948166 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.948186 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.948205 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.948225 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.948243 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.948261 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.948280 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.948301 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.948319 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.948340 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.948365 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.948386 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.948407 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.948427 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.948447 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.948494 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.948514 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.948535 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.948556 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.948580 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.948601 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.948625 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.948646 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.948669 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.948690 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.948713 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.948735 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.948755 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.948774 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.948795 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.948816 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.948836 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.948856 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.948877 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.948897 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.948920 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.948941 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.948959 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.948979 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.949005 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.949025 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.949045 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.949066 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.949089 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.949110 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.949130 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.949185 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.949214 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.949242 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.949270 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.949298 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.949323 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.949344 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.949364 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.949385 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.949406 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.949426 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.949446 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.949512 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" 
volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.949531 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.949550 4946 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.949568 4946 reconstruct.go:97] "Volume reconstruction finished" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.949583 4946 reconciler.go:26] "Reconciler: start to sync state" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.966328 4946 manager.go:324] Recovery completed Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.986088 4946 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.986220 4946 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.988523 4946 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.988585 4946 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.988627 4946 kubelet.go:2335] "Starting kubelet main sync loop" Nov 28 06:52:25 crc kubenswrapper[4946]: E1128 06:52:25.988686 4946 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.988834 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.988876 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.988892 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.990226 4946 cpu_manager.go:225] "Starting CPU manager" policy="none" Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.990252 4946 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Nov 28 06:52:25 crc kubenswrapper[4946]: W1128 06:52:25.990213 4946 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.2:6443: connect: connection refused Nov 28 06:52:25 crc kubenswrapper[4946]: I1128 06:52:25.990280 4946 state_mem.go:36] "Initialized new in-memory state store" Nov 28 06:52:25 crc kubenswrapper[4946]: E1128 06:52:25.990296 4946 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.2:6443: connect: connection refused" logger="UnhandledError" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.002215 4946 policy_none.go:49] "None policy: Start" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.003682 4946 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.003718 4946 state_mem.go:35] "Initializing new in-memory state store" Nov 28 06:52:26 crc kubenswrapper[4946]: E1128 06:52:26.018157 4946 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.061152 4946 manager.go:334] "Starting Device Plugin manager" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.061232 4946 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.061251 4946 server.go:79] "Starting device plugin registration server" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.061830 4946 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.061851 4946 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.062050 4946 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.062282 4946 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.062304 4946 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 28 06:52:26 crc kubenswrapper[4946]: E1128 06:52:26.070142 4946 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.089444 4946 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.089596 4946 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.092268 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.092332 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.092347 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.092650 4946 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.093007 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.093095 4946 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.094938 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.094952 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.095000 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.094976 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.095017 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.095022 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.095280 4946 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.096150 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.096247 4946 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.096535 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.096565 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.096579 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.096777 4946 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.097089 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.097221 4946 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.098992 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.099033 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.099031 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.099063 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.099109 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.099047 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.099264 4946 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.099316 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.099359 4946 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.099880 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.099913 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.099923 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.100635 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.100683 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.100705 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.103812 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.103858 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.103877 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.104145 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.104197 4946 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.105448 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.105516 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.105537 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:26 crc kubenswrapper[4946]: E1128 06:52:26.122001 4946 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" interval="400ms" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.153299 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.153357 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.153388 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.153412 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.153436 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.153479 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.153501 4946 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.153524 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.153547 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.153569 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.153590 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.153609 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.153718 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.153740 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.153762 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.162309 4946 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 
06:52:26.164737 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.164806 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.164833 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.164878 4946 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 28 06:52:26 crc kubenswrapper[4946]: E1128 06:52:26.165679 4946 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.2:6443: connect: connection refused" node="crc" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.255613 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.255712 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.255750 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.255781 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.255807 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.255835 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.255861 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.255904 4946 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.255925 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.255971 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.256020 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.255949 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.255928 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.256078 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.256058 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.256044 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.256107 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.256174 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.256323 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.256395 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.256408 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.256431 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.256488 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.256497 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.256528 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.256534 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.256569 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" 
Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.256572 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.256609 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.256532 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.366192 4946 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.368043 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.368092 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.368111 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.368145 4946 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 28 06:52:26 crc kubenswrapper[4946]: E1128 06:52:26.368666 4946 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.2:6443: connect: connection refused" node="crc" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.428071 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.433741 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.462293 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:52:26 crc kubenswrapper[4946]: W1128 06:52:26.463886 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-32800f1f5d877ae499b42909e0e3f413aa033e3901be96798b0aa239a6b05de2 WatchSource:0}: Error finding container 32800f1f5d877ae499b42909e0e3f413aa033e3901be96798b0aa239a6b05de2: Status 404 returned error can't find the container with id 32800f1f5d877ae499b42909e0e3f413aa033e3901be96798b0aa239a6b05de2 Nov 28 06:52:26 crc kubenswrapper[4946]: W1128 06:52:26.468644 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-2b0873ecca80553e97fc02f814ce5ca17b36c711d3dede35e2aca39efb47938b WatchSource:0}: Error finding container 2b0873ecca80553e97fc02f814ce5ca17b36c711d3dede35e2aca39efb47938b: Status 404 returned error can't find the container with id 2b0873ecca80553e97fc02f814ce5ca17b36c711d3dede35e2aca39efb47938b Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.486659 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.493128 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 06:52:26 crc kubenswrapper[4946]: W1128 06:52:26.512243 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-8f18449aed16ddd50819fbcef481c00ed9b16d43e0d7d27a975f53c5393c06e8 WatchSource:0}: Error finding container 8f18449aed16ddd50819fbcef481c00ed9b16d43e0d7d27a975f53c5393c06e8: Status 404 returned error can't find the container with id 8f18449aed16ddd50819fbcef481c00ed9b16d43e0d7d27a975f53c5393c06e8 Nov 28 06:52:26 crc kubenswrapper[4946]: W1128 06:52:26.522230 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-6839514b8df215c1fdb5507f724466a8fe01b6a49726a34db92d51edd7d3d044 WatchSource:0}: Error finding container 6839514b8df215c1fdb5507f724466a8fe01b6a49726a34db92d51edd7d3d044: Status 404 returned error can't find the container with id 6839514b8df215c1fdb5507f724466a8fe01b6a49726a34db92d51edd7d3d044 Nov 28 06:52:26 crc kubenswrapper[4946]: E1128 06:52:26.523566 4946 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" interval="800ms" Nov 28 06:52:26 crc kubenswrapper[4946]: W1128 06:52:26.730523 4946 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.2:6443: connect: connection refused Nov 28 06:52:26 crc kubenswrapper[4946]: E1128 06:52:26.730644 4946 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.2:6443: connect: connection refused" logger="UnhandledError" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.769828 4946 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.772329 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.772377 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.772389 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.772417 4946 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 28 06:52:26 crc kubenswrapper[4946]: E1128 06:52:26.772889 4946 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.2:6443: connect: connection refused" node="crc" Nov 28 06:52:26 crc kubenswrapper[4946]: W1128 06:52:26.904398 4946 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.2:6443: connect: connection refused Nov 28 06:52:26 crc kubenswrapper[4946]: E1128 06:52:26.904509 4946 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.2:6443: connect: connection refused" logger="UnhandledError" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.912190 4946 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.2:6443: connect: connection refused Nov 28 06:52:26 crc kubenswrapper[4946]: W1128 06:52:26.949789 4946 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.2:6443: connect: connection refused Nov 28 06:52:26 crc kubenswrapper[4946]: E1128 06:52:26.949951 4946 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.2:6443: connect: connection refused" logger="UnhandledError" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.995911 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d384f5fd04cfe61e6a8d13538eb1b0267ca88eb0d226d8d85b5065fb5f7de9b4"} Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.996050 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8f18449aed16ddd50819fbcef481c00ed9b16d43e0d7d27a975f53c5393c06e8"} Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.998049 4946 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d" exitCode=0 Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.998155 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d"} Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.998231 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"32a721cfaa06c242007e8766f9f5096e392e09b8ccb9fd0cda12d0157a662e43"} Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.998410 4946 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.999637 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.999688 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:26 crc kubenswrapper[4946]: I1128 06:52:26.999698 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:27 crc kubenswrapper[4946]: I1128 06:52:27.001393 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"764942077dc431862238587fb657414079d756d63de73f2ea2d6798305d777d8"} Nov 28 06:52:27 crc kubenswrapper[4946]: I1128 06:52:27.001597 4946 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:52:27 crc kubenswrapper[4946]: I1128 06:52:27.001327 4946 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="764942077dc431862238587fb657414079d756d63de73f2ea2d6798305d777d8" exitCode=0 Nov 28 06:52:27 crc kubenswrapper[4946]: I1128 06:52:27.001921 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2b0873ecca80553e97fc02f814ce5ca17b36c711d3dede35e2aca39efb47938b"} Nov 28 06:52:27 crc kubenswrapper[4946]: I1128 06:52:27.002536 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:27 crc kubenswrapper[4946]: I1128 06:52:27.002580 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:27 crc kubenswrapper[4946]: I1128 06:52:27.002601 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:27 crc kubenswrapper[4946]: I1128 06:52:27.002616 4946 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:52:27 crc kubenswrapper[4946]: I1128 06:52:27.004436 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 28 06:52:27 crc kubenswrapper[4946]: I1128 06:52:27.004502 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:27 crc kubenswrapper[4946]: I1128 06:52:27.004515 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:27 crc kubenswrapper[4946]: I1128 06:52:27.006085 4946 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="5f9b328df1631fb424c220a0e33954374075c6f7e6313fbbdcedd04298fa1478" exitCode=0 Nov 28 06:52:27 crc kubenswrapper[4946]: I1128 06:52:27.006177 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"5f9b328df1631fb424c220a0e33954374075c6f7e6313fbbdcedd04298fa1478"} Nov 28 06:52:27 crc kubenswrapper[4946]: I1128 06:52:27.006258 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"32800f1f5d877ae499b42909e0e3f413aa033e3901be96798b0aa239a6b05de2"} Nov 28 06:52:27 crc kubenswrapper[4946]: I1128 06:52:27.006349 4946 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:52:27 crc kubenswrapper[4946]: I1128 06:52:27.007505 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:27 crc kubenswrapper[4946]: I1128 06:52:27.008258 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:27 crc kubenswrapper[4946]: I1128 06:52:27.008332 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:27 crc kubenswrapper[4946]: I1128 06:52:27.019824 4946 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="674e2b1496ef2f46b2bd37fd98df7bb076560669c1c6b4a39a081aaa0f0dab23" exitCode=0 Nov 28 06:52:27 crc kubenswrapper[4946]: I1128 06:52:27.019882 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"674e2b1496ef2f46b2bd37fd98df7bb076560669c1c6b4a39a081aaa0f0dab23"} Nov 28 06:52:27 crc kubenswrapper[4946]: I1128 06:52:27.019921 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6839514b8df215c1fdb5507f724466a8fe01b6a49726a34db92d51edd7d3d044"} Nov 28 06:52:27 crc kubenswrapper[4946]: I1128 06:52:27.020070 4946 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:52:27 crc kubenswrapper[4946]: I1128 06:52:27.021589 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:27 crc kubenswrapper[4946]: I1128 06:52:27.021634 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:27 crc kubenswrapper[4946]: I1128 06:52:27.021646 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 
28 06:52:27 crc kubenswrapper[4946]: W1128 06:52:27.298428 4946 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.2:6443: connect: connection refused Nov 28 06:52:27 crc kubenswrapper[4946]: E1128 06:52:27.298543 4946 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.2:6443: connect: connection refused" logger="UnhandledError" Nov 28 06:52:27 crc kubenswrapper[4946]: E1128 06:52:27.325911 4946 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" interval="1.6s" Nov 28 06:52:27 crc kubenswrapper[4946]: I1128 06:52:27.573128 4946 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:52:27 crc kubenswrapper[4946]: I1128 06:52:27.575079 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:27 crc kubenswrapper[4946]: I1128 06:52:27.575129 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:27 crc kubenswrapper[4946]: I1128 06:52:27.575141 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:27 crc kubenswrapper[4946]: I1128 06:52:27.575169 4946 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 28 06:52:27 crc kubenswrapper[4946]: E1128 06:52:27.576591 4946 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.2:6443: connect: connection refused" node="crc" Nov 28 06:52:28 crc kubenswrapper[4946]: I1128 06:52:28.033136 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"14ecbd61c64f04b0ec631e67e25393619f5692c9967fff9d461c40bbb7f3c077"} Nov 28 06:52:28 crc kubenswrapper[4946]: I1128 06:52:28.033200 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"30a18f86a38f7d74aedc8fb55161733023c3a515474f699fc66c645901b43a90"} Nov 28 06:52:28 crc kubenswrapper[4946]: I1128 06:52:28.033213 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a67b58ab443f2a80c08229709de3844bf287465b58912f472f81b993e5f5fd8b"} Nov 28 06:52:28 crc kubenswrapper[4946]: I1128 06:52:28.033365 4946 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:52:28 crc kubenswrapper[4946]: I1128 06:52:28.034801 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:28 crc kubenswrapper[4946]: I1128 06:52:28.034841 4946 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:28 crc kubenswrapper[4946]: I1128 06:52:28.034851 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:28 crc kubenswrapper[4946]: I1128 06:52:28.037702 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"62b6df8018ceaf95465c4ad1bfd0bb2653ae4b87430f6ed4e21af2bd14bbb9ac"} Nov 28 06:52:28 crc kubenswrapper[4946]: I1128 06:52:28.037839 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"89780aac4572411aeeb8480b61c98dbc22a2e4b3e72290470ba56c81b648b617"} Nov 28 06:52:28 crc kubenswrapper[4946]: I1128 06:52:28.037908 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"af6de2bd2f3d9be0a670cb8c36e6db8c037c4987eff7356840af5b275156ad7c"} Nov 28 06:52:28 crc kubenswrapper[4946]: I1128 06:52:28.037728 4946 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:52:28 crc kubenswrapper[4946]: I1128 06:52:28.038888 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:28 crc kubenswrapper[4946]: I1128 06:52:28.038934 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:28 crc kubenswrapper[4946]: I1128 06:52:28.038947 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:28 crc kubenswrapper[4946]: I1128 06:52:28.041627 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"326666c58a9c75d0f08586c82bca1a0686ed2935f475b3c359cddc01875b248e"} Nov 28 06:52:28 crc kubenswrapper[4946]: I1128 06:52:28.041750 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1a7152b6653aaccd36bb018d24abf196dd6f13d41edddb0939917f96bcd9e9bb"} Nov 28 06:52:28 crc kubenswrapper[4946]: I1128 06:52:28.041817 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c5be506bd8debbc12ff5ca56d77fdf667855d2c4078cea1b1d1388f5ef8831ea"} Nov 28 06:52:28 crc kubenswrapper[4946]: I1128 06:52:28.041874 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"eef1ad3ca814effd7a94c23d46fc3b14149ccc6ae6181e19c5f24884967b7627"} Nov 28 06:52:28 crc kubenswrapper[4946]: I1128 06:52:28.043516 4946 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="c93e14ce0c08f75c10b5e43a9182b5d05d085cc07a7a69c1f3d8954e74f34e78" exitCode=0 Nov 28 06:52:28 crc kubenswrapper[4946]: I1128 06:52:28.043589 4946 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"c93e14ce0c08f75c10b5e43a9182b5d05d085cc07a7a69c1f3d8954e74f34e78"} Nov 28 06:52:28 crc kubenswrapper[4946]: I1128 06:52:28.043697 4946 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:52:28 crc kubenswrapper[4946]: I1128 06:52:28.045363 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:28 crc kubenswrapper[4946]: I1128 06:52:28.045449 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:28 crc kubenswrapper[4946]: I1128 06:52:28.045526 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:28 crc kubenswrapper[4946]: I1128 06:52:28.046901 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"2af426224940bfcc803c85bcaf635d47969c877b90220d0acefa264ae2111215"} Nov 28 06:52:28 crc kubenswrapper[4946]: I1128 06:52:28.046956 4946 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:52:28 crc kubenswrapper[4946]: I1128 06:52:28.047670 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:28 crc kubenswrapper[4946]: I1128 06:52:28.047719 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:28 crc kubenswrapper[4946]: I1128 06:52:28.047735 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:29 crc kubenswrapper[4946]: I1128 06:52:29.054897 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b4c7e2754f62b9d955da7589f04acb0136ed3c504a75b0db30f22cbeec54c413"} Nov 28 06:52:29 crc kubenswrapper[4946]: I1128 06:52:29.055003 4946 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:52:29 crc kubenswrapper[4946]: I1128 06:52:29.056380 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:29 crc kubenswrapper[4946]: I1128 06:52:29.056429 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:29 crc kubenswrapper[4946]: I1128 06:52:29.056445 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:29 crc kubenswrapper[4946]: I1128 06:52:29.058831 4946 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="0f6b328df2369a99f085517d1de927df9c288ce1b2ce7e4061015d6316253b22" exitCode=0 Nov 28 06:52:29 crc kubenswrapper[4946]: I1128 06:52:29.058913 4946 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 06:52:29 crc kubenswrapper[4946]: I1128 06:52:29.058943 4946 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:52:29 crc kubenswrapper[4946]: I1128 06:52:29.059000 4946 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:52:29 crc kubenswrapper[4946]: I1128 06:52:29.059006 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"0f6b328df2369a99f085517d1de927df9c288ce1b2ce7e4061015d6316253b22"} Nov 28 06:52:29 crc kubenswrapper[4946]: I1128 06:52:29.059024 4946 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:52:29 crc kubenswrapper[4946]: I1128 06:52:29.059178 4946 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:52:29 crc kubenswrapper[4946]: I1128 06:52:29.060091 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:29 crc kubenswrapper[4946]: I1128 06:52:29.060143 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:29 crc kubenswrapper[4946]: I1128 06:52:29.060158 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:29 crc kubenswrapper[4946]: I1128 06:52:29.060489 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:29 crc kubenswrapper[4946]: I1128 06:52:29.060541 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:29 crc kubenswrapper[4946]: I1128 06:52:29.060562 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:29 crc kubenswrapper[4946]: I1128 06:52:29.061122 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:29 crc kubenswrapper[4946]: I1128 06:52:29.061193 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:29 crc kubenswrapper[4946]: I1128 06:52:29.061214 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:29 crc kubenswrapper[4946]: I1128 06:52:29.063986 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:29 crc kubenswrapper[4946]: I1128 06:52:29.064030 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:29 crc kubenswrapper[4946]: I1128 06:52:29.064063 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:29 crc kubenswrapper[4946]: I1128 06:52:29.177324 4946 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:52:29 crc kubenswrapper[4946]: I1128 06:52:29.179213 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:29 crc kubenswrapper[4946]: I1128 06:52:29.179327 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:29 crc kubenswrapper[4946]: I1128 06:52:29.179358 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:29 crc kubenswrapper[4946]: I1128 06:52:29.179395 4946 
kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 28 06:52:29 crc kubenswrapper[4946]: I1128 06:52:29.370222 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:52:30 crc kubenswrapper[4946]: I1128 06:52:30.068063 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6008609adcacda2a52f4042b3485fd28d61ca67c4cc88c631264a679f41acb48"} Nov 28 06:52:30 crc kubenswrapper[4946]: I1128 06:52:30.068572 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a48fd8a33ac409c7b143e5d7696756ebc8a1a933770ffb7a861ca5900e88ccc0"} Nov 28 06:52:30 crc kubenswrapper[4946]: I1128 06:52:30.068593 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d9da5133752b2217b918ba9c4f28553f480799958a24cc9a28f9156d03f4786a"} Nov 28 06:52:30 crc kubenswrapper[4946]: I1128 06:52:30.068179 4946 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:52:30 crc kubenswrapper[4946]: I1128 06:52:30.069995 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:30 crc kubenswrapper[4946]: I1128 06:52:30.070043 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:30 crc kubenswrapper[4946]: I1128 06:52:30.070061 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:30 crc kubenswrapper[4946]: I1128 06:52:30.808683 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:52:30 crc kubenswrapper[4946]: I1128 06:52:30.827218 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:52:30 crc kubenswrapper[4946]: I1128 06:52:30.972188 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 06:52:30 crc kubenswrapper[4946]: I1128 06:52:30.972495 4946 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:52:30 crc kubenswrapper[4946]: I1128 06:52:30.974554 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:30 crc kubenswrapper[4946]: I1128 06:52:30.974616 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:30 crc kubenswrapper[4946]: I1128 06:52:30.974640 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:31 crc kubenswrapper[4946]: I1128 06:52:31.079097 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f95855a220a3735c099cb973532d7ff5c78e05a9a3af07a58b1d0afac5155b0c"} Nov 28 06:52:31 crc kubenswrapper[4946]: I1128 06:52:31.079210 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"cc4ede497e074e6af132884e813868e34ef8838ec0c576dfd147faee683017a7"} Nov 28 06:52:31 crc kubenswrapper[4946]: I1128 06:52:31.079233 4946 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:52:31 crc kubenswrapper[4946]: I1128 06:52:31.079313 4946 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:52:31 crc kubenswrapper[4946]: I1128 06:52:31.081162 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:31 crc kubenswrapper[4946]: I1128 06:52:31.081226 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:31 crc kubenswrapper[4946]: I1128 06:52:31.081247 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:31 crc kubenswrapper[4946]: I1128 06:52:31.083268 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:31 crc kubenswrapper[4946]: I1128 06:52:31.083336 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:31 crc kubenswrapper[4946]: I1128 06:52:31.083359 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:32 crc kubenswrapper[4946]: I1128 06:52:32.082092 4946 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:52:32 crc kubenswrapper[4946]: I1128 06:52:32.082091 4946 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:52:32 crc kubenswrapper[4946]: I1128 06:52:32.084151 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:32 crc kubenswrapper[4946]: I1128 06:52:32.084179 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:32 crc kubenswrapper[4946]: I1128 06:52:32.084178 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:32 crc kubenswrapper[4946]: I1128 06:52:32.084216 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:32 crc kubenswrapper[4946]: I1128 06:52:32.084240 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:32 crc kubenswrapper[4946]: I1128 06:52:32.084189 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:32 crc kubenswrapper[4946]: I1128 06:52:32.169835 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 06:52:32 crc kubenswrapper[4946]: I1128 06:52:32.170124 4946 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:52:32 crc kubenswrapper[4946]: I1128 06:52:32.172126 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:32 crc kubenswrapper[4946]: I1128 06:52:32.172209 4946 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:32 crc kubenswrapper[4946]: I1128 06:52:32.172229 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:32 crc kubenswrapper[4946]: I1128 06:52:32.868449 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 06:52:33 crc kubenswrapper[4946]: I1128 06:52:33.085113 4946 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:52:33 crc kubenswrapper[4946]: I1128 06:52:33.086266 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:33 crc kubenswrapper[4946]: I1128 06:52:33.086332 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:33 crc kubenswrapper[4946]: I1128 06:52:33.086353 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:33 crc kubenswrapper[4946]: I1128 06:52:33.681745 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 06:52:33 crc kubenswrapper[4946]: I1128 06:52:33.690551 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 06:52:33 crc kubenswrapper[4946]: I1128 06:52:33.904766 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Nov 28 06:52:33 crc kubenswrapper[4946]: I1128 06:52:33.905090 4946 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:52:33 crc kubenswrapper[4946]: I1128 06:52:33.906757 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:33 crc kubenswrapper[4946]: I1128 06:52:33.906806 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:33 crc kubenswrapper[4946]: I1128 06:52:33.906823 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:34 crc kubenswrapper[4946]: I1128 06:52:34.088102 4946 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:52:34 crc kubenswrapper[4946]: I1128 06:52:34.089843 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:34 crc kubenswrapper[4946]: I1128 06:52:34.089908 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:34 crc kubenswrapper[4946]: I1128 06:52:34.089932 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:34 crc kubenswrapper[4946]: I1128 06:52:34.929625 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 06:52:34 crc kubenswrapper[4946]: I1128 06:52:34.929964 4946 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:52:34 crc kubenswrapper[4946]: I1128 06:52:34.932156 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 28 06:52:34 crc kubenswrapper[4946]: I1128 06:52:34.932203 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:34 crc kubenswrapper[4946]: I1128 06:52:34.932221 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:35 crc kubenswrapper[4946]: I1128 06:52:35.090704 4946 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:52:35 crc kubenswrapper[4946]: I1128 06:52:35.092264 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:35 crc kubenswrapper[4946]: I1128 06:52:35.092328 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:35 crc kubenswrapper[4946]: I1128 06:52:35.092346 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:35 crc kubenswrapper[4946]: I1128 06:52:35.170727 4946 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 28 06:52:35 crc kubenswrapper[4946]: I1128 06:52:35.170829 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 28 06:52:36 crc kubenswrapper[4946]: E1128 06:52:36.070525 4946 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 28 06:52:37 crc kubenswrapper[4946]: I1128 06:52:37.913101 4946 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Nov 28 06:52:38 crc kubenswrapper[4946]: I1128 06:52:38.846422 4946 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 28 06:52:38 crc kubenswrapper[4946]: I1128 06:52:38.846520 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 28 06:52:38 crc kubenswrapper[4946]: I1128 06:52:38.851325 4946 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: 
[clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Nov 28 06:52:38 crc kubenswrapper[4946]: I1128 06:52:38.851411 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 28 06:52:40 crc kubenswrapper[4946]: I1128 06:52:40.767049 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Nov 28 06:52:40 crc kubenswrapper[4946]: I1128 06:52:40.767792 4946 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:52:40 crc kubenswrapper[4946]: I1128 06:52:40.770810 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:40 crc kubenswrapper[4946]: I1128 06:52:40.770886 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:40 crc kubenswrapper[4946]: I1128 06:52:40.770911 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:40 crc kubenswrapper[4946]: I1128 06:52:40.802867 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Nov 28 06:52:40 crc kubenswrapper[4946]: I1128 06:52:40.817325 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:52:40 crc kubenswrapper[4946]: I1128 06:52:40.817821 4946 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:52:40 crc kubenswrapper[4946]: I1128 06:52:40.819496 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:40 crc kubenswrapper[4946]: I1128 06:52:40.819567 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:40 crc kubenswrapper[4946]: I1128 06:52:40.819588 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:40 crc kubenswrapper[4946]: I1128 06:52:40.826827 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:52:40 crc kubenswrapper[4946]: I1128 06:52:40.980545 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 06:52:40 crc kubenswrapper[4946]: I1128 06:52:40.981137 4946 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:52:40 crc kubenswrapper[4946]: I1128 06:52:40.983255 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:40 crc kubenswrapper[4946]: I1128 06:52:40.983326 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:40 crc kubenswrapper[4946]: I1128 06:52:40.983346 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:41 crc kubenswrapper[4946]: I1128 06:52:41.107283 4946 
prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 06:52:41 crc kubenswrapper[4946]: I1128 06:52:41.107358 4946 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:52:41 crc kubenswrapper[4946]: I1128 06:52:41.107361 4946 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:52:41 crc kubenswrapper[4946]: I1128 06:52:41.109455 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:41 crc kubenswrapper[4946]: I1128 06:52:41.109777 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:41 crc kubenswrapper[4946]: I1128 06:52:41.109939 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:41 crc kubenswrapper[4946]: I1128 06:52:41.109539 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:41 crc kubenswrapper[4946]: I1128 06:52:41.110130 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:41 crc kubenswrapper[4946]: I1128 06:52:41.110152 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:41 crc kubenswrapper[4946]: I1128 06:52:41.130562 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Nov 28 06:52:42 crc kubenswrapper[4946]: I1128 06:52:42.111814 4946 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:52:42 crc kubenswrapper[4946]: I1128 06:52:42.117607 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:42 crc kubenswrapper[4946]: I1128 06:52:42.117695 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:42 crc kubenswrapper[4946]: I1128 06:52:42.117717 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:43 crc kubenswrapper[4946]: E1128 06:52:43.831200 4946 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.834303 4946 trace.go:236] Trace[1130703759]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Nov-2025 06:52:29.620) (total time: 14213ms): Nov 28 06:52:43 crc kubenswrapper[4946]: Trace[1130703759]: ---"Objects listed" error: 14213ms (06:52:43.834) Nov 28 06:52:43 crc kubenswrapper[4946]: Trace[1130703759]: [14.21394593s] [14.21394593s] END Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.834364 4946 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.836844 4946 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.836956 4946 trace.go:236] Trace[1441319892]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Nov-2025 06:52:29.621) (total time: 14214ms): Nov 28 06:52:43 
crc kubenswrapper[4946]: Trace[1441319892]: ---"Objects listed" error: 14214ms (06:52:43.836) Nov 28 06:52:43 crc kubenswrapper[4946]: Trace[1441319892]: [14.214873562s] [14.214873562s] END Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.836991 4946 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 28 06:52:43 crc kubenswrapper[4946]: E1128 06:52:43.837338 4946 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.838570 4946 trace.go:236] Trace[352053882]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Nov-2025 06:52:29.440) (total time: 14397ms): Nov 28 06:52:43 crc kubenswrapper[4946]: Trace[352053882]: ---"Objects listed" error: 14397ms (06:52:43.838) Nov 28 06:52:43 crc kubenswrapper[4946]: Trace[352053882]: [14.397649088s] [14.397649088s] END Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.838608 4946 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.842064 4946 trace.go:236] Trace[983291245]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Nov-2025 06:52:29.423) (total time: 14418ms): Nov 28 06:52:43 crc kubenswrapper[4946]: Trace[983291245]: ---"Objects listed" error: 14418ms (06:52:43.841) Nov 28 06:52:43 crc kubenswrapper[4946]: Trace[983291245]: [14.418855034s] [14.418855034s] END Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.842102 4946 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.910197 4946 apiserver.go:52] "Watching apiserver" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.911595 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.915847 4946 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.916169 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.916776 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.916914 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:52:43 crc kubenswrapper[4946]: E1128 06:52:43.916979 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.917066 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:52:43 crc kubenswrapper[4946]: E1128 06:52:43.917193 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.917211 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.917432 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:52:43 crc kubenswrapper[4946]: E1128 06:52:43.917499 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.917578 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.920965 4946 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.939904 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.940215 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.940312 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.940409 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.940506 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.940589 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.940681 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.940779 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.940865 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 28 06:52:43 crc kubenswrapper[4946]: 
I1128 06:52:43.940949 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.941042 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.941136 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.941210 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.941290 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.942389 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.942429 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.942454 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.942491 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.942516 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 28 06:52:43 crc 
kubenswrapper[4946]: I1128 06:52:43.942541 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.942558 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.942577 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.942601 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.942623 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.942640 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.942682 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.942703 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.942727 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.942748 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.942772 4946 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.942792 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.942810 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.942829 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.942851 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.942877 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.942899 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.942916 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.942935 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.942954 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 28 06:52:43 crc 
kubenswrapper[4946]: I1128 06:52:43.942972 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.942990 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.943010 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.943031 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.943051 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.943069 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.943087 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.943102 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.943120 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.943139 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 28 06:52:43 crc kubenswrapper[4946]: 
I1128 06:52:43.943154 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.943173 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.943195 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.943213 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.943229 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.943248 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.943269 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.943286 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.943306 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.943324 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 28 06:52:43 crc 
kubenswrapper[4946]: I1128 06:52:43.943342 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.943361 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.943379 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.943397 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.943414 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.943433 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.943452 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.943482 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.943502 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.943526 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: 
\"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.943542 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.943561 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.943580 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.943596 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.943616 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.943635 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.943656 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.943675 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.943694 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.943714 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" 
(UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.943733 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.943756 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.943775 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.943793 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.943813 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.943834 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.943854 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.943875 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.943896 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.943918 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: 
\"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.943935 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.943955 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.944648 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.944679 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.944731 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.944754 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.944774 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.944792 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.944811 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.944830 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.944849 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.944869 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.944890 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.944909 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.944926 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.944945 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.944965 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.944981 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.945003 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.945021 4946 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.945038 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.945056 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.945077 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.945120 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.945136 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.945156 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.945174 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.945190 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.945209 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.945227 4946 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.945247 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.945265 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.945283 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.945302 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.945321 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.945340 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.945361 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.945380 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.945401 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" 
(UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.945420 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.945440 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.945471 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.945494 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.945515 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.945532 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.945552 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.945573 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.945603 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.945626 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.945643 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.945663 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.945689 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.945709 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.945725 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.945745 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.945764 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.945780 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.945800 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.945818 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.945838 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.945857 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.945933 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.945953 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.945975 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.945995 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.946013 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.946034 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.946053 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.946074 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.946093 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.946113 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.946132 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.946150 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.946169 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.946190 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.946211 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.946230 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.946250 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.946270 4946 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.946287 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.946306 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.946326 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.946344 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.946364 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.946386 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.946408 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.946426 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.946446 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.946480 4946 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.946505 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.946526 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.946549 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.946572 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.946591 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.946612 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.946633 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.946650 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.946669 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " 
Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.946692 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.946710 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.946731 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.946752 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.946774 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.946793 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.946819 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.946841 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.946904 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.946940 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod 
\"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.946970 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.946992 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.947016 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.947040 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.947195 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.947222 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.947247 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.947269 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.947293 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.947315 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.947333 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.947353 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.940676 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.940677 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.940737 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.958010 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.940839 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.940925 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.958355 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.941190 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.942324 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.944764 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.944914 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.958780 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.945056 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.945226 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.945528 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.945567 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.945796 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.946055 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.946560 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.947146 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.947148 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.947328 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.947498 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.947457 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.947763 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.947889 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.948043 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.948143 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.948359 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.959489 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.948356 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.948389 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.949813 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.950119 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.950415 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.950956 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.951697 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.951756 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.951887 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.952706 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.952705 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.953209 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.953244 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.953778 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.953851 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.954149 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.954682 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.956357 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.957160 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.959583 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.959782 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.957600 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.959977 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.959980 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.960698 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.961018 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.961484 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.961826 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.962196 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.959306 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.963236 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.963533 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.963872 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.964250 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.964341 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.965586 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.966514 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.966823 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.967388 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.968798 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.969199 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.969641 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.969782 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.969950 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.969962 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.970114 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.970190 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.969361 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.970275 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.970417 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.968820 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.970591 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.970648 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.970754 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.970926 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.970980 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.971353 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.971699 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.973024 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.973178 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.973311 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.973607 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.973763 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.974320 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.974629 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.974753 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.975279 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.975564 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.975655 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.975860 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.975901 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.976215 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.976253 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.976265 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.976314 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.976342 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.976577 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.976727 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.976882 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.977041 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.977283 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.977118 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.977423 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.977535 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.977546 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.978011 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.978084 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.978308 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.978479 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.978635 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.977978 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.978866 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.978879 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.978684 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.981038 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.981295 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.981311 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.981500 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.981935 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.981001 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). 
InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.982627 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.982719 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.983041 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.981343 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.983244 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.983731 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.984537 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.984893 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). 
InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.985356 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.985946 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.986694 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.987265 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.987603 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.988179 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.988618 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.988631 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.988667 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.988803 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: E1128 06:52:43.989228 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:52:44.489204137 +0000 UTC m=+18.867269248 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.989257 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.989454 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.989481 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: E1128 06:52:43.989573 4946 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 06:52:43 crc kubenswrapper[4946]: E1128 06:52:43.989627 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 06:52:44.489617967 +0000 UTC m=+18.867683078 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.989894 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: E1128 06:52:43.990362 4946 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 06:52:43 crc kubenswrapper[4946]: E1128 06:52:43.990403 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 06:52:44.490396007 +0000 UTC m=+18.868461118 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.990400 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.990803 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.990935 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.991346 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.991729 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.991828 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.992130 4946 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.993063 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.995511 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.995630 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:43 crc kubenswrapper[4946]: I1128 06:52:43.996683 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 06:52:44 crc kubenswrapper[4946]: E1128 06:52:44.004834 4946 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 06:52:44 crc kubenswrapper[4946]: E1128 06:52:44.004869 4946 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 06:52:44 crc kubenswrapper[4946]: E1128 06:52:44.004887 4946 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:52:44 crc kubenswrapper[4946]: E1128 06:52:44.004974 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 06:52:44.504948707 +0000 UTC m=+18.883013818 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:52:44 crc kubenswrapper[4946]: E1128 06:52:44.008184 4946 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 06:52:44 crc kubenswrapper[4946]: E1128 06:52:44.008226 4946 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 06:52:44 crc kubenswrapper[4946]: E1128 06:52:44.008241 4946 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:52:44 crc kubenswrapper[4946]: E1128 06:52:44.008305 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 06:52:44.508284362 +0000 UTC m=+18.886349473 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.011020 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.012931 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.013073 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.013667 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.014024 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.014052 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.015670 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.015735 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.016387 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.016746 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.016855 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.017686 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.017783 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.019677 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.019762 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.021163 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.022174 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.022297 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.022335 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.022980 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.025222 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.026392 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.026646 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.026996 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.027012 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.027171 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.027494 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.027974 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.027987 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.028052 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.028354 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.028813 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.028985 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.029151 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.029207 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.029438 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.029765 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.029808 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.030392 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.030553 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.030645 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.030835 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.031021 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.031337 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.031388 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.031505 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.031829 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.035351 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.035567 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.035759 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.036518 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.036800 
4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.037610 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.038553 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.038714 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.039609 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.040603 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.042203 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.043153 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.044991 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.045696 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048045 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048145 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048214 4946 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048237 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048252 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048264 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048276 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048289 4946 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048301 4946 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048313 4946 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048325 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: 
\"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048337 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048348 4946 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048351 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048362 4946 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048213 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048400 4946 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048417 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048426 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048438 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048447 4946 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048457 4946 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048486 4946 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048497 4946 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048506 4946 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048515 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048525 4946 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048533 4946 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048541 4946 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048550 4946 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048559 4946 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048567 4946 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048576 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048585 4946 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048593 4946 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048601 4946 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath 
\"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048609 4946 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048618 4946 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048626 4946 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048635 4946 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048644 4946 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048652 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048660 4946 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048671 4946 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048679 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048688 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048697 4946 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048705 4946 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048713 4946 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" 
Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048721 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048729 4946 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048738 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048746 4946 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048754 4946 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048761 4946 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048770 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048778 4946 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048809 4946 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048818 4946 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048826 4946 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048833 4946 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048841 4946 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048849 4946 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048856 4946 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048864 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048872 4946 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048882 4946 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048891 4946 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048900 4946 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048909 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048917 4946 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048926 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048933 4946 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048942 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048951 4946 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 
06:52:44.048959 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048967 4946 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048974 4946 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048982 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.048992 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049000 4946 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049008 4946 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049015 4946 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049025 4946 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049033 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049042 4946 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049050 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049058 4946 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049066 4946 reconciler_common.go:293] "Volume detached 
for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049075 4946 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049085 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049094 4946 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049102 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049139 4946 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049148 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049156 4946 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049164 4946 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049173 4946 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049182 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049190 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049198 4946 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 
06:52:44.049206 4946 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049213 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049222 4946 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049229 4946 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049238 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049248 4946 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049257 4946 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049266 4946 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049274 4946 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049282 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049292 4946 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049300 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049308 4946 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 28 
06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049316 4946 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049325 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049333 4946 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049345 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049354 4946 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049362 4946 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049369 4946 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049378 4946 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049385 4946 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049393 4946 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049401 4946 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049410 4946 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049419 4946 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049426 
4946 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049434 4946 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049442 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049450 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049777 4946 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049794 4946 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049803 4946 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049812 4946 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049823 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049831 4946 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049839 4946 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049848 4946 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049856 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc 
kubenswrapper[4946]: I1128 06:52:44.049865 4946 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049873 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049881 4946 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049889 4946 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049899 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049907 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049915 4946 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049924 4946 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049931 4946 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049940 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049949 4946 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049957 4946 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049965 4946 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 28 
06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049973 4946 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049981 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049988 4946 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.049996 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.050003 4946 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.050013 4946 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.050021 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.050028 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.050036 4946 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.050044 4946 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.050051 4946 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.050059 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.050067 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.050074 4946 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.050082 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.050089 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.050097 4946 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.050104 4946 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.050112 4946 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.050121 4946 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.050129 4946 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.050137 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.050145 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.050153 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.050160 4946 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.050168 4946 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.050175 4946 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.050183 4946 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.050192 4946 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.050201 4946 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.050208 4946 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.050216 4946 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.050225 4946 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.050232 4946 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.050240 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.050248 4946 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.050257 4946 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.050265 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.050273 4946 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc 
kubenswrapper[4946]: I1128 06:52:44.050281 4946 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.050288 4946 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.050296 4946 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.050321 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.050976 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.051879 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.055940 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.060716 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.061376 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.062620 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.064399 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.067223 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.068708 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.077006 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.077653 4946 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:35146->192.168.126.11:17697: read: connection reset by peer" start-of-body= Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.077710 4946 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:35146->192.168.126.11:17697: read: connection reset by peer" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.077907 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.078027 4946 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.078046 4946 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.078202 4946 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": 
read tcp 192.168.126.11:58966->192.168.126.11:17697: read: connection reset by peer" start-of-body= Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.078244 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:58966->192.168.126.11:17697: read: connection reset by peer" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.078514 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.079213 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.079858 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.080915 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.081351 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.083379 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.084026 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.085128 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.085854 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.086568 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.087980 4946 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.088090 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" 
path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.091105 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.091630 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.092390 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.092810 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.095113 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.096791 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.097281 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.097975 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.099137 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.099963 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.100502 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.101696 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.102734 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.103450 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.104416 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.105150 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 
06:52:44.106392 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.107047 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fb7be34-ed01-4217-aab4-c8ab35341ac6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af6de2bd2f3d9be0a670cb8c36e6db8c037c4987eff7356840af5b275156ad7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d384f5fd04cfe61e6a8d13538eb1b0267ca88eb0d226d8d85b5065fb5f7de9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89780aac4572411aeeb8480b61c98dbc22a2e4b3e72290470ba56c81b648b617\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b6df8018ceaf95465c4ad1bfd0bb2653ae4b87430f6ed4e21af2bd14bbb9ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.107416 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.108593 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.109196 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.109826 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.111390 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.112410 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.115488 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.117091 4946 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 06:52:44 crc kubenswrapper[4946]: E1128 06:52:44.123914 4946 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.130693 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.152279 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.152311 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.152321 4946 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.233307 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 06:52:44 crc kubenswrapper[4946]: W1128 06:52:44.253847 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-f0415b77ef80c5d84e9b6bfbac3f7b3fc203723fc2a5205c87b4547276bfcb13 WatchSource:0}: Error finding container f0415b77ef80c5d84e9b6bfbac3f7b3fc203723fc2a5205c87b4547276bfcb13: Status 404 returned error can't find the container with id f0415b77ef80c5d84e9b6bfbac3f7b3fc203723fc2a5205c87b4547276bfcb13 Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.273410 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.296640 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 06:52:44 crc kubenswrapper[4946]: W1128 06:52:44.299696 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-e3295efc98f190e930d857ed3df4c2a82bedafd2f8859ce6ee787b089a20a4aa WatchSource:0}: Error finding container e3295efc98f190e930d857ed3df4c2a82bedafd2f8859ce6ee787b089a20a4aa: Status 404 returned error can't find the container with id e3295efc98f190e930d857ed3df4c2a82bedafd2f8859ce6ee787b089a20a4aa Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.555745 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.555868 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.555913 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.555950 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:52:44 crc kubenswrapper[4946]: E1128 06:52:44.555979 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:52:45.55594335 +0000 UTC m=+19.934008491 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:52:44 crc kubenswrapper[4946]: I1128 06:52:44.556022 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:52:44 crc kubenswrapper[4946]: E1128 06:52:44.556073 4946 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 06:52:44 crc kubenswrapper[4946]: E1128 06:52:44.556109 4946 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 06:52:44 crc kubenswrapper[4946]: E1128 06:52:44.556154 4946 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 06:52:44 crc kubenswrapper[4946]: E1128 06:52:44.556185 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 06:52:45.556154735 +0000 UTC m=+19.934220016 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 06:52:44 crc kubenswrapper[4946]: E1128 06:52:44.556211 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 06:52:45.556201227 +0000 UTC m=+19.934266438 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 06:52:44 crc kubenswrapper[4946]: E1128 06:52:44.556190 4946 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 06:52:44 crc kubenswrapper[4946]: E1128 06:52:44.556274 4946 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:52:44 crc kubenswrapper[4946]: E1128 06:52:44.556332 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 06:52:45.55631876 +0000 UTC m=+19.934383881 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:52:44 crc kubenswrapper[4946]: E1128 06:52:44.556163 4946 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 06:52:44 crc kubenswrapper[4946]: E1128 06:52:44.556379 4946 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 06:52:44 crc kubenswrapper[4946]: E1128 06:52:44.556396 4946 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:52:44 crc kubenswrapper[4946]: E1128 06:52:44.556440 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 06:52:45.556425662 +0000 UTC m=+19.934490963 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:52:45 crc kubenswrapper[4946]: I1128 06:52:45.123657 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"a0a6a6d530baec4925cf8c651830f77f2db77aed79553f7149c34cd4ad612e45"} Nov 28 06:52:45 crc kubenswrapper[4946]: I1128 06:52:45.127693 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"0514bebed46aabc46c74ef48fda9c1e561a174d87db40128875cc0ec78446d21"} Nov 28 06:52:45 crc kubenswrapper[4946]: I1128 06:52:45.127778 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"955ab7252b4526779db2e0aea8b7747369f52242a76df7e179c6617290fee4e9"} Nov 28 06:52:45 crc kubenswrapper[4946]: I1128 06:52:45.127800 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"e3295efc98f190e930d857ed3df4c2a82bedafd2f8859ce6ee787b089a20a4aa"} Nov 28 06:52:45 crc kubenswrapper[4946]: I1128 06:52:45.130100 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"16482226c2bd22abff4c445ab0d5e46c2012a83ab5e616ebecc38d9009a8a6d6"} Nov 28 06:52:45 crc kubenswrapper[4946]: I1128 06:52:45.130178 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"f0415b77ef80c5d84e9b6bfbac3f7b3fc203723fc2a5205c87b4547276bfcb13"} Nov 28 06:52:45 crc kubenswrapper[4946]: I1128 06:52:45.133250 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 28 06:52:45 crc kubenswrapper[4946]: I1128 06:52:45.135723 4946 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b4c7e2754f62b9d955da7589f04acb0136ed3c504a75b0db30f22cbeec54c413" exitCode=255 Nov 28 06:52:45 crc kubenswrapper[4946]: I1128 06:52:45.135797 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"b4c7e2754f62b9d955da7589f04acb0136ed3c504a75b0db30f22cbeec54c413"} Nov 28 06:52:45 crc kubenswrapper[4946]: I1128 06:52:45.148407 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 28 06:52:45 crc kubenswrapper[4946]: I1128 06:52:45.148821 4946 scope.go:117] "RemoveContainer" 
containerID="b4c7e2754f62b9d955da7589f04acb0136ed3c504a75b0db30f22cbeec54c413" Nov 28 06:52:45 crc kubenswrapper[4946]: I1128 06:52:45.148895 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:45Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:45 crc kubenswrapper[4946]: I1128 06:52:45.181715 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:45Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:45 crc kubenswrapper[4946]: I1128 06:52:45.203280 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0514bebed46aabc46c74ef48fda9c1e561a174d87db40128875cc0ec78446d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ab7252b4526779db2e0aea8b7747369f52242a76df7e179c6617290fee4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:45Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:45 crc kubenswrapper[4946]: I1128 06:52:45.229414 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:45Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:45 crc kubenswrapper[4946]: I1128 06:52:45.254565 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fb7be34-ed01-4217-aab4-c8ab35341ac6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af6de2bd2f3d9be0a670cb8c36e6db8c037c4987eff7356840af5b275156ad7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d384f5fd04cfe61e6a8d13538eb1b0267ca88eb0d226d8d85b5065fb5f7de9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89780aac4572411aeeb8480b61c98dbc22a2e4b3e72290470ba56c81b648b617\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b6df8018ceaf95465c4ad1bfd0bb2653ae4b87430f6ed4e21af2bd14bbb9ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:45Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:45 crc kubenswrapper[4946]: I1128 06:52:45.287810 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:45Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:45 crc kubenswrapper[4946]: I1128 06:52:45.310931 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:45Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:45 crc kubenswrapper[4946]: I1128 06:52:45.336837 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68d47c52-9827-4ebb-88a2-ba498092709e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eef1ad3ca814effd7a94c23d46fc3b14149ccc6ae6181e19c5f24884967b7627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7152b6653aaccd36bb018d24abf196dd6f13d41edddb0939917f96bcd9e9bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5be506bd8debbc12ff5ca56d77fdf667855d2c4078cea1b1d1388f5ef8831ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c7e2754f62b9d955da7589f04acb0136ed3c504a75b0db30f22cbeec54c413\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4c7e2754f62b9d955da7589f04acb0136ed3c504a75b0db30f22cbeec54c413\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28
T06:52:44Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:52:38.666044 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:52:38.667906 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018003996/tls.crt::/tmp/serving-cert-1018003996/tls.key\\\\\\\"\\\\nI1128 06:52:44.041872 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:52:44.048251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:52:44.048273 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:52:44.048300 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:52:44.048305 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:52:44.064176 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 06:52:44.064208 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064213 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:52:44.064220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:52:44.064223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:52:44.064226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 06:52:44.064671 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 06:52:44.066682 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326666c58a9c75d0f08586c82bca1a0686ed2935f475b3c359cddc01875b248e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:45Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:45 crc kubenswrapper[4946]: I1128 06:52:45.353974 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:45Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:45 crc kubenswrapper[4946]: I1128 06:52:45.370781 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:45Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:45 crc kubenswrapper[4946]: I1128 06:52:45.387926 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0514bebed46aabc46c74ef48fda9c1e561a174d87db40128875cc0ec78446d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ab7252b4526779db2e0aea8b7747369f52242a76df7e179c6617290fee4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:45Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:45 crc kubenswrapper[4946]: I1128 06:52:45.403814 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:45Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:45 crc kubenswrapper[4946]: I1128 06:52:45.420671 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fb7be34-ed01-4217-aab4-c8ab35341ac6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af6de2bd2f3d9be0a670cb8c36e6db8c037c4987eff7356840af5b275156ad7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d384f5fd04cfe61e6a8d13538eb1b0267ca88eb0d226d8d85b5065fb5f7de9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89780aac4572411aeeb8480b61c98dbc22a2e4b3e72290470ba56c81b648b617\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b6df8018ceaf95465c4ad1bfd0bb2653ae4b87430f6ed4e21af2bd14bbb9ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:45Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:45 crc kubenswrapper[4946]: I1128 06:52:45.438160 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16482226c2bd22abff4c445ab0d5e46c2012a83ab5e616ebecc38d9009a8a6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:45Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:45 crc kubenswrapper[4946]: I1128 06:52:45.453949 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:45Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:45 crc kubenswrapper[4946]: I1128 06:52:45.565510 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:52:45 crc kubenswrapper[4946]: I1128 06:52:45.565628 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:52:45 crc kubenswrapper[4946]: I1128 06:52:45.565668 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:52:45 crc kubenswrapper[4946]: I1128 06:52:45.565700 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:52:45 crc kubenswrapper[4946]: I1128 06:52:45.565738 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:52:45 crc kubenswrapper[4946]: E1128 06:52:45.565894 4946 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 06:52:45 crc kubenswrapper[4946]: E1128 06:52:45.565972 4946 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 06:52:47.565951186 +0000 UTC m=+21.944016297 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 06:52:45 crc kubenswrapper[4946]: E1128 06:52:45.566476 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:52:47.566445178 +0000 UTC m=+21.944510299 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:52:45 crc kubenswrapper[4946]: E1128 06:52:45.566572 4946 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 06:52:45 crc kubenswrapper[4946]: E1128 06:52:45.566610 4946 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 06:52:45 crc kubenswrapper[4946]: E1128 06:52:45.566627 4946 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:52:45 crc kubenswrapper[4946]: E1128 06:52:45.566664 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 06:52:47.566654174 +0000 UTC m=+21.944719285 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:52:45 crc kubenswrapper[4946]: E1128 06:52:45.566711 4946 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 06:52:45 crc kubenswrapper[4946]: E1128 06:52:45.566744 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 06:52:47.566735426 +0000 UTC m=+21.944800537 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 06:52:45 crc kubenswrapper[4946]: E1128 06:52:45.566803 4946 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 06:52:45 crc kubenswrapper[4946]: E1128 06:52:45.566818 4946 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 06:52:45 crc kubenswrapper[4946]: E1128 06:52:45.566827 4946 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:52:45 crc kubenswrapper[4946]: E1128 06:52:45.566874 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 06:52:47.566847609 +0000 UTC m=+21.944912720 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:52:45 crc kubenswrapper[4946]: I1128 06:52:45.747740 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-j4wp8"] Nov 28 06:52:45 crc kubenswrapper[4946]: I1128 06:52:45.748140 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-j4wp8" Nov 28 06:52:45 crc kubenswrapper[4946]: I1128 06:52:45.751492 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 28 06:52:45 crc kubenswrapper[4946]: I1128 06:52:45.752235 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 28 06:52:45 crc kubenswrapper[4946]: I1128 06:52:45.752322 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 28 06:52:45 crc kubenswrapper[4946]: I1128 06:52:45.753689 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 28 06:52:45 crc kubenswrapper[4946]: I1128 06:52:45.778270 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:45Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:45 crc kubenswrapper[4946]: I1128 06:52:45.815652 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:45Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:45 crc kubenswrapper[4946]: I1128 06:52:45.834492 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0514bebed46aabc46c74ef48fda9c1e561a174d87db40128875cc0ec78446d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ab7252b4526779db2e0aea8b7747369f52242a76df7e179c6617290fee4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:45Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:45 crc kubenswrapper[4946]: I1128 06:52:45.852270 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:45Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:45 crc kubenswrapper[4946]: I1128 06:52:45.863053 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j4wp8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edc5a786-a162-4416-a240-54272b0c7376\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87x27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j4wp8\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:45Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:45 crc kubenswrapper[4946]: I1128 06:52:45.868368 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87x27\" (UniqueName: \"kubernetes.io/projected/edc5a786-a162-4416-a240-54272b0c7376-kube-api-access-87x27\") pod \"node-ca-j4wp8\" (UID: \"edc5a786-a162-4416-a240-54272b0c7376\") " pod="openshift-image-registry/node-ca-j4wp8" Nov 28 06:52:45 crc kubenswrapper[4946]: I1128 06:52:45.868414 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/edc5a786-a162-4416-a240-54272b0c7376-serviceca\") pod \"node-ca-j4wp8\" (UID: \"edc5a786-a162-4416-a240-54272b0c7376\") " pod="openshift-image-registry/node-ca-j4wp8" Nov 28 06:52:45 crc kubenswrapper[4946]: I1128 06:52:45.868439 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/edc5a786-a162-4416-a240-54272b0c7376-host\") pod \"node-ca-j4wp8\" (UID: \"edc5a786-a162-4416-a240-54272b0c7376\") " pod="openshift-image-registry/node-ca-j4wp8" Nov 28 06:52:45 crc kubenswrapper[4946]: I1128 06:52:45.876225 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68d47c52-9827-4ebb-88a2-ba498092709e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eef1ad3ca814effd7a94c23d46fc3b14149ccc6ae6181e19c5f24884967b7627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7152b6653aaccd36bb018d24abf196dd6f13d41edddb0939917f96bcd9e9bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5be506bd8debbc12ff5ca56d77fdf667855d2c4078cea1b1d1388f5ef8831ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c7e2754f62b9d955da7589f04acb0136ed3c504a75b0db30f22cbeec54c413\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4c7e2754f62b9d955da7589f04acb0136ed3c504a75b0db30f22cbeec54c413\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28
T06:52:44Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:52:38.666044 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:52:38.667906 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018003996/tls.crt::/tmp/serving-cert-1018003996/tls.key\\\\\\\"\\\\nI1128 06:52:44.041872 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:52:44.048251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:52:44.048273 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:52:44.048300 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:52:44.048305 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:52:44.064176 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 06:52:44.064208 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064213 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:52:44.064220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:52:44.064223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:52:44.064226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 06:52:44.064671 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 06:52:44.066682 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326666c58a9c75d0f08586c82bca1a0686ed2935f475b3c359cddc01875b248e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:45Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:45 crc kubenswrapper[4946]: I1128 06:52:45.887059 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fb7be34-ed01-4217-aab4-c8ab35341ac6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af6de2bd2f3d9be0a670cb8c36e6db8c037c4987eff7356840af5b275156ad7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d384f5fd04cfe61e6a8d13538eb1b0267ca88eb0d226d8d85b5065fb5f7de9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89780aac4572411aeeb8480b61c98dbc22a2e4b3e72290470ba56c81b648b617\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b6df8018ceaf95465c4ad1bfd0bb2653ae4b87430f6ed4e21af2bd14bbb9ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:45Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:45 crc kubenswrapper[4946]: I1128 06:52:45.915919 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16482226c2bd22abff4c445ab0d5e46c2012a83ab5e616ebecc38d9009a8a6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:45Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:45 crc kubenswrapper[4946]: I1128 06:52:45.930498 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:45Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:45 crc kubenswrapper[4946]: I1128 06:52:45.969177 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/edc5a786-a162-4416-a240-54272b0c7376-serviceca\") pod \"node-ca-j4wp8\" (UID: \"edc5a786-a162-4416-a240-54272b0c7376\") " pod="openshift-image-registry/node-ca-j4wp8" Nov 28 06:52:45 crc kubenswrapper[4946]: I1128 06:52:45.969231 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/edc5a786-a162-4416-a240-54272b0c7376-host\") pod \"node-ca-j4wp8\" (UID: \"edc5a786-a162-4416-a240-54272b0c7376\") " pod="openshift-image-registry/node-ca-j4wp8" Nov 28 06:52:45 crc kubenswrapper[4946]: I1128 06:52:45.969294 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-87x27\" (UniqueName: \"kubernetes.io/projected/edc5a786-a162-4416-a240-54272b0c7376-kube-api-access-87x27\") pod \"node-ca-j4wp8\" (UID: \"edc5a786-a162-4416-a240-54272b0c7376\") " pod="openshift-image-registry/node-ca-j4wp8" Nov 28 06:52:45 crc kubenswrapper[4946]: I1128 06:52:45.969428 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/edc5a786-a162-4416-a240-54272b0c7376-host\") pod \"node-ca-j4wp8\" (UID: \"edc5a786-a162-4416-a240-54272b0c7376\") " pod="openshift-image-registry/node-ca-j4wp8" Nov 28 06:52:45 crc kubenswrapper[4946]: I1128 06:52:45.971629 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/edc5a786-a162-4416-a240-54272b0c7376-serviceca\") pod \"node-ca-j4wp8\" (UID: \"edc5a786-a162-4416-a240-54272b0c7376\") " pod="openshift-image-registry/node-ca-j4wp8" Nov 28 06:52:45 crc kubenswrapper[4946]: I1128 06:52:45.989764 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:52:45 crc kubenswrapper[4946]: I1128 06:52:45.989813 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:52:45 crc kubenswrapper[4946]: I1128 06:52:45.989896 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:52:45 crc kubenswrapper[4946]: E1128 06:52:45.989927 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:52:45 crc kubenswrapper[4946]: E1128 06:52:45.990084 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:52:45 crc kubenswrapper[4946]: E1128 06:52:45.990197 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.001004 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87x27\" (UniqueName: \"kubernetes.io/projected/edc5a786-a162-4416-a240-54272b0c7376-kube-api-access-87x27\") pod \"node-ca-j4wp8\" (UID: \"edc5a786-a162-4416-a240-54272b0c7376\") " pod="openshift-image-registry/node-ca-j4wp8" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.001808 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.002595 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.004280 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:46Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.020064 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fb7be34-ed01-4217-aab4-c8ab35341ac6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af6de2bd2f3d9be0a670cb8c36e6db8c037c4987eff7356840af5b275156ad7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d384f5fd04cfe61e6a8d13538eb1b0267ca88eb0d226d8d85b5065fb5f7de9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89780aac4572411aeeb8480b61c98dbc22a2e4b3e72290470ba56c81b648b617\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b6df8018ceaf95465c4ad1bfd0bb2653ae4b87430f6ed4e21af2bd14bbb9ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:46Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.043005 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16482226c2bd22abff4c445ab0d5e46c2012a83ab5e616ebecc38d9009a8a6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:46Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.060256 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-j4wp8" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.106147 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0514bebed46aabc46c74ef48fda9c1e561a174d87db40128875cc0ec78446d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ab7252b4526779db2e0aea8b7747369f52242a76df7e179c6617290fee4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:46Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:46 crc kubenswrapper[4946]: W1128 06:52:46.109053 4946 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podedc5a786_a162_4416_a240_54272b0c7376.slice/crio-f4e7e2de249c015f6c56f80ccbfe4b39247068f96aafb3acbe5dd7fb91904b2c WatchSource:0}: Error finding container f4e7e2de249c015f6c56f80ccbfe4b39247068f96aafb3acbe5dd7fb91904b2c: Status 404 returned error can't find the container with id f4e7e2de249c015f6c56f80ccbfe4b39247068f96aafb3acbe5dd7fb91904b2c Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.129905 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:46Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.211178 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.222899 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"de5dc7ef4ea06b54b6de621ebe79d3587324b0c4cdd90e078d4116dc5252ad35"} Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.223655 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.225898 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-j4wp8" event={"ID":"edc5a786-a162-4416-a240-54272b0c7376","Type":"ContainerStarted","Data":"f4e7e2de249c015f6c56f80ccbfe4b39247068f96aafb3acbe5dd7fb91904b2c"} Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.233852 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j4wp8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edc5a786-a162-4416-a240-54272b0c7376\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87x27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j4wp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:46Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.254912 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68d47c52-9827-4ebb-88a2-ba498092709e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eef1ad3ca814effd7a94c23d46fc3b14149ccc6ae6181e19c5f24884967b7627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7152b6653aaccd36bb018d24abf196dd6f13d41edddb0939917f96bcd9e9bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5be506bd8debbc12ff5ca56d77fdf667855d2c4078cea1b1d1388f5ef8831ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c7e2754f62b9d955da7589f04acb0136ed3c504a75b0db30f22cbeec54c413\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4c7e2754f62b9d955da7589f04acb0136ed3c504a75b0db30f22cbeec54c413\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28
T06:52:44Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:52:38.666044 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:52:38.667906 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018003996/tls.crt::/tmp/serving-cert-1018003996/tls.key\\\\\\\"\\\\nI1128 06:52:44.041872 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:52:44.048251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:52:44.048273 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:52:44.048300 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:52:44.048305 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:52:44.064176 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 06:52:44.064208 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064213 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:52:44.064220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:52:44.064223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:52:44.064226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 06:52:44.064671 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 06:52:44.066682 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326666c58a9c75d0f08586c82bca1a0686ed2935f475b3c359cddc01875b248e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:46Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.267966 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:46Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.284903 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:46Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.300994 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fb7be34-ed01-4217-aab4-c8ab35341ac6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af6de2bd2f3d9be0a670cb8c36e6db8c037c4987eff7356840af5b275156ad7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d384f5fd04cfe61e6a8d13538eb1b0267ca88eb0d226d8d85b5065fb5f7de9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89780aac4572411aeeb8480b61c98dbc22a2e4b3e72290470ba56c81b648b617\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b6df8018ceaf95465c4ad1bfd0bb2653ae4b87430f6ed4e21af2bd14bbb9ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:46Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.331111 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16482226c2bd22abff4c445ab0d5e46c2012a83ab5e616ebecc38d9009a8a6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:46Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.354332 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:46Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.368788 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j4wp8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edc5a786-a162-4416-a240-54272b0c7376\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87x27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j4wp8\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:46Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.383056 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68d47c52-9827-4ebb-88a2-ba498092709e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eef1ad3ca814effd7a94c23d46fc3b14149ccc6ae6181e19c5f24884967b7627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7152b6653aaccd36bb018d24abf196dd6f13d41edddb0939917f96bcd9e9bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5be506bd8debbc12ff5ca56d77fdf667855d2c4078cea1b1d1388f5ef8831ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5dc7ef4ea06b54b6de621ebe79d3587324b0c4cdd90e078d4116dc5252ad35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4c7e2754f62b9d955da7589f04acb0136ed3c504a75b0db30f22cbeec54c413\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:52:44Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:52:38.666044 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:52:38.667906 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018003996/tls.crt::/tmp/serving-cert-1018003996/tls.key\\\\\\\"\\\\nI1128 06:52:44.041872 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:52:44.048251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:52:44.048273 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:52:44.048300 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:52:44.048305 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:52:44.064176 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 06:52:44.064208 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064213 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:52:44.064220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:52:44.064223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:52:44.064226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 06:52:44.064671 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 06:52:44.066682 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326666c58a9c75d0f08586c82bca1a0686ed2935f475b3c359cddc01875b248e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:46Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.396298 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:46Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.408329 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:46Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.435044 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0514bebed46aabc46c74ef48fda9c1e561a174d87db40128875cc0ec78446d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ab7252b4526779db2e0aea8b7747369f52242a76df7e179c6617290fee4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:46Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.453727 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:46Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.612908 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-6tvtt"] Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.613309 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-6tvtt" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.616331 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.617161 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.617420 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.623151 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-9g9w4"] Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.623599 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-9g9w4" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.625888 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-pqf5z"] Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.626659 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-g2vhr"] Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.627013 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-pqf5z" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.627579 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.633747 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pkknv"] Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.634433 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.635118 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.635213 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.635346 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.634671 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.634833 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.634888 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.637660 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.638116 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.637821 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.637935 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.638042 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.638705 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.639948 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.640235 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.640366 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.640721 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.642428 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.643305 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.646841 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.675917 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fb7be34-ed01-4217-aab4-c8ab35341ac6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af6de2bd2f3d9be0a670cb8c36e6db8c037c4987eff7356840af5b275156ad7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d384f5fd04cfe61e6a8d13538eb1b0267ca88eb0d226d8d85b5065fb5f7de9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89780aac4572411aeeb8480b61c98dbc22a2e4b3e72290470ba56c81b648b617\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b6df8018ceaf95465c4ad1bfd0bb2653ae4b87430f6ed4e21af2bd14bbb9ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:46Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.692962 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/857356d2-6585-41c6-9a2c-e06ef45f7303-host-run-netns\") pod \"multus-9g9w4\" (UID: \"857356d2-6585-41c6-9a2c-e06ef45f7303\") " pod="openshift-multus/multus-9g9w4" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.693275 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/857356d2-6585-41c6-9a2c-e06ef45f7303-multus-conf-dir\") pod \"multus-9g9w4\" (UID: \"857356d2-6585-41c6-9a2c-e06ef45f7303\") " pod="openshift-multus/multus-9g9w4" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.699566 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/45d160dd-b1b4-4cdf-800c-74195ab023e1-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-pqf5z\" (UID: \"45d160dd-b1b4-4cdf-800c-74195ab023e1\") " pod="openshift-multus/multus-additional-cni-plugins-pqf5z" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.699714 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/857356d2-6585-41c6-9a2c-e06ef45f7303-host-var-lib-cni-multus\") pod \"multus-9g9w4\" (UID: \"857356d2-6585-41c6-9a2c-e06ef45f7303\") " pod="openshift-multus/multus-9g9w4" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.699745 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/857356d2-6585-41c6-9a2c-e06ef45f7303-host-var-lib-cni-bin\") pod \"multus-9g9w4\" (UID: \"857356d2-6585-41c6-9a2c-e06ef45f7303\") " pod="openshift-multus/multus-9g9w4" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.699779 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" 
(UniqueName: \"kubernetes.io/host-path/857356d2-6585-41c6-9a2c-e06ef45f7303-etc-kubernetes\") pod \"multus-9g9w4\" (UID: \"857356d2-6585-41c6-9a2c-e06ef45f7303\") " pod="openshift-multus/multus-9g9w4" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.699802 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-ovn-node-metrics-cert\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.699852 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/857356d2-6585-41c6-9a2c-e06ef45f7303-host-var-lib-kubelet\") pod \"multus-9g9w4\" (UID: \"857356d2-6585-41c6-9a2c-e06ef45f7303\") " pod="openshift-multus/multus-9g9w4" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.699879 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmv4h\" (UniqueName: \"kubernetes.io/projected/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-kube-api-access-tmv4h\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.699918 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/45d160dd-b1b4-4cdf-800c-74195ab023e1-tuning-conf-dir\") pod \"multus-additional-cni-plugins-pqf5z\" (UID: \"45d160dd-b1b4-4cdf-800c-74195ab023e1\") " pod="openshift-multus/multus-additional-cni-plugins-pqf5z" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.699942 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-run-ovn\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.699965 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnm9d\" (UniqueName: \"kubernetes.io/projected/7450befc-262f-45d1-a5f4-f445e540185b-kube-api-access-hnm9d\") pod \"machine-config-daemon-g2vhr\" (UID: \"7450befc-262f-45d1-a5f4-f445e540185b\") " pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.699989 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/857356d2-6585-41c6-9a2c-e06ef45f7303-cnibin\") pod \"multus-9g9w4\" (UID: \"857356d2-6585-41c6-9a2c-e06ef45f7303\") " pod="openshift-multus/multus-9g9w4" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.700005 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/45d160dd-b1b4-4cdf-800c-74195ab023e1-cni-binary-copy\") pod \"multus-additional-cni-plugins-pqf5z\" (UID: \"45d160dd-b1b4-4cdf-800c-74195ab023e1\") " pod="openshift-multus/multus-additional-cni-plugins-pqf5z" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.700028 
4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7450befc-262f-45d1-a5f4-f445e540185b-proxy-tls\") pod \"machine-config-daemon-g2vhr\" (UID: \"7450befc-262f-45d1-a5f4-f445e540185b\") " pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.700045 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/857356d2-6585-41c6-9a2c-e06ef45f7303-cni-binary-copy\") pod \"multus-9g9w4\" (UID: \"857356d2-6585-41c6-9a2c-e06ef45f7303\") " pod="openshift-multus/multus-9g9w4" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.700063 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/857356d2-6585-41c6-9a2c-e06ef45f7303-multus-socket-dir-parent\") pod \"multus-9g9w4\" (UID: \"857356d2-6585-41c6-9a2c-e06ef45f7303\") " pod="openshift-multus/multus-9g9w4" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.700080 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-host-run-netns\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.700098 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbjr2\" (UniqueName: \"kubernetes.io/projected/857356d2-6585-41c6-9a2c-e06ef45f7303-kube-api-access-fbjr2\") pod \"multus-9g9w4\" (UID: \"857356d2-6585-41c6-9a2c-e06ef45f7303\") " pod="openshift-multus/multus-9g9w4" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.700116 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-var-lib-openvswitch\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.700139 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-etc-openvswitch\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.700157 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-node-log\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.700175 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-host-cni-bin\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 
28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.700220 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-ovnkube-config\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.700244 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/857356d2-6585-41c6-9a2c-e06ef45f7303-system-cni-dir\") pod \"multus-9g9w4\" (UID: \"857356d2-6585-41c6-9a2c-e06ef45f7303\") " pod="openshift-multus/multus-9g9w4" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.700265 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45pfm\" (UniqueName: \"kubernetes.io/projected/a10fe13c-fc86-474a-945b-f96caafad2a6-kube-api-access-45pfm\") pod \"node-resolver-6tvtt\" (UID: \"a10fe13c-fc86-474a-945b-f96caafad2a6\") " pod="openshift-dns/node-resolver-6tvtt" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.700282 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/857356d2-6585-41c6-9a2c-e06ef45f7303-os-release\") pod \"multus-9g9w4\" (UID: \"857356d2-6585-41c6-9a2c-e06ef45f7303\") " pod="openshift-multus/multus-9g9w4" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.700301 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/857356d2-6585-41c6-9a2c-e06ef45f7303-host-run-k8s-cni-cncf-io\") pod \"multus-9g9w4\" (UID: \"857356d2-6585-41c6-9a2c-e06ef45f7303\") " pod="openshift-multus/multus-9g9w4" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.700323 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7450befc-262f-45d1-a5f4-f445e540185b-mcd-auth-proxy-config\") pod \"machine-config-daemon-g2vhr\" (UID: \"7450befc-262f-45d1-a5f4-f445e540185b\") " pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.700344 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/857356d2-6585-41c6-9a2c-e06ef45f7303-multus-cni-dir\") pod \"multus-9g9w4\" (UID: \"857356d2-6585-41c6-9a2c-e06ef45f7303\") " pod="openshift-multus/multus-9g9w4" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.700361 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/857356d2-6585-41c6-9a2c-e06ef45f7303-hostroot\") pod \"multus-9g9w4\" (UID: \"857356d2-6585-41c6-9a2c-e06ef45f7303\") " pod="openshift-multus/multus-9g9w4" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.700380 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-systemd-units\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.700398 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-host-slash\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.700417 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-run-systemd\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.700436 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-log-socket\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.700476 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/45d160dd-b1b4-4cdf-800c-74195ab023e1-system-cni-dir\") pod \"multus-additional-cni-plugins-pqf5z\" (UID: \"45d160dd-b1b4-4cdf-800c-74195ab023e1\") " pod="openshift-multus/multus-additional-cni-plugins-pqf5z" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.700501 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szzk4\" (UniqueName: \"kubernetes.io/projected/45d160dd-b1b4-4cdf-800c-74195ab023e1-kube-api-access-szzk4\") pod \"multus-additional-cni-plugins-pqf5z\" (UID: \"45d160dd-b1b4-4cdf-800c-74195ab023e1\") " pod="openshift-multus/multus-additional-cni-plugins-pqf5z" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.700522 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-ovnkube-script-lib\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.700540 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7450befc-262f-45d1-a5f4-f445e540185b-rootfs\") pod \"machine-config-daemon-g2vhr\" (UID: \"7450befc-262f-45d1-a5f4-f445e540185b\") " pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.700572 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/857356d2-6585-41c6-9a2c-e06ef45f7303-host-run-multus-certs\") pod \"multus-9g9w4\" (UID: \"857356d2-6585-41c6-9a2c-e06ef45f7303\") " pod="openshift-multus/multus-9g9w4" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.700593 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/45d160dd-b1b4-4cdf-800c-74195ab023e1-cnibin\") pod \"multus-additional-cni-plugins-pqf5z\" (UID: \"45d160dd-b1b4-4cdf-800c-74195ab023e1\") " pod="openshift-multus/multus-additional-cni-plugins-pqf5z" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.700611 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-host-run-ovn-kubernetes\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.700627 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/857356d2-6585-41c6-9a2c-e06ef45f7303-multus-daemon-config\") pod \"multus-9g9w4\" (UID: \"857356d2-6585-41c6-9a2c-e06ef45f7303\") " pod="openshift-multus/multus-9g9w4" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.700644 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-host-kubelet\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.700666 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.700686 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a10fe13c-fc86-474a-945b-f96caafad2a6-hosts-file\") pod \"node-resolver-6tvtt\" (UID: \"a10fe13c-fc86-474a-945b-f96caafad2a6\") " pod="openshift-dns/node-resolver-6tvtt" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.700702 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-run-openvswitch\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.700719 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-host-cni-netd\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.700738 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-env-overrides\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.700757 
4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/45d160dd-b1b4-4cdf-800c-74195ab023e1-os-release\") pod \"multus-additional-cni-plugins-pqf5z\" (UID: \"45d160dd-b1b4-4cdf-800c-74195ab023e1\") " pod="openshift-multus/multus-additional-cni-plugins-pqf5z" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.732416 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16482226c2bd22abff4c445ab0d5e46c2012a83ab5e616ebecc38d9009a8a6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:46Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.753075 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:46Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.764961 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a10fe13c-fc86-474a-945b-f96caafad2a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45pfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:46Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.783625 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68d47c52-9827-4ebb-88a2-ba498092709e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eef1ad3ca814effd7a94c23d46fc3b14149ccc6ae6181e19c5f24884967b7627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7152b6653aaccd36bb018d24abf196dd6f13d41edddb0939917f96bcd9e9bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5be506bd8debbc12ff5ca56d77fdf667855d2c4078cea1b1d1388f5ef8831ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5dc7ef4ea06b54b6de621ebe79d3587324b0c4cdd90e078d4116dc5252ad35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4c7e2754f62b9d955da7589f04acb0136ed3c504a75b0db30f22cbeec54c413\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:52:44Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:52:38.666044 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:52:38.667906 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018003996/tls.crt::/tmp/serving-cert-1018003996/tls.key\\\\\\\"\\\\nI1128 06:52:44.041872 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:52:44.048251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:52:44.048273 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:52:44.048300 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:52:44.048305 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:52:44.064176 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 06:52:44.064208 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064213 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:52:44.064220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:52:44.064223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:52:44.064226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 06:52:44.064671 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 06:52:44.066682 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326666c58a9c75d0f08586c82bca1a0686ed2935f475b3c359cddc01875b248e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:46Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.798162 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:46Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.801514 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45pfm\" (UniqueName: \"kubernetes.io/projected/a10fe13c-fc86-474a-945b-f96caafad2a6-kube-api-access-45pfm\") pod \"node-resolver-6tvtt\" (UID: \"a10fe13c-fc86-474a-945b-f96caafad2a6\") " pod="openshift-dns/node-resolver-6tvtt" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.801563 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/857356d2-6585-41c6-9a2c-e06ef45f7303-os-release\") pod \"multus-9g9w4\" (UID: \"857356d2-6585-41c6-9a2c-e06ef45f7303\") " pod="openshift-multus/multus-9g9w4" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.801588 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/857356d2-6585-41c6-9a2c-e06ef45f7303-host-run-k8s-cni-cncf-io\") pod \"multus-9g9w4\" (UID: \"857356d2-6585-41c6-9a2c-e06ef45f7303\") " pod="openshift-multus/multus-9g9w4" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.801613 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-systemd-units\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.801662 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-host-slash\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc 
kubenswrapper[4946]: I1128 06:52:46.801700 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7450befc-262f-45d1-a5f4-f445e540185b-mcd-auth-proxy-config\") pod \"machine-config-daemon-g2vhr\" (UID: \"7450befc-262f-45d1-a5f4-f445e540185b\") " pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.801725 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/857356d2-6585-41c6-9a2c-e06ef45f7303-multus-cni-dir\") pod \"multus-9g9w4\" (UID: \"857356d2-6585-41c6-9a2c-e06ef45f7303\") " pod="openshift-multus/multus-9g9w4" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.801747 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/857356d2-6585-41c6-9a2c-e06ef45f7303-hostroot\") pod \"multus-9g9w4\" (UID: \"857356d2-6585-41c6-9a2c-e06ef45f7303\") " pod="openshift-multus/multus-9g9w4" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.801772 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/45d160dd-b1b4-4cdf-800c-74195ab023e1-system-cni-dir\") pod \"multus-additional-cni-plugins-pqf5z\" (UID: \"45d160dd-b1b4-4cdf-800c-74195ab023e1\") " pod="openshift-multus/multus-additional-cni-plugins-pqf5z" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.801794 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-run-systemd\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.801812 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-log-socket\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.801829 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7450befc-262f-45d1-a5f4-f445e540185b-rootfs\") pod \"machine-config-daemon-g2vhr\" (UID: \"7450befc-262f-45d1-a5f4-f445e540185b\") " pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.801848 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szzk4\" (UniqueName: \"kubernetes.io/projected/45d160dd-b1b4-4cdf-800c-74195ab023e1-kube-api-access-szzk4\") pod \"multus-additional-cni-plugins-pqf5z\" (UID: \"45d160dd-b1b4-4cdf-800c-74195ab023e1\") " pod="openshift-multus/multus-additional-cni-plugins-pqf5z" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.801933 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-ovnkube-script-lib\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 
06:52:46.801953 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-host-run-ovn-kubernetes\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.801984 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/857356d2-6585-41c6-9a2c-e06ef45f7303-host-run-multus-certs\") pod \"multus-9g9w4\" (UID: \"857356d2-6585-41c6-9a2c-e06ef45f7303\") " pod="openshift-multus/multus-9g9w4" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.802003 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/45d160dd-b1b4-4cdf-800c-74195ab023e1-cnibin\") pod \"multus-additional-cni-plugins-pqf5z\" (UID: \"45d160dd-b1b4-4cdf-800c-74195ab023e1\") " pod="openshift-multus/multus-additional-cni-plugins-pqf5z" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.802023 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a10fe13c-fc86-474a-945b-f96caafad2a6-hosts-file\") pod \"node-resolver-6tvtt\" (UID: \"a10fe13c-fc86-474a-945b-f96caafad2a6\") " pod="openshift-dns/node-resolver-6tvtt" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.802043 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/857356d2-6585-41c6-9a2c-e06ef45f7303-multus-daemon-config\") pod \"multus-9g9w4\" (UID: \"857356d2-6585-41c6-9a2c-e06ef45f7303\") " pod="openshift-multus/multus-9g9w4" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.802045 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/857356d2-6585-41c6-9a2c-e06ef45f7303-host-run-k8s-cni-cncf-io\") pod \"multus-9g9w4\" (UID: \"857356d2-6585-41c6-9a2c-e06ef45f7303\") " pod="openshift-multus/multus-9g9w4" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.802063 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-host-kubelet\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.802087 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.802095 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-systemd-units\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.802107 4946 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-host-slash\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.802114 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/45d160dd-b1b4-4cdf-800c-74195ab023e1-os-release\") pod \"multus-additional-cni-plugins-pqf5z\" (UID: \"45d160dd-b1b4-4cdf-800c-74195ab023e1\") " pod="openshift-multus/multus-additional-cni-plugins-pqf5z" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.802146 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-run-systemd\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.802162 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-run-openvswitch\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.802182 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/857356d2-6585-41c6-9a2c-e06ef45f7303-hostroot\") pod \"multus-9g9w4\" (UID: \"857356d2-6585-41c6-9a2c-e06ef45f7303\") " pod="openshift-multus/multus-9g9w4" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.802196 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/45d160dd-b1b4-4cdf-800c-74195ab023e1-os-release\") pod \"multus-additional-cni-plugins-pqf5z\" (UID: \"45d160dd-b1b4-4cdf-800c-74195ab023e1\") " pod="openshift-multus/multus-additional-cni-plugins-pqf5z" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.802207 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-host-cni-netd\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.802243 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/45d160dd-b1b4-4cdf-800c-74195ab023e1-system-cni-dir\") pod \"multus-additional-cni-plugins-pqf5z\" (UID: \"45d160dd-b1b4-4cdf-800c-74195ab023e1\") " pod="openshift-multus/multus-additional-cni-plugins-pqf5z" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.802251 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-host-run-ovn-kubernetes\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.802182 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-host-cni-netd\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.802282 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/857356d2-6585-41c6-9a2c-e06ef45f7303-host-run-multus-certs\") pod \"multus-9g9w4\" (UID: \"857356d2-6585-41c6-9a2c-e06ef45f7303\") " pod="openshift-multus/multus-9g9w4" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.802293 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-env-overrides\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.802311 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/45d160dd-b1b4-4cdf-800c-74195ab023e1-cnibin\") pod \"multus-additional-cni-plugins-pqf5z\" (UID: \"45d160dd-b1b4-4cdf-800c-74195ab023e1\") " pod="openshift-multus/multus-additional-cni-plugins-pqf5z" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.802334 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/857356d2-6585-41c6-9a2c-e06ef45f7303-host-run-netns\") pod \"multus-9g9w4\" (UID: \"857356d2-6585-41c6-9a2c-e06ef45f7303\") " pod="openshift-multus/multus-9g9w4" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.802347 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a10fe13c-fc86-474a-945b-f96caafad2a6-hosts-file\") pod \"node-resolver-6tvtt\" (UID: \"a10fe13c-fc86-474a-945b-f96caafad2a6\") " pod="openshift-dns/node-resolver-6tvtt" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.802353 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/857356d2-6585-41c6-9a2c-e06ef45f7303-multus-conf-dir\") pod \"multus-9g9w4\" (UID: \"857356d2-6585-41c6-9a2c-e06ef45f7303\") " pod="openshift-multus/multus-9g9w4" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.802375 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/45d160dd-b1b4-4cdf-800c-74195ab023e1-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-pqf5z\" (UID: \"45d160dd-b1b4-4cdf-800c-74195ab023e1\") " pod="openshift-multus/multus-additional-cni-plugins-pqf5z" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.802410 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/857356d2-6585-41c6-9a2c-e06ef45f7303-host-var-lib-cni-multus\") pod \"multus-9g9w4\" (UID: \"857356d2-6585-41c6-9a2c-e06ef45f7303\") " pod="openshift-multus/multus-9g9w4" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.802429 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/857356d2-6585-41c6-9a2c-e06ef45f7303-host-var-lib-cni-bin\") pod \"multus-9g9w4\" (UID: 
\"857356d2-6585-41c6-9a2c-e06ef45f7303\") " pod="openshift-multus/multus-9g9w4" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.802445 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/857356d2-6585-41c6-9a2c-e06ef45f7303-etc-kubernetes\") pod \"multus-9g9w4\" (UID: \"857356d2-6585-41c6-9a2c-e06ef45f7303\") " pod="openshift-multus/multus-9g9w4" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.802490 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-ovn-node-metrics-cert\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.802519 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/857356d2-6585-41c6-9a2c-e06ef45f7303-host-var-lib-kubelet\") pod \"multus-9g9w4\" (UID: \"857356d2-6585-41c6-9a2c-e06ef45f7303\") " pod="openshift-multus/multus-9g9w4" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.802537 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmv4h\" (UniqueName: \"kubernetes.io/projected/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-kube-api-access-tmv4h\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.802557 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/45d160dd-b1b4-4cdf-800c-74195ab023e1-tuning-conf-dir\") pod \"multus-additional-cni-plugins-pqf5z\" (UID: \"45d160dd-b1b4-4cdf-800c-74195ab023e1\") " pod="openshift-multus/multus-additional-cni-plugins-pqf5z" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.802582 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-run-ovn\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.802606 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/45d160dd-b1b4-4cdf-800c-74195ab023e1-cni-binary-copy\") pod \"multus-additional-cni-plugins-pqf5z\" (UID: \"45d160dd-b1b4-4cdf-800c-74195ab023e1\") " pod="openshift-multus/multus-additional-cni-plugins-pqf5z" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.802630 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnm9d\" (UniqueName: \"kubernetes.io/projected/7450befc-262f-45d1-a5f4-f445e540185b-kube-api-access-hnm9d\") pod \"machine-config-daemon-g2vhr\" (UID: \"7450befc-262f-45d1-a5f4-f445e540185b\") " pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.802649 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/857356d2-6585-41c6-9a2c-e06ef45f7303-cnibin\") pod \"multus-9g9w4\" (UID: \"857356d2-6585-41c6-9a2c-e06ef45f7303\") " 
pod="openshift-multus/multus-9g9w4" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.802669 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7450befc-262f-45d1-a5f4-f445e540185b-proxy-tls\") pod \"machine-config-daemon-g2vhr\" (UID: \"7450befc-262f-45d1-a5f4-f445e540185b\") " pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.802687 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/857356d2-6585-41c6-9a2c-e06ef45f7303-cni-binary-copy\") pod \"multus-9g9w4\" (UID: \"857356d2-6585-41c6-9a2c-e06ef45f7303\") " pod="openshift-multus/multus-9g9w4" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.802703 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/857356d2-6585-41c6-9a2c-e06ef45f7303-multus-socket-dir-parent\") pod \"multus-9g9w4\" (UID: \"857356d2-6585-41c6-9a2c-e06ef45f7303\") " pod="openshift-multus/multus-9g9w4" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.802719 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-host-run-netns\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.802734 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-ovnkube-config\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.802749 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/857356d2-6585-41c6-9a2c-e06ef45f7303-system-cni-dir\") pod \"multus-9g9w4\" (UID: \"857356d2-6585-41c6-9a2c-e06ef45f7303\") " pod="openshift-multus/multus-9g9w4" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.802765 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbjr2\" (UniqueName: \"kubernetes.io/projected/857356d2-6585-41c6-9a2c-e06ef45f7303-kube-api-access-fbjr2\") pod \"multus-9g9w4\" (UID: \"857356d2-6585-41c6-9a2c-e06ef45f7303\") " pod="openshift-multus/multus-9g9w4" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.802781 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-var-lib-openvswitch\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.802798 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-etc-openvswitch\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.802853 4946 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-node-log\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.802873 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-host-cni-bin\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.802902 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-ovnkube-script-lib\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.802939 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-host-cni-bin\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.802968 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-log-socket\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.803003 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7450befc-262f-45d1-a5f4-f445e540185b-rootfs\") pod \"machine-config-daemon-g2vhr\" (UID: \"7450befc-262f-45d1-a5f4-f445e540185b\") " pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.803048 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7450befc-262f-45d1-a5f4-f445e540185b-mcd-auth-proxy-config\") pod \"machine-config-daemon-g2vhr\" (UID: \"7450befc-262f-45d1-a5f4-f445e540185b\") " pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.802040 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/857356d2-6585-41c6-9a2c-e06ef45f7303-os-release\") pod \"multus-9g9w4\" (UID: \"857356d2-6585-41c6-9a2c-e06ef45f7303\") " pod="openshift-multus/multus-9g9w4" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.803117 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-run-openvswitch\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.802053 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/857356d2-6585-41c6-9a2c-e06ef45f7303-multus-cni-dir\") pod \"multus-9g9w4\" (UID: \"857356d2-6585-41c6-9a2c-e06ef45f7303\") " pod="openshift-multus/multus-9g9w4" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.803163 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-host-kubelet\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.803167 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/857356d2-6585-41c6-9a2c-e06ef45f7303-multus-daemon-config\") pod \"multus-9g9w4\" (UID: \"857356d2-6585-41c6-9a2c-e06ef45f7303\") " pod="openshift-multus/multus-9g9w4" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.803191 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.803225 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-host-run-netns\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.803291 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/857356d2-6585-41c6-9a2c-e06ef45f7303-etc-kubernetes\") pod \"multus-9g9w4\" (UID: \"857356d2-6585-41c6-9a2c-e06ef45f7303\") " pod="openshift-multus/multus-9g9w4" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.803324 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/857356d2-6585-41c6-9a2c-e06ef45f7303-host-run-netns\") pod \"multus-9g9w4\" (UID: \"857356d2-6585-41c6-9a2c-e06ef45f7303\") " pod="openshift-multus/multus-9g9w4" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.803348 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/857356d2-6585-41c6-9a2c-e06ef45f7303-multus-conf-dir\") pod \"multus-9g9w4\" (UID: \"857356d2-6585-41c6-9a2c-e06ef45f7303\") " pod="openshift-multus/multus-9g9w4" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.803437 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-env-overrides\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.803454 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/857356d2-6585-41c6-9a2c-e06ef45f7303-cnibin\") pod \"multus-9g9w4\" (UID: \"857356d2-6585-41c6-9a2c-e06ef45f7303\") " pod="openshift-multus/multus-9g9w4" Nov 28 06:52:46 crc 
kubenswrapper[4946]: I1128 06:52:46.803842 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/45d160dd-b1b4-4cdf-800c-74195ab023e1-cni-binary-copy\") pod \"multus-additional-cni-plugins-pqf5z\" (UID: \"45d160dd-b1b4-4cdf-800c-74195ab023e1\") " pod="openshift-multus/multus-additional-cni-plugins-pqf5z" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.803980 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/45d160dd-b1b4-4cdf-800c-74195ab023e1-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-pqf5z\" (UID: \"45d160dd-b1b4-4cdf-800c-74195ab023e1\") " pod="openshift-multus/multus-additional-cni-plugins-pqf5z" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.804030 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/857356d2-6585-41c6-9a2c-e06ef45f7303-host-var-lib-cni-multus\") pod \"multus-9g9w4\" (UID: \"857356d2-6585-41c6-9a2c-e06ef45f7303\") " pod="openshift-multus/multus-9g9w4" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.804059 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/857356d2-6585-41c6-9a2c-e06ef45f7303-host-var-lib-cni-bin\") pod \"multus-9g9w4\" (UID: \"857356d2-6585-41c6-9a2c-e06ef45f7303\") " pod="openshift-multus/multus-9g9w4" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.804231 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/857356d2-6585-41c6-9a2c-e06ef45f7303-host-var-lib-kubelet\") pod \"multus-9g9w4\" (UID: \"857356d2-6585-41c6-9a2c-e06ef45f7303\") " pod="openshift-multus/multus-9g9w4" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.804269 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-run-ovn\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.804300 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-var-lib-openvswitch\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.804365 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/857356d2-6585-41c6-9a2c-e06ef45f7303-cni-binary-copy\") pod \"multus-9g9w4\" (UID: \"857356d2-6585-41c6-9a2c-e06ef45f7303\") " pod="openshift-multus/multus-9g9w4" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.804390 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-etc-openvswitch\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.804432 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-node-log\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.804449 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/857356d2-6585-41c6-9a2c-e06ef45f7303-system-cni-dir\") pod \"multus-9g9w4\" (UID: \"857356d2-6585-41c6-9a2c-e06ef45f7303\") " pod="openshift-multus/multus-9g9w4" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.804561 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/857356d2-6585-41c6-9a2c-e06ef45f7303-multus-socket-dir-parent\") pod \"multus-9g9w4\" (UID: \"857356d2-6585-41c6-9a2c-e06ef45f7303\") " pod="openshift-multus/multus-9g9w4" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.804632 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-ovnkube-config\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.804671 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/45d160dd-b1b4-4cdf-800c-74195ab023e1-tuning-conf-dir\") pod \"multus-additional-cni-plugins-pqf5z\" (UID: \"45d160dd-b1b4-4cdf-800c-74195ab023e1\") " pod="openshift-multus/multus-additional-cni-plugins-pqf5z" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.809130 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-ovn-node-metrics-cert\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.810801 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7450befc-262f-45d1-a5f4-f445e540185b-proxy-tls\") pod \"machine-config-daemon-g2vhr\" (UID: \"7450befc-262f-45d1-a5f4-f445e540185b\") " pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.814728 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:46Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.819406 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45pfm\" (UniqueName: \"kubernetes.io/projected/a10fe13c-fc86-474a-945b-f96caafad2a6-kube-api-access-45pfm\") pod \"node-resolver-6tvtt\" (UID: \"a10fe13c-fc86-474a-945b-f96caafad2a6\") " pod="openshift-dns/node-resolver-6tvtt" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.822187 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szzk4\" (UniqueName: \"kubernetes.io/projected/45d160dd-b1b4-4cdf-800c-74195ab023e1-kube-api-access-szzk4\") pod \"multus-additional-cni-plugins-pqf5z\" (UID: \"45d160dd-b1b4-4cdf-800c-74195ab023e1\") " pod="openshift-multus/multus-additional-cni-plugins-pqf5z" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.823816 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbjr2\" (UniqueName: \"kubernetes.io/projected/857356d2-6585-41c6-9a2c-e06ef45f7303-kube-api-access-fbjr2\") pod \"multus-9g9w4\" (UID: \"857356d2-6585-41c6-9a2c-e06ef45f7303\") " pod="openshift-multus/multus-9g9w4" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.824854 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmv4h\" (UniqueName: \"kubernetes.io/projected/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-kube-api-access-tmv4h\") pod \"ovnkube-node-pkknv\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.825268 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnm9d\" (UniqueName: \"kubernetes.io/projected/7450befc-262f-45d1-a5f4-f445e540185b-kube-api-access-hnm9d\") pod \"machine-config-daemon-g2vhr\" (UID: \"7450befc-262f-45d1-a5f4-f445e540185b\") " pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.831224 4946 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0514bebed46aabc46c74ef48fda9c1e561a174d87db40128875cc0ec78446d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ab7252b4526779db2e0aea8b7747369f52242a76df7e179c6617290fee4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:46Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.848478 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:46Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.860739 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j4wp8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edc5a786-a162-4416-a240-54272b0c7376\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87x27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j4wp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:46Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.877273 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68d47c52-9827-4ebb-88a2-ba498092709e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eef1ad3ca814effd7a94c23d46fc3b14149ccc6ae6181e19c5f24884967b7627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7152b6653aaccd36bb018d24abf196dd6f13d41edddb0939917f96bcd9e9bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5be506bd8debbc12ff5ca56d77fdf667855d2c4078cea1b1d1388f5ef8831ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5dc7ef4ea06b54b6de621ebe79d3587324b0c4cdd90e078d4116dc5252ad35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4c7e2754f62b9d955da7589f04acb0136ed3c504a75b0db30f22cbeec54c413\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:52:44Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:52:38.666044 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:52:38.667906 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018003996/tls.crt::/tmp/serving-cert-1018003996/tls.key\\\\\\\"\\\\nI1128 06:52:44.041872 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:52:44.048251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:52:44.048273 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:52:44.048300 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:52:44.048305 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:52:44.064176 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 06:52:44.064208 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064213 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:52:44.064220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:52:44.064223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:52:44.064226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 06:52:44.064671 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 06:52:44.066682 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326666c58a9c75d0f08586c82bca1a0686ed2935f475b3c359cddc01875b248e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:46Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.890526 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:46Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.905259 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j4wp8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edc5a786-a162-4416-a240-54272b0c7376\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87x27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j4wp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:46Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.919020 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16482226c2bd22abff4c445ab0d5e46c2012a83ab5e616ebecc38d9009a8a6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:46Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.925195 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-6tvtt" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.933536 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9g9w4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"857356d2-6585-41c6-9a2c-e06ef45f7303\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9g9w4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:46Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.936696 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-9g9w4" Nov 28 06:52:46 crc kubenswrapper[4946]: W1128 06:52:46.937968 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda10fe13c_fc86_474a_945b_f96caafad2a6.slice/crio-b6bc4ffd43eb599a1b056003745df190243ed5b7b4101cb99260ee5fae787a0e WatchSource:0}: Error finding container b6bc4ffd43eb599a1b056003745df190243ed5b7b4101cb99260ee5fae787a0e: Status 404 returned error can't find the container with id b6bc4ffd43eb599a1b056003745df190243ed5b7b4101cb99260ee5fae787a0e Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.943858 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" Nov 28 06:52:46 crc kubenswrapper[4946]: W1128 06:52:46.947721 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod857356d2_6585_41c6_9a2c_e06ef45f7303.slice/crio-7410b61b02d49e34038758cb5cba2ae32163847d3216c14f041a11c7b987a7f7 WatchSource:0}: Error finding container 7410b61b02d49e34038758cb5cba2ae32163847d3216c14f041a11c7b987a7f7: Status 404 returned error can't find the container with id 7410b61b02d49e34038758cb5cba2ae32163847d3216c14f041a11c7b987a7f7 Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.950548 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-pqf5z" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.957246 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d77
3257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev
/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\
\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pkknv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:46Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.959447 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:46 crc kubenswrapper[4946]: W1128 06:52:46.967385 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7450befc_262f_45d1_a5f4_f445e540185b.slice/crio-b29ce38df997a467458956c01955e8b964643c042b41e7d001de9a2721373744 WatchSource:0}: Error finding container b29ce38df997a467458956c01955e8b964643c042b41e7d001de9a2721373744: Status 404 returned error can't find the container with id b29ce38df997a467458956c01955e8b964643c042b41e7d001de9a2721373744 Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.970006 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:46Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:46 crc kubenswrapper[4946]: W1128 06:52:46.973633 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45d160dd_b1b4_4cdf_800c_74195ab023e1.slice/crio-e02a3735b575639d2d0db3f25bdb4a5e327f3476da0602aea776ff5807debe4e WatchSource:0}: Error finding container e02a3735b575639d2d0db3f25bdb4a5e327f3476da0602aea776ff5807debe4e: Status 404 returned error can't find the container with id e02a3735b575639d2d0db3f25bdb4a5e327f3476da0602aea776ff5807debe4e Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.984085 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0514bebed46aabc46c74ef48fda9c1e561a174d87db40128875cc0ec78446d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ab7252b4526779db2e0aea8b7747369f52242a76df7e179c6617290fee4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:46Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:46 crc kubenswrapper[4946]: W1128 06:52:46.993843 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47e7046d_60dc_4dc0_b63e_f22f4ca5cd51.slice/crio-61e3f55bcbc151c49e2b4d5d26793757faebef7293f442e7cc50af03aab8bb11 WatchSource:0}: Error finding container 61e3f55bcbc151c49e2b4d5d26793757faebef7293f442e7cc50af03aab8bb11: Status 404 returned error can't find the container with id 61e3f55bcbc151c49e2b4d5d26793757faebef7293f442e7cc50af03aab8bb11 Nov 28 06:52:46 crc kubenswrapper[4946]: I1128 06:52:46.998569 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:46Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.017513 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqf5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d160dd-b1b4-4cdf-800c-74195ab023e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqf5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.031634 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7450befc-262f-45d1-a5f4-f445e540185b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g2vhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.038126 4946 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.040133 4946 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.040172 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.040184 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.040350 4946 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.051109 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fb7be34-ed01-4217-aab4-c8ab35341ac6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af6de2bd2f3d9be0a670cb8c36e6db8c037c4987eff7356840af5b275156ad7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d384f5fd04cfe61e6a8d13538eb1b0267ca88eb0d226d8d85b5065fb5f7de9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89780aac4572411aeeb8480b61c98dbc22a2e4b3e72290470ba56c81b648b617\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/ope
nshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b6df8018ceaf95465c4ad1bfd0bb2653ae4b87430f6ed4e21af2bd14bbb9ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.051489 4946 kubelet_node_status.go:115] "Node was previously registered" node="crc" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.052570 4946 kubelet_node_status.go:79] "Successfully registered node" node="crc" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.060203 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.060257 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.060279 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.060312 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.060326 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:47Z","lastTransitionTime":"2025-11-28T06:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.073996 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.090323 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a10fe13c-fc86-474a-945b-f96caafad2a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45pfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:47 crc kubenswrapper[4946]: E1128 06:52:47.094301 4946 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c74aee61-da50-44cb-be0e-1b7c62358c2a\\\",\\\"systemUUID\\\":\\\"99c18e26-a017-426b-bf43-1a9a83f35f94\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.098806 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.098857 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.098892 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.098913 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.098925 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:47Z","lastTransitionTime":"2025-11-28T06:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:47 crc kubenswrapper[4946]: E1128 06:52:47.136133 4946 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c74aee61-da50-44cb-be0e-1b7c62358c2a\\\",\\\"systemUUID\\\":\\\"99c18e26-a017-426b-bf43-1a9a83f35f94\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.147952 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.148003 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.148022 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.148042 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.148058 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:47Z","lastTransitionTime":"2025-11-28T06:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:52:47 crc kubenswrapper[4946]: E1128 06:52:47.170331 4946 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{...status patch payload identical to the previous attempt, omitted...}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:47Z is after 2025-08-24T17:21:41Z"
Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.179104 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.179141 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.179155 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.179172 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.179185 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:47Z","lastTransitionTime":"2025-11-28T06:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:52:47 crc kubenswrapper[4946]: E1128 06:52:47.197290 4946 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{...status patch payload identical to the previous attempt, omitted...}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:47Z is after 2025-08-24T17:21:41Z"
Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.211955 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.212014 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.212029 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.212053 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.212067 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:47Z","lastTransitionTime":"2025-11-28T06:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:52:47 crc kubenswrapper[4946]: E1128 06:52:47.225340 4946 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{...status patch payload identical to the previous attempt, omitted...}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:47Z is after 2025-08-24T17:21:41Z"
Nov 28 06:52:47 crc kubenswrapper[4946]: E1128 06:52:47.225522 4946 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
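The run above is the kubelet's bounded retry loop: each PATCH of the node status is rejected by the node.network-node-identity.openshift.io webhook with the same expired-certificate error, and after a small fixed number of attempts the kubelet gives up with "update node status exceeds retry count". A minimal Go sketch of that pattern (illustrative only; the helper name and the retry budget here are assumptions, not the actual code behind the kubelet_node_status.go lines in this log):

```go
// Illustrative sketch of the bounded retry visible above; tryUpdateNodeStatus
// and the budget constant are stand-ins, not the kubelet's real source.
package main

import (
	"errors"
	"fmt"
	"log"
)

const nodeStatusUpdateRetry = 5 // assumed small fixed budget, as the log implies

// tryUpdateNodeStatus stands in for the PATCH against the API server; here it
// always fails the way the log shows: the admission webhook's cert is expired.
func tryUpdateNodeStatus(attempt int) error {
	return errors.New(`failed calling webhook "node.network-node-identity.openshift.io": ` +
		`x509: certificate has expired or is not yet valid`)
}

func updateNodeStatus() error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := tryUpdateNodeStatus(i); err != nil {
			log.Printf("Error updating node status, will retry: %v", err)
			continue
		}
		return nil
	}
	// The terminal condition logged above once the budget is spent.
	return fmt.Errorf("update node status exceeds retry count")
}

func main() {
	if err := updateNodeStatus(); err != nil {
		log.Printf("Unable to update node status: %v", err)
	}
}
```

Note there is no backoff between attempts, which matches all of the retries here landing within the same second (06:52:47).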
Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.227334 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.227362 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.227371 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.227390 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.227403 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:47Z","lastTransitionTime":"2025-11-28T06:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
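The Ready condition above flips to False purely because the container runtime reports NetworkReady=false until a CNI configuration file appears in /etc/kubernetes/cni/net.d/, which ovnkube-node, seen starting in the events below, is expected to provide. A minimal sketch of that readiness test, assuming a hypothetical hasCNIConfig helper rather than the actual CRI-O/libcni code:

```go
// Minimal sketch, not CRI-O or libcni source: the runtime reports
// NetworkReady=false until at least one CNI config file exists in the
// directory it watches. hasCNIConfig is a hypothetical helper.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func hasCNIConfig(dir string) bool {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false // unreadable or missing dir counts as "no config yet"
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // the extensions libcni loads
			return true
		}
	}
	return false
}

func main() {
	if !hasCNIConfig("/etc/kubernetes/cni/net.d") {
		// Mirrors the condition message in the log entry above.
		fmt.Println("container runtime network not ready: NetworkReady=false (no CNI configuration file)")
		return
	}
	fmt.Println("NetworkReady=true")
}
```

Once the network provider writes its config file there, the condition should clear on the kubelet's next sync.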
Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.239824 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-j4wp8" event={"ID":"edc5a786-a162-4416-a240-54272b0c7376","Type":"ContainerStarted","Data":"b7a6e397c1f3423abc4eeb0a20453109c22f305c1b259dddbe23bd71a78b5512"} Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.240770 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pqf5z" event={"ID":"45d160dd-b1b4-4cdf-800c-74195ab023e1","Type":"ContainerStarted","Data":"e02a3735b575639d2d0db3f25bdb4a5e327f3476da0602aea776ff5807debe4e"} Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.243677 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-6tvtt" event={"ID":"a10fe13c-fc86-474a-945b-f96caafad2a6","Type":"ContainerStarted","Data":"b6bc4ffd43eb599a1b056003745df190243ed5b7b4101cb99260ee5fae787a0e"} Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.245624 4946 generic.go:334] "Generic (PLEG): container finished" podID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" containerID="5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830" exitCode=0 Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.245686 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" event={"ID":"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51","Type":"ContainerDied","Data":"5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830"} Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.245709 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" event={"ID":"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51","Type":"ContainerStarted","Data":"61e3f55bcbc151c49e2b4d5d26793757faebef7293f442e7cc50af03aab8bb11"} Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.247473 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerStarted","Data":"174d7788364a980297791c1072b9c05dd827bb3759a851ea4c5dd07981297b67"} Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.247522 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerStarted","Data":"b29ce38df997a467458956c01955e8b964643c042b41e7d001de9a2721373744"} Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.248676 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9g9w4" event={"ID":"857356d2-6585-41c6-9a2c-e06ef45f7303","Type":"ContainerStarted","Data":"db9d433d72f090d2c1d0036b7ba652dfca7bed7425a35bf87809ede7fd9fca8a"} Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.248709 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9g9w4" event={"ID":"857356d2-6585-41c6-9a2c-e06ef45f7303","Type":"ContainerStarted","Data":"7410b61b02d49e34038758cb5cba2ae32163847d3216c14f041a11c7b987a7f7"} Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.251804 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"078e67af49c0d442d125d9cb67d9d93ce05e452bf3b09f666e9763e481eaa15e"}
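Every failed patch in this stretch, node status above and the pod statuses that follow, dies on the same TLS handshake: the certificate served by the network-node-identity webhook at 127.0.0.1:9743 has a NotAfter of 2025-08-24T17:21:41Z, months behind the node's clock. A small diagnostic sketch (assumes it is run on the node itself with the endpoint up) that reproduces the validity check Go's crypto/x509 applies:

```go
// Diagnostic sketch, assuming it runs on the node with the webhook listening:
// read the serving certificate from 127.0.0.1:9743 and compare its validity
// window to the clock, the same check that yields the x509 error in this log.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // we only inspect the cert, we do not trust it
	})
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()

	certs := conn.ConnectionState().PeerCertificates
	if len(certs) == 0 {
		fmt.Println("no peer certificate presented")
		return
	}
	cert, now := certs[0], time.Now()
	fmt.Printf("NotBefore=%s NotAfter=%s now=%s\n", cert.NotBefore, cert.NotAfter, now)
	switch {
	case now.After(cert.NotAfter):
		// The case in this log: current time is after 2025-08-24T17:21:41Z.
		fmt.Println("certificate has expired")
	case now.Before(cert.NotBefore):
		fmt.Println("certificate is not yet valid")
	default:
		fmt.Println("certificate is within its validity window")
	}
}
```

Until that certificate is renewed, every admission call through this webhook will keep failing the same way, regardless of which object the kubelet tries to patch.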
Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.254423 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68d47c52-9827-4ebb-88a2-ba498092709e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eef1ad3ca814effd7a94c23d46fc3b14149ccc6ae6181e19c5f24884967b7627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7152b6653aaccd36bb018d24abf196dd6f13d41edddb0939917f96bcd9e9bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5be506bd8debbc12ff5ca56d77fdf667855d2c4078cea1b1d1388f5ef8831ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5dc7ef4ea06b54b6de621ebe79d3587324b0c4cdd90e078d4116dc5252ad35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4c7e2754f62b9d955da7589f04acb0136ed3c504a75b0db30f22cbeec54c413\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:52:44Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:52:38.666044 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:52:38.667906 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018003996/tls.crt::/tmp/serving-cert-1018003996/tls.key\\\\\\\"\\\\nI1128 06:52:44.041872 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:52:44.048251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:52:44.048273 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:52:44.048300 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:52:44.048305 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:52:44.064176 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 06:52:44.064208 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064213 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:52:44.064220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:52:44.064223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:52:44.064226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 06:52:44.064671 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 06:52:44.066682 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326666c58a9c75d0f08586c82bca1a0686ed2935f475b3c359cddc01875b248e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.268707 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.280874 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j4wp8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"edc5a786-a162-4416-a240-54272b0c7376\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a6e397c1f3423abc4eeb0a20453109c22f305c1b259dddbe23bd71a78b5512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87x27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j4wp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.294399 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16482226c2bd22abff4c445ab0d5e46c2012a83ab5e616ebecc38d9009a8a6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.308704 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9g9w4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"857356d2-6585-41c6-9a2c-e06ef45f7303\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9g9w4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.330982 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pkknv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.332293 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.332313 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.332324 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.332341 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.332354 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:47Z","lastTransitionTime":"2025-11-28T06:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.351779 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.373857 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0514bebed46aabc46c74ef48fda9c1e561a174d87db40128875cc0ec78446d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ab7252b4526779db2e0aea8b7747369f52242a76df7e179c6617290fee4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.394199 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.442678 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.443214 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.443225 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.443241 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.443257 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:47Z","lastTransitionTime":"2025-11-28T06:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.448524 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqf5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d160dd-b1b4-4cdf-800c-74195ab023e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\
\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqf5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.464625 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fb7be34-ed01-4217-aab4-c8ab35341ac6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af6de2bd2f3d9be0a670cb8c36e6db8c037c4987eff7356840af5b275156ad7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d384f5fd04cfe61e6a8d13538eb1b0267ca88eb0d226d8d85b5065fb5f7de9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89780aac4572411aeeb8480b61c98dbc22a2e4b3e72290470ba56c81b648b617\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b6df8018ceaf95465c4ad1bfd0bb2653ae4b87430f6ed4e21af2bd14bbb9ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.509742 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.522884 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a10fe13c-fc86-474a-945b-f96caafad2a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45pfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.538967 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7450befc-262f-45d1-a5f4-f445e540185b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g2vhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.546694 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.546724 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.546734 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.546750 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.546761 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:47Z","lastTransitionTime":"2025-11-28T06:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.554557 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqf5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d160dd-b1b4-4cdf-800c-74195ab023e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\
\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqf5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.566417 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.579255 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0514bebed46aabc46c74ef48fda9c1e561a174d87db40128875cc0ec78446d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ab7252b4526779db2e0aea8b7747369f52242a76df7e179c6617290fee4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.590389 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://078e67af49c0d442d125d9cb67d9d93ce05e452bf3b09f666e9763e481eaa15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.600996 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.613355 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a10fe13c-fc86-474a-945b-f96caafad2a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45pfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.623354 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.623480 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:52:47 crc kubenswrapper[4946]: E1128 06:52:47.623533 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:52:51.623509671 +0000 UTC m=+26.001574782 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.623568 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:52:47 crc kubenswrapper[4946]: E1128 06:52:47.623597 4946 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.623624 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:52:47 crc kubenswrapper[4946]: E1128 06:52:47.623663 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 06:52:51.623642154 +0000 UTC m=+26.001707335 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.623686 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:52:47 crc kubenswrapper[4946]: E1128 06:52:47.623696 4946 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 06:52:47 crc kubenswrapper[4946]: E1128 06:52:47.623731 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 06:52:51.623723726 +0000 UTC m=+26.001788837 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 06:52:47 crc kubenswrapper[4946]: E1128 06:52:47.623760 4946 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 06:52:47 crc kubenswrapper[4946]: E1128 06:52:47.623774 4946 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 06:52:47 crc kubenswrapper[4946]: E1128 06:52:47.623785 4946 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:52:47 crc kubenswrapper[4946]: E1128 06:52:47.623807 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 06:52:51.623800948 +0000 UTC m=+26.001866059 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:52:47 crc kubenswrapper[4946]: E1128 06:52:47.623809 4946 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 06:52:47 crc kubenswrapper[4946]: E1128 06:52:47.623822 4946 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 06:52:47 crc kubenswrapper[4946]: E1128 06:52:47.623834 4946 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:52:47 crc kubenswrapper[4946]: E1128 06:52:47.623856 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 06:52:51.623849559 +0000 UTC m=+26.001914670 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.626692 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7450befc-262f-45d1-a5f4-f445e540185b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-g2vhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.640453 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fb7be34-ed01-4217-aab4-c8ab35341ac6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af6de2bd2f3d9be0a670cb8c36e6db8c037c4987eff7356840af5b275156ad7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d384f5fd04cfe61e6a8d13538eb1b0267ca88eb0d226d8d85b5065fb5f7de9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89780aac4572411aeeb8480b61c98dbc22a2e4b3e72290470ba56c81b648b617\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b6df8018ceaf95465c4ad1bfd0bb2653ae4b87430f6ed4e21af2bd14bbb9ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.648719 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.648748 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.648758 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.648772 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.648782 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:47Z","lastTransitionTime":"2025-11-28T06:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.654540 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j4wp8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edc5a786-a162-4416-a240-54272b0c7376\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a6e397c1f3423abc4eeb0a20453109c22f305c1b259dddbe23bd71a78b5512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87x27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j4wp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.670627 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68d47c52-9827-4ebb-88a2-ba498092709e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eef1ad3ca814effd7a94c23d46fc3b14149ccc6ae6181e19c5f24884967b7627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7152b6653aaccd36bb018d24abf196dd6f13d41edddb0939917f96bcd9e9bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5be506bd8debbc12ff5ca56d77fdf667855d2c4078cea1b1d1388f5ef8831ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5dc7ef4ea06b54b6de621ebe79d3587324b0c4cdd90e078d4116dc5252ad35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://b4c7e2754f62b9d955da7589f04acb0136ed3c504a75b0db30f22cbeec54c413\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:52:44Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:52:38.666044 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:52:38.667906 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018003996/tls.crt::/tmp/serving-cert-1018003996/tls.key\\\\\\\"\\\\nI1128 06:52:44.041872 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:52:44.048251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:52:44.048273 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:52:44.048300 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:52:44.048305 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:52:44.064176 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 06:52:44.064208 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064213 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:52:44.064220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:52:44.064223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:52:44.064226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 06:52:44.064671 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 06:52:44.066682 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326666c58a9c75d0f08586c82bca1a0686ed2935f475b3c359cddc01875b248e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.683990 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.698405 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16482226c2bd22abff4c445ab0d5e46c2012a83ab5e616ebecc38d9009a8a6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.713723 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9g9w4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"857356d2-6585-41c6-9a2c-e06ef45f7303\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9d433d72f090d2c1d0036b7ba652dfca7bed7425a35bf87809ede7fd9fca8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9g9w4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.733036 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pkknv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.751237 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.751277 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.751289 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.751306 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.751318 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:47Z","lastTransitionTime":"2025-11-28T06:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.855328 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.855805 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.855817 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.855832 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.855841 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:47Z","lastTransitionTime":"2025-11-28T06:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.958670 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.958719 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.958732 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.958758 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.958774 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:47Z","lastTransitionTime":"2025-11-28T06:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.989853 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.989934 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:52:47 crc kubenswrapper[4946]: I1128 06:52:47.989871 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:52:47 crc kubenswrapper[4946]: E1128 06:52:47.990129 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:52:47 crc kubenswrapper[4946]: E1128 06:52:47.990295 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:52:47 crc kubenswrapper[4946]: E1128 06:52:47.990514 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.062231 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.062304 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.062324 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.062354 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.062380 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:48Z","lastTransitionTime":"2025-11-28T06:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.166819 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.166875 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.166886 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.166908 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.166921 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:48Z","lastTransitionTime":"2025-11-28T06:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.260707 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" event={"ID":"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51","Type":"ContainerStarted","Data":"35308a0d1d457d423060c71fb70b7e8d1723171f575b2545db3b372c4d2a12ca"} Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.260761 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" event={"ID":"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51","Type":"ContainerStarted","Data":"a1ba38f4c30e4f09063926783a9fbf7883079d7f077200ee18a7a24c236b025f"} Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.260772 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" event={"ID":"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51","Type":"ContainerStarted","Data":"7ed2e660d6d37ec0abfdd8485a2b9cb79b041a214007a5b8832d70a21076c324"} Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.266862 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerStarted","Data":"7854e764531b926939e7ce77e671b5dae4c0d8a4a1d8ac7153f851e924c4bb84"} Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.268692 4946 generic.go:334] "Generic (PLEG): container finished" podID="45d160dd-b1b4-4cdf-800c-74195ab023e1" containerID="50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449" exitCode=0 Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.268729 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pqf5z" event={"ID":"45d160dd-b1b4-4cdf-800c-74195ab023e1","Type":"ContainerDied","Data":"50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449"} Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.268712 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.268820 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.268838 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.268858 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.268870 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:48Z","lastTransitionTime":"2025-11-28T06:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.270386 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-6tvtt" event={"ID":"a10fe13c-fc86-474a-945b-f96caafad2a6","Type":"ContainerStarted","Data":"712d9a3cda3acae09eae5946cb14e19bee8697d567bff79dd67db8a41ed38fe5"} Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.283716 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16482226c2bd22abff4c445ab0d5e46c2012a83ab5e616ebecc38d9009a8a6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:48Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.300014 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9g9w4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"857356d2-6585-41c6-9a2c-e06ef45f7303\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9d433d72f090d2c1d0036b7ba652dfca7bed7425a35bf87809ede7fd9fca8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9g9w4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:48Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.322837 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pkknv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:48Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.338702 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:48Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.352932 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0514bebed46aabc46c74ef48fda9c1e561a174d87db40128875cc0ec78446d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ab7252b4526779db2e0aea8b7747369f52242a76df7e179c6617290fee4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:48Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.365339 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://078e67af49c0d442d125d9cb67d9d93ce05e452bf3b09f666e9763e481eaa15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:48Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.371656 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.371690 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.371701 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.371719 4946 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.371729 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:48Z","lastTransitionTime":"2025-11-28T06:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.381888 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqf5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d160dd-b1b4-4cdf-800c-74195ab023e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqf5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:48Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.394719 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fb7be34-ed01-4217-aab4-c8ab35341ac6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af6de2bd2f3d9be0a670cb8c36e6db8c037c4987eff7356840af5b275156ad7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d384f5fd04cfe61e6a8d13538eb1b0267ca88eb0d226d8d85b5065fb5f7de9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89780aac4572411aeeb8480b61c98dbc22a2e4b3e72290470ba56c81b648b617\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b6df8018ceaf95465c4ad1bfd0bb2653ae4b87430f6ed4e21af2bd14bbb9ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:48Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.407894 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:48Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.419483 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a10fe13c-fc86-474a-945b-f96caafad2a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45pfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:48Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.431310 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7450befc-262f-45d1-a5f4-f445e540185b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7854e764531b926939e7ce77e671b5dae4c0d8a4a1d8ac7153f851e924c4bb84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174d7788364a980297791c1072b9c05dd827bb3759a851ea4c5dd07981297b67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g2vhr\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:48Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.446626 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68d47c52-9827-4ebb-88a2-ba498092709e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eef1ad3ca814effd7a94c23d46fc3b14149ccc6ae6181e19c5f24884967b7627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7152b6653aaccd36bb018d24abf196dd6f13d41edddb0939917f96bcd9e9bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5be506bd8debbc12ff5ca56d77fdf667855d2c4078cea1b1d1388f5ef8831ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1
ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5dc7ef4ea06b54b6de621ebe79d3587324b0c4cdd90e078d4116dc5252ad35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4c7e2754f62b9d955da7589f04acb0136ed3c504a75b0db30f22cbeec54c413\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:52:44Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:52:38.666044 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:52:38.667906 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018003996/tls.crt::/tmp/serving-cert-1018003996/tls.key\\\\\\\"\\\\nI1128 06:52:44.041872 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:52:44.048251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:52:44.048273 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:52:44.048300 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:52:44.048305 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:52:44.064176 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 06:52:44.064208 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064213 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:52:44.064220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:52:44.064223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:52:44.064226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 06:52:44.064671 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 06:52:44.066682 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326666c58a9c75d0f08586c82bca1a0686ed2935f475b3c359cddc01875b248e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:48Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.461945 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:48Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.475111 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.475234 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.475291 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.475351 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.475420 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:48Z","lastTransitionTime":"2025-11-28T06:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.477124 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j4wp8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edc5a786-a162-4416-a240-54272b0c7376\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a6e397c1f3423abc4eeb0a20453109c22f305c1b259dddbe23bd71a78b5512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87x27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j4wp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:48Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.492518 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:48Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.509534 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0514bebed46aabc46c74ef48fda9c1e561a174d87db40128875cc0ec78446d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ab7252b4526779db2e0aea8b7747369f52242a76df7e179c6617290fee4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:48Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.524583 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://078e67af49c0d442d125d9cb67d9d93ce05e452bf3b09f666e9763e481eaa15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:48Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.539416 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqf5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d160dd-b1b4-4cdf-800c-74195ab023e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqf5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:48Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:48 crc 
kubenswrapper[4946]: I1128 06:52:48.560714 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7450befc-262f-45d1-a5f4-f445e540185b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7854e764531b926939e7ce77e671b5dae4c0d8a4a1d8ac7153f851e924c4bb84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174d7788364a980297791c1072b9c05dd827bb3759a851ea4c5dd07981297b67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g2vhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-28T06:52:48Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.576570 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fb7be34-ed01-4217-aab4-c8ab35341ac6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af6de2bd2f3d9be0a670cb8c36e6db8c037c4987eff7356840af5b275156ad7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d384f5fd04cfe61e6a8d13538eb1b0267ca88eb0d226d8d85b5065fb5f7de9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89780aac4572411aeeb8480b61c98dbc22a2e4b3e72290470ba56c81b648b617\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-re
sources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b6df8018ceaf95465c4ad1bfd0bb2653ae4b87430f6ed4e21af2bd14bbb9ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:48Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.578990 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.579027 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.579035 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.579049 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.579059 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:48Z","lastTransitionTime":"2025-11-28T06:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.596543 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:48Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.611700 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tvtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a10fe13c-fc86-474a-945b-f96caafad2a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712d9a3cda3acae09eae5946cb14e19bee8697d567bff79dd67db8a41ed38fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45pfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:48Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.627972 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68d47c52-9827-4ebb-88a2-ba498092709e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eef1ad3ca814effd7a94c23d46fc3b14149ccc6ae6181e19c5f24884967b7627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7152b6653aaccd36bb018d24abf196dd6f13d41edddb0939917f96bcd9e9bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5be506bd8debbc12ff5ca56d77fdf667855d2c4078cea1b1d1388f5ef8831ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5dc7ef4ea06b54b6de621ebe79d3587324b0c4cdd90e078d4116dc5252ad35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4c7e2754f62b9d955da7589f04acb0136ed3c504a75b0db30f22cbeec54c413\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:52:44Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:52:38.666044 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:52:38.667906 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018003996/tls.crt::/tmp/serving-cert-1018003996/tls.key\\\\\\\"\\\\nI1128 06:52:44.041872 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:52:44.048251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:52:44.048273 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:52:44.048300 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:52:44.048305 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:52:44.064176 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 06:52:44.064208 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064213 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:52:44.064220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:52:44.064223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:52:44.064226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 06:52:44.064671 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 06:52:44.066682 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326666c58a9c75d0f08586c82bca1a0686ed2935f475b3c359cddc01875b248e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:48Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.642574 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:48Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.653408 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j4wp8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"edc5a786-a162-4416-a240-54272b0c7376\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a6e397c1f3423abc4eeb0a20453109c22f305c1b259dddbe23bd71a78b5512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87x27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j4wp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:48Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.670766 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16482226c2bd22abff4c445ab0d5e46c2012a83ab5e616ebecc38d9009a8a6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:48Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.681374 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.681416 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.681429 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.681450 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.681479 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:48Z","lastTransitionTime":"2025-11-28T06:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.683624 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9g9w4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"857356d2-6585-41c6-9a2c-e06ef45f7303\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9d433d72f090d2c1d0036b7ba652dfca7bed7425a35bf87809ede7fd9fca8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9g9w4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:48Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.727010 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pkknv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:48Z 
is after 2025-08-24T17:21:41Z" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.785205 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.785258 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.785270 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.785294 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.785309 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:48Z","lastTransitionTime":"2025-11-28T06:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.888315 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.888399 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.888420 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.888445 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.888490 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:48Z","lastTransitionTime":"2025-11-28T06:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.991878 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.991916 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.991927 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.991942 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:48 crc kubenswrapper[4946]: I1128 06:52:48.991952 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:48Z","lastTransitionTime":"2025-11-28T06:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.094934 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.094982 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.094994 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.095013 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.095025 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:49Z","lastTransitionTime":"2025-11-28T06:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.197894 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.197940 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.197951 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.197971 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.197984 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:49Z","lastTransitionTime":"2025-11-28T06:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.279118 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" event={"ID":"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51","Type":"ContainerStarted","Data":"38e04dce69273a5669801d093325e51ed6d3a83d545aaf505f6619b89cb9e3d7"} Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.279204 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" event={"ID":"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51","Type":"ContainerStarted","Data":"f9128b2484a66b1b0f5954befe68d1bd1be5a4edbf254d76473d8ee3025b47ce"} Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.279236 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" event={"ID":"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51","Type":"ContainerStarted","Data":"d8b00edadf6d3442d70a476d465733f303f7a48d8998c1a9c881ae515daa264e"} Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.281384 4946 generic.go:334] "Generic (PLEG): container finished" podID="45d160dd-b1b4-4cdf-800c-74195ab023e1" containerID="c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a" exitCode=0 Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.281624 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pqf5z" event={"ID":"45d160dd-b1b4-4cdf-800c-74195ab023e1","Type":"ContainerDied","Data":"c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a"} Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.296708 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7450befc-262f-45d1-a5f4-f445e540185b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7854e764531b926939e7ce77e671b5dae4c0d8a4a1d8ac7153f851e924c4bb84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174d7788364a980297791c1072b9c05dd827bb3759a851ea4c5dd07981297b67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g2vhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:49Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.301809 4946 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.301876 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.301899 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.301929 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.301951 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:49Z","lastTransitionTime":"2025-11-28T06:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.314322 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fb7be34-ed01-4217-aab4-c8ab35341ac6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af6de2bd2f3d9be0a670cb8c36e6db8c037c4987eff7356840af5b275156ad7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d384f5fd04cfe61e6a8d13538eb1b0267ca88eb0d226d8d85b5065fb5f7de9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89780aac4572411aeeb8480b61c98dbc22a2e4b3e72290470ba56c81b648b617\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b6df8018ceaf95465c4ad1bfd0bb2653ae4b87430f6ed4e21af2bd14bbb9ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:49Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.338924 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:49Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.353193 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tvtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a10fe13c-fc86-474a-945b-f96caafad2a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712d9a3cda3acae09eae5946cb14e19bee8697d567bff79dd67db8a41ed38fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45pfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:49Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.372287 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68d47c52-9827-4ebb-88a2-ba498092709e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eef1ad3ca814effd7a94c23d46fc3b14149ccc6ae6181e19c5f24884967b7627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7152b6653aaccd36bb018d24abf196dd6f13d41edddb0939917f96bcd9e9bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5be506bd8debbc12ff5ca56d77fdf667855d2c4078cea1b1d1388f5ef8831ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5dc7ef4ea06b54b6de621ebe79d3587324b0c4cdd90e078d4116dc5252ad35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4c7e2754f62b9d955da7589f04acb0136ed3c504a75b0db30f22cbeec54c413\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:52:44Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:52:38.666044 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:52:38.667906 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018003996/tls.crt::/tmp/serving-cert-1018003996/tls.key\\\\\\\"\\\\nI1128 06:52:44.041872 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:52:44.048251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:52:44.048273 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:52:44.048300 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:52:44.048305 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:52:44.064176 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 06:52:44.064208 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064213 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:52:44.064220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:52:44.064223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:52:44.064226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 06:52:44.064671 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 06:52:44.066682 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326666c58a9c75d0f08586c82bca1a0686ed2935f475b3c359cddc01875b248e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:49Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.387506 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:49Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.400659 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j4wp8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"edc5a786-a162-4416-a240-54272b0c7376\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a6e397c1f3423abc4eeb0a20453109c22f305c1b259dddbe23bd71a78b5512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87x27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j4wp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:49Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.405175 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.405213 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.405224 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.405243 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.405253 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:49Z","lastTransitionTime":"2025-11-28T06:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.414935 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16482226c2bd22abff4c445ab0d5e46c2012a83ab5e616ebecc38d9009a8a6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:49Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.429254 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9g9w4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"857356d2-6585-41c6-9a2c-e06ef45f7303\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9d433d72f090d2c1d0036b7ba652dfca7bed7425a35bf87809ede7fd9fca8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9g9w4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:49Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.448494 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pkknv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:49Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.464554 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:49Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.479198 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0514bebed46aabc46c74ef48fda9c1e561a174d87db40128875cc0ec78446d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ab7252b4526779db2e0aea8b7747369f52242a76df7e179c6617290fee4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:49Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.496589 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://078e67af49c0d442d125d9cb67d9d93ce05e452bf3b09f666e9763e481eaa15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:49Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.509376 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.509739 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.509827 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.509923 4946 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.510000 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:49Z","lastTransitionTime":"2025-11-28T06:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.514371 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqf5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d160dd-b1b4-4cdf-800c-74195ab023e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-
28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqf5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:49Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.613940 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.613998 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.614017 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.614042 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.614058 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:49Z","lastTransitionTime":"2025-11-28T06:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.717258 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.717305 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.717316 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.717336 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.717348 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:49Z","lastTransitionTime":"2025-11-28T06:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.820895 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.820950 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.820961 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.820981 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.820994 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:49Z","lastTransitionTime":"2025-11-28T06:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.923908 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.923957 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.923967 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.923987 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.924000 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:49Z","lastTransitionTime":"2025-11-28T06:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.989716 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.989755 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:52:49 crc kubenswrapper[4946]: I1128 06:52:49.989883 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:52:49 crc kubenswrapper[4946]: E1128 06:52:49.990506 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:52:49 crc kubenswrapper[4946]: E1128 06:52:49.990716 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:52:49 crc kubenswrapper[4946]: E1128 06:52:49.990799 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.026695 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.026742 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.026754 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.026771 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.026786 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:50Z","lastTransitionTime":"2025-11-28T06:52:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.130361 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.130733 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.130857 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.130987 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.131108 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:50Z","lastTransitionTime":"2025-11-28T06:52:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.233557 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.233833 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.233973 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.234132 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.234255 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:50Z","lastTransitionTime":"2025-11-28T06:52:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.289694 4946 generic.go:334] "Generic (PLEG): container finished" podID="45d160dd-b1b4-4cdf-800c-74195ab023e1" containerID="a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9" exitCode=0 Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.289779 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pqf5z" event={"ID":"45d160dd-b1b4-4cdf-800c-74195ab023e1","Type":"ContainerDied","Data":"a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9"} Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.311748 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7450befc-262f-45d1-a5f4-f445e540185b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7854e764531b926939e7ce77e671b5dae4c0d8a4a1d8ac7153f851e924c4bb84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174d7788364a980297791c1072b9c05dd827bb3759a851ea4c5dd07981297b67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g2vhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:50Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.328415 4946 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fb7be34-ed01-4217-aab4-c8ab35341ac6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af6de2bd2f3d9be0a670cb8c36e6db8c037c4987eff7356840af5b275156ad7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d384f5fd04cfe61e6a8d13538eb1b0267ca88eb0d226d8d85b5065fb5f7de9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89780aac4572411aeeb8480b61c98dbc22a2e4b3e72290470ba56c81b648b617\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b6df8018ceaf95465c4ad1bfd
0bb2653ae4b87430f6ed4e21af2bd14bbb9ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:50Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.337639 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.337794 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.337821 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.337853 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.337877 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:50Z","lastTransitionTime":"2025-11-28T06:52:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.347051 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:50Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.361178 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tvtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a10fe13c-fc86-474a-945b-f96caafad2a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712d9a3cda3acae09eae5946cb14e19bee8697d567bff79dd67db8a41ed38fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45pfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:50Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.385149 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68d47c52-9827-4ebb-88a2-ba498092709e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eef1ad3ca814effd7a94c23d46fc3b14149ccc6ae6181e19c5f24884967b7627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7152b6653aaccd36bb018d24abf196dd6f13d41edddb0939917f96bcd9e9bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5be506bd8debbc12ff5ca56d77fdf667855d2c4078cea1b1d1388f5ef8831ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5dc7ef4ea06b54b6de621ebe79d3587324b0c4cdd90e078d4116dc5252ad35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4c7e2754f62b9d955da7589f04acb0136ed3c504a75b0db30f22cbeec54c413\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:52:44Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:52:38.666044 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:52:38.667906 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018003996/tls.crt::/tmp/serving-cert-1018003996/tls.key\\\\\\\"\\\\nI1128 06:52:44.041872 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:52:44.048251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:52:44.048273 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:52:44.048300 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:52:44.048305 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:52:44.064176 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 06:52:44.064208 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064213 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:52:44.064220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:52:44.064223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:52:44.064226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 06:52:44.064671 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 06:52:44.066682 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326666c58a9c75d0f08586c82bca1a0686ed2935f475b3c359cddc01875b248e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:50Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.403500 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:50Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.419946 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j4wp8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"edc5a786-a162-4416-a240-54272b0c7376\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a6e397c1f3423abc4eeb0a20453109c22f305c1b259dddbe23bd71a78b5512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87x27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j4wp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:50Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.438800 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16482226c2bd22abff4c445ab0d5e46c2012a83ab5e616ebecc38d9009a8a6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:50Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.440845 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.440881 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.440899 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.440922 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.440939 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:50Z","lastTransitionTime":"2025-11-28T06:52:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.457217 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9g9w4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"857356d2-6585-41c6-9a2c-e06ef45f7303\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9d433d72f090d2c1d0036b7ba652dfca7bed7425a35bf87809ede7fd9fca8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9g9w4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:50Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.492168 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pkknv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:50Z 
is after 2025-08-24T17:21:41Z" Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.512242 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:50Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.529033 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0514bebed46aabc46c74ef48fda9c1e561a174d87db40128875cc0ec78446d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ab7252b4526779db2e0aea8b7747369f52242a76df7e179c6617290fee4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:50Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.545623 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.545653 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.545662 4946 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.545677 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.545687 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:50Z","lastTransitionTime":"2025-11-28T06:52:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.547209 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://078e67af49c0d442d125d9cb67d9d93ce05e452bf3b09f666e9763e481eaa15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:50Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.565634 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqf5z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d160dd-b1b4-4cdf-800c-74195ab023e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqf5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:50Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.648009 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.648049 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.648060 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.648078 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.648090 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:50Z","lastTransitionTime":"2025-11-28T06:52:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.751131 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.751164 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.751175 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.751193 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.751204 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:50Z","lastTransitionTime":"2025-11-28T06:52:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.853924 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.853999 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.854017 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.854046 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.854064 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:50Z","lastTransitionTime":"2025-11-28T06:52:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.958119 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.958207 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.958229 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.958262 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:50 crc kubenswrapper[4946]: I1128 06:52:50.958283 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:50Z","lastTransitionTime":"2025-11-28T06:52:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.061970 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.062055 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.062081 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.062117 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.062143 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:51Z","lastTransitionTime":"2025-11-28T06:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.165602 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.165694 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.165717 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.165752 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.165774 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:51Z","lastTransitionTime":"2025-11-28T06:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.268678 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.268747 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.268762 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.268787 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.268801 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:51Z","lastTransitionTime":"2025-11-28T06:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.300268 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" event={"ID":"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51","Type":"ContainerStarted","Data":"bc681b78e3640f7f8dd8729a32ea393d97f837795e0cccf628724d2b3901739b"} Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.305793 4946 generic.go:334] "Generic (PLEG): container finished" podID="45d160dd-b1b4-4cdf-800c-74195ab023e1" containerID="222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2" exitCode=0 Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.305840 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pqf5z" event={"ID":"45d160dd-b1b4-4cdf-800c-74195ab023e1","Type":"ContainerDied","Data":"222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2"} Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.324825 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7450befc-262f-45d1-a5f4-f445e540185b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7854e764531b926939e7ce77e671b5dae4c0d8a4a1d8ac7153f851e924c4bb84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174d7788364a980297791c1072b9c05dd827bb3759a851ea4c5dd07981297b67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g2vhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:51Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.347326 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fb7be34-ed01-4217-aab4-c8ab35341ac6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af6de2bd2f3d9be0a670cb8c36e6db8c037c4987eff7356840af5b275156ad7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d384f5fd04cfe61e6a8d13538eb1b0267ca88eb0d226d8d85b5065fb5f7de9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T0
6:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89780aac4572411aeeb8480b61c98dbc22a2e4b3e72290470ba56c81b648b617\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b6df8018ceaf95465c4ad1bfd0bb2653ae4b87430f6ed4e21af2bd14bbb9ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:51Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.365263 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:51Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.375165 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.375198 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.375209 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.375226 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.375240 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:51Z","lastTransitionTime":"2025-11-28T06:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.380272 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a10fe13c-fc86-474a-945b-f96caafad2a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712d9a3cda3acae09eae5946cb14e19bee8697d567bff79dd67db8a41ed38fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45pfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:51Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.395951 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68d47c52-9827-4ebb-88a2-ba498092709e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eef1ad3ca814effd7a94c23d46fc3b14149ccc6ae6181e19c5f24884967b7627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7152b6653aaccd36bb018d24abf196dd6f13d41edddb0939917f96bcd9e9bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5be506bd8debbc12ff5ca56d77fdf667855d2c4078cea1b1d1388f5ef8831ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5dc7ef4ea06b54b6de621ebe79d3587324b0c4cdd90e078d4116dc5252ad35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://b4c7e2754f62b9d955da7589f04acb0136ed3c504a75b0db30f22cbeec54c413\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:52:44Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:52:38.666044 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:52:38.667906 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018003996/tls.crt::/tmp/serving-cert-1018003996/tls.key\\\\\\\"\\\\nI1128 06:52:44.041872 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:52:44.048251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:52:44.048273 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:52:44.048300 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:52:44.048305 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:52:44.064176 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 06:52:44.064208 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064213 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:52:44.064220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:52:44.064223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:52:44.064226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 06:52:44.064671 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 06:52:44.066682 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326666c58a9c75d0f08586c82bca1a0686ed2935f475b3c359cddc01875b248e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:51Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.409147 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:51Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.422492 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j4wp8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"edc5a786-a162-4416-a240-54272b0c7376\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a6e397c1f3423abc4eeb0a20453109c22f305c1b259dddbe23bd71a78b5512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87x27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j4wp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:51Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.440276 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16482226c2bd22abff4c445ab0d5e46c2012a83ab5e616ebecc38d9009a8a6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:51Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.456403 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9g9w4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"857356d2-6585-41c6-9a2c-e06ef45f7303\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9d433d72f090d2c1d0036b7ba652dfca7bed7425a35bf87809ede7fd9fca8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9g9w4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:51Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.477113 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.477179 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.477194 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.477218 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.477235 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:51Z","lastTransitionTime":"2025-11-28T06:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.482523 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pkknv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:51Z 
is after 2025-08-24T17:21:41Z" Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.497550 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:51Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.549990 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0514bebed46aabc46c74ef48fda9c1e561a174d87db40128875cc0ec78446d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ab7252b4526779db2e0aea8b7747369f52242a76df7e179c6617290fee4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:51Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.565080 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://078e67af49c0d442d125d9cb67d9d93ce05e452bf3b09f666e9763e481eaa15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:51Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.580805 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.580841 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.580850 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.580865 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.580875 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:51Z","lastTransitionTime":"2025-11-28T06:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.584808 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqf5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d160dd-b1b4-4cdf-800c-74195ab023e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:
52:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqf5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:51Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.670861 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.671002 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.671048 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:52:51 crc kubenswrapper[4946]: E1128 06:52:51.671091 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:52:59.671061202 +0000 UTC m=+34.049126313 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:52:51 crc kubenswrapper[4946]: E1128 06:52:51.671129 4946 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.671151 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:52:51 crc kubenswrapper[4946]: E1128 06:52:51.671199 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 06:52:59.671170035 +0000 UTC m=+34.049235146 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 06:52:51 crc kubenswrapper[4946]: E1128 06:52:51.671206 4946 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 06:52:51 crc kubenswrapper[4946]: E1128 06:52:51.671229 4946 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.671242 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:52:51 crc kubenswrapper[4946]: E1128 06:52:51.671242 4946 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:52:51 crc kubenswrapper[4946]: E1128 06:52:51.671328 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 06:52:59.671321939 +0000 UTC m=+34.049387050 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:52:51 crc kubenswrapper[4946]: E1128 06:52:51.671265 4946 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 06:52:51 crc kubenswrapper[4946]: E1128 06:52:51.671371 4946 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 06:52:51 crc kubenswrapper[4946]: E1128 06:52:51.671382 4946 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:52:51 crc kubenswrapper[4946]: E1128 06:52:51.671291 4946 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 06:52:51 crc kubenswrapper[4946]: E1128 06:52:51.671401 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 06:52:59.671395731 +0000 UTC m=+34.049460832 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:52:51 crc kubenswrapper[4946]: E1128 06:52:51.671501 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 06:52:59.671492363 +0000 UTC m=+34.049557474 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.685214 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.685265 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.685283 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.685302 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.685315 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:51Z","lastTransitionTime":"2025-11-28T06:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.787985 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.788032 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.788044 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.788065 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.788076 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:51Z","lastTransitionTime":"2025-11-28T06:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.896180 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.896226 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.896241 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.896263 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.896278 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:51Z","lastTransitionTime":"2025-11-28T06:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.989848 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:52:51 crc kubenswrapper[4946]: E1128 06:52:51.989967 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.990097 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.990094 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:52:51 crc kubenswrapper[4946]: E1128 06:52:51.990279 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:52:51 crc kubenswrapper[4946]: E1128 06:52:51.990346 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.999415 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.999471 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.999481 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.999493 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:51 crc kubenswrapper[4946]: I1128 06:52:51.999503 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:51Z","lastTransitionTime":"2025-11-28T06:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.102887 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.102931 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.102947 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.102970 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.102998 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:52Z","lastTransitionTime":"2025-11-28T06:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.207273 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.207331 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.207351 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.207375 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.207394 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:52Z","lastTransitionTime":"2025-11-28T06:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.310048 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.310121 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.310139 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.310165 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.310185 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:52Z","lastTransitionTime":"2025-11-28T06:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.314220 4946 generic.go:334] "Generic (PLEG): container finished" podID="45d160dd-b1b4-4cdf-800c-74195ab023e1" containerID="e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb" exitCode=0 Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.314277 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pqf5z" event={"ID":"45d160dd-b1b4-4cdf-800c-74195ab023e1","Type":"ContainerDied","Data":"e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb"} Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.335571 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:52Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.354242 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j4wp8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edc5a786-a162-4416-a240-54272b0c7376\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a6e397c1f3423abc4eeb0a20453109c22f305c1b259dddbe23bd71a78b5512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87x27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j4wp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:52Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.374902 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68d47c52-9827-4ebb-88a2-ba498092709e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eef1ad3ca814effd7a94c23d46fc3b14149ccc6ae6181e19c5f24884967b7627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7152b6653aaccd36bb018d24abf196dd6f13d41edddb0939917f96bcd9e9bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5be506bd8debbc12ff5ca56d77fdf667855d2c4078cea1b1d1388f5ef8831ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5dc7ef4ea06b54b6de621ebe79d3587324b0c4cdd90e078d4116dc5252ad35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4c7e2754f62b9d955da7589f04acb0136ed3c504a75b0db30f22cbeec54c413\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:52:44Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:52:38.666044 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:52:38.667906 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018003996/tls.crt::/tmp/serving-cert-1018003996/tls.key\\\\\\\"\\\\nI1128 06:52:44.041872 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:52:44.048251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:52:44.048273 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:52:44.048300 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:52:44.048305 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:52:44.064176 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 06:52:44.064208 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064213 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:52:44.064220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:52:44.064223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:52:44.064226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 06:52:44.064671 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 06:52:44.066682 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326666c58a9c75d0f08586c82bca1a0686ed2935f475b3c359cddc01875b248e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:52Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.401160 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pkknv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:52Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.414131 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.414162 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.414172 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.414188 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.414198 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:52Z","lastTransitionTime":"2025-11-28T06:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.419174 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16482226c2bd22abff4c445ab0d5e46c2012a83ab5e616ebecc38d9009a8a6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:52Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.438486 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9g9w4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"857356d2-6585-41c6-9a2c-e06ef45f7303\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9d433d72f090d2c1d0036b7ba652dfca7bed7425a35bf87809ede7fd9fca8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9g9w4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:52Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.454799 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://078e67af49c0d442d125d9cb67d9d93ce05e452bf3b09f666e9763e481eaa15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:52Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.477689 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqf5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d160dd-b1b4-4cdf-800c-74195ab023e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqf5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:52Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.494792 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:52Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.507636 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0514bebed46aabc46c74ef48fda9c1e561a174d87db40128875cc0ec78446d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ab7252b4526779db2e0aea8b7747369f52242a76df7e179c6617290fee4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:52Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.517175 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.517226 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.517243 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.517262 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.517276 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:52Z","lastTransitionTime":"2025-11-28T06:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.520826 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fb7be34-ed01-4217-aab4-c8ab35341ac6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af6de2bd2f3d9be0a670cb8c36e6db8c037c4987eff7356840af5b275156ad7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d384f5fd04cfe61e6a8d13538eb1b0267ca88eb0d226d8d85b5065fb5f7de9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89780aac4572411aeeb8480b61c98dbc22a2e4b3e72290470ba56c81b648b617\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b6df8018ceaf95465c4ad1bfd0bb2653ae4b87430f6ed4e21af2bd14bbb9ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:52Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.535018 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:52Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.547758 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a10fe13c-fc86-474a-945b-f96caafad2a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712d9a3cda3acae09eae5946cb14e19bee8697d567bff79dd67db8a41ed38fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45pfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:52Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.562520 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7450befc-262f-45d1-a5f4-f445e540185b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7854e764531b926939e7ce77e671b5dae4c0d8a4a1d8ac7153f851e924c4bb84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174d7788364a980297791c1072b9c05dd827bb3759a851ea4c5dd07981297b67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g2vhr\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:52Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.620954 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.620990 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.621000 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.621013 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.621022 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:52Z","lastTransitionTime":"2025-11-28T06:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.724040 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.724105 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.724120 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.724144 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.724159 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:52Z","lastTransitionTime":"2025-11-28T06:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.828504 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.828946 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.828964 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.828993 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.829011 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:52Z","lastTransitionTime":"2025-11-28T06:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.931978 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.932034 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.932052 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.932079 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:52 crc kubenswrapper[4946]: I1128 06:52:52.932097 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:52Z","lastTransitionTime":"2025-11-28T06:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.035688 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.035738 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.035749 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.035766 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.035779 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:53Z","lastTransitionTime":"2025-11-28T06:52:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.163143 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.163208 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.163231 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.163266 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.163294 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:53Z","lastTransitionTime":"2025-11-28T06:52:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.267261 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.267350 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.267371 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.267403 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.267428 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:53Z","lastTransitionTime":"2025-11-28T06:52:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.326090 4946 generic.go:334] "Generic (PLEG): container finished" podID="45d160dd-b1b4-4cdf-800c-74195ab023e1" containerID="43f336183b95d088fbf0d45cb8e6e6ab124453e9ba4a1456171f5631e9d029a4" exitCode=0 Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.326194 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pqf5z" event={"ID":"45d160dd-b1b4-4cdf-800c-74195ab023e1","Type":"ContainerDied","Data":"43f336183b95d088fbf0d45cb8e6e6ab124453e9ba4a1456171f5631e9d029a4"} Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.335641 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" event={"ID":"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51","Type":"ContainerStarted","Data":"88ec893f18e2e817786cb4ea007632832cdc85b6701af08f2f8442f7aed15e9c"} Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.335975 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.340172 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:53Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.359061 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a10fe13c-fc86-474a-945b-f96caafad2a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712d9a3cda3acae09eae5946cb14e19bee8697d567bff79dd67db8a41ed38fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45pfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:53Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.372741 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.372780 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.372790 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.372805 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.372815 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:53Z","lastTransitionTime":"2025-11-28T06:52:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.376234 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.376537 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7450befc-262f-45d1-a5f4-f445e540185b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7854e764531b926939e7ce77e671b5dae4c0d8a4a1d8ac7153f851e924c4bb84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174d7788364a980297791c1072b9c05dd827bb3759a851ea4c5dd07981297b67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g2vhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:53Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.394925 4946 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fb7be34-ed01-4217-aab4-c8ab35341ac6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af6de2bd2f3d9be0a670cb8c36e6db8c037c4987eff7356840af5b275156ad7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d384f5fd04cfe61e6a8d13538eb1b0267ca88eb0d226d8d85b5065fb5f7de9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89780aac4572411aeeb8480b61c98dbc22a2e4b3e72290470ba56c81b648b617\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b6df8018ceaf95465c4ad1bfd
0bb2653ae4b87430f6ed4e21af2bd14bbb9ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:53Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.407675 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j4wp8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edc5a786-a162-4416-a240-54272b0c7376\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a6e397c1f3423abc4eeb0a20453109c22f305c1b259dddbe23bd71a78b5512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87x27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j4wp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:53Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.427149 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68d47c52-9827-4ebb-88a2-ba498092709e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eef1ad3ca814effd7a94c23d46fc3b14149ccc6ae6181e19c5f24884967b7627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7152b6653aaccd36bb018d24abf196dd6f13d41edddb0939917f96bcd9e9bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":
[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5be506bd8debbc12ff5ca56d77fdf667855d2c4078cea1b1d1388f5ef8831ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5dc7ef4ea06b54b6de621ebe79d3587324b0c4cdd90e078d4116dc5252ad35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4c7e2754f62b9d955da7589f04acb0136ed3c504a75b0db30f22cbeec54c413\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:52:44Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:52:38.666044 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:52:38.667906 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018003996/tls.crt::/tmp/serving-cert-1018003996/tls.key\\\\\\\"\\\\nI1128 06:52:44.041872 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:52:44.048251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:52:44.048273 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:52:44.048300 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:52:44.048305 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:52:44.064176 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 06:52:44.064208 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064213 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:52:44.064220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:52:44.064223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:52:44.064226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 06:52:44.064671 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 06:52:44.066682 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326666c58a9c75d0f08586c82bca1a0686ed2935f475b3c359cddc01875b248e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:53Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.443999 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:53Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.460835 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16482226c2bd22abff4c445ab0d5e46c2012a83ab5e616ebecc38d9009a8a6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:53Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.475851 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.475880 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.476058 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.476084 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.476098 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:53Z","lastTransitionTime":"2025-11-28T06:52:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.480089 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9g9w4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"857356d2-6585-41c6-9a2c-e06ef45f7303\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9d433d72f090d2c1d0036b7ba652dfca7bed7425a35bf87809ede7fd9fca8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9g9w4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:53Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.505013 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pkknv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:53Z 
is after 2025-08-24T17:21:41Z" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.521203 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqf5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d160dd-b1b4-4cdf-800c-74195ab023e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf
679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\
\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f336183b95d088fbf0d45cb8e6e6ab124453e9ba4a1456171f5631e9d029a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f336183b95d088fbf0d45cb8e6e6ab124453e9ba4a1456171f5631e9d029a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqf5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:53Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.535592 4946 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:53Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.552039 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0514bebed46aabc46c74ef48fda9c1e561a174d87db40128875cc0ec78446d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ab7252b4526779db2e0aea8b7747369f52242a76df7e179c6617290fee4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:53Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.569875 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://078e67af49c0d442d125d9cb67d9d93ce05e452bf3b09f666e9763e481eaa15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:53Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.579385 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.579805 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.579924 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.580019 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.580119 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:53Z","lastTransitionTime":"2025-11-28T06:52:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.586205 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:53Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.603109 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0514bebed46aabc46c74ef48fda9c1e561a174d87db40128875cc0ec78446d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ab7252b4526779db2e0aea8b7747369f52242a76df7e179c6617290fee4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:53Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.621167 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://078e67af49c0d442d125d9cb67d9d93ce05e452bf3b09f666e9763e481eaa15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:53Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.644844 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqf5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d160dd-b1b4-4cdf-800c-74195ab023e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f336183b95d088fbf0d45cb8e6e6ab124453e9ba4a1456171f5631e9d029a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f336183b95d088fbf0d45cb8e6e6ab124453e9ba4a1456171f5631e9d029a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqf5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:53Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.662211 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fb7be34-ed01-4217-aab4-c8ab35341ac6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af6de2bd2f3d9be0a670cb8c36e6db8c037c4987eff7356840af5b275156ad7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d384f5fd04cfe61e6a8d13538eb1b0267ca88eb0d226d8d85b5065fb5f7de9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89780aac4572411aeeb8480b61c98dbc22a2e4b3e72290470ba56c81b648b617\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b6df8018ceaf95465c4ad1bfd0bb2653ae4b87430f6ed4e21af2bd14bbb9ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:53Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.678303 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:53Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.682919 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.682964 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.682976 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.682994 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.683006 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:53Z","lastTransitionTime":"2025-11-28T06:52:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.691429 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a10fe13c-fc86-474a-945b-f96caafad2a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712d9a3cda3acae09eae5946cb14e19bee8697d567bff79dd67db8a41ed38fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45pfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:53Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.708357 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7450befc-262f-45d1-a5f4-f445e540185b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7854e764531b926939e7ce77e671b5dae4c0d8a4a1d8ac7153f851e924c4bb84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174d7788364a980297791c1072b9c05dd827bb3759a851ea4c5dd07981297b67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g2vhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:53Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.728868 4946 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68d47c52-9827-4ebb-88a2-ba498092709e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eef1ad3ca814effd7a94c23d46fc3b14149ccc6ae6181e19c5f24884967b7627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7152b6653aaccd36bb018d24abf196dd6f13d41edddb0939917f96bcd9e9bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5be506bd8debbc12ff5ca56d77fdf667855d2c4078cea1b1d1388f5ef8831ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5dc7ef4ea06b54b6de621ebe79d3587324b0c4cdd90e078d4116dc5252ad35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4c7e2754f62b9d955da7589f04acb0136ed3c504a75b0db30f22cbeec54c413\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:52:44Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:52:38.666044 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:52:38.667906 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018003996/tls.crt::/tmp/serving-cert-1018003996/tls.key\\\\\\\"\\\\nI1128 06:52:44.041872 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:52:44.048251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:52:44.048273 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:52:44.048300 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:52:44.048305 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:52:44.064176 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 06:52:44.064208 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064213 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:52:44.064220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:52:44.064223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:52:44.064226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 06:52:44.064671 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 06:52:44.066682 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326666c58a9c75d0f08586c82bca1a0686ed2935f475b3c359cddc01875b248e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:53Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.748789 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:53Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.767861 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j4wp8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"edc5a786-a162-4416-a240-54272b0c7376\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a6e397c1f3423abc4eeb0a20453109c22f305c1b259dddbe23bd71a78b5512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87x27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j4wp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:53Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.785849 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.785883 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.785891 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.785908 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.785917 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:53Z","lastTransitionTime":"2025-11-28T06:52:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.789381 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16482226c2bd22abff4c445ab0d5e46c2012a83ab5e616ebecc38d9009a8a6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:53Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.805848 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9g9w4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"857356d2-6585-41c6-9a2c-e06ef45f7303\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9d433d72f090d2c1d0036b7ba652dfca7bed7425a35bf87809ede7fd9fca8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9g9w4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:53Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.828144 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35308a0d1d457d423060c71fb70b7e8d1723171f575b2545db3b372c4d2a12ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38e04dce69273a5669801d093325e51ed6d3a83d545aaf505f6619b89cb9e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9128b2484a66b1b0f5954befe68d1bd1be5a4edbf254d76473d8ee3025b47ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8b00edadf6d3442d70a476d465733f303f7a48d8998c1a9c881ae515daa264e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ba38f4c30e4f09063926783a9fbf7883079d7f077200ee18a7a24c236b025f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://7ed2e660d6d37ec0abfdd8485a2b9cb79b041a214007a5b8832d70a21076c324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88ec893f18e2e817786cb4ea007632832cdc85b6701af08f2f8442f7aed15e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"
mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc681b78e3640f7f8dd8729a32ea393d97f837795e0cccf628724d2b3901739b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pkknv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:53Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.888813 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.888890 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.888918 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.888950 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.888970 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:53Z","lastTransitionTime":"2025-11-28T06:52:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.989593 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.989671 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:52:53 crc kubenswrapper[4946]: E1128 06:52:53.989732 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:52:53 crc kubenswrapper[4946]: E1128 06:52:53.989867 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.989677 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:52:53 crc kubenswrapper[4946]: E1128 06:52:53.990006 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.991787 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.991815 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.991826 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.991840 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:53 crc kubenswrapper[4946]: I1128 06:52:53.991852 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:53Z","lastTransitionTime":"2025-11-28T06:52:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.095584 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.095655 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.095668 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.095686 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.095700 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:54Z","lastTransitionTime":"2025-11-28T06:52:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.199575 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.199632 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.199650 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.199673 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.199689 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:54Z","lastTransitionTime":"2025-11-28T06:52:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.302678 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.302746 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.302770 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.302806 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.302833 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:54Z","lastTransitionTime":"2025-11-28T06:52:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.347764 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pqf5z" event={"ID":"45d160dd-b1b4-4cdf-800c-74195ab023e1","Type":"ContainerStarted","Data":"3bd1943c060c0e4d739d86520e0661bbd6ecef5614aaceb3e893a11204fdb39a"} Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.347869 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.347815 4946 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.371081 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0514bebed46aabc46c74ef48fda9c1e561a174d87db40128875cc0ec78446d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ab7252b4526779db2e0aea8b7747369f52242a76df7e179c6617290fee4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:54Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.383956 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.391734 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://078e67af49c0d442d125d9cb67d9d93ce05e452bf3b09f666e9763e481eaa15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:54Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.405506 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.405565 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.405584 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.405607 4946 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeNotReady" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.405623 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:54Z","lastTransitionTime":"2025-11-28T06:52:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.413151 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqf5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d160dd-b1b4-4cdf-800c-74195ab023e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bd1943c060c0e4d739d86520e0661bbd6ecef5614aaceb3e893a11204fdb39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\
"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f336183b95d088fbf0d45cb8e6e6ab124453e9ba4a1456171f5631e9d029a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f336183b95d088fbf0d45cb8e6e6ab124453e9ba4a1456171f5631e9d029a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-pqf5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:54Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.431700 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:54Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.491788 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fb7be34-ed01-4217-aab4-c8ab35341ac6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af6de2bd2f3d9be0a670cb8c36e6db8c037c4987eff7356840af5b275156ad7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d384f5fd04cfe61e6a8d13538eb1b0267ca88eb0d226d8d85b5065fb5f7de9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"r
esource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89780aac4572411aeeb8480b61c98dbc22a2e4b3e72290470ba56c81b648b617\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b6df8018ceaf95465c4ad1bfd0bb2653ae4b87430f6ed4e21af2bd14bbb9ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:54Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.508518 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.508574 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.508589 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.508610 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.508626 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:54Z","lastTransitionTime":"2025-11-28T06:52:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.522390 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:54Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.536862 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tvtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a10fe13c-fc86-474a-945b-f96caafad2a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712d9a3cda3acae09eae5946cb14e19bee8697d567bff79dd67db8a41ed38fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45pfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:54Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.549100 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7450befc-262f-45d1-a5f4-f445e540185b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7854e764531b926939e7ce77e671b5dae4c0d8a4a1d8ac7153f851e924c4bb84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174d7788364a980297791c1072b9c05dd827bb3759a851ea4c5dd07981297b67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g2vhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:54Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.560281 4946 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:54Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.570541 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j4wp8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"edc5a786-a162-4416-a240-54272b0c7376\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a6e397c1f3423abc4eeb0a20453109c22f305c1b259dddbe23bd71a78b5512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87x27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j4wp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:54Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.586002 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68d47c52-9827-4ebb-88a2-ba498092709e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eef1ad3ca814effd7a94c23d46fc3b14149ccc6ae6181e19c5f24884967b7627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7152b6653aaccd36bb018d24abf196dd6f13d41edddb0939917f96bcd9e9bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5be506bd8debbc12ff5ca56d77fdf667855d2c4078cea1b1d1388f5ef8831ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5dc7ef4ea06b54b6de621ebe79d3587324b0c4cdd90e078d4116dc5252ad35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4c7e2754f62b9d955da7589f04acb0136ed3c504a75b0db30f22cbeec54c413\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:52:44Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:52:38.666044 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:52:38.667906 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018003996/tls.crt::/tmp/serving-cert-1018003996/tls.key\\\\\\\"\\\\nI1128 06:52:44.041872 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:52:44.048251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:52:44.048273 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:52:44.048300 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:52:44.048305 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:52:44.064176 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 06:52:44.064208 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064213 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:52:44.064220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:52:44.064223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:52:44.064226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 06:52:44.064671 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 06:52:44.066682 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326666c58a9c75d0f08586c82bca1a0686ed2935f475b3c359cddc01875b248e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:54Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.601127 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9g9w4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"857356d2-6585-41c6-9a2c-e06ef45f7303\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9d433d72f090d2c1d0036b7ba652dfca7bed7425a35bf87809ede7fd9fca8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9g9w4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:54Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.611418 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.611454 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.611487 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.611509 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.611519 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:54Z","lastTransitionTime":"2025-11-28T06:52:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.630551 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35308a0d1d457d423060c71fb70b7e8d1723171f575b2545db3b372c4d2a12ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38e04dce69273a5669801d093325e51ed6d3a83d545aaf505f6619b89cb9e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9128b2484a66b1b0f5954befe68d1bd1be5a4edbf254d76473d8ee3025b47ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8b00edadf6d3442d70a476d465733f303f7a48d8998c1a9c881ae515daa264e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ba38f4c30e4f09063926783a9fbf7883079d7f077200ee18a7a24c236b025f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed2e660d6d37ec0abfdd8485a2b9cb79b041a214007a5b8832d70a21076c324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88ec893f18e2e817786cb4ea007632832cdc85b6
701af08f2f8442f7aed15e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc681b78e3640f7f8dd8729a32ea393d97f837795e0cccf628724d2b3901739b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pkknv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:54Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.649943 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16482226c2bd22abff4c445ab0d5e46c2012a83ab5e616ebecc38d9009a8a6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:54Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.665622 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fb7be34-ed01-4217-aab4-c8ab35341ac6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af6de2bd2f3d9be0a670cb8c36e6db8c037c4987eff7356840af5b275156ad7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d384f5fd04cfe61e6a8d13538eb1b0267ca88eb0d226d8d85b5065fb5f7de9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89780aac4572411aeeb8480b61c98dbc22a2e4b3e72290470ba56c81b648b617\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b6df8018ceaf95465c4ad1bfd0bb2653ae4b87430f6ed4e21af2bd14bbb9ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:54Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.684658 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:54Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.698968 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a10fe13c-fc86-474a-945b-f96caafad2a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712d9a3cda3acae09eae5946cb14e19bee8697d567bff79dd67db8a41ed38fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45pfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:54Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.711759 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7450befc-262f-45d1-a5f4-f445e540185b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7854e764531b926939e7ce77e671b5dae4c0d8a4a1d8ac7153f851e924c4bb84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174d7788364a980297791c1072b9c05dd827bb3759a851ea4c5dd07981297b67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g2vhr\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:54Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.714605 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.714661 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.714673 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.714692 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.714704 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:54Z","lastTransitionTime":"2025-11-28T06:52:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.729182 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68d47c52-9827-4ebb-88a2-ba498092709e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eef1ad3ca814effd7a94c23d46fc3b14149ccc6ae6181e19c5f24884967b7627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7152b6653aaccd36bb018d24abf196dd6f13d41edddb0939917f96bcd9e9bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5be506bd8debbc12ff5ca56d77fdf667855d2c4078cea1b1d1388f5ef8831ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5dc7ef4ea06b54b6de621ebe79d3587324b0c4cdd90e078d4116dc5252ad35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4c7e2754f62b9d955da7589f04acb0136ed3c504a75b0db30f22cbeec54c413\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:52:44Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:52:38.666044 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:52:38.667906 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018003996/tls.crt::/tmp/serving-cert-1018003996/tls.key\\\\\\\"\\\\nI1128 06:52:44.041872 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:52:44.048251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:52:44.048273 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:52:44.048300 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:52:44.048305 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:52:44.064176 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 06:52:44.064208 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064213 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:52:44.064220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:52:44.064223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:52:44.064226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 06:52:44.064671 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 06:52:44.066682 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326666c58a9c75d0f08586c82bca1a0686ed2935f475b3c359cddc01875b248e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:54Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.745301 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:54Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.755489 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j4wp8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"edc5a786-a162-4416-a240-54272b0c7376\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a6e397c1f3423abc4eeb0a20453109c22f305c1b259dddbe23bd71a78b5512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87x27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j4wp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:54Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.771344 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16482226c2bd22abff4c445ab0d5e46c2012a83ab5e616ebecc38d9009a8a6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:54Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.791005 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9g9w4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"857356d2-6585-41c6-9a2c-e06ef45f7303\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9d433d72f090d2c1d0036b7ba652dfca7bed7425a35bf87809ede7fd9fca8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9g9w4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:54Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.817522 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35308a0d1d457d423060c71fb70b7e8d1723171f575b2545db3b372c4d2a12ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38e04dce69273a5669801d093325e51ed6d3a83d545aaf505f6619b89cb9e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9128b2484a66b1b0f5954befe68d1bd1be5a4edbf254d76473d8ee3025b47ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8b00edadf6d3442d70a476d465733f303f7a48d8998c1a9c881ae515daa264e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ba38f4c30e4f09063926783a9fbf7883079d7f077200ee18a7a24c236b025f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed2e
660d6d37ec0abfdd8485a2b9cb79b041a214007a5b8832d70a21076c324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88ec893f18e2e817786cb4ea007632832cdc85b6701af08f2f8442f7aed15e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\
\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc681b78e3640f7f8dd8729a32ea393d97f837795e0cccf628724d2b3901739b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pkknv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:54Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.818854 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 
28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.818915 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.818944 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.818975 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.818998 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:54Z","lastTransitionTime":"2025-11-28T06:52:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.845232 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:54Z is after 2025-08-24T17:21:41Z"
Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.867887 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0514bebed46aabc46c74ef48fda9c1e561a174d87db40128875cc0ec78446d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ab7252b4526779db2e0aea8b7747369f52242a76df7e179c6617290fee4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:54Z is after 2025-08-24T17:21:41Z"
Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.888717 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://078e67af49c0d442d125d9cb67d9d93ce05e452bf3b09f666e9763e481eaa15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:54Z is after 2025-08-24T17:21:41Z"
Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.918674 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqf5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d160dd-b1b4-4cdf-800c-74195ab023e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bd1943c060c0e4d739d86520e0661bbd6ecef5614aaceb3e893a11204fdb39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f336183b95d088fbf0d45cb8e6e6ab124453e9ba4a1456171f5631e9d029a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f336183b95d088fbf0d45cb8e6e6ab124453e9ba4a1456171f5631e9d029a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqf5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:54Z is after 2025-08-24T17:21:41Z"
Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.921663 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.921704 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:54 crc
kubenswrapper[4946]: I1128 06:52:54.921715 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.921732 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:54 crc kubenswrapper[4946]: I1128 06:52:54.921742 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:54Z","lastTransitionTime":"2025-11-28T06:52:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:55 crc kubenswrapper[4946]: I1128 06:52:55.025260 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:55 crc kubenswrapper[4946]: I1128 06:52:55.025331 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:55 crc kubenswrapper[4946]: I1128 06:52:55.025355 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:55 crc kubenswrapper[4946]: I1128 06:52:55.025400 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:55 crc kubenswrapper[4946]: I1128 06:52:55.025419 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:55Z","lastTransitionTime":"2025-11-28T06:52:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:55 crc kubenswrapper[4946]: I1128 06:52:55.127996 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:55 crc kubenswrapper[4946]: I1128 06:52:55.128034 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:55 crc kubenswrapper[4946]: I1128 06:52:55.128045 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:55 crc kubenswrapper[4946]: I1128 06:52:55.128062 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:55 crc kubenswrapper[4946]: I1128 06:52:55.128074 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:55Z","lastTransitionTime":"2025-11-28T06:52:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:52:55 crc kubenswrapper[4946]: I1128 06:52:55.231070 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:55 crc kubenswrapper[4946]: I1128 06:52:55.231109 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:55 crc kubenswrapper[4946]: I1128 06:52:55.231122 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:55 crc kubenswrapper[4946]: I1128 06:52:55.231143 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:55 crc kubenswrapper[4946]: I1128 06:52:55.231156 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:55Z","lastTransitionTime":"2025-11-28T06:52:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:55 crc kubenswrapper[4946]: I1128 06:52:55.335376 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:55 crc kubenswrapper[4946]: I1128 06:52:55.335443 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:55 crc kubenswrapper[4946]: I1128 06:52:55.335478 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:55 crc kubenswrapper[4946]: I1128 06:52:55.335506 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:55 crc kubenswrapper[4946]: I1128 06:52:55.335526 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:55Z","lastTransitionTime":"2025-11-28T06:52:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:52:55 crc kubenswrapper[4946]: I1128 06:52:55.351860 4946 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 06:52:55 crc kubenswrapper[4946]: I1128 06:52:55.438380 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:55 crc kubenswrapper[4946]: I1128 06:52:55.438434 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:55 crc kubenswrapper[4946]: I1128 06:52:55.438445 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:55 crc kubenswrapper[4946]: I1128 06:52:55.438474 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:55 crc kubenswrapper[4946]: I1128 06:52:55.438485 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:55Z","lastTransitionTime":"2025-11-28T06:52:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:55 crc kubenswrapper[4946]: I1128 06:52:55.541160 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:55 crc kubenswrapper[4946]: I1128 06:52:55.541232 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:55 crc kubenswrapper[4946]: I1128 06:52:55.541254 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:55 crc kubenswrapper[4946]: I1128 06:52:55.541282 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:55 crc kubenswrapper[4946]: I1128 06:52:55.541304 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:55Z","lastTransitionTime":"2025-11-28T06:52:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:52:55 crc kubenswrapper[4946]: I1128 06:52:55.643984 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:55 crc kubenswrapper[4946]: I1128 06:52:55.644037 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:55 crc kubenswrapper[4946]: I1128 06:52:55.644048 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:55 crc kubenswrapper[4946]: I1128 06:52:55.644065 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:55 crc kubenswrapper[4946]: I1128 06:52:55.644076 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:55Z","lastTransitionTime":"2025-11-28T06:52:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:55 crc kubenswrapper[4946]: I1128 06:52:55.746261 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:55 crc kubenswrapper[4946]: I1128 06:52:55.746303 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:55 crc kubenswrapper[4946]: I1128 06:52:55.746313 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:55 crc kubenswrapper[4946]: I1128 06:52:55.746329 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:55 crc kubenswrapper[4946]: I1128 06:52:55.746342 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:55Z","lastTransitionTime":"2025-11-28T06:52:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:55 crc kubenswrapper[4946]: I1128 06:52:55.849137 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:55 crc kubenswrapper[4946]: I1128 06:52:55.849215 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:55 crc kubenswrapper[4946]: I1128 06:52:55.849233 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:55 crc kubenswrapper[4946]: I1128 06:52:55.849259 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:55 crc kubenswrapper[4946]: I1128 06:52:55.849278 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:55Z","lastTransitionTime":"2025-11-28T06:52:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:52:55 crc kubenswrapper[4946]: I1128 06:52:55.952543 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:55 crc kubenswrapper[4946]: I1128 06:52:55.952599 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:55 crc kubenswrapper[4946]: I1128 06:52:55.952610 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:55 crc kubenswrapper[4946]: I1128 06:52:55.952629 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:55 crc kubenswrapper[4946]: I1128 06:52:55.952640 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:55Z","lastTransitionTime":"2025-11-28T06:52:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:55 crc kubenswrapper[4946]: I1128 06:52:55.989924 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:52:55 crc kubenswrapper[4946]: I1128 06:52:55.990086 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:52:55 crc kubenswrapper[4946]: E1128 06:52:55.990241 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:52:55 crc kubenswrapper[4946]: I1128 06:52:55.990296 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:52:55 crc kubenswrapper[4946]: E1128 06:52:55.990547 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:52:55 crc kubenswrapper[4946]: E1128 06:52:55.990691 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.015516 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16482226c2bd22abff4c445ab0d5e46c2012a83ab5e616ebecc38d9009a8a6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:56Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.035963 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9g9w4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"857356d2-6585-41c6-9a2c-e06ef45f7303\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9d433d72f090d2c1d0036b7ba652dfca7bed7425a35bf87809ede7fd9fca8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9g9w4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:56Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.056024 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.056089 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.056105 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.056130 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.056145 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:56Z","lastTransitionTime":"2025-11-28T06:52:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.064673 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35308a0d1d457d423060c71fb70b7e8d1723171f575b2545db3b372c4d2a12ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38e04dce69273a5669801d093325e51ed6d3a83d545aaf505f6619b89cb9e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9128b2484a66b1b0f5954befe68d1bd1be5a4edbf254d76473d8ee3025b47ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8b00edadf6d3442d70a476d465733f303f7a48d8998c1a9c881ae515daa264e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ba38f4c30e4f09063926783a9fbf7883079d7f077200ee18a7a24c236b025f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed2e660d6d37ec0abfdd8485a2b9cb79b041a214007a5b8832d70a21076c324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88ec893f18e2e817786cb4ea007632832cdc85b6
701af08f2f8442f7aed15e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc681b78e3640f7f8dd8729a32ea393d97f837795e0cccf628724d2b3901739b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pkknv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:56Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.088337 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:56Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.111065 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0514bebed46aabc46c74ef48fda9c1e561a174d87db40128875cc0ec78446d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ab7252b4526779db2e0aea8b7747369f52242a76df7e179c6617290fee4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:56Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.133036 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://078e67af49c0d442d125d9cb67d9d93ce05e452bf3b09f666e9763e481eaa15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:56Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.159712 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.159774 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.159797 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.159825 4946 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.159847 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:56Z","lastTransitionTime":"2025-11-28T06:52:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.165121 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqf5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d160dd-b1b4-4cdf-800c-74195ab023e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bd1943c060c0e4d739d86520e0661bbd6ecef5614aaceb3e893a11204fdb39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed8145
1ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f336183b95d088fbf0d45cb8e6e6ab124453e9ba4a1456171f5631e9d029a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f336183b95d088fbf0d45cb8e6e6ab124453e9ba4a1456171f5631e9d029a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTim
e\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqf5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:56Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.180786 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fb7be34-ed01-4217-aab4-c8ab35341ac6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af6de2bd2f3d9be0a670cb8c36e6db8c037c4987eff7356840af5b275156ad7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d384f5fd04cfe61e6a8d13538eb1b0267ca88eb0d226d8d85b5065fb5f7de9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89780aac4572411aeeb8480b61c98dbc22a2e4b3e72290470ba56c81b648b617\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0
bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b6df8018ceaf95465c4ad1bfd0bb2653ae4b87430f6ed4e21af2bd14bbb9ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:56Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.198334 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:56Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.213292 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a10fe13c-fc86-474a-945b-f96caafad2a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712d9a3cda3acae09eae5946cb14e19bee8697d567bff79dd67db8a41ed38fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45pfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:56Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.235199 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7450befc-262f-45d1-a5f4-f445e540185b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7854e764531b926939e7ce77e671b5dae4c0d8a4a1d8ac7153f851e924c4bb84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174d7788364a980297791c1072b9c05dd827bb3759a851ea4c5dd07981297b67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g2vhr\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:56Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.253039 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68d47c52-9827-4ebb-88a2-ba498092709e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eef1ad3ca814effd7a94c23d46fc3b14149ccc6ae6181e19c5f24884967b7627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7152b6653aaccd36bb018d24abf196dd6f13d41edddb0939917f96bcd9e9bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5be506bd8debbc12ff5ca56d77fdf667855d2c4078cea1b1d1388f5ef8831ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1
ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5dc7ef4ea06b54b6de621ebe79d3587324b0c4cdd90e078d4116dc5252ad35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4c7e2754f62b9d955da7589f04acb0136ed3c504a75b0db30f22cbeec54c413\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:52:44Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:52:38.666044 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:52:38.667906 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018003996/tls.crt::/tmp/serving-cert-1018003996/tls.key\\\\\\\"\\\\nI1128 06:52:44.041872 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:52:44.048251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:52:44.048273 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:52:44.048300 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:52:44.048305 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:52:44.064176 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 06:52:44.064208 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064213 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:52:44.064220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:52:44.064223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:52:44.064226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 06:52:44.064671 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 06:52:44.066682 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326666c58a9c75d0f08586c82bca1a0686ed2935f475b3c359cddc01875b248e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:56Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.264211 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.264308 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.264497 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.264559 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.264604 4946 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:56Z","lastTransitionTime":"2025-11-28T06:52:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.268774 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:56Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.280289 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j4wp8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edc5a786-a162-4416-a240-54272b0c7376\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a6e397c1f3423abc4eeb0a20453109c22f305c1b259dddbe23bd71a78b5512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87x27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j4wp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:56Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.357181 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pkknv_47e7046d-60dc-4dc0-b63e-f22f4ca5cd51/ovnkube-controller/0.log" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.360809 4946 generic.go:334] "Generic (PLEG): container finished" podID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" containerID="88ec893f18e2e817786cb4ea007632832cdc85b6701af08f2f8442f7aed15e9c" exitCode=1 Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.360874 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" event={"ID":"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51","Type":"ContainerDied","Data":"88ec893f18e2e817786cb4ea007632832cdc85b6701af08f2f8442f7aed15e9c"} Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.361698 4946 scope.go:117] "RemoveContainer" containerID="88ec893f18e2e817786cb4ea007632832cdc85b6701af08f2f8442f7aed15e9c" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.367221 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.367273 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.367288 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.367305 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.367318 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:56Z","lastTransitionTime":"2025-11-28T06:52:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.381789 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:56Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.398090 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tvtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a10fe13c-fc86-474a-945b-f96caafad2a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712d9a3cda3acae09eae5946cb14e19bee8697d567bff79dd67db8a41ed38fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45pfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:56Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.416106 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7450befc-262f-45d1-a5f4-f445e540185b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7854e764531b926939e7ce77e671b5dae4c0d8a4a1d8ac7153f851e924c4bb84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174d7788364a980297791c1072b9c05dd827bb3759a851ea4c5dd07981297b67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g2vhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:56Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.434658 4946 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fb7be34-ed01-4217-aab4-c8ab35341ac6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af6de2bd2f3d9be0a670cb8c36e6db8c037c4987eff7356840af5b275156ad7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d384f5fd04cfe61e6a8d13538eb1b0267ca88eb0d226d8d85b5065fb5f7de9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89780aac4572411aeeb8480b61c98dbc22a2e4b3e72290470ba56c81b648b617\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b6df8018ceaf95465c4ad1bfd
0bb2653ae4b87430f6ed4e21af2bd14bbb9ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:56Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.450520 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j4wp8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edc5a786-a162-4416-a240-54272b0c7376\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a6e397c1f3423abc4eeb0a20453109c22f305c1b259dddbe23bd71a78b5512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87x27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j4wp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:56Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.464790 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68d47c52-9827-4ebb-88a2-ba498092709e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eef1ad3ca814effd7a94c23d46fc3b14149ccc6ae6181e19c5f24884967b7627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7152b6653aaccd36bb018d24abf196dd6f13d41edddb0939917f96bcd9e9bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":
[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5be506bd8debbc12ff5ca56d77fdf667855d2c4078cea1b1d1388f5ef8831ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5dc7ef4ea06b54b6de621ebe79d3587324b0c4cdd90e078d4116dc5252ad35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4c7e2754f62b9d955da7589f04acb0136ed3c504a75b0db30f22cbeec54c413\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:52:44Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:52:38.666044 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:52:38.667906 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018003996/tls.crt::/tmp/serving-cert-1018003996/tls.key\\\\\\\"\\\\nI1128 06:52:44.041872 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:52:44.048251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:52:44.048273 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:52:44.048300 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:52:44.048305 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:52:44.064176 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 06:52:44.064208 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064213 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:52:44.064220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:52:44.064223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:52:44.064226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 06:52:44.064671 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 06:52:44.066682 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326666c58a9c75d0f08586c82bca1a0686ed2935f475b3c359cddc01875b248e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:56Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.470079 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.470126 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.470136 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.470154 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" 
Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.470165 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:56Z","lastTransitionTime":"2025-11-28T06:52:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.481243 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:56Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.495410 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16482226c2bd22abff4c445ab0d5e46c2012a83ab5e616ebecc38d9009a8a6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:56Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.516035 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9g9w4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"857356d2-6585-41c6-9a2c-e06ef45f7303\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9d433d72f090d2c1d0036b7ba652dfca7bed7425a35bf87809ede7fd9fca8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9g9w4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:56Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.543273 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35308a0d1d457d423060c71fb70b7e8d1723171f575b2545db3b372c4d2a12ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38e04dce69273a5669801d093325e51ed6d3a83d545aaf505f6619b89cb9e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9128b2484a66b1b0f5954befe68d1bd1be5a4edbf254d76473d8ee3025b47ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8b00edadf6d3442d70a476d465733f303f7a48d8998c1a9c881ae515daa264e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ba38f4c30e4f09063926783a9fbf7883079d7f077200ee18a7a24c236b025f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed2e
660d6d37ec0abfdd8485a2b9cb79b041a214007a5b8832d70a21076c324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88ec893f18e2e817786cb4ea007632832cdc85b6701af08f2f8442f7aed15e9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88ec893f18e2e817786cb4ea007632832cdc85b6701af08f2f8442f7aed15e9c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:52:55Z\\\",\\\"message\\\":\\\"flector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 06:52:55.668343 6250 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1128 06:52:55.668412 6250 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 06:52:55.668492 6250 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1128 06:52:55.668499 6250 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1128 06:52:55.668509 6250 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1128 06:52:55.668525 6250 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 06:52:55.668553 6250 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 06:52:55.668563 6250 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 06:52:55.668567 6250 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1128 06:52:55.668584 6250 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1128 06:52:55.668602 6250 factory.go:656] Stopping watch factory\\\\nI1128 06:52:55.668619 6250 ovnkube.go:599] Stopped ovnkube\\\\nI1128 06:52:55.668647 6250 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1128 06:52:55.668655 6250 handler.go:208] Removed *v1.Node event handler 2\\\\nI1128 06:52:55.668662 6250 handler.go:208] Removed *v1.Node event handler 
7\\\\nI1128 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc681b78e3640f7f8dd8729a32ea393d97f837795e0cccf628724d2b3901739b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pkknv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:56Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.561759 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqf5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d160dd-b1b4-4cdf-800c-74195ab023e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bd1943c060c0e4d739d86520e0661bbd6ecef5614aaceb3e893a11204fdb39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\
\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:49Z\\\"}},
\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f336183b95d088fbf0d45cb8e6e6ab124453e9ba4a1456171f5631e9d029a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f336183b95d088fbf0d45cb8e6e6ab124453e9ba4a1456171f5631e9d029a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqf5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:56Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.572262 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.572286 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.572294 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.572308 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.572325 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:56Z","lastTransitionTime":"2025-11-28T06:52:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.576685 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:56Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.589870 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0514bebed46aabc46c74ef48fda9c1e561a174d87db40128875cc0ec78446d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ab7252b4526779db2e0aea8b7747369f52242a76df7e179c6617290fee4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:56Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.601446 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://078e67af49c0d442d125d9cb67d9d93ce05e452bf3b09f666e9763e481eaa15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:56Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.675538 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.675588 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.675619 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.675643 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.675659 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:56Z","lastTransitionTime":"2025-11-28T06:52:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.779352 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.779393 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.779401 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.779419 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.779428 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:56Z","lastTransitionTime":"2025-11-28T06:52:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.882519 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.882555 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.882563 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.882578 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.882587 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:56Z","lastTransitionTime":"2025-11-28T06:52:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.985050 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.985091 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.985099 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.985113 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:56 crc kubenswrapper[4946]: I1128 06:52:56.985122 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:56Z","lastTransitionTime":"2025-11-28T06:52:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.088398 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.088494 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.088519 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.088548 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.088568 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:57Z","lastTransitionTime":"2025-11-28T06:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.191453 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.191539 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.191562 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.191592 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.191614 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:57Z","lastTransitionTime":"2025-11-28T06:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.294909 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.295140 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.295295 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.295416 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.295538 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:57Z","lastTransitionTime":"2025-11-28T06:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.317790 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.317838 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.317851 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.317867 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.317879 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:57Z","lastTransitionTime":"2025-11-28T06:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:57 crc kubenswrapper[4946]: E1128 06:52:57.335348 4946 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c74aee61-da50-44cb-be0e-1b7c62358c2a\\\",\\\"systemUUID\\\":\\\"99c18e26-a017-426b-bf43-1a9a83f35f94\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:57Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.340591 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.340663 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
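
The webhook failure recorded above is a pure certificate-validity problem: the kubelet's node-status PATCH is intercepted by the node.network-node-identity.openshift.io validating webhook on https://127.0.0.1:9743, and Go's TLS client rejects the webhook's serving certificate because its NotAfter date (2025-08-24T17:21:41Z) is months before the node's current clock (2025-11-28T06:52:57Z). A minimal Go sketch of that validity check, assuming a hypothetical certificate path (the log does not say where the serving cert lives on disk):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	pemBytes, err := os.ReadFile("/tmp/webhook-serving.crt") // hypothetical path, not taken from the log
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	now := time.Now().UTC()
	// crypto/x509 makes this same comparison during verification; on failure
	// it reports "x509: certificate has expired or is not yet valid:
	// current time <now> is after <NotAfter>", the exact text in the log.
	switch {
	case now.After(cert.NotAfter):
		fmt.Printf("expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	case now.Before(cert.NotBefore):
		fmt.Printf("not yet valid before %s\n", cert.NotBefore.Format(time.RFC3339))
	default:
		fmt.Println("certificate is inside its validity window")
	}
}

Until that certificate is rotated (or the node clock corrected), every status PATCH routed through this webhook fails the same way, which is why the identical error body repeats throughout the records below.
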
event="NodeHasNoDiskPressure" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.340685 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.340710 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.340730 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:57Z","lastTransitionTime":"2025-11-28T06:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:57 crc kubenswrapper[4946]: E1128 06:52:57.356520 4946 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c74aee61-da50-44cb-be0e-1b7c62358c2a\\\",\\\"systemUUID\\\":\\\"99c18e26-a017-426b-bf43-1a9a83f35f94\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:57Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.361657 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.362560 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
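
The same multi-kilobyte patch body is logged again at 06:52:57.356 and 06:52:57.383 because the kubelet retries a failed node-status update a fixed number of times per sync loop before giving up; upstream kubelet has historically bounded this with a small constant, nodeStatusUpdateRetry = 5. A schematic Go sketch of that bounded retry, with tryPatchStatus standing in for the real API call (not the kubelet's actual code):

package main

import (
	"errors"
	"fmt"
)

// Small retry bound, mirroring kubelet's nodeStatusUpdateRetry.
const nodeStatusUpdateRetry = 5

// tryPatchStatus stands in for the PATCH to the API server; here it always
// fails, as it does while the webhook's serving certificate is expired.
func tryPatchStatus() error {
	return errors.New(`failed calling webhook "node.network-node-identity.openshift.io": certificate has expired`)
}

func main() {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := tryPatchStatus(); err != nil {
			// Each failed attempt produces one "Error updating node
			// status, will retry" record like those in the journal.
			fmt.Printf("Error updating node status, will retry: %v\n", err)
			continue
		}
		return
	}
	fmt.Println("Unable to update node status after retries")
}

Because the full node status (conditions plus the entire cached image list) is serialized into each attempt's error message, every retry re-emits the same image inventory verbatim.
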
event="NodeHasNoDiskPressure" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.362654 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.362760 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.362837 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:57Z","lastTransitionTime":"2025-11-28T06:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.366792 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pkknv_47e7046d-60dc-4dc0-b63e-f22f4ca5cd51/ovnkube-controller/0.log" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.369523 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" event={"ID":"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51","Type":"ContainerStarted","Data":"6df860d0b9182067aa468a67d34f001cfe4b3e2251c630ee9ceab34315956bce"} Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.369651 4946 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 06:52:57 crc kubenswrapper[4946]: E1128 06:52:57.383984 4946 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c74aee61-da50-44cb-be0e-1b7c62358c2a\\\",\\\"systemUUID\\\":\\\"99c18e26-a017-426b-bf43-1a9a83f35f94\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:57Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.389893 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tvtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a10fe13c-fc86-474a-945b-f96caafad2a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712d9a3cda3acae09eae5946cb14e19bee8697d567bff79dd67db8a41ed38fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45pfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:57Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.389955 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.390107 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.390118 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.390136 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.390150 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:57Z","lastTransitionTime":"2025-11-28T06:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.409216 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7450befc-262f-45d1-a5f4-f445e540185b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7854e764531b926939e7ce77e671b5dae4c0d8a4a1d8ac7153f851e924c4bb84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174d7788364a980297791c1072b9c05dd827bb3759a851ea4c5dd07981297b67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g2vhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:57Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:57 crc kubenswrapper[4946]: E1128 06:52:57.410288 4946 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
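
Behind every Ready=False condition in these records is the same runtime check: the container runtime reports NetworkReady=false until it finds a network configuration in its CNI conf directory, and ovn-kubernetes has not yet written one to /etc/kubernetes/cni/net.d/. A simplified Go sketch of that directory probe; this is a loose approximation of what the runtime's CNI plumbing does, not its exact implementation:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cniConfigPresent reports whether any plausible CNI network config file
// exists in confDir, roughly the condition behind NetworkReady.
func cniConfigPresent(confDir string) (bool, error) {
	entries, err := os.ReadDir(confDir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := cniConfigPresent("/etc/kubernetes/cni/net.d")
	if err != nil || !ok {
		fmt.Println("NetworkReady=false: no CNI configuration file in /etc/kubernetes/cni/net.d/")
		return
	}
	fmt.Println("NetworkReady=true")
}

The ContainerStarted event for ovnkube-node-pkknv earlier in the journal is the path out of this state: once the ovnkube-controller comes up and writes its CNI config, the probe above succeeds and the Ready condition flips.
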
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c74aee61-da50-44cb-be0e-1b7c62358c2a\\\",\\\"systemUUID\\\":\\\"99c18e26-a017-426b-bf43-1a9a83f35f94\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:57Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.415231 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.415370 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
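
The patch bodies quoted in these errors are strategic merge patches: the $setElementOrder/conditions directive pins the ordering of the conditions list while only the changed entries need to be sent. A reduced Go sketch of how such a body is assembled; the structs are simplified stand-ins for the real corev1 types, and only the Ready condition is shown:

package main

import (
	"encoding/json"
	"fmt"
)

// Simplified stand-ins for the API types; only fields that appear in the
// logged patch are kept.
type condRef struct {
	Type string `json:"type"`
}

type condition struct {
	Type    string `json:"type"`
	Status  string `json:"status"`
	Reason  string `json:"reason"`
	Message string `json:"message"`
}

func main() {
	patch := map[string]any{
		"status": map[string]any{
			// $setElementOrder pins list ordering in a strategic merge patch.
			"$setElementOrder/conditions": []condRef{
				{Type: "MemoryPressure"}, {Type: "DiskPressure"},
				{Type: "PIDPressure"}, {Type: "Ready"},
			},
			"conditions": []condition{{
				Type:    "Ready",
				Status:  "False",
				Reason:  "KubeletNotReady",
				Message: "container runtime network not ready: NetworkReady=false",
			}},
		},
	}
	body, err := json.Marshal(patch)
	if err != nil {
		panic(err)
	}
	// A body shaped like this goes out as a PATCH (content type
	// application/strategic-merge-patch+json) against the node's /status
	// subresource; that request is what the expired-cert webhook rejects.
	fmt.Println(string(body))
}
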
event="NodeHasNoDiskPressure" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.415453 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.415539 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.415613 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:57Z","lastTransitionTime":"2025-11-28T06:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.428948 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fb7be34-ed01-4217-aab4-c8ab35341ac6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af6de2bd2f3d9be0a670cb8c36e6db8c037c4987eff7356840af5b275156ad7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d384f5fd04cfe61e6a8d13538eb1b0267ca88eb0d226d8d85b5065fb5f7de9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-c
erts\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89780aac4572411aeeb8480b61c98dbc22a2e4b3e72290470ba56c81b648b617\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b6df8018ceaf95465c4ad1bfd0bb2653ae4b87430f6ed4e21af2bd14bbb9ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:57Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:57 crc kubenswrapper[4946]: E1128 06:52:57.437572 4946 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:57Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c74aee61-da50-44cb-be0e-1b7c62358c2a\\\",\\\"systemUUID\\\":\\\"9
9c18e26-a017-426b-bf43-1a9a83f35f94\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:57Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:57 crc kubenswrapper[4946]: E1128 06:52:57.437918 4946 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.439774 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.439876 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.440021 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.440203 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.440354 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:57Z","lastTransitionTime":"2025-11-28T06:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.450604 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:57Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.472900 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68d47c52-9827-4ebb-88a2-ba498092709e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eef1ad3ca814effd7a94c23d46fc3b14149ccc6ae6181e19c5f24884967b7627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7152b6653aaccd36bb018d24abf196dd6f13d41edddb0939917f96bcd9e9bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5be506bd8debbc12ff5ca56d77fdf667855d2c4078cea1b1d1388f5ef8831ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5dc7ef4ea06b54b6de621ebe79d3587324b0c4cdd90e078d4116dc5252ad35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4c7e2754f62b9d955da7589f04acb0136ed3c504a75b0db30f22cbeec54c413\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:52:44Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:52:38.666044 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:52:38.667906 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018003996/tls.crt::/tmp/serving-cert-1018003996/tls.key\\\\\\\"\\\\nI1128 06:52:44.041872 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:52:44.048251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:52:44.048273 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:52:44.048300 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:52:44.048305 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:52:44.064176 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 06:52:44.064208 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064213 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:52:44.064220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:52:44.064223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:52:44.064226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 06:52:44.064671 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 06:52:44.066682 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326666c58a9c75d0f08586c82bca1a0686ed2935f475b3c359cddc01875b248e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:57Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.490761 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:57Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.508024 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j4wp8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"edc5a786-a162-4416-a240-54272b0c7376\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a6e397c1f3423abc4eeb0a20453109c22f305c1b259dddbe23bd71a78b5512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87x27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j4wp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:57Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.528845 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16482226c2bd22abff4c445ab0d5e46c2012a83ab5e616ebecc38d9009a8a6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:57Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.542988 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.543192 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.543395 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.543566 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.543831 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:57Z","lastTransitionTime":"2025-11-28T06:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.550064 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9g9w4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"857356d2-6585-41c6-9a2c-e06ef45f7303\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9d433d72f090d2c1d0036b7ba652dfca7bed7425a35bf87809ede7fd9fca8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9g9w4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:57Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.581383 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35308a0d1d457d423060c71fb70b7e8d1723171f575b2545db3b372c4d2a12ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38e04dce69273a5669801d093325e51ed6d3a83d545aaf505f6619b89cb9e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9128b2484a66b1b0f5954befe68d1bd1be5a4edbf254d76473d8ee3025b47ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8b00edadf6d3442d70a476d465733f303f7a48d8998c1a9c881ae515daa264e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ba38f4c30e4f09063926783a9fbf7883079d7f077200ee18a7a24c236b025f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed2e660d6d37ec0abfdd8485a2b9cb79b041a214007a5b8832d70a21076c324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6df860d0b9182067aa468a67d34f001cfe4b3e2251c630ee9ceab34315956bce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88ec893f18e2e817786cb4ea007632832cdc85b6701af08f2f8442f7aed15e9c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:52:55Z\\\",\\\"message\\\":\\\"flector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 06:52:55.668343 6250 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1128 06:52:55.668412 6250 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 06:52:55.668492 6250 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1128 06:52:55.668499 6250 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1128 06:52:55.668509 6250 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1128 06:52:55.668525 6250 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 06:52:55.668553 6250 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 06:52:55.668563 6250 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 06:52:55.668567 6250 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1128 06:52:55.668584 6250 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1128 06:52:55.668602 6250 factory.go:656] Stopping watch factory\\\\nI1128 06:52:55.668619 6250 ovnkube.go:599] Stopped ovnkube\\\\nI1128 06:52:55.668647 6250 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1128 06:52:55.668655 6250 handler.go:208] Removed *v1.Node event handler 
2\\\\nI1128 06:52:55.668662 6250 handler.go:208] Removed *v1.Node event handler 7\\\\nI1128 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc681b78e3640f7f8dd8729a32ea393d97f837795e0cccf628724d2b3901739b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\
\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pkknv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:57Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.602559 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:57Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.633144 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0514bebed46aabc46c74ef48fda9c1e561a174d87db40128875cc0ec78446d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ab7252b4526779db2e0aea8b7747369f52242a76df7e179c6617290fee4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:57Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.647196 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://078e67af49c0d442d125d9cb67d9d93ce05e452bf3b09f666e9763e481eaa15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:57Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.651012 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.651256 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.651282 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.651312 4946 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.651333 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:57Z","lastTransitionTime":"2025-11-28T06:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.667101 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqf5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d160dd-b1b4-4cdf-800c-74195ab023e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bd1943c060c0e4d739d86520e0661bbd6ecef5614aaceb3e893a11204fdb39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed8145
1ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f336183b95d088fbf0d45cb8e6e6ab124453e9ba4a1456171f5631e9d029a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f336183b95d088fbf0d45cb8e6e6ab124453e9ba4a1456171f5631e9d029a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTim
e\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqf5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:57Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.754638 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.754939 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.755067 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.755193 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.755311 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:57Z","lastTransitionTime":"2025-11-28T06:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.859191 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.859272 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.859299 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.859333 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.859358 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:57Z","lastTransitionTime":"2025-11-28T06:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.963176 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.963243 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.963258 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.963276 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.963292 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:57Z","lastTransitionTime":"2025-11-28T06:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.989542 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.989606 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:52:57 crc kubenswrapper[4946]: I1128 06:52:57.989925 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:52:57 crc kubenswrapper[4946]: E1128 06:52:57.989970 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:52:57 crc kubenswrapper[4946]: E1128 06:52:57.990182 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:52:57 crc kubenswrapper[4946]: E1128 06:52:57.989812 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.066864 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.066926 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.066951 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.066984 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.067011 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:58Z","lastTransitionTime":"2025-11-28T06:52:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.171030 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.171114 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.171138 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.171170 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.171194 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:58Z","lastTransitionTime":"2025-11-28T06:52:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.274878 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.274942 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.274959 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.274987 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.275010 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:58Z","lastTransitionTime":"2025-11-28T06:52:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.376621 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pkknv_47e7046d-60dc-4dc0-b63e-f22f4ca5cd51/ovnkube-controller/1.log" Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.378398 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pkknv_47e7046d-60dc-4dc0-b63e-f22f4ca5cd51/ovnkube-controller/0.log" Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.378500 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.378558 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.378579 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.378607 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.378626 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:58Z","lastTransitionTime":"2025-11-28T06:52:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.385139 4946 generic.go:334] "Generic (PLEG): container finished" podID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" containerID="6df860d0b9182067aa468a67d34f001cfe4b3e2251c630ee9ceab34315956bce" exitCode=1 Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.385187 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" event={"ID":"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51","Type":"ContainerDied","Data":"6df860d0b9182067aa468a67d34f001cfe4b3e2251c630ee9ceab34315956bce"} Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.385252 4946 scope.go:117] "RemoveContainer" containerID="88ec893f18e2e817786cb4ea007632832cdc85b6701af08f2f8442f7aed15e9c" Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.385953 4946 scope.go:117] "RemoveContainer" containerID="6df860d0b9182067aa468a67d34f001cfe4b3e2251c630ee9ceab34315956bce" Nov 28 06:52:58 crc kubenswrapper[4946]: E1128 06:52:58.386100 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-pkknv_openshift-ovn-kubernetes(47e7046d-60dc-4dc0-b63e-f22f4ca5cd51)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" podUID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.411097 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68d47c52-9827-4ebb-88a2-ba498092709e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eef1ad3ca814effd7a94c23d46fc3b14149ccc6ae6181e19c5f24884967b7627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7152b6653aaccd36bb018d24abf196dd6f13d41edddb0939917f96bcd9e9bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5be506bd8debbc12ff5ca56d77fdf667855d2c4078cea1b1d1388f5ef8831ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5dc7ef4ea06b54b6de621ebe79d3587324b0c4cdd90e078d4116dc5252ad35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4c7e2754f62b9d955da7589f04acb0136ed3c504a75b0db30f22cbeec54c413\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:52:44Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:52:38.666044 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:52:38.667906 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018003996/tls.crt::/tmp/serving-cert-1018003996/tls.key\\\\\\\"\\\\nI1128 06:52:44.041872 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:52:44.048251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:52:44.048273 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:52:44.048300 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:52:44.048305 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:52:44.064176 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 06:52:44.064208 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064213 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:52:44.064220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:52:44.064223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:52:44.064226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 06:52:44.064671 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 06:52:44.066682 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326666c58a9c75d0f08586c82bca1a0686ed2935f475b3c359cddc01875b248e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:58Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.433270 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:58Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.450067 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j4wp8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"edc5a786-a162-4416-a240-54272b0c7376\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a6e397c1f3423abc4eeb0a20453109c22f305c1b259dddbe23bd71a78b5512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87x27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j4wp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:58Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.470783 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16482226c2bd22abff4c445ab0d5e46c2012a83ab5e616ebecc38d9009a8a6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:58Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.481998 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.482076 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.482101 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.482137 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.482164 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:58Z","lastTransitionTime":"2025-11-28T06:52:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.495501 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9g9w4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"857356d2-6585-41c6-9a2c-e06ef45f7303\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9d433d72f090d2c1d0036b7ba652dfca7bed7425a35bf87809ede7fd9fca8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9g9w4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:58Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.535125 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35308a0d1d457d423060c71fb70b7e8d1723171f575b2545db3b372c4d2a12ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38e04dce69273a5669801d093325e51ed6d3a83d545aaf505f6619b89cb9e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9128b2484a66b1b0f5954befe68d1bd1be5a4edbf254d76473d8ee3025b47ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8b00edadf6d3442d70a476d465733f303f7a48d8998c1a9c881ae515daa264e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ba38f4c30e4f09063926783a9fbf7883079d7f077200ee18a7a24c236b025f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed2e660d6d37ec0abfdd8485a2b9cb79b041a214007a5b8832d70a21076c324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6df860d0b9182067aa468a67d34f001cfe4b3e2251c630ee9ceab34315956bce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88ec893f18e2e817786cb4ea007632832cdc85b6701af08f2f8442f7aed15e9c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:52:55Z\\\",\\\"message\\\":\\\"flector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 06:52:55.668343 6250 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1128 06:52:55.668412 6250 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 06:52:55.668492 6250 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1128 06:52:55.668499 6250 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1128 06:52:55.668509 6250 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1128 06:52:55.668525 6250 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 06:52:55.668553 6250 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 06:52:55.668563 6250 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 06:52:55.668567 6250 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1128 06:52:55.668584 6250 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1128 06:52:55.668602 6250 factory.go:656] Stopping watch factory\\\\nI1128 06:52:55.668619 6250 ovnkube.go:599] Stopped ovnkube\\\\nI1128 06:52:55.668647 6250 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1128 06:52:55.668655 6250 handler.go:208] Removed *v1.Node event handler 
2\\\\nI1128 06:52:55.668662 6250 handler.go:208] Removed *v1.Node event handler 7\\\\nI1128 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6df860d0b9182067aa468a67d34f001cfe4b3e2251c630ee9ceab34315956bce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:52:58Z\\\",\\\"message\\\":\\\"eflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 06:52:57.223889 6381 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1128 06:52:57.223909 6381 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1128 06:52:57.223935 6381 factory.go:656] Stopping watch factory\\\\nI1128 06:52:57.223946 6381 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1128 06:52:57.223942 6381 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1128 06:52:57.223954 6381 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1128 06:52:57.222497 6381 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 06:52:57.223994 6381 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 06:52:57.224897 6381 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1128 06:52:57.225235 6381 ovnkube.go:599] Stopped ovnkube\\\\nI1128 06:52:57.225348 6381 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1128 
06:52:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc681b78e3640f7f8dd8729a32ea393d97f837795e0cccf628724d2b3901739b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pkknv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:58Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.558525 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:58Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.579824 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0514bebed46aabc46c74ef48fda9c1e561a174d87db40128875cc0ec78446d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ab7252b4526779db2e0aea8b7747369f52242a76df7e179c6617290fee4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:58Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.586643 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.586699 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.586716 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.586741 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.586759 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:58Z","lastTransitionTime":"2025-11-28T06:52:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.601161 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://078e67af49c0d442d125d9cb67d9d93ce05e452bf3b09f666e9763e481eaa15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:58Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.624369 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqf5z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d160dd-b1b4-4cdf-800c-74195ab023e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bd1943c060c0e4d739d86520e0661bbd6ecef5614aaceb3e893a11204fdb39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f336183b95d088fbf0d45cb8e6e6ab124453e9ba4a1456171f5631e9d029a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f336183b95d088fbf0d45cb8e6e6ab124453e9ba4a1456171f5631e9d029a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqf5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:58Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.642954 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tvtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a10fe13c-fc86-474a-945b-f96caafad2a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712d9a3cda3acae09eae5946cb14e19bee8697d567bff79dd67db8a41ed38fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45pfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:58Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.662299 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7450befc-262f-45d1-a5f4-f445e540185b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7854e764531b926939e7ce77e671b5dae4c0d8a4a1d8ac7153f851e924c4bb84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174d7788364a980297791c1072b9c05dd827bb3759a851ea4c5dd07981297b67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g2vhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:58Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.676766 4946 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fb7be34-ed01-4217-aab4-c8ab35341ac6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af6de2bd2f3d9be0a670cb8c36e6db8c037c4987eff7356840af5b275156ad7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d384f5fd04cfe61e6a8d13538eb1b0267ca88eb0d226d8d85b5065fb5f7de9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89780aac4572411aeeb8480b61c98dbc22a2e4b3e72290470ba56c81b648b617\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b6df8018ceaf95465c4ad1bfd
0bb2653ae4b87430f6ed4e21af2bd14bbb9ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:58Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.690113 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.690205 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.690265 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.690290 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.690355 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:58Z","lastTransitionTime":"2025-11-28T06:52:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.697238 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:58Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.794759 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.795126 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.795258 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.795387 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.795558 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:58Z","lastTransitionTime":"2025-11-28T06:52:58Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.898691 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.898759 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.898782 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.898810 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:58 crc kubenswrapper[4946]: I1128 06:52:58.898831 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:58Z","lastTransitionTime":"2025-11-28T06:52:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.002447 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.002544 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.002557 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.002576 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.002588 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:59Z","lastTransitionTime":"2025-11-28T06:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.105525 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.105576 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.105590 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.105626 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.105649 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:59Z","lastTransitionTime":"2025-11-28T06:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.208404 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.208458 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.208512 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.208543 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.208566 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:59Z","lastTransitionTime":"2025-11-28T06:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.311653 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.311741 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.311760 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.311784 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.311801 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:59Z","lastTransitionTime":"2025-11-28T06:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.378134 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.395834 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pkknv_47e7046d-60dc-4dc0-b63e-f22f4ca5cd51/ovnkube-controller/1.log" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.400017 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:59Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.401204 4946 scope.go:117] "RemoveContainer" containerID="6df860d0b9182067aa468a67d34f001cfe4b3e2251c630ee9ceab34315956bce" Nov 28 06:52:59 crc kubenswrapper[4946]: E1128 06:52:59.401579 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-pkknv_openshift-ovn-kubernetes(47e7046d-60dc-4dc0-b63e-f22f4ca5cd51)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" podUID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.414955 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.415007 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.415026 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.415052 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.415071 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:59Z","lastTransitionTime":"2025-11-28T06:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.419517 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0514bebed46aabc46c74ef48fda9c1e561a174d87db40128875cc0ec78446d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ab7252b4526779db2e0aea8b7747369f52242a76df7e179c6617290fee4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:59Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.436598 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://078e67af49c0d442d125d9cb67d9d93ce05e452bf3b09f666e9763e481eaa15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:59Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.463351 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqf5z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d160dd-b1b4-4cdf-800c-74195ab023e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bd1943c060c0e4d739d86520e0661bbd6ecef5614aaceb3e893a11204fdb39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f336183b95d088fbf0d45cb8e6e6ab124453e9ba4a1456171f5631e9d029a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f336183b95d088fbf0d45cb8e6e6ab124453e9ba4a1456171f5631e9d029a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqf5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:59Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.477828 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fb7be34-ed01-4217-aab4-c8ab35341ac6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af6de2bd2f3d9be0a670cb8c36e6db8c037c4987eff7356840af5b275156ad7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d384f5fd04cfe61e6a8d13538eb1b0267ca88eb0d226d8d85b5065fb5f7de9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89780aac4572411aeeb8480b61c98dbc22a2e4b3e72290470ba56c81b648b617\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b6df8018ceaf95465c4ad1bfd0bb2653ae4b87430f6ed4e21af2bd14bbb9ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:59Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.497964 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:59Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.514820 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a10fe13c-fc86-474a-945b-f96caafad2a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712d9a3cda3acae09eae5946cb14e19bee8697d567bff79dd67db8a41ed38fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45pfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:59Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.517780 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.517947 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.518034 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.518166 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.518256 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:59Z","lastTransitionTime":"2025-11-28T06:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.536052 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7450befc-262f-45d1-a5f4-f445e540185b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7854e764531b926939e7ce77e671b5dae4c0d8a4a1d8ac7153f851e924c4bb84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri
-o://174d7788364a980297791c1072b9c05dd827bb3759a851ea4c5dd07981297b67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g2vhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:59Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.562654 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68d47c52-9827-4ebb-88a2-ba498092709e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eef1ad3ca814effd7a94c23d46fc3b14149ccc6ae6181e19c5f24884967b7627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7152b6653aaccd36bb018d24abf196dd6f13d41edddb0939917f
96bcd9e9bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5be506bd8debbc12ff5ca56d77fdf667855d2c4078cea1b1d1388f5ef8831ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5dc7ef4ea06b54b6de621ebe79d3587324b0c4cdd90e078d4116dc5252ad35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4c7e2754f62b9d955da7589f04acb0136ed3c504a75b0db30f22cbeec54c413\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:52:44Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:52:38.666044 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:52:38.667906 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018003996/tls.crt::/tmp/serving-cert-1018003996/tls.key\\\\\\\"\\\\nI1128 06:52:44.041872 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:52:44.048251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:52:44.048273 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:52:44.048300 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:52:44.048305 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:52:44.064176 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 06:52:44.064208 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064213 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:52:44.064220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:52:44.064223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:52:44.064226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 06:52:44.064671 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 06:52:44.066682 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326666c58a9c75d0f08586c82bca1a0686ed2935f475b3c359cddc01875b248e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:59Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.583096 4946 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:59Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.603124 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j4wp8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"edc5a786-a162-4416-a240-54272b0c7376\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a6e397c1f3423abc4eeb0a20453109c22f305c1b259dddbe23bd71a78b5512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87x27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j4wp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:59Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.620555 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16482226c2bd22abff4c445ab0d5e46c2012a83ab5e616ebecc38d9009a8a6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:59Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.621618 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.621653 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.621663 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.621680 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.621692 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:59Z","lastTransitionTime":"2025-11-28T06:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.638674 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9g9w4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"857356d2-6585-41c6-9a2c-e06ef45f7303\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9d433d72f090d2c1d0036b7ba652dfca7bed7425a35bf87809ede7fd9fca8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9g9w4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:59Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.670292 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35308a0d1d457d423060c71fb70b7e8d1723171f575b2545db3b372c4d2a12ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38e04dce69273a5669801d093325e51ed6d3a83d545aaf505f6619b89cb9e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9128b2484a66b1b0f5954befe68d1bd1be5a4edbf254d76473d8ee3025b47ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8b00edadf6d3442d70a476d465733f303f7a48d8998c1a9c881ae515daa264e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ba38f4c30e4f09063926783a9fbf7883079d7f077200ee18a7a24c236b025f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed2e660d6d37ec0abfdd8485a2b9cb79b041a214007a5b8832d70a21076c324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6df860d0b9182067aa468a67d34f001cfe4b3e2251c630ee9ceab34315956bce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88ec893f18e2e817786cb4ea007632832cdc85b6701af08f2f8442f7aed15e9c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:52:55Z\\\",\\\"message\\\":\\\"flector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 06:52:55.668343 6250 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1128 06:52:55.668412 6250 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 06:52:55.668492 6250 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1128 06:52:55.668499 6250 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1128 06:52:55.668509 6250 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1128 06:52:55.668525 6250 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 06:52:55.668553 6250 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 06:52:55.668563 6250 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 06:52:55.668567 6250 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1128 06:52:55.668584 6250 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1128 06:52:55.668602 6250 factory.go:656] Stopping watch factory\\\\nI1128 06:52:55.668619 6250 ovnkube.go:599] Stopped ovnkube\\\\nI1128 06:52:55.668647 6250 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1128 06:52:55.668655 6250 handler.go:208] Removed *v1.Node event handler 
2\\\\nI1128 06:52:55.668662 6250 handler.go:208] Removed *v1.Node event handler 7\\\\nI1128 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6df860d0b9182067aa468a67d34f001cfe4b3e2251c630ee9ceab34315956bce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:52:58Z\\\",\\\"message\\\":\\\"eflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 06:52:57.223889 6381 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1128 06:52:57.223909 6381 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1128 06:52:57.223935 6381 factory.go:656] Stopping watch factory\\\\nI1128 06:52:57.223946 6381 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1128 06:52:57.223942 6381 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1128 06:52:57.223954 6381 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1128 06:52:57.222497 6381 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 06:52:57.223994 6381 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 06:52:57.224897 6381 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1128 06:52:57.225235 6381 ovnkube.go:599] Stopped ovnkube\\\\nI1128 06:52:57.225348 6381 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1128 
06:52:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc681b78e3640f7f8dd8729a32ea393d97f837795e0cccf628724d2b3901739b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pkknv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:59Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.691683 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:59Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.705247 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j4wp8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edc5a786-a162-4416-a240-54272b0c7376\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a6e397c1f3423abc4eeb0a20453109c22f305c1b259dddbe23bd71a78b5512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87x27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j4wp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:59Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.724261 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.724296 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.724306 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.724321 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.724332 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:59Z","lastTransitionTime":"2025-11-28T06:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.727198 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68d47c52-9827-4ebb-88a2-ba498092709e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eef1ad3ca814effd7a94c23d46fc3b14149ccc6ae6181e19c5f24884967b7627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7152b6653aaccd36bb018d24abf196dd6f13d41edddb0939917f96bcd9e9bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\
\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5be506bd8debbc12ff5ca56d77fdf667855d2c4078cea1b1d1388f5ef8831ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5dc7ef4ea06b54b6de621ebe79d3587324b0c4cdd90e078d4116dc5252ad35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4c7e2754f62b9d955da7589f04acb0136ed3c504a75b0db30f22cbeec54c413\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:52:44Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:52:38.666044 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:52:38.667906 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018003996/tls.crt::/tmp/serving-cert-1018003996/tls.key\\\\\\\"\\\\nI1128 06:52:44.041872 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:52:44.048251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:52:44.048273 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:52:44.048300 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:52:44.048305 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:52:44.064176 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 06:52:44.064208 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064213 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 
06:52:44.064220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:52:44.064223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:52:44.064226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 06:52:44.064671 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 06:52:44.066682 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326666c58a9c75d0f08586c82bca1a0686ed2935f475b3c359cddc01875b248e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:59Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.747246 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9g9w4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"857356d2-6585-41c6-9a2c-e06ef45f7303\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9d433d72f090d2c1d0036b7ba652dfca7bed7425a35bf87809ede7fd9fca8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9g9w4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:59Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.758986 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:52:59 crc kubenswrapper[4946]: E1128 06:52:59.759231 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:53:15.759155448 +0000 UTC m=+50.137220589 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.759346 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.759401 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.759507 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.759540 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:52:59 crc kubenswrapper[4946]: E1128 06:52:59.759662 4946 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 06:52:59 crc kubenswrapper[4946]: E1128 
06:52:59.759674 4946 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 06:52:59 crc kubenswrapper[4946]: E1128 06:52:59.759704 4946 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 06:52:59 crc kubenswrapper[4946]: E1128 06:52:59.759716 4946 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 06:52:59 crc kubenswrapper[4946]: E1128 06:52:59.759732 4946 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 06:52:59 crc kubenswrapper[4946]: E1128 06:52:59.759747 4946 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:52:59 crc kubenswrapper[4946]: E1128 06:52:59.759753 4946 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:52:59 crc kubenswrapper[4946]: E1128 06:52:59.759734 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 06:53:15.759715672 +0000 UTC m=+50.137780783 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 06:52:59 crc kubenswrapper[4946]: E1128 06:52:59.759820 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 06:53:15.759801385 +0000 UTC m=+50.137866526 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:52:59 crc kubenswrapper[4946]: E1128 06:52:59.759845 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2025-11-28 06:53:15.759834285 +0000 UTC m=+50.137899426 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:52:59 crc kubenswrapper[4946]: E1128 06:52:59.759878 4946 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 06:52:59 crc kubenswrapper[4946]: E1128 06:52:59.759937 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 06:53:15.759920748 +0000 UTC m=+50.137985889 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.771515 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35308a0d1d457d423060c71fb70b7e8d1723171f575b2545db3b372c4d2a12ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38e04dce69273a5669801d093325e51ed6d3a83d545aaf505f6619b89cb9e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9128b2484a66b1b0f5954befe68d1bd1be5a4edbf254d76473d8ee3025b47ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8b00edadf6d3442d70a476d465733f303f7a48d8998c1a9c881ae515daa264e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ba38f4c30e4f09063926783a9fbf7883079d7f077200ee18a7a24c236b025f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed2e660d6d37ec0abfdd8485a2b9cb79b041a214007a5b8832d70a21076c324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6df860d0b9182067aa468a67d34f001cfe4b3e22
51c630ee9ceab34315956bce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6df860d0b9182067aa468a67d34f001cfe4b3e2251c630ee9ceab34315956bce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:52:58Z\\\",\\\"message\\\":\\\"eflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 06:52:57.223889 6381 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1128 06:52:57.223909 6381 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1128 06:52:57.223935 6381 factory.go:656] Stopping watch factory\\\\nI1128 06:52:57.223946 6381 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1128 06:52:57.223942 6381 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1128 06:52:57.223954 6381 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1128 06:52:57.222497 6381 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 06:52:57.223994 6381 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 06:52:57.224897 6381 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1128 06:52:57.225235 6381 ovnkube.go:599] Stopped ovnkube\\\\nI1128 06:52:57.225348 6381 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1128 06:52:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pkknv_openshift-ovn-kubernetes(47e7046d-60dc-4dc0-b63e-f22f4ca5cd51)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc681b78e3640f7f8dd8729a32ea393d97f837795e0cccf628724d2b3901739b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pkknv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:59Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.787894 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16482226c2bd22abff4c445ab0d5e46c2012a83ab5e616ebecc38d9009a8a6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:59Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.803367 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0514bebed46aabc46c74ef48fda9c1e561a174d87db40128875cc0ec78446d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ab7252b4526779db2e0aea8b7747369f52242a76df7e179c6617290fee4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:59Z is after 
2025-08-24T17:21:41Z" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.817964 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://078e67af49c0d442d125d9cb67d9d93ce05e452bf3b09f666e9763e481eaa15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:59Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.827008 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.827050 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.827059 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.827074 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.827085 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:59Z","lastTransitionTime":"2025-11-28T06:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.838842 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqf5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d160dd-b1b4-4cdf-800c-74195ab023e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bd1943c060c0e4d739d86520e0661bbd6ecef5614aaceb3e893a11204fdb39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f336183b95d088fbf0d45cb8e6e6ab124453e9ba4a1456171f5631e9d029a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f336183b95d088fbf0d45cb8e6e6ab124453e9ba4a1456171f5631e9d029a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqf5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:59Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.855360 4946 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:59Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.875940 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fb7be34-ed01-4217-aab4-c8ab35341ac6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af6de2bd2f3d9be0a670cb8c36e6db8c037c4987eff7356840af5b275156ad7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d384f5fd04cfe61e6a8d13538eb1b0267ca88eb0d226d8d85b5065fb5f7de9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89780aac4572411aeeb8480b61c98dbc22a2e4b3e72290470ba56c81b648b617\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b6df8018ceaf95465c4ad1bfd0bb2653ae4b87430f6ed4e21af2bd14bbb9ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:59Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.891564 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:59Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.903805 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a10fe13c-fc86-474a-945b-f96caafad2a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712d9a3cda3acae09eae5946cb14e19bee8697d567bff79dd67db8a41ed38fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45pfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:59Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.917740 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7450befc-262f-45d1-a5f4-f445e540185b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7854e764531b926939e7ce77e671b5dae4c0d8a4a1d8ac7153f851e924c4bb84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174d7788364a980297791c1072b9c05dd827bb3759a851ea4c5dd07981297b67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g2vhr\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:52:59Z is after 2025-08-24T17:21:41Z" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.930447 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.930527 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.930546 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.930572 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.930592 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:52:59Z","lastTransitionTime":"2025-11-28T06:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.989067 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.989135 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:52:59 crc kubenswrapper[4946]: E1128 06:52:59.989251 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:52:59 crc kubenswrapper[4946]: I1128 06:52:59.989403 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:52:59 crc kubenswrapper[4946]: E1128 06:52:59.989623 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:52:59 crc kubenswrapper[4946]: E1128 06:52:59.989780 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.033702 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.033775 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.033798 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.033829 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.033853 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:00Z","lastTransitionTime":"2025-11-28T06:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.083740 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpkwv"] Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.084506 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpkwv" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.092688 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.099288 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.134684 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j4wp8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"edc5a786-a162-4416-a240-54272b0c7376\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a6e397c1f3423abc4eeb0a20453109c22f305c1b259dddbe23bd71a78b5512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87x27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j4wp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:00Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.136774 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.136802 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.136812 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.136827 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.136839 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:00Z","lastTransitionTime":"2025-11-28T06:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.152711 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpkwv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6df591b-287c-45d9-9db2-f3c441005fdd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm2g7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm2g7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:53:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zpkwv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:00Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.165843 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f6df591b-287c-45d9-9db2-f3c441005fdd-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-zpkwv\" (UID: \"f6df591b-287c-45d9-9db2-f3c441005fdd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpkwv" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.166017 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f6df591b-287c-45d9-9db2-f3c441005fdd-env-overrides\") pod \"ovnkube-control-plane-749d76644c-zpkwv\" (UID: \"f6df591b-287c-45d9-9db2-f3c441005fdd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpkwv" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.166098 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f6df591b-287c-45d9-9db2-f3c441005fdd-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-zpkwv\" (UID: \"f6df591b-287c-45d9-9db2-f3c441005fdd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpkwv" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.166189 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm2g7\" (UniqueName: \"kubernetes.io/projected/f6df591b-287c-45d9-9db2-f3c441005fdd-kube-api-access-zm2g7\") pod \"ovnkube-control-plane-749d76644c-zpkwv\" (UID: \"f6df591b-287c-45d9-9db2-f3c441005fdd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpkwv" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.170623 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68d47c52-9827-4ebb-88a2-ba498092709e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eef1ad3ca814effd7a94c23d46fc3b14149ccc6ae6181e19c5f24884967b7627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7152b6653aaccd36bb018d24abf196dd6f13d41edddb0939917f96bcd9e9bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5be506bd8debbc12ff5ca56d77fdf667855d2c4078cea1b1d1388f5ef8831ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5dc7ef4ea06b54b6de621ebe79d3587324b0c4cdd90e078d4116dc5252ad35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4c7e2754f62b9d955da7589f04acb0136ed3c504a75b0db30f22cbeec54c413\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:52:44Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:52:38.666044 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:52:38.667906 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018003996/tls.crt::/tmp/serving-cert-1018003996/tls.key\\\\\\\"\\\\nI1128 06:52:44.041872 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:52:44.048251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:52:44.048273 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:52:44.048300 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:52:44.048305 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:52:44.064176 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 06:52:44.064208 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064213 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:52:44.064220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:52:44.064223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:52:44.064226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 06:52:44.064671 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 06:52:44.066682 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326666c58a9c75d0f08586c82bca1a0686ed2935f475b3c359cddc01875b248e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:00Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.184972 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:00Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.200263 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16482226c2bd22abff4c445ab0d5e46c2012a83ab5e616ebecc38d9009a8a6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:00Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.216227 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9g9w4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"857356d2-6585-41c6-9a2c-e06ef45f7303\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9d433d72f090d2c1d0036b7ba652dfca7bed7425a35bf87809ede7fd9fca8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9g9w4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:00Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.239800 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.239851 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.239863 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.239883 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.239898 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:00Z","lastTransitionTime":"2025-11-28T06:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.248169 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35308a0d1d457d423060c71fb70b7e8d1723171f575b2545db3b372c4d2a12ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38e04dce69273a5669801d093325e51ed6d3a83d545aaf505f6619b89cb9e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9128b2484a66b1b0f5954befe68d1bd1be5a4edbf254d76473d8ee3025b47ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8b00edadf6d3442d70a476d465733f303f7a48d8998c1a9c881ae515daa264e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ba38f4c30e4f09063926783a9fbf7883079d7f077200ee18a7a24c236b025f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed2e660d6d37ec0abfdd8485a2b9cb79b041a214007a5b8832d70a21076c324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6df860d0b9182067aa468a67d34f001cfe4b3e22
51c630ee9ceab34315956bce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6df860d0b9182067aa468a67d34f001cfe4b3e2251c630ee9ceab34315956bce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:52:58Z\\\",\\\"message\\\":\\\"eflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 06:52:57.223889 6381 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1128 06:52:57.223909 6381 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1128 06:52:57.223935 6381 factory.go:656] Stopping watch factory\\\\nI1128 06:52:57.223946 6381 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1128 06:52:57.223942 6381 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1128 06:52:57.223954 6381 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1128 06:52:57.222497 6381 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 06:52:57.223994 6381 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 06:52:57.224897 6381 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1128 06:52:57.225235 6381 ovnkube.go:599] Stopped ovnkube\\\\nI1128 06:52:57.225348 6381 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1128 06:52:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pkknv_openshift-ovn-kubernetes(47e7046d-60dc-4dc0-b63e-f22f4ca5cd51)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc681b78e3640f7f8dd8729a32ea393d97f837795e0cccf628724d2b3901739b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pkknv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:00Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.265668 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqf5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d160dd-b1b4-4cdf-800c-74195ab023e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bd1943c060c0e4d739d86520e0661bbd6ecef5614aaceb3e893a11204fdb39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2025-11-28T06:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f336183b95d088fbf0d45cb8e6e6ab124453e9ba4a1456171f5631e9d029a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f336183b95d088fbf0d45cb8e6e6ab124453e9ba4a1456171f5631e9d029a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqf5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:00Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.266817 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f6df591b-287c-45d9-9db2-f3c441005fdd-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-zpkwv\" (UID: \"f6df591b-287c-45d9-9db2-f3c441005fdd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpkwv" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.266870 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f6df591b-287c-45d9-9db2-f3c441005fdd-env-overrides\") pod \"ovnkube-control-plane-749d76644c-zpkwv\" (UID: \"f6df591b-287c-45d9-9db2-f3c441005fdd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpkwv" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.266896 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f6df591b-287c-45d9-9db2-f3c441005fdd-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-zpkwv\" (UID: \"f6df591b-287c-45d9-9db2-f3c441005fdd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpkwv" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.266925 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm2g7\" (UniqueName: \"kubernetes.io/projected/f6df591b-287c-45d9-9db2-f3c441005fdd-kube-api-access-zm2g7\") pod \"ovnkube-control-plane-749d76644c-zpkwv\" (UID: \"f6df591b-287c-45d9-9db2-f3c441005fdd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpkwv" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.267803 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f6df591b-287c-45d9-9db2-f3c441005fdd-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-zpkwv\" (UID: \"f6df591b-287c-45d9-9db2-f3c441005fdd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpkwv" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.268026 4946 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f6df591b-287c-45d9-9db2-f3c441005fdd-env-overrides\") pod \"ovnkube-control-plane-749d76644c-zpkwv\" (UID: \"f6df591b-287c-45d9-9db2-f3c441005fdd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpkwv" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.276278 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f6df591b-287c-45d9-9db2-f3c441005fdd-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-zpkwv\" (UID: \"f6df591b-287c-45d9-9db2-f3c441005fdd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpkwv" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.284848 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:00Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.294932 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm2g7\" (UniqueName: \"kubernetes.io/projected/f6df591b-287c-45d9-9db2-f3c441005fdd-kube-api-access-zm2g7\") pod \"ovnkube-control-plane-749d76644c-zpkwv\" (UID: \"f6df591b-287c-45d9-9db2-f3c441005fdd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpkwv" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.304676 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0514bebed46aabc46c74ef48fda9c1e561a174d87db40128875cc0ec78446d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ab7252b4526779db2e0aea8b7747369f52242a76df7e179c6617290fee4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:00Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.319976 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://078e67af49c0d442d125d9cb67d9d93ce05e452bf3b09f666e9763e481eaa15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:00Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.338754 4946 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:00Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.342960 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.342992 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.343003 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.343021 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.343034 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:00Z","lastTransitionTime":"2025-11-28T06:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.353182 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a10fe13c-fc86-474a-945b-f96caafad2a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712d9a3cda3acae09eae5946cb14e19bee8697d567bff79dd67db8a41ed38fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45pfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:00Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.366981 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7450befc-262f-45d1-a5f4-f445e540185b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7854e764531b926939e7ce77e671b5dae4c0d8a4a1d8ac7153f851e924c4bb84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174d7788364a980297791c1072b9c05dd827bb3759a851ea4c5dd07981297b67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g2vhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:00Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.385544 4946 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fb7be34-ed01-4217-aab4-c8ab35341ac6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af6de2bd2f3d9be0a670cb8c36e6db8c037c4987eff7356840af5b275156ad7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d384f5fd04cfe61e6a8d13538eb1b0267ca88eb0d226d8d85b5065fb5f7de9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89780aac4572411aeeb8480b61c98dbc22a2e4b3e72290470ba56c81b648b617\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b6df8018ceaf95465c4ad1bfd
0bb2653ae4b87430f6ed4e21af2bd14bbb9ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:00Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.413579 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpkwv" Nov 28 06:53:00 crc kubenswrapper[4946]: W1128 06:53:00.433817 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6df591b_287c_45d9_9db2_f3c441005fdd.slice/crio-65df5a43aae2b0ba175b8700fec39185779f24b4d82f040942b2d65d0237274d WatchSource:0}: Error finding container 65df5a43aae2b0ba175b8700fec39185779f24b4d82f040942b2d65d0237274d: Status 404 returned error can't find the container with id 65df5a43aae2b0ba175b8700fec39185779f24b4d82f040942b2d65d0237274d Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.446546 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.446615 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.446625 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.446641 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.446652 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:00Z","lastTransitionTime":"2025-11-28T06:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.549681 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.549724 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.549736 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.549754 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.549767 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:00Z","lastTransitionTime":"2025-11-28T06:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.652693 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.652736 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.652747 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.652763 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.652777 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:00Z","lastTransitionTime":"2025-11-28T06:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.754945 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.755540 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.755557 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.755578 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.755593 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:00Z","lastTransitionTime":"2025-11-28T06:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.837850 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-gkg79"] Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.838350 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gkg79" Nov 28 06:53:00 crc kubenswrapper[4946]: E1128 06:53:00.838421 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gkg79" podUID="4e6983b1-6887-4d13-8f9a-f261a745115f" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.858317 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.858390 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.858412 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.858441 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.858491 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:00Z","lastTransitionTime":"2025-11-28T06:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.866062 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqf5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d160dd-b1b4-4cdf-800c-74195ab023e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bd1943c060c0e4d739d86520e0661bbd6ecef5614aaceb3e893a11204fdb39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f336183b95d088fbf0d45cb8e6e6ab124453e9ba4a1456171f5631e9d029a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f336183b95d088fbf0d45cb8e6e6ab124453e9ba4a1456171f5631e9d029a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqf5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:00Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.875326 4946 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9srk\" (UniqueName: \"kubernetes.io/projected/4e6983b1-6887-4d13-8f9a-f261a745115f-kube-api-access-c9srk\") pod \"network-metrics-daemon-gkg79\" (UID: \"4e6983b1-6887-4d13-8f9a-f261a745115f\") " pod="openshift-multus/network-metrics-daemon-gkg79" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.875450 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4e6983b1-6887-4d13-8f9a-f261a745115f-metrics-certs\") pod \"network-metrics-daemon-gkg79\" (UID: \"4e6983b1-6887-4d13-8f9a-f261a745115f\") " pod="openshift-multus/network-metrics-daemon-gkg79" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.888906 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:00Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.906934 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0514bebed46aabc46c74ef48fda9c1e561a174d87db40128875cc0ec78446d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ab7252b4526779db2e0aea8b7747369f52242a76df7e179c6617290fee4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:00Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.926822 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://078e67af49c0d442d125d9cb67d9d93ce05e452bf3b09f666e9763e481eaa15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:00Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.943152 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:00Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.955149 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tvtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a10fe13c-fc86-474a-945b-f96caafad2a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712d9a3cda3acae09eae5946cb14e19bee8697d567bff79dd67db8a41ed38fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45pfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:00Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.960853 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.960895 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.960907 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.960926 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.960939 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:00Z","lastTransitionTime":"2025-11-28T06:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.968614 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7450befc-262f-45d1-a5f4-f445e540185b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7854e764531b926939e7ce77e671b5dae4c0d8a4a1d8ac7153f851e924c4bb84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174d7788364a980297791c1072b9c05dd827bb3759a851ea4c5dd07981297b67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g2vhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:00Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.976425 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9srk\" (UniqueName: \"kubernetes.io/projected/4e6983b1-6887-4d13-8f9a-f261a745115f-kube-api-access-c9srk\") pod \"network-metrics-daemon-gkg79\" (UID: \"4e6983b1-6887-4d13-8f9a-f261a745115f\") " pod="openshift-multus/network-metrics-daemon-gkg79" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.976530 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4e6983b1-6887-4d13-8f9a-f261a745115f-metrics-certs\") pod \"network-metrics-daemon-gkg79\" (UID: \"4e6983b1-6887-4d13-8f9a-f261a745115f\") " pod="openshift-multus/network-metrics-daemon-gkg79" Nov 28 06:53:00 crc kubenswrapper[4946]: E1128 06:53:00.976662 4946 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 06:53:00 crc kubenswrapper[4946]: E1128 06:53:00.976721 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e6983b1-6887-4d13-8f9a-f261a745115f-metrics-certs podName:4e6983b1-6887-4d13-8f9a-f261a745115f nodeName:}" failed. No retries permitted until 2025-11-28 06:53:01.476705341 +0000 UTC m=+35.854770462 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4e6983b1-6887-4d13-8f9a-f261a745115f-metrics-certs") pod "network-metrics-daemon-gkg79" (UID: "4e6983b1-6887-4d13-8f9a-f261a745115f") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.983911 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gkg79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6983b1-6887-4d13-8f9a-f261a745115f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9srk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9srk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:53:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gkg79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:00Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:00 crc kubenswrapper[4946]: I1128 06:53:00.997317 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9srk\" (UniqueName: \"kubernetes.io/projected/4e6983b1-6887-4d13-8f9a-f261a745115f-kube-api-access-c9srk\") pod \"network-metrics-daemon-gkg79\" (UID: \"4e6983b1-6887-4d13-8f9a-f261a745115f\") " pod="openshift-multus/network-metrics-daemon-gkg79" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.000103 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fb7be34-ed01-4217-aab4-c8ab35341ac6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af6de2bd2f3d9be0a670cb8c36e6db8c037c4987eff7356840af5b275156ad7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d384f5fd04cfe61e6a8d13538eb1b0267ca88eb0d226d8d85b5065fb5f7de9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89780aac4572411aeeb8480b61c98dbc22a2e4b3e72290470ba56c81b648b617\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b6df8018ceaf95465c4ad1bfd0bb2653ae4b87430f6ed4e21af2bd14bbb9ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:00Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.014800 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j4wp8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edc5a786-a162-4416-a240-54272b0c7376\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a6e397c1f3423abc4eeb0a20453109c22f305c1b259dddbe23bd71a78b5512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87x27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j4wp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:01Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.035133 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpkwv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6df591b-287c-45d9-9db2-f3c441005fdd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm2g7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm2g7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:53:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zpkwv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:01Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.060342 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68d47c52-9827-4ebb-88a2-ba498092709e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eef1ad3ca814effd7a94c23d46fc3b14149ccc6ae6181e19c5f24884967b7627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7152b6653aaccd36bb018d24abf196dd6f13d41edddb0939917f96bcd9e9bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5be506bd8debbc12ff5ca56d77fdf667855d2c4078cea1b1d1388f5ef8831ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5dc7ef4ea06b54b6de621ebe79d3587324b0c4cdd90e078d4116dc5252ad35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4c7e2754f62b9d955da7589f04acb0136ed3c504a75b0db30f22cbeec54c413\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:52:44Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:52:38.666044 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:52:38.667906 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018003996/tls.crt::/tmp/serving-cert-1018003996/tls.key\\\\\\\"\\\\nI1128 06:52:44.041872 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:52:44.048251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:52:44.048273 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:52:44.048300 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:52:44.048305 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:52:44.064176 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 06:52:44.064208 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064213 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:52:44.064220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:52:44.064223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:52:44.064226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 06:52:44.064671 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 06:52:44.066682 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326666c58a9c75d0f08586c82bca1a0686ed2935f475b3c359cddc01875b248e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:01Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.063584 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.063704 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.063731 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.063762 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.063785 4946 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:01Z","lastTransitionTime":"2025-11-28T06:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.082692 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:01Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.103374 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16482226c2bd22abff4c445ab0d5e46c2012a83ab5e616ebecc38d9009a8a6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:01Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.125020 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9g9w4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"857356d2-6585-41c6-9a2c-e06ef45f7303\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9d433d72f090d2c1d0036b7ba652dfca7bed7425a35bf87809ede7fd9fca8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9g9w4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:01Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.153582 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35308a0d1d457d423060c71fb70b7e8d1723171f575b2545db3b372c4d2a12ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38e04dce69273a5669801d093325e51ed6d3a83d545aaf505f6619b89cb9e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9128b2484a66b1b0f5954befe68d1bd1be5a4edbf254d76473d8ee3025b47ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8b00edadf6d3442d70a476d465733f303f7a48d8998c1a9c881ae515daa264e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ba38f4c30e4f09063926783a9fbf7883079d7f077200ee18a7a24c236b025f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed2e
660d6d37ec0abfdd8485a2b9cb79b041a214007a5b8832d70a21076c324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6df860d0b9182067aa468a67d34f001cfe4b3e2251c630ee9ceab34315956bce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6df860d0b9182067aa468a67d34f001cfe4b3e2251c630ee9ceab34315956bce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:52:58Z\\\",\\\"message\\\":\\\"eflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 06:52:57.223889 6381 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1128 06:52:57.223909 6381 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1128 06:52:57.223935 6381 factory.go:656] Stopping watch factory\\\\nI1128 06:52:57.223946 6381 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1128 06:52:57.223942 6381 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1128 06:52:57.223954 6381 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1128 06:52:57.222497 6381 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 06:52:57.223994 6381 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 06:52:57.224897 6381 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1128 06:52:57.225235 6381 ovnkube.go:599] Stopped ovnkube\\\\nI1128 06:52:57.225348 6381 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1128 
06:52:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-pkknv_openshift-ovn-kubernetes(47e7046d-60dc-4dc0-b63e-f22f4ca5cd51)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc681b78e3640f7f8dd8729a32ea393d97f837795e0cccf628724d2b3901739b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pkknv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:01Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.166862 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.166943 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.166969 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.167003 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.167025 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:01Z","lastTransitionTime":"2025-11-28T06:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.270267 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.270622 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.270768 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.270913 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.271096 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:01Z","lastTransitionTime":"2025-11-28T06:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.374373 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.374438 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.374456 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.374518 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.374537 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:01Z","lastTransitionTime":"2025-11-28T06:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.411136 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpkwv" event={"ID":"f6df591b-287c-45d9-9db2-f3c441005fdd","Type":"ContainerStarted","Data":"3e6b6b4122d5aa6fa73c5453308ea21172330e8324feb57965db83d5c2c05fa3"} Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.411220 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpkwv" event={"ID":"f6df591b-287c-45d9-9db2-f3c441005fdd","Type":"ContainerStarted","Data":"4f983a4bf3fb71511789223ca3b7b223ac9589d4807ea2dfe4087d2fa5d48738"} Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.411242 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpkwv" event={"ID":"f6df591b-287c-45d9-9db2-f3c441005fdd","Type":"ContainerStarted","Data":"65df5a43aae2b0ba175b8700fec39185779f24b4d82f040942b2d65d0237274d"} Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.435778 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16482226c2bd22abff4c445ab0d5e46c2012a83ab5e616ebecc38d9009a8a6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:01Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.458832 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9g9w4" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"857356d2-6585-41c6-9a2c-e06ef45f7303\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9d433d72f090d2c1d0036b7ba652dfca7bed7425a35bf87809ede7fd9fca8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9g9w4\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:01Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.477714 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.477781 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.477801 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.477826 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.477844 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:01Z","lastTransitionTime":"2025-11-28T06:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.482334 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4e6983b1-6887-4d13-8f9a-f261a745115f-metrics-certs\") pod \"network-metrics-daemon-gkg79\" (UID: \"4e6983b1-6887-4d13-8f9a-f261a745115f\") " pod="openshift-multus/network-metrics-daemon-gkg79" Nov 28 06:53:01 crc kubenswrapper[4946]: E1128 06:53:01.483543 4946 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 06:53:01 crc kubenswrapper[4946]: E1128 06:53:01.483699 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e6983b1-6887-4d13-8f9a-f261a745115f-metrics-certs podName:4e6983b1-6887-4d13-8f9a-f261a745115f nodeName:}" failed. No retries permitted until 2025-11-28 06:53:02.483657503 +0000 UTC m=+36.861722664 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4e6983b1-6887-4d13-8f9a-f261a745115f-metrics-certs") pod "network-metrics-daemon-gkg79" (UID: "4e6983b1-6887-4d13-8f9a-f261a745115f") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.491137 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35308a0d1d457d423060c71fb70b7e8d1723171f575b2545db3b372c4d2a12ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38e04dce69273a5669801d093325e51ed6d3a83d545aaf505f6619b89cb9e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnl
y\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9128b2484a66b1b0f5954befe68d1bd1be5a4edbf254d76473d8ee3025b47ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8b00edadf6d3442d70a476d465733f303f7a48d8998c1a9c881ae515daa264e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ba38f4c30e4f09063926783a9fbf7883079d7f077200ee18a7a24c236b025f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"contai
nerID\\\":\\\"cri-o://7ed2e660d6d37ec0abfdd8485a2b9cb79b041a214007a5b8832d70a21076c324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6df860d0b9182067aa468a67d34f001cfe4b3e2251c630ee9ceab34315956bce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6df860d0b9182067aa468a67d34f001cfe4b3e2251c630ee9ceab34315956bce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:52:58Z\\\",\\\"message\\\":\\\"eflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 06:52:57.223889 6381 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1128 06:52:57.223909 6381 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1128 06:52:57.223935 6381 factory.go:656] Stopping watch factory\\\\nI1128 06:52:57.223946 6381 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1128 06:52:57.223942 6381 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1128 06:52:57.223954 6381 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1128 06:52:57.222497 6381 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 06:52:57.223994 6381 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 06:52:57.224897 6381 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1128 06:52:57.225235 6381 ovnkube.go:599] Stopped ovnkube\\\\nI1128 06:52:57.225348 6381 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1128 
06:52:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-pkknv_openshift-ovn-kubernetes(47e7046d-60dc-4dc0-b63e-f22f4ca5cd51)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc681b78e3640f7f8dd8729a32ea393d97f837795e0cccf628724d2b3901739b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pkknv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:01Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.514656 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:01Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.536026 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0514bebed46aabc46c74ef48fda9c1e561a174d87db40128875cc0ec78446d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ab7252b4526779db2e0aea8b7747369f52242a76df7e179c6617290fee4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:01Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.558781 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://078e67af49c0d442d125d9cb67d9d93ce05e452bf3b09f666e9763e481eaa15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:01Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.580655 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.580715 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.580739 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.580770 4946 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.580794 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:01Z","lastTransitionTime":"2025-11-28T06:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.585270 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqf5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d160dd-b1b4-4cdf-800c-74195ab023e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bd1943c060c0e4d739d86520e0661bbd6ecef5614aaceb3e893a11204fdb39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed8145
1ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f336183b95d088fbf0d45cb8e6e6ab124453e9ba4a1456171f5631e9d029a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f336183b95d088fbf0d45cb8e6e6ab124453e9ba4a1456171f5631e9d029a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTim
e\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqf5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:01Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.603173 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gkg79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6983b1-6887-4d13-8f9a-f261a745115f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9srk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9srk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:53:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gkg79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:01Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.624220 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fb7be34-ed01-4217-aab4-c8ab35341ac6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af6de2bd2f3d9be0a670cb8c36e6db8c037c4987eff7356840af5b275156ad7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d384f5fd04cfe61e6a8d13538eb1b0267ca88eb0d226d8d85b5065fb5f7de9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89780aac4572411aeeb8480b61c98dbc22a2e4b3e72290470ba56c81b648b617\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b6df8018ceaf95465c4ad1bfd0bb2653ae4b87430f6ed4e21af2bd14bbb9ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:01Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.644550 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:01Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.661827 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a10fe13c-fc86-474a-945b-f96caafad2a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712d9a3cda3acae09eae5946cb14e19bee8697d567bff79dd67db8a41ed38fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45pfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:01Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.680258 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7450befc-262f-45d1-a5f4-f445e540185b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7854e764531b926939e7ce77e671b5dae4c0d8a4a1d8ac7153f851e924c4bb84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174d7788364a980297791c1072b9c05dd827bb3759a851ea4c5dd07981297b67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g2vhr\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:01Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.683557 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.683618 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.683636 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.683662 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.683680 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:01Z","lastTransitionTime":"2025-11-28T06:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.710332 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68d47c52-9827-4ebb-88a2-ba498092709e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eef1ad3ca814effd7a94c23d46fc3b14149ccc6ae6181e19c5f24884967b7627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7152b6653aaccd36bb018d24abf196dd6f13d41edddb093
9917f96bcd9e9bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5be506bd8debbc12ff5ca56d77fdf667855d2c4078cea1b1d1388f5ef8831ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5dc7ef4ea06b54b6de621ebe79d3587324b0c4cdd90e078d4116dc5252ad35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4c7e2754f62b9d955da7589f04acb0136ed3c504a75b0db30f22cbeec54c413\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:52:44Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:52:38.666044 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:52:38.667906 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018003996/tls.crt::/tmp/serving-cert-1018003996/tls.key\\\\\\\"\\\\nI1128 06:52:44.041872 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:52:44.048251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:52:44.048273 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:52:44.048300 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:52:44.048305 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:52:44.064176 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 06:52:44.064208 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064213 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:52:44.064220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:52:44.064223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:52:44.064226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 06:52:44.064671 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 06:52:44.066682 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326666c58a9c75d0f08586c82bca1a0686ed2935f475b3c359cddc01875b248e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:01Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.731356 4946 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:01Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.748809 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j4wp8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"edc5a786-a162-4416-a240-54272b0c7376\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a6e397c1f3423abc4eeb0a20453109c22f305c1b259dddbe23bd71a78b5512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87x27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j4wp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:01Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.769175 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpkwv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6df591b-287c-45d9-9db2-f3c441005fdd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f983a4bf3fb71511789223ca3b7b223ac9589d4807ea2dfe4087d2fa5d48738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm2g7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6b6b4122d5aa6fa73c5453308ea21172330e8324feb57965db83d5c2c05fa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm2g7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:53:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zpkwv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:01Z is after 2025-08-24T17:21:41Z" Nov 28 
06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.787046 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.787110 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.787127 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.787154 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.787172 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:01Z","lastTransitionTime":"2025-11-28T06:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.895152 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.895243 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.895268 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.895299 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.895331 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:01Z","lastTransitionTime":"2025-11-28T06:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.989363 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.989390 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:53:01 crc kubenswrapper[4946]: E1128 06:53:01.989587 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.989648 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:53:01 crc kubenswrapper[4946]: E1128 06:53:01.989875 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:53:01 crc kubenswrapper[4946]: E1128 06:53:01.990016 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.999414 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.999514 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.999535 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.999559 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:01 crc kubenswrapper[4946]: I1128 06:53:01.999580 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:01Z","lastTransitionTime":"2025-11-28T06:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:02 crc kubenswrapper[4946]: I1128 06:53:02.102140 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:02 crc kubenswrapper[4946]: I1128 06:53:02.102182 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:02 crc kubenswrapper[4946]: I1128 06:53:02.102190 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:02 crc kubenswrapper[4946]: I1128 06:53:02.102202 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:02 crc kubenswrapper[4946]: I1128 06:53:02.102211 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:02Z","lastTransitionTime":"2025-11-28T06:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:02 crc kubenswrapper[4946]: I1128 06:53:02.205694 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:02 crc kubenswrapper[4946]: I1128 06:53:02.205760 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:02 crc kubenswrapper[4946]: I1128 06:53:02.205773 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:02 crc kubenswrapper[4946]: I1128 06:53:02.205793 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:02 crc kubenswrapper[4946]: I1128 06:53:02.205805 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:02Z","lastTransitionTime":"2025-11-28T06:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:02 crc kubenswrapper[4946]: I1128 06:53:02.310309 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:02 crc kubenswrapper[4946]: I1128 06:53:02.310453 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:02 crc kubenswrapper[4946]: I1128 06:53:02.310533 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:02 crc kubenswrapper[4946]: I1128 06:53:02.310568 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:02 crc kubenswrapper[4946]: I1128 06:53:02.310590 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:02Z","lastTransitionTime":"2025-11-28T06:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:02 crc kubenswrapper[4946]: I1128 06:53:02.414338 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:02 crc kubenswrapper[4946]: I1128 06:53:02.414422 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:02 crc kubenswrapper[4946]: I1128 06:53:02.414451 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:02 crc kubenswrapper[4946]: I1128 06:53:02.414521 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:02 crc kubenswrapper[4946]: I1128 06:53:02.414548 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:02Z","lastTransitionTime":"2025-11-28T06:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:02 crc kubenswrapper[4946]: I1128 06:53:02.493747 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4e6983b1-6887-4d13-8f9a-f261a745115f-metrics-certs\") pod \"network-metrics-daemon-gkg79\" (UID: \"4e6983b1-6887-4d13-8f9a-f261a745115f\") " pod="openshift-multus/network-metrics-daemon-gkg79" Nov 28 06:53:02 crc kubenswrapper[4946]: E1128 06:53:02.493937 4946 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 06:53:02 crc kubenswrapper[4946]: E1128 06:53:02.494035 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e6983b1-6887-4d13-8f9a-f261a745115f-metrics-certs podName:4e6983b1-6887-4d13-8f9a-f261a745115f nodeName:}" failed. No retries permitted until 2025-11-28 06:53:04.494001627 +0000 UTC m=+38.872066758 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4e6983b1-6887-4d13-8f9a-f261a745115f-metrics-certs") pod "network-metrics-daemon-gkg79" (UID: "4e6983b1-6887-4d13-8f9a-f261a745115f") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 06:53:02 crc kubenswrapper[4946]: I1128 06:53:02.517398 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:02 crc kubenswrapper[4946]: I1128 06:53:02.517453 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:02 crc kubenswrapper[4946]: I1128 06:53:02.517510 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:02 crc kubenswrapper[4946]: I1128 06:53:02.517541 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:02 crc kubenswrapper[4946]: I1128 06:53:02.517560 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:02Z","lastTransitionTime":"2025-11-28T06:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:02 crc kubenswrapper[4946]: I1128 06:53:02.620978 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:02 crc kubenswrapper[4946]: I1128 06:53:02.621078 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:02 crc kubenswrapper[4946]: I1128 06:53:02.621100 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:02 crc kubenswrapper[4946]: I1128 06:53:02.621132 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:02 crc kubenswrapper[4946]: I1128 06:53:02.621155 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:02Z","lastTransitionTime":"2025-11-28T06:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:02 crc kubenswrapper[4946]: I1128 06:53:02.724830 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:02 crc kubenswrapper[4946]: I1128 06:53:02.724901 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:02 crc kubenswrapper[4946]: I1128 06:53:02.724918 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:02 crc kubenswrapper[4946]: I1128 06:53:02.724943 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:02 crc kubenswrapper[4946]: I1128 06:53:02.724962 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:02Z","lastTransitionTime":"2025-11-28T06:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:02 crc kubenswrapper[4946]: I1128 06:53:02.827698 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:02 crc kubenswrapper[4946]: I1128 06:53:02.827732 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:02 crc kubenswrapper[4946]: I1128 06:53:02.827742 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:02 crc kubenswrapper[4946]: I1128 06:53:02.827755 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:02 crc kubenswrapper[4946]: I1128 06:53:02.827766 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:02Z","lastTransitionTime":"2025-11-28T06:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:02 crc kubenswrapper[4946]: I1128 06:53:02.930770 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:02 crc kubenswrapper[4946]: I1128 06:53:02.930831 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:02 crc kubenswrapper[4946]: I1128 06:53:02.930843 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:02 crc kubenswrapper[4946]: I1128 06:53:02.930862 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:02 crc kubenswrapper[4946]: I1128 06:53:02.930874 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:02Z","lastTransitionTime":"2025-11-28T06:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:02 crc kubenswrapper[4946]: I1128 06:53:02.989797 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gkg79" Nov 28 06:53:02 crc kubenswrapper[4946]: E1128 06:53:02.990065 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gkg79" podUID="4e6983b1-6887-4d13-8f9a-f261a745115f" Nov 28 06:53:03 crc kubenswrapper[4946]: I1128 06:53:03.034183 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:03 crc kubenswrapper[4946]: I1128 06:53:03.034239 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:03 crc kubenswrapper[4946]: I1128 06:53:03.034257 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:03 crc kubenswrapper[4946]: I1128 06:53:03.034283 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:03 crc kubenswrapper[4946]: I1128 06:53:03.034302 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:03Z","lastTransitionTime":"2025-11-28T06:53:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:03 crc kubenswrapper[4946]: I1128 06:53:03.137079 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:03 crc kubenswrapper[4946]: I1128 06:53:03.137142 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:03 crc kubenswrapper[4946]: I1128 06:53:03.137160 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:03 crc kubenswrapper[4946]: I1128 06:53:03.137188 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:03 crc kubenswrapper[4946]: I1128 06:53:03.137206 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:03Z","lastTransitionTime":"2025-11-28T06:53:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:03 crc kubenswrapper[4946]: I1128 06:53:03.240278 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:03 crc kubenswrapper[4946]: I1128 06:53:03.240368 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:03 crc kubenswrapper[4946]: I1128 06:53:03.240395 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:03 crc kubenswrapper[4946]: I1128 06:53:03.240430 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:03 crc kubenswrapper[4946]: I1128 06:53:03.240498 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:03Z","lastTransitionTime":"2025-11-28T06:53:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:03 crc kubenswrapper[4946]: I1128 06:53:03.344515 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:03 crc kubenswrapper[4946]: I1128 06:53:03.344590 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:03 crc kubenswrapper[4946]: I1128 06:53:03.344619 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:03 crc kubenswrapper[4946]: I1128 06:53:03.344651 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:03 crc kubenswrapper[4946]: I1128 06:53:03.344669 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:03Z","lastTransitionTime":"2025-11-28T06:53:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:03 crc kubenswrapper[4946]: I1128 06:53:03.448010 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:03 crc kubenswrapper[4946]: I1128 06:53:03.448072 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:03 crc kubenswrapper[4946]: I1128 06:53:03.448088 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:03 crc kubenswrapper[4946]: I1128 06:53:03.448113 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:03 crc kubenswrapper[4946]: I1128 06:53:03.448132 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:03Z","lastTransitionTime":"2025-11-28T06:53:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:03 crc kubenswrapper[4946]: I1128 06:53:03.551336 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:03 crc kubenswrapper[4946]: I1128 06:53:03.551420 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:03 crc kubenswrapper[4946]: I1128 06:53:03.551445 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:03 crc kubenswrapper[4946]: I1128 06:53:03.551508 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:03 crc kubenswrapper[4946]: I1128 06:53:03.551528 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:03Z","lastTransitionTime":"2025-11-28T06:53:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:03 crc kubenswrapper[4946]: I1128 06:53:03.654621 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:03 crc kubenswrapper[4946]: I1128 06:53:03.654699 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:03 crc kubenswrapper[4946]: I1128 06:53:03.654774 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:03 crc kubenswrapper[4946]: I1128 06:53:03.654809 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:03 crc kubenswrapper[4946]: I1128 06:53:03.654837 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:03Z","lastTransitionTime":"2025-11-28T06:53:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:03 crc kubenswrapper[4946]: I1128 06:53:03.759037 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:03 crc kubenswrapper[4946]: I1128 06:53:03.759092 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:03 crc kubenswrapper[4946]: I1128 06:53:03.759100 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:03 crc kubenswrapper[4946]: I1128 06:53:03.759119 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:03 crc kubenswrapper[4946]: I1128 06:53:03.759179 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:03Z","lastTransitionTime":"2025-11-28T06:53:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:03 crc kubenswrapper[4946]: I1128 06:53:03.862490 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:03 crc kubenswrapper[4946]: I1128 06:53:03.862538 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:03 crc kubenswrapper[4946]: I1128 06:53:03.862550 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:03 crc kubenswrapper[4946]: I1128 06:53:03.862574 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:03 crc kubenswrapper[4946]: I1128 06:53:03.862589 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:03Z","lastTransitionTime":"2025-11-28T06:53:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:03 crc kubenswrapper[4946]: I1128 06:53:03.975573 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:03 crc kubenswrapper[4946]: I1128 06:53:03.975623 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:03 crc kubenswrapper[4946]: I1128 06:53:03.975633 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:03 crc kubenswrapper[4946]: I1128 06:53:03.975650 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:03 crc kubenswrapper[4946]: I1128 06:53:03.975662 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:03Z","lastTransitionTime":"2025-11-28T06:53:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:03 crc kubenswrapper[4946]: I1128 06:53:03.989066 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:53:03 crc kubenswrapper[4946]: I1128 06:53:03.989090 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:53:03 crc kubenswrapper[4946]: I1128 06:53:03.989066 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:53:03 crc kubenswrapper[4946]: E1128 06:53:03.989225 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:53:03 crc kubenswrapper[4946]: E1128 06:53:03.989347 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:53:03 crc kubenswrapper[4946]: E1128 06:53:03.989504 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:53:04 crc kubenswrapper[4946]: I1128 06:53:04.079186 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:04 crc kubenswrapper[4946]: I1128 06:53:04.079226 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:04 crc kubenswrapper[4946]: I1128 06:53:04.079234 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:04 crc kubenswrapper[4946]: I1128 06:53:04.079254 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:04 crc kubenswrapper[4946]: I1128 06:53:04.079266 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:04Z","lastTransitionTime":"2025-11-28T06:53:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:04 crc kubenswrapper[4946]: I1128 06:53:04.182304 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:04 crc kubenswrapper[4946]: I1128 06:53:04.182374 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:04 crc kubenswrapper[4946]: I1128 06:53:04.182393 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:04 crc kubenswrapper[4946]: I1128 06:53:04.182417 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:04 crc kubenswrapper[4946]: I1128 06:53:04.182434 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:04Z","lastTransitionTime":"2025-11-28T06:53:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:04 crc kubenswrapper[4946]: I1128 06:53:04.285152 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:04 crc kubenswrapper[4946]: I1128 06:53:04.285211 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:04 crc kubenswrapper[4946]: I1128 06:53:04.285229 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:04 crc kubenswrapper[4946]: I1128 06:53:04.285262 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:04 crc kubenswrapper[4946]: I1128 06:53:04.285281 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:04Z","lastTransitionTime":"2025-11-28T06:53:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:04 crc kubenswrapper[4946]: I1128 06:53:04.388124 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:04 crc kubenswrapper[4946]: I1128 06:53:04.388196 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:04 crc kubenswrapper[4946]: I1128 06:53:04.388208 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:04 crc kubenswrapper[4946]: I1128 06:53:04.388234 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:04 crc kubenswrapper[4946]: I1128 06:53:04.388248 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:04Z","lastTransitionTime":"2025-11-28T06:53:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:04 crc kubenswrapper[4946]: I1128 06:53:04.491063 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:04 crc kubenswrapper[4946]: I1128 06:53:04.491144 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:04 crc kubenswrapper[4946]: I1128 06:53:04.491169 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:04 crc kubenswrapper[4946]: I1128 06:53:04.491204 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:04 crc kubenswrapper[4946]: I1128 06:53:04.491229 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:04Z","lastTransitionTime":"2025-11-28T06:53:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:04 crc kubenswrapper[4946]: I1128 06:53:04.520022 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4e6983b1-6887-4d13-8f9a-f261a745115f-metrics-certs\") pod \"network-metrics-daemon-gkg79\" (UID: \"4e6983b1-6887-4d13-8f9a-f261a745115f\") " pod="openshift-multus/network-metrics-daemon-gkg79" Nov 28 06:53:04 crc kubenswrapper[4946]: E1128 06:53:04.520264 4946 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 06:53:04 crc kubenswrapper[4946]: E1128 06:53:04.520408 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e6983b1-6887-4d13-8f9a-f261a745115f-metrics-certs podName:4e6983b1-6887-4d13-8f9a-f261a745115f nodeName:}" failed. No retries permitted until 2025-11-28 06:53:08.520367689 +0000 UTC m=+42.898432840 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4e6983b1-6887-4d13-8f9a-f261a745115f-metrics-certs") pod "network-metrics-daemon-gkg79" (UID: "4e6983b1-6887-4d13-8f9a-f261a745115f") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 06:53:04 crc kubenswrapper[4946]: I1128 06:53:04.595077 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:04 crc kubenswrapper[4946]: I1128 06:53:04.595128 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:04 crc kubenswrapper[4946]: I1128 06:53:04.595167 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:04 crc kubenswrapper[4946]: I1128 06:53:04.595186 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:04 crc kubenswrapper[4946]: I1128 06:53:04.595198 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:04Z","lastTransitionTime":"2025-11-28T06:53:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:04 crc kubenswrapper[4946]: I1128 06:53:04.698396 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:04 crc kubenswrapper[4946]: I1128 06:53:04.698513 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:04 crc kubenswrapper[4946]: I1128 06:53:04.698540 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:04 crc kubenswrapper[4946]: I1128 06:53:04.698571 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:04 crc kubenswrapper[4946]: I1128 06:53:04.698598 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:04Z","lastTransitionTime":"2025-11-28T06:53:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:04 crc kubenswrapper[4946]: I1128 06:53:04.801732 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:04 crc kubenswrapper[4946]: I1128 06:53:04.801782 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:04 crc kubenswrapper[4946]: I1128 06:53:04.801809 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:04 crc kubenswrapper[4946]: I1128 06:53:04.801825 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:04 crc kubenswrapper[4946]: I1128 06:53:04.801837 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:04Z","lastTransitionTime":"2025-11-28T06:53:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:04 crc kubenswrapper[4946]: I1128 06:53:04.905414 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:04 crc kubenswrapper[4946]: I1128 06:53:04.905538 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:04 crc kubenswrapper[4946]: I1128 06:53:04.905562 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:04 crc kubenswrapper[4946]: I1128 06:53:04.905592 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:04 crc kubenswrapper[4946]: I1128 06:53:04.905618 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:04Z","lastTransitionTime":"2025-11-28T06:53:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:04 crc kubenswrapper[4946]: I1128 06:53:04.989933 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gkg79" Nov 28 06:53:04 crc kubenswrapper[4946]: E1128 06:53:04.990147 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gkg79" podUID="4e6983b1-6887-4d13-8f9a-f261a745115f" Nov 28 06:53:05 crc kubenswrapper[4946]: I1128 06:53:05.008911 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:05 crc kubenswrapper[4946]: I1128 06:53:05.008983 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:05 crc kubenswrapper[4946]: I1128 06:53:05.009002 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:05 crc kubenswrapper[4946]: I1128 06:53:05.009027 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:05 crc kubenswrapper[4946]: I1128 06:53:05.009046 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:05Z","lastTransitionTime":"2025-11-28T06:53:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:05 crc kubenswrapper[4946]: I1128 06:53:05.113138 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:05 crc kubenswrapper[4946]: I1128 06:53:05.113220 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:05 crc kubenswrapper[4946]: I1128 06:53:05.113238 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:05 crc kubenswrapper[4946]: I1128 06:53:05.113300 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:05 crc kubenswrapper[4946]: I1128 06:53:05.113320 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:05Z","lastTransitionTime":"2025-11-28T06:53:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:05 crc kubenswrapper[4946]: I1128 06:53:05.216741 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:05 crc kubenswrapper[4946]: I1128 06:53:05.216804 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:05 crc kubenswrapper[4946]: I1128 06:53:05.216823 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:05 crc kubenswrapper[4946]: I1128 06:53:05.216847 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:05 crc kubenswrapper[4946]: I1128 06:53:05.216866 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:05Z","lastTransitionTime":"2025-11-28T06:53:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:05 crc kubenswrapper[4946]: I1128 06:53:05.320538 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:05 crc kubenswrapper[4946]: I1128 06:53:05.320626 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:05 crc kubenswrapper[4946]: I1128 06:53:05.320641 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:05 crc kubenswrapper[4946]: I1128 06:53:05.320661 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:05 crc kubenswrapper[4946]: I1128 06:53:05.320677 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:05Z","lastTransitionTime":"2025-11-28T06:53:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:05 crc kubenswrapper[4946]: I1128 06:53:05.423867 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:05 crc kubenswrapper[4946]: I1128 06:53:05.423906 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:05 crc kubenswrapper[4946]: I1128 06:53:05.423916 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:05 crc kubenswrapper[4946]: I1128 06:53:05.423931 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:05 crc kubenswrapper[4946]: I1128 06:53:05.423942 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:05Z","lastTransitionTime":"2025-11-28T06:53:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:05 crc kubenswrapper[4946]: I1128 06:53:05.526107 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:05 crc kubenswrapper[4946]: I1128 06:53:05.526156 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:05 crc kubenswrapper[4946]: I1128 06:53:05.526168 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:05 crc kubenswrapper[4946]: I1128 06:53:05.526183 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:05 crc kubenswrapper[4946]: I1128 06:53:05.526196 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:05Z","lastTransitionTime":"2025-11-28T06:53:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:05 crc kubenswrapper[4946]: I1128 06:53:05.629191 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:05 crc kubenswrapper[4946]: I1128 06:53:05.629264 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:05 crc kubenswrapper[4946]: I1128 06:53:05.629285 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:05 crc kubenswrapper[4946]: I1128 06:53:05.629313 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:05 crc kubenswrapper[4946]: I1128 06:53:05.629334 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:05Z","lastTransitionTime":"2025-11-28T06:53:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:05 crc kubenswrapper[4946]: I1128 06:53:05.734016 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:05 crc kubenswrapper[4946]: I1128 06:53:05.734085 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:05 crc kubenswrapper[4946]: I1128 06:53:05.734103 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:05 crc kubenswrapper[4946]: I1128 06:53:05.734131 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:05 crc kubenswrapper[4946]: I1128 06:53:05.734151 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:05Z","lastTransitionTime":"2025-11-28T06:53:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:05 crc kubenswrapper[4946]: I1128 06:53:05.837564 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:05 crc kubenswrapper[4946]: I1128 06:53:05.837642 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:05 crc kubenswrapper[4946]: I1128 06:53:05.837659 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:05 crc kubenswrapper[4946]: I1128 06:53:05.837683 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:05 crc kubenswrapper[4946]: I1128 06:53:05.837702 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:05Z","lastTransitionTime":"2025-11-28T06:53:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:05 crc kubenswrapper[4946]: I1128 06:53:05.941232 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:05 crc kubenswrapper[4946]: I1128 06:53:05.941308 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:05 crc kubenswrapper[4946]: I1128 06:53:05.941325 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:05 crc kubenswrapper[4946]: I1128 06:53:05.941357 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:05 crc kubenswrapper[4946]: I1128 06:53:05.941400 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:05Z","lastTransitionTime":"2025-11-28T06:53:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:05 crc kubenswrapper[4946]: I1128 06:53:05.989704 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:53:05 crc kubenswrapper[4946]: E1128 06:53:05.989916 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:53:05 crc kubenswrapper[4946]: I1128 06:53:05.989985 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:53:05 crc kubenswrapper[4946]: I1128 06:53:05.990111 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:53:05 crc kubenswrapper[4946]: E1128 06:53:05.990957 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:53:05 crc kubenswrapper[4946]: E1128 06:53:05.991102 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.021004 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16482226c2bd22abff4c445ab0d5e46c2012a83ab5e616ebecc38d9009a8a6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:06Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.040952 4946 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-9g9w4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"857356d2-6585-41c6-9a2c-e06ef45f7303\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9d433d72f090d2c1d0036b7ba652dfca7bed7425a35bf87809ede7fd9fca8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9g9w4\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:06Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.045009 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.045050 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.045062 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.045085 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.045101 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:06Z","lastTransitionTime":"2025-11-28T06:53:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.069741 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35308a0d1d457d423060c71fb70b7e8d1723171f575b2545db3b372c4d2a12ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38e04dce69273a5669801d093325e51ed6d3a83d545aaf505f6619b89cb9e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9128b2484a66b1b0f5954befe68d1bd1be5a4edbf254d76473d8ee3025b47ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8b00edadf6d3442d70a476d465733f303f7a48d8998c1a9c881ae515daa264e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ba38f4c30e4f09063926783a9fbf7883079d7f077200ee18a7a24c236b025f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed2e660d6d37ec0abfdd8485a2b9cb79b041a214007a5b8832d70a21076c324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6df860d0b9182067aa468a67d34f001cfe4b3e22
51c630ee9ceab34315956bce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6df860d0b9182067aa468a67d34f001cfe4b3e2251c630ee9ceab34315956bce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:52:58Z\\\",\\\"message\\\":\\\"eflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 06:52:57.223889 6381 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1128 06:52:57.223909 6381 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1128 06:52:57.223935 6381 factory.go:656] Stopping watch factory\\\\nI1128 06:52:57.223946 6381 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1128 06:52:57.223942 6381 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1128 06:52:57.223954 6381 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1128 06:52:57.222497 6381 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 06:52:57.223994 6381 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 06:52:57.224897 6381 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1128 06:52:57.225235 6381 ovnkube.go:599] Stopped ovnkube\\\\nI1128 06:52:57.225348 6381 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1128 06:52:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pkknv_openshift-ovn-kubernetes(47e7046d-60dc-4dc0-b63e-f22f4ca5cd51)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc681b78e3640f7f8dd8729a32ea393d97f837795e0cccf628724d2b3901739b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pkknv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:06Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.091670 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:06Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.112527 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0514bebed46aabc46c74ef48fda9c1e561a174d87db40128875cc0ec78446d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ab7252b4526779db2e0aea8b7747369f52242a76df7e179c6617290fee4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:06Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.129656 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://078e67af49c0d442d125d9cb67d9d93ce05e452bf3b09f666e9763e481eaa15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:06Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.149002 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.149085 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.149104 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.149131 4946 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.149148 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:06Z","lastTransitionTime":"2025-11-28T06:53:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.154441 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqf5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d160dd-b1b4-4cdf-800c-74195ab023e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bd1943c060c0e4d739d86520e0661bbd6ecef5614aaceb3e893a11204fdb39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed8145
1ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f336183b95d088fbf0d45cb8e6e6ab124453e9ba4a1456171f5631e9d029a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f336183b95d088fbf0d45cb8e6e6ab124453e9ba4a1456171f5631e9d029a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTim
e\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqf5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:06Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.170867 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gkg79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6983b1-6887-4d13-8f9a-f261a745115f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9srk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9srk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:53:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gkg79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:06Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.187512 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fb7be34-ed01-4217-aab4-c8ab35341ac6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af6de2bd2f3d9be0a670cb8c36e6db8c037c4987eff7356840af5b275156ad7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d384f5fd04cfe61e6a8d13538eb1b0267ca88eb0d226d8d85b5065fb5f7de9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89780aac4572411aeeb8480b61c98dbc22a2e4b3e72290470ba56c81b648b617\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b6df8018ceaf95465c4ad1bfd0bb2653ae4b87430f6ed4e21af2bd14bbb9ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:06Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.203234 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:06Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.218870 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a10fe13c-fc86-474a-945b-f96caafad2a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712d9a3cda3acae09eae5946cb14e19bee8697d567bff79dd67db8a41ed38fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45pfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:06Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.245800 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7450befc-262f-45d1-a5f4-f445e540185b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7854e764531b926939e7ce77e671b5dae4c0d8a4a1d8ac7153f851e924c4bb84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174d7788364a980297791c1072b9c05dd827bb3759a851ea4c5dd07981297b67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g2vhr\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:06Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.252671 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.252713 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.252726 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.252747 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.252763 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:06Z","lastTransitionTime":"2025-11-28T06:53:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.270967 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68d47c52-9827-4ebb-88a2-ba498092709e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eef1ad3ca814effd7a94c23d46fc3b14149ccc6ae6181e19c5f24884967b7627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7152b6653aaccd36bb018d24abf196dd6f13d41edddb093
9917f96bcd9e9bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5be506bd8debbc12ff5ca56d77fdf667855d2c4078cea1b1d1388f5ef8831ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5dc7ef4ea06b54b6de621ebe79d3587324b0c4cdd90e078d4116dc5252ad35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4c7e2754f62b9d955da7589f04acb0136ed3c504a75b0db30f22cbeec54c413\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:52:44Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:52:38.666044 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:52:38.667906 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018003996/tls.crt::/tmp/serving-cert-1018003996/tls.key\\\\\\\"\\\\nI1128 06:52:44.041872 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:52:44.048251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:52:44.048273 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:52:44.048300 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:52:44.048305 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:52:44.064176 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 06:52:44.064208 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064213 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:52:44.064220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:52:44.064223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:52:44.064226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 06:52:44.064671 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 06:52:44.066682 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326666c58a9c75d0f08586c82bca1a0686ed2935f475b3c359cddc01875b248e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:06Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.288540 4946 status_manager.go:875] "Failed to update status 
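Note: the patch failure above, and every one that follows, shares a single root cause: the network-node-identity webhook at 127.0.0.1:9743 is serving a certificate that expired on 2025-08-24T17:21:41Z, long before the node's current clock time of 2025-11-28. A minimal Go sketch for confirming this from the node is below; only the endpoint comes from the log, and InsecureSkipVerify is used deliberately so the handshake succeeds and the expired certificate can be inspected rather than rejected.

package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

func main() {
	// Endpoint taken from the failing Post in the log above.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // inspect the cert, do not verify it
	})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore)
	// Per the x509 error text in the log, this should report 2025-08-24T17:21:41Z.
	fmt.Printf("notAfter:  %s\n", cert.NotAfter)
}

Run on the node while the webhook is up, this prints the serving certificate's validity window, which should match the "is after 2025-08-24T17:21:41Z" portion of the error.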
for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:06Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.306111 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j4wp8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"edc5a786-a162-4416-a240-54272b0c7376\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a6e397c1f3423abc4eeb0a20453109c22f305c1b259dddbe23bd71a78b5512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87x27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j4wp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:06Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.324226 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpkwv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6df591b-287c-45d9-9db2-f3c441005fdd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f983a4bf3fb71511789223ca3b7b223ac9589d4807ea2dfe4087d2fa5d48738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm2g7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6b6b4122d5aa6fa73c5453308ea21172330e8324feb57965db83d5c2c05fa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm2g7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:53:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zpkwv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:06Z is after 2025-08-24T17:21:41Z" Nov 28 
06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.355562 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.355646 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.355671 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.355702 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.355728 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:06Z","lastTransitionTime":"2025-11-28T06:53:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.459005 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.459062 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.459079 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.459101 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.459118 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:06Z","lastTransitionTime":"2025-11-28T06:53:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.562130 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.562199 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.562220 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.562249 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.562268 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:06Z","lastTransitionTime":"2025-11-28T06:53:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.665695 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.665766 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.665790 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.665819 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.665842 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:06Z","lastTransitionTime":"2025-11-28T06:53:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.768805 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.768859 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.768875 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.768904 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.768942 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:06Z","lastTransitionTime":"2025-11-28T06:53:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.871633 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.871720 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.871746 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.871777 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.871799 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:06Z","lastTransitionTime":"2025-11-28T06:53:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.974974 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.975033 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.975051 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.975075 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.975094 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:06Z","lastTransitionTime":"2025-11-28T06:53:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:06 crc kubenswrapper[4946]: I1128 06:53:06.990010 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gkg79" Nov 28 06:53:06 crc kubenswrapper[4946]: E1128 06:53:06.990269 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gkg79" podUID="4e6983b1-6887-4d13-8f9a-f261a745115f" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.078624 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.078687 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.078706 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.078731 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.078750 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:07Z","lastTransitionTime":"2025-11-28T06:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.182253 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.182319 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.182333 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.182350 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.182362 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:07Z","lastTransitionTime":"2025-11-28T06:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.285547 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.285609 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.285628 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.285651 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.285704 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:07Z","lastTransitionTime":"2025-11-28T06:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.388944 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.389007 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.389024 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.389050 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.389071 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:07Z","lastTransitionTime":"2025-11-28T06:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.493027 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.493110 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.493134 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.493171 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.493194 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:07Z","lastTransitionTime":"2025-11-28T06:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.577926 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.577990 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.578010 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.578032 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.578051 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:07Z","lastTransitionTime":"2025-11-28T06:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:07 crc kubenswrapper[4946]: E1128 06:53:07.600973 4946 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c74aee61-da50-44cb-be0e-1b7c62358c2a\\\",\\\"systemUUID\\\":\\\"99c18e26-a017-426b-bf43-1a9a83f35f94\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:07Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.607625 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.607706 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
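Note: the node status patch above fails on the node.network-node-identity.openshift.io webhook with the same expired certificate as the pod patches. The "Node became not ready" lines embed the Ready condition as inline JSON; for working with these entries programmatically, a minimal sketch that unmarshals one such condition follows, with the struct mirroring only the keys visible in the log rather than the full Kubernetes NodeCondition type.

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// Fields follow the JSON shown in the "Node became not ready" lines.
type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Condition payload copied from a "Node became not ready" entry above.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:07Z","lastTransitionTime":"2025-11-28T06:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s=%s reason=%s\n", c.Type, c.Status, c.Reason)
}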
event="NodeHasNoDiskPressure" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.607729 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.607765 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.607787 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:07Z","lastTransitionTime":"2025-11-28T06:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:07 crc kubenswrapper[4946]: E1128 06:53:07.629354 4946 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c74aee61-da50-44cb-be0e-1b7c62358c2a\\\",\\\"systemUUID\\\":\\\"99c18e26-a017-426b-bf43-1a9a83f35f94\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:07Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.635309 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.635379 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
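Note: the node status patch at 06:53:07.600973 fails and is retried with an identical payload at 06:53:07.629354, exactly as the "Error updating node status, will retry" text promises; the kubelet makes a fixed number of attempts per sync loop before giving up. An illustrative sketch of that retry pattern follows; the count of 5 is an assumption based on the kubelet's nodeStatusUpdateRetry constant, not something this log states.

package main

import (
	"errors"
	"fmt"
)

// Illustrative only: retry a status patch a fixed number of times,
// mirroring why the same payload and webhook error repeat back-to-back.
func updateNodeStatus(patch func() error, retries int) error {
	var err error
	for i := 0; i < retries; i++ {
		if err = patch(); err == nil {
			return nil
		}
		fmt.Println("Error updating node status, will retry:", err)
	}
	return fmt.Errorf("update node status exceeded retry count: %w", err)
}

func main() {
	// Stand-in for the failing webhook call seen in the log.
	webhookDown := errors.New("failed calling webhook: certificate has expired")
	_ = updateNodeStatus(func() error { return webhookDown }, 5)
}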
event="NodeHasNoDiskPressure" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.635397 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.635420 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.635439 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:07Z","lastTransitionTime":"2025-11-28T06:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:07 crc kubenswrapper[4946]: E1128 06:53:07.659092 4946 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c74aee61-da50-44cb-be0e-1b7c62358c2a\\\",\\\"systemUUID\\\":\\\"99c18e26-a017-426b-bf43-1a9a83f35f94\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:07Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.665252 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.665317 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.665343 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.665373 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.665395 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:07Z","lastTransitionTime":"2025-11-28T06:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:07 crc kubenswrapper[4946]: E1128 06:53:07.688593 4946 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c74aee61-da50-44cb-be0e-1b7c62358c2a\\\",\\\"systemUUID\\\":\\\"99c18e26-a017-426b-bf43-1a9a83f35f94\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:07Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.694134 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.694194 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.694216 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.694245 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.694267 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:07Z","lastTransitionTime":"2025-11-28T06:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:07 crc kubenswrapper[4946]: E1128 06:53:07.715118 4946 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c74aee61-da50-44cb-be0e-1b7c62358c2a\\\",\\\"systemUUID\\\":\\\"99c18e26-a017-426b-bf43-1a9a83f35f94\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:07Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:07 crc kubenswrapper[4946]: E1128 06:53:07.715345 4946 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.718213 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.718261 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.718280 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.718304 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.718321 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:07Z","lastTransitionTime":"2025-11-28T06:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.822562 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.822639 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.822661 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.822696 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.822719 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:07Z","lastTransitionTime":"2025-11-28T06:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.926964 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.927019 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.927038 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.927063 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.927080 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:07Z","lastTransitionTime":"2025-11-28T06:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.989613 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.989640 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:53:07 crc kubenswrapper[4946]: I1128 06:53:07.989737 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:53:07 crc kubenswrapper[4946]: E1128 06:53:07.989976 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:53:07 crc kubenswrapper[4946]: E1128 06:53:07.990106 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:53:07 crc kubenswrapper[4946]: E1128 06:53:07.990360 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:53:08 crc kubenswrapper[4946]: I1128 06:53:08.030816 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:08 crc kubenswrapper[4946]: I1128 06:53:08.030875 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:08 crc kubenswrapper[4946]: I1128 06:53:08.030894 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:08 crc kubenswrapper[4946]: I1128 06:53:08.030919 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:08 crc kubenswrapper[4946]: I1128 06:53:08.030937 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:08Z","lastTransitionTime":"2025-11-28T06:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:08 crc kubenswrapper[4946]: I1128 06:53:08.139042 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:08 crc kubenswrapper[4946]: I1128 06:53:08.139159 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:08 crc kubenswrapper[4946]: I1128 06:53:08.139178 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:08 crc kubenswrapper[4946]: I1128 06:53:08.139202 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:08 crc kubenswrapper[4946]: I1128 06:53:08.139220 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:08Z","lastTransitionTime":"2025-11-28T06:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:08 crc kubenswrapper[4946]: I1128 06:53:08.243751 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:08 crc kubenswrapper[4946]: I1128 06:53:08.243869 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:08 crc kubenswrapper[4946]: I1128 06:53:08.243888 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:08 crc kubenswrapper[4946]: I1128 06:53:08.243920 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:08 crc kubenswrapper[4946]: I1128 06:53:08.243942 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:08Z","lastTransitionTime":"2025-11-28T06:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:08 crc kubenswrapper[4946]: I1128 06:53:08.347707 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:08 crc kubenswrapper[4946]: I1128 06:53:08.347793 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:08 crc kubenswrapper[4946]: I1128 06:53:08.347819 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:08 crc kubenswrapper[4946]: I1128 06:53:08.347853 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:08 crc kubenswrapper[4946]: I1128 06:53:08.347879 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:08Z","lastTransitionTime":"2025-11-28T06:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:08 crc kubenswrapper[4946]: I1128 06:53:08.451087 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:08 crc kubenswrapper[4946]: I1128 06:53:08.451120 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:08 crc kubenswrapper[4946]: I1128 06:53:08.451131 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:08 crc kubenswrapper[4946]: I1128 06:53:08.451147 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:08 crc kubenswrapper[4946]: I1128 06:53:08.451158 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:08Z","lastTransitionTime":"2025-11-28T06:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:08 crc kubenswrapper[4946]: I1128 06:53:08.554755 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:08 crc kubenswrapper[4946]: I1128 06:53:08.554822 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:08 crc kubenswrapper[4946]: I1128 06:53:08.554843 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:08 crc kubenswrapper[4946]: I1128 06:53:08.554868 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:08 crc kubenswrapper[4946]: I1128 06:53:08.554886 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:08Z","lastTransitionTime":"2025-11-28T06:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:08 crc kubenswrapper[4946]: I1128 06:53:08.576554 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4e6983b1-6887-4d13-8f9a-f261a745115f-metrics-certs\") pod \"network-metrics-daemon-gkg79\" (UID: \"4e6983b1-6887-4d13-8f9a-f261a745115f\") " pod="openshift-multus/network-metrics-daemon-gkg79" Nov 28 06:53:08 crc kubenswrapper[4946]: E1128 06:53:08.576751 4946 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 06:53:08 crc kubenswrapper[4946]: E1128 06:53:08.576839 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e6983b1-6887-4d13-8f9a-f261a745115f-metrics-certs podName:4e6983b1-6887-4d13-8f9a-f261a745115f nodeName:}" failed. No retries permitted until 2025-11-28 06:53:16.576816178 +0000 UTC m=+50.954881319 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4e6983b1-6887-4d13-8f9a-f261a745115f-metrics-certs") pod "network-metrics-daemon-gkg79" (UID: "4e6983b1-6887-4d13-8f9a-f261a745115f") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 06:53:08 crc kubenswrapper[4946]: I1128 06:53:08.658298 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:08 crc kubenswrapper[4946]: I1128 06:53:08.658494 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:08 crc kubenswrapper[4946]: I1128 06:53:08.658515 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:08 crc kubenswrapper[4946]: I1128 06:53:08.658547 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:08 crc kubenswrapper[4946]: I1128 06:53:08.658569 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:08Z","lastTransitionTime":"2025-11-28T06:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:08 crc kubenswrapper[4946]: I1128 06:53:08.762362 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:08 crc kubenswrapper[4946]: I1128 06:53:08.762439 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:08 crc kubenswrapper[4946]: I1128 06:53:08.762456 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:08 crc kubenswrapper[4946]: I1128 06:53:08.762521 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:08 crc kubenswrapper[4946]: I1128 06:53:08.762540 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:08Z","lastTransitionTime":"2025-11-28T06:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:08 crc kubenswrapper[4946]: I1128 06:53:08.841682 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:53:08 crc kubenswrapper[4946]: I1128 06:53:08.843124 4946 scope.go:117] "RemoveContainer" containerID="6df860d0b9182067aa468a67d34f001cfe4b3e2251c630ee9ceab34315956bce" Nov 28 06:53:08 crc kubenswrapper[4946]: I1128 06:53:08.865852 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:08 crc kubenswrapper[4946]: I1128 06:53:08.865918 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:08 crc kubenswrapper[4946]: I1128 06:53:08.865938 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:08 crc kubenswrapper[4946]: I1128 06:53:08.865963 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:08 crc kubenswrapper[4946]: I1128 06:53:08.866015 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:08Z","lastTransitionTime":"2025-11-28T06:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:08 crc kubenswrapper[4946]: I1128 06:53:08.970316 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:08 crc kubenswrapper[4946]: I1128 06:53:08.970768 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:08 crc kubenswrapper[4946]: I1128 06:53:08.970786 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:08 crc kubenswrapper[4946]: I1128 06:53:08.970809 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:08 crc kubenswrapper[4946]: I1128 06:53:08.970840 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:08Z","lastTransitionTime":"2025-11-28T06:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:08 crc kubenswrapper[4946]: I1128 06:53:08.989318 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gkg79" Nov 28 06:53:08 crc kubenswrapper[4946]: E1128 06:53:08.989566 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gkg79" podUID="4e6983b1-6887-4d13-8f9a-f261a745115f" Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.075537 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.075585 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.075601 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.075627 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.075642 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:09Z","lastTransitionTime":"2025-11-28T06:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.179583 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.179635 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.179647 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.179667 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.179681 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:09Z","lastTransitionTime":"2025-11-28T06:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.283696 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.283771 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.283794 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.283823 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.283844 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:09Z","lastTransitionTime":"2025-11-28T06:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.386427 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.386499 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.386516 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.386537 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.386553 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:09Z","lastTransitionTime":"2025-11-28T06:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.440054 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pkknv_47e7046d-60dc-4dc0-b63e-f22f4ca5cd51/ovnkube-controller/1.log" Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.443150 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" event={"ID":"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51","Type":"ContainerStarted","Data":"bc3269320004c28557ed0853d941c5174ed1d136bae64a4ecbf769e8cf7fd237"} Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.443484 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.457656 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fb7be34-ed01-4217-aab4-c8ab35341ac6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af6de2bd2f3d9be0a670cb8c36e6db8c037c4987eff7356840af5b275156ad7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d384f5fd04cfe61e6a8d13538eb1b0267ca88eb0d226d8d85b5065fb5f7de9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89780aac4572411aeeb8
480b61c98dbc22a2e4b3e72290470ba56c81b648b617\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b6df8018ceaf95465c4ad1bfd0bb2653ae4b87430f6ed4e21af2bd14bbb9ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:09Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.469865 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:09Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.479365 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a10fe13c-fc86-474a-945b-f96caafad2a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712d9a3cda3acae09eae5946cb14e19bee8697d567bff79dd67db8a41ed38fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45pfm\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:09Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.489609 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.489651 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.489661 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.489676 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.489686 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:09Z","lastTransitionTime":"2025-11-28T06:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.491704 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7450befc-262f-45d1-a5f4-f445e540185b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7854e764531b926939e7ce77e671b5dae4c0d8a4a1d8ac7153f851e924c4bb84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174d7788364a980297791c1072b9c05dd827bb3759a851ea4c5dd07981297b67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g2vhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:09Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.501198 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gkg79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6983b1-6887-4d13-8f9a-f261a745115f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9srk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9srk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:53:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gkg79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:09Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.513279 4946 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68d47c52-9827-4ebb-88a2-ba498092709e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eef1ad3ca814effd7a94c23d46fc3b14149ccc6ae6181e19c5f24884967b7627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7152b6653aaccd36bb018d24abf196dd6f13d41edddb0939917f96bcd9e9bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5be506bd8debbc12ff5ca56d77fdf667855d2c4078cea1b1d1388f5ef8831ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5dc7ef4ea06b54b6de621ebe79d358
7324b0c4cdd90e078d4116dc5252ad35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4c7e2754f62b9d955da7589f04acb0136ed3c504a75b0db30f22cbeec54c413\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:52:44Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:52:38.666044 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:52:38.667906 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018003996/tls.crt::/tmp/serving-cert-1018003996/tls.key\\\\\\\"\\\\nI1128 06:52:44.041872 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:52:44.048251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:52:44.048273 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:52:44.048300 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:52:44.048305 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:52:44.064176 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 06:52:44.064208 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064213 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:52:44.064220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:52:44.064223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:52:44.064226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 06:52:44.064671 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 06:52:44.066682 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326666c58a9c75d0f08586c82bca1a0686ed2935f475b3c359cddc01875b248e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:09Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.525308 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:09Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.535222 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j4wp8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"edc5a786-a162-4416-a240-54272b0c7376\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a6e397c1f3423abc4eeb0a20453109c22f305c1b259dddbe23bd71a78b5512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87x27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j4wp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:09Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.545889 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpkwv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6df591b-287c-45d9-9db2-f3c441005fdd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f983a4bf3fb71511789223ca3b7b223ac9589d4807ea2dfe4087d2fa5d48738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm2g7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6b6b4122d5aa6fa73c5453308ea21172330e8324feb57965db83d5c2c05fa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm2g7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:53:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zpkwv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:09Z is after 2025-08-24T17:21:41Z" Nov 28 
06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.558846 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16482226c2bd22abff4c445ab0d5e46c2012a83ab5e616ebecc38d9009a8a6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:09Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.573325 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9g9w4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"857356d2-6585-41c6-9a2c-e06ef45f7303\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9d433d72f090d2c1d0036b7ba652dfca7bed7425a35bf87809ede7fd9fca8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9g9w4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:09Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.592117 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.592185 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.592204 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.592235 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.592253 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:09Z","lastTransitionTime":"2025-11-28T06:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.592863 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35308a0d1d457d423060c71fb70b7e8d1723171f575b2545db3b372c4d2a12ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38e04dce69273a5669801d093325e51ed6d3a83d545aaf505f6619b89cb9e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9128b2484a66b1b0f5954befe68d1bd1be5a4edbf254d76473d8ee3025b47ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8b00edadf6d3442d70a476d465733f303f7a48d8998c1a9c881ae515daa264e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ba38f4c30e4f09063926783a9fbf7883079d7f077200ee18a7a24c236b025f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed2e660d6d37ec0abfdd8485a2b9cb79b041a214007a5b8832d70a21076c324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc3269320004c28557ed0853d941c5174ed1d136
bae64a4ecbf769e8cf7fd237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6df860d0b9182067aa468a67d34f001cfe4b3e2251c630ee9ceab34315956bce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:52:58Z\\\",\\\"message\\\":\\\"eflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 06:52:57.223889 6381 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1128 06:52:57.223909 6381 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1128 06:52:57.223935 6381 factory.go:656] Stopping watch factory\\\\nI1128 06:52:57.223946 6381 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1128 06:52:57.223942 6381 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1128 06:52:57.223954 6381 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1128 06:52:57.222497 6381 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 06:52:57.223994 6381 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 06:52:57.224897 6381 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1128 06:52:57.225235 6381 ovnkube.go:599] Stopped ovnkube\\\\nI1128 06:52:57.225348 6381 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1128 
06:52:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc681b78e3640f7f8dd8729a32ea393d97f837795e0cccf628724d2b3901739b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pkknv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:09Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.606345 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:09Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.619401 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0514bebed46aabc46c74ef48fda9c1e561a174d87db40128875cc0ec78446d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ab7252b4526779db2e0aea8b7747369f52242a76df7e179c6617290fee4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:09Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.631624 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://078e67af49c0d442d125d9cb67d9d93ce05e452bf3b09f666e9763e481eaa15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:09Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.645505 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqf5z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d160dd-b1b4-4cdf-800c-74195ab023e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bd1943c060c0e4d739d86520e0661bbd6ecef5614aaceb3e893a11204fdb39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f336183b95d088fbf0d45cb8e6e6ab124453e9ba4a1456171f5631e9d029a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f336183b95d088fbf0d45cb8e6e6ab124453e9ba4a1456171f5631e9d029a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqf5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:09Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.695141 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.695180 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:09 crc 
kubenswrapper[4946]: I1128 06:53:09.695189 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.695202 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.695211 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:09Z","lastTransitionTime":"2025-11-28T06:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.798011 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.798074 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.798091 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.798118 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.798136 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:09Z","lastTransitionTime":"2025-11-28T06:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.901023 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.901170 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.901260 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.901360 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.901451 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:09Z","lastTransitionTime":"2025-11-28T06:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.988967 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.988967 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:53:09 crc kubenswrapper[4946]: I1128 06:53:09.989139 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:53:09 crc kubenswrapper[4946]: E1128 06:53:09.989431 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:53:09 crc kubenswrapper[4946]: E1128 06:53:09.989646 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:53:09 crc kubenswrapper[4946]: E1128 06:53:09.989781 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.005024 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.005080 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.005098 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.005124 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.005142 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:10Z","lastTransitionTime":"2025-11-28T06:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.109061 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.109145 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.109171 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.109205 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.109229 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:10Z","lastTransitionTime":"2025-11-28T06:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.212221 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.212293 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.212315 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.212344 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.212366 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:10Z","lastTransitionTime":"2025-11-28T06:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.315796 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.315853 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.315868 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.315896 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.315936 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:10Z","lastTransitionTime":"2025-11-28T06:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.419025 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.419590 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.419786 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.419949 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.420074 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:10Z","lastTransitionTime":"2025-11-28T06:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.449794 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pkknv_47e7046d-60dc-4dc0-b63e-f22f4ca5cd51/ovnkube-controller/2.log" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.450928 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pkknv_47e7046d-60dc-4dc0-b63e-f22f4ca5cd51/ovnkube-controller/1.log" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.454872 4946 generic.go:334] "Generic (PLEG): container finished" podID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" containerID="bc3269320004c28557ed0853d941c5174ed1d136bae64a4ecbf769e8cf7fd237" exitCode=1 Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.454921 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" event={"ID":"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51","Type":"ContainerDied","Data":"bc3269320004c28557ed0853d941c5174ed1d136bae64a4ecbf769e8cf7fd237"} Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.454992 4946 scope.go:117] "RemoveContainer" containerID="6df860d0b9182067aa468a67d34f001cfe4b3e2251c630ee9ceab34315956bce" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.455778 4946 scope.go:117] "RemoveContainer" containerID="bc3269320004c28557ed0853d941c5174ed1d136bae64a4ecbf769e8cf7fd237" Nov 28 06:53:10 crc kubenswrapper[4946]: E1128 06:53:10.455990 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pkknv_openshift-ovn-kubernetes(47e7046d-60dc-4dc0-b63e-f22f4ca5cd51)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" podUID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.483201 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68d47c52-9827-4ebb-88a2-ba498092709e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eef1ad3ca814effd7a94c23d46fc3b14149ccc6ae6181e19c5f24884967b7627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7152b6653aaccd36bb018d24abf196dd6f13d41edddb0939917f96bcd9e9bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5be506bd8debbc12ff5ca56d77fdf667855d2c4078cea1b1d1388f5ef8831ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5dc7ef4ea06b54b6de621ebe79d3587324b0c4cdd90e078d4116dc5252ad35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4c7e2754f62b9d955da7589f04acb0136ed3c504a75b0db30f22cbeec54c413\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:52:44Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:52:38.666044 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:52:38.667906 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018003996/tls.crt::/tmp/serving-cert-1018003996/tls.key\\\\\\\"\\\\nI1128 06:52:44.041872 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:52:44.048251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:52:44.048273 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:52:44.048300 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:52:44.048305 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:52:44.064176 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 06:52:44.064208 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064213 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:52:44.064220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:52:44.064223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:52:44.064226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 06:52:44.064671 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 06:52:44.066682 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326666c58a9c75d0f08586c82bca1a0686ed2935f475b3c359cddc01875b248e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:10Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.505835 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:10Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.524131 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.524298 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.524320 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.524345 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.524363 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:10Z","lastTransitionTime":"2025-11-28T06:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.527083 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j4wp8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edc5a786-a162-4416-a240-54272b0c7376\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a6e397c1f3423abc4eeb0a20453109c22f305c1b259dddbe23bd71a78b5512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87x27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j4wp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:10Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.547279 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpkwv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6df591b-287c-45d9-9db2-f3c441005fdd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f983a4bf3fb71511789223ca3b7b223ac9589d4807ea2dfe4087d2fa5d48738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm2g7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6b6b4122d5aa6fa73c5453308ea21172330e8324feb57965db83d5c2c05fa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm2g7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:53:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zpkwv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:10Z is after 2025-08-24T17:21:41Z" Nov 28 
06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.569748 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16482226c2bd22abff4c445ab0d5e46c2012a83ab5e616ebecc38d9009a8a6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:10Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.586211 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9g9w4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"857356d2-6585-41c6-9a2c-e06ef45f7303\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9d433d72f090d2c1d0036b7ba652dfca7bed7425a35bf87809ede7fd9fca8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9g9w4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:10Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.619125 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35308a0d1d457d423060c71fb70b7e8d1723171f575b2545db3b372c4d2a12ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38e04dce69273a5669801d093325e51ed6d3a83d545aaf505f6619b89cb9e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9128b2484a66b1b0f5954befe68d1bd1be5a4edbf254d76473d8ee3025b47ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8b00edadf6d3442d70a476d465733f303f7a48d8998c1a9c881ae515daa264e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ba38f4c30e4f09063926783a9fbf7883079d7f077200ee18a7a24c236b025f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed2e
660d6d37ec0abfdd8485a2b9cb79b041a214007a5b8832d70a21076c324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc3269320004c28557ed0853d941c5174ed1d136bae64a4ecbf769e8cf7fd237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6df860d0b9182067aa468a67d34f001cfe4b3e2251c630ee9ceab34315956bce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:52:58Z\\\",\\\"message\\\":\\\"eflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 06:52:57.223889 6381 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1128 06:52:57.223909 6381 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1128 06:52:57.223935 6381 factory.go:656] Stopping watch factory\\\\nI1128 06:52:57.223946 6381 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1128 06:52:57.223942 6381 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1128 06:52:57.223954 6381 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1128 06:52:57.222497 6381 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 06:52:57.223994 6381 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 06:52:57.224897 6381 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1128 06:52:57.225235 6381 ovnkube.go:599] Stopped ovnkube\\\\nI1128 06:52:57.225348 6381 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1128 
06:52:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc3269320004c28557ed0853d941c5174ed1d136bae64a4ecbf769e8cf7fd237\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:53:10Z\\\",\\\"message\\\":\\\"6983b1-6887-4d13-8f9a-f261a745115f\\\\\\\", APIVersion:\\\\\\\"v1\\\\\\\", ResourceVersion:\\\\\\\"26911\\\\\\\", FieldPath:\\\\\\\"\\\\\\\"}): type: 'Warning' reason: 'ErrorAddingResource' addLogicalPort failed for openshift-multus/network-metrics-daemon-gkg79: failed to update pod openshift-multus/network-metrics-daemon-gkg79: Internal error occurred: failed calling webhook \\\\\\\"pod.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/pod?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:09Z is after 2025-08-24T17:21:41Z\\\\nI1128 06:53:09.888528 6569 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 06:53:09.888829 6569 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1128 06:53:09.888959 6569 factory.go:656] Stopping watch factory\\\\nI1128 06:53:09.888988 6569 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1128 06:53:09.889839 6569 ovnkube.go:599] Stopped ovnkube\\\\nI1128 06:53:09.889881 6569 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1128 06:53:09.889948 6569 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc681b78e3640f7f8dd8729a32ea393d97f837795e0cccf628724d2b3901739b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099
482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pkknv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:10Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.627519 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.627581 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.627603 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.627622 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.627637 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:10Z","lastTransitionTime":"2025-11-28T06:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.643084 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:10Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.663445 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0514bebed46aabc46c74ef48fda9c1e561a174d87db40128875cc0ec78446d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ab7252b4526779db2e0aea8b7747369f52242a76df7e179c6617290fee4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:10Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.680163 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://078e67af49c0d442d125d9cb67d9d93ce05e452bf3b09f666e9763e481eaa15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:10Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.701795 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqf5z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d160dd-b1b4-4cdf-800c-74195ab023e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bd1943c060c0e4d739d86520e0661bbd6ecef5614aaceb3e893a11204fdb39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f336183b95d088fbf0d45cb8e6e6ab124453e9ba4a1456171f5631e9d029a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f336183b95d088fbf0d45cb8e6e6ab124453e9ba4a1456171f5631e9d029a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqf5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:10Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.721408 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7450befc-262f-45d1-a5f4-f445e540185b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7854e764531b926939e7ce77e671b5dae4c0d8a4a1d8ac7153f851e924c4bb84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174d7788364a980297791c1072b9c05dd827bb3759a851ea4c5dd07981297b67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g2vhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:10Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.736903 4946 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.736979 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.736995 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.737018 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.737035 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:10Z","lastTransitionTime":"2025-11-28T06:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.741290 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gkg79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6983b1-6887-4d13-8f9a-f261a745115f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9srk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9srk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:53:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gkg79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:10Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.762926 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fb7be34-ed01-4217-aab4-c8ab35341ac6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af6de2bd2f3d9be0a670cb8c36e6db8c037c4987eff7356840af5b275156ad7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d384f5fd04cfe61e6a8d13538eb1b0267ca88eb0d226d8d85b5065fb5f7de9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89780aac4572411aeeb8480b61c98dbc22a2e4b3e72290470ba56c81b648b617\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b6df8018ceaf95465c4ad1bfd0bb2653ae4b87430f6ed4e21af2bd14bbb9ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:10Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.781243 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:10Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.793498 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a10fe13c-fc86-474a-945b-f96caafad2a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712d9a3cda3acae09eae5946cb14e19bee8697d567bff79dd67db8a41ed38fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45pfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:10Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.839844 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.839889 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.839900 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.839916 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.839928 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:10Z","lastTransitionTime":"2025-11-28T06:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.943544 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.943594 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.943610 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.943632 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.943649 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:10Z","lastTransitionTime":"2025-11-28T06:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:10 crc kubenswrapper[4946]: I1128 06:53:10.989103 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gkg79" Nov 28 06:53:10 crc kubenswrapper[4946]: E1128 06:53:10.989297 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gkg79" podUID="4e6983b1-6887-4d13-8f9a-f261a745115f" Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.046655 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.046730 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.046755 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.046784 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.046808 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:11Z","lastTransitionTime":"2025-11-28T06:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.150265 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.150330 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.150347 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.150370 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.150390 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:11Z","lastTransitionTime":"2025-11-28T06:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.253243 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.253336 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.253349 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.253368 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.253389 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:11Z","lastTransitionTime":"2025-11-28T06:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.357200 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.357690 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.357759 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.357794 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.357821 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:11Z","lastTransitionTime":"2025-11-28T06:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.461717 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.462105 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.462260 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.462581 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.462768 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:11Z","lastTransitionTime":"2025-11-28T06:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.463412 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pkknv_47e7046d-60dc-4dc0-b63e-f22f4ca5cd51/ovnkube-controller/2.log" Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.470419 4946 scope.go:117] "RemoveContainer" containerID="bc3269320004c28557ed0853d941c5174ed1d136bae64a4ecbf769e8cf7fd237" Nov 28 06:53:11 crc kubenswrapper[4946]: E1128 06:53:11.470831 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pkknv_openshift-ovn-kubernetes(47e7046d-60dc-4dc0-b63e-f22f4ca5cd51)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" podUID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.506367 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:11Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.540991 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j4wp8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edc5a786-a162-4416-a240-54272b0c7376\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a6e397c1f3423abc4eeb0a20453109c22f305c1b259dddbe23bd71a78b5512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87x27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j4wp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:11Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.559881 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpkwv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6df591b-287c-45d9-9db2-f3c441005fdd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f983a4bf3fb71511789223ca3b7b223ac9589d4807ea2dfe4087d2fa5d48738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm2g7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6b6b4122d5aa6fa73c5453308ea21172330e8324feb57965db83d5c2c05fa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm2g7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:53:00Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zpkwv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:11Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.565035 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.565064 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.565075 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.565090 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.565111 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:11Z","lastTransitionTime":"2025-11-28T06:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.575690 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68d47c52-9827-4ebb-88a2-ba498092709e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eef1ad3ca814effd7a94c23d46fc3b14149ccc6ae6181e19c5f24884967b7627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"cont
ainerID\\\":\\\"cri-o://1a7152b6653aaccd36bb018d24abf196dd6f13d41edddb0939917f96bcd9e9bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5be506bd8debbc12ff5ca56d77fdf667855d2c4078cea1b1d1388f5ef8831ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5dc7ef4ea06b54b6de621ebe79d3587324b0c4cdd90e078d4116dc5252ad35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4c7e2754f62b9d955da7589f04acb0136ed3c504a75b0db30f22cbeec54c413\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:52:44Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:52:38.666044 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:52:38.667906 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018003996/tls.crt::/tmp/serving-cert-1018003996/tls.key\\\\\\\"\\\\nI1128 06:52:44.041872 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:52:44.048251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:52:44.048273 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:52:44.048300 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:52:44.048305 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:52:44.064176 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 06:52:44.064208 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW1128 06:52:44.064213 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:52:44.064220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:52:44.064223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:52:44.064226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 06:52:44.064671 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 06:52:44.066682 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326666c58a9c75d0f08586c82bca1a0686ed2935f475b3c359cddc01875b248e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:11Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:11 crc 
kubenswrapper[4946]: I1128 06:53:11.599257 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35308a0d1d457d423060c71fb70b7e8d1723171f575b2545db3b372c4d2a12ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38e04dce69273a5669801d093325e51ed6d3a83d545aaf505f6619b89cb9e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9128b2484a66b1b0f5954befe68d1bd1be5a4edbf2
54d76473d8ee3025b47ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8b00edadf6d3442d70a476d465733f303f7a48d8998c1a9c881ae515daa264e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ba38f4c30e4f09063926783a9fbf7883079d7f077200ee18a7a24c236b025f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed2e660d6d37ec0abfdd8485a2b9cb79b041a214007a5b8832d70a21076c324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd
47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc3269320004c28557ed0853d941c5174ed1d136bae64a4ecbf769e8cf7fd237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc3269320004c28557ed0853d941c5174ed1d136bae64a4ecbf769e8cf7fd237\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:53:10Z\\\",\\\"message\\\":\\\"6983b1-6887-4d13-8f9a-f261a745115f\\\\\\\", APIVersion:\\\\\\\"v1\\\\\\\", ResourceVersion:\\\\\\\"26911\\\\\\\", FieldPath:\\\\\\\"\\\\\\\"}): type: 'Warning' reason: 'ErrorAddingResource' addLogicalPort failed for openshift-multus/network-metrics-daemon-gkg79: failed to update pod openshift-multus/network-metrics-daemon-gkg79: Internal error occurred: failed calling webhook \\\\\\\"pod.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/pod?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:09Z is after 2025-08-24T17:21:41Z\\\\nI1128 06:53:09.888528 6569 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 06:53:09.888829 6569 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1128 06:53:09.888959 6569 factory.go:656] Stopping watch factory\\\\nI1128 06:53:09.888988 6569 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1128 06:53:09.889839 6569 ovnkube.go:599] Stopped ovnkube\\\\nI1128 06:53:09.889881 6569 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1128 06:53:09.889948 6569 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:53:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pkknv_openshift-ovn-kubernetes(47e7046d-60dc-4dc0-b63e-f22f4ca5cd51)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc681b78e3640f7f8dd8729a32ea393d97f837795e0cccf628724d2b3901739b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pkknv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:11Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.614935 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16482226c2bd22abff4c445ab0d5e46c2012a83ab5e616ebecc38d9009a8a6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:11Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.630729 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9g9w4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"857356d2-6585-41c6-9a2c-e06ef45f7303\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9d433d72f090d2c1d0036b7ba652dfca7bed7425a35bf87809ede7fd9fca8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-fbjr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9g9w4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:11Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.644521 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://078e67af49c0d442d125d9cb67d9d93ce05e452bf3b09f666e9763e481eaa15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:11Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.664414 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqf5z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d160dd-b1b4-4cdf-800c-74195ab023e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bd1943c060c0e4d739d86520e0661bbd6ecef5614aaceb3e893a11204fdb39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f336183b95d088fbf0d45cb8e6e6ab124453e9ba4a1456171f5631e9d029a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f336183b95d088fbf0d45cb8e6e6ab124453e9ba4a1456171f5631e9d029a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqf5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:11Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.668399 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.668589 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:11 crc 
kubenswrapper[4946]: I1128 06:53:11.668683 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.668766 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.668857 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:11Z","lastTransitionTime":"2025-11-28T06:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.684697 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:11Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.700003 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0514bebed46aabc46c74ef48fda9c1e561a174d87db40128875cc0ec78446d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ab7252b4526779db2e0aea8b7747369f52242a76df7e179c6617290fee4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:11Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.716521 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fb7be34-ed01-4217-aab4-c8ab35341ac6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af6de2bd2f3d9be0a670cb8c36e6db8c037c4987eff7356840af5b275156ad7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d384f5fd04cfe61e6a8d13538eb1b0267ca88eb0d226d8d85b5065fb5f7de9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89780aac4572411aeeb8480b61c98dbc22a2e4b3e72290470ba56c81b648b617\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-clu
ster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b6df8018ceaf95465c4ad1bfd0bb2653ae4b87430f6ed4e21af2bd14bbb9ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:11Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.731636 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:11Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.746060 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a10fe13c-fc86-474a-945b-f96caafad2a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712d9a3cda3acae09eae5946cb14e19bee8697d567bff79dd67db8a41ed38fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45pfm\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:11Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.763215 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7450befc-262f-45d1-a5f4-f445e540185b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7854e764531b926939e7ce77e671b5dae4c0d8a4a1d8ac7153f851e924c4bb84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174d7788364a980297791c1072b9c05dd827bb3759a851ea4c5dd07981297b67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g2vhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:11Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.771312 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.771372 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.771392 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.771418 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.771436 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:11Z","lastTransitionTime":"2025-11-28T06:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.778344 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gkg79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6983b1-6887-4d13-8f9a-f261a745115f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9srk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9srk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:53:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gkg79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:11Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.874681 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.874725 4946 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.874745 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.874767 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.874785 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:11Z","lastTransitionTime":"2025-11-28T06:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.989299 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.989362 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:53:11 crc kubenswrapper[4946]: E1128 06:53:11.989564 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:53:11 crc kubenswrapper[4946]: I1128 06:53:11.989637 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:53:11 crc kubenswrapper[4946]: E1128 06:53:11.989823 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:53:11 crc kubenswrapper[4946]: E1128 06:53:11.989948 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
[condensed: the five-message group above (NodeHasSufficientMemory / NodeHasNoDiskPressure / NodeHasSufficientPID / NodeNotReady / setters.go:603 "Node became not ready") repeats with identical content, timestamps only advancing, at 06:53:11.978, 06:53:12.082, 06:53:12.185, 06:53:12.289, 06:53:12.393, 06:53:12.497, 06:53:12.600, 06:53:12.704, 06:53:12.808, 06:53:12.910, 06:53:13.014, 06:53:13.117, 06:53:13.220, 06:53:13.324, 06:53:13.428, 06:53:13.532, 06:53:13.636, 06:53:13.740, and 06:53:13.844; later occurrences and the interleaved per-pod sandbox records are kept below]
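[annotation] Two failures interlock in these records. The container runtime reports NetworkReady=false because no CNI configuration has been written to /etc/kubernetes/cni/net.d/ yet (the network provider pods are themselves still starting), so the kubelet marks the node NotReady on every status sync; and each sync's patch is in turn rejected by the expired webhook noted above, so the condition never persists and the same five-message group is re-recorded roughly every 100 ms. The runtime side is easy to confirm on the node; a minimal sketch, assuming Python 3 run with read access to the directory (the path comes from the log message; .conf, .conflist and .json are the extensions libcni scans for):

    from pathlib import Path

    # Directory the runtime is watching, per the KubeletNotReady message.
    cni_dir = Path("/etc/kubernetes/cni/net.d")

    confs = []
    if cni_dir.is_dir():
        # libcni treats .conf, .conflist and .json files as network configs.
        confs = sorted(p for p in cni_dir.iterdir()
                       if p.suffix in {".conf", ".conflist", ".json"})

    if confs:
        for p in confs:
            print("CNI config present:", p)
    else:
        print(f"no CNI configuration file in {cni_dir}; matches NetworkPluginNotReady")

Once the network provider (multus/ovn-kubernetes on this node, judging by the pods above) writes its config there, NetworkReady flips to true and these NotReady heartbeats stop. [end annotation]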
Nov 28 06:53:12 crc kubenswrapper[4946]: I1128 06:53:12.989717 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gkg79" Nov 28 06:53:12 crc kubenswrapper[4946]: E1128 06:53:12.989937 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gkg79" podUID="4e6983b1-6887-4d13-8f9a-f261a745115f" Nov 28 06:53:13 crc kubenswrapper[4946]: I1128 06:53:13.947561 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:13 crc kubenswrapper[4946]: I1128 06:53:13.947676 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:13 crc kubenswrapper[4946]: I1128 06:53:13.947694 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:13 crc kubenswrapper[4946]: I1128 06:53:13.947718 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:13 crc kubenswrapper[4946]: I1128 06:53:13.947735 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:13Z","lastTransitionTime":"2025-11-28T06:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"}
Nov 28 06:53:13 crc kubenswrapper[4946]: I1128 06:53:13.989400 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 06:53:13 crc kubenswrapper[4946]: I1128 06:53:13.989412 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 06:53:13 crc kubenswrapper[4946]: E1128 06:53:13.989691 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 28 06:53:13 crc kubenswrapper[4946]: I1128 06:53:13.989445 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 06:53:13 crc kubenswrapper[4946]: E1128 06:53:13.990033 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 06:53:13 crc kubenswrapper[4946]: E1128 06:53:13.990237 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 28 06:53:14 crc kubenswrapper[4946]: I1128 06:53:14.051659 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:53:14 crc kubenswrapper[4946]: I1128 06:53:14.051732 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:53:14 crc kubenswrapper[4946]: I1128 06:53:14.051749 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:53:14 crc kubenswrapper[4946]: I1128 06:53:14.051776 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:53:14 crc kubenswrapper[4946]: I1128 06:53:14.051795 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:14Z","lastTransitionTime":"2025-11-28T06:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
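
The three "Error syncing pod, skipping" entries are gated on exactly the condition payload that setters.go keeps logging. That payload is plain JSON, so it can be decoded with nothing but the standard library; a minimal sketch (using encoding/json and a hand-rolled struct rather than the k8s.io/api NodeCondition type the kubelet itself uses):

package main

import (
	"encoding/json"
	"fmt"
)

// nodeCondition mirrors the fields visible in the logged condition={...}
// payload; it is a local stand-in, not the upstream Kubernetes type.
type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Copied verbatim from the "Node became not ready" entries above.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:14Z","lastTransitionTime":"2025-11-28T06:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`

	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	// Ready=False with reason KubeletNotReady is what makes every pod
	// sync above fail with "network is not ready".
	fmt.Printf("%s=%s (%s): %s\n", c.Type, c.Status, c.Reason, c.Message)
}
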
Nov 28 06:53:14 crc kubenswrapper[4946]: I1128 06:53:14.156122 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:53:14 crc kubenswrapper[4946]: I1128 06:53:14.156207 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:53:14 crc kubenswrapper[4946]: I1128 06:53:14.156227 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:53:14 crc kubenswrapper[4946]: I1128 06:53:14.156256 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:53:14 crc kubenswrapper[4946]: I1128 06:53:14.156277 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:14Z","lastTransitionTime":"2025-11-28T06:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:53:14 crc kubenswrapper[4946]: I1128 06:53:14.259607 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:53:14 crc kubenswrapper[4946]: I1128 06:53:14.259969 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:53:14 crc kubenswrapper[4946]: I1128 06:53:14.260139 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:53:14 crc kubenswrapper[4946]: I1128 06:53:14.260281 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:53:14 crc kubenswrapper[4946]: I1128 06:53:14.260401 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:14Z","lastTransitionTime":"2025-11-28T06:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:53:14 crc kubenswrapper[4946]: I1128 06:53:14.364421 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:53:14 crc kubenswrapper[4946]: I1128 06:53:14.365222 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:53:14 crc kubenswrapper[4946]: I1128 06:53:14.365416 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:53:14 crc kubenswrapper[4946]: I1128 06:53:14.365648 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:53:14 crc kubenswrapper[4946]: I1128 06:53:14.365846 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:14Z","lastTransitionTime":"2025-11-28T06:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:14 crc kubenswrapper[4946]: I1128 06:53:14.469811 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:14 crc kubenswrapper[4946]: I1128 06:53:14.469864 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:14 crc kubenswrapper[4946]: I1128 06:53:14.469876 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:14 crc kubenswrapper[4946]: I1128 06:53:14.469893 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:14 crc kubenswrapper[4946]: I1128 06:53:14.469905 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:14Z","lastTransitionTime":"2025-11-28T06:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:14 crc kubenswrapper[4946]: I1128 06:53:14.573606 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:14 crc kubenswrapper[4946]: I1128 06:53:14.573703 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:14 crc kubenswrapper[4946]: I1128 06:53:14.573731 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:14 crc kubenswrapper[4946]: I1128 06:53:14.573764 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:14 crc kubenswrapper[4946]: I1128 06:53:14.573789 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:14Z","lastTransitionTime":"2025-11-28T06:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:14 crc kubenswrapper[4946]: I1128 06:53:14.677244 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:14 crc kubenswrapper[4946]: I1128 06:53:14.677321 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:14 crc kubenswrapper[4946]: I1128 06:53:14.677345 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:14 crc kubenswrapper[4946]: I1128 06:53:14.677377 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:14 crc kubenswrapper[4946]: I1128 06:53:14.677396 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:14Z","lastTransitionTime":"2025-11-28T06:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:14 crc kubenswrapper[4946]: I1128 06:53:14.781443 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:14 crc kubenswrapper[4946]: I1128 06:53:14.781532 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:14 crc kubenswrapper[4946]: I1128 06:53:14.781556 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:14 crc kubenswrapper[4946]: I1128 06:53:14.781583 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:14 crc kubenswrapper[4946]: I1128 06:53:14.781601 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:14Z","lastTransitionTime":"2025-11-28T06:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:14 crc kubenswrapper[4946]: I1128 06:53:14.884414 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:14 crc kubenswrapper[4946]: I1128 06:53:14.884521 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:14 crc kubenswrapper[4946]: I1128 06:53:14.884539 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:14 crc kubenswrapper[4946]: I1128 06:53:14.884564 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:14 crc kubenswrapper[4946]: I1128 06:53:14.884582 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:14Z","lastTransitionTime":"2025-11-28T06:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:14 crc kubenswrapper[4946]: I1128 06:53:14.938651 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 06:53:14 crc kubenswrapper[4946]: I1128 06:53:14.955622 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 28 06:53:14 crc kubenswrapper[4946]: I1128 06:53:14.962877 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fb7be34-ed01-4217-aab4-c8ab35341ac6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af6de2bd2f3d9be0a670cb8c36e6db8c037c4987eff7356840af5b275156ad7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d384f5fd04cfe61e6a8d13538eb1b0267ca88eb0d226d8d85b5065fb5f7de9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89780aac4572411aeeb8480b61c98dbc22a2e4b3e72290470ba56c81b648b617\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92e
daf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b6df8018ceaf95465c4ad1bfd0bb2653ae4b87430f6ed4e21af2bd14bbb9ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:14Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:14 crc kubenswrapper[4946]: I1128 06:53:14.979814 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:14Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:14 crc kubenswrapper[4946]: I1128 06:53:14.987589 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:14 crc kubenswrapper[4946]: I1128 06:53:14.987653 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:14 crc kubenswrapper[4946]: I1128 06:53:14.987673 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:14 crc kubenswrapper[4946]: I1128 06:53:14.987705 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:14 crc kubenswrapper[4946]: I1128 06:53:14.987725 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:14Z","lastTransitionTime":"2025-11-28T06:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:14 crc kubenswrapper[4946]: I1128 06:53:14.988884 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gkg79" Nov 28 06:53:14 crc kubenswrapper[4946]: E1128 06:53:14.989070 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gkg79" podUID="4e6983b1-6887-4d13-8f9a-f261a745115f" Nov 28 06:53:14 crc kubenswrapper[4946]: I1128 06:53:14.999740 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a10fe13c-fc86-474a-945b-f96caafad2a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712d9a3cda3acae09eae5946cb14e19bee8697d567bff79dd67db8a41ed38fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45pfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:14Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.018506 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7450befc-262f-45d1-a5f4-f445e540185b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7854e764531b926939e7ce77e671b5dae4c0d8a4a1d8ac7153f851e924c4bb84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174d7788364a980297791c1072b9c05dd827bb3759a851ea4c5dd07981297b67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g2vhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:15Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.036094 4946 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-gkg79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6983b1-6887-4d13-8f9a-f261a745115f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9srk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9srk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:53:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gkg79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:15Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.062422 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68d47c52-9827-4ebb-88a2-ba498092709e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eef1ad3ca814effd7a94c23d46fc3b14149ccc6ae6181e19c5f24884967b7627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7152b6653aaccd36bb018d24abf196dd6f13d41edddb0939917f96bcd9e9bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5be506bd8debbc12ff5ca56d77fdf667855d2c4078cea1b1d1388f5ef8831ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5dc7ef4ea06b54b6de621ebe79d3587324b0c4cdd90e078d4116dc5252ad35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4c7e2754f62b9d955da7589f04acb0136ed3c504a75b0db30f22cbeec54c413\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:52:44Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:52:38.666044 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:52:38.667906 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018003996/tls.crt::/tmp/serving-cert-1018003996/tls.key\\\\\\\"\\\\nI1128 06:52:44.041872 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:52:44.048251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:52:44.048273 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:52:44.048300 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:52:44.048305 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:52:44.064176 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 06:52:44.064208 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064213 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:52:44.064220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:52:44.064223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:52:44.064226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 06:52:44.064671 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 06:52:44.066682 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326666c58a9c75d0f08586c82bca1a0686ed2935f475b3c359cddc01875b248e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:15Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.083802 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:15Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.091015 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.091082 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.091100 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.091125 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.091147 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:15Z","lastTransitionTime":"2025-11-28T06:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.102674 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j4wp8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edc5a786-a162-4416-a240-54272b0c7376\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a6e397c1f3423abc4eeb0a20453109c22f305c1b259dddbe23bd71a78b5512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87x27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j4wp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:15Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.122864 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpkwv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6df591b-287c-45d9-9db2-f3c441005fdd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f983a4bf3fb71511789223ca3b7b223ac9589d4807ea2dfe4087d2fa5d48738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm2g7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6b6b4122d5aa6fa73c5453308ea21172330e8324feb57965db83d5c2c05fa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm2g7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:53:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zpkwv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:15Z is after 2025-08-24T17:21:41Z" Nov 28 
Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.146253 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16482226c2bd22abff4c445ab0d5e46c2012a83ab5e616ebecc38d9009a8a6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:15Z is after 2025-08-24T17:21:41Z"
Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.169580 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9g9w4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"857356d2-6585-41c6-9a2c-e06ef45f7303\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9d433d72f090d2c1d0036b7ba652dfca7bed7425a35bf87809ede7fd9fca8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9g9w4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:15Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.194357 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.194444 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.194508 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.194543 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.194566 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:15Z","lastTransitionTime":"2025-11-28T06:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.202937 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35308a0d1d457d423060c71fb70b7e8d1723171f575b2545db3b372c4d2a12ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38e04dce69273a5669801d093325e51ed6d3a83d545aaf505f6619b89cb9e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9128b2484a66b1b0f5954befe68d1bd1be5a4edbf254d76473d8ee3025b47ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8b00edadf6d3442d70a476d465733f303f7a48d8998c1a9c881ae515daa264e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ba38f4c30e4f09063926783a9fbf7883079d7f077200ee18a7a24c236b025f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed2e660d6d37ec0abfdd8485a2b9cb79b041a214007a5b8832d70a21076c324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc3269320004c28557ed0853d941c5174ed1d136
bae64a4ecbf769e8cf7fd237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc3269320004c28557ed0853d941c5174ed1d136bae64a4ecbf769e8cf7fd237\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:53:10Z\\\",\\\"message\\\":\\\"6983b1-6887-4d13-8f9a-f261a745115f\\\\\\\", APIVersion:\\\\\\\"v1\\\\\\\", ResourceVersion:\\\\\\\"26911\\\\\\\", FieldPath:\\\\\\\"\\\\\\\"}): type: 'Warning' reason: 'ErrorAddingResource' addLogicalPort failed for openshift-multus/network-metrics-daemon-gkg79: failed to update pod openshift-multus/network-metrics-daemon-gkg79: Internal error occurred: failed calling webhook \\\\\\\"pod.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/pod?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:09Z is after 2025-08-24T17:21:41Z\\\\nI1128 06:53:09.888528 6569 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 06:53:09.888829 6569 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1128 06:53:09.888959 6569 factory.go:656] Stopping watch factory\\\\nI1128 06:53:09.888988 6569 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1128 06:53:09.889839 6569 ovnkube.go:599] Stopped ovnkube\\\\nI1128 06:53:09.889881 6569 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1128 06:53:09.889948 6569 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:53:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pkknv_openshift-ovn-kubernetes(47e7046d-60dc-4dc0-b63e-f22f4ca5cd51)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc681b78e3640f7f8dd8729a32ea393d97f837795e0cccf628724d2b3901739b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pkknv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:15Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.219940 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:15Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.236136 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0514bebed46aabc46c74ef48fda9c1e561a174d87db40128875cc0ec78446d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ab7252b4526779db2e0aea8b7747369f52242a76df7e179c6617290fee4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:15Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.249603 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://078e67af49c0d442d125d9cb67d9d93ce05e452bf3b09f666e9763e481eaa15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:15Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.267752 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqf5z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d160dd-b1b4-4cdf-800c-74195ab023e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bd1943c060c0e4d739d86520e0661bbd6ecef5614aaceb3e893a11204fdb39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f336183b95d088fbf0d45cb8e6e6ab124453e9ba4a1456171f5631e9d029a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f336183b95d088fbf0d45cb8e6e6ab124453e9ba4a1456171f5631e9d029a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqf5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:15Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.297914 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.297974 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:15 crc 
kubenswrapper[4946]: I1128 06:53:15.297997 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.298025 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.298046 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:15Z","lastTransitionTime":"2025-11-28T06:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.400721 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.400780 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.400795 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.400816 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.400834 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:15Z","lastTransitionTime":"2025-11-28T06:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.503901 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.503963 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.503980 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.504006 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.504022 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:15Z","lastTransitionTime":"2025-11-28T06:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.607583 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.607661 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.607686 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.607715 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.607738 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:15Z","lastTransitionTime":"2025-11-28T06:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.713707 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.713792 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.713816 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.713850 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.713882 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:15Z","lastTransitionTime":"2025-11-28T06:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.763911 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.764172 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:53:15 crc kubenswrapper[4946]: E1128 06:53:15.764302 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-28 06:53:47.764186186 +0000 UTC m=+82.142251337 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.764404 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:53:15 crc kubenswrapper[4946]: E1128 06:53:15.764517 4946 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.764528 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:53:15 crc kubenswrapper[4946]: E1128 06:53:15.764552 4946 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 06:53:15 crc kubenswrapper[4946]: E1128 06:53:15.764682 4946 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.764647 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:53:15 crc kubenswrapper[4946]: E1128 06:53:15.764638 4946 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 06:53:15 crc kubenswrapper[4946]: E1128 06:53:15.764776 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 06:53:47.764744481 +0000 UTC m=+82.142809632 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:53:15 crc kubenswrapper[4946]: E1128 06:53:15.764818 4946 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 06:53:15 crc kubenswrapper[4946]: E1128 06:53:15.764842 4946 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 06:53:15 crc kubenswrapper[4946]: E1128 06:53:15.764935 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 06:53:47.764913365 +0000 UTC m=+82.142978696 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 06:53:15 crc kubenswrapper[4946]: E1128 06:53:15.764847 4946 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:53:15 crc kubenswrapper[4946]: E1128 06:53:15.764635 4946 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 06:53:15 crc kubenswrapper[4946]: E1128 06:53:15.765113 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 06:53:47.765076979 +0000 UTC m=+82.143142330 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:53:15 crc kubenswrapper[4946]: E1128 06:53:15.765158 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 06:53:47.765136391 +0000 UTC m=+82.143201752 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.817557 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.817620 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.817637 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.817664 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.817683 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:15Z","lastTransitionTime":"2025-11-28T06:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.920662 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.920738 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.920760 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.920888 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.920991 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:15Z","lastTransitionTime":"2025-11-28T06:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.989642 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:53:15 crc kubenswrapper[4946]: E1128 06:53:15.989888 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.990017 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:53:15 crc kubenswrapper[4946]: E1128 06:53:15.990176 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:53:15 crc kubenswrapper[4946]: I1128 06:53:15.990276 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:53:15 crc kubenswrapper[4946]: E1128 06:53:15.990525 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.013635 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fb7be34-ed01-4217-aab4-c8ab35341ac6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af6de2bd2f3d9be0a670cb8c36e6db8c037c4987eff7356840af5b275156ad7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d384f5fd04cfe61e6a8d13538eb1b0267ca88eb0d226d8d85b5065fb5f7de9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89780aac4572411aeeb8480b61c98dbc22a2e4b3e72290470ba56c81b648b617\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b6df8018ceaf95465c4ad1bfd0bb2653ae4b87430f6ed4e21af2bd14bbb9ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:16Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.024817 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.024905 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.024923 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.024947 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:16 crc 
kubenswrapper[4946]: I1128 06:53:16.024996 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:16Z","lastTransitionTime":"2025-11-28T06:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.036618 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:16Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.054287 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a10fe13c-fc86-474a-945b-f96caafad2a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712d9a3cda3acae09eae5946cb14e19bee8697d567bff79dd67db8a41ed38fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45pfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:16Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.073802 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7450befc-262f-45d1-a5f4-f445e540185b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7854e764531b926939e7ce77e671b5dae4c0d8a4a1d8ac7153f851e924c4bb84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174d7788364a980297791c1072b9c05dd827bb3759a851ea4c5dd07981297b67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g2vhr\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:16Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.093580 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gkg79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6983b1-6887-4d13-8f9a-f261a745115f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9srk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9srk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:53:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gkg79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:16Z is 
after 2025-08-24T17:21:41Z" Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.117309 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"136606da-fc7c-4bea-902d-102f836514a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a67b58ab443f2a80c08229709de3844bf287465b58912f472f81b993e5f5fd8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30a18f86a38f7d74aedc8fb55161733023c3a515474f699fc66c645901b43a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14ecbd61c64f04b0ec631e67e25393619f5692c9967fff9d461c40bbb7f3c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://674e2b1496ef2f46b2bd37fd98df7bb076560669c1c6b4a39a081aaa0f0dab23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://674e2b1496ef2f46b2bd37fd98df7bb076560669c1c6b4a39a081aaa0f0dab23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:16Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.128763 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.128826 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.128851 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.128880 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.128904 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:16Z","lastTransitionTime":"2025-11-28T06:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.139445 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:16Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.156935 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j4wp8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"edc5a786-a162-4416-a240-54272b0c7376\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a6e397c1f3423abc4eeb0a20453109c22f305c1b259dddbe23bd71a78b5512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87x27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j4wp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:16Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.175654 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpkwv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6df591b-287c-45d9-9db2-f3c441005fdd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f983a4bf3fb71511789223ca3b7b223ac9589d4807ea2dfe4087d2fa5d48738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm2g7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6b6b4122d5aa6fa73c5453308ea21172330e8324feb57965db83d5c2c05fa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm2g7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:53:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zpkwv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:16Z is after 2025-08-24T17:21:41Z" Nov 28 
06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.197582 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68d47c52-9827-4ebb-88a2-ba498092709e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eef1ad3ca814effd7a94c23d46fc3b14149ccc6ae6181e19c5f24884967b7627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7152b6653aaccd36bb018d24abf196dd6f13d41edddb0939917f96bcd9e9bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5be506bd8debbc12ff5ca56d77fdf667855d2c4078cea1b1d1388f5ef8831ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5dc7ef4ea06b54b6de621ebe79d3587324b0c4cdd90e078d4116dc5252ad35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4c7e2754f62b9d955da7589f04acb0136ed3c504a75b0db30f22cbeec54c413\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:52:44Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:52:38.666044 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:52:38.667906 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018003996/tls.crt::/tmp/serving-cert-1018003996/tls.key\\\\\\\"\\\\nI1128 06:52:44.041872 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:52:44.048251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:52:44.048273 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:52:44.048300 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:52:44.048305 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:52:44.064176 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 06:52:44.064208 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064213 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:52:44.064220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:52:44.064223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:52:44.064226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 06:52:44.064671 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 06:52:44.066682 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326666c58a9c75d0f08586c82bca1a0686ed2935f475b3c359cddc01875b248e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:16Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.220713 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9g9w4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"857356d2-6585-41c6-9a2c-e06ef45f7303\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9d433d72f090d2c1d0036b7ba652dfca7bed7425a35bf87809ede7fd9fca8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9g9w4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:16Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.232548 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.232702 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.232733 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.232802 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.232826 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:16Z","lastTransitionTime":"2025-11-28T06:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.255579 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35308a0d1d457d423060c71fb70b7e8d1723171f575b2545db3b372c4d2a12ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38e04dce69273a5669801d093325e51ed6d3a83d545aaf505f6619b89cb9e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9128b2484a66b1b0f5954befe68d1bd1be5a4edbf254d76473d8ee3025b47ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8b00edadf6d3442d70a476d465733f303f7a48d8998c1a9c881ae515daa264e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ba38f4c30e4f09063926783a9fbf7883079d7f077200ee18a7a24c236b025f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed2e660d6d37ec0abfdd8485a2b9cb79b041a214007a5b8832d70a21076c324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc3269320004c28557ed0853d941c5174ed1d136
bae64a4ecbf769e8cf7fd237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc3269320004c28557ed0853d941c5174ed1d136bae64a4ecbf769e8cf7fd237\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:53:10Z\\\",\\\"message\\\":\\\"6983b1-6887-4d13-8f9a-f261a745115f\\\\\\\", APIVersion:\\\\\\\"v1\\\\\\\", ResourceVersion:\\\\\\\"26911\\\\\\\", FieldPath:\\\\\\\"\\\\\\\"}): type: 'Warning' reason: 'ErrorAddingResource' addLogicalPort failed for openshift-multus/network-metrics-daemon-gkg79: failed to update pod openshift-multus/network-metrics-daemon-gkg79: Internal error occurred: failed calling webhook \\\\\\\"pod.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/pod?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:09Z is after 2025-08-24T17:21:41Z\\\\nI1128 06:53:09.888528 6569 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 06:53:09.888829 6569 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1128 06:53:09.888959 6569 factory.go:656] Stopping watch factory\\\\nI1128 06:53:09.888988 6569 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1128 06:53:09.889839 6569 ovnkube.go:599] Stopped ovnkube\\\\nI1128 06:53:09.889881 6569 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1128 06:53:09.889948 6569 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:53:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pkknv_openshift-ovn-kubernetes(47e7046d-60dc-4dc0-b63e-f22f4ca5cd51)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc681b78e3640f7f8dd8729a32ea393d97f837795e0cccf628724d2b3901739b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pkknv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:16Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.279031 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16482226c2bd22abff4c445ab0d5e46c2012a83ab5e616ebecc38d9009a8a6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:16Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.303858 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0514bebed46aabc46c74ef48fda9c1e561a174d87db40128875cc0ec78446d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ab7252b4526779db2e0aea8b7747369f52242a76df7e179c6617290fee4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:16Z is after 
2025-08-24T17:21:41Z" Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.325911 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://078e67af49c0d442d125d9cb67d9d93ce05e452bf3b09f666e9763e481eaa15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:16Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.336281 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.336341 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.336364 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.336391 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.336412 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:16Z","lastTransitionTime":"2025-11-28T06:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.353896 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqf5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d160dd-b1b4-4cdf-800c-74195ab023e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bd1943c060c0e4d739d86520e0661bbd6ecef5614aaceb3e893a11204fdb39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f336183b95d088fbf0d45cb8e6e6ab124453e9ba4a1456171f5631e9d029a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f336183b95d088fbf0d45cb8e6e6ab124453e9ba4a1456171f5631e9d029a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqf5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:16Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.378175 4946 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:16Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.440282 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.440648 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.440791 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.440928 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.441059 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:16Z","lastTransitionTime":"2025-11-28T06:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.549165 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.549515 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.549694 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.549824 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.549951 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:16Z","lastTransitionTime":"2025-11-28T06:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.654226 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.654304 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.654324 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.654404 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.654428 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:16Z","lastTransitionTime":"2025-11-28T06:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.675356 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4e6983b1-6887-4d13-8f9a-f261a745115f-metrics-certs\") pod \"network-metrics-daemon-gkg79\" (UID: \"4e6983b1-6887-4d13-8f9a-f261a745115f\") " pod="openshift-multus/network-metrics-daemon-gkg79" Nov 28 06:53:16 crc kubenswrapper[4946]: E1128 06:53:16.675639 4946 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 06:53:16 crc kubenswrapper[4946]: E1128 06:53:16.675724 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e6983b1-6887-4d13-8f9a-f261a745115f-metrics-certs podName:4e6983b1-6887-4d13-8f9a-f261a745115f nodeName:}" failed. No retries permitted until 2025-11-28 06:53:32.675696457 +0000 UTC m=+67.053761608 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4e6983b1-6887-4d13-8f9a-f261a745115f-metrics-certs") pod "network-metrics-daemon-gkg79" (UID: "4e6983b1-6887-4d13-8f9a-f261a745115f") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.758225 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.758311 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.758336 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.758363 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.758382 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:16Z","lastTransitionTime":"2025-11-28T06:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.861846 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.861903 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.861920 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.861943 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.861961 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:16Z","lastTransitionTime":"2025-11-28T06:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.964566 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.964644 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.964670 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.964702 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.964720 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:16Z","lastTransitionTime":"2025-11-28T06:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:16 crc kubenswrapper[4946]: I1128 06:53:16.989749 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gkg79" Nov 28 06:53:16 crc kubenswrapper[4946]: E1128 06:53:16.989950 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gkg79" podUID="4e6983b1-6887-4d13-8f9a-f261a745115f" Nov 28 06:53:17 crc kubenswrapper[4946]: I1128 06:53:17.068906 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:17 crc kubenswrapper[4946]: I1128 06:53:17.068981 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:17 crc kubenswrapper[4946]: I1128 06:53:17.069009 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:17 crc kubenswrapper[4946]: I1128 06:53:17.069041 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:17 crc kubenswrapper[4946]: I1128 06:53:17.069174 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:17Z","lastTransitionTime":"2025-11-28T06:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:17 crc kubenswrapper[4946]: I1128 06:53:17.172799 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:17 crc kubenswrapper[4946]: I1128 06:53:17.172904 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:17 crc kubenswrapper[4946]: I1128 06:53:17.172931 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:17 crc kubenswrapper[4946]: I1128 06:53:17.173021 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:17 crc kubenswrapper[4946]: I1128 06:53:17.173793 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:17Z","lastTransitionTime":"2025-11-28T06:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:17 crc kubenswrapper[4946]: I1128 06:53:17.283846 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:17 crc kubenswrapper[4946]: I1128 06:53:17.284405 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:17 crc kubenswrapper[4946]: I1128 06:53:17.284417 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:17 crc kubenswrapper[4946]: I1128 06:53:17.284436 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:17 crc kubenswrapper[4946]: I1128 06:53:17.284448 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:17Z","lastTransitionTime":"2025-11-28T06:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:17 crc kubenswrapper[4946]: I1128 06:53:17.388378 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:17 crc kubenswrapper[4946]: I1128 06:53:17.388435 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:17 crc kubenswrapper[4946]: I1128 06:53:17.388457 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:17 crc kubenswrapper[4946]: I1128 06:53:17.388518 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:17 crc kubenswrapper[4946]: I1128 06:53:17.388541 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:17Z","lastTransitionTime":"2025-11-28T06:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:17 crc kubenswrapper[4946]: I1128 06:53:17.492310 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:17 crc kubenswrapper[4946]: I1128 06:53:17.492372 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:17 crc kubenswrapper[4946]: I1128 06:53:17.492395 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:17 crc kubenswrapper[4946]: I1128 06:53:17.492424 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:17 crc kubenswrapper[4946]: I1128 06:53:17.492445 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:17Z","lastTransitionTime":"2025-11-28T06:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:17 crc kubenswrapper[4946]: I1128 06:53:17.595451 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:17 crc kubenswrapper[4946]: I1128 06:53:17.595547 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:17 crc kubenswrapper[4946]: I1128 06:53:17.595565 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:17 crc kubenswrapper[4946]: I1128 06:53:17.595592 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:17 crc kubenswrapper[4946]: I1128 06:53:17.595610 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:17Z","lastTransitionTime":"2025-11-28T06:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:17 crc kubenswrapper[4946]: I1128 06:53:17.701642 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:17 crc kubenswrapper[4946]: I1128 06:53:17.701704 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:17 crc kubenswrapper[4946]: I1128 06:53:17.701722 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:17 crc kubenswrapper[4946]: I1128 06:53:17.701756 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:17 crc kubenswrapper[4946]: I1128 06:53:17.701776 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:17Z","lastTransitionTime":"2025-11-28T06:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:17 crc kubenswrapper[4946]: I1128 06:53:17.805703 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:17 crc kubenswrapper[4946]: I1128 06:53:17.805767 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:17 crc kubenswrapper[4946]: I1128 06:53:17.805799 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:17 crc kubenswrapper[4946]: I1128 06:53:17.805824 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:17 crc kubenswrapper[4946]: I1128 06:53:17.805843 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:17Z","lastTransitionTime":"2025-11-28T06:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:17 crc kubenswrapper[4946]: I1128 06:53:17.909620 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:17 crc kubenswrapper[4946]: I1128 06:53:17.909695 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:17 crc kubenswrapper[4946]: I1128 06:53:17.909718 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:17 crc kubenswrapper[4946]: I1128 06:53:17.909744 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:17 crc kubenswrapper[4946]: I1128 06:53:17.909765 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:17Z","lastTransitionTime":"2025-11-28T06:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:17 crc kubenswrapper[4946]: I1128 06:53:17.989614 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:53:17 crc kubenswrapper[4946]: I1128 06:53:17.989705 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:53:17 crc kubenswrapper[4946]: I1128 06:53:17.989825 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:53:17 crc kubenswrapper[4946]: E1128 06:53:17.990047 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:53:17 crc kubenswrapper[4946]: E1128 06:53:17.990209 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:53:17 crc kubenswrapper[4946]: E1128 06:53:17.990362 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.000430 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.000526 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.000552 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.000580 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.000604 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:18Z","lastTransitionTime":"2025-11-28T06:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:18 crc kubenswrapper[4946]: E1128 06:53:18.022210 4946 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c74aee61-da50-44cb-be0e-1b7c62358c2a\\\",\\\"systemUUID\\\":\\\"99c18e26-a017-426b-bf43-1a9a83f35f94\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:18Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.028983 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.029042 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.029060 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.029085 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.029105 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:18Z","lastTransitionTime":"2025-11-28T06:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:18 crc kubenswrapper[4946]: E1128 06:53:18.052896 4946 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c74aee61-da50-44cb-be0e-1b7c62358c2a\\\",\\\"systemUUID\\\":\\\"99c18e26-a017-426b-bf43-1a9a83f35f94\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:18Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.058385 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.058438 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.058458 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.058511 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.058529 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:18Z","lastTransitionTime":"2025-11-28T06:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:18 crc kubenswrapper[4946]: E1128 06:53:18.081810 4946 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c74aee61-da50-44cb-be0e-1b7c62358c2a\\\",\\\"systemUUID\\\":\\\"99c18e26-a017-426b-bf43-1a9a83f35f94\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:18Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.093537 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.093594 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.093612 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.093637 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.093656 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:18Z","lastTransitionTime":"2025-11-28T06:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:18 crc kubenswrapper[4946]: E1128 06:53:18.113587 4946 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c74aee61-da50-44cb-be0e-1b7c62358c2a\\\",\\\"systemUUID\\\":\\\"99c18e26-a017-426b-bf43-1a9a83f35f94\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:18Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.119355 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.119408 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.119425 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.119448 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.119490 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:18Z","lastTransitionTime":"2025-11-28T06:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:18 crc kubenswrapper[4946]: E1128 06:53:18.140048 4946 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c74aee61-da50-44cb-be0e-1b7c62358c2a\\\",\\\"systemUUID\\\":\\\"99c18e26-a017-426b-bf43-1a9a83f35f94\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:18Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:18 crc kubenswrapper[4946]: E1128 06:53:18.140278 4946 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.143427 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
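Every "Error updating node status, will retry" entry above fails for the same reason: the API server cannot call the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 because its serving certificate expired on 2025-08-24T17:21:41Z, while the node clock reads 2025-11-28. After the retry budget is exhausted, the kubelet logs "update node status exceeds retry count". A minimal sketch to confirm the expiry from the node, assuming Python 3 with the third-party cryptography package (version 42+ for the *_utc accessors); the host and port are taken from the log above:

import socket
import ssl
from datetime import datetime, timezone

from cryptography import x509  # third-party; assumed installed

HOST, PORT = "127.0.0.1", 9743  # webhook endpoint from the log above

# Inspect the certificate without validating it (validation would fail,
# since the whole point is that it has expired).
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        der = tls.getpeercert(binary_form=True)  # raw DER bytes

cert = x509.load_der_x509_certificate(der)
now = datetime.now(timezone.utc)
print("notBefore:", cert.not_valid_before_utc)
print("notAfter: ", cert.not_valid_after_utc)
print("expired:  ", now > cert.not_valid_after_utc)

On an OpenShift Local / CRC cluster that has been suspended past its certificate validity window, this condition typically clears once the cluster's certificate rotation machinery runs after startup; that is a general observation, not something shown in this log.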
event="NodeHasSufficientMemory" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.143522 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.143540 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.143567 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.143591 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:18Z","lastTransitionTime":"2025-11-28T06:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.247228 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.247297 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.247317 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.247345 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.247365 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:18Z","lastTransitionTime":"2025-11-28T06:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.350898 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.350994 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.351023 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.351055 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.351081 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:18Z","lastTransitionTime":"2025-11-28T06:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.454603 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.454673 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.454696 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.454729 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.454751 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:18Z","lastTransitionTime":"2025-11-28T06:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.558339 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.558386 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.558402 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.558428 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.558446 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:18Z","lastTransitionTime":"2025-11-28T06:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.662070 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.662158 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.662186 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.662219 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.662242 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:18Z","lastTransitionTime":"2025-11-28T06:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.765324 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.765386 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.765405 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.765427 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.765444 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:18Z","lastTransitionTime":"2025-11-28T06:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.868932 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.868979 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.868995 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.869013 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.869027 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:18Z","lastTransitionTime":"2025-11-28T06:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.973067 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.973152 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.973179 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.973211 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.973235 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:18Z","lastTransitionTime":"2025-11-28T06:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:18 crc kubenswrapper[4946]: I1128 06:53:18.989655 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gkg79" Nov 28 06:53:18 crc kubenswrapper[4946]: E1128 06:53:18.989925 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gkg79" podUID="4e6983b1-6887-4d13-8f9a-f261a745115f" Nov 28 06:53:19 crc kubenswrapper[4946]: I1128 06:53:19.076715 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:19 crc kubenswrapper[4946]: I1128 06:53:19.076773 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:19 crc kubenswrapper[4946]: I1128 06:53:19.076787 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:19 crc kubenswrapper[4946]: I1128 06:53:19.076808 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:19 crc kubenswrapper[4946]: I1128 06:53:19.076824 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:19Z","lastTransitionTime":"2025-11-28T06:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:19 crc kubenswrapper[4946]: I1128 06:53:19.180032 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:19 crc kubenswrapper[4946]: I1128 06:53:19.180102 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:19 crc kubenswrapper[4946]: I1128 06:53:19.180121 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:19 crc kubenswrapper[4946]: I1128 06:53:19.180147 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:19 crc kubenswrapper[4946]: I1128 06:53:19.180168 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:19Z","lastTransitionTime":"2025-11-28T06:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:19 crc kubenswrapper[4946]: I1128 06:53:19.282797 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:19 crc kubenswrapper[4946]: I1128 06:53:19.282866 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:19 crc kubenswrapper[4946]: I1128 06:53:19.282886 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:19 crc kubenswrapper[4946]: I1128 06:53:19.282908 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:19 crc kubenswrapper[4946]: I1128 06:53:19.282924 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:19Z","lastTransitionTime":"2025-11-28T06:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:19 crc kubenswrapper[4946]: I1128 06:53:19.386325 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:19 crc kubenswrapper[4946]: I1128 06:53:19.386403 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:19 crc kubenswrapper[4946]: I1128 06:53:19.386424 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:19 crc kubenswrapper[4946]: I1128 06:53:19.386449 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:19 crc kubenswrapper[4946]: I1128 06:53:19.386503 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:19Z","lastTransitionTime":"2025-11-28T06:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:19 crc kubenswrapper[4946]: I1128 06:53:19.489811 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:19 crc kubenswrapper[4946]: I1128 06:53:19.489872 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:19 crc kubenswrapper[4946]: I1128 06:53:19.489890 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:19 crc kubenswrapper[4946]: I1128 06:53:19.489917 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:19 crc kubenswrapper[4946]: I1128 06:53:19.489934 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:19Z","lastTransitionTime":"2025-11-28T06:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:19 crc kubenswrapper[4946]: I1128 06:53:19.593966 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:19 crc kubenswrapper[4946]: I1128 06:53:19.594028 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:19 crc kubenswrapper[4946]: I1128 06:53:19.594046 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:19 crc kubenswrapper[4946]: I1128 06:53:19.594072 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:19 crc kubenswrapper[4946]: I1128 06:53:19.594093 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:19Z","lastTransitionTime":"2025-11-28T06:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:19 crc kubenswrapper[4946]: I1128 06:53:19.697658 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:19 crc kubenswrapper[4946]: I1128 06:53:19.697722 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:19 crc kubenswrapper[4946]: I1128 06:53:19.697740 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:19 crc kubenswrapper[4946]: I1128 06:53:19.697764 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:19 crc kubenswrapper[4946]: I1128 06:53:19.697785 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:19Z","lastTransitionTime":"2025-11-28T06:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:19 crc kubenswrapper[4946]: I1128 06:53:19.801689 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:19 crc kubenswrapper[4946]: I1128 06:53:19.801762 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:19 crc kubenswrapper[4946]: I1128 06:53:19.801787 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:19 crc kubenswrapper[4946]: I1128 06:53:19.801815 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:19 crc kubenswrapper[4946]: I1128 06:53:19.801835 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:19Z","lastTransitionTime":"2025-11-28T06:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:19 crc kubenswrapper[4946]: I1128 06:53:19.905549 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:19 crc kubenswrapper[4946]: I1128 06:53:19.905622 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:19 crc kubenswrapper[4946]: I1128 06:53:19.905645 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:19 crc kubenswrapper[4946]: I1128 06:53:19.905679 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:19 crc kubenswrapper[4946]: I1128 06:53:19.905706 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:19Z","lastTransitionTime":"2025-11-28T06:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:19 crc kubenswrapper[4946]: I1128 06:53:19.989320 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:53:19 crc kubenswrapper[4946]: I1128 06:53:19.989370 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:53:19 crc kubenswrapper[4946]: E1128 06:53:19.989521 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:53:19 crc kubenswrapper[4946]: I1128 06:53:19.989555 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:53:19 crc kubenswrapper[4946]: E1128 06:53:19.989672 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:53:19 crc kubenswrapper[4946]: E1128 06:53:19.989906 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:53:20 crc kubenswrapper[4946]: I1128 06:53:20.015067 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:20 crc kubenswrapper[4946]: I1128 06:53:20.015115 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:20 crc kubenswrapper[4946]: I1128 06:53:20.015134 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:20 crc kubenswrapper[4946]: I1128 06:53:20.015158 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:20 crc kubenswrapper[4946]: I1128 06:53:20.015176 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:20Z","lastTransitionTime":"2025-11-28T06:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:20 crc kubenswrapper[4946]: I1128 06:53:20.118660 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:20 crc kubenswrapper[4946]: I1128 06:53:20.118709 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:20 crc kubenswrapper[4946]: I1128 06:53:20.118726 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:20 crc kubenswrapper[4946]: I1128 06:53:20.118751 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:20 crc kubenswrapper[4946]: I1128 06:53:20.118769 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:20Z","lastTransitionTime":"2025-11-28T06:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:20 crc kubenswrapper[4946]: I1128 06:53:20.221838 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:20 crc kubenswrapper[4946]: I1128 06:53:20.221899 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:20 crc kubenswrapper[4946]: I1128 06:53:20.221918 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:20 crc kubenswrapper[4946]: I1128 06:53:20.221943 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:20 crc kubenswrapper[4946]: I1128 06:53:20.221963 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:20Z","lastTransitionTime":"2025-11-28T06:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:20 crc kubenswrapper[4946]: I1128 06:53:20.325389 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:20 crc kubenswrapper[4946]: I1128 06:53:20.325447 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:20 crc kubenswrapper[4946]: I1128 06:53:20.325497 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:20 crc kubenswrapper[4946]: I1128 06:53:20.325521 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:20 crc kubenswrapper[4946]: I1128 06:53:20.325539 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:20Z","lastTransitionTime":"2025-11-28T06:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:20 crc kubenswrapper[4946]: I1128 06:53:20.428952 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:20 crc kubenswrapper[4946]: I1128 06:53:20.429024 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:20 crc kubenswrapper[4946]: I1128 06:53:20.429042 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:20 crc kubenswrapper[4946]: I1128 06:53:20.429064 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:20 crc kubenswrapper[4946]: I1128 06:53:20.429081 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:20Z","lastTransitionTime":"2025-11-28T06:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:20 crc kubenswrapper[4946]: I1128 06:53:20.532523 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:20 crc kubenswrapper[4946]: I1128 06:53:20.532591 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:20 crc kubenswrapper[4946]: I1128 06:53:20.532608 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:20 crc kubenswrapper[4946]: I1128 06:53:20.532632 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:20 crc kubenswrapper[4946]: I1128 06:53:20.532649 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:20Z","lastTransitionTime":"2025-11-28T06:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:20 crc kubenswrapper[4946]: I1128 06:53:20.635793 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:20 crc kubenswrapper[4946]: I1128 06:53:20.635878 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:20 crc kubenswrapper[4946]: I1128 06:53:20.635904 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:20 crc kubenswrapper[4946]: I1128 06:53:20.635931 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:20 crc kubenswrapper[4946]: I1128 06:53:20.635953 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:20Z","lastTransitionTime":"2025-11-28T06:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:20 crc kubenswrapper[4946]: I1128 06:53:20.739612 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:20 crc kubenswrapper[4946]: I1128 06:53:20.739701 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:20 crc kubenswrapper[4946]: I1128 06:53:20.739728 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:20 crc kubenswrapper[4946]: I1128 06:53:20.739763 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:20 crc kubenswrapper[4946]: I1128 06:53:20.739790 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:20Z","lastTransitionTime":"2025-11-28T06:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:20 crc kubenswrapper[4946]: I1128 06:53:20.843086 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:20 crc kubenswrapper[4946]: I1128 06:53:20.843140 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:20 crc kubenswrapper[4946]: I1128 06:53:20.843158 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:20 crc kubenswrapper[4946]: I1128 06:53:20.843183 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:20 crc kubenswrapper[4946]: I1128 06:53:20.843201 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:20Z","lastTransitionTime":"2025-11-28T06:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:20 crc kubenswrapper[4946]: I1128 06:53:20.947034 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:20 crc kubenswrapper[4946]: I1128 06:53:20.947113 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:20 crc kubenswrapper[4946]: I1128 06:53:20.947131 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:20 crc kubenswrapper[4946]: I1128 06:53:20.947154 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:20 crc kubenswrapper[4946]: I1128 06:53:20.947171 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:20Z","lastTransitionTime":"2025-11-28T06:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:20 crc kubenswrapper[4946]: I1128 06:53:20.988862 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gkg79" Nov 28 06:53:20 crc kubenswrapper[4946]: E1128 06:53:20.989047 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gkg79" podUID="4e6983b1-6887-4d13-8f9a-f261a745115f" Nov 28 06:53:21 crc kubenswrapper[4946]: I1128 06:53:21.050566 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:21 crc kubenswrapper[4946]: I1128 06:53:21.050665 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:21 crc kubenswrapper[4946]: I1128 06:53:21.050699 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:21 crc kubenswrapper[4946]: I1128 06:53:21.050741 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:21 crc kubenswrapper[4946]: I1128 06:53:21.050764 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:21Z","lastTransitionTime":"2025-11-28T06:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:21 crc kubenswrapper[4946]: I1128 06:53:21.154693 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:21 crc kubenswrapper[4946]: I1128 06:53:21.154787 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:21 crc kubenswrapper[4946]: I1128 06:53:21.154817 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:21 crc kubenswrapper[4946]: I1128 06:53:21.154846 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:21 crc kubenswrapper[4946]: I1128 06:53:21.154866 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:21Z","lastTransitionTime":"2025-11-28T06:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:21 crc kubenswrapper[4946]: I1128 06:53:21.258698 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:21 crc kubenswrapper[4946]: I1128 06:53:21.258751 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:21 crc kubenswrapper[4946]: I1128 06:53:21.258765 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:21 crc kubenswrapper[4946]: I1128 06:53:21.258785 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:21 crc kubenswrapper[4946]: I1128 06:53:21.258798 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:21Z","lastTransitionTime":"2025-11-28T06:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:21 crc kubenswrapper[4946]: I1128 06:53:21.361616 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:21 crc kubenswrapper[4946]: I1128 06:53:21.361692 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:21 crc kubenswrapper[4946]: I1128 06:53:21.361718 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:21 crc kubenswrapper[4946]: I1128 06:53:21.361749 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:21 crc kubenswrapper[4946]: I1128 06:53:21.361771 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:21Z","lastTransitionTime":"2025-11-28T06:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:21 crc kubenswrapper[4946]: I1128 06:53:21.466058 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:21 crc kubenswrapper[4946]: I1128 06:53:21.466132 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:21 crc kubenswrapper[4946]: I1128 06:53:21.466160 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:21 crc kubenswrapper[4946]: I1128 06:53:21.466189 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:21 crc kubenswrapper[4946]: I1128 06:53:21.466212 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:21Z","lastTransitionTime":"2025-11-28T06:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:21 crc kubenswrapper[4946]: I1128 06:53:21.569230 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:21 crc kubenswrapper[4946]: I1128 06:53:21.569286 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:21 crc kubenswrapper[4946]: I1128 06:53:21.569298 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:21 crc kubenswrapper[4946]: I1128 06:53:21.569327 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:21 crc kubenswrapper[4946]: I1128 06:53:21.569346 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:21Z","lastTransitionTime":"2025-11-28T06:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:21 crc kubenswrapper[4946]: I1128 06:53:21.671947 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:21 crc kubenswrapper[4946]: I1128 06:53:21.671996 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:21 crc kubenswrapper[4946]: I1128 06:53:21.672007 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:21 crc kubenswrapper[4946]: I1128 06:53:21.672024 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:21 crc kubenswrapper[4946]: I1128 06:53:21.672035 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:21Z","lastTransitionTime":"2025-11-28T06:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:21 crc kubenswrapper[4946]: I1128 06:53:21.775171 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:21 crc kubenswrapper[4946]: I1128 06:53:21.775238 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:21 crc kubenswrapper[4946]: I1128 06:53:21.775260 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:21 crc kubenswrapper[4946]: I1128 06:53:21.775291 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:21 crc kubenswrapper[4946]: I1128 06:53:21.775315 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:21Z","lastTransitionTime":"2025-11-28T06:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:21 crc kubenswrapper[4946]: I1128 06:53:21.879546 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:21 crc kubenswrapper[4946]: I1128 06:53:21.879650 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:21 crc kubenswrapper[4946]: I1128 06:53:21.879669 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:21 crc kubenswrapper[4946]: I1128 06:53:21.879734 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:21 crc kubenswrapper[4946]: I1128 06:53:21.879755 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:21Z","lastTransitionTime":"2025-11-28T06:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:21 crc kubenswrapper[4946]: I1128 06:53:21.983406 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:21 crc kubenswrapper[4946]: I1128 06:53:21.983504 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:21 crc kubenswrapper[4946]: I1128 06:53:21.983518 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:21 crc kubenswrapper[4946]: I1128 06:53:21.983541 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:21 crc kubenswrapper[4946]: I1128 06:53:21.983589 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:21Z","lastTransitionTime":"2025-11-28T06:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:21 crc kubenswrapper[4946]: I1128 06:53:21.989038 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:53:21 crc kubenswrapper[4946]: I1128 06:53:21.989075 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:53:21 crc kubenswrapper[4946]: E1128 06:53:21.989217 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:53:21 crc kubenswrapper[4946]: I1128 06:53:21.989279 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:53:21 crc kubenswrapper[4946]: E1128 06:53:21.989483 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:53:21 crc kubenswrapper[4946]: E1128 06:53:21.989717 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:53:22 crc kubenswrapper[4946]: I1128 06:53:22.087083 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:22 crc kubenswrapper[4946]: I1128 06:53:22.087135 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:22 crc kubenswrapper[4946]: I1128 06:53:22.087147 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:22 crc kubenswrapper[4946]: I1128 06:53:22.087175 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:22 crc kubenswrapper[4946]: I1128 06:53:22.087188 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:22Z","lastTransitionTime":"2025-11-28T06:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:22 crc kubenswrapper[4946]: I1128 06:53:22.191011 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:22 crc kubenswrapper[4946]: I1128 06:53:22.191082 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:22 crc kubenswrapper[4946]: I1128 06:53:22.191099 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:22 crc kubenswrapper[4946]: I1128 06:53:22.191125 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:22 crc kubenswrapper[4946]: I1128 06:53:22.191143 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:22Z","lastTransitionTime":"2025-11-28T06:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:22 crc kubenswrapper[4946]: I1128 06:53:22.295086 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:22 crc kubenswrapper[4946]: I1128 06:53:22.295143 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:22 crc kubenswrapper[4946]: I1128 06:53:22.295160 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:22 crc kubenswrapper[4946]: I1128 06:53:22.295181 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:22 crc kubenswrapper[4946]: I1128 06:53:22.295196 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:22Z","lastTransitionTime":"2025-11-28T06:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:22 crc kubenswrapper[4946]: I1128 06:53:22.398820 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:22 crc kubenswrapper[4946]: I1128 06:53:22.398942 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:22 crc kubenswrapper[4946]: I1128 06:53:22.398959 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:22 crc kubenswrapper[4946]: I1128 06:53:22.398985 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:22 crc kubenswrapper[4946]: I1128 06:53:22.399002 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:22Z","lastTransitionTime":"2025-11-28T06:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:22 crc kubenswrapper[4946]: I1128 06:53:22.502183 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:22 crc kubenswrapper[4946]: I1128 06:53:22.502258 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:22 crc kubenswrapper[4946]: I1128 06:53:22.502277 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:22 crc kubenswrapper[4946]: I1128 06:53:22.502312 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:22 crc kubenswrapper[4946]: I1128 06:53:22.502332 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:22Z","lastTransitionTime":"2025-11-28T06:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:22 crc kubenswrapper[4946]: I1128 06:53:22.605928 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:22 crc kubenswrapper[4946]: I1128 06:53:22.606008 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:22 crc kubenswrapper[4946]: I1128 06:53:22.606027 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:22 crc kubenswrapper[4946]: I1128 06:53:22.606053 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:22 crc kubenswrapper[4946]: I1128 06:53:22.606073 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:22Z","lastTransitionTime":"2025-11-28T06:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:22 crc kubenswrapper[4946]: I1128 06:53:22.709108 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:22 crc kubenswrapper[4946]: I1128 06:53:22.709212 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:22 crc kubenswrapper[4946]: I1128 06:53:22.709248 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:22 crc kubenswrapper[4946]: I1128 06:53:22.709287 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:22 crc kubenswrapper[4946]: I1128 06:53:22.709310 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:22Z","lastTransitionTime":"2025-11-28T06:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:22 crc kubenswrapper[4946]: I1128 06:53:22.812526 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:22 crc kubenswrapper[4946]: I1128 06:53:22.812567 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:22 crc kubenswrapper[4946]: I1128 06:53:22.812595 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:22 crc kubenswrapper[4946]: I1128 06:53:22.812619 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:22 crc kubenswrapper[4946]: I1128 06:53:22.812634 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:22Z","lastTransitionTime":"2025-11-28T06:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:22 crc kubenswrapper[4946]: I1128 06:53:22.915336 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:22 crc kubenswrapper[4946]: I1128 06:53:22.915387 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:22 crc kubenswrapper[4946]: I1128 06:53:22.915401 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:22 crc kubenswrapper[4946]: I1128 06:53:22.915419 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:22 crc kubenswrapper[4946]: I1128 06:53:22.915431 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:22Z","lastTransitionTime":"2025-11-28T06:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:22 crc kubenswrapper[4946]: I1128 06:53:22.989339 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gkg79" Nov 28 06:53:22 crc kubenswrapper[4946]: E1128 06:53:22.989806 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gkg79" podUID="4e6983b1-6887-4d13-8f9a-f261a745115f" Nov 28 06:53:22 crc kubenswrapper[4946]: I1128 06:53:22.989862 4946 scope.go:117] "RemoveContainer" containerID="bc3269320004c28557ed0853d941c5174ed1d136bae64a4ecbf769e8cf7fd237" Nov 28 06:53:22 crc kubenswrapper[4946]: E1128 06:53:22.990016 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pkknv_openshift-ovn-kubernetes(47e7046d-60dc-4dc0-b63e-f22f4ca5cd51)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" podUID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" Nov 28 06:53:23 crc kubenswrapper[4946]: I1128 06:53:23.018173 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:23 crc kubenswrapper[4946]: I1128 06:53:23.018259 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:23 crc kubenswrapper[4946]: I1128 06:53:23.018282 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:23 crc kubenswrapper[4946]: I1128 06:53:23.018314 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:23 crc kubenswrapper[4946]: I1128 06:53:23.018336 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:23Z","lastTransitionTime":"2025-11-28T06:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:23 crc kubenswrapper[4946]: I1128 06:53:23.121347 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:23 crc kubenswrapper[4946]: I1128 06:53:23.121426 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:23 crc kubenswrapper[4946]: I1128 06:53:23.121444 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:23 crc kubenswrapper[4946]: I1128 06:53:23.121497 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:23 crc kubenswrapper[4946]: I1128 06:53:23.121517 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:23Z","lastTransitionTime":"2025-11-28T06:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:23 crc kubenswrapper[4946]: I1128 06:53:23.225535 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:23 crc kubenswrapper[4946]: I1128 06:53:23.225617 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:23 crc kubenswrapper[4946]: I1128 06:53:23.225638 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:23 crc kubenswrapper[4946]: I1128 06:53:23.225664 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:23 crc kubenswrapper[4946]: I1128 06:53:23.225682 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:23Z","lastTransitionTime":"2025-11-28T06:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:23 crc kubenswrapper[4946]: I1128 06:53:23.329068 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:23 crc kubenswrapper[4946]: I1128 06:53:23.329127 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:23 crc kubenswrapper[4946]: I1128 06:53:23.329150 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:23 crc kubenswrapper[4946]: I1128 06:53:23.329179 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:23 crc kubenswrapper[4946]: I1128 06:53:23.329204 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:23Z","lastTransitionTime":"2025-11-28T06:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:23 crc kubenswrapper[4946]: I1128 06:53:23.432893 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:23 crc kubenswrapper[4946]: I1128 06:53:23.432981 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:23 crc kubenswrapper[4946]: I1128 06:53:23.432999 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:23 crc kubenswrapper[4946]: I1128 06:53:23.433027 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:23 crc kubenswrapper[4946]: I1128 06:53:23.433045 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:23Z","lastTransitionTime":"2025-11-28T06:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:23 crc kubenswrapper[4946]: I1128 06:53:23.536163 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:23 crc kubenswrapper[4946]: I1128 06:53:23.536232 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:23 crc kubenswrapper[4946]: I1128 06:53:23.536253 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:23 crc kubenswrapper[4946]: I1128 06:53:23.536279 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:23 crc kubenswrapper[4946]: I1128 06:53:23.536297 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:23Z","lastTransitionTime":"2025-11-28T06:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:23 crc kubenswrapper[4946]: I1128 06:53:23.639592 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:23 crc kubenswrapper[4946]: I1128 06:53:23.639659 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:23 crc kubenswrapper[4946]: I1128 06:53:23.639677 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:23 crc kubenswrapper[4946]: I1128 06:53:23.639704 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:23 crc kubenswrapper[4946]: I1128 06:53:23.639722 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:23Z","lastTransitionTime":"2025-11-28T06:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:23 crc kubenswrapper[4946]: I1128 06:53:23.743829 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:23 crc kubenswrapper[4946]: I1128 06:53:23.743908 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:23 crc kubenswrapper[4946]: I1128 06:53:23.743935 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:23 crc kubenswrapper[4946]: I1128 06:53:23.743968 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:23 crc kubenswrapper[4946]: I1128 06:53:23.743990 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:23Z","lastTransitionTime":"2025-11-28T06:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:23 crc kubenswrapper[4946]: I1128 06:53:23.847032 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:23 crc kubenswrapper[4946]: I1128 06:53:23.847109 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:23 crc kubenswrapper[4946]: I1128 06:53:23.847132 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:23 crc kubenswrapper[4946]: I1128 06:53:23.847161 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:23 crc kubenswrapper[4946]: I1128 06:53:23.847182 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:23Z","lastTransitionTime":"2025-11-28T06:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:23 crc kubenswrapper[4946]: I1128 06:53:23.951128 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:23 crc kubenswrapper[4946]: I1128 06:53:23.951215 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:23 crc kubenswrapper[4946]: I1128 06:53:23.951239 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:23 crc kubenswrapper[4946]: I1128 06:53:23.951263 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:23 crc kubenswrapper[4946]: I1128 06:53:23.951281 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:23Z","lastTransitionTime":"2025-11-28T06:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:23 crc kubenswrapper[4946]: I1128 06:53:23.989310 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:53:23 crc kubenswrapper[4946]: I1128 06:53:23.989361 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:53:23 crc kubenswrapper[4946]: E1128 06:53:23.989543 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:53:23 crc kubenswrapper[4946]: I1128 06:53:23.989606 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:53:23 crc kubenswrapper[4946]: E1128 06:53:23.989789 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:53:23 crc kubenswrapper[4946]: E1128 06:53:23.989902 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
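Every kubelet heartbeat in this stretch repeats the same readiness condition: the node stays NotReady until a network plugin writes a CNI configuration file into /etc/kubernetes/cni/net.d/. A minimal Go sketch of that kind of check, assuming any .conf, .conflist, or .json file in the directory counts as configuration (the directory comes from the messages above; the helper name and extension list are illustrative, not kubelet's actual code):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// Directory named in the NetworkReady=false messages above.
const cniConfDir = "/etc/kubernetes/cni/net.d"

// hasCNIConfig reports whether any plausible CNI config file exists.
// Illustrative only; kubelet delegates the real check to the container runtime.
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig(cniConfDir)
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}
	fmt.Println("CNI config present:", ok)
}

On this node the directory presumably stays empty because ovnkube-controller, the component that would populate it, is itself crash-looping, so the same condition keeps being recorded.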
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:53:24 crc kubenswrapper[4946]: I1128 06:53:24.053959 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:24 crc kubenswrapper[4946]: I1128 06:53:24.054175 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:24 crc kubenswrapper[4946]: I1128 06:53:24.054198 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:24 crc kubenswrapper[4946]: I1128 06:53:24.054224 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:24 crc kubenswrapper[4946]: I1128 06:53:24.054242 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:24Z","lastTransitionTime":"2025-11-28T06:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:24 crc kubenswrapper[4946]: I1128 06:53:24.158183 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:24 crc kubenswrapper[4946]: I1128 06:53:24.158238 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:24 crc kubenswrapper[4946]: I1128 06:53:24.158255 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:24 crc kubenswrapper[4946]: I1128 06:53:24.158276 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:24 crc kubenswrapper[4946]: I1128 06:53:24.158293 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:24Z","lastTransitionTime":"2025-11-28T06:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:24 crc kubenswrapper[4946]: I1128 06:53:24.261935 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:24 crc kubenswrapper[4946]: I1128 06:53:24.262002 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:24 crc kubenswrapper[4946]: I1128 06:53:24.262019 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:24 crc kubenswrapper[4946]: I1128 06:53:24.262043 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:24 crc kubenswrapper[4946]: I1128 06:53:24.262062 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:24Z","lastTransitionTime":"2025-11-28T06:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:24 crc kubenswrapper[4946]: I1128 06:53:24.368837 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:24 crc kubenswrapper[4946]: I1128 06:53:24.368948 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:24 crc kubenswrapper[4946]: I1128 06:53:24.369013 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:24 crc kubenswrapper[4946]: I1128 06:53:24.369045 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:24 crc kubenswrapper[4946]: I1128 06:53:24.369250 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:24Z","lastTransitionTime":"2025-11-28T06:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:24 crc kubenswrapper[4946]: I1128 06:53:24.473147 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:24 crc kubenswrapper[4946]: I1128 06:53:24.473225 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:24 crc kubenswrapper[4946]: I1128 06:53:24.473248 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:24 crc kubenswrapper[4946]: I1128 06:53:24.473278 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:24 crc kubenswrapper[4946]: I1128 06:53:24.473300 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:24Z","lastTransitionTime":"2025-11-28T06:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:24 crc kubenswrapper[4946]: I1128 06:53:24.576933 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:24 crc kubenswrapper[4946]: I1128 06:53:24.577043 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:24 crc kubenswrapper[4946]: I1128 06:53:24.577069 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:24 crc kubenswrapper[4946]: I1128 06:53:24.577277 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:24 crc kubenswrapper[4946]: I1128 06:53:24.577300 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:24Z","lastTransitionTime":"2025-11-28T06:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:24 crc kubenswrapper[4946]: I1128 06:53:24.680831 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:24 crc kubenswrapper[4946]: I1128 06:53:24.680888 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:24 crc kubenswrapper[4946]: I1128 06:53:24.680905 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:24 crc kubenswrapper[4946]: I1128 06:53:24.680928 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:24 crc kubenswrapper[4946]: I1128 06:53:24.680947 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:24Z","lastTransitionTime":"2025-11-28T06:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:24 crc kubenswrapper[4946]: I1128 06:53:24.784660 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:24 crc kubenswrapper[4946]: I1128 06:53:24.784720 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:24 crc kubenswrapper[4946]: I1128 06:53:24.784737 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:24 crc kubenswrapper[4946]: I1128 06:53:24.784759 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:24 crc kubenswrapper[4946]: I1128 06:53:24.784776 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:24Z","lastTransitionTime":"2025-11-28T06:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:24 crc kubenswrapper[4946]: I1128 06:53:24.888628 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:24 crc kubenswrapper[4946]: I1128 06:53:24.888682 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:24 crc kubenswrapper[4946]: I1128 06:53:24.888702 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:24 crc kubenswrapper[4946]: I1128 06:53:24.888728 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:24 crc kubenswrapper[4946]: I1128 06:53:24.888746 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:24Z","lastTransitionTime":"2025-11-28T06:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:24 crc kubenswrapper[4946]: I1128 06:53:24.990053 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gkg79" Nov 28 06:53:24 crc kubenswrapper[4946]: E1128 06:53:24.990324 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gkg79" podUID="4e6983b1-6887-4d13-8f9a-f261a745115f" Nov 28 06:53:24 crc kubenswrapper[4946]: I1128 06:53:24.992494 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:24 crc kubenswrapper[4946]: I1128 06:53:24.992555 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:24 crc kubenswrapper[4946]: I1128 06:53:24.992573 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:24 crc kubenswrapper[4946]: I1128 06:53:24.992596 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:24 crc kubenswrapper[4946]: I1128 06:53:24.992613 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:24Z","lastTransitionTime":"2025-11-28T06:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:25 crc kubenswrapper[4946]: I1128 06:53:25.096142 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:25 crc kubenswrapper[4946]: I1128 06:53:25.096209 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:25 crc kubenswrapper[4946]: I1128 06:53:25.096227 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:25 crc kubenswrapper[4946]: I1128 06:53:25.096253 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:25 crc kubenswrapper[4946]: I1128 06:53:25.096276 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:25Z","lastTransitionTime":"2025-11-28T06:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:25 crc kubenswrapper[4946]: I1128 06:53:25.199733 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:25 crc kubenswrapper[4946]: I1128 06:53:25.199800 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:25 crc kubenswrapper[4946]: I1128 06:53:25.199821 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:25 crc kubenswrapper[4946]: I1128 06:53:25.199947 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:25 crc kubenswrapper[4946]: I1128 06:53:25.199978 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:25Z","lastTransitionTime":"2025-11-28T06:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:25 crc kubenswrapper[4946]: I1128 06:53:25.303048 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:25 crc kubenswrapper[4946]: I1128 06:53:25.303116 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:25 crc kubenswrapper[4946]: I1128 06:53:25.303137 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:25 crc kubenswrapper[4946]: I1128 06:53:25.303164 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:25 crc kubenswrapper[4946]: I1128 06:53:25.303185 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:25Z","lastTransitionTime":"2025-11-28T06:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:25 crc kubenswrapper[4946]: I1128 06:53:25.406920 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:25 crc kubenswrapper[4946]: I1128 06:53:25.406979 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:25 crc kubenswrapper[4946]: I1128 06:53:25.406996 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:25 crc kubenswrapper[4946]: I1128 06:53:25.407020 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:25 crc kubenswrapper[4946]: I1128 06:53:25.407038 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:25Z","lastTransitionTime":"2025-11-28T06:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:25 crc kubenswrapper[4946]: I1128 06:53:25.510882 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:25 crc kubenswrapper[4946]: I1128 06:53:25.510969 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:25 crc kubenswrapper[4946]: I1128 06:53:25.510998 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:25 crc kubenswrapper[4946]: I1128 06:53:25.511027 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:25 crc kubenswrapper[4946]: I1128 06:53:25.511046 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:25Z","lastTransitionTime":"2025-11-28T06:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:25 crc kubenswrapper[4946]: I1128 06:53:25.615056 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:25 crc kubenswrapper[4946]: I1128 06:53:25.615144 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:25 crc kubenswrapper[4946]: I1128 06:53:25.615163 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:25 crc kubenswrapper[4946]: I1128 06:53:25.615188 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:25 crc kubenswrapper[4946]: I1128 06:53:25.615209 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:25Z","lastTransitionTime":"2025-11-28T06:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:25 crc kubenswrapper[4946]: I1128 06:53:25.718380 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:25 crc kubenswrapper[4946]: I1128 06:53:25.718455 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:25 crc kubenswrapper[4946]: I1128 06:53:25.718526 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:25 crc kubenswrapper[4946]: I1128 06:53:25.718556 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:25 crc kubenswrapper[4946]: I1128 06:53:25.718577 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:25Z","lastTransitionTime":"2025-11-28T06:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:25 crc kubenswrapper[4946]: I1128 06:53:25.821630 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:25 crc kubenswrapper[4946]: I1128 06:53:25.821732 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:25 crc kubenswrapper[4946]: I1128 06:53:25.821764 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:25 crc kubenswrapper[4946]: I1128 06:53:25.821798 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:25 crc kubenswrapper[4946]: I1128 06:53:25.821823 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:25Z","lastTransitionTime":"2025-11-28T06:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:25 crc kubenswrapper[4946]: I1128 06:53:25.925625 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:25 crc kubenswrapper[4946]: I1128 06:53:25.925687 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:25 crc kubenswrapper[4946]: I1128 06:53:25.925709 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:25 crc kubenswrapper[4946]: I1128 06:53:25.925738 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:25 crc kubenswrapper[4946]: I1128 06:53:25.925759 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:25Z","lastTransitionTime":"2025-11-28T06:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
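Behind these repeating conditions, ovnkube-controller is being restarted under CrashLoopBackOff ("back-off 20s restarting failed container=ovnkube-controller" at 06:53:22 above, and the waiting state recorded in the status entry further down). Kubelet's container restart back-off is commonly described as starting at 10s, doubling on each consecutive failure, and capping at 5m; a small Go sketch under that assumption (illustrative, not kubelet's implementation):

package main

import (
	"fmt"
	"time"
)

// backoff returns the assumed kubelet restart delay after n consecutive
// failures: 10s initially, doubled each time, capped at 5 minutes.
func backoff(n int) time.Duration {
	d := 10 * time.Second
	for i := 1; i < n; i++ {
		d *= 2
		if d > 5*time.Minute {
			return 5 * time.Minute
		}
	}
	return d
}

func main() {
	for n := 1; n <= 6; n++ {
		fmt.Printf("failure %d -> back-off %v\n", n, backoff(n))
	}
}

Under that model, a 20s back-off corresponds to the second consecutive failure, which is consistent with the restartCount of 2 reported for ovnkube-controller below.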
Nov 28 06:53:25 crc kubenswrapper[4946]: I1128 06:53:25.992370 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:53:25 crc kubenswrapper[4946]: E1128 06:53:25.992612 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:53:25 crc kubenswrapper[4946]: I1128 06:53:25.992892 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:53:25 crc kubenswrapper[4946]: I1128 06:53:25.992939 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:53:25 crc kubenswrapper[4946]: E1128 06:53:25.993058 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:53:25 crc kubenswrapper[4946]: E1128 06:53:25.993350 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
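The status_manager entries that follow all fail for the same root cause: the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2025-11-28. The "x509: certificate has expired or is not yet valid" error is Go's standard validity-window check; a self-contained sketch of the same comparison against a PEM file (the file path is a placeholder, not taken from this log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Placeholder path; point it at the webhook's serving certificate.
	raw, err := os.ReadFile("/path/to/webhook-serving-cert.pem")
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse:", err)
		return
	}
	// The same NotBefore/NotAfter window check that fails in the log.
	now := time.Now()
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		fmt.Printf("invalid at %s (valid %s to %s)\n",
			now.Format(time.RFC3339),
			cert.NotBefore.Format(time.RFC3339),
			cert.NotAfter.Format(time.RFC3339))
	} else {
		fmt.Println("certificate currently valid")
	}
}

Until that certificate is rotated, every status patch routed through the webhook, including the ones below, will presumably be rejected with the same error.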
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.019739 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16482226c2bd22abff4c445ab0d5e46c2012a83ab5e616ebecc38d9009a8a6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:26Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.029703 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.029769 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.029786 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.029811 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.029828 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:26Z","lastTransitionTime":"2025-11-28T06:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.041973 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9g9w4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"857356d2-6585-41c6-9a2c-e06ef45f7303\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9d433d72f090d2c1d0036b7ba652dfca7bed7425a35bf87809ede7fd9fca8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Runni
ng\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9g9w4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:26Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.061905 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35308a0d1d457d423060c71fb70b7e8d1723171f575b2545db3b372c4d2a12ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38e04dce69273a5669801d093325e51ed6d3a83d545aaf505f6619b89cb9e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9128b2484a66b1b0f5954befe68d1bd1be5a4edbf254d76473d8ee3025b47ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8b00edadf6d3442d70a476d465733f303f7a48d8998c1a9c881ae515daa264e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ba38f4c30e4f09063926783a9fbf7883079d7f077200ee18a7a24c236b025f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\
\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed2e660d6d37ec0abfdd8485a2b9cb79b041a214007a5b8832d70a21076c324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc3269320004c28557ed0853d941c5174ed1d136bae64a4ecbf769e8cf7fd237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc3269320004c28557ed0853d941c5174ed1d136bae64a4ecbf769e8cf7fd237\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:53:10Z\\\",\\\"message\\\":\\\"6983b1-6887-4d13-8f9a-f261a745115f\\\\\\\", APIVersion:\\\\\\\"v1\\\\\\\", ResourceVersion:\\\\\\\"26911\\\\\\\", FieldPath:\\\\\\\"\\\\\\\"}): type: 'Warning' reason: 'ErrorAddingResource' addLogicalPort failed for openshift-multus/network-metrics-daemon-gkg79: failed to update pod openshift-multus/network-metrics-daemon-gkg79: Internal error occurred: failed calling webhook \\\\\\\"pod.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/pod?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:09Z is after 2025-08-24T17:21:41Z\\\\nI1128 06:53:09.888528 6569 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 06:53:09.888829 6569 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1128 06:53:09.888959 6569 factory.go:656] Stopping watch factory\\\\nI1128 06:53:09.888988 6569 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1128 06:53:09.889839 6569 ovnkube.go:599] Stopped ovnkube\\\\nI1128 06:53:09.889881 6569 metrics.go:553] Stopping metrics server at 
address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1128 06:53:09.889948 6569 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:53:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pkknv_openshift-ovn-kubernetes(47e7046d-60dc-4dc0-b63e-f22f4ca5cd51)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc681b78e3640f7f8dd8729a32ea393d97f837795e0cccf628724d2b3901739b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath
\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pkknv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:26Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.079709 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:26Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.099629 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0514bebed46aabc46c74ef48fda9c1e561a174d87db40128875cc0ec78446d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ab7252b4526779db2e0aea8b7747369f52242a76df7e179c6617290fee4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:26Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.119148 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://078e67af49c0d442d125d9cb67d9d93ce05e452bf3b09f666e9763e481eaa15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:26Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.132704 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.132784 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.132802 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.132850 4946 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.132866 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:26Z","lastTransitionTime":"2025-11-28T06:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.141448 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqf5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d160dd-b1b4-4cdf-800c-74195ab023e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bd1943c060c0e4d739d86520e0661bbd6ecef5614aaceb3e893a11204fdb39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed8145
1ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f336183b95d088fbf0d45cb8e6e6ab124453e9ba4a1456171f5631e9d029a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f336183b95d088fbf0d45cb8e6e6ab124453e9ba4a1456171f5631e9d029a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTim
e\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqf5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:26Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.155939 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a10fe13c-fc86-474a-945b-f96caafad2a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712d9a3cda3acae09eae5946cb14e19bee8697d567bff79dd67db8a41ed38fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45pfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:26Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.170094 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7450befc-262f-45d1-a5f4-f445e540185b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7854e764531b926939e7ce77e671b5dae4c0d8a4a1d8ac7153f851e924c4bb84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174d7788364a980297791c1072b9c05dd827bb3759a851ea4c5dd07981297b67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g2vhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:26Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.181953 4946 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-gkg79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6983b1-6887-4d13-8f9a-f261a745115f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9srk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9srk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:53:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gkg79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:26Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.194566 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fb7be34-ed01-4217-aab4-c8ab35341ac6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af6de2bd2f3d9be0a670cb8c36e6db8c037c4987eff7356840af5b275156ad7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d384f5fd04cfe61e6a8d13538eb1b0267ca88eb0d226d8d85b5065fb5f7de9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89780aac4572411aeeb8480b61c98dbc22a2e4b3e72290470ba56c81b648b617\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b6df8018ceaf95465c4ad1bfd0bb2653ae4b87430f6ed4e21af2bd14bbb9ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:26Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.212079 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:26Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.223897 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpkwv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6df591b-287c-45d9-9db2-f3c441005fdd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f983a4bf3fb71511789223ca3b7b223ac9589d4807ea2dfe4087d2fa5d48738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm2g7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6b6b4122d5aa6fa73c5453308ea21172330e8324feb57965db83d5c2c05fa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm2g7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:53:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zpkwv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:26Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.235500 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.235573 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.235592 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.235617 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.235635 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:26Z","lastTransitionTime":"2025-11-28T06:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.242703 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68d47c52-9827-4ebb-88a2-ba498092709e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eef1ad3ca814effd7a94c23d46fc3b14149ccc6ae6181e19c5f24884967b7627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7152b6653aaccd36bb018d24abf196dd6f13d41edddb0939917f96bcd9e9bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5be506bd8debbc12ff5ca56d77fdf667855d2c4078cea1b1d1388f5ef8831ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5dc7ef4ea06b54b6de621ebe79d3587324b0c4cdd90e078d4116dc5252ad35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4c7e2754f62b9d955da7589f04acb0136ed3c504a75b0db30f22cbeec54c413\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:52:44Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:52:38.666044 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:52:38.667906 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018003996/tls.crt::/tmp/serving-cert-1018003996/tls.key\\\\\\\"\\\\nI1128 06:52:44.041872 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:52:44.048251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:52:44.048273 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:52:44.048300 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:52:44.048305 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:52:44.064176 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 06:52:44.064208 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064213 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:52:44.064220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:52:44.064223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:52:44.064226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 06:52:44.064671 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 06:52:44.066682 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326666c58a9c75d0f08586c82bca1a0686ed2935f475b3c359cddc01875b248e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:26Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.257015 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"136606da-fc7c-4bea-902d-102f836514a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a67b58ab443f2a80c08229709de3844bf287465b58912f472f81b993e5f5fd8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30a18f86a38f7d74aedc8fb55161733023c3a515474f699fc66c645901b43a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14ecbd61c64f04b0ec631e67e25393619f5692c9967fff9d461c40bbb7f3c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://674e2b1496ef2f46b2bd37fd98df7bb076560669c1c6b4a39a081aaa0f0dab23\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://674e2b1496ef2f46b2bd37fd98df7bb076560669c1c6b4a39a081aaa0f0dab23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:26Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.271542 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:26Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.284387 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j4wp8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edc5a786-a162-4416-a240-54272b0c7376\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a6e397c1f3423abc4eeb0a20453109c22f305c1b259dddbe23bd71a78b5512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87x27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j4wp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:26Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.338662 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.338712 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.338727 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.338745 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.338757 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:26Z","lastTransitionTime":"2025-11-28T06:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.441830 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.441907 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.441930 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.441961 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.441982 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:26Z","lastTransitionTime":"2025-11-28T06:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.545058 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.545167 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.545185 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.545217 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.545235 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:26Z","lastTransitionTime":"2025-11-28T06:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.648542 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.648599 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.648614 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.648633 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.648647 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:26Z","lastTransitionTime":"2025-11-28T06:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.752793 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.752869 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.752887 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.752911 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.752931 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:26Z","lastTransitionTime":"2025-11-28T06:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.857270 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.857796 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.857813 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.857837 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.857855 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:26Z","lastTransitionTime":"2025-11-28T06:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.960949 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.961018 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.961040 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.961069 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.961091 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:26Z","lastTransitionTime":"2025-11-28T06:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:26 crc kubenswrapper[4946]: I1128 06:53:26.988982 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gkg79" Nov 28 06:53:26 crc kubenswrapper[4946]: E1128 06:53:26.989179 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gkg79" podUID="4e6983b1-6887-4d13-8f9a-f261a745115f" Nov 28 06:53:27 crc kubenswrapper[4946]: I1128 06:53:27.065434 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:27 crc kubenswrapper[4946]: I1128 06:53:27.065564 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:27 crc kubenswrapper[4946]: I1128 06:53:27.065583 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:27 crc kubenswrapper[4946]: I1128 06:53:27.065609 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:27 crc kubenswrapper[4946]: I1128 06:53:27.065626 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:27Z","lastTransitionTime":"2025-11-28T06:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:27 crc kubenswrapper[4946]: I1128 06:53:27.168943 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:27 crc kubenswrapper[4946]: I1128 06:53:27.168988 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:27 crc kubenswrapper[4946]: I1128 06:53:27.169003 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:27 crc kubenswrapper[4946]: I1128 06:53:27.169019 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:27 crc kubenswrapper[4946]: I1128 06:53:27.169031 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:27Z","lastTransitionTime":"2025-11-28T06:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:27 crc kubenswrapper[4946]: I1128 06:53:27.272664 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:27 crc kubenswrapper[4946]: I1128 06:53:27.272744 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:27 crc kubenswrapper[4946]: I1128 06:53:27.272766 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:27 crc kubenswrapper[4946]: I1128 06:53:27.272795 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:27 crc kubenswrapper[4946]: I1128 06:53:27.272861 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:27Z","lastTransitionTime":"2025-11-28T06:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:27 crc kubenswrapper[4946]: I1128 06:53:27.375846 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:27 crc kubenswrapper[4946]: I1128 06:53:27.375942 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:27 crc kubenswrapper[4946]: I1128 06:53:27.375968 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:27 crc kubenswrapper[4946]: I1128 06:53:27.376002 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:27 crc kubenswrapper[4946]: I1128 06:53:27.376025 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:27Z","lastTransitionTime":"2025-11-28T06:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:27 crc kubenswrapper[4946]: I1128 06:53:27.480312 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:27 crc kubenswrapper[4946]: I1128 06:53:27.480387 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:27 crc kubenswrapper[4946]: I1128 06:53:27.480405 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:27 crc kubenswrapper[4946]: I1128 06:53:27.480428 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:27 crc kubenswrapper[4946]: I1128 06:53:27.480447 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:27Z","lastTransitionTime":"2025-11-28T06:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:27 crc kubenswrapper[4946]: I1128 06:53:27.583944 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:27 crc kubenswrapper[4946]: I1128 06:53:27.584000 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:27 crc kubenswrapper[4946]: I1128 06:53:27.584020 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:27 crc kubenswrapper[4946]: I1128 06:53:27.584042 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:27 crc kubenswrapper[4946]: I1128 06:53:27.584059 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:27Z","lastTransitionTime":"2025-11-28T06:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:27 crc kubenswrapper[4946]: I1128 06:53:27.687703 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:27 crc kubenswrapper[4946]: I1128 06:53:27.687768 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:27 crc kubenswrapper[4946]: I1128 06:53:27.687779 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:27 crc kubenswrapper[4946]: I1128 06:53:27.687820 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:27 crc kubenswrapper[4946]: I1128 06:53:27.687837 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:27Z","lastTransitionTime":"2025-11-28T06:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:27 crc kubenswrapper[4946]: I1128 06:53:27.791545 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:27 crc kubenswrapper[4946]: I1128 06:53:27.791656 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:27 crc kubenswrapper[4946]: I1128 06:53:27.791683 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:27 crc kubenswrapper[4946]: I1128 06:53:27.791719 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:27 crc kubenswrapper[4946]: I1128 06:53:27.791748 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:27Z","lastTransitionTime":"2025-11-28T06:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:27 crc kubenswrapper[4946]: I1128 06:53:27.895146 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:27 crc kubenswrapper[4946]: I1128 06:53:27.895235 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:27 crc kubenswrapper[4946]: I1128 06:53:27.895262 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:27 crc kubenswrapper[4946]: I1128 06:53:27.895304 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:27 crc kubenswrapper[4946]: I1128 06:53:27.895345 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:27Z","lastTransitionTime":"2025-11-28T06:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:27 crc kubenswrapper[4946]: I1128 06:53:27.989814 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:53:27 crc kubenswrapper[4946]: I1128 06:53:27.989904 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:53:27 crc kubenswrapper[4946]: I1128 06:53:27.990016 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:53:27 crc kubenswrapper[4946]: E1128 06:53:27.990028 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:53:27 crc kubenswrapper[4946]: E1128 06:53:27.990157 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:53:27 crc kubenswrapper[4946]: E1128 06:53:27.990409 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:53:27 crc kubenswrapper[4946]: I1128 06:53:27.997178 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:27 crc kubenswrapper[4946]: I1128 06:53:27.997229 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:27 crc kubenswrapper[4946]: I1128 06:53:27.997244 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:27 crc kubenswrapper[4946]: I1128 06:53:27.997262 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:27 crc kubenswrapper[4946]: I1128 06:53:27.997277 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:27Z","lastTransitionTime":"2025-11-28T06:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.100542 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.100583 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.100594 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.100610 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.100624 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:28Z","lastTransitionTime":"2025-11-28T06:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.203863 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.203917 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.203935 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.203956 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.203973 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:28Z","lastTransitionTime":"2025-11-28T06:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.252346 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.252400 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.252412 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.252434 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.252446 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:28Z","lastTransitionTime":"2025-11-28T06:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:28 crc kubenswrapper[4946]: E1128 06:53:28.267659 4946 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c74aee61-da50-44cb-be0e-1b7c62358c2a\\\",\\\"systemUUID\\\":\\\"99c18e26-a017-426b-bf43-1a9a83f35f94\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:28Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.271636 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.271728 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.271750 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.271813 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.271833 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:28Z","lastTransitionTime":"2025-11-28T06:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:28 crc kubenswrapper[4946]: E1128 06:53:28.293318 4946 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c74aee61-da50-44cb-be0e-1b7c62358c2a\\\",\\\"systemUUID\\\":\\\"99c18e26-a017-426b-bf43-1a9a83f35f94\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:28Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.300455 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.300745 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.300767 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.300786 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.300801 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:28Z","lastTransitionTime":"2025-11-28T06:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:28 crc kubenswrapper[4946]: E1128 06:53:28.317904 4946 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c74aee61-da50-44cb-be0e-1b7c62358c2a\\\",\\\"systemUUID\\\":\\\"99c18e26-a017-426b-bf43-1a9a83f35f94\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:28Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.322983 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.323029 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.323042 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.323061 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.323074 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:28Z","lastTransitionTime":"2025-11-28T06:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:28 crc kubenswrapper[4946]: E1128 06:53:28.339499 4946 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[...],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c74aee61-da50-44cb-be0e-1b7c62358c2a\\\",\\\"systemUUID\\\":\\\"99c18e26-a017-426b-bf43-1a9a83f35f94\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:28Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.343697 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.343773 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.343796 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.343826 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.343848 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:28Z","lastTransitionTime":"2025-11-28T06:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:28 crc kubenswrapper[4946]: E1128 06:53:28.361996 4946 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[...],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c74aee61-da50-44cb-be0e-1b7c62358c2a\\\",\\\"systemUUID\\\":\\\"99c18e26-a017-426b-bf43-1a9a83f35f94\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:28Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:28 crc kubenswrapper[4946]: E1128 06:53:28.362157 4946 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.364232 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.364267 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.364278 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.364294 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.364306 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:28Z","lastTransitionTime":"2025-11-28T06:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.467532 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.467647 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.467672 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.467695 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.467711 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:28Z","lastTransitionTime":"2025-11-28T06:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.570617 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.570683 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.570709 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.570740 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.570762 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:28Z","lastTransitionTime":"2025-11-28T06:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.674986 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.675048 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.675071 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.675103 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.675127 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:28Z","lastTransitionTime":"2025-11-28T06:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.778154 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.778192 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.778200 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.778214 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.778224 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:28Z","lastTransitionTime":"2025-11-28T06:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.882133 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.882196 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.882213 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.882289 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.882307 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:28Z","lastTransitionTime":"2025-11-28T06:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.985979 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.986035 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.986048 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.986067 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.986082 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:28Z","lastTransitionTime":"2025-11-28T06:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:28 crc kubenswrapper[4946]: I1128 06:53:28.989228 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gkg79" Nov 28 06:53:28 crc kubenswrapper[4946]: E1128 06:53:28.989382 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gkg79" podUID="4e6983b1-6887-4d13-8f9a-f261a745115f" Nov 28 06:53:29 crc kubenswrapper[4946]: I1128 06:53:29.089722 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:29 crc kubenswrapper[4946]: I1128 06:53:29.089887 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:29 crc kubenswrapper[4946]: I1128 06:53:29.089962 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:29 crc kubenswrapper[4946]: I1128 06:53:29.089998 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:29 crc kubenswrapper[4946]: I1128 06:53:29.090104 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:29Z","lastTransitionTime":"2025-11-28T06:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:29 crc kubenswrapper[4946]: I1128 06:53:29.194314 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:29 crc kubenswrapper[4946]: I1128 06:53:29.194392 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:29 crc kubenswrapper[4946]: I1128 06:53:29.194411 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:29 crc kubenswrapper[4946]: I1128 06:53:29.194437 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:29 crc kubenswrapper[4946]: I1128 06:53:29.194455 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:29Z","lastTransitionTime":"2025-11-28T06:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:29 crc kubenswrapper[4946]: I1128 06:53:29.297596 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:29 crc kubenswrapper[4946]: I1128 06:53:29.297656 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:29 crc kubenswrapper[4946]: I1128 06:53:29.297675 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:29 crc kubenswrapper[4946]: I1128 06:53:29.297702 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:29 crc kubenswrapper[4946]: I1128 06:53:29.297719 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:29Z","lastTransitionTime":"2025-11-28T06:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:29 crc kubenswrapper[4946]: I1128 06:53:29.401052 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:29 crc kubenswrapper[4946]: I1128 06:53:29.401116 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:29 crc kubenswrapper[4946]: I1128 06:53:29.401136 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:29 crc kubenswrapper[4946]: I1128 06:53:29.401160 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:29 crc kubenswrapper[4946]: I1128 06:53:29.401178 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:29Z","lastTransitionTime":"2025-11-28T06:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:29 crc kubenswrapper[4946]: I1128 06:53:29.504796 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:29 crc kubenswrapper[4946]: I1128 06:53:29.504875 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:29 crc kubenswrapper[4946]: I1128 06:53:29.504894 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:29 crc kubenswrapper[4946]: I1128 06:53:29.504928 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:29 crc kubenswrapper[4946]: I1128 06:53:29.504948 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:29Z","lastTransitionTime":"2025-11-28T06:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:29 crc kubenswrapper[4946]: I1128 06:53:29.607768 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:29 crc kubenswrapper[4946]: I1128 06:53:29.607819 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:29 crc kubenswrapper[4946]: I1128 06:53:29.607828 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:29 crc kubenswrapper[4946]: I1128 06:53:29.607843 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:29 crc kubenswrapper[4946]: I1128 06:53:29.607852 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:29Z","lastTransitionTime":"2025-11-28T06:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:29 crc kubenswrapper[4946]: I1128 06:53:29.710187 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:29 crc kubenswrapper[4946]: I1128 06:53:29.710267 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:29 crc kubenswrapper[4946]: I1128 06:53:29.710292 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:29 crc kubenswrapper[4946]: I1128 06:53:29.710322 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:29 crc kubenswrapper[4946]: I1128 06:53:29.710346 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:29Z","lastTransitionTime":"2025-11-28T06:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:29 crc kubenswrapper[4946]: I1128 06:53:29.813057 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:29 crc kubenswrapper[4946]: I1128 06:53:29.813107 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:29 crc kubenswrapper[4946]: I1128 06:53:29.813121 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:29 crc kubenswrapper[4946]: I1128 06:53:29.813138 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:29 crc kubenswrapper[4946]: I1128 06:53:29.813149 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:29Z","lastTransitionTime":"2025-11-28T06:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:29 crc kubenswrapper[4946]: I1128 06:53:29.916255 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:29 crc kubenswrapper[4946]: I1128 06:53:29.916296 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:29 crc kubenswrapper[4946]: I1128 06:53:29.916306 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:29 crc kubenswrapper[4946]: I1128 06:53:29.916320 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:29 crc kubenswrapper[4946]: I1128 06:53:29.916330 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:29Z","lastTransitionTime":"2025-11-28T06:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:29 crc kubenswrapper[4946]: I1128 06:53:29.989270 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:53:29 crc kubenswrapper[4946]: I1128 06:53:29.989340 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:53:29 crc kubenswrapper[4946]: E1128 06:53:29.989394 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:53:29 crc kubenswrapper[4946]: I1128 06:53:29.989272 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:53:29 crc kubenswrapper[4946]: E1128 06:53:29.989548 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:53:29 crc kubenswrapper[4946]: E1128 06:53:29.989747 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:53:30 crc kubenswrapper[4946]: I1128 06:53:30.019251 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:30 crc kubenswrapper[4946]: I1128 06:53:30.019306 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:30 crc kubenswrapper[4946]: I1128 06:53:30.019324 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:30 crc kubenswrapper[4946]: I1128 06:53:30.019348 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:30 crc kubenswrapper[4946]: I1128 06:53:30.019365 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:30Z","lastTransitionTime":"2025-11-28T06:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:30 crc kubenswrapper[4946]: I1128 06:53:30.122539 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:30 crc kubenswrapper[4946]: I1128 06:53:30.122583 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:30 crc kubenswrapper[4946]: I1128 06:53:30.122592 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:30 crc kubenswrapper[4946]: I1128 06:53:30.122608 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:30 crc kubenswrapper[4946]: I1128 06:53:30.122617 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:30Z","lastTransitionTime":"2025-11-28T06:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:30 crc kubenswrapper[4946]: I1128 06:53:30.225963 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:30 crc kubenswrapper[4946]: I1128 06:53:30.225999 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:30 crc kubenswrapper[4946]: I1128 06:53:30.226007 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:30 crc kubenswrapper[4946]: I1128 06:53:30.226021 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:30 crc kubenswrapper[4946]: I1128 06:53:30.226031 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:30Z","lastTransitionTime":"2025-11-28T06:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:30 crc kubenswrapper[4946]: I1128 06:53:30.328947 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:30 crc kubenswrapper[4946]: I1128 06:53:30.328987 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:30 crc kubenswrapper[4946]: I1128 06:53:30.329019 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:30 crc kubenswrapper[4946]: I1128 06:53:30.329034 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:30 crc kubenswrapper[4946]: I1128 06:53:30.329043 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:30Z","lastTransitionTime":"2025-11-28T06:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:30 crc kubenswrapper[4946]: I1128 06:53:30.431803 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:30 crc kubenswrapper[4946]: I1128 06:53:30.431852 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:30 crc kubenswrapper[4946]: I1128 06:53:30.431864 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:30 crc kubenswrapper[4946]: I1128 06:53:30.431881 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:30 crc kubenswrapper[4946]: I1128 06:53:30.431894 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:30Z","lastTransitionTime":"2025-11-28T06:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:30 crc kubenswrapper[4946]: I1128 06:53:30.535147 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:30 crc kubenswrapper[4946]: I1128 06:53:30.535194 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:30 crc kubenswrapper[4946]: I1128 06:53:30.535205 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:30 crc kubenswrapper[4946]: I1128 06:53:30.535220 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:30 crc kubenswrapper[4946]: I1128 06:53:30.535232 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:30Z","lastTransitionTime":"2025-11-28T06:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:30 crc kubenswrapper[4946]: I1128 06:53:30.637810 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:30 crc kubenswrapper[4946]: I1128 06:53:30.637854 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:30 crc kubenswrapper[4946]: I1128 06:53:30.637868 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:30 crc kubenswrapper[4946]: I1128 06:53:30.637884 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:30 crc kubenswrapper[4946]: I1128 06:53:30.637897 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:30Z","lastTransitionTime":"2025-11-28T06:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:30 crc kubenswrapper[4946]: I1128 06:53:30.740984 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:30 crc kubenswrapper[4946]: I1128 06:53:30.741029 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:30 crc kubenswrapper[4946]: I1128 06:53:30.741038 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:30 crc kubenswrapper[4946]: I1128 06:53:30.741052 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:30 crc kubenswrapper[4946]: I1128 06:53:30.741063 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:30Z","lastTransitionTime":"2025-11-28T06:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:30 crc kubenswrapper[4946]: I1128 06:53:30.843358 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:30 crc kubenswrapper[4946]: I1128 06:53:30.843453 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:30 crc kubenswrapper[4946]: I1128 06:53:30.843497 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:30 crc kubenswrapper[4946]: I1128 06:53:30.843525 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:30 crc kubenswrapper[4946]: I1128 06:53:30.843542 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:30Z","lastTransitionTime":"2025-11-28T06:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:30 crc kubenswrapper[4946]: I1128 06:53:30.946988 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:30 crc kubenswrapper[4946]: I1128 06:53:30.947070 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:30 crc kubenswrapper[4946]: I1128 06:53:30.947087 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:30 crc kubenswrapper[4946]: I1128 06:53:30.947114 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:30 crc kubenswrapper[4946]: I1128 06:53:30.947136 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:30Z","lastTransitionTime":"2025-11-28T06:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:30 crc kubenswrapper[4946]: I1128 06:53:30.989851 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gkg79" Nov 28 06:53:30 crc kubenswrapper[4946]: E1128 06:53:30.990020 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gkg79" podUID="4e6983b1-6887-4d13-8f9a-f261a745115f" Nov 28 06:53:31 crc kubenswrapper[4946]: I1128 06:53:31.049841 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:31 crc kubenswrapper[4946]: I1128 06:53:31.049890 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:31 crc kubenswrapper[4946]: I1128 06:53:31.049899 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:31 crc kubenswrapper[4946]: I1128 06:53:31.049917 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:31 crc kubenswrapper[4946]: I1128 06:53:31.049928 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:31Z","lastTransitionTime":"2025-11-28T06:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:31 crc kubenswrapper[4946]: I1128 06:53:31.152681 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:31 crc kubenswrapper[4946]: I1128 06:53:31.152780 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:31 crc kubenswrapper[4946]: I1128 06:53:31.152800 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:31 crc kubenswrapper[4946]: I1128 06:53:31.152858 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:31 crc kubenswrapper[4946]: I1128 06:53:31.152878 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:31Z","lastTransitionTime":"2025-11-28T06:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:31 crc kubenswrapper[4946]: I1128 06:53:31.255895 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:31 crc kubenswrapper[4946]: I1128 06:53:31.255948 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:31 crc kubenswrapper[4946]: I1128 06:53:31.255960 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:31 crc kubenswrapper[4946]: I1128 06:53:31.255976 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:31 crc kubenswrapper[4946]: I1128 06:53:31.255987 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:31Z","lastTransitionTime":"2025-11-28T06:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:31 crc kubenswrapper[4946]: I1128 06:53:31.359455 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:31 crc kubenswrapper[4946]: I1128 06:53:31.359559 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:31 crc kubenswrapper[4946]: I1128 06:53:31.359571 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:31 crc kubenswrapper[4946]: I1128 06:53:31.359589 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:31 crc kubenswrapper[4946]: I1128 06:53:31.359605 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:31Z","lastTransitionTime":"2025-11-28T06:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:31 crc kubenswrapper[4946]: I1128 06:53:31.462578 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:31 crc kubenswrapper[4946]: I1128 06:53:31.462643 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:31 crc kubenswrapper[4946]: I1128 06:53:31.462653 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:31 crc kubenswrapper[4946]: I1128 06:53:31.462673 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:31 crc kubenswrapper[4946]: I1128 06:53:31.462689 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:31Z","lastTransitionTime":"2025-11-28T06:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:31 crc kubenswrapper[4946]: I1128 06:53:31.567710 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:31 crc kubenswrapper[4946]: I1128 06:53:31.567762 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:31 crc kubenswrapper[4946]: I1128 06:53:31.567773 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:31 crc kubenswrapper[4946]: I1128 06:53:31.567792 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:31 crc kubenswrapper[4946]: I1128 06:53:31.567804 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:31Z","lastTransitionTime":"2025-11-28T06:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:31 crc kubenswrapper[4946]: I1128 06:53:31.670416 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:31 crc kubenswrapper[4946]: I1128 06:53:31.670517 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:31 crc kubenswrapper[4946]: I1128 06:53:31.670538 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:31 crc kubenswrapper[4946]: I1128 06:53:31.670565 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:31 crc kubenswrapper[4946]: I1128 06:53:31.670588 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:31Z","lastTransitionTime":"2025-11-28T06:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:31 crc kubenswrapper[4946]: I1128 06:53:31.773856 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:31 crc kubenswrapper[4946]: I1128 06:53:31.773907 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:31 crc kubenswrapper[4946]: I1128 06:53:31.773917 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:31 crc kubenswrapper[4946]: I1128 06:53:31.773935 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:31 crc kubenswrapper[4946]: I1128 06:53:31.773946 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:31Z","lastTransitionTime":"2025-11-28T06:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:31 crc kubenswrapper[4946]: I1128 06:53:31.876099 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:31 crc kubenswrapper[4946]: I1128 06:53:31.876181 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:31 crc kubenswrapper[4946]: I1128 06:53:31.876199 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:31 crc kubenswrapper[4946]: I1128 06:53:31.876231 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:31 crc kubenswrapper[4946]: I1128 06:53:31.876251 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:31Z","lastTransitionTime":"2025-11-28T06:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:31 crc kubenswrapper[4946]: I1128 06:53:31.979697 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:31 crc kubenswrapper[4946]: I1128 06:53:31.979756 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:31 crc kubenswrapper[4946]: I1128 06:53:31.979766 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:31 crc kubenswrapper[4946]: I1128 06:53:31.979789 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:31 crc kubenswrapper[4946]: I1128 06:53:31.979801 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:31Z","lastTransitionTime":"2025-11-28T06:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:31 crc kubenswrapper[4946]: I1128 06:53:31.989593 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:53:31 crc kubenswrapper[4946]: I1128 06:53:31.989752 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:53:31 crc kubenswrapper[4946]: E1128 06:53:31.989817 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:53:31 crc kubenswrapper[4946]: I1128 06:53:31.989882 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:53:31 crc kubenswrapper[4946]: E1128 06:53:31.990089 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:53:31 crc kubenswrapper[4946]: E1128 06:53:31.990192 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:53:32 crc kubenswrapper[4946]: I1128 06:53:32.083325 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:32 crc kubenswrapper[4946]: I1128 06:53:32.083389 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:32 crc kubenswrapper[4946]: I1128 06:53:32.083411 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:32 crc kubenswrapper[4946]: I1128 06:53:32.083442 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:32 crc kubenswrapper[4946]: I1128 06:53:32.083493 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:32Z","lastTransitionTime":"2025-11-28T06:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:32 crc kubenswrapper[4946]: I1128 06:53:32.187041 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:32 crc kubenswrapper[4946]: I1128 06:53:32.187110 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:32 crc kubenswrapper[4946]: I1128 06:53:32.187131 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:32 crc kubenswrapper[4946]: I1128 06:53:32.187160 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:32 crc kubenswrapper[4946]: I1128 06:53:32.187182 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:32Z","lastTransitionTime":"2025-11-28T06:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:32 crc kubenswrapper[4946]: I1128 06:53:32.290170 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:32 crc kubenswrapper[4946]: I1128 06:53:32.290232 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:32 crc kubenswrapper[4946]: I1128 06:53:32.290249 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:32 crc kubenswrapper[4946]: I1128 06:53:32.290270 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:32 crc kubenswrapper[4946]: I1128 06:53:32.290286 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:32Z","lastTransitionTime":"2025-11-28T06:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:32 crc kubenswrapper[4946]: I1128 06:53:32.393756 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:32 crc kubenswrapper[4946]: I1128 06:53:32.393835 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:32 crc kubenswrapper[4946]: I1128 06:53:32.393851 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:32 crc kubenswrapper[4946]: I1128 06:53:32.393871 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:32 crc kubenswrapper[4946]: I1128 06:53:32.393907 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:32Z","lastTransitionTime":"2025-11-28T06:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:32 crc kubenswrapper[4946]: I1128 06:53:32.499363 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:32 crc kubenswrapper[4946]: I1128 06:53:32.499410 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:32 crc kubenswrapper[4946]: I1128 06:53:32.499420 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:32 crc kubenswrapper[4946]: I1128 06:53:32.499435 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:32 crc kubenswrapper[4946]: I1128 06:53:32.499446 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:32Z","lastTransitionTime":"2025-11-28T06:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:32 crc kubenswrapper[4946]: I1128 06:53:32.601760 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:32 crc kubenswrapper[4946]: I1128 06:53:32.601814 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:32 crc kubenswrapper[4946]: I1128 06:53:32.601824 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:32 crc kubenswrapper[4946]: I1128 06:53:32.601873 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:32 crc kubenswrapper[4946]: I1128 06:53:32.601886 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:32Z","lastTransitionTime":"2025-11-28T06:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:32 crc kubenswrapper[4946]: I1128 06:53:32.677549 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4e6983b1-6887-4d13-8f9a-f261a745115f-metrics-certs\") pod \"network-metrics-daemon-gkg79\" (UID: \"4e6983b1-6887-4d13-8f9a-f261a745115f\") " pod="openshift-multus/network-metrics-daemon-gkg79" Nov 28 06:53:32 crc kubenswrapper[4946]: E1128 06:53:32.677700 4946 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 06:53:32 crc kubenswrapper[4946]: E1128 06:53:32.677760 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e6983b1-6887-4d13-8f9a-f261a745115f-metrics-certs podName:4e6983b1-6887-4d13-8f9a-f261a745115f nodeName:}" failed. No retries permitted until 2025-11-28 06:54:04.677741277 +0000 UTC m=+99.055806388 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4e6983b1-6887-4d13-8f9a-f261a745115f-metrics-certs") pod "network-metrics-daemon-gkg79" (UID: "4e6983b1-6887-4d13-8f9a-f261a745115f") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 06:53:32 crc kubenswrapper[4946]: I1128 06:53:32.705697 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:32 crc kubenswrapper[4946]: I1128 06:53:32.705751 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:32 crc kubenswrapper[4946]: I1128 06:53:32.705770 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:32 crc kubenswrapper[4946]: I1128 06:53:32.705795 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:32 crc kubenswrapper[4946]: I1128 06:53:32.705815 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:32Z","lastTransitionTime":"2025-11-28T06:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:32 crc kubenswrapper[4946]: I1128 06:53:32.809489 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:32 crc kubenswrapper[4946]: I1128 06:53:32.809549 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:32 crc kubenswrapper[4946]: I1128 06:53:32.809559 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:32 crc kubenswrapper[4946]: I1128 06:53:32.809576 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:32 crc kubenswrapper[4946]: I1128 06:53:32.809586 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:32Z","lastTransitionTime":"2025-11-28T06:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:32 crc kubenswrapper[4946]: I1128 06:53:32.912402 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:32 crc kubenswrapper[4946]: I1128 06:53:32.912457 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:32 crc kubenswrapper[4946]: I1128 06:53:32.912502 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:32 crc kubenswrapper[4946]: I1128 06:53:32.912524 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:32 crc kubenswrapper[4946]: I1128 06:53:32.912542 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:32Z","lastTransitionTime":"2025-11-28T06:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:32 crc kubenswrapper[4946]: I1128 06:53:32.989398 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gkg79" Nov 28 06:53:32 crc kubenswrapper[4946]: E1128 06:53:32.989624 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gkg79" podUID="4e6983b1-6887-4d13-8f9a-f261a745115f" Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.015006 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.015039 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.015050 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.015065 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.015076 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:33Z","lastTransitionTime":"2025-11-28T06:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.117408 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.117555 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.117569 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.117595 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.117606 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:33Z","lastTransitionTime":"2025-11-28T06:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.220931 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.221009 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.221033 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.221078 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.221096 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:33Z","lastTransitionTime":"2025-11-28T06:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.325188 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.325257 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.325278 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.325306 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.325323 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:33Z","lastTransitionTime":"2025-11-28T06:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.428742 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.428804 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.428824 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.428847 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.428865 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:33Z","lastTransitionTime":"2025-11-28T06:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.532366 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.532452 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.532519 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.532545 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.532562 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:33Z","lastTransitionTime":"2025-11-28T06:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.551377 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9g9w4_857356d2-6585-41c6-9a2c-e06ef45f7303/kube-multus/0.log" Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.551511 4946 generic.go:334] "Generic (PLEG): container finished" podID="857356d2-6585-41c6-9a2c-e06ef45f7303" containerID="db9d433d72f090d2c1d0036b7ba652dfca7bed7425a35bf87809ede7fd9fca8a" exitCode=1 Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.551554 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9g9w4" event={"ID":"857356d2-6585-41c6-9a2c-e06ef45f7303","Type":"ContainerDied","Data":"db9d433d72f090d2c1d0036b7ba652dfca7bed7425a35bf87809ede7fd9fca8a"} Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.552453 4946 scope.go:117] "RemoveContainer" containerID="db9d433d72f090d2c1d0036b7ba652dfca7bed7425a35bf87809ede7fd9fca8a" Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.567029 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a10fe13c-fc86-474a-945b-f96caafad2a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712d9a3cda3acae09eae5946cb14e19bee8697d567bff79dd67db8a41ed38fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45pfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T06:53:33Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.582761 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7450befc-262f-45d1-a5f4-f445e540185b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7854e764531b926939e7ce77e671b5dae4c0d8a4a1d8ac7153f851e924c4bb84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174d7788364a980297791c1072b9c05dd827bb3759a851ea4c5dd07981297b67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g2vhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:33Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.597227 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gkg79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6983b1-6887-4d13-8f9a-f261a745115f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9srk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9srk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:53:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gkg79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:33Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.624014 4946 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fb7be34-ed01-4217-aab4-c8ab35341ac6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af6de2bd2f3d9be0a670cb8c36e6db8c037c4987eff7356840af5b275156ad7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d384f5fd04cfe61e6a8d13538eb1b0267ca88eb0d226d8d85b5065fb5f7de9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89780aac4572411aeeb8480b61c98dbc22a2e4b3e72290470ba56c81b648b617\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://62b6df8018ceaf95465c4ad1bfd0bb2653ae4b87430f6ed4e21af2bd14bbb9ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:33Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.635608 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.635999 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.636009 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.636024 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.636034 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:33Z","lastTransitionTime":"2025-11-28T06:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.638314 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:33Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.653322 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpkwv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6df591b-287c-45d9-9db2-f3c441005fdd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f983a4bf3fb71511789223ca3b7b223ac9589d4807ea2dfe4087d2fa5d48738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm2g7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6b6b4122d5aa6fa73c5453308ea21172330e8324feb57965db83d5c2c05fa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm2g7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:53:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zpkwv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:33Z is after 2025-08-24T17:21:41Z" Nov 28 
06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.672010 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68d47c52-9827-4ebb-88a2-ba498092709e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eef1ad3ca814effd7a94c23d46fc3b14149ccc6ae6181e19c5f24884967b7627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7152b6653aaccd36bb018d24abf196dd6f13d41edddb0939917f96bcd9e9bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5be506bd8debbc12ff5ca56d77fdf667855d2c4078cea1b1d1388f5ef8831ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5dc7ef4ea06b54b6de621ebe79d3587324b0c4cdd90e078d4116dc5252ad35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4c7e2754f62b9d955da7589f04acb0136ed3c504a75b0db30f22cbeec54c413\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:52:44Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:52:38.666044 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:52:38.667906 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018003996/tls.crt::/tmp/serving-cert-1018003996/tls.key\\\\\\\"\\\\nI1128 06:52:44.041872 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:52:44.048251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:52:44.048273 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:52:44.048300 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:52:44.048305 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:52:44.064176 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 06:52:44.064208 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064213 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:52:44.064220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:52:44.064223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:52:44.064226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 06:52:44.064671 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 06:52:44.066682 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326666c58a9c75d0f08586c82bca1a0686ed2935f475b3c359cddc01875b248e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:33Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.686371 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"136606da-fc7c-4bea-902d-102f836514a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a67b58ab443f2a80c08229709de3844bf287465b58912f472f81b993e5f5fd8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30a18f86a38f7d74aedc8fb55161733023c3a515474f699fc66c645901b43a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14ecbd61c64f04b0ec631e67e25393619f5692c9967fff9d461c40bbb7f3c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://674e2b1496ef2f46b2bd37fd98df7bb076560669c1c6b4a39a081aaa0f0dab23\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://674e2b1496ef2f46b2bd37fd98df7bb076560669c1c6b4a39a081aaa0f0dab23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:33Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.704842 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:33Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.719565 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j4wp8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edc5a786-a162-4416-a240-54272b0c7376\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a6e397c1f3423abc4eeb0a20453109c22f305c1b259dddbe23bd71a78b5512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87x27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j4wp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:33Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.735589 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16482226c2bd22abff4c445ab0d5e46c2012a83ab5e616ebecc38d9009a8a6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:33Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.738525 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.738566 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.738577 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.738593 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.738602 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:33Z","lastTransitionTime":"2025-11-28T06:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no 
CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.753750 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9g9w4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"857356d2-6585-41c6-9a2c-e06ef45f7303\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9d433d72f090d2c1d0036b7ba652dfca7bed7425a35bf87809ede7fd9fca8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db9d433d72f090d2c1d0036b7ba652dfca7bed7425a35bf87809ede7fd9fca8a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:53:32Z\\\",\\\"message\\\":\\\"2025-11-28T06:52:47+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c0be44d2-2994-440c-b0d4-da72ba214f8c\\\\n2025-11-28T06:52:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c0be44d2-2994-440c-b0d4-da72ba214f8c to /host/opt/cni/bin/\\\\n2025-11-28T06:52:47Z [verbose] multus-daemon started\\\\n2025-11-28T06:52:47Z [verbose] Readiness Indicator file check\\\\n2025-11-28T06:53:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9g9w4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:33Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.773871 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35308a0d1d457d423060c71fb70b7e8d1723171f575b2545db3b372c4d2a12ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38e04dce69273a5669801d093325e51ed6d3a83d545aaf505f6619b89cb9e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9128b2484a66b1b0f5954befe68d1bd1be5a4edbf254d76473d8ee3025b47ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8b00edadf6d3442d70a476d465733f303f7a48d8998c1a9c881ae515daa264e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ba38f4c30e4f09063926783a9fbf7883079d7f077200ee18a7a24c236b025f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed2e660d6d37ec0abfdd8485a2b9cb79b041a214007a5b8832d70a21076c324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc3269320004c28557ed0853d941c5174ed1d136bae64a4ecbf769e8cf7fd237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc3269320004c28557ed0853d941c5174ed1d136bae64a4ecbf769e8cf7fd237\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:53:10Z\\\",\\\"message\\\":\\\"6983b1-6887-4d13-8f9a-f261a745115f\\\\\\\", APIVersion:\\\\\\\"v1\\\\\\\", ResourceVersion:\\\\\\\"26911\\\\\\\", FieldPath:\\\\\\\"\\\\\\\"}): type: 'Warning' reason: 'ErrorAddingResource' addLogicalPort failed for openshift-multus/network-metrics-daemon-gkg79: failed to update pod openshift-multus/network-metrics-daemon-gkg79: Internal error occurred: failed calling webhook \\\\\\\"pod.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/pod?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:09Z is after 2025-08-24T17:21:41Z\\\\nI1128 06:53:09.888528 6569 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 06:53:09.888829 6569 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1128 06:53:09.888959 6569 factory.go:656] Stopping watch factory\\\\nI1128 06:53:09.888988 6569 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1128 06:53:09.889839 6569 ovnkube.go:599] Stopped ovnkube\\\\nI1128 06:53:09.889881 6569 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1128 06:53:09.889948 6569 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:53:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pkknv_openshift-ovn-kubernetes(47e7046d-60dc-4dc0-b63e-f22f4ca5cd51)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc681b78e3640f7f8dd8729a32ea393d97f837795e0cccf628724d2b3901739b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pkknv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:33Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.786490 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:33Z is after 2025-08-24T17:21:41Z"
Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.800810 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0514bebed46aabc46c74ef48fda9c1e561a174d87db40128875cc0ec78446d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ab7252b4526779db2e0aea8b7747369f52242a76df7e179c6617290fee4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:33Z is after 2025-08-24T17:21:41Z"
Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.816995 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://078e67af49c0d442d125d9cb67d9d93ce05e452bf3b09f666e9763e481eaa15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:33Z is after 2025-08-24T17:21:41Z"
Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.835542 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqf5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d160dd-b1b4-4cdf-800c-74195ab023e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bd1943c060c0e4d739d86520e0661bbd6ecef5614aaceb3e893a11204fdb39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f336183b95d088fbf0d45cb8e6e6ab124453e9ba4a1456171f5631e9d029a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f336183b95d088fbf0d45cb8e6e6ab124453e9ba4a1456171f5631e9d029a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqf5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:33Z is after 2025-08-24T17:21:41Z"
Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.845440 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.845507 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.845521 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.845540 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.845553 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:33Z","lastTransitionTime":"2025-11-28T06:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.948713 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.948764 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.948775 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.948791 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.948801 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:33Z","lastTransitionTime":"2025-11-28T06:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.989692 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.989765 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 06:53:33 crc kubenswrapper[4946]: E1128 06:53:33.989886 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 28 06:53:33 crc kubenswrapper[4946]: I1128 06:53:33.989720 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 06:53:33 crc kubenswrapper[4946]: E1128 06:53:33.990036 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 28 06:53:33 crc kubenswrapper[4946]: E1128 06:53:33.990315 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.052356 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.052793 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.052973 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.053122 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.053258 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:34Z","lastTransitionTime":"2025-11-28T06:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.156182 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.156237 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.156252 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.156272 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.156286 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:34Z","lastTransitionTime":"2025-11-28T06:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.259888 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.259957 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.259977 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.260002 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.260021 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:34Z","lastTransitionTime":"2025-11-28T06:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.363295 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.363372 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.363396 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.363428 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.363452 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:34Z","lastTransitionTime":"2025-11-28T06:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.467041 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.468620 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.468656 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.468680 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.468714 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:34Z","lastTransitionTime":"2025-11-28T06:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.557844 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9g9w4_857356d2-6585-41c6-9a2c-e06ef45f7303/kube-multus/0.log"
Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.557943 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9g9w4" event={"ID":"857356d2-6585-41c6-9a2c-e06ef45f7303","Type":"ContainerStarted","Data":"c0a0a595eea30586e0c6859963642e750f7e52a69e98545d3dd7dcaff841e1b1"}
Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.571851 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.571882 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.571891 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.571905 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.571915 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:34Z","lastTransitionTime":"2025-11-28T06:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.583084 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqf5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d160dd-b1b4-4cdf-800c-74195ab023e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bd1943c060c0e4d739d86520e0661bbd6ecef5614aaceb3e893a11204fdb39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f336183b95d088fbf0d45cb8e6e6ab124453e9ba4a1456171f5631e9d029a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f336183b95d088fbf0d45cb8e6e6ab124453e9ba4a1456171f5631e9d029a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqf5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:34Z is after 2025-08-24T17:21:41Z"
Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.598076 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:34Z is after 2025-08-24T17:21:41Z"
Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.612996 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0514bebed46aabc46c74ef48fda9c1e561a174d87db40128875cc0ec78446d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ab7252b4526779db2e0aea8b7747369f52242a76df7e179c6617290fee4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:34Z is after 2025-08-24T17:21:41Z"
Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.625318 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://078e67af49c0d442d125d9cb67d9d93ce05e452bf3b09f666e9763e481eaa15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:34Z is after 2025-08-24T17:21:41Z"
Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.639423 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:34Z is after 2025-08-24T17:21:41Z"
Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.653833 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a10fe13c-fc86-474a-945b-f96caafad2a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712d9a3cda3acae09eae5946cb14e19bee8697d567bff79dd67db8a41ed38fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45pfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:34Z is after 2025-08-24T17:21:41Z"
Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.671484 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7450befc-262f-45d1-a5f4-f445e540185b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7854e764531b926939e7ce77e671b5dae4c0d8a4a1d8ac7153f851e924c4bb84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174d7788364a980297791c1072b9c05dd827bb3759a851ea4c5dd07981297b67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g2vhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:34Z is after 2025-08-24T17:21:41Z"
Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.675722 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.675779 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.675797 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.675819 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.675838 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:34Z","lastTransitionTime":"2025-11-28T06:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.682263 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gkg79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6983b1-6887-4d13-8f9a-f261a745115f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9srk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9srk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:53:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gkg79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:34Z is after 2025-08-24T17:21:41Z"
Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.693708 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fb7be34-ed01-4217-aab4-c8ab35341ac6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af6de2bd2f3d9be0a670cb8c36e6db8c037c4987eff7356840af5b275156ad7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d384f5fd04cfe61e6a8d13538eb1b0267ca88eb0d226d8d85b5065fb5f7de9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89780aac4572411aeeb8480b61c98dbc22a2e4b3e72290470ba56c81b648b617\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b6df8018ceaf95465c4ad1bfd0bb2653ae4b87430f6ed4e21af2bd14bbb9ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:34Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.701693 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j4wp8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edc5a786-a162-4416-a240-54272b0c7376\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a6e397c1f3423abc4eeb0a20453109c22f305c1b259dddbe23bd71a78b5512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87x27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j4wp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:34Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.712555 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpkwv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6df591b-287c-45d9-9db2-f3c441005fdd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f983a4bf3fb71511789223ca3b7b223ac9589d4807ea2dfe4087d2fa5d48738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm2g7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6b6b4122d5aa6fa73c5453308ea21172330e8324feb57965db83d5c2c05fa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-ap
i-access-zm2g7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:53:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zpkwv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:34Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.728303 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68d47c52-9827-4ebb-88a2-ba498092709e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eef1ad3ca814effd7a94c23d46fc3b14149ccc6ae6181e19c5f24884967b7627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7152b6653aaccd36bb018d24abf196dd6f13d41edddb0939917f96bcd9e9bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5be506bd8debbc12ff5ca56d77fdf6678
55d2c4078cea1b1d1388f5ef8831ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5dc7ef4ea06b54b6de621ebe79d3587324b0c4cdd90e078d4116dc5252ad35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4c7e2754f62b9d955da7589f04acb0136ed3c504a75b0db30f22cbeec54c413\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:52:44Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:52:38.666044 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:52:38.667906 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018003996/tls.crt::/tmp/serving-cert-1018003996/tls.key\\\\\\\"\\\\nI1128 06:52:44.041872 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:52:44.048251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:52:44.048273 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:52:44.048300 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:52:44.048305 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:52:44.064176 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 06:52:44.064208 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064213 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:52:44.064220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:52:44.064223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:52:44.064226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 06:52:44.064671 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 06:52:44.066682 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326666c58a9c75d0f08586c82bca1a0686ed2935f475b3c359cddc01875b248e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:34Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.740505 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"136606da-fc7c-4bea-902d-102f836514a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a67b58ab443f2a80c08229709de3844bf287465b58912f472f81b993e5f5fd8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30a18f86a38f7d74aedc8fb55161733023c3a515474f699fc66c645901b43a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14ecbd61c64f04b0ec631e67e25393619f5692c9967fff9d461c40bbb7f3c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://674e2b1496ef2f46b2bd37fd98df7bb076560669c1c6b4a39a081aaa0f0dab23\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://674e2b1496ef2f46b2bd37fd98df7bb076560669c1c6b4a39a081aaa0f0dab23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:34Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.755311 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:34Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.770759 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16482226c2bd22abff4c445ab0d5e46c2012a83ab5e616ebecc38d9009a8a6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:34Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.778149 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.778210 4946 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.778229 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.778254 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.778269 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:34Z","lastTransitionTime":"2025-11-28T06:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.789494 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9g9w4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"857356d2-6585-41c6-9a2c-e06ef45f7303\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a0a595eea30586e0c6859963642e750f7e52a69e98545d3dd7dcaff841e1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db9d433d72f090d2c1d0036b7ba652dfca7bed7425a35bf87809ede7fd9fca8a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:53:32Z\\\",\\\"message\\\":\\\"2025-11-28T06:52:47+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c0be44d2-2994-440c-b0d4-da72ba214f8c\\\\n2025-11-28T06:52:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c0be44d2-2994-440c-b0d4-da72ba214f8c to /host/opt/cni/bin/\\\\n2025-11-28T06:52:47Z [verbose] multus-daemon started\\\\n2025-11-28T06:52:47Z [verbose] Readiness Indicator file check\\\\n2025-11-28T06:53:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9g9w4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:34Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.807914 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35308a0d1d457d423060c71fb70b7e8d1723171f575b2545db3b372c4d2a12ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38e04dce69273a5669801d093325e51ed6d3a83d545aaf505f6619b89cb9e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9128b2484a66b1b0f5954befe68d1bd1be5a4edbf254d76473d8ee3025b47ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8b00edadf6d3442d70a476d465733f303f7a48d8998c1a9c881ae515daa264e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ba38f4c30e4f09063926783a9fbf7883079d7f077200ee18a7a24c236b025f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed2e660d6d37ec0abfdd8485a2b9cb79b041a214007a5b8832d70a21076c324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc3269320004c28557ed0853d941c5174ed1d136bae64a4ecbf769e8cf7fd237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc3269320004c28557ed0853d941c5174ed1d136bae64a4ecbf769e8cf7fd237\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:53:10Z\\\",\\\"message\\\":\\\"6983b1-6887-4d13-8f9a-f261a745115f\\\\\\\", APIVersion:\\\\\\\"v1\\\\\\\", ResourceVersion:\\\\\\\"26911\\\\\\\", FieldPath:\\\\\\\"\\\\\\\"}): type: 'Warning' reason: 'ErrorAddingResource' addLogicalPort failed for openshift-multus/network-metrics-daemon-gkg79: failed to update pod openshift-multus/network-metrics-daemon-gkg79: Internal error occurred: failed calling webhook \\\\\\\"pod.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/pod?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:09Z is after 2025-08-24T17:21:41Z\\\\nI1128 06:53:09.888528 6569 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 06:53:09.888829 6569 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1128 06:53:09.888959 6569 factory.go:656] Stopping watch factory\\\\nI1128 06:53:09.888988 6569 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1128 06:53:09.889839 6569 ovnkube.go:599] Stopped ovnkube\\\\nI1128 06:53:09.889881 6569 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1128 06:53:09.889948 6569 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:53:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pkknv_openshift-ovn-kubernetes(47e7046d-60dc-4dc0-b63e-f22f4ca5cd51)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc681b78e3640f7f8dd8729a32ea393d97f837795e0cccf628724d2b3901739b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pkknv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:34Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.880539 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.880575 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.880586 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.880599 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.880608 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:34Z","lastTransitionTime":"2025-11-28T06:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.983379 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.983430 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.983442 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.983472 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.983488 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:34Z","lastTransitionTime":"2025-11-28T06:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:53:34 crc kubenswrapper[4946]: I1128 06:53:34.989023 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gkg79"
Nov 28 06:53:34 crc kubenswrapper[4946]: E1128 06:53:34.989160 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gkg79" podUID="4e6983b1-6887-4d13-8f9a-f261a745115f"
Nov 28 06:53:35 crc kubenswrapper[4946]: I1128 06:53:35.086247 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:53:35 crc kubenswrapper[4946]: I1128 06:53:35.086318 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:53:35 crc kubenswrapper[4946]: I1128 06:53:35.086336 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:53:35 crc kubenswrapper[4946]: I1128 06:53:35.086358 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:53:35 crc kubenswrapper[4946]: I1128 06:53:35.086374 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:35Z","lastTransitionTime":"2025-11-28T06:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:53:35 crc kubenswrapper[4946]: I1128 06:53:35.193208 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:53:35 crc kubenswrapper[4946]: I1128 06:53:35.193271 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:53:35 crc kubenswrapper[4946]: I1128 06:53:35.193290 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:53:35 crc kubenswrapper[4946]: I1128 06:53:35.193317 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:53:35 crc kubenswrapper[4946]: I1128 06:53:35.193335 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:35Z","lastTransitionTime":"2025-11-28T06:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:53:35 crc kubenswrapper[4946]: I1128 06:53:35.296876 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:53:35 crc kubenswrapper[4946]: I1128 06:53:35.296925 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:53:35 crc kubenswrapper[4946]: I1128 06:53:35.296940 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:53:35 crc kubenswrapper[4946]: I1128 06:53:35.296961 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:53:35 crc kubenswrapper[4946]: I1128 06:53:35.296975 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:35Z","lastTransitionTime":"2025-11-28T06:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:53:35 crc kubenswrapper[4946]: I1128 06:53:35.400255 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:53:35 crc kubenswrapper[4946]: I1128 06:53:35.400315 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:53:35 crc kubenswrapper[4946]: I1128 06:53:35.400332 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:53:35 crc kubenswrapper[4946]: I1128 06:53:35.400355 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:53:35 crc kubenswrapper[4946]: I1128 06:53:35.400373 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:35Z","lastTransitionTime":"2025-11-28T06:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:53:35 crc kubenswrapper[4946]: I1128 06:53:35.502996 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:53:35 crc kubenswrapper[4946]: I1128 06:53:35.503056 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:53:35 crc kubenswrapper[4946]: I1128 06:53:35.503068 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:53:35 crc kubenswrapper[4946]: I1128 06:53:35.503118 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:53:35 crc kubenswrapper[4946]: I1128 06:53:35.503134 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:35Z","lastTransitionTime":"2025-11-28T06:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:53:35 crc kubenswrapper[4946]: I1128 06:53:35.605765 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:53:35 crc kubenswrapper[4946]: I1128 06:53:35.605815 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:53:35 crc kubenswrapper[4946]: I1128 06:53:35.605827 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:53:35 crc kubenswrapper[4946]: I1128 06:53:35.605848 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:53:35 crc kubenswrapper[4946]: I1128 06:53:35.605860 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:35Z","lastTransitionTime":"2025-11-28T06:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:53:35 crc kubenswrapper[4946]: I1128 06:53:35.708859 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:53:35 crc kubenswrapper[4946]: I1128 06:53:35.708910 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:53:35 crc kubenswrapper[4946]: I1128 06:53:35.708921 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:53:35 crc kubenswrapper[4946]: I1128 06:53:35.708938 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:53:35 crc kubenswrapper[4946]: I1128 06:53:35.708955 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:35Z","lastTransitionTime":"2025-11-28T06:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:53:35 crc kubenswrapper[4946]: I1128 06:53:35.811858 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:53:35 crc kubenswrapper[4946]: I1128 06:53:35.811914 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:53:35 crc kubenswrapper[4946]: I1128 06:53:35.811928 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:53:35 crc kubenswrapper[4946]: I1128 06:53:35.811946 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:53:35 crc kubenswrapper[4946]: I1128 06:53:35.811960 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:35Z","lastTransitionTime":"2025-11-28T06:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:53:35 crc kubenswrapper[4946]: I1128 06:53:35.915366 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:53:35 crc kubenswrapper[4946]: I1128 06:53:35.915406 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:53:35 crc kubenswrapper[4946]: I1128 06:53:35.915414 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:53:35 crc kubenswrapper[4946]: I1128 06:53:35.915429 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:53:35 crc kubenswrapper[4946]: I1128 06:53:35.915439 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:35Z","lastTransitionTime":"2025-11-28T06:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:53:35 crc kubenswrapper[4946]: I1128 06:53:35.989314 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 06:53:35 crc kubenswrapper[4946]: E1128 06:53:35.989443 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 28 06:53:35 crc kubenswrapper[4946]: I1128 06:53:35.989685 4946 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:53:35 crc kubenswrapper[4946]: E1128 06:53:35.989943 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:53:35 crc kubenswrapper[4946]: I1128 06:53:35.989685 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:53:35 crc kubenswrapper[4946]: E1128 06:53:35.990275 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.002161 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7450befc-262f-45d1-a5f4-f445e540185b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7854e764531b926939e7ce77e671b5dae4c0d8a4a1d8ac7153f851e924c4bb84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174d7788364a980297791c1072b9c05dd827bb3759a851ea4c5dd07981297b67\\\",\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g2vhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:35Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.016502 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gkg79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6983b1-6887-4d13-8f9a-f261a745115f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9srk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9srk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:53:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gkg79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:36Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.017900 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.017949 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.017962 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.017981 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.017994 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:36Z","lastTransitionTime":"2025-11-28T06:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.030014 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fb7be34-ed01-4217-aab4-c8ab35341ac6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af6de2bd2f3d9be0a670cb8c36e6db8c037c4987eff7356840af5b275156ad7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d384f5fd04cfe61e6a8d13538eb1b0267ca88eb0d226d8d85b5065fb5f7de9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89780aac4572411aeeb8480b61c98dbc22a2e4b3e72290470ba56c81b648b617\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b6df8018ceaf95465c4ad1bfd0bb2653ae4b87430f6ed4e21af2bd14bbb9ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:36Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.044103 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:36Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.054563 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a10fe13c-fc86-474a-945b-f96caafad2a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712d9a3cda3acae09eae5946cb14e19bee8697d567bff79dd67db8a41ed38fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45pfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:36Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.066362 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68d47c52-9827-4ebb-88a2-ba498092709e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eef1ad3ca814effd7a94c23d46fc3b14149ccc6ae6181e19c5f24884967b7627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7152b6653aaccd36bb018d24abf196dd6f13d41edddb0939917f96bcd9e9bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5be506bd8debbc12ff5ca56d77fdf667855d2c4078cea1b1d1388f5ef8831ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5dc7ef4ea06b54b6de621ebe79d3587324b0c4cdd90e078d4116dc5252ad35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4c7e2754f62b9d955da7589f04acb0136ed3c504a75b0db30f22cbeec54c413\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:52:44Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:52:38.666044 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:52:38.667906 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018003996/tls.crt::/tmp/serving-cert-1018003996/tls.key\\\\\\\"\\\\nI1128 06:52:44.041872 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:52:44.048251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:52:44.048273 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:52:44.048300 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:52:44.048305 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:52:44.064176 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 06:52:44.064208 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064213 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:52:44.064220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:52:44.064223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:52:44.064226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 06:52:44.064671 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 06:52:44.066682 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326666c58a9c75d0f08586c82bca1a0686ed2935f475b3c359cddc01875b248e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:36Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.079841 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"136606da-fc7c-4bea-902d-102f836514a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a67b58ab443f2a80c08229709de3844bf287465b58912f472f81b993e5f5fd8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30a18f86a38f7d74aedc8fb55161733023c3a515474f699fc66c645901b43a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14ecbd61c64f04b0ec631e67e25393619f5692c9967fff9d461c40bbb7f3c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://674e2b1496ef2f46b2bd37fd98df7bb076560669c1c6b4a39a081aaa0f0dab23\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://674e2b1496ef2f46b2bd37fd98df7bb076560669c1c6b4a39a081aaa0f0dab23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:36Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.094154 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:36Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.105123 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j4wp8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edc5a786-a162-4416-a240-54272b0c7376\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a6e397c1f3423abc4eeb0a20453109c22f305c1b259dddbe23bd71a78b5512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87x27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j4wp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:36Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.115310 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpkwv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6df591b-287c-45d9-9db2-f3c441005fdd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f983a4bf3fb71511789223ca3b7b223ac9589d4807ea2dfe4087d2fa5d48738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm2g7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6b6b4122d5aa6fa73c5453308ea21172330e8324feb57965db83d5c2c05fa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm2g7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:53:00Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zpkwv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:36Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.121021 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.121048 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.121057 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.121073 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.121083 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:36Z","lastTransitionTime":"2025-11-28T06:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.127412 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16482226c2bd22abff4c445ab0d5e46c2012a83ab5e616ebecc38d9009a8a6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:36Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.141512 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9g9w4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"857356d2-6585-41c6-9a2c-e06ef45f7303\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a0a595eea30586e0c6859963642e750f7e52a69e98545d3dd7dcaff841e1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db9d433d72f090d2c1d0036b7ba652dfca7bed7425a35bf87809ede7fd9fca8a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:53:32Z\\\",\\\"message\\\":\\\"2025-11-28T06:52:47+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c0be44d2-2994-440c-b0d4-da72ba214f8c\\\\n2025-11-28T06:52:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c0be44d2-2994-440c-b0d4-da72ba214f8c to /host/opt/cni/bin/\\\\n2025-11-28T06:52:47Z [verbose] multus-daemon started\\\\n2025-11-28T06:52:47Z [verbose] Readiness Indicator file check\\\\n2025-11-28T06:53:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9g9w4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:36Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.158994 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35308a0d1d457d423060c71fb70b7e8d1723171f575b2545db3b372c4d2a12ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38e04dce69273a5669801d093325e51ed6d3a83d545aaf505f6619b89cb9e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9128b2484a66b1b0f5954befe68d1bd1be5a4edbf254d76473d8ee3025b47ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8b00edadf6d3442d70a476d465733f303f7a48d8998c1a9c881ae515daa264e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ba38f4c30e4f09063926783a9fbf7883079d7f077200ee18a7a24c236b025f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed2e660d6d37ec0abfdd8485a2b9cb79b041a214007a5b8832d70a21076c324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc3269320004c28557ed0853d941c5174ed1d136bae64a4ecbf769e8cf7fd237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc3269320004c28557ed0853d941c5174ed1d136bae64a4ecbf769e8cf7fd237\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:53:10Z\\\",\\\"message\\\":\\\"6983b1-6887-4d13-8f9a-f261a745115f\\\\\\\", APIVersion:\\\\\\\"v1\\\\\\\", ResourceVersion:\\\\\\\"26911\\\\\\\", FieldPath:\\\\\\\"\\\\\\\"}): type: 'Warning' reason: 'ErrorAddingResource' addLogicalPort failed for openshift-multus/network-metrics-daemon-gkg79: failed to update pod openshift-multus/network-metrics-daemon-gkg79: Internal error occurred: failed calling webhook \\\\\\\"pod.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/pod?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:09Z is after 2025-08-24T17:21:41Z\\\\nI1128 06:53:09.888528 6569 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 06:53:09.888829 6569 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1128 06:53:09.888959 6569 factory.go:656] Stopping watch factory\\\\nI1128 06:53:09.888988 6569 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1128 06:53:09.889839 6569 ovnkube.go:599] Stopped ovnkube\\\\nI1128 06:53:09.889881 6569 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1128 06:53:09.889948 6569 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:53:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pkknv_openshift-ovn-kubernetes(47e7046d-60dc-4dc0-b63e-f22f4ca5cd51)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc681b78e3640f7f8dd8729a32ea393d97f837795e0cccf628724d2b3901739b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pkknv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:36Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.172438 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:36Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.187762 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0514bebed46aabc46c74ef48fda9c1e561a174d87db40128875cc0ec78446d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ab7252b4526779db2e0aea8b7747369f52242a76df7e179c6617290fee4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:36Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.203612 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://078e67af49c0d442d125d9cb67d9d93ce05e452bf3b09f666e9763e481eaa15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:36Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.223727 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.223763 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.223771 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.223784 4946 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.223794 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:36Z","lastTransitionTime":"2025-11-28T06:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.223803 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqf5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d160dd-b1b4-4cdf-800c-74195ab023e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bd1943c060c0e4d739d86520e0661bbd6ecef5614aaceb3e893a11204fdb39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed8145
1ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f336183b95d088fbf0d45cb8e6e6ab124453e9ba4a1456171f5631e9d029a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f336183b95d088fbf0d45cb8e6e6ab124453e9ba4a1456171f5631e9d029a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTim
e\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqf5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:36Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.327010 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.327367 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.327434 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.327522 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.327601 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:36Z","lastTransitionTime":"2025-11-28T06:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.430118 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.430418 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.430547 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.430624 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.430707 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:36Z","lastTransitionTime":"2025-11-28T06:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.533132 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.533174 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.533188 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.533204 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.533216 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:36Z","lastTransitionTime":"2025-11-28T06:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.635745 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.635780 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.635788 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.635803 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.635812 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:36Z","lastTransitionTime":"2025-11-28T06:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.738284 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.738331 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.738345 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.738365 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.738379 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:36Z","lastTransitionTime":"2025-11-28T06:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.841038 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.841086 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.841096 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.841114 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.841124 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:36Z","lastTransitionTime":"2025-11-28T06:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.944354 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.944401 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.944415 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.944431 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.944443 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:36Z","lastTransitionTime":"2025-11-28T06:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.988871 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gkg79" Nov 28 06:53:36 crc kubenswrapper[4946]: E1128 06:53:36.989019 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gkg79" podUID="4e6983b1-6887-4d13-8f9a-f261a745115f" Nov 28 06:53:36 crc kubenswrapper[4946]: I1128 06:53:36.989911 4946 scope.go:117] "RemoveContainer" containerID="bc3269320004c28557ed0853d941c5174ed1d136bae64a4ecbf769e8cf7fd237" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.048316 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.048358 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.048369 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.048385 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.048398 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:37Z","lastTransitionTime":"2025-11-28T06:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.153178 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.153211 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.153222 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.153240 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.153253 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:37Z","lastTransitionTime":"2025-11-28T06:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.256325 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.256378 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.256389 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.256412 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.256428 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:37Z","lastTransitionTime":"2025-11-28T06:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.358848 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.358885 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.358893 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.358909 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.358919 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:37Z","lastTransitionTime":"2025-11-28T06:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.461412 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.461472 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.461482 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.461499 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.461508 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:37Z","lastTransitionTime":"2025-11-28T06:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.563936 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.563979 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.563990 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.564004 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.564016 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:37Z","lastTransitionTime":"2025-11-28T06:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.568599 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pkknv_47e7046d-60dc-4dc0-b63e-f22f4ca5cd51/ovnkube-controller/2.log" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.573520 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" event={"ID":"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51","Type":"ContainerStarted","Data":"9a4ec6ab6b9acae6303733b64d2906e1bd955b80202cc9af54adc15207de969b"} Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.574291 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.588165 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://078e67af49c0d442d125d9cb67d9d93ce05e452bf3b09f666e9763e481eaa15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.607367 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqf5z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d160dd-b1b4-4cdf-800c-74195ab023e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bd1943c060c0e4d739d86520e0661bbd6ecef5614aaceb3e893a11204fdb39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f336183b95d088fbf0d45cb8e6e6ab124453e9ba4a1456171f5631e9d029a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f336183b95d088fbf0d45cb8e6e6ab124453e9ba4a1456171f5631e9d029a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqf5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.657701 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
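
The patch above lists the init containers of multus-additional-cni-plugins-pqf5z running strictly one after another (egress-router-binary-copy through whereabouts-cni), each terminating with exitCode 0 / Completed. A minimal Go sketch that replays the startedAt/finishedAt pairs copied from this entry and prints each step's duration; the timestamps are from the log, the program itself is only illustrative.

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // startedAt/finishedAt pairs copied from the initContainerStatuses of
        // multus-additional-cni-plugins-pqf5z in the entry above.
        steps := []struct{ name, started, finished string }{
            {"egress-router-binary-copy", "2025-11-28T06:52:47Z", "2025-11-28T06:52:47Z"},
            {"cni-plugins", "2025-11-28T06:52:48Z", "2025-11-28T06:52:48Z"},
            {"bond-cni-plugin", "2025-11-28T06:52:49Z", "2025-11-28T06:52:49Z"},
            {"routeoverride-cni", "2025-11-28T06:52:50Z", "2025-11-28T06:52:50Z"},
            {"whereabouts-cni-bincopy", "2025-11-28T06:52:51Z", "2025-11-28T06:52:51Z"},
            {"whereabouts-cni", "2025-11-28T06:52:52Z", "2025-11-28T06:52:53Z"},
        }
        for _, s := range steps {
            a, _ := time.Parse(time.RFC3339, s.started)
            b, _ := time.Parse(time.RFC3339, s.finished)
            fmt.Printf("%-25s ran %s starting %s\n", s.name, b.Sub(a), s.started)
        }
    }

The one-second spacing shows the kubelet starting each init container only after the previous one completed, which is the defined init-container semantics.
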
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.667552 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.667595 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.667608 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.667625 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.667639 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:37Z","lastTransitionTime":"2025-11-28T06:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.672662 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0514bebed46aabc46c74ef48fda9c1e561a174d87db40128875cc0ec78446d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ab7252b4526779db2e0aea8b7747369f52242a76df7e179c6617290fee4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.685227 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
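
Every status patch in these entries is rejected for the same reason: the pod.network-node-identity.openshift.io webhook is serving a certificate that expired on 2025-08-24T17:21:41Z, long before the node's clock reading of 2025-11-28T06:53:37Z. A sketch, assuming a PEM copy of the serving certificate at a hypothetical path, of the same NotBefore/NotAfter check that Go's crypto/x509 applies during the handshake:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Hypothetical path to a PEM copy of the webhook serving certificate;
        // substitute wherever the cert has been extracted on the host.
        data, err := os.ReadFile("/tmp/webhook-serving.crt")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Fprintln(os.Stderr, "no PEM block found")
            os.Exit(1)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        now := time.Now().UTC()
        fmt.Printf("NotBefore=%s NotAfter=%s now=%s\n",
            cert.NotBefore.Format(time.RFC3339),
            cert.NotAfter.Format(time.RFC3339),
            now.Format(time.RFC3339))
        // These two bounds are what produce "x509: certificate has expired
        // or is not yet valid" during certificate verification.
        if now.After(cert.NotAfter) {
            fmt.Println("certificate has expired")
        } else if now.Before(cert.NotBefore) {
            fmt.Println("certificate is not yet valid")
        } else {
            fmt.Println("certificate is within its validity window")
        }
    }
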
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fb7be34-ed01-4217-aab4-c8ab35341ac6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af6de2bd2f3d9be0a670cb8c36e6db8c037c4987eff7356840af5b275156ad7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d384f5fd04cfe61e6a8d13538eb1b0267ca88eb0d226d8d85b5065fb5f7de9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89780aac4572411aeeb8480b61c98dbc22a2e4b3e72290470ba56c81b648b617\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b6df8018ceaf95465c4ad1bfd0bb2653ae4b87430f6ed4e21af2bd14bbb9ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.696320 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.707646 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a10fe13c-fc86-474a-945b-f96caafad2a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712d9a3cda3acae09eae5946cb14e19bee8697d567bff79dd67db8a41ed38fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45pfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.721035 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7450befc-262f-45d1-a5f4-f445e540185b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7854e764531b926939e7ce77e671b5dae4c0d8a4a1d8ac7153f851e924c4bb84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174d7788364a980297791c1072b9c05dd827bb3759a851ea4c5dd07981297b67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g2vhr\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.733682 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gkg79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6983b1-6887-4d13-8f9a-f261a745115f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9srk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9srk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:53:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gkg79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:37Z is 
after 2025-08-24T17:21:41Z" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.749604 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.762301 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j4wp8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"edc5a786-a162-4416-a240-54272b0c7376\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a6e397c1f3423abc4eeb0a20453109c22f305c1b259dddbe23bd71a78b5512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87x27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j4wp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.781722 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.781767 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.781776 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.781792 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.781801 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:37Z","lastTransitionTime":"2025-11-28T06:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.784201 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpkwv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6df591b-287c-45d9-9db2-f3c441005fdd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f983a4bf3fb71511789223ca3b7b223ac9589d4807ea2dfe4087d2fa5d48738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm2g7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6b6b4122d5aa6fa73c5453308ea21172330e8324feb57965db83d5c2c05fa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm2g7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:53:00Z\\\"}}\" 
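
The node is repeatedly marked NotReady because the kubelet's runtime network check finds no CNI configuration under /etc/kubernetes/cni/net.d/. A minimal sketch of such a directory scan, assuming the same path as the log message; the kubelet's real check (via libcni) additionally parses and validates the files it finds rather than only globbing for them:

    package main

    import (
        "fmt"
        "path/filepath"
    )

    func main() {
        // Directory named in the NetworkPluginNotReady message above.
        dir := "/etc/kubernetes/cni/net.d"
        var found []string
        for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
            m, _ := filepath.Glob(filepath.Join(dir, pat))
            found = append(found, m...)
        }
        if len(found) == 0 {
            // Matches the kubelet's complaint in the log above.
            fmt.Println("no CNI configuration file in", dir)
            return
        }
        for _, f := range found {
            fmt.Println("found:", f)
        }
    }
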
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zpkwv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.798521 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68d47c52-9827-4ebb-88a2-ba498092709e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eef1ad3ca814effd7a94c23d46fc3b14149ccc6ae6181e19c5f24884967b7627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7152b6653aaccd36bb018d24abf196dd6f13d41edddb0939917f96bcd9e9bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5be506bd8debbc12ff5ca56d77fdf667855d2c4078cea1b1d1388f5ef8831ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"
lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5dc7ef4ea06b54b6de621ebe79d3587324b0c4cdd90e078d4116dc5252ad35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4c7e2754f62b9d955da7589f04acb0136ed3c504a75b0db30f22cbeec54c413\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:52:44Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:52:38.666044 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:52:38.667906 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018003996/tls.crt::/tmp/serving-cert-1018003996/tls.key\\\\\\\"\\\\nI1128 06:52:44.041872 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:52:44.048251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:52:44.048273 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:52:44.048300 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:52:44.048305 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:52:44.064176 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 06:52:44.064208 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064213 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:52:44.064220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:52:44.064223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:52:44.064226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 06:52:44.064671 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 06:52:44.066682 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326666c58a9c75d0f08586c82bca1a0686ed2935f475b3c359cddc01875b248e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.810581 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
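
The kube-apiserver entry above carries two kinds of non-zero exits: the previous check-endpoints instance ended with exitCode 255 after its fatal "pods \"kube-apiserver-crc\" not found" log line, while earlier pod entries report exitCode 137 with reason ContainerStatusUnknown. A sketch of the conventional shell-style reading of such codes; this mapping is convention (128+N for death by signal N, and klog's Fatal path exits 255), not something the kubelet itself computes:

    package main

    import "fmt"

    // exitReason applies the usual wait-status convention; the kubelet only
    // reports the raw code, the reading below is interpretation.
    func exitReason(code int) string {
        switch {
        case code == 0:
            return "Completed"
        case code > 128 && code <= 128+64:
            // 128+N conventionally means death by signal N; 137 is SIGKILL.
            return fmt.Sprintf("killed by signal %d", code-128)
        default:
            return fmt.Sprintf("application exit status %d", code)
        }
    }

    func main() {
        for _, c := range []int{0, 137, 255} {
            fmt.Printf("exitCode %d => %s\n", c, exitReason(c))
        }
    }
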
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"136606da-fc7c-4bea-902d-102f836514a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a67b58ab443f2a80c08229709de3844bf287465b58912f472f81b993e5f5fd8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30a18f86a38f7d74aedc8fb55161733023c3a515474f699fc66c645901b43a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14ecbd61c64f04b0ec631e67e25393619f5692c9967fff9d461c40bbb7f3c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://674e2b1496ef2f46b2bd37fd98df7bb076560669c1c6b4a39a081aaa0f0dab23\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://674e2b1496ef2f46b2bd37fd98df7bb076560669c1c6b4a39a081aaa0f0dab23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.839631 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35308a0d1d457d423060c71fb70b7e8d1723171f575b2545db3b372c4d2a12ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38e04dce69273a5669801d093325e51ed6d3a83d545aaf505f6619b89cb9e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9128b2484a66b1b0f5954befe68d1bd1be5a4edbf254d76473d8ee3025b47ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8b00edadf6d3442d70a476d465733f303f7a48d8998c1a9c881ae515daa264e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ba38f4c30e4f09063926783a9fbf7883079d7f077200ee18a7a24c236b025f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed2e660d6d37ec0abfdd8485a2b9cb79b041a214007a5b8832d70a21076c324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4ec6ab6b9acae6303733b64d2906e1bd955b80
202cc9af54adc15207de969b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc3269320004c28557ed0853d941c5174ed1d136bae64a4ecbf769e8cf7fd237\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:53:10Z\\\",\\\"message\\\":\\\"6983b1-6887-4d13-8f9a-f261a745115f\\\\\\\", APIVersion:\\\\\\\"v1\\\\\\\", ResourceVersion:\\\\\\\"26911\\\\\\\", FieldPath:\\\\\\\"\\\\\\\"}): type: 'Warning' reason: 'ErrorAddingResource' addLogicalPort failed for openshift-multus/network-metrics-daemon-gkg79: failed to update pod openshift-multus/network-metrics-daemon-gkg79: Internal error occurred: failed calling webhook \\\\\\\"pod.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/pod?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:09Z is after 2025-08-24T17:21:41Z\\\\nI1128 06:53:09.888528 6569 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 06:53:09.888829 6569 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1128 06:53:09.888959 6569 factory.go:656] Stopping watch factory\\\\nI1128 06:53:09.888988 6569 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1128 06:53:09.889839 6569 ovnkube.go:599] Stopped ovnkube\\\\nI1128 06:53:09.889881 6569 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1128 06:53:09.889948 6569 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:53:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc681b78e3640f7f8dd8729a32ea393d97f837795e0cccf628724d2b3901739b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pkknv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.853131 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16482226c2bd22abff4c445ab0d5e46c2012a83ab5e616ebecc38d9009a8a6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.869023 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9g9w4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"857356d2-6585-41c6-9a2c-e06ef45f7303\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a0a595eea30586e0c6859963642e750f7e52a69e98545d3dd7dcaff841e1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db9d433d72f090d2c1d0036b7ba652dfca7bed7425a35bf87809ede7fd9fca8a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:53:32Z\\\",\\\"message\\\":\\\"2025-11-28T06:52:47+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c0be44d2-2994-440c-b0d4-da72ba214f8c\\\\n2025-11-28T06:52:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c0be44d2-2994-440c-b0d4-da72ba214f8c to /host/opt/cni/bin/\\\\n2025-11-28T06:52:47Z [verbose] multus-daemon started\\\\n2025-11-28T06:52:47Z [verbose] Readiness Indicator file check\\\\n2025-11-28T06:53:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9g9w4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.883829 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.883870 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.883888 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.883906 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.883915 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:37Z","lastTransitionTime":"2025-11-28T06:53:37Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.987047 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.987083 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.987095 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.987110 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.987122 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:37Z","lastTransitionTime":"2025-11-28T06:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.989615 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.989648 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:53:37 crc kubenswrapper[4946]: I1128 06:53:37.989693 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:53:37 crc kubenswrapper[4946]: E1128 06:53:37.989876 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:53:37 crc kubenswrapper[4946]: E1128 06:53:37.989983 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:53:37 crc kubenswrapper[4946]: E1128 06:53:37.990138 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.090795 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.090847 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.090859 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.090877 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.090889 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:38Z","lastTransitionTime":"2025-11-28T06:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.193152 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.193186 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.193197 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.193211 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.193236 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:38Z","lastTransitionTime":"2025-11-28T06:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.296842 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.296874 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.296882 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.296895 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.296906 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:38Z","lastTransitionTime":"2025-11-28T06:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.399806 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.399851 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.399861 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.399876 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.399885 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:38Z","lastTransitionTime":"2025-11-28T06:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.451467 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.451505 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.451515 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.451529 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.451538 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:38Z","lastTransitionTime":"2025-11-28T06:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:38 crc kubenswrapper[4946]: E1128 06:53:38.466716 4946 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c74aee61-da50-44cb-be0e-1b7c62358c2a\\\",\\\"systemUUID\\\":\\\"99c18e26-a017-426b-bf43-1a9a83f35f94\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.470856 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.470885 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.470894 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.470911 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.470920 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:38Z","lastTransitionTime":"2025-11-28T06:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:38 crc kubenswrapper[4946]: E1128 06:53:38.485904 4946 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c74aee61-da50-44cb-be0e-1b7c62358c2a\\\",\\\"systemUUID\\\":\\\"99c18e26-a017-426b-bf43-1a9a83f35f94\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.489877 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.489920 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.489932 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.489948 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.489960 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:38Z","lastTransitionTime":"2025-11-28T06:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:38 crc kubenswrapper[4946]: E1128 06:53:38.504814 4946 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c74aee61-da50-44cb-be0e-1b7c62358c2a\\\",\\\"systemUUID\\\":\\\"99c18e26-a017-426b-bf43-1a9a83f35f94\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.508203 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.508243 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.508254 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.508273 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.508286 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:38Z","lastTransitionTime":"2025-11-28T06:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:38 crc kubenswrapper[4946]: E1128 06:53:38.521326 4946 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c74aee61-da50-44cb-be0e-1b7c62358c2a\\\",\\\"systemUUID\\\":\\\"99c18e26-a017-426b-bf43-1a9a83f35f94\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.524318 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.524339 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.524347 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.524358 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.524366 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:38Z","lastTransitionTime":"2025-11-28T06:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:38 crc kubenswrapper[4946]: E1128 06:53:38.544700 4946 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c74aee61-da50-44cb-be0e-1b7c62358c2a\\\",\\\"systemUUID\\\":\\\"99c18e26-a017-426b-bf43-1a9a83f35f94\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:38 crc kubenswrapper[4946]: E1128 06:53:38.544861 4946 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.546445 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.546531 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.546552 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.546576 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.546625 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:38Z","lastTransitionTime":"2025-11-28T06:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.579234 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pkknv_47e7046d-60dc-4dc0-b63e-f22f4ca5cd51/ovnkube-controller/3.log" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.580035 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pkknv_47e7046d-60dc-4dc0-b63e-f22f4ca5cd51/ovnkube-controller/2.log" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.583024 4946 generic.go:334] "Generic (PLEG): container finished" podID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" containerID="9a4ec6ab6b9acae6303733b64d2906e1bd955b80202cc9af54adc15207de969b" exitCode=1 Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.583067 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" event={"ID":"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51","Type":"ContainerDied","Data":"9a4ec6ab6b9acae6303733b64d2906e1bd955b80202cc9af54adc15207de969b"} Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.583105 4946 scope.go:117] "RemoveContainer" containerID="bc3269320004c28557ed0853d941c5174ed1d136bae64a4ecbf769e8cf7fd237" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.584009 4946 scope.go:117] "RemoveContainer" containerID="9a4ec6ab6b9acae6303733b64d2906e1bd955b80202cc9af54adc15207de969b" Nov 28 06:53:38 crc kubenswrapper[4946]: E1128 06:53:38.584260 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pkknv_openshift-ovn-kubernetes(47e7046d-60dc-4dc0-b63e-f22f4ca5cd51)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" podUID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.600442 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.613326 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0514bebed46aabc46c74ef48fda9c1e561a174d87db40128875cc0ec78446d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ab7252b4526779db2e0aea8b7747369f52242a76df7e179c6617290fee4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.627779 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://078e67af49c0d442d125d9cb67d9d93ce05e452bf3b09f666e9763e481eaa15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.642329 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqf5z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d160dd-b1b4-4cdf-800c-74195ab023e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bd1943c060c0e4d739d86520e0661bbd6ecef5614aaceb3e893a11204fdb39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f336183b95d088fbf0d45cb8e6e6ab124453e9ba4a1456171f5631e9d029a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f336183b95d088fbf0d45cb8e6e6ab124453e9ba4a1456171f5631e9d029a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqf5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.649022 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.649091 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:38 crc 
kubenswrapper[4946]: I1128 06:53:38.649105 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.649121 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.649134 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:38Z","lastTransitionTime":"2025-11-28T06:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.656932 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fb7be34-ed01-4217-aab4-c8ab35341ac6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af6de2bd2f3d9be0a670cb8c36e6db8c037c4987eff7356840af5b275156ad7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d384f5fd04cfe61e6a8d13538eb1b0267ca88eb0d226d8d85b5065fb5f7de9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"con
tainerID\\\":\\\"cri-o://89780aac4572411aeeb8480b61c98dbc22a2e4b3e72290470ba56c81b648b617\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b6df8018ceaf95465c4ad1bfd0bb2653ae4b87430f6ed4e21af2bd14bbb9ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.673423 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.684051 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a10fe13c-fc86-474a-945b-f96caafad2a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712d9a3cda3acae09eae5946cb14e19bee8697d567bff79dd67db8a41ed38fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45pfm\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.697065 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7450befc-262f-45d1-a5f4-f445e540185b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7854e764531b926939e7ce77e671b5dae4c0d8a4a1d8ac7153f851e924c4bb84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174d7788364a980297791c1072b9c05dd827bb3759a851ea4c5dd07981297b67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g2vhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.709291 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gkg79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6983b1-6887-4d13-8f9a-f261a745115f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9srk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9srk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:53:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gkg79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.730272 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68d47c52-9827-4ebb-88a2-ba498092709e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eef1ad3ca814effd7a94c23d46fc3b14149ccc6ae6181e19c5f24884967b7627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7152b6653aaccd36bb018d24abf196dd6f13d41edddb0939917f96bcd9e9bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5be506bd8debbc12ff5ca56d77fdf667855d2c4078cea1b1d1388f5ef8831ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5dc7ef4ea06b54b6de621ebe79d3587324b0c4cdd90e078d4116dc5252ad35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4c7e2754f62b9d955da7589f04acb0136ed3c504a75b0db30f22cbeec54c413\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:52:44Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:52:38.666044 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:52:38.667906 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018003996/tls.crt::/tmp/serving-cert-1018003996/tls.key\\\\\\\"\\\\nI1128 06:52:44.041872 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:52:44.048251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:52:44.048273 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:52:44.048300 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:52:44.048305 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:52:44.064176 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 06:52:44.064208 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064213 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:52:44.064220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:52:44.064223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:52:44.064226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 06:52:44.064671 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 06:52:44.066682 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326666c58a9c75d0f08586c82bca1a0686ed2935f475b3c359cddc01875b248e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.775032 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"136606da-fc7c-4bea-902d-102f836514a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a67b58ab443f2a80c08229709de3844bf287465b58912f472f81b993e5f5fd8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30a18f86a38f7d74aedc8fb55161733023c3a515474f699fc66c645901b43a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14ecbd61c64f04b0ec631e67e25393619f5692c9967fff9d461c40bbb7f3c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://674e2b1496ef2f46b2bd37fd98df7bb076560669c1c6b4a39a081aaa0f0dab23\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://674e2b1496ef2f46b2bd37fd98df7bb076560669c1c6b4a39a081aaa0f0dab23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.775525 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.775550 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.775562 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.775579 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.775593 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:38Z","lastTransitionTime":"2025-11-28T06:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.788684 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.800760 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j4wp8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"edc5a786-a162-4416-a240-54272b0c7376\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a6e397c1f3423abc4eeb0a20453109c22f305c1b259dddbe23bd71a78b5512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87x27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j4wp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.816965 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpkwv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6df591b-287c-45d9-9db2-f3c441005fdd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f983a4bf3fb71511789223ca3b7b223ac9589d4807ea2dfe4087d2fa5d48738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm2g7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6b6b4122d5aa6fa73c5453308ea21172330e8324feb57965db83d5c2c05fa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm2g7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:53:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zpkwv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:38Z is after 2025-08-24T17:21:41Z" Nov 28 
06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.833567 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16482226c2bd22abff4c445ab0d5e46c2012a83ab5e616ebecc38d9009a8a6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.848558 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9g9w4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"857356d2-6585-41c6-9a2c-e06ef45f7303\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a0a595eea30586e0c6859963642e750f7e52a69e98545d3dd7dcaff841e1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db9d433d72f090d2c1d0036b7ba652dfca7bed7425a35bf87809ede7fd9fca8a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:53:32Z\\\",\\\"message\\\":\\\"2025-11-28T06:52:47+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c0be44d2-2994-440c-b0d4-da72ba214f8c\\\\n2025-11-28T06:52:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c0be44d2-2994-440c-b0d4-da72ba214f8c to /host/opt/cni/bin/\\\\n2025-11-28T06:52:47Z [verbose] multus-daemon started\\\\n2025-11-28T06:52:47Z [verbose] Readiness Indicator file check\\\\n2025-11-28T06:53:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9g9w4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.876192 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35308a0d1d457d423060c71fb70b7e8d1723171f575b2545db3b372c4d2a12ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38e04dce69273a5669801d093325e51ed6d3a83d545aaf505f6619b89cb9e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9128b2484a66b1b0f5954befe68d1bd1be5a4edbf254d76473d8ee3025b47ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8b00edadf6d3442d70a476d465733f303f7a48d8998c1a9c881ae515daa264e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ba38f4c30e4f09063926783a9fbf7883079d7f077200ee18a7a24c236b025f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed2e660d6d37ec0abfdd8485a2b9cb79b041a214007a5b8832d70a21076c324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4ec6ab6b9acae6303733b64d2906e1bd955b80202cc9af54adc15207de969b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc3269320004c28557ed0853d941c5174ed1d136bae64a4ecbf769e8cf7fd237\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:53:10Z\\\",\\\"message\\\":\\\"6983b1-6887-4d13-8f9a-f261a745115f\\\\\\\", APIVersion:\\\\\\\"v1\\\\\\\", ResourceVersion:\\\\\\\"26911\\\\\\\", FieldPath:\\\\\\\"\\\\\\\"}): type: 'Warning' reason: 'ErrorAddingResource' addLogicalPort failed for openshift-multus/network-metrics-daemon-gkg79: failed to update pod openshift-multus/network-metrics-daemon-gkg79: Internal error occurred: failed calling webhook \\\\\\\"pod.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/pod?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:09Z is after 2025-08-24T17:21:41Z\\\\nI1128 06:53:09.888528 6569 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 06:53:09.888829 6569 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1128 06:53:09.888959 6569 factory.go:656] Stopping watch factory\\\\nI1128 06:53:09.888988 6569 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1128 06:53:09.889839 6569 ovnkube.go:599] Stopped ovnkube\\\\nI1128 06:53:09.889881 6569 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1128 06:53:09.889948 6569 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:53:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a4ec6ab6b9acae6303733b64d2906e1bd955b80202cc9af54adc15207de969b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:53:38Z\\\",\\\"message\\\":\\\"e:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.245:443: 10.217.5.245:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1128 06:53:37.869333 6897 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-9g9w4\\\\nI1128 06:53:37.869337 6897 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-dns/node-resolver-6tvtt\\\\nI1128 06:53:37.869343 6897 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-6tvtt\\\\nF1128 06:53:37.869344 6897 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc681b78e3640f7f8dd8729a32ea393d97f837795e0cccf628724d2b3901739b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mount
Path\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pkknv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.878373 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.878446 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.878495 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.878527 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.878547 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:38Z","lastTransitionTime":"2025-11-28T06:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.980801 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.981179 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.981380 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.981568 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.981713 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:38Z","lastTransitionTime":"2025-11-28T06:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:38 crc kubenswrapper[4946]: I1128 06:53:38.989650 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gkg79" Nov 28 06:53:38 crc kubenswrapper[4946]: E1128 06:53:38.989959 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gkg79" podUID="4e6983b1-6887-4d13-8f9a-f261a745115f" Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.084614 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.084669 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.084688 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.084711 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.084731 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:39Z","lastTransitionTime":"2025-11-28T06:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.186990 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.187298 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.187365 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.187428 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.187510 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:39Z","lastTransitionTime":"2025-11-28T06:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.290309 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.290372 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.290387 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.290410 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.290425 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:39Z","lastTransitionTime":"2025-11-28T06:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.393827 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.393872 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.393882 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.393913 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.393924 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:39Z","lastTransitionTime":"2025-11-28T06:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.497823 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.497890 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.497906 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.497932 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.497950 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:39Z","lastTransitionTime":"2025-11-28T06:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.589356 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pkknv_47e7046d-60dc-4dc0-b63e-f22f4ca5cd51/ovnkube-controller/3.log" Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.593989 4946 scope.go:117] "RemoveContainer" containerID="9a4ec6ab6b9acae6303733b64d2906e1bd955b80202cc9af54adc15207de969b" Nov 28 06:53:39 crc kubenswrapper[4946]: E1128 06:53:39.594152 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pkknv_openshift-ovn-kubernetes(47e7046d-60dc-4dc0-b63e-f22f4ca5cd51)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" podUID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.600277 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.600525 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.600666 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.600798 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.600949 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:39Z","lastTransitionTime":"2025-11-28T06:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
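[annotation] The "back-off 40s restarting failed container" message follows the kubelet's doubling restart back-off. Assuming the upstream defaults (10s initial delay, capped at 5m, reset after 10 minutes of stable running), which this sketch hard-codes rather than reads from the node, restartCount=3 lands exactly on 40s:

```go
// backoff.go - sketch of the kubelet's container restart back-off.
// The 10s base and 5m cap are the documented upstream defaults, assumed
// here, not read from this node's configuration.
package main

import (
	"fmt"
	"time"
)

// restartDelay returns the wait before restart number `restarts`
// (1-based: the first restart waits the base delay).
func restartDelay(restarts int, base, max time.Duration) time.Duration {
	if restarts == 0 {
		return 0
	}
	d := base
	for i := 1; i < restarts; i++ {
		d *= 2
		if d >= max {
			return max
		}
	}
	return d
}

func main() {
	// restartCount=3 in the ovnkube-controller status above:
	// 10s -> 20s -> 40s, matching "back-off 40s restarting failed container".
	for r := 0; r <= 5; r++ {
		fmt.Printf("restart %d: wait %s\n", r, restartDelay(r, 10*time.Second, 5*time.Minute))
	}
}
```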
Has your network provider started?"} Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.612818 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:39Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.638721 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0514bebed46aabc46c74ef48fda9c1e561a174d87db40128875cc0ec78446d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ab7252b4526779db2e0aea8b7747369f52242a76df7e179c6617290fee4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:39Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.684190 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://078e67af49c0d442d125d9cb67d9d93ce05e452bf3b09f666e9763e481eaa15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:39Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.703961 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.704026 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.704049 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.704078 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.704099 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:39Z","lastTransitionTime":"2025-11-28T06:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.731114 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqf5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d160dd-b1b4-4cdf-800c-74195ab023e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bd1943c060c0e4d739d86520e0661bbd6ecef5614aaceb3e893a11204fdb39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f336183b95d088fbf0d45cb8e6e6ab124453e9ba4a1456171f5631e9d029a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f336183b95d088fbf0d45cb8e6e6ab124453e9ba4a1456171f5631e9d029a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqf5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:39Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.744009 4946 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-gkg79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6983b1-6887-4d13-8f9a-f261a745115f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9srk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9srk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:53:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gkg79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:39Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.758567 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fb7be34-ed01-4217-aab4-c8ab35341ac6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af6de2bd2f3d9be0a670cb8c36e6db8c037c4987eff7356840af5b275156ad7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d384f5fd04cfe61e6a8d13538eb1b0267ca88eb0d226d8d85b5065fb5f7de9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89780aac4572411aeeb8480b61c98dbc22a2e4b3e72290470ba56c81b648b617\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b6df8018ceaf95465c4ad1bfd0bb2653ae4b87430f6ed4e21af2bd14bbb9ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:39Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.772949 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:39Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.786568 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a10fe13c-fc86-474a-945b-f96caafad2a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712d9a3cda3acae09eae5946cb14e19bee8697d567bff79dd67db8a41ed38fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45pfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:39Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.798319 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7450befc-262f-45d1-a5f4-f445e540185b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7854e764531b926939e7ce77e671b5dae4c0d8a4a1d8ac7153f851e924c4bb84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174d7788364a980297791c1072b9c05dd827bb3759a851ea4c5dd07981297b67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g2vhr\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:39Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.805960 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.805993 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.806004 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.806018 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.806030 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:39Z","lastTransitionTime":"2025-11-28T06:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.812993 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68d47c52-9827-4ebb-88a2-ba498092709e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eef1ad3ca814effd7a94c23d46fc3b14149ccc6ae6181e19c5f24884967b7627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7152b6653aaccd36bb018d24abf196dd6f13d41edddb093
9917f96bcd9e9bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5be506bd8debbc12ff5ca56d77fdf667855d2c4078cea1b1d1388f5ef8831ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5dc7ef4ea06b54b6de621ebe79d3587324b0c4cdd90e078d4116dc5252ad35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4c7e2754f62b9d955da7589f04acb0136ed3c504a75b0db30f22cbeec54c413\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:52:44Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:52:38.666044 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:52:38.667906 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018003996/tls.crt::/tmp/serving-cert-1018003996/tls.key\\\\\\\"\\\\nI1128 06:52:44.041872 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:52:44.048251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:52:44.048273 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:52:44.048300 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:52:44.048305 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:52:44.064176 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 06:52:44.064208 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064213 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:52:44.064220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:52:44.064223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:52:44.064226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 06:52:44.064671 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 06:52:44.066682 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326666c58a9c75d0f08586c82bca1a0686ed2935f475b3c359cddc01875b248e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:39Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.826895 4946 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"136606da-fc7c-4bea-902d-102f836514a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a67b58ab443f2a80c08229709de3844bf287465b58912f472f81b993e5f5fd8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30a18f86a38f7d74aedc8fb55161733023c3a515474f699fc66c645901b43a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14ecbd61c64f04b0ec631e67e25393619f5692c9967fff9d461c40bbb7f3c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containe
rID\\\":\\\"cri-o://674e2b1496ef2f46b2bd37fd98df7bb076560669c1c6b4a39a081aaa0f0dab23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://674e2b1496ef2f46b2bd37fd98df7bb076560669c1c6b4a39a081aaa0f0dab23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:39Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.840747 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:39Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.854879 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j4wp8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edc5a786-a162-4416-a240-54272b0c7376\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a6e397c1f3423abc4eeb0a20453109c22f305c1b259dddbe23bd71a78b5512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87x27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j4wp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:39Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.868168 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpkwv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6df591b-287c-45d9-9db2-f3c441005fdd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f983a4bf3fb71511789223ca3b7b223ac9589d4807ea2dfe4087d2fa5d48738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm2g7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6b6b4122d5aa6fa73c5453308ea21172330e8324feb57965db83d5c2c05fa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm2g7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:53:00Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zpkwv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:39Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.887851 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16482226c2bd22abff4c445ab0d5e46c2012a83ab5e616ebecc38d9009a8a6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:39Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.908148 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9g9w4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"857356d2-6585-41c6-9a2c-e06ef45f7303\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a0a595eea30586e0c6859963642e750f7e52a69e98545d3dd7dcaff841e1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db9d433d72f090d2c1d0036b7ba652dfca7bed7425a35bf87809ede7fd9fca8a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:53:32Z\\\",\\\"message\\\":\\\"2025-11-28T06:52:47+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c0be44d2-2994-440c-b0d4-da72ba214f8c\\\\n2025-11-28T06:52:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c0be44d2-2994-440c-b0d4-da72ba214f8c to /host/opt/cni/bin/\\\\n2025-11-28T06:52:47Z [verbose] multus-daemon started\\\\n2025-11-28T06:52:47Z [verbose] Readiness Indicator file check\\\\n2025-11-28T06:53:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9g9w4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:39Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.908748 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.908775 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.908787 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.908802 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.908816 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:39Z","lastTransitionTime":"2025-11-28T06:53:39Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.937564 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35308a0d1d457d423060c71fb70b7e8d1723171f575b2545db3b372c4d2a12ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38e04dce69273a5669801d093325e51ed6d3a83d545aaf505f6619b89cb9e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/s
ecrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9128b2484a66b1b0f5954befe68d1bd1be5a4edbf254d76473d8ee3025b47ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8b00edadf6d3442d70a476d465733f303f7a48d8998c1a9c881ae515daa264e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ba38f4c30e4f09063926783a9fbf7883079d7f077200ee18a7a24c236b025f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed2e660d6d37ec0abfdd8485a2b9cb79b041a214007a5b8832d70a21076c324\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4ec6ab6b9acae6303733b64d2906e1bd955b80202cc9af54adc15207de969b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a4ec6ab6b9acae6303733b64d2906e1bd955b80202cc9af54adc15207de969b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:53:38Z\\\",\\\"message\\\":\\\"e:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.245:443: 10.217.5.245:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1128 06:53:37.869333 6897 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-9g9w4\\\\nI1128 06:53:37.869337 6897 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-6tvtt\\\\nI1128 06:53:37.869343 6897 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-6tvtt\\\\nF1128 06:53:37.869344 6897 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:53:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pkknv_openshift-ovn-kubernetes(47e7046d-60dc-4dc0-b63e-f22f4ca5cd51)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc681b78e3640f7f8dd8729a32ea393d97f837795e0cccf628724d2b3901739b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pkknv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:39Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.990019 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:53:39 crc kubenswrapper[4946]: E1128 06:53:39.990160 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.990217 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:53:39 crc kubenswrapper[4946]: E1128 06:53:39.990443 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:53:39 crc kubenswrapper[4946]: I1128 06:53:39.990044 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:53:39 crc kubenswrapper[4946]: E1128 06:53:39.990822 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:53:40 crc kubenswrapper[4946]: I1128 06:53:40.017418 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:40 crc kubenswrapper[4946]: I1128 06:53:40.017507 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:40 crc kubenswrapper[4946]: I1128 06:53:40.017526 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:40 crc kubenswrapper[4946]: I1128 06:53:40.017550 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:40 crc kubenswrapper[4946]: I1128 06:53:40.017568 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:40Z","lastTransitionTime":"2025-11-28T06:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:40 crc kubenswrapper[4946]: I1128 06:53:40.121288 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:40 crc kubenswrapper[4946]: I1128 06:53:40.121346 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:40 crc kubenswrapper[4946]: I1128 06:53:40.121363 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:40 crc kubenswrapper[4946]: I1128 06:53:40.121384 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:40 crc kubenswrapper[4946]: I1128 06:53:40.121401 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:40Z","lastTransitionTime":"2025-11-28T06:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:40 crc kubenswrapper[4946]: I1128 06:53:40.224217 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:40 crc kubenswrapper[4946]: I1128 06:53:40.224276 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:40 crc kubenswrapper[4946]: I1128 06:53:40.224292 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:40 crc kubenswrapper[4946]: I1128 06:53:40.224312 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:40 crc kubenswrapper[4946]: I1128 06:53:40.224326 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:40Z","lastTransitionTime":"2025-11-28T06:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:40 crc kubenswrapper[4946]: I1128 06:53:40.327084 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:40 crc kubenswrapper[4946]: I1128 06:53:40.327150 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:40 crc kubenswrapper[4946]: I1128 06:53:40.327161 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:40 crc kubenswrapper[4946]: I1128 06:53:40.327179 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:40 crc kubenswrapper[4946]: I1128 06:53:40.327191 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:40Z","lastTransitionTime":"2025-11-28T06:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:40 crc kubenswrapper[4946]: I1128 06:53:40.431100 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:40 crc kubenswrapper[4946]: I1128 06:53:40.431157 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:40 crc kubenswrapper[4946]: I1128 06:53:40.431170 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:40 crc kubenswrapper[4946]: I1128 06:53:40.431190 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:40 crc kubenswrapper[4946]: I1128 06:53:40.431203 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:40Z","lastTransitionTime":"2025-11-28T06:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:40 crc kubenswrapper[4946]: I1128 06:53:40.534572 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:40 crc kubenswrapper[4946]: I1128 06:53:40.534646 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:40 crc kubenswrapper[4946]: I1128 06:53:40.534662 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:40 crc kubenswrapper[4946]: I1128 06:53:40.534689 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:40 crc kubenswrapper[4946]: I1128 06:53:40.534707 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:40Z","lastTransitionTime":"2025-11-28T06:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:40 crc kubenswrapper[4946]: I1128 06:53:40.638748 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:40 crc kubenswrapper[4946]: I1128 06:53:40.638836 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:40 crc kubenswrapper[4946]: I1128 06:53:40.638860 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:40 crc kubenswrapper[4946]: I1128 06:53:40.638891 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:40 crc kubenswrapper[4946]: I1128 06:53:40.638914 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:40Z","lastTransitionTime":"2025-11-28T06:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:40 crc kubenswrapper[4946]: I1128 06:53:40.741289 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:40 crc kubenswrapper[4946]: I1128 06:53:40.741349 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:40 crc kubenswrapper[4946]: I1128 06:53:40.741362 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:40 crc kubenswrapper[4946]: I1128 06:53:40.741380 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:40 crc kubenswrapper[4946]: I1128 06:53:40.741395 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:40Z","lastTransitionTime":"2025-11-28T06:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:40 crc kubenswrapper[4946]: I1128 06:53:40.844051 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:40 crc kubenswrapper[4946]: I1128 06:53:40.844143 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:40 crc kubenswrapper[4946]: I1128 06:53:40.844167 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:40 crc kubenswrapper[4946]: I1128 06:53:40.844202 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:40 crc kubenswrapper[4946]: I1128 06:53:40.844226 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:40Z","lastTransitionTime":"2025-11-28T06:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:40 crc kubenswrapper[4946]: I1128 06:53:40.947647 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:40 crc kubenswrapper[4946]: I1128 06:53:40.947730 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:40 crc kubenswrapper[4946]: I1128 06:53:40.947765 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:40 crc kubenswrapper[4946]: I1128 06:53:40.947796 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:40 crc kubenswrapper[4946]: I1128 06:53:40.947818 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:40Z","lastTransitionTime":"2025-11-28T06:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:40 crc kubenswrapper[4946]: I1128 06:53:40.989300 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gkg79" Nov 28 06:53:40 crc kubenswrapper[4946]: E1128 06:53:40.989570 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gkg79" podUID="4e6983b1-6887-4d13-8f9a-f261a745115f" Nov 28 06:53:41 crc kubenswrapper[4946]: I1128 06:53:41.051124 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:41 crc kubenswrapper[4946]: I1128 06:53:41.051207 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:41 crc kubenswrapper[4946]: I1128 06:53:41.051233 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:41 crc kubenswrapper[4946]: I1128 06:53:41.051265 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:41 crc kubenswrapper[4946]: I1128 06:53:41.051283 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:41Z","lastTransitionTime":"2025-11-28T06:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:41 crc kubenswrapper[4946]: I1128 06:53:41.154027 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:41 crc kubenswrapper[4946]: I1128 06:53:41.154108 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:41 crc kubenswrapper[4946]: I1128 06:53:41.154131 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:41 crc kubenswrapper[4946]: I1128 06:53:41.154159 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:41 crc kubenswrapper[4946]: I1128 06:53:41.154180 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:41Z","lastTransitionTime":"2025-11-28T06:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:41 crc kubenswrapper[4946]: I1128 06:53:41.257975 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:41 crc kubenswrapper[4946]: I1128 06:53:41.258049 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:41 crc kubenswrapper[4946]: I1128 06:53:41.258071 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:41 crc kubenswrapper[4946]: I1128 06:53:41.258098 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:41 crc kubenswrapper[4946]: I1128 06:53:41.258118 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:41Z","lastTransitionTime":"2025-11-28T06:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:41 crc kubenswrapper[4946]: I1128 06:53:41.361219 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:41 crc kubenswrapper[4946]: I1128 06:53:41.361279 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:41 crc kubenswrapper[4946]: I1128 06:53:41.361300 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:41 crc kubenswrapper[4946]: I1128 06:53:41.361330 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:41 crc kubenswrapper[4946]: I1128 06:53:41.361350 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:41Z","lastTransitionTime":"2025-11-28T06:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:41 crc kubenswrapper[4946]: I1128 06:53:41.465518 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:41 crc kubenswrapper[4946]: I1128 06:53:41.465580 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:41 crc kubenswrapper[4946]: I1128 06:53:41.465601 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:41 crc kubenswrapper[4946]: I1128 06:53:41.465628 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:41 crc kubenswrapper[4946]: I1128 06:53:41.465648 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:41Z","lastTransitionTime":"2025-11-28T06:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:41 crc kubenswrapper[4946]: I1128 06:53:41.569578 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:41 crc kubenswrapper[4946]: I1128 06:53:41.569633 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:41 crc kubenswrapper[4946]: I1128 06:53:41.569649 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:41 crc kubenswrapper[4946]: I1128 06:53:41.569676 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:41 crc kubenswrapper[4946]: I1128 06:53:41.569693 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:41Z","lastTransitionTime":"2025-11-28T06:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:41 crc kubenswrapper[4946]: I1128 06:53:41.673510 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:41 crc kubenswrapper[4946]: I1128 06:53:41.673557 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:41 crc kubenswrapper[4946]: I1128 06:53:41.673573 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:41 crc kubenswrapper[4946]: I1128 06:53:41.673596 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:41 crc kubenswrapper[4946]: I1128 06:53:41.673613 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:41Z","lastTransitionTime":"2025-11-28T06:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:41 crc kubenswrapper[4946]: I1128 06:53:41.777217 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:41 crc kubenswrapper[4946]: I1128 06:53:41.777287 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:41 crc kubenswrapper[4946]: I1128 06:53:41.777304 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:41 crc kubenswrapper[4946]: I1128 06:53:41.777331 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:41 crc kubenswrapper[4946]: I1128 06:53:41.777350 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:41Z","lastTransitionTime":"2025-11-28T06:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:41 crc kubenswrapper[4946]: I1128 06:53:41.881065 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:41 crc kubenswrapper[4946]: I1128 06:53:41.881142 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:41 crc kubenswrapper[4946]: I1128 06:53:41.881166 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:41 crc kubenswrapper[4946]: I1128 06:53:41.881195 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:41 crc kubenswrapper[4946]: I1128 06:53:41.881218 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:41Z","lastTransitionTime":"2025-11-28T06:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:41 crc kubenswrapper[4946]: I1128 06:53:41.985373 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:41 crc kubenswrapper[4946]: I1128 06:53:41.985432 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:41 crc kubenswrapper[4946]: I1128 06:53:41.985449 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:41 crc kubenswrapper[4946]: I1128 06:53:41.985501 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:41 crc kubenswrapper[4946]: I1128 06:53:41.985519 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:41Z","lastTransitionTime":"2025-11-28T06:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:41 crc kubenswrapper[4946]: I1128 06:53:41.989020 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:53:41 crc kubenswrapper[4946]: I1128 06:53:41.989074 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:53:41 crc kubenswrapper[4946]: I1128 06:53:41.989026 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:53:41 crc kubenswrapper[4946]: E1128 06:53:41.989234 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:53:41 crc kubenswrapper[4946]: E1128 06:53:41.989359 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:53:41 crc kubenswrapper[4946]: E1128 06:53:41.989545 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:53:42 crc kubenswrapper[4946]: I1128 06:53:42.088606 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:42 crc kubenswrapper[4946]: I1128 06:53:42.088667 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:42 crc kubenswrapper[4946]: I1128 06:53:42.088689 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:42 crc kubenswrapper[4946]: I1128 06:53:42.088718 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:42 crc kubenswrapper[4946]: I1128 06:53:42.088745 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:42Z","lastTransitionTime":"2025-11-28T06:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:42 crc kubenswrapper[4946]: I1128 06:53:42.191950 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:42 crc kubenswrapper[4946]: I1128 06:53:42.192007 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:42 crc kubenswrapper[4946]: I1128 06:53:42.192030 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:42 crc kubenswrapper[4946]: I1128 06:53:42.192056 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:42 crc kubenswrapper[4946]: I1128 06:53:42.192078 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:42Z","lastTransitionTime":"2025-11-28T06:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:42 crc kubenswrapper[4946]: I1128 06:53:42.295181 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:42 crc kubenswrapper[4946]: I1128 06:53:42.295262 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:42 crc kubenswrapper[4946]: I1128 06:53:42.295286 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:42 crc kubenswrapper[4946]: I1128 06:53:42.295314 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:42 crc kubenswrapper[4946]: I1128 06:53:42.295335 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:42Z","lastTransitionTime":"2025-11-28T06:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:42 crc kubenswrapper[4946]: I1128 06:53:42.398339 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:42 crc kubenswrapper[4946]: I1128 06:53:42.398399 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:42 crc kubenswrapper[4946]: I1128 06:53:42.398421 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:42 crc kubenswrapper[4946]: I1128 06:53:42.398451 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:42 crc kubenswrapper[4946]: I1128 06:53:42.398522 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:42Z","lastTransitionTime":"2025-11-28T06:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:42 crc kubenswrapper[4946]: I1128 06:53:42.501778 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:42 crc kubenswrapper[4946]: I1128 06:53:42.501822 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:42 crc kubenswrapper[4946]: I1128 06:53:42.501835 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:42 crc kubenswrapper[4946]: I1128 06:53:42.501850 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:42 crc kubenswrapper[4946]: I1128 06:53:42.501860 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:42Z","lastTransitionTime":"2025-11-28T06:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:42 crc kubenswrapper[4946]: I1128 06:53:42.604704 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:42 crc kubenswrapper[4946]: I1128 06:53:42.604766 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:42 crc kubenswrapper[4946]: I1128 06:53:42.604783 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:42 crc kubenswrapper[4946]: I1128 06:53:42.604808 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:42 crc kubenswrapper[4946]: I1128 06:53:42.604825 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:42Z","lastTransitionTime":"2025-11-28T06:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:42 crc kubenswrapper[4946]: I1128 06:53:42.709065 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:42 crc kubenswrapper[4946]: I1128 06:53:42.709115 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:42 crc kubenswrapper[4946]: I1128 06:53:42.709127 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:42 crc kubenswrapper[4946]: I1128 06:53:42.709144 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:42 crc kubenswrapper[4946]: I1128 06:53:42.709164 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:42Z","lastTransitionTime":"2025-11-28T06:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:42 crc kubenswrapper[4946]: I1128 06:53:42.811601 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:42 crc kubenswrapper[4946]: I1128 06:53:42.811676 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:42 crc kubenswrapper[4946]: I1128 06:53:42.811701 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:42 crc kubenswrapper[4946]: I1128 06:53:42.811730 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:42 crc kubenswrapper[4946]: I1128 06:53:42.811753 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:42Z","lastTransitionTime":"2025-11-28T06:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:42 crc kubenswrapper[4946]: I1128 06:53:42.914113 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:42 crc kubenswrapper[4946]: I1128 06:53:42.914167 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:42 crc kubenswrapper[4946]: I1128 06:53:42.914184 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:42 crc kubenswrapper[4946]: I1128 06:53:42.914206 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:42 crc kubenswrapper[4946]: I1128 06:53:42.914223 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:42Z","lastTransitionTime":"2025-11-28T06:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:42 crc kubenswrapper[4946]: I1128 06:53:42.989852 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gkg79" Nov 28 06:53:42 crc kubenswrapper[4946]: E1128 06:53:42.990079 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gkg79" podUID="4e6983b1-6887-4d13-8f9a-f261a745115f" Nov 28 06:53:43 crc kubenswrapper[4946]: I1128 06:53:43.016529 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:43 crc kubenswrapper[4946]: I1128 06:53:43.016590 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:43 crc kubenswrapper[4946]: I1128 06:53:43.016603 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:43 crc kubenswrapper[4946]: I1128 06:53:43.016622 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:43 crc kubenswrapper[4946]: I1128 06:53:43.016635 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:43Z","lastTransitionTime":"2025-11-28T06:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:43 crc kubenswrapper[4946]: I1128 06:53:43.119926 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:43 crc kubenswrapper[4946]: I1128 06:53:43.120008 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:43 crc kubenswrapper[4946]: I1128 06:53:43.120028 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:43 crc kubenswrapper[4946]: I1128 06:53:43.120050 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:43 crc kubenswrapper[4946]: I1128 06:53:43.120068 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:43Z","lastTransitionTime":"2025-11-28T06:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:43 crc kubenswrapper[4946]: I1128 06:53:43.222630 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:43 crc kubenswrapper[4946]: I1128 06:53:43.222700 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:43 crc kubenswrapper[4946]: I1128 06:53:43.222724 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:43 crc kubenswrapper[4946]: I1128 06:53:43.222752 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:43 crc kubenswrapper[4946]: I1128 06:53:43.222774 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:43Z","lastTransitionTime":"2025-11-28T06:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:43 crc kubenswrapper[4946]: I1128 06:53:43.326239 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:43 crc kubenswrapper[4946]: I1128 06:53:43.326309 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:43 crc kubenswrapper[4946]: I1128 06:53:43.326332 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:43 crc kubenswrapper[4946]: I1128 06:53:43.326361 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:43 crc kubenswrapper[4946]: I1128 06:53:43.326381 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:43Z","lastTransitionTime":"2025-11-28T06:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:43 crc kubenswrapper[4946]: I1128 06:53:43.429381 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:43 crc kubenswrapper[4946]: I1128 06:53:43.429424 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:43 crc kubenswrapper[4946]: I1128 06:53:43.429436 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:43 crc kubenswrapper[4946]: I1128 06:53:43.429451 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:43 crc kubenswrapper[4946]: I1128 06:53:43.429479 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:43Z","lastTransitionTime":"2025-11-28T06:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:43 crc kubenswrapper[4946]: I1128 06:53:43.532444 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:43 crc kubenswrapper[4946]: I1128 06:53:43.532502 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:43 crc kubenswrapper[4946]: I1128 06:53:43.532514 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:43 crc kubenswrapper[4946]: I1128 06:53:43.532534 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:43 crc kubenswrapper[4946]: I1128 06:53:43.532547 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:43Z","lastTransitionTime":"2025-11-28T06:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:43 crc kubenswrapper[4946]: I1128 06:53:43.637698 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:43 crc kubenswrapper[4946]: I1128 06:53:43.637752 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:43 crc kubenswrapper[4946]: I1128 06:53:43.637764 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:43 crc kubenswrapper[4946]: I1128 06:53:43.637784 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:43 crc kubenswrapper[4946]: I1128 06:53:43.637798 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:43Z","lastTransitionTime":"2025-11-28T06:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:43 crc kubenswrapper[4946]: I1128 06:53:43.741399 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:43 crc kubenswrapper[4946]: I1128 06:53:43.741535 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:43 crc kubenswrapper[4946]: I1128 06:53:43.741563 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:43 crc kubenswrapper[4946]: I1128 06:53:43.741594 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:43 crc kubenswrapper[4946]: I1128 06:53:43.741619 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:43Z","lastTransitionTime":"2025-11-28T06:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:43 crc kubenswrapper[4946]: I1128 06:53:43.844331 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:43 crc kubenswrapper[4946]: I1128 06:53:43.844394 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:43 crc kubenswrapper[4946]: I1128 06:53:43.844410 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:43 crc kubenswrapper[4946]: I1128 06:53:43.844434 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:43 crc kubenswrapper[4946]: I1128 06:53:43.844449 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:43Z","lastTransitionTime":"2025-11-28T06:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:43 crc kubenswrapper[4946]: I1128 06:53:43.947457 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:43 crc kubenswrapper[4946]: I1128 06:53:43.947561 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:43 crc kubenswrapper[4946]: I1128 06:53:43.947581 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:43 crc kubenswrapper[4946]: I1128 06:53:43.947609 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:43 crc kubenswrapper[4946]: I1128 06:53:43.947628 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:43Z","lastTransitionTime":"2025-11-28T06:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:43 crc kubenswrapper[4946]: I1128 06:53:43.989057 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:53:43 crc kubenswrapper[4946]: I1128 06:53:43.989153 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:53:43 crc kubenswrapper[4946]: E1128 06:53:43.989261 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:53:43 crc kubenswrapper[4946]: E1128 06:53:43.989339 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:53:43 crc kubenswrapper[4946]: I1128 06:53:43.989153 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:53:43 crc kubenswrapper[4946]: E1128 06:53:43.989500 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:53:44 crc kubenswrapper[4946]: I1128 06:53:44.051310 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:44 crc kubenswrapper[4946]: I1128 06:53:44.051368 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:44 crc kubenswrapper[4946]: I1128 06:53:44.051386 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:44 crc kubenswrapper[4946]: I1128 06:53:44.051411 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:44 crc kubenswrapper[4946]: I1128 06:53:44.051431 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:44Z","lastTransitionTime":"2025-11-28T06:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:44 crc kubenswrapper[4946]: I1128 06:53:44.155686 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:44 crc kubenswrapper[4946]: I1128 06:53:44.155771 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:44 crc kubenswrapper[4946]: I1128 06:53:44.155798 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:44 crc kubenswrapper[4946]: I1128 06:53:44.155828 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:44 crc kubenswrapper[4946]: I1128 06:53:44.155853 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:44Z","lastTransitionTime":"2025-11-28T06:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:44 crc kubenswrapper[4946]: I1128 06:53:44.259320 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:44 crc kubenswrapper[4946]: I1128 06:53:44.259385 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:44 crc kubenswrapper[4946]: I1128 06:53:44.259402 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:44 crc kubenswrapper[4946]: I1128 06:53:44.259424 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:44 crc kubenswrapper[4946]: I1128 06:53:44.259440 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:44Z","lastTransitionTime":"2025-11-28T06:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:44 crc kubenswrapper[4946]: I1128 06:53:44.362621 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:44 crc kubenswrapper[4946]: I1128 06:53:44.362675 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:44 crc kubenswrapper[4946]: I1128 06:53:44.362689 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:44 crc kubenswrapper[4946]: I1128 06:53:44.362711 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:44 crc kubenswrapper[4946]: I1128 06:53:44.362724 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:44Z","lastTransitionTime":"2025-11-28T06:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:44 crc kubenswrapper[4946]: I1128 06:53:44.465686 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:44 crc kubenswrapper[4946]: I1128 06:53:44.465779 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:44 crc kubenswrapper[4946]: I1128 06:53:44.465801 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:44 crc kubenswrapper[4946]: I1128 06:53:44.465835 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:44 crc kubenswrapper[4946]: I1128 06:53:44.465858 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:44Z","lastTransitionTime":"2025-11-28T06:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:44 crc kubenswrapper[4946]: I1128 06:53:44.569184 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:44 crc kubenswrapper[4946]: I1128 06:53:44.569270 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:44 crc kubenswrapper[4946]: I1128 06:53:44.569295 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:44 crc kubenswrapper[4946]: I1128 06:53:44.569328 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:44 crc kubenswrapper[4946]: I1128 06:53:44.569352 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:44Z","lastTransitionTime":"2025-11-28T06:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:44 crc kubenswrapper[4946]: I1128 06:53:44.672291 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:44 crc kubenswrapper[4946]: I1128 06:53:44.672349 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:44 crc kubenswrapper[4946]: I1128 06:53:44.672366 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:44 crc kubenswrapper[4946]: I1128 06:53:44.672392 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:44 crc kubenswrapper[4946]: I1128 06:53:44.672413 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:44Z","lastTransitionTime":"2025-11-28T06:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:44 crc kubenswrapper[4946]: I1128 06:53:44.775920 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:44 crc kubenswrapper[4946]: I1128 06:53:44.776004 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:44 crc kubenswrapper[4946]: I1128 06:53:44.776030 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:44 crc kubenswrapper[4946]: I1128 06:53:44.776061 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:44 crc kubenswrapper[4946]: I1128 06:53:44.776086 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:44Z","lastTransitionTime":"2025-11-28T06:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:44 crc kubenswrapper[4946]: I1128 06:53:44.878830 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:44 crc kubenswrapper[4946]: I1128 06:53:44.878875 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:44 crc kubenswrapper[4946]: I1128 06:53:44.878890 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:44 crc kubenswrapper[4946]: I1128 06:53:44.878910 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:44 crc kubenswrapper[4946]: I1128 06:53:44.878925 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:44Z","lastTransitionTime":"2025-11-28T06:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:44 crc kubenswrapper[4946]: I1128 06:53:44.982501 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:44 crc kubenswrapper[4946]: I1128 06:53:44.982570 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:44 crc kubenswrapper[4946]: I1128 06:53:44.982589 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:44 crc kubenswrapper[4946]: I1128 06:53:44.982621 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:44 crc kubenswrapper[4946]: I1128 06:53:44.982641 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:44Z","lastTransitionTime":"2025-11-28T06:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:44 crc kubenswrapper[4946]: I1128 06:53:44.989819 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gkg79" Nov 28 06:53:44 crc kubenswrapper[4946]: E1128 06:53:44.990046 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gkg79" podUID="4e6983b1-6887-4d13-8f9a-f261a745115f" Nov 28 06:53:45 crc kubenswrapper[4946]: I1128 06:53:45.085950 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:45 crc kubenswrapper[4946]: I1128 06:53:45.086013 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:45 crc kubenswrapper[4946]: I1128 06:53:45.086030 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:45 crc kubenswrapper[4946]: I1128 06:53:45.086052 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:45 crc kubenswrapper[4946]: I1128 06:53:45.086067 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:45Z","lastTransitionTime":"2025-11-28T06:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:45 crc kubenswrapper[4946]: I1128 06:53:45.190855 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:45 crc kubenswrapper[4946]: I1128 06:53:45.190915 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:45 crc kubenswrapper[4946]: I1128 06:53:45.190947 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:45 crc kubenswrapper[4946]: I1128 06:53:45.190974 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:45 crc kubenswrapper[4946]: I1128 06:53:45.190991 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:45Z","lastTransitionTime":"2025-11-28T06:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 28 06:53:45 crc kubenswrapper[4946]: I1128 06:53:45.294602 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:53:45 crc kubenswrapper[4946]: I1128 06:53:45.294664 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:53:45 crc kubenswrapper[4946]: I1128 06:53:45.294684 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:53:45 crc kubenswrapper[4946]: I1128 06:53:45.294713 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:53:45 crc kubenswrapper[4946]: I1128 06:53:45.294734 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:45Z","lastTransitionTime":"2025-11-28T06:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:53:45 crc kubenswrapper[4946]: I1128 06:53:45.397978 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:53:45 crc kubenswrapper[4946]: I1128 06:53:45.398033 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:53:45 crc kubenswrapper[4946]: I1128 06:53:45.398044 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:53:45 crc kubenswrapper[4946]: I1128 06:53:45.398061 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:53:45 crc kubenswrapper[4946]: I1128 06:53:45.398077 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:45Z","lastTransitionTime":"2025-11-28T06:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:53:45 crc kubenswrapper[4946]: I1128 06:53:45.501014 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:53:45 crc kubenswrapper[4946]: I1128 06:53:45.501078 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:53:45 crc kubenswrapper[4946]: I1128 06:53:45.501096 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:53:45 crc kubenswrapper[4946]: I1128 06:53:45.501124 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:53:45 crc kubenswrapper[4946]: I1128 06:53:45.501142 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:45Z","lastTransitionTime":"2025-11-28T06:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:53:45 crc kubenswrapper[4946]: I1128 06:53:45.605721 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:53:45 crc kubenswrapper[4946]: I1128 06:53:45.605781 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:53:45 crc kubenswrapper[4946]: I1128 06:53:45.605792 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:53:45 crc kubenswrapper[4946]: I1128 06:53:45.605809 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:53:45 crc kubenswrapper[4946]: I1128 06:53:45.605821 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:45Z","lastTransitionTime":"2025-11-28T06:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:53:45 crc kubenswrapper[4946]: I1128 06:53:45.710074 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:53:45 crc kubenswrapper[4946]: I1128 06:53:45.710640 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:53:45 crc kubenswrapper[4946]: I1128 06:53:45.710655 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:53:45 crc kubenswrapper[4946]: I1128 06:53:45.710676 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:53:45 crc kubenswrapper[4946]: I1128 06:53:45.710689 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:45Z","lastTransitionTime":"2025-11-28T06:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:53:45 crc kubenswrapper[4946]: I1128 06:53:45.814915 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:53:45 crc kubenswrapper[4946]: I1128 06:53:45.814988 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:53:45 crc kubenswrapper[4946]: I1128 06:53:45.815012 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:53:45 crc kubenswrapper[4946]: I1128 06:53:45.815040 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:53:45 crc kubenswrapper[4946]: I1128 06:53:45.815062 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:45Z","lastTransitionTime":"2025-11-28T06:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:53:45 crc kubenswrapper[4946]: I1128 06:53:45.918873 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:53:45 crc kubenswrapper[4946]: I1128 06:53:45.918947 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:53:45 crc kubenswrapper[4946]: I1128 06:53:45.918970 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:53:45 crc kubenswrapper[4946]: I1128 06:53:45.918997 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:53:45 crc kubenswrapper[4946]: I1128 06:53:45.919014 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:45Z","lastTransitionTime":"2025-11-28T06:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:53:45 crc kubenswrapper[4946]: I1128 06:53:45.993326 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 06:53:45 crc kubenswrapper[4946]: I1128 06:53:45.993410 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 06:53:45 crc kubenswrapper[4946]: I1128 06:53:45.993326 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 06:53:45 crc kubenswrapper[4946]: E1128 06:53:45.993593 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 06:53:45 crc kubenswrapper[4946]: E1128 06:53:45.993818 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 28 06:53:45 crc kubenswrapper[4946]: E1128 06:53:45.993986 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.016975 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fb7be34-ed01-4217-aab4-c8ab35341ac6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af6de2bd2f3d9be0a670cb8c36e6db8c037c4987eff7356840af5b275156ad7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d384f5fd04cfe61e6a8d13538eb1b0267ca88eb0d226d8d85b5065fb5f7de9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89780aac4572411aeeb8480b61c98dbc22a2e4b3e72290470ba56c81b648b617\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/st
atic-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b6df8018ceaf95465c4ad1bfd0bb2653ae4b87430f6ed4e21af2bd14bbb9ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:46Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.022820 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.022881 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.022899 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.022922 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.022941 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:46Z","lastTransitionTime":"2025-11-28T06:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.039939 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:46Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.053650 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tvtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a10fe13c-fc86-474a-945b-f96caafad2a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712d9a3cda3acae09eae5946cb14e19bee8697d567bff79dd67db8a41ed38fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45pfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:46Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.068513 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7450befc-262f-45d1-a5f4-f445e540185b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7854e764531b926939e7ce77e671b5dae4c0d8a4a1d8ac7153f851e924c4bb84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174d7788364a980297791c1072b9c05dd827bb3759a851ea4c5dd07981297b67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g2vhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:46Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.080990 4946 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-gkg79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6983b1-6887-4d13-8f9a-f261a745115f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9srk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9srk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:53:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gkg79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:46Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.099380 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"136606da-fc7c-4bea-902d-102f836514a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a67b58ab443f2a80c08229709de3844bf287465b58912f472f81b993e5f5fd8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30a18f86a38f7d74aedc8fb55161733023c3a515474f699fc66c645901b43a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14ecbd61c64f04b0ec631e67e25393619f5692c9967fff9d461c40bbb7f3c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://674e2b1496ef2f46b2bd37fd98df7bb076560669c1c6b4a39a081aaa0f0dab23\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://674e2b1496ef2f46b2bd37fd98df7bb076560669c1c6b4a39a081aaa0f0dab23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:46Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.116548 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:46Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.125978 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.126024 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.126041 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.126063 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.126079 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:46Z","lastTransitionTime":"2025-11-28T06:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.126923 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j4wp8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edc5a786-a162-4416-a240-54272b0c7376\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a6e397c1f3423abc4eeb0a20453109c22f305c1b259dddbe23bd71a78b5512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87x27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j4wp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:46Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.140068 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpkwv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6df591b-287c-45d9-9db2-f3c441005fdd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f983a4bf3fb71511789223ca3b7b223ac9589d4807ea2dfe4087d2fa5d48738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm2g7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6b6b4122d5aa6fa73c5453308ea21172330e8324feb57965db83d5c2c05fa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm2g7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:53:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zpkwv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:46Z is after 2025-08-24T17:21:41Z" Nov 28 
06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.155959 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68d47c52-9827-4ebb-88a2-ba498092709e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eef1ad3ca814effd7a94c23d46fc3b14149ccc6ae6181e19c5f24884967b7627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7152b6653aaccd36bb018d24abf196dd6f13d41edddb0939917f96bcd9e9bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5be506bd8debbc12ff5ca56d77fdf667855d2c4078cea1b1d1388f5ef8831ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5dc7ef4ea06b54b6de621ebe79d3587324b0c4cdd90e078d4116dc5252ad35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4c7e2754f62b9d955da7589f04acb0136ed3c504a75b0db30f22cbeec54c413\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:52:44Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:52:38.666044 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:52:38.667906 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018003996/tls.crt::/tmp/serving-cert-1018003996/tls.key\\\\\\\"\\\\nI1128 06:52:44.041872 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:52:44.048251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:52:44.048273 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:52:44.048300 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:52:44.048305 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:52:44.064176 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 06:52:44.064208 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064213 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:52:44.064220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:52:44.064223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:52:44.064226 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 06:52:44.064671 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 06:52:44.066682 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326666c58a9c75d0f08586c82bca1a0686ed2935f475b3c359cddc01875b248e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:46Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.168971 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9g9w4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"857356d2-6585-41c6-9a2c-e06ef45f7303\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a0a595eea30586e0c6859963642e750f7e52a69e98545d3dd7dcaff841e1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db9d433d72f090d2c1d0036b7ba652dfca7bed7425a35bf87809ede7fd9fca8a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:53:32Z\\\",\\\"message\\\":\\\"2025-11-28T06:52:47+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c0be44d2-2994-440c-b0d4-da72ba214f8c\\\\n2025-11-28T06:52:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c0be44d2-2994-440c-b0d4-da72ba214f8c to /host/opt/cni/bin/\\\\n2025-11-28T06:52:47Z [verbose] multus-daemon started\\\\n2025-11-28T06:52:47Z [verbose] Readiness Indicator file check\\\\n2025-11-28T06:53:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9g9w4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:46Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.192161 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35308a0d1d457d423060c71fb70b7e8d1723171f575b2545db3b372c4d2a12ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38e04dce69273a5669801d093325e51ed6d3a83d545aaf505f6619b89cb9e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9128b2484a66b1b0f5954befe68d1bd1be5a4edbf254d76473d8ee3025b47ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8b00edadf6d3442d70a476d465733f303f7a48d8998c1a9c881ae515daa264e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ba38f4c30e4f09063926783a9fbf7883079d7f077200ee18a7a24c236b025f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed2e660d6d37ec0abfdd8485a2b9cb79b041a214007a5b8832d70a21076c324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4ec6ab6b9acae6303733b64d2906e1bd955b80202cc9af54adc15207de969b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a4ec6ab6b9acae6303733b64d2906e1bd955b80202cc9af54adc15207de969b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:53:38Z\\\",\\\"message\\\":\\\"e:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.245:443: 10.217.5.245:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1128 06:53:37.869333 6897 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-9g9w4\\\\nI1128 06:53:37.869337 6897 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-6tvtt\\\\nI1128 06:53:37.869343 6897 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-6tvtt\\\\nF1128 06:53:37.869344 6897 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:53:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pkknv_openshift-ovn-kubernetes(47e7046d-60dc-4dc0-b63e-f22f4ca5cd51)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc681b78e3640f7f8dd8729a32ea393d97f837795e0cccf628724d2b3901739b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pkknv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:46Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.211851 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16482226c2bd22abff4c445ab0d5e46c2012a83ab5e616ebecc38d9009a8a6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:46Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.225404 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0514bebed46aabc46c74ef48fda9c1e561a174d87db40128875cc0ec78446d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ab7252b4526779db2e0aea8b7747369f52242a76df7e179c6617290fee4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:46Z is after 
2025-08-24T17:21:41Z" Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.229078 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.229143 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.229152 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.229167 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.229176 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:46Z","lastTransitionTime":"2025-11-28T06:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.239663 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://078e67af49c0d442d125d9cb67d9d93ce05e452bf3b09f666e9763e481eaa15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:46Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 
06:53:46.255607 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqf5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d160dd-b1b4-4cdf-800c-74195ab023e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bd1943c060c0e4d739d86520e0661bbd6ecef5614aaceb3e893a11204fdb39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfb
b085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\
\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f336183b95d088fbf0d45cb8e6e6ab124453e9ba4a1456171f5631e9d029a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f336183b95d088fbf0d45cb8e6e6ab124453e9ba4a1456171f5631e9d029a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqf5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:46Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.272343 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:46Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.332289 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.332352 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.332377 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.332405 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.332427 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:46Z","lastTransitionTime":"2025-11-28T06:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.435222 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.435299 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.435317 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.435340 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.435357 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:46Z","lastTransitionTime":"2025-11-28T06:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.537880 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.537952 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.537971 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.537999 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.538018 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:46Z","lastTransitionTime":"2025-11-28T06:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.640751 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.640813 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.640831 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.640854 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.640872 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:46Z","lastTransitionTime":"2025-11-28T06:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.744388 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.744453 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.744506 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.744531 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.744548 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:46Z","lastTransitionTime":"2025-11-28T06:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.848594 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.848677 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.848695 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.848719 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.848739 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:46Z","lastTransitionTime":"2025-11-28T06:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.952020 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.952121 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.952141 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.952165 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.952223 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:46Z","lastTransitionTime":"2025-11-28T06:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:46 crc kubenswrapper[4946]: I1128 06:53:46.989230 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gkg79" Nov 28 06:53:46 crc kubenswrapper[4946]: E1128 06:53:46.989482 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gkg79" podUID="4e6983b1-6887-4d13-8f9a-f261a745115f" Nov 28 06:53:47 crc kubenswrapper[4946]: I1128 06:53:47.055368 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:47 crc kubenswrapper[4946]: I1128 06:53:47.055423 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:47 crc kubenswrapper[4946]: I1128 06:53:47.055435 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:47 crc kubenswrapper[4946]: I1128 06:53:47.055455 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:47 crc kubenswrapper[4946]: I1128 06:53:47.055489 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:47Z","lastTransitionTime":"2025-11-28T06:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:47 crc kubenswrapper[4946]: I1128 06:53:47.158802 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:47 crc kubenswrapper[4946]: I1128 06:53:47.158860 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:47 crc kubenswrapper[4946]: I1128 06:53:47.158873 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:47 crc kubenswrapper[4946]: I1128 06:53:47.158892 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:47 crc kubenswrapper[4946]: I1128 06:53:47.158905 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:47Z","lastTransitionTime":"2025-11-28T06:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:47 crc kubenswrapper[4946]: I1128 06:53:47.261663 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:47 crc kubenswrapper[4946]: I1128 06:53:47.261745 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:47 crc kubenswrapper[4946]: I1128 06:53:47.261768 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:47 crc kubenswrapper[4946]: I1128 06:53:47.261803 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:47 crc kubenswrapper[4946]: I1128 06:53:47.261823 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:47Z","lastTransitionTime":"2025-11-28T06:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:47 crc kubenswrapper[4946]: I1128 06:53:47.364217 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:47 crc kubenswrapper[4946]: I1128 06:53:47.364294 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:47 crc kubenswrapper[4946]: I1128 06:53:47.364313 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:47 crc kubenswrapper[4946]: I1128 06:53:47.364341 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:47 crc kubenswrapper[4946]: I1128 06:53:47.364360 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:47Z","lastTransitionTime":"2025-11-28T06:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:47 crc kubenswrapper[4946]: I1128 06:53:47.469320 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:47 crc kubenswrapper[4946]: I1128 06:53:47.469420 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:47 crc kubenswrapper[4946]: I1128 06:53:47.469449 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:47 crc kubenswrapper[4946]: I1128 06:53:47.469506 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:47 crc kubenswrapper[4946]: I1128 06:53:47.469527 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:47Z","lastTransitionTime":"2025-11-28T06:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:47 crc kubenswrapper[4946]: I1128 06:53:47.573187 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:47 crc kubenswrapper[4946]: I1128 06:53:47.573241 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:47 crc kubenswrapper[4946]: I1128 06:53:47.573260 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:47 crc kubenswrapper[4946]: I1128 06:53:47.573282 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:47 crc kubenswrapper[4946]: I1128 06:53:47.573297 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:47Z","lastTransitionTime":"2025-11-28T06:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:47 crc kubenswrapper[4946]: I1128 06:53:47.676803 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:47 crc kubenswrapper[4946]: I1128 06:53:47.676855 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:47 crc kubenswrapper[4946]: I1128 06:53:47.676869 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:47 crc kubenswrapper[4946]: I1128 06:53:47.676888 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:47 crc kubenswrapper[4946]: I1128 06:53:47.676900 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:47Z","lastTransitionTime":"2025-11-28T06:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:47 crc kubenswrapper[4946]: I1128 06:53:47.778851 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:47 crc kubenswrapper[4946]: I1128 06:53:47.778882 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:47 crc kubenswrapper[4946]: I1128 06:53:47.778891 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:47 crc kubenswrapper[4946]: I1128 06:53:47.778905 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:47 crc kubenswrapper[4946]: I1128 06:53:47.778916 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:47Z","lastTransitionTime":"2025-11-28T06:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:47 crc kubenswrapper[4946]: I1128 06:53:47.862365 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:53:47 crc kubenswrapper[4946]: I1128 06:53:47.862559 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:53:47 crc kubenswrapper[4946]: I1128 06:53:47.862606 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:53:47 crc kubenswrapper[4946]: E1128 06:53:47.862646 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:54:51.862602764 +0000 UTC m=+146.240667945 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:53:47 crc kubenswrapper[4946]: E1128 06:53:47.862712 4946 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 06:53:47 crc kubenswrapper[4946]: I1128 06:53:47.862737 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:53:47 crc kubenswrapper[4946]: E1128 06:53:47.862771 4946 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 06:53:47 crc kubenswrapper[4946]: E1128 06:53:47.862826 4946 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 06:53:47 crc kubenswrapper[4946]: E1128 06:53:47.862842 4946 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:53:47 crc kubenswrapper[4946]: E1128 06:53:47.862792 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 06:54:51.862771428 +0000 UTC m=+146.240836559 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 06:53:47 crc kubenswrapper[4946]: I1128 06:53:47.862950 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:53:47 crc kubenswrapper[4946]: E1128 06:53:47.863101 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 06:54:51.863017234 +0000 UTC m=+146.241082365 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:53:47 crc kubenswrapper[4946]: E1128 06:53:47.863197 4946 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 06:53:47 crc kubenswrapper[4946]: E1128 06:53:47.863284 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 06:54:51.863268341 +0000 UTC m=+146.241333472 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 06:53:47 crc kubenswrapper[4946]: E1128 06:53:47.863590 4946 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 06:53:47 crc kubenswrapper[4946]: E1128 06:53:47.863625 4946 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 06:53:47 crc kubenswrapper[4946]: E1128 06:53:47.863641 4946 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:53:47 crc kubenswrapper[4946]: E1128 06:53:47.863720 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 06:54:51.863702521 +0000 UTC m=+146.241767852 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:53:47 crc kubenswrapper[4946]: I1128 06:53:47.882075 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:47 crc kubenswrapper[4946]: I1128 06:53:47.882130 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:47 crc kubenswrapper[4946]: I1128 06:53:47.882147 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:47 crc kubenswrapper[4946]: I1128 06:53:47.882168 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:47 crc kubenswrapper[4946]: I1128 06:53:47.882182 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:47Z","lastTransitionTime":"2025-11-28T06:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:47 crc kubenswrapper[4946]: I1128 06:53:47.984351 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:47 crc kubenswrapper[4946]: I1128 06:53:47.984406 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:47 crc kubenswrapper[4946]: I1128 06:53:47.984420 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:47 crc kubenswrapper[4946]: I1128 06:53:47.984445 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:47 crc kubenswrapper[4946]: I1128 06:53:47.984479 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:47Z","lastTransitionTime":"2025-11-28T06:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:47 crc kubenswrapper[4946]: I1128 06:53:47.989050 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:53:47 crc kubenswrapper[4946]: I1128 06:53:47.989083 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:53:47 crc kubenswrapper[4946]: E1128 06:53:47.989259 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:53:47 crc kubenswrapper[4946]: E1128 06:53:47.989389 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:53:47 crc kubenswrapper[4946]: I1128 06:53:47.989625 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:53:47 crc kubenswrapper[4946]: E1128 06:53:47.989812 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.087058 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.087112 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.087132 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.087156 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.087177 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:48Z","lastTransitionTime":"2025-11-28T06:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.189002 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.189040 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.189050 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.189063 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.189073 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:48Z","lastTransitionTime":"2025-11-28T06:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.292054 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.292133 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.292157 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.292188 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.292216 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:48Z","lastTransitionTime":"2025-11-28T06:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.395308 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.395351 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.395364 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.395379 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.395390 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:48Z","lastTransitionTime":"2025-11-28T06:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.498137 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.498180 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.498191 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.498208 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.498219 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:48Z","lastTransitionTime":"2025-11-28T06:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.601317 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.601381 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.601398 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.601422 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.601440 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:48Z","lastTransitionTime":"2025-11-28T06:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.705101 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.705150 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.705171 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.705194 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.705214 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:48Z","lastTransitionTime":"2025-11-28T06:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.807816 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.807890 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.807908 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.807931 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.807949 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:48Z","lastTransitionTime":"2025-11-28T06:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.903863 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.903907 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.903917 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.903934 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.904220 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:48Z","lastTransitionTime":"2025-11-28T06:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:48 crc kubenswrapper[4946]: E1128 06:53:48.918087 4946 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c74aee61-da50-44cb-be0e-1b7c62358c2a\\\",\\\"systemUUID\\\":\\\"99c18e26-a017-426b-bf43-1a9a83f35f94\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:48Z is after 
2025-08-24T17:21:41Z" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.922747 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.922788 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.922802 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.922819 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.922830 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:48Z","lastTransitionTime":"2025-11-28T06:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:48 crc kubenswrapper[4946]: E1128 06:53:48.933928 4946 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c74aee61-da50-44cb-be0e-1b7c62358c2a\\\",\\\"systemUUID\\\":\\\"99c18e26-a017-426b-bf43-1a9a83f35f94\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:48Z is after 
2025-08-24T17:21:41Z" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.939011 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.939053 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.939073 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.939096 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.939113 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:48Z","lastTransitionTime":"2025-11-28T06:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:48 crc kubenswrapper[4946]: E1128 06:53:48.958115 4946 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c74aee61-da50-44cb-be0e-1b7c62358c2a\\\",\\\"systemUUID\\\":\\\"99c18e26-a017-426b-bf43-1a9a83f35f94\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:48Z is after 
2025-08-24T17:21:41Z" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.963743 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.963798 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.963811 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.963830 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.963842 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:48Z","lastTransitionTime":"2025-11-28T06:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:48 crc kubenswrapper[4946]: E1128 06:53:48.980259 4946 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c74aee61-da50-44cb-be0e-1b7c62358c2a\\\",\\\"systemUUID\\\":\\\"99c18e26-a017-426b-bf43-1a9a83f35f94\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:48Z is after 
2025-08-24T17:21:41Z" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.984290 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.984349 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.984362 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.984429 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.984451 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:48Z","lastTransitionTime":"2025-11-28T06:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.989970 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gkg79" Nov 28 06:53:48 crc kubenswrapper[4946]: E1128 06:53:48.990076 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gkg79" podUID="4e6983b1-6887-4d13-8f9a-f261a745115f" Nov 28 06:53:48 crc kubenswrapper[4946]: E1128 06:53:48.996600 4946 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed2
1\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c74aee61-da50-44cb-be0e-1b7c62358c2a\\\",\\\"systemUUID\\\":\\\"99c18e26-a017-426b-bf43-1a9a83f35f94\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:48Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:48 crc kubenswrapper[4946]: E1128 06:53:48.996780 4946 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.998818 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.998848 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.998860 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.998875 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:48 crc kubenswrapper[4946]: I1128 06:53:48.998889 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:48Z","lastTransitionTime":"2025-11-28T06:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:49 crc kubenswrapper[4946]: I1128 06:53:49.101250 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:49 crc kubenswrapper[4946]: I1128 06:53:49.101310 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:49 crc kubenswrapper[4946]: I1128 06:53:49.101330 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:49 crc kubenswrapper[4946]: I1128 06:53:49.101357 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:49 crc kubenswrapper[4946]: I1128 06:53:49.101375 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:49Z","lastTransitionTime":"2025-11-28T06:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:49 crc kubenswrapper[4946]: I1128 06:53:49.204656 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:49 crc kubenswrapper[4946]: I1128 06:53:49.204713 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:49 crc kubenswrapper[4946]: I1128 06:53:49.204724 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:49 crc kubenswrapper[4946]: I1128 06:53:49.204746 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:49 crc kubenswrapper[4946]: I1128 06:53:49.204762 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:49Z","lastTransitionTime":"2025-11-28T06:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:49 crc kubenswrapper[4946]: I1128 06:53:49.307168 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:49 crc kubenswrapper[4946]: I1128 06:53:49.307225 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:49 crc kubenswrapper[4946]: I1128 06:53:49.307243 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:49 crc kubenswrapper[4946]: I1128 06:53:49.307268 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:49 crc kubenswrapper[4946]: I1128 06:53:49.307289 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:49Z","lastTransitionTime":"2025-11-28T06:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:49 crc kubenswrapper[4946]: I1128 06:53:49.409820 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:49 crc kubenswrapper[4946]: I1128 06:53:49.409857 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:49 crc kubenswrapper[4946]: I1128 06:53:49.409866 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:49 crc kubenswrapper[4946]: I1128 06:53:49.409882 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:49 crc kubenswrapper[4946]: I1128 06:53:49.409907 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:49Z","lastTransitionTime":"2025-11-28T06:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:49 crc kubenswrapper[4946]: I1128 06:53:49.513333 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:49 crc kubenswrapper[4946]: I1128 06:53:49.513404 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:49 crc kubenswrapper[4946]: I1128 06:53:49.513429 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:49 crc kubenswrapper[4946]: I1128 06:53:49.513503 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:49 crc kubenswrapper[4946]: I1128 06:53:49.513531 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:49Z","lastTransitionTime":"2025-11-28T06:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:49 crc kubenswrapper[4946]: I1128 06:53:49.617262 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:49 crc kubenswrapper[4946]: I1128 06:53:49.617339 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:49 crc kubenswrapper[4946]: I1128 06:53:49.617362 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:49 crc kubenswrapper[4946]: I1128 06:53:49.617389 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:49 crc kubenswrapper[4946]: I1128 06:53:49.617406 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:49Z","lastTransitionTime":"2025-11-28T06:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:49 crc kubenswrapper[4946]: I1128 06:53:49.720772 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:49 crc kubenswrapper[4946]: I1128 06:53:49.720840 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:49 crc kubenswrapper[4946]: I1128 06:53:49.720857 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:49 crc kubenswrapper[4946]: I1128 06:53:49.720881 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:49 crc kubenswrapper[4946]: I1128 06:53:49.720904 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:49Z","lastTransitionTime":"2025-11-28T06:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:49 crc kubenswrapper[4946]: I1128 06:53:49.824390 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:49 crc kubenswrapper[4946]: I1128 06:53:49.824524 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:49 crc kubenswrapper[4946]: I1128 06:53:49.824551 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:49 crc kubenswrapper[4946]: I1128 06:53:49.824581 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:49 crc kubenswrapper[4946]: I1128 06:53:49.824604 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:49Z","lastTransitionTime":"2025-11-28T06:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:49 crc kubenswrapper[4946]: I1128 06:53:49.927832 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:49 crc kubenswrapper[4946]: I1128 06:53:49.927901 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:49 crc kubenswrapper[4946]: I1128 06:53:49.927922 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:49 crc kubenswrapper[4946]: I1128 06:53:49.927951 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:49 crc kubenswrapper[4946]: I1128 06:53:49.927972 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:49Z","lastTransitionTime":"2025-11-28T06:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:49 crc kubenswrapper[4946]: I1128 06:53:49.997914 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:53:49 crc kubenswrapper[4946]: E1128 06:53:49.998145 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:53:49 crc kubenswrapper[4946]: I1128 06:53:49.998546 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:53:49 crc kubenswrapper[4946]: E1128 06:53:49.998666 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:53:49 crc kubenswrapper[4946]: I1128 06:53:49.999037 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:53:49 crc kubenswrapper[4946]: E1128 06:53:49.999174 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:53:50 crc kubenswrapper[4946]: I1128 06:53:50.030877 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:50 crc kubenswrapper[4946]: I1128 06:53:50.030919 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:50 crc kubenswrapper[4946]: I1128 06:53:50.030938 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:50 crc kubenswrapper[4946]: I1128 06:53:50.030959 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:50 crc kubenswrapper[4946]: I1128 06:53:50.030975 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:50Z","lastTransitionTime":"2025-11-28T06:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:50 crc kubenswrapper[4946]: I1128 06:53:50.133700 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:50 crc kubenswrapper[4946]: I1128 06:53:50.133800 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:50 crc kubenswrapper[4946]: I1128 06:53:50.133859 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:50 crc kubenswrapper[4946]: I1128 06:53:50.133885 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:50 crc kubenswrapper[4946]: I1128 06:53:50.133902 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:50Z","lastTransitionTime":"2025-11-28T06:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:50 crc kubenswrapper[4946]: I1128 06:53:50.238389 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:50 crc kubenswrapper[4946]: I1128 06:53:50.238457 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:50 crc kubenswrapper[4946]: I1128 06:53:50.238508 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:50 crc kubenswrapper[4946]: I1128 06:53:50.238532 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:50 crc kubenswrapper[4946]: I1128 06:53:50.238548 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:50Z","lastTransitionTime":"2025-11-28T06:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:50 crc kubenswrapper[4946]: I1128 06:53:50.342314 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:50 crc kubenswrapper[4946]: I1128 06:53:50.342380 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:50 crc kubenswrapper[4946]: I1128 06:53:50.342402 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:50 crc kubenswrapper[4946]: I1128 06:53:50.342428 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:50 crc kubenswrapper[4946]: I1128 06:53:50.342449 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:50Z","lastTransitionTime":"2025-11-28T06:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:50 crc kubenswrapper[4946]: I1128 06:53:50.445567 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:50 crc kubenswrapper[4946]: I1128 06:53:50.445607 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:50 crc kubenswrapper[4946]: I1128 06:53:50.445618 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:50 crc kubenswrapper[4946]: I1128 06:53:50.445632 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:50 crc kubenswrapper[4946]: I1128 06:53:50.445645 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:50Z","lastTransitionTime":"2025-11-28T06:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:50 crc kubenswrapper[4946]: I1128 06:53:50.549714 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:50 crc kubenswrapper[4946]: I1128 06:53:50.549815 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:50 crc kubenswrapper[4946]: I1128 06:53:50.549834 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:50 crc kubenswrapper[4946]: I1128 06:53:50.549862 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:50 crc kubenswrapper[4946]: I1128 06:53:50.549890 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:50Z","lastTransitionTime":"2025-11-28T06:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:50 crc kubenswrapper[4946]: I1128 06:53:50.654412 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:50 crc kubenswrapper[4946]: I1128 06:53:50.654549 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:50 crc kubenswrapper[4946]: I1128 06:53:50.654581 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:50 crc kubenswrapper[4946]: I1128 06:53:50.654621 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:50 crc kubenswrapper[4946]: I1128 06:53:50.654659 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:50Z","lastTransitionTime":"2025-11-28T06:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:50 crc kubenswrapper[4946]: I1128 06:53:50.757694 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:50 crc kubenswrapper[4946]: I1128 06:53:50.757789 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:50 crc kubenswrapper[4946]: I1128 06:53:50.757811 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:50 crc kubenswrapper[4946]: I1128 06:53:50.757850 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:50 crc kubenswrapper[4946]: I1128 06:53:50.757888 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:50Z","lastTransitionTime":"2025-11-28T06:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:50 crc kubenswrapper[4946]: I1128 06:53:50.860757 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:50 crc kubenswrapper[4946]: I1128 06:53:50.860818 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:50 crc kubenswrapper[4946]: I1128 06:53:50.860837 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:50 crc kubenswrapper[4946]: I1128 06:53:50.860863 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:50 crc kubenswrapper[4946]: I1128 06:53:50.860882 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:50Z","lastTransitionTime":"2025-11-28T06:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:50 crc kubenswrapper[4946]: I1128 06:53:50.964575 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:50 crc kubenswrapper[4946]: I1128 06:53:50.964648 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:50 crc kubenswrapper[4946]: I1128 06:53:50.964672 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:50 crc kubenswrapper[4946]: I1128 06:53:50.964703 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:50 crc kubenswrapper[4946]: I1128 06:53:50.964725 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:50Z","lastTransitionTime":"2025-11-28T06:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:50 crc kubenswrapper[4946]: I1128 06:53:50.989846 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gkg79" Nov 28 06:53:50 crc kubenswrapper[4946]: E1128 06:53:50.990132 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gkg79" podUID="4e6983b1-6887-4d13-8f9a-f261a745115f" Nov 28 06:53:51 crc kubenswrapper[4946]: I1128 06:53:51.069107 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:51 crc kubenswrapper[4946]: I1128 06:53:51.069235 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:51 crc kubenswrapper[4946]: I1128 06:53:51.069263 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:51 crc kubenswrapper[4946]: I1128 06:53:51.069301 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:51 crc kubenswrapper[4946]: I1128 06:53:51.069326 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:51Z","lastTransitionTime":"2025-11-28T06:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:51 crc kubenswrapper[4946]: I1128 06:53:51.171745 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:51 crc kubenswrapper[4946]: I1128 06:53:51.171797 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:51 crc kubenswrapper[4946]: I1128 06:53:51.171811 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:51 crc kubenswrapper[4946]: I1128 06:53:51.171831 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:51 crc kubenswrapper[4946]: I1128 06:53:51.171845 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:51Z","lastTransitionTime":"2025-11-28T06:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:51 crc kubenswrapper[4946]: I1128 06:53:51.275814 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:51 crc kubenswrapper[4946]: I1128 06:53:51.276545 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:51 crc kubenswrapper[4946]: I1128 06:53:51.276583 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:51 crc kubenswrapper[4946]: I1128 06:53:51.276618 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:51 crc kubenswrapper[4946]: I1128 06:53:51.276643 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:51Z","lastTransitionTime":"2025-11-28T06:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:51 crc kubenswrapper[4946]: I1128 06:53:51.381696 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:51 crc kubenswrapper[4946]: I1128 06:53:51.381738 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:51 crc kubenswrapper[4946]: I1128 06:53:51.381748 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:51 crc kubenswrapper[4946]: I1128 06:53:51.381763 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:51 crc kubenswrapper[4946]: I1128 06:53:51.381775 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:51Z","lastTransitionTime":"2025-11-28T06:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:51 crc kubenswrapper[4946]: I1128 06:53:51.989641 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:53:51 crc kubenswrapper[4946]: I1128 06:53:51.989705 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:53:51 crc kubenswrapper[4946]: I1128 06:53:51.989704 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:53:51 crc kubenswrapper[4946]: E1128 06:53:51.989841 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:53:51 crc kubenswrapper[4946]: E1128 06:53:51.990198 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:53:51 crc kubenswrapper[4946]: E1128 06:53:51.990440 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:53:52 crc kubenswrapper[4946]: I1128 06:53:52.140079 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:52 crc kubenswrapper[4946]: I1128 06:53:52.140145 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:52 crc kubenswrapper[4946]: I1128 06:53:52.140164 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:52 crc kubenswrapper[4946]: I1128 06:53:52.140189 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:52 crc kubenswrapper[4946]: I1128 06:53:52.140208 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:52Z","lastTransitionTime":"2025-11-28T06:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:52 crc kubenswrapper[4946]: I1128 06:53:52.244628 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:52 crc kubenswrapper[4946]: I1128 06:53:52.244737 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:52 crc kubenswrapper[4946]: I1128 06:53:52.244764 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:52 crc kubenswrapper[4946]: I1128 06:53:52.244796 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:52 crc kubenswrapper[4946]: I1128 06:53:52.244823 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:52Z","lastTransitionTime":"2025-11-28T06:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:52 crc kubenswrapper[4946]: I1128 06:53:52.348102 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:52 crc kubenswrapper[4946]: I1128 06:53:52.348181 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:52 crc kubenswrapper[4946]: I1128 06:53:52.348199 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:52 crc kubenswrapper[4946]: I1128 06:53:52.348226 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:52 crc kubenswrapper[4946]: I1128 06:53:52.348249 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:52Z","lastTransitionTime":"2025-11-28T06:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:52 crc kubenswrapper[4946]: I1128 06:53:52.451883 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:52 crc kubenswrapper[4946]: I1128 06:53:52.451961 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:52 crc kubenswrapper[4946]: I1128 06:53:52.451979 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:52 crc kubenswrapper[4946]: I1128 06:53:52.452010 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:52 crc kubenswrapper[4946]: I1128 06:53:52.452028 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:52Z","lastTransitionTime":"2025-11-28T06:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:52 crc kubenswrapper[4946]: I1128 06:53:52.555490 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:52 crc kubenswrapper[4946]: I1128 06:53:52.555549 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:52 crc kubenswrapper[4946]: I1128 06:53:52.555568 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:52 crc kubenswrapper[4946]: I1128 06:53:52.555592 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:52 crc kubenswrapper[4946]: I1128 06:53:52.555610 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:52Z","lastTransitionTime":"2025-11-28T06:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:52 crc kubenswrapper[4946]: I1128 06:53:52.659604 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:52 crc kubenswrapper[4946]: I1128 06:53:52.659676 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:52 crc kubenswrapper[4946]: I1128 06:53:52.659693 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:52 crc kubenswrapper[4946]: I1128 06:53:52.659721 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:52 crc kubenswrapper[4946]: I1128 06:53:52.659740 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:52Z","lastTransitionTime":"2025-11-28T06:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:52 crc kubenswrapper[4946]: I1128 06:53:52.762915 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:52 crc kubenswrapper[4946]: I1128 06:53:52.762954 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:52 crc kubenswrapper[4946]: I1128 06:53:52.762964 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:52 crc kubenswrapper[4946]: I1128 06:53:52.762994 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:52 crc kubenswrapper[4946]: I1128 06:53:52.763004 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:52Z","lastTransitionTime":"2025-11-28T06:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:52 crc kubenswrapper[4946]: I1128 06:53:52.865437 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:52 crc kubenswrapper[4946]: I1128 06:53:52.865527 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:52 crc kubenswrapper[4946]: I1128 06:53:52.865546 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:52 crc kubenswrapper[4946]: I1128 06:53:52.865572 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:52 crc kubenswrapper[4946]: I1128 06:53:52.865590 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:52Z","lastTransitionTime":"2025-11-28T06:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:52 crc kubenswrapper[4946]: I1128 06:53:52.969137 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:52 crc kubenswrapper[4946]: I1128 06:53:52.969192 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:52 crc kubenswrapper[4946]: I1128 06:53:52.969203 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:52 crc kubenswrapper[4946]: I1128 06:53:52.969221 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:52 crc kubenswrapper[4946]: I1128 06:53:52.969239 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:52Z","lastTransitionTime":"2025-11-28T06:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:52 crc kubenswrapper[4946]: I1128 06:53:52.989696 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gkg79" Nov 28 06:53:52 crc kubenswrapper[4946]: E1128 06:53:52.990166 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gkg79" podUID="4e6983b1-6887-4d13-8f9a-f261a745115f" Nov 28 06:53:52 crc kubenswrapper[4946]: I1128 06:53:52.990391 4946 scope.go:117] "RemoveContainer" containerID="9a4ec6ab6b9acae6303733b64d2906e1bd955b80202cc9af54adc15207de969b" Nov 28 06:53:52 crc kubenswrapper[4946]: E1128 06:53:52.990584 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pkknv_openshift-ovn-kubernetes(47e7046d-60dc-4dc0-b63e-f22f4ca5cd51)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" podUID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" Nov 28 06:53:53 crc kubenswrapper[4946]: I1128 06:53:53.071831 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:53 crc kubenswrapper[4946]: I1128 06:53:53.071885 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:53 crc kubenswrapper[4946]: I1128 06:53:53.071896 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:53 crc kubenswrapper[4946]: I1128 06:53:53.071913 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:53 crc kubenswrapper[4946]: I1128 06:53:53.071928 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:53Z","lastTransitionTime":"2025-11-28T06:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:53 crc kubenswrapper[4946]: I1128 06:53:53.175830 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:53 crc kubenswrapper[4946]: I1128 06:53:53.175915 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:53 crc kubenswrapper[4946]: I1128 06:53:53.175938 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:53 crc kubenswrapper[4946]: I1128 06:53:53.175969 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:53 crc kubenswrapper[4946]: I1128 06:53:53.175988 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:53Z","lastTransitionTime":"2025-11-28T06:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:53 crc kubenswrapper[4946]: I1128 06:53:53.279705 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:53 crc kubenswrapper[4946]: I1128 06:53:53.279788 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:53 crc kubenswrapper[4946]: I1128 06:53:53.279810 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:53 crc kubenswrapper[4946]: I1128 06:53:53.279841 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:53 crc kubenswrapper[4946]: I1128 06:53:53.279863 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:53Z","lastTransitionTime":"2025-11-28T06:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:53 crc kubenswrapper[4946]: I1128 06:53:53.383117 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:53 crc kubenswrapper[4946]: I1128 06:53:53.383213 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:53 crc kubenswrapper[4946]: I1128 06:53:53.383245 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:53 crc kubenswrapper[4946]: I1128 06:53:53.383280 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:53 crc kubenswrapper[4946]: I1128 06:53:53.383310 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:53Z","lastTransitionTime":"2025-11-28T06:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:53 crc kubenswrapper[4946]: I1128 06:53:53.486366 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:53 crc kubenswrapper[4946]: I1128 06:53:53.486450 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:53 crc kubenswrapper[4946]: I1128 06:53:53.486832 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:53 crc kubenswrapper[4946]: I1128 06:53:53.486889 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:53 crc kubenswrapper[4946]: I1128 06:53:53.486912 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:53Z","lastTransitionTime":"2025-11-28T06:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:53 crc kubenswrapper[4946]: I1128 06:53:53.590199 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:53 crc kubenswrapper[4946]: I1128 06:53:53.590267 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:53 crc kubenswrapper[4946]: I1128 06:53:53.590287 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:53 crc kubenswrapper[4946]: I1128 06:53:53.590312 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:53 crc kubenswrapper[4946]: I1128 06:53:53.590331 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:53Z","lastTransitionTime":"2025-11-28T06:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:53 crc kubenswrapper[4946]: I1128 06:53:53.694205 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:53 crc kubenswrapper[4946]: I1128 06:53:53.694267 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:53 crc kubenswrapper[4946]: I1128 06:53:53.694289 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:53 crc kubenswrapper[4946]: I1128 06:53:53.694312 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:53 crc kubenswrapper[4946]: I1128 06:53:53.694332 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:53Z","lastTransitionTime":"2025-11-28T06:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:53 crc kubenswrapper[4946]: I1128 06:53:53.796944 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:53 crc kubenswrapper[4946]: I1128 06:53:53.796977 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:53 crc kubenswrapper[4946]: I1128 06:53:53.796987 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:53 crc kubenswrapper[4946]: I1128 06:53:53.797003 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:53 crc kubenswrapper[4946]: I1128 06:53:53.797013 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:53Z","lastTransitionTime":"2025-11-28T06:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:53 crc kubenswrapper[4946]: I1128 06:53:53.899967 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:53 crc kubenswrapper[4946]: I1128 06:53:53.900095 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:53 crc kubenswrapper[4946]: I1128 06:53:53.900194 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:53 crc kubenswrapper[4946]: I1128 06:53:53.900280 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:53 crc kubenswrapper[4946]: I1128 06:53:53.900310 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:53Z","lastTransitionTime":"2025-11-28T06:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:53 crc kubenswrapper[4946]: I1128 06:53:53.989674 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:53:53 crc kubenswrapper[4946]: I1128 06:53:53.989711 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:53:53 crc kubenswrapper[4946]: I1128 06:53:53.989847 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:53:53 crc kubenswrapper[4946]: E1128 06:53:53.990066 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:53:53 crc kubenswrapper[4946]: E1128 06:53:53.990189 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:53:53 crc kubenswrapper[4946]: E1128 06:53:53.990338 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:53:54 crc kubenswrapper[4946]: I1128 06:53:54.003771 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:54 crc kubenswrapper[4946]: I1128 06:53:54.003822 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:54 crc kubenswrapper[4946]: I1128 06:53:54.003839 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:54 crc kubenswrapper[4946]: I1128 06:53:54.003861 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:54 crc kubenswrapper[4946]: I1128 06:53:54.003879 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:54Z","lastTransitionTime":"2025-11-28T06:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:54 crc kubenswrapper[4946]: I1128 06:53:54.107098 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:54 crc kubenswrapper[4946]: I1128 06:53:54.107173 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:54 crc kubenswrapper[4946]: I1128 06:53:54.107197 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:54 crc kubenswrapper[4946]: I1128 06:53:54.107227 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:54 crc kubenswrapper[4946]: I1128 06:53:54.107248 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:54Z","lastTransitionTime":"2025-11-28T06:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:54 crc kubenswrapper[4946]: I1128 06:53:54.210115 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:54 crc kubenswrapper[4946]: I1128 06:53:54.210181 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:54 crc kubenswrapper[4946]: I1128 06:53:54.210198 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:54 crc kubenswrapper[4946]: I1128 06:53:54.210220 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:54 crc kubenswrapper[4946]: I1128 06:53:54.210237 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:54Z","lastTransitionTime":"2025-11-28T06:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:54 crc kubenswrapper[4946]: I1128 06:53:54.312907 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:54 crc kubenswrapper[4946]: I1128 06:53:54.312948 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:54 crc kubenswrapper[4946]: I1128 06:53:54.312967 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:54 crc kubenswrapper[4946]: I1128 06:53:54.312988 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:54 crc kubenswrapper[4946]: I1128 06:53:54.312999 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:54Z","lastTransitionTime":"2025-11-28T06:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:54 crc kubenswrapper[4946]: I1128 06:53:54.416421 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:54 crc kubenswrapper[4946]: I1128 06:53:54.416493 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:54 crc kubenswrapper[4946]: I1128 06:53:54.416512 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:54 crc kubenswrapper[4946]: I1128 06:53:54.416526 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:54 crc kubenswrapper[4946]: I1128 06:53:54.416538 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:54Z","lastTransitionTime":"2025-11-28T06:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:54 crc kubenswrapper[4946]: I1128 06:53:54.519311 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:54 crc kubenswrapper[4946]: I1128 06:53:54.519416 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:54 crc kubenswrapper[4946]: I1128 06:53:54.519433 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:54 crc kubenswrapper[4946]: I1128 06:53:54.519454 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:54 crc kubenswrapper[4946]: I1128 06:53:54.519496 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:54Z","lastTransitionTime":"2025-11-28T06:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:54 crc kubenswrapper[4946]: I1128 06:53:54.622610 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:54 crc kubenswrapper[4946]: I1128 06:53:54.622769 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:54 crc kubenswrapper[4946]: I1128 06:53:54.622788 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:54 crc kubenswrapper[4946]: I1128 06:53:54.622812 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:54 crc kubenswrapper[4946]: I1128 06:53:54.622828 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:54Z","lastTransitionTime":"2025-11-28T06:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:54 crc kubenswrapper[4946]: I1128 06:53:54.727298 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:54 crc kubenswrapper[4946]: I1128 06:53:54.727397 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:54 crc kubenswrapper[4946]: I1128 06:53:54.727424 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:54 crc kubenswrapper[4946]: I1128 06:53:54.727457 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:54 crc kubenswrapper[4946]: I1128 06:53:54.727540 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:54Z","lastTransitionTime":"2025-11-28T06:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:54 crc kubenswrapper[4946]: I1128 06:53:54.831360 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:54 crc kubenswrapper[4946]: I1128 06:53:54.831426 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:54 crc kubenswrapper[4946]: I1128 06:53:54.831446 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:54 crc kubenswrapper[4946]: I1128 06:53:54.831509 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:54 crc kubenswrapper[4946]: I1128 06:53:54.831536 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:54Z","lastTransitionTime":"2025-11-28T06:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:54 crc kubenswrapper[4946]: I1128 06:53:54.935677 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:54 crc kubenswrapper[4946]: I1128 06:53:54.935754 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:54 crc kubenswrapper[4946]: I1128 06:53:54.935773 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:54 crc kubenswrapper[4946]: I1128 06:53:54.935805 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:54 crc kubenswrapper[4946]: I1128 06:53:54.935827 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:54Z","lastTransitionTime":"2025-11-28T06:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:54 crc kubenswrapper[4946]: I1128 06:53:54.989894 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gkg79" Nov 28 06:53:54 crc kubenswrapper[4946]: E1128 06:53:54.990278 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gkg79" podUID="4e6983b1-6887-4d13-8f9a-f261a745115f" Nov 28 06:53:55 crc kubenswrapper[4946]: I1128 06:53:55.004415 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Nov 28 06:53:55 crc kubenswrapper[4946]: I1128 06:53:55.039252 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:55 crc kubenswrapper[4946]: I1128 06:53:55.039322 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:55 crc kubenswrapper[4946]: I1128 06:53:55.039338 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:55 crc kubenswrapper[4946]: I1128 06:53:55.039365 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:55 crc kubenswrapper[4946]: I1128 06:53:55.039382 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:55Z","lastTransitionTime":"2025-11-28T06:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:55 crc kubenswrapper[4946]: I1128 06:53:55.143664 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:55 crc kubenswrapper[4946]: I1128 06:53:55.143731 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:55 crc kubenswrapper[4946]: I1128 06:53:55.143749 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:55 crc kubenswrapper[4946]: I1128 06:53:55.143778 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:55 crc kubenswrapper[4946]: I1128 06:53:55.143797 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:55Z","lastTransitionTime":"2025-11-28T06:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:55 crc kubenswrapper[4946]: I1128 06:53:55.247333 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:55 crc kubenswrapper[4946]: I1128 06:53:55.247390 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:55 crc kubenswrapper[4946]: I1128 06:53:55.247415 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:55 crc kubenswrapper[4946]: I1128 06:53:55.247446 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:55 crc kubenswrapper[4946]: I1128 06:53:55.247509 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:55Z","lastTransitionTime":"2025-11-28T06:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:55 crc kubenswrapper[4946]: I1128 06:53:55.350853 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:55 crc kubenswrapper[4946]: I1128 06:53:55.350917 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:55 crc kubenswrapper[4946]: I1128 06:53:55.350943 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:55 crc kubenswrapper[4946]: I1128 06:53:55.350971 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:55 crc kubenswrapper[4946]: I1128 06:53:55.350990 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:55Z","lastTransitionTime":"2025-11-28T06:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:55 crc kubenswrapper[4946]: I1128 06:53:55.454802 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:55 crc kubenswrapper[4946]: I1128 06:53:55.454894 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:55 crc kubenswrapper[4946]: I1128 06:53:55.454919 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:55 crc kubenswrapper[4946]: I1128 06:53:55.454952 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:55 crc kubenswrapper[4946]: I1128 06:53:55.454973 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:55Z","lastTransitionTime":"2025-11-28T06:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:55 crc kubenswrapper[4946]: I1128 06:53:55.557927 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:55 crc kubenswrapper[4946]: I1128 06:53:55.557990 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:55 crc kubenswrapper[4946]: I1128 06:53:55.558003 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:55 crc kubenswrapper[4946]: I1128 06:53:55.558030 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:55 crc kubenswrapper[4946]: I1128 06:53:55.558043 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:55Z","lastTransitionTime":"2025-11-28T06:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:55 crc kubenswrapper[4946]: I1128 06:53:55.661209 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:55 crc kubenswrapper[4946]: I1128 06:53:55.661299 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:55 crc kubenswrapper[4946]: I1128 06:53:55.661313 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:55 crc kubenswrapper[4946]: I1128 06:53:55.661334 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:55 crc kubenswrapper[4946]: I1128 06:53:55.661346 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:55Z","lastTransitionTime":"2025-11-28T06:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:55 crc kubenswrapper[4946]: I1128 06:53:55.764073 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:55 crc kubenswrapper[4946]: I1128 06:53:55.764221 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:55 crc kubenswrapper[4946]: I1128 06:53:55.764249 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:55 crc kubenswrapper[4946]: I1128 06:53:55.764279 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:55 crc kubenswrapper[4946]: I1128 06:53:55.764300 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:55Z","lastTransitionTime":"2025-11-28T06:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:55 crc kubenswrapper[4946]: I1128 06:53:55.868373 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:55 crc kubenswrapper[4946]: I1128 06:53:55.868456 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:55 crc kubenswrapper[4946]: I1128 06:53:55.868511 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:55 crc kubenswrapper[4946]: I1128 06:53:55.868540 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:55 crc kubenswrapper[4946]: I1128 06:53:55.868561 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:55Z","lastTransitionTime":"2025-11-28T06:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:55 crc kubenswrapper[4946]: I1128 06:53:55.971738 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:55 crc kubenswrapper[4946]: I1128 06:53:55.971809 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:55 crc kubenswrapper[4946]: I1128 06:53:55.971828 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:55 crc kubenswrapper[4946]: I1128 06:53:55.971869 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:55 crc kubenswrapper[4946]: I1128 06:53:55.971885 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:55Z","lastTransitionTime":"2025-11-28T06:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:55 crc kubenswrapper[4946]: I1128 06:53:55.989030 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:53:55 crc kubenswrapper[4946]: I1128 06:53:55.989202 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:53:55 crc kubenswrapper[4946]: E1128 06:53:55.989733 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:53:55 crc kubenswrapper[4946]: I1128 06:53:55.989861 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:53:55 crc kubenswrapper[4946]: E1128 06:53:55.990003 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:53:55 crc kubenswrapper[4946]: E1128 06:53:55.990181 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.008147 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68d47c52-9827-4ebb-88a2-ba498092709e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eef1ad3ca814effd7a94c23d46fc3b14149ccc6ae6181e19c5f24884967b7627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7152b6653aaccd36bb018d24abf196dd6f13d41edddb0939917f96bcd9e9bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5be506bd8debbc12ff5ca56d77fdf667855d2c4078cea1b1d1388f5ef8831ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5dc7ef4ea06b54b6de621ebe79d3587324b0c4cdd90e078d4116dc5252ad35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4c7e2754f62b9d955da7589f04acb0136ed3c504a75b0db30f22cbeec54c413\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:52:44Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:52:38.666044 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:52:38.667906 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1018003996/tls.crt::/tmp/serving-cert-1018003996/tls.key\\\\\\\"\\\\nI1128 06:52:44.041872 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:52:44.048251 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:52:44.048273 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:52:44.048300 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:52:44.048305 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:52:44.064176 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 06:52:44.064208 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064213 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:52:44.064217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:52:44.064220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:52:44.064223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:52:44.064226 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 06:52:44.064671 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 06:52:44.066682 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://326666c58a9c75d0f08586c82bca1a0686ed2935f475b3c359cddc01875b248e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:56Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.026265 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"136606da-fc7c-4bea-902d-102f836514a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a67b58ab443f2a80c08229709de3844bf287465b58912f472f81b993e5f5fd8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30a18f86a38f7d74aedc8fb55161733023c3a515474f699fc66c645901b43a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14ecbd61c64f04b0ec631e67e25393619f5692c9967fff9d461c40bbb7f3c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://674e2b1496ef2f46b2bd37fd98df7bb076560669c1c6b4a39a081aaa0f0dab23\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://674e2b1496ef2f46b2bd37fd98df7bb076560669c1c6b4a39a081aaa0f0dab23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:56Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.046196 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:56Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.059331 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j4wp8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edc5a786-a162-4416-a240-54272b0c7376\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7a6e397c1f3423abc4eeb0a20453109c22f305c1b259dddbe23bd71a78b5512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87x27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j4wp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:56Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.074224 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpkwv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6df591b-287c-45d9-9db2-f3c441005fdd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f983a4bf3fb71511789223ca3b7b223ac9589d4807ea2dfe4087d2fa5d48738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm2g7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6b6b4122d5aa6fa73c5453308ea21172330e8324feb57965db83d5c2c05fa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm2g7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:53:00Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zpkwv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:56Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.086669 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.086741 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.086756 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.086804 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.086816 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:56Z","lastTransitionTime":"2025-11-28T06:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.091227 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16482226c2bd22abff4c445ab0d5e46c2012a83ab5e616ebecc38d9009a8a6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:56Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.108144 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9g9w4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"857356d2-6585-41c6-9a2c-e06ef45f7303\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a0a595eea30586e0c6859963642e750f7e52a69e98545d3dd7dcaff841e1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db9d433d72f090d2c1d0036b7ba652dfca7bed7425a35bf87809ede7fd9fca8a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:53:32Z\\\",\\\"message\\\":\\\"2025-11-28T06:52:47+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c0be44d2-2994-440c-b0d4-da72ba214f8c\\\\n2025-11-28T06:52:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c0be44d2-2994-440c-b0d4-da72ba214f8c to /host/opt/cni/bin/\\\\n2025-11-28T06:52:47Z [verbose] multus-daemon started\\\\n2025-11-28T06:52:47Z [verbose] Readiness Indicator file check\\\\n2025-11-28T06:53:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbjr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9g9w4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:56Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.129796 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35308a0d1d457d423060c71fb70b7e8d1723171f575b2545db3b372c4d2a12ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38e04dce69273a5669801d093325e51ed6d3a83d545aaf505f6619b89cb9e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9128b2484a66b1b0f5954befe68d1bd1be5a4edbf254d76473d8ee3025b47ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8b00edadf6d3442d70a476d465733f303f7a48d8998c1a9c881ae515daa264e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ba38f4c30e4f09063926783a9fbf7883079d7f077200ee18a7a24c236b025f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed2e660d6d37ec0abfdd8485a2b9cb79b041a214007a5b8832d70a21076c324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4ec6ab6b9acae6303733b64d2906e1bd955b80202cc9af54adc15207de969b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a4ec6ab6b9acae6303733b64d2906e1bd955b80202cc9af54adc15207de969b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:53:38Z\\\",\\\"message\\\":\\\"e:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.245:443: 10.217.5.245:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1128 06:53:37.869333 6897 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-9g9w4\\\\nI1128 06:53:37.869337 6897 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-6tvtt\\\\nI1128 06:53:37.869343 6897 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-6tvtt\\\\nF1128 06:53:37.869344 6897 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:53:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pkknv_openshift-ovn-kubernetes(47e7046d-60dc-4dc0-b63e-f22f4ca5cd51)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc681b78e3640f7f8dd8729a32ea393d97f837795e0cccf628724d2b3901739b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tmv4h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pkknv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:56Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.147266 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:56Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.166544 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0514bebed46aabc46c74ef48fda9c1e561a174d87db40128875cc0ec78446d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://955ab7252b4526779db2e0aea8b7747369f52242a76df7e179c6617290fee4e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:56Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.185624 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://078e67af49c0d442d125d9cb67d9d93ce05e452bf3b09f666e9763e481eaa15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:56Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.190100 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.190142 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.190155 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.190174 4946 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.190187 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:56Z","lastTransitionTime":"2025-11-28T06:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.206341 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pqf5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d160dd-b1b4-4cdf-800c-74195ab023e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bd1943c060c0e4d739d86520e0661bbd6ecef5614aaceb3e893a11204fdb39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50ec6343bd0264895aac690a041ef8e14806fb585d0b458a12141802ad41a449\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5f85be836e8c0127fd5c2bf679ee918d9be4050b439279e85c97a216cc0444a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7331b2b7a0c5df1fb5c00ba3e669effaa754205da982c181cc7d3e55b79cae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed8145
1ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://222ca669c1549125719ec7bdf2905ff407828cf0636459c02a70a029a7be01f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3aa7cc33a44bdc777956770bc10f5fdcf49901d925d2da7472556c6e86c0fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f336183b95d088fbf0d45cb8e6e6ab124453e9ba4a1456171f5631e9d029a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f336183b95d088fbf0d45cb8e6e6ab124453e9ba4a1456171f5631e9d029a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szzk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTim
e\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pqf5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:56Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.237266 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce9c1a0a-4957-4c1f-bea3-3d83911524ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2af426224940bfcc803c85bcaf635d47969c877b90220d0acefa264ae2111215\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9b328df1631fb424c220a0e33954374075c6f7e6313fbbdcedd04298fa1478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f9b328df1631fb424c220a0e33954374075c6f7e6313fbbdcedd04298fa1478\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:56Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.258103 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fb7be34-ed01-4217-aab4-c8ab35341ac6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af6de2bd2f3d9be0a670cb8c36e6db8c037c4987eff7356840af5b275156ad7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d384f5fd04cfe61e6a8d13538eb1b0267ca88eb0d226d8d85b5065fb5f7de9b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89780aac4572411aeeb8480b61c98dbc22a2e4b3e72290470ba56c81b648b617\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b6df8018ceaf95465c4ad1bfd0bb2653ae4b87430f6ed4e21af2bd14bbb9ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:56Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.276851 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:56Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.294058 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a10fe13c-fc86-474a-945b-f96caafad2a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712d9a3cda3acae09eae5946cb14e19bee8697d567bff79dd67db8a41ed38fe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45pfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:56Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.294175 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.294223 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.294242 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.294267 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.294283 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:56Z","lastTransitionTime":"2025-11-28T06:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.310350 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7450befc-262f-45d1-a5f4-f445e540185b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7854e764531b926939e7ce77e671b5dae4c0d8a4a1d8ac7153f851e924c4bb84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri
-o://174d7788364a980297791c1072b9c05dd827bb3759a851ea4c5dd07981297b67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hnm9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:52:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g2vhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:56Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.323242 4946 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gkg79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e6983b1-6887-4d13-8f9a-f261a745115f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9srk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9srk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:53:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gkg79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:56Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.397309 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.397355 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.397367 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.397386 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.397398 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:56Z","lastTransitionTime":"2025-11-28T06:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.500723 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.500828 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.500881 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.500914 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.500932 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:56Z","lastTransitionTime":"2025-11-28T06:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.604683 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.604773 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.604806 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.604839 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.604863 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:56Z","lastTransitionTime":"2025-11-28T06:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.709107 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.709198 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.709220 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.709250 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.709273 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:56Z","lastTransitionTime":"2025-11-28T06:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.812095 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.812151 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.812162 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.812181 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.812194 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:56Z","lastTransitionTime":"2025-11-28T06:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.919667 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.920021 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.920105 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.920195 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.920375 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:56Z","lastTransitionTime":"2025-11-28T06:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:56 crc kubenswrapper[4946]: I1128 06:53:56.989197 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gkg79" Nov 28 06:53:56 crc kubenswrapper[4946]: E1128 06:53:56.989730 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gkg79" podUID="4e6983b1-6887-4d13-8f9a-f261a745115f" Nov 28 06:53:57 crc kubenswrapper[4946]: I1128 06:53:57.024058 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:57 crc kubenswrapper[4946]: I1128 06:53:57.024126 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:57 crc kubenswrapper[4946]: I1128 06:53:57.024146 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:57 crc kubenswrapper[4946]: I1128 06:53:57.024172 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:57 crc kubenswrapper[4946]: I1128 06:53:57.024192 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:57Z","lastTransitionTime":"2025-11-28T06:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:57 crc kubenswrapper[4946]: I1128 06:53:57.127610 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:57 crc kubenswrapper[4946]: I1128 06:53:57.127667 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:57 crc kubenswrapper[4946]: I1128 06:53:57.127684 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:57 crc kubenswrapper[4946]: I1128 06:53:57.127706 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:57 crc kubenswrapper[4946]: I1128 06:53:57.127723 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:57Z","lastTransitionTime":"2025-11-28T06:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:57 crc kubenswrapper[4946]: I1128 06:53:57.230674 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:57 crc kubenswrapper[4946]: I1128 06:53:57.230736 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:57 crc kubenswrapper[4946]: I1128 06:53:57.230756 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:57 crc kubenswrapper[4946]: I1128 06:53:57.230783 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:57 crc kubenswrapper[4946]: I1128 06:53:57.230805 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:57Z","lastTransitionTime":"2025-11-28T06:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:57 crc kubenswrapper[4946]: I1128 06:53:57.334210 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:57 crc kubenswrapper[4946]: I1128 06:53:57.334752 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:57 crc kubenswrapper[4946]: I1128 06:53:57.334939 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:57 crc kubenswrapper[4946]: I1128 06:53:57.335141 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:57 crc kubenswrapper[4946]: I1128 06:53:57.335374 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:57Z","lastTransitionTime":"2025-11-28T06:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:57 crc kubenswrapper[4946]: I1128 06:53:57.439173 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:57 crc kubenswrapper[4946]: I1128 06:53:57.439662 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:57 crc kubenswrapper[4946]: I1128 06:53:57.439916 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:57 crc kubenswrapper[4946]: I1128 06:53:57.440135 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:57 crc kubenswrapper[4946]: I1128 06:53:57.440325 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:57Z","lastTransitionTime":"2025-11-28T06:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:57 crc kubenswrapper[4946]: I1128 06:53:57.544515 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:57 crc kubenswrapper[4946]: I1128 06:53:57.544588 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:57 crc kubenswrapper[4946]: I1128 06:53:57.544607 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:57 crc kubenswrapper[4946]: I1128 06:53:57.544634 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:57 crc kubenswrapper[4946]: I1128 06:53:57.544654 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:57Z","lastTransitionTime":"2025-11-28T06:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:57 crc kubenswrapper[4946]: I1128 06:53:57.647797 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:57 crc kubenswrapper[4946]: I1128 06:53:57.648201 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:57 crc kubenswrapper[4946]: I1128 06:53:57.648348 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:57 crc kubenswrapper[4946]: I1128 06:53:57.648545 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:57 crc kubenswrapper[4946]: I1128 06:53:57.648712 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:57Z","lastTransitionTime":"2025-11-28T06:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:57 crc kubenswrapper[4946]: I1128 06:53:57.752018 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:57 crc kubenswrapper[4946]: I1128 06:53:57.752350 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:57 crc kubenswrapper[4946]: I1128 06:53:57.752558 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:57 crc kubenswrapper[4946]: I1128 06:53:57.752775 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:57 crc kubenswrapper[4946]: I1128 06:53:57.752938 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:57Z","lastTransitionTime":"2025-11-28T06:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:57 crc kubenswrapper[4946]: I1128 06:53:57.857203 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:57 crc kubenswrapper[4946]: I1128 06:53:57.857286 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:57 crc kubenswrapper[4946]: I1128 06:53:57.857309 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:57 crc kubenswrapper[4946]: I1128 06:53:57.857345 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:57 crc kubenswrapper[4946]: I1128 06:53:57.857368 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:57Z","lastTransitionTime":"2025-11-28T06:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:57 crc kubenswrapper[4946]: I1128 06:53:57.961152 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:57 crc kubenswrapper[4946]: I1128 06:53:57.961227 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:57 crc kubenswrapper[4946]: I1128 06:53:57.961249 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:57 crc kubenswrapper[4946]: I1128 06:53:57.961274 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:57 crc kubenswrapper[4946]: I1128 06:53:57.961291 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:57Z","lastTransitionTime":"2025-11-28T06:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:57 crc kubenswrapper[4946]: I1128 06:53:57.989244 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:53:57 crc kubenswrapper[4946]: I1128 06:53:57.989337 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:53:57 crc kubenswrapper[4946]: I1128 06:53:57.989266 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:53:57 crc kubenswrapper[4946]: E1128 06:53:57.989450 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:53:57 crc kubenswrapper[4946]: E1128 06:53:57.989610 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:53:57 crc kubenswrapper[4946]: E1128 06:53:57.989753 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:53:58 crc kubenswrapper[4946]: I1128 06:53:58.065677 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:58 crc kubenswrapper[4946]: I1128 06:53:58.065792 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:58 crc kubenswrapper[4946]: I1128 06:53:58.065815 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:58 crc kubenswrapper[4946]: I1128 06:53:58.065849 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:58 crc kubenswrapper[4946]: I1128 06:53:58.065882 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:58Z","lastTransitionTime":"2025-11-28T06:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:58 crc kubenswrapper[4946]: I1128 06:53:58.168856 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:58 crc kubenswrapper[4946]: I1128 06:53:58.168942 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:58 crc kubenswrapper[4946]: I1128 06:53:58.168966 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:58 crc kubenswrapper[4946]: I1128 06:53:58.168996 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:58 crc kubenswrapper[4946]: I1128 06:53:58.169016 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:58Z","lastTransitionTime":"2025-11-28T06:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:58 crc kubenswrapper[4946]: I1128 06:53:58.272853 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:58 crc kubenswrapper[4946]: I1128 06:53:58.272920 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:58 crc kubenswrapper[4946]: I1128 06:53:58.272947 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:58 crc kubenswrapper[4946]: I1128 06:53:58.272975 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:58 crc kubenswrapper[4946]: I1128 06:53:58.272994 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:58Z","lastTransitionTime":"2025-11-28T06:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:58 crc kubenswrapper[4946]: I1128 06:53:58.376706 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:58 crc kubenswrapper[4946]: I1128 06:53:58.376782 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:58 crc kubenswrapper[4946]: I1128 06:53:58.376805 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:58 crc kubenswrapper[4946]: I1128 06:53:58.376835 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:58 crc kubenswrapper[4946]: I1128 06:53:58.376859 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:58Z","lastTransitionTime":"2025-11-28T06:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:58 crc kubenswrapper[4946]: I1128 06:53:58.480758 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:58 crc kubenswrapper[4946]: I1128 06:53:58.480844 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:58 crc kubenswrapper[4946]: I1128 06:53:58.480867 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:58 crc kubenswrapper[4946]: I1128 06:53:58.480900 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:58 crc kubenswrapper[4946]: I1128 06:53:58.480923 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:58Z","lastTransitionTime":"2025-11-28T06:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:58 crc kubenswrapper[4946]: I1128 06:53:58.584699 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:58 crc kubenswrapper[4946]: I1128 06:53:58.584755 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:58 crc kubenswrapper[4946]: I1128 06:53:58.584772 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:58 crc kubenswrapper[4946]: I1128 06:53:58.584801 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:58 crc kubenswrapper[4946]: I1128 06:53:58.584819 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:58Z","lastTransitionTime":"2025-11-28T06:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:58 crc kubenswrapper[4946]: I1128 06:53:58.687880 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:58 crc kubenswrapper[4946]: I1128 06:53:58.687950 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:58 crc kubenswrapper[4946]: I1128 06:53:58.687967 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:58 crc kubenswrapper[4946]: I1128 06:53:58.687991 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:58 crc kubenswrapper[4946]: I1128 06:53:58.688008 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:58Z","lastTransitionTime":"2025-11-28T06:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:58 crc kubenswrapper[4946]: I1128 06:53:58.791282 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:58 crc kubenswrapper[4946]: I1128 06:53:58.791370 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:58 crc kubenswrapper[4946]: I1128 06:53:58.791395 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:58 crc kubenswrapper[4946]: I1128 06:53:58.791428 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:58 crc kubenswrapper[4946]: I1128 06:53:58.791520 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:58Z","lastTransitionTime":"2025-11-28T06:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:58 crc kubenswrapper[4946]: I1128 06:53:58.895121 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:58 crc kubenswrapper[4946]: I1128 06:53:58.895173 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:58 crc kubenswrapper[4946]: I1128 06:53:58.895185 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:58 crc kubenswrapper[4946]: I1128 06:53:58.895204 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:58 crc kubenswrapper[4946]: I1128 06:53:58.895216 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:58Z","lastTransitionTime":"2025-11-28T06:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:58 crc kubenswrapper[4946]: I1128 06:53:58.989156 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gkg79" Nov 28 06:53:58 crc kubenswrapper[4946]: E1128 06:53:58.990377 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gkg79" podUID="4e6983b1-6887-4d13-8f9a-f261a745115f" Nov 28 06:53:58 crc kubenswrapper[4946]: I1128 06:53:58.997605 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:58 crc kubenswrapper[4946]: I1128 06:53:58.997653 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:58 crc kubenswrapper[4946]: I1128 06:53:58.997671 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:58 crc kubenswrapper[4946]: I1128 06:53:58.997697 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:58 crc kubenswrapper[4946]: I1128 06:53:58.997718 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:58Z","lastTransitionTime":"2025-11-28T06:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.102127 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.102206 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.102236 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.102270 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.102293 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:59Z","lastTransitionTime":"2025-11-28T06:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.205560 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.205633 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.205657 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.205685 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.205707 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:59Z","lastTransitionTime":"2025-11-28T06:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.216293 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.216342 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.216359 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.216378 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.216394 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:59Z","lastTransitionTime":"2025-11-28T06:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:59 crc kubenswrapper[4946]: E1128 06:53:59.240575 4946 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c74aee61-da50-44cb-be0e-1b7c62358c2a\\\",\\\"systemUUID\\\":\\\"99c18e26-a017-426b-bf43-1a9a83f35f94\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:59Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.246253 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.246309 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.246332 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.246360 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.246383 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:59Z","lastTransitionTime":"2025-11-28T06:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:59 crc kubenswrapper[4946]: E1128 06:53:59.268408 4946 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c74aee61-da50-44cb-be0e-1b7c62358c2a\\\",\\\"systemUUID\\\":\\\"99c18e26-a017-426b-bf43-1a9a83f35f94\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:59Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.274126 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.274202 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.274225 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.274256 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.274279 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:59Z","lastTransitionTime":"2025-11-28T06:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:59 crc kubenswrapper[4946]: E1128 06:53:59.298375 4946 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c74aee61-da50-44cb-be0e-1b7c62358c2a\\\",\\\"systemUUID\\\":\\\"99c18e26-a017-426b-bf43-1a9a83f35f94\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:59Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.305250 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.305347 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.305366 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.305415 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.305429 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:59Z","lastTransitionTime":"2025-11-28T06:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:59 crc kubenswrapper[4946]: E1128 06:53:59.327138 4946 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c74aee61-da50-44cb-be0e-1b7c62358c2a\\\",\\\"systemUUID\\\":\\\"99c18e26-a017-426b-bf43-1a9a83f35f94\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:59Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.333330 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.333540 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.333638 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.333733 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.333758 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:59Z","lastTransitionTime":"2025-11-28T06:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:59 crc kubenswrapper[4946]: E1128 06:53:59.356703 4946 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:53:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c74aee61-da50-44cb-be0e-1b7c62358c2a\\\",\\\"systemUUID\\\":\\\"99c18e26-a017-426b-bf43-1a9a83f35f94\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:53:59Z is after 2025-08-24T17:21:41Z" Nov 28 06:53:59 crc kubenswrapper[4946]: E1128 06:53:59.356933 4946 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.358919 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.358977 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.358996 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.359020 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.359039 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:59Z","lastTransitionTime":"2025-11-28T06:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.463008 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.463076 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.463096 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.463120 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.463138 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:59Z","lastTransitionTime":"2025-11-28T06:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.566529 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.566606 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.566625 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.566658 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.566676 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:59Z","lastTransitionTime":"2025-11-28T06:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.670614 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.670683 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.670702 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.670727 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.670746 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:59Z","lastTransitionTime":"2025-11-28T06:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.773713 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.773778 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.773797 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.773828 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.773844 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:59Z","lastTransitionTime":"2025-11-28T06:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.877701 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.877784 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.877805 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.877839 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.877863 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:59Z","lastTransitionTime":"2025-11-28T06:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.981968 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.982065 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.982096 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.982129 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.982153 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:53:59Z","lastTransitionTime":"2025-11-28T06:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.989388 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.989493 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:53:59 crc kubenswrapper[4946]: I1128 06:53:59.989778 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:53:59 crc kubenswrapper[4946]: E1128 06:53:59.990267 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:53:59 crc kubenswrapper[4946]: E1128 06:53:59.990512 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:53:59 crc kubenswrapper[4946]: E1128 06:53:59.990747 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
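Two distinct failures are visible above: node-status PATCHes are rejected because the node.network-node-identity admission webhook at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24 while the node's clock reads 2025-11-28, and the node stays NotReady because no CNI configuration exists yet. For the first problem, the sketch below is a minimal diagnostic, assuming Python 3 with the third-party cryptography package available on the node: it fetches the webhook's certificate without trusting it and compares notAfter to the current time, mirroring the x509 check that failed in the log.

# Sketch: read the serving certificate of the webhook endpoint named in the
# error above and report its validity window. Assumes Python 3 plus the
# third-party "cryptography" package (>= 42 for the *_utc properties).
import socket
import ssl
from datetime import datetime, timezone

from cryptography import x509

HOST, PORT = "127.0.0.1", 9743  # endpoint from the failed webhook call above

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE  # fetch the cert without validating it

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    with ctx.wrap_socket(sock) as tls:
        der = tls.getpeercert(binary_form=True)  # raw DER bytes

cert = x509.load_der_x509_certificate(der)
now = datetime.now(timezone.utc)
print("notBefore:", cert.not_valid_before_utc)
print("notAfter: ", cert.not_valid_after_utc)
print("expired:  ", now > cert.not_valid_after_utc)

Against the state logged above this would print a notAfter of 2025-08-24T17:21:41Z and expired True; this pattern is commonly seen when a CRC image is started long after its baked-in certificates were minted and the rotation machinery has not yet caught up.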
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:54:00 crc kubenswrapper[4946]: I1128 06:54:00.085977 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:00 crc kubenswrapper[4946]: I1128 06:54:00.086026 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:00 crc kubenswrapper[4946]: I1128 06:54:00.086044 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:00 crc kubenswrapper[4946]: I1128 06:54:00.086068 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:00 crc kubenswrapper[4946]: I1128 06:54:00.086087 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:00Z","lastTransitionTime":"2025-11-28T06:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:54:00 crc kubenswrapper[4946]: I1128 06:54:00.189685 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:00 crc kubenswrapper[4946]: I1128 06:54:00.189772 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:00 crc kubenswrapper[4946]: I1128 06:54:00.189796 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:00 crc kubenswrapper[4946]: I1128 06:54:00.189825 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:00 crc kubenswrapper[4946]: I1128 06:54:00.189848 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:00Z","lastTransitionTime":"2025-11-28T06:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:54:00 crc kubenswrapper[4946]: I1128 06:54:00.293258 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:00 crc kubenswrapper[4946]: I1128 06:54:00.293334 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:00 crc kubenswrapper[4946]: I1128 06:54:00.293357 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:00 crc kubenswrapper[4946]: I1128 06:54:00.293386 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:00 crc kubenswrapper[4946]: I1128 06:54:00.293409 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:00Z","lastTransitionTime":"2025-11-28T06:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:54:00 crc kubenswrapper[4946]: I1128 06:54:00.397288 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:00 crc kubenswrapper[4946]: I1128 06:54:00.397326 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:00 crc kubenswrapper[4946]: I1128 06:54:00.397336 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:00 crc kubenswrapper[4946]: I1128 06:54:00.397352 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:00 crc kubenswrapper[4946]: I1128 06:54:00.397360 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:00Z","lastTransitionTime":"2025-11-28T06:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:54:00 crc kubenswrapper[4946]: I1128 06:54:00.500861 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:00 crc kubenswrapper[4946]: I1128 06:54:00.500917 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:00 crc kubenswrapper[4946]: I1128 06:54:00.500934 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:00 crc kubenswrapper[4946]: I1128 06:54:00.500956 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:00 crc kubenswrapper[4946]: I1128 06:54:00.500972 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:00Z","lastTransitionTime":"2025-11-28T06:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:54:00 crc kubenswrapper[4946]: I1128 06:54:00.604420 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:00 crc kubenswrapper[4946]: I1128 06:54:00.604456 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:00 crc kubenswrapper[4946]: I1128 06:54:00.604485 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:00 crc kubenswrapper[4946]: I1128 06:54:00.604501 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:00 crc kubenswrapper[4946]: I1128 06:54:00.604512 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:00Z","lastTransitionTime":"2025-11-28T06:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:54:00 crc kubenswrapper[4946]: I1128 06:54:00.708174 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:00 crc kubenswrapper[4946]: I1128 06:54:00.708242 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:00 crc kubenswrapper[4946]: I1128 06:54:00.708260 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:00 crc kubenswrapper[4946]: I1128 06:54:00.708293 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:00 crc kubenswrapper[4946]: I1128 06:54:00.708318 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:00Z","lastTransitionTime":"2025-11-28T06:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:54:00 crc kubenswrapper[4946]: I1128 06:54:00.812217 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:00 crc kubenswrapper[4946]: I1128 06:54:00.812307 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:00 crc kubenswrapper[4946]: I1128 06:54:00.812325 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:00 crc kubenswrapper[4946]: I1128 06:54:00.812355 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:00 crc kubenswrapper[4946]: I1128 06:54:00.812374 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:00Z","lastTransitionTime":"2025-11-28T06:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:54:00 crc kubenswrapper[4946]: I1128 06:54:00.915956 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:00 crc kubenswrapper[4946]: I1128 06:54:00.916034 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:00 crc kubenswrapper[4946]: I1128 06:54:00.916055 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:00 crc kubenswrapper[4946]: I1128 06:54:00.916089 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:00 crc kubenswrapper[4946]: I1128 06:54:00.916109 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:00Z","lastTransitionTime":"2025-11-28T06:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:54:00 crc kubenswrapper[4946]: I1128 06:54:00.989174 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gkg79" Nov 28 06:54:00 crc kubenswrapper[4946]: E1128 06:54:00.989723 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gkg79" podUID="4e6983b1-6887-4d13-8f9a-f261a745115f" Nov 28 06:54:01 crc kubenswrapper[4946]: I1128 06:54:01.018966 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:01 crc kubenswrapper[4946]: I1128 06:54:01.019016 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:01 crc kubenswrapper[4946]: I1128 06:54:01.019027 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:01 crc kubenswrapper[4946]: I1128 06:54:01.019043 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:01 crc kubenswrapper[4946]: I1128 06:54:01.019056 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:01Z","lastTransitionTime":"2025-11-28T06:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
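Every "Node became not ready" entry above carries the same message: the runtime-network readiness check finds no CNI configuration under /etc/kubernetes/cni/net.d/, the directory the SDN pods (multus / ovn-kubernetes) populate once they come up, so the condition clears itself when the network provider's pods start. The snippet below is a rough stand-in for that check, with the directory name taken from the log; the file filtering is a simplification of what the actual ocicni loader does.

# Rough sketch of the condition kubelet keeps reporting above: network
# readiness fails while /etc/kubernetes/cni/net.d/ holds no usable CNI config.
import json
import os
import sys

CNI_CONF_DIR = "/etc/kubernetes/cni/net.d"  # directory named in the log

try:
    entries = sorted(os.listdir(CNI_CONF_DIR))
except FileNotFoundError:
    sys.exit(f"{CNI_CONF_DIR} does not exist: network plugin never started")

confs = [e for e in entries if e.endswith((".conf", ".conflist", ".json"))]
if not confs:
    sys.exit(f"no CNI configuration file in {CNI_CONF_DIR} -> node stays NotReady")

for name in confs:
    with open(os.path.join(CNI_CONF_DIR, name)) as f:
        doc = json.load(f)
    # a .conflist carries a "plugins" array; a single .conf has a "type"
    kind = "conflist" if "plugins" in doc else doc.get("type", "unknown")
    print(f"{name}: name={doc.get('name')!r} kind={kind}")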
[... the block repeats at 06:54:01.019, .122, .227, .329, .432, .536, .641, .744, .848, and .952 ...]

Nov 28 06:54:01 crc kubenswrapper[4946]: I1128 06:54:01.989913 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:54:01 crc kubenswrapper[4946]: I1128 06:54:01.990077 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:54:01 crc kubenswrapper[4946]: E1128 06:54:01.990254 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:54:01 crc kubenswrapper[4946]: E1128 06:54:01.990446 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:54:01 crc kubenswrapper[4946]: I1128 06:54:01.990666 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:54:01 crc kubenswrapper[4946]: E1128 06:54:01.991239 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:54:02 crc kubenswrapper[4946]: I1128 06:54:02.017740 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"]
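Aside from the single "SyncLoop ADD" for openshift-etcd/etcd-crc, this stretch of the journal is the same handful of messages re-emitted several times a second. When eyeballing stops scaling, a few lines of Python can tally the entries; the regex below is fitted to the klog header format visible in this excerpt (severity letter plus MMDD, a timestamp with microseconds, a pid, file:line], then a quoted message) rather than being a general-purpose klog parser.

# Quick-and-dirty tally of the repeating kubelet events above, reading the
# journal text from stdin. Journal lines in this excerpt can hold several
# klog entries each, hence finditer per line.
import re
import sys
from collections import Counter

KLOG = re.compile(
    r'(?P<sev>[IEWF])(?P<ts>\d{4} \d{2}:\d{2}:\d{2}\.\d{6})\s+\d+\s+'
    r'(?P<src>[\w./]+:\d+)\]\s+"(?P<msg>[^"]*)"'
)

counts = Counter()
for line in sys.stdin:
    for m in KLOG.finditer(line):
        counts[(m.group("sev"), m.group("msg"))] += 1

for (sev, msg), n in counts.most_common():
    print(f"{n:5d}  {sev}  {msg}")

Fed this excerpt on stdin (for example, journalctl -u kubelet | python3 tally.py on the node), it would show the "Recording event message for node" and "Node became not ready" entries dominating everything else by a wide margin.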
[... the block repeats at 06:54:02.055, .157, .260, .364, .467, .571, .674, .777, .881, and .985 ...]
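The condition object kubelet keeps re-recording (the type/status/reason/message JSON above) is the same structure the API server exposes on the Node resource, so the state can be read back once the control plane answers. A sketch using the official Kubernetes Python client follows; it assumes a working kubeconfig and an API server that is still reachable despite the node being NotReady.

# Sketch: read back the Ready condition that the log entries above keep
# setting. Assumes the "kubernetes" package (pip install kubernetes) and a
# kubeconfig pointing at this cluster.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
node = client.CoreV1Api().read_node("crc")  # node name taken from the log

for cond in node.status.conditions:
    if cond.type == "Ready":
        print(cond.status, cond.reason, "-", cond.message)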
Has your network provider started?"} Nov 28 06:54:02 crc kubenswrapper[4946]: I1128 06:54:02.989499 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gkg79" Nov 28 06:54:02 crc kubenswrapper[4946]: E1128 06:54:02.989711 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gkg79" podUID="4e6983b1-6887-4d13-8f9a-f261a745115f" Nov 28 06:54:03 crc kubenswrapper[4946]: I1128 06:54:03.088892 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:03 crc kubenswrapper[4946]: I1128 06:54:03.088964 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:03 crc kubenswrapper[4946]: I1128 06:54:03.088988 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:03 crc kubenswrapper[4946]: I1128 06:54:03.089017 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:03 crc kubenswrapper[4946]: I1128 06:54:03.089040 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:03Z","lastTransitionTime":"2025-11-28T06:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:54:03 crc kubenswrapper[4946]: I1128 06:54:03.191686 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:03 crc kubenswrapper[4946]: I1128 06:54:03.191737 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:03 crc kubenswrapper[4946]: I1128 06:54:03.191753 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:03 crc kubenswrapper[4946]: I1128 06:54:03.191775 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:03 crc kubenswrapper[4946]: I1128 06:54:03.191792 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:03Z","lastTransitionTime":"2025-11-28T06:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 28 06:54:03 crc kubenswrapper[4946]: I1128 06:54:03.294347 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:54:03 crc kubenswrapper[4946]: I1128 06:54:03.294404 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:54:03 crc kubenswrapper[4946]: I1128 06:54:03.294419 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:54:03 crc kubenswrapper[4946]: I1128 06:54:03.294444 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:54:03 crc kubenswrapper[4946]: I1128 06:54:03.294488 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:03Z","lastTransitionTime":"2025-11-28T06:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:54:03 crc kubenswrapper[4946]: I1128 06:54:03.397493 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:54:03 crc kubenswrapper[4946]: I1128 06:54:03.397549 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:54:03 crc kubenswrapper[4946]: I1128 06:54:03.397564 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:54:03 crc kubenswrapper[4946]: I1128 06:54:03.397585 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:54:03 crc kubenswrapper[4946]: I1128 06:54:03.397597 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:03Z","lastTransitionTime":"2025-11-28T06:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:54:03 crc kubenswrapper[4946]: I1128 06:54:03.501894 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:54:03 crc kubenswrapper[4946]: I1128 06:54:03.502032 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:54:03 crc kubenswrapper[4946]: I1128 06:54:03.502058 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:54:03 crc kubenswrapper[4946]: I1128 06:54:03.502086 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:54:03 crc kubenswrapper[4946]: I1128 06:54:03.502146 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:03Z","lastTransitionTime":"2025-11-28T06:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:54:03 crc kubenswrapper[4946]: I1128 06:54:03.605647 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:03 crc kubenswrapper[4946]: I1128 06:54:03.605684 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:03 crc kubenswrapper[4946]: I1128 06:54:03.605692 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:03 crc kubenswrapper[4946]: I1128 06:54:03.605706 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:03 crc kubenswrapper[4946]: I1128 06:54:03.605716 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:03Z","lastTransitionTime":"2025-11-28T06:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:54:03 crc kubenswrapper[4946]: I1128 06:54:03.708317 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:03 crc kubenswrapper[4946]: I1128 06:54:03.708418 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:03 crc kubenswrapper[4946]: I1128 06:54:03.708434 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:03 crc kubenswrapper[4946]: I1128 06:54:03.708494 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:03 crc kubenswrapper[4946]: I1128 06:54:03.708511 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:03Z","lastTransitionTime":"2025-11-28T06:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:54:03 crc kubenswrapper[4946]: I1128 06:54:03.811137 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:03 crc kubenswrapper[4946]: I1128 06:54:03.811184 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:03 crc kubenswrapper[4946]: I1128 06:54:03.811193 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:03 crc kubenswrapper[4946]: I1128 06:54:03.811211 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:03 crc kubenswrapper[4946]: I1128 06:54:03.811224 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:03Z","lastTransitionTime":"2025-11-28T06:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:54:03 crc kubenswrapper[4946]: I1128 06:54:03.919552 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:03 crc kubenswrapper[4946]: I1128 06:54:03.919594 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:03 crc kubenswrapper[4946]: I1128 06:54:03.919607 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:03 crc kubenswrapper[4946]: I1128 06:54:03.919626 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:03 crc kubenswrapper[4946]: I1128 06:54:03.919645 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:03Z","lastTransitionTime":"2025-11-28T06:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:54:03 crc kubenswrapper[4946]: I1128 06:54:03.989501 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:54:03 crc kubenswrapper[4946]: I1128 06:54:03.989584 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:54:03 crc kubenswrapper[4946]: I1128 06:54:03.989653 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:54:03 crc kubenswrapper[4946]: E1128 06:54:03.989727 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:54:03 crc kubenswrapper[4946]: E1128 06:54:03.989896 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:54:03 crc kubenswrapper[4946]: E1128 06:54:03.990025 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:54:04 crc kubenswrapper[4946]: I1128 06:54:04.022703 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:04 crc kubenswrapper[4946]: I1128 06:54:04.022765 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:04 crc kubenswrapper[4946]: I1128 06:54:04.022777 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:04 crc kubenswrapper[4946]: I1128 06:54:04.022800 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:04 crc kubenswrapper[4946]: I1128 06:54:04.022814 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:04Z","lastTransitionTime":"2025-11-28T06:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:54:04 crc kubenswrapper[4946]: I1128 06:54:04.125534 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:04 crc kubenswrapper[4946]: I1128 06:54:04.125619 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:04 crc kubenswrapper[4946]: I1128 06:54:04.125643 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:04 crc kubenswrapper[4946]: I1128 06:54:04.125675 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:04 crc kubenswrapper[4946]: I1128 06:54:04.125699 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:04Z","lastTransitionTime":"2025-11-28T06:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:54:04 crc kubenswrapper[4946]: I1128 06:54:04.228726 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:04 crc kubenswrapper[4946]: I1128 06:54:04.228783 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:04 crc kubenswrapper[4946]: I1128 06:54:04.228796 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:04 crc kubenswrapper[4946]: I1128 06:54:04.228823 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:04 crc kubenswrapper[4946]: I1128 06:54:04.228838 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:04Z","lastTransitionTime":"2025-11-28T06:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:54:04 crc kubenswrapper[4946]: I1128 06:54:04.332625 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:04 crc kubenswrapper[4946]: I1128 06:54:04.332687 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:04 crc kubenswrapper[4946]: I1128 06:54:04.332701 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:04 crc kubenswrapper[4946]: I1128 06:54:04.332725 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:04 crc kubenswrapper[4946]: I1128 06:54:04.332740 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:04Z","lastTransitionTime":"2025-11-28T06:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:54:04 crc kubenswrapper[4946]: I1128 06:54:04.435825 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:04 crc kubenswrapper[4946]: I1128 06:54:04.435862 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:04 crc kubenswrapper[4946]: I1128 06:54:04.435871 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:04 crc kubenswrapper[4946]: I1128 06:54:04.435886 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:04 crc kubenswrapper[4946]: I1128 06:54:04.435896 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:04Z","lastTransitionTime":"2025-11-28T06:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:54:04 crc kubenswrapper[4946]: I1128 06:54:04.538786 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:04 crc kubenswrapper[4946]: I1128 06:54:04.538847 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:04 crc kubenswrapper[4946]: I1128 06:54:04.538858 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:04 crc kubenswrapper[4946]: I1128 06:54:04.538878 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:04 crc kubenswrapper[4946]: I1128 06:54:04.538891 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:04Z","lastTransitionTime":"2025-11-28T06:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:54:04 crc kubenswrapper[4946]: I1128 06:54:04.642100 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:04 crc kubenswrapper[4946]: I1128 06:54:04.642149 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:04 crc kubenswrapper[4946]: I1128 06:54:04.642162 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:04 crc kubenswrapper[4946]: I1128 06:54:04.642179 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:04 crc kubenswrapper[4946]: I1128 06:54:04.642191 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:04Z","lastTransitionTime":"2025-11-28T06:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:54:04 crc kubenswrapper[4946]: I1128 06:54:04.745136 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:04 crc kubenswrapper[4946]: I1128 06:54:04.745215 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:04 crc kubenswrapper[4946]: I1128 06:54:04.745233 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:04 crc kubenswrapper[4946]: I1128 06:54:04.745262 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:04 crc kubenswrapper[4946]: I1128 06:54:04.745285 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:04Z","lastTransitionTime":"2025-11-28T06:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 28 06:54:04 crc kubenswrapper[4946]: I1128 06:54:04.757180 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4e6983b1-6887-4d13-8f9a-f261a745115f-metrics-certs\") pod \"network-metrics-daemon-gkg79\" (UID: \"4e6983b1-6887-4d13-8f9a-f261a745115f\") " pod="openshift-multus/network-metrics-daemon-gkg79"
Nov 28 06:54:04 crc kubenswrapper[4946]: E1128 06:54:04.757427 4946 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 28 06:54:04 crc kubenswrapper[4946]: E1128 06:54:04.757611 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e6983b1-6887-4d13-8f9a-f261a745115f-metrics-certs podName:4e6983b1-6887-4d13-8f9a-f261a745115f nodeName:}" failed. No retries permitted until 2025-11-28 06:55:08.757573668 +0000 UTC m=+163.135638819 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4e6983b1-6887-4d13-8f9a-f261a745115f-metrics-certs") pod "network-metrics-daemon-gkg79" (UID: "4e6983b1-6887-4d13-8f9a-f261a745115f") : object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 28 06:54:04 crc kubenswrapper[4946]: I1128 06:54:04.848545 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:54:04 crc kubenswrapper[4946]: I1128 06:54:04.848622 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:54:04 crc kubenswrapper[4946]: I1128 06:54:04.848634 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:54:04 crc kubenswrapper[4946]: I1128 06:54:04.848650 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:54:04 crc kubenswrapper[4946]: I1128 06:54:04.848664 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:04Z","lastTransitionTime":"2025-11-28T06:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
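The nestedpendingoperations.go:348 record above shows the volume manager's exponential backoff: durationBeforeRetry has grown to 1m4s (64s), consistent with a delay that doubles after every failed MountVolume attempt (64s = 500ms * 2^7). A small Go sketch of that schedule; the 500ms initial delay and 2m2s cap are illustrative assumptions, since the log itself only shows the 1m4s step:

package main

import (
	"fmt"
	"time"
)

// retryDelay doubles the delay on every failure, up to a cap.
func retryDelay(initial, maxDelay time.Duration, failures int) time.Duration {
	d := initial
	for i := 0; i < failures; i++ {
		d *= 2
		if d >= maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	for n := 0; n <= 8; n++ {
		// n=7 prints 1m4s, matching the durationBeforeRetry above.
		fmt.Printf("failure %d -> durationBeforeRetry %s\n", n, retryDelay(500*time.Millisecond, 2*time.Minute+2*time.Second, n))
	}
}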
Nov 28 06:54:04 crc kubenswrapper[4946]: I1128 06:54:04.951105 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:54:04 crc kubenswrapper[4946]: I1128 06:54:04.951166 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:54:04 crc kubenswrapper[4946]: I1128 06:54:04.951182 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:54:04 crc kubenswrapper[4946]: I1128 06:54:04.951209 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:54:04 crc kubenswrapper[4946]: I1128 06:54:04.951230 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:04Z","lastTransitionTime":"2025-11-28T06:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:54:04 crc kubenswrapper[4946]: I1128 06:54:04.989675 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gkg79"
Nov 28 06:54:04 crc kubenswrapper[4946]: E1128 06:54:04.990176 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gkg79" podUID="4e6983b1-6887-4d13-8f9a-f261a745115f"
Nov 28 06:54:04 crc kubenswrapper[4946]: I1128 06:54:04.990435 4946 scope.go:117] "RemoveContainer" containerID="9a4ec6ab6b9acae6303733b64d2906e1bd955b80202cc9af54adc15207de969b"
Nov 28 06:54:04 crc kubenswrapper[4946]: E1128 06:54:04.990638 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pkknv_openshift-ovn-kubernetes(47e7046d-60dc-4dc0-b63e-f22f4ca5cd51)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" podUID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51"
Nov 28 06:54:05 crc kubenswrapper[4946]: I1128 06:54:05.054115 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:54:05 crc kubenswrapper[4946]: I1128 06:54:05.054203 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:54:05 crc kubenswrapper[4946]: I1128 06:54:05.054226 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:54:05 crc kubenswrapper[4946]: I1128 06:54:05.054258 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:54:05 crc kubenswrapper[4946]: I1128 06:54:05.054281 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:05Z","lastTransitionTime":"2025-11-28T06:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
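Every kubenswrapper payload above uses the klog header layout (severity letter, mmdd date, wall-clock time, thread/PID, source file:line, then the message), e.g. E1128 06:54:04.990638 4946 pod_workers.go:1301]. A rough Go sketch of pulling that header apart; the regexp is tailored to the lines in this log, not a general klog parser:

package main

import (
	"fmt"
	"regexp"
)

// klog header: Lmmdd hh:mm:ss.uuuuuu pid file:line] msg
var klogHeader = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6}) (\d+) ([\w.]+:\d+)\] (.*)$`)

func main() {
	line := `E1128 06:54:04.990638 4946 pod_workers.go:1301] "Error syncing pod, skipping" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv"`
	m := klogHeader.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no match")
		return
	}
	// severity=E date(mmdd)=1128 time=06:54:04.990638 pid=4946 source=pod_workers.go:1301
	fmt.Printf("severity=%s date(mmdd)=%s time=%s pid=%s source=%s\nmsg=%s\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}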
Nov 28 06:54:05 crc kubenswrapper[4946]: I1128 06:54:05.158242 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:54:05 crc kubenswrapper[4946]: I1128 06:54:05.158325 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:54:05 crc kubenswrapper[4946]: I1128 06:54:05.158348 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:54:05 crc kubenswrapper[4946]: I1128 06:54:05.158378 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:54:05 crc kubenswrapper[4946]: I1128 06:54:05.158404 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:05Z","lastTransitionTime":"2025-11-28T06:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:54:05 crc kubenswrapper[4946]: I1128 06:54:05.261803 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:54:05 crc kubenswrapper[4946]: I1128 06:54:05.261876 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:54:05 crc kubenswrapper[4946]: I1128 06:54:05.261899 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:54:05 crc kubenswrapper[4946]: I1128 06:54:05.261927 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:54:05 crc kubenswrapper[4946]: I1128 06:54:05.261966 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:05Z","lastTransitionTime":"2025-11-28T06:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:54:05 crc kubenswrapper[4946]: I1128 06:54:05.365838 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:05 crc kubenswrapper[4946]: I1128 06:54:05.365938 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:05 crc kubenswrapper[4946]: I1128 06:54:05.365970 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:05 crc kubenswrapper[4946]: I1128 06:54:05.366009 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:05 crc kubenswrapper[4946]: I1128 06:54:05.366040 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:05Z","lastTransitionTime":"2025-11-28T06:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:54:05 crc kubenswrapper[4946]: I1128 06:54:05.470117 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:05 crc kubenswrapper[4946]: I1128 06:54:05.470191 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:05 crc kubenswrapper[4946]: I1128 06:54:05.470208 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:05 crc kubenswrapper[4946]: I1128 06:54:05.470233 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:05 crc kubenswrapper[4946]: I1128 06:54:05.470259 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:05Z","lastTransitionTime":"2025-11-28T06:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:54:05 crc kubenswrapper[4946]: I1128 06:54:05.580053 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:05 crc kubenswrapper[4946]: I1128 06:54:05.580110 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:05 crc kubenswrapper[4946]: I1128 06:54:05.580126 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:05 crc kubenswrapper[4946]: I1128 06:54:05.580149 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:05 crc kubenswrapper[4946]: I1128 06:54:05.580166 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:05Z","lastTransitionTime":"2025-11-28T06:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:54:05 crc kubenswrapper[4946]: I1128 06:54:05.683194 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:05 crc kubenswrapper[4946]: I1128 06:54:05.683262 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:05 crc kubenswrapper[4946]: I1128 06:54:05.683280 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:05 crc kubenswrapper[4946]: I1128 06:54:05.683308 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:05 crc kubenswrapper[4946]: I1128 06:54:05.683327 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:05Z","lastTransitionTime":"2025-11-28T06:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:54:05 crc kubenswrapper[4946]: I1128 06:54:05.786742 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:05 crc kubenswrapper[4946]: I1128 06:54:05.786811 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:05 crc kubenswrapper[4946]: I1128 06:54:05.786828 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:05 crc kubenswrapper[4946]: I1128 06:54:05.786853 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:05 crc kubenswrapper[4946]: I1128 06:54:05.786873 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:05Z","lastTransitionTime":"2025-11-28T06:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:54:05 crc kubenswrapper[4946]: I1128 06:54:05.889907 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:05 crc kubenswrapper[4946]: I1128 06:54:05.889981 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:05 crc kubenswrapper[4946]: I1128 06:54:05.890005 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:05 crc kubenswrapper[4946]: I1128 06:54:05.890033 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:05 crc kubenswrapper[4946]: I1128 06:54:05.890059 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:05Z","lastTransitionTime":"2025-11-28T06:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:54:05 crc kubenswrapper[4946]: I1128 06:54:05.989344 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:54:05 crc kubenswrapper[4946]: I1128 06:54:05.989448 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:54:05 crc kubenswrapper[4946]: I1128 06:54:05.989558 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:54:05 crc kubenswrapper[4946]: E1128 06:54:05.991296 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:54:05 crc kubenswrapper[4946]: E1128 06:54:05.991649 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:54:05 crc kubenswrapper[4946]: E1128 06:54:05.993545 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:54:05 crc kubenswrapper[4946]: I1128 06:54:05.995181 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:05 crc kubenswrapper[4946]: I1128 06:54:05.995287 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:05 crc kubenswrapper[4946]: I1128 06:54:05.995405 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:05 crc kubenswrapper[4946]: I1128 06:54:05.995449 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:05 crc kubenswrapper[4946]: I1128 06:54:05.995513 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:05Z","lastTransitionTime":"2025-11-28T06:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:54:06 crc kubenswrapper[4946]: I1128 06:54:06.044226 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-9g9w4" podStartSLOduration=81.04419331 podStartE2EDuration="1m21.04419331s" podCreationTimestamp="2025-11-28 06:52:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:54:06.043556224 +0000 UTC m=+100.421621395" watchObservedRunningTime="2025-11-28 06:54:06.04419331 +0000 UTC m=+100.422258461" Nov 28 06:54:06 crc kubenswrapper[4946]: I1128 06:54:06.097738 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:06 crc kubenswrapper[4946]: I1128 06:54:06.097806 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:06 crc kubenswrapper[4946]: I1128 06:54:06.097827 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:06 crc kubenswrapper[4946]: I1128 06:54:06.097854 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:06 crc kubenswrapper[4946]: I1128 06:54:06.097876 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:06Z","lastTransitionTime":"2025-11-28T06:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:54:06 crc kubenswrapper[4946]: I1128 06:54:06.195491 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-pqf5z" podStartSLOduration=81.19543697 podStartE2EDuration="1m21.19543697s" podCreationTimestamp="2025-11-28 06:52:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:54:06.180433888 +0000 UTC m=+100.558499039" watchObservedRunningTime="2025-11-28 06:54:06.19543697 +0000 UTC m=+100.573502121" Nov 28 06:54:06 crc kubenswrapper[4946]: I1128 06:54:06.200297 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:06 crc kubenswrapper[4946]: I1128 06:54:06.200357 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:06 crc kubenswrapper[4946]: I1128 06:54:06.200376 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:06 crc kubenswrapper[4946]: I1128 06:54:06.200872 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:06 crc kubenswrapper[4946]: I1128 06:54:06.200945 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:06Z","lastTransitionTime":"2025-11-28T06:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:54:06 crc kubenswrapper[4946]: I1128 06:54:06.209560 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=12.20954198 podStartE2EDuration="12.20954198s" podCreationTimestamp="2025-11-28 06:53:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:54:06.208682869 +0000 UTC m=+100.586748000" watchObservedRunningTime="2025-11-28 06:54:06.20954198 +0000 UTC m=+100.587607121" Nov 28 06:54:06 crc kubenswrapper[4946]: I1128 06:54:06.244562 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=4.244537508 podStartE2EDuration="4.244537508s" podCreationTimestamp="2025-11-28 06:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:54:06.244209159 +0000 UTC m=+100.622274300" watchObservedRunningTime="2025-11-28 06:54:06.244537508 +0000 UTC m=+100.622602639" Nov 28 06:54:06 crc kubenswrapper[4946]: I1128 06:54:06.289845 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=83.2898106 podStartE2EDuration="1m23.2898106s" podCreationTimestamp="2025-11-28 06:52:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:54:06.26641739 +0000 UTC m=+100.644482521" watchObservedRunningTime="2025-11-28 06:54:06.2898106 +0000 UTC m=+100.667875741" Nov 28 06:54:06 crc kubenswrapper[4946]: I1128 06:54:06.304048 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:06 crc kubenswrapper[4946]: I1128 06:54:06.304119 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:06 crc kubenswrapper[4946]: I1128 06:54:06.304142 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:06 crc kubenswrapper[4946]: I1128 06:54:06.304170 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:06 crc kubenswrapper[4946]: I1128 06:54:06.304191 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:06Z","lastTransitionTime":"2025-11-28T06:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 28 06:54:06 crc kubenswrapper[4946]: I1128 06:54:06.304216 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-6tvtt" podStartSLOduration=81.304195447 podStartE2EDuration="1m21.304195447s" podCreationTimestamp="2025-11-28 06:52:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:54:06.302803472 +0000 UTC m=+100.680868613" watchObservedRunningTime="2025-11-28 06:54:06.304195447 +0000 UTC m=+100.682260558"
Nov 28 06:54:06 crc kubenswrapper[4946]: I1128 06:54:06.318592 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podStartSLOduration=81.318557283 podStartE2EDuration="1m21.318557283s" podCreationTimestamp="2025-11-28 06:52:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:54:06.31801498 +0000 UTC m=+100.696080151" watchObservedRunningTime="2025-11-28 06:54:06.318557283 +0000 UTC m=+100.696622424"
Nov 28 06:54:06 crc kubenswrapper[4946]: I1128 06:54:06.335777 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=81.335751349 podStartE2EDuration="1m21.335751349s" podCreationTimestamp="2025-11-28 06:52:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:54:06.335049212 +0000 UTC m=+100.713114373" watchObservedRunningTime="2025-11-28 06:54:06.335751349 +0000 UTC m=+100.713816470"
Nov 28 06:54:06 crc kubenswrapper[4946]: I1128 06:54:06.351827 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=52.351798527 podStartE2EDuration="52.351798527s" podCreationTimestamp="2025-11-28 06:53:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:54:06.351765966 +0000 UTC m=+100.729831087" watchObservedRunningTime="2025-11-28 06:54:06.351798527 +0000 UTC m=+100.729863648"
Nov 28 06:54:06 crc kubenswrapper[4946]: I1128 06:54:06.397570 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-j4wp8" podStartSLOduration=81.397543242 podStartE2EDuration="1m21.397543242s" podCreationTimestamp="2025-11-28 06:52:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:54:06.383941964 +0000 UTC m=+100.762007095" watchObservedRunningTime="2025-11-28 06:54:06.397543242 +0000 UTC m=+100.775608353"
Nov 28 06:54:06 crc kubenswrapper[4946]: I1128 06:54:06.397770 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zpkwv" podStartSLOduration=80.397764867 podStartE2EDuration="1m20.397764867s" podCreationTimestamp="2025-11-28 06:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:54:06.397706306 +0000 UTC m=+100.775771457" watchObservedRunningTime="2025-11-28 06:54:06.397764867 +0000 UTC m=+100.775829978"
Nov 28 06:54:06 crc kubenswrapper[4946]: I1128 06:54:06.406621 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
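The pod_startup_latency_tracker records above are internally consistent: podStartSLOduration equals watchObservedRunningTime minus podCreationTimestamp, e.g. 06:54:06.304195447 - 06:52:45 = 81.304195447s ("1m21.304195447s") for node-resolver-6tvtt, and the m=+100.68... suffixes are monotonic-clock offsets since kubelet start. A quick Go check of that arithmetic (the layout string is an assumption matching how these timestamps are printed):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout chosen to match the timestamps as printed in the log above.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2025-11-28 06:52:45 +0000 UTC")
	if err != nil {
		panic(err)
	}
	watched, err := time.Parse(layout, "2025-11-28 06:54:06.304195447 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// Prints 1m21.304195447s, i.e. podStartSLOduration=81.304195447.
	fmt.Println(watched.Sub(created))
}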
Nov 28 06:54:06 crc kubenswrapper[4946]: I1128 06:54:06.406672 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:54:06 crc kubenswrapper[4946]: I1128 06:54:06.406687 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:54:06 crc kubenswrapper[4946]: I1128 06:54:06.406705 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:54:06 crc kubenswrapper[4946]: I1128 06:54:06.406718 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:06Z","lastTransitionTime":"2025-11-28T06:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:54:06 crc kubenswrapper[4946]: I1128 06:54:06.509738 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:54:06 crc kubenswrapper[4946]: I1128 06:54:06.509807 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:54:06 crc kubenswrapper[4946]: I1128 06:54:06.509832 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:54:06 crc kubenswrapper[4946]: I1128 06:54:06.509861 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:54:06 crc kubenswrapper[4946]: I1128 06:54:06.509882 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:06Z","lastTransitionTime":"2025-11-28T06:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 06:54:06 crc kubenswrapper[4946]: I1128 06:54:06.613223 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 06:54:06 crc kubenswrapper[4946]: I1128 06:54:06.613298 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 06:54:06 crc kubenswrapper[4946]: I1128 06:54:06.613321 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 06:54:06 crc kubenswrapper[4946]: I1128 06:54:06.613352 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 06:54:06 crc kubenswrapper[4946]: I1128 06:54:06.613375 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:06Z","lastTransitionTime":"2025-11-28T06:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:54:06 crc kubenswrapper[4946]: I1128 06:54:06.716272 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:06 crc kubenswrapper[4946]: I1128 06:54:06.716317 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:06 crc kubenswrapper[4946]: I1128 06:54:06.716329 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:06 crc kubenswrapper[4946]: I1128 06:54:06.716345 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:06 crc kubenswrapper[4946]: I1128 06:54:06.716359 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:06Z","lastTransitionTime":"2025-11-28T06:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:54:06 crc kubenswrapper[4946]: I1128 06:54:06.820073 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:06 crc kubenswrapper[4946]: I1128 06:54:06.820129 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:06 crc kubenswrapper[4946]: I1128 06:54:06.820148 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:06 crc kubenswrapper[4946]: I1128 06:54:06.820171 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:06 crc kubenswrapper[4946]: I1128 06:54:06.820189 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:06Z","lastTransitionTime":"2025-11-28T06:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:54:06 crc kubenswrapper[4946]: I1128 06:54:06.924097 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:06 crc kubenswrapper[4946]: I1128 06:54:06.924160 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:06 crc kubenswrapper[4946]: I1128 06:54:06.924177 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:06 crc kubenswrapper[4946]: I1128 06:54:06.924202 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:06 crc kubenswrapper[4946]: I1128 06:54:06.924220 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:06Z","lastTransitionTime":"2025-11-28T06:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:54:06 crc kubenswrapper[4946]: I1128 06:54:06.989222 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gkg79" Nov 28 06:54:06 crc kubenswrapper[4946]: E1128 06:54:06.989432 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gkg79" podUID="4e6983b1-6887-4d13-8f9a-f261a745115f" Nov 28 06:54:07 crc kubenswrapper[4946]: I1128 06:54:07.027711 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:07 crc kubenswrapper[4946]: I1128 06:54:07.027775 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:07 crc kubenswrapper[4946]: I1128 06:54:07.027796 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:07 crc kubenswrapper[4946]: I1128 06:54:07.027823 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:07 crc kubenswrapper[4946]: I1128 06:54:07.027844 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:07Z","lastTransitionTime":"2025-11-28T06:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:54:07 crc kubenswrapper[4946]: I1128 06:54:07.131453 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:07 crc kubenswrapper[4946]: I1128 06:54:07.131558 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:07 crc kubenswrapper[4946]: I1128 06:54:07.131582 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:07 crc kubenswrapper[4946]: I1128 06:54:07.131607 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:07 crc kubenswrapper[4946]: I1128 06:54:07.131626 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:07Z","lastTransitionTime":"2025-11-28T06:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:54:07 crc kubenswrapper[4946]: I1128 06:54:07.234989 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:07 crc kubenswrapper[4946]: I1128 06:54:07.235040 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:07 crc kubenswrapper[4946]: I1128 06:54:07.235057 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:07 crc kubenswrapper[4946]: I1128 06:54:07.235078 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:07 crc kubenswrapper[4946]: I1128 06:54:07.235096 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:07Z","lastTransitionTime":"2025-11-28T06:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:54:07 crc kubenswrapper[4946]: I1128 06:54:07.338253 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:07 crc kubenswrapper[4946]: I1128 06:54:07.338307 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:07 crc kubenswrapper[4946]: I1128 06:54:07.338323 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:07 crc kubenswrapper[4946]: I1128 06:54:07.338344 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:07 crc kubenswrapper[4946]: I1128 06:54:07.338360 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:07Z","lastTransitionTime":"2025-11-28T06:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:54:07 crc kubenswrapper[4946]: I1128 06:54:07.441373 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:07 crc kubenswrapper[4946]: I1128 06:54:07.441430 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:07 crc kubenswrapper[4946]: I1128 06:54:07.441447 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:07 crc kubenswrapper[4946]: I1128 06:54:07.441496 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:07 crc kubenswrapper[4946]: I1128 06:54:07.441514 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:07Z","lastTransitionTime":"2025-11-28T06:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:54:07 crc kubenswrapper[4946]: I1128 06:54:07.544883 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:07 crc kubenswrapper[4946]: I1128 06:54:07.544979 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:07 crc kubenswrapper[4946]: I1128 06:54:07.545015 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:07 crc kubenswrapper[4946]: I1128 06:54:07.545048 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:07 crc kubenswrapper[4946]: I1128 06:54:07.545104 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:07Z","lastTransitionTime":"2025-11-28T06:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:54:07 crc kubenswrapper[4946]: I1128 06:54:07.648592 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:07 crc kubenswrapper[4946]: I1128 06:54:07.648659 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:07 crc kubenswrapper[4946]: I1128 06:54:07.648684 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:07 crc kubenswrapper[4946]: I1128 06:54:07.648711 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:07 crc kubenswrapper[4946]: I1128 06:54:07.648732 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:07Z","lastTransitionTime":"2025-11-28T06:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:54:07 crc kubenswrapper[4946]: I1128 06:54:07.752624 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:07 crc kubenswrapper[4946]: I1128 06:54:07.752743 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:07 crc kubenswrapper[4946]: I1128 06:54:07.752774 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:07 crc kubenswrapper[4946]: I1128 06:54:07.752814 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:07 crc kubenswrapper[4946]: I1128 06:54:07.752857 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:07Z","lastTransitionTime":"2025-11-28T06:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:54:07 crc kubenswrapper[4946]: I1128 06:54:07.855706 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:07 crc kubenswrapper[4946]: I1128 06:54:07.855780 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:07 crc kubenswrapper[4946]: I1128 06:54:07.855801 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:07 crc kubenswrapper[4946]: I1128 06:54:07.855826 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:07 crc kubenswrapper[4946]: I1128 06:54:07.855847 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:07Z","lastTransitionTime":"2025-11-28T06:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:54:07 crc kubenswrapper[4946]: I1128 06:54:07.959081 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:07 crc kubenswrapper[4946]: I1128 06:54:07.959135 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:07 crc kubenswrapper[4946]: I1128 06:54:07.959152 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:07 crc kubenswrapper[4946]: I1128 06:54:07.959174 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:07 crc kubenswrapper[4946]: I1128 06:54:07.959191 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:07Z","lastTransitionTime":"2025-11-28T06:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:54:07 crc kubenswrapper[4946]: I1128 06:54:07.989824 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:54:07 crc kubenswrapper[4946]: I1128 06:54:07.989944 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:54:07 crc kubenswrapper[4946]: E1128 06:54:07.989980 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:54:07 crc kubenswrapper[4946]: I1128 06:54:07.990064 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:54:07 crc kubenswrapper[4946]: E1128 06:54:07.990139 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:54:07 crc kubenswrapper[4946]: E1128 06:54:07.990259 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:54:08 crc kubenswrapper[4946]: I1128 06:54:08.062991 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:08 crc kubenswrapper[4946]: I1128 06:54:08.063059 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:08 crc kubenswrapper[4946]: I1128 06:54:08.063075 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:08 crc kubenswrapper[4946]: I1128 06:54:08.063098 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:08 crc kubenswrapper[4946]: I1128 06:54:08.063135 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:08Z","lastTransitionTime":"2025-11-28T06:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:54:08 crc kubenswrapper[4946]: I1128 06:54:08.166802 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:08 crc kubenswrapper[4946]: I1128 06:54:08.166863 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:08 crc kubenswrapper[4946]: I1128 06:54:08.166876 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:08 crc kubenswrapper[4946]: I1128 06:54:08.166894 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:08 crc kubenswrapper[4946]: I1128 06:54:08.166907 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:08Z","lastTransitionTime":"2025-11-28T06:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:54:08 crc kubenswrapper[4946]: I1128 06:54:08.269515 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:08 crc kubenswrapper[4946]: I1128 06:54:08.269600 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:08 crc kubenswrapper[4946]: I1128 06:54:08.269622 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:08 crc kubenswrapper[4946]: I1128 06:54:08.269650 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:08 crc kubenswrapper[4946]: I1128 06:54:08.269671 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:08Z","lastTransitionTime":"2025-11-28T06:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:54:08 crc kubenswrapper[4946]: I1128 06:54:08.372545 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:08 crc kubenswrapper[4946]: I1128 06:54:08.372625 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:08 crc kubenswrapper[4946]: I1128 06:54:08.372645 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:08 crc kubenswrapper[4946]: I1128 06:54:08.372669 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:08 crc kubenswrapper[4946]: I1128 06:54:08.372686 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:08Z","lastTransitionTime":"2025-11-28T06:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:54:08 crc kubenswrapper[4946]: I1128 06:54:08.476215 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:08 crc kubenswrapper[4946]: I1128 06:54:08.476284 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:08 crc kubenswrapper[4946]: I1128 06:54:08.476306 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:08 crc kubenswrapper[4946]: I1128 06:54:08.476335 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:08 crc kubenswrapper[4946]: I1128 06:54:08.476356 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:08Z","lastTransitionTime":"2025-11-28T06:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:54:08 crc kubenswrapper[4946]: I1128 06:54:08.579221 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:08 crc kubenswrapper[4946]: I1128 06:54:08.579276 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:08 crc kubenswrapper[4946]: I1128 06:54:08.579321 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:08 crc kubenswrapper[4946]: I1128 06:54:08.579346 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:08 crc kubenswrapper[4946]: I1128 06:54:08.579363 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:08Z","lastTransitionTime":"2025-11-28T06:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:54:08 crc kubenswrapper[4946]: I1128 06:54:08.682631 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:08 crc kubenswrapper[4946]: I1128 06:54:08.682698 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:08 crc kubenswrapper[4946]: I1128 06:54:08.682710 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:08 crc kubenswrapper[4946]: I1128 06:54:08.682726 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:08 crc kubenswrapper[4946]: I1128 06:54:08.682738 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:08Z","lastTransitionTime":"2025-11-28T06:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:54:08 crc kubenswrapper[4946]: I1128 06:54:08.786343 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:08 crc kubenswrapper[4946]: I1128 06:54:08.786416 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:08 crc kubenswrapper[4946]: I1128 06:54:08.786434 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:08 crc kubenswrapper[4946]: I1128 06:54:08.786492 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:08 crc kubenswrapper[4946]: I1128 06:54:08.786521 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:08Z","lastTransitionTime":"2025-11-28T06:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:54:08 crc kubenswrapper[4946]: I1128 06:54:08.931917 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:08 crc kubenswrapper[4946]: I1128 06:54:08.931968 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:08 crc kubenswrapper[4946]: I1128 06:54:08.931987 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:08 crc kubenswrapper[4946]: I1128 06:54:08.932008 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:08 crc kubenswrapper[4946]: I1128 06:54:08.932020 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:08Z","lastTransitionTime":"2025-11-28T06:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:54:08 crc kubenswrapper[4946]: I1128 06:54:08.989162 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gkg79" Nov 28 06:54:08 crc kubenswrapper[4946]: E1128 06:54:08.989310 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gkg79" podUID="4e6983b1-6887-4d13-8f9a-f261a745115f" Nov 28 06:54:09 crc kubenswrapper[4946]: I1128 06:54:09.035085 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:09 crc kubenswrapper[4946]: I1128 06:54:09.035136 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:09 crc kubenswrapper[4946]: I1128 06:54:09.035152 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:09 crc kubenswrapper[4946]: I1128 06:54:09.035169 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:09 crc kubenswrapper[4946]: I1128 06:54:09.035181 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:09Z","lastTransitionTime":"2025-11-28T06:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:54:09 crc kubenswrapper[4946]: I1128 06:54:09.137985 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:09 crc kubenswrapper[4946]: I1128 06:54:09.138035 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:09 crc kubenswrapper[4946]: I1128 06:54:09.138048 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:09 crc kubenswrapper[4946]: I1128 06:54:09.138064 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:09 crc kubenswrapper[4946]: I1128 06:54:09.138075 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:09Z","lastTransitionTime":"2025-11-28T06:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:54:09 crc kubenswrapper[4946]: I1128 06:54:09.241404 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:09 crc kubenswrapper[4946]: I1128 06:54:09.241443 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:09 crc kubenswrapper[4946]: I1128 06:54:09.241452 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:09 crc kubenswrapper[4946]: I1128 06:54:09.241478 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:09 crc kubenswrapper[4946]: I1128 06:54:09.241488 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:09Z","lastTransitionTime":"2025-11-28T06:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:54:09 crc kubenswrapper[4946]: I1128 06:54:09.343930 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:09 crc kubenswrapper[4946]: I1128 06:54:09.343988 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:09 crc kubenswrapper[4946]: I1128 06:54:09.344006 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:09 crc kubenswrapper[4946]: I1128 06:54:09.344031 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:09 crc kubenswrapper[4946]: I1128 06:54:09.344049 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:09Z","lastTransitionTime":"2025-11-28T06:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:54:09 crc kubenswrapper[4946]: I1128 06:54:09.424095 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:09 crc kubenswrapper[4946]: I1128 06:54:09.424156 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:09 crc kubenswrapper[4946]: I1128 06:54:09.424167 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:09 crc kubenswrapper[4946]: I1128 06:54:09.424185 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:09 crc kubenswrapper[4946]: I1128 06:54:09.424197 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:09Z","lastTransitionTime":"2025-11-28T06:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:54:09 crc kubenswrapper[4946]: I1128 06:54:09.451097 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:54:09 crc kubenswrapper[4946]: I1128 06:54:09.451148 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:54:09 crc kubenswrapper[4946]: I1128 06:54:09.451163 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:54:09 crc kubenswrapper[4946]: I1128 06:54:09.451184 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:54:09 crc kubenswrapper[4946]: I1128 06:54:09.451201 4946 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:54:09Z","lastTransitionTime":"2025-11-28T06:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:54:09 crc kubenswrapper[4946]: I1128 06:54:09.489166 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-hbqlg"] Nov 28 06:54:09 crc kubenswrapper[4946]: I1128 06:54:09.489716 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hbqlg" Nov 28 06:54:09 crc kubenswrapper[4946]: I1128 06:54:09.491495 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 28 06:54:09 crc kubenswrapper[4946]: I1128 06:54:09.492584 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 28 06:54:09 crc kubenswrapper[4946]: I1128 06:54:09.493301 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 28 06:54:09 crc kubenswrapper[4946]: I1128 06:54:09.498480 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 28 06:54:09 crc kubenswrapper[4946]: I1128 06:54:09.639755 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/6941e832-4fbc-4667-9180-023d6ab06e7a-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-hbqlg\" (UID: \"6941e832-4fbc-4667-9180-023d6ab06e7a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hbqlg" Nov 28 06:54:09 crc kubenswrapper[4946]: I1128 06:54:09.639830 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6941e832-4fbc-4667-9180-023d6ab06e7a-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-hbqlg\" (UID: \"6941e832-4fbc-4667-9180-023d6ab06e7a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hbqlg" Nov 28 06:54:09 crc kubenswrapper[4946]: I1128 06:54:09.639930 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6941e832-4fbc-4667-9180-023d6ab06e7a-service-ca\") pod \"cluster-version-operator-5c965bbfc6-hbqlg\" (UID: \"6941e832-4fbc-4667-9180-023d6ab06e7a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hbqlg" Nov 28 06:54:09 crc kubenswrapper[4946]: I1128 06:54:09.639991 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/6941e832-4fbc-4667-9180-023d6ab06e7a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-hbqlg\" (UID: \"6941e832-4fbc-4667-9180-023d6ab06e7a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hbqlg" Nov 28 06:54:09 crc kubenswrapper[4946]: I1128 06:54:09.640051 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6941e832-4fbc-4667-9180-023d6ab06e7a-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-hbqlg\" (UID: \"6941e832-4fbc-4667-9180-023d6ab06e7a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hbqlg" Nov 28 06:54:09 crc kubenswrapper[4946]: I1128 06:54:09.741386 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6941e832-4fbc-4667-9180-023d6ab06e7a-service-ca\") pod \"cluster-version-operator-5c965bbfc6-hbqlg\" (UID: \"6941e832-4fbc-4667-9180-023d6ab06e7a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hbqlg" Nov 28 06:54:09 crc 
kubenswrapper[4946]: I1128 06:54:09.741555 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/6941e832-4fbc-4667-9180-023d6ab06e7a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-hbqlg\" (UID: \"6941e832-4fbc-4667-9180-023d6ab06e7a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hbqlg" Nov 28 06:54:09 crc kubenswrapper[4946]: I1128 06:54:09.741644 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6941e832-4fbc-4667-9180-023d6ab06e7a-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-hbqlg\" (UID: \"6941e832-4fbc-4667-9180-023d6ab06e7a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hbqlg" Nov 28 06:54:09 crc kubenswrapper[4946]: I1128 06:54:09.741701 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/6941e832-4fbc-4667-9180-023d6ab06e7a-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-hbqlg\" (UID: \"6941e832-4fbc-4667-9180-023d6ab06e7a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hbqlg" Nov 28 06:54:09 crc kubenswrapper[4946]: I1128 06:54:09.741734 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6941e832-4fbc-4667-9180-023d6ab06e7a-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-hbqlg\" (UID: \"6941e832-4fbc-4667-9180-023d6ab06e7a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hbqlg" Nov 28 06:54:09 crc kubenswrapper[4946]: I1128 06:54:09.741731 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/6941e832-4fbc-4667-9180-023d6ab06e7a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-hbqlg\" (UID: \"6941e832-4fbc-4667-9180-023d6ab06e7a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hbqlg" Nov 28 06:54:09 crc kubenswrapper[4946]: I1128 06:54:09.741817 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/6941e832-4fbc-4667-9180-023d6ab06e7a-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-hbqlg\" (UID: \"6941e832-4fbc-4667-9180-023d6ab06e7a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hbqlg" Nov 28 06:54:09 crc kubenswrapper[4946]: I1128 06:54:09.742934 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6941e832-4fbc-4667-9180-023d6ab06e7a-service-ca\") pod \"cluster-version-operator-5c965bbfc6-hbqlg\" (UID: \"6941e832-4fbc-4667-9180-023d6ab06e7a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hbqlg" Nov 28 06:54:09 crc kubenswrapper[4946]: I1128 06:54:09.749859 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6941e832-4fbc-4667-9180-023d6ab06e7a-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-hbqlg\" (UID: \"6941e832-4fbc-4667-9180-023d6ab06e7a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hbqlg" Nov 28 06:54:09 crc kubenswrapper[4946]: I1128 06:54:09.764204 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6941e832-4fbc-4667-9180-023d6ab06e7a-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-hbqlg\" (UID: \"6941e832-4fbc-4667-9180-023d6ab06e7a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hbqlg" Nov 28 06:54:09 crc kubenswrapper[4946]: I1128 06:54:09.815731 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hbqlg" Nov 28 06:54:09 crc kubenswrapper[4946]: I1128 06:54:09.988959 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:54:09 crc kubenswrapper[4946]: I1128 06:54:09.989040 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:54:09 crc kubenswrapper[4946]: E1128 06:54:09.989652 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:54:09 crc kubenswrapper[4946]: I1128 06:54:09.989176 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:54:09 crc kubenswrapper[4946]: E1128 06:54:09.989868 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:54:09 crc kubenswrapper[4946]: E1128 06:54:09.990016 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:54:10 crc kubenswrapper[4946]: I1128 06:54:10.207080 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hbqlg" event={"ID":"6941e832-4fbc-4667-9180-023d6ab06e7a","Type":"ContainerStarted","Data":"f4158a4788f4a9c311b89b89786f082373e10c9a24caf43a97d91c0081510556"} Nov 28 06:54:10 crc kubenswrapper[4946]: I1128 06:54:10.207128 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hbqlg" event={"ID":"6941e832-4fbc-4667-9180-023d6ab06e7a","Type":"ContainerStarted","Data":"90943dcbb81ed0e09798242487464482b4b5c1a4bb4d16ac724716a06a1220a6"} Nov 28 06:54:10 crc kubenswrapper[4946]: I1128 06:54:10.229015 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hbqlg" podStartSLOduration=85.228982005 podStartE2EDuration="1m25.228982005s" podCreationTimestamp="2025-11-28 06:52:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:54:10.228303248 +0000 UTC m=+104.606368369" watchObservedRunningTime="2025-11-28 06:54:10.228982005 +0000 UTC m=+104.607047156" Nov 28 06:54:10 crc kubenswrapper[4946]: I1128 06:54:10.989779 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gkg79" Nov 28 06:54:10 crc kubenswrapper[4946]: E1128 06:54:10.989974 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gkg79" podUID="4e6983b1-6887-4d13-8f9a-f261a745115f" Nov 28 06:54:11 crc kubenswrapper[4946]: I1128 06:54:11.989293 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:54:11 crc kubenswrapper[4946]: I1128 06:54:11.989399 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:54:11 crc kubenswrapper[4946]: E1128 06:54:11.989515 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:54:11 crc kubenswrapper[4946]: E1128 06:54:11.989680 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:54:11 crc kubenswrapper[4946]: I1128 06:54:11.989749 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:54:11 crc kubenswrapper[4946]: E1128 06:54:11.989924 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:54:12 crc kubenswrapper[4946]: I1128 06:54:12.989416 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gkg79" Nov 28 06:54:12 crc kubenswrapper[4946]: E1128 06:54:12.989605 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gkg79" podUID="4e6983b1-6887-4d13-8f9a-f261a745115f" Nov 28 06:54:13 crc kubenswrapper[4946]: I1128 06:54:13.989751 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:54:13 crc kubenswrapper[4946]: I1128 06:54:13.989818 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:54:13 crc kubenswrapper[4946]: E1128 06:54:13.990201 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:54:13 crc kubenswrapper[4946]: I1128 06:54:13.989694 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:54:13 crc kubenswrapper[4946]: E1128 06:54:13.990442 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:54:13 crc kubenswrapper[4946]: E1128 06:54:13.990549 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:54:14 crc kubenswrapper[4946]: I1128 06:54:14.989739 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gkg79" Nov 28 06:54:14 crc kubenswrapper[4946]: E1128 06:54:14.990767 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gkg79" podUID="4e6983b1-6887-4d13-8f9a-f261a745115f" Nov 28 06:54:15 crc kubenswrapper[4946]: I1128 06:54:15.989102 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:54:15 crc kubenswrapper[4946]: I1128 06:54:15.989112 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:54:15 crc kubenswrapper[4946]: I1128 06:54:15.989244 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:54:15 crc kubenswrapper[4946]: E1128 06:54:15.990870 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:54:15 crc kubenswrapper[4946]: E1128 06:54:15.990999 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:54:15 crc kubenswrapper[4946]: E1128 06:54:15.991031 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:54:15 crc kubenswrapper[4946]: I1128 06:54:15.991875 4946 scope.go:117] "RemoveContainer" containerID="9a4ec6ab6b9acae6303733b64d2906e1bd955b80202cc9af54adc15207de969b" Nov 28 06:54:15 crc kubenswrapper[4946]: E1128 06:54:15.992077 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pkknv_openshift-ovn-kubernetes(47e7046d-60dc-4dc0-b63e-f22f4ca5cd51)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" podUID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" Nov 28 06:54:16 crc kubenswrapper[4946]: I1128 06:54:16.989122 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gkg79" Nov 28 06:54:16 crc kubenswrapper[4946]: E1128 06:54:16.989370 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gkg79" podUID="4e6983b1-6887-4d13-8f9a-f261a745115f" Nov 28 06:54:17 crc kubenswrapper[4946]: I1128 06:54:17.988975 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:54:17 crc kubenswrapper[4946]: I1128 06:54:17.989083 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:54:17 crc kubenswrapper[4946]: I1128 06:54:17.989002 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:54:17 crc kubenswrapper[4946]: E1128 06:54:17.989196 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:54:17 crc kubenswrapper[4946]: E1128 06:54:17.989297 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:54:17 crc kubenswrapper[4946]: E1128 06:54:17.989439 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:54:18 crc kubenswrapper[4946]: I1128 06:54:18.989519 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gkg79" Nov 28 06:54:18 crc kubenswrapper[4946]: E1128 06:54:18.989911 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gkg79" podUID="4e6983b1-6887-4d13-8f9a-f261a745115f" Nov 28 06:54:19 crc kubenswrapper[4946]: I1128 06:54:19.254740 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9g9w4_857356d2-6585-41c6-9a2c-e06ef45f7303/kube-multus/1.log" Nov 28 06:54:19 crc kubenswrapper[4946]: I1128 06:54:19.255515 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9g9w4_857356d2-6585-41c6-9a2c-e06ef45f7303/kube-multus/0.log" Nov 28 06:54:19 crc kubenswrapper[4946]: I1128 06:54:19.255592 4946 generic.go:334] "Generic (PLEG): container finished" podID="857356d2-6585-41c6-9a2c-e06ef45f7303" containerID="c0a0a595eea30586e0c6859963642e750f7e52a69e98545d3dd7dcaff841e1b1" exitCode=1 Nov 28 06:54:19 crc kubenswrapper[4946]: I1128 06:54:19.255648 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9g9w4" event={"ID":"857356d2-6585-41c6-9a2c-e06ef45f7303","Type":"ContainerDied","Data":"c0a0a595eea30586e0c6859963642e750f7e52a69e98545d3dd7dcaff841e1b1"} Nov 28 06:54:19 crc kubenswrapper[4946]: I1128 06:54:19.255711 4946 scope.go:117] "RemoveContainer" containerID="db9d433d72f090d2c1d0036b7ba652dfca7bed7425a35bf87809ede7fd9fca8a" Nov 28 06:54:19 crc kubenswrapper[4946]: I1128 06:54:19.256502 4946 scope.go:117] "RemoveContainer" containerID="c0a0a595eea30586e0c6859963642e750f7e52a69e98545d3dd7dcaff841e1b1" Nov 28 06:54:19 crc kubenswrapper[4946]: E1128 06:54:19.256811 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-9g9w4_openshift-multus(857356d2-6585-41c6-9a2c-e06ef45f7303)\"" pod="openshift-multus/multus-9g9w4" podUID="857356d2-6585-41c6-9a2c-e06ef45f7303" Nov 28 06:54:19 crc kubenswrapper[4946]: I1128 06:54:19.989659 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:54:19 crc kubenswrapper[4946]: I1128 06:54:19.989718 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:54:19 crc kubenswrapper[4946]: E1128 06:54:19.990873 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:54:19 crc kubenswrapper[4946]: E1128 06:54:19.991026 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:54:19 crc kubenswrapper[4946]: I1128 06:54:19.989758 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:54:19 crc kubenswrapper[4946]: E1128 06:54:19.991257 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:54:20 crc kubenswrapper[4946]: I1128 06:54:20.262602 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9g9w4_857356d2-6585-41c6-9a2c-e06ef45f7303/kube-multus/1.log" Nov 28 06:54:20 crc kubenswrapper[4946]: I1128 06:54:20.989077 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gkg79" Nov 28 06:54:20 crc kubenswrapper[4946]: E1128 06:54:20.989300 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gkg79" podUID="4e6983b1-6887-4d13-8f9a-f261a745115f" Nov 28 06:54:21 crc kubenswrapper[4946]: I1128 06:54:21.989348 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:54:21 crc kubenswrapper[4946]: I1128 06:54:21.989379 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:54:21 crc kubenswrapper[4946]: I1128 06:54:21.989910 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:54:21 crc kubenswrapper[4946]: E1128 06:54:21.990920 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:54:21 crc kubenswrapper[4946]: E1128 06:54:21.991158 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:54:21 crc kubenswrapper[4946]: E1128 06:54:21.991219 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:54:22 crc kubenswrapper[4946]: I1128 06:54:22.989792 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gkg79" Nov 28 06:54:22 crc kubenswrapper[4946]: E1128 06:54:22.990015 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gkg79" podUID="4e6983b1-6887-4d13-8f9a-f261a745115f" Nov 28 06:54:23 crc kubenswrapper[4946]: I1128 06:54:23.989780 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:54:23 crc kubenswrapper[4946]: I1128 06:54:23.989867 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:54:23 crc kubenswrapper[4946]: I1128 06:54:23.989894 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:54:23 crc kubenswrapper[4946]: E1128 06:54:23.989960 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:54:23 crc kubenswrapper[4946]: E1128 06:54:23.990108 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:54:23 crc kubenswrapper[4946]: E1128 06:54:23.990203 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:54:24 crc kubenswrapper[4946]: I1128 06:54:24.989670 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gkg79" Nov 28 06:54:24 crc kubenswrapper[4946]: E1128 06:54:24.989879 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gkg79" podUID="4e6983b1-6887-4d13-8f9a-f261a745115f" Nov 28 06:54:25 crc kubenswrapper[4946]: I1128 06:54:25.989296 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:54:25 crc kubenswrapper[4946]: I1128 06:54:25.989373 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:54:25 crc kubenswrapper[4946]: I1128 06:54:25.989420 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:54:25 crc kubenswrapper[4946]: E1128 06:54:25.990292 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:54:25 crc kubenswrapper[4946]: E1128 06:54:25.990568 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:54:25 crc kubenswrapper[4946]: E1128 06:54:25.990660 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:54:26 crc kubenswrapper[4946]: E1128 06:54:26.007795 4946 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Nov 28 06:54:26 crc kubenswrapper[4946]: E1128 06:54:26.094808 4946 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 28 06:54:26 crc kubenswrapper[4946]: I1128 06:54:26.989527 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gkg79" Nov 28 06:54:26 crc kubenswrapper[4946]: E1128 06:54:26.990745 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gkg79" podUID="4e6983b1-6887-4d13-8f9a-f261a745115f" Nov 28 06:54:27 crc kubenswrapper[4946]: I1128 06:54:27.989764 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:54:27 crc kubenswrapper[4946]: I1128 06:54:27.989831 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:54:27 crc kubenswrapper[4946]: E1128 06:54:27.989967 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:54:27 crc kubenswrapper[4946]: I1128 06:54:27.990034 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:54:27 crc kubenswrapper[4946]: E1128 06:54:27.990224 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:54:27 crc kubenswrapper[4946]: E1128 06:54:27.990287 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:54:28 crc kubenswrapper[4946]: I1128 06:54:28.989644 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gkg79" Nov 28 06:54:28 crc kubenswrapper[4946]: E1128 06:54:28.990230 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gkg79" podUID="4e6983b1-6887-4d13-8f9a-f261a745115f" Nov 28 06:54:29 crc kubenswrapper[4946]: I1128 06:54:29.989948 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:54:29 crc kubenswrapper[4946]: I1128 06:54:29.990005 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:54:29 crc kubenswrapper[4946]: I1128 06:54:29.990066 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:54:29 crc kubenswrapper[4946]: E1128 06:54:29.990117 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:54:29 crc kubenswrapper[4946]: E1128 06:54:29.990303 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:54:29 crc kubenswrapper[4946]: E1128 06:54:29.990749 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:54:29 crc kubenswrapper[4946]: I1128 06:54:29.991088 4946 scope.go:117] "RemoveContainer" containerID="9a4ec6ab6b9acae6303733b64d2906e1bd955b80202cc9af54adc15207de969b" Nov 28 06:54:30 crc kubenswrapper[4946]: I1128 06:54:30.989354 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gkg79" Nov 28 06:54:30 crc kubenswrapper[4946]: E1128 06:54:30.989503 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gkg79" podUID="4e6983b1-6887-4d13-8f9a-f261a745115f" Nov 28 06:54:31 crc kubenswrapper[4946]: E1128 06:54:31.095839 4946 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Nov 28 06:54:31 crc kubenswrapper[4946]: I1128 06:54:31.304876 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pkknv_47e7046d-60dc-4dc0-b63e-f22f4ca5cd51/ovnkube-controller/3.log"
Nov 28 06:54:31 crc kubenswrapper[4946]: I1128 06:54:31.307442 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" event={"ID":"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51","Type":"ContainerStarted","Data":"c30a490f27e222171754b9e68dbf02ac61a2f09d7e76897a6b22805e9582c6c8"}
Nov 28 06:54:31 crc kubenswrapper[4946]: I1128 06:54:31.307902 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv"
Nov 28 06:54:31 crc kubenswrapper[4946]: I1128 06:54:31.337103 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" podStartSLOduration=105.337083196 podStartE2EDuration="1m45.337083196s" podCreationTimestamp="2025-11-28 06:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:54:31.335012835 +0000 UTC m=+125.713077946" watchObservedRunningTime="2025-11-28 06:54:31.337083196 +0000 UTC m=+125.715148317"
Nov 28 06:54:31 crc kubenswrapper[4946]: I1128 06:54:31.502016 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-gkg79"]
Nov 28 06:54:31 crc kubenswrapper[4946]: I1128 06:54:31.502152 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gkg79"
Nov 28 06:54:31 crc kubenswrapper[4946]: E1128 06:54:31.502266 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gkg79" podUID="4e6983b1-6887-4d13-8f9a-f261a745115f"
Nov 28 06:54:31 crc kubenswrapper[4946]: I1128 06:54:31.989275 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 06:54:31 crc kubenswrapper[4946]: I1128 06:54:31.989332 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 06:54:31 crc kubenswrapper[4946]: I1128 06:54:31.989901 4946 scope.go:117] "RemoveContainer" containerID="c0a0a595eea30586e0c6859963642e750f7e52a69e98545d3dd7dcaff841e1b1"
Nov 28 06:54:31 crc kubenswrapper[4946]: E1128 06:54:31.989932 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 28 06:54:31 crc kubenswrapper[4946]: I1128 06:54:31.990113 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 06:54:31 crc kubenswrapper[4946]: E1128 06:54:31.990317 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 28 06:54:31 crc kubenswrapper[4946]: E1128 06:54:31.991129 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 06:54:32 crc kubenswrapper[4946]: I1128 06:54:32.314074 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9g9w4_857356d2-6585-41c6-9a2c-e06ef45f7303/kube-multus/1.log"
Nov 28 06:54:32 crc kubenswrapper[4946]: I1128 06:54:32.314972 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9g9w4" event={"ID":"857356d2-6585-41c6-9a2c-e06ef45f7303","Type":"ContainerStarted","Data":"550d51e89fd2af743383ca0f5cafb114f9ccfa889b800eabc67d79127fe98802"}
Nov 28 06:54:32 crc kubenswrapper[4946]: I1128 06:54:32.989113 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gkg79"
Nov 28 06:54:32 crc kubenswrapper[4946]: E1128 06:54:32.989302 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gkg79" podUID="4e6983b1-6887-4d13-8f9a-f261a745115f"
Nov 28 06:54:33 crc kubenswrapper[4946]: I1128 06:54:33.989739 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 06:54:33 crc kubenswrapper[4946]: I1128 06:54:33.989836 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 06:54:33 crc kubenswrapper[4946]: E1128 06:54:33.989946 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 28 06:54:33 crc kubenswrapper[4946]: I1128 06:54:33.990064 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 06:54:33 crc kubenswrapper[4946]: E1128 06:54:33.990099 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 28 06:54:33 crc kubenswrapper[4946]: E1128 06:54:33.990280 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 06:54:34 crc kubenswrapper[4946]: I1128 06:54:34.989576 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gkg79"
Nov 28 06:54:34 crc kubenswrapper[4946]: E1128 06:54:34.989839 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gkg79" podUID="4e6983b1-6887-4d13-8f9a-f261a745115f"
Nov 28 06:54:35 crc kubenswrapper[4946]: I1128 06:54:35.989355 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 06:54:35 crc kubenswrapper[4946]: I1128 06:54:35.989412 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 06:54:35 crc kubenswrapper[4946]: I1128 06:54:35.989545 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 06:54:35 crc kubenswrapper[4946]: E1128 06:54:35.991744 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 28 06:54:35 crc kubenswrapper[4946]: E1128 06:54:35.991881 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 28 06:54:35 crc kubenswrapper[4946]: E1128 06:54:35.992133 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 06:54:36 crc kubenswrapper[4946]: I1128 06:54:36.989409 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gkg79"
Nov 28 06:54:36 crc kubenswrapper[4946]: I1128 06:54:36.994282 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Nov 28 06:54:36 crc kubenswrapper[4946]: I1128 06:54:36.994294 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Nov 28 06:54:37 crc kubenswrapper[4946]: I1128 06:54:37.989858 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 06:54:37 crc kubenswrapper[4946]: I1128 06:54:37.989986 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 06:54:37 crc kubenswrapper[4946]: I1128 06:54:37.990058 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 06:54:37 crc kubenswrapper[4946]: I1128 06:54:37.993080 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Nov 28 06:54:37 crc kubenswrapper[4946]: I1128 06:54:37.993516 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Nov 28 06:54:37 crc kubenswrapper[4946]: I1128 06:54:37.994116 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Nov 28 06:54:37 crc kubenswrapper[4946]: I1128 06:54:37.995047 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Nov 28 06:54:38 crc kubenswrapper[4946]: I1128 06:54:38.865818 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.765995 4946 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.826542 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tft2c"]
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.827565 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tft2c"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.830695 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hnzrs"]
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.831579 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-hnzrs"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.832308 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xv7dv"]
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.833645 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-xv7dv"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.833751 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.834380 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.834686 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.835125 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.836783 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-cqn4c"]
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.837241 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.837267 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-cqn4c"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.837511 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.837966 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.841575 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pddp7"]
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.842413 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-pddp7"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.843047 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-559n4"]
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.843601 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-559n4"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.848547 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-r7ztb"]
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.849290 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-r7ztb"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.849977 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.851953 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.859003 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.859980 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.860531 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.860684 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.860836 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.861173 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.861673 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Nov 28 06:54:39 crc kubenswrapper[4946]: W1128 06:54:39.863990 4946 reflector.go:561] object-"openshift-console-operator"/"trusted-ca": failed to list *v1.ConfigMap: configmaps "trusted-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-console-operator": no relationship found between node 'crc' and this object
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.864073 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Nov 28 06:54:39 crc kubenswrapper[4946]: E1128 06:54:39.864075 4946 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"trusted-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"trusted-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-console-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.864384 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.863993 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.864857 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.865023 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.865440 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.865587 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.865729 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.865830 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.865857 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.865920 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.866328 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.866498 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.866655 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.866718 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.866836 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.867608 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.867919 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.867948 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.868042 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.868304 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.868378 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.868578 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.868717 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.868970 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.872723 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.876004 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.876308 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.876478 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wb2pq"]
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.877210 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-wb2pq"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.882220 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-79r5c"]
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.883195 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xt6vv"]
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.883612 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-79r5c"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.883887 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xt6vv"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.885542 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-pw98g"]
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.899158 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pw98g"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.900290 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.901101 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.901157 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.901192 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.901318 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.901458 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.901648 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-m444h"]
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.902295 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-m444h"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.902425 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qhbbg"]
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.903017 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.903565 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.905215 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.905474 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.905633 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.905707 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.905862 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.905901 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.906054 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.906514 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.906721 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.907246 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.908106 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.908724 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-m8vtf"]
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.909330 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m8vtf"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.909651 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.910325 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd7a5b29-9bb7-4b7b-b39c-f8a96cc53fe0-config\") pod \"authentication-operator-69f744f599-pddp7\" (UID: \"cd7a5b29-9bb7-4b7b-b39c-f8a96cc53fe0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pddp7"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.910365 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68shv\" (UniqueName: \"kubernetes.io/projected/ee803337-53f2-4467-8f6c-602a16bda8e5-kube-api-access-68shv\") pod \"machine-api-operator-5694c8668f-hnzrs\" (UID: \"ee803337-53f2-4467-8f6c-602a16bda8e5\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hnzrs"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.910393 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/677259a2-4efb-4f49-ad8e-57357402c59a-audit-policies\") pod \"apiserver-7bbb656c7d-pw98g\" (UID: \"677259a2-4efb-4f49-ad8e-57357402c59a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pw98g"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.910416 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/79c0d15c-8fc9-4efd-b1ec-739718f313d9-service-ca\") pod \"console-f9d7485db-r7ztb\" (UID: \"79c0d15c-8fc9-4efd-b1ec-739718f313d9\") " pod="openshift-console/console-f9d7485db-r7ztb"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.910440 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b8b3ba0a-934b-4bc0-8bc4-58991860b7fe-audit-dir\") pod \"apiserver-76f77b778f-xv7dv\" (UID: \"b8b3ba0a-934b-4bc0-8bc4-58991860b7fe\") " pod="openshift-apiserver/apiserver-76f77b778f-xv7dv"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.910482 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/144f7f69-8a83-4b7f-83f6-fa6805bf1598-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-tft2c\" (UID: \"144f7f69-8a83-4b7f-83f6-fa6805bf1598\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tft2c"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.910505 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/677259a2-4efb-4f49-ad8e-57357402c59a-audit-dir\") pod \"apiserver-7bbb656c7d-pw98g\" (UID: \"677259a2-4efb-4f49-ad8e-57357402c59a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pw98g"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.910524 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/675e635c-71cb-4c9f-8c8a-dd25209abfd6-config\") pod \"controller-manager-879f6c89f-wb2pq\" (UID: \"675e635c-71cb-4c9f-8c8a-dd25209abfd6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wb2pq"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.910555 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3580d0e9-f1d5-413e-a76e-d22baa741afd-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-79r5c\" (UID: \"3580d0e9-f1d5-413e-a76e-d22baa741afd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-79r5c"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.910575 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b8b3ba0a-934b-4bc0-8bc4-58991860b7fe-audit\") pod \"apiserver-76f77b778f-xv7dv\" (UID: \"b8b3ba0a-934b-4bc0-8bc4-58991860b7fe\") " pod="openshift-apiserver/apiserver-76f77b778f-xv7dv"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.910598 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxcjj\" (UniqueName: \"kubernetes.io/projected/39074957-ff47-4c7f-8c6e-26370a118b2b-kube-api-access-kxcjj\") pod \"console-operator-58897d9998-cqn4c\" (UID: \"39074957-ff47-4c7f-8c6e-26370a118b2b\") " pod="openshift-console-operator/console-operator-58897d9998-cqn4c"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.910622 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd7a5b29-9bb7-4b7b-b39c-f8a96cc53fe0-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pddp7\" (UID: \"cd7a5b29-9bb7-4b7b-b39c-f8a96cc53fe0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pddp7"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.910643 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8b3ba0a-934b-4bc0-8bc4-58991860b7fe-config\") pod \"apiserver-76f77b778f-xv7dv\" (UID: \"b8b3ba0a-934b-4bc0-8bc4-58991860b7fe\") " pod="openshift-apiserver/apiserver-76f77b778f-xv7dv"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.910664 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79c0d15c-8fc9-4efd-b1ec-739718f313d9-trusted-ca-bundle\") pod \"console-f9d7485db-r7ztb\" (UID: \"79c0d15c-8fc9-4efd-b1ec-739718f313d9\") " pod="openshift-console/console-f9d7485db-r7ztb"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.910681 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/79c0d15c-8fc9-4efd-b1ec-739718f313d9-oauth-serving-cert\") pod \"console-f9d7485db-r7ztb\" (UID: \"79c0d15c-8fc9-4efd-b1ec-739718f313d9\") " pod="openshift-console/console-f9d7485db-r7ztb"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.910701 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p64vs\" (UniqueName: \"kubernetes.io/projected/79c0d15c-8fc9-4efd-b1ec-739718f313d9-kube-api-access-p64vs\") pod \"console-f9d7485db-r7ztb\" (UID: \"79c0d15c-8fc9-4efd-b1ec-739718f313d9\") " pod="openshift-console/console-f9d7485db-r7ztb"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.910719 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc42deee-84c0-4076-b553-c2bb55fd6807-config\") pod \"openshift-apiserver-operator-796bbdcf4f-xt6vv\" (UID: \"fc42deee-84c0-4076-b553-c2bb55fd6807\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xt6vv"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.910739 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7kzx\" (UniqueName: \"kubernetes.io/projected/fe1d7375-1ae2-4f2a-8fd0-a2d9ce15711a-kube-api-access-l7kzx\") pod \"openshift-controller-manager-operator-756b6f6bc6-559n4\" (UID: \"fe1d7375-1ae2-4f2a-8fd0-a2d9ce15711a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-559n4"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.910762 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcbfb\" (UniqueName: \"kubernetes.io/projected/cd7a5b29-9bb7-4b7b-b39c-f8a96cc53fe0-kube-api-access-xcbfb\") pod \"authentication-operator-69f744f599-pddp7\" (UID: \"cd7a5b29-9bb7-4b7b-b39c-f8a96cc53fe0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pddp7"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.910784 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39074957-ff47-4c7f-8c6e-26370a118b2b-config\") pod \"console-operator-58897d9998-cqn4c\" (UID: \"39074957-ff47-4c7f-8c6e-26370a118b2b\") " pod="openshift-console-operator/console-operator-58897d9998-cqn4c"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.910804 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/79c0d15c-8fc9-4efd-b1ec-739718f313d9-console-oauth-config\") pod \"console-f9d7485db-r7ztb\" (UID: \"79c0d15c-8fc9-4efd-b1ec-739718f313d9\") " pod="openshift-console/console-f9d7485db-r7ztb"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.910823 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpqkv\" (UniqueName: \"kubernetes.io/projected/fc42deee-84c0-4076-b553-c2bb55fd6807-kube-api-access-vpqkv\") pod \"openshift-apiserver-operator-796bbdcf4f-xt6vv\" (UID: \"fc42deee-84c0-4076-b553-c2bb55fd6807\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xt6vv"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.910846 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vtm7\" (UniqueName: \"kubernetes.io/projected/144f7f69-8a83-4b7f-83f6-fa6805bf1598-kube-api-access-6vtm7\") pod \"cluster-samples-operator-665b6dd947-tft2c\" (UID: \"144f7f69-8a83-4b7f-83f6-fa6805bf1598\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tft2c"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.910866 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ee803337-53f2-4467-8f6c-602a16bda8e5-images\") pod \"machine-api-operator-5694c8668f-hnzrs\" (UID: \"ee803337-53f2-4467-8f6c-602a16bda8e5\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hnzrs"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.910870 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.910890 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/677259a2-4efb-4f49-ad8e-57357402c59a-serving-cert\") pod \"apiserver-7bbb656c7d-pw98g\" (UID: \"677259a2-4efb-4f49-ad8e-57357402c59a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pw98g"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.910911 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/675e635c-71cb-4c9f-8c8a-dd25209abfd6-serving-cert\") pod \"controller-manager-879f6c89f-wb2pq\" (UID: \"675e635c-71cb-4c9f-8c8a-dd25209abfd6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wb2pq"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.910941 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd7a5b29-9bb7-4b7b-b39c-f8a96cc53fe0-service-ca-bundle\") pod \"authentication-operator-69f744f599-pddp7\" (UID: \"cd7a5b29-9bb7-4b7b-b39c-f8a96cc53fe0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pddp7"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.910963 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/677259a2-4efb-4f49-ad8e-57357402c59a-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-pw98g\" (UID: \"677259a2-4efb-4f49-ad8e-57357402c59a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pw98g"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.910986 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/677259a2-4efb-4f49-ad8e-57357402c59a-encryption-config\") pod \"apiserver-7bbb656c7d-pw98g\" (UID: \"677259a2-4efb-4f49-ad8e-57357402c59a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pw98g"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.911008 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd7a5b29-9bb7-4b7b-b39c-f8a96cc53fe0-serving-cert\") pod \"authentication-operator-69f744f599-pddp7\" (UID: \"cd7a5b29-9bb7-4b7b-b39c-f8a96cc53fe0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pddp7"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.911034 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe1d7375-1ae2-4f2a-8fd0-a2d9ce15711a-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-559n4\" (UID: \"fe1d7375-1ae2-4f2a-8fd0-a2d9ce15711a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-559n4"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.911055 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3580d0e9-f1d5-413e-a76e-d22baa741afd-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-79r5c\" (UID: \"3580d0e9-f1d5-413e-a76e-d22baa741afd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-79r5c"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.911076 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe1d7375-1ae2-4f2a-8fd0-a2d9ce15711a-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-559n4\" (UID: \"fe1d7375-1ae2-4f2a-8fd0-a2d9ce15711a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-559n4"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.911097 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee803337-53f2-4467-8f6c-602a16bda8e5-config\") pod \"machine-api-operator-5694c8668f-hnzrs\" (UID: \"ee803337-53f2-4467-8f6c-602a16bda8e5\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hnzrs"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.911117 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/675e635c-71cb-4c9f-8c8a-dd25209abfd6-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-wb2pq\" (UID: \"675e635c-71cb-4c9f-8c8a-dd25209abfd6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wb2pq"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.911138 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b8b3ba0a-934b-4bc0-8bc4-58991860b7fe-node-pullsecrets\") pod \"apiserver-76f77b778f-xv7dv\" (UID: \"b8b3ba0a-934b-4bc0-8bc4-58991860b7fe\") " pod="openshift-apiserver/apiserver-76f77b778f-xv7dv"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.911157 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ndm7\" (UniqueName: \"kubernetes.io/projected/675e635c-71cb-4c9f-8c8a-dd25209abfd6-kube-api-access-5ndm7\") pod \"controller-manager-879f6c89f-wb2pq\" (UID: \"675e635c-71cb-4c9f-8c8a-dd25209abfd6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wb2pq"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.911177 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/79c0d15c-8fc9-4efd-b1ec-739718f313d9-console-serving-cert\") pod \"console-f9d7485db-r7ztb\" (UID: \"79c0d15c-8fc9-4efd-b1ec-739718f313d9\") " pod="openshift-console/console-f9d7485db-r7ztb"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.911203 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39074957-ff47-4c7f-8c6e-26370a118b2b-serving-cert\") pod \"console-operator-58897d9998-cqn4c\" (UID: \"39074957-ff47-4c7f-8c6e-26370a118b2b\") " pod="openshift-console-operator/console-operator-58897d9998-cqn4c"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.911233 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8b3ba0a-934b-4bc0-8bc4-58991860b7fe-serving-cert\") pod \"apiserver-76f77b778f-xv7dv\" (UID: \"b8b3ba0a-934b-4bc0-8bc4-58991860b7fe\") " pod="openshift-apiserver/apiserver-76f77b778f-xv7dv"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.911262 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b8b3ba0a-934b-4bc0-8bc4-58991860b7fe-encryption-config\") pod \"apiserver-76f77b778f-xv7dv\" (UID: \"b8b3ba0a-934b-4bc0-8bc4-58991860b7fe\") " pod="openshift-apiserver/apiserver-76f77b778f-xv7dv"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.911286 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/677259a2-4efb-4f49-ad8e-57357402c59a-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-pw98g\" (UID: \"677259a2-4efb-4f49-ad8e-57357402c59a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pw98g"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.911308 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3580d0e9-f1d5-413e-a76e-d22baa741afd-config\") pod \"kube-apiserver-operator-766d6c64bb-79r5c\" (UID: \"3580d0e9-f1d5-413e-a76e-d22baa741afd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-79r5c"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.911330 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee803337-53f2-4467-8f6c-602a16bda8e5-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hnzrs\" (UID: \"ee803337-53f2-4467-8f6c-602a16bda8e5\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hnzrs"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.911352 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b8b3ba0a-934b-4bc0-8bc4-58991860b7fe-etcd-serving-ca\") pod \"apiserver-76f77b778f-xv7dv\" (UID: \"b8b3ba0a-934b-4bc0-8bc4-58991860b7fe\") " pod="openshift-apiserver/apiserver-76f77b778f-xv7dv"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.911370 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/677259a2-4efb-4f49-ad8e-57357402c59a-etcd-client\") pod \"apiserver-7bbb656c7d-pw98g\" (UID: \"677259a2-4efb-4f49-ad8e-57357402c59a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pw98g"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.911390 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8b3ba0a-934b-4bc0-8bc4-58991860b7fe-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xv7dv\" (UID: \"b8b3ba0a-934b-4bc0-8bc4-58991860b7fe\") " pod="openshift-apiserver/apiserver-76f77b778f-xv7dv"
Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.911413 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76dvl\" (UniqueName: \"kubernetes.io/projected/b8b3ba0a-934b-4bc0-8bc4-58991860b7fe-kube-api-access-76dvl\") pod \"apiserver-76f77b778f-xv7dv\" (UID: 
\"b8b3ba0a-934b-4bc0-8bc4-58991860b7fe\") " pod="openshift-apiserver/apiserver-76f77b778f-xv7dv" Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.911435 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw9wl\" (UniqueName: \"kubernetes.io/projected/677259a2-4efb-4f49-ad8e-57357402c59a-kube-api-access-mw9wl\") pod \"apiserver-7bbb656c7d-pw98g\" (UID: \"677259a2-4efb-4f49-ad8e-57357402c59a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pw98g" Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.911750 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.911899 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.913908 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.917639 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b8b3ba0a-934b-4bc0-8bc4-58991860b7fe-etcd-client\") pod \"apiserver-76f77b778f-xv7dv\" (UID: \"b8b3ba0a-934b-4bc0-8bc4-58991860b7fe\") " pod="openshift-apiserver/apiserver-76f77b778f-xv7dv" Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.917697 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/39074957-ff47-4c7f-8c6e-26370a118b2b-trusted-ca\") pod \"console-operator-58897d9998-cqn4c\" (UID: \"39074957-ff47-4c7f-8c6e-26370a118b2b\") " pod="openshift-console-operator/console-operator-58897d9998-cqn4c" Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.917726 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b8b3ba0a-934b-4bc0-8bc4-58991860b7fe-image-import-ca\") pod \"apiserver-76f77b778f-xv7dv\" (UID: \"b8b3ba0a-934b-4bc0-8bc4-58991860b7fe\") " pod="openshift-apiserver/apiserver-76f77b778f-xv7dv" Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.917761 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/675e635c-71cb-4c9f-8c8a-dd25209abfd6-client-ca\") pod \"controller-manager-879f6c89f-wb2pq\" (UID: \"675e635c-71cb-4c9f-8c8a-dd25209abfd6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wb2pq" Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.917815 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/79c0d15c-8fc9-4efd-b1ec-739718f313d9-console-config\") pod \"console-f9d7485db-r7ztb\" (UID: \"79c0d15c-8fc9-4efd-b1ec-739718f313d9\") " pod="openshift-console/console-f9d7485db-r7ztb" Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.917859 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc42deee-84c0-4076-b553-c2bb55fd6807-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-xt6vv\" (UID: \"fc42deee-84c0-4076-b553-c2bb55fd6807\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xt6vv" Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.918344 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.921702 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-hwk8p"] Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.922208 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tft2c"] Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.922229 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-4bfkx"] Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.923843 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.924086 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.924288 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.924498 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.924765 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.925099 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.925293 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5ldr8"] Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.925630 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.925661 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-pdk2l"] Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.925752 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.925993 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.926083 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pdk2l" Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.926236 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-4bfkx" Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.926326 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-dts9q"] Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.926428 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5ldr8" Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.926518 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-hwk8p" Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.926087 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.926117 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.926293 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.926969 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-dts9q" Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.927279 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-pq4rr"] Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.927547 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.928750 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.959879 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-js7vs"] Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.960431 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-4qwzn"] Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.960841 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-pq4rr" Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.961029 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.961142 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfm25"] Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.961148 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.961241 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-js7vs" Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.961363 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.961538 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4qwzn" Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.961685 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.962760 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.962887 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.962949 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.963062 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.963132 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.963215 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.963331 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.963396 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.963433 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.963780 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.964054 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.964115 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.972412 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.974663 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sm5f9"] Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.977362 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5tnsl"] Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.977749 4946 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.980638 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.982259 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfm25" Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.982386 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sm5f9" Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.990767 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5tnsl" Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.992397 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.992805 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 28 06:54:39 crc kubenswrapper[4946]: I1128 06:54:39.993921 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.013055 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.014592 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6q9fb"] Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.015508 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-qjlhx"] Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.016027 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6q9fb" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.016351 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-44jkz"] Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.017093 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-pm27x"] Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.016452 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-qjlhx" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.017511 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-44jkz" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.018924 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/190866a6-13b5-4de4-87c6-306883cb2998-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-5ldr8\" (UID: \"190866a6-13b5-4de4-87c6-306883cb2998\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5ldr8" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.018953 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b8b3ba0a-934b-4bc0-8bc4-58991860b7fe-audit-dir\") pod \"apiserver-76f77b778f-xv7dv\" (UID: \"b8b3ba0a-934b-4bc0-8bc4-58991860b7fe\") " pod="openshift-apiserver/apiserver-76f77b778f-xv7dv" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.018977 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/144f7f69-8a83-4b7f-83f6-fa6805bf1598-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-tft2c\" (UID: \"144f7f69-8a83-4b7f-83f6-fa6805bf1598\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tft2c" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.019004 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/475dd273-84f1-46de-9c16-3eaeca9b7d1c-trusted-ca\") pod \"ingress-operator-5b745b69d9-4qwzn\" (UID: \"475dd273-84f1-46de-9c16-3eaeca9b7d1c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4qwzn" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.019020 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f9de5c3-e681-4c09-a93e-9f95f03c7f5e-serving-cert\") pod \"route-controller-manager-6576b87f9c-kfm25\" (UID: \"1f9de5c3-e681-4c09-a93e-9f95f03c7f5e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfm25" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.019039 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdj66\" (UniqueName: \"kubernetes.io/projected/d8c9e4f9-d51d-47d9-8228-c9c58873dbe3-kube-api-access-jdj66\") pod \"marketplace-operator-79b997595-js7vs\" (UID: \"d8c9e4f9-d51d-47d9-8228-c9c58873dbe3\") " pod="openshift-marketplace/marketplace-operator-79b997595-js7vs" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.019061 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-m444h\" (UID: \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-m444h" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.019081 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jppt5\" (UniqueName: \"kubernetes.io/projected/0baa90e9-548b-4a20-9d9c-6a673f4ae0d0-kube-api-access-jppt5\") pod \"etcd-operator-b45778765-dts9q\" (UID: 
\"0baa90e9-548b-4a20-9d9c-6a673f4ae0d0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dts9q" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.019095 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d8c9e4f9-d51d-47d9-8228-c9c58873dbe3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-js7vs\" (UID: \"d8c9e4f9-d51d-47d9-8228-c9c58873dbe3\") " pod="openshift-marketplace/marketplace-operator-79b997595-js7vs" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.019109 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/830f7e38-bc1c-4897-bcc9-0266da4d74d0-audit-policies\") pod \"oauth-openshift-558db77b4-m444h\" (UID: \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-m444h" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.019125 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/677259a2-4efb-4f49-ad8e-57357402c59a-audit-dir\") pod \"apiserver-7bbb656c7d-pw98g\" (UID: \"677259a2-4efb-4f49-ad8e-57357402c59a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pw98g" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.019139 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/675e635c-71cb-4c9f-8c8a-dd25209abfd6-config\") pod \"controller-manager-879f6c89f-wb2pq\" (UID: \"675e635c-71cb-4c9f-8c8a-dd25209abfd6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wb2pq" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.019153 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/475dd273-84f1-46de-9c16-3eaeca9b7d1c-metrics-tls\") pod \"ingress-operator-5b745b69d9-4qwzn\" (UID: \"475dd273-84f1-46de-9c16-3eaeca9b7d1c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4qwzn" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.019177 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3580d0e9-f1d5-413e-a76e-d22baa741afd-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-79r5c\" (UID: \"3580d0e9-f1d5-413e-a76e-d22baa741afd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-79r5c" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.019191 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b8b3ba0a-934b-4bc0-8bc4-58991860b7fe-audit\") pod \"apiserver-76f77b778f-xv7dv\" (UID: \"b8b3ba0a-934b-4bc0-8bc4-58991860b7fe\") " pod="openshift-apiserver/apiserver-76f77b778f-xv7dv" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.019206 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxcjj\" (UniqueName: \"kubernetes.io/projected/39074957-ff47-4c7f-8c6e-26370a118b2b-kube-api-access-kxcjj\") pod \"console-operator-58897d9998-cqn4c\" (UID: \"39074957-ff47-4c7f-8c6e-26370a118b2b\") " pod="openshift-console-operator/console-operator-58897d9998-cqn4c" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.019222 4946 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd7a5b29-9bb7-4b7b-b39c-f8a96cc53fe0-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pddp7\" (UID: \"cd7a5b29-9bb7-4b7b-b39c-f8a96cc53fe0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pddp7" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.019238 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8b3ba0a-934b-4bc0-8bc4-58991860b7fe-config\") pod \"apiserver-76f77b778f-xv7dv\" (UID: \"b8b3ba0a-934b-4bc0-8bc4-58991860b7fe\") " pod="openshift-apiserver/apiserver-76f77b778f-xv7dv" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.019255 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f9de5c3-e681-4c09-a93e-9f95f03c7f5e-client-ca\") pod \"route-controller-manager-6576b87f9c-kfm25\" (UID: \"1f9de5c3-e681-4c09-a93e-9f95f03c7f5e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfm25" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.019271 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7kzx\" (UniqueName: \"kubernetes.io/projected/fe1d7375-1ae2-4f2a-8fd0-a2d9ce15711a-kube-api-access-l7kzx\") pod \"openshift-controller-manager-operator-756b6f6bc6-559n4\" (UID: \"fe1d7375-1ae2-4f2a-8fd0-a2d9ce15711a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-559n4" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.019310 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79c0d15c-8fc9-4efd-b1ec-739718f313d9-trusted-ca-bundle\") pod \"console-f9d7485db-r7ztb\" (UID: \"79c0d15c-8fc9-4efd-b1ec-739718f313d9\") " pod="openshift-console/console-f9d7485db-r7ztb" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.019325 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/79c0d15c-8fc9-4efd-b1ec-739718f313d9-oauth-serving-cert\") pod \"console-f9d7485db-r7ztb\" (UID: \"79c0d15c-8fc9-4efd-b1ec-739718f313d9\") " pod="openshift-console/console-f9d7485db-r7ztb" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.019339 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p64vs\" (UniqueName: \"kubernetes.io/projected/79c0d15c-8fc9-4efd-b1ec-739718f313d9-kube-api-access-p64vs\") pod \"console-f9d7485db-r7ztb\" (UID: \"79c0d15c-8fc9-4efd-b1ec-739718f313d9\") " pod="openshift-console/console-f9d7485db-r7ztb" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.019355 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc42deee-84c0-4076-b553-c2bb55fd6807-config\") pod \"openshift-apiserver-operator-796bbdcf4f-xt6vv\" (UID: \"fc42deee-84c0-4076-b553-c2bb55fd6807\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xt6vv" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.019370 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvk4b\" (UniqueName: 
\"kubernetes.io/projected/1f9de5c3-e681-4c09-a93e-9f95f03c7f5e-kube-api-access-rvk4b\") pod \"route-controller-manager-6576b87f9c-kfm25\" (UID: \"1f9de5c3-e681-4c09-a93e-9f95f03c7f5e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfm25" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.019386 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcbfb\" (UniqueName: \"kubernetes.io/projected/cd7a5b29-9bb7-4b7b-b39c-f8a96cc53fe0-kube-api-access-xcbfb\") pod \"authentication-operator-69f744f599-pddp7\" (UID: \"cd7a5b29-9bb7-4b7b-b39c-f8a96cc53fe0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pddp7" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.019403 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39074957-ff47-4c7f-8c6e-26370a118b2b-config\") pod \"console-operator-58897d9998-cqn4c\" (UID: \"39074957-ff47-4c7f-8c6e-26370a118b2b\") " pod="openshift-console-operator/console-operator-58897d9998-cqn4c" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.019417 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/79c0d15c-8fc9-4efd-b1ec-739718f313d9-console-oauth-config\") pod \"console-f9d7485db-r7ztb\" (UID: \"79c0d15c-8fc9-4efd-b1ec-739718f313d9\") " pod="openshift-console/console-f9d7485db-r7ztb" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.019779 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpqkv\" (UniqueName: \"kubernetes.io/projected/fc42deee-84c0-4076-b553-c2bb55fd6807-kube-api-access-vpqkv\") pod \"openshift-apiserver-operator-796bbdcf4f-xt6vv\" (UID: \"fc42deee-84c0-4076-b553-c2bb55fd6807\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xt6vv" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.019805 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vtm7\" (UniqueName: \"kubernetes.io/projected/144f7f69-8a83-4b7f-83f6-fa6805bf1598-kube-api-access-6vtm7\") pod \"cluster-samples-operator-665b6dd947-tft2c\" (UID: \"144f7f69-8a83-4b7f-83f6-fa6805bf1598\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tft2c" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.019820 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ee803337-53f2-4467-8f6c-602a16bda8e5-images\") pod \"machine-api-operator-5694c8668f-hnzrs\" (UID: \"ee803337-53f2-4467-8f6c-602a16bda8e5\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hnzrs" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.019836 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/677259a2-4efb-4f49-ad8e-57357402c59a-serving-cert\") pod \"apiserver-7bbb656c7d-pw98g\" (UID: \"677259a2-4efb-4f49-ad8e-57357402c59a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pw98g" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.019854 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/675e635c-71cb-4c9f-8c8a-dd25209abfd6-serving-cert\") pod \"controller-manager-879f6c89f-wb2pq\" (UID: 
\"675e635c-71cb-4c9f-8c8a-dd25209abfd6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wb2pq" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.019871 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-m444h\" (UID: \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-m444h" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.019891 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd7a5b29-9bb7-4b7b-b39c-f8a96cc53fe0-service-ca-bundle\") pod \"authentication-operator-69f744f599-pddp7\" (UID: \"cd7a5b29-9bb7-4b7b-b39c-f8a96cc53fe0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pddp7" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.019906 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/677259a2-4efb-4f49-ad8e-57357402c59a-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-pw98g\" (UID: \"677259a2-4efb-4f49-ad8e-57357402c59a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pw98g" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.019922 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/677259a2-4efb-4f49-ad8e-57357402c59a-encryption-config\") pod \"apiserver-7bbb656c7d-pw98g\" (UID: \"677259a2-4efb-4f49-ad8e-57357402c59a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pw98g" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.019940 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0baa90e9-548b-4a20-9d9c-6a673f4ae0d0-etcd-client\") pod \"etcd-operator-b45778765-dts9q\" (UID: \"0baa90e9-548b-4a20-9d9c-6a673f4ae0d0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dts9q" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.019955 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/190866a6-13b5-4de4-87c6-306883cb2998-config\") pod \"kube-controller-manager-operator-78b949d7b-5ldr8\" (UID: \"190866a6-13b5-4de4-87c6-306883cb2998\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5ldr8" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.019974 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd7a5b29-9bb7-4b7b-b39c-f8a96cc53fe0-serving-cert\") pod \"authentication-operator-69f744f599-pddp7\" (UID: \"cd7a5b29-9bb7-4b7b-b39c-f8a96cc53fe0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pddp7" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.019989 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/830f7e38-bc1c-4897-bcc9-0266da4d74d0-audit-dir\") pod \"oauth-openshift-558db77b4-m444h\" (UID: \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-m444h" Nov 28 
06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.020005 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-m444h\" (UID: \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-m444h" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.020024 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe1d7375-1ae2-4f2a-8fd0-a2d9ce15711a-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-559n4\" (UID: \"fe1d7375-1ae2-4f2a-8fd0-a2d9ce15711a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-559n4" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.020042 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0baa90e9-548b-4a20-9d9c-6a673f4ae0d0-serving-cert\") pod \"etcd-operator-b45778765-dts9q\" (UID: \"0baa90e9-548b-4a20-9d9c-6a673f4ae0d0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dts9q" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.020058 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee803337-53f2-4467-8f6c-602a16bda8e5-config\") pod \"machine-api-operator-5694c8668f-hnzrs\" (UID: \"ee803337-53f2-4467-8f6c-602a16bda8e5\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hnzrs" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.020079 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3580d0e9-f1d5-413e-a76e-d22baa741afd-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-79r5c\" (UID: \"3580d0e9-f1d5-413e-a76e-d22baa741afd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-79r5c" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.020096 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe1d7375-1ae2-4f2a-8fd0-a2d9ce15711a-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-559n4\" (UID: \"fe1d7375-1ae2-4f2a-8fd0-a2d9ce15711a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-559n4" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.020110 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/675e635c-71cb-4c9f-8c8a-dd25209abfd6-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-wb2pq\" (UID: \"675e635c-71cb-4c9f-8c8a-dd25209abfd6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wb2pq" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.020127 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-m444h\" (UID: \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-m444h" Nov 28 06:54:40 crc kubenswrapper[4946]: 
I1128 06:54:40.020144 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b8b3ba0a-934b-4bc0-8bc4-58991860b7fe-node-pullsecrets\") pod \"apiserver-76f77b778f-xv7dv\" (UID: \"b8b3ba0a-934b-4bc0-8bc4-58991860b7fe\") " pod="openshift-apiserver/apiserver-76f77b778f-xv7dv" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.020161 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ndm7\" (UniqueName: \"kubernetes.io/projected/675e635c-71cb-4c9f-8c8a-dd25209abfd6-kube-api-access-5ndm7\") pod \"controller-manager-879f6c89f-wb2pq\" (UID: \"675e635c-71cb-4c9f-8c8a-dd25209abfd6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wb2pq" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.020176 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/79c0d15c-8fc9-4efd-b1ec-739718f313d9-console-serving-cert\") pod \"console-f9d7485db-r7ztb\" (UID: \"79c0d15c-8fc9-4efd-b1ec-739718f313d9\") " pod="openshift-console/console-f9d7485db-r7ztb" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.020197 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/daef0f99-b6ac-47c5-a246-40a12f61603b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-m8vtf\" (UID: \"daef0f99-b6ac-47c5-a246-40a12f61603b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m8vtf" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.020215 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39074957-ff47-4c7f-8c6e-26370a118b2b-serving-cert\") pod \"console-operator-58897d9998-cqn4c\" (UID: \"39074957-ff47-4c7f-8c6e-26370a118b2b\") " pod="openshift-console-operator/console-operator-58897d9998-cqn4c" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.020230 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0baa90e9-548b-4a20-9d9c-6a673f4ae0d0-etcd-service-ca\") pod \"etcd-operator-b45778765-dts9q\" (UID: \"0baa90e9-548b-4a20-9d9c-6a673f4ae0d0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dts9q" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.020245 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-477cb\" (UniqueName: \"kubernetes.io/projected/830f7e38-bc1c-4897-bcc9-0266da4d74d0-kube-api-access-477cb\") pod \"oauth-openshift-558db77b4-m444h\" (UID: \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-m444h" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.020262 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/daef0f99-b6ac-47c5-a246-40a12f61603b-serving-cert\") pod \"openshift-config-operator-7777fb866f-m8vtf\" (UID: \"daef0f99-b6ac-47c5-a246-40a12f61603b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m8vtf" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.020278 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/b8b3ba0a-934b-4bc0-8bc4-58991860b7fe-serving-cert\") pod \"apiserver-76f77b778f-xv7dv\" (UID: \"b8b3ba0a-934b-4bc0-8bc4-58991860b7fe\") " pod="openshift-apiserver/apiserver-76f77b778f-xv7dv" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.020292 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b8b3ba0a-934b-4bc0-8bc4-58991860b7fe-encryption-config\") pod \"apiserver-76f77b778f-xv7dv\" (UID: \"b8b3ba0a-934b-4bc0-8bc4-58991860b7fe\") " pod="openshift-apiserver/apiserver-76f77b778f-xv7dv" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.020307 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/677259a2-4efb-4f49-ad8e-57357402c59a-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-pw98g\" (UID: \"677259a2-4efb-4f49-ad8e-57357402c59a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pw98g" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.020322 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6kk6\" (UniqueName: \"kubernetes.io/projected/daef0f99-b6ac-47c5-a246-40a12f61603b-kube-api-access-b6kk6\") pod \"openshift-config-operator-7777fb866f-m8vtf\" (UID: \"daef0f99-b6ac-47c5-a246-40a12f61603b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m8vtf" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.020341 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3580d0e9-f1d5-413e-a76e-d22baa741afd-config\") pod \"kube-apiserver-operator-766d6c64bb-79r5c\" (UID: \"3580d0e9-f1d5-413e-a76e-d22baa741afd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-79r5c" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.020357 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee803337-53f2-4467-8f6c-602a16bda8e5-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hnzrs\" (UID: \"ee803337-53f2-4467-8f6c-602a16bda8e5\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hnzrs" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.020373 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0baa90e9-548b-4a20-9d9c-6a673f4ae0d0-config\") pod \"etcd-operator-b45778765-dts9q\" (UID: \"0baa90e9-548b-4a20-9d9c-6a673f4ae0d0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dts9q" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.020389 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-m444h\" (UID: \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-m444h" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.020407 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b8b3ba0a-934b-4bc0-8bc4-58991860b7fe-etcd-serving-ca\") pod \"apiserver-76f77b778f-xv7dv\" (UID: \"b8b3ba0a-934b-4bc0-8bc4-58991860b7fe\") " 
pod="openshift-apiserver/apiserver-76f77b778f-xv7dv" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.020422 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/677259a2-4efb-4f49-ad8e-57357402c59a-etcd-client\") pod \"apiserver-7bbb656c7d-pw98g\" (UID: \"677259a2-4efb-4f49-ad8e-57357402c59a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pw98g" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.020437 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0baa90e9-548b-4a20-9d9c-6a673f4ae0d0-etcd-ca\") pod \"etcd-operator-b45778765-dts9q\" (UID: \"0baa90e9-548b-4a20-9d9c-6a673f4ae0d0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dts9q" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.020471 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8b3ba0a-934b-4bc0-8bc4-58991860b7fe-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xv7dv\" (UID: \"b8b3ba0a-934b-4bc0-8bc4-58991860b7fe\") " pod="openshift-apiserver/apiserver-76f77b778f-xv7dv" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.020487 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76dvl\" (UniqueName: \"kubernetes.io/projected/b8b3ba0a-934b-4bc0-8bc4-58991860b7fe-kube-api-access-76dvl\") pod \"apiserver-76f77b778f-xv7dv\" (UID: \"b8b3ba0a-934b-4bc0-8bc4-58991860b7fe\") " pod="openshift-apiserver/apiserver-76f77b778f-xv7dv" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.020512 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mw9wl\" (UniqueName: \"kubernetes.io/projected/677259a2-4efb-4f49-ad8e-57357402c59a-kube-api-access-mw9wl\") pod \"apiserver-7bbb656c7d-pw98g\" (UID: \"677259a2-4efb-4f49-ad8e-57357402c59a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pw98g" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.020528 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/39074957-ff47-4c7f-8c6e-26370a118b2b-trusted-ca\") pod \"console-operator-58897d9998-cqn4c\" (UID: \"39074957-ff47-4c7f-8c6e-26370a118b2b\") " pod="openshift-console-operator/console-operator-58897d9998-cqn4c" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.020552 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b8b3ba0a-934b-4bc0-8bc4-58991860b7fe-etcd-client\") pod \"apiserver-76f77b778f-xv7dv\" (UID: \"b8b3ba0a-934b-4bc0-8bc4-58991860b7fe\") " pod="openshift-apiserver/apiserver-76f77b778f-xv7dv" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.020567 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-m444h\" (UID: \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-m444h" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.020585 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/b8b3ba0a-934b-4bc0-8bc4-58991860b7fe-image-import-ca\") pod \"apiserver-76f77b778f-xv7dv\" (UID: \"b8b3ba0a-934b-4bc0-8bc4-58991860b7fe\") " pod="openshift-apiserver/apiserver-76f77b778f-xv7dv" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.020601 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/675e635c-71cb-4c9f-8c8a-dd25209abfd6-client-ca\") pod \"controller-manager-879f6c89f-wb2pq\" (UID: \"675e635c-71cb-4c9f-8c8a-dd25209abfd6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wb2pq" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.020617 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwwjb\" (UniqueName: \"kubernetes.io/projected/475dd273-84f1-46de-9c16-3eaeca9b7d1c-kube-api-access-wwwjb\") pod \"ingress-operator-5b745b69d9-4qwzn\" (UID: \"475dd273-84f1-46de-9c16-3eaeca9b7d1c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4qwzn" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.020633 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f9de5c3-e681-4c09-a93e-9f95f03c7f5e-config\") pod \"route-controller-manager-6576b87f9c-kfm25\" (UID: \"1f9de5c3-e681-4c09-a93e-9f95f03c7f5e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfm25" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.020649 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-m444h\" (UID: \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-m444h" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.020674 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/79c0d15c-8fc9-4efd-b1ec-739718f313d9-console-config\") pod \"console-f9d7485db-r7ztb\" (UID: \"79c0d15c-8fc9-4efd-b1ec-739718f313d9\") " pod="openshift-console/console-f9d7485db-r7ztb" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.020688 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-m444h\" (UID: \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-m444h" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.020707 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-m444h\" (UID: \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-m444h" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.020727 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/190866a6-13b5-4de4-87c6-306883cb2998-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-5ldr8\" (UID: \"190866a6-13b5-4de4-87c6-306883cb2998\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5ldr8" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.020745 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-m444h\" (UID: \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-m444h" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.020763 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-m444h\" (UID: \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-m444h" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.020784 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d8c9e4f9-d51d-47d9-8228-c9c58873dbe3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-js7vs\" (UID: \"d8c9e4f9-d51d-47d9-8228-c9c58873dbe3\") " pod="openshift-marketplace/marketplace-operator-79b997595-js7vs" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.020804 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc42deee-84c0-4076-b553-c2bb55fd6807-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-xt6vv\" (UID: \"fc42deee-84c0-4076-b553-c2bb55fd6807\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xt6vv" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.020822 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd7a5b29-9bb7-4b7b-b39c-f8a96cc53fe0-config\") pod \"authentication-operator-69f744f599-pddp7\" (UID: \"cd7a5b29-9bb7-4b7b-b39c-f8a96cc53fe0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pddp7" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.020838 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/475dd273-84f1-46de-9c16-3eaeca9b7d1c-bound-sa-token\") pod \"ingress-operator-5b745b69d9-4qwzn\" (UID: \"475dd273-84f1-46de-9c16-3eaeca9b7d1c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4qwzn" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.020856 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcmnj\" (UniqueName: \"kubernetes.io/projected/b4d03410-0c4e-4aa4-b760-8d8179d791a5-kube-api-access-kcmnj\") pod \"downloads-7954f5f757-4bfkx\" (UID: \"b4d03410-0c4e-4aa4-b760-8d8179d791a5\") " pod="openshift-console/downloads-7954f5f757-4bfkx" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.020887 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-68shv\" (UniqueName: \"kubernetes.io/projected/ee803337-53f2-4467-8f6c-602a16bda8e5-kube-api-access-68shv\") pod \"machine-api-operator-5694c8668f-hnzrs\" (UID: \"ee803337-53f2-4467-8f6c-602a16bda8e5\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hnzrs" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.020903 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/677259a2-4efb-4f49-ad8e-57357402c59a-audit-policies\") pod \"apiserver-7bbb656c7d-pw98g\" (UID: \"677259a2-4efb-4f49-ad8e-57357402c59a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pw98g" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.020920 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/79c0d15c-8fc9-4efd-b1ec-739718f313d9-service-ca\") pod \"console-f9d7485db-r7ztb\" (UID: \"79c0d15c-8fc9-4efd-b1ec-739718f313d9\") " pod="openshift-console/console-f9d7485db-r7ztb" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.020961 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b8b3ba0a-934b-4bc0-8bc4-58991860b7fe-audit-dir\") pod \"apiserver-76f77b778f-xv7dv\" (UID: \"b8b3ba0a-934b-4bc0-8bc4-58991860b7fe\") " pod="openshift-apiserver/apiserver-76f77b778f-xv7dv" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.021864 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/79c0d15c-8fc9-4efd-b1ec-739718f313d9-service-ca\") pod \"console-f9d7485db-r7ztb\" (UID: \"79c0d15c-8fc9-4efd-b1ec-739718f313d9\") " pod="openshift-console/console-f9d7485db-r7ztb" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.022586 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/677259a2-4efb-4f49-ad8e-57357402c59a-audit-dir\") pod \"apiserver-7bbb656c7d-pw98g\" (UID: \"677259a2-4efb-4f49-ad8e-57357402c59a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pw98g" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.023332 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8b3ba0a-934b-4bc0-8bc4-58991860b7fe-config\") pod \"apiserver-76f77b778f-xv7dv\" (UID: \"b8b3ba0a-934b-4bc0-8bc4-58991860b7fe\") " pod="openshift-apiserver/apiserver-76f77b778f-xv7dv" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.023557 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/677259a2-4efb-4f49-ad8e-57357402c59a-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-pw98g\" (UID: \"677259a2-4efb-4f49-ad8e-57357402c59a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pw98g" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.023861 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39074957-ff47-4c7f-8c6e-26370a118b2b-config\") pod \"console-operator-58897d9998-cqn4c\" (UID: \"39074957-ff47-4c7f-8c6e-26370a118b2b\") " pod="openshift-console-operator/console-operator-58897d9998-cqn4c" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.024496 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/ee803337-53f2-4467-8f6c-602a16bda8e5-images\") pod \"machine-api-operator-5694c8668f-hnzrs\" (UID: \"ee803337-53f2-4467-8f6c-602a16bda8e5\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hnzrs" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.024576 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79c0d15c-8fc9-4efd-b1ec-739718f313d9-trusted-ca-bundle\") pod \"console-f9d7485db-r7ztb\" (UID: \"79c0d15c-8fc9-4efd-b1ec-739718f313d9\") " pod="openshift-console/console-f9d7485db-r7ztb" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.026392 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b8b3ba0a-934b-4bc0-8bc4-58991860b7fe-etcd-serving-ca\") pod \"apiserver-76f77b778f-xv7dv\" (UID: \"b8b3ba0a-934b-4bc0-8bc4-58991860b7fe\") " pod="openshift-apiserver/apiserver-76f77b778f-xv7dv" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.029031 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6k9gx"] Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.029784 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-tn22h"] Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.030038 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/677259a2-4efb-4f49-ad8e-57357402c59a-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-pw98g\" (UID: \"677259a2-4efb-4f49-ad8e-57357402c59a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pw98g" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.030369 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4swtq"] Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.030643 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee803337-53f2-4467-8f6c-602a16bda8e5-config\") pod \"machine-api-operator-5694c8668f-hnzrs\" (UID: \"ee803337-53f2-4467-8f6c-602a16bda8e5\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hnzrs" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.030715 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd7a5b29-9bb7-4b7b-b39c-f8a96cc53fe0-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pddp7\" (UID: \"cd7a5b29-9bb7-4b7b-b39c-f8a96cc53fe0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pddp7" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.031056 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.031066 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z4vpd"] Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.031320 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe1d7375-1ae2-4f2a-8fd0-a2d9ce15711a-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-559n4\" (UID: \"fe1d7375-1ae2-4f2a-8fd0-a2d9ce15711a\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-559n4" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.031354 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/675e635c-71cb-4c9f-8c8a-dd25209abfd6-config\") pod \"controller-manager-879f6c89f-wb2pq\" (UID: \"675e635c-71cb-4c9f-8c8a-dd25209abfd6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wb2pq" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.031788 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/79c0d15c-8fc9-4efd-b1ec-739718f313d9-console-oauth-config\") pod \"console-f9d7485db-r7ztb\" (UID: \"79c0d15c-8fc9-4efd-b1ec-739718f313d9\") " pod="openshift-console/console-f9d7485db-r7ztb" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.031903 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-pm27x" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.032390 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6k9gx" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.033023 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tn22h" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.033188 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b8b3ba0a-934b-4bc0-8bc4-58991860b7fe-audit\") pod \"apiserver-76f77b778f-xv7dv\" (UID: \"b8b3ba0a-934b-4bc0-8bc4-58991860b7fe\") " pod="openshift-apiserver/apiserver-76f77b778f-xv7dv" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.032682 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3580d0e9-f1d5-413e-a76e-d22baa741afd-config\") pod \"kube-apiserver-operator-766d6c64bb-79r5c\" (UID: \"3580d0e9-f1d5-413e-a76e-d22baa741afd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-79r5c" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.033454 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4swtq" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.033738 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z4vpd" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.033885 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd7a5b29-9bb7-4b7b-b39c-f8a96cc53fe0-serving-cert\") pod \"authentication-operator-69f744f599-pddp7\" (UID: \"cd7a5b29-9bb7-4b7b-b39c-f8a96cc53fe0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pddp7" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.032589 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/675e635c-71cb-4c9f-8c8a-dd25209abfd6-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-wb2pq\" (UID: \"675e635c-71cb-4c9f-8c8a-dd25209abfd6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wb2pq" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.032646 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b8b3ba0a-934b-4bc0-8bc4-58991860b7fe-node-pullsecrets\") pod \"apiserver-76f77b778f-xv7dv\" (UID: \"b8b3ba0a-934b-4bc0-8bc4-58991860b7fe\") " pod="openshift-apiserver/apiserver-76f77b778f-xv7dv" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.033518 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/677259a2-4efb-4f49-ad8e-57357402c59a-serving-cert\") pod \"apiserver-7bbb656c7d-pw98g\" (UID: \"677259a2-4efb-4f49-ad8e-57357402c59a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pw98g" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.034595 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/79c0d15c-8fc9-4efd-b1ec-739718f313d9-oauth-serving-cert\") pod \"console-f9d7485db-r7ztb\" (UID: \"79c0d15c-8fc9-4efd-b1ec-739718f313d9\") " pod="openshift-console/console-f9d7485db-r7ztb" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.033788 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-rzxwh"] Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.034850 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd7a5b29-9bb7-4b7b-b39c-f8a96cc53fe0-service-ca-bundle\") pod \"authentication-operator-69f744f599-pddp7\" (UID: \"cd7a5b29-9bb7-4b7b-b39c-f8a96cc53fe0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pddp7" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.035108 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39074957-ff47-4c7f-8c6e-26370a118b2b-serving-cert\") pod \"console-operator-58897d9998-cqn4c\" (UID: \"39074957-ff47-4c7f-8c6e-26370a118b2b\") " pod="openshift-console-operator/console-operator-58897d9998-cqn4c" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.036073 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rzxwh" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.036215 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8b3ba0a-934b-4bc0-8bc4-58991860b7fe-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xv7dv\" (UID: \"b8b3ba0a-934b-4bc0-8bc4-58991860b7fe\") " pod="openshift-apiserver/apiserver-76f77b778f-xv7dv" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.036265 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/144f7f69-8a83-4b7f-83f6-fa6805bf1598-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-tft2c\" (UID: \"144f7f69-8a83-4b7f-83f6-fa6805bf1598\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tft2c" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.020898 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc42deee-84c0-4076-b553-c2bb55fd6807-config\") pod \"openshift-apiserver-operator-796bbdcf4f-xt6vv\" (UID: \"fc42deee-84c0-4076-b553-c2bb55fd6807\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xt6vv" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.037247 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b8b3ba0a-934b-4bc0-8bc4-58991860b7fe-image-import-ca\") pod \"apiserver-76f77b778f-xv7dv\" (UID: \"b8b3ba0a-934b-4bc0-8bc4-58991860b7fe\") " pod="openshift-apiserver/apiserver-76f77b778f-xv7dv" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.037819 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/79c0d15c-8fc9-4efd-b1ec-739718f313d9-console-serving-cert\") pod \"console-f9d7485db-r7ztb\" (UID: \"79c0d15c-8fc9-4efd-b1ec-739718f313d9\") " pod="openshift-console/console-f9d7485db-r7ztb" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.037883 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-r8n92"] Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.038018 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/675e635c-71cb-4c9f-8c8a-dd25209abfd6-client-ca\") pod \"controller-manager-879f6c89f-wb2pq\" (UID: \"675e635c-71cb-4c9f-8c8a-dd25209abfd6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wb2pq" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.038584 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd7a5b29-9bb7-4b7b-b39c-f8a96cc53fe0-config\") pod \"authentication-operator-69f744f599-pddp7\" (UID: \"cd7a5b29-9bb7-4b7b-b39c-f8a96cc53fe0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pddp7" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.038869 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mnst8"] Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.039268 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/677259a2-4efb-4f49-ad8e-57357402c59a-audit-policies\") pod \"apiserver-7bbb656c7d-pw98g\" (UID: \"677259a2-4efb-4f49-ad8e-57357402c59a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pw98g" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.039316 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/677259a2-4efb-4f49-ad8e-57357402c59a-etcd-client\") pod \"apiserver-7bbb656c7d-pw98g\" (UID: \"677259a2-4efb-4f49-ad8e-57357402c59a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pw98g" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.039901 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-r8n92" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.040839 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/79c0d15c-8fc9-4efd-b1ec-739718f313d9-console-config\") pod \"console-f9d7485db-r7ztb\" (UID: \"79c0d15c-8fc9-4efd-b1ec-739718f313d9\") " pod="openshift-console/console-f9d7485db-r7ztb" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.041677 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b8b3ba0a-934b-4bc0-8bc4-58991860b7fe-encryption-config\") pod \"apiserver-76f77b778f-xv7dv\" (UID: \"b8b3ba0a-934b-4bc0-8bc4-58991860b7fe\") " pod="openshift-apiserver/apiserver-76f77b778f-xv7dv" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.044026 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe1d7375-1ae2-4f2a-8fd0-a2d9ce15711a-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-559n4\" (UID: \"fe1d7375-1ae2-4f2a-8fd0-a2d9ce15711a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-559n4" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.044730 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-hxz9b"] Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.045114 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mnst8" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.045771 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b8b3ba0a-934b-4bc0-8bc4-58991860b7fe-etcd-client\") pod \"apiserver-76f77b778f-xv7dv\" (UID: \"b8b3ba0a-934b-4bc0-8bc4-58991860b7fe\") " pod="openshift-apiserver/apiserver-76f77b778f-xv7dv" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.046123 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc42deee-84c0-4076-b553-c2bb55fd6807-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-xt6vv\" (UID: \"fc42deee-84c0-4076-b553-c2bb55fd6807\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xt6vv" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.046986 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3580d0e9-f1d5-413e-a76e-d22baa741afd-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-79r5c\" (UID: \"3580d0e9-f1d5-413e-a76e-d22baa741afd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-79r5c" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.048744 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8b3ba0a-934b-4bc0-8bc4-58991860b7fe-serving-cert\") pod \"apiserver-76f77b778f-xv7dv\" (UID: \"b8b3ba0a-934b-4bc0-8bc4-58991860b7fe\") " pod="openshift-apiserver/apiserver-76f77b778f-xv7dv" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.049722 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.049747 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405205-hbxpf"] Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.050112 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hxz9b" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.050907 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xv7dv"] Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.050946 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hnzrs"] Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.050959 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-qvr67"] Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.051310 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405205-hbxpf" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.052611 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pddp7"] Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.052642 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-cqn4c"] Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.052696 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-559n4"] Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.052723 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-qvr67" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.052915 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee803337-53f2-4467-8f6c-602a16bda8e5-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hnzrs\" (UID: \"ee803337-53f2-4467-8f6c-602a16bda8e5\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hnzrs" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.053131 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/677259a2-4efb-4f49-ad8e-57357402c59a-encryption-config\") pod \"apiserver-7bbb656c7d-pw98g\" (UID: \"677259a2-4efb-4f49-ad8e-57357402c59a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pw98g" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.053451 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wb2pq"] Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.057042 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xt6vv"] Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.057083 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-pw98g"] Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.057093 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-r7ztb"] Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.064568 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-m444h"] Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.064653 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-m8vtf"] Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.068211 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qhbbg"] Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.071549 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.072029 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6q9fb"] Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.073250 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/marketplace-operator-79b997595-js7vs"] Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.074072 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/675e635c-71cb-4c9f-8c8a-dd25209abfd6-serving-cert\") pod \"controller-manager-879f6c89f-wb2pq\" (UID: \"675e635c-71cb-4c9f-8c8a-dd25209abfd6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wb2pq" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.074781 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4swtq"] Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.075958 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-4bfkx"] Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.077261 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-79r5c"] Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.078431 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-pm27x"] Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.079612 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5ldr8"] Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.081537 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-r8n92"] Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.083082 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-44jkz"] Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.084616 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-4qwzn"] Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.085843 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-dts9q"] Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.087523 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6k9gx"] Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.089033 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-vcgqw"] Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.089701 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.089709 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-vcgqw" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.090396 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-sc95q"] Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.091890 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mnst8"] Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.092006 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-sc95q" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.093157 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z4vpd"] Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.094503 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-pq4rr"] Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.095925 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfm25"] Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.097901 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-pdk2l"] Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.099019 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-hxz9b"] Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.100189 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sm5f9"] Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.101220 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-qvr67"] Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.103023 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5tnsl"] Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.104160 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-qjlhx"] Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.105153 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-tn22h"] Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.106190 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-sc95q"] Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.107293 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405205-hbxpf"] Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.108403 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-5q8lj"] Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.109265 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-5q8lj" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.109351 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-5q8lj"] Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.110069 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.122411 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d8c9e4f9-d51d-47d9-8228-c9c58873dbe3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-js7vs\" (UID: \"d8c9e4f9-d51d-47d9-8228-c9c58873dbe3\") " pod="openshift-marketplace/marketplace-operator-79b997595-js7vs" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.122664 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/475dd273-84f1-46de-9c16-3eaeca9b7d1c-bound-sa-token\") pod \"ingress-operator-5b745b69d9-4qwzn\" (UID: \"475dd273-84f1-46de-9c16-3eaeca9b7d1c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4qwzn" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.122867 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcmnj\" (UniqueName: \"kubernetes.io/projected/b4d03410-0c4e-4aa4-b760-8d8179d791a5-kube-api-access-kcmnj\") pod \"downloads-7954f5f757-4bfkx\" (UID: \"b4d03410-0c4e-4aa4-b760-8d8179d791a5\") " pod="openshift-console/downloads-7954f5f757-4bfkx" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.123038 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/475dd273-84f1-46de-9c16-3eaeca9b7d1c-trusted-ca\") pod \"ingress-operator-5b745b69d9-4qwzn\" (UID: \"475dd273-84f1-46de-9c16-3eaeca9b7d1c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4qwzn" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.123153 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f9de5c3-e681-4c09-a93e-9f95f03c7f5e-serving-cert\") pod \"route-controller-manager-6576b87f9c-kfm25\" (UID: \"1f9de5c3-e681-4c09-a93e-9f95f03c7f5e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfm25" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.123231 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/190866a6-13b5-4de4-87c6-306883cb2998-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-5ldr8\" (UID: \"190866a6-13b5-4de4-87c6-306883cb2998\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5ldr8" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.123311 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdj66\" (UniqueName: \"kubernetes.io/projected/d8c9e4f9-d51d-47d9-8228-c9c58873dbe3-kube-api-access-jdj66\") pod \"marketplace-operator-79b997595-js7vs\" (UID: \"d8c9e4f9-d51d-47d9-8228-c9c58873dbe3\") " pod="openshift-marketplace/marketplace-operator-79b997595-js7vs" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.123391 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-m444h\" (UID: \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-m444h" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.123494 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jppt5\" (UniqueName: \"kubernetes.io/projected/0baa90e9-548b-4a20-9d9c-6a673f4ae0d0-kube-api-access-jppt5\") pod \"etcd-operator-b45778765-dts9q\" (UID: \"0baa90e9-548b-4a20-9d9c-6a673f4ae0d0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dts9q" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.123673 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d8c9e4f9-d51d-47d9-8228-c9c58873dbe3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-js7vs\" (UID: \"d8c9e4f9-d51d-47d9-8228-c9c58873dbe3\") " pod="openshift-marketplace/marketplace-operator-79b997595-js7vs" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.123756 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/830f7e38-bc1c-4897-bcc9-0266da4d74d0-audit-policies\") pod \"oauth-openshift-558db77b4-m444h\" (UID: \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-m444h" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.123831 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/475dd273-84f1-46de-9c16-3eaeca9b7d1c-metrics-tls\") pod \"ingress-operator-5b745b69d9-4qwzn\" (UID: \"475dd273-84f1-46de-9c16-3eaeca9b7d1c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4qwzn" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.123936 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f9de5c3-e681-4c09-a93e-9f95f03c7f5e-client-ca\") pod \"route-controller-manager-6576b87f9c-kfm25\" (UID: \"1f9de5c3-e681-4c09-a93e-9f95f03c7f5e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfm25" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.124008 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvk4b\" (UniqueName: \"kubernetes.io/projected/1f9de5c3-e681-4c09-a93e-9f95f03c7f5e-kube-api-access-rvk4b\") pod \"route-controller-manager-6576b87f9c-kfm25\" (UID: \"1f9de5c3-e681-4c09-a93e-9f95f03c7f5e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfm25" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.124137 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0baa90e9-548b-4a20-9d9c-6a673f4ae0d0-etcd-client\") pod \"etcd-operator-b45778765-dts9q\" (UID: \"0baa90e9-548b-4a20-9d9c-6a673f4ae0d0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dts9q" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.124244 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/190866a6-13b5-4de4-87c6-306883cb2998-config\") pod \"kube-controller-manager-operator-78b949d7b-5ldr8\" 
(UID: \"190866a6-13b5-4de4-87c6-306883cb2998\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5ldr8" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.124350 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-m444h\" (UID: \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-m444h" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.124456 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-m444h\" (UID: \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-m444h" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.124579 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/830f7e38-bc1c-4897-bcc9-0266da4d74d0-audit-dir\") pod \"oauth-openshift-558db77b4-m444h\" (UID: \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-m444h" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.124665 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0baa90e9-548b-4a20-9d9c-6a673f4ae0d0-serving-cert\") pod \"etcd-operator-b45778765-dts9q\" (UID: \"0baa90e9-548b-4a20-9d9c-6a673f4ae0d0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dts9q" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.124784 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-m444h\" (UID: \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-m444h" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.124865 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/daef0f99-b6ac-47c5-a246-40a12f61603b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-m8vtf\" (UID: \"daef0f99-b6ac-47c5-a246-40a12f61603b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m8vtf" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.124935 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0baa90e9-548b-4a20-9d9c-6a673f4ae0d0-etcd-service-ca\") pod \"etcd-operator-b45778765-dts9q\" (UID: \"0baa90e9-548b-4a20-9d9c-6a673f4ae0d0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dts9q" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.125003 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-477cb\" (UniqueName: \"kubernetes.io/projected/830f7e38-bc1c-4897-bcc9-0266da4d74d0-kube-api-access-477cb\") pod \"oauth-openshift-558db77b4-m444h\" (UID: \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-m444h" Nov 28 06:54:40 
crc kubenswrapper[4946]: I1128 06:54:40.125081 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/daef0f99-b6ac-47c5-a246-40a12f61603b-serving-cert\") pod \"openshift-config-operator-7777fb866f-m8vtf\" (UID: \"daef0f99-b6ac-47c5-a246-40a12f61603b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m8vtf" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.125161 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0baa90e9-548b-4a20-9d9c-6a673f4ae0d0-config\") pod \"etcd-operator-b45778765-dts9q\" (UID: \"0baa90e9-548b-4a20-9d9c-6a673f4ae0d0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dts9q" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.125239 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6kk6\" (UniqueName: \"kubernetes.io/projected/daef0f99-b6ac-47c5-a246-40a12f61603b-kube-api-access-b6kk6\") pod \"openshift-config-operator-7777fb866f-m8vtf\" (UID: \"daef0f99-b6ac-47c5-a246-40a12f61603b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m8vtf" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.125317 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-m444h\" (UID: \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-m444h" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.125397 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0baa90e9-548b-4a20-9d9c-6a673f4ae0d0-etcd-ca\") pod \"etcd-operator-b45778765-dts9q\" (UID: \"0baa90e9-548b-4a20-9d9c-6a673f4ae0d0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dts9q" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.125564 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-m444h\" (UID: \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-m444h" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.125654 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f9de5c3-e681-4c09-a93e-9f95f03c7f5e-config\") pod \"route-controller-manager-6576b87f9c-kfm25\" (UID: \"1f9de5c3-e681-4c09-a93e-9f95f03c7f5e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfm25" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.125739 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwwjb\" (UniqueName: \"kubernetes.io/projected/475dd273-84f1-46de-9c16-3eaeca9b7d1c-kube-api-access-wwwjb\") pod \"ingress-operator-5b745b69d9-4qwzn\" (UID: \"475dd273-84f1-46de-9c16-3eaeca9b7d1c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4qwzn" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.125809 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-m444h\" (UID: \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-m444h" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.125885 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-m444h\" (UID: \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-m444h" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.125973 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-m444h\" (UID: \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-m444h" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.126048 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/190866a6-13b5-4de4-87c6-306883cb2998-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-5ldr8\" (UID: \"190866a6-13b5-4de4-87c6-306883cb2998\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5ldr8" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.126125 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-m444h\" (UID: \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-m444h" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.126195 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-m444h\" (UID: \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-m444h" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.126258 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/830f7e38-bc1c-4897-bcc9-0266da4d74d0-audit-policies\") pod \"oauth-openshift-558db77b4-m444h\" (UID: \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-m444h" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.125617 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-m444h\" (UID: \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-m444h" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.126757 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/190866a6-13b5-4de4-87c6-306883cb2998-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-5ldr8\" (UID: \"190866a6-13b5-4de4-87c6-306883cb2998\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5ldr8" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.127478 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/190866a6-13b5-4de4-87c6-306883cb2998-config\") pod \"kube-controller-manager-operator-78b949d7b-5ldr8\" (UID: \"190866a6-13b5-4de4-87c6-306883cb2998\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5ldr8" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.127846 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/daef0f99-b6ac-47c5-a246-40a12f61603b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-m8vtf\" (UID: \"daef0f99-b6ac-47c5-a246-40a12f61603b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m8vtf" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.127949 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-m444h\" (UID: \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-m444h" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.129788 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-m444h\" (UID: \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-m444h" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.129836 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/830f7e38-bc1c-4897-bcc9-0266da4d74d0-audit-dir\") pod \"oauth-openshift-558db77b4-m444h\" (UID: \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-m444h" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.130258 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.130693 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/daef0f99-b6ac-47c5-a246-40a12f61603b-serving-cert\") pod \"openshift-config-operator-7777fb866f-m8vtf\" (UID: \"daef0f99-b6ac-47c5-a246-40a12f61603b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m8vtf" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.131104 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-m444h\" (UID: \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-m444h" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.131653 4946 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0baa90e9-548b-4a20-9d9c-6a673f4ae0d0-etcd-ca\") pod \"etcd-operator-b45778765-dts9q\" (UID: \"0baa90e9-548b-4a20-9d9c-6a673f4ae0d0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dts9q" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.132224 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-m444h\" (UID: \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-m444h" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.133519 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-m444h\" (UID: \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-m444h" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.134011 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-m444h\" (UID: \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-m444h" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.134754 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-m444h\" (UID: \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-m444h" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.134900 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-m444h\" (UID: \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-m444h" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.135428 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-m444h\" (UID: \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-m444h" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.136422 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-m444h\" (UID: \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-m444h" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.150080 4946 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.154563 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0baa90e9-548b-4a20-9d9c-6a673f4ae0d0-serving-cert\") pod \"etcd-operator-b45778765-dts9q\" (UID: \"0baa90e9-548b-4a20-9d9c-6a673f4ae0d0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dts9q" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.169606 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.180415 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0baa90e9-548b-4a20-9d9c-6a673f4ae0d0-etcd-client\") pod \"etcd-operator-b45778765-dts9q\" (UID: \"0baa90e9-548b-4a20-9d9c-6a673f4ae0d0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dts9q" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.189619 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.198118 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0baa90e9-548b-4a20-9d9c-6a673f4ae0d0-etcd-service-ca\") pod \"etcd-operator-b45778765-dts9q\" (UID: \"0baa90e9-548b-4a20-9d9c-6a673f4ae0d0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dts9q" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.210072 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.230851 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.249915 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.257580 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0baa90e9-548b-4a20-9d9c-6a673f4ae0d0-config\") pod \"etcd-operator-b45778765-dts9q\" (UID: \"0baa90e9-548b-4a20-9d9c-6a673f4ae0d0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dts9q" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.291137 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.310866 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.330478 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.350322 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.362434 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/d8c9e4f9-d51d-47d9-8228-c9c58873dbe3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-js7vs\" (UID: \"d8c9e4f9-d51d-47d9-8228-c9c58873dbe3\") " pod="openshift-marketplace/marketplace-operator-79b997595-js7vs" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.369912 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.389964 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.410990 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.429750 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.451071 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.460455 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/475dd273-84f1-46de-9c16-3eaeca9b7d1c-metrics-tls\") pod \"ingress-operator-5b745b69d9-4qwzn\" (UID: \"475dd273-84f1-46de-9c16-3eaeca9b7d1c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4qwzn" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.472597 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.489760 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.510177 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.538838 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.544837 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/475dd273-84f1-46de-9c16-3eaeca9b7d1c-trusted-ca\") pod \"ingress-operator-5b745b69d9-4qwzn\" (UID: \"475dd273-84f1-46de-9c16-3eaeca9b7d1c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4qwzn" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.558978 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.563884 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d8c9e4f9-d51d-47d9-8228-c9c58873dbe3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-js7vs\" (UID: \"d8c9e4f9-d51d-47d9-8228-c9c58873dbe3\") " pod="openshift-marketplace/marketplace-operator-79b997595-js7vs" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.570164 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 
06:54:40.582818 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f9de5c3-e681-4c09-a93e-9f95f03c7f5e-serving-cert\") pod \"route-controller-manager-6576b87f9c-kfm25\" (UID: \"1f9de5c3-e681-4c09-a93e-9f95f03c7f5e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfm25" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.590345 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.609780 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.629700 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.633199 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f9de5c3-e681-4c09-a93e-9f95f03c7f5e-config\") pod \"route-controller-manager-6576b87f9c-kfm25\" (UID: \"1f9de5c3-e681-4c09-a93e-9f95f03c7f5e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfm25" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.651338 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.670878 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.676157 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f9de5c3-e681-4c09-a93e-9f95f03c7f5e-client-ca\") pod \"route-controller-manager-6576b87f9c-kfm25\" (UID: \"1f9de5c3-e681-4c09-a93e-9f95f03c7f5e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfm25" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.690832 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.709697 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.730991 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.751154 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.771682 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.790817 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.830685 4946 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.849905 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.870154 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.890144 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.910632 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.931563 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.975421 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcbfb\" (UniqueName: \"kubernetes.io/projected/cd7a5b29-9bb7-4b7b-b39c-f8a96cc53fe0-kube-api-access-xcbfb\") pod \"authentication-operator-69f744f599-pddp7\" (UID: \"cd7a5b29-9bb7-4b7b-b39c-f8a96cc53fe0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pddp7" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.984626 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpqkv\" (UniqueName: \"kubernetes.io/projected/fc42deee-84c0-4076-b553-c2bb55fd6807-kube-api-access-vpqkv\") pod \"openshift-apiserver-operator-796bbdcf4f-xt6vv\" (UID: \"fc42deee-84c0-4076-b553-c2bb55fd6807\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xt6vv" Nov 28 06:54:40 crc kubenswrapper[4946]: I1128 06:54:40.997284 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xt6vv" Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.013277 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vtm7\" (UniqueName: \"kubernetes.io/projected/144f7f69-8a83-4b7f-83f6-fa6805bf1598-kube-api-access-6vtm7\") pod \"cluster-samples-operator-665b6dd947-tft2c\" (UID: \"144f7f69-8a83-4b7f-83f6-fa6805bf1598\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tft2c" Nov 28 06:54:41 crc kubenswrapper[4946]: E1128 06:54:41.037639 4946 configmap.go:193] Couldn't get configMap openshift-console-operator/trusted-ca: failed to sync configmap cache: timed out waiting for the condition Nov 28 06:54:41 crc kubenswrapper[4946]: E1128 06:54:41.037792 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/39074957-ff47-4c7f-8c6e-26370a118b2b-trusted-ca podName:39074957-ff47-4c7f-8c6e-26370a118b2b nodeName:}" failed. No retries permitted until 2025-11-28 06:54:41.537751712 +0000 UTC m=+135.915816863 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/39074957-ff47-4c7f-8c6e-26370a118b2b-trusted-ca") pod "console-operator-58897d9998-cqn4c" (UID: "39074957-ff47-4c7f-8c6e-26370a118b2b") : failed to sync configmap cache: timed out waiting for the condition Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.042234 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7kzx\" (UniqueName: \"kubernetes.io/projected/fe1d7375-1ae2-4f2a-8fd0-a2d9ce15711a-kube-api-access-l7kzx\") pod \"openshift-controller-manager-operator-756b6f6bc6-559n4\" (UID: \"fe1d7375-1ae2-4f2a-8fd0-a2d9ce15711a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-559n4" Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.045500 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxcjj\" (UniqueName: \"kubernetes.io/projected/39074957-ff47-4c7f-8c6e-26370a118b2b-kube-api-access-kxcjj\") pod \"console-operator-58897d9998-cqn4c\" (UID: \"39074957-ff47-4c7f-8c6e-26370a118b2b\") " pod="openshift-console-operator/console-operator-58897d9998-cqn4c" Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.048805 4946 request.go:700] Waited for 1.017971914s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/serviceaccounts/kube-apiserver-operator/token Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.070250 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.077119 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tft2c" Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.089826 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3580d0e9-f1d5-413e-a76e-d22baa741afd-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-79r5c\" (UID: \"3580d0e9-f1d5-413e-a76e-d22baa741afd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-79r5c" Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.111598 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.118331 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ndm7\" (UniqueName: \"kubernetes.io/projected/675e635c-71cb-4c9f-8c8a-dd25209abfd6-kube-api-access-5ndm7\") pod \"controller-manager-879f6c89f-wb2pq\" (UID: \"675e635c-71cb-4c9f-8c8a-dd25209abfd6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wb2pq" Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.132097 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.150446 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.172769 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.185994 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-pddp7" Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.193794 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.211392 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-559n4" Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.211416 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.231909 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.249544 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.269900 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.280181 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-wb2pq" Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.288840 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-79r5c" Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.291814 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.291879 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xt6vv"] Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.313588 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.330270 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tft2c"] Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.331174 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.350947 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.351200 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xt6vv" event={"ID":"fc42deee-84c0-4076-b553-c2bb55fd6807","Type":"ContainerStarted","Data":"82a6cbf9be2143c7822bca59ea009210cc706bba06ec9586995f5fb772cb8beb"} Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.392451 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76dvl\" (UniqueName: \"kubernetes.io/projected/b8b3ba0a-934b-4bc0-8bc4-58991860b7fe-kube-api-access-76dvl\") pod \"apiserver-76f77b778f-xv7dv\" (UID: \"b8b3ba0a-934b-4bc0-8bc4-58991860b7fe\") " pod="openshift-apiserver/apiserver-76f77b778f-xv7dv" Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.411613 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mw9wl\" (UniqueName: \"kubernetes.io/projected/677259a2-4efb-4f49-ad8e-57357402c59a-kube-api-access-mw9wl\") pod \"apiserver-7bbb656c7d-pw98g\" (UID: \"677259a2-4efb-4f49-ad8e-57357402c59a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pw98g" Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.419632 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-xv7dv" Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.433592 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.439235 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p64vs\" (UniqueName: \"kubernetes.io/projected/79c0d15c-8fc9-4efd-b1ec-739718f313d9-kube-api-access-p64vs\") pod \"console-f9d7485db-r7ztb\" (UID: \"79c0d15c-8fc9-4efd-b1ec-739718f313d9\") " pod="openshift-console/console-f9d7485db-r7ztb" Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.443709 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pddp7"] Nov 28 06:54:41 crc kubenswrapper[4946]: W1128 06:54:41.451805 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd7a5b29_9bb7_4b7b_b39c_f8a96cc53fe0.slice/crio-6ddc00081a1903bbd3c1c86df4bf023efbdc1daad4f14058fef1496cbfd63983 WatchSource:0}: Error finding container 6ddc00081a1903bbd3c1c86df4bf023efbdc1daad4f14058fef1496cbfd63983: Status 404 returned error can't find the container with id 6ddc00081a1903bbd3c1c86df4bf023efbdc1daad4f14058fef1496cbfd63983 Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.451875 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.470203 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.470891 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-559n4"] Nov 28 06:54:41 crc kubenswrapper[4946]: W1128 06:54:41.484697 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe1d7375_1ae2_4f2a_8fd0_a2d9ce15711a.slice/crio-4780f512db105fc7bab97c9d54c1d146643a9919640d1a76910baf70ac0a7897 WatchSource:0}: Error finding container 4780f512db105fc7bab97c9d54c1d146643a9919640d1a76910baf70ac0a7897: Status 404 returned error can't find the container with id 4780f512db105fc7bab97c9d54c1d146643a9919640d1a76910baf70ac0a7897 Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.491481 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.511601 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.525341 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-r7ztb" Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.526639 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wb2pq"] Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.530491 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.550425 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/39074957-ff47-4c7f-8c6e-26370a118b2b-trusted-ca\") pod \"console-operator-58897d9998-cqn4c\" (UID: \"39074957-ff47-4c7f-8c6e-26370a118b2b\") " pod="openshift-console-operator/console-operator-58897d9998-cqn4c" Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.550846 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.559899 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-79r5c"] Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.585377 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68shv\" (UniqueName: \"kubernetes.io/projected/ee803337-53f2-4467-8f6c-602a16bda8e5-kube-api-access-68shv\") pod \"machine-api-operator-5694c8668f-hnzrs\" (UID: \"ee803337-53f2-4467-8f6c-602a16bda8e5\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hnzrs" Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.589348 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.596777 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xv7dv"] Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.606539 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pw98g" Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.609960 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.629112 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.650399 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.669529 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.690647 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.701601 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-hnzrs" Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.710567 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.732940 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.750193 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.770336 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.790484 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.810184 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 28 06:54:41 crc kubenswrapper[4946]: W1128 06:54:41.827065 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod675e635c_71cb_4c9f_8c8a_dd25209abfd6.slice/crio-7f4519a2a4a0863bdd089938e472a6ea1a8036130f549832201d54c9a92ab828 WatchSource:0}: Error finding container 7f4519a2a4a0863bdd089938e472a6ea1a8036130f549832201d54c9a92ab828: Status 404 returned error can't find the container with id 7f4519a2a4a0863bdd089938e472a6ea1a8036130f549832201d54c9a92ab828 Nov 28 06:54:41 crc kubenswrapper[4946]: W1128 06:54:41.828641 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3580d0e9_f1d5_413e_a76e_d22baa741afd.slice/crio-b325a34b1e87b415c4fbeed6abfce6f7d5032b0aa0bdfc149fc988215ef0f22c WatchSource:0}: Error finding container b325a34b1e87b415c4fbeed6abfce6f7d5032b0aa0bdfc149fc988215ef0f22c: Status 404 returned error can't find the container with id b325a34b1e87b415c4fbeed6abfce6f7d5032b0aa0bdfc149fc988215ef0f22c Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.831163 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.854962 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.870146 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.891637 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.910066 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.930769 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 28 06:54:41 crc 
kubenswrapper[4946]: I1128 06:54:41.951938 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.972109 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 28 06:54:41 crc kubenswrapper[4946]: I1128 06:54:41.992879 4946 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.010426 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.044039 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.052063 4946 request.go:700] Waited for 1.942528664s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-2llfx&limit=500&resourceVersion=0 Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.054447 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.076367 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.089847 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.112999 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.124069 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-pw98g"] Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.156259 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/475dd273-84f1-46de-9c16-3eaeca9b7d1c-bound-sa-token\") pod \"ingress-operator-5b745b69d9-4qwzn\" (UID: \"475dd273-84f1-46de-9c16-3eaeca9b7d1c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4qwzn" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.177965 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcmnj\" (UniqueName: \"kubernetes.io/projected/b4d03410-0c4e-4aa4-b760-8d8179d791a5-kube-api-access-kcmnj\") pod \"downloads-7954f5f757-4bfkx\" (UID: \"b4d03410-0c4e-4aa4-b760-8d8179d791a5\") " pod="openshift-console/downloads-7954f5f757-4bfkx" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.191144 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdj66\" (UniqueName: \"kubernetes.io/projected/d8c9e4f9-d51d-47d9-8228-c9c58873dbe3-kube-api-access-jdj66\") pod \"marketplace-operator-79b997595-js7vs\" (UID: \"d8c9e4f9-d51d-47d9-8228-c9c58873dbe3\") " pod="openshift-marketplace/marketplace-operator-79b997595-js7vs" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.209615 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-jppt5\" (UniqueName: \"kubernetes.io/projected/0baa90e9-548b-4a20-9d9c-6a673f4ae0d0-kube-api-access-jppt5\") pod \"etcd-operator-b45778765-dts9q\" (UID: \"0baa90e9-548b-4a20-9d9c-6a673f4ae0d0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dts9q" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.230110 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvk4b\" (UniqueName: \"kubernetes.io/projected/1f9de5c3-e681-4c09-a93e-9f95f03c7f5e-kube-api-access-rvk4b\") pod \"route-controller-manager-6576b87f9c-kfm25\" (UID: \"1f9de5c3-e681-4c09-a93e-9f95f03c7f5e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfm25" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.244242 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6kk6\" (UniqueName: \"kubernetes.io/projected/daef0f99-b6ac-47c5-a246-40a12f61603b-kube-api-access-b6kk6\") pod \"openshift-config-operator-7777fb866f-m8vtf\" (UID: \"daef0f99-b6ac-47c5-a246-40a12f61603b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m8vtf" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.256864 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-4bfkx" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.265286 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-477cb\" (UniqueName: \"kubernetes.io/projected/830f7e38-bc1c-4897-bcc9-0266da4d74d0-kube-api-access-477cb\") pod \"oauth-openshift-558db77b4-m444h\" (UID: \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-m444h" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.279317 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-dts9q" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.292415 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwwjb\" (UniqueName: \"kubernetes.io/projected/475dd273-84f1-46de-9c16-3eaeca9b7d1c-kube-api-access-wwwjb\") pod \"ingress-operator-5b745b69d9-4qwzn\" (UID: \"475dd273-84f1-46de-9c16-3eaeca9b7d1c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4qwzn" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.295294 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-js7vs" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.302238 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4qwzn" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.305627 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/190866a6-13b5-4de4-87c6-306883cb2998-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-5ldr8\" (UID: \"190866a6-13b5-4de4-87c6-306883cb2998\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5ldr8" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.320368 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfm25" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.358677 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.364117 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/39074957-ff47-4c7f-8c6e-26370a118b2b-trusted-ca\") pod \"console-operator-58897d9998-cqn4c\" (UID: \"39074957-ff47-4c7f-8c6e-26370a118b2b\") " pod="openshift-console-operator/console-operator-58897d9998-cqn4c" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.365777 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-pddp7" event={"ID":"cd7a5b29-9bb7-4b7b-b39c-f8a96cc53fe0","Type":"ContainerStarted","Data":"db58c6e2f0572b4762bbe919f0abff2b1d7743a0d35af3240b99f72689a9226b"} Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.366035 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-pddp7" event={"ID":"cd7a5b29-9bb7-4b7b-b39c-f8a96cc53fe0","Type":"ContainerStarted","Data":"6ddc00081a1903bbd3c1c86df4bf023efbdc1daad4f14058fef1496cbfd63983"} Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.370185 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2ba6db31-f148-40c3-b5cc-60b2a8e063e4-bound-sa-token\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.370227 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6a09131a-f0b1-48e9-9b10-8e75e3344d3c-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-5tnsl\" (UID: \"6a09131a-f0b1-48e9-9b10-8e75e3344d3c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5tnsl" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.370246 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/63f790e6-5c8d-475c-bd26-a1e52ffb9ed3-default-certificate\") pod \"router-default-5444994796-hwk8p\" (UID: \"63f790e6-5c8d-475c-bd26-a1e52ffb9ed3\") " pod="openshift-ingress/router-default-5444994796-hwk8p" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.370285 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shp74\" (UniqueName: \"kubernetes.io/projected/f98176b6-3525-4858-b747-06ca0fdb472e-kube-api-access-shp74\") pod \"packageserver-d55dfcdfc-sm5f9\" (UID: \"f98176b6-3525-4858-b747-06ca0fdb472e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sm5f9" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.370326 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2ba6db31-f148-40c3-b5cc-60b2a8e063e4-registry-tls\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: 
\"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.370346 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2ba6db31-f148-40c3-b5cc-60b2a8e063e4-ca-trust-extracted\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.370371 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.370494 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c2424a4-d3a4-4acc-a683-320abd4467ff-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-pq4rr\" (UID: \"3c2424a4-d3a4-4acc-a683-320abd4467ff\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-pq4rr" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.370677 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3c2424a4-d3a4-4acc-a683-320abd4467ff-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-pq4rr\" (UID: \"3c2424a4-d3a4-4acc-a683-320abd4467ff\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-pq4rr" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.370720 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2ba6db31-f148-40c3-b5cc-60b2a8e063e4-registry-certificates\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.370745 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2cb23522-48f9-442d-8321-90df7290886d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-pdk2l\" (UID: \"2cb23522-48f9-442d-8321-90df7290886d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pdk2l" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.371147 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zz4n\" (UniqueName: \"kubernetes.io/projected/2cb23522-48f9-442d-8321-90df7290886d-kube-api-access-7zz4n\") pod \"machine-config-controller-84d6567774-pdk2l\" (UID: \"2cb23522-48f9-442d-8321-90df7290886d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pdk2l" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.372840 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/63f790e6-5c8d-475c-bd26-a1e52ffb9ed3-service-ca-bundle\") pod \"router-default-5444994796-hwk8p\" (UID: \"63f790e6-5c8d-475c-bd26-a1e52ffb9ed3\") " pod="openshift-ingress/router-default-5444994796-hwk8p" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.372973 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/63f790e6-5c8d-475c-bd26-a1e52ffb9ed3-metrics-certs\") pod \"router-default-5444994796-hwk8p\" (UID: \"63f790e6-5c8d-475c-bd26-a1e52ffb9ed3\") " pod="openshift-ingress/router-default-5444994796-hwk8p" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.373304 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/63f790e6-5c8d-475c-bd26-a1e52ffb9ed3-stats-auth\") pod \"router-default-5444994796-hwk8p\" (UID: \"63f790e6-5c8d-475c-bd26-a1e52ffb9ed3\") " pod="openshift-ingress/router-default-5444994796-hwk8p" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.373355 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c2424a4-d3a4-4acc-a683-320abd4467ff-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-pq4rr\" (UID: \"3c2424a4-d3a4-4acc-a683-320abd4467ff\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-pq4rr" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.373590 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2ba6db31-f148-40c3-b5cc-60b2a8e063e4-installation-pull-secrets\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.373624 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f98176b6-3525-4858-b747-06ca0fdb472e-webhook-cert\") pod \"packageserver-d55dfcdfc-sm5f9\" (UID: \"f98176b6-3525-4858-b747-06ca0fdb472e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sm5f9" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.374235 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f98176b6-3525-4858-b747-06ca0fdb472e-tmpfs\") pod \"packageserver-d55dfcdfc-sm5f9\" (UID: \"f98176b6-3525-4858-b747-06ca0fdb472e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sm5f9" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.374266 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgt46\" (UniqueName: \"kubernetes.io/projected/2ba6db31-f148-40c3-b5cc-60b2a8e063e4-kube-api-access-vgt46\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.374368 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mxpd\" (UniqueName: \"kubernetes.io/projected/6a09131a-f0b1-48e9-9b10-8e75e3344d3c-kube-api-access-2mxpd\") pod 
\"control-plane-machine-set-operator-78cbb6b69f-5tnsl\" (UID: \"6a09131a-f0b1-48e9-9b10-8e75e3344d3c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5tnsl" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.374428 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmrbn\" (UniqueName: \"kubernetes.io/projected/63f790e6-5c8d-475c-bd26-a1e52ffb9ed3-kube-api-access-xmrbn\") pod \"router-default-5444994796-hwk8p\" (UID: \"63f790e6-5c8d-475c-bd26-a1e52ffb9ed3\") " pod="openshift-ingress/router-default-5444994796-hwk8p" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.374530 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2cb23522-48f9-442d-8321-90df7290886d-proxy-tls\") pod \"machine-config-controller-84d6567774-pdk2l\" (UID: \"2cb23522-48f9-442d-8321-90df7290886d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pdk2l" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.374565 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f98176b6-3525-4858-b747-06ca0fdb472e-apiservice-cert\") pod \"packageserver-d55dfcdfc-sm5f9\" (UID: \"f98176b6-3525-4858-b747-06ca0fdb472e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sm5f9" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.374598 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2ba6db31-f148-40c3-b5cc-60b2a8e063e4-trusted-ca\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" Nov 28 06:54:42 crc kubenswrapper[4946]: E1128 06:54:42.375213 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:54:42.875182748 +0000 UTC m=+137.253247849 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qhbbg" (UID: "2ba6db31-f148-40c3-b5cc-60b2a8e063e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.382577 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hnzrs"] Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.385141 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-r7ztb"] Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.402023 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-559n4" event={"ID":"fe1d7375-1ae2-4f2a-8fd0-a2d9ce15711a","Type":"ContainerStarted","Data":"36411f97ac729bcd44f1971748939170fc6650a96157af70878ae1403f02fd44"} Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.402266 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-559n4" event={"ID":"fe1d7375-1ae2-4f2a-8fd0-a2d9ce15711a","Type":"ContainerStarted","Data":"4780f512db105fc7bab97c9d54c1d146643a9919640d1a76910baf70ac0a7897"} Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.436829 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-79r5c" event={"ID":"3580d0e9-f1d5-413e-a76e-d22baa741afd","Type":"ContainerStarted","Data":"b325a34b1e87b415c4fbeed6abfce6f7d5032b0aa0bdfc149fc988215ef0f22c"} Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.442012 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-wb2pq" event={"ID":"675e635c-71cb-4c9f-8c8a-dd25209abfd6","Type":"ContainerStarted","Data":"162b16ada7c3838a94fdd8184b6031d4360089663df8622aa9b11b37bed50583"} Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.442061 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-wb2pq" event={"ID":"675e635c-71cb-4c9f-8c8a-dd25209abfd6","Type":"ContainerStarted","Data":"7f4519a2a4a0863bdd089938e472a6ea1a8036130f549832201d54c9a92ab828"} Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.443244 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-wb2pq" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.445892 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xt6vv" event={"ID":"fc42deee-84c0-4076-b553-c2bb55fd6807","Type":"ContainerStarted","Data":"838e6f51140d60a69766d268c0d24b540d9918f64c50ca76eb94ff83312e3e32"} Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.447255 4946 generic.go:334] "Generic (PLEG): container finished" podID="b8b3ba0a-934b-4bc0-8bc4-58991860b7fe" containerID="a64595ea055ca0a9dc235306f1bf08a7663a52aa8bcbb714d4ed00f924720446" exitCode=0 Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.447314 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver/apiserver-76f77b778f-xv7dv" event={"ID":"b8b3ba0a-934b-4bc0-8bc4-58991860b7fe","Type":"ContainerDied","Data":"a64595ea055ca0a9dc235306f1bf08a7663a52aa8bcbb714d4ed00f924720446"} Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.447330 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xv7dv" event={"ID":"b8b3ba0a-934b-4bc0-8bc4-58991860b7fe","Type":"ContainerStarted","Data":"cba52e39e8a44a7768a5f8a34c2f0fc626cfa41fa72942a93828fbf0c0447d2c"} Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.453194 4946 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-wb2pq container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.453287 4946 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-wb2pq" podUID="675e635c-71cb-4c9f-8c8a-dd25209abfd6" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.478667 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tft2c" event={"ID":"144f7f69-8a83-4b7f-83f6-fa6805bf1598","Type":"ContainerStarted","Data":"16c388679d2b468eb27eb0e25decbc4efa04f202d95827dc37f7efd6a1f8fec9"} Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.478720 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tft2c" event={"ID":"144f7f69-8a83-4b7f-83f6-fa6805bf1598","Type":"ContainerStarted","Data":"e49ad9431b3b2271977d82f45ff1a21433ea94c57ef714ce1552c1c43d623280"} Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.478730 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tft2c" event={"ID":"144f7f69-8a83-4b7f-83f6-fa6805bf1598","Type":"ContainerStarted","Data":"ce4368757d257920e49741af8c2a7bb00c8406b2b21e6803b80d47f583e1b41f"} Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.482584 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.483000 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmrbn\" (UniqueName: \"kubernetes.io/projected/63f790e6-5c8d-475c-bd26-a1e52ffb9ed3-kube-api-access-xmrbn\") pod \"router-default-5444994796-hwk8p\" (UID: \"63f790e6-5c8d-475c-bd26-a1e52ffb9ed3\") " pod="openshift-ingress/router-default-5444994796-hwk8p" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.483036 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9z2j\" (UniqueName: \"kubernetes.io/projected/6ed5ffa1-4361-4c09-b250-607fdf9cd2f4-kube-api-access-j9z2j\") pod \"ingress-canary-5q8lj\" (UID: \"6ed5ffa1-4361-4c09-b250-607fdf9cd2f4\") " 
pod="openshift-ingress-canary/ingress-canary-5q8lj" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.483061 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2cb23522-48f9-442d-8321-90df7290886d-proxy-tls\") pod \"machine-config-controller-84d6567774-pdk2l\" (UID: \"2cb23522-48f9-442d-8321-90df7290886d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pdk2l" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.483085 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk22g\" (UniqueName: \"kubernetes.io/projected/7b7188c5-eb8b-46be-b3db-4f0e058c3110-kube-api-access-fk22g\") pod \"multus-admission-controller-857f4d67dd-qjlhx\" (UID: \"7b7188c5-eb8b-46be-b3db-4f0e058c3110\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qjlhx" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.483102 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rnrl\" (UniqueName: \"kubernetes.io/projected/7ef7cd67-8788-4c1b-9f7d-e66f4fa9737e-kube-api-access-8rnrl\") pod \"dns-default-qvr67\" (UID: \"7ef7cd67-8788-4c1b-9f7d-e66f4fa9737e\") " pod="openshift-dns/dns-default-qvr67" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.483126 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f98176b6-3525-4858-b747-06ca0fdb472e-apiservice-cert\") pod \"packageserver-d55dfcdfc-sm5f9\" (UID: \"f98176b6-3525-4858-b747-06ca0fdb472e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sm5f9" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.483148 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxkrp\" (UniqueName: \"kubernetes.io/projected/c6a06ceb-1625-421c-864d-4f24d2fd00d3-kube-api-access-qxkrp\") pod \"catalog-operator-68c6474976-4swtq\" (UID: \"c6a06ceb-1625-421c-864d-4f24d2fd00d3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4swtq" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.483172 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/85300130-dd7e-4e81-b37a-7dc93d20ab33-srv-cert\") pod \"olm-operator-6b444d44fb-6k9gx\" (UID: \"85300130-dd7e-4e81-b37a-7dc93d20ab33\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6k9gx" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.483222 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2ba6db31-f148-40c3-b5cc-60b2a8e063e4-trusted-ca\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.483241 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/94ee27f9-59ea-44af-bbd3-e586f4bb7bb1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-z4vpd\" (UID: \"94ee27f9-59ea-44af-bbd3-e586f4bb7bb1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z4vpd" Nov 28 06:54:42 crc 
kubenswrapper[4946]: E1128 06:54:42.486734 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:54:42.986695872 +0000 UTC m=+137.364760983 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.487181 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2ba6db31-f148-40c3-b5cc-60b2a8e063e4-trusted-ca\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.490413 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2cb23522-48f9-442d-8321-90df7290886d-proxy-tls\") pod \"machine-config-controller-84d6567774-pdk2l\" (UID: \"2cb23522-48f9-442d-8321-90df7290886d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pdk2l" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.490588 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8df2dcaf-0af5-4827-9135-e5944a2b12dc-auth-proxy-config\") pod \"machine-config-operator-74547568cd-44jkz\" (UID: \"8df2dcaf-0af5-4827-9135-e5944a2b12dc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-44jkz" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.490687 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/386d3c8a-a2d4-46e1-a7cb-c456eec4860b-signing-key\") pod \"service-ca-9c57cc56f-pm27x\" (UID: \"386d3c8a-a2d4-46e1-a7cb-c456eec4860b\") " pod="openshift-service-ca/service-ca-9c57cc56f-pm27x" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.490793 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2ba6db31-f148-40c3-b5cc-60b2a8e063e4-bound-sa-token\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.490858 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6a09131a-f0b1-48e9-9b10-8e75e3344d3c-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-5tnsl\" (UID: \"6a09131a-f0b1-48e9-9b10-8e75e3344d3c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5tnsl" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.490892 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-6x9d2\" (UniqueName: \"kubernetes.io/projected/8df2dcaf-0af5-4827-9135-e5944a2b12dc-kube-api-access-6x9d2\") pod \"machine-config-operator-74547568cd-44jkz\" (UID: \"8df2dcaf-0af5-4827-9135-e5944a2b12dc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-44jkz" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.490918 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/386d3c8a-a2d4-46e1-a7cb-c456eec4860b-signing-cabundle\") pod \"service-ca-9c57cc56f-pm27x\" (UID: \"386d3c8a-a2d4-46e1-a7cb-c456eec4860b\") " pod="openshift-service-ca/service-ca-9c57cc56f-pm27x" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.490941 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhc75\" (UniqueName: \"kubernetes.io/projected/7092cadd-becf-4ee8-bbb5-0aadd638ad46-kube-api-access-rhc75\") pod \"kube-storage-version-migrator-operator-b67b599dd-mnst8\" (UID: \"7092cadd-becf-4ee8-bbb5-0aadd638ad46\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mnst8" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.490962 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52939a0c-29ab-4cbb-bd53-7a330884139c-config\") pod \"service-ca-operator-777779d784-tn22h\" (UID: \"52939a0c-29ab-4cbb-bd53-7a330884139c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tn22h" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.490984 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4d68f56c-7e3b-4207-92d1-95800df70d0e-metrics-tls\") pod \"dns-operator-744455d44c-r8n92\" (UID: \"4d68f56c-7e3b-4207-92d1-95800df70d0e\") " pod="openshift-dns-operator/dns-operator-744455d44c-r8n92" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.491035 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c6a06ceb-1625-421c-864d-4f24d2fd00d3-srv-cert\") pod \"catalog-operator-68c6474976-4swtq\" (UID: \"c6a06ceb-1625-421c-864d-4f24d2fd00d3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4swtq" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.491068 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dj6q\" (UniqueName: \"kubernetes.io/projected/85300130-dd7e-4e81-b37a-7dc93d20ab33-kube-api-access-7dj6q\") pod \"olm-operator-6b444d44fb-6k9gx\" (UID: \"85300130-dd7e-4e81-b37a-7dc93d20ab33\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6k9gx" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.491121 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/63f790e6-5c8d-475c-bd26-a1e52ffb9ed3-default-certificate\") pod \"router-default-5444994796-hwk8p\" (UID: \"63f790e6-5c8d-475c-bd26-a1e52ffb9ed3\") " pod="openshift-ingress/router-default-5444994796-hwk8p" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.491146 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"certs\" (UniqueName: \"kubernetes.io/secret/11ad130b-9da2-40c3-b106-64faae722846-certs\") pod \"machine-config-server-vcgqw\" (UID: \"11ad130b-9da2-40c3-b106-64faae722846\") " pod="openshift-machine-config-operator/machine-config-server-vcgqw" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.491225 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shp74\" (UniqueName: \"kubernetes.io/projected/f98176b6-3525-4858-b747-06ca0fdb472e-kube-api-access-shp74\") pod \"packageserver-d55dfcdfc-sm5f9\" (UID: \"f98176b6-3525-4858-b747-06ca0fdb472e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sm5f9" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.491281 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df6a8d9e-bb6c-42e9-918e-60cbb78f55ac-config\") pod \"machine-approver-56656f9798-rzxwh\" (UID: \"df6a8d9e-bb6c-42e9-918e-60cbb78f55ac\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rzxwh" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.491305 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/07a101d3-99de-46c5-88c3-9d053589fd73-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-6q9fb\" (UID: \"07a101d3-99de-46c5-88c3-9d053589fd73\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6q9fb" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.491326 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/98c2fa9b-f899-4167-ab43-6d97e7b1589e-socket-dir\") pod \"csi-hostpathplugin-sc95q\" (UID: \"98c2fa9b-f899-4167-ab43-6d97e7b1589e\") " pod="hostpath-provisioner/csi-hostpathplugin-sc95q" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.491346 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwxvz\" (UniqueName: \"kubernetes.io/projected/98c2fa9b-f899-4167-ab43-6d97e7b1589e-kube-api-access-mwxvz\") pod \"csi-hostpathplugin-sc95q\" (UID: \"98c2fa9b-f899-4167-ab43-6d97e7b1589e\") " pod="hostpath-provisioner/csi-hostpathplugin-sc95q" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.491381 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2ba6db31-f148-40c3-b5cc-60b2a8e063e4-registry-tls\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.491431 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6l2r\" (UniqueName: \"kubernetes.io/projected/386d3c8a-a2d4-46e1-a7cb-c456eec4860b-kube-api-access-k6l2r\") pod \"service-ca-9c57cc56f-pm27x\" (UID: \"386d3c8a-a2d4-46e1-a7cb-c456eec4860b\") " pod="openshift-service-ca/service-ca-9c57cc56f-pm27x" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.491509 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vslzh\" (UniqueName: \"kubernetes.io/projected/3b5b4fe0-1e4f-435a-aff4-3d388fb616cc-kube-api-access-vslzh\") 
pod \"migrator-59844c95c7-hxz9b\" (UID: \"3b5b4fe0-1e4f-435a-aff4-3d388fb616cc\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hxz9b" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.491538 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvv4s\" (UniqueName: \"kubernetes.io/projected/df6a8d9e-bb6c-42e9-918e-60cbb78f55ac-kube-api-access-dvv4s\") pod \"machine-approver-56656f9798-rzxwh\" (UID: \"df6a8d9e-bb6c-42e9-918e-60cbb78f55ac\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rzxwh" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.491582 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2ba6db31-f148-40c3-b5cc-60b2a8e063e4-ca-trust-extracted\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.491602 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ef7cd67-8788-4c1b-9f7d-e66f4fa9737e-config-volume\") pod \"dns-default-qvr67\" (UID: \"7ef7cd67-8788-4c1b-9f7d-e66f4fa9737e\") " pod="openshift-dns/dns-default-qvr67" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.491643 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.491648 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f98176b6-3525-4858-b747-06ca0fdb472e-apiservice-cert\") pod \"packageserver-d55dfcdfc-sm5f9\" (UID: \"f98176b6-3525-4858-b747-06ca0fdb472e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sm5f9" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.491666 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7ef7cd67-8788-4c1b-9f7d-e66f4fa9737e-metrics-tls\") pod \"dns-default-qvr67\" (UID: \"7ef7cd67-8788-4c1b-9f7d-e66f4fa9737e\") " pod="openshift-dns/dns-default-qvr67" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.495310 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c2424a4-d3a4-4acc-a683-320abd4467ff-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-pq4rr\" (UID: \"3c2424a4-d3a4-4acc-a683-320abd4467ff\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-pq4rr" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.495367 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7b7188c5-eb8b-46be-b3db-4f0e058c3110-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-qjlhx\" (UID: \"7b7188c5-eb8b-46be-b3db-4f0e058c3110\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qjlhx" 
Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.495406 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/98c2fa9b-f899-4167-ab43-6d97e7b1589e-plugins-dir\") pod \"csi-hostpathplugin-sc95q\" (UID: \"98c2fa9b-f899-4167-ab43-6d97e7b1589e\") " pod="hostpath-provisioner/csi-hostpathplugin-sc95q" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.495438 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8df2dcaf-0af5-4827-9135-e5944a2b12dc-images\") pod \"machine-config-operator-74547568cd-44jkz\" (UID: \"8df2dcaf-0af5-4827-9135-e5944a2b12dc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-44jkz" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.495518 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/94ee27f9-59ea-44af-bbd3-e586f4bb7bb1-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-z4vpd\" (UID: \"94ee27f9-59ea-44af-bbd3-e586f4bb7bb1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z4vpd" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.496573 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3c2424a4-d3a4-4acc-a683-320abd4467ff-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-pq4rr\" (UID: \"3c2424a4-d3a4-4acc-a683-320abd4467ff\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-pq4rr" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.496600 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2ba6db31-f148-40c3-b5cc-60b2a8e063e4-ca-trust-extracted\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.496623 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/df6a8d9e-bb6c-42e9-918e-60cbb78f55ac-auth-proxy-config\") pod \"machine-approver-56656f9798-rzxwh\" (UID: \"df6a8d9e-bb6c-42e9-918e-60cbb78f55ac\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rzxwh" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.496651 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/175d3bf2-c969-401c-9b35-d91a066d0305-secret-volume\") pod \"collect-profiles-29405205-hbxpf\" (UID: \"175d3bf2-c969-401c-9b35-d91a066d0305\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405205-hbxpf" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.496674 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/85300130-dd7e-4e81-b37a-7dc93d20ab33-profile-collector-cert\") pod \"olm-operator-6b444d44fb-6k9gx\" (UID: \"85300130-dd7e-4e81-b37a-7dc93d20ab33\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6k9gx" Nov 28 06:54:42 crc kubenswrapper[4946]: 
I1128 06:54:42.496697 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7092cadd-becf-4ee8-bbb5-0aadd638ad46-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-mnst8\" (UID: \"7092cadd-becf-4ee8-bbb5-0aadd638ad46\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mnst8" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.496735 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2ba6db31-f148-40c3-b5cc-60b2a8e063e4-registry-certificates\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.496800 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4j5k\" (UniqueName: \"kubernetes.io/projected/94ee27f9-59ea-44af-bbd3-e586f4bb7bb1-kube-api-access-z4j5k\") pod \"cluster-image-registry-operator-dc59b4c8b-z4vpd\" (UID: \"94ee27f9-59ea-44af-bbd3-e586f4bb7bb1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z4vpd" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.496819 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52939a0c-29ab-4cbb-bd53-7a330884139c-serving-cert\") pod \"service-ca-operator-777779d784-tn22h\" (UID: \"52939a0c-29ab-4cbb-bd53-7a330884139c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tn22h" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.496848 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2cb23522-48f9-442d-8321-90df7290886d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-pdk2l\" (UID: \"2cb23522-48f9-442d-8321-90df7290886d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pdk2l" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.496869 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/98c2fa9b-f899-4167-ab43-6d97e7b1589e-registration-dir\") pod \"csi-hostpathplugin-sc95q\" (UID: \"98c2fa9b-f899-4167-ab43-6d97e7b1589e\") " pod="hostpath-provisioner/csi-hostpathplugin-sc95q" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.496905 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/11ad130b-9da2-40c3-b106-64faae722846-node-bootstrap-token\") pod \"machine-config-server-vcgqw\" (UID: \"11ad130b-9da2-40c3-b106-64faae722846\") " pod="openshift-machine-config-operator/machine-config-server-vcgqw" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.497045 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zz4n\" (UniqueName: \"kubernetes.io/projected/2cb23522-48f9-442d-8321-90df7290886d-kube-api-access-7zz4n\") pod \"machine-config-controller-84d6567774-pdk2l\" (UID: \"2cb23522-48f9-442d-8321-90df7290886d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pdk2l" Nov 28 
06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.497080 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6ed5ffa1-4361-4c09-b250-607fdf9cd2f4-cert\") pod \"ingress-canary-5q8lj\" (UID: \"6ed5ffa1-4361-4c09-b250-607fdf9cd2f4\") " pod="openshift-ingress-canary/ingress-canary-5q8lj" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.497098 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7092cadd-becf-4ee8-bbb5-0aadd638ad46-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-mnst8\" (UID: \"7092cadd-becf-4ee8-bbb5-0aadd638ad46\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mnst8" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.497122 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbbz5\" (UniqueName: \"kubernetes.io/projected/4d68f56c-7e3b-4207-92d1-95800df70d0e-kube-api-access-dbbz5\") pod \"dns-operator-744455d44c-r8n92\" (UID: \"4d68f56c-7e3b-4207-92d1-95800df70d0e\") " pod="openshift-dns-operator/dns-operator-744455d44c-r8n92" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.497277 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/63f790e6-5c8d-475c-bd26-a1e52ffb9ed3-service-ca-bundle\") pod \"router-default-5444994796-hwk8p\" (UID: \"63f790e6-5c8d-475c-bd26-a1e52ffb9ed3\") " pod="openshift-ingress/router-default-5444994796-hwk8p" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.497364 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/63f790e6-5c8d-475c-bd26-a1e52ffb9ed3-metrics-certs\") pod \"router-default-5444994796-hwk8p\" (UID: \"63f790e6-5c8d-475c-bd26-a1e52ffb9ed3\") " pod="openshift-ingress/router-default-5444994796-hwk8p" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.497401 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/63f790e6-5c8d-475c-bd26-a1e52ffb9ed3-stats-auth\") pod \"router-default-5444994796-hwk8p\" (UID: \"63f790e6-5c8d-475c-bd26-a1e52ffb9ed3\") " pod="openshift-ingress/router-default-5444994796-hwk8p" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.497524 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/94ee27f9-59ea-44af-bbd3-e586f4bb7bb1-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-z4vpd\" (UID: \"94ee27f9-59ea-44af-bbd3-e586f4bb7bb1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z4vpd" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.497603 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7twz6\" (UniqueName: \"kubernetes.io/projected/11ad130b-9da2-40c3-b106-64faae722846-kube-api-access-7twz6\") pod \"machine-config-server-vcgqw\" (UID: \"11ad130b-9da2-40c3-b106-64faae722846\") " pod="openshift-machine-config-operator/machine-config-server-vcgqw" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.497632 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/98c2fa9b-f899-4167-ab43-6d97e7b1589e-mountpoint-dir\") pod \"csi-hostpathplugin-sc95q\" (UID: \"98c2fa9b-f899-4167-ab43-6d97e7b1589e\") " pod="hostpath-provisioner/csi-hostpathplugin-sc95q" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.497685 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c2424a4-d3a4-4acc-a683-320abd4467ff-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-pq4rr\" (UID: \"3c2424a4-d3a4-4acc-a683-320abd4467ff\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-pq4rr" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.497757 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6z54\" (UniqueName: \"kubernetes.io/projected/52939a0c-29ab-4cbb-bd53-7a330884139c-kube-api-access-z6z54\") pod \"service-ca-operator-777779d784-tn22h\" (UID: \"52939a0c-29ab-4cbb-bd53-7a330884139c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tn22h" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.497785 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2ba6db31-f148-40c3-b5cc-60b2a8e063e4-installation-pull-secrets\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.497808 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c6a06ceb-1625-421c-864d-4f24d2fd00d3-profile-collector-cert\") pod \"catalog-operator-68c6474976-4swtq\" (UID: \"c6a06ceb-1625-421c-864d-4f24d2fd00d3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4swtq" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.497828 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bdnf\" (UniqueName: \"kubernetes.io/projected/07a101d3-99de-46c5-88c3-9d053589fd73-kube-api-access-4bdnf\") pod \"package-server-manager-789f6589d5-6q9fb\" (UID: \"07a101d3-99de-46c5-88c3-9d053589fd73\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6q9fb" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.497856 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/df6a8d9e-bb6c-42e9-918e-60cbb78f55ac-machine-approver-tls\") pod \"machine-approver-56656f9798-rzxwh\" (UID: \"df6a8d9e-bb6c-42e9-918e-60cbb78f55ac\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rzxwh" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.497941 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f98176b6-3525-4858-b747-06ca0fdb472e-webhook-cert\") pod \"packageserver-d55dfcdfc-sm5f9\" (UID: \"f98176b6-3525-4858-b747-06ca0fdb472e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sm5f9" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.498017 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/63f790e6-5c8d-475c-bd26-a1e52ffb9ed3-default-certificate\") pod \"router-default-5444994796-hwk8p\" (UID: \"63f790e6-5c8d-475c-bd26-a1e52ffb9ed3\") " pod="openshift-ingress/router-default-5444994796-hwk8p" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.498024 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/175d3bf2-c969-401c-9b35-d91a066d0305-config-volume\") pod \"collect-profiles-29405205-hbxpf\" (UID: \"175d3bf2-c969-401c-9b35-d91a066d0305\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405205-hbxpf" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.498257 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f98176b6-3525-4858-b747-06ca0fdb472e-tmpfs\") pod \"packageserver-d55dfcdfc-sm5f9\" (UID: \"f98176b6-3525-4858-b747-06ca0fdb472e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sm5f9" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.498288 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8df2dcaf-0af5-4827-9135-e5944a2b12dc-proxy-tls\") pod \"machine-config-operator-74547568cd-44jkz\" (UID: \"8df2dcaf-0af5-4827-9135-e5944a2b12dc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-44jkz" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.498310 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7jr4\" (UniqueName: \"kubernetes.io/projected/175d3bf2-c969-401c-9b35-d91a066d0305-kube-api-access-m7jr4\") pod \"collect-profiles-29405205-hbxpf\" (UID: \"175d3bf2-c969-401c-9b35-d91a066d0305\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405205-hbxpf" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.498327 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/98c2fa9b-f899-4167-ab43-6d97e7b1589e-csi-data-dir\") pod \"csi-hostpathplugin-sc95q\" (UID: \"98c2fa9b-f899-4167-ab43-6d97e7b1589e\") " pod="hostpath-provisioner/csi-hostpathplugin-sc95q" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.498361 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgt46\" (UniqueName: \"kubernetes.io/projected/2ba6db31-f148-40c3-b5cc-60b2a8e063e4-kube-api-access-vgt46\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.498383 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mxpd\" (UniqueName: \"kubernetes.io/projected/6a09131a-f0b1-48e9-9b10-8e75e3344d3c-kube-api-access-2mxpd\") pod \"control-plane-machine-set-operator-78cbb6b69f-5tnsl\" (UID: \"6a09131a-f0b1-48e9-9b10-8e75e3344d3c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5tnsl" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.498530 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2ba6db31-f148-40c3-b5cc-60b2a8e063e4-registry-certificates\") pod 
\"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.500504 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pw98g" event={"ID":"677259a2-4efb-4f49-ad8e-57357402c59a","Type":"ContainerStarted","Data":"e0b3fb0d5c970a7a2678e103a8d6c61eed16de07beb0d061a40393e7930fd9dc"} Nov 28 06:54:42 crc kubenswrapper[4946]: E1128 06:54:42.500830 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:54:43.000815251 +0000 UTC m=+137.378880362 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qhbbg" (UID: "2ba6db31-f148-40c3-b5cc-60b2a8e063e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.502205 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/63f790e6-5c8d-475c-bd26-a1e52ffb9ed3-service-ca-bundle\") pod \"router-default-5444994796-hwk8p\" (UID: \"63f790e6-5c8d-475c-bd26-a1e52ffb9ed3\") " pod="openshift-ingress/router-default-5444994796-hwk8p" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.502917 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2cb23522-48f9-442d-8321-90df7290886d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-pdk2l\" (UID: \"2cb23522-48f9-442d-8321-90df7290886d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pdk2l" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.503154 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c2424a4-d3a4-4acc-a683-320abd4467ff-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-pq4rr\" (UID: \"3c2424a4-d3a4-4acc-a683-320abd4467ff\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-pq4rr" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.505419 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f98176b6-3525-4858-b747-06ca0fdb472e-tmpfs\") pod \"packageserver-d55dfcdfc-sm5f9\" (UID: \"f98176b6-3525-4858-b747-06ca0fdb472e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sm5f9" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.511327 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/63f790e6-5c8d-475c-bd26-a1e52ffb9ed3-metrics-certs\") pod \"router-default-5444994796-hwk8p\" (UID: \"63f790e6-5c8d-475c-bd26-a1e52ffb9ed3\") " pod="openshift-ingress/router-default-5444994796-hwk8p" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.514517 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6a09131a-f0b1-48e9-9b10-8e75e3344d3c-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-5tnsl\" (UID: \"6a09131a-f0b1-48e9-9b10-8e75e3344d3c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5tnsl" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.514626 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-m444h" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.515103 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f98176b6-3525-4858-b747-06ca0fdb472e-webhook-cert\") pod \"packageserver-d55dfcdfc-sm5f9\" (UID: \"f98176b6-3525-4858-b747-06ca0fdb472e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sm5f9" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.515557 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2ba6db31-f148-40c3-b5cc-60b2a8e063e4-registry-tls\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.524502 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2ba6db31-f148-40c3-b5cc-60b2a8e063e4-installation-pull-secrets\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.525389 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/63f790e6-5c8d-475c-bd26-a1e52ffb9ed3-stats-auth\") pod \"router-default-5444994796-hwk8p\" (UID: \"63f790e6-5c8d-475c-bd26-a1e52ffb9ed3\") " pod="openshift-ingress/router-default-5444994796-hwk8p" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.533699 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m8vtf" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.534645 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmrbn\" (UniqueName: \"kubernetes.io/projected/63f790e6-5c8d-475c-bd26-a1e52ffb9ed3-kube-api-access-xmrbn\") pod \"router-default-5444994796-hwk8p\" (UID: \"63f790e6-5c8d-475c-bd26-a1e52ffb9ed3\") " pod="openshift-ingress/router-default-5444994796-hwk8p" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.547094 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c2424a4-d3a4-4acc-a683-320abd4467ff-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-pq4rr\" (UID: \"3c2424a4-d3a4-4acc-a683-320abd4467ff\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-pq4rr" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.555514 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2ba6db31-f148-40c3-b5cc-60b2a8e063e4-bound-sa-token\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.567384 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5ldr8" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.568598 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shp74\" (UniqueName: \"kubernetes.io/projected/f98176b6-3525-4858-b747-06ca0fdb472e-kube-api-access-shp74\") pod \"packageserver-d55dfcdfc-sm5f9\" (UID: \"f98176b6-3525-4858-b747-06ca0fdb472e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sm5f9" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.576133 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-hwk8p" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.600053 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.600284 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/94ee27f9-59ea-44af-bbd3-e586f4bb7bb1-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-z4vpd\" (UID: \"94ee27f9-59ea-44af-bbd3-e586f4bb7bb1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z4vpd" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.600315 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7twz6\" (UniqueName: \"kubernetes.io/projected/11ad130b-9da2-40c3-b106-64faae722846-kube-api-access-7twz6\") pod \"machine-config-server-vcgqw\" (UID: \"11ad130b-9da2-40c3-b106-64faae722846\") " pod="openshift-machine-config-operator/machine-config-server-vcgqw" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.600335 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/98c2fa9b-f899-4167-ab43-6d97e7b1589e-mountpoint-dir\") pod \"csi-hostpathplugin-sc95q\" (UID: \"98c2fa9b-f899-4167-ab43-6d97e7b1589e\") " pod="hostpath-provisioner/csi-hostpathplugin-sc95q" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.600391 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6z54\" (UniqueName: \"kubernetes.io/projected/52939a0c-29ab-4cbb-bd53-7a330884139c-kube-api-access-z6z54\") pod \"service-ca-operator-777779d784-tn22h\" (UID: \"52939a0c-29ab-4cbb-bd53-7a330884139c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tn22h" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.600420 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c6a06ceb-1625-421c-864d-4f24d2fd00d3-profile-collector-cert\") pod \"catalog-operator-68c6474976-4swtq\" (UID: \"c6a06ceb-1625-421c-864d-4f24d2fd00d3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4swtq" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.600437 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bdnf\" (UniqueName: \"kubernetes.io/projected/07a101d3-99de-46c5-88c3-9d053589fd73-kube-api-access-4bdnf\") pod \"package-server-manager-789f6589d5-6q9fb\" (UID: \"07a101d3-99de-46c5-88c3-9d053589fd73\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6q9fb" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.600470 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/df6a8d9e-bb6c-42e9-918e-60cbb78f55ac-machine-approver-tls\") pod \"machine-approver-56656f9798-rzxwh\" (UID: \"df6a8d9e-bb6c-42e9-918e-60cbb78f55ac\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rzxwh" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.600510 
4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/175d3bf2-c969-401c-9b35-d91a066d0305-config-volume\") pod \"collect-profiles-29405205-hbxpf\" (UID: \"175d3bf2-c969-401c-9b35-d91a066d0305\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405205-hbxpf" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.600528 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8df2dcaf-0af5-4827-9135-e5944a2b12dc-proxy-tls\") pod \"machine-config-operator-74547568cd-44jkz\" (UID: \"8df2dcaf-0af5-4827-9135-e5944a2b12dc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-44jkz" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.600542 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7jr4\" (UniqueName: \"kubernetes.io/projected/175d3bf2-c969-401c-9b35-d91a066d0305-kube-api-access-m7jr4\") pod \"collect-profiles-29405205-hbxpf\" (UID: \"175d3bf2-c969-401c-9b35-d91a066d0305\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405205-hbxpf" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.600558 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/98c2fa9b-f899-4167-ab43-6d97e7b1589e-csi-data-dir\") pod \"csi-hostpathplugin-sc95q\" (UID: \"98c2fa9b-f899-4167-ab43-6d97e7b1589e\") " pod="hostpath-provisioner/csi-hostpathplugin-sc95q" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.600605 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9z2j\" (UniqueName: \"kubernetes.io/projected/6ed5ffa1-4361-4c09-b250-607fdf9cd2f4-kube-api-access-j9z2j\") pod \"ingress-canary-5q8lj\" (UID: \"6ed5ffa1-4361-4c09-b250-607fdf9cd2f4\") " pod="openshift-ingress-canary/ingress-canary-5q8lj" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.600634 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk22g\" (UniqueName: \"kubernetes.io/projected/7b7188c5-eb8b-46be-b3db-4f0e058c3110-kube-api-access-fk22g\") pod \"multus-admission-controller-857f4d67dd-qjlhx\" (UID: \"7b7188c5-eb8b-46be-b3db-4f0e058c3110\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qjlhx" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.600651 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rnrl\" (UniqueName: \"kubernetes.io/projected/7ef7cd67-8788-4c1b-9f7d-e66f4fa9737e-kube-api-access-8rnrl\") pod \"dns-default-qvr67\" (UID: \"7ef7cd67-8788-4c1b-9f7d-e66f4fa9737e\") " pod="openshift-dns/dns-default-qvr67" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.600678 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/85300130-dd7e-4e81-b37a-7dc93d20ab33-srv-cert\") pod \"olm-operator-6b444d44fb-6k9gx\" (UID: \"85300130-dd7e-4e81-b37a-7dc93d20ab33\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6k9gx" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.600719 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxkrp\" (UniqueName: \"kubernetes.io/projected/c6a06ceb-1625-421c-864d-4f24d2fd00d3-kube-api-access-qxkrp\") pod \"catalog-operator-68c6474976-4swtq\" 
(UID: \"c6a06ceb-1625-421c-864d-4f24d2fd00d3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4swtq" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.600734 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8df2dcaf-0af5-4827-9135-e5944a2b12dc-auth-proxy-config\") pod \"machine-config-operator-74547568cd-44jkz\" (UID: \"8df2dcaf-0af5-4827-9135-e5944a2b12dc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-44jkz" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.600750 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/386d3c8a-a2d4-46e1-a7cb-c456eec4860b-signing-key\") pod \"service-ca-9c57cc56f-pm27x\" (UID: \"386d3c8a-a2d4-46e1-a7cb-c456eec4860b\") " pod="openshift-service-ca/service-ca-9c57cc56f-pm27x" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.600775 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/94ee27f9-59ea-44af-bbd3-e586f4bb7bb1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-z4vpd\" (UID: \"94ee27f9-59ea-44af-bbd3-e586f4bb7bb1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z4vpd" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.600791 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x9d2\" (UniqueName: \"kubernetes.io/projected/8df2dcaf-0af5-4827-9135-e5944a2b12dc-kube-api-access-6x9d2\") pod \"machine-config-operator-74547568cd-44jkz\" (UID: \"8df2dcaf-0af5-4827-9135-e5944a2b12dc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-44jkz" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.600807 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/386d3c8a-a2d4-46e1-a7cb-c456eec4860b-signing-cabundle\") pod \"service-ca-9c57cc56f-pm27x\" (UID: \"386d3c8a-a2d4-46e1-a7cb-c456eec4860b\") " pod="openshift-service-ca/service-ca-9c57cc56f-pm27x" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.600827 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhc75\" (UniqueName: \"kubernetes.io/projected/7092cadd-becf-4ee8-bbb5-0aadd638ad46-kube-api-access-rhc75\") pod \"kube-storage-version-migrator-operator-b67b599dd-mnst8\" (UID: \"7092cadd-becf-4ee8-bbb5-0aadd638ad46\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mnst8" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.600843 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52939a0c-29ab-4cbb-bd53-7a330884139c-config\") pod \"service-ca-operator-777779d784-tn22h\" (UID: \"52939a0c-29ab-4cbb-bd53-7a330884139c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tn22h" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.600858 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4d68f56c-7e3b-4207-92d1-95800df70d0e-metrics-tls\") pod \"dns-operator-744455d44c-r8n92\" (UID: \"4d68f56c-7e3b-4207-92d1-95800df70d0e\") " 
pod="openshift-dns-operator/dns-operator-744455d44c-r8n92" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.600876 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c6a06ceb-1625-421c-864d-4f24d2fd00d3-srv-cert\") pod \"catalog-operator-68c6474976-4swtq\" (UID: \"c6a06ceb-1625-421c-864d-4f24d2fd00d3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4swtq" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.600891 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dj6q\" (UniqueName: \"kubernetes.io/projected/85300130-dd7e-4e81-b37a-7dc93d20ab33-kube-api-access-7dj6q\") pod \"olm-operator-6b444d44fb-6k9gx\" (UID: \"85300130-dd7e-4e81-b37a-7dc93d20ab33\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6k9gx" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.600909 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/11ad130b-9da2-40c3-b106-64faae722846-certs\") pod \"machine-config-server-vcgqw\" (UID: \"11ad130b-9da2-40c3-b106-64faae722846\") " pod="openshift-machine-config-operator/machine-config-server-vcgqw" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.600935 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df6a8d9e-bb6c-42e9-918e-60cbb78f55ac-config\") pod \"machine-approver-56656f9798-rzxwh\" (UID: \"df6a8d9e-bb6c-42e9-918e-60cbb78f55ac\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rzxwh" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.600950 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/07a101d3-99de-46c5-88c3-9d053589fd73-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-6q9fb\" (UID: \"07a101d3-99de-46c5-88c3-9d053589fd73\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6q9fb" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.600966 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/98c2fa9b-f899-4167-ab43-6d97e7b1589e-socket-dir\") pod \"csi-hostpathplugin-sc95q\" (UID: \"98c2fa9b-f899-4167-ab43-6d97e7b1589e\") " pod="hostpath-provisioner/csi-hostpathplugin-sc95q" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.600980 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwxvz\" (UniqueName: \"kubernetes.io/projected/98c2fa9b-f899-4167-ab43-6d97e7b1589e-kube-api-access-mwxvz\") pod \"csi-hostpathplugin-sc95q\" (UID: \"98c2fa9b-f899-4167-ab43-6d97e7b1589e\") " pod="hostpath-provisioner/csi-hostpathplugin-sc95q" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.601004 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6l2r\" (UniqueName: \"kubernetes.io/projected/386d3c8a-a2d4-46e1-a7cb-c456eec4860b-kube-api-access-k6l2r\") pod \"service-ca-9c57cc56f-pm27x\" (UID: \"386d3c8a-a2d4-46e1-a7cb-c456eec4860b\") " pod="openshift-service-ca/service-ca-9c57cc56f-pm27x" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.601029 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vslzh\" 
(UniqueName: \"kubernetes.io/projected/3b5b4fe0-1e4f-435a-aff4-3d388fb616cc-kube-api-access-vslzh\") pod \"migrator-59844c95c7-hxz9b\" (UID: \"3b5b4fe0-1e4f-435a-aff4-3d388fb616cc\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hxz9b" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.601044 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvv4s\" (UniqueName: \"kubernetes.io/projected/df6a8d9e-bb6c-42e9-918e-60cbb78f55ac-kube-api-access-dvv4s\") pod \"machine-approver-56656f9798-rzxwh\" (UID: \"df6a8d9e-bb6c-42e9-918e-60cbb78f55ac\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rzxwh" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.601059 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ef7cd67-8788-4c1b-9f7d-e66f4fa9737e-config-volume\") pod \"dns-default-qvr67\" (UID: \"7ef7cd67-8788-4c1b-9f7d-e66f4fa9737e\") " pod="openshift-dns/dns-default-qvr67" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.601093 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7ef7cd67-8788-4c1b-9f7d-e66f4fa9737e-metrics-tls\") pod \"dns-default-qvr67\" (UID: \"7ef7cd67-8788-4c1b-9f7d-e66f4fa9737e\") " pod="openshift-dns/dns-default-qvr67" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.601110 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7b7188c5-eb8b-46be-b3db-4f0e058c3110-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-qjlhx\" (UID: \"7b7188c5-eb8b-46be-b3db-4f0e058c3110\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qjlhx" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.601124 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/98c2fa9b-f899-4167-ab43-6d97e7b1589e-plugins-dir\") pod \"csi-hostpathplugin-sc95q\" (UID: \"98c2fa9b-f899-4167-ab43-6d97e7b1589e\") " pod="hostpath-provisioner/csi-hostpathplugin-sc95q" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.601142 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8df2dcaf-0af5-4827-9135-e5944a2b12dc-images\") pod \"machine-config-operator-74547568cd-44jkz\" (UID: \"8df2dcaf-0af5-4827-9135-e5944a2b12dc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-44jkz" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.601158 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/94ee27f9-59ea-44af-bbd3-e586f4bb7bb1-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-z4vpd\" (UID: \"94ee27f9-59ea-44af-bbd3-e586f4bb7bb1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z4vpd" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.601191 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/df6a8d9e-bb6c-42e9-918e-60cbb78f55ac-auth-proxy-config\") pod \"machine-approver-56656f9798-rzxwh\" (UID: \"df6a8d9e-bb6c-42e9-918e-60cbb78f55ac\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rzxwh" Nov 28 06:54:42 crc 
kubenswrapper[4946]: I1128 06:54:42.601207 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/175d3bf2-c969-401c-9b35-d91a066d0305-secret-volume\") pod \"collect-profiles-29405205-hbxpf\" (UID: \"175d3bf2-c969-401c-9b35-d91a066d0305\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405205-hbxpf" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.601221 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/85300130-dd7e-4e81-b37a-7dc93d20ab33-profile-collector-cert\") pod \"olm-operator-6b444d44fb-6k9gx\" (UID: \"85300130-dd7e-4e81-b37a-7dc93d20ab33\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6k9gx" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.601238 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7092cadd-becf-4ee8-bbb5-0aadd638ad46-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-mnst8\" (UID: \"7092cadd-becf-4ee8-bbb5-0aadd638ad46\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mnst8" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.601270 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4j5k\" (UniqueName: \"kubernetes.io/projected/94ee27f9-59ea-44af-bbd3-e586f4bb7bb1-kube-api-access-z4j5k\") pod \"cluster-image-registry-operator-dc59b4c8b-z4vpd\" (UID: \"94ee27f9-59ea-44af-bbd3-e586f4bb7bb1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z4vpd" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.601286 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52939a0c-29ab-4cbb-bd53-7a330884139c-serving-cert\") pod \"service-ca-operator-777779d784-tn22h\" (UID: \"52939a0c-29ab-4cbb-bd53-7a330884139c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tn22h" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.601301 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/98c2fa9b-f899-4167-ab43-6d97e7b1589e-registration-dir\") pod \"csi-hostpathplugin-sc95q\" (UID: \"98c2fa9b-f899-4167-ab43-6d97e7b1589e\") " pod="hostpath-provisioner/csi-hostpathplugin-sc95q" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.601324 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/11ad130b-9da2-40c3-b106-64faae722846-node-bootstrap-token\") pod \"machine-config-server-vcgqw\" (UID: \"11ad130b-9da2-40c3-b106-64faae722846\") " pod="openshift-machine-config-operator/machine-config-server-vcgqw" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.601357 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6ed5ffa1-4361-4c09-b250-607fdf9cd2f4-cert\") pod \"ingress-canary-5q8lj\" (UID: \"6ed5ffa1-4361-4c09-b250-607fdf9cd2f4\") " pod="openshift-ingress-canary/ingress-canary-5q8lj" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.601373 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7092cadd-becf-4ee8-bbb5-0aadd638ad46-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-mnst8\" (UID: \"7092cadd-becf-4ee8-bbb5-0aadd638ad46\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mnst8" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.601400 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbbz5\" (UniqueName: \"kubernetes.io/projected/4d68f56c-7e3b-4207-92d1-95800df70d0e-kube-api-access-dbbz5\") pod \"dns-operator-744455d44c-r8n92\" (UID: \"4d68f56c-7e3b-4207-92d1-95800df70d0e\") " pod="openshift-dns-operator/dns-operator-744455d44c-r8n92" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.601674 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/175d3bf2-c969-401c-9b35-d91a066d0305-config-volume\") pod \"collect-profiles-29405205-hbxpf\" (UID: \"175d3bf2-c969-401c-9b35-d91a066d0305\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405205-hbxpf" Nov 28 06:54:42 crc kubenswrapper[4946]: E1128 06:54:42.601785 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:54:43.101757497 +0000 UTC m=+137.479822608 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.602273 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/98c2fa9b-f899-4167-ab43-6d97e7b1589e-socket-dir\") pod \"csi-hostpathplugin-sc95q\" (UID: \"98c2fa9b-f899-4167-ab43-6d97e7b1589e\") " pod="hostpath-provisioner/csi-hostpathplugin-sc95q" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.602601 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/98c2fa9b-f899-4167-ab43-6d97e7b1589e-csi-data-dir\") pod \"csi-hostpathplugin-sc95q\" (UID: \"98c2fa9b-f899-4167-ab43-6d97e7b1589e\") " pod="hostpath-provisioner/csi-hostpathplugin-sc95q" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.604220 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/98c2fa9b-f899-4167-ab43-6d97e7b1589e-mountpoint-dir\") pod \"csi-hostpathplugin-sc95q\" (UID: \"98c2fa9b-f899-4167-ab43-6d97e7b1589e\") " pod="hostpath-provisioner/csi-hostpathplugin-sc95q" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.607485 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/df6a8d9e-bb6c-42e9-918e-60cbb78f55ac-auth-proxy-config\") pod \"machine-approver-56656f9798-rzxwh\" (UID: \"df6a8d9e-bb6c-42e9-918e-60cbb78f55ac\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rzxwh" Nov 28 06:54:42 
crc kubenswrapper[4946]: I1128 06:54:42.607952 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ef7cd67-8788-4c1b-9f7d-e66f4fa9737e-config-volume\") pod \"dns-default-qvr67\" (UID: \"7ef7cd67-8788-4c1b-9f7d-e66f4fa9737e\") " pod="openshift-dns/dns-default-qvr67" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.609493 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/386d3c8a-a2d4-46e1-a7cb-c456eec4860b-signing-cabundle\") pod \"service-ca-9c57cc56f-pm27x\" (UID: \"386d3c8a-a2d4-46e1-a7cb-c456eec4860b\") " pod="openshift-service-ca/service-ca-9c57cc56f-pm27x" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.609932 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/94ee27f9-59ea-44af-bbd3-e586f4bb7bb1-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-z4vpd\" (UID: \"94ee27f9-59ea-44af-bbd3-e586f4bb7bb1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z4vpd" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.610366 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8df2dcaf-0af5-4827-9135-e5944a2b12dc-proxy-tls\") pod \"machine-config-operator-74547568cd-44jkz\" (UID: \"8df2dcaf-0af5-4827-9135-e5944a2b12dc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-44jkz" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.610562 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3c2424a4-d3a4-4acc-a683-320abd4467ff-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-pq4rr\" (UID: \"3c2424a4-d3a4-4acc-a683-320abd4467ff\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-pq4rr" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.610998 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52939a0c-29ab-4cbb-bd53-7a330884139c-config\") pod \"service-ca-operator-777779d784-tn22h\" (UID: \"52939a0c-29ab-4cbb-bd53-7a330884139c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tn22h" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.611012 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c6a06ceb-1625-421c-864d-4f24d2fd00d3-profile-collector-cert\") pod \"catalog-operator-68c6474976-4swtq\" (UID: \"c6a06ceb-1625-421c-864d-4f24d2fd00d3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4swtq" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.611077 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/98c2fa9b-f899-4167-ab43-6d97e7b1589e-plugins-dir\") pod \"csi-hostpathplugin-sc95q\" (UID: \"98c2fa9b-f899-4167-ab43-6d97e7b1589e\") " pod="hostpath-provisioner/csi-hostpathplugin-sc95q" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.611504 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8df2dcaf-0af5-4827-9135-e5944a2b12dc-images\") pod \"machine-config-operator-74547568cd-44jkz\" (UID: 
\"8df2dcaf-0af5-4827-9135-e5944a2b12dc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-44jkz" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.611614 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/386d3c8a-a2d4-46e1-a7cb-c456eec4860b-signing-key\") pod \"service-ca-9c57cc56f-pm27x\" (UID: \"386d3c8a-a2d4-46e1-a7cb-c456eec4860b\") " pod="openshift-service-ca/service-ca-9c57cc56f-pm27x" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.613132 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zz4n\" (UniqueName: \"kubernetes.io/projected/2cb23522-48f9-442d-8321-90df7290886d-kube-api-access-7zz4n\") pod \"machine-config-controller-84d6567774-pdk2l\" (UID: \"2cb23522-48f9-442d-8321-90df7290886d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pdk2l" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.613215 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/98c2fa9b-f899-4167-ab43-6d97e7b1589e-registration-dir\") pod \"csi-hostpathplugin-sc95q\" (UID: \"98c2fa9b-f899-4167-ab43-6d97e7b1589e\") " pod="hostpath-provisioner/csi-hostpathplugin-sc95q" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.613900 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df6a8d9e-bb6c-42e9-918e-60cbb78f55ac-config\") pod \"machine-approver-56656f9798-rzxwh\" (UID: \"df6a8d9e-bb6c-42e9-918e-60cbb78f55ac\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rzxwh" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.615614 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7092cadd-becf-4ee8-bbb5-0aadd638ad46-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-mnst8\" (UID: \"7092cadd-becf-4ee8-bbb5-0aadd638ad46\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mnst8" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.617358 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/175d3bf2-c969-401c-9b35-d91a066d0305-secret-volume\") pod \"collect-profiles-29405205-hbxpf\" (UID: \"175d3bf2-c969-401c-9b35-d91a066d0305\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405205-hbxpf" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.617930 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/df6a8d9e-bb6c-42e9-918e-60cbb78f55ac-machine-approver-tls\") pod \"machine-approver-56656f9798-rzxwh\" (UID: \"df6a8d9e-bb6c-42e9-918e-60cbb78f55ac\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rzxwh" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.618982 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c6a06ceb-1625-421c-864d-4f24d2fd00d3-srv-cert\") pod \"catalog-operator-68c6474976-4swtq\" (UID: \"c6a06ceb-1625-421c-864d-4f24d2fd00d3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4swtq" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.621257 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sm5f9" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.627428 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/85300130-dd7e-4e81-b37a-7dc93d20ab33-srv-cert\") pod \"olm-operator-6b444d44fb-6k9gx\" (UID: \"85300130-dd7e-4e81-b37a-7dc93d20ab33\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6k9gx" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.632971 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4d68f56c-7e3b-4207-92d1-95800df70d0e-metrics-tls\") pod \"dns-operator-744455d44c-r8n92\" (UID: \"4d68f56c-7e3b-4207-92d1-95800df70d0e\") " pod="openshift-dns-operator/dns-operator-744455d44c-r8n92" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.637018 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8df2dcaf-0af5-4827-9135-e5944a2b12dc-auth-proxy-config\") pod \"machine-config-operator-74547568cd-44jkz\" (UID: \"8df2dcaf-0af5-4827-9135-e5944a2b12dc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-44jkz" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.638272 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/94ee27f9-59ea-44af-bbd3-e586f4bb7bb1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-z4vpd\" (UID: \"94ee27f9-59ea-44af-bbd3-e586f4bb7bb1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z4vpd" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.638411 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7b7188c5-eb8b-46be-b3db-4f0e058c3110-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-qjlhx\" (UID: \"7b7188c5-eb8b-46be-b3db-4f0e058c3110\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qjlhx" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.638999 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/11ad130b-9da2-40c3-b106-64faae722846-certs\") pod \"machine-config-server-vcgqw\" (UID: \"11ad130b-9da2-40c3-b106-64faae722846\") " pod="openshift-machine-config-operator/machine-config-server-vcgqw" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.653993 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7092cadd-becf-4ee8-bbb5-0aadd638ad46-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-mnst8\" (UID: \"7092cadd-becf-4ee8-bbb5-0aadd638ad46\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mnst8" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.654456 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/85300130-dd7e-4e81-b37a-7dc93d20ab33-profile-collector-cert\") pod \"olm-operator-6b444d44fb-6k9gx\" (UID: \"85300130-dd7e-4e81-b37a-7dc93d20ab33\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6k9gx" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.654910 4946 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52939a0c-29ab-4cbb-bd53-7a330884139c-serving-cert\") pod \"service-ca-operator-777779d784-tn22h\" (UID: \"52939a0c-29ab-4cbb-bd53-7a330884139c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tn22h" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.654908 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7ef7cd67-8788-4c1b-9f7d-e66f4fa9737e-metrics-tls\") pod \"dns-default-qvr67\" (UID: \"7ef7cd67-8788-4c1b-9f7d-e66f4fa9737e\") " pod="openshift-dns/dns-default-qvr67" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.654998 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/11ad130b-9da2-40c3-b106-64faae722846-node-bootstrap-token\") pod \"machine-config-server-vcgqw\" (UID: \"11ad130b-9da2-40c3-b106-64faae722846\") " pod="openshift-machine-config-operator/machine-config-server-vcgqw" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.655305 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgt46\" (UniqueName: \"kubernetes.io/projected/2ba6db31-f148-40c3-b5cc-60b2a8e063e4-kube-api-access-vgt46\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.655874 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/07a101d3-99de-46c5-88c3-9d053589fd73-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-6q9fb\" (UID: \"07a101d3-99de-46c5-88c3-9d053589fd73\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6q9fb" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.661959 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-cqn4c" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.676168 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6ed5ffa1-4361-4c09-b250-607fdf9cd2f4-cert\") pod \"ingress-canary-5q8lj\" (UID: \"6ed5ffa1-4361-4c09-b250-607fdf9cd2f4\") " pod="openshift-ingress-canary/ingress-canary-5q8lj" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.697747 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mxpd\" (UniqueName: \"kubernetes.io/projected/6a09131a-f0b1-48e9-9b10-8e75e3344d3c-kube-api-access-2mxpd\") pod \"control-plane-machine-set-operator-78cbb6b69f-5tnsl\" (UID: \"6a09131a-f0b1-48e9-9b10-8e75e3344d3c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5tnsl" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.702141 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" Nov 28 06:54:42 crc kubenswrapper[4946]: E1128 06:54:42.702571 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:54:43.202558819 +0000 UTC m=+137.580623930 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qhbbg" (UID: "2ba6db31-f148-40c3-b5cc-60b2a8e063e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.726085 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7jr4\" (UniqueName: \"kubernetes.io/projected/175d3bf2-c969-401c-9b35-d91a066d0305-kube-api-access-m7jr4\") pod \"collect-profiles-29405205-hbxpf\" (UID: \"175d3bf2-c969-401c-9b35-d91a066d0305\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405205-hbxpf" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.755214 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbbz5\" (UniqueName: \"kubernetes.io/projected/4d68f56c-7e3b-4207-92d1-95800df70d0e-kube-api-access-dbbz5\") pod \"dns-operator-744455d44c-r8n92\" (UID: \"4d68f56c-7e3b-4207-92d1-95800df70d0e\") " pod="openshift-dns-operator/dns-operator-744455d44c-r8n92" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.755223 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7twz6\" (UniqueName: \"kubernetes.io/projected/11ad130b-9da2-40c3-b106-64faae722846-kube-api-access-7twz6\") pod \"machine-config-server-vcgqw\" (UID: \"11ad130b-9da2-40c3-b106-64faae722846\") " pod="openshift-machine-config-operator/machine-config-server-vcgqw" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.768171 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-j9z2j\" (UniqueName: \"kubernetes.io/projected/6ed5ffa1-4361-4c09-b250-607fdf9cd2f4-kube-api-access-j9z2j\") pod \"ingress-canary-5q8lj\" (UID: \"6ed5ffa1-4361-4c09-b250-607fdf9cd2f4\") " pod="openshift-ingress-canary/ingress-canary-5q8lj" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.775941 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405205-hbxpf" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.793017 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-dts9q"] Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.798502 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-vcgqw" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.802072 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk22g\" (UniqueName: \"kubernetes.io/projected/7b7188c5-eb8b-46be-b3db-4f0e058c3110-kube-api-access-fk22g\") pod \"multus-admission-controller-857f4d67dd-qjlhx\" (UID: \"7b7188c5-eb8b-46be-b3db-4f0e058c3110\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qjlhx" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.803115 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:54:42 crc kubenswrapper[4946]: E1128 06:54:42.803544 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:54:43.303524835 +0000 UTC m=+137.681589946 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.816129 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwxvz\" (UniqueName: \"kubernetes.io/projected/98c2fa9b-f899-4167-ab43-6d97e7b1589e-kube-api-access-mwxvz\") pod \"csi-hostpathplugin-sc95q\" (UID: \"98c2fa9b-f899-4167-ab43-6d97e7b1589e\") " pod="hostpath-provisioner/csi-hostpathplugin-sc95q" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.816794 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rnrl\" (UniqueName: \"kubernetes.io/projected/7ef7cd67-8788-4c1b-9f7d-e66f4fa9737e-kube-api-access-8rnrl\") pod \"dns-default-qvr67\" (UID: \"7ef7cd67-8788-4c1b-9f7d-e66f4fa9737e\") " pod="openshift-dns/dns-default-qvr67" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.825380 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-5q8lj" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.846359 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6l2r\" (UniqueName: \"kubernetes.io/projected/386d3c8a-a2d4-46e1-a7cb-c456eec4860b-kube-api-access-k6l2r\") pod \"service-ca-9c57cc56f-pm27x\" (UID: \"386d3c8a-a2d4-46e1-a7cb-c456eec4860b\") " pod="openshift-service-ca/service-ca-9c57cc56f-pm27x" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.861301 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pdk2l" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.864752 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vslzh\" (UniqueName: \"kubernetes.io/projected/3b5b4fe0-1e4f-435a-aff4-3d388fb616cc-kube-api-access-vslzh\") pod \"migrator-59844c95c7-hxz9b\" (UID: \"3b5b4fe0-1e4f-435a-aff4-3d388fb616cc\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hxz9b" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.875256 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-4bfkx"] Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.888413 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-pq4rr" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.892897 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvv4s\" (UniqueName: \"kubernetes.io/projected/df6a8d9e-bb6c-42e9-918e-60cbb78f55ac-kube-api-access-dvv4s\") pod \"machine-approver-56656f9798-rzxwh\" (UID: \"df6a8d9e-bb6c-42e9-918e-60cbb78f55ac\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rzxwh" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.894192 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6z54\" (UniqueName: \"kubernetes.io/projected/52939a0c-29ab-4cbb-bd53-7a330884139c-kube-api-access-z6z54\") pod \"service-ca-operator-777779d784-tn22h\" (UID: \"52939a0c-29ab-4cbb-bd53-7a330884139c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tn22h" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.909185 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" Nov 28 06:54:42 crc kubenswrapper[4946]: E1128 06:54:42.909672 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:54:43.409656203 +0000 UTC m=+137.787721334 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qhbbg" (UID: "2ba6db31-f148-40c3-b5cc-60b2a8e063e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.916403 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bdnf\" (UniqueName: \"kubernetes.io/projected/07a101d3-99de-46c5-88c3-9d053589fd73-kube-api-access-4bdnf\") pod \"package-server-manager-789f6589d5-6q9fb\" (UID: \"07a101d3-99de-46c5-88c3-9d053589fd73\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6q9fb" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.951549 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5tnsl" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.964725 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6q9fb" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.967547 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-4qwzn"] Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.971142 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-qjlhx" Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.995620 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfm25"] Nov 28 06:54:42 crc kubenswrapper[4946]: I1128 06:54:42.995932 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-pm27x" Nov 28 06:54:43 crc kubenswrapper[4946]: I1128 06:54:43.000101 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x9d2\" (UniqueName: \"kubernetes.io/projected/8df2dcaf-0af5-4827-9135-e5944a2b12dc-kube-api-access-6x9d2\") pod \"machine-config-operator-74547568cd-44jkz\" (UID: \"8df2dcaf-0af5-4827-9135-e5944a2b12dc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-44jkz" Nov 28 06:54:43 crc kubenswrapper[4946]: I1128 06:54:43.000603 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhc75\" (UniqueName: \"kubernetes.io/projected/7092cadd-becf-4ee8-bbb5-0aadd638ad46-kube-api-access-rhc75\") pod \"kube-storage-version-migrator-operator-b67b599dd-mnst8\" (UID: \"7092cadd-becf-4ee8-bbb5-0aadd638ad46\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mnst8" Nov 28 06:54:43 crc kubenswrapper[4946]: I1128 06:54:43.010280 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:54:43 crc kubenswrapper[4946]: E1128 06:54:43.010453 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:54:43.510417614 +0000 UTC m=+137.888482725 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:54:43 crc kubenswrapper[4946]: I1128 06:54:43.017262 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxkrp\" (UniqueName: \"kubernetes.io/projected/c6a06ceb-1625-421c-864d-4f24d2fd00d3-kube-api-access-qxkrp\") pod \"catalog-operator-68c6474976-4swtq\" (UID: \"c6a06ceb-1625-421c-864d-4f24d2fd00d3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4swtq" Nov 28 06:54:43 crc kubenswrapper[4946]: I1128 06:54:43.017804 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tn22h" Nov 28 06:54:43 crc kubenswrapper[4946]: I1128 06:54:43.018597 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" Nov 28 06:54:43 crc kubenswrapper[4946]: E1128 06:54:43.018935 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:54:43.51892303 +0000 UTC m=+137.896988141 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qhbbg" (UID: "2ba6db31-f148-40c3-b5cc-60b2a8e063e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:54:43 crc kubenswrapper[4946]: I1128 06:54:43.023220 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dj6q\" (UniqueName: \"kubernetes.io/projected/85300130-dd7e-4e81-b37a-7dc93d20ab33-kube-api-access-7dj6q\") pod \"olm-operator-6b444d44fb-6k9gx\" (UID: \"85300130-dd7e-4e81-b37a-7dc93d20ab33\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6k9gx" Nov 28 06:54:43 crc kubenswrapper[4946]: I1128 06:54:43.027395 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4swtq" Nov 28 06:54:43 crc kubenswrapper[4946]: I1128 06:54:43.030103 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/94ee27f9-59ea-44af-bbd3-e586f4bb7bb1-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-z4vpd\" (UID: \"94ee27f9-59ea-44af-bbd3-e586f4bb7bb1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z4vpd" Nov 28 06:54:43 crc kubenswrapper[4946]: I1128 06:54:43.036823 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4j5k\" (UniqueName: \"kubernetes.io/projected/94ee27f9-59ea-44af-bbd3-e586f4bb7bb1-kube-api-access-z4j5k\") pod \"cluster-image-registry-operator-dc59b4c8b-z4vpd\" (UID: \"94ee27f9-59ea-44af-bbd3-e586f4bb7bb1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z4vpd" Nov 28 06:54:43 crc kubenswrapper[4946]: I1128 06:54:43.042798 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rzxwh" Nov 28 06:54:43 crc kubenswrapper[4946]: I1128 06:54:43.051866 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-r8n92" Nov 28 06:54:43 crc kubenswrapper[4946]: I1128 06:54:43.064869 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mnst8" Nov 28 06:54:43 crc kubenswrapper[4946]: I1128 06:54:43.069264 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hxz9b" Nov 28 06:54:43 crc kubenswrapper[4946]: I1128 06:54:43.086383 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-qvr67" Nov 28 06:54:43 crc kubenswrapper[4946]: I1128 06:54:43.134203 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-sc95q" Nov 28 06:54:43 crc kubenswrapper[4946]: I1128 06:54:43.135826 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:54:43 crc kubenswrapper[4946]: E1128 06:54:43.136139 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:54:43.636109219 +0000 UTC m=+138.014174330 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:54:43 crc kubenswrapper[4946]: I1128 06:54:43.180131 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-js7vs"] Nov 28 06:54:43 crc kubenswrapper[4946]: I1128 06:54:43.241709 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" Nov 28 06:54:43 crc kubenswrapper[4946]: E1128 06:54:43.242094 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:54:43.742078462 +0000 UTC m=+138.120143573 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qhbbg" (UID: "2ba6db31-f148-40c3-b5cc-60b2a8e063e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:54:43 crc kubenswrapper[4946]: I1128 06:54:43.287275 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-44jkz" Nov 28 06:54:43 crc kubenswrapper[4946]: I1128 06:54:43.308203 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6k9gx" Nov 28 06:54:43 crc kubenswrapper[4946]: I1128 06:54:43.339138 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z4vpd" Nov 28 06:54:43 crc kubenswrapper[4946]: I1128 06:54:43.343161 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:54:43 crc kubenswrapper[4946]: E1128 06:54:43.343539 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:54:43.843521301 +0000 UTC m=+138.221586412 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:54:43 crc kubenswrapper[4946]: I1128 06:54:43.454947 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" Nov 28 06:54:43 crc kubenswrapper[4946]: E1128 06:54:43.455261 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:54:43.955249971 +0000 UTC m=+138.333315082 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qhbbg" (UID: "2ba6db31-f148-40c3-b5cc-60b2a8e063e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:54:43 crc kubenswrapper[4946]: I1128 06:54:43.455509 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-m8vtf"] Nov 28 06:54:43 crc kubenswrapper[4946]: I1128 06:54:43.456497 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-559n4" podStartSLOduration=117.456486212 podStartE2EDuration="1m57.456486212s" podCreationTimestamp="2025-11-28 06:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:54:43.44576561 +0000 UTC m=+137.823830721" watchObservedRunningTime="2025-11-28 06:54:43.456486212 +0000 UTC m=+137.834551323" Nov 28 06:54:43 crc kubenswrapper[4946]: I1128 06:54:43.526504 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-vcgqw" event={"ID":"11ad130b-9da2-40c3-b106-64faae722846","Type":"ContainerStarted","Data":"9842fa43fc3e094f6e10166e51e432cd415cafdc7856ac70863701d51ecb2b06"} Nov 28 06:54:43 crc kubenswrapper[4946]: I1128 06:54:43.553677 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-js7vs" event={"ID":"d8c9e4f9-d51d-47d9-8228-c9c58873dbe3","Type":"ContainerStarted","Data":"951d846c7df7619c45cdbac5847eb55f9af534ebffc9b05aa2dda2aaabd8c9b2"} Nov 28 06:54:43 crc kubenswrapper[4946]: I1128 06:54:43.556110 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:54:43 crc kubenswrapper[4946]: E1128 06:54:43.556557 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:54:44.056541536 +0000 UTC m=+138.434606637 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:54:43 crc kubenswrapper[4946]: I1128 06:54:43.576018 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4qwzn" event={"ID":"475dd273-84f1-46de-9c16-3eaeca9b7d1c","Type":"ContainerStarted","Data":"b2f4bdf2221342b3a6951d1df971b239054cf7a5ec974378e4c134cc83673fc8"} Nov 28 06:54:43 crc kubenswrapper[4946]: I1128 06:54:43.580762 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-79r5c" event={"ID":"3580d0e9-f1d5-413e-a76e-d22baa741afd","Type":"ContainerStarted","Data":"f36d30c22daf3e25514e19e9a3ac939598cd13e1805442e62050744543f01f6b"} Nov 28 06:54:43 crc kubenswrapper[4946]: I1128 06:54:43.609039 4946 generic.go:334] "Generic (PLEG): container finished" podID="677259a2-4efb-4f49-ad8e-57357402c59a" containerID="3c83ce8806b1cb50d8494e17d2405a1fb4e0c8d21232fe57a7ae114f594c667c" exitCode=0 Nov 28 06:54:43 crc kubenswrapper[4946]: I1128 06:54:43.609156 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pw98g" event={"ID":"677259a2-4efb-4f49-ad8e-57357402c59a","Type":"ContainerDied","Data":"3c83ce8806b1cb50d8494e17d2405a1fb4e0c8d21232fe57a7ae114f594c667c"} Nov 28 06:54:43 crc kubenswrapper[4946]: I1128 06:54:43.644147 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hnzrs" event={"ID":"ee803337-53f2-4467-8f6c-602a16bda8e5","Type":"ContainerStarted","Data":"a2c53168825f54cbffd162c2b846b4f0bcfe9af66da73cbc6faa2b7b11f0a6ee"} Nov 28 06:54:43 crc kubenswrapper[4946]: I1128 06:54:43.645928 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hnzrs" event={"ID":"ee803337-53f2-4467-8f6c-602a16bda8e5","Type":"ContainerStarted","Data":"c8ca6522e77db97cb1e508ef76abfcf2c3b908a1d75c1e96ccbb12cdecfe404e"} Nov 28 06:54:43 crc kubenswrapper[4946]: I1128 06:54:43.662543 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" Nov 28 06:54:43 crc kubenswrapper[4946]: E1128 06:54:43.662883 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:54:44.162868358 +0000 UTC m=+138.540933469 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qhbbg" (UID: "2ba6db31-f148-40c3-b5cc-60b2a8e063e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:54:43 crc kubenswrapper[4946]: I1128 06:54:43.683109 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfm25" event={"ID":"1f9de5c3-e681-4c09-a93e-9f95f03c7f5e","Type":"ContainerStarted","Data":"d3e0b2de4d9cdb8c973e6aa94fb83f8213b4d947e651181ddb14be567dae1965"} Nov 28 06:54:43 crc kubenswrapper[4946]: I1128 06:54:43.683871 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-pddp7" podStartSLOduration=118.683861412 podStartE2EDuration="1m58.683861412s" podCreationTimestamp="2025-11-28 06:52:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:54:43.654450854 +0000 UTC m=+138.032515965" watchObservedRunningTime="2025-11-28 06:54:43.683861412 +0000 UTC m=+138.061926523" Nov 28 06:54:43 crc kubenswrapper[4946]: I1128 06:54:43.741969 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xv7dv" event={"ID":"b8b3ba0a-934b-4bc0-8bc4-58991860b7fe","Type":"ContainerStarted","Data":"d6ac5e3e619d526993f4fa3138b2ae37dc890a82748edb244b4df8f2836c0608"} Nov 28 06:54:43 crc kubenswrapper[4946]: I1128 06:54:43.766495 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:54:43 crc kubenswrapper[4946]: E1128 06:54:43.767695 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:54:44.267668272 +0000 UTC m=+138.645733383 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:54:43 crc kubenswrapper[4946]: I1128 06:54:43.768776 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-4bfkx" event={"ID":"b4d03410-0c4e-4aa4-b760-8d8179d791a5","Type":"ContainerStarted","Data":"f825acd44a0e262753478f53009899916287d9cbde4e0efe91f37b48706d23d7"} Nov 28 06:54:43 crc kubenswrapper[4946]: I1128 06:54:43.808975 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-r7ztb" event={"ID":"79c0d15c-8fc9-4efd-b1ec-739718f313d9","Type":"ContainerStarted","Data":"96045c27504107ddb509a67ac3bd912c5b7570b92a1d10f14ac7150c1754851e"} Nov 28 06:54:43 crc kubenswrapper[4946]: I1128 06:54:43.809498 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-r7ztb" event={"ID":"79c0d15c-8fc9-4efd-b1ec-739718f313d9","Type":"ContainerStarted","Data":"6c161cb54a6d2e2b4039d3f437ec02e41b77e7685954cf10e5515b106fdaca81"} Nov 28 06:54:43 crc kubenswrapper[4946]: I1128 06:54:43.859871 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-hwk8p" event={"ID":"63f790e6-5c8d-475c-bd26-a1e52ffb9ed3","Type":"ContainerStarted","Data":"1bfc0356b4e24c3bbfea4bc9e75f181a262fb47ce4d13ca7b31050ae1cd4af7e"} Nov 28 06:54:43 crc kubenswrapper[4946]: I1128 06:54:43.859945 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-hwk8p" event={"ID":"63f790e6-5c8d-475c-bd26-a1e52ffb9ed3","Type":"ContainerStarted","Data":"e1a938564248ddb8df9d8b2e0642f081e078337cba2c240a214ace7041563152"} Nov 28 06:54:43 crc kubenswrapper[4946]: I1128 06:54:43.862173 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-cqn4c"] Nov 28 06:54:43 crc kubenswrapper[4946]: I1128 06:54:43.868103 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" Nov 28 06:54:43 crc kubenswrapper[4946]: E1128 06:54:43.868639 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:54:44.368623598 +0000 UTC m=+138.746688709 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qhbbg" (UID: "2ba6db31-f148-40c3-b5cc-60b2a8e063e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:54:43 crc kubenswrapper[4946]: I1128 06:54:43.870261 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-m444h"] Nov 28 06:54:43 crc kubenswrapper[4946]: I1128 06:54:43.886714 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-dts9q" event={"ID":"0baa90e9-548b-4a20-9d9c-6a673f4ae0d0","Type":"ContainerStarted","Data":"afa3f588802a54b46110754de4cd5367f4a09a0e77bd86583f907f623aae3150"} Nov 28 06:54:43 crc kubenswrapper[4946]: I1128 06:54:43.907125 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-wb2pq" Nov 28 06:54:43 crc kubenswrapper[4946]: I1128 06:54:43.942738 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5ldr8"] Nov 28 06:54:43 crc kubenswrapper[4946]: I1128 06:54:43.965558 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tft2c" podStartSLOduration=118.965537572 podStartE2EDuration="1m58.965537572s" podCreationTimestamp="2025-11-28 06:52:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:54:43.958430781 +0000 UTC m=+138.336495902" watchObservedRunningTime="2025-11-28 06:54:43.965537572 +0000 UTC m=+138.343602683" Nov 28 06:54:43 crc kubenswrapper[4946]: I1128 06:54:43.969660 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:54:43 crc kubenswrapper[4946]: E1128 06:54:43.969898 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:54:44.469870992 +0000 UTC m=+138.847936103 (durationBeforeRetry 500ms). 
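The pair of failures above is the loop to watch in this section: MountVolume.MountDevice for the incoming image-registry-697d97f7c8-qhbbg pod (UID 2ba6db31-f148-40c3-b5cc-60b2a8e063e4) and UnmountVolume.TearDown for the outgoing pod 8f668bae-612b-4b75-9490-919e737c6a3b both need a CSI client for kubevirt.io.hostpath-provisioner, and both fail at the same place: the driver has not yet completed plugin registration with the kubelet, so its name is missing from the kubelet's in-memory table of registered CSI drivers. A minimal Go sketch of that lookup, assuming a mutex-guarded map keyed by driver name (names, types, and the socket path are illustrative, not kubelet source):

    // Hypothetical sketch, not kubelet source: the shape of the check behind
    // "driver name ... not found in the list of registered CSI drivers".
    package main

    import (
        "fmt"
        "sync"
    )

    // driversStore stands in for the kubelet's map of CSI drivers that have
    // completed plugin registration over the kubelet's registration socket.
    type driversStore struct {
        mu      sync.RWMutex
        drivers map[string]string // driver name -> endpoint (assumed layout)
    }

    func (s *driversStore) get(name string) (string, bool) {
        s.mu.RLock()
        defer s.mu.RUnlock()
        ep, ok := s.drivers[name]
        return ep, ok
    }

    // newCSIClient fails the way every Mount/Unmount in this log does while
    // the hostpath-provisioner driver pod has not yet registered.
    func newCSIClient(s *driversStore, driverName string) (string, error) {
        ep, ok := s.get(driverName)
        if !ok {
            return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", driverName)
        }
        return ep, nil
    }

    func main() {
        reg := &driversStore{drivers: map[string]string{}}

        // Before registration: the exact failure repeated throughout this log.
        if _, err := newCSIClient(reg, "kubevirt.io.hostpath-provisioner"); err != nil {
            fmt.Println("Error:", err)
        }

        // Once the driver registers, the same lookup succeeds and the retries clear.
        reg.drivers["kubevirt.io.hostpath-provisioner"] = "/var/lib/kubelet/plugins/csi-hostpath/csi.sock" // assumed path
        if ep, err := newCSIClient(reg, "kubevirt.io.hostpath-provisioner"); err == nil {
            fmt.Println("registered at", ep)
        }
    }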
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:54:43 crc kubenswrapper[4946]: I1128 06:54:43.970319 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" Nov 28 06:54:43 crc kubenswrapper[4946]: I1128 06:54:43.970650 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405205-hbxpf"] Nov 28 06:54:43 crc kubenswrapper[4946]: E1128 06:54:43.984993 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:54:44.484964135 +0000 UTC m=+138.863029246 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qhbbg" (UID: "2ba6db31-f148-40c3-b5cc-60b2a8e063e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:54:44 crc kubenswrapper[4946]: I1128 06:54:44.086049 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:54:44 crc kubenswrapper[4946]: E1128 06:54:44.087395 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:54:44.587378198 +0000 UTC m=+138.965443309 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:54:44 crc kubenswrapper[4946]: I1128 06:54:44.187255 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" Nov 28 06:54:44 crc kubenswrapper[4946]: E1128 06:54:44.187701 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:54:44.687686998 +0000 UTC m=+139.065752109 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qhbbg" (UID: "2ba6db31-f148-40c3-b5cc-60b2a8e063e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:54:44 crc kubenswrapper[4946]: I1128 06:54:44.195540 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sm5f9"] Nov 28 06:54:44 crc kubenswrapper[4946]: I1128 06:54:44.196587 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-wb2pq" podStartSLOduration=118.196560854 podStartE2EDuration="1m58.196560854s" podCreationTimestamp="2025-11-28 06:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:54:44.177833938 +0000 UTC m=+138.555899059" watchObservedRunningTime="2025-11-28 06:54:44.196560854 +0000 UTC m=+138.574625965" Nov 28 06:54:44 crc kubenswrapper[4946]: I1128 06:54:44.288182 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:54:44 crc kubenswrapper[4946]: E1128 06:54:44.288977 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:54:44.788959272 +0000 UTC m=+139.167024383 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:54:44 crc kubenswrapper[4946]: I1128 06:54:44.403367 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" Nov 28 06:54:44 crc kubenswrapper[4946]: E1128 06:54:44.403722 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:54:44.903711829 +0000 UTC m=+139.281776940 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qhbbg" (UID: "2ba6db31-f148-40c3-b5cc-60b2a8e063e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:54:44 crc kubenswrapper[4946]: I1128 06:54:44.508230 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:54:44 crc kubenswrapper[4946]: E1128 06:54:44.509215 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:54:45.009174519 +0000 UTC m=+139.387239630 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:54:44 crc kubenswrapper[4946]: I1128 06:54:44.576980 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-hwk8p" Nov 28 06:54:44 crc kubenswrapper[4946]: I1128 06:54:44.580370 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xt6vv" podStartSLOduration=119.580350308 podStartE2EDuration="1m59.580350308s" podCreationTimestamp="2025-11-28 06:52:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:54:44.577706661 +0000 UTC m=+138.955771772" watchObservedRunningTime="2025-11-28 06:54:44.580350308 +0000 UTC m=+138.958415419" Nov 28 06:54:44 crc kubenswrapper[4946]: I1128 06:54:44.610139 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" Nov 28 06:54:44 crc kubenswrapper[4946]: E1128 06:54:44.610684 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:54:45.110671569 +0000 UTC m=+139.488736670 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qhbbg" (UID: "2ba6db31-f148-40c3-b5cc-60b2a8e063e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:54:44 crc kubenswrapper[4946]: I1128 06:54:44.713942 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:54:44 crc kubenswrapper[4946]: E1128 06:54:44.714152 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:54:45.214120388 +0000 UTC m=+139.592185499 (durationBeforeRetry 500ms). 
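The nestedpendingoperations records also show the retry cadence: every failed attempt stamps the operation and refuses another run until 500ms later ("No retries permitted until <now+500ms> (durationBeforeRetry 500ms)"), while the desired-state reconciler keeps re-queueing the volume on each pass, so the journal accumulates a fresh failure pair several times a second. A hedged sketch of that gating (the kubelet's real backoff logic is more elaborate; the constant 500ms is simply what this log shows):

    // Hedged sketch of per-operation retry gating in the style of
    // nestedpendingoperations; not the kubelet's actual implementation.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    type pendingOp struct {
        lastErrorTime time.Time
        backoff       time.Duration // this log shows a constant 500ms
    }

    // try refuses to re-run a failed operation until its backoff has expired,
    // mirroring "No retries permitted until ... (durationBeforeRetry 500ms)".
    func (p *pendingOp) try(now time.Time, run func() error) error {
        if notBefore := p.lastErrorTime.Add(p.backoff); now.Before(notBefore) {
            return fmt.Errorf("no retries permitted until %s (durationBeforeRetry %s)",
                notBefore.Format(time.RFC3339Nano), p.backoff)
        }
        if err := run(); err != nil {
            p.lastErrorTime = now
            return err
        }
        return nil
    }

    func main() {
        op := &pendingOp{backoff: 500 * time.Millisecond}
        mount := func() error { return errors.New("driver not registered") }

        // The reconciler wakes faster than the backoff expires, so most
        // passes only log the gating message, as in the journal above.
        for i := 0; i < 4; i++ {
            if err := op.try(time.Now(), mount); err != nil {
                fmt.Println(err)
            }
            time.Sleep(200 * time.Millisecond)
        }
    }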
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:54:44 crc kubenswrapper[4946]: I1128 06:54:44.714263 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg"
Nov 28 06:54:44 crc kubenswrapper[4946]: E1128 06:54:44.714594 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:54:45.2145817 +0000 UTC m=+139.592646811 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qhbbg" (UID: "2ba6db31-f148-40c3-b5cc-60b2a8e063e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:54:44 crc kubenswrapper[4946]: I1128 06:54:44.717237 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-79r5c" podStartSLOduration=118.717199236 podStartE2EDuration="1m58.717199236s" podCreationTimestamp="2025-11-28 06:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:54:44.692683423 +0000 UTC m=+139.070748534" watchObservedRunningTime="2025-11-28 06:54:44.717199236 +0000 UTC m=+139.095264347"
Nov 28 06:54:44 crc kubenswrapper[4946]: I1128 06:54:44.735819 4946 patch_prober.go:28] interesting pod/router-default-5444994796-hwk8p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 06:54:44 crc kubenswrapper[4946]: [-]has-synced failed: reason withheld
Nov 28 06:54:44 crc kubenswrapper[4946]: [+]process-running ok
Nov 28 06:54:44 crc kubenswrapper[4946]: healthz check failed
Nov 28 06:54:44 crc kubenswrapper[4946]: I1128 06:54:44.736453 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hwk8p" podUID="63f790e6-5c8d-475c-bd26-a1e52ffb9ed3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 06:54:44 crc kubenswrapper[4946]: I1128 06:54:44.815218 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 06:54:44 crc kubenswrapper[4946]: E1128 06:54:44.815715 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:54:45.31569728 +0000 UTC m=+139.693762381 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:54:44 crc kubenswrapper[4946]: I1128 06:54:44.930186 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg"
Nov 28 06:54:44 crc kubenswrapper[4946]: E1128 06:54:44.934887 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:54:45.434862959 +0000 UTC m=+139.812928070 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qhbbg" (UID: "2ba6db31-f148-40c3-b5cc-60b2a8e063e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:54:44 crc kubenswrapper[4946]: I1128 06:54:44.968140 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xv7dv" event={"ID":"b8b3ba0a-934b-4bc0-8bc4-58991860b7fe","Type":"ContainerStarted","Data":"2b782d232ae916d9707966e83e52d226534714b8c8b9363d14bfac2705643762"}
Nov 28 06:54:44 crc kubenswrapper[4946]: I1128 06:54:44.973792 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4swtq"]
Nov 28 06:54:44 crc kubenswrapper[4946]: I1128 06:54:44.975308 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-hwk8p" podStartSLOduration=118.975289197 podStartE2EDuration="1m58.975289197s" podCreationTimestamp="2025-11-28 06:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:54:44.967581031 +0000 UTC m=+139.345646142" watchObservedRunningTime="2025-11-28 06:54:44.975289197 +0000 UTC m=+139.353354308"
Nov 28 06:54:45 crc kubenswrapper[4946]: I1128 06:54:45.015069 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-pq4rr"]
Nov 28 06:54:45 crc kubenswrapper[4946]: I1128 06:54:45.043280 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 06:54:45 crc kubenswrapper[4946]: E1128 06:54:45.043653 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:54:45.543635574 +0000 UTC m=+139.921700685 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:54:45 crc kubenswrapper[4946]: I1128 06:54:45.077385 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-r7ztb" podStartSLOduration=120.077367681 podStartE2EDuration="2m0.077367681s" podCreationTimestamp="2025-11-28 06:52:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:54:45.076325875 +0000 UTC m=+139.454391006" watchObservedRunningTime="2025-11-28 06:54:45.077367681 +0000 UTC m=+139.455432782"
Nov 28 06:54:45 crc kubenswrapper[4946]: I1128 06:54:45.086144 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-4bfkx" event={"ID":"b4d03410-0c4e-4aa4-b760-8d8179d791a5","Type":"ContainerStarted","Data":"84a5192120a8991970f513f5bf8088ff5888fc377e94627a9f40c175ad509c3c"}
Nov 28 06:54:45 crc kubenswrapper[4946]: I1128 06:54:45.103433 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-4bfkx"
Nov 28 06:54:45 crc kubenswrapper[4946]: I1128 06:54:45.104356 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-tn22h"]
Nov 28 06:54:45 crc kubenswrapper[4946]: I1128 06:54:45.105502 4946 patch_prober.go:28] interesting pod/downloads-7954f5f757-4bfkx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body=
Nov 28 06:54:45 crc kubenswrapper[4946]: I1128 06:54:45.105547 4946 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4bfkx" podUID="b4d03410-0c4e-4aa4-b760-8d8179d791a5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused"
Nov 28 06:54:45 crc kubenswrapper[4946]: I1128 06:54:45.106270 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-pdk2l"]
Nov 28 06:54:45 crc kubenswrapper[4946]: I1128 06:54:45.111758 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-5q8lj"]
Nov 28 06:54:45 crc kubenswrapper[4946]: I1128 06:54:45.112044 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-dts9q" event={"ID":"0baa90e9-548b-4a20-9d9c-6a673f4ae0d0","Type":"ContainerStarted","Data":"a6c30fb2d2408a801bef36962796cb361d22f8705605dfcccff1ed87d1be9225"}
Nov 28 06:54:45 crc kubenswrapper[4946]: I1128 06:54:45.112067 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5tnsl"]
Nov 28 06:54:45 crc kubenswrapper[4946]: I1128 06:54:45.112078 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6q9fb"]
Nov 28 06:54:45 crc kubenswrapper[4946]: I1128 06:54:45.125972 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hnzrs" event={"ID":"ee803337-53f2-4467-8f6c-602a16bda8e5","Type":"ContainerStarted","Data":"bd1cb93fde9ff34413902d737fe3b18d0dc9d10ba6977b29b2c0bc699a250d96"}
Nov 28 06:54:45 crc kubenswrapper[4946]: I1128 06:54:45.150055 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg"
Nov 28 06:54:45 crc kubenswrapper[4946]: E1128 06:54:45.152721 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:54:45.652708166 +0000 UTC m=+140.030773277 (durationBeforeRetry 500ms).
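The router startup-probe output captured a few records above ([-]backend-http failed: reason withheld, [-]has-synced failed: reason withheld, [+]process-running ok, healthz check failed) is the usual aggregated health-endpoint format: one [+]/[-] line per named check, details withheld from unauthenticated callers, and any failing check turning the response into an HTTP 500, which is exactly what the kubelet prober then records. A small self-contained sketch of an endpoint in that style (illustrative only, not the router's actual health code; check names are taken from the log):

    // Illustrative aggregated healthz endpoint in the style of the probe
    // output above; not the OpenShift router's actual health code.
    package main

    import (
        "fmt"
        "net/http"
    )

    type check struct {
        name string
        run  func() error
    }

    func healthz(checks []check) http.HandlerFunc {
        return func(w http.ResponseWriter, r *http.Request) {
            body, failed := "", false
            for _, c := range checks {
                if err := c.run(); err != nil {
                    failed = true
                    // Details are hidden from unauthenticated probes,
                    // hence "reason withheld" in the kubelet's capture.
                    body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
                } else {
                    body += fmt.Sprintf("[+]%s ok\n", c.name)
                }
            }
            if failed {
                // Any failing check makes the probe see statuscode 500.
                w.WriteHeader(http.StatusInternalServerError)
                fmt.Fprint(w, body+"healthz check failed\n")
                return
            }
            fmt.Fprint(w, body+"ok\n")
        }
    }

    func main() {
        checks := []check{
            {"backend-http", func() error { return fmt.Errorf("backend not ready") }},
            {"has-synced", func() error { return fmt.Errorf("not synced") }},
            {"process-running", func() error { return nil }},
        }
        http.HandleFunc("/healthz", healthz(checks))
        // curl localhost:8080/healthz reproduces the [-]/[+] block shape.
        _ = http.ListenAndServe(":8080", nil)
    }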
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qhbbg" (UID: "2ba6db31-f148-40c3-b5cc-60b2a8e063e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:54:45 crc kubenswrapper[4946]: I1128 06:54:45.207135 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-hxz9b"] Nov 28 06:54:45 crc kubenswrapper[4946]: I1128 06:54:45.207184 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-sc95q"] Nov 28 06:54:45 crc kubenswrapper[4946]: I1128 06:54:45.218766 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-vcgqw" event={"ID":"11ad130b-9da2-40c3-b106-64faae722846","Type":"ContainerStarted","Data":"3a81fd15d0ff4fd2387f2914b0c3c80b16b24bdc88ec93d51e93227d8e589b70"} Nov 28 06:54:45 crc kubenswrapper[4946]: W1128 06:54:45.233070 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ed5ffa1_4361_4c09_b250_607fdf9cd2f4.slice/crio-e0dac440ea4297ae9d5791f7408925303e6deb1438fdf5d01ef6ede5438c0742 WatchSource:0}: Error finding container e0dac440ea4297ae9d5791f7408925303e6deb1438fdf5d01ef6ede5438c0742: Status 404 returned error can't find the container with id e0dac440ea4297ae9d5791f7408925303e6deb1438fdf5d01ef6ede5438c0742 Nov 28 06:54:45 crc kubenswrapper[4946]: I1128 06:54:45.239774 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfm25" event={"ID":"1f9de5c3-e681-4c09-a93e-9f95f03c7f5e","Type":"ContainerStarted","Data":"7f1dd6ed983a306ed9a2f167a582c522138601621b665863c6930985d19002fb"} Nov 28 06:54:45 crc kubenswrapper[4946]: I1128 06:54:45.240955 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfm25" Nov 28 06:54:45 crc kubenswrapper[4946]: W1128 06:54:45.247636 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07a101d3_99de_46c5_88c3_9d053589fd73.slice/crio-d989be6a3bd2b43586bc0039898ca8d3e0ad96862c3f83dd5a94e6ad73c6602f WatchSource:0}: Error finding container d989be6a3bd2b43586bc0039898ca8d3e0ad96862c3f83dd5a94e6ad73c6602f: Status 404 returned error can't find the container with id d989be6a3bd2b43586bc0039898ca8d3e0ad96862c3f83dd5a94e6ad73c6602f Nov 28 06:54:45 crc kubenswrapper[4946]: I1128 06:54:45.251964 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:54:45 crc kubenswrapper[4946]: E1128 06:54:45.253295 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-28 06:54:45.753277633 +0000 UTC m=+140.131342744 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:54:45 crc kubenswrapper[4946]: I1128 06:54:45.266939 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405205-hbxpf" event={"ID":"175d3bf2-c969-401c-9b35-d91a066d0305","Type":"ContainerStarted","Data":"b78eb19a99d23365dc3d4d49fee8d24a2072f0cf0d8023796e23d6ec8a3de818"}
Nov 28 06:54:45 crc kubenswrapper[4946]: I1128 06:54:45.274060 4946 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-kfm25 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body=
Nov 28 06:54:45 crc kubenswrapper[4946]: I1128 06:54:45.274158 4946 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfm25" podUID="1f9de5c3-e681-4c09-a93e-9f95f03c7f5e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused"
Nov 28 06:54:45 crc kubenswrapper[4946]: I1128 06:54:45.327172 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-hnzrs" podStartSLOduration=119.32714265 podStartE2EDuration="1m59.32714265s" podCreationTimestamp="2025-11-28 06:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:54:45.213205884 +0000 UTC m=+139.591270995" watchObservedRunningTime="2025-11-28 06:54:45.32714265 +0000 UTC m=+139.705207761"
Nov 28 06:54:45 crc kubenswrapper[4946]: I1128 06:54:45.327314 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-4bfkx" podStartSLOduration=119.327310614 podStartE2EDuration="1m59.327310614s" podCreationTimestamp="2025-11-28 06:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:54:45.326200536 +0000 UTC m=+139.704265647" watchObservedRunningTime="2025-11-28 06:54:45.327310614 +0000 UTC m=+139.705375725"
Nov 28 06:54:45 crc kubenswrapper[4946]: I1128 06:54:45.355855 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg"
Nov 28 06:54:45 crc kubenswrapper[4946]: E1128 06:54:45.356213 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:54:45.856200329 +0000 UTC m=+140.234265440 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qhbbg" (UID: "2ba6db31-f148-40c3-b5cc-60b2a8e063e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:54:45 crc kubenswrapper[4946]: I1128 06:54:45.359339 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mnst8"]
Nov 28 06:54:45 crc kubenswrapper[4946]: I1128 06:54:45.374037 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-m444h" event={"ID":"830f7e38-bc1c-4897-bcc9-0266da4d74d0","Type":"ContainerStarted","Data":"bc0c4f5cf319218b0ce27c9c45cdb4bfebd835db3a194c418d5c2e9df80ecf72"}
Nov 28 06:54:45 crc kubenswrapper[4946]: I1128 06:54:45.469121 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-r8n92"]
Nov 28 06:54:45 crc kubenswrapper[4946]: I1128 06:54:45.469169 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-cqn4c" event={"ID":"39074957-ff47-4c7f-8c6e-26370a118b2b","Type":"ContainerStarted","Data":"94479647eaad7665bfcbc2fcd83f065f8abf441e13e84346ae0dcf3fef9278f6"}
Nov 28 06:54:45 crc kubenswrapper[4946]: I1128 06:54:45.470031 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 06:54:45 crc kubenswrapper[4946]: E1128 06:54:45.470731 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:54:45.970714149 +0000 UTC m=+140.348779260 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:54:45 crc kubenswrapper[4946]: I1128 06:54:45.487098 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m8vtf" event={"ID":"daef0f99-b6ac-47c5-a246-40a12f61603b","Type":"ContainerStarted","Data":"2f6c59bbdefb796fe5ce1f825ce734f65438de8c6e25ec1b785bc30c9a47a6f0"} Nov 28 06:54:45 crc kubenswrapper[4946]: I1128 06:54:45.504362 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-xv7dv" podStartSLOduration=120.504325534 podStartE2EDuration="2m0.504325534s" podCreationTimestamp="2025-11-28 06:52:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:54:45.428198849 +0000 UTC m=+139.806263950" watchObservedRunningTime="2025-11-28 06:54:45.504325534 +0000 UTC m=+139.882390645" Nov 28 06:54:45 crc kubenswrapper[4946]: I1128 06:54:45.528562 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-qjlhx"] Nov 28 06:54:45 crc kubenswrapper[4946]: I1128 06:54:45.571149 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-pm27x"] Nov 28 06:54:45 crc kubenswrapper[4946]: I1128 06:54:45.571613 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-dts9q" podStartSLOduration=119.571592054 podStartE2EDuration="1m59.571592054s" podCreationTimestamp="2025-11-28 06:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:54:45.491321143 +0000 UTC m=+139.869386254" watchObservedRunningTime="2025-11-28 06:54:45.571592054 +0000 UTC m=+139.949657155" Nov 28 06:54:45 crc kubenswrapper[4946]: I1128 06:54:45.587748 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" Nov 28 06:54:45 crc kubenswrapper[4946]: E1128 06:54:45.589131 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:54:46.089110679 +0000 UTC m=+140.467175790 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qhbbg" (UID: "2ba6db31-f148-40c3-b5cc-60b2a8e063e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:54:45 crc kubenswrapper[4946]: I1128 06:54:45.596870 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5ldr8" event={"ID":"190866a6-13b5-4de4-87c6-306883cb2998","Type":"ContainerStarted","Data":"e086d0956c5ab1e05a29851d629b6de6a07b25d4122e019c3bb668fe8db15ff0"}
Nov 28 06:54:45 crc kubenswrapper[4946]: I1128 06:54:45.614439 4946 patch_prober.go:28] interesting pod/router-default-5444994796-hwk8p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 06:54:45 crc kubenswrapper[4946]: [-]has-synced failed: reason withheld
Nov 28 06:54:45 crc kubenswrapper[4946]: [+]process-running ok
Nov 28 06:54:45 crc kubenswrapper[4946]: healthz check failed
Nov 28 06:54:45 crc kubenswrapper[4946]: I1128 06:54:45.614963 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hwk8p" podUID="63f790e6-5c8d-475c-bd26-a1e52ffb9ed3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 06:54:45 crc kubenswrapper[4946]: W1128 06:54:45.615439 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98c2fa9b_f899_4167_ab43_6d97e7b1589e.slice/crio-318984acfabbcf8eaabddf93b423d5b3abed60f9a6fae8d59a637a2ea6e71a96 WatchSource:0}: Error finding container 318984acfabbcf8eaabddf93b423d5b3abed60f9a6fae8d59a637a2ea6e71a96: Status 404 returned error can't find the container with id 318984acfabbcf8eaabddf93b423d5b3abed60f9a6fae8d59a637a2ea6e71a96
Nov 28 06:54:45 crc kubenswrapper[4946]: I1128 06:54:45.625325 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sm5f9" event={"ID":"f98176b6-3525-4858-b747-06ca0fdb472e","Type":"ContainerStarted","Data":"5e2b559e86a7a56ca46285eec182b7b783148ff899c17444532d7d1bf66bc99d"}
Nov 28 06:54:45 crc kubenswrapper[4946]: I1128 06:54:45.626560 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sm5f9"
Nov 28 06:54:45 crc kubenswrapper[4946]: I1128 06:54:45.633121 4946 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-sm5f9 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" start-of-body=
Nov 28 06:54:45 crc kubenswrapper[4946]: I1128 06:54:45.633177 4946 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sm5f9" podUID="f98176b6-3525-4858-b747-06ca0fdb472e" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused"
Nov 28 06:54:45 crc kubenswrapper[4946]: I1128 06:54:45.649539 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfm25" podStartSLOduration=119.649502134 podStartE2EDuration="1m59.649502134s" podCreationTimestamp="2025-11-28 06:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:54:45.646073777 +0000 UTC m=+140.024138888" watchObservedRunningTime="2025-11-28 06:54:45.649502134 +0000 UTC m=+140.027567235"
Nov 28 06:54:45 crc kubenswrapper[4946]: I1128 06:54:45.689780 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 06:54:45 crc kubenswrapper[4946]: E1128 06:54:45.689942 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:54:46.189923742 +0000 UTC m=+140.567988853 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:54:45 crc kubenswrapper[4946]: I1128 06:54:45.690207 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg"
Nov 28 06:54:45 crc kubenswrapper[4946]: E1128 06:54:45.696992 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:54:46.196974981 +0000 UTC m=+140.575040092 (durationBeforeRetry 500ms).
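The pod_startup_latency_tracker record above for route-controller-manager-6576b87f9c-kfm25 can be checked by hand: both pull timestamps are the zero value (0001-01-01), so podStartSLOduration is simply watchObservedRunningTime minus podCreationTimestamp. A quick verification in Go, with the timestamps copied from that record:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps copied from the pod_startup_latency_tracker record above.
        created, err := time.Parse("2006-01-02 15:04:05 -0700 MST", "2025-11-28 06:52:46 +0000 UTC")
        if err != nil {
            panic(err)
        }
        observed, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", "2025-11-28 06:54:45.649502134 +0000 UTC")
        if err != nil {
            panic(err)
        }
        // Prints 1m59.649502134s, matching the logged podStartE2EDuration.
        fmt.Println(observed.Sub(created))
    }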
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qhbbg" (UID: "2ba6db31-f148-40c3-b5cc-60b2a8e063e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:54:45 crc kubenswrapper[4946]: I1128 06:54:45.702211 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-vcgqw" podStartSLOduration=6.7021906829999995 podStartE2EDuration="6.702190683s" podCreationTimestamp="2025-11-28 06:54:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:54:45.689129741 +0000 UTC m=+140.067194852" watchObservedRunningTime="2025-11-28 06:54:45.702190683 +0000 UTC m=+140.080255794" Nov 28 06:54:45 crc kubenswrapper[4946]: I1128 06:54:45.708021 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rzxwh" event={"ID":"df6a8d9e-bb6c-42e9-918e-60cbb78f55ac","Type":"ContainerStarted","Data":"b268339e4828d198ee65a9d242fc67299602ebda27b622853c3a7e7143609cd4"} Nov 28 06:54:45 crc kubenswrapper[4946]: I1128 06:54:45.750734 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sm5f9" podStartSLOduration=119.750711087 podStartE2EDuration="1m59.750711087s" podCreationTimestamp="2025-11-28 06:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:54:45.739972544 +0000 UTC m=+140.118037655" watchObservedRunningTime="2025-11-28 06:54:45.750711087 +0000 UTC m=+140.128776198" Nov 28 06:54:45 crc kubenswrapper[4946]: I1128 06:54:45.791518 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:54:45 crc kubenswrapper[4946]: E1128 06:54:45.792394 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:54:46.292374556 +0000 UTC m=+140.670439667 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:54:45 crc kubenswrapper[4946]: I1128 06:54:45.818228 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-44jkz"] Nov 28 06:54:45 crc kubenswrapper[4946]: I1128 06:54:45.825109 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-qvr67"] Nov 28 06:54:45 crc kubenswrapper[4946]: I1128 06:54:45.893868 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" Nov 28 06:54:45 crc kubenswrapper[4946]: E1128 06:54:45.894650 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:54:46.394636485 +0000 UTC m=+140.772701596 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qhbbg" (UID: "2ba6db31-f148-40c3-b5cc-60b2a8e063e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:54:45 crc kubenswrapper[4946]: I1128 06:54:45.995998 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:54:46 crc kubenswrapper[4946]: E1128 06:54:46.007834 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:54:46.507807612 +0000 UTC m=+140.885872733 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:54:46 crc kubenswrapper[4946]: I1128 06:54:46.012641 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6k9gx"]
Nov 28 06:54:46 crc kubenswrapper[4946]: I1128 06:54:46.109299 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg"
Nov 28 06:54:46 crc kubenswrapper[4946]: E1128 06:54:46.110049 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:54:46.61003684 +0000 UTC m=+140.988101951 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qhbbg" (UID: "2ba6db31-f148-40c3-b5cc-60b2a8e063e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:54:46 crc kubenswrapper[4946]: I1128 06:54:46.215545 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z4vpd"]
Nov 28 06:54:46 crc kubenswrapper[4946]: I1128 06:54:46.216131 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 06:54:46 crc kubenswrapper[4946]: E1128 06:54:46.216394 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:54:46.716378383 +0000 UTC m=+141.094443494 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:54:46 crc kubenswrapper[4946]: I1128 06:54:46.328519 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg"
Nov 28 06:54:46 crc kubenswrapper[4946]: E1128 06:54:46.328858 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:54:46.828845772 +0000 UTC m=+141.206910883 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qhbbg" (UID: "2ba6db31-f148-40c3-b5cc-60b2a8e063e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:54:46 crc kubenswrapper[4946]: I1128 06:54:46.420650 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-xv7dv"
Nov 28 06:54:46 crc kubenswrapper[4946]: I1128 06:54:46.421002 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-xv7dv"
Nov 28 06:54:46 crc kubenswrapper[4946]: I1128 06:54:46.430732 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 06:54:46 crc kubenswrapper[4946]: E1128 06:54:46.431150 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:54:46.931117641 +0000 UTC m=+141.309182752 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:54:46 crc kubenswrapper[4946]: I1128 06:54:46.532022 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg"
Nov 28 06:54:46 crc kubenswrapper[4946]: E1128 06:54:46.532301 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:54:47.032289863 +0000 UTC m=+141.410354974 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qhbbg" (UID: "2ba6db31-f148-40c3-b5cc-60b2a8e063e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:54:46 crc kubenswrapper[4946]: I1128 06:54:46.558347 4946 patch_prober.go:28] interesting pod/apiserver-76f77b778f-xv7dv container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Nov 28 06:54:46 crc kubenswrapper[4946]: [+]log ok
Nov 28 06:54:46 crc kubenswrapper[4946]: [+]etcd ok
Nov 28 06:54:46 crc kubenswrapper[4946]: [+]poststarthook/start-apiserver-admission-initializer ok
Nov 28 06:54:46 crc kubenswrapper[4946]: [+]poststarthook/generic-apiserver-start-informers ok
Nov 28 06:54:46 crc kubenswrapper[4946]: [+]poststarthook/max-in-flight-filter ok
Nov 28 06:54:46 crc kubenswrapper[4946]: [+]poststarthook/storage-object-count-tracker-hook ok
Nov 28 06:54:46 crc kubenswrapper[4946]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Nov 28 06:54:46 crc kubenswrapper[4946]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Nov 28 06:54:46 crc kubenswrapper[4946]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Nov 28 06:54:46 crc kubenswrapper[4946]: [+]poststarthook/project.openshift.io-projectcache ok
Nov 28 06:54:46 crc kubenswrapper[4946]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Nov 28 06:54:46 crc kubenswrapper[4946]: [+]poststarthook/openshift.io-startinformers ok
Nov 28 06:54:46 crc kubenswrapper[4946]: [+]poststarthook/openshift.io-restmapperupdater ok
Nov 28 06:54:46 crc kubenswrapper[4946]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Nov 28 06:54:46 crc kubenswrapper[4946]: livez check failed
Nov 28 06:54:46 crc kubenswrapper[4946]: I1128 06:54:46.558411 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-xv7dv" podUID="b8b3ba0a-934b-4bc0-8bc4-58991860b7fe" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 06:54:46 crc kubenswrapper[4946]: I1128 06:54:46.591803 4946 patch_prober.go:28] interesting pod/router-default-5444994796-hwk8p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 06:54:46 crc kubenswrapper[4946]: [-]has-synced failed: reason withheld
Nov 28 06:54:46 crc kubenswrapper[4946]: [+]process-running ok
Nov 28 06:54:46 crc kubenswrapper[4946]: healthz check failed
Nov 28 06:54:46 crc kubenswrapper[4946]: I1128 06:54:46.591880 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hwk8p" podUID="63f790e6-5c8d-475c-bd26-a1e52ffb9ed3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 06:54:46 crc kubenswrapper[4946]: I1128 06:54:46.633059 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 06:54:46 crc kubenswrapper[4946]: E1128 06:54:46.633755 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:54:47.133732832 +0000 UTC m=+141.511797943 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:54:46 crc kubenswrapper[4946]: I1128 06:54:46.735183 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg"
Nov 28 06:54:46 crc kubenswrapper[4946]: E1128 06:54:46.735629 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:54:47.235614461 +0000 UTC m=+141.613679562 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qhbbg" (UID: "2ba6db31-f148-40c3-b5cc-60b2a8e063e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:54:46 crc kubenswrapper[4946]: I1128 06:54:46.757563 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-cqn4c" event={"ID":"39074957-ff47-4c7f-8c6e-26370a118b2b","Type":"ContainerStarted","Data":"e9301b3ae4c7113d33c1d82c4f41a3e06eeb2bfe53706661ac3ee02f653a5c98"}
Nov 28 06:54:46 crc kubenswrapper[4946]: I1128 06:54:46.759298 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-cqn4c"
Nov 28 06:54:46 crc kubenswrapper[4946]: I1128 06:54:46.762059 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-44jkz" event={"ID":"8df2dcaf-0af5-4827-9135-e5944a2b12dc","Type":"ContainerStarted","Data":"8d1517ab286cc86e7a8a8a5f6de53768d1ad116d2d87b0e9384b83abec29e47e"}
Nov 28 06:54:46 crc kubenswrapper[4946]: I1128 06:54:46.762709 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mnst8" event={"ID":"7092cadd-becf-4ee8-bbb5-0aadd638ad46","Type":"ContainerStarted","Data":"bf531a39bd0aee0c277524c422087eb5b82f5ca146b36ab8891bc1658158f53f"}
Nov 28 06:54:46 crc kubenswrapper[4946]: I1128 06:54:46.764397 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5tnsl" event={"ID":"6a09131a-f0b1-48e9-9b10-8e75e3344d3c","Type":"ContainerStarted","Data":"44d9905826ffa9840d2817bd1c403fc5f0cc7071e10aacd9918f5a75d2fa0b2b"}
Nov 28 06:54:46 crc kubenswrapper[4946]: I1128 06:54:46.764488 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5tnsl" event={"ID":"6a09131a-f0b1-48e9-9b10-8e75e3344d3c","Type":"ContainerStarted","Data":"6969007328b49af79b3870f6d6b0f590a8f58cecc52f8b7ff104825bf6e191ea"}
Nov 28 06:54:46 crc kubenswrapper[4946]: I1128 06:54:46.777296 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-sc95q" event={"ID":"98c2fa9b-f899-4167-ab43-6d97e7b1589e","Type":"ContainerStarted","Data":"318984acfabbcf8eaabddf93b423d5b3abed60f9a6fae8d59a637a2ea6e71a96"}
Nov 28 06:54:46 crc kubenswrapper[4946]: I1128 06:54:46.795990 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-cqn4c" podStartSLOduration=121.795973076 podStartE2EDuration="2m1.795973076s" podCreationTimestamp="2025-11-28 06:52:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:54:46.795408081 +0000 UTC m=+141.173473192" watchObservedRunningTime="2025-11-28 06:54:46.795973076 +0000 UTC m=+141.174038187"
Nov 28 06:54:46 crc kubenswrapper[4946]: I1128 06:54:46.825812 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pdk2l" event={"ID":"2cb23522-48f9-442d-8321-90df7290886d","Type":"ContainerStarted","Data":"99115f2641ebb32807c220cc3f67bb9a8d18f474fc14d67bd8033d8788c07f9f"}
Nov 28 06:54:46 crc kubenswrapper[4946]: I1128 06:54:46.825854 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pdk2l" event={"ID":"2cb23522-48f9-442d-8321-90df7290886d","Type":"ContainerStarted","Data":"83f862be16dabdda4396ffa6b648e4e80d667f47cc290c62fd10de51f1faf111"}
Nov 28 06:54:46 crc kubenswrapper[4946]: I1128 06:54:46.845738 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 06:54:46 crc kubenswrapper[4946]: I1128 06:54:46.845819 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5tnsl" podStartSLOduration=120.845800052 podStartE2EDuration="2m0.845800052s" podCreationTimestamp="2025-11-28 06:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:54:46.844806237 +0000 UTC m=+141.222871348" watchObservedRunningTime="2025-11-28 06:54:46.845800052 +0000 UTC m=+141.223865163"
Nov 28 06:54:46 crc kubenswrapper[4946]: E1128 06:54:46.846916 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:54:47.34689925 +0000 UTC m=+141.724964361 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:54:46 crc kubenswrapper[4946]: I1128 06:54:46.873892 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qvr67" event={"ID":"7ef7cd67-8788-4c1b-9f7d-e66f4fa9737e","Type":"ContainerStarted","Data":"a2cfc706cb510e6c26870531190fc02f57018759cc72bdc3f87c4f7f0a6b2ab4"}
Nov 28 06:54:46 crc kubenswrapper[4946]: I1128 06:54:46.908453 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-5q8lj" event={"ID":"6ed5ffa1-4361-4c09-b250-607fdf9cd2f4","Type":"ContainerStarted","Data":"e6d1552dfe04c85b7d6003b8d5036bfb83167d7d767a51cf51f7e7fecb338283"}
Nov 28 06:54:46 crc kubenswrapper[4946]: I1128 06:54:46.908598 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-5q8lj" event={"ID":"6ed5ffa1-4361-4c09-b250-607fdf9cd2f4","Type":"ContainerStarted","Data":"e0dac440ea4297ae9d5791f7408925303e6deb1438fdf5d01ef6ede5438c0742"}
Nov 28 06:54:46 crc kubenswrapper[4946]: I1128 06:54:46.950617 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg"
Nov 28 06:54:46 crc kubenswrapper[4946]: E1128 06:54:46.951140 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:54:47.451125179 +0000 UTC m=+141.829190290 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qhbbg" (UID: "2ba6db31-f148-40c3-b5cc-60b2a8e063e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:54:46 crc kubenswrapper[4946]: I1128 06:54:46.963808 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-5q8lj" podStartSLOduration=6.963783111 podStartE2EDuration="6.963783111s" podCreationTimestamp="2025-11-28 06:54:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:54:46.962386015 +0000 UTC m=+141.340451126" watchObservedRunningTime="2025-11-28 06:54:46.963783111 +0000 UTC m=+141.341848212"
Nov 28 06:54:46 crc kubenswrapper[4946]: I1128 06:54:46.987102 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-qjlhx" event={"ID":"7b7188c5-eb8b-46be-b3db-4f0e058c3110","Type":"ContainerStarted","Data":"94df5cfd7904e1fce2c7322c9fafe21d0aea22bb53085250a409a9b02926c025"}
Nov 28 06:54:47 crc kubenswrapper[4946]: I1128 06:54:47.039653 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-pm27x" event={"ID":"386d3c8a-a2d4-46e1-a7cb-c456eec4860b","Type":"ContainerStarted","Data":"2d84aca5aab5bb6384e197f6834186e60f86a0d6be4cba97668f41483f272443"}
Nov 28 06:54:47 crc kubenswrapper[4946]: I1128 06:54:47.057209 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 06:54:47 crc kubenswrapper[4946]: E1128 06:54:47.057862 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:54:47.557831911 +0000 UTC m=+141.935897022 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:54:47 crc kubenswrapper[4946]: I1128 06:54:47.120865 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6q9fb" event={"ID":"07a101d3-99de-46c5-88c3-9d053589fd73","Type":"ContainerStarted","Data":"d989be6a3bd2b43586bc0039898ca8d3e0ad96862c3f83dd5a94e6ad73c6602f"}
Nov 28 06:54:47 crc kubenswrapper[4946]: I1128 06:54:47.164455 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg"
Nov 28 06:54:47 crc kubenswrapper[4946]: E1128 06:54:47.165024 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:54:47.665001306 +0000 UTC m=+142.043066417 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qhbbg" (UID: "2ba6db31-f148-40c3-b5cc-60b2a8e063e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:54:47 crc kubenswrapper[4946]: I1128 06:54:47.177928 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405205-hbxpf" event={"ID":"175d3bf2-c969-401c-9b35-d91a066d0305","Type":"ContainerStarted","Data":"4ffa1b14e943c4fc1748dd0166c9ec7e3837202da277f99b0b4f6853ba87b4d9"}
Nov 28 06:54:47 crc kubenswrapper[4946]: I1128 06:54:47.208428 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z4vpd" event={"ID":"94ee27f9-59ea-44af-bbd3-e586f4bb7bb1","Type":"ContainerStarted","Data":"9601ee05a4da69f45595e25d1acfb0b291f8cbc0f350a792be93d97736b71eeb"}
Nov 28 06:54:47 crc kubenswrapper[4946]: I1128 06:54:47.230401 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29405205-hbxpf" podStartSLOduration=122.230375627 podStartE2EDuration="2m2.230375627s" podCreationTimestamp="2025-11-28 06:52:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:54:47.229197607 +0000 UTC m=+141.607262718" watchObservedRunningTime="2025-11-28 06:54:47.230375627 +0000 UTC m=+141.608440738"
Nov 28 06:54:47 crc kubenswrapper[4946]: I1128 06:54:47.237189 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4qwzn" event={"ID":"475dd273-84f1-46de-9c16-3eaeca9b7d1c","Type":"ContainerStarted","Data":"10bd67af1d0baa8523bcfe70c29f0b0dd17a1ebdfd563db9806e03b44a17caeb"}
Nov 28 06:54:47 crc kubenswrapper[4946]: I1128 06:54:47.237248 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4qwzn" event={"ID":"475dd273-84f1-46de-9c16-3eaeca9b7d1c","Type":"ContainerStarted","Data":"dc674bc6280dd3ad543b3418923594a6bf8d15043c83be6c4f934b78bb1f8d50"}
Nov 28 06:54:47 crc kubenswrapper[4946]: I1128 06:54:47.264928 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tn22h" event={"ID":"52939a0c-29ab-4cbb-bd53-7a330884139c","Type":"ContainerStarted","Data":"1cb2b7955994ff6dc87dba8b13b5e624aa5637e3b32bfc3e77ff117f7f9dbaab"}
Nov 28 06:54:47 crc kubenswrapper[4946]: I1128 06:54:47.264988 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tn22h" event={"ID":"52939a0c-29ab-4cbb-bd53-7a330884139c","Type":"ContainerStarted","Data":"2f287d578621864d2f0d97ca4c81cf0cc026cef6bfdd0b4ee818ef0115bc70b8"}
Nov 28 06:54:47 crc kubenswrapper[4946]: I1128 06:54:47.266641 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 06:54:47 crc kubenswrapper[4946]: I1128 06:54:47.266681 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4qwzn" podStartSLOduration=121.26666321 podStartE2EDuration="2m1.26666321s" podCreationTimestamp="2025-11-28 06:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:54:47.265990502 +0000 UTC m=+141.644055613" watchObservedRunningTime="2025-11-28 06:54:47.26666321 +0000 UTC m=+141.644728321"
Nov 28 06:54:47 crc kubenswrapper[4946]: E1128 06:54:47.266970 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:54:47.766955457 +0000 UTC m=+142.145020568 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:54:47 crc kubenswrapper[4946]: I1128 06:54:47.300627 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tn22h" podStartSLOduration=121.300601942 podStartE2EDuration="2m1.300601942s" podCreationTimestamp="2025-11-28 06:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:54:47.292940308 +0000 UTC m=+141.671005419" watchObservedRunningTime="2025-11-28 06:54:47.300601942 +0000 UTC m=+141.678667053"
Nov 28 06:54:47 crc kubenswrapper[4946]: I1128 06:54:47.300894 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rzxwh" event={"ID":"df6a8d9e-bb6c-42e9-918e-60cbb78f55ac","Type":"ContainerStarted","Data":"a45cf908f0fd0eb70793ef3e4ffedc14d7459da4bd26474600832cae48bb8957"}
Nov 28 06:54:47 crc kubenswrapper[4946]: I1128 06:54:47.336719 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6k9gx" event={"ID":"85300130-dd7e-4e81-b37a-7dc93d20ab33","Type":"ContainerStarted","Data":"db97b602aa9b5d5e120cc71ac8f223ddc265cc7c072143733052ef91a922d6f7"}
Nov 28 06:54:47 crc kubenswrapper[4946]: I1128 06:54:47.375484 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg"
Nov 28 06:54:47 crc kubenswrapper[4946]: I1128 06:54:47.376762 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-r8n92" event={"ID":"4d68f56c-7e3b-4207-92d1-95800df70d0e","Type":"ContainerStarted","Data":"a45ed33b355a234abf034e7f81923eb0f47970b3ca16ce95402b45bd81cbee02"}
Nov 28 06:54:47 crc kubenswrapper[4946]: E1128 06:54:47.377770 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:54:47.877757453 +0000 UTC m=+142.255822564 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qhbbg" (UID: "2ba6db31-f148-40c3-b5cc-60b2a8e063e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:54:47 crc kubenswrapper[4946]: I1128 06:54:47.422006 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pw98g" event={"ID":"677259a2-4efb-4f49-ad8e-57357402c59a","Type":"ContainerStarted","Data":"6521d58fb4165bf63f9d7f026189e17dec32c2999db691d3b5b7d8b5e992be55"}
Nov 28 06:54:47 crc kubenswrapper[4946]: I1128 06:54:47.468911 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sm5f9" event={"ID":"f98176b6-3525-4858-b747-06ca0fdb472e","Type":"ContainerStarted","Data":"9f7a8206500fdb71c17b1842f748bfbf1ac39dae1705ae42032e4e958185dad8"}
Nov 28 06:54:47 crc kubenswrapper[4946]: I1128 06:54:47.477093 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 06:54:47 crc kubenswrapper[4946]: E1128 06:54:47.477739 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:54:47.977718334 +0000 UTC m=+142.355783445 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:54:47 crc kubenswrapper[4946]: I1128 06:54:47.483959 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-cqn4c"
Nov 28 06:54:47 crc kubenswrapper[4946]: I1128 06:54:47.502928 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-js7vs" event={"ID":"d8c9e4f9-d51d-47d9-8228-c9c58873dbe3","Type":"ContainerStarted","Data":"e4131662dd2679ac360e03f650a5e9de467bfac888509c2f77279606e96b51e9"}
Nov 28 06:54:47 crc kubenswrapper[4946]: I1128 06:54:47.504669 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-js7vs"
Nov 28 06:54:47 crc kubenswrapper[4946]: I1128 06:54:47.509121 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pw98g" podStartSLOduration=121.509096812 podStartE2EDuration="2m1.509096812s" podCreationTimestamp="2025-11-28 06:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:54:47.470922551 +0000 UTC m=+141.848987662" watchObservedRunningTime="2025-11-28 06:54:47.509096812 +0000 UTC m=+141.887161923"
Nov 28 06:54:47 crc kubenswrapper[4946]: I1128 06:54:47.529589 4946 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-js7vs container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body=
Nov 28 06:54:47 crc kubenswrapper[4946]: I1128 06:54:47.529660 4946 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-js7vs" podUID="d8c9e4f9-d51d-47d9-8228-c9c58873dbe3" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused"
Nov 28 06:54:47 crc kubenswrapper[4946]: I1128 06:54:47.529918 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sm5f9"
Nov 28 06:54:47 crc kubenswrapper[4946]: I1128 06:54:47.548853 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hxz9b" event={"ID":"3b5b4fe0-1e4f-435a-aff4-3d388fb616cc","Type":"ContainerStarted","Data":"23e150ec3077f4d183fe1966eb8548bbee7fab21a0e28454e05940d50084da02"}
Nov 28 06:54:47 crc kubenswrapper[4946]: I1128 06:54:47.583304 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-js7vs" podStartSLOduration=121.583274517 podStartE2EDuration="2m1.583274517s" podCreationTimestamp="2025-11-28 06:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:54:47.574904595 +0000 UTC m=+141.952969706" watchObservedRunningTime="2025-11-28 06:54:47.583274517 +0000 UTC m=+141.961339628"
Nov 28 06:54:47 crc kubenswrapper[4946]: I1128 06:54:47.585101 4946 patch_prober.go:28] interesting pod/router-default-5444994796-hwk8p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 06:54:47 crc kubenswrapper[4946]: [-]has-synced failed: reason withheld
Nov 28 06:54:47 crc kubenswrapper[4946]: [+]process-running ok
Nov 28 06:54:47 crc kubenswrapper[4946]: healthz check failed
Nov 28 06:54:47 crc kubenswrapper[4946]: I1128 06:54:47.585181 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hwk8p" podUID="63f790e6-5c8d-475c-bd26-a1e52ffb9ed3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 06:54:47 crc kubenswrapper[4946]: I1128 06:54:47.588141 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-pq4rr" event={"ID":"3c2424a4-d3a4-4acc-a683-320abd4467ff","Type":"ContainerStarted","Data":"b22f68d535dc799a40295496d24f264855529af0626b683c0749539d02d35fa7"}
Nov 28 06:54:47 crc kubenswrapper[4946]: I1128 06:54:47.590060 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg"
Nov 28 06:54:47 crc kubenswrapper[4946]: E1128 06:54:47.594067 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:54:48.094043861 +0000 UTC m=+142.472108972 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qhbbg" (UID: "2ba6db31-f148-40c3-b5cc-60b2a8e063e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:54:47 crc kubenswrapper[4946]: I1128 06:54:47.597225 4946 generic.go:334] "Generic (PLEG): container finished" podID="daef0f99-b6ac-47c5-a246-40a12f61603b" containerID="370f2abd76674fd85d3338180133a07590b0521f51b9a2cb0596ee0eb58a509e" exitCode=0
Nov 28 06:54:47 crc kubenswrapper[4946]: I1128 06:54:47.597302 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m8vtf" event={"ID":"daef0f99-b6ac-47c5-a246-40a12f61603b","Type":"ContainerDied","Data":"370f2abd76674fd85d3338180133a07590b0521f51b9a2cb0596ee0eb58a509e"}
Nov 28 06:54:47 crc kubenswrapper[4946]: I1128 06:54:47.598204 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m8vtf"
Nov 28 06:54:47 crc kubenswrapper[4946]: I1128 06:54:47.627960 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4swtq" event={"ID":"c6a06ceb-1625-421c-864d-4f24d2fd00d3","Type":"ContainerStarted","Data":"122290a3e7af2397b6a96ccb9391b605aee2c4c34c4141e45c6ec53d363d1206"}
Nov 28 06:54:47 crc kubenswrapper[4946]: I1128 06:54:47.628756 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4swtq"
Nov 28 06:54:47 crc kubenswrapper[4946]: I1128 06:54:47.662387 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5ldr8" event={"ID":"190866a6-13b5-4de4-87c6-306883cb2998","Type":"ContainerStarted","Data":"732960c7039c5ce25f0c18855506f5c8bfa5938c79a2b4e3086e86cc55fafd5c"}
Nov 28 06:54:47 crc kubenswrapper[4946]: I1128 06:54:47.678313 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-m444h" event={"ID":"830f7e38-bc1c-4897-bcc9-0266da4d74d0","Type":"ContainerStarted","Data":"e0bb8ecd61a78d5e064a9fd14b7b65468fa9ac78827e7c6ce51b3acc43456626"}
Nov 28 06:54:47 crc kubenswrapper[4946]: I1128 06:54:47.680006 4946 patch_prober.go:28] interesting pod/downloads-7954f5f757-4bfkx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body=
Nov 28 06:54:47 crc kubenswrapper[4946]: I1128 06:54:47.680078 4946 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4bfkx" podUID="b4d03410-0c4e-4aa4-b760-8d8179d791a5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused"
Nov 28 06:54:47 crc kubenswrapper[4946]: I1128 06:54:47.697568 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 06:54:47 crc kubenswrapper[4946]: E1128 06:54:47.699209 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:54:48.199187824 +0000 UTC m=+142.577252925 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:54:47 crc kubenswrapper[4946]: I1128 06:54:47.720733 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m8vtf" podStartSLOduration=122.72070271 podStartE2EDuration="2m2.72070271s" podCreationTimestamp="2025-11-28 06:52:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:54:47.675675116 +0000 UTC m=+142.053740227" watchObservedRunningTime="2025-11-28 06:54:47.72070271 +0000 UTC m=+142.098767821"
Nov 28 06:54:47 crc kubenswrapper[4946]: I1128 06:54:47.722388 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-pq4rr" podStartSLOduration=121.722380103 podStartE2EDuration="2m1.722380103s" podCreationTimestamp="2025-11-28 06:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:54:47.719189462 +0000 UTC m=+142.097254573" watchObservedRunningTime="2025-11-28 06:54:47.722380103 +0000 UTC m=+142.100445214"
Nov 28 06:54:47 crc kubenswrapper[4946]: I1128 06:54:47.722595 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfm25"
Nov 28 06:54:47 crc kubenswrapper[4946]: I1128 06:54:47.747932 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4swtq"
Nov 28 06:54:47 crc kubenswrapper[4946]: I1128 06:54:47.797296 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4swtq" podStartSLOduration=121.797269717 podStartE2EDuration="2m1.797269717s" podCreationTimestamp="2025-11-28 06:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:54:47.790188307 +0000 UTC m=+142.168253418" watchObservedRunningTime="2025-11-28 06:54:47.797269717 +0000 UTC m=+142.175334818"
Nov 28 06:54:47 crc kubenswrapper[4946]: I1128 06:54:47.804071 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg"
Nov 28 06:54:47 crc kubenswrapper[4946]: E1128 06:54:47.805539 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:54:48.305520006 +0000 UTC m=+142.683585117 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qhbbg" (UID: "2ba6db31-f148-40c3-b5cc-60b2a8e063e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:54:47 crc kubenswrapper[4946]: I1128 06:54:47.850979 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5ldr8" podStartSLOduration=121.847950045 podStartE2EDuration="2m1.847950045s" podCreationTimestamp="2025-11-28 06:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:54:47.826243793 +0000 UTC m=+142.204308904" watchObservedRunningTime="2025-11-28 06:54:47.847950045 +0000 UTC m=+142.226015156"
Nov 28 06:54:47 crc kubenswrapper[4946]: I1128 06:54:47.872935 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-m444h" podStartSLOduration=122.872911939 podStartE2EDuration="2m2.872911939s" podCreationTimestamp="2025-11-28 06:52:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:54:47.868777634 +0000 UTC m=+142.246842745" watchObservedRunningTime="2025-11-28 06:54:47.872911939 +0000 UTC m=+142.250977070"
Nov 28 06:54:47 crc kubenswrapper[4946]: I1128 06:54:47.904825 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 06:54:47 crc kubenswrapper[4946]: E1128 06:54:47.905342 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:54:48.405323793 +0000 UTC m=+142.783388904 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:54:48 crc kubenswrapper[4946]: I1128 06:54:48.005946 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg"
Nov 28 06:54:48 crc kubenswrapper[4946]: E1128 06:54:48.006399 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:54:48.506380002 +0000 UTC m=+142.884445113 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qhbbg" (UID: "2ba6db31-f148-40c3-b5cc-60b2a8e063e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:54:48 crc kubenswrapper[4946]: I1128 06:54:48.107395 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 06:54:48 crc kubenswrapper[4946]: E1128 06:54:48.108365 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:54:48.608344753 +0000 UTC m=+142.986409864 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:54:48 crc kubenswrapper[4946]: I1128 06:54:48.210765 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg"
Nov 28 06:54:48 crc kubenswrapper[4946]: E1128 06:54:48.211289 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:54:48.711274029 +0000 UTC m=+143.089339140 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qhbbg" (UID: "2ba6db31-f148-40c3-b5cc-60b2a8e063e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:54:48 crc kubenswrapper[4946]: I1128 06:54:48.311895 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 06:54:48 crc kubenswrapper[4946]: E1128 06:54:48.312078 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:54:48.81204564 +0000 UTC m=+143.190110751 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:54:48 crc kubenswrapper[4946]: I1128 06:54:48.312603 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg"
Nov 28 06:54:48 crc kubenswrapper[4946]: E1128 06:54:48.313251 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:54:48.81322461 +0000 UTC m=+143.191289871 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qhbbg" (UID: "2ba6db31-f148-40c3-b5cc-60b2a8e063e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:54:48 crc kubenswrapper[4946]: I1128 06:54:48.413747 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 06:54:48 crc kubenswrapper[4946]: E1128 06:54:48.413957 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:54:48.91393298 +0000 UTC m=+143.291998261 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:54:48 crc kubenswrapper[4946]: I1128 06:54:48.515525 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg"
Nov 28 06:54:48 crc kubenswrapper[4946]: E1128 06:54:48.516077 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:54:49.016051086 +0000 UTC m=+143.394116197 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qhbbg" (UID: "2ba6db31-f148-40c3-b5cc-60b2a8e063e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:54:48 crc kubenswrapper[4946]: I1128 06:54:48.586051 4946 patch_prober.go:28] interesting pod/router-default-5444994796-hwk8p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 06:54:48 crc kubenswrapper[4946]: [-]has-synced failed: reason withheld
Nov 28 06:54:48 crc kubenswrapper[4946]: [+]process-running ok
Nov 28 06:54:48 crc kubenswrapper[4946]: healthz check failed
Nov 28 06:54:48 crc kubenswrapper[4946]: I1128 06:54:48.586135 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hwk8p" podUID="63f790e6-5c8d-475c-bd26-a1e52ffb9ed3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 06:54:48 crc kubenswrapper[4946]: I1128 06:54:48.616535 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 06:54:48 crc kubenswrapper[4946]: E1128 06:54:48.616730 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:54:49.116696314 +0000 UTC m=+143.494761425 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:54:48 crc kubenswrapper[4946]: I1128 06:54:48.616810 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg"
Nov 28 06:54:48 crc kubenswrapper[4946]: E1128 06:54:48.617319 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:54:49.117302319 +0000 UTC m=+143.495367430 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qhbbg" (UID: "2ba6db31-f148-40c3-b5cc-60b2a8e063e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 06:54:48 crc kubenswrapper[4946]: I1128 06:54:48.686283 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m8vtf" event={"ID":"daef0f99-b6ac-47c5-a246-40a12f61603b","Type":"ContainerStarted","Data":"6801d5f18d7cf66949b0e353904e3be61404449a7cb5bf119175912d843463cb"}
Nov 28 06:54:48 crc kubenswrapper[4946]: I1128 06:54:48.688241 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-pq4rr" event={"ID":"3c2424a4-d3a4-4acc-a683-320abd4467ff","Type":"ContainerStarted","Data":"42a4b13ca6c246c39588be71be5f04c07211b053679893e5e526825a4c0e9050"}
Nov 28 06:54:48 crc kubenswrapper[4946]: I1128 06:54:48.690192 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6k9gx" event={"ID":"85300130-dd7e-4e81-b37a-7dc93d20ab33","Type":"ContainerStarted","Data":"8b44e43a395a8d120d969c60a2761a9944408a4526417735e283da01f1268b68"}
Nov 28 06:54:48 crc kubenswrapper[4946]: I1128 06:54:48.690249 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6k9gx"
Nov 28 06:54:48 crc kubenswrapper[4946]: I1128 06:54:48.692825 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hxz9b" event={"ID":"3b5b4fe0-1e4f-435a-aff4-3d388fb616cc","Type":"ContainerStarted","Data":"6e97f04f0e586f63cdc9c6bc8c5b1a2653f6d70733287b6d8a82456f5a1ba1d4"}
Nov 28 06:54:48 crc kubenswrapper[4946]: I1128 06:54:48.692883 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hxz9b" event={"ID":"3b5b4fe0-1e4f-435a-aff4-3d388fb616cc","Type":"ContainerStarted","Data":"05a6e30a9c1477b077f17adaac3060b28a432d75d0b4d13bb8b834ba77876f4a"}
Nov 28 06:54:48 crc kubenswrapper[4946]: I1128 06:54:48.694967 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6q9fb" event={"ID":"07a101d3-99de-46c5-88c3-9d053589fd73","Type":"ContainerStarted","Data":"b34f21254103158ec3b3fb923ca1fc66e0fc8f07a0e919a8fd3d36335744e43e"}
Nov 28 06:54:48 crc kubenswrapper[4946]: I1128 06:54:48.695013 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6q9fb" event={"ID":"07a101d3-99de-46c5-88c3-9d053589fd73","Type":"ContainerStarted","Data":"56cb794e50e295c19a99a3b8d55518753a08c5b839a32c594d19f8253d0a463b"}
Nov 28 06:54:48 crc kubenswrapper[4946]: I1128 06:54:48.695132 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6q9fb"
Nov 28 06:54:48 crc kubenswrapper[4946]: I1128 06:54:48.698001 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qvr67" event={"ID":"7ef7cd67-8788-4c1b-9f7d-e66f4fa9737e","Type":"ContainerStarted","Data":"227bacb925003b74365b83611e73adcc33ced8600907b091cfcde1a71f2a09f3"}
Nov 28 06:54:48 crc kubenswrapper[4946]: I1128 06:54:48.698055 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qvr67" event={"ID":"7ef7cd67-8788-4c1b-9f7d-e66f4fa9737e","Type":"ContainerStarted","Data":"9d98d1adb02f1925304b3ceee33e2b714059f58966cb4171a13c88ee5cc6d27f"}
Nov 28 06:54:48 crc kubenswrapper[4946]: I1128 06:54:48.698169 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-qvr67"
Nov 28 06:54:48 crc kubenswrapper[4946]: I1128 06:54:48.700241 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6k9gx"
Nov 28 06:54:48 crc kubenswrapper[4946]: I1128 06:54:48.700568 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-r8n92" event={"ID":"4d68f56c-7e3b-4207-92d1-95800df70d0e","Type":"ContainerStarted","Data":"a398512d2dac9793f2bac9cc4c434e6d6a54700639f71f5d6481ffde9e664266"}
Nov 28 06:54:48 crc kubenswrapper[4946]: I1128 06:54:48.700600 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-r8n92" event={"ID":"4d68f56c-7e3b-4207-92d1-95800df70d0e","Type":"ContainerStarted","Data":"9a4127e982c23d9d5aeeef3c56e4e77758ca8fdc0de73e3ca1c013ce49899ba1"}
Nov 28 06:54:48 crc kubenswrapper[4946]: I1128 06:54:48.702593 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-44jkz" event={"ID":"8df2dcaf-0af5-4827-9135-e5944a2b12dc","Type":"ContainerStarted","Data":"79963fdb9ff1e9d599cc671ae928341fc0cd9e166490d374a1f3accba64556d1"}
Nov 28 06:54:48 crc kubenswrapper[4946]: I1128 06:54:48.702674 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-44jkz" event={"ID":"8df2dcaf-0af5-4827-9135-e5944a2b12dc","Type":"ContainerStarted","Data":"9e06985d8ad1bbd080383c2cc701cc483306463f82e4b6a4e15013044c69f08c"}
Nov 28 06:54:48 crc kubenswrapper[4946]: I1128 06:54:48.704221 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4swtq" event={"ID":"c6a06ceb-1625-421c-864d-4f24d2fd00d3","Type":"ContainerStarted","Data":"bcda21718305355aaf87c845f981853a19ff5ed868211ae880f75945b8ff64af"}
Nov 28 06:54:48 crc kubenswrapper[4946]: I1128 06:54:48.706955 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mnst8" event={"ID":"7092cadd-becf-4ee8-bbb5-0aadd638ad46","Type":"ContainerStarted","Data":"f6863cc9705f26e7f385a56b07a42d67f4c96ec88ad3fd0af03a63d4c40b7646"}
Nov 28 06:54:48 crc kubenswrapper[4946]: I1128 06:54:48.708418 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-sc95q" event={"ID":"98c2fa9b-f899-4167-ab43-6d97e7b1589e","Type":"ContainerStarted","Data":"a40495c7db33369f28c20128540a7aeda9d679e59e0b3face503681e112e42d8"}
Nov 28 06:54:48 crc kubenswrapper[4946]: I1128 06:54:48.710372 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pdk2l" event={"ID":"2cb23522-48f9-442d-8321-90df7290886d","Type":"ContainerStarted","Data":"41bf7870e85bc61d1e21d8292c4e8811bfb56bf5b2f7d6ba13e0f7060e878823"}
Nov 28 06:54:48 crc kubenswrapper[4946]: I1128 06:54:48.712796 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rzxwh" event={"ID":"df6a8d9e-bb6c-42e9-918e-60cbb78f55ac","Type":"ContainerStarted","Data":"8a008dedda85e87b8762f84037db95cafb5cb82764df3070244fff789d6d5456"}
Nov 28 06:54:48 crc kubenswrapper[4946]: I1128 06:54:48.715489 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-qjlhx" event={"ID":"7b7188c5-eb8b-46be-b3db-4f0e058c3110","Type":"ContainerStarted","Data":"381cff4102adb8aa1d6bcd02750be68a0fafb32b1230e7c641169300623113e6"}
Nov 28 06:54:48 crc kubenswrapper[4946]: I1128 06:54:48.715530 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-qjlhx" event={"ID":"7b7188c5-eb8b-46be-b3db-4f0e058c3110","Type":"ContainerStarted","Data":"a94ef1ae96c9f9aea76f107bb9b7fcd3fa2f3aa81b955cd97cfa1d200fb84ea5"}
Nov 28 06:54:48 crc kubenswrapper[4946]: I1128 06:54:48.716790 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z4vpd" event={"ID":"94ee27f9-59ea-44af-bbd3-e586f4bb7bb1","Type":"ContainerStarted","Data":"52a72c1456212574a9cdf851986b73818e4c0f31f9ff22f44c02b3fb9d5a3634"}
Nov 28 06:54:48 crc kubenswrapper[4946]: I1128 06:54:48.717669 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 06:54:48 crc kubenswrapper[4946]: E1128 06:54:48.718104 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:54:49.2180484 +0000 UTC m=+143.596113511 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:54:48 crc kubenswrapper[4946]: I1128 06:54:48.719287 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" Nov 28 06:54:48 crc kubenswrapper[4946]: I1128 06:54:48.723453 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6k9gx" podStartSLOduration=122.723440837 podStartE2EDuration="2m2.723440837s" podCreationTimestamp="2025-11-28 06:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:54:48.720918703 +0000 UTC m=+143.098983804" watchObservedRunningTime="2025-11-28 06:54:48.723440837 +0000 UTC m=+143.101505948" Nov 28 06:54:48 crc kubenswrapper[4946]: E1128 06:54:48.723875 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:54:49.223851988 +0000 UTC m=+143.601917099 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qhbbg" (UID: "2ba6db31-f148-40c3-b5cc-60b2a8e063e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:54:48 crc kubenswrapper[4946]: I1128 06:54:48.725383 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-pm27x" event={"ID":"386d3c8a-a2d4-46e1-a7cb-c456eec4860b","Type":"ContainerStarted","Data":"be50bd5c4a59f5d66a9c6a18d629174662bf32d391892c35dc701b32eb0f75f4"} Nov 28 06:54:48 crc kubenswrapper[4946]: I1128 06:54:48.727916 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-m444h" Nov 28 06:54:48 crc kubenswrapper[4946]: I1128 06:54:48.738722 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-js7vs" Nov 28 06:54:48 crc kubenswrapper[4946]: I1128 06:54:48.745922 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-m444h" Nov 28 06:54:48 crc kubenswrapper[4946]: I1128 06:54:48.793261 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pdk2l" podStartSLOduration=122.793235911 podStartE2EDuration="2m2.793235911s" podCreationTimestamp="2025-11-28 06:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:54:48.757510663 +0000 UTC m=+143.135575784" watchObservedRunningTime="2025-11-28 06:54:48.793235911 +0000 UTC m=+143.171301022" Nov 28 06:54:48 crc kubenswrapper[4946]: I1128 06:54:48.820428 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:54:48 crc kubenswrapper[4946]: E1128 06:54:48.821360 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:54:49.321339366 +0000 UTC m=+143.699404477 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:54:48 crc kubenswrapper[4946]: I1128 06:54:48.822035 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" Nov 28 06:54:48 crc kubenswrapper[4946]: E1128 06:54:48.829070 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:54:49.329053132 +0000 UTC m=+143.707118243 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qhbbg" (UID: "2ba6db31-f148-40c3-b5cc-60b2a8e063e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:54:48 crc kubenswrapper[4946]: I1128 06:54:48.842012 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-qjlhx" podStartSLOduration=122.84198342 podStartE2EDuration="2m2.84198342s" podCreationTimestamp="2025-11-28 06:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:54:48.795034097 +0000 UTC m=+143.173099208" watchObservedRunningTime="2025-11-28 06:54:48.84198342 +0000 UTC m=+143.220048531" Nov 28 06:54:48 crc kubenswrapper[4946]: I1128 06:54:48.842496 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-r8n92" podStartSLOduration=122.842488463 podStartE2EDuration="2m2.842488463s" podCreationTimestamp="2025-11-28 06:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:54:48.840901773 +0000 UTC m=+143.218966884" watchObservedRunningTime="2025-11-28 06:54:48.842488463 +0000 UTC m=+143.220553574" Nov 28 06:54:48 crc kubenswrapper[4946]: I1128 06:54:48.902968 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-44jkz" podStartSLOduration=122.90294506 podStartE2EDuration="2m2.90294506s" podCreationTimestamp="2025-11-28 06:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:54:48.868874064 +0000 UTC m=+143.246939175" watchObservedRunningTime="2025-11-28 06:54:48.90294506 +0000 UTC m=+143.281010171"
Nov 28 06:54:48 crc kubenswrapper[4946]: I1128 06:54:48.923498 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:54:48 crc kubenswrapper[4946]: E1128 06:54:48.923808 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:54:49.42378677 +0000 UTC m=+143.801851881 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:54:48 crc kubenswrapper[4946]: I1128 06:54:48.943092 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-qvr67" podStartSLOduration=9.943050319 podStartE2EDuration="9.943050319s" podCreationTimestamp="2025-11-28 06:54:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:54:48.937150899 +0000 UTC m=+143.315216010" watchObservedRunningTime="2025-11-28 06:54:48.943050319 +0000 UTC m=+143.321115430" Nov 28 06:54:48 crc kubenswrapper[4946]: I1128 06:54:48.943860 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z4vpd" podStartSLOduration=122.94385402 podStartE2EDuration="2m2.94385402s" podCreationTimestamp="2025-11-28 06:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:54:48.90373517 +0000 UTC m=+143.281800281" watchObservedRunningTime="2025-11-28 06:54:48.94385402 +0000 UTC m=+143.321919131" Nov 28 06:54:48 crc kubenswrapper[4946]: I1128 06:54:48.970543 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mnst8" podStartSLOduration=122.970517318 podStartE2EDuration="2m2.970517318s" podCreationTimestamp="2025-11-28 06:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:54:48.970508507 +0000 UTC m=+143.348573618" watchObservedRunningTime="2025-11-28 06:54:48.970517318 +0000 UTC m=+143.348582429" Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.034630 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" Nov 28 06:54:49 crc kubenswrapper[4946]: E1128 06:54:49.035017 4946 nestedpendingoperations.go:348]
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:54:49.535002107 +0000 UTC m=+143.913067208 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qhbbg" (UID: "2ba6db31-f148-40c3-b5cc-60b2a8e063e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.056304 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hxz9b" podStartSLOduration=123.056280748 podStartE2EDuration="2m3.056280748s" podCreationTimestamp="2025-11-28 06:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:54:49.005922237 +0000 UTC m=+143.383987348" watchObservedRunningTime="2025-11-28 06:54:49.056280748 +0000 UTC m=+143.434345859" Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.090443 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-f7mgl"] Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.092064 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f7mgl" Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.093160 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rzxwh" podStartSLOduration=124.093138355 podStartE2EDuration="2m4.093138355s" podCreationTimestamp="2025-11-28 06:52:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:54:49.083204912 +0000 UTC m=+143.461270023" watchObservedRunningTime="2025-11-28 06:54:49.093138355 +0000 UTC m=+143.471203466" Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.120698 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.132250 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f7mgl"] Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.136029 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:54:49 crc kubenswrapper[4946]: E1128 06:54:49.136062 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:54:49.636009254 +0000 UTC m=+144.014074365 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.136352 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b169ce71-a297-432f-a041-f7b544659574-catalog-content\") pod \"community-operators-f7mgl\" (UID: \"b169ce71-a297-432f-a041-f7b544659574\") " pod="openshift-marketplace/community-operators-f7mgl" Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.136413 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b169ce71-a297-432f-a041-f7b544659574-utilities\") pod \"community-operators-f7mgl\" (UID: \"b169ce71-a297-432f-a041-f7b544659574\") " pod="openshift-marketplace/community-operators-f7mgl" Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.136483 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w54pj\" (UniqueName: \"kubernetes.io/projected/b169ce71-a297-432f-a041-f7b544659574-kube-api-access-w54pj\") pod \"community-operators-f7mgl\" (UID: \"b169ce71-a297-432f-a041-f7b544659574\") " pod="openshift-marketplace/community-operators-f7mgl" Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.136578 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" Nov 28 06:54:49 crc kubenswrapper[4946]: E1128 06:54:49.136909 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:54:49.636898317 +0000 UTC m=+144.014963428 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qhbbg" (UID: "2ba6db31-f148-40c3-b5cc-60b2a8e063e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.149652 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6q9fb" podStartSLOduration=123.14962895 podStartE2EDuration="2m3.14962895s" podCreationTimestamp="2025-11-28 06:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:54:49.130289409 +0000 UTC m=+143.508354520" watchObservedRunningTime="2025-11-28 06:54:49.14962895 +0000 UTC m=+143.527694061" Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.242085 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:54:49 crc kubenswrapper[4946]: E1128 06:54:49.242210 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:54:49.742187273 +0000 UTC m=+144.120252384 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.242373 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b169ce71-a297-432f-a041-f7b544659574-catalog-content\") pod \"community-operators-f7mgl\" (UID: \"b169ce71-a297-432f-a041-f7b544659574\") " pod="openshift-marketplace/community-operators-f7mgl" Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.242403 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b169ce71-a297-432f-a041-f7b544659574-utilities\") pod \"community-operators-f7mgl\" (UID: \"b169ce71-a297-432f-a041-f7b544659574\") " pod="openshift-marketplace/community-operators-f7mgl" Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.242450 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w54pj\" (UniqueName: \"kubernetes.io/projected/b169ce71-a297-432f-a041-f7b544659574-kube-api-access-w54pj\") pod \"community-operators-f7mgl\" (UID: \"b169ce71-a297-432f-a041-f7b544659574\") " pod="openshift-marketplace/community-operators-f7mgl" Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.242537 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" Nov 28 06:54:49 crc kubenswrapper[4946]: E1128 06:54:49.242849 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:54:49.742831739 +0000 UTC m=+144.120896850 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qhbbg" (UID: "2ba6db31-f148-40c3-b5cc-60b2a8e063e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.243888 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b169ce71-a297-432f-a041-f7b544659574-catalog-content\") pod \"community-operators-f7mgl\" (UID: \"b169ce71-a297-432f-a041-f7b544659574\") " pod="openshift-marketplace/community-operators-f7mgl" Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.244129 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b169ce71-a297-432f-a041-f7b544659574-utilities\") pod \"community-operators-f7mgl\" (UID: \"b169ce71-a297-432f-a041-f7b544659574\") " pod="openshift-marketplace/community-operators-f7mgl" Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.277115 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7xrv4"] Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.278343 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7xrv4" Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.294373 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.315551 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-pm27x" podStartSLOduration=123.315520497 podStartE2EDuration="2m3.315520497s" podCreationTimestamp="2025-11-28 06:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:54:49.308860487 +0000 UTC m=+143.686925598" watchObservedRunningTime="2025-11-28 06:54:49.315520497 +0000 UTC m=+143.693585608" Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.317172 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7xrv4"] Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.330011 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w54pj\" (UniqueName: \"kubernetes.io/projected/b169ce71-a297-432f-a041-f7b544659574-kube-api-access-w54pj\") pod \"community-operators-f7mgl\" (UID: \"b169ce71-a297-432f-a041-f7b544659574\") " pod="openshift-marketplace/community-operators-f7mgl" Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.345537 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.346008 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1faa4eda-7193-4eec-b3f6-391ddc88b498-catalog-content\") pod \"certified-operators-7xrv4\" (UID: \"1faa4eda-7193-4eec-b3f6-391ddc88b498\") " pod="openshift-marketplace/certified-operators-7xrv4" Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.346066 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1faa4eda-7193-4eec-b3f6-391ddc88b498-utilities\") pod \"certified-operators-7xrv4\" (UID: \"1faa4eda-7193-4eec-b3f6-391ddc88b498\") " pod="openshift-marketplace/certified-operators-7xrv4" Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.346095 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj864\" (UniqueName: \"kubernetes.io/projected/1faa4eda-7193-4eec-b3f6-391ddc88b498-kube-api-access-pj864\") pod \"certified-operators-7xrv4\" (UID: \"1faa4eda-7193-4eec-b3f6-391ddc88b498\") " pod="openshift-marketplace/certified-operators-7xrv4" Nov 28 06:54:49 crc kubenswrapper[4946]: E1128 06:54:49.346216 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:54:49.846196116 +0000 UTC m=+144.224261227 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.440854 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f7mgl" Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.447166 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.447221 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1faa4eda-7193-4eec-b3f6-391ddc88b498-catalog-content\") pod \"certified-operators-7xrv4\" (UID: \"1faa4eda-7193-4eec-b3f6-391ddc88b498\") " pod="openshift-marketplace/certified-operators-7xrv4" Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.447263 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1faa4eda-7193-4eec-b3f6-391ddc88b498-utilities\") pod \"certified-operators-7xrv4\" (UID: \"1faa4eda-7193-4eec-b3f6-391ddc88b498\") " pod="openshift-marketplace/certified-operators-7xrv4" Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.447290 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj864\" (UniqueName: \"kubernetes.io/projected/1faa4eda-7193-4eec-b3f6-391ddc88b498-kube-api-access-pj864\") pod \"certified-operators-7xrv4\" (UID: \"1faa4eda-7193-4eec-b3f6-391ddc88b498\") " pod="openshift-marketplace/certified-operators-7xrv4" Nov 28 06:54:49 crc kubenswrapper[4946]: E1128 06:54:49.447877 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:54:49.94785372 +0000 UTC m=+144.325918821 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qhbbg" (UID: "2ba6db31-f148-40c3-b5cc-60b2a8e063e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.447958 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1faa4eda-7193-4eec-b3f6-391ddc88b498-catalog-content\") pod \"certified-operators-7xrv4\" (UID: \"1faa4eda-7193-4eec-b3f6-391ddc88b498\") " pod="openshift-marketplace/certified-operators-7xrv4" Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.448066 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1faa4eda-7193-4eec-b3f6-391ddc88b498-utilities\") pod \"certified-operators-7xrv4\" (UID: \"1faa4eda-7193-4eec-b3f6-391ddc88b498\") " pod="openshift-marketplace/certified-operators-7xrv4" Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.489130 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj864\" (UniqueName: \"kubernetes.io/projected/1faa4eda-7193-4eec-b3f6-391ddc88b498-kube-api-access-pj864\") pod \"certified-operators-7xrv4\" (UID: \"1faa4eda-7193-4eec-b3f6-391ddc88b498\") " pod="openshift-marketplace/certified-operators-7xrv4" Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.498988 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nxxk8"] Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.500113 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nxxk8" Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.521788 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nxxk8"] Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.552737 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.552947 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09b629b2-f3d1-4856-98f2-5761c99f457a-utilities\") pod \"community-operators-nxxk8\" (UID: \"09b629b2-f3d1-4856-98f2-5761c99f457a\") " pod="openshift-marketplace/community-operators-nxxk8" Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.553011 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp8vk\" (UniqueName: \"kubernetes.io/projected/09b629b2-f3d1-4856-98f2-5761c99f457a-kube-api-access-qp8vk\") pod \"community-operators-nxxk8\" (UID: \"09b629b2-f3d1-4856-98f2-5761c99f457a\") " pod="openshift-marketplace/community-operators-nxxk8" Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.553053 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09b629b2-f3d1-4856-98f2-5761c99f457a-catalog-content\") pod \"community-operators-nxxk8\" (UID: \"09b629b2-f3d1-4856-98f2-5761c99f457a\") " pod="openshift-marketplace/community-operators-nxxk8" Nov 28 06:54:49 crc kubenswrapper[4946]: E1128 06:54:49.553165 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:54:50.053140447 +0000 UTC m=+144.431205558 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.588631 4946 patch_prober.go:28] interesting pod/router-default-5444994796-hwk8p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 06:54:49 crc kubenswrapper[4946]: [-]has-synced failed: reason withheld Nov 28 06:54:49 crc kubenswrapper[4946]: [+]process-running ok Nov 28 06:54:49 crc kubenswrapper[4946]: healthz check failed Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.588713 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hwk8p" podUID="63f790e6-5c8d-475c-bd26-a1e52ffb9ed3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.627584 4946 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.642173 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7xrv4" Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.655280 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09b629b2-f3d1-4856-98f2-5761c99f457a-utilities\") pod \"community-operators-nxxk8\" (UID: \"09b629b2-f3d1-4856-98f2-5761c99f457a\") " pod="openshift-marketplace/community-operators-nxxk8" Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.655355 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.655411 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qp8vk\" (UniqueName: \"kubernetes.io/projected/09b629b2-f3d1-4856-98f2-5761c99f457a-kube-api-access-qp8vk\") pod \"community-operators-nxxk8\" (UID: \"09b629b2-f3d1-4856-98f2-5761c99f457a\") " pod="openshift-marketplace/community-operators-nxxk8" Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.655481 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09b629b2-f3d1-4856-98f2-5761c99f457a-catalog-content\") pod \"community-operators-nxxk8\" (UID: \"09b629b2-f3d1-4856-98f2-5761c99f457a\") " pod="openshift-marketplace/community-operators-nxxk8" Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.655860 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/09b629b2-f3d1-4856-98f2-5761c99f457a-utilities\") pod \"community-operators-nxxk8\" (UID: \"09b629b2-f3d1-4856-98f2-5761c99f457a\") " pod="openshift-marketplace/community-operators-nxxk8" Nov 28 06:54:49 crc kubenswrapper[4946]: E1128 06:54:49.656884 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:54:50.156867723 +0000 UTC m=+144.534932834 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qhbbg" (UID: "2ba6db31-f148-40c3-b5cc-60b2a8e063e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.657882 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09b629b2-f3d1-4856-98f2-5761c99f457a-catalog-content\") pod \"community-operators-nxxk8\" (UID: \"09b629b2-f3d1-4856-98f2-5761c99f457a\") " pod="openshift-marketplace/community-operators-nxxk8" Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.669065 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8x4xm"] Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.670543 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8x4xm" Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.695795 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8x4xm"] Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.713494 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qp8vk\" (UniqueName: \"kubernetes.io/projected/09b629b2-f3d1-4856-98f2-5761c99f457a-kube-api-access-qp8vk\") pod \"community-operators-nxxk8\" (UID: \"09b629b2-f3d1-4856-98f2-5761c99f457a\") " pod="openshift-marketplace/community-operators-nxxk8" Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.757685 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.758273 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70-catalog-content\") pod \"certified-operators-8x4xm\" (UID: \"ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70\") " pod="openshift-marketplace/certified-operators-8x4xm" Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.758337 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqd8g\" (UniqueName: \"kubernetes.io/projected/ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70-kube-api-access-tqd8g\") pod \"certified-operators-8x4xm\" (UID: \"ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70\") " 
pod="openshift-marketplace/certified-operators-8x4xm" Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.758405 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70-utilities\") pod \"certified-operators-8x4xm\" (UID: \"ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70\") " pod="openshift-marketplace/certified-operators-8x4xm" Nov 28 06:54:49 crc kubenswrapper[4946]: E1128 06:54:49.758542 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:54:50.258518957 +0000 UTC m=+144.636584068 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.762977 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-sc95q" event={"ID":"98c2fa9b-f899-4167-ab43-6d97e7b1589e","Type":"ContainerStarted","Data":"bce75836173308b2fbd36dee15656b429561ddb07380237b4577f2b9668ae8e3"} Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.763034 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-sc95q" event={"ID":"98c2fa9b-f899-4167-ab43-6d97e7b1589e","Type":"ContainerStarted","Data":"ae7424a3a16b63eeb182b17f75067a8f0b5886473bbb092b00f2173339dad86a"} Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.765757 4946 generic.go:334] "Generic (PLEG): container finished" podID="175d3bf2-c969-401c-9b35-d91a066d0305" containerID="4ffa1b14e943c4fc1748dd0166c9ec7e3837202da277f99b0b4f6853ba87b4d9" exitCode=0 Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.766283 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405205-hbxpf" event={"ID":"175d3bf2-c969-401c-9b35-d91a066d0305","Type":"ContainerDied","Data":"4ffa1b14e943c4fc1748dd0166c9ec7e3837202da277f99b0b4f6853ba87b4d9"} Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.785529 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m8vtf" Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.863510 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.863678 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70-utilities\") pod \"certified-operators-8x4xm\" (UID: \"ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70\") " 
pod="openshift-marketplace/certified-operators-8x4xm" Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.864075 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70-catalog-content\") pod \"certified-operators-8x4xm\" (UID: \"ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70\") " pod="openshift-marketplace/certified-operators-8x4xm" Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.864291 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqd8g\" (UniqueName: \"kubernetes.io/projected/ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70-kube-api-access-tqd8g\") pod \"certified-operators-8x4xm\" (UID: \"ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70\") " pod="openshift-marketplace/certified-operators-8x4xm" Nov 28 06:54:49 crc kubenswrapper[4946]: E1128 06:54:49.865672 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:54:50.36565707 +0000 UTC m=+144.743722181 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qhbbg" (UID: "2ba6db31-f148-40c3-b5cc-60b2a8e063e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.868798 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70-catalog-content\") pod \"certified-operators-8x4xm\" (UID: \"ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70\") " pod="openshift-marketplace/certified-operators-8x4xm" Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.869756 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70-utilities\") pod \"certified-operators-8x4xm\" (UID: \"ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70\") " pod="openshift-marketplace/certified-operators-8x4xm" Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.875895 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nxxk8" Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.878640 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f7mgl"] Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.899770 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqd8g\" (UniqueName: \"kubernetes.io/projected/ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70-kube-api-access-tqd8g\") pod \"certified-operators-8x4xm\" (UID: \"ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70\") " pod="openshift-marketplace/certified-operators-8x4xm" Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.966597 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:54:49 crc kubenswrapper[4946]: E1128 06:54:49.966839 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:54:50.466802261 +0000 UTC m=+144.844867372 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:54:49 crc kubenswrapper[4946]: I1128 06:54:49.970304 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" Nov 28 06:54:49 crc kubenswrapper[4946]: E1128 06:54:49.970903 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:54:50.470878305 +0000 UTC m=+144.848943416 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qhbbg" (UID: "2ba6db31-f148-40c3-b5cc-60b2a8e063e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:54:50 crc kubenswrapper[4946]: I1128 06:54:50.040610 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7xrv4"] Nov 28 06:54:50 crc kubenswrapper[4946]: W1128 06:54:50.060537 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1faa4eda_7193_4eec_b3f6_391ddc88b498.slice/crio-d163a8fc352ec8b672948a0f9fe48681f4704c738d8f5cb1802fbbedd84c02d8 WatchSource:0}: Error finding container d163a8fc352ec8b672948a0f9fe48681f4704c738d8f5cb1802fbbedd84c02d8: Status 404 returned error can't find the container with id d163a8fc352ec8b672948a0f9fe48681f4704c738d8f5cb1802fbbedd84c02d8 Nov 28 06:54:50 crc kubenswrapper[4946]: I1128 06:54:50.067707 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8x4xm" Nov 28 06:54:50 crc kubenswrapper[4946]: I1128 06:54:50.071703 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:54:50 crc kubenswrapper[4946]: E1128 06:54:50.071822 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:54:50.57178696 +0000 UTC m=+144.949852071 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:54:50 crc kubenswrapper[4946]: I1128 06:54:50.072018 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" Nov 28 06:54:50 crc kubenswrapper[4946]: E1128 06:54:50.072497 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:54:50.572488777 +0000 UTC m=+144.950553888 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qhbbg" (UID: "2ba6db31-f148-40c3-b5cc-60b2a8e063e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:54:50 crc kubenswrapper[4946]: I1128 06:54:50.121036 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nxxk8"] Nov 28 06:54:50 crc kubenswrapper[4946]: I1128 06:54:50.173199 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:54:50 crc kubenswrapper[4946]: E1128 06:54:50.173551 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:54:50.673534426 +0000 UTC m=+145.051599537 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:54:50 crc kubenswrapper[4946]: I1128 06:54:50.181070 4946 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-11-28T06:54:49.62761996Z","Handler":null,"Name":""} Nov 28 06:54:50 crc kubenswrapper[4946]: I1128 06:54:50.191544 4946 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Nov 28 06:54:50 crc kubenswrapper[4946]: I1128 06:54:50.191599 4946 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Nov 28 06:54:50 crc kubenswrapper[4946]: I1128 06:54:50.275604 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" Nov 28 06:54:50 crc kubenswrapper[4946]: I1128 06:54:50.278636 4946 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 28 06:54:50 crc kubenswrapper[4946]: I1128 06:54:50.278763 4946 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" Nov 28 06:54:50 crc kubenswrapper[4946]: I1128 06:54:50.435764 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qhbbg\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" Nov 28 06:54:50 crc kubenswrapper[4946]: I1128 06:54:50.478281 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:54:50 crc kubenswrapper[4946]: I1128 06:54:50.498452 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 28 06:54:50 crc kubenswrapper[4946]: I1128 06:54:50.499744 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 06:54:50 crc kubenswrapper[4946]: I1128 06:54:50.505052 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 28 06:54:50 crc kubenswrapper[4946]: I1128 06:54:50.505259 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 28 06:54:50 crc kubenswrapper[4946]: I1128 06:54:50.514281 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 28 06:54:50 crc kubenswrapper[4946]: I1128 06:54:50.521645 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 28 06:54:50 crc kubenswrapper[4946]: I1128 06:54:50.567342 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8x4xm"] Nov 28 06:54:50 crc kubenswrapper[4946]: I1128 06:54:50.580129 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5a3c555e-1d86-470a-b353-25a29c615ab6-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"5a3c555e-1d86-470a-b353-25a29c615ab6\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 06:54:50 crc kubenswrapper[4946]: I1128 06:54:50.580172 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a3c555e-1d86-470a-b353-25a29c615ab6-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"5a3c555e-1d86-470a-b353-25a29c615ab6\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 06:54:50 crc kubenswrapper[4946]: I1128 06:54:50.580752 4946 patch_prober.go:28] interesting pod/router-default-5444994796-hwk8p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 06:54:50 crc kubenswrapper[4946]: [-]has-synced failed: reason withheld Nov 28 06:54:50 crc kubenswrapper[4946]: [+]process-running ok Nov 28 06:54:50 crc kubenswrapper[4946]: healthz check failed Nov 28 06:54:50 crc kubenswrapper[4946]: I1128 06:54:50.580843 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hwk8p" podUID="63f790e6-5c8d-475c-bd26-a1e52ffb9ed3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 06:54:50 crc kubenswrapper[4946]: W1128 06:54:50.607312 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce0e2c1f_e62e_4ea8_95ae_bf75a76a3a70.slice/crio-c63b487e1e5b1ad41e88a4c633bb2bc4999fbbefb54597889ed188d775e4f638 WatchSource:0}: Error finding container c63b487e1e5b1ad41e88a4c633bb2bc4999fbbefb54597889ed188d775e4f638: Status 404 returned error can't find the container with id c63b487e1e5b1ad41e88a4c633bb2bc4999fbbefb54597889ed188d775e4f638 Nov 28 06:54:50 crc kubenswrapper[4946]: I1128 06:54:50.620590 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" Nov 28 06:54:50 crc kubenswrapper[4946]: I1128 06:54:50.682125 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5a3c555e-1d86-470a-b353-25a29c615ab6-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"5a3c555e-1d86-470a-b353-25a29c615ab6\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 06:54:50 crc kubenswrapper[4946]: I1128 06:54:50.682181 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a3c555e-1d86-470a-b353-25a29c615ab6-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"5a3c555e-1d86-470a-b353-25a29c615ab6\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 06:54:50 crc kubenswrapper[4946]: I1128 06:54:50.682662 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5a3c555e-1d86-470a-b353-25a29c615ab6-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"5a3c555e-1d86-470a-b353-25a29c615ab6\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 06:54:50 crc kubenswrapper[4946]: I1128 06:54:50.706761 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a3c555e-1d86-470a-b353-25a29c615ab6-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"5a3c555e-1d86-470a-b353-25a29c615ab6\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 06:54:50 crc kubenswrapper[4946]: I1128 06:54:50.763294 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 06:54:50 crc kubenswrapper[4946]: I1128 06:54:50.776305 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-sc95q" event={"ID":"98c2fa9b-f899-4167-ab43-6d97e7b1589e","Type":"ContainerStarted","Data":"919e18a3499f7d25f6a017db7414b158850b8a9e11f7eae00193acf98af61071"} Nov 28 06:54:50 crc kubenswrapper[4946]: I1128 06:54:50.779355 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8x4xm" event={"ID":"ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70","Type":"ContainerStarted","Data":"c63b487e1e5b1ad41e88a4c633bb2bc4999fbbefb54597889ed188d775e4f638"} Nov 28 06:54:50 crc kubenswrapper[4946]: I1128 06:54:50.782520 4946 generic.go:334] "Generic (PLEG): container finished" podID="b169ce71-a297-432f-a041-f7b544659574" containerID="8b24854b3b1d473d5e5d5ff7a34a2d4ad401b987285a987e8f0e02bf6e2101be" exitCode=0 Nov 28 06:54:50 crc kubenswrapper[4946]: I1128 06:54:50.782589 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f7mgl" event={"ID":"b169ce71-a297-432f-a041-f7b544659574","Type":"ContainerDied","Data":"8b24854b3b1d473d5e5d5ff7a34a2d4ad401b987285a987e8f0e02bf6e2101be"} Nov 28 06:54:50 crc kubenswrapper[4946]: I1128 06:54:50.782606 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f7mgl" event={"ID":"b169ce71-a297-432f-a041-f7b544659574","Type":"ContainerStarted","Data":"c67f9eda94d2fe964888320bbe415e36c803976065f6155d868c081579ee4736"} Nov 28 06:54:50 crc kubenswrapper[4946]: I1128 06:54:50.786294 4946 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 06:54:50 crc 
kubenswrapper[4946]: I1128 06:54:50.786597 4946 generic.go:334] "Generic (PLEG): container finished" podID="1faa4eda-7193-4eec-b3f6-391ddc88b498" containerID="f94f8c91c770e2dcd038f366f35186eff4c68021402f800455db2da0ff8ddd5b" exitCode=0 Nov 28 06:54:50 crc kubenswrapper[4946]: I1128 06:54:50.786649 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7xrv4" event={"ID":"1faa4eda-7193-4eec-b3f6-391ddc88b498","Type":"ContainerDied","Data":"f94f8c91c770e2dcd038f366f35186eff4c68021402f800455db2da0ff8ddd5b"} Nov 28 06:54:50 crc kubenswrapper[4946]: I1128 06:54:50.786668 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7xrv4" event={"ID":"1faa4eda-7193-4eec-b3f6-391ddc88b498","Type":"ContainerStarted","Data":"d163a8fc352ec8b672948a0f9fe48681f4704c738d8f5cb1802fbbedd84c02d8"} Nov 28 06:54:50 crc kubenswrapper[4946]: I1128 06:54:50.793038 4946 generic.go:334] "Generic (PLEG): container finished" podID="09b629b2-f3d1-4856-98f2-5761c99f457a" containerID="62d8856e00dceb21a249a38486f1902050b25d810933404627e2eb85ca809729" exitCode=0 Nov 28 06:54:50 crc kubenswrapper[4946]: I1128 06:54:50.795383 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nxxk8" event={"ID":"09b629b2-f3d1-4856-98f2-5761c99f457a","Type":"ContainerDied","Data":"62d8856e00dceb21a249a38486f1902050b25d810933404627e2eb85ca809729"} Nov 28 06:54:50 crc kubenswrapper[4946]: I1128 06:54:50.795482 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nxxk8" event={"ID":"09b629b2-f3d1-4856-98f2-5761c99f457a","Type":"ContainerStarted","Data":"fe6dd611b20c5f66c7ad28709264ed3a2688a7055cebf85a49098965557b92bb"} Nov 28 06:54:50 crc kubenswrapper[4946]: W1128 06:54:50.822744 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ba6db31_f148_40c3_b5cc_60b2a8e063e4.slice/crio-6c7296ac0d58a37d63624a8b72cb3d1be4cf2a01d44a7f321c5682408f31d83e WatchSource:0}: Error finding container 6c7296ac0d58a37d63624a8b72cb3d1be4cf2a01d44a7f321c5682408f31d83e: Status 404 returned error can't find the container with id 6c7296ac0d58a37d63624a8b72cb3d1be4cf2a01d44a7f321c5682408f31d83e Nov 28 06:54:50 crc kubenswrapper[4946]: I1128 06:54:50.825215 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-sc95q" podStartSLOduration=11.82519426 podStartE2EDuration="11.82519426s" podCreationTimestamp="2025-11-28 06:54:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:54:50.818075919 +0000 UTC m=+145.196141040" watchObservedRunningTime="2025-11-28 06:54:50.82519426 +0000 UTC m=+145.203259371" Nov 28 06:54:50 crc kubenswrapper[4946]: I1128 06:54:50.830494 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qhbbg"] Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.054868 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405205-hbxpf" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.069725 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n9lpt"] Nov 28 06:54:51 crc kubenswrapper[4946]: E1128 06:54:51.070268 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="175d3bf2-c969-401c-9b35-d91a066d0305" containerName="collect-profiles" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.070296 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="175d3bf2-c969-401c-9b35-d91a066d0305" containerName="collect-profiles" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.070495 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="175d3bf2-c969-401c-9b35-d91a066d0305" containerName="collect-profiles" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.071834 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n9lpt" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.074886 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n9lpt"] Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.080953 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.094746 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/175d3bf2-c969-401c-9b35-d91a066d0305-config-volume\") pod \"175d3bf2-c969-401c-9b35-d91a066d0305\" (UID: \"175d3bf2-c969-401c-9b35-d91a066d0305\") " Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.095105 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7jr4\" (UniqueName: \"kubernetes.io/projected/175d3bf2-c969-401c-9b35-d91a066d0305-kube-api-access-m7jr4\") pod \"175d3bf2-c969-401c-9b35-d91a066d0305\" (UID: \"175d3bf2-c969-401c-9b35-d91a066d0305\") " Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.095182 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/175d3bf2-c969-401c-9b35-d91a066d0305-secret-volume\") pod \"175d3bf2-c969-401c-9b35-d91a066d0305\" (UID: \"175d3bf2-c969-401c-9b35-d91a066d0305\") " Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.095899 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/175d3bf2-c969-401c-9b35-d91a066d0305-config-volume" (OuterVolumeSpecName: "config-volume") pod "175d3bf2-c969-401c-9b35-d91a066d0305" (UID: "175d3bf2-c969-401c-9b35-d91a066d0305"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.101936 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/175d3bf2-c969-401c-9b35-d91a066d0305-kube-api-access-m7jr4" (OuterVolumeSpecName: "kube-api-access-m7jr4") pod "175d3bf2-c969-401c-9b35-d91a066d0305" (UID: "175d3bf2-c969-401c-9b35-d91a066d0305"). InnerVolumeSpecName "kube-api-access-m7jr4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.105862 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/175d3bf2-c969-401c-9b35-d91a066d0305-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "175d3bf2-c969-401c-9b35-d91a066d0305" (UID: "175d3bf2-c969-401c-9b35-d91a066d0305"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.196647 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c124c67-1e65-4489-a08a-8b580fee23cc-catalog-content\") pod \"redhat-marketplace-n9lpt\" (UID: \"1c124c67-1e65-4489-a08a-8b580fee23cc\") " pod="openshift-marketplace/redhat-marketplace-n9lpt" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.196843 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c124c67-1e65-4489-a08a-8b580fee23cc-utilities\") pod \"redhat-marketplace-n9lpt\" (UID: \"1c124c67-1e65-4489-a08a-8b580fee23cc\") " pod="openshift-marketplace/redhat-marketplace-n9lpt" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.197273 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwngf\" (UniqueName: \"kubernetes.io/projected/1c124c67-1e65-4489-a08a-8b580fee23cc-kube-api-access-lwngf\") pod \"redhat-marketplace-n9lpt\" (UID: \"1c124c67-1e65-4489-a08a-8b580fee23cc\") " pod="openshift-marketplace/redhat-marketplace-n9lpt" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.197357 4946 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/175d3bf2-c969-401c-9b35-d91a066d0305-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.198126 4946 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/175d3bf2-c969-401c-9b35-d91a066d0305-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.198154 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7jr4\" (UniqueName: \"kubernetes.io/projected/175d3bf2-c969-401c-9b35-d91a066d0305-kube-api-access-m7jr4\") on node \"crc\" DevicePath \"\"" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.206305 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.299283 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwngf\" (UniqueName: \"kubernetes.io/projected/1c124c67-1e65-4489-a08a-8b580fee23cc-kube-api-access-lwngf\") pod \"redhat-marketplace-n9lpt\" (UID: \"1c124c67-1e65-4489-a08a-8b580fee23cc\") " pod="openshift-marketplace/redhat-marketplace-n9lpt" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.299340 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c124c67-1e65-4489-a08a-8b580fee23cc-catalog-content\") pod \"redhat-marketplace-n9lpt\" (UID: \"1c124c67-1e65-4489-a08a-8b580fee23cc\") " pod="openshift-marketplace/redhat-marketplace-n9lpt" Nov 28 06:54:51 crc 
kubenswrapper[4946]: I1128 06:54:51.299382 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c124c67-1e65-4489-a08a-8b580fee23cc-utilities\") pod \"redhat-marketplace-n9lpt\" (UID: \"1c124c67-1e65-4489-a08a-8b580fee23cc\") " pod="openshift-marketplace/redhat-marketplace-n9lpt" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.300276 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c124c67-1e65-4489-a08a-8b580fee23cc-utilities\") pod \"redhat-marketplace-n9lpt\" (UID: \"1c124c67-1e65-4489-a08a-8b580fee23cc\") " pod="openshift-marketplace/redhat-marketplace-n9lpt" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.300444 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c124c67-1e65-4489-a08a-8b580fee23cc-catalog-content\") pod \"redhat-marketplace-n9lpt\" (UID: \"1c124c67-1e65-4489-a08a-8b580fee23cc\") " pod="openshift-marketplace/redhat-marketplace-n9lpt" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.325347 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwngf\" (UniqueName: \"kubernetes.io/projected/1c124c67-1e65-4489-a08a-8b580fee23cc-kube-api-access-lwngf\") pod \"redhat-marketplace-n9lpt\" (UID: \"1c124c67-1e65-4489-a08a-8b580fee23cc\") " pod="openshift-marketplace/redhat-marketplace-n9lpt" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.395650 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n9lpt" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.429262 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-xv7dv" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.433814 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-xv7dv" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.460210 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fjn4d"] Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.461433 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fjn4d" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.473690 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fjn4d"] Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.503456 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00f636a1-6272-432d-9b59-6b21b51f0038-catalog-content\") pod \"redhat-marketplace-fjn4d\" (UID: \"00f636a1-6272-432d-9b59-6b21b51f0038\") " pod="openshift-marketplace/redhat-marketplace-fjn4d" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.503530 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z4wx\" (UniqueName: \"kubernetes.io/projected/00f636a1-6272-432d-9b59-6b21b51f0038-kube-api-access-9z4wx\") pod \"redhat-marketplace-fjn4d\" (UID: \"00f636a1-6272-432d-9b59-6b21b51f0038\") " pod="openshift-marketplace/redhat-marketplace-fjn4d" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.503624 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00f636a1-6272-432d-9b59-6b21b51f0038-utilities\") pod \"redhat-marketplace-fjn4d\" (UID: \"00f636a1-6272-432d-9b59-6b21b51f0038\") " pod="openshift-marketplace/redhat-marketplace-fjn4d" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.531858 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-r7ztb" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.532969 4946 patch_prober.go:28] interesting pod/console-f9d7485db-r7ztb container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.533010 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-r7ztb" podUID="79c0d15c-8fc9-4efd-b1ec-739718f313d9" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.533357 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-r7ztb" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.605251 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9z4wx\" (UniqueName: \"kubernetes.io/projected/00f636a1-6272-432d-9b59-6b21b51f0038-kube-api-access-9z4wx\") pod \"redhat-marketplace-fjn4d\" (UID: \"00f636a1-6272-432d-9b59-6b21b51f0038\") " pod="openshift-marketplace/redhat-marketplace-fjn4d" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.605397 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00f636a1-6272-432d-9b59-6b21b51f0038-utilities\") pod \"redhat-marketplace-fjn4d\" (UID: \"00f636a1-6272-432d-9b59-6b21b51f0038\") " pod="openshift-marketplace/redhat-marketplace-fjn4d" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.605520 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/00f636a1-6272-432d-9b59-6b21b51f0038-catalog-content\") pod \"redhat-marketplace-fjn4d\" (UID: \"00f636a1-6272-432d-9b59-6b21b51f0038\") " pod="openshift-marketplace/redhat-marketplace-fjn4d" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.606671 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00f636a1-6272-432d-9b59-6b21b51f0038-catalog-content\") pod \"redhat-marketplace-fjn4d\" (UID: \"00f636a1-6272-432d-9b59-6b21b51f0038\") " pod="openshift-marketplace/redhat-marketplace-fjn4d" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.606891 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00f636a1-6272-432d-9b59-6b21b51f0038-utilities\") pod \"redhat-marketplace-fjn4d\" (UID: \"00f636a1-6272-432d-9b59-6b21b51f0038\") " pod="openshift-marketplace/redhat-marketplace-fjn4d" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.607939 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pw98g" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.608353 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pw98g" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.644566 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9z4wx\" (UniqueName: \"kubernetes.io/projected/00f636a1-6272-432d-9b59-6b21b51f0038-kube-api-access-9z4wx\") pod \"redhat-marketplace-fjn4d\" (UID: \"00f636a1-6272-432d-9b59-6b21b51f0038\") " pod="openshift-marketplace/redhat-marketplace-fjn4d" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.658834 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.659524 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.661752 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.663112 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.674521 4946 patch_prober.go:28] interesting pod/router-default-5444994796-hwk8p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 06:54:51 crc kubenswrapper[4946]: [-]has-synced failed: reason withheld Nov 28 06:54:51 crc kubenswrapper[4946]: [+]process-running ok Nov 28 06:54:51 crc kubenswrapper[4946]: healthz check failed Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.674599 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hwk8p" podUID="63f790e6-5c8d-475c-bd26-a1e52ffb9ed3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.692126 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.697181 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pw98g" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.725742 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c829db8-c04a-4785-b390-39552c029970-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"1c829db8-c04a-4785-b390-39552c029970\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.725891 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c829db8-c04a-4785-b390-39552c029970-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1c829db8-c04a-4785-b390-39552c029970\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.764854 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fjn4d" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.829574 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c829db8-c04a-4785-b390-39552c029970-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1c829db8-c04a-4785-b390-39552c029970\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.829693 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c829db8-c04a-4785-b390-39552c029970-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"1c829db8-c04a-4785-b390-39552c029970\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.830758 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c829db8-c04a-4785-b390-39552c029970-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1c829db8-c04a-4785-b390-39552c029970\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.891152 4946 generic.go:334] "Generic (PLEG): container finished" podID="ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70" containerID="72c5dac86fffaa557bc0f186353ad7bdd68d30c01d893bce63d615c24393dca9" exitCode=0 Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.892371 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8x4xm" event={"ID":"ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70","Type":"ContainerDied","Data":"72c5dac86fffaa557bc0f186353ad7bdd68d30c01d893bce63d615c24393dca9"} Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.893212 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c829db8-c04a-4785-b390-39552c029970-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"1c829db8-c04a-4785-b390-39552c029970\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.896552 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"5a3c555e-1d86-470a-b353-25a29c615ab6","Type":"ContainerStarted","Data":"4cce8bf888b6e11b2d689c6d10e32e7ea8dd72d289276de0dd9a9d83df789934"} Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.898785 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" event={"ID":"2ba6db31-f148-40c3-b5cc-60b2a8e063e4","Type":"ContainerStarted","Data":"2c59ef8dbbcf9ff500b8a57b320169e8616e4599b08b1ce2ea3e3359cf005d93"} Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.898842 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" event={"ID":"2ba6db31-f148-40c3-b5cc-60b2a8e063e4","Type":"ContainerStarted","Data":"6c7296ac0d58a37d63624a8b72cb3d1be4cf2a01d44a7f321c5682408f31d83e"} Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.899077 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.933606 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.933916 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.936666 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.936844 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.937583 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405205-hbxpf" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.939352 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405205-hbxpf" event={"ID":"175d3bf2-c969-401c-9b35-d91a066d0305","Type":"ContainerDied","Data":"b78eb19a99d23365dc3d4d49fee8d24a2072f0cf0d8023796e23d6ec8a3de818"} Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.939607 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b78eb19a99d23365dc3d4d49fee8d24a2072f0cf0d8023796e23d6ec8a3de818" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.945394 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.946444 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.948013 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pw98g" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.967253 4946 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:54:51 crc kubenswrapper[4946]: I1128 06:54:51.971995 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:54:52 crc kubenswrapper[4946]: I1128 06:54:52.007858 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" podStartSLOduration=126.007826209 podStartE2EDuration="2m6.007826209s" podCreationTimestamp="2025-11-28 06:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:54:51.997198519 +0000 UTC m=+146.375263630" watchObservedRunningTime="2025-11-28 06:54:52.007826209 +0000 UTC m=+146.385891320" Nov 28 06:54:52 crc kubenswrapper[4946]: I1128 06:54:52.044867 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Nov 28 06:54:52 crc kubenswrapper[4946]: I1128 06:54:52.076804 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 06:54:52 crc kubenswrapper[4946]: I1128 06:54:52.117833 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:54:52 crc kubenswrapper[4946]: I1128 06:54:52.134972 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:54:52 crc kubenswrapper[4946]: I1128 06:54:52.142769 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:54:52 crc kubenswrapper[4946]: I1128 06:54:52.191106 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n9lpt"] Nov 28 06:54:52 crc kubenswrapper[4946]: I1128 06:54:52.258208 4946 patch_prober.go:28] interesting pod/downloads-7954f5f757-4bfkx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Nov 28 06:54:52 crc kubenswrapper[4946]: I1128 06:54:52.258268 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-4bfkx" podUID="b4d03410-0c4e-4aa4-b760-8d8179d791a5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Nov 28 06:54:52 crc kubenswrapper[4946]: I1128 06:54:52.258215 4946 patch_prober.go:28] interesting pod/downloads-7954f5f757-4bfkx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Nov 28 06:54:52 crc kubenswrapper[4946]: I1128 06:54:52.258502 4946 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4bfkx" podUID="b4d03410-0c4e-4aa4-b760-8d8179d791a5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Nov 28 06:54:52 crc kubenswrapper[4946]: I1128 06:54:52.293618 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-txxk5"] Nov 28 06:54:52 crc kubenswrapper[4946]: I1128 06:54:52.295821 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-txxk5" Nov 28 06:54:52 crc kubenswrapper[4946]: I1128 06:54:52.320035 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-txxk5"] Nov 28 06:54:52 crc kubenswrapper[4946]: I1128 06:54:52.334136 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 28 06:54:52 crc kubenswrapper[4946]: I1128 06:54:52.343022 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4cb05fa-0cdb-499e-9d83-e6a5e87bf144-catalog-content\") pod \"redhat-operators-txxk5\" (UID: \"d4cb05fa-0cdb-499e-9d83-e6a5e87bf144\") " pod="openshift-marketplace/redhat-operators-txxk5" Nov 28 06:54:52 crc kubenswrapper[4946]: I1128 06:54:52.343070 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4cb05fa-0cdb-499e-9d83-e6a5e87bf144-utilities\") pod \"redhat-operators-txxk5\" (UID: \"d4cb05fa-0cdb-499e-9d83-e6a5e87bf144\") " pod="openshift-marketplace/redhat-operators-txxk5" Nov 28 06:54:52 crc kubenswrapper[4946]: I1128 06:54:52.343113 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx69h\" (UniqueName: \"kubernetes.io/projected/d4cb05fa-0cdb-499e-9d83-e6a5e87bf144-kube-api-access-lx69h\") pod \"redhat-operators-txxk5\" (UID: \"d4cb05fa-0cdb-499e-9d83-e6a5e87bf144\") " pod="openshift-marketplace/redhat-operators-txxk5" Nov 28 06:54:52 crc kubenswrapper[4946]: I1128 06:54:52.425141 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fjn4d"] Nov 28 06:54:52 crc kubenswrapper[4946]: W1128 06:54:52.445937 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00f636a1_6272_432d_9b59_6b21b51f0038.slice/crio-35563ddff8e976e00c5d4f621c7b7e852527fa64b4c15689f856728714758da1 WatchSource:0}: Error finding container 35563ddff8e976e00c5d4f621c7b7e852527fa64b4c15689f856728714758da1: Status 404 returned error can't find the container with id 35563ddff8e976e00c5d4f621c7b7e852527fa64b4c15689f856728714758da1 Nov 28 06:54:52 crc kubenswrapper[4946]: I1128 06:54:52.446010 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4cb05fa-0cdb-499e-9d83-e6a5e87bf144-catalog-content\") pod \"redhat-operators-txxk5\" (UID: \"d4cb05fa-0cdb-499e-9d83-e6a5e87bf144\") " pod="openshift-marketplace/redhat-operators-txxk5" Nov 28 06:54:52 crc kubenswrapper[4946]: I1128 06:54:52.446064 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4cb05fa-0cdb-499e-9d83-e6a5e87bf144-utilities\") pod \"redhat-operators-txxk5\" (UID: \"d4cb05fa-0cdb-499e-9d83-e6a5e87bf144\") " pod="openshift-marketplace/redhat-operators-txxk5" Nov 28 06:54:52 crc kubenswrapper[4946]: I1128 06:54:52.446098 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lx69h\" (UniqueName: \"kubernetes.io/projected/d4cb05fa-0cdb-499e-9d83-e6a5e87bf144-kube-api-access-lx69h\") pod \"redhat-operators-txxk5\" (UID: \"d4cb05fa-0cdb-499e-9d83-e6a5e87bf144\") " pod="openshift-marketplace/redhat-operators-txxk5" Nov 
28 06:54:52 crc kubenswrapper[4946]: I1128 06:54:52.448615 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4cb05fa-0cdb-499e-9d83-e6a5e87bf144-catalog-content\") pod \"redhat-operators-txxk5\" (UID: \"d4cb05fa-0cdb-499e-9d83-e6a5e87bf144\") " pod="openshift-marketplace/redhat-operators-txxk5" Nov 28 06:54:52 crc kubenswrapper[4946]: I1128 06:54:52.448767 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4cb05fa-0cdb-499e-9d83-e6a5e87bf144-utilities\") pod \"redhat-operators-txxk5\" (UID: \"d4cb05fa-0cdb-499e-9d83-e6a5e87bf144\") " pod="openshift-marketplace/redhat-operators-txxk5" Nov 28 06:54:52 crc kubenswrapper[4946]: I1128 06:54:52.497761 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx69h\" (UniqueName: \"kubernetes.io/projected/d4cb05fa-0cdb-499e-9d83-e6a5e87bf144-kube-api-access-lx69h\") pod \"redhat-operators-txxk5\" (UID: \"d4cb05fa-0cdb-499e-9d83-e6a5e87bf144\") " pod="openshift-marketplace/redhat-operators-txxk5" Nov 28 06:54:52 crc kubenswrapper[4946]: I1128 06:54:52.578903 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-hwk8p" Nov 28 06:54:52 crc kubenswrapper[4946]: I1128 06:54:52.582925 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-hwk8p" Nov 28 06:54:52 crc kubenswrapper[4946]: I1128 06:54:52.620673 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-txxk5" Nov 28 06:54:52 crc kubenswrapper[4946]: I1128 06:54:52.673335 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6249s"] Nov 28 06:54:52 crc kubenswrapper[4946]: I1128 06:54:52.675158 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6249s" Nov 28 06:54:52 crc kubenswrapper[4946]: I1128 06:54:52.685691 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6249s"] Nov 28 06:54:52 crc kubenswrapper[4946]: I1128 06:54:52.869204 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/766b6f99-3942-4119-a40d-8b8b327e1880-catalog-content\") pod \"redhat-operators-6249s\" (UID: \"766b6f99-3942-4119-a40d-8b8b327e1880\") " pod="openshift-marketplace/redhat-operators-6249s" Nov 28 06:54:52 crc kubenswrapper[4946]: I1128 06:54:52.870418 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2cmz\" (UniqueName: \"kubernetes.io/projected/766b6f99-3942-4119-a40d-8b8b327e1880-kube-api-access-w2cmz\") pod \"redhat-operators-6249s\" (UID: \"766b6f99-3942-4119-a40d-8b8b327e1880\") " pod="openshift-marketplace/redhat-operators-6249s" Nov 28 06:54:52 crc kubenswrapper[4946]: I1128 06:54:52.870556 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/766b6f99-3942-4119-a40d-8b8b327e1880-utilities\") pod \"redhat-operators-6249s\" (UID: \"766b6f99-3942-4119-a40d-8b8b327e1880\") " pod="openshift-marketplace/redhat-operators-6249s" Nov 28 06:54:52 crc kubenswrapper[4946]: I1128 06:54:52.964441 4946 generic.go:334] "Generic (PLEG): container finished" podID="00f636a1-6272-432d-9b59-6b21b51f0038" containerID="53bd432122bd0627016d52028f018d21e3f1047889a140b7ab3e5c7c490b2376" exitCode=0 Nov 28 06:54:52 crc kubenswrapper[4946]: I1128 06:54:52.964652 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fjn4d" event={"ID":"00f636a1-6272-432d-9b59-6b21b51f0038","Type":"ContainerDied","Data":"53bd432122bd0627016d52028f018d21e3f1047889a140b7ab3e5c7c490b2376"} Nov 28 06:54:52 crc kubenswrapper[4946]: I1128 06:54:52.964736 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fjn4d" event={"ID":"00f636a1-6272-432d-9b59-6b21b51f0038","Type":"ContainerStarted","Data":"35563ddff8e976e00c5d4f621c7b7e852527fa64b4c15689f856728714758da1"} Nov 28 06:54:52 crc kubenswrapper[4946]: I1128 06:54:52.971725 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/766b6f99-3942-4119-a40d-8b8b327e1880-catalog-content\") pod \"redhat-operators-6249s\" (UID: \"766b6f99-3942-4119-a40d-8b8b327e1880\") " pod="openshift-marketplace/redhat-operators-6249s" Nov 28 06:54:52 crc kubenswrapper[4946]: I1128 06:54:52.971827 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2cmz\" (UniqueName: \"kubernetes.io/projected/766b6f99-3942-4119-a40d-8b8b327e1880-kube-api-access-w2cmz\") pod \"redhat-operators-6249s\" (UID: \"766b6f99-3942-4119-a40d-8b8b327e1880\") " pod="openshift-marketplace/redhat-operators-6249s" Nov 28 06:54:52 crc kubenswrapper[4946]: I1128 06:54:52.972327 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/766b6f99-3942-4119-a40d-8b8b327e1880-utilities\") pod \"redhat-operators-6249s\" (UID: \"766b6f99-3942-4119-a40d-8b8b327e1880\") " pod="openshift-marketplace/redhat-operators-6249s" Nov 28 
06:54:52 crc kubenswrapper[4946]: I1128 06:54:52.972257 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/766b6f99-3942-4119-a40d-8b8b327e1880-catalog-content\") pod \"redhat-operators-6249s\" (UID: \"766b6f99-3942-4119-a40d-8b8b327e1880\") " pod="openshift-marketplace/redhat-operators-6249s" Nov 28 06:54:52 crc kubenswrapper[4946]: I1128 06:54:52.972593 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/766b6f99-3942-4119-a40d-8b8b327e1880-utilities\") pod \"redhat-operators-6249s\" (UID: \"766b6f99-3942-4119-a40d-8b8b327e1880\") " pod="openshift-marketplace/redhat-operators-6249s" Nov 28 06:54:52 crc kubenswrapper[4946]: I1128 06:54:52.981664 4946 generic.go:334] "Generic (PLEG): container finished" podID="5a3c555e-1d86-470a-b353-25a29c615ab6" containerID="608566ab1e38535322aed14f643fc191796cc82f65d040b66c04c92c4206209c" exitCode=0 Nov 28 06:54:52 crc kubenswrapper[4946]: I1128 06:54:52.981989 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"5a3c555e-1d86-470a-b353-25a29c615ab6","Type":"ContainerDied","Data":"608566ab1e38535322aed14f643fc191796cc82f65d040b66c04c92c4206209c"} Nov 28 06:54:52 crc kubenswrapper[4946]: I1128 06:54:52.988629 4946 generic.go:334] "Generic (PLEG): container finished" podID="1c124c67-1e65-4489-a08a-8b580fee23cc" containerID="3db9c17b7e2eb91543d8f2e6337cec61e832c6e03f99d7c7c25301d82ac922ea" exitCode=0 Nov 28 06:54:52 crc kubenswrapper[4946]: I1128 06:54:52.992532 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n9lpt" event={"ID":"1c124c67-1e65-4489-a08a-8b580fee23cc","Type":"ContainerDied","Data":"3db9c17b7e2eb91543d8f2e6337cec61e832c6e03f99d7c7c25301d82ac922ea"} Nov 28 06:54:52 crc kubenswrapper[4946]: I1128 06:54:52.992939 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n9lpt" event={"ID":"1c124c67-1e65-4489-a08a-8b580fee23cc","Type":"ContainerStarted","Data":"2395dccab10e9f4e47c6c5e55380ebe56f116e268ec0e3bcb4c75389d44ec0b5"} Nov 28 06:54:52 crc kubenswrapper[4946]: I1128 06:54:52.994172 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-hwk8p" Nov 28 06:54:52 crc kubenswrapper[4946]: I1128 06:54:52.997621 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2cmz\" (UniqueName: \"kubernetes.io/projected/766b6f99-3942-4119-a40d-8b8b327e1880-kube-api-access-w2cmz\") pod \"redhat-operators-6249s\" (UID: \"766b6f99-3942-4119-a40d-8b8b327e1880\") " pod="openshift-marketplace/redhat-operators-6249s" Nov 28 06:54:53 crc kubenswrapper[4946]: I1128 06:54:53.047750 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6249s" Nov 28 06:54:53 crc kubenswrapper[4946]: I1128 06:54:53.175217 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 28 06:54:53 crc kubenswrapper[4946]: W1128 06:54:53.243072 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod1c829db8_c04a_4785_b390_39552c029970.slice/crio-e7992876808443e41746c0ebc719898cee2d9f34ed8a4c804a8d4707568b6f6e WatchSource:0}: Error finding container e7992876808443e41746c0ebc719898cee2d9f34ed8a4c804a8d4707568b6f6e: Status 404 returned error can't find the container with id e7992876808443e41746c0ebc719898cee2d9f34ed8a4c804a8d4707568b6f6e Nov 28 06:54:53 crc kubenswrapper[4946]: I1128 06:54:53.286952 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-txxk5"] Nov 28 06:54:53 crc kubenswrapper[4946]: I1128 06:54:53.782894 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6249s"] Nov 28 06:54:54 crc kubenswrapper[4946]: I1128 06:54:54.058208 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"fd2cbeb375dc7176c1668105c6a7e790f859ff10214df82898d258cf1f3bf784"} Nov 28 06:54:54 crc kubenswrapper[4946]: I1128 06:54:54.058601 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"fd456769985eb841e3b1c47924690329941e8e7811e98dde4273528827dd05f6"} Nov 28 06:54:54 crc kubenswrapper[4946]: I1128 06:54:54.062552 4946 generic.go:334] "Generic (PLEG): container finished" podID="d4cb05fa-0cdb-499e-9d83-e6a5e87bf144" containerID="c491b418a616aed07ffb6a301897c4e49f739e28430ba6aa7675b39ae2af5a70" exitCode=0 Nov 28 06:54:54 crc kubenswrapper[4946]: I1128 06:54:54.063050 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-txxk5" event={"ID":"d4cb05fa-0cdb-499e-9d83-e6a5e87bf144","Type":"ContainerDied","Data":"c491b418a616aed07ffb6a301897c4e49f739e28430ba6aa7675b39ae2af5a70"} Nov 28 06:54:54 crc kubenswrapper[4946]: I1128 06:54:54.063069 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-txxk5" event={"ID":"d4cb05fa-0cdb-499e-9d83-e6a5e87bf144","Type":"ContainerStarted","Data":"259265bdd043bf7fb2fa4607d9b1475c934e01430a4e58c3e2dd7dd2612874d0"} Nov 28 06:54:54 crc kubenswrapper[4946]: I1128 06:54:54.079477 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"02816f03ca2211f3398e65f46221b029146e9577f265daa2618e929aa6922480"} Nov 28 06:54:54 crc kubenswrapper[4946]: I1128 06:54:54.079528 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"5d1dfa0907285f244ac6f3885b662b5c99820d08e507703ddcd3d791c0ee01b7"} Nov 28 06:54:54 crc kubenswrapper[4946]: I1128 06:54:54.080734 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 
06:54:54 crc kubenswrapper[4946]: I1128 06:54:54.127795 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"1c829db8-c04a-4785-b390-39552c029970","Type":"ContainerStarted","Data":"e7992876808443e41746c0ebc719898cee2d9f34ed8a4c804a8d4707568b6f6e"} Nov 28 06:54:54 crc kubenswrapper[4946]: I1128 06:54:54.153861 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"a25c51d927dc8817bd1a61c4df51dae69fa3d4792cd089c8f172792ba2eddabf"} Nov 28 06:54:54 crc kubenswrapper[4946]: I1128 06:54:54.153938 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"1f877db9a72ccc16c8b058976e2d9818a007c3061107065e3e453a917ca552ac"} Nov 28 06:54:54 crc kubenswrapper[4946]: I1128 06:54:54.159272 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6249s" event={"ID":"766b6f99-3942-4119-a40d-8b8b327e1880","Type":"ContainerStarted","Data":"31069b5fe91781c20e49addaa144a1c41fb5e24bcc19e7c2a26a6b772e27beff"} Nov 28 06:54:54 crc kubenswrapper[4946]: I1128 06:54:54.591024 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 06:54:54 crc kubenswrapper[4946]: I1128 06:54:54.731306 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 06:54:54 crc kubenswrapper[4946]: I1128 06:54:54.731372 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 06:54:54 crc kubenswrapper[4946]: I1128 06:54:54.733879 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a3c555e-1d86-470a-b353-25a29c615ab6-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5a3c555e-1d86-470a-b353-25a29c615ab6" (UID: "5a3c555e-1d86-470a-b353-25a29c615ab6"). InnerVolumeSpecName "kubelet-dir". 
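[editor's note] The two machine-config-daemon entries just above show the kubelet's HTTP liveness probe being refused on 127.0.0.1:8798; the identical failure recurs at 06:55:24 further down, consistent with a periodic probe. Below is a minimal Go sketch of the equivalent check, assuming the endpoint taken from the log and a 1-second timeout (the kubelet's default timeoutSeconds); it is illustrative only, not the kubelet's actual prober code.

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Same endpoint the failing probe hits in the log above; the 1s timeout
	// mirrors the kubelet default and is an assumption about this pod's spec.
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get("http://127.0.0.1:8798/health")
	if err != nil {
		// With nothing listening, this prints "... connect: connection refused",
		// matching probeResult="failure" in the entries above.
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	// An HTTPGet probe counts any status in the 200-399 range as success.
	fmt.Println("probe status:", resp.Status)
}

[end of note]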
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 06:54:54 crc kubenswrapper[4946]: I1128 06:54:54.734278 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5a3c555e-1d86-470a-b353-25a29c615ab6-kubelet-dir\") pod \"5a3c555e-1d86-470a-b353-25a29c615ab6\" (UID: \"5a3c555e-1d86-470a-b353-25a29c615ab6\") " Nov 28 06:54:54 crc kubenswrapper[4946]: I1128 06:54:54.734352 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a3c555e-1d86-470a-b353-25a29c615ab6-kube-api-access\") pod \"5a3c555e-1d86-470a-b353-25a29c615ab6\" (UID: \"5a3c555e-1d86-470a-b353-25a29c615ab6\") " Nov 28 06:54:54 crc kubenswrapper[4946]: I1128 06:54:54.737520 4946 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5a3c555e-1d86-470a-b353-25a29c615ab6-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 28 06:54:54 crc kubenswrapper[4946]: I1128 06:54:54.756915 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a3c555e-1d86-470a-b353-25a29c615ab6-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5a3c555e-1d86-470a-b353-25a29c615ab6" (UID: "5a3c555e-1d86-470a-b353-25a29c615ab6"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:54:54 crc kubenswrapper[4946]: I1128 06:54:54.838593 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a3c555e-1d86-470a-b353-25a29c615ab6-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 28 06:54:55 crc kubenswrapper[4946]: I1128 06:54:55.177442 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 06:54:55 crc kubenswrapper[4946]: I1128 06:54:55.184640 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"5a3c555e-1d86-470a-b353-25a29c615ab6","Type":"ContainerDied","Data":"4cce8bf888b6e11b2d689c6d10e32e7ea8dd72d289276de0dd9a9d83df789934"} Nov 28 06:54:55 crc kubenswrapper[4946]: I1128 06:54:55.184712 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4cce8bf888b6e11b2d689c6d10e32e7ea8dd72d289276de0dd9a9d83df789934" Nov 28 06:54:55 crc kubenswrapper[4946]: I1128 06:54:55.189025 4946 generic.go:334] "Generic (PLEG): container finished" podID="766b6f99-3942-4119-a40d-8b8b327e1880" containerID="ac1c13b6d6e8915d63f7159a37fecca5e57caf98fba55acf4303bd5948c910fc" exitCode=0 Nov 28 06:54:55 crc kubenswrapper[4946]: I1128 06:54:55.189078 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6249s" event={"ID":"766b6f99-3942-4119-a40d-8b8b327e1880","Type":"ContainerDied","Data":"ac1c13b6d6e8915d63f7159a37fecca5e57caf98fba55acf4303bd5948c910fc"} Nov 28 06:54:55 crc kubenswrapper[4946]: I1128 06:54:55.197344 4946 generic.go:334] "Generic (PLEG): container finished" podID="1c829db8-c04a-4785-b390-39552c029970" containerID="28f01845e50216ebb11a214e111706a1043f6add2890fb27d996e453b9a45ccf" exitCode=0 Nov 28 06:54:55 crc kubenswrapper[4946]: I1128 06:54:55.197436 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"1c829db8-c04a-4785-b390-39552c029970","Type":"ContainerDied","Data":"28f01845e50216ebb11a214e111706a1043f6add2890fb27d996e453b9a45ccf"} Nov 28 06:54:56 crc kubenswrapper[4946]: I1128 06:54:56.688749 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 06:54:56 crc kubenswrapper[4946]: I1128 06:54:56.709989 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c829db8-c04a-4785-b390-39552c029970-kube-api-access\") pod \"1c829db8-c04a-4785-b390-39552c029970\" (UID: \"1c829db8-c04a-4785-b390-39552c029970\") " Nov 28 06:54:56 crc kubenswrapper[4946]: I1128 06:54:56.710732 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c829db8-c04a-4785-b390-39552c029970-kubelet-dir\") pod \"1c829db8-c04a-4785-b390-39552c029970\" (UID: \"1c829db8-c04a-4785-b390-39552c029970\") " Nov 28 06:54:56 crc kubenswrapper[4946]: I1128 06:54:56.711681 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c829db8-c04a-4785-b390-39552c029970-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1c829db8-c04a-4785-b390-39552c029970" (UID: "1c829db8-c04a-4785-b390-39552c029970"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 06:54:56 crc kubenswrapper[4946]: I1128 06:54:56.736220 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c829db8-c04a-4785-b390-39552c029970-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1c829db8-c04a-4785-b390-39552c029970" (UID: "1c829db8-c04a-4785-b390-39552c029970"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:54:56 crc kubenswrapper[4946]: I1128 06:54:56.819623 4946 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c829db8-c04a-4785-b390-39552c029970-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 28 06:54:56 crc kubenswrapper[4946]: I1128 06:54:56.819669 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c829db8-c04a-4785-b390-39552c029970-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 28 06:54:57 crc kubenswrapper[4946]: I1128 06:54:57.249806 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"1c829db8-c04a-4785-b390-39552c029970","Type":"ContainerDied","Data":"e7992876808443e41746c0ebc719898cee2d9f34ed8a4c804a8d4707568b6f6e"} Nov 28 06:54:57 crc kubenswrapper[4946]: I1128 06:54:57.249858 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7992876808443e41746c0ebc719898cee2d9f34ed8a4c804a8d4707568b6f6e" Nov 28 06:54:57 crc kubenswrapper[4946]: I1128 06:54:57.249877 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 06:54:58 crc kubenswrapper[4946]: I1128 06:54:58.097216 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-qvr67" Nov 28 06:55:01 crc kubenswrapper[4946]: I1128 06:55:01.672797 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-r7ztb" Nov 28 06:55:01 crc kubenswrapper[4946]: I1128 06:55:01.677800 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-r7ztb" Nov 28 06:55:02 crc kubenswrapper[4946]: I1128 06:55:02.274684 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-4bfkx" Nov 28 06:55:08 crc kubenswrapper[4946]: I1128 06:55:08.776233 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4e6983b1-6887-4d13-8f9a-f261a745115f-metrics-certs\") pod \"network-metrics-daemon-gkg79\" (UID: \"4e6983b1-6887-4d13-8f9a-f261a745115f\") " pod="openshift-multus/network-metrics-daemon-gkg79" Nov 28 06:55:08 crc kubenswrapper[4946]: I1128 06:55:08.785202 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4e6983b1-6887-4d13-8f9a-f261a745115f-metrics-certs\") pod \"network-metrics-daemon-gkg79\" (UID: \"4e6983b1-6887-4d13-8f9a-f261a745115f\") " pod="openshift-multus/network-metrics-daemon-gkg79" Nov 28 06:55:08 crc kubenswrapper[4946]: I1128 06:55:08.818910 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gkg79" Nov 28 06:55:10 crc kubenswrapper[4946]: I1128 06:55:10.627788 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" Nov 28 06:55:22 crc kubenswrapper[4946]: E1128 06:55:22.485779 4946 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 28 06:55:22 crc kubenswrapper[4946]: E1128 06:55:22.487645 4946 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tqd8g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-8x4xm_openshift-marketplace(ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 28 06:55:22 crc kubenswrapper[4946]: E1128 06:55:22.489105 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-8x4xm" podUID="ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70" Nov 28 06:55:22 crc kubenswrapper[4946]: E1128 06:55:22.580726 4946 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 28 06:55:22 crc kubenswrapper[4946]: E1128 06:55:22.580946 4946 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog 
--cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pj864,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-7xrv4_openshift-marketplace(1faa4eda-7193-4eec-b3f6-391ddc88b498): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 28 06:55:22 crc kubenswrapper[4946]: E1128 06:55:22.582371 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-7xrv4" podUID="1faa4eda-7193-4eec-b3f6-391ddc88b498" Nov 28 06:55:22 crc kubenswrapper[4946]: I1128 06:55:22.971847 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6q9fb" Nov 28 06:55:23 crc kubenswrapper[4946]: E1128 06:55:23.593709 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8x4xm" podUID="ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70" Nov 28 06:55:23 crc kubenswrapper[4946]: E1128 06:55:23.593949 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-7xrv4" podUID="1faa4eda-7193-4eec-b3f6-391ddc88b498" Nov 28 06:55:23 crc kubenswrapper[4946]: E1128 06:55:23.680905 4946 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 28 06:55:23 crc kubenswrapper[4946]: E1128 06:55:23.681100 4946 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9z4wx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-fjn4d_openshift-marketplace(00f636a1-6272-432d-9b59-6b21b51f0038): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 28 06:55:23 crc kubenswrapper[4946]: E1128 06:55:23.683095 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-fjn4d" podUID="00f636a1-6272-432d-9b59-6b21b51f0038" Nov 28 06:55:24 crc kubenswrapper[4946]: I1128 06:55:24.730864 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 06:55:24 crc kubenswrapper[4946]: I1128 06:55:24.730931 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 06:55:26 crc kubenswrapper[4946]: E1128 06:55:26.996882 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fjn4d" podUID="00f636a1-6272-432d-9b59-6b21b51f0038" Nov 28 06:55:27 crc kubenswrapper[4946]: E1128 06:55:27.070317 4946 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context 
canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 28 06:55:27 crc kubenswrapper[4946]: E1128 06:55:27.070591 4946 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lx69h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-txxk5_openshift-marketplace(d4cb05fa-0cdb-499e-9d83-e6a5e87bf144): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 28 06:55:27 crc kubenswrapper[4946]: E1128 06:55:27.072557 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-txxk5" podUID="d4cb05fa-0cdb-499e-9d83-e6a5e87bf144" Nov 28 06:55:28 crc kubenswrapper[4946]: E1128 06:55:28.538180 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-txxk5" podUID="d4cb05fa-0cdb-499e-9d83-e6a5e87bf144" Nov 28 06:55:28 crc kubenswrapper[4946]: E1128 06:55:28.566242 4946 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 28 06:55:28 crc kubenswrapper[4946]: E1128 06:55:28.566623 4946 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w2cmz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-6249s_openshift-marketplace(766b6f99-3942-4119-a40d-8b8b327e1880): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 28 06:55:28 crc kubenswrapper[4946]: E1128 06:55:28.568302 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-6249s" podUID="766b6f99-3942-4119-a40d-8b8b327e1880" Nov 28 06:55:28 crc kubenswrapper[4946]: E1128 06:55:28.646083 4946 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 28 06:55:28 crc kubenswrapper[4946]: E1128 06:55:28.646313 4946 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lwngf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-n9lpt_openshift-marketplace(1c124c67-1e65-4489-a08a-8b580fee23cc): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 28 06:55:28 crc kubenswrapper[4946]: E1128 06:55:28.648886 4946 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 28 06:55:28 crc kubenswrapper[4946]: E1128 06:55:28.649137 4946 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qp8vk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-nxxk8_openshift-marketplace(09b629b2-f3d1-4856-98f2-5761c99f457a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 28 06:55:28 crc kubenswrapper[4946]: E1128 06:55:28.649976 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-n9lpt" podUID="1c124c67-1e65-4489-a08a-8b580fee23cc" Nov 28 06:55:28 crc kubenswrapper[4946]: E1128 06:55:28.650430 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-nxxk8" podUID="09b629b2-f3d1-4856-98f2-5761c99f457a" Nov 28 06:55:28 crc kubenswrapper[4946]: E1128 06:55:28.658665 4946 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 28 06:55:28 crc kubenswrapper[4946]: E1128 06:55:28.658861 4946 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w54pj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-f7mgl_openshift-marketplace(b169ce71-a297-432f-a041-f7b544659574): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 28 06:55:28 crc kubenswrapper[4946]: E1128 06:55:28.660093 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying 
config: context canceled\"" pod="openshift-marketplace/community-operators-f7mgl" podUID="b169ce71-a297-432f-a041-f7b544659574" Nov 28 06:55:28 crc kubenswrapper[4946]: I1128 06:55:28.765702 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-gkg79"] Nov 28 06:55:29 crc kubenswrapper[4946]: I1128 06:55:29.509721 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-gkg79" event={"ID":"4e6983b1-6887-4d13-8f9a-f261a745115f","Type":"ContainerStarted","Data":"9db72b83a47d2a1476de56185c154260f6b289ed1b712a4d17893c62cab2f763"} Nov 28 06:55:29 crc kubenswrapper[4946]: I1128 06:55:29.510228 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-gkg79" event={"ID":"4e6983b1-6887-4d13-8f9a-f261a745115f","Type":"ContainerStarted","Data":"1291697e7e5332e69f57696fd25bee61dcae8ce0f8f475c71aa8e9dedb9c5ed4"} Nov 28 06:55:29 crc kubenswrapper[4946]: I1128 06:55:29.510299 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-gkg79" event={"ID":"4e6983b1-6887-4d13-8f9a-f261a745115f","Type":"ContainerStarted","Data":"92110c07ecf519224fc8691f0d4e0844bed20cbf2e660a392dbc30180a284bd1"} Nov 28 06:55:29 crc kubenswrapper[4946]: E1128 06:55:29.511885 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-nxxk8" podUID="09b629b2-f3d1-4856-98f2-5761c99f457a" Nov 28 06:55:29 crc kubenswrapper[4946]: E1128 06:55:29.511887 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-n9lpt" podUID="1c124c67-1e65-4489-a08a-8b580fee23cc" Nov 28 06:55:29 crc kubenswrapper[4946]: E1128 06:55:29.511965 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-f7mgl" podUID="b169ce71-a297-432f-a041-f7b544659574" Nov 28 06:55:29 crc kubenswrapper[4946]: E1128 06:55:29.512779 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-6249s" podUID="766b6f99-3942-4119-a40d-8b8b327e1880" Nov 28 06:55:29 crc kubenswrapper[4946]: I1128 06:55:29.569529 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-gkg79" podStartSLOduration=164.569503719 podStartE2EDuration="2m44.569503719s" podCreationTimestamp="2025-11-28 06:52:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:55:29.564657325 +0000 UTC m=+183.942722446" watchObservedRunningTime="2025-11-28 06:55:29.569503719 +0000 UTC m=+183.947568850" Nov 28 06:55:32 crc kubenswrapper[4946]: I1128 06:55:32.123078 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
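[editor's note] The 06:55:22-06:55:29 entries above show the kubelet's image-pull retry cycle for the marketplace catalog pods: each "PullImage from image service failed" (an rpc Canceled returned by the CRI runtime) produces an "Unhandled Error"/ErrImagePull pair, after which the pod lands in ImagePullBackOff; by default the kubelet's pull backoff starts at 10s and doubles up to a 5m cap. Below is a small Go sketch that tallies these failures from a kubelet journal read on stdin; the regular expressions are assumptions fitted to the exact line format above, not a general kubelet log grammar.

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// "PullImage from image service failed" lines carry the image; the greedy
	// .* makes the capture bind to the final image="..." field on the line.
	imageRe := regexp.MustCompile(`"PullImage from image service failed".*image="([^"]+)"`)
	// "Error syncing pod, skipping" lines carry the reason and the pod.
	podRe := regexp.MustCompile(`"Error syncing pod, skipping".*(ErrImagePull|ImagePullBackOff).*pod="([^"]+)"`)

	pullFailures := map[string]int{}
	syncFailures := map[string]int{}

	sc := bufio.NewScanner(os.Stdin)
	// Some kubelet entries (the &Container{...} dumps) are very long; raising
	// the scanner's token limit avoids bufio.ErrTooLong on such lines.
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	for sc.Scan() {
		line := sc.Text()
		if m := imageRe.FindStringSubmatch(line); m != nil {
			pullFailures[m[1]]++
		}
		if m := podRe.FindStringSubmatch(line); m != nil {
			syncFailures[m[2]+" ("+m[1]+")"]++
		}
	}
	for img, n := range pullFailures {
		fmt.Printf("%3d pull failures  %s\n", n, img)
	}
	for pod, n := range syncFailures {
		fmt.Printf("%3d sync errors    %s\n", n, pod)
	}
}

Run with something like `journalctl -u kubelet --no-pager | go run scan.go` (file name hypothetical). [end of note]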
pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:55:32 crc kubenswrapper[4946]: I1128 06:55:32.498510 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 28 06:55:32 crc kubenswrapper[4946]: E1128 06:55:32.499164 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a3c555e-1d86-470a-b353-25a29c615ab6" containerName="pruner" Nov 28 06:55:32 crc kubenswrapper[4946]: I1128 06:55:32.499183 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a3c555e-1d86-470a-b353-25a29c615ab6" containerName="pruner" Nov 28 06:55:32 crc kubenswrapper[4946]: E1128 06:55:32.499195 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c829db8-c04a-4785-b390-39552c029970" containerName="pruner" Nov 28 06:55:32 crc kubenswrapper[4946]: I1128 06:55:32.499202 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c829db8-c04a-4785-b390-39552c029970" containerName="pruner" Nov 28 06:55:32 crc kubenswrapper[4946]: I1128 06:55:32.499326 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a3c555e-1d86-470a-b353-25a29c615ab6" containerName="pruner" Nov 28 06:55:32 crc kubenswrapper[4946]: I1128 06:55:32.499342 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c829db8-c04a-4785-b390-39552c029970" containerName="pruner" Nov 28 06:55:32 crc kubenswrapper[4946]: I1128 06:55:32.499810 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 06:55:32 crc kubenswrapper[4946]: I1128 06:55:32.503193 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 28 06:55:32 crc kubenswrapper[4946]: I1128 06:55:32.503409 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 28 06:55:32 crc kubenswrapper[4946]: I1128 06:55:32.511328 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 28 06:55:32 crc kubenswrapper[4946]: I1128 06:55:32.602628 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a69369b0-d780-449f-8dab-a2b481dff9d7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a69369b0-d780-449f-8dab-a2b481dff9d7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 06:55:32 crc kubenswrapper[4946]: I1128 06:55:32.602697 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a69369b0-d780-449f-8dab-a2b481dff9d7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a69369b0-d780-449f-8dab-a2b481dff9d7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 06:55:32 crc kubenswrapper[4946]: I1128 06:55:32.704390 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a69369b0-d780-449f-8dab-a2b481dff9d7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a69369b0-d780-449f-8dab-a2b481dff9d7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 06:55:32 crc kubenswrapper[4946]: I1128 06:55:32.704517 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/a69369b0-d780-449f-8dab-a2b481dff9d7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a69369b0-d780-449f-8dab-a2b481dff9d7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 06:55:32 crc kubenswrapper[4946]: I1128 06:55:32.704555 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a69369b0-d780-449f-8dab-a2b481dff9d7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a69369b0-d780-449f-8dab-a2b481dff9d7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 06:55:32 crc kubenswrapper[4946]: I1128 06:55:32.728796 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a69369b0-d780-449f-8dab-a2b481dff9d7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a69369b0-d780-449f-8dab-a2b481dff9d7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 06:55:32 crc kubenswrapper[4946]: I1128 06:55:32.873092 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 06:55:33 crc kubenswrapper[4946]: I1128 06:55:33.083702 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 28 06:55:33 crc kubenswrapper[4946]: W1128 06:55:33.093061 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poda69369b0_d780_449f_8dab_a2b481dff9d7.slice/crio-b0182c3187295a13a415fc569b4ff1c4b5f61133b35dbd861f0d439e8084c545 WatchSource:0}: Error finding container b0182c3187295a13a415fc569b4ff1c4b5f61133b35dbd861f0d439e8084c545: Status 404 returned error can't find the container with id b0182c3187295a13a415fc569b4ff1c4b5f61133b35dbd861f0d439e8084c545 Nov 28 06:55:33 crc kubenswrapper[4946]: I1128 06:55:33.569340 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a69369b0-d780-449f-8dab-a2b481dff9d7","Type":"ContainerStarted","Data":"b0182c3187295a13a415fc569b4ff1c4b5f61133b35dbd861f0d439e8084c545"} Nov 28 06:55:34 crc kubenswrapper[4946]: I1128 06:55:34.577524 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a69369b0-d780-449f-8dab-a2b481dff9d7","Type":"ContainerStarted","Data":"e50a71b8e46b7d770dcca1d51df08b8ef46039f2fba9337ead7e48d17122bd51"} Nov 28 06:55:34 crc kubenswrapper[4946]: I1128 06:55:34.599212 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.599192313 podStartE2EDuration="2.599192313s" podCreationTimestamp="2025-11-28 06:55:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:55:34.596663439 +0000 UTC m=+188.974728580" watchObservedRunningTime="2025-11-28 06:55:34.599192313 +0000 UTC m=+188.977257424" Nov 28 06:55:35 crc kubenswrapper[4946]: I1128 06:55:35.585364 4946 generic.go:334] "Generic (PLEG): container finished" podID="1faa4eda-7193-4eec-b3f6-391ddc88b498" containerID="6baf56d2b654f021d64a764ade285191cee715c50dd7f0d2f559291423800432" exitCode=0 Nov 28 06:55:35 crc kubenswrapper[4946]: I1128 06:55:35.585479 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7xrv4" 
event={"ID":"1faa4eda-7193-4eec-b3f6-391ddc88b498","Type":"ContainerDied","Data":"6baf56d2b654f021d64a764ade285191cee715c50dd7f0d2f559291423800432"} Nov 28 06:55:35 crc kubenswrapper[4946]: I1128 06:55:35.588146 4946 generic.go:334] "Generic (PLEG): container finished" podID="a69369b0-d780-449f-8dab-a2b481dff9d7" containerID="e50a71b8e46b7d770dcca1d51df08b8ef46039f2fba9337ead7e48d17122bd51" exitCode=0 Nov 28 06:55:35 crc kubenswrapper[4946]: I1128 06:55:35.588202 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a69369b0-d780-449f-8dab-a2b481dff9d7","Type":"ContainerDied","Data":"e50a71b8e46b7d770dcca1d51df08b8ef46039f2fba9337ead7e48d17122bd51"} Nov 28 06:55:36 crc kubenswrapper[4946]: I1128 06:55:36.612505 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7xrv4" event={"ID":"1faa4eda-7193-4eec-b3f6-391ddc88b498","Type":"ContainerStarted","Data":"48410f674d59fe5f38032625b8645643e9161c2bf089c7480171bb7ecb835e92"} Nov 28 06:55:36 crc kubenswrapper[4946]: I1128 06:55:36.635346 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7xrv4" podStartSLOduration=2.402250817 podStartE2EDuration="47.635327906s" podCreationTimestamp="2025-11-28 06:54:49 +0000 UTC" firstStartedPulling="2025-11-28 06:54:50.790498508 +0000 UTC m=+145.168563619" lastFinishedPulling="2025-11-28 06:55:36.023575587 +0000 UTC m=+190.401640708" observedRunningTime="2025-11-28 06:55:36.634368082 +0000 UTC m=+191.012433213" watchObservedRunningTime="2025-11-28 06:55:36.635327906 +0000 UTC m=+191.013393017" Nov 28 06:55:36 crc kubenswrapper[4946]: I1128 06:55:36.857951 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 06:55:36 crc kubenswrapper[4946]: I1128 06:55:36.976170 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a69369b0-d780-449f-8dab-a2b481dff9d7-kubelet-dir\") pod \"a69369b0-d780-449f-8dab-a2b481dff9d7\" (UID: \"a69369b0-d780-449f-8dab-a2b481dff9d7\") " Nov 28 06:55:36 crc kubenswrapper[4946]: I1128 06:55:36.976292 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a69369b0-d780-449f-8dab-a2b481dff9d7-kube-api-access\") pod \"a69369b0-d780-449f-8dab-a2b481dff9d7\" (UID: \"a69369b0-d780-449f-8dab-a2b481dff9d7\") " Nov 28 06:55:36 crc kubenswrapper[4946]: I1128 06:55:36.976319 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a69369b0-d780-449f-8dab-a2b481dff9d7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a69369b0-d780-449f-8dab-a2b481dff9d7" (UID: "a69369b0-d780-449f-8dab-a2b481dff9d7"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 06:55:36 crc kubenswrapper[4946]: I1128 06:55:36.976710 4946 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a69369b0-d780-449f-8dab-a2b481dff9d7-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 28 06:55:36 crc kubenswrapper[4946]: I1128 06:55:36.985729 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a69369b0-d780-449f-8dab-a2b481dff9d7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a69369b0-d780-449f-8dab-a2b481dff9d7" (UID: "a69369b0-d780-449f-8dab-a2b481dff9d7"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:55:37 crc kubenswrapper[4946]: I1128 06:55:37.078284 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a69369b0-d780-449f-8dab-a2b481dff9d7-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 28 06:55:37 crc kubenswrapper[4946]: I1128 06:55:37.619222 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a69369b0-d780-449f-8dab-a2b481dff9d7","Type":"ContainerDied","Data":"b0182c3187295a13a415fc569b4ff1c4b5f61133b35dbd861f0d439e8084c545"} Nov 28 06:55:37 crc kubenswrapper[4946]: I1128 06:55:37.619599 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0182c3187295a13a415fc569b4ff1c4b5f61133b35dbd861f0d439e8084c545" Nov 28 06:55:37 crc kubenswrapper[4946]: I1128 06:55:37.619264 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 06:55:38 crc kubenswrapper[4946]: I1128 06:55:38.627180 4946 generic.go:334] "Generic (PLEG): container finished" podID="ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70" containerID="4a902b83dcef37685cd522658fec281155ad2d76e06e5c0ec296121e0a42a457" exitCode=0 Nov 28 06:55:38 crc kubenswrapper[4946]: I1128 06:55:38.627251 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8x4xm" event={"ID":"ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70","Type":"ContainerDied","Data":"4a902b83dcef37685cd522658fec281155ad2d76e06e5c0ec296121e0a42a457"} Nov 28 06:55:39 crc kubenswrapper[4946]: I1128 06:55:39.635028 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8x4xm" event={"ID":"ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70","Type":"ContainerStarted","Data":"4ae1abc3f141bc1e2cf5292b99d172694f156148a37292cb41b1716a82ed8697"} Nov 28 06:55:39 crc kubenswrapper[4946]: I1128 06:55:39.644172 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7xrv4" Nov 28 06:55:39 crc kubenswrapper[4946]: I1128 06:55:39.644272 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7xrv4" Nov 28 06:55:39 crc kubenswrapper[4946]: I1128 06:55:39.657748 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8x4xm" podStartSLOduration=3.160815047 podStartE2EDuration="50.657737629s" podCreationTimestamp="2025-11-28 06:54:49 +0000 UTC" firstStartedPulling="2025-11-28 06:54:51.89652081 +0000 UTC m=+146.274585921" lastFinishedPulling="2025-11-28 06:55:39.393443392 +0000 UTC m=+193.771508503" observedRunningTime="2025-11-28 
06:55:39.654927078 +0000 UTC m=+194.032992199" watchObservedRunningTime="2025-11-28 06:55:39.657737629 +0000 UTC m=+194.035802740" Nov 28 06:55:39 crc kubenswrapper[4946]: I1128 06:55:39.701720 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7xrv4" Nov 28 06:55:39 crc kubenswrapper[4946]: I1128 06:55:39.895961 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 28 06:55:39 crc kubenswrapper[4946]: E1128 06:55:39.896556 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a69369b0-d780-449f-8dab-a2b481dff9d7" containerName="pruner" Nov 28 06:55:39 crc kubenswrapper[4946]: I1128 06:55:39.896651 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="a69369b0-d780-449f-8dab-a2b481dff9d7" containerName="pruner" Nov 28 06:55:39 crc kubenswrapper[4946]: I1128 06:55:39.896866 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="a69369b0-d780-449f-8dab-a2b481dff9d7" containerName="pruner" Nov 28 06:55:39 crc kubenswrapper[4946]: I1128 06:55:39.897418 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 28 06:55:39 crc kubenswrapper[4946]: I1128 06:55:39.903297 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 28 06:55:39 crc kubenswrapper[4946]: I1128 06:55:39.903993 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 28 06:55:39 crc kubenswrapper[4946]: I1128 06:55:39.911202 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 28 06:55:40 crc kubenswrapper[4946]: I1128 06:55:40.019956 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/986236c0-f012-41ae-aa64-097ad1f7117e-kube-api-access\") pod \"installer-9-crc\" (UID: \"986236c0-f012-41ae-aa64-097ad1f7117e\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 28 06:55:40 crc kubenswrapper[4946]: I1128 06:55:40.020013 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/986236c0-f012-41ae-aa64-097ad1f7117e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"986236c0-f012-41ae-aa64-097ad1f7117e\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 28 06:55:40 crc kubenswrapper[4946]: I1128 06:55:40.020041 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/986236c0-f012-41ae-aa64-097ad1f7117e-var-lock\") pod \"installer-9-crc\" (UID: \"986236c0-f012-41ae-aa64-097ad1f7117e\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 28 06:55:40 crc kubenswrapper[4946]: I1128 06:55:40.069088 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8x4xm" Nov 28 06:55:40 crc kubenswrapper[4946]: I1128 06:55:40.069147 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8x4xm" Nov 28 06:55:40 crc kubenswrapper[4946]: I1128 06:55:40.121583 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
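[editor's note] The "Observed pod startup duration" entries above expose the arithmetic behind podStartSLOduration: it appears to be podStartE2EDuration (observedRunningTime minus podCreationTimestamp) minus the time spent pulling images (lastFinishedPulling minus firstStartedPulling), matching the Kubernetes pod-startup SLI, which excludes image pulls; the m=+N values are seconds on the kubelet's monotonic clock. A tiny Go check against the certified-operators-8x4xm entry, using only numbers copied from the log:

package main

import "fmt"

func main() {
	// Monotonic offsets (the m=+... values) from the certified-operators-8x4xm
	// "Observed pod startup duration" entry above.
	const (
		firstStartedPulling = 146.274585921 // s, m=+146.274585921
		lastFinishedPulling = 193.771508503 // s, m=+193.771508503
		podStartE2E         = 50.657737629  // s, podStartE2EDuration
	)
	pulling := lastFinishedPulling - firstStartedPulling
	fmt.Printf("image pulling:  %.9fs\n", pulling)             // 47.496922582s
	fmt.Printf("SLO duration:   %.9fs\n", podStartE2E-pulling) // 3.160815047s, matching podStartSLOduration
}

For pods whose images needed no pull (firstStartedPulling="0001-01-01 00:00:00"), the tracker reports an SLO duration equal to the full E2E duration, as in the network-metrics-daemon-gkg79 and revision-pruner-9-crc entries. [end of note]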
\"kubernetes.io/projected/986236c0-f012-41ae-aa64-097ad1f7117e-kube-api-access\") pod \"installer-9-crc\" (UID: \"986236c0-f012-41ae-aa64-097ad1f7117e\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 28 06:55:40 crc kubenswrapper[4946]: I1128 06:55:40.121676 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/986236c0-f012-41ae-aa64-097ad1f7117e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"986236c0-f012-41ae-aa64-097ad1f7117e\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 28 06:55:40 crc kubenswrapper[4946]: I1128 06:55:40.121712 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/986236c0-f012-41ae-aa64-097ad1f7117e-var-lock\") pod \"installer-9-crc\" (UID: \"986236c0-f012-41ae-aa64-097ad1f7117e\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 28 06:55:40 crc kubenswrapper[4946]: I1128 06:55:40.122616 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/986236c0-f012-41ae-aa64-097ad1f7117e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"986236c0-f012-41ae-aa64-097ad1f7117e\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 28 06:55:40 crc kubenswrapper[4946]: I1128 06:55:40.122773 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/986236c0-f012-41ae-aa64-097ad1f7117e-var-lock\") pod \"installer-9-crc\" (UID: \"986236c0-f012-41ae-aa64-097ad1f7117e\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 28 06:55:40 crc kubenswrapper[4946]: I1128 06:55:40.156269 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/986236c0-f012-41ae-aa64-097ad1f7117e-kube-api-access\") pod \"installer-9-crc\" (UID: \"986236c0-f012-41ae-aa64-097ad1f7117e\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 28 06:55:40 crc kubenswrapper[4946]: I1128 06:55:40.214996 4946 util.go:30] "No sandbox for pod can be found. 
Nov 28 06:55:40 crc kubenswrapper[4946]: I1128 06:55:40.418574 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Nov 28 06:55:40 crc kubenswrapper[4946]: I1128 06:55:40.655611 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"986236c0-f012-41ae-aa64-097ad1f7117e","Type":"ContainerStarted","Data":"050e2cc9a1b8da35f7087eac204a32aa529a49b5ce15b9d93d27d18abc834fb2"}
Nov 28 06:55:41 crc kubenswrapper[4946]: I1128 06:55:41.110057 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8x4xm" podUID="ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70" containerName="registry-server" probeResult="failure" output=<
Nov 28 06:55:41 crc kubenswrapper[4946]: timeout: failed to connect service ":50051" within 1s
Nov 28 06:55:41 crc kubenswrapper[4946]: >
Nov 28 06:55:41 crc kubenswrapper[4946]: I1128 06:55:41.666061 4946 generic.go:334] "Generic (PLEG): container finished" podID="00f636a1-6272-432d-9b59-6b21b51f0038" containerID="c60b7608ad8dfb1ccab0a3a9251bb1f9ce0eb21ec27b18c6338e0ca1de8ba24c" exitCode=0
Nov 28 06:55:41 crc kubenswrapper[4946]: I1128 06:55:41.666140 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fjn4d" event={"ID":"00f636a1-6272-432d-9b59-6b21b51f0038","Type":"ContainerDied","Data":"c60b7608ad8dfb1ccab0a3a9251bb1f9ce0eb21ec27b18c6338e0ca1de8ba24c"}
Nov 28 06:55:41 crc kubenswrapper[4946]: I1128 06:55:41.670290 4946 generic.go:334] "Generic (PLEG): container finished" podID="09b629b2-f3d1-4856-98f2-5761c99f457a" containerID="7a4db7e5fc504001e92e3dc8a7cf5a8b710938bb35cf4c4dd2ac60edff73f29f" exitCode=0
Nov 28 06:55:41 crc kubenswrapper[4946]: I1128 06:55:41.670348 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nxxk8" event={"ID":"09b629b2-f3d1-4856-98f2-5761c99f457a","Type":"ContainerDied","Data":"7a4db7e5fc504001e92e3dc8a7cf5a8b710938bb35cf4c4dd2ac60edff73f29f"}
Nov 28 06:55:41 crc kubenswrapper[4946]: I1128 06:55:41.677342 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"986236c0-f012-41ae-aa64-097ad1f7117e","Type":"ContainerStarted","Data":"950d634bea04b64e3d7e6cb83eec8670e3fcef7cb83361148c88ce21028cb9bd"}
Nov 28 06:55:41 crc kubenswrapper[4946]: I1128 06:55:41.733047 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7xrv4"
Nov 28 06:55:41 crc kubenswrapper[4946]: I1128 06:55:41.733013 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.732983838 podStartE2EDuration="2.732983838s" podCreationTimestamp="2025-11-28 06:55:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:55:41.720296606 +0000 UTC m=+196.098361717" watchObservedRunningTime="2025-11-28 06:55:41.732983838 +0000 UTC m=+196.111048949"
Nov 28 06:55:42 crc kubenswrapper[4946]: I1128 06:55:42.704311 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-txxk5" event={"ID":"d4cb05fa-0cdb-499e-9d83-e6a5e87bf144","Type":"ContainerStarted","Data":"bdb8c7be7ae7c270d6770efc7c429c096a63a95f9cbbfc62f3472212668bae38"}
Nov 28 06:55:42 crc kubenswrapper[4946]: I1128 06:55:42.710685 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fjn4d" event={"ID":"00f636a1-6272-432d-9b59-6b21b51f0038","Type":"ContainerStarted","Data":"5d62e064649a5e6a75cb87ffb6b23994e330c2b3cd76091ec23cdc922b42bb53"}
Nov 28 06:55:42 crc kubenswrapper[4946]: I1128 06:55:42.714018 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nxxk8" event={"ID":"09b629b2-f3d1-4856-98f2-5761c99f457a","Type":"ContainerStarted","Data":"906dd7d1102631d68e565db04ab0b9d27f5ff781fd2bfec94d0d8270648a2c07"}
Nov 28 06:55:42 crc kubenswrapper[4946]: I1128 06:55:42.716518 4946 generic.go:334] "Generic (PLEG): container finished" podID="1c124c67-1e65-4489-a08a-8b580fee23cc" containerID="ae9792e5ba2567cd20535718e3083200e1f3056e954ef59ed27763533d3a69fd" exitCode=0
Nov 28 06:55:42 crc kubenswrapper[4946]: I1128 06:55:42.716858 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n9lpt" event={"ID":"1c124c67-1e65-4489-a08a-8b580fee23cc","Type":"ContainerDied","Data":"ae9792e5ba2567cd20535718e3083200e1f3056e954ef59ed27763533d3a69fd"}
Nov 28 06:55:42 crc kubenswrapper[4946]: I1128 06:55:42.757610 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fjn4d" podStartSLOduration=2.657646995 podStartE2EDuration="51.757589451s" podCreationTimestamp="2025-11-28 06:54:51 +0000 UTC" firstStartedPulling="2025-11-28 06:54:52.971894774 +0000 UTC m=+147.349959885" lastFinishedPulling="2025-11-28 06:55:42.07183723 +0000 UTC m=+196.449902341" observedRunningTime="2025-11-28 06:55:42.757042427 +0000 UTC m=+197.135107538" watchObservedRunningTime="2025-11-28 06:55:42.757589451 +0000 UTC m=+197.135654562"
Nov 28 06:55:43 crc kubenswrapper[4946]: I1128 06:55:43.066946 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nxxk8" podStartSLOduration=2.684042268 podStartE2EDuration="54.066917281s" podCreationTimestamp="2025-11-28 06:54:49 +0000 UTC" firstStartedPulling="2025-11-28 06:54:50.804124734 +0000 UTC m=+145.182189835" lastFinishedPulling="2025-11-28 06:55:42.186999737 +0000 UTC m=+196.565064848" observedRunningTime="2025-11-28 06:55:42.806762251 +0000 UTC m=+197.184827362" watchObservedRunningTime="2025-11-28 06:55:43.066917281 +0000 UTC m=+197.444982402"
Nov 28 06:55:43 crc kubenswrapper[4946]: I1128 06:55:43.067540 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wb2pq"]
Nov 28 06:55:43 crc kubenswrapper[4946]: I1128 06:55:43.067906 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-wb2pq" podUID="675e635c-71cb-4c9f-8c8a-dd25209abfd6" containerName="controller-manager" containerID="cri-o://162b16ada7c3838a94fdd8184b6031d4360089663df8622aa9b11b37bed50583" gracePeriod=30
Nov 28 06:55:43 crc kubenswrapper[4946]: I1128 06:55:43.166615 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfm25"]
Nov 28 06:55:43 crc kubenswrapper[4946]: I1128 06:55:43.166884 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfm25" podUID="1f9de5c3-e681-4c09-a93e-9f95f03c7f5e" containerName="route-controller-manager" containerID="cri-o://7f1dd6ed983a306ed9a2f167a582c522138601621b665863c6930985d19002fb" gracePeriod=30
containerID="cri-o://7f1dd6ed983a306ed9a2f167a582c522138601621b665863c6930985d19002fb" gracePeriod=30 Nov 28 06:55:43 crc kubenswrapper[4946]: I1128 06:55:43.724874 4946 generic.go:334] "Generic (PLEG): container finished" podID="d4cb05fa-0cdb-499e-9d83-e6a5e87bf144" containerID="bdb8c7be7ae7c270d6770efc7c429c096a63a95f9cbbfc62f3472212668bae38" exitCode=0 Nov 28 06:55:43 crc kubenswrapper[4946]: I1128 06:55:43.724997 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-txxk5" event={"ID":"d4cb05fa-0cdb-499e-9d83-e6a5e87bf144","Type":"ContainerDied","Data":"bdb8c7be7ae7c270d6770efc7c429c096a63a95f9cbbfc62f3472212668bae38"} Nov 28 06:55:43 crc kubenswrapper[4946]: I1128 06:55:43.740419 4946 generic.go:334] "Generic (PLEG): container finished" podID="675e635c-71cb-4c9f-8c8a-dd25209abfd6" containerID="162b16ada7c3838a94fdd8184b6031d4360089663df8622aa9b11b37bed50583" exitCode=0 Nov 28 06:55:43 crc kubenswrapper[4946]: I1128 06:55:43.740506 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-wb2pq" event={"ID":"675e635c-71cb-4c9f-8c8a-dd25209abfd6","Type":"ContainerDied","Data":"162b16ada7c3838a94fdd8184b6031d4360089663df8622aa9b11b37bed50583"} Nov 28 06:55:44 crc kubenswrapper[4946]: I1128 06:55:44.748892 4946 generic.go:334] "Generic (PLEG): container finished" podID="1f9de5c3-e681-4c09-a93e-9f95f03c7f5e" containerID="7f1dd6ed983a306ed9a2f167a582c522138601621b665863c6930985d19002fb" exitCode=0 Nov 28 06:55:44 crc kubenswrapper[4946]: I1128 06:55:44.748961 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfm25" event={"ID":"1f9de5c3-e681-4c09-a93e-9f95f03c7f5e","Type":"ContainerDied","Data":"7f1dd6ed983a306ed9a2f167a582c522138601621b665863c6930985d19002fb"} Nov 28 06:55:45 crc kubenswrapper[4946]: I1128 06:55:45.430368 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-wb2pq" Nov 28 06:55:45 crc kubenswrapper[4946]: I1128 06:55:45.457657 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-654687fc99-5lchk"] Nov 28 06:55:45 crc kubenswrapper[4946]: E1128 06:55:45.457970 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="675e635c-71cb-4c9f-8c8a-dd25209abfd6" containerName="controller-manager" Nov 28 06:55:45 crc kubenswrapper[4946]: I1128 06:55:45.457983 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="675e635c-71cb-4c9f-8c8a-dd25209abfd6" containerName="controller-manager" Nov 28 06:55:45 crc kubenswrapper[4946]: I1128 06:55:45.458096 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="675e635c-71cb-4c9f-8c8a-dd25209abfd6" containerName="controller-manager" Nov 28 06:55:45 crc kubenswrapper[4946]: I1128 06:55:45.458640 4946 util.go:30] "No sandbox for pod can be found. 
Nov 28 06:55:45 crc kubenswrapper[4946]: I1128 06:55:45.474616 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-654687fc99-5lchk"]
Nov 28 06:55:45 crc kubenswrapper[4946]: I1128 06:55:45.541143 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/675e635c-71cb-4c9f-8c8a-dd25209abfd6-config\") pod \"675e635c-71cb-4c9f-8c8a-dd25209abfd6\" (UID: \"675e635c-71cb-4c9f-8c8a-dd25209abfd6\") "
Nov 28 06:55:45 crc kubenswrapper[4946]: I1128 06:55:45.541204 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ndm7\" (UniqueName: \"kubernetes.io/projected/675e635c-71cb-4c9f-8c8a-dd25209abfd6-kube-api-access-5ndm7\") pod \"675e635c-71cb-4c9f-8c8a-dd25209abfd6\" (UID: \"675e635c-71cb-4c9f-8c8a-dd25209abfd6\") "
Nov 28 06:55:45 crc kubenswrapper[4946]: I1128 06:55:45.541338 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/675e635c-71cb-4c9f-8c8a-dd25209abfd6-proxy-ca-bundles\") pod \"675e635c-71cb-4c9f-8c8a-dd25209abfd6\" (UID: \"675e635c-71cb-4c9f-8c8a-dd25209abfd6\") "
Nov 28 06:55:45 crc kubenswrapper[4946]: I1128 06:55:45.541382 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/675e635c-71cb-4c9f-8c8a-dd25209abfd6-client-ca\") pod \"675e635c-71cb-4c9f-8c8a-dd25209abfd6\" (UID: \"675e635c-71cb-4c9f-8c8a-dd25209abfd6\") "
Nov 28 06:55:45 crc kubenswrapper[4946]: I1128 06:55:45.541485 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/675e635c-71cb-4c9f-8c8a-dd25209abfd6-serving-cert\") pod \"675e635c-71cb-4c9f-8c8a-dd25209abfd6\" (UID: \"675e635c-71cb-4c9f-8c8a-dd25209abfd6\") "
Nov 28 06:55:45 crc kubenswrapper[4946]: I1128 06:55:45.541686 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d8964807-1867-4e86-8d13-c2418e09290f-proxy-ca-bundles\") pod \"controller-manager-654687fc99-5lchk\" (UID: \"d8964807-1867-4e86-8d13-c2418e09290f\") " pod="openshift-controller-manager/controller-manager-654687fc99-5lchk"
Nov 28 06:55:45 crc kubenswrapper[4946]: I1128 06:55:45.541741 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnj6c\" (UniqueName: \"kubernetes.io/projected/d8964807-1867-4e86-8d13-c2418e09290f-kube-api-access-tnj6c\") pod \"controller-manager-654687fc99-5lchk\" (UID: \"d8964807-1867-4e86-8d13-c2418e09290f\") " pod="openshift-controller-manager/controller-manager-654687fc99-5lchk"
Nov 28 06:55:45 crc kubenswrapper[4946]: I1128 06:55:45.541777 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8964807-1867-4e86-8d13-c2418e09290f-client-ca\") pod \"controller-manager-654687fc99-5lchk\" (UID: \"d8964807-1867-4e86-8d13-c2418e09290f\") " pod="openshift-controller-manager/controller-manager-654687fc99-5lchk"
Nov 28 06:55:45 crc kubenswrapper[4946]: I1128 06:55:45.541870 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8964807-1867-4e86-8d13-c2418e09290f-config\") pod \"controller-manager-654687fc99-5lchk\" (UID: \"d8964807-1867-4e86-8d13-c2418e09290f\") " pod="openshift-controller-manager/controller-manager-654687fc99-5lchk"
(UniqueName: \"kubernetes.io/configmap/d8964807-1867-4e86-8d13-c2418e09290f-config\") pod \"controller-manager-654687fc99-5lchk\" (UID: \"d8964807-1867-4e86-8d13-c2418e09290f\") " pod="openshift-controller-manager/controller-manager-654687fc99-5lchk" Nov 28 06:55:45 crc kubenswrapper[4946]: I1128 06:55:45.542001 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8964807-1867-4e86-8d13-c2418e09290f-serving-cert\") pod \"controller-manager-654687fc99-5lchk\" (UID: \"d8964807-1867-4e86-8d13-c2418e09290f\") " pod="openshift-controller-manager/controller-manager-654687fc99-5lchk" Nov 28 06:55:45 crc kubenswrapper[4946]: I1128 06:55:45.542344 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/675e635c-71cb-4c9f-8c8a-dd25209abfd6-client-ca" (OuterVolumeSpecName: "client-ca") pod "675e635c-71cb-4c9f-8c8a-dd25209abfd6" (UID: "675e635c-71cb-4c9f-8c8a-dd25209abfd6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:55:45 crc kubenswrapper[4946]: I1128 06:55:45.542361 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/675e635c-71cb-4c9f-8c8a-dd25209abfd6-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "675e635c-71cb-4c9f-8c8a-dd25209abfd6" (UID: "675e635c-71cb-4c9f-8c8a-dd25209abfd6"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:55:45 crc kubenswrapper[4946]: I1128 06:55:45.542415 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/675e635c-71cb-4c9f-8c8a-dd25209abfd6-config" (OuterVolumeSpecName: "config") pod "675e635c-71cb-4c9f-8c8a-dd25209abfd6" (UID: "675e635c-71cb-4c9f-8c8a-dd25209abfd6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:55:45 crc kubenswrapper[4946]: I1128 06:55:45.549694 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/675e635c-71cb-4c9f-8c8a-dd25209abfd6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "675e635c-71cb-4c9f-8c8a-dd25209abfd6" (UID: "675e635c-71cb-4c9f-8c8a-dd25209abfd6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:55:45 crc kubenswrapper[4946]: I1128 06:55:45.549736 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/675e635c-71cb-4c9f-8c8a-dd25209abfd6-kube-api-access-5ndm7" (OuterVolumeSpecName: "kube-api-access-5ndm7") pod "675e635c-71cb-4c9f-8c8a-dd25209abfd6" (UID: "675e635c-71cb-4c9f-8c8a-dd25209abfd6"). InnerVolumeSpecName "kube-api-access-5ndm7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:55:45 crc kubenswrapper[4946]: I1128 06:55:45.644059 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d8964807-1867-4e86-8d13-c2418e09290f-proxy-ca-bundles\") pod \"controller-manager-654687fc99-5lchk\" (UID: \"d8964807-1867-4e86-8d13-c2418e09290f\") " pod="openshift-controller-manager/controller-manager-654687fc99-5lchk" Nov 28 06:55:45 crc kubenswrapper[4946]: I1128 06:55:45.644131 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnj6c\" (UniqueName: \"kubernetes.io/projected/d8964807-1867-4e86-8d13-c2418e09290f-kube-api-access-tnj6c\") pod \"controller-manager-654687fc99-5lchk\" (UID: \"d8964807-1867-4e86-8d13-c2418e09290f\") " pod="openshift-controller-manager/controller-manager-654687fc99-5lchk" Nov 28 06:55:45 crc kubenswrapper[4946]: I1128 06:55:45.644174 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8964807-1867-4e86-8d13-c2418e09290f-client-ca\") pod \"controller-manager-654687fc99-5lchk\" (UID: \"d8964807-1867-4e86-8d13-c2418e09290f\") " pod="openshift-controller-manager/controller-manager-654687fc99-5lchk" Nov 28 06:55:45 crc kubenswrapper[4946]: I1128 06:55:45.644257 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8964807-1867-4e86-8d13-c2418e09290f-config\") pod \"controller-manager-654687fc99-5lchk\" (UID: \"d8964807-1867-4e86-8d13-c2418e09290f\") " pod="openshift-controller-manager/controller-manager-654687fc99-5lchk" Nov 28 06:55:45 crc kubenswrapper[4946]: I1128 06:55:45.644321 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8964807-1867-4e86-8d13-c2418e09290f-serving-cert\") pod \"controller-manager-654687fc99-5lchk\" (UID: \"d8964807-1867-4e86-8d13-c2418e09290f\") " pod="openshift-controller-manager/controller-manager-654687fc99-5lchk" Nov 28 06:55:45 crc kubenswrapper[4946]: I1128 06:55:45.644372 4946 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/675e635c-71cb-4c9f-8c8a-dd25209abfd6-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 28 06:55:45 crc kubenswrapper[4946]: I1128 06:55:45.644387 4946 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/675e635c-71cb-4c9f-8c8a-dd25209abfd6-client-ca\") on node \"crc\" DevicePath \"\"" Nov 28 06:55:45 crc kubenswrapper[4946]: I1128 06:55:45.644438 4946 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/675e635c-71cb-4c9f-8c8a-dd25209abfd6-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:55:45 crc kubenswrapper[4946]: I1128 06:55:45.644489 4946 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/675e635c-71cb-4c9f-8c8a-dd25209abfd6-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:55:45 crc kubenswrapper[4946]: I1128 06:55:45.644502 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ndm7\" (UniqueName: \"kubernetes.io/projected/675e635c-71cb-4c9f-8c8a-dd25209abfd6-kube-api-access-5ndm7\") on node \"crc\" DevicePath \"\"" Nov 28 06:55:45 crc kubenswrapper[4946]: I1128 06:55:45.646394 4946 
Nov 28 06:55:45 crc kubenswrapper[4946]: I1128 06:55:45.646423 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d8964807-1867-4e86-8d13-c2418e09290f-proxy-ca-bundles\") pod \"controller-manager-654687fc99-5lchk\" (UID: \"d8964807-1867-4e86-8d13-c2418e09290f\") " pod="openshift-controller-manager/controller-manager-654687fc99-5lchk"
Nov 28 06:55:45 crc kubenswrapper[4946]: I1128 06:55:45.689105 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8964807-1867-4e86-8d13-c2418e09290f-serving-cert\") pod \"controller-manager-654687fc99-5lchk\" (UID: \"d8964807-1867-4e86-8d13-c2418e09290f\") " pod="openshift-controller-manager/controller-manager-654687fc99-5lchk"
Nov 28 06:55:45 crc kubenswrapper[4946]: I1128 06:55:45.689905 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8964807-1867-4e86-8d13-c2418e09290f-config\") pod \"controller-manager-654687fc99-5lchk\" (UID: \"d8964807-1867-4e86-8d13-c2418e09290f\") " pod="openshift-controller-manager/controller-manager-654687fc99-5lchk"
Nov 28 06:55:45 crc kubenswrapper[4946]: I1128 06:55:45.693849 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnj6c\" (UniqueName: \"kubernetes.io/projected/d8964807-1867-4e86-8d13-c2418e09290f-kube-api-access-tnj6c\") pod \"controller-manager-654687fc99-5lchk\" (UID: \"d8964807-1867-4e86-8d13-c2418e09290f\") " pod="openshift-controller-manager/controller-manager-654687fc99-5lchk"
Nov 28 06:55:45 crc kubenswrapper[4946]: I1128 06:55:45.757727 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-wb2pq" event={"ID":"675e635c-71cb-4c9f-8c8a-dd25209abfd6","Type":"ContainerDied","Data":"7f4519a2a4a0863bdd089938e472a6ea1a8036130f549832201d54c9a92ab828"}
Nov 28 06:55:45 crc kubenswrapper[4946]: I1128 06:55:45.757807 4946 scope.go:117] "RemoveContainer" containerID="162b16ada7c3838a94fdd8184b6031d4360089663df8622aa9b11b37bed50583"
Nov 28 06:55:45 crc kubenswrapper[4946]: I1128 06:55:45.757814 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-wb2pq"
Nov 28 06:55:45 crc kubenswrapper[4946]: I1128 06:55:45.788963 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-654687fc99-5lchk"
Nov 28 06:55:45 crc kubenswrapper[4946]: I1128 06:55:45.812677 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wb2pq"]
Nov 28 06:55:45 crc kubenswrapper[4946]: I1128 06:55:45.823565 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wb2pq"]
Nov 28 06:55:45 crc kubenswrapper[4946]: I1128 06:55:45.999318 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="675e635c-71cb-4c9f-8c8a-dd25209abfd6" path="/var/lib/kubelet/pods/675e635c-71cb-4c9f-8c8a-dd25209abfd6/volumes"
Nov 28 06:55:47 crc kubenswrapper[4946]: I1128 06:55:47.048574 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfm25"
Nov 28 06:55:47 crc kubenswrapper[4946]: I1128 06:55:47.168996 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f9de5c3-e681-4c09-a93e-9f95f03c7f5e-client-ca\") pod \"1f9de5c3-e681-4c09-a93e-9f95f03c7f5e\" (UID: \"1f9de5c3-e681-4c09-a93e-9f95f03c7f5e\") "
Nov 28 06:55:47 crc kubenswrapper[4946]: I1128 06:55:47.169070 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f9de5c3-e681-4c09-a93e-9f95f03c7f5e-config\") pod \"1f9de5c3-e681-4c09-a93e-9f95f03c7f5e\" (UID: \"1f9de5c3-e681-4c09-a93e-9f95f03c7f5e\") "
Nov 28 06:55:47 crc kubenswrapper[4946]: I1128 06:55:47.169193 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvk4b\" (UniqueName: \"kubernetes.io/projected/1f9de5c3-e681-4c09-a93e-9f95f03c7f5e-kube-api-access-rvk4b\") pod \"1f9de5c3-e681-4c09-a93e-9f95f03c7f5e\" (UID: \"1f9de5c3-e681-4c09-a93e-9f95f03c7f5e\") "
Nov 28 06:55:47 crc kubenswrapper[4946]: I1128 06:55:47.169254 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f9de5c3-e681-4c09-a93e-9f95f03c7f5e-serving-cert\") pod \"1f9de5c3-e681-4c09-a93e-9f95f03c7f5e\" (UID: \"1f9de5c3-e681-4c09-a93e-9f95f03c7f5e\") "
Nov 28 06:55:47 crc kubenswrapper[4946]: I1128 06:55:47.170390 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f9de5c3-e681-4c09-a93e-9f95f03c7f5e-config" (OuterVolumeSpecName: "config") pod "1f9de5c3-e681-4c09-a93e-9f95f03c7f5e" (UID: "1f9de5c3-e681-4c09-a93e-9f95f03c7f5e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 06:55:47 crc kubenswrapper[4946]: I1128 06:55:47.170550 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f9de5c3-e681-4c09-a93e-9f95f03c7f5e-client-ca" (OuterVolumeSpecName: "client-ca") pod "1f9de5c3-e681-4c09-a93e-9f95f03c7f5e" (UID: "1f9de5c3-e681-4c09-a93e-9f95f03c7f5e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 06:55:47 crc kubenswrapper[4946]: I1128 06:55:47.177827 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f9de5c3-e681-4c09-a93e-9f95f03c7f5e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1f9de5c3-e681-4c09-a93e-9f95f03c7f5e" (UID: "1f9de5c3-e681-4c09-a93e-9f95f03c7f5e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:55:47 crc kubenswrapper[4946]: I1128 06:55:47.178863 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f9de5c3-e681-4c09-a93e-9f95f03c7f5e-kube-api-access-rvk4b" (OuterVolumeSpecName: "kube-api-access-rvk4b") pod "1f9de5c3-e681-4c09-a93e-9f95f03c7f5e" (UID: "1f9de5c3-e681-4c09-a93e-9f95f03c7f5e"). InnerVolumeSpecName "kube-api-access-rvk4b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:55:47 crc kubenswrapper[4946]: I1128 06:55:47.271121 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvk4b\" (UniqueName: \"kubernetes.io/projected/1f9de5c3-e681-4c09-a93e-9f95f03c7f5e-kube-api-access-rvk4b\") on node \"crc\" DevicePath \"\"" Nov 28 06:55:47 crc kubenswrapper[4946]: I1128 06:55:47.271193 4946 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f9de5c3-e681-4c09-a93e-9f95f03c7f5e-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:55:47 crc kubenswrapper[4946]: I1128 06:55:47.271213 4946 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f9de5c3-e681-4c09-a93e-9f95f03c7f5e-client-ca\") on node \"crc\" DevicePath \"\"" Nov 28 06:55:47 crc kubenswrapper[4946]: I1128 06:55:47.271232 4946 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f9de5c3-e681-4c09-a93e-9f95f03c7f5e-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:55:47 crc kubenswrapper[4946]: I1128 06:55:47.798532 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfm25" event={"ID":"1f9de5c3-e681-4c09-a93e-9f95f03c7f5e","Type":"ContainerDied","Data":"d3e0b2de4d9cdb8c973e6aa94fb83f8213b4d947e651181ddb14be567dae1965"} Nov 28 06:55:47 crc kubenswrapper[4946]: I1128 06:55:47.799189 4946 scope.go:117] "RemoveContainer" containerID="7f1dd6ed983a306ed9a2f167a582c522138601621b665863c6930985d19002fb" Nov 28 06:55:47 crc kubenswrapper[4946]: I1128 06:55:47.798563 4946 util.go:48] "No ready sandbox for pod can be found. 
Nov 28 06:55:47 crc kubenswrapper[4946]: I1128 06:55:47.856094 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfm25"]
Nov 28 06:55:47 crc kubenswrapper[4946]: I1128 06:55:47.864361 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kfm25"]
Nov 28 06:55:47 crc kubenswrapper[4946]: I1128 06:55:47.904282 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-654687fc99-5lchk"]
Nov 28 06:55:47 crc kubenswrapper[4946]: I1128 06:55:47.998501 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f9de5c3-e681-4c09-a93e-9f95f03c7f5e" path="/var/lib/kubelet/pods/1f9de5c3-e681-4c09-a93e-9f95f03c7f5e/volumes"
Nov 28 06:55:48 crc kubenswrapper[4946]: I1128 06:55:48.203229 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59d99b5-ghmxj"]
Nov 28 06:55:48 crc kubenswrapper[4946]: E1128 06:55:48.203921 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f9de5c3-e681-4c09-a93e-9f95f03c7f5e" containerName="route-controller-manager"
Nov 28 06:55:48 crc kubenswrapper[4946]: I1128 06:55:48.203938 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f9de5c3-e681-4c09-a93e-9f95f03c7f5e" containerName="route-controller-manager"
Nov 28 06:55:48 crc kubenswrapper[4946]: I1128 06:55:48.204054 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f9de5c3-e681-4c09-a93e-9f95f03c7f5e" containerName="route-controller-manager"
Nov 28 06:55:48 crc kubenswrapper[4946]: I1128 06:55:48.204486 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59d99b5-ghmxj"
Nov 28 06:55:48 crc kubenswrapper[4946]: I1128 06:55:48.209023 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Nov 28 06:55:48 crc kubenswrapper[4946]: I1128 06:55:48.209143 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Nov 28 06:55:48 crc kubenswrapper[4946]: I1128 06:55:48.209411 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Nov 28 06:55:48 crc kubenswrapper[4946]: I1128 06:55:48.209534 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Nov 28 06:55:48 crc kubenswrapper[4946]: I1128 06:55:48.212013 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Nov 28 06:55:48 crc kubenswrapper[4946]: I1128 06:55:48.212418 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Nov 28 06:55:48 crc kubenswrapper[4946]: I1128 06:55:48.242943 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59d99b5-ghmxj"]
Nov 28 06:55:48 crc kubenswrapper[4946]: I1128 06:55:48.285303 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/874c9215-546d-4c25-bbb2-7dd82cffde6c-client-ca\") pod \"route-controller-manager-59d99b5-ghmxj\" (UID: \"874c9215-546d-4c25-bbb2-7dd82cffde6c\") " pod="openshift-route-controller-manager/route-controller-manager-59d99b5-ghmxj"
Nov 28 06:55:48 crc kubenswrapper[4946]: I1128 06:55:48.285371 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7269\" (UniqueName: \"kubernetes.io/projected/874c9215-546d-4c25-bbb2-7dd82cffde6c-kube-api-access-n7269\") pod \"route-controller-manager-59d99b5-ghmxj\" (UID: \"874c9215-546d-4c25-bbb2-7dd82cffde6c\") " pod="openshift-route-controller-manager/route-controller-manager-59d99b5-ghmxj"
Nov 28 06:55:48 crc kubenswrapper[4946]: I1128 06:55:48.285403 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/874c9215-546d-4c25-bbb2-7dd82cffde6c-serving-cert\") pod \"route-controller-manager-59d99b5-ghmxj\" (UID: \"874c9215-546d-4c25-bbb2-7dd82cffde6c\") " pod="openshift-route-controller-manager/route-controller-manager-59d99b5-ghmxj"
Nov 28 06:55:48 crc kubenswrapper[4946]: I1128 06:55:48.285447 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/874c9215-546d-4c25-bbb2-7dd82cffde6c-config\") pod \"route-controller-manager-59d99b5-ghmxj\" (UID: \"874c9215-546d-4c25-bbb2-7dd82cffde6c\") " pod="openshift-route-controller-manager/route-controller-manager-59d99b5-ghmxj"
Nov 28 06:55:48 crc kubenswrapper[4946]: I1128 06:55:48.389036 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/874c9215-546d-4c25-bbb2-7dd82cffde6c-serving-cert\") pod \"route-controller-manager-59d99b5-ghmxj\" (UID: \"874c9215-546d-4c25-bbb2-7dd82cffde6c\") " pod="openshift-route-controller-manager/route-controller-manager-59d99b5-ghmxj"
\"874c9215-546d-4c25-bbb2-7dd82cffde6c\") " pod="openshift-route-controller-manager/route-controller-manager-59d99b5-ghmxj" Nov 28 06:55:48 crc kubenswrapper[4946]: I1128 06:55:48.389158 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/874c9215-546d-4c25-bbb2-7dd82cffde6c-config\") pod \"route-controller-manager-59d99b5-ghmxj\" (UID: \"874c9215-546d-4c25-bbb2-7dd82cffde6c\") " pod="openshift-route-controller-manager/route-controller-manager-59d99b5-ghmxj" Nov 28 06:55:48 crc kubenswrapper[4946]: I1128 06:55:48.389587 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/874c9215-546d-4c25-bbb2-7dd82cffde6c-client-ca\") pod \"route-controller-manager-59d99b5-ghmxj\" (UID: \"874c9215-546d-4c25-bbb2-7dd82cffde6c\") " pod="openshift-route-controller-manager/route-controller-manager-59d99b5-ghmxj" Nov 28 06:55:48 crc kubenswrapper[4946]: I1128 06:55:48.389688 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7269\" (UniqueName: \"kubernetes.io/projected/874c9215-546d-4c25-bbb2-7dd82cffde6c-kube-api-access-n7269\") pod \"route-controller-manager-59d99b5-ghmxj\" (UID: \"874c9215-546d-4c25-bbb2-7dd82cffde6c\") " pod="openshift-route-controller-manager/route-controller-manager-59d99b5-ghmxj" Nov 28 06:55:48 crc kubenswrapper[4946]: I1128 06:55:48.390689 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/874c9215-546d-4c25-bbb2-7dd82cffde6c-config\") pod \"route-controller-manager-59d99b5-ghmxj\" (UID: \"874c9215-546d-4c25-bbb2-7dd82cffde6c\") " pod="openshift-route-controller-manager/route-controller-manager-59d99b5-ghmxj" Nov 28 06:55:48 crc kubenswrapper[4946]: I1128 06:55:48.390733 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/874c9215-546d-4c25-bbb2-7dd82cffde6c-client-ca\") pod \"route-controller-manager-59d99b5-ghmxj\" (UID: \"874c9215-546d-4c25-bbb2-7dd82cffde6c\") " pod="openshift-route-controller-manager/route-controller-manager-59d99b5-ghmxj" Nov 28 06:55:48 crc kubenswrapper[4946]: I1128 06:55:48.402248 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/874c9215-546d-4c25-bbb2-7dd82cffde6c-serving-cert\") pod \"route-controller-manager-59d99b5-ghmxj\" (UID: \"874c9215-546d-4c25-bbb2-7dd82cffde6c\") " pod="openshift-route-controller-manager/route-controller-manager-59d99b5-ghmxj" Nov 28 06:55:48 crc kubenswrapper[4946]: I1128 06:55:48.412363 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7269\" (UniqueName: \"kubernetes.io/projected/874c9215-546d-4c25-bbb2-7dd82cffde6c-kube-api-access-n7269\") pod \"route-controller-manager-59d99b5-ghmxj\" (UID: \"874c9215-546d-4c25-bbb2-7dd82cffde6c\") " pod="openshift-route-controller-manager/route-controller-manager-59d99b5-ghmxj" Nov 28 06:55:48 crc kubenswrapper[4946]: I1128 06:55:48.535488 4946 util.go:30] "No sandbox for pod can be found. 
Nov 28 06:55:48 crc kubenswrapper[4946]: I1128 06:55:48.809040 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-654687fc99-5lchk" event={"ID":"d8964807-1867-4e86-8d13-c2418e09290f","Type":"ContainerStarted","Data":"d8ba0fdcb12d9df48fe8f486a39441e8d524cb65c7e5db72415aa0aefe018442"}
Nov 28 06:55:48 crc kubenswrapper[4946]: I1128 06:55:48.809596 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-654687fc99-5lchk" event={"ID":"d8964807-1867-4e86-8d13-c2418e09290f","Type":"ContainerStarted","Data":"59ed74cc82d8dd1dc0ad53378c95b32b0ec02c617fb6e0fb3e4627f64428dbb1"}
Nov 28 06:55:48 crc kubenswrapper[4946]: I1128 06:55:48.810255 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-654687fc99-5lchk"
Nov 28 06:55:48 crc kubenswrapper[4946]: I1128 06:55:48.812225 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f7mgl" event={"ID":"b169ce71-a297-432f-a041-f7b544659574","Type":"ContainerStarted","Data":"67f4bfc66b91f17d4b366b4b306c847009d632417e1602fcdcd2ebee1e4e22fc"}
Nov 28 06:55:48 crc kubenswrapper[4946]: I1128 06:55:48.816566 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6249s" event={"ID":"766b6f99-3942-4119-a40d-8b8b327e1880","Type":"ContainerStarted","Data":"d220eaceb193c9974d3213b152b7cc90a5049e718728b7193c35b159b2a96482"}
Nov 28 06:55:48 crc kubenswrapper[4946]: I1128 06:55:48.818839 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-654687fc99-5lchk"
Nov 28 06:55:48 crc kubenswrapper[4946]: I1128 06:55:48.830758 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-654687fc99-5lchk" podStartSLOduration=5.8307320350000005 podStartE2EDuration="5.830732035s" podCreationTimestamp="2025-11-28 06:55:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:55:48.827565493 +0000 UTC m=+203.205630604" watchObservedRunningTime="2025-11-28 06:55:48.830732035 +0000 UTC m=+203.208797156"
Nov 28 06:55:49 crc kubenswrapper[4946]: I1128 06:55:49.773631 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59d99b5-ghmxj"]
Nov 28 06:55:49 crc kubenswrapper[4946]: W1128 06:55:49.789582 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod874c9215_546d_4c25_bbb2_7dd82cffde6c.slice/crio-1f5b70a086bf5550a569d27909e4c9dfb5e5453b222d766c2af6e901888a28e7 WatchSource:0}: Error finding container 1f5b70a086bf5550a569d27909e4c9dfb5e5453b222d766c2af6e901888a28e7: Status 404 returned error can't find the container with id 1f5b70a086bf5550a569d27909e4c9dfb5e5453b222d766c2af6e901888a28e7
Nov 28 06:55:49 crc kubenswrapper[4946]: I1128 06:55:49.826723 4946 generic.go:334] "Generic (PLEG): container finished" podID="b169ce71-a297-432f-a041-f7b544659574" containerID="67f4bfc66b91f17d4b366b4b306c847009d632417e1602fcdcd2ebee1e4e22fc" exitCode=0
Nov 28 06:55:49 crc kubenswrapper[4946]: I1128 06:55:49.826896 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f7mgl" event={"ID":"b169ce71-a297-432f-a041-f7b544659574","Type":"ContainerDied","Data":"67f4bfc66b91f17d4b366b4b306c847009d632417e1602fcdcd2ebee1e4e22fc"}
pod" pod="openshift-marketplace/community-operators-f7mgl" event={"ID":"b169ce71-a297-432f-a041-f7b544659574","Type":"ContainerDied","Data":"67f4bfc66b91f17d4b366b4b306c847009d632417e1602fcdcd2ebee1e4e22fc"} Nov 28 06:55:49 crc kubenswrapper[4946]: I1128 06:55:49.831562 4946 generic.go:334] "Generic (PLEG): container finished" podID="766b6f99-3942-4119-a40d-8b8b327e1880" containerID="d220eaceb193c9974d3213b152b7cc90a5049e718728b7193c35b159b2a96482" exitCode=0 Nov 28 06:55:49 crc kubenswrapper[4946]: I1128 06:55:49.831658 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6249s" event={"ID":"766b6f99-3942-4119-a40d-8b8b327e1880","Type":"ContainerDied","Data":"d220eaceb193c9974d3213b152b7cc90a5049e718728b7193c35b159b2a96482"} Nov 28 06:55:49 crc kubenswrapper[4946]: I1128 06:55:49.841342 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59d99b5-ghmxj" event={"ID":"874c9215-546d-4c25-bbb2-7dd82cffde6c","Type":"ContainerStarted","Data":"1f5b70a086bf5550a569d27909e4c9dfb5e5453b222d766c2af6e901888a28e7"} Nov 28 06:55:49 crc kubenswrapper[4946]: I1128 06:55:49.879746 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nxxk8" Nov 28 06:55:49 crc kubenswrapper[4946]: I1128 06:55:49.880223 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nxxk8" Nov 28 06:55:49 crc kubenswrapper[4946]: I1128 06:55:49.926014 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nxxk8" Nov 28 06:55:50 crc kubenswrapper[4946]: I1128 06:55:50.120280 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8x4xm" Nov 28 06:55:50 crc kubenswrapper[4946]: I1128 06:55:50.190364 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8x4xm" Nov 28 06:55:50 crc kubenswrapper[4946]: I1128 06:55:50.881721 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n9lpt" event={"ID":"1c124c67-1e65-4489-a08a-8b580fee23cc","Type":"ContainerStarted","Data":"0a77650cb871753d826fa0eb5020e7d92f0dfa3f63dedf3cda0ccf863d9e2ef1"} Nov 28 06:55:50 crc kubenswrapper[4946]: I1128 06:55:50.887699 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59d99b5-ghmxj" event={"ID":"874c9215-546d-4c25-bbb2-7dd82cffde6c","Type":"ContainerStarted","Data":"41376c405cb72467547aec27aa26a2c94e611b91fb2ff480c48d31a29df37d4c"} Nov 28 06:55:50 crc kubenswrapper[4946]: I1128 06:55:50.906905 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-n9lpt" podStartSLOduration=3.592520887 podStartE2EDuration="59.906874132s" podCreationTimestamp="2025-11-28 06:54:51 +0000 UTC" firstStartedPulling="2025-11-28 06:54:53.046571502 +0000 UTC m=+147.424636613" lastFinishedPulling="2025-11-28 06:55:49.360924727 +0000 UTC m=+203.738989858" observedRunningTime="2025-11-28 06:55:50.905759033 +0000 UTC m=+205.283824144" watchObservedRunningTime="2025-11-28 06:55:50.906874132 +0000 UTC m=+205.284939263" Nov 28 06:55:50 crc kubenswrapper[4946]: I1128 06:55:50.931534 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-nxxk8" Nov 28 06:55:51 crc kubenswrapper[4946]: I1128 06:55:51.259596 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-m444h"] Nov 28 06:55:51 crc kubenswrapper[4946]: I1128 06:55:51.397051 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n9lpt" Nov 28 06:55:51 crc kubenswrapper[4946]: I1128 06:55:51.397136 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-n9lpt" Nov 28 06:55:51 crc kubenswrapper[4946]: I1128 06:55:51.766697 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fjn4d" Nov 28 06:55:51 crc kubenswrapper[4946]: I1128 06:55:51.766773 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fjn4d" Nov 28 06:55:51 crc kubenswrapper[4946]: I1128 06:55:51.816517 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fjn4d" Nov 28 06:55:51 crc kubenswrapper[4946]: I1128 06:55:51.870744 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8x4xm"] Nov 28 06:55:51 crc kubenswrapper[4946]: I1128 06:55:51.894051 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8x4xm" podUID="ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70" containerName="registry-server" containerID="cri-o://4ae1abc3f141bc1e2cf5292b99d172694f156148a37292cb41b1716a82ed8697" gracePeriod=2 Nov 28 06:55:51 crc kubenswrapper[4946]: I1128 06:55:51.918481 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-59d99b5-ghmxj" podStartSLOduration=8.918421156 podStartE2EDuration="8.918421156s" podCreationTimestamp="2025-11-28 06:55:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:55:51.918406556 +0000 UTC m=+206.296471677" watchObservedRunningTime="2025-11-28 06:55:51.918421156 +0000 UTC m=+206.296486277" Nov 28 06:55:51 crc kubenswrapper[4946]: I1128 06:55:51.950340 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fjn4d" Nov 28 06:55:52 crc kubenswrapper[4946]: I1128 06:55:52.073538 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nxxk8"] Nov 28 06:55:52 crc kubenswrapper[4946]: I1128 06:55:52.442310 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-n9lpt" podUID="1c124c67-1e65-4489-a08a-8b580fee23cc" containerName="registry-server" probeResult="failure" output=< Nov 28 06:55:52 crc kubenswrapper[4946]: timeout: failed to connect service ":50051" within 1s Nov 28 06:55:52 crc kubenswrapper[4946]: > Nov 28 06:55:52 crc kubenswrapper[4946]: I1128 06:55:52.904741 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6249s" event={"ID":"766b6f99-3942-4119-a40d-8b8b327e1880","Type":"ContainerStarted","Data":"5b3c370545a0f70c002a42a63a91b8561274596af34b443bd126e941884ecd8b"} Nov 28 06:55:52 crc kubenswrapper[4946]: I1128 06:55:52.908132 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-txxk5" event={"ID":"d4cb05fa-0cdb-499e-9d83-e6a5e87bf144","Type":"ContainerStarted","Data":"59e0ebde0c1aba3eb7c6c208646384865b52fcc104200a084717110505444149"} Nov 28 06:55:52 crc kubenswrapper[4946]: I1128 06:55:52.911279 4946 generic.go:334] "Generic (PLEG): container finished" podID="ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70" containerID="4ae1abc3f141bc1e2cf5292b99d172694f156148a37292cb41b1716a82ed8697" exitCode=0 Nov 28 06:55:52 crc kubenswrapper[4946]: I1128 06:55:52.911364 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8x4xm" event={"ID":"ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70","Type":"ContainerDied","Data":"4ae1abc3f141bc1e2cf5292b99d172694f156148a37292cb41b1716a82ed8697"} Nov 28 06:55:52 crc kubenswrapper[4946]: I1128 06:55:52.916021 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f7mgl" event={"ID":"b169ce71-a297-432f-a041-f7b544659574","Type":"ContainerStarted","Data":"aaa71615617e77d25b1b5f41fc53dd4e6020d672abe4da55e9b796aed37cd626"} Nov 28 06:55:52 crc kubenswrapper[4946]: I1128 06:55:52.931823 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6249s" podStartSLOduration=3.7671272670000002 podStartE2EDuration="1m0.931777276s" podCreationTimestamp="2025-11-28 06:54:52 +0000 UTC" firstStartedPulling="2025-11-28 06:54:55.193251517 +0000 UTC m=+149.571316628" lastFinishedPulling="2025-11-28 06:55:52.357901526 +0000 UTC m=+206.735966637" observedRunningTime="2025-11-28 06:55:52.926963142 +0000 UTC m=+207.305028253" watchObservedRunningTime="2025-11-28 06:55:52.931777276 +0000 UTC m=+207.309842387" Nov 28 06:55:52 crc kubenswrapper[4946]: I1128 06:55:52.952109 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-txxk5" podStartSLOduration=2.8552005510000003 podStartE2EDuration="1m0.95208305s" podCreationTimestamp="2025-11-28 06:54:52 +0000 UTC" firstStartedPulling="2025-11-28 06:54:54.073402292 +0000 UTC m=+148.451467403" lastFinishedPulling="2025-11-28 06:55:52.170284781 +0000 UTC m=+206.548349902" observedRunningTime="2025-11-28 06:55:52.94974371 +0000 UTC m=+207.327808821" watchObservedRunningTime="2025-11-28 06:55:52.95208305 +0000 UTC m=+207.330148161" Nov 28 06:55:52 crc kubenswrapper[4946]: I1128 06:55:52.974988 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-f7mgl" podStartSLOduration=2.5856489639999998 podStartE2EDuration="1m3.974963961s" podCreationTimestamp="2025-11-28 06:54:49 +0000 UTC" firstStartedPulling="2025-11-28 06:54:50.785917282 +0000 UTC m=+145.163982393" lastFinishedPulling="2025-11-28 06:55:52.175232269 +0000 UTC m=+206.553297390" observedRunningTime="2025-11-28 06:55:52.97027246 +0000 UTC m=+207.348337571" watchObservedRunningTime="2025-11-28 06:55:52.974963961 +0000 UTC m=+207.353029062" Nov 28 06:55:52 crc kubenswrapper[4946]: I1128 06:55:52.997642 4946 util.go:48] "No ready sandbox for pod can be found. 
Nov 28 06:55:53 crc kubenswrapper[4946]: I1128 06:55:53.048684 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6249s"
Nov 28 06:55:53 crc kubenswrapper[4946]: I1128 06:55:53.048744 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6249s"
Nov 28 06:55:53 crc kubenswrapper[4946]: I1128 06:55:53.082698 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70-utilities\") pod \"ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70\" (UID: \"ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70\") "
Nov 28 06:55:53 crc kubenswrapper[4946]: I1128 06:55:53.082775 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70-catalog-content\") pod \"ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70\" (UID: \"ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70\") "
Nov 28 06:55:53 crc kubenswrapper[4946]: I1128 06:55:53.082822 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqd8g\" (UniqueName: \"kubernetes.io/projected/ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70-kube-api-access-tqd8g\") pod \"ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70\" (UID: \"ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70\") "
Nov 28 06:55:53 crc kubenswrapper[4946]: I1128 06:55:53.084419 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70-utilities" (OuterVolumeSpecName: "utilities") pod "ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70" (UID: "ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 06:55:53 crc kubenswrapper[4946]: I1128 06:55:53.092670 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70-kube-api-access-tqd8g" (OuterVolumeSpecName: "kube-api-access-tqd8g") pod "ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70" (UID: "ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70"). InnerVolumeSpecName "kube-api-access-tqd8g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 06:55:53 crc kubenswrapper[4946]: I1128 06:55:53.130424 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70" (UID: "ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:55:53 crc kubenswrapper[4946]: I1128 06:55:53.184869 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 06:55:53 crc kubenswrapper[4946]: I1128 06:55:53.184937 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 06:55:53 crc kubenswrapper[4946]: I1128 06:55:53.184965 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqd8g\" (UniqueName: \"kubernetes.io/projected/ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70-kube-api-access-tqd8g\") on node \"crc\" DevicePath \"\"" Nov 28 06:55:53 crc kubenswrapper[4946]: I1128 06:55:53.925859 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8x4xm" event={"ID":"ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70","Type":"ContainerDied","Data":"c63b487e1e5b1ad41e88a4c633bb2bc4999fbbefb54597889ed188d775e4f638"} Nov 28 06:55:53 crc kubenswrapper[4946]: I1128 06:55:53.925951 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8x4xm" Nov 28 06:55:53 crc kubenswrapper[4946]: I1128 06:55:53.925981 4946 scope.go:117] "RemoveContainer" containerID="4ae1abc3f141bc1e2cf5292b99d172694f156148a37292cb41b1716a82ed8697" Nov 28 06:55:53 crc kubenswrapper[4946]: I1128 06:55:53.926926 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nxxk8" podUID="09b629b2-f3d1-4856-98f2-5761c99f457a" containerName="registry-server" containerID="cri-o://906dd7d1102631d68e565db04ab0b9d27f5ff781fd2bfec94d0d8270648a2c07" gracePeriod=2 Nov 28 06:55:53 crc kubenswrapper[4946]: I1128 06:55:53.952388 4946 scope.go:117] "RemoveContainer" containerID="4a902b83dcef37685cd522658fec281155ad2d76e06e5c0ec296121e0a42a457" Nov 28 06:55:53 crc kubenswrapper[4946]: I1128 06:55:53.974177 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8x4xm"] Nov 28 06:55:53 crc kubenswrapper[4946]: I1128 06:55:53.979075 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8x4xm"] Nov 28 06:55:53 crc kubenswrapper[4946]: I1128 06:55:53.998554 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70" path="/var/lib/kubelet/pods/ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70/volumes" Nov 28 06:55:54 crc kubenswrapper[4946]: I1128 06:55:54.098779 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6249s" podUID="766b6f99-3942-4119-a40d-8b8b327e1880" containerName="registry-server" probeResult="failure" output=< Nov 28 06:55:54 crc kubenswrapper[4946]: timeout: failed to connect service ":50051" within 1s Nov 28 06:55:54 crc kubenswrapper[4946]: > Nov 28 06:55:54 crc kubenswrapper[4946]: I1128 06:55:54.470962 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fjn4d"] Nov 28 06:55:54 crc kubenswrapper[4946]: I1128 06:55:54.471213 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fjn4d" podUID="00f636a1-6272-432d-9b59-6b21b51f0038" 
containerName="registry-server" containerID="cri-o://5d62e064649a5e6a75cb87ffb6b23994e330c2b3cd76091ec23cdc922b42bb53" gracePeriod=2 Nov 28 06:55:54 crc kubenswrapper[4946]: I1128 06:55:54.731250 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 06:55:54 crc kubenswrapper[4946]: I1128 06:55:54.731329 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 06:55:54 crc kubenswrapper[4946]: I1128 06:55:54.731410 4946 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" Nov 28 06:55:54 crc kubenswrapper[4946]: I1128 06:55:54.732260 4946 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"174d7788364a980297791c1072b9c05dd827bb3759a851ea4c5dd07981297b67"} pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 06:55:54 crc kubenswrapper[4946]: I1128 06:55:54.732381 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" containerID="cri-o://174d7788364a980297791c1072b9c05dd827bb3759a851ea4c5dd07981297b67" gracePeriod=600 Nov 28 06:55:54 crc kubenswrapper[4946]: I1128 06:55:54.953527 4946 scope.go:117] "RemoveContainer" containerID="72c5dac86fffaa557bc0f186353ad7bdd68d30c01d893bce63d615c24393dca9" Nov 28 06:55:55 crc kubenswrapper[4946]: I1128 06:55:55.961647 4946 generic.go:334] "Generic (PLEG): container finished" podID="00f636a1-6272-432d-9b59-6b21b51f0038" containerID="5d62e064649a5e6a75cb87ffb6b23994e330c2b3cd76091ec23cdc922b42bb53" exitCode=0 Nov 28 06:55:55 crc kubenswrapper[4946]: I1128 06:55:55.961766 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fjn4d" event={"ID":"00f636a1-6272-432d-9b59-6b21b51f0038","Type":"ContainerDied","Data":"5d62e064649a5e6a75cb87ffb6b23994e330c2b3cd76091ec23cdc922b42bb53"} Nov 28 06:55:55 crc kubenswrapper[4946]: I1128 06:55:55.964780 4946 generic.go:334] "Generic (PLEG): container finished" podID="7450befc-262f-45d1-a5f4-f445e540185b" containerID="174d7788364a980297791c1072b9c05dd827bb3759a851ea4c5dd07981297b67" exitCode=0 Nov 28 06:55:55 crc kubenswrapper[4946]: I1128 06:55:55.964854 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerDied","Data":"174d7788364a980297791c1072b9c05dd827bb3759a851ea4c5dd07981297b67"} Nov 28 06:55:55 crc kubenswrapper[4946]: I1128 06:55:55.967923 4946 generic.go:334] "Generic (PLEG): container finished" podID="09b629b2-f3d1-4856-98f2-5761c99f457a" containerID="906dd7d1102631d68e565db04ab0b9d27f5ff781fd2bfec94d0d8270648a2c07" exitCode=0 Nov 28 06:55:55 crc 
kubenswrapper[4946]: I1128 06:55:55.967980 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nxxk8" event={"ID":"09b629b2-f3d1-4856-98f2-5761c99f457a","Type":"ContainerDied","Data":"906dd7d1102631d68e565db04ab0b9d27f5ff781fd2bfec94d0d8270648a2c07"} Nov 28 06:55:56 crc kubenswrapper[4946]: I1128 06:55:56.590298 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fjn4d" Nov 28 06:55:56 crc kubenswrapper[4946]: I1128 06:55:56.714966 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nxxk8" Nov 28 06:55:56 crc kubenswrapper[4946]: I1128 06:55:56.759766 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z4wx\" (UniqueName: \"kubernetes.io/projected/00f636a1-6272-432d-9b59-6b21b51f0038-kube-api-access-9z4wx\") pod \"00f636a1-6272-432d-9b59-6b21b51f0038\" (UID: \"00f636a1-6272-432d-9b59-6b21b51f0038\") " Nov 28 06:55:56 crc kubenswrapper[4946]: I1128 06:55:56.759906 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00f636a1-6272-432d-9b59-6b21b51f0038-utilities\") pod \"00f636a1-6272-432d-9b59-6b21b51f0038\" (UID: \"00f636a1-6272-432d-9b59-6b21b51f0038\") " Nov 28 06:55:56 crc kubenswrapper[4946]: I1128 06:55:56.759942 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00f636a1-6272-432d-9b59-6b21b51f0038-catalog-content\") pod \"00f636a1-6272-432d-9b59-6b21b51f0038\" (UID: \"00f636a1-6272-432d-9b59-6b21b51f0038\") " Nov 28 06:55:56 crc kubenswrapper[4946]: I1128 06:55:56.761522 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00f636a1-6272-432d-9b59-6b21b51f0038-utilities" (OuterVolumeSpecName: "utilities") pod "00f636a1-6272-432d-9b59-6b21b51f0038" (UID: "00f636a1-6272-432d-9b59-6b21b51f0038"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:55:56 crc kubenswrapper[4946]: I1128 06:55:56.767010 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00f636a1-6272-432d-9b59-6b21b51f0038-kube-api-access-9z4wx" (OuterVolumeSpecName: "kube-api-access-9z4wx") pod "00f636a1-6272-432d-9b59-6b21b51f0038" (UID: "00f636a1-6272-432d-9b59-6b21b51f0038"). InnerVolumeSpecName "kube-api-access-9z4wx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:55:56 crc kubenswrapper[4946]: I1128 06:55:56.781640 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00f636a1-6272-432d-9b59-6b21b51f0038-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "00f636a1-6272-432d-9b59-6b21b51f0038" (UID: "00f636a1-6272-432d-9b59-6b21b51f0038"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:55:56 crc kubenswrapper[4946]: I1128 06:55:56.861116 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09b629b2-f3d1-4856-98f2-5761c99f457a-catalog-content\") pod \"09b629b2-f3d1-4856-98f2-5761c99f457a\" (UID: \"09b629b2-f3d1-4856-98f2-5761c99f457a\") " Nov 28 06:55:56 crc kubenswrapper[4946]: I1128 06:55:56.861195 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09b629b2-f3d1-4856-98f2-5761c99f457a-utilities\") pod \"09b629b2-f3d1-4856-98f2-5761c99f457a\" (UID: \"09b629b2-f3d1-4856-98f2-5761c99f457a\") " Nov 28 06:55:56 crc kubenswrapper[4946]: I1128 06:55:56.861316 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qp8vk\" (UniqueName: \"kubernetes.io/projected/09b629b2-f3d1-4856-98f2-5761c99f457a-kube-api-access-qp8vk\") pod \"09b629b2-f3d1-4856-98f2-5761c99f457a\" (UID: \"09b629b2-f3d1-4856-98f2-5761c99f457a\") " Nov 28 06:55:56 crc kubenswrapper[4946]: I1128 06:55:56.861570 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00f636a1-6272-432d-9b59-6b21b51f0038-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 06:55:56 crc kubenswrapper[4946]: I1128 06:55:56.861588 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00f636a1-6272-432d-9b59-6b21b51f0038-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 06:55:56 crc kubenswrapper[4946]: I1128 06:55:56.861601 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9z4wx\" (UniqueName: \"kubernetes.io/projected/00f636a1-6272-432d-9b59-6b21b51f0038-kube-api-access-9z4wx\") on node \"crc\" DevicePath \"\"" Nov 28 06:55:56 crc kubenswrapper[4946]: I1128 06:55:56.862396 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09b629b2-f3d1-4856-98f2-5761c99f457a-utilities" (OuterVolumeSpecName: "utilities") pod "09b629b2-f3d1-4856-98f2-5761c99f457a" (UID: "09b629b2-f3d1-4856-98f2-5761c99f457a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:55:56 crc kubenswrapper[4946]: I1128 06:55:56.866206 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09b629b2-f3d1-4856-98f2-5761c99f457a-kube-api-access-qp8vk" (OuterVolumeSpecName: "kube-api-access-qp8vk") pod "09b629b2-f3d1-4856-98f2-5761c99f457a" (UID: "09b629b2-f3d1-4856-98f2-5761c99f457a"). InnerVolumeSpecName "kube-api-access-qp8vk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:55:56 crc kubenswrapper[4946]: I1128 06:55:56.906694 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09b629b2-f3d1-4856-98f2-5761c99f457a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "09b629b2-f3d1-4856-98f2-5761c99f457a" (UID: "09b629b2-f3d1-4856-98f2-5761c99f457a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:55:56 crc kubenswrapper[4946]: I1128 06:55:56.962539 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09b629b2-f3d1-4856-98f2-5761c99f457a-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 06:55:56 crc kubenswrapper[4946]: I1128 06:55:56.963762 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qp8vk\" (UniqueName: \"kubernetes.io/projected/09b629b2-f3d1-4856-98f2-5761c99f457a-kube-api-access-qp8vk\") on node \"crc\" DevicePath \"\"" Nov 28 06:55:56 crc kubenswrapper[4946]: I1128 06:55:56.963871 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09b629b2-f3d1-4856-98f2-5761c99f457a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 06:55:56 crc kubenswrapper[4946]: I1128 06:55:56.984911 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fjn4d" event={"ID":"00f636a1-6272-432d-9b59-6b21b51f0038","Type":"ContainerDied","Data":"35563ddff8e976e00c5d4f621c7b7e852527fa64b4c15689f856728714758da1"} Nov 28 06:55:56 crc kubenswrapper[4946]: I1128 06:55:56.984995 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fjn4d" Nov 28 06:55:56 crc kubenswrapper[4946]: I1128 06:55:56.985256 4946 scope.go:117] "RemoveContainer" containerID="5d62e064649a5e6a75cb87ffb6b23994e330c2b3cd76091ec23cdc922b42bb53" Nov 28 06:55:56 crc kubenswrapper[4946]: I1128 06:55:56.996989 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerStarted","Data":"434a29b0903cdd2aa9424d48ae517522642263b1fb83a0aeca569e01bd8b7068"} Nov 28 06:55:57 crc kubenswrapper[4946]: I1128 06:55:57.000181 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nxxk8" event={"ID":"09b629b2-f3d1-4856-98f2-5761c99f457a","Type":"ContainerDied","Data":"fe6dd611b20c5f66c7ad28709264ed3a2688a7055cebf85a49098965557b92bb"} Nov 28 06:55:57 crc kubenswrapper[4946]: I1128 06:55:57.000250 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nxxk8" Nov 28 06:55:57 crc kubenswrapper[4946]: I1128 06:55:57.051861 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fjn4d"] Nov 28 06:55:57 crc kubenswrapper[4946]: I1128 06:55:57.055171 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fjn4d"] Nov 28 06:55:57 crc kubenswrapper[4946]: I1128 06:55:57.080523 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nxxk8"] Nov 28 06:55:57 crc kubenswrapper[4946]: I1128 06:55:57.094142 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nxxk8"] Nov 28 06:55:57 crc kubenswrapper[4946]: I1128 06:55:57.284376 4946 scope.go:117] "RemoveContainer" containerID="c60b7608ad8dfb1ccab0a3a9251bb1f9ce0eb21ec27b18c6338e0ca1de8ba24c" Nov 28 06:55:57 crc kubenswrapper[4946]: I1128 06:55:57.305481 4946 scope.go:117] "RemoveContainer" containerID="53bd432122bd0627016d52028f018d21e3f1047889a140b7ab3e5c7c490b2376" Nov 28 06:55:57 crc kubenswrapper[4946]: I1128 06:55:57.323244 4946 scope.go:117] "RemoveContainer" containerID="906dd7d1102631d68e565db04ab0b9d27f5ff781fd2bfec94d0d8270648a2c07" Nov 28 06:55:57 crc kubenswrapper[4946]: I1128 06:55:57.340908 4946 scope.go:117] "RemoveContainer" containerID="7a4db7e5fc504001e92e3dc8a7cf5a8b710938bb35cf4c4dd2ac60edff73f29f" Nov 28 06:55:57 crc kubenswrapper[4946]: I1128 06:55:57.370312 4946 scope.go:117] "RemoveContainer" containerID="62d8856e00dceb21a249a38486f1902050b25d810933404627e2eb85ca809729" Nov 28 06:55:57 crc kubenswrapper[4946]: I1128 06:55:57.999810 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00f636a1-6272-432d-9b59-6b21b51f0038" path="/var/lib/kubelet/pods/00f636a1-6272-432d-9b59-6b21b51f0038/volumes" Nov 28 06:55:58 crc kubenswrapper[4946]: I1128 06:55:58.000735 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09b629b2-f3d1-4856-98f2-5761c99f457a" path="/var/lib/kubelet/pods/09b629b2-f3d1-4856-98f2-5761c99f457a/volumes" Nov 28 06:55:58 crc kubenswrapper[4946]: I1128 06:55:58.536367 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-59d99b5-ghmxj" Nov 28 06:55:58 crc kubenswrapper[4946]: I1128 06:55:58.542054 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-59d99b5-ghmxj" Nov 28 06:55:59 crc kubenswrapper[4946]: I1128 06:55:59.441636 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-f7mgl" Nov 28 06:55:59 crc kubenswrapper[4946]: I1128 06:55:59.442025 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-f7mgl" Nov 28 06:55:59 crc kubenswrapper[4946]: I1128 06:55:59.512102 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-f7mgl" Nov 28 06:56:00 crc kubenswrapper[4946]: I1128 06:56:00.084774 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-f7mgl" Nov 28 06:56:01 crc kubenswrapper[4946]: I1128 06:56:01.462647 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n9lpt" Nov 28 
06:56:01 crc kubenswrapper[4946]: I1128 06:56:01.506035 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n9lpt" Nov 28 06:56:02 crc kubenswrapper[4946]: I1128 06:56:02.621674 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-txxk5" Nov 28 06:56:02 crc kubenswrapper[4946]: I1128 06:56:02.621736 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-txxk5" Nov 28 06:56:02 crc kubenswrapper[4946]: I1128 06:56:02.684336 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-txxk5" Nov 28 06:56:03 crc kubenswrapper[4946]: I1128 06:56:03.081503 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-654687fc99-5lchk"] Nov 28 06:56:03 crc kubenswrapper[4946]: I1128 06:56:03.082133 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-654687fc99-5lchk" podUID="d8964807-1867-4e86-8d13-c2418e09290f" containerName="controller-manager" containerID="cri-o://d8ba0fdcb12d9df48fe8f486a39441e8d524cb65c7e5db72415aa0aefe018442" gracePeriod=30 Nov 28 06:56:03 crc kubenswrapper[4946]: I1128 06:56:03.098422 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59d99b5-ghmxj"] Nov 28 06:56:03 crc kubenswrapper[4946]: I1128 06:56:03.098756 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-59d99b5-ghmxj" podUID="874c9215-546d-4c25-bbb2-7dd82cffde6c" containerName="route-controller-manager" containerID="cri-o://41376c405cb72467547aec27aa26a2c94e611b91fb2ff480c48d31a29df37d4c" gracePeriod=30 Nov 28 06:56:03 crc kubenswrapper[4946]: I1128 06:56:03.123553 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-txxk5" Nov 28 06:56:03 crc kubenswrapper[4946]: I1128 06:56:03.137277 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6249s" Nov 28 06:56:03 crc kubenswrapper[4946]: I1128 06:56:03.239081 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6249s" Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.057538 4946 generic.go:334] "Generic (PLEG): container finished" podID="874c9215-546d-4c25-bbb2-7dd82cffde6c" containerID="41376c405cb72467547aec27aa26a2c94e611b91fb2ff480c48d31a29df37d4c" exitCode=0 Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.057641 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59d99b5-ghmxj" event={"ID":"874c9215-546d-4c25-bbb2-7dd82cffde6c","Type":"ContainerDied","Data":"41376c405cb72467547aec27aa26a2c94e611b91fb2ff480c48d31a29df37d4c"} Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.061729 4946 generic.go:334] "Generic (PLEG): container finished" podID="d8964807-1867-4e86-8d13-c2418e09290f" containerID="d8ba0fdcb12d9df48fe8f486a39441e8d524cb65c7e5db72415aa0aefe018442" exitCode=0 Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.061826 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-654687fc99-5lchk" 
event={"ID":"d8964807-1867-4e86-8d13-c2418e09290f","Type":"ContainerDied","Data":"d8ba0fdcb12d9df48fe8f486a39441e8d524cb65c7e5db72415aa0aefe018442"} Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.496569 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59d99b5-ghmxj" Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.538781 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b99fd5b68-qtcj2"] Nov 28 06:56:05 crc kubenswrapper[4946]: E1128 06:56:05.539189 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09b629b2-f3d1-4856-98f2-5761c99f457a" containerName="registry-server" Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.539205 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="09b629b2-f3d1-4856-98f2-5761c99f457a" containerName="registry-server" Nov 28 06:56:05 crc kubenswrapper[4946]: E1128 06:56:05.539213 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="874c9215-546d-4c25-bbb2-7dd82cffde6c" containerName="route-controller-manager" Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.539220 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="874c9215-546d-4c25-bbb2-7dd82cffde6c" containerName="route-controller-manager" Nov 28 06:56:05 crc kubenswrapper[4946]: E1128 06:56:05.539238 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70" containerName="extract-content" Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.539247 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70" containerName="extract-content" Nov 28 06:56:05 crc kubenswrapper[4946]: E1128 06:56:05.539263 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09b629b2-f3d1-4856-98f2-5761c99f457a" containerName="extract-utilities" Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.539274 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="09b629b2-f3d1-4856-98f2-5761c99f457a" containerName="extract-utilities" Nov 28 06:56:05 crc kubenswrapper[4946]: E1128 06:56:05.539285 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00f636a1-6272-432d-9b59-6b21b51f0038" containerName="registry-server" Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.539293 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="00f636a1-6272-432d-9b59-6b21b51f0038" containerName="registry-server" Nov 28 06:56:05 crc kubenswrapper[4946]: E1128 06:56:05.539309 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00f636a1-6272-432d-9b59-6b21b51f0038" containerName="extract-utilities" Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.539316 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="00f636a1-6272-432d-9b59-6b21b51f0038" containerName="extract-utilities" Nov 28 06:56:05 crc kubenswrapper[4946]: E1128 06:56:05.539328 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00f636a1-6272-432d-9b59-6b21b51f0038" containerName="extract-content" Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.539338 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="00f636a1-6272-432d-9b59-6b21b51f0038" containerName="extract-content" Nov 28 06:56:05 crc kubenswrapper[4946]: E1128 06:56:05.539349 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70" 
containerName="extract-utilities" Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.539358 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70" containerName="extract-utilities" Nov 28 06:56:05 crc kubenswrapper[4946]: E1128 06:56:05.539366 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09b629b2-f3d1-4856-98f2-5761c99f457a" containerName="extract-content" Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.539372 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="09b629b2-f3d1-4856-98f2-5761c99f457a" containerName="extract-content" Nov 28 06:56:05 crc kubenswrapper[4946]: E1128 06:56:05.539380 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70" containerName="registry-server" Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.539385 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70" containerName="registry-server" Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.539517 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce0e2c1f-e62e-4ea8-95ae-bf75a76a3a70" containerName="registry-server" Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.539536 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="874c9215-546d-4c25-bbb2-7dd82cffde6c" containerName="route-controller-manager" Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.539547 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="09b629b2-f3d1-4856-98f2-5761c99f457a" containerName="registry-server" Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.539558 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="00f636a1-6272-432d-9b59-6b21b51f0038" containerName="registry-server" Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.540120 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b99fd5b68-qtcj2" Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.548616 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-654687fc99-5lchk" Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.551448 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b99fd5b68-qtcj2"] Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.649267 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/874c9215-546d-4c25-bbb2-7dd82cffde6c-serving-cert\") pod \"874c9215-546d-4c25-bbb2-7dd82cffde6c\" (UID: \"874c9215-546d-4c25-bbb2-7dd82cffde6c\") " Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.649337 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d8964807-1867-4e86-8d13-c2418e09290f-proxy-ca-bundles\") pod \"d8964807-1867-4e86-8d13-c2418e09290f\" (UID: \"d8964807-1867-4e86-8d13-c2418e09290f\") " Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.649402 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnj6c\" (UniqueName: \"kubernetes.io/projected/d8964807-1867-4e86-8d13-c2418e09290f-kube-api-access-tnj6c\") pod \"d8964807-1867-4e86-8d13-c2418e09290f\" (UID: \"d8964807-1867-4e86-8d13-c2418e09290f\") " Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.649827 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/874c9215-546d-4c25-bbb2-7dd82cffde6c-config\") pod \"874c9215-546d-4c25-bbb2-7dd82cffde6c\" (UID: \"874c9215-546d-4c25-bbb2-7dd82cffde6c\") " Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.649910 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7269\" (UniqueName: \"kubernetes.io/projected/874c9215-546d-4c25-bbb2-7dd82cffde6c-kube-api-access-n7269\") pod \"874c9215-546d-4c25-bbb2-7dd82cffde6c\" (UID: \"874c9215-546d-4c25-bbb2-7dd82cffde6c\") " Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.649949 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/874c9215-546d-4c25-bbb2-7dd82cffde6c-client-ca\") pod \"874c9215-546d-4c25-bbb2-7dd82cffde6c\" (UID: \"874c9215-546d-4c25-bbb2-7dd82cffde6c\") " Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.649996 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8964807-1867-4e86-8d13-c2418e09290f-client-ca\") pod \"d8964807-1867-4e86-8d13-c2418e09290f\" (UID: \"d8964807-1867-4e86-8d13-c2418e09290f\") " Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.650048 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8964807-1867-4e86-8d13-c2418e09290f-serving-cert\") pod \"d8964807-1867-4e86-8d13-c2418e09290f\" (UID: \"d8964807-1867-4e86-8d13-c2418e09290f\") " Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.650119 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8964807-1867-4e86-8d13-c2418e09290f-config\") pod \"d8964807-1867-4e86-8d13-c2418e09290f\" (UID: \"d8964807-1867-4e86-8d13-c2418e09290f\") " Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.650360 4946 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnc7g\" (UniqueName: \"kubernetes.io/projected/ff1ba711-cfc3-45ae-a0d4-dc363b487d8e-kube-api-access-jnc7g\") pod \"route-controller-manager-7b99fd5b68-qtcj2\" (UID: \"ff1ba711-cfc3-45ae-a0d4-dc363b487d8e\") " pod="openshift-route-controller-manager/route-controller-manager-7b99fd5b68-qtcj2" Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.650407 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff1ba711-cfc3-45ae-a0d4-dc363b487d8e-config\") pod \"route-controller-manager-7b99fd5b68-qtcj2\" (UID: \"ff1ba711-cfc3-45ae-a0d4-dc363b487d8e\") " pod="openshift-route-controller-manager/route-controller-manager-7b99fd5b68-qtcj2" Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.650456 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff1ba711-cfc3-45ae-a0d4-dc363b487d8e-serving-cert\") pod \"route-controller-manager-7b99fd5b68-qtcj2\" (UID: \"ff1ba711-cfc3-45ae-a0d4-dc363b487d8e\") " pod="openshift-route-controller-manager/route-controller-manager-7b99fd5b68-qtcj2" Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.650504 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff1ba711-cfc3-45ae-a0d4-dc363b487d8e-client-ca\") pod \"route-controller-manager-7b99fd5b68-qtcj2\" (UID: \"ff1ba711-cfc3-45ae-a0d4-dc363b487d8e\") " pod="openshift-route-controller-manager/route-controller-manager-7b99fd5b68-qtcj2" Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.650745 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/874c9215-546d-4c25-bbb2-7dd82cffde6c-config" (OuterVolumeSpecName: "config") pod "874c9215-546d-4c25-bbb2-7dd82cffde6c" (UID: "874c9215-546d-4c25-bbb2-7dd82cffde6c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.650774 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8964807-1867-4e86-8d13-c2418e09290f-client-ca" (OuterVolumeSpecName: "client-ca") pod "d8964807-1867-4e86-8d13-c2418e09290f" (UID: "d8964807-1867-4e86-8d13-c2418e09290f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.651066 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8964807-1867-4e86-8d13-c2418e09290f-config" (OuterVolumeSpecName: "config") pod "d8964807-1867-4e86-8d13-c2418e09290f" (UID: "d8964807-1867-4e86-8d13-c2418e09290f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.651161 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8964807-1867-4e86-8d13-c2418e09290f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d8964807-1867-4e86-8d13-c2418e09290f" (UID: "d8964807-1867-4e86-8d13-c2418e09290f"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.651377 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/874c9215-546d-4c25-bbb2-7dd82cffde6c-client-ca" (OuterVolumeSpecName: "client-ca") pod "874c9215-546d-4c25-bbb2-7dd82cffde6c" (UID: "874c9215-546d-4c25-bbb2-7dd82cffde6c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.655022 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8964807-1867-4e86-8d13-c2418e09290f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d8964807-1867-4e86-8d13-c2418e09290f" (UID: "d8964807-1867-4e86-8d13-c2418e09290f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.655769 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/874c9215-546d-4c25-bbb2-7dd82cffde6c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "874c9215-546d-4c25-bbb2-7dd82cffde6c" (UID: "874c9215-546d-4c25-bbb2-7dd82cffde6c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.655839 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8964807-1867-4e86-8d13-c2418e09290f-kube-api-access-tnj6c" (OuterVolumeSpecName: "kube-api-access-tnj6c") pod "d8964807-1867-4e86-8d13-c2418e09290f" (UID: "d8964807-1867-4e86-8d13-c2418e09290f"). InnerVolumeSpecName "kube-api-access-tnj6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.655874 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/874c9215-546d-4c25-bbb2-7dd82cffde6c-kube-api-access-n7269" (OuterVolumeSpecName: "kube-api-access-n7269") pod "874c9215-546d-4c25-bbb2-7dd82cffde6c" (UID: "874c9215-546d-4c25-bbb2-7dd82cffde6c"). InnerVolumeSpecName "kube-api-access-n7269". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.752056 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff1ba711-cfc3-45ae-a0d4-dc363b487d8e-client-ca\") pod \"route-controller-manager-7b99fd5b68-qtcj2\" (UID: \"ff1ba711-cfc3-45ae-a0d4-dc363b487d8e\") " pod="openshift-route-controller-manager/route-controller-manager-7b99fd5b68-qtcj2" Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.752176 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnc7g\" (UniqueName: \"kubernetes.io/projected/ff1ba711-cfc3-45ae-a0d4-dc363b487d8e-kube-api-access-jnc7g\") pod \"route-controller-manager-7b99fd5b68-qtcj2\" (UID: \"ff1ba711-cfc3-45ae-a0d4-dc363b487d8e\") " pod="openshift-route-controller-manager/route-controller-manager-7b99fd5b68-qtcj2" Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.752219 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff1ba711-cfc3-45ae-a0d4-dc363b487d8e-config\") pod \"route-controller-manager-7b99fd5b68-qtcj2\" (UID: \"ff1ba711-cfc3-45ae-a0d4-dc363b487d8e\") " pod="openshift-route-controller-manager/route-controller-manager-7b99fd5b68-qtcj2" Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.752260 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff1ba711-cfc3-45ae-a0d4-dc363b487d8e-serving-cert\") pod \"route-controller-manager-7b99fd5b68-qtcj2\" (UID: \"ff1ba711-cfc3-45ae-a0d4-dc363b487d8e\") " pod="openshift-route-controller-manager/route-controller-manager-7b99fd5b68-qtcj2" Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.752310 4946 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8964807-1867-4e86-8d13-c2418e09290f-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.752325 4946 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/874c9215-546d-4c25-bbb2-7dd82cffde6c-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.752340 4946 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d8964807-1867-4e86-8d13-c2418e09290f-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.752353 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnj6c\" (UniqueName: \"kubernetes.io/projected/d8964807-1867-4e86-8d13-c2418e09290f-kube-api-access-tnj6c\") on node \"crc\" DevicePath \"\"" Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.752364 4946 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/874c9215-546d-4c25-bbb2-7dd82cffde6c-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.752375 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7269\" (UniqueName: \"kubernetes.io/projected/874c9215-546d-4c25-bbb2-7dd82cffde6c-kube-api-access-n7269\") on node \"crc\" DevicePath \"\"" Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.752386 4946 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/874c9215-546d-4c25-bbb2-7dd82cffde6c-client-ca\") on node \"crc\" DevicePath \"\"" Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.752399 4946 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8964807-1867-4e86-8d13-c2418e09290f-client-ca\") on node \"crc\" DevicePath \"\"" Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.752410 4946 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8964807-1867-4e86-8d13-c2418e09290f-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.753643 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff1ba711-cfc3-45ae-a0d4-dc363b487d8e-client-ca\") pod \"route-controller-manager-7b99fd5b68-qtcj2\" (UID: \"ff1ba711-cfc3-45ae-a0d4-dc363b487d8e\") " pod="openshift-route-controller-manager/route-controller-manager-7b99fd5b68-qtcj2" Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.754054 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff1ba711-cfc3-45ae-a0d4-dc363b487d8e-config\") pod \"route-controller-manager-7b99fd5b68-qtcj2\" (UID: \"ff1ba711-cfc3-45ae-a0d4-dc363b487d8e\") " pod="openshift-route-controller-manager/route-controller-manager-7b99fd5b68-qtcj2" Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.757519 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff1ba711-cfc3-45ae-a0d4-dc363b487d8e-serving-cert\") pod \"route-controller-manager-7b99fd5b68-qtcj2\" (UID: \"ff1ba711-cfc3-45ae-a0d4-dc363b487d8e\") " pod="openshift-route-controller-manager/route-controller-manager-7b99fd5b68-qtcj2" Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.777577 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnc7g\" (UniqueName: \"kubernetes.io/projected/ff1ba711-cfc3-45ae-a0d4-dc363b487d8e-kube-api-access-jnc7g\") pod \"route-controller-manager-7b99fd5b68-qtcj2\" (UID: \"ff1ba711-cfc3-45ae-a0d4-dc363b487d8e\") " pod="openshift-route-controller-manager/route-controller-manager-7b99fd5b68-qtcj2" Nov 28 06:56:05 crc kubenswrapper[4946]: I1128 06:56:05.862533 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b99fd5b68-qtcj2" Nov 28 06:56:06 crc kubenswrapper[4946]: I1128 06:56:06.073904 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-654687fc99-5lchk" event={"ID":"d8964807-1867-4e86-8d13-c2418e09290f","Type":"ContainerDied","Data":"59ed74cc82d8dd1dc0ad53378c95b32b0ec02c617fb6e0fb3e4627f64428dbb1"} Nov 28 06:56:06 crc kubenswrapper[4946]: I1128 06:56:06.074435 4946 scope.go:117] "RemoveContainer" containerID="d8ba0fdcb12d9df48fe8f486a39441e8d524cb65c7e5db72415aa0aefe018442" Nov 28 06:56:06 crc kubenswrapper[4946]: I1128 06:56:06.073942 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-654687fc99-5lchk" Nov 28 06:56:06 crc kubenswrapper[4946]: I1128 06:56:06.077856 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59d99b5-ghmxj" Nov 28 06:56:06 crc kubenswrapper[4946]: I1128 06:56:06.078512 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59d99b5-ghmxj" event={"ID":"874c9215-546d-4c25-bbb2-7dd82cffde6c","Type":"ContainerDied","Data":"1f5b70a086bf5550a569d27909e4c9dfb5e5453b222d766c2af6e901888a28e7"} Nov 28 06:56:06 crc kubenswrapper[4946]: I1128 06:56:06.096852 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-654687fc99-5lchk"] Nov 28 06:56:06 crc kubenswrapper[4946]: I1128 06:56:06.103179 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-654687fc99-5lchk"] Nov 28 06:56:06 crc kubenswrapper[4946]: I1128 06:56:06.107374 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b99fd5b68-qtcj2"] Nov 28 06:56:06 crc kubenswrapper[4946]: I1128 06:56:06.110439 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59d99b5-ghmxj"] Nov 28 06:56:06 crc kubenswrapper[4946]: I1128 06:56:06.111755 4946 scope.go:117] "RemoveContainer" containerID="41376c405cb72467547aec27aa26a2c94e611b91fb2ff480c48d31a29df37d4c" Nov 28 06:56:06 crc kubenswrapper[4946]: W1128 06:56:06.121831 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff1ba711_cfc3_45ae_a0d4_dc363b487d8e.slice/crio-0b8e110b868d4d9eba9c9a43b5a042b5772f6c7106d4c92199b1d3480949247f WatchSource:0}: Error finding container 0b8e110b868d4d9eba9c9a43b5a042b5772f6c7106d4c92199b1d3480949247f: Status 404 returned error can't find the container with id 0b8e110b868d4d9eba9c9a43b5a042b5772f6c7106d4c92199b1d3480949247f Nov 28 06:56:06 crc kubenswrapper[4946]: I1128 06:56:06.133602 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59d99b5-ghmxj"] Nov 28 06:56:06 crc kubenswrapper[4946]: I1128 06:56:06.276823 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6249s"] Nov 28 06:56:06 crc kubenswrapper[4946]: I1128 06:56:06.277532 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6249s" podUID="766b6f99-3942-4119-a40d-8b8b327e1880" containerName="registry-server" containerID="cri-o://5b3c370545a0f70c002a42a63a91b8561274596af34b443bd126e941884ecd8b" gracePeriod=2 Nov 28 06:56:06 crc kubenswrapper[4946]: I1128 06:56:06.723419 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6249s" Nov 28 06:56:06 crc kubenswrapper[4946]: I1128 06:56:06.871584 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/766b6f99-3942-4119-a40d-8b8b327e1880-catalog-content\") pod \"766b6f99-3942-4119-a40d-8b8b327e1880\" (UID: \"766b6f99-3942-4119-a40d-8b8b327e1880\") " Nov 28 06:56:06 crc kubenswrapper[4946]: I1128 06:56:06.871742 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/766b6f99-3942-4119-a40d-8b8b327e1880-utilities\") pod \"766b6f99-3942-4119-a40d-8b8b327e1880\" (UID: \"766b6f99-3942-4119-a40d-8b8b327e1880\") " Nov 28 06:56:06 crc kubenswrapper[4946]: I1128 06:56:06.871848 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2cmz\" (UniqueName: \"kubernetes.io/projected/766b6f99-3942-4119-a40d-8b8b327e1880-kube-api-access-w2cmz\") pod \"766b6f99-3942-4119-a40d-8b8b327e1880\" (UID: \"766b6f99-3942-4119-a40d-8b8b327e1880\") " Nov 28 06:56:06 crc kubenswrapper[4946]: I1128 06:56:06.873165 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/766b6f99-3942-4119-a40d-8b8b327e1880-utilities" (OuterVolumeSpecName: "utilities") pod "766b6f99-3942-4119-a40d-8b8b327e1880" (UID: "766b6f99-3942-4119-a40d-8b8b327e1880"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:56:06 crc kubenswrapper[4946]: I1128 06:56:06.878530 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/766b6f99-3942-4119-a40d-8b8b327e1880-kube-api-access-w2cmz" (OuterVolumeSpecName: "kube-api-access-w2cmz") pod "766b6f99-3942-4119-a40d-8b8b327e1880" (UID: "766b6f99-3942-4119-a40d-8b8b327e1880"). InnerVolumeSpecName "kube-api-access-w2cmz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:56:06 crc kubenswrapper[4946]: I1128 06:56:06.973622 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/766b6f99-3942-4119-a40d-8b8b327e1880-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 06:56:06 crc kubenswrapper[4946]: I1128 06:56:06.973676 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2cmz\" (UniqueName: \"kubernetes.io/projected/766b6f99-3942-4119-a40d-8b8b327e1880-kube-api-access-w2cmz\") on node \"crc\" DevicePath \"\"" Nov 28 06:56:07 crc kubenswrapper[4946]: I1128 06:56:07.018810 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/766b6f99-3942-4119-a40d-8b8b327e1880-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "766b6f99-3942-4119-a40d-8b8b327e1880" (UID: "766b6f99-3942-4119-a40d-8b8b327e1880"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:56:07 crc kubenswrapper[4946]: I1128 06:56:07.076437 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/766b6f99-3942-4119-a40d-8b8b327e1880-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 06:56:07 crc kubenswrapper[4946]: I1128 06:56:07.086889 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b99fd5b68-qtcj2" event={"ID":"ff1ba711-cfc3-45ae-a0d4-dc363b487d8e","Type":"ContainerStarted","Data":"1af8d66970daa0399c1f31ec290c292ac8c1a991852a074e49bae7e96498d694"} Nov 28 06:56:07 crc kubenswrapper[4946]: I1128 06:56:07.086957 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b99fd5b68-qtcj2" event={"ID":"ff1ba711-cfc3-45ae-a0d4-dc363b487d8e","Type":"ContainerStarted","Data":"0b8e110b868d4d9eba9c9a43b5a042b5772f6c7106d4c92199b1d3480949247f"} Nov 28 06:56:07 crc kubenswrapper[4946]: I1128 06:56:07.087345 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7b99fd5b68-qtcj2" Nov 28 06:56:07 crc kubenswrapper[4946]: I1128 06:56:07.091869 4946 generic.go:334] "Generic (PLEG): container finished" podID="766b6f99-3942-4119-a40d-8b8b327e1880" containerID="5b3c370545a0f70c002a42a63a91b8561274596af34b443bd126e941884ecd8b" exitCode=0 Nov 28 06:56:07 crc kubenswrapper[4946]: I1128 06:56:07.091948 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6249s" event={"ID":"766b6f99-3942-4119-a40d-8b8b327e1880","Type":"ContainerDied","Data":"5b3c370545a0f70c002a42a63a91b8561274596af34b443bd126e941884ecd8b"} Nov 28 06:56:07 crc kubenswrapper[4946]: I1128 06:56:07.092047 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6249s" Nov 28 06:56:07 crc kubenswrapper[4946]: I1128 06:56:07.092097 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6249s" event={"ID":"766b6f99-3942-4119-a40d-8b8b327e1880","Type":"ContainerDied","Data":"31069b5fe91781c20e49addaa144a1c41fb5e24bcc19e7c2a26a6b772e27beff"} Nov 28 06:56:07 crc kubenswrapper[4946]: I1128 06:56:07.092127 4946 scope.go:117] "RemoveContainer" containerID="5b3c370545a0f70c002a42a63a91b8561274596af34b443bd126e941884ecd8b" Nov 28 06:56:07 crc kubenswrapper[4946]: I1128 06:56:07.104558 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7b99fd5b68-qtcj2" Nov 28 06:56:07 crc kubenswrapper[4946]: I1128 06:56:07.119450 4946 scope.go:117] "RemoveContainer" containerID="d220eaceb193c9974d3213b152b7cc90a5049e718728b7193c35b159b2a96482" Nov 28 06:56:07 crc kubenswrapper[4946]: I1128 06:56:07.157740 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7b99fd5b68-qtcj2" podStartSLOduration=4.157718238 podStartE2EDuration="4.157718238s" podCreationTimestamp="2025-11-28 06:56:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:56:07.121427071 +0000 UTC m=+221.499492222" watchObservedRunningTime="2025-11-28 06:56:07.157718238 +0000 UTC m=+221.535783349" Nov 28 06:56:07 crc kubenswrapper[4946]: I1128 06:56:07.164234 4946 scope.go:117] "RemoveContainer" containerID="ac1c13b6d6e8915d63f7159a37fecca5e57caf98fba55acf4303bd5948c910fc" Nov 28 06:56:07 crc kubenswrapper[4946]: I1128 06:56:07.177757 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6249s"] Nov 28 06:56:07 crc kubenswrapper[4946]: I1128 06:56:07.182026 4946 scope.go:117] "RemoveContainer" containerID="5b3c370545a0f70c002a42a63a91b8561274596af34b443bd126e941884ecd8b" Nov 28 06:56:07 crc kubenswrapper[4946]: E1128 06:56:07.183233 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b3c370545a0f70c002a42a63a91b8561274596af34b443bd126e941884ecd8b\": container with ID starting with 5b3c370545a0f70c002a42a63a91b8561274596af34b443bd126e941884ecd8b not found: ID does not exist" containerID="5b3c370545a0f70c002a42a63a91b8561274596af34b443bd126e941884ecd8b" Nov 28 06:56:07 crc kubenswrapper[4946]: I1128 06:56:07.183285 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b3c370545a0f70c002a42a63a91b8561274596af34b443bd126e941884ecd8b"} err="failed to get container status \"5b3c370545a0f70c002a42a63a91b8561274596af34b443bd126e941884ecd8b\": rpc error: code = NotFound desc = could not find container \"5b3c370545a0f70c002a42a63a91b8561274596af34b443bd126e941884ecd8b\": container with ID starting with 5b3c370545a0f70c002a42a63a91b8561274596af34b443bd126e941884ecd8b not found: ID does not exist" Nov 28 06:56:07 crc kubenswrapper[4946]: I1128 06:56:07.183319 4946 scope.go:117] "RemoveContainer" containerID="d220eaceb193c9974d3213b152b7cc90a5049e718728b7193c35b159b2a96482" Nov 28 06:56:07 crc kubenswrapper[4946]: E1128 06:56:07.183781 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d220eaceb193c9974d3213b152b7cc90a5049e718728b7193c35b159b2a96482\": container with ID starting with d220eaceb193c9974d3213b152b7cc90a5049e718728b7193c35b159b2a96482 not found: ID does not exist" containerID="d220eaceb193c9974d3213b152b7cc90a5049e718728b7193c35b159b2a96482" Nov 28 06:56:07 crc kubenswrapper[4946]: I1128 06:56:07.183804 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d220eaceb193c9974d3213b152b7cc90a5049e718728b7193c35b159b2a96482"} err="failed to get container status \"d220eaceb193c9974d3213b152b7cc90a5049e718728b7193c35b159b2a96482\": rpc error: code = NotFound desc = could not find container \"d220eaceb193c9974d3213b152b7cc90a5049e718728b7193c35b159b2a96482\": container with ID starting with d220eaceb193c9974d3213b152b7cc90a5049e718728b7193c35b159b2a96482 not found: ID does not exist" Nov 28 06:56:07 crc kubenswrapper[4946]: I1128 06:56:07.183819 4946 scope.go:117] "RemoveContainer" containerID="ac1c13b6d6e8915d63f7159a37fecca5e57caf98fba55acf4303bd5948c910fc" Nov 28 06:56:07 crc kubenswrapper[4946]: E1128 06:56:07.184019 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac1c13b6d6e8915d63f7159a37fecca5e57caf98fba55acf4303bd5948c910fc\": container with ID starting with ac1c13b6d6e8915d63f7159a37fecca5e57caf98fba55acf4303bd5948c910fc not found: ID does not exist" containerID="ac1c13b6d6e8915d63f7159a37fecca5e57caf98fba55acf4303bd5948c910fc" Nov 28 06:56:07 crc kubenswrapper[4946]: I1128 06:56:07.184037 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac1c13b6d6e8915d63f7159a37fecca5e57caf98fba55acf4303bd5948c910fc"} err="failed to get container status \"ac1c13b6d6e8915d63f7159a37fecca5e57caf98fba55acf4303bd5948c910fc\": rpc error: code = NotFound desc = could not find container \"ac1c13b6d6e8915d63f7159a37fecca5e57caf98fba55acf4303bd5948c910fc\": container with ID starting with ac1c13b6d6e8915d63f7159a37fecca5e57caf98fba55acf4303bd5948c910fc not found: ID does not exist" Nov 28 06:56:07 crc kubenswrapper[4946]: I1128 06:56:07.184524 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6249s"] Nov 28 06:56:07 crc kubenswrapper[4946]: I1128 06:56:07.998830 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="766b6f99-3942-4119-a40d-8b8b327e1880" path="/var/lib/kubelet/pods/766b6f99-3942-4119-a40d-8b8b327e1880/volumes" Nov 28 06:56:08 crc kubenswrapper[4946]: I1128 06:56:08.001356 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="874c9215-546d-4c25-bbb2-7dd82cffde6c" path="/var/lib/kubelet/pods/874c9215-546d-4c25-bbb2-7dd82cffde6c/volumes" Nov 28 06:56:08 crc kubenswrapper[4946]: I1128 06:56:08.002530 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8964807-1867-4e86-8d13-c2418e09290f" path="/var/lib/kubelet/pods/d8964807-1867-4e86-8d13-c2418e09290f/volumes" Nov 28 06:56:08 crc kubenswrapper[4946]: I1128 06:56:08.211221 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-678b65684-p5mvx"] Nov 28 06:56:08 crc kubenswrapper[4946]: E1128 06:56:08.211587 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8964807-1867-4e86-8d13-c2418e09290f" containerName="controller-manager" Nov 28 06:56:08 crc kubenswrapper[4946]: I1128 06:56:08.211610 4946 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d8964807-1867-4e86-8d13-c2418e09290f" containerName="controller-manager" Nov 28 06:56:08 crc kubenswrapper[4946]: E1128 06:56:08.211648 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="766b6f99-3942-4119-a40d-8b8b327e1880" containerName="extract-utilities" Nov 28 06:56:08 crc kubenswrapper[4946]: I1128 06:56:08.211661 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="766b6f99-3942-4119-a40d-8b8b327e1880" containerName="extract-utilities" Nov 28 06:56:08 crc kubenswrapper[4946]: E1128 06:56:08.211678 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="766b6f99-3942-4119-a40d-8b8b327e1880" containerName="registry-server" Nov 28 06:56:08 crc kubenswrapper[4946]: I1128 06:56:08.211690 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="766b6f99-3942-4119-a40d-8b8b327e1880" containerName="registry-server" Nov 28 06:56:08 crc kubenswrapper[4946]: E1128 06:56:08.211728 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="766b6f99-3942-4119-a40d-8b8b327e1880" containerName="extract-content" Nov 28 06:56:08 crc kubenswrapper[4946]: I1128 06:56:08.211740 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="766b6f99-3942-4119-a40d-8b8b327e1880" containerName="extract-content" Nov 28 06:56:08 crc kubenswrapper[4946]: I1128 06:56:08.211912 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8964807-1867-4e86-8d13-c2418e09290f" containerName="controller-manager" Nov 28 06:56:08 crc kubenswrapper[4946]: I1128 06:56:08.211934 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="766b6f99-3942-4119-a40d-8b8b327e1880" containerName="registry-server" Nov 28 06:56:08 crc kubenswrapper[4946]: I1128 06:56:08.212920 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-678b65684-p5mvx" Nov 28 06:56:08 crc kubenswrapper[4946]: I1128 06:56:08.215654 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 28 06:56:08 crc kubenswrapper[4946]: I1128 06:56:08.215716 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 28 06:56:08 crc kubenswrapper[4946]: I1128 06:56:08.215886 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 28 06:56:08 crc kubenswrapper[4946]: I1128 06:56:08.215964 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 28 06:56:08 crc kubenswrapper[4946]: I1128 06:56:08.216163 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 28 06:56:08 crc kubenswrapper[4946]: I1128 06:56:08.216188 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 28 06:56:08 crc kubenswrapper[4946]: I1128 06:56:08.224775 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-678b65684-p5mvx"] Nov 28 06:56:08 crc kubenswrapper[4946]: I1128 06:56:08.225848 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 28 06:56:08 crc kubenswrapper[4946]: I1128 06:56:08.396769 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/905018b6-f87d-40b7-91a1-e87626269917-serving-cert\") pod \"controller-manager-678b65684-p5mvx\" (UID: \"905018b6-f87d-40b7-91a1-e87626269917\") " pod="openshift-controller-manager/controller-manager-678b65684-p5mvx" Nov 28 06:56:08 crc kubenswrapper[4946]: I1128 06:56:08.396973 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/905018b6-f87d-40b7-91a1-e87626269917-proxy-ca-bundles\") pod \"controller-manager-678b65684-p5mvx\" (UID: \"905018b6-f87d-40b7-91a1-e87626269917\") " pod="openshift-controller-manager/controller-manager-678b65684-p5mvx" Nov 28 06:56:08 crc kubenswrapper[4946]: I1128 06:56:08.397027 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntdcz\" (UniqueName: \"kubernetes.io/projected/905018b6-f87d-40b7-91a1-e87626269917-kube-api-access-ntdcz\") pod \"controller-manager-678b65684-p5mvx\" (UID: \"905018b6-f87d-40b7-91a1-e87626269917\") " pod="openshift-controller-manager/controller-manager-678b65684-p5mvx" Nov 28 06:56:08 crc kubenswrapper[4946]: I1128 06:56:08.397075 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/905018b6-f87d-40b7-91a1-e87626269917-client-ca\") pod \"controller-manager-678b65684-p5mvx\" (UID: \"905018b6-f87d-40b7-91a1-e87626269917\") " pod="openshift-controller-manager/controller-manager-678b65684-p5mvx" Nov 28 06:56:08 crc kubenswrapper[4946]: I1128 06:56:08.397130 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/905018b6-f87d-40b7-91a1-e87626269917-config\") pod \"controller-manager-678b65684-p5mvx\" (UID: \"905018b6-f87d-40b7-91a1-e87626269917\") " pod="openshift-controller-manager/controller-manager-678b65684-p5mvx" Nov 28 06:56:08 crc kubenswrapper[4946]: I1128 06:56:08.498131 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/905018b6-f87d-40b7-91a1-e87626269917-serving-cert\") pod \"controller-manager-678b65684-p5mvx\" (UID: \"905018b6-f87d-40b7-91a1-e87626269917\") " pod="openshift-controller-manager/controller-manager-678b65684-p5mvx" Nov 28 06:56:08 crc kubenswrapper[4946]: I1128 06:56:08.498251 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/905018b6-f87d-40b7-91a1-e87626269917-proxy-ca-bundles\") pod \"controller-manager-678b65684-p5mvx\" (UID: \"905018b6-f87d-40b7-91a1-e87626269917\") " pod="openshift-controller-manager/controller-manager-678b65684-p5mvx" Nov 28 06:56:08 crc kubenswrapper[4946]: I1128 06:56:08.498291 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntdcz\" (UniqueName: \"kubernetes.io/projected/905018b6-f87d-40b7-91a1-e87626269917-kube-api-access-ntdcz\") pod \"controller-manager-678b65684-p5mvx\" (UID: \"905018b6-f87d-40b7-91a1-e87626269917\") " pod="openshift-controller-manager/controller-manager-678b65684-p5mvx" Nov 28 06:56:08 crc kubenswrapper[4946]: I1128 06:56:08.498336 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/905018b6-f87d-40b7-91a1-e87626269917-client-ca\") pod \"controller-manager-678b65684-p5mvx\" (UID: \"905018b6-f87d-40b7-91a1-e87626269917\") " pod="openshift-controller-manager/controller-manager-678b65684-p5mvx" Nov 28 06:56:08 crc kubenswrapper[4946]: I1128 06:56:08.498365 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/905018b6-f87d-40b7-91a1-e87626269917-config\") pod \"controller-manager-678b65684-p5mvx\" (UID: \"905018b6-f87d-40b7-91a1-e87626269917\") " pod="openshift-controller-manager/controller-manager-678b65684-p5mvx" Nov 28 06:56:08 crc kubenswrapper[4946]: I1128 06:56:08.500245 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/905018b6-f87d-40b7-91a1-e87626269917-config\") pod \"controller-manager-678b65684-p5mvx\" (UID: \"905018b6-f87d-40b7-91a1-e87626269917\") " pod="openshift-controller-manager/controller-manager-678b65684-p5mvx" Nov 28 06:56:08 crc kubenswrapper[4946]: I1128 06:56:08.500423 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/905018b6-f87d-40b7-91a1-e87626269917-client-ca\") pod \"controller-manager-678b65684-p5mvx\" (UID: \"905018b6-f87d-40b7-91a1-e87626269917\") " pod="openshift-controller-manager/controller-manager-678b65684-p5mvx" Nov 28 06:56:08 crc kubenswrapper[4946]: I1128 06:56:08.500896 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/905018b6-f87d-40b7-91a1-e87626269917-proxy-ca-bundles\") pod \"controller-manager-678b65684-p5mvx\" (UID: \"905018b6-f87d-40b7-91a1-e87626269917\") " pod="openshift-controller-manager/controller-manager-678b65684-p5mvx" Nov 28 06:56:08 crc 
kubenswrapper[4946]: I1128 06:56:08.512418 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/905018b6-f87d-40b7-91a1-e87626269917-serving-cert\") pod \"controller-manager-678b65684-p5mvx\" (UID: \"905018b6-f87d-40b7-91a1-e87626269917\") " pod="openshift-controller-manager/controller-manager-678b65684-p5mvx" Nov 28 06:56:08 crc kubenswrapper[4946]: I1128 06:56:08.523274 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntdcz\" (UniqueName: \"kubernetes.io/projected/905018b6-f87d-40b7-91a1-e87626269917-kube-api-access-ntdcz\") pod \"controller-manager-678b65684-p5mvx\" (UID: \"905018b6-f87d-40b7-91a1-e87626269917\") " pod="openshift-controller-manager/controller-manager-678b65684-p5mvx" Nov 28 06:56:08 crc kubenswrapper[4946]: I1128 06:56:08.531848 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-678b65684-p5mvx" Nov 28 06:56:08 crc kubenswrapper[4946]: I1128 06:56:08.735230 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-678b65684-p5mvx"] Nov 28 06:56:09 crc kubenswrapper[4946]: I1128 06:56:09.110383 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-678b65684-p5mvx" event={"ID":"905018b6-f87d-40b7-91a1-e87626269917","Type":"ContainerStarted","Data":"f0dffd178a29af4855cb9f20ef35545a2e6710cfde21465f9fe37ef60036b8bc"} Nov 28 06:56:09 crc kubenswrapper[4946]: I1128 06:56:09.110863 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-678b65684-p5mvx" event={"ID":"905018b6-f87d-40b7-91a1-e87626269917","Type":"ContainerStarted","Data":"0c04fbc6c13d78fbfc11d5c239d84d0f4e032e7de4afac578d7eb260611d1945"} Nov 28 06:56:09 crc kubenswrapper[4946]: I1128 06:56:09.172876 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-678b65684-p5mvx" podStartSLOduration=6.17285376 podStartE2EDuration="6.17285376s" podCreationTimestamp="2025-11-28 06:56:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:56:09.171761602 +0000 UTC m=+223.549826713" watchObservedRunningTime="2025-11-28 06:56:09.17285376 +0000 UTC m=+223.550918871" Nov 28 06:56:10 crc kubenswrapper[4946]: I1128 06:56:10.117879 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-678b65684-p5mvx" Nov 28 06:56:10 crc kubenswrapper[4946]: I1128 06:56:10.127690 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-678b65684-p5mvx" Nov 28 06:56:16 crc kubenswrapper[4946]: I1128 06:56:16.293533 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-m444h" podUID="830f7e38-bc1c-4897-bcc9-0266da4d74d0" containerName="oauth-openshift" containerID="cri-o://e0bb8ecd61a78d5e064a9fd14b7b65468fa9ac78827e7c6ce51b3acc43456626" gracePeriod=15 Nov 28 06:56:17 crc kubenswrapper[4946]: I1128 06:56:17.179070 4946 generic.go:334] "Generic (PLEG): container finished" podID="830f7e38-bc1c-4897-bcc9-0266da4d74d0" containerID="e0bb8ecd61a78d5e064a9fd14b7b65468fa9ac78827e7c6ce51b3acc43456626" exitCode=0 Nov 28 06:56:17 crc 
kubenswrapper[4946]: I1128 06:56:17.179144 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-m444h" event={"ID":"830f7e38-bc1c-4897-bcc9-0266da4d74d0","Type":"ContainerDied","Data":"e0bb8ecd61a78d5e064a9fd14b7b65468fa9ac78827e7c6ce51b3acc43456626"} Nov 28 06:56:17 crc kubenswrapper[4946]: I1128 06:56:17.269293 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-m444h" Nov 28 06:56:17 crc kubenswrapper[4946]: I1128 06:56:17.440501 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-user-template-login\") pod \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\" (UID: \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\") " Nov 28 06:56:17 crc kubenswrapper[4946]: I1128 06:56:17.440553 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/830f7e38-bc1c-4897-bcc9-0266da4d74d0-audit-dir\") pod \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\" (UID: \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\") " Nov 28 06:56:17 crc kubenswrapper[4946]: I1128 06:56:17.440609 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-system-trusted-ca-bundle\") pod \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\" (UID: \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\") " Nov 28 06:56:17 crc kubenswrapper[4946]: I1128 06:56:17.440651 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-user-template-error\") pod \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\" (UID: \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\") " Nov 28 06:56:17 crc kubenswrapper[4946]: I1128 06:56:17.440692 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-user-template-provider-selection\") pod \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\" (UID: \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\") " Nov 28 06:56:17 crc kubenswrapper[4946]: I1128 06:56:17.440755 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-system-cliconfig\") pod \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\" (UID: \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\") " Nov 28 06:56:17 crc kubenswrapper[4946]: I1128 06:56:17.440747 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/830f7e38-bc1c-4897-bcc9-0266da4d74d0-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "830f7e38-bc1c-4897-bcc9-0266da4d74d0" (UID: "830f7e38-bc1c-4897-bcc9-0266da4d74d0"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 06:56:17 crc kubenswrapper[4946]: I1128 06:56:17.440785 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-system-service-ca\") pod \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\" (UID: \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\") " Nov 28 06:56:17 crc kubenswrapper[4946]: I1128 06:56:17.440977 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-system-serving-cert\") pod \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\" (UID: \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\") " Nov 28 06:56:17 crc kubenswrapper[4946]: I1128 06:56:17.441016 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-system-session\") pod \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\" (UID: \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\") " Nov 28 06:56:17 crc kubenswrapper[4946]: I1128 06:56:17.441070 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-system-router-certs\") pod \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\" (UID: \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\") " Nov 28 06:56:17 crc kubenswrapper[4946]: I1128 06:56:17.441092 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-477cb\" (UniqueName: \"kubernetes.io/projected/830f7e38-bc1c-4897-bcc9-0266da4d74d0-kube-api-access-477cb\") pod \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\" (UID: \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\") " Nov 28 06:56:17 crc kubenswrapper[4946]: I1128 06:56:17.441119 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/830f7e38-bc1c-4897-bcc9-0266da4d74d0-audit-policies\") pod \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\" (UID: \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\") " Nov 28 06:56:17 crc kubenswrapper[4946]: I1128 06:56:17.441313 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-system-ocp-branding-template\") pod \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\" (UID: \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\") " Nov 28 06:56:17 crc kubenswrapper[4946]: I1128 06:56:17.441340 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-user-idp-0-file-data\") pod \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\" (UID: \"830f7e38-bc1c-4897-bcc9-0266da4d74d0\") " Nov 28 06:56:17 crc kubenswrapper[4946]: I1128 06:56:17.441871 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "830f7e38-bc1c-4897-bcc9-0266da4d74d0" (UID: "830f7e38-bc1c-4897-bcc9-0266da4d74d0"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:56:17 crc kubenswrapper[4946]: I1128 06:56:17.441892 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "830f7e38-bc1c-4897-bcc9-0266da4d74d0" (UID: "830f7e38-bc1c-4897-bcc9-0266da4d74d0"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:56:17 crc kubenswrapper[4946]: I1128 06:56:17.441918 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "830f7e38-bc1c-4897-bcc9-0266da4d74d0" (UID: "830f7e38-bc1c-4897-bcc9-0266da4d74d0"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:56:17 crc kubenswrapper[4946]: I1128 06:56:17.442874 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/830f7e38-bc1c-4897-bcc9-0266da4d74d0-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "830f7e38-bc1c-4897-bcc9-0266da4d74d0" (UID: "830f7e38-bc1c-4897-bcc9-0266da4d74d0"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:56:17 crc kubenswrapper[4946]: I1128 06:56:17.442979 4946 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 28 06:56:17 crc kubenswrapper[4946]: I1128 06:56:17.442998 4946 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 28 06:56:17 crc kubenswrapper[4946]: I1128 06:56:17.443013 4946 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/830f7e38-bc1c-4897-bcc9-0266da4d74d0-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 28 06:56:17 crc kubenswrapper[4946]: I1128 06:56:17.443029 4946 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/830f7e38-bc1c-4897-bcc9-0266da4d74d0-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 28 06:56:17 crc kubenswrapper[4946]: I1128 06:56:17.443040 4946 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:56:17 crc kubenswrapper[4946]: I1128 06:56:17.447494 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "830f7e38-bc1c-4897-bcc9-0266da4d74d0" (UID: "830f7e38-bc1c-4897-bcc9-0266da4d74d0"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:56:17 crc kubenswrapper[4946]: I1128 06:56:17.448002 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/830f7e38-bc1c-4897-bcc9-0266da4d74d0-kube-api-access-477cb" (OuterVolumeSpecName: "kube-api-access-477cb") pod "830f7e38-bc1c-4897-bcc9-0266da4d74d0" (UID: "830f7e38-bc1c-4897-bcc9-0266da4d74d0"). InnerVolumeSpecName "kube-api-access-477cb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:56:17 crc kubenswrapper[4946]: I1128 06:56:17.448718 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "830f7e38-bc1c-4897-bcc9-0266da4d74d0" (UID: "830f7e38-bc1c-4897-bcc9-0266da4d74d0"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:56:17 crc kubenswrapper[4946]: I1128 06:56:17.449708 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "830f7e38-bc1c-4897-bcc9-0266da4d74d0" (UID: "830f7e38-bc1c-4897-bcc9-0266da4d74d0"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:56:17 crc kubenswrapper[4946]: I1128 06:56:17.450223 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "830f7e38-bc1c-4897-bcc9-0266da4d74d0" (UID: "830f7e38-bc1c-4897-bcc9-0266da4d74d0"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:56:17 crc kubenswrapper[4946]: I1128 06:56:17.450554 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "830f7e38-bc1c-4897-bcc9-0266da4d74d0" (UID: "830f7e38-bc1c-4897-bcc9-0266da4d74d0"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:56:17 crc kubenswrapper[4946]: I1128 06:56:17.452882 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "830f7e38-bc1c-4897-bcc9-0266da4d74d0" (UID: "830f7e38-bc1c-4897-bcc9-0266da4d74d0"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:56:17 crc kubenswrapper[4946]: I1128 06:56:17.453867 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "830f7e38-bc1c-4897-bcc9-0266da4d74d0" (UID: "830f7e38-bc1c-4897-bcc9-0266da4d74d0"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:56:17 crc kubenswrapper[4946]: I1128 06:56:17.455260 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "830f7e38-bc1c-4897-bcc9-0266da4d74d0" (UID: "830f7e38-bc1c-4897-bcc9-0266da4d74d0"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:56:17 crc kubenswrapper[4946]: I1128 06:56:17.544017 4946 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:56:17 crc kubenswrapper[4946]: I1128 06:56:17.544101 4946 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 28 06:56:17 crc kubenswrapper[4946]: I1128 06:56:17.544119 4946 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 28 06:56:17 crc kubenswrapper[4946]: I1128 06:56:17.544131 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-477cb\" (UniqueName: \"kubernetes.io/projected/830f7e38-bc1c-4897-bcc9-0266da4d74d0-kube-api-access-477cb\") on node \"crc\" DevicePath \"\"" Nov 28 06:56:17 crc kubenswrapper[4946]: I1128 06:56:17.544143 4946 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 28 06:56:17 crc kubenswrapper[4946]: I1128 06:56:17.544154 4946 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 28 06:56:17 crc kubenswrapper[4946]: I1128 06:56:17.544164 4946 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 28 06:56:17 crc kubenswrapper[4946]: I1128 06:56:17.544173 4946 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 28 06:56:17 crc kubenswrapper[4946]: I1128 06:56:17.544185 4946 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/830f7e38-bc1c-4897-bcc9-0266da4d74d0-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 28 06:56:18 crc kubenswrapper[4946]: I1128 06:56:18.189672 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-m444h" 
event={"ID":"830f7e38-bc1c-4897-bcc9-0266da4d74d0","Type":"ContainerDied","Data":"bc0c4f5cf319218b0ce27c9c45cdb4bfebd835db3a194c418d5c2e9df80ecf72"} Nov 28 06:56:18 crc kubenswrapper[4946]: I1128 06:56:18.189757 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-m444h" Nov 28 06:56:18 crc kubenswrapper[4946]: I1128 06:56:18.189784 4946 scope.go:117] "RemoveContainer" containerID="e0bb8ecd61a78d5e064a9fd14b7b65468fa9ac78827e7c6ce51b3acc43456626" Nov 28 06:56:18 crc kubenswrapper[4946]: I1128 06:56:18.226755 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-m444h"] Nov 28 06:56:18 crc kubenswrapper[4946]: I1128 06:56:18.235089 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-m444h"] Nov 28 06:56:18 crc kubenswrapper[4946]: I1128 06:56:18.641860 4946 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 28 06:56:18 crc kubenswrapper[4946]: E1128 06:56:18.642301 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="830f7e38-bc1c-4897-bcc9-0266da4d74d0" containerName="oauth-openshift" Nov 28 06:56:18 crc kubenswrapper[4946]: I1128 06:56:18.642317 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="830f7e38-bc1c-4897-bcc9-0266da4d74d0" containerName="oauth-openshift" Nov 28 06:56:18 crc kubenswrapper[4946]: I1128 06:56:18.642518 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="830f7e38-bc1c-4897-bcc9-0266da4d74d0" containerName="oauth-openshift" Nov 28 06:56:18 crc kubenswrapper[4946]: I1128 06:56:18.643079 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 06:56:18 crc kubenswrapper[4946]: I1128 06:56:18.660223 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 06:56:18 crc kubenswrapper[4946]: I1128 06:56:18.660320 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 06:56:18 crc kubenswrapper[4946]: I1128 06:56:18.660552 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 06:56:18 crc kubenswrapper[4946]: I1128 06:56:18.660637 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 06:56:18 crc kubenswrapper[4946]: I1128 06:56:18.660686 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 06:56:18 crc kubenswrapper[4946]: I1128 06:56:18.687641 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 28 06:56:18 crc kubenswrapper[4946]: I1128 06:56:18.709367 4946 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 28 06:56:18 crc kubenswrapper[4946]: I1128 06:56:18.709850 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://c5be506bd8debbc12ff5ca56d77fdf667855d2c4078cea1b1d1388f5ef8831ea" gracePeriod=15 Nov 28 06:56:18 crc kubenswrapper[4946]: I1128 06:56:18.709900 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://1a7152b6653aaccd36bb018d24abf196dd6f13d41edddb0939917f96bcd9e9bb" gracePeriod=15 Nov 28 06:56:18 crc kubenswrapper[4946]: I1128 06:56:18.710042 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-insecure-readyz" containerID="cri-o://326666c58a9c75d0f08586c82bca1a0686ed2935f475b3c359cddc01875b248e" gracePeriod=15 Nov 28 06:56:18 crc kubenswrapper[4946]: I1128 06:56:18.710231 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://de5dc7ef4ea06b54b6de621ebe79d3587324b0c4cdd90e078d4116dc5252ad35" gracePeriod=15 Nov 28 06:56:18 crc kubenswrapper[4946]: I1128 06:56:18.712248 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://eef1ad3ca814effd7a94c23d46fc3b14149ccc6ae6181e19c5f24884967b7627" gracePeriod=15 Nov 28 06:56:18 crc kubenswrapper[4946]: I1128 06:56:18.714551 4946 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 28 06:56:18 crc kubenswrapper[4946]: E1128 06:56:18.714884 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Nov 28 06:56:18 crc kubenswrapper[4946]: I1128 06:56:18.714906 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Nov 28 06:56:18 crc kubenswrapper[4946]: E1128 06:56:18.714927 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 28 06:56:18 crc kubenswrapper[4946]: I1128 06:56:18.714937 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 28 06:56:18 crc kubenswrapper[4946]: E1128 06:56:18.714947 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 28 06:56:18 crc kubenswrapper[4946]: I1128 06:56:18.714955 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 28 06:56:18 crc kubenswrapper[4946]: E1128 06:56:18.714970 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 28 06:56:18 crc kubenswrapper[4946]: I1128 06:56:18.714978 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 28 06:56:18 crc kubenswrapper[4946]: E1128 06:56:18.715122 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 28 06:56:18 crc kubenswrapper[4946]: I1128 06:56:18.715138 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 28 06:56:18 crc kubenswrapper[4946]: E1128 06:56:18.715150 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 28 06:56:18 crc kubenswrapper[4946]: I1128 06:56:18.715157 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 28 06:56:18 crc kubenswrapper[4946]: E1128 06:56:18.715167 4946 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 28 06:56:18 crc kubenswrapper[4946]: I1128 06:56:18.715175 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 28 06:56:18 crc kubenswrapper[4946]: I1128 06:56:18.715331 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 28 06:56:18 crc kubenswrapper[4946]: I1128 06:56:18.715345 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 28 06:56:18 crc kubenswrapper[4946]: I1128 06:56:18.715354 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 28 06:56:18 crc kubenswrapper[4946]: I1128 06:56:18.715374 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 28 06:56:18 crc kubenswrapper[4946]: I1128 06:56:18.715384 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 28 06:56:18 crc kubenswrapper[4946]: I1128 06:56:18.715393 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 28 06:56:18 crc kubenswrapper[4946]: I1128 06:56:18.769820 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 06:56:18 crc kubenswrapper[4946]: I1128 06:56:18.769902 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 06:56:18 crc kubenswrapper[4946]: I1128 06:56:18.769932 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 06:56:18 crc kubenswrapper[4946]: I1128 06:56:18.770006 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 06:56:18 crc kubenswrapper[4946]: I1128 06:56:18.770012 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: 
\"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:56:18 crc kubenswrapper[4946]: I1128 06:56:18.770165 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 06:56:18 crc kubenswrapper[4946]: I1128 06:56:18.770191 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 06:56:18 crc kubenswrapper[4946]: I1128 06:56:18.770233 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 06:56:18 crc kubenswrapper[4946]: I1128 06:56:18.770287 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 06:56:18 crc kubenswrapper[4946]: I1128 06:56:18.770298 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 06:56:18 crc kubenswrapper[4946]: I1128 06:56:18.770348 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 06:56:18 crc kubenswrapper[4946]: I1128 06:56:18.770369 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:56:18 crc kubenswrapper[4946]: I1128 06:56:18.770411 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:56:18 crc kubenswrapper[4946]: I1128 06:56:18.871777 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: 
\"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:56:18 crc kubenswrapper[4946]: I1128 06:56:18.872228 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:56:18 crc kubenswrapper[4946]: I1128 06:56:18.871924 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:56:18 crc kubenswrapper[4946]: I1128 06:56:18.872297 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:56:18 crc kubenswrapper[4946]: I1128 06:56:18.872365 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:56:18 crc kubenswrapper[4946]: I1128 06:56:18.872412 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:56:18 crc kubenswrapper[4946]: I1128 06:56:18.983061 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 06:56:19 crc kubenswrapper[4946]: W1128 06:56:19.008395 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-5a2f99d3a21ac80ad817e9d4e741b88d02132bad399dd7f42f62f77d9a13e467 WatchSource:0}: Error finding container 5a2f99d3a21ac80ad817e9d4e741b88d02132bad399dd7f42f62f77d9a13e467: Status 404 returned error can't find the container with id 5a2f99d3a21ac80ad817e9d4e741b88d02132bad399dd7f42f62f77d9a13e467 Nov 28 06:56:19 crc kubenswrapper[4946]: E1128 06:56:19.012470 4946 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.2:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187c194e829a1933 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 06:56:19.011672371 +0000 UTC m=+233.389737482,LastTimestamp:2025-11-28 06:56:19.011672371 +0000 UTC m=+233.389737482,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 28 06:56:19 crc kubenswrapper[4946]: I1128 06:56:19.203253 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 28 06:56:19 crc kubenswrapper[4946]: I1128 06:56:19.204793 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 28 06:56:19 crc kubenswrapper[4946]: I1128 06:56:19.205705 4946 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="de5dc7ef4ea06b54b6de621ebe79d3587324b0c4cdd90e078d4116dc5252ad35" exitCode=0 Nov 28 06:56:19 crc kubenswrapper[4946]: I1128 06:56:19.205738 4946 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="326666c58a9c75d0f08586c82bca1a0686ed2935f475b3c359cddc01875b248e" exitCode=0 Nov 28 06:56:19 crc kubenswrapper[4946]: I1128 06:56:19.205747 4946 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1a7152b6653aaccd36bb018d24abf196dd6f13d41edddb0939917f96bcd9e9bb" exitCode=0 Nov 28 06:56:19 crc kubenswrapper[4946]: I1128 06:56:19.205757 4946 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c5be506bd8debbc12ff5ca56d77fdf667855d2c4078cea1b1d1388f5ef8831ea" exitCode=2 Nov 28 06:56:19 crc kubenswrapper[4946]: I1128 06:56:19.205830 4946 scope.go:117] "RemoveContainer" containerID="b4c7e2754f62b9d955da7589f04acb0136ed3c504a75b0db30f22cbeec54c413" Nov 28 06:56:19 crc kubenswrapper[4946]: I1128 
06:56:19.208828 4946 generic.go:334] "Generic (PLEG): container finished" podID="986236c0-f012-41ae-aa64-097ad1f7117e" containerID="950d634bea04b64e3d7e6cb83eec8670e3fcef7cb83361148c88ce21028cb9bd" exitCode=0 Nov 28 06:56:19 crc kubenswrapper[4946]: I1128 06:56:19.208925 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"986236c0-f012-41ae-aa64-097ad1f7117e","Type":"ContainerDied","Data":"950d634bea04b64e3d7e6cb83eec8670e3fcef7cb83361148c88ce21028cb9bd"} Nov 28 06:56:19 crc kubenswrapper[4946]: I1128 06:56:19.209769 4946 status_manager.go:851] "Failed to get status for pod" podUID="986236c0-f012-41ae-aa64-097ad1f7117e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Nov 28 06:56:19 crc kubenswrapper[4946]: I1128 06:56:19.209931 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"5a2f99d3a21ac80ad817e9d4e741b88d02132bad399dd7f42f62f77d9a13e467"} Nov 28 06:56:19 crc kubenswrapper[4946]: I1128 06:56:19.210203 4946 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Nov 28 06:56:19 crc kubenswrapper[4946]: I1128 06:56:19.211051 4946 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Nov 28 06:56:19 crc kubenswrapper[4946]: I1128 06:56:19.371585 4946 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Nov 28 06:56:19 crc kubenswrapper[4946]: I1128 06:56:19.371709 4946 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 28 06:56:20 crc kubenswrapper[4946]: I1128 06:56:20.013300 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="830f7e38-bc1c-4897-bcc9-0266da4d74d0" path="/var/lib/kubelet/pods/830f7e38-bc1c-4897-bcc9-0266da4d74d0/volumes" Nov 28 06:56:20 crc kubenswrapper[4946]: I1128 06:56:20.220090 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"a72b73e0459c7dddf4a9449613a4a316ee6b27e1b639f9947f7eaf20497fcce3"} Nov 28 06:56:20 crc kubenswrapper[4946]: I1128 06:56:20.221098 4946 status_manager.go:851] "Failed to get status for pod" podUID="986236c0-f012-41ae-aa64-097ad1f7117e" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Nov 28 06:56:20 crc kubenswrapper[4946]: I1128 06:56:20.221513 4946 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Nov 28 06:56:20 crc kubenswrapper[4946]: I1128 06:56:20.225691 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 28 06:56:20 crc kubenswrapper[4946]: I1128 06:56:20.642827 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 28 06:56:20 crc kubenswrapper[4946]: I1128 06:56:20.643597 4946 status_manager.go:851] "Failed to get status for pod" podUID="986236c0-f012-41ae-aa64-097ad1f7117e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Nov 28 06:56:20 crc kubenswrapper[4946]: I1128 06:56:20.643902 4946 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Nov 28 06:56:20 crc kubenswrapper[4946]: I1128 06:56:20.693882 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/986236c0-f012-41ae-aa64-097ad1f7117e-kube-api-access\") pod \"986236c0-f012-41ae-aa64-097ad1f7117e\" (UID: \"986236c0-f012-41ae-aa64-097ad1f7117e\") " Nov 28 06:56:20 crc kubenswrapper[4946]: I1128 06:56:20.694145 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/986236c0-f012-41ae-aa64-097ad1f7117e-kubelet-dir\") pod \"986236c0-f012-41ae-aa64-097ad1f7117e\" (UID: \"986236c0-f012-41ae-aa64-097ad1f7117e\") " Nov 28 06:56:20 crc kubenswrapper[4946]: I1128 06:56:20.694179 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/986236c0-f012-41ae-aa64-097ad1f7117e-var-lock\") pod \"986236c0-f012-41ae-aa64-097ad1f7117e\" (UID: \"986236c0-f012-41ae-aa64-097ad1f7117e\") " Nov 28 06:56:20 crc kubenswrapper[4946]: I1128 06:56:20.694243 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/986236c0-f012-41ae-aa64-097ad1f7117e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "986236c0-f012-41ae-aa64-097ad1f7117e" (UID: "986236c0-f012-41ae-aa64-097ad1f7117e"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 06:56:20 crc kubenswrapper[4946]: I1128 06:56:20.694398 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/986236c0-f012-41ae-aa64-097ad1f7117e-var-lock" (OuterVolumeSpecName: "var-lock") pod "986236c0-f012-41ae-aa64-097ad1f7117e" (UID: "986236c0-f012-41ae-aa64-097ad1f7117e"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 06:56:20 crc kubenswrapper[4946]: I1128 06:56:20.694933 4946 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/986236c0-f012-41ae-aa64-097ad1f7117e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 28 06:56:20 crc kubenswrapper[4946]: I1128 06:56:20.694970 4946 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/986236c0-f012-41ae-aa64-097ad1f7117e-var-lock\") on node \"crc\" DevicePath \"\"" Nov 28 06:56:20 crc kubenswrapper[4946]: I1128 06:56:20.699695 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/986236c0-f012-41ae-aa64-097ad1f7117e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "986236c0-f012-41ae-aa64-097ad1f7117e" (UID: "986236c0-f012-41ae-aa64-097ad1f7117e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:56:20 crc kubenswrapper[4946]: I1128 06:56:20.795724 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/986236c0-f012-41ae-aa64-097ad1f7117e-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 28 06:56:21 crc kubenswrapper[4946]: I1128 06:56:21.094053 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 28 06:56:21 crc kubenswrapper[4946]: I1128 06:56:21.095081 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:56:21 crc kubenswrapper[4946]: I1128 06:56:21.095901 4946 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Nov 28 06:56:21 crc kubenswrapper[4946]: I1128 06:56:21.096568 4946 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Nov 28 06:56:21 crc kubenswrapper[4946]: I1128 06:56:21.097199 4946 status_manager.go:851] "Failed to get status for pod" podUID="986236c0-f012-41ae-aa64-097ad1f7117e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Nov 28 06:56:21 crc kubenswrapper[4946]: E1128 06:56:21.099707 4946 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" Nov 28 06:56:21 crc kubenswrapper[4946]: I1128 06:56:21.100312 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 28 06:56:21 crc kubenswrapper[4946]: I1128 06:56:21.100356 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 28 06:56:21 crc kubenswrapper[4946]: I1128 06:56:21.100413 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 28 06:56:21 crc kubenswrapper[4946]: I1128 06:56:21.100427 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 06:56:21 crc kubenswrapper[4946]: I1128 06:56:21.100529 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 06:56:21 crc kubenswrapper[4946]: I1128 06:56:21.100475 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 06:56:21 crc kubenswrapper[4946]: I1128 06:56:21.101070 4946 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 28 06:56:21 crc kubenswrapper[4946]: I1128 06:56:21.101132 4946 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Nov 28 06:56:21 crc kubenswrapper[4946]: I1128 06:56:21.101154 4946 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 28 06:56:21 crc kubenswrapper[4946]: E1128 06:56:21.101025 4946 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" Nov 28 06:56:21 crc kubenswrapper[4946]: E1128 06:56:21.101670 4946 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" Nov 28 06:56:21 crc kubenswrapper[4946]: E1128 06:56:21.102128 4946 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" Nov 28 06:56:21 crc kubenswrapper[4946]: E1128 06:56:21.102534 4946 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" Nov 28 06:56:21 crc kubenswrapper[4946]: I1128 06:56:21.102598 4946 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Nov 28 06:56:21 crc kubenswrapper[4946]: E1128 06:56:21.103080 4946 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" interval="200ms" Nov 28 06:56:21 crc kubenswrapper[4946]: I1128 06:56:21.237299 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"986236c0-f012-41ae-aa64-097ad1f7117e","Type":"ContainerDied","Data":"050e2cc9a1b8da35f7087eac204a32aa529a49b5ce15b9d93d27d18abc834fb2"} Nov 28 06:56:21 crc kubenswrapper[4946]: I1128 06:56:21.237325 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 28 06:56:21 crc kubenswrapper[4946]: I1128 06:56:21.237447 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="050e2cc9a1b8da35f7087eac204a32aa529a49b5ce15b9d93d27d18abc834fb2" Nov 28 06:56:21 crc kubenswrapper[4946]: I1128 06:56:21.241512 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 28 06:56:21 crc kubenswrapper[4946]: I1128 06:56:21.242500 4946 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="eef1ad3ca814effd7a94c23d46fc3b14149ccc6ae6181e19c5f24884967b7627" exitCode=0 Nov 28 06:56:21 crc kubenswrapper[4946]: I1128 06:56:21.242568 4946 scope.go:117] "RemoveContainer" containerID="de5dc7ef4ea06b54b6de621ebe79d3587324b0c4cdd90e078d4116dc5252ad35" Nov 28 06:56:21 crc kubenswrapper[4946]: I1128 06:56:21.242597 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:56:21 crc kubenswrapper[4946]: I1128 06:56:21.267413 4946 status_manager.go:851] "Failed to get status for pod" podUID="986236c0-f012-41ae-aa64-097ad1f7117e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Nov 28 06:56:21 crc kubenswrapper[4946]: I1128 06:56:21.267758 4946 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Nov 28 06:56:21 crc kubenswrapper[4946]: I1128 06:56:21.268075 4946 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Nov 28 06:56:21 crc kubenswrapper[4946]: I1128 06:56:21.268376 4946 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Nov 28 06:56:21 crc kubenswrapper[4946]: I1128 06:56:21.268671 4946 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Nov 28 06:56:21 crc kubenswrapper[4946]: I1128 06:56:21.269000 4946 status_manager.go:851] "Failed to get status for pod" podUID="986236c0-f012-41ae-aa64-097ad1f7117e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Nov 28 06:56:21 crc 
kubenswrapper[4946]: I1128 06:56:21.271091 4946 scope.go:117] "RemoveContainer" containerID="326666c58a9c75d0f08586c82bca1a0686ed2935f475b3c359cddc01875b248e" Nov 28 06:56:21 crc kubenswrapper[4946]: I1128 06:56:21.290361 4946 scope.go:117] "RemoveContainer" containerID="1a7152b6653aaccd36bb018d24abf196dd6f13d41edddb0939917f96bcd9e9bb" Nov 28 06:56:21 crc kubenswrapper[4946]: E1128 06:56:21.305257 4946 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" interval="400ms" Nov 28 06:56:21 crc kubenswrapper[4946]: I1128 06:56:21.308005 4946 scope.go:117] "RemoveContainer" containerID="c5be506bd8debbc12ff5ca56d77fdf667855d2c4078cea1b1d1388f5ef8831ea" Nov 28 06:56:21 crc kubenswrapper[4946]: I1128 06:56:21.331524 4946 scope.go:117] "RemoveContainer" containerID="eef1ad3ca814effd7a94c23d46fc3b14149ccc6ae6181e19c5f24884967b7627" Nov 28 06:56:21 crc kubenswrapper[4946]: I1128 06:56:21.352067 4946 scope.go:117] "RemoveContainer" containerID="e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d" Nov 28 06:56:21 crc kubenswrapper[4946]: I1128 06:56:21.379842 4946 scope.go:117] "RemoveContainer" containerID="de5dc7ef4ea06b54b6de621ebe79d3587324b0c4cdd90e078d4116dc5252ad35" Nov 28 06:56:21 crc kubenswrapper[4946]: E1128 06:56:21.382603 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de5dc7ef4ea06b54b6de621ebe79d3587324b0c4cdd90e078d4116dc5252ad35\": container with ID starting with de5dc7ef4ea06b54b6de621ebe79d3587324b0c4cdd90e078d4116dc5252ad35 not found: ID does not exist" containerID="de5dc7ef4ea06b54b6de621ebe79d3587324b0c4cdd90e078d4116dc5252ad35" Nov 28 06:56:21 crc kubenswrapper[4946]: I1128 06:56:21.382667 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de5dc7ef4ea06b54b6de621ebe79d3587324b0c4cdd90e078d4116dc5252ad35"} err="failed to get container status \"de5dc7ef4ea06b54b6de621ebe79d3587324b0c4cdd90e078d4116dc5252ad35\": rpc error: code = NotFound desc = could not find container \"de5dc7ef4ea06b54b6de621ebe79d3587324b0c4cdd90e078d4116dc5252ad35\": container with ID starting with de5dc7ef4ea06b54b6de621ebe79d3587324b0c4cdd90e078d4116dc5252ad35 not found: ID does not exist" Nov 28 06:56:21 crc kubenswrapper[4946]: I1128 06:56:21.382709 4946 scope.go:117] "RemoveContainer" containerID="326666c58a9c75d0f08586c82bca1a0686ed2935f475b3c359cddc01875b248e" Nov 28 06:56:21 crc kubenswrapper[4946]: E1128 06:56:21.383216 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"326666c58a9c75d0f08586c82bca1a0686ed2935f475b3c359cddc01875b248e\": container with ID starting with 326666c58a9c75d0f08586c82bca1a0686ed2935f475b3c359cddc01875b248e not found: ID does not exist" containerID="326666c58a9c75d0f08586c82bca1a0686ed2935f475b3c359cddc01875b248e" Nov 28 06:56:21 crc kubenswrapper[4946]: I1128 06:56:21.383266 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"326666c58a9c75d0f08586c82bca1a0686ed2935f475b3c359cddc01875b248e"} err="failed to get container status \"326666c58a9c75d0f08586c82bca1a0686ed2935f475b3c359cddc01875b248e\": rpc error: code = NotFound desc = could not find container \"326666c58a9c75d0f08586c82bca1a0686ed2935f475b3c359cddc01875b248e\": 
container with ID starting with 326666c58a9c75d0f08586c82bca1a0686ed2935f475b3c359cddc01875b248e not found: ID does not exist" Nov 28 06:56:21 crc kubenswrapper[4946]: I1128 06:56:21.383301 4946 scope.go:117] "RemoveContainer" containerID="1a7152b6653aaccd36bb018d24abf196dd6f13d41edddb0939917f96bcd9e9bb" Nov 28 06:56:21 crc kubenswrapper[4946]: E1128 06:56:21.383632 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a7152b6653aaccd36bb018d24abf196dd6f13d41edddb0939917f96bcd9e9bb\": container with ID starting with 1a7152b6653aaccd36bb018d24abf196dd6f13d41edddb0939917f96bcd9e9bb not found: ID does not exist" containerID="1a7152b6653aaccd36bb018d24abf196dd6f13d41edddb0939917f96bcd9e9bb" Nov 28 06:56:21 crc kubenswrapper[4946]: I1128 06:56:21.383664 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a7152b6653aaccd36bb018d24abf196dd6f13d41edddb0939917f96bcd9e9bb"} err="failed to get container status \"1a7152b6653aaccd36bb018d24abf196dd6f13d41edddb0939917f96bcd9e9bb\": rpc error: code = NotFound desc = could not find container \"1a7152b6653aaccd36bb018d24abf196dd6f13d41edddb0939917f96bcd9e9bb\": container with ID starting with 1a7152b6653aaccd36bb018d24abf196dd6f13d41edddb0939917f96bcd9e9bb not found: ID does not exist" Nov 28 06:56:21 crc kubenswrapper[4946]: I1128 06:56:21.383681 4946 scope.go:117] "RemoveContainer" containerID="c5be506bd8debbc12ff5ca56d77fdf667855d2c4078cea1b1d1388f5ef8831ea" Nov 28 06:56:21 crc kubenswrapper[4946]: E1128 06:56:21.384015 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5be506bd8debbc12ff5ca56d77fdf667855d2c4078cea1b1d1388f5ef8831ea\": container with ID starting with c5be506bd8debbc12ff5ca56d77fdf667855d2c4078cea1b1d1388f5ef8831ea not found: ID does not exist" containerID="c5be506bd8debbc12ff5ca56d77fdf667855d2c4078cea1b1d1388f5ef8831ea" Nov 28 06:56:21 crc kubenswrapper[4946]: I1128 06:56:21.384051 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5be506bd8debbc12ff5ca56d77fdf667855d2c4078cea1b1d1388f5ef8831ea"} err="failed to get container status \"c5be506bd8debbc12ff5ca56d77fdf667855d2c4078cea1b1d1388f5ef8831ea\": rpc error: code = NotFound desc = could not find container \"c5be506bd8debbc12ff5ca56d77fdf667855d2c4078cea1b1d1388f5ef8831ea\": container with ID starting with c5be506bd8debbc12ff5ca56d77fdf667855d2c4078cea1b1d1388f5ef8831ea not found: ID does not exist" Nov 28 06:56:21 crc kubenswrapper[4946]: I1128 06:56:21.384076 4946 scope.go:117] "RemoveContainer" containerID="eef1ad3ca814effd7a94c23d46fc3b14149ccc6ae6181e19c5f24884967b7627" Nov 28 06:56:21 crc kubenswrapper[4946]: E1128 06:56:21.384521 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eef1ad3ca814effd7a94c23d46fc3b14149ccc6ae6181e19c5f24884967b7627\": container with ID starting with eef1ad3ca814effd7a94c23d46fc3b14149ccc6ae6181e19c5f24884967b7627 not found: ID does not exist" containerID="eef1ad3ca814effd7a94c23d46fc3b14149ccc6ae6181e19c5f24884967b7627" Nov 28 06:56:21 crc kubenswrapper[4946]: I1128 06:56:21.384557 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eef1ad3ca814effd7a94c23d46fc3b14149ccc6ae6181e19c5f24884967b7627"} err="failed to get container status 
\"eef1ad3ca814effd7a94c23d46fc3b14149ccc6ae6181e19c5f24884967b7627\": rpc error: code = NotFound desc = could not find container \"eef1ad3ca814effd7a94c23d46fc3b14149ccc6ae6181e19c5f24884967b7627\": container with ID starting with eef1ad3ca814effd7a94c23d46fc3b14149ccc6ae6181e19c5f24884967b7627 not found: ID does not exist" Nov 28 06:56:21 crc kubenswrapper[4946]: I1128 06:56:21.384577 4946 scope.go:117] "RemoveContainer" containerID="e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d" Nov 28 06:56:21 crc kubenswrapper[4946]: E1128 06:56:21.385143 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\": container with ID starting with e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d not found: ID does not exist" containerID="e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d" Nov 28 06:56:21 crc kubenswrapper[4946]: I1128 06:56:21.385206 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d"} err="failed to get container status \"e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\": rpc error: code = NotFound desc = could not find container \"e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d\": container with ID starting with e914a7f2fd1894a73e436ea8e85e17eefa8b42f2f4b79393555c4e93150c860d not found: ID does not exist" Nov 28 06:56:21 crc kubenswrapper[4946]: E1128 06:56:21.707054 4946 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" interval="800ms" Nov 28 06:56:21 crc kubenswrapper[4946]: I1128 06:56:21.996025 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Nov 28 06:56:22 crc kubenswrapper[4946]: E1128 06:56:22.509139 4946 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" interval="1.6s" Nov 28 06:56:24 crc kubenswrapper[4946]: E1128 06:56:24.112855 4946 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" interval="3.2s" Nov 28 06:56:25 crc kubenswrapper[4946]: I1128 06:56:25.995435 4946 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Nov 28 06:56:25 crc kubenswrapper[4946]: I1128 06:56:25.997614 4946 status_manager.go:851] "Failed to get status for pod" podUID="986236c0-f012-41ae-aa64-097ad1f7117e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 
38.102.83.2:6443: connect: connection refused" Nov 28 06:56:27 crc kubenswrapper[4946]: E1128 06:56:27.314526 4946 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" interval="6.4s" Nov 28 06:56:27 crc kubenswrapper[4946]: E1128 06:56:27.867485 4946 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.2:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187c194e829a1933 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 06:56:19.011672371 +0000 UTC m=+233.389737482,LastTimestamp:2025-11-28 06:56:19.011672371 +0000 UTC m=+233.389737482,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 28 06:56:32 crc kubenswrapper[4946]: I1128 06:56:32.989184 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:56:32 crc kubenswrapper[4946]: I1128 06:56:32.991232 4946 status_manager.go:851] "Failed to get status for pod" podUID="986236c0-f012-41ae-aa64-097ad1f7117e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Nov 28 06:56:32 crc kubenswrapper[4946]: I1128 06:56:32.991897 4946 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Nov 28 06:56:33 crc kubenswrapper[4946]: I1128 06:56:33.011159 4946 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="68d47c52-9827-4ebb-88a2-ba498092709e" Nov 28 06:56:33 crc kubenswrapper[4946]: I1128 06:56:33.011206 4946 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="68d47c52-9827-4ebb-88a2-ba498092709e" Nov 28 06:56:33 crc kubenswrapper[4946]: E1128 06:56:33.011788 4946 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:56:33 crc kubenswrapper[4946]: I1128 06:56:33.012534 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:56:33 crc kubenswrapper[4946]: W1128 06:56:33.046631 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-df637e6f5bebdc21826690e13feed13e4e77d28163af07adec373babf4597ee1 WatchSource:0}: Error finding container df637e6f5bebdc21826690e13feed13e4e77d28163af07adec373babf4597ee1: Status 404 returned error can't find the container with id df637e6f5bebdc21826690e13feed13e4e77d28163af07adec373babf4597ee1 Nov 28 06:56:33 crc kubenswrapper[4946]: I1128 06:56:33.350631 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"df637e6f5bebdc21826690e13feed13e4e77d28163af07adec373babf4597ee1"} Nov 28 06:56:33 crc kubenswrapper[4946]: E1128 06:56:33.716228 4946 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.2:6443: connect: connection refused" interval="7s" Nov 28 06:56:34 crc kubenswrapper[4946]: I1128 06:56:34.023186 4946 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Nov 28 06:56:34 crc kubenswrapper[4946]: I1128 06:56:34.023287 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Nov 28 06:56:34 crc kubenswrapper[4946]: I1128 06:56:34.359431 4946 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="32d83bc0c052ea3bba0a5005ef7df53437c4316c79d6043d885f934641abc40e" exitCode=0 Nov 28 06:56:34 crc kubenswrapper[4946]: I1128 06:56:34.359637 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"32d83bc0c052ea3bba0a5005ef7df53437c4316c79d6043d885f934641abc40e"} Nov 28 06:56:34 crc kubenswrapper[4946]: I1128 06:56:34.359873 4946 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="68d47c52-9827-4ebb-88a2-ba498092709e" Nov 28 06:56:34 crc kubenswrapper[4946]: I1128 06:56:34.359904 4946 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="68d47c52-9827-4ebb-88a2-ba498092709e" Nov 28 06:56:34 crc kubenswrapper[4946]: E1128 06:56:34.360581 4946 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:56:34 crc kubenswrapper[4946]: I1128 06:56:34.360601 4946 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Nov 28 06:56:34 crc kubenswrapper[4946]: I1128 06:56:34.361537 4946 status_manager.go:851] "Failed to get status for pod" podUID="986236c0-f012-41ae-aa64-097ad1f7117e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Nov 28 06:56:34 crc kubenswrapper[4946]: I1128 06:56:34.363344 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 28 06:56:34 crc kubenswrapper[4946]: I1128 06:56:34.363421 4946 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="d384f5fd04cfe61e6a8d13538eb1b0267ca88eb0d226d8d85b5065fb5f7de9b4" exitCode=1 Nov 28 06:56:34 crc kubenswrapper[4946]: I1128 06:56:34.363492 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"d384f5fd04cfe61e6a8d13538eb1b0267ca88eb0d226d8d85b5065fb5f7de9b4"} Nov 28 06:56:34 crc kubenswrapper[4946]: I1128 06:56:34.364035 4946 scope.go:117] "RemoveContainer" containerID="d384f5fd04cfe61e6a8d13538eb1b0267ca88eb0d226d8d85b5065fb5f7de9b4" Nov 28 06:56:34 crc kubenswrapper[4946]: I1128 06:56:34.364813 4946 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Nov 28 06:56:34 crc kubenswrapper[4946]: I1128 06:56:34.365210 4946 status_manager.go:851] "Failed to get status for pod" podUID="986236c0-f012-41ae-aa64-097ad1f7117e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Nov 28 06:56:34 crc kubenswrapper[4946]: I1128 06:56:34.366714 4946 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.2:6443: connect: connection refused" Nov 28 06:56:35 crc kubenswrapper[4946]: I1128 06:56:35.385342 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 28 06:56:35 crc kubenswrapper[4946]: I1128 06:56:35.386041 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0f6ba33547600bc4e4856cdd830d7f81b86ae36b37fcce883f8ca18a99f79fb0"} Nov 28 06:56:35 crc kubenswrapper[4946]: I1128 06:56:35.389583 4946 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"86472267947e977cd9856d559d134b81c0506b7641de09e10dda2210c3aa016a"} Nov 28 06:56:35 crc kubenswrapper[4946]: I1128 06:56:35.389648 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"cd909f0c2fdea208e463cc5561d9be495db791f1ab1afe0a009ee3c9f4e12216"} Nov 28 06:56:35 crc kubenswrapper[4946]: I1128 06:56:35.389664 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"99c56e61e6b7a0f99b1db70ae66a523930b036e6e12472228259515e2f76b225"} Nov 28 06:56:36 crc kubenswrapper[4946]: I1128 06:56:36.397912 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0931d0b2b422217630059903c5d1a4e5b0b8380b931628e7790c8a342de5f337"} Nov 28 06:56:36 crc kubenswrapper[4946]: I1128 06:56:36.397982 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"61a897715fa7f093a496705562330853d11a102382592d447d23a85b147acbfa"} Nov 28 06:56:36 crc kubenswrapper[4946]: I1128 06:56:36.398120 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:56:36 crc kubenswrapper[4946]: I1128 06:56:36.398222 4946 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="68d47c52-9827-4ebb-88a2-ba498092709e" Nov 28 06:56:36 crc kubenswrapper[4946]: I1128 06:56:36.398254 4946 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="68d47c52-9827-4ebb-88a2-ba498092709e" Nov 28 06:56:38 crc kubenswrapper[4946]: I1128 06:56:38.012786 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:56:38 crc kubenswrapper[4946]: I1128 06:56:38.012856 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:56:38 crc kubenswrapper[4946]: I1128 06:56:38.018571 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:56:40 crc kubenswrapper[4946]: I1128 06:56:40.973132 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 06:56:41 crc kubenswrapper[4946]: I1128 06:56:41.412602 4946 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:56:41 crc kubenswrapper[4946]: I1128 06:56:41.509020 4946 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="169b4143-f4f7-412b-b3f9-a924b7130d3e" Nov 28 06:56:42 crc kubenswrapper[4946]: I1128 06:56:42.448077 4946 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="68d47c52-9827-4ebb-88a2-ba498092709e" Nov 28 06:56:42 crc 
kubenswrapper[4946]: I1128 06:56:42.448124 4946 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="68d47c52-9827-4ebb-88a2-ba498092709e" Nov 28 06:56:42 crc kubenswrapper[4946]: I1128 06:56:42.452078 4946 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="169b4143-f4f7-412b-b3f9-a924b7130d3e" Nov 28 06:56:43 crc kubenswrapper[4946]: I1128 06:56:43.682354 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 06:56:43 crc kubenswrapper[4946]: I1128 06:56:43.683051 4946 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Nov 28 06:56:43 crc kubenswrapper[4946]: I1128 06:56:43.684029 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Nov 28 06:56:51 crc kubenswrapper[4946]: I1128 06:56:51.032195 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 28 06:56:51 crc kubenswrapper[4946]: I1128 06:56:51.524686 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 28 06:56:51 crc kubenswrapper[4946]: I1128 06:56:51.613682 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 28 06:56:51 crc kubenswrapper[4946]: I1128 06:56:51.676781 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 28 06:56:52 crc kubenswrapper[4946]: I1128 06:56:52.344552 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 28 06:56:52 crc kubenswrapper[4946]: I1128 06:56:52.428671 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 28 06:56:52 crc kubenswrapper[4946]: I1128 06:56:52.627033 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 28 06:56:52 crc kubenswrapper[4946]: I1128 06:56:52.715615 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 28 06:56:52 crc kubenswrapper[4946]: I1128 06:56:52.881196 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 28 06:56:52 crc kubenswrapper[4946]: I1128 06:56:52.899985 4946 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 28 06:56:52 crc kubenswrapper[4946]: I1128 06:56:52.916637 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 28 06:56:52 crc kubenswrapper[4946]: I1128 06:56:52.949823 4946 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 28 06:56:53 crc kubenswrapper[4946]: I1128 06:56:53.305556 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 28 06:56:53 crc kubenswrapper[4946]: I1128 06:56:53.338668 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 28 06:56:53 crc kubenswrapper[4946]: I1128 06:56:53.490442 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 28 06:56:53 crc kubenswrapper[4946]: I1128 06:56:53.661056 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 28 06:56:53 crc kubenswrapper[4946]: I1128 06:56:53.682412 4946 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Nov 28 06:56:53 crc kubenswrapper[4946]: I1128 06:56:53.682564 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Nov 28 06:56:53 crc kubenswrapper[4946]: I1128 06:56:53.864555 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 28 06:56:53 crc kubenswrapper[4946]: I1128 06:56:53.910707 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 28 06:56:54 crc kubenswrapper[4946]: I1128 06:56:54.294409 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 28 06:56:54 crc kubenswrapper[4946]: I1128 06:56:54.463873 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 28 06:56:54 crc kubenswrapper[4946]: I1128 06:56:54.530133 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 28 06:56:54 crc kubenswrapper[4946]: I1128 06:56:54.642541 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 28 06:56:54 crc kubenswrapper[4946]: I1128 06:56:54.762375 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 28 06:56:54 crc kubenswrapper[4946]: I1128 06:56:54.763100 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 28 06:56:54 crc kubenswrapper[4946]: I1128 06:56:54.796905 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 28 06:56:54 crc kubenswrapper[4946]: I1128 06:56:54.894499 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 28 06:56:54 crc kubenswrapper[4946]: I1128 06:56:54.982929 4946 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 28 06:56:54 crc kubenswrapper[4946]: I1128 06:56:54.997842 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 28 06:56:55 crc kubenswrapper[4946]: I1128 06:56:55.043423 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 28 06:56:55 crc kubenswrapper[4946]: I1128 06:56:55.124223 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 28 06:56:55 crc kubenswrapper[4946]: I1128 06:56:55.168378 4946 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 28 06:56:55 crc kubenswrapper[4946]: I1128 06:56:55.410205 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 28 06:56:55 crc kubenswrapper[4946]: I1128 06:56:55.527027 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 28 06:56:55 crc kubenswrapper[4946]: I1128 06:56:55.548807 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 28 06:56:55 crc kubenswrapper[4946]: I1128 06:56:55.571897 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 28 06:56:55 crc kubenswrapper[4946]: I1128 06:56:55.709909 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 28 06:56:55 crc kubenswrapper[4946]: I1128 06:56:55.785623 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 28 06:56:55 crc kubenswrapper[4946]: I1128 06:56:55.821018 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 28 06:56:55 crc kubenswrapper[4946]: I1128 06:56:55.887851 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 28 06:56:55 crc kubenswrapper[4946]: I1128 06:56:55.944443 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 28 06:56:55 crc kubenswrapper[4946]: I1128 06:56:55.997449 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 28 06:56:56 crc kubenswrapper[4946]: I1128 06:56:56.021270 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 28 06:56:56 crc kubenswrapper[4946]: I1128 06:56:56.031696 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 28 06:56:56 crc kubenswrapper[4946]: I1128 06:56:56.099207 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 28 06:56:56 crc kubenswrapper[4946]: I1128 06:56:56.150747 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 28 06:56:56 crc kubenswrapper[4946]: I1128 06:56:56.156087 4946 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 28 06:56:56 crc kubenswrapper[4946]: I1128 06:56:56.184272 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 28 06:56:56 crc kubenswrapper[4946]: I1128 06:56:56.209710 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 28 06:56:56 crc kubenswrapper[4946]: I1128 06:56:56.282160 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 28 06:56:56 crc kubenswrapper[4946]: I1128 06:56:56.287783 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 28 06:56:56 crc kubenswrapper[4946]: I1128 06:56:56.354492 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 28 06:56:56 crc kubenswrapper[4946]: I1128 06:56:56.399234 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 28 06:56:56 crc kubenswrapper[4946]: I1128 06:56:56.427793 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 28 06:56:56 crc kubenswrapper[4946]: I1128 06:56:56.461032 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 28 06:56:56 crc kubenswrapper[4946]: I1128 06:56:56.570187 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 28 06:56:56 crc kubenswrapper[4946]: I1128 06:56:56.576826 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 28 06:56:56 crc kubenswrapper[4946]: I1128 06:56:56.599878 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 28 06:56:56 crc kubenswrapper[4946]: I1128 06:56:56.687848 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 28 06:56:56 crc kubenswrapper[4946]: I1128 06:56:56.701183 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 28 06:56:56 crc kubenswrapper[4946]: I1128 06:56:56.706616 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 28 06:56:56 crc kubenswrapper[4946]: I1128 06:56:56.730536 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 28 06:56:56 crc kubenswrapper[4946]: I1128 06:56:56.755437 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 28 06:56:56 crc kubenswrapper[4946]: I1128 06:56:56.762636 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 28 06:56:56 crc kubenswrapper[4946]: I1128 06:56:56.817956 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 28 06:56:56 crc kubenswrapper[4946]: I1128 
06:56:56.982810 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 28 06:56:57 crc kubenswrapper[4946]: I1128 06:56:57.009377 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 28 06:56:57 crc kubenswrapper[4946]: I1128 06:56:57.050597 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 28 06:56:57 crc kubenswrapper[4946]: I1128 06:56:57.104088 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 28 06:56:57 crc kubenswrapper[4946]: I1128 06:56:57.124408 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 28 06:56:57 crc kubenswrapper[4946]: I1128 06:56:57.256167 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 28 06:56:57 crc kubenswrapper[4946]: I1128 06:56:57.335611 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 28 06:56:57 crc kubenswrapper[4946]: I1128 06:56:57.410233 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 28 06:56:57 crc kubenswrapper[4946]: I1128 06:56:57.442519 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 28 06:56:57 crc kubenswrapper[4946]: I1128 06:56:57.519337 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 28 06:56:57 crc kubenswrapper[4946]: I1128 06:56:57.555366 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 28 06:56:57 crc kubenswrapper[4946]: I1128 06:56:57.565232 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 28 06:56:57 crc kubenswrapper[4946]: I1128 06:56:57.654829 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 28 06:56:57 crc kubenswrapper[4946]: I1128 06:56:57.832303 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 28 06:56:57 crc kubenswrapper[4946]: I1128 06:56:57.835967 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 28 06:56:57 crc kubenswrapper[4946]: I1128 06:56:57.893604 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 28 06:56:57 crc kubenswrapper[4946]: I1128 06:56:57.911889 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 28 06:56:57 crc kubenswrapper[4946]: I1128 06:56:57.925834 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 28 06:56:57 crc kubenswrapper[4946]: I1128 06:56:57.942040 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 
28 06:56:57 crc kubenswrapper[4946]: I1128 06:56:57.943318 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Nov 28 06:56:58 crc kubenswrapper[4946]: I1128 06:56:58.048760 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Nov 28 06:56:58 crc kubenswrapper[4946]: I1128 06:56:58.101349 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Nov 28 06:56:58 crc kubenswrapper[4946]: I1128 06:56:58.125023 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Nov 28 06:56:58 crc kubenswrapper[4946]: I1128 06:56:58.210309 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Nov 28 06:56:58 crc kubenswrapper[4946]: I1128 06:56:58.273906 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Nov 28 06:56:58 crc kubenswrapper[4946]: I1128 06:56:58.279844 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Nov 28 06:56:58 crc kubenswrapper[4946]: I1128 06:56:58.295621 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Nov 28 06:56:58 crc kubenswrapper[4946]: I1128 06:56:58.323570 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Nov 28 06:56:58 crc kubenswrapper[4946]: I1128 06:56:58.468401 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Nov 28 06:56:58 crc kubenswrapper[4946]: I1128 06:56:58.488958 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Nov 28 06:56:58 crc kubenswrapper[4946]: I1128 06:56:58.584263 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Nov 28 06:56:58 crc kubenswrapper[4946]: I1128 06:56:58.594929 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Nov 28 06:56:58 crc kubenswrapper[4946]: I1128 06:56:58.628486 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Nov 28 06:56:58 crc kubenswrapper[4946]: I1128 06:56:58.653490 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Nov 28 06:56:58 crc kubenswrapper[4946]: I1128 06:56:58.661377 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Nov 28 06:56:58 crc kubenswrapper[4946]: I1128 06:56:58.710793 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Nov 28 06:56:58 crc kubenswrapper[4946]: I1128 06:56:58.728158 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Nov 28 06:56:58 crc kubenswrapper[4946]: I1128 06:56:58.815563 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Nov 28 06:56:58 crc kubenswrapper[4946]: I1128 06:56:58.820904 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Nov 28 06:56:58 crc kubenswrapper[4946]: I1128 06:56:58.906293 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Nov 28 06:56:58 crc kubenswrapper[4946]: I1128 06:56:58.953067 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Nov 28 06:56:58 crc kubenswrapper[4946]: I1128 06:56:58.961230 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Nov 28 06:56:58 crc kubenswrapper[4946]: I1128 06:56:58.970386 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Nov 28 06:56:59 crc kubenswrapper[4946]: I1128 06:56:59.244171 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Nov 28 06:56:59 crc kubenswrapper[4946]: I1128 06:56:59.253550 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Nov 28 06:56:59 crc kubenswrapper[4946]: I1128 06:56:59.295557 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Nov 28 06:56:59 crc kubenswrapper[4946]: I1128 06:56:59.357853 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Nov 28 06:56:59 crc kubenswrapper[4946]: I1128 06:56:59.554192 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Nov 28 06:56:59 crc kubenswrapper[4946]: I1128 06:56:59.560342 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Nov 28 06:56:59 crc kubenswrapper[4946]: I1128 06:56:59.578254 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Nov 28 06:56:59 crc kubenswrapper[4946]: I1128 06:56:59.648882 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Nov 28 06:56:59 crc kubenswrapper[4946]: I1128 06:56:59.731107 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Nov 28 06:56:59 crc kubenswrapper[4946]: I1128 06:56:59.741273 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Nov 28 06:56:59 crc kubenswrapper[4946]: I1128 06:56:59.943996 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Nov 28 06:56:59 crc kubenswrapper[4946]: I1128 06:56:59.945364 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Nov 28 06:56:59 crc kubenswrapper[4946]: I1128 06:56:59.972337 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Nov 28 06:56:59 crc kubenswrapper[4946]: I1128 06:56:59.994207 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Nov 28 06:57:00 crc kubenswrapper[4946]: I1128 06:57:00.067953 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Nov 28 06:57:00 crc kubenswrapper[4946]: I1128 06:57:00.080201 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Nov 28 06:57:00 crc kubenswrapper[4946]: I1128 06:57:00.112555 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Nov 28 06:57:00 crc kubenswrapper[4946]: I1128 06:57:00.244705 4946 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Nov 28 06:57:00 crc kubenswrapper[4946]: I1128 06:57:00.386910 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Nov 28 06:57:00 crc kubenswrapper[4946]: I1128 06:57:00.439964 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Nov 28 06:57:00 crc kubenswrapper[4946]: I1128 06:57:00.461403 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Nov 28 06:57:00 crc kubenswrapper[4946]: I1128 06:57:00.535851 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Nov 28 06:57:00 crc kubenswrapper[4946]: I1128 06:57:00.571149 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Nov 28 06:57:00 crc kubenswrapper[4946]: I1128 06:57:00.599135 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Nov 28 06:57:00 crc kubenswrapper[4946]: I1128 06:57:00.846103 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Nov 28 06:57:00 crc kubenswrapper[4946]: I1128 06:57:00.864341 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Nov 28 06:57:00 crc kubenswrapper[4946]: I1128 06:57:00.934502 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Nov 28 06:57:00 crc kubenswrapper[4946]: I1128 06:57:00.953708 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Nov 28 06:57:01 crc kubenswrapper[4946]: I1128 06:57:01.009496 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Nov 28 06:57:01 crc kubenswrapper[4946]: I1128 06:57:01.030485 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Nov 28 06:57:01 crc kubenswrapper[4946]: I1128 06:57:01.040407 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Nov 28 06:57:01 crc kubenswrapper[4946]: I1128 06:57:01.076722 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Nov 28 06:57:01 crc kubenswrapper[4946]: I1128 06:57:01.405534 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Nov 28 06:57:01 crc kubenswrapper[4946]: I1128 06:57:01.475615 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Nov 28 06:57:01 crc kubenswrapper[4946]: I1128 06:57:01.530302 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Nov 28 06:57:01 crc kubenswrapper[4946]: I1128 06:57:01.543424 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Nov 28 06:57:01 crc kubenswrapper[4946]: I1128 06:57:01.606105 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Nov 28 06:57:01 crc kubenswrapper[4946]: I1128 06:57:01.611856 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Nov 28 06:57:01 crc kubenswrapper[4946]: I1128 06:57:01.634024 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Nov 28 06:57:01 crc kubenswrapper[4946]: I1128 06:57:01.657332 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Nov 28 06:57:01 crc kubenswrapper[4946]: I1128 06:57:01.689033 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Nov 28 06:57:01 crc kubenswrapper[4946]: I1128 06:57:01.769525 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Nov 28 06:57:01 crc kubenswrapper[4946]: I1128 06:57:01.782078 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Nov 28 06:57:01 crc kubenswrapper[4946]: I1128 06:57:01.822546 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Nov 28 06:57:01 crc kubenswrapper[4946]: I1128 06:57:01.853268 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Nov 28 06:57:01 crc kubenswrapper[4946]: I1128 06:57:01.885768 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Nov 28 06:57:01 crc kubenswrapper[4946]: I1128 06:57:01.902944 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Nov 28 06:57:01 crc kubenswrapper[4946]: I1128 06:57:01.942206 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Nov 28 06:57:01 crc kubenswrapper[4946]: I1128 06:57:01.948680 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Nov 28 06:57:01 crc kubenswrapper[4946]: I1128 06:57:01.974996 4946 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
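
The run of reflector.go:368 entries above is the kubelet's client-go layer finishing its initial List+Watch for each ConfigMap and Secret that running pods reference; the kubelet creates one reflector per referenced object, so each sync logs its own "Caches populated" line. A minimal sketch of the same mechanism, assuming only that a kubeconfig is readable at the default path (the kubelet itself authenticates with its own credentials instead); the namespace and object name are copied from the first log entry:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	home, _ := os.UserHomeDir()
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Scope the informer to a single object, the way the kubelet scopes one
	// reflector per referenced ConfigMap/Secret.
	factory := informers.NewSharedInformerFactoryWithOptions(client, 10*time.Minute,
		informers.WithNamespace("openshift-dns"),
		informers.WithTweakListOptions(func(o *metav1.ListOptions) {
			o.FieldSelector = "metadata.name=dns-default"
		}))
	inf := factory.Core().V1().ConfigMaps().Informer()
	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	// Returns once the initial List+Watch has filled the local cache -- the
	// point at which the kubelet prints "Caches populated for *v1.ConfigMap".
	if !cache.WaitForCacheSync(stop, inf.HasSynced) {
		panic("cache never synced")
	}
	fmt.Println("caches populated for openshift-dns/dns-default")
}

Scoping the field selector to a single name keeps each watch cheap, which is why one reflector per referenced object is affordable even with the hundreds of objects listed here.
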
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:56:41.466230951 +0000 UTC m=+255.844296072" watchObservedRunningTime="2025-11-28 06:57:01.982868511 +0000 UTC m=+276.360933662" Nov 28 06:57:01 crc kubenswrapper[4946]: I1128 06:57:01.986030 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 28 06:57:01 crc kubenswrapper[4946]: I1128 06:57:01.986103 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-7795679f96-5hsss"] Nov 28 06:57:01 crc kubenswrapper[4946]: E1128 06:57:01.986496 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="986236c0-f012-41ae-aa64-097ad1f7117e" containerName="installer" Nov 28 06:57:01 crc kubenswrapper[4946]: I1128 06:57:01.986524 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="986236c0-f012-41ae-aa64-097ad1f7117e" containerName="installer" Nov 28 06:57:01 crc kubenswrapper[4946]: I1128 06:57:01.986696 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="986236c0-f012-41ae-aa64-097ad1f7117e" containerName="installer" Nov 28 06:57:01 crc kubenswrapper[4946]: I1128 06:57:01.986688 4946 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="68d47c52-9827-4ebb-88a2-ba498092709e" Nov 28 06:57:01 crc kubenswrapper[4946]: I1128 06:57:01.986865 4946 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="68d47c52-9827-4ebb-88a2-ba498092709e" Nov 28 06:57:01 crc kubenswrapper[4946]: I1128 06:57:01.989317 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7795679f96-5hsss" Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.000056 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.001518 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.005832 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.005874 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.005932 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.006134 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.008267 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.008478 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.008646 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 28 06:57:02 crc 
Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.008749 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.008792 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.009866 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.010008 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.016855 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.020688 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.021030 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.025621 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.045450 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2889e552-3c97-474a-8f40-686d8e13ddda-audit-dir\") pod \"oauth-openshift-7795679f96-5hsss\" (UID: \"2889e552-3c97-474a-8f40-686d8e13ddda\") " pod="openshift-authentication/oauth-openshift-7795679f96-5hsss"
Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.045705 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2889e552-3c97-474a-8f40-686d8e13ddda-v4-0-config-user-template-login\") pod \"oauth-openshift-7795679f96-5hsss\" (UID: \"2889e552-3c97-474a-8f40-686d8e13ddda\") " pod="openshift-authentication/oauth-openshift-7795679f96-5hsss"
Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.045806 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2889e552-3c97-474a-8f40-686d8e13ddda-audit-policies\") pod \"oauth-openshift-7795679f96-5hsss\" (UID: \"2889e552-3c97-474a-8f40-686d8e13ddda\") " pod="openshift-authentication/oauth-openshift-7795679f96-5hsss"
Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.045907 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2889e552-3c97-474a-8f40-686d8e13ddda-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7795679f96-5hsss\" (UID: \"2889e552-3c97-474a-8f40-686d8e13ddda\") " pod="openshift-authentication/oauth-openshift-7795679f96-5hsss"
Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.045990 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2889e552-3c97-474a-8f40-686d8e13ddda-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7795679f96-5hsss\" (UID: \"2889e552-3c97-474a-8f40-686d8e13ddda\") " pod="openshift-authentication/oauth-openshift-7795679f96-5hsss"
Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.046090 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2889e552-3c97-474a-8f40-686d8e13ddda-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7795679f96-5hsss\" (UID: \"2889e552-3c97-474a-8f40-686d8e13ddda\") " pod="openshift-authentication/oauth-openshift-7795679f96-5hsss"
Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.046285 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8gfx\" (UniqueName: \"kubernetes.io/projected/2889e552-3c97-474a-8f40-686d8e13ddda-kube-api-access-q8gfx\") pod \"oauth-openshift-7795679f96-5hsss\" (UID: \"2889e552-3c97-474a-8f40-686d8e13ddda\") " pod="openshift-authentication/oauth-openshift-7795679f96-5hsss"
Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.046453 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2889e552-3c97-474a-8f40-686d8e13ddda-v4-0-config-user-template-error\") pod \"oauth-openshift-7795679f96-5hsss\" (UID: \"2889e552-3c97-474a-8f40-686d8e13ddda\") " pod="openshift-authentication/oauth-openshift-7795679f96-5hsss"
Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.046630 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2889e552-3c97-474a-8f40-686d8e13ddda-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7795679f96-5hsss\" (UID: \"2889e552-3c97-474a-8f40-686d8e13ddda\") " pod="openshift-authentication/oauth-openshift-7795679f96-5hsss"
Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.046734 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2889e552-3c97-474a-8f40-686d8e13ddda-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7795679f96-5hsss\" (UID: \"2889e552-3c97-474a-8f40-686d8e13ddda\") " pod="openshift-authentication/oauth-openshift-7795679f96-5hsss"
Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.047041 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2889e552-3c97-474a-8f40-686d8e13ddda-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7795679f96-5hsss\" (UID: \"2889e552-3c97-474a-8f40-686d8e13ddda\") " pod="openshift-authentication/oauth-openshift-7795679f96-5hsss"
Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.047107 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2889e552-3c97-474a-8f40-686d8e13ddda-v4-0-config-system-service-ca\") pod \"oauth-openshift-7795679f96-5hsss\" (UID: \"2889e552-3c97-474a-8f40-686d8e13ddda\") " pod="openshift-authentication/oauth-openshift-7795679f96-5hsss"
Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.047137 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2889e552-3c97-474a-8f40-686d8e13ddda-v4-0-config-system-router-certs\") pod \"oauth-openshift-7795679f96-5hsss\" (UID: \"2889e552-3c97-474a-8f40-686d8e13ddda\") " pod="openshift-authentication/oauth-openshift-7795679f96-5hsss"
Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.047292 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2889e552-3c97-474a-8f40-686d8e13ddda-v4-0-config-system-session\") pod \"oauth-openshift-7795679f96-5hsss\" (UID: \"2889e552-3c97-474a-8f40-686d8e13ddda\") " pod="openshift-authentication/oauth-openshift-7795679f96-5hsss"
Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.058334 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=21.058312798 podStartE2EDuration="21.058312798s" podCreationTimestamp="2025-11-28 06:56:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:57:02.05530808 +0000 UTC m=+276.433373261" watchObservedRunningTime="2025-11-28 06:57:02.058312798 +0000 UTC m=+276.436377919"
Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.130156 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.148791 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2889e552-3c97-474a-8f40-686d8e13ddda-v4-0-config-user-template-error\") pod \"oauth-openshift-7795679f96-5hsss\" (UID: \"2889e552-3c97-474a-8f40-686d8e13ddda\") " pod="openshift-authentication/oauth-openshift-7795679f96-5hsss"
Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.148880 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2889e552-3c97-474a-8f40-686d8e13ddda-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7795679f96-5hsss\" (UID: \"2889e552-3c97-474a-8f40-686d8e13ddda\") " pod="openshift-authentication/oauth-openshift-7795679f96-5hsss"
Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.148912 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2889e552-3c97-474a-8f40-686d8e13ddda-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7795679f96-5hsss\" (UID: \"2889e552-3c97-474a-8f40-686d8e13ddda\") " pod="openshift-authentication/oauth-openshift-7795679f96-5hsss"
Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.148962 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2889e552-3c97-474a-8f40-686d8e13ddda-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7795679f96-5hsss\" (UID: \"2889e552-3c97-474a-8f40-686d8e13ddda\") " pod="openshift-authentication/oauth-openshift-7795679f96-5hsss"
Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.148984 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2889e552-3c97-474a-8f40-686d8e13ddda-v4-0-config-system-service-ca\") pod \"oauth-openshift-7795679f96-5hsss\" (UID: \"2889e552-3c97-474a-8f40-686d8e13ddda\") " pod="openshift-authentication/oauth-openshift-7795679f96-5hsss"
Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.149177 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2889e552-3c97-474a-8f40-686d8e13ddda-v4-0-config-system-router-certs\") pod \"oauth-openshift-7795679f96-5hsss\" (UID: \"2889e552-3c97-474a-8f40-686d8e13ddda\") " pod="openshift-authentication/oauth-openshift-7795679f96-5hsss"
Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.149211 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2889e552-3c97-474a-8f40-686d8e13ddda-v4-0-config-system-session\") pod \"oauth-openshift-7795679f96-5hsss\" (UID: \"2889e552-3c97-474a-8f40-686d8e13ddda\") " pod="openshift-authentication/oauth-openshift-7795679f96-5hsss"
Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.149261 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2889e552-3c97-474a-8f40-686d8e13ddda-audit-dir\") pod \"oauth-openshift-7795679f96-5hsss\" (UID: \"2889e552-3c97-474a-8f40-686d8e13ddda\") " pod="openshift-authentication/oauth-openshift-7795679f96-5hsss"
Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.149292 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2889e552-3c97-474a-8f40-686d8e13ddda-v4-0-config-user-template-login\") pod \"oauth-openshift-7795679f96-5hsss\" (UID: \"2889e552-3c97-474a-8f40-686d8e13ddda\") " pod="openshift-authentication/oauth-openshift-7795679f96-5hsss"
Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.149405 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2889e552-3c97-474a-8f40-686d8e13ddda-audit-policies\") pod \"oauth-openshift-7795679f96-5hsss\" (UID: \"2889e552-3c97-474a-8f40-686d8e13ddda\") " pod="openshift-authentication/oauth-openshift-7795679f96-5hsss"
Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.149432 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2889e552-3c97-474a-8f40-686d8e13ddda-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7795679f96-5hsss\" (UID: \"2889e552-3c97-474a-8f40-686d8e13ddda\") " pod="openshift-authentication/oauth-openshift-7795679f96-5hsss"
Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.149456 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2889e552-3c97-474a-8f40-686d8e13ddda-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7795679f96-5hsss\" (UID: \"2889e552-3c97-474a-8f40-686d8e13ddda\") " pod="openshift-authentication/oauth-openshift-7795679f96-5hsss"
Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.149767 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2889e552-3c97-474a-8f40-686d8e13ddda-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7795679f96-5hsss\" (UID: \"2889e552-3c97-474a-8f40-686d8e13ddda\") " pod="openshift-authentication/oauth-openshift-7795679f96-5hsss"
Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.149909 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2889e552-3c97-474a-8f40-686d8e13ddda-audit-dir\") pod \"oauth-openshift-7795679f96-5hsss\" (UID: \"2889e552-3c97-474a-8f40-686d8e13ddda\") " pod="openshift-authentication/oauth-openshift-7795679f96-5hsss"
Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.150117 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8gfx\" (UniqueName: \"kubernetes.io/projected/2889e552-3c97-474a-8f40-686d8e13ddda-kube-api-access-q8gfx\") pod \"oauth-openshift-7795679f96-5hsss\" (UID: \"2889e552-3c97-474a-8f40-686d8e13ddda\") " pod="openshift-authentication/oauth-openshift-7795679f96-5hsss"
Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.150532 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2889e552-3c97-474a-8f40-686d8e13ddda-v4-0-config-system-service-ca\") pod \"oauth-openshift-7795679f96-5hsss\" (UID: \"2889e552-3c97-474a-8f40-686d8e13ddda\") " pod="openshift-authentication/oauth-openshift-7795679f96-5hsss"
Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.150653 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2889e552-3c97-474a-8f40-686d8e13ddda-audit-policies\") pod \"oauth-openshift-7795679f96-5hsss\" (UID: \"2889e552-3c97-474a-8f40-686d8e13ddda\") " pod="openshift-authentication/oauth-openshift-7795679f96-5hsss"
Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.151514 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2889e552-3c97-474a-8f40-686d8e13ddda-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7795679f96-5hsss\" (UID: \"2889e552-3c97-474a-8f40-686d8e13ddda\") " pod="openshift-authentication/oauth-openshift-7795679f96-5hsss"
Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.157296 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2889e552-3c97-474a-8f40-686d8e13ddda-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7795679f96-5hsss\" (UID: \"2889e552-3c97-474a-8f40-686d8e13ddda\") " pod="openshift-authentication/oauth-openshift-7795679f96-5hsss"
Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.166288 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2889e552-3c97-474a-8f40-686d8e13ddda-v4-0-config-user-template-error\") pod \"oauth-openshift-7795679f96-5hsss\" (UID: \"2889e552-3c97-474a-8f40-686d8e13ddda\") " pod="openshift-authentication/oauth-openshift-7795679f96-5hsss"
Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.167178 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2889e552-3c97-474a-8f40-686d8e13ddda-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7795679f96-5hsss\" (UID: \"2889e552-3c97-474a-8f40-686d8e13ddda\") " pod="openshift-authentication/oauth-openshift-7795679f96-5hsss"
Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.168971 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2889e552-3c97-474a-8f40-686d8e13ddda-v4-0-config-system-session\") pod \"oauth-openshift-7795679f96-5hsss\" (UID: \"2889e552-3c97-474a-8f40-686d8e13ddda\") " pod="openshift-authentication/oauth-openshift-7795679f96-5hsss"
Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.169440 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2889e552-3c97-474a-8f40-686d8e13ddda-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7795679f96-5hsss\" (UID: \"2889e552-3c97-474a-8f40-686d8e13ddda\") " pod="openshift-authentication/oauth-openshift-7795679f96-5hsss"
Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.169633 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2889e552-3c97-474a-8f40-686d8e13ddda-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7795679f96-5hsss\" (UID: \"2889e552-3c97-474a-8f40-686d8e13ddda\") " pod="openshift-authentication/oauth-openshift-7795679f96-5hsss"
Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.170038 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2889e552-3c97-474a-8f40-686d8e13ddda-v4-0-config-user-template-login\") pod \"oauth-openshift-7795679f96-5hsss\" (UID: \"2889e552-3c97-474a-8f40-686d8e13ddda\") " pod="openshift-authentication/oauth-openshift-7795679f96-5hsss"
Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.171303 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2889e552-3c97-474a-8f40-686d8e13ddda-v4-0-config-system-router-certs\") pod \"oauth-openshift-7795679f96-5hsss\" (UID: \"2889e552-3c97-474a-8f40-686d8e13ddda\") " pod="openshift-authentication/oauth-openshift-7795679f96-5hsss"
Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.171800 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8gfx\" (UniqueName: \"kubernetes.io/projected/2889e552-3c97-474a-8f40-686d8e13ddda-kube-api-access-q8gfx\") pod \"oauth-openshift-7795679f96-5hsss\" (UID: \"2889e552-3c97-474a-8f40-686d8e13ddda\") " pod="openshift-authentication/oauth-openshift-7795679f96-5hsss"
Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.174082 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2889e552-3c97-474a-8f40-686d8e13ddda-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7795679f96-5hsss\" (UID: \"2889e552-3c97-474a-8f40-686d8e13ddda\") " pod="openshift-authentication/oauth-openshift-7795679f96-5hsss"
Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.204567 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
object-"openshift-ingress-canary"/"canary-serving-cert" Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.321852 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.325959 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7795679f96-5hsss" Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.338615 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.340207 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.347192 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.382890 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.463418 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.463661 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.470047 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.472513 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.476081 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.483385 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.520768 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.553212 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.553645 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.605812 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.657960 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.737757 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 
06:57:02.768154 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7795679f96-5hsss"] Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.804189 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.820067 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.834011 4946 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.849038 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 28 06:57:02 crc kubenswrapper[4946]: I1128 06:57:02.986826 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 28 06:57:03 crc kubenswrapper[4946]: I1128 06:57:03.124482 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 28 06:57:03 crc kubenswrapper[4946]: I1128 06:57:03.124830 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 28 06:57:03 crc kubenswrapper[4946]: I1128 06:57:03.160533 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 28 06:57:03 crc kubenswrapper[4946]: I1128 06:57:03.260560 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 28 06:57:03 crc kubenswrapper[4946]: I1128 06:57:03.361824 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 28 06:57:03 crc kubenswrapper[4946]: I1128 06:57:03.374129 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 28 06:57:03 crc kubenswrapper[4946]: I1128 06:57:03.454527 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 28 06:57:03 crc kubenswrapper[4946]: I1128 06:57:03.502691 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 28 06:57:03 crc kubenswrapper[4946]: I1128 06:57:03.570620 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 28 06:57:03 crc kubenswrapper[4946]: I1128 06:57:03.607751 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7795679f96-5hsss" event={"ID":"2889e552-3c97-474a-8f40-686d8e13ddda","Type":"ContainerStarted","Data":"210b0eda5ca2f7bf2a73fc1e39387c4536710d1d36738a0365c0ce05407f4b6d"} Nov 28 06:57:03 crc kubenswrapper[4946]: I1128 06:57:03.607799 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7795679f96-5hsss" event={"ID":"2889e552-3c97-474a-8f40-686d8e13ddda","Type":"ContainerStarted","Data":"f7e7acff4a517222c6e861972447d35e05ac3f43be1cde536ad5249fb2c50995"} Nov 28 06:57:03 crc kubenswrapper[4946]: I1128 06:57:03.608179 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-authentication/oauth-openshift-7795679f96-5hsss" Nov 28 06:57:03 crc kubenswrapper[4946]: I1128 06:57:03.614046 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7795679f96-5hsss" Nov 28 06:57:03 crc kubenswrapper[4946]: I1128 06:57:03.632963 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7795679f96-5hsss" podStartSLOduration=72.63294574 podStartE2EDuration="1m12.63294574s" podCreationTimestamp="2025-11-28 06:55:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:57:03.631450712 +0000 UTC m=+278.009515833" watchObservedRunningTime="2025-11-28 06:57:03.63294574 +0000 UTC m=+278.011010851" Nov 28 06:57:03 crc kubenswrapper[4946]: I1128 06:57:03.657647 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 28 06:57:03 crc kubenswrapper[4946]: I1128 06:57:03.682735 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 28 06:57:03 crc kubenswrapper[4946]: I1128 06:57:03.687689 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 06:57:03 crc kubenswrapper[4946]: I1128 06:57:03.691514 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 06:57:03 crc kubenswrapper[4946]: I1128 06:57:03.701097 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 28 06:57:03 crc kubenswrapper[4946]: I1128 06:57:03.742627 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 28 06:57:03 crc kubenswrapper[4946]: I1128 06:57:03.806301 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 28 06:57:03 crc kubenswrapper[4946]: I1128 06:57:03.812923 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 28 06:57:03 crc kubenswrapper[4946]: I1128 06:57:03.921297 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 28 06:57:04 crc kubenswrapper[4946]: I1128 06:57:04.025115 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 28 06:57:04 crc kubenswrapper[4946]: I1128 06:57:04.080576 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 28 06:57:04 crc kubenswrapper[4946]: I1128 06:57:04.119545 4946 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 28 06:57:04 crc kubenswrapper[4946]: I1128 06:57:04.119936 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://a72b73e0459c7dddf4a9449613a4a316ee6b27e1b639f9947f7eaf20497fcce3" gracePeriod=5 Nov 28 06:57:04 crc kubenswrapper[4946]: I1128 06:57:04.167034 4946 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 28 06:57:04 crc kubenswrapper[4946]: I1128 06:57:04.257591 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 28 06:57:04 crc kubenswrapper[4946]: I1128 06:57:04.277842 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 28 06:57:04 crc kubenswrapper[4946]: I1128 06:57:04.385946 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 28 06:57:04 crc kubenswrapper[4946]: I1128 06:57:04.406812 4946 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 28 06:57:04 crc kubenswrapper[4946]: I1128 06:57:04.434438 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 28 06:57:04 crc kubenswrapper[4946]: I1128 06:57:04.464153 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 28 06:57:04 crc kubenswrapper[4946]: I1128 06:57:04.474728 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 28 06:57:04 crc kubenswrapper[4946]: I1128 06:57:04.842882 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 28 06:57:04 crc kubenswrapper[4946]: I1128 06:57:04.887374 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 28 06:57:05 crc kubenswrapper[4946]: I1128 06:57:05.085987 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 28 06:57:05 crc kubenswrapper[4946]: I1128 06:57:05.185803 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 28 06:57:05 crc kubenswrapper[4946]: I1128 06:57:05.265364 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 28 06:57:05 crc kubenswrapper[4946]: I1128 06:57:05.283663 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 28 06:57:05 crc kubenswrapper[4946]: I1128 06:57:05.305668 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 28 06:57:05 crc kubenswrapper[4946]: I1128 06:57:05.338890 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 28 06:57:05 crc kubenswrapper[4946]: I1128 06:57:05.340545 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 28 06:57:05 crc kubenswrapper[4946]: I1128 06:57:05.351490 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 28 06:57:05 crc kubenswrapper[4946]: I1128 06:57:05.512832 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 28 06:57:05 crc kubenswrapper[4946]: I1128 06:57:05.618609 4946 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 28 06:57:05 crc kubenswrapper[4946]: I1128 06:57:05.664582 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 28 06:57:05 crc kubenswrapper[4946]: I1128 06:57:05.716989 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 28 06:57:05 crc kubenswrapper[4946]: I1128 06:57:05.776440 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 28 06:57:05 crc kubenswrapper[4946]: I1128 06:57:05.829523 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 28 06:57:05 crc kubenswrapper[4946]: I1128 06:57:05.922166 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 28 06:57:05 crc kubenswrapper[4946]: I1128 06:57:05.988622 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 28 06:57:06 crc kubenswrapper[4946]: I1128 06:57:06.113825 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 28 06:57:06 crc kubenswrapper[4946]: I1128 06:57:06.640156 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 28 06:57:06 crc kubenswrapper[4946]: I1128 06:57:06.655481 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 28 06:57:06 crc kubenswrapper[4946]: I1128 06:57:06.845456 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 28 06:57:07 crc kubenswrapper[4946]: I1128 06:57:07.039622 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 28 06:57:07 crc kubenswrapper[4946]: I1128 06:57:07.088069 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 28 06:57:07 crc kubenswrapper[4946]: I1128 06:57:07.467787 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 28 06:57:09 crc kubenswrapper[4946]: I1128 06:57:09.652016 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 28 06:57:09 crc kubenswrapper[4946]: I1128 06:57:09.653187 4946 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="a72b73e0459c7dddf4a9449613a4a316ee6b27e1b639f9947f7eaf20497fcce3" exitCode=137 Nov 28 06:57:09 crc kubenswrapper[4946]: I1128 06:57:09.729615 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 28 06:57:09 crc kubenswrapper[4946]: I1128 06:57:09.729730 4946 util.go:48] "No ready sandbox for pod can be found. 
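
The startup-monitor teardown that began at 06:57:04.119936 with gracePeriod=5 concludes here: PLEG observes the container gone at 06:57:09.653187, roughly 5.5 s later, with exitCode=137. The code follows the wait(2) convention of 128 plus the signal number, so 137 means SIGKILL (9): the process did not exit within the grace period and was killed. A small self-contained decoder of that convention:

package main

import (
	"fmt"
	"syscall"
)

// describeExit interprets a container exit code: values above 128
// indicate termination by signal (code - 128).
func describeExit(code int) string {
	if code > 128 {
		sig := syscall.Signal(code - 128)
		return fmt.Sprintf("killed by signal %d (%s)", int(sig), sig)
	}
	return fmt.Sprintf("exited normally with status %d", code)
}

func main() {
	// exitCode reported by the PLEG event above.
	fmt.Println(describeExit(137)) // killed by signal 9 (killed)
}
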
Nov 28 06:57:09 crc kubenswrapper[4946]: I1128 06:57:09.729730 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 28 06:57:09 crc kubenswrapper[4946]: I1128 06:57:09.780004 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Nov 28 06:57:09 crc kubenswrapper[4946]: I1128 06:57:09.780137 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Nov 28 06:57:09 crc kubenswrapper[4946]: I1128 06:57:09.780204 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Nov 28 06:57:09 crc kubenswrapper[4946]: I1128 06:57:09.780294 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Nov 28 06:57:09 crc kubenswrapper[4946]: I1128 06:57:09.780330 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 06:57:09 crc kubenswrapper[4946]: I1128 06:57:09.780367 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 06:57:09 crc kubenswrapper[4946]: I1128 06:57:09.780383 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Nov 28 06:57:09 crc kubenswrapper[4946]: I1128 06:57:09.780504 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 06:57:09 crc kubenswrapper[4946]: I1128 06:57:09.780526 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 06:57:09 crc kubenswrapper[4946]: I1128 06:57:09.781286 4946 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Nov 28 06:57:09 crc kubenswrapper[4946]: I1128 06:57:09.781323 4946 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Nov 28 06:57:09 crc kubenswrapper[4946]: I1128 06:57:09.781341 4946 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Nov 28 06:57:09 crc kubenswrapper[4946]: I1128 06:57:09.781356 4946 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Nov 28 06:57:09 crc kubenswrapper[4946]: I1128 06:57:09.791417 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 06:57:09 crc kubenswrapper[4946]: I1128 06:57:09.883301 4946 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Nov 28 06:57:10 crc kubenswrapper[4946]: I1128 06:57:10.004242 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Nov 28 06:57:10 crc kubenswrapper[4946]: I1128 06:57:10.004861 4946 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID=""
Nov 28 06:57:10 crc kubenswrapper[4946]: I1128 06:57:10.023726 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Nov 28 06:57:10 crc kubenswrapper[4946]: I1128 06:57:10.023852 4946 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="fd10b652-29a1-485c-ba7d-a6b3068cf6e3"
Nov 28 06:57:10 crc kubenswrapper[4946]: I1128 06:57:10.031304 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Nov 28 06:57:10 crc kubenswrapper[4946]: I1128 06:57:10.031360 4946 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="fd10b652-29a1-485c-ba7d-a6b3068cf6e3"
Nov 28 06:57:10 crc kubenswrapper[4946]: I1128 06:57:10.661541 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Nov 28 06:57:10 crc kubenswrapper[4946]: I1128 06:57:10.661990 4946 scope.go:117] "RemoveContainer" containerID="a72b73e0459c7dddf4a9449613a4a316ee6b27e1b639f9947f7eaf20497fcce3"
Nov 28 06:57:10 crc kubenswrapper[4946]: I1128 06:57:10.662157 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 28 06:57:15 crc kubenswrapper[4946]: I1128 06:57:15.961891 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Nov 28 06:57:20 crc kubenswrapper[4946]: I1128 06:57:20.644398 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Nov 28 06:57:26 crc kubenswrapper[4946]: I1128 06:57:26.791235 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Nov 28 06:57:30 crc kubenswrapper[4946]: I1128 06:57:30.589069 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Nov 28 06:57:41 crc kubenswrapper[4946]: I1128 06:57:41.738347 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-tjmjq"]
Nov 28 06:57:41 crc kubenswrapper[4946]: E1128 06:57:41.739354 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Nov 28 06:57:41 crc kubenswrapper[4946]: I1128 06:57:41.739369 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Nov 28 06:57:41 crc kubenswrapper[4946]: I1128 06:57:41.739575 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
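
With the startup-monitor's stale CPU and memory-manager state pruned, the kubelet admits the image-registry pod whose volume entries follow. Its PVC is mounted through CSI, and the volume's UniqueName in those entries, kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8, concatenates the in-tree plugin prefix, the CSI driver name, and the driver-scoped volume handle with a caret. A sketch of splitting it; the format here is the unique-volume-name convention as observed in these logs, not a documented API:

package main

import (
	"fmt"
	"strings"
)

func main() {
	unique := "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8"
	rest := strings.TrimPrefix(unique, "kubernetes.io/csi/")
	// The caret separates the CSI driver name from the volume handle.
	driver, handle, ok := strings.Cut(rest, "^")
	if !ok {
		panic("not a CSI unique volume name")
	}
	fmt.Printf("driver=%s handle=%s\n", driver, handle)
	// driver=kubevirt.io.hostpath-provisioner handle=pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8
}
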
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-tjmjq" Nov 28 06:57:41 crc kubenswrapper[4946]: I1128 06:57:41.756634 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-tjmjq"] Nov 28 06:57:41 crc kubenswrapper[4946]: I1128 06:57:41.887818 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/05fb7b63-e4ce-4d1d-8611-be887d4bbd92-bound-sa-token\") pod \"image-registry-66df7c8f76-tjmjq\" (UID: \"05fb7b63-e4ce-4d1d-8611-be887d4bbd92\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjmjq" Nov 28 06:57:41 crc kubenswrapper[4946]: I1128 06:57:41.887877 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/05fb7b63-e4ce-4d1d-8611-be887d4bbd92-registry-certificates\") pod \"image-registry-66df7c8f76-tjmjq\" (UID: \"05fb7b63-e4ce-4d1d-8611-be887d4bbd92\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjmjq" Nov 28 06:57:41 crc kubenswrapper[4946]: I1128 06:57:41.887977 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h62xf\" (UniqueName: \"kubernetes.io/projected/05fb7b63-e4ce-4d1d-8611-be887d4bbd92-kube-api-access-h62xf\") pod \"image-registry-66df7c8f76-tjmjq\" (UID: \"05fb7b63-e4ce-4d1d-8611-be887d4bbd92\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjmjq" Nov 28 06:57:41 crc kubenswrapper[4946]: I1128 06:57:41.888015 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/05fb7b63-e4ce-4d1d-8611-be887d4bbd92-installation-pull-secrets\") pod \"image-registry-66df7c8f76-tjmjq\" (UID: \"05fb7b63-e4ce-4d1d-8611-be887d4bbd92\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjmjq" Nov 28 06:57:41 crc kubenswrapper[4946]: I1128 06:57:41.888165 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/05fb7b63-e4ce-4d1d-8611-be887d4bbd92-registry-tls\") pod \"image-registry-66df7c8f76-tjmjq\" (UID: \"05fb7b63-e4ce-4d1d-8611-be887d4bbd92\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjmjq" Nov 28 06:57:41 crc kubenswrapper[4946]: I1128 06:57:41.888313 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/05fb7b63-e4ce-4d1d-8611-be887d4bbd92-trusted-ca\") pod \"image-registry-66df7c8f76-tjmjq\" (UID: \"05fb7b63-e4ce-4d1d-8611-be887d4bbd92\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjmjq" Nov 28 06:57:41 crc kubenswrapper[4946]: I1128 06:57:41.888354 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/05fb7b63-e4ce-4d1d-8611-be887d4bbd92-ca-trust-extracted\") pod \"image-registry-66df7c8f76-tjmjq\" (UID: \"05fb7b63-e4ce-4d1d-8611-be887d4bbd92\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjmjq" Nov 28 06:57:41 crc kubenswrapper[4946]: I1128 06:57:41.888435 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-tjmjq\" (UID: \"05fb7b63-e4ce-4d1d-8611-be887d4bbd92\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjmjq" Nov 28 06:57:41 crc kubenswrapper[4946]: I1128 06:57:41.910816 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-tjmjq\" (UID: \"05fb7b63-e4ce-4d1d-8611-be887d4bbd92\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjmjq" Nov 28 06:57:41 crc kubenswrapper[4946]: I1128 06:57:41.990431 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h62xf\" (UniqueName: \"kubernetes.io/projected/05fb7b63-e4ce-4d1d-8611-be887d4bbd92-kube-api-access-h62xf\") pod \"image-registry-66df7c8f76-tjmjq\" (UID: \"05fb7b63-e4ce-4d1d-8611-be887d4bbd92\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjmjq" Nov 28 06:57:41 crc kubenswrapper[4946]: I1128 06:57:41.990524 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/05fb7b63-e4ce-4d1d-8611-be887d4bbd92-installation-pull-secrets\") pod \"image-registry-66df7c8f76-tjmjq\" (UID: \"05fb7b63-e4ce-4d1d-8611-be887d4bbd92\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjmjq" Nov 28 06:57:41 crc kubenswrapper[4946]: I1128 06:57:41.990552 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/05fb7b63-e4ce-4d1d-8611-be887d4bbd92-registry-tls\") pod \"image-registry-66df7c8f76-tjmjq\" (UID: \"05fb7b63-e4ce-4d1d-8611-be887d4bbd92\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjmjq" Nov 28 06:57:41 crc kubenswrapper[4946]: I1128 06:57:41.990586 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/05fb7b63-e4ce-4d1d-8611-be887d4bbd92-ca-trust-extracted\") pod \"image-registry-66df7c8f76-tjmjq\" (UID: \"05fb7b63-e4ce-4d1d-8611-be887d4bbd92\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjmjq" Nov 28 06:57:41 crc kubenswrapper[4946]: I1128 06:57:41.990601 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/05fb7b63-e4ce-4d1d-8611-be887d4bbd92-trusted-ca\") pod \"image-registry-66df7c8f76-tjmjq\" (UID: \"05fb7b63-e4ce-4d1d-8611-be887d4bbd92\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjmjq" Nov 28 06:57:41 crc kubenswrapper[4946]: I1128 06:57:41.990639 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/05fb7b63-e4ce-4d1d-8611-be887d4bbd92-bound-sa-token\") pod \"image-registry-66df7c8f76-tjmjq\" (UID: \"05fb7b63-e4ce-4d1d-8611-be887d4bbd92\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjmjq" Nov 28 06:57:41 crc kubenswrapper[4946]: I1128 06:57:41.990669 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/05fb7b63-e4ce-4d1d-8611-be887d4bbd92-registry-certificates\") pod \"image-registry-66df7c8f76-tjmjq\" (UID: \"05fb7b63-e4ce-4d1d-8611-be887d4bbd92\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-tjmjq" Nov 28 06:57:41 crc kubenswrapper[4946]: I1128 06:57:41.991379 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/05fb7b63-e4ce-4d1d-8611-be887d4bbd92-ca-trust-extracted\") pod \"image-registry-66df7c8f76-tjmjq\" (UID: \"05fb7b63-e4ce-4d1d-8611-be887d4bbd92\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjmjq" Nov 28 06:57:41 crc kubenswrapper[4946]: I1128 06:57:41.992055 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/05fb7b63-e4ce-4d1d-8611-be887d4bbd92-trusted-ca\") pod \"image-registry-66df7c8f76-tjmjq\" (UID: \"05fb7b63-e4ce-4d1d-8611-be887d4bbd92\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjmjq" Nov 28 06:57:41 crc kubenswrapper[4946]: I1128 06:57:41.992109 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/05fb7b63-e4ce-4d1d-8611-be887d4bbd92-registry-certificates\") pod \"image-registry-66df7c8f76-tjmjq\" (UID: \"05fb7b63-e4ce-4d1d-8611-be887d4bbd92\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjmjq" Nov 28 06:57:41 crc kubenswrapper[4946]: I1128 06:57:41.999489 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/05fb7b63-e4ce-4d1d-8611-be887d4bbd92-registry-tls\") pod \"image-registry-66df7c8f76-tjmjq\" (UID: \"05fb7b63-e4ce-4d1d-8611-be887d4bbd92\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjmjq" Nov 28 06:57:42 crc kubenswrapper[4946]: I1128 06:57:42.004860 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/05fb7b63-e4ce-4d1d-8611-be887d4bbd92-installation-pull-secrets\") pod \"image-registry-66df7c8f76-tjmjq\" (UID: \"05fb7b63-e4ce-4d1d-8611-be887d4bbd92\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjmjq" Nov 28 06:57:42 crc kubenswrapper[4946]: I1128 06:57:42.009670 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/05fb7b63-e4ce-4d1d-8611-be887d4bbd92-bound-sa-token\") pod \"image-registry-66df7c8f76-tjmjq\" (UID: \"05fb7b63-e4ce-4d1d-8611-be887d4bbd92\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjmjq" Nov 28 06:57:42 crc kubenswrapper[4946]: I1128 06:57:42.012198 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h62xf\" (UniqueName: \"kubernetes.io/projected/05fb7b63-e4ce-4d1d-8611-be887d4bbd92-kube-api-access-h62xf\") pod \"image-registry-66df7c8f76-tjmjq\" (UID: \"05fb7b63-e4ce-4d1d-8611-be887d4bbd92\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjmjq" Nov 28 06:57:42 crc kubenswrapper[4946]: I1128 06:57:42.060279 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-tjmjq" Nov 28 06:57:42 crc kubenswrapper[4946]: I1128 06:57:42.528146 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-tjmjq"] Nov 28 06:57:42 crc kubenswrapper[4946]: I1128 06:57:42.891756 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-tjmjq" event={"ID":"05fb7b63-e4ce-4d1d-8611-be887d4bbd92","Type":"ContainerStarted","Data":"49e111de2c3d96ed22bfb6b4d971cb4cb6613da6074d7f264df90073d077923f"} Nov 28 06:57:42 crc kubenswrapper[4946]: I1128 06:57:42.893160 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-tjmjq" event={"ID":"05fb7b63-e4ce-4d1d-8611-be887d4bbd92","Type":"ContainerStarted","Data":"7a385f25276668ff627e9ae5c2066e33115afb31f6b36157363b0a43166ddfce"} Nov 28 06:57:42 crc kubenswrapper[4946]: I1128 06:57:42.894042 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-tjmjq" Nov 28 06:57:42 crc kubenswrapper[4946]: I1128 06:57:42.914519 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-tjmjq" podStartSLOduration=1.914501892 podStartE2EDuration="1.914501892s" podCreationTimestamp="2025-11-28 06:57:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:57:42.910374126 +0000 UTC m=+317.288439237" watchObservedRunningTime="2025-11-28 06:57:42.914501892 +0000 UTC m=+317.292567003" Nov 28 06:58:02 crc kubenswrapper[4946]: I1128 06:58:02.072207 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-tjmjq" Nov 28 06:58:02 crc kubenswrapper[4946]: I1128 06:58:02.149200 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qhbbg"] Nov 28 06:58:24 crc kubenswrapper[4946]: I1128 06:58:24.731084 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 06:58:24 crc kubenswrapper[4946]: I1128 06:58:24.731864 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 06:58:27 crc kubenswrapper[4946]: I1128 06:58:27.199955 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" podUID="2ba6db31-f148-40c3-b5cc-60b2a8e063e4" containerName="registry" containerID="cri-o://2c59ef8dbbcf9ff500b8a57b320169e8616e4599b08b1ce2ea3e3359cf005d93" gracePeriod=30 Nov 28 06:58:27 crc kubenswrapper[4946]: I1128 06:58:27.645709 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" Nov 28 06:58:27 crc kubenswrapper[4946]: I1128 06:58:27.767598 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2ba6db31-f148-40c3-b5cc-60b2a8e063e4-bound-sa-token\") pod \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " Nov 28 06:58:27 crc kubenswrapper[4946]: I1128 06:58:27.767653 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2ba6db31-f148-40c3-b5cc-60b2a8e063e4-installation-pull-secrets\") pod \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " Nov 28 06:58:27 crc kubenswrapper[4946]: I1128 06:58:27.767771 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2ba6db31-f148-40c3-b5cc-60b2a8e063e4-registry-tls\") pod \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " Nov 28 06:58:27 crc kubenswrapper[4946]: I1128 06:58:27.767812 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2ba6db31-f148-40c3-b5cc-60b2a8e063e4-ca-trust-extracted\") pod \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " Nov 28 06:58:27 crc kubenswrapper[4946]: I1128 06:58:27.767854 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2ba6db31-f148-40c3-b5cc-60b2a8e063e4-trusted-ca\") pod \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " Nov 28 06:58:27 crc kubenswrapper[4946]: I1128 06:58:27.768074 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " Nov 28 06:58:27 crc kubenswrapper[4946]: I1128 06:58:27.768110 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2ba6db31-f148-40c3-b5cc-60b2a8e063e4-registry-certificates\") pod \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " Nov 28 06:58:27 crc kubenswrapper[4946]: I1128 06:58:27.768182 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgt46\" (UniqueName: \"kubernetes.io/projected/2ba6db31-f148-40c3-b5cc-60b2a8e063e4-kube-api-access-vgt46\") pod \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\" (UID: \"2ba6db31-f148-40c3-b5cc-60b2a8e063e4\") " Nov 28 06:58:27 crc kubenswrapper[4946]: I1128 06:58:27.769354 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ba6db31-f148-40c3-b5cc-60b2a8e063e4-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "2ba6db31-f148-40c3-b5cc-60b2a8e063e4" (UID: "2ba6db31-f148-40c3-b5cc-60b2a8e063e4"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:58:27 crc kubenswrapper[4946]: I1128 06:58:27.769343 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ba6db31-f148-40c3-b5cc-60b2a8e063e4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2ba6db31-f148-40c3-b5cc-60b2a8e063e4" (UID: "2ba6db31-f148-40c3-b5cc-60b2a8e063e4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:58:27 crc kubenswrapper[4946]: I1128 06:58:27.775817 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ba6db31-f148-40c3-b5cc-60b2a8e063e4-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "2ba6db31-f148-40c3-b5cc-60b2a8e063e4" (UID: "2ba6db31-f148-40c3-b5cc-60b2a8e063e4"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:58:27 crc kubenswrapper[4946]: I1128 06:58:27.776088 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ba6db31-f148-40c3-b5cc-60b2a8e063e4-kube-api-access-vgt46" (OuterVolumeSpecName: "kube-api-access-vgt46") pod "2ba6db31-f148-40c3-b5cc-60b2a8e063e4" (UID: "2ba6db31-f148-40c3-b5cc-60b2a8e063e4"). InnerVolumeSpecName "kube-api-access-vgt46". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:58:27 crc kubenswrapper[4946]: I1128 06:58:27.778061 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ba6db31-f148-40c3-b5cc-60b2a8e063e4-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "2ba6db31-f148-40c3-b5cc-60b2a8e063e4" (UID: "2ba6db31-f148-40c3-b5cc-60b2a8e063e4"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:58:27 crc kubenswrapper[4946]: I1128 06:58:27.778910 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ba6db31-f148-40c3-b5cc-60b2a8e063e4-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "2ba6db31-f148-40c3-b5cc-60b2a8e063e4" (UID: "2ba6db31-f148-40c3-b5cc-60b2a8e063e4"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:58:27 crc kubenswrapper[4946]: I1128 06:58:27.791488 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ba6db31-f148-40c3-b5cc-60b2a8e063e4-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "2ba6db31-f148-40c3-b5cc-60b2a8e063e4" (UID: "2ba6db31-f148-40c3-b5cc-60b2a8e063e4"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:58:27 crc kubenswrapper[4946]: I1128 06:58:27.793537 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "2ba6db31-f148-40c3-b5cc-60b2a8e063e4" (UID: "2ba6db31-f148-40c3-b5cc-60b2a8e063e4"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 28 06:58:27 crc kubenswrapper[4946]: I1128 06:58:27.870443 4946 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2ba6db31-f148-40c3-b5cc-60b2a8e063e4-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 28 06:58:27 crc kubenswrapper[4946]: I1128 06:58:27.870530 4946 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2ba6db31-f148-40c3-b5cc-60b2a8e063e4-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 28 06:58:27 crc kubenswrapper[4946]: I1128 06:58:27.870545 4946 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2ba6db31-f148-40c3-b5cc-60b2a8e063e4-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 06:58:27 crc kubenswrapper[4946]: I1128 06:58:27.870557 4946 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2ba6db31-f148-40c3-b5cc-60b2a8e063e4-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 28 06:58:27 crc kubenswrapper[4946]: I1128 06:58:27.870576 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgt46\" (UniqueName: \"kubernetes.io/projected/2ba6db31-f148-40c3-b5cc-60b2a8e063e4-kube-api-access-vgt46\") on node \"crc\" DevicePath \"\"" Nov 28 06:58:27 crc kubenswrapper[4946]: I1128 06:58:27.870589 4946 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2ba6db31-f148-40c3-b5cc-60b2a8e063e4-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 28 06:58:27 crc kubenswrapper[4946]: I1128 06:58:27.870605 4946 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2ba6db31-f148-40c3-b5cc-60b2a8e063e4-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 28 06:58:28 crc kubenswrapper[4946]: I1128 06:58:28.242610 4946 generic.go:334] "Generic (PLEG): container finished" podID="2ba6db31-f148-40c3-b5cc-60b2a8e063e4" containerID="2c59ef8dbbcf9ff500b8a57b320169e8616e4599b08b1ce2ea3e3359cf005d93" exitCode=0 Nov 28 06:58:28 crc kubenswrapper[4946]: I1128 06:58:28.242741 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" Nov 28 06:58:28 crc kubenswrapper[4946]: I1128 06:58:28.242764 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" event={"ID":"2ba6db31-f148-40c3-b5cc-60b2a8e063e4","Type":"ContainerDied","Data":"2c59ef8dbbcf9ff500b8a57b320169e8616e4599b08b1ce2ea3e3359cf005d93"} Nov 28 06:58:28 crc kubenswrapper[4946]: I1128 06:58:28.242885 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-qhbbg" event={"ID":"2ba6db31-f148-40c3-b5cc-60b2a8e063e4","Type":"ContainerDied","Data":"6c7296ac0d58a37d63624a8b72cb3d1be4cf2a01d44a7f321c5682408f31d83e"} Nov 28 06:58:28 crc kubenswrapper[4946]: I1128 06:58:28.242918 4946 scope.go:117] "RemoveContainer" containerID="2c59ef8dbbcf9ff500b8a57b320169e8616e4599b08b1ce2ea3e3359cf005d93" Nov 28 06:58:28 crc kubenswrapper[4946]: I1128 06:58:28.279687 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qhbbg"] Nov 28 06:58:28 crc kubenswrapper[4946]: I1128 06:58:28.284212 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qhbbg"] Nov 28 06:58:28 crc kubenswrapper[4946]: I1128 06:58:28.285674 4946 scope.go:117] "RemoveContainer" containerID="2c59ef8dbbcf9ff500b8a57b320169e8616e4599b08b1ce2ea3e3359cf005d93" Nov 28 06:58:28 crc kubenswrapper[4946]: E1128 06:58:28.286320 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c59ef8dbbcf9ff500b8a57b320169e8616e4599b08b1ce2ea3e3359cf005d93\": container with ID starting with 2c59ef8dbbcf9ff500b8a57b320169e8616e4599b08b1ce2ea3e3359cf005d93 not found: ID does not exist" containerID="2c59ef8dbbcf9ff500b8a57b320169e8616e4599b08b1ce2ea3e3359cf005d93" Nov 28 06:58:28 crc kubenswrapper[4946]: I1128 06:58:28.286372 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c59ef8dbbcf9ff500b8a57b320169e8616e4599b08b1ce2ea3e3359cf005d93"} err="failed to get container status \"2c59ef8dbbcf9ff500b8a57b320169e8616e4599b08b1ce2ea3e3359cf005d93\": rpc error: code = NotFound desc = could not find container \"2c59ef8dbbcf9ff500b8a57b320169e8616e4599b08b1ce2ea3e3359cf005d93\": container with ID starting with 2c59ef8dbbcf9ff500b8a57b320169e8616e4599b08b1ce2ea3e3359cf005d93 not found: ID does not exist" Nov 28 06:58:30 crc kubenswrapper[4946]: I1128 06:58:30.004977 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ba6db31-f148-40c3-b5cc-60b2a8e063e4" path="/var/lib/kubelet/pods/2ba6db31-f148-40c3-b5cc-60b2a8e063e4/volumes" Nov 28 06:58:43 crc kubenswrapper[4946]: I1128 06:58:43.627313 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7xrv4"] Nov 28 06:58:43 crc kubenswrapper[4946]: I1128 06:58:43.630342 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7xrv4" podUID="1faa4eda-7193-4eec-b3f6-391ddc88b498" containerName="registry-server" containerID="cri-o://48410f674d59fe5f38032625b8645643e9161c2bf089c7480171bb7ecb835e92" gracePeriod=30 Nov 28 06:58:43 crc kubenswrapper[4946]: I1128 06:58:43.638085 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f7mgl"] Nov 28 06:58:43 crc kubenswrapper[4946]: I1128 
06:58:43.638447 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-f7mgl" podUID="b169ce71-a297-432f-a041-f7b544659574" containerName="registry-server" containerID="cri-o://aaa71615617e77d25b1b5f41fc53dd4e6020d672abe4da55e9b796aed37cd626" gracePeriod=30 Nov 28 06:58:43 crc kubenswrapper[4946]: I1128 06:58:43.663051 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-js7vs"] Nov 28 06:58:43 crc kubenswrapper[4946]: I1128 06:58:43.663805 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-js7vs" podUID="d8c9e4f9-d51d-47d9-8228-c9c58873dbe3" containerName="marketplace-operator" containerID="cri-o://e4131662dd2679ac360e03f650a5e9de467bfac888509c2f77279606e96b51e9" gracePeriod=30 Nov 28 06:58:43 crc kubenswrapper[4946]: I1128 06:58:43.680953 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n9lpt"] Nov 28 06:58:43 crc kubenswrapper[4946]: I1128 06:58:43.682004 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-n9lpt" podUID="1c124c67-1e65-4489-a08a-8b580fee23cc" containerName="registry-server" containerID="cri-o://0a77650cb871753d826fa0eb5020e7d92f0dfa3f63dedf3cda0ccf863d9e2ef1" gracePeriod=30 Nov 28 06:58:43 crc kubenswrapper[4946]: I1128 06:58:43.692542 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-txxk5"] Nov 28 06:58:43 crc kubenswrapper[4946]: I1128 06:58:43.694005 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-txxk5" podUID="d4cb05fa-0cdb-499e-9d83-e6a5e87bf144" containerName="registry-server" containerID="cri-o://59e0ebde0c1aba3eb7c6c208646384865b52fcc104200a084717110505444149" gracePeriod=30 Nov 28 06:58:43 crc kubenswrapper[4946]: I1128 06:58:43.706161 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lr6sl"] Nov 28 06:58:43 crc kubenswrapper[4946]: E1128 06:58:43.706656 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ba6db31-f148-40c3-b5cc-60b2a8e063e4" containerName="registry" Nov 28 06:58:43 crc kubenswrapper[4946]: I1128 06:58:43.706679 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ba6db31-f148-40c3-b5cc-60b2a8e063e4" containerName="registry" Nov 28 06:58:43 crc kubenswrapper[4946]: I1128 06:58:43.706933 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ba6db31-f148-40c3-b5cc-60b2a8e063e4" containerName="registry" Nov 28 06:58:43 crc kubenswrapper[4946]: I1128 06:58:43.707787 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-lr6sl" Nov 28 06:58:43 crc kubenswrapper[4946]: I1128 06:58:43.710351 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lr6sl"] Nov 28 06:58:43 crc kubenswrapper[4946]: I1128 06:58:43.854732 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x5ct\" (UniqueName: \"kubernetes.io/projected/29fb1915-7644-4b45-87f3-ad6ffc6b289e-kube-api-access-8x5ct\") pod \"marketplace-operator-79b997595-lr6sl\" (UID: \"29fb1915-7644-4b45-87f3-ad6ffc6b289e\") " pod="openshift-marketplace/marketplace-operator-79b997595-lr6sl" Nov 28 06:58:43 crc kubenswrapper[4946]: I1128 06:58:43.855275 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/29fb1915-7644-4b45-87f3-ad6ffc6b289e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lr6sl\" (UID: \"29fb1915-7644-4b45-87f3-ad6ffc6b289e\") " pod="openshift-marketplace/marketplace-operator-79b997595-lr6sl" Nov 28 06:58:43 crc kubenswrapper[4946]: I1128 06:58:43.855309 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/29fb1915-7644-4b45-87f3-ad6ffc6b289e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lr6sl\" (UID: \"29fb1915-7644-4b45-87f3-ad6ffc6b289e\") " pod="openshift-marketplace/marketplace-operator-79b997595-lr6sl" Nov 28 06:58:43 crc kubenswrapper[4946]: I1128 06:58:43.959436 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8x5ct\" (UniqueName: \"kubernetes.io/projected/29fb1915-7644-4b45-87f3-ad6ffc6b289e-kube-api-access-8x5ct\") pod \"marketplace-operator-79b997595-lr6sl\" (UID: \"29fb1915-7644-4b45-87f3-ad6ffc6b289e\") " pod="openshift-marketplace/marketplace-operator-79b997595-lr6sl" Nov 28 06:58:43 crc kubenswrapper[4946]: I1128 06:58:43.959527 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/29fb1915-7644-4b45-87f3-ad6ffc6b289e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lr6sl\" (UID: \"29fb1915-7644-4b45-87f3-ad6ffc6b289e\") " pod="openshift-marketplace/marketplace-operator-79b997595-lr6sl" Nov 28 06:58:43 crc kubenswrapper[4946]: I1128 06:58:43.959553 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/29fb1915-7644-4b45-87f3-ad6ffc6b289e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lr6sl\" (UID: \"29fb1915-7644-4b45-87f3-ad6ffc6b289e\") " pod="openshift-marketplace/marketplace-operator-79b997595-lr6sl" Nov 28 06:58:43 crc kubenswrapper[4946]: I1128 06:58:43.961746 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/29fb1915-7644-4b45-87f3-ad6ffc6b289e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lr6sl\" (UID: \"29fb1915-7644-4b45-87f3-ad6ffc6b289e\") " pod="openshift-marketplace/marketplace-operator-79b997595-lr6sl" Nov 28 06:58:43 crc kubenswrapper[4946]: I1128 06:58:43.969051 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/29fb1915-7644-4b45-87f3-ad6ffc6b289e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lr6sl\" (UID: \"29fb1915-7644-4b45-87f3-ad6ffc6b289e\") " pod="openshift-marketplace/marketplace-operator-79b997595-lr6sl" Nov 28 06:58:43 crc kubenswrapper[4946]: I1128 06:58:43.976931 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x5ct\" (UniqueName: \"kubernetes.io/projected/29fb1915-7644-4b45-87f3-ad6ffc6b289e-kube-api-access-8x5ct\") pod \"marketplace-operator-79b997595-lr6sl\" (UID: \"29fb1915-7644-4b45-87f3-ad6ffc6b289e\") " pod="openshift-marketplace/marketplace-operator-79b997595-lr6sl" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.126811 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-lr6sl" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.131965 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-txxk5" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.143805 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n9lpt" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.154312 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7xrv4" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.158366 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-js7vs" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.262171 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1faa4eda-7193-4eec-b3f6-391ddc88b498-catalog-content\") pod \"1faa4eda-7193-4eec-b3f6-391ddc88b498\" (UID: \"1faa4eda-7193-4eec-b3f6-391ddc88b498\") " Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.262670 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj864\" (UniqueName: \"kubernetes.io/projected/1faa4eda-7193-4eec-b3f6-391ddc88b498-kube-api-access-pj864\") pod \"1faa4eda-7193-4eec-b3f6-391ddc88b498\" (UID: \"1faa4eda-7193-4eec-b3f6-391ddc88b498\") " Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.262692 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4cb05fa-0cdb-499e-9d83-e6a5e87bf144-catalog-content\") pod \"d4cb05fa-0cdb-499e-9d83-e6a5e87bf144\" (UID: \"d4cb05fa-0cdb-499e-9d83-e6a5e87bf144\") " Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.262710 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4cb05fa-0cdb-499e-9d83-e6a5e87bf144-utilities\") pod \"d4cb05fa-0cdb-499e-9d83-e6a5e87bf144\" (UID: \"d4cb05fa-0cdb-499e-9d83-e6a5e87bf144\") " Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.262729 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c124c67-1e65-4489-a08a-8b580fee23cc-catalog-content\") pod \"1c124c67-1e65-4489-a08a-8b580fee23cc\" (UID: \"1c124c67-1e65-4489-a08a-8b580fee23cc\") " Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.262747 4946 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwngf\" (UniqueName: \"kubernetes.io/projected/1c124c67-1e65-4489-a08a-8b580fee23cc-kube-api-access-lwngf\") pod \"1c124c67-1e65-4489-a08a-8b580fee23cc\" (UID: \"1c124c67-1e65-4489-a08a-8b580fee23cc\") " Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.262779 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d8c9e4f9-d51d-47d9-8228-c9c58873dbe3-marketplace-operator-metrics\") pod \"d8c9e4f9-d51d-47d9-8228-c9c58873dbe3\" (UID: \"d8c9e4f9-d51d-47d9-8228-c9c58873dbe3\") " Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.262801 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c124c67-1e65-4489-a08a-8b580fee23cc-utilities\") pod \"1c124c67-1e65-4489-a08a-8b580fee23cc\" (UID: \"1c124c67-1e65-4489-a08a-8b580fee23cc\") " Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.262827 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdj66\" (UniqueName: \"kubernetes.io/projected/d8c9e4f9-d51d-47d9-8228-c9c58873dbe3-kube-api-access-jdj66\") pod \"d8c9e4f9-d51d-47d9-8228-c9c58873dbe3\" (UID: \"d8c9e4f9-d51d-47d9-8228-c9c58873dbe3\") " Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.262849 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lx69h\" (UniqueName: \"kubernetes.io/projected/d4cb05fa-0cdb-499e-9d83-e6a5e87bf144-kube-api-access-lx69h\") pod \"d4cb05fa-0cdb-499e-9d83-e6a5e87bf144\" (UID: \"d4cb05fa-0cdb-499e-9d83-e6a5e87bf144\") " Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.262875 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1faa4eda-7193-4eec-b3f6-391ddc88b498-utilities\") pod \"1faa4eda-7193-4eec-b3f6-391ddc88b498\" (UID: \"1faa4eda-7193-4eec-b3f6-391ddc88b498\") " Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.262907 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d8c9e4f9-d51d-47d9-8228-c9c58873dbe3-marketplace-trusted-ca\") pod \"d8c9e4f9-d51d-47d9-8228-c9c58873dbe3\" (UID: \"d8c9e4f9-d51d-47d9-8228-c9c58873dbe3\") " Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.263514 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4cb05fa-0cdb-499e-9d83-e6a5e87bf144-utilities" (OuterVolumeSpecName: "utilities") pod "d4cb05fa-0cdb-499e-9d83-e6a5e87bf144" (UID: "d4cb05fa-0cdb-499e-9d83-e6a5e87bf144"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.264355 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4cb05fa-0cdb-499e-9d83-e6a5e87bf144-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.267082 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8c9e4f9-d51d-47d9-8228-c9c58873dbe3-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "d8c9e4f9-d51d-47d9-8228-c9c58873dbe3" (UID: "d8c9e4f9-d51d-47d9-8228-c9c58873dbe3"). 
InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.268517 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1faa4eda-7193-4eec-b3f6-391ddc88b498-utilities" (OuterVolumeSpecName: "utilities") pod "1faa4eda-7193-4eec-b3f6-391ddc88b498" (UID: "1faa4eda-7193-4eec-b3f6-391ddc88b498"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.268526 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c124c67-1e65-4489-a08a-8b580fee23cc-utilities" (OuterVolumeSpecName: "utilities") pod "1c124c67-1e65-4489-a08a-8b580fee23cc" (UID: "1c124c67-1e65-4489-a08a-8b580fee23cc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.274745 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1faa4eda-7193-4eec-b3f6-391ddc88b498-kube-api-access-pj864" (OuterVolumeSpecName: "kube-api-access-pj864") pod "1faa4eda-7193-4eec-b3f6-391ddc88b498" (UID: "1faa4eda-7193-4eec-b3f6-391ddc88b498"). InnerVolumeSpecName "kube-api-access-pj864". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.282998 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4cb05fa-0cdb-499e-9d83-e6a5e87bf144-kube-api-access-lx69h" (OuterVolumeSpecName: "kube-api-access-lx69h") pod "d4cb05fa-0cdb-499e-9d83-e6a5e87bf144" (UID: "d4cb05fa-0cdb-499e-9d83-e6a5e87bf144"). InnerVolumeSpecName "kube-api-access-lx69h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.286271 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c124c67-1e65-4489-a08a-8b580fee23cc-kube-api-access-lwngf" (OuterVolumeSpecName: "kube-api-access-lwngf") pod "1c124c67-1e65-4489-a08a-8b580fee23cc" (UID: "1c124c67-1e65-4489-a08a-8b580fee23cc"). InnerVolumeSpecName "kube-api-access-lwngf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.288424 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8c9e4f9-d51d-47d9-8228-c9c58873dbe3-kube-api-access-jdj66" (OuterVolumeSpecName: "kube-api-access-jdj66") pod "d8c9e4f9-d51d-47d9-8228-c9c58873dbe3" (UID: "d8c9e4f9-d51d-47d9-8228-c9c58873dbe3"). InnerVolumeSpecName "kube-api-access-jdj66". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.292229 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8c9e4f9-d51d-47d9-8228-c9c58873dbe3-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "d8c9e4f9-d51d-47d9-8228-c9c58873dbe3" (UID: "d8c9e4f9-d51d-47d9-8228-c9c58873dbe3"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.294370 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c124c67-1e65-4489-a08a-8b580fee23cc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1c124c67-1e65-4489-a08a-8b580fee23cc" (UID: "1c124c67-1e65-4489-a08a-8b580fee23cc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.357805 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lr6sl"] Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.362817 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1faa4eda-7193-4eec-b3f6-391ddc88b498-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1faa4eda-7193-4eec-b3f6-391ddc88b498" (UID: "1faa4eda-7193-4eec-b3f6-391ddc88b498"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:58:44 crc kubenswrapper[4946]: W1128 06:58:44.366891 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29fb1915_7644_4b45_87f3_ad6ffc6b289e.slice/crio-792798b0e8511c6a85f986d5087ea65502ca6ff370a0142ab85393281dfedf5a WatchSource:0}: Error finding container 792798b0e8511c6a85f986d5087ea65502ca6ff370a0142ab85393281dfedf5a: Status 404 returned error can't find the container with id 792798b0e8511c6a85f986d5087ea65502ca6ff370a0142ab85393281dfedf5a Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.367487 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c124c67-1e65-4489-a08a-8b580fee23cc-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.367638 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdj66\" (UniqueName: \"kubernetes.io/projected/d8c9e4f9-d51d-47d9-8228-c9c58873dbe3-kube-api-access-jdj66\") on node \"crc\" DevicePath \"\"" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.367658 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lx69h\" (UniqueName: \"kubernetes.io/projected/d4cb05fa-0cdb-499e-9d83-e6a5e87bf144-kube-api-access-lx69h\") on node \"crc\" DevicePath \"\"" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.367666 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1faa4eda-7193-4eec-b3f6-391ddc88b498-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.367677 4946 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d8c9e4f9-d51d-47d9-8228-c9c58873dbe3-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.367685 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1faa4eda-7193-4eec-b3f6-391ddc88b498-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.367694 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj864\" (UniqueName: \"kubernetes.io/projected/1faa4eda-7193-4eec-b3f6-391ddc88b498-kube-api-access-pj864\") on 
node \"crc\" DevicePath \"\"" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.367703 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c124c67-1e65-4489-a08a-8b580fee23cc-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.367711 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwngf\" (UniqueName: \"kubernetes.io/projected/1c124c67-1e65-4489-a08a-8b580fee23cc-kube-api-access-lwngf\") on node \"crc\" DevicePath \"\"" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.367720 4946 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d8c9e4f9-d51d-47d9-8228-c9c58873dbe3-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.370936 4946 generic.go:334] "Generic (PLEG): container finished" podID="b169ce71-a297-432f-a041-f7b544659574" containerID="aaa71615617e77d25b1b5f41fc53dd4e6020d672abe4da55e9b796aed37cd626" exitCode=0 Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.371002 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f7mgl" event={"ID":"b169ce71-a297-432f-a041-f7b544659574","Type":"ContainerDied","Data":"aaa71615617e77d25b1b5f41fc53dd4e6020d672abe4da55e9b796aed37cd626"} Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.381380 4946 generic.go:334] "Generic (PLEG): container finished" podID="1faa4eda-7193-4eec-b3f6-391ddc88b498" containerID="48410f674d59fe5f38032625b8645643e9161c2bf089c7480171bb7ecb835e92" exitCode=0 Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.381439 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7xrv4" event={"ID":"1faa4eda-7193-4eec-b3f6-391ddc88b498","Type":"ContainerDied","Data":"48410f674d59fe5f38032625b8645643e9161c2bf089c7480171bb7ecb835e92"} Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.381497 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7xrv4" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.381556 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7xrv4" event={"ID":"1faa4eda-7193-4eec-b3f6-391ddc88b498","Type":"ContainerDied","Data":"d163a8fc352ec8b672948a0f9fe48681f4704c738d8f5cb1802fbbedd84c02d8"} Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.381590 4946 scope.go:117] "RemoveContainer" containerID="48410f674d59fe5f38032625b8645643e9161c2bf089c7480171bb7ecb835e92" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.386926 4946 generic.go:334] "Generic (PLEG): container finished" podID="1c124c67-1e65-4489-a08a-8b580fee23cc" containerID="0a77650cb871753d826fa0eb5020e7d92f0dfa3f63dedf3cda0ccf863d9e2ef1" exitCode=0 Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.387028 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n9lpt" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.387055 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n9lpt" event={"ID":"1c124c67-1e65-4489-a08a-8b580fee23cc","Type":"ContainerDied","Data":"0a77650cb871753d826fa0eb5020e7d92f0dfa3f63dedf3cda0ccf863d9e2ef1"} Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.387094 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n9lpt" event={"ID":"1c124c67-1e65-4489-a08a-8b580fee23cc","Type":"ContainerDied","Data":"2395dccab10e9f4e47c6c5e55380ebe56f116e268ec0e3bcb4c75389d44ec0b5"} Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.395271 4946 generic.go:334] "Generic (PLEG): container finished" podID="d8c9e4f9-d51d-47d9-8228-c9c58873dbe3" containerID="e4131662dd2679ac360e03f650a5e9de467bfac888509c2f77279606e96b51e9" exitCode=0 Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.395361 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-js7vs" event={"ID":"d8c9e4f9-d51d-47d9-8228-c9c58873dbe3","Type":"ContainerDied","Data":"e4131662dd2679ac360e03f650a5e9de467bfac888509c2f77279606e96b51e9"} Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.395398 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-js7vs" event={"ID":"d8c9e4f9-d51d-47d9-8228-c9c58873dbe3","Type":"ContainerDied","Data":"951d846c7df7619c45cdbac5847eb55f9af534ebffc9b05aa2dda2aaabd8c9b2"} Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.395439 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-js7vs" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.401309 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4cb05fa-0cdb-499e-9d83-e6a5e87bf144-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d4cb05fa-0cdb-499e-9d83-e6a5e87bf144" (UID: "d4cb05fa-0cdb-499e-9d83-e6a5e87bf144"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.407626 4946 scope.go:117] "RemoveContainer" containerID="6baf56d2b654f021d64a764ade285191cee715c50dd7f0d2f559291423800432" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.415644 4946 generic.go:334] "Generic (PLEG): container finished" podID="d4cb05fa-0cdb-499e-9d83-e6a5e87bf144" containerID="59e0ebde0c1aba3eb7c6c208646384865b52fcc104200a084717110505444149" exitCode=0 Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.415750 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-txxk5" event={"ID":"d4cb05fa-0cdb-499e-9d83-e6a5e87bf144","Type":"ContainerDied","Data":"59e0ebde0c1aba3eb7c6c208646384865b52fcc104200a084717110505444149"} Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.416284 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-txxk5" event={"ID":"d4cb05fa-0cdb-499e-9d83-e6a5e87bf144","Type":"ContainerDied","Data":"259265bdd043bf7fb2fa4607d9b1475c934e01430a4e58c3e2dd7dd2612874d0"} Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.416600 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-txxk5" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.425417 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7xrv4"] Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.430395 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7xrv4"] Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.460674 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-js7vs"] Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.471760 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-js7vs"] Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.472508 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4cb05fa-0cdb-499e-9d83-e6a5e87bf144-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.475878 4946 scope.go:117] "RemoveContainer" containerID="f94f8c91c770e2dcd038f366f35186eff4c68021402f800455db2da0ff8ddd5b" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.480547 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-txxk5"] Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.486380 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-txxk5"] Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.490961 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n9lpt"] Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.494705 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-n9lpt"] Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.504658 4946 scope.go:117] "RemoveContainer" containerID="48410f674d59fe5f38032625b8645643e9161c2bf089c7480171bb7ecb835e92" Nov 28 06:58:44 crc kubenswrapper[4946]: E1128 06:58:44.505344 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48410f674d59fe5f38032625b8645643e9161c2bf089c7480171bb7ecb835e92\": container with ID starting with 48410f674d59fe5f38032625b8645643e9161c2bf089c7480171bb7ecb835e92 not found: ID does not exist" containerID="48410f674d59fe5f38032625b8645643e9161c2bf089c7480171bb7ecb835e92" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.505405 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48410f674d59fe5f38032625b8645643e9161c2bf089c7480171bb7ecb835e92"} err="failed to get container status \"48410f674d59fe5f38032625b8645643e9161c2bf089c7480171bb7ecb835e92\": rpc error: code = NotFound desc = could not find container \"48410f674d59fe5f38032625b8645643e9161c2bf089c7480171bb7ecb835e92\": container with ID starting with 48410f674d59fe5f38032625b8645643e9161c2bf089c7480171bb7ecb835e92 not found: ID does not exist" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.505437 4946 scope.go:117] "RemoveContainer" containerID="6baf56d2b654f021d64a764ade285191cee715c50dd7f0d2f559291423800432" Nov 28 06:58:44 crc kubenswrapper[4946]: E1128 06:58:44.505911 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"6baf56d2b654f021d64a764ade285191cee715c50dd7f0d2f559291423800432\": container with ID starting with 6baf56d2b654f021d64a764ade285191cee715c50dd7f0d2f559291423800432 not found: ID does not exist" containerID="6baf56d2b654f021d64a764ade285191cee715c50dd7f0d2f559291423800432" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.505950 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6baf56d2b654f021d64a764ade285191cee715c50dd7f0d2f559291423800432"} err="failed to get container status \"6baf56d2b654f021d64a764ade285191cee715c50dd7f0d2f559291423800432\": rpc error: code = NotFound desc = could not find container \"6baf56d2b654f021d64a764ade285191cee715c50dd7f0d2f559291423800432\": container with ID starting with 6baf56d2b654f021d64a764ade285191cee715c50dd7f0d2f559291423800432 not found: ID does not exist" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.505981 4946 scope.go:117] "RemoveContainer" containerID="f94f8c91c770e2dcd038f366f35186eff4c68021402f800455db2da0ff8ddd5b" Nov 28 06:58:44 crc kubenswrapper[4946]: E1128 06:58:44.506387 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f94f8c91c770e2dcd038f366f35186eff4c68021402f800455db2da0ff8ddd5b\": container with ID starting with f94f8c91c770e2dcd038f366f35186eff4c68021402f800455db2da0ff8ddd5b not found: ID does not exist" containerID="f94f8c91c770e2dcd038f366f35186eff4c68021402f800455db2da0ff8ddd5b" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.506436 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f94f8c91c770e2dcd038f366f35186eff4c68021402f800455db2da0ff8ddd5b"} err="failed to get container status \"f94f8c91c770e2dcd038f366f35186eff4c68021402f800455db2da0ff8ddd5b\": rpc error: code = NotFound desc = could not find container \"f94f8c91c770e2dcd038f366f35186eff4c68021402f800455db2da0ff8ddd5b\": container with ID starting with f94f8c91c770e2dcd038f366f35186eff4c68021402f800455db2da0ff8ddd5b not found: ID does not exist" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.506479 4946 scope.go:117] "RemoveContainer" containerID="0a77650cb871753d826fa0eb5020e7d92f0dfa3f63dedf3cda0ccf863d9e2ef1" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.523525 4946 scope.go:117] "RemoveContainer" containerID="ae9792e5ba2567cd20535718e3083200e1f3056e954ef59ed27763533d3a69fd" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.543250 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f7mgl" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.544497 4946 scope.go:117] "RemoveContainer" containerID="3db9c17b7e2eb91543d8f2e6337cec61e832c6e03f99d7c7c25301d82ac922ea" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.572262 4946 scope.go:117] "RemoveContainer" containerID="0a77650cb871753d826fa0eb5020e7d92f0dfa3f63dedf3cda0ccf863d9e2ef1" Nov 28 06:58:44 crc kubenswrapper[4946]: E1128 06:58:44.574499 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a77650cb871753d826fa0eb5020e7d92f0dfa3f63dedf3cda0ccf863d9e2ef1\": container with ID starting with 0a77650cb871753d826fa0eb5020e7d92f0dfa3f63dedf3cda0ccf863d9e2ef1 not found: ID does not exist" containerID="0a77650cb871753d826fa0eb5020e7d92f0dfa3f63dedf3cda0ccf863d9e2ef1" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.574555 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a77650cb871753d826fa0eb5020e7d92f0dfa3f63dedf3cda0ccf863d9e2ef1"} err="failed to get container status \"0a77650cb871753d826fa0eb5020e7d92f0dfa3f63dedf3cda0ccf863d9e2ef1\": rpc error: code = NotFound desc = could not find container \"0a77650cb871753d826fa0eb5020e7d92f0dfa3f63dedf3cda0ccf863d9e2ef1\": container with ID starting with 0a77650cb871753d826fa0eb5020e7d92f0dfa3f63dedf3cda0ccf863d9e2ef1 not found: ID does not exist" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.574586 4946 scope.go:117] "RemoveContainer" containerID="ae9792e5ba2567cd20535718e3083200e1f3056e954ef59ed27763533d3a69fd" Nov 28 06:58:44 crc kubenswrapper[4946]: E1128 06:58:44.576731 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae9792e5ba2567cd20535718e3083200e1f3056e954ef59ed27763533d3a69fd\": container with ID starting with ae9792e5ba2567cd20535718e3083200e1f3056e954ef59ed27763533d3a69fd not found: ID does not exist" containerID="ae9792e5ba2567cd20535718e3083200e1f3056e954ef59ed27763533d3a69fd" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.576759 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae9792e5ba2567cd20535718e3083200e1f3056e954ef59ed27763533d3a69fd"} err="failed to get container status \"ae9792e5ba2567cd20535718e3083200e1f3056e954ef59ed27763533d3a69fd\": rpc error: code = NotFound desc = could not find container \"ae9792e5ba2567cd20535718e3083200e1f3056e954ef59ed27763533d3a69fd\": container with ID starting with ae9792e5ba2567cd20535718e3083200e1f3056e954ef59ed27763533d3a69fd not found: ID does not exist" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.576777 4946 scope.go:117] "RemoveContainer" containerID="3db9c17b7e2eb91543d8f2e6337cec61e832c6e03f99d7c7c25301d82ac922ea" Nov 28 06:58:44 crc kubenswrapper[4946]: E1128 06:58:44.577265 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3db9c17b7e2eb91543d8f2e6337cec61e832c6e03f99d7c7c25301d82ac922ea\": container with ID starting with 3db9c17b7e2eb91543d8f2e6337cec61e832c6e03f99d7c7c25301d82ac922ea not found: ID does not exist" containerID="3db9c17b7e2eb91543d8f2e6337cec61e832c6e03f99d7c7c25301d82ac922ea" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.577319 4946 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3db9c17b7e2eb91543d8f2e6337cec61e832c6e03f99d7c7c25301d82ac922ea"} err="failed to get container status \"3db9c17b7e2eb91543d8f2e6337cec61e832c6e03f99d7c7c25301d82ac922ea\": rpc error: code = NotFound desc = could not find container \"3db9c17b7e2eb91543d8f2e6337cec61e832c6e03f99d7c7c25301d82ac922ea\": container with ID starting with 3db9c17b7e2eb91543d8f2e6337cec61e832c6e03f99d7c7c25301d82ac922ea not found: ID does not exist" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.577353 4946 scope.go:117] "RemoveContainer" containerID="e4131662dd2679ac360e03f650a5e9de467bfac888509c2f77279606e96b51e9" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.592530 4946 scope.go:117] "RemoveContainer" containerID="e4131662dd2679ac360e03f650a5e9de467bfac888509c2f77279606e96b51e9" Nov 28 06:58:44 crc kubenswrapper[4946]: E1128 06:58:44.593086 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4131662dd2679ac360e03f650a5e9de467bfac888509c2f77279606e96b51e9\": container with ID starting with e4131662dd2679ac360e03f650a5e9de467bfac888509c2f77279606e96b51e9 not found: ID does not exist" containerID="e4131662dd2679ac360e03f650a5e9de467bfac888509c2f77279606e96b51e9" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.593110 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4131662dd2679ac360e03f650a5e9de467bfac888509c2f77279606e96b51e9"} err="failed to get container status \"e4131662dd2679ac360e03f650a5e9de467bfac888509c2f77279606e96b51e9\": rpc error: code = NotFound desc = could not find container \"e4131662dd2679ac360e03f650a5e9de467bfac888509c2f77279606e96b51e9\": container with ID starting with e4131662dd2679ac360e03f650a5e9de467bfac888509c2f77279606e96b51e9 not found: ID does not exist" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.593130 4946 scope.go:117] "RemoveContainer" containerID="59e0ebde0c1aba3eb7c6c208646384865b52fcc104200a084717110505444149" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.604222 4946 scope.go:117] "RemoveContainer" containerID="bdb8c7be7ae7c270d6770efc7c429c096a63a95f9cbbfc62f3472212668bae38" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.623435 4946 scope.go:117] "RemoveContainer" containerID="c491b418a616aed07ffb6a301897c4e49f739e28430ba6aa7675b39ae2af5a70" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.637753 4946 scope.go:117] "RemoveContainer" containerID="59e0ebde0c1aba3eb7c6c208646384865b52fcc104200a084717110505444149" Nov 28 06:58:44 crc kubenswrapper[4946]: E1128 06:58:44.639760 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59e0ebde0c1aba3eb7c6c208646384865b52fcc104200a084717110505444149\": container with ID starting with 59e0ebde0c1aba3eb7c6c208646384865b52fcc104200a084717110505444149 not found: ID does not exist" containerID="59e0ebde0c1aba3eb7c6c208646384865b52fcc104200a084717110505444149" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.639812 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59e0ebde0c1aba3eb7c6c208646384865b52fcc104200a084717110505444149"} err="failed to get container status \"59e0ebde0c1aba3eb7c6c208646384865b52fcc104200a084717110505444149\": rpc error: code = NotFound desc = could not find container \"59e0ebde0c1aba3eb7c6c208646384865b52fcc104200a084717110505444149\": container with ID starting 
with 59e0ebde0c1aba3eb7c6c208646384865b52fcc104200a084717110505444149 not found: ID does not exist" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.639838 4946 scope.go:117] "RemoveContainer" containerID="bdb8c7be7ae7c270d6770efc7c429c096a63a95f9cbbfc62f3472212668bae38" Nov 28 06:58:44 crc kubenswrapper[4946]: E1128 06:58:44.640228 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdb8c7be7ae7c270d6770efc7c429c096a63a95f9cbbfc62f3472212668bae38\": container with ID starting with bdb8c7be7ae7c270d6770efc7c429c096a63a95f9cbbfc62f3472212668bae38 not found: ID does not exist" containerID="bdb8c7be7ae7c270d6770efc7c429c096a63a95f9cbbfc62f3472212668bae38" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.640303 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdb8c7be7ae7c270d6770efc7c429c096a63a95f9cbbfc62f3472212668bae38"} err="failed to get container status \"bdb8c7be7ae7c270d6770efc7c429c096a63a95f9cbbfc62f3472212668bae38\": rpc error: code = NotFound desc = could not find container \"bdb8c7be7ae7c270d6770efc7c429c096a63a95f9cbbfc62f3472212668bae38\": container with ID starting with bdb8c7be7ae7c270d6770efc7c429c096a63a95f9cbbfc62f3472212668bae38 not found: ID does not exist" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.640348 4946 scope.go:117] "RemoveContainer" containerID="c491b418a616aed07ffb6a301897c4e49f739e28430ba6aa7675b39ae2af5a70" Nov 28 06:58:44 crc kubenswrapper[4946]: E1128 06:58:44.640642 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c491b418a616aed07ffb6a301897c4e49f739e28430ba6aa7675b39ae2af5a70\": container with ID starting with c491b418a616aed07ffb6a301897c4e49f739e28430ba6aa7675b39ae2af5a70 not found: ID does not exist" containerID="c491b418a616aed07ffb6a301897c4e49f739e28430ba6aa7675b39ae2af5a70" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.640662 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c491b418a616aed07ffb6a301897c4e49f739e28430ba6aa7675b39ae2af5a70"} err="failed to get container status \"c491b418a616aed07ffb6a301897c4e49f739e28430ba6aa7675b39ae2af5a70\": rpc error: code = NotFound desc = could not find container \"c491b418a616aed07ffb6a301897c4e49f739e28430ba6aa7675b39ae2af5a70\": container with ID starting with c491b418a616aed07ffb6a301897c4e49f739e28430ba6aa7675b39ae2af5a70 not found: ID does not exist" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.674903 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b169ce71-a297-432f-a041-f7b544659574-utilities\") pod \"b169ce71-a297-432f-a041-f7b544659574\" (UID: \"b169ce71-a297-432f-a041-f7b544659574\") " Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.675577 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b169ce71-a297-432f-a041-f7b544659574-utilities" (OuterVolumeSpecName: "utilities") pod "b169ce71-a297-432f-a041-f7b544659574" (UID: "b169ce71-a297-432f-a041-f7b544659574"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.675733 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b169ce71-a297-432f-a041-f7b544659574-catalog-content\") pod \"b169ce71-a297-432f-a041-f7b544659574\" (UID: \"b169ce71-a297-432f-a041-f7b544659574\") " Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.675810 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w54pj\" (UniqueName: \"kubernetes.io/projected/b169ce71-a297-432f-a041-f7b544659574-kube-api-access-w54pj\") pod \"b169ce71-a297-432f-a041-f7b544659574\" (UID: \"b169ce71-a297-432f-a041-f7b544659574\") " Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.676479 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b169ce71-a297-432f-a041-f7b544659574-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.682931 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b169ce71-a297-432f-a041-f7b544659574-kube-api-access-w54pj" (OuterVolumeSpecName: "kube-api-access-w54pj") pod "b169ce71-a297-432f-a041-f7b544659574" (UID: "b169ce71-a297-432f-a041-f7b544659574"). InnerVolumeSpecName "kube-api-access-w54pj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.724249 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b169ce71-a297-432f-a041-f7b544659574-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b169ce71-a297-432f-a041-f7b544659574" (UID: "b169ce71-a297-432f-a041-f7b544659574"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.778150 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b169ce71-a297-432f-a041-f7b544659574-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 06:58:44 crc kubenswrapper[4946]: I1128 06:58:44.778809 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w54pj\" (UniqueName: \"kubernetes.io/projected/b169ce71-a297-432f-a041-f7b544659574-kube-api-access-w54pj\") on node \"crc\" DevicePath \"\"" Nov 28 06:58:45 crc kubenswrapper[4946]: I1128 06:58:45.430404 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lr6sl" event={"ID":"29fb1915-7644-4b45-87f3-ad6ffc6b289e","Type":"ContainerStarted","Data":"ae1c2d273d1ca4c29086de984a18f68a49bddc747759b71c6950e07c4e3f21f9"} Nov 28 06:58:45 crc kubenswrapper[4946]: I1128 06:58:45.430485 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lr6sl" event={"ID":"29fb1915-7644-4b45-87f3-ad6ffc6b289e","Type":"ContainerStarted","Data":"792798b0e8511c6a85f986d5087ea65502ca6ff370a0142ab85393281dfedf5a"} Nov 28 06:58:45 crc kubenswrapper[4946]: I1128 06:58:45.430970 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-lr6sl" Nov 28 06:58:45 crc kubenswrapper[4946]: I1128 06:58:45.434115 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f7mgl" Nov 28 06:58:45 crc kubenswrapper[4946]: I1128 06:58:45.434130 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f7mgl" event={"ID":"b169ce71-a297-432f-a041-f7b544659574","Type":"ContainerDied","Data":"c67f9eda94d2fe964888320bbe415e36c803976065f6155d868c081579ee4736"} Nov 28 06:58:45 crc kubenswrapper[4946]: I1128 06:58:45.434240 4946 scope.go:117] "RemoveContainer" containerID="aaa71615617e77d25b1b5f41fc53dd4e6020d672abe4da55e9b796aed37cd626" Nov 28 06:58:45 crc kubenswrapper[4946]: I1128 06:58:45.436228 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-lr6sl" Nov 28 06:58:45 crc kubenswrapper[4946]: I1128 06:58:45.455199 4946 scope.go:117] "RemoveContainer" containerID="67f4bfc66b91f17d4b366b4b306c847009d632417e1602fcdcd2ebee1e4e22fc" Nov 28 06:58:45 crc kubenswrapper[4946]: I1128 06:58:45.499538 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-lr6sl" podStartSLOduration=2.499508439 podStartE2EDuration="2.499508439s" podCreationTimestamp="2025-11-28 06:58:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:58:45.466522115 +0000 UTC m=+379.844587256" watchObservedRunningTime="2025-11-28 06:58:45.499508439 +0000 UTC m=+379.877573560" Nov 28 06:58:45 crc kubenswrapper[4946]: I1128 06:58:45.504653 4946 scope.go:117] "RemoveContainer" containerID="8b24854b3b1d473d5e5d5ff7a34a2d4ad401b987285a987e8f0e02bf6e2101be" Nov 28 06:58:45 crc kubenswrapper[4946]: I1128 06:58:45.523870 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f7mgl"] Nov 28 06:58:45 crc kubenswrapper[4946]: I1128 06:58:45.533250 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-f7mgl"] Nov 28 06:58:45 crc kubenswrapper[4946]: I1128 06:58:45.855537 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-h2hpw"] Nov 28 06:58:45 crc kubenswrapper[4946]: E1128 06:58:45.855796 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4cb05fa-0cdb-499e-9d83-e6a5e87bf144" containerName="registry-server" Nov 28 06:58:45 crc kubenswrapper[4946]: I1128 06:58:45.855810 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4cb05fa-0cdb-499e-9d83-e6a5e87bf144" containerName="registry-server" Nov 28 06:58:45 crc kubenswrapper[4946]: E1128 06:58:45.855823 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b169ce71-a297-432f-a041-f7b544659574" containerName="extract-content" Nov 28 06:58:45 crc kubenswrapper[4946]: I1128 06:58:45.855829 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="b169ce71-a297-432f-a041-f7b544659574" containerName="extract-content" Nov 28 06:58:45 crc kubenswrapper[4946]: E1128 06:58:45.855837 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b169ce71-a297-432f-a041-f7b544659574" containerName="extract-utilities" Nov 28 06:58:45 crc kubenswrapper[4946]: I1128 06:58:45.855844 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="b169ce71-a297-432f-a041-f7b544659574" containerName="extract-utilities" Nov 28 06:58:45 crc kubenswrapper[4946]: E1128 06:58:45.855853 4946 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="1faa4eda-7193-4eec-b3f6-391ddc88b498" containerName="extract-utilities" Nov 28 06:58:45 crc kubenswrapper[4946]: I1128 06:58:45.855859 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="1faa4eda-7193-4eec-b3f6-391ddc88b498" containerName="extract-utilities" Nov 28 06:58:45 crc kubenswrapper[4946]: E1128 06:58:45.855870 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1faa4eda-7193-4eec-b3f6-391ddc88b498" containerName="registry-server" Nov 28 06:58:45 crc kubenswrapper[4946]: I1128 06:58:45.855876 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="1faa4eda-7193-4eec-b3f6-391ddc88b498" containerName="registry-server" Nov 28 06:58:45 crc kubenswrapper[4946]: E1128 06:58:45.855886 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c124c67-1e65-4489-a08a-8b580fee23cc" containerName="extract-content" Nov 28 06:58:45 crc kubenswrapper[4946]: I1128 06:58:45.855893 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c124c67-1e65-4489-a08a-8b580fee23cc" containerName="extract-content" Nov 28 06:58:45 crc kubenswrapper[4946]: E1128 06:58:45.855902 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4cb05fa-0cdb-499e-9d83-e6a5e87bf144" containerName="extract-utilities" Nov 28 06:58:45 crc kubenswrapper[4946]: I1128 06:58:45.855908 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4cb05fa-0cdb-499e-9d83-e6a5e87bf144" containerName="extract-utilities" Nov 28 06:58:45 crc kubenswrapper[4946]: E1128 06:58:45.855932 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1faa4eda-7193-4eec-b3f6-391ddc88b498" containerName="extract-content" Nov 28 06:58:45 crc kubenswrapper[4946]: I1128 06:58:45.855937 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="1faa4eda-7193-4eec-b3f6-391ddc88b498" containerName="extract-content" Nov 28 06:58:45 crc kubenswrapper[4946]: E1128 06:58:45.855946 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4cb05fa-0cdb-499e-9d83-e6a5e87bf144" containerName="extract-content" Nov 28 06:58:45 crc kubenswrapper[4946]: I1128 06:58:45.855952 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4cb05fa-0cdb-499e-9d83-e6a5e87bf144" containerName="extract-content" Nov 28 06:58:45 crc kubenswrapper[4946]: E1128 06:58:45.855958 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8c9e4f9-d51d-47d9-8228-c9c58873dbe3" containerName="marketplace-operator" Nov 28 06:58:45 crc kubenswrapper[4946]: I1128 06:58:45.855964 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8c9e4f9-d51d-47d9-8228-c9c58873dbe3" containerName="marketplace-operator" Nov 28 06:58:45 crc kubenswrapper[4946]: E1128 06:58:45.855975 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c124c67-1e65-4489-a08a-8b580fee23cc" containerName="extract-utilities" Nov 28 06:58:45 crc kubenswrapper[4946]: I1128 06:58:45.855981 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c124c67-1e65-4489-a08a-8b580fee23cc" containerName="extract-utilities" Nov 28 06:58:45 crc kubenswrapper[4946]: E1128 06:58:45.855990 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c124c67-1e65-4489-a08a-8b580fee23cc" containerName="registry-server" Nov 28 06:58:45 crc kubenswrapper[4946]: I1128 06:58:45.855998 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c124c67-1e65-4489-a08a-8b580fee23cc" containerName="registry-server" Nov 28 06:58:45 crc kubenswrapper[4946]: E1128 06:58:45.856006 4946 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b169ce71-a297-432f-a041-f7b544659574" containerName="registry-server" Nov 28 06:58:45 crc kubenswrapper[4946]: I1128 06:58:45.856011 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="b169ce71-a297-432f-a041-f7b544659574" containerName="registry-server" Nov 28 06:58:45 crc kubenswrapper[4946]: I1128 06:58:45.856157 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="b169ce71-a297-432f-a041-f7b544659574" containerName="registry-server" Nov 28 06:58:45 crc kubenswrapper[4946]: I1128 06:58:45.856171 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c124c67-1e65-4489-a08a-8b580fee23cc" containerName="registry-server" Nov 28 06:58:45 crc kubenswrapper[4946]: I1128 06:58:45.856180 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8c9e4f9-d51d-47d9-8228-c9c58873dbe3" containerName="marketplace-operator" Nov 28 06:58:45 crc kubenswrapper[4946]: I1128 06:58:45.856186 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4cb05fa-0cdb-499e-9d83-e6a5e87bf144" containerName="registry-server" Nov 28 06:58:45 crc kubenswrapper[4946]: I1128 06:58:45.856195 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="1faa4eda-7193-4eec-b3f6-391ddc88b498" containerName="registry-server" Nov 28 06:58:45 crc kubenswrapper[4946]: I1128 06:58:45.857088 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h2hpw" Nov 28 06:58:45 crc kubenswrapper[4946]: I1128 06:58:45.860030 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 28 06:58:45 crc kubenswrapper[4946]: I1128 06:58:45.869925 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h2hpw"] Nov 28 06:58:45 crc kubenswrapper[4946]: I1128 06:58:45.998496 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c124c67-1e65-4489-a08a-8b580fee23cc" path="/var/lib/kubelet/pods/1c124c67-1e65-4489-a08a-8b580fee23cc/volumes" Nov 28 06:58:45 crc kubenswrapper[4946]: I1128 06:58:45.999356 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1faa4eda-7193-4eec-b3f6-391ddc88b498" path="/var/lib/kubelet/pods/1faa4eda-7193-4eec-b3f6-391ddc88b498/volumes" Nov 28 06:58:46 crc kubenswrapper[4946]: I1128 06:58:46.000140 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b169ce71-a297-432f-a041-f7b544659574" path="/var/lib/kubelet/pods/b169ce71-a297-432f-a041-f7b544659574/volumes" Nov 28 06:58:46 crc kubenswrapper[4946]: I1128 06:58:46.001319 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4cb05fa-0cdb-499e-9d83-e6a5e87bf144" path="/var/lib/kubelet/pods/d4cb05fa-0cdb-499e-9d83-e6a5e87bf144/volumes" Nov 28 06:58:46 crc kubenswrapper[4946]: I1128 06:58:46.001991 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8c9e4f9-d51d-47d9-8228-c9c58873dbe3" path="/var/lib/kubelet/pods/d8c9e4f9-d51d-47d9-8228-c9c58873dbe3/volumes" Nov 28 06:58:46 crc kubenswrapper[4946]: I1128 06:58:46.005802 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8d51c16-8bde-4b97-9ce2-501ddfdaf893-catalog-content\") pod \"redhat-marketplace-h2hpw\" (UID: \"e8d51c16-8bde-4b97-9ce2-501ddfdaf893\") " 
pod="openshift-marketplace/redhat-marketplace-h2hpw" Nov 28 06:58:46 crc kubenswrapper[4946]: I1128 06:58:46.005861 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8d51c16-8bde-4b97-9ce2-501ddfdaf893-utilities\") pod \"redhat-marketplace-h2hpw\" (UID: \"e8d51c16-8bde-4b97-9ce2-501ddfdaf893\") " pod="openshift-marketplace/redhat-marketplace-h2hpw" Nov 28 06:58:46 crc kubenswrapper[4946]: I1128 06:58:46.005914 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwrj8\" (UniqueName: \"kubernetes.io/projected/e8d51c16-8bde-4b97-9ce2-501ddfdaf893-kube-api-access-fwrj8\") pod \"redhat-marketplace-h2hpw\" (UID: \"e8d51c16-8bde-4b97-9ce2-501ddfdaf893\") " pod="openshift-marketplace/redhat-marketplace-h2hpw" Nov 28 06:58:46 crc kubenswrapper[4946]: I1128 06:58:46.047869 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9s587"] Nov 28 06:58:46 crc kubenswrapper[4946]: I1128 06:58:46.049265 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9s587" Nov 28 06:58:46 crc kubenswrapper[4946]: I1128 06:58:46.053361 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 28 06:58:46 crc kubenswrapper[4946]: I1128 06:58:46.059031 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9s587"] Nov 28 06:58:46 crc kubenswrapper[4946]: I1128 06:58:46.107507 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwwpw\" (UniqueName: \"kubernetes.io/projected/0c331d4e-3f84-4908-8dc4-71701be41cc0-kube-api-access-mwwpw\") pod \"redhat-operators-9s587\" (UID: \"0c331d4e-3f84-4908-8dc4-71701be41cc0\") " pod="openshift-marketplace/redhat-operators-9s587" Nov 28 06:58:46 crc kubenswrapper[4946]: I1128 06:58:46.107558 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c331d4e-3f84-4908-8dc4-71701be41cc0-utilities\") pod \"redhat-operators-9s587\" (UID: \"0c331d4e-3f84-4908-8dc4-71701be41cc0\") " pod="openshift-marketplace/redhat-operators-9s587" Nov 28 06:58:46 crc kubenswrapper[4946]: I1128 06:58:46.107607 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8d51c16-8bde-4b97-9ce2-501ddfdaf893-catalog-content\") pod \"redhat-marketplace-h2hpw\" (UID: \"e8d51c16-8bde-4b97-9ce2-501ddfdaf893\") " pod="openshift-marketplace/redhat-marketplace-h2hpw" Nov 28 06:58:46 crc kubenswrapper[4946]: I1128 06:58:46.107630 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8d51c16-8bde-4b97-9ce2-501ddfdaf893-utilities\") pod \"redhat-marketplace-h2hpw\" (UID: \"e8d51c16-8bde-4b97-9ce2-501ddfdaf893\") " pod="openshift-marketplace/redhat-marketplace-h2hpw" Nov 28 06:58:46 crc kubenswrapper[4946]: I1128 06:58:46.107650 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwrj8\" (UniqueName: \"kubernetes.io/projected/e8d51c16-8bde-4b97-9ce2-501ddfdaf893-kube-api-access-fwrj8\") pod \"redhat-marketplace-h2hpw\" (UID: \"e8d51c16-8bde-4b97-9ce2-501ddfdaf893\") " 
pod="openshift-marketplace/redhat-marketplace-h2hpw" Nov 28 06:58:46 crc kubenswrapper[4946]: I1128 06:58:46.107699 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c331d4e-3f84-4908-8dc4-71701be41cc0-catalog-content\") pod \"redhat-operators-9s587\" (UID: \"0c331d4e-3f84-4908-8dc4-71701be41cc0\") " pod="openshift-marketplace/redhat-operators-9s587" Nov 28 06:58:46 crc kubenswrapper[4946]: I1128 06:58:46.108499 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8d51c16-8bde-4b97-9ce2-501ddfdaf893-utilities\") pod \"redhat-marketplace-h2hpw\" (UID: \"e8d51c16-8bde-4b97-9ce2-501ddfdaf893\") " pod="openshift-marketplace/redhat-marketplace-h2hpw" Nov 28 06:58:46 crc kubenswrapper[4946]: I1128 06:58:46.109135 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8d51c16-8bde-4b97-9ce2-501ddfdaf893-catalog-content\") pod \"redhat-marketplace-h2hpw\" (UID: \"e8d51c16-8bde-4b97-9ce2-501ddfdaf893\") " pod="openshift-marketplace/redhat-marketplace-h2hpw" Nov 28 06:58:46 crc kubenswrapper[4946]: I1128 06:58:46.128701 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwrj8\" (UniqueName: \"kubernetes.io/projected/e8d51c16-8bde-4b97-9ce2-501ddfdaf893-kube-api-access-fwrj8\") pod \"redhat-marketplace-h2hpw\" (UID: \"e8d51c16-8bde-4b97-9ce2-501ddfdaf893\") " pod="openshift-marketplace/redhat-marketplace-h2hpw" Nov 28 06:58:46 crc kubenswrapper[4946]: I1128 06:58:46.181452 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h2hpw" Nov 28 06:58:46 crc kubenswrapper[4946]: I1128 06:58:46.209006 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c331d4e-3f84-4908-8dc4-71701be41cc0-catalog-content\") pod \"redhat-operators-9s587\" (UID: \"0c331d4e-3f84-4908-8dc4-71701be41cc0\") " pod="openshift-marketplace/redhat-operators-9s587" Nov 28 06:58:46 crc kubenswrapper[4946]: I1128 06:58:46.209525 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwwpw\" (UniqueName: \"kubernetes.io/projected/0c331d4e-3f84-4908-8dc4-71701be41cc0-kube-api-access-mwwpw\") pod \"redhat-operators-9s587\" (UID: \"0c331d4e-3f84-4908-8dc4-71701be41cc0\") " pod="openshift-marketplace/redhat-operators-9s587" Nov 28 06:58:46 crc kubenswrapper[4946]: I1128 06:58:46.209554 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c331d4e-3f84-4908-8dc4-71701be41cc0-utilities\") pod \"redhat-operators-9s587\" (UID: \"0c331d4e-3f84-4908-8dc4-71701be41cc0\") " pod="openshift-marketplace/redhat-operators-9s587" Nov 28 06:58:46 crc kubenswrapper[4946]: I1128 06:58:46.209638 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c331d4e-3f84-4908-8dc4-71701be41cc0-catalog-content\") pod \"redhat-operators-9s587\" (UID: \"0c331d4e-3f84-4908-8dc4-71701be41cc0\") " pod="openshift-marketplace/redhat-operators-9s587" Nov 28 06:58:46 crc kubenswrapper[4946]: I1128 06:58:46.209846 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/0c331d4e-3f84-4908-8dc4-71701be41cc0-utilities\") pod \"redhat-operators-9s587\" (UID: \"0c331d4e-3f84-4908-8dc4-71701be41cc0\") " pod="openshift-marketplace/redhat-operators-9s587" Nov 28 06:58:46 crc kubenswrapper[4946]: I1128 06:58:46.233198 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwwpw\" (UniqueName: \"kubernetes.io/projected/0c331d4e-3f84-4908-8dc4-71701be41cc0-kube-api-access-mwwpw\") pod \"redhat-operators-9s587\" (UID: \"0c331d4e-3f84-4908-8dc4-71701be41cc0\") " pod="openshift-marketplace/redhat-operators-9s587" Nov 28 06:58:46 crc kubenswrapper[4946]: I1128 06:58:46.409593 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h2hpw"] Nov 28 06:58:46 crc kubenswrapper[4946]: I1128 06:58:46.415800 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9s587" Nov 28 06:58:46 crc kubenswrapper[4946]: I1128 06:58:46.461784 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h2hpw" event={"ID":"e8d51c16-8bde-4b97-9ce2-501ddfdaf893","Type":"ContainerStarted","Data":"4c7747240b2584fd97fd8b9ed7eb43d995c2a7649d1dcc5fdaedea36eeb563f1"} Nov 28 06:58:46 crc kubenswrapper[4946]: I1128 06:58:46.622685 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9s587"] Nov 28 06:58:46 crc kubenswrapper[4946]: W1128 06:58:46.685603 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c331d4e_3f84_4908_8dc4_71701be41cc0.slice/crio-1afff790cdacc1e06efff1a848dd5fae8dce055ef7aae22d073c400c2d481f20 WatchSource:0}: Error finding container 1afff790cdacc1e06efff1a848dd5fae8dce055ef7aae22d073c400c2d481f20: Status 404 returned error can't find the container with id 1afff790cdacc1e06efff1a848dd5fae8dce055ef7aae22d073c400c2d481f20 Nov 28 06:58:47 crc kubenswrapper[4946]: I1128 06:58:47.468122 4946 generic.go:334] "Generic (PLEG): container finished" podID="e8d51c16-8bde-4b97-9ce2-501ddfdaf893" containerID="6fabdd6395ea463b312a32123626a839f5c395dac97d45bcddcbeeaeae5a7d1f" exitCode=0 Nov 28 06:58:47 crc kubenswrapper[4946]: I1128 06:58:47.468192 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h2hpw" event={"ID":"e8d51c16-8bde-4b97-9ce2-501ddfdaf893","Type":"ContainerDied","Data":"6fabdd6395ea463b312a32123626a839f5c395dac97d45bcddcbeeaeae5a7d1f"} Nov 28 06:58:47 crc kubenswrapper[4946]: I1128 06:58:47.470922 4946 generic.go:334] "Generic (PLEG): container finished" podID="0c331d4e-3f84-4908-8dc4-71701be41cc0" containerID="6dc819924cff91ecb31dfe63a806760b257b71d3bca85ac9490ac94392be62c1" exitCode=0 Nov 28 06:58:47 crc kubenswrapper[4946]: I1128 06:58:47.471558 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9s587" event={"ID":"0c331d4e-3f84-4908-8dc4-71701be41cc0","Type":"ContainerDied","Data":"6dc819924cff91ecb31dfe63a806760b257b71d3bca85ac9490ac94392be62c1"} Nov 28 06:58:47 crc kubenswrapper[4946]: I1128 06:58:47.471613 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9s587" event={"ID":"0c331d4e-3f84-4908-8dc4-71701be41cc0","Type":"ContainerStarted","Data":"1afff790cdacc1e06efff1a848dd5fae8dce055ef7aae22d073c400c2d481f20"} Nov 28 06:58:48 crc kubenswrapper[4946]: I1128 06:58:48.254297 4946 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bd5ld"] Nov 28 06:58:48 crc kubenswrapper[4946]: I1128 06:58:48.256139 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bd5ld" Nov 28 06:58:48 crc kubenswrapper[4946]: I1128 06:58:48.261810 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 28 06:58:48 crc kubenswrapper[4946]: I1128 06:58:48.264015 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bd5ld"] Nov 28 06:58:48 crc kubenswrapper[4946]: I1128 06:58:48.450596 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28162389-233d-44e7-a81c-c92f186e9f94-catalog-content\") pod \"community-operators-bd5ld\" (UID: \"28162389-233d-44e7-a81c-c92f186e9f94\") " pod="openshift-marketplace/community-operators-bd5ld" Nov 28 06:58:48 crc kubenswrapper[4946]: I1128 06:58:48.450668 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28162389-233d-44e7-a81c-c92f186e9f94-utilities\") pod \"community-operators-bd5ld\" (UID: \"28162389-233d-44e7-a81c-c92f186e9f94\") " pod="openshift-marketplace/community-operators-bd5ld" Nov 28 06:58:48 crc kubenswrapper[4946]: I1128 06:58:48.450701 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxzrv\" (UniqueName: \"kubernetes.io/projected/28162389-233d-44e7-a81c-c92f186e9f94-kube-api-access-qxzrv\") pod \"community-operators-bd5ld\" (UID: \"28162389-233d-44e7-a81c-c92f186e9f94\") " pod="openshift-marketplace/community-operators-bd5ld" Nov 28 06:58:48 crc kubenswrapper[4946]: I1128 06:58:48.455573 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5rk47"] Nov 28 06:58:48 crc kubenswrapper[4946]: I1128 06:58:48.457794 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5rk47" Nov 28 06:58:48 crc kubenswrapper[4946]: I1128 06:58:48.460396 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 28 06:58:48 crc kubenswrapper[4946]: I1128 06:58:48.469745 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5rk47"] Nov 28 06:58:48 crc kubenswrapper[4946]: I1128 06:58:48.552158 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28162389-233d-44e7-a81c-c92f186e9f94-utilities\") pod \"community-operators-bd5ld\" (UID: \"28162389-233d-44e7-a81c-c92f186e9f94\") " pod="openshift-marketplace/community-operators-bd5ld" Nov 28 06:58:48 crc kubenswrapper[4946]: I1128 06:58:48.552229 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/affebde7-d83f-4478-85fa-2dcba3ec2499-utilities\") pod \"certified-operators-5rk47\" (UID: \"affebde7-d83f-4478-85fa-2dcba3ec2499\") " pod="openshift-marketplace/certified-operators-5rk47" Nov 28 06:58:48 crc kubenswrapper[4946]: I1128 06:58:48.552260 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxzrv\" (UniqueName: \"kubernetes.io/projected/28162389-233d-44e7-a81c-c92f186e9f94-kube-api-access-qxzrv\") pod \"community-operators-bd5ld\" (UID: \"28162389-233d-44e7-a81c-c92f186e9f94\") " pod="openshift-marketplace/community-operators-bd5ld" Nov 28 06:58:48 crc kubenswrapper[4946]: I1128 06:58:48.552292 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhlfk\" (UniqueName: \"kubernetes.io/projected/affebde7-d83f-4478-85fa-2dcba3ec2499-kube-api-access-vhlfk\") pod \"certified-operators-5rk47\" (UID: \"affebde7-d83f-4478-85fa-2dcba3ec2499\") " pod="openshift-marketplace/certified-operators-5rk47" Nov 28 06:58:48 crc kubenswrapper[4946]: I1128 06:58:48.552351 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/affebde7-d83f-4478-85fa-2dcba3ec2499-catalog-content\") pod \"certified-operators-5rk47\" (UID: \"affebde7-d83f-4478-85fa-2dcba3ec2499\") " pod="openshift-marketplace/certified-operators-5rk47" Nov 28 06:58:48 crc kubenswrapper[4946]: I1128 06:58:48.552396 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28162389-233d-44e7-a81c-c92f186e9f94-catalog-content\") pod \"community-operators-bd5ld\" (UID: \"28162389-233d-44e7-a81c-c92f186e9f94\") " pod="openshift-marketplace/community-operators-bd5ld" Nov 28 06:58:48 crc kubenswrapper[4946]: I1128 06:58:48.552655 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28162389-233d-44e7-a81c-c92f186e9f94-utilities\") pod \"community-operators-bd5ld\" (UID: \"28162389-233d-44e7-a81c-c92f186e9f94\") " pod="openshift-marketplace/community-operators-bd5ld" Nov 28 06:58:48 crc kubenswrapper[4946]: I1128 06:58:48.552877 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28162389-233d-44e7-a81c-c92f186e9f94-catalog-content\") pod \"community-operators-bd5ld\" (UID: 
\"28162389-233d-44e7-a81c-c92f186e9f94\") " pod="openshift-marketplace/community-operators-bd5ld" Nov 28 06:58:48 crc kubenswrapper[4946]: I1128 06:58:48.581033 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxzrv\" (UniqueName: \"kubernetes.io/projected/28162389-233d-44e7-a81c-c92f186e9f94-kube-api-access-qxzrv\") pod \"community-operators-bd5ld\" (UID: \"28162389-233d-44e7-a81c-c92f186e9f94\") " pod="openshift-marketplace/community-operators-bd5ld" Nov 28 06:58:48 crc kubenswrapper[4946]: I1128 06:58:48.592542 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bd5ld" Nov 28 06:58:48 crc kubenswrapper[4946]: I1128 06:58:48.653690 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/affebde7-d83f-4478-85fa-2dcba3ec2499-utilities\") pod \"certified-operators-5rk47\" (UID: \"affebde7-d83f-4478-85fa-2dcba3ec2499\") " pod="openshift-marketplace/certified-operators-5rk47" Nov 28 06:58:48 crc kubenswrapper[4946]: I1128 06:58:48.653748 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhlfk\" (UniqueName: \"kubernetes.io/projected/affebde7-d83f-4478-85fa-2dcba3ec2499-kube-api-access-vhlfk\") pod \"certified-operators-5rk47\" (UID: \"affebde7-d83f-4478-85fa-2dcba3ec2499\") " pod="openshift-marketplace/certified-operators-5rk47" Nov 28 06:58:48 crc kubenswrapper[4946]: I1128 06:58:48.653801 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/affebde7-d83f-4478-85fa-2dcba3ec2499-catalog-content\") pod \"certified-operators-5rk47\" (UID: \"affebde7-d83f-4478-85fa-2dcba3ec2499\") " pod="openshift-marketplace/certified-operators-5rk47" Nov 28 06:58:48 crc kubenswrapper[4946]: I1128 06:58:48.654350 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/affebde7-d83f-4478-85fa-2dcba3ec2499-utilities\") pod \"certified-operators-5rk47\" (UID: \"affebde7-d83f-4478-85fa-2dcba3ec2499\") " pod="openshift-marketplace/certified-operators-5rk47" Nov 28 06:58:48 crc kubenswrapper[4946]: I1128 06:58:48.654753 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/affebde7-d83f-4478-85fa-2dcba3ec2499-catalog-content\") pod \"certified-operators-5rk47\" (UID: \"affebde7-d83f-4478-85fa-2dcba3ec2499\") " pod="openshift-marketplace/certified-operators-5rk47" Nov 28 06:58:48 crc kubenswrapper[4946]: I1128 06:58:48.675849 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhlfk\" (UniqueName: \"kubernetes.io/projected/affebde7-d83f-4478-85fa-2dcba3ec2499-kube-api-access-vhlfk\") pod \"certified-operators-5rk47\" (UID: \"affebde7-d83f-4478-85fa-2dcba3ec2499\") " pod="openshift-marketplace/certified-operators-5rk47" Nov 28 06:58:48 crc kubenswrapper[4946]: I1128 06:58:48.806358 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bd5ld"] Nov 28 06:58:48 crc kubenswrapper[4946]: I1128 06:58:48.814912 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5rk47" Nov 28 06:58:48 crc kubenswrapper[4946]: W1128 06:58:48.832335 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28162389_233d_44e7_a81c_c92f186e9f94.slice/crio-b83ec898b36446b50cc01b44baa4fef198a01335ff83cf02edb36be220da936e WatchSource:0}: Error finding container b83ec898b36446b50cc01b44baa4fef198a01335ff83cf02edb36be220da936e: Status 404 returned error can't find the container with id b83ec898b36446b50cc01b44baa4fef198a01335ff83cf02edb36be220da936e Nov 28 06:58:49 crc kubenswrapper[4946]: I1128 06:58:49.023632 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5rk47"] Nov 28 06:58:49 crc kubenswrapper[4946]: W1128 06:58:49.067052 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaffebde7_d83f_4478_85fa_2dcba3ec2499.slice/crio-3f6c157240b95f9093427a2409d9c381a8a3939b6f288cadac5512e3f052e825 WatchSource:0}: Error finding container 3f6c157240b95f9093427a2409d9c381a8a3939b6f288cadac5512e3f052e825: Status 404 returned error can't find the container with id 3f6c157240b95f9093427a2409d9c381a8a3939b6f288cadac5512e3f052e825 Nov 28 06:58:49 crc kubenswrapper[4946]: I1128 06:58:49.485882 4946 generic.go:334] "Generic (PLEG): container finished" podID="0c331d4e-3f84-4908-8dc4-71701be41cc0" containerID="f4a8973ebccbf5affac60af01e309383fa20d5aea94511eb4e73846eea49b6d5" exitCode=0 Nov 28 06:58:49 crc kubenswrapper[4946]: I1128 06:58:49.485978 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9s587" event={"ID":"0c331d4e-3f84-4908-8dc4-71701be41cc0","Type":"ContainerDied","Data":"f4a8973ebccbf5affac60af01e309383fa20d5aea94511eb4e73846eea49b6d5"} Nov 28 06:58:49 crc kubenswrapper[4946]: I1128 06:58:49.487904 4946 generic.go:334] "Generic (PLEG): container finished" podID="28162389-233d-44e7-a81c-c92f186e9f94" containerID="f07131c6b5394e80013d7f3dde43ad3b31c1c51f5814bebd824dd6cc216ea568" exitCode=0 Nov 28 06:58:49 crc kubenswrapper[4946]: I1128 06:58:49.487944 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bd5ld" event={"ID":"28162389-233d-44e7-a81c-c92f186e9f94","Type":"ContainerDied","Data":"f07131c6b5394e80013d7f3dde43ad3b31c1c51f5814bebd824dd6cc216ea568"} Nov 28 06:58:49 crc kubenswrapper[4946]: I1128 06:58:49.487959 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bd5ld" event={"ID":"28162389-233d-44e7-a81c-c92f186e9f94","Type":"ContainerStarted","Data":"b83ec898b36446b50cc01b44baa4fef198a01335ff83cf02edb36be220da936e"} Nov 28 06:58:49 crc kubenswrapper[4946]: I1128 06:58:49.492166 4946 generic.go:334] "Generic (PLEG): container finished" podID="affebde7-d83f-4478-85fa-2dcba3ec2499" containerID="528837ebaa02f1c228381924dbb77c8ce056319330899045fc68f1cd0f5263b4" exitCode=0 Nov 28 06:58:49 crc kubenswrapper[4946]: I1128 06:58:49.492221 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5rk47" event={"ID":"affebde7-d83f-4478-85fa-2dcba3ec2499","Type":"ContainerDied","Data":"528837ebaa02f1c228381924dbb77c8ce056319330899045fc68f1cd0f5263b4"} Nov 28 06:58:49 crc kubenswrapper[4946]: I1128 06:58:49.492255 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5rk47" 
event={"ID":"affebde7-d83f-4478-85fa-2dcba3ec2499","Type":"ContainerStarted","Data":"3f6c157240b95f9093427a2409d9c381a8a3939b6f288cadac5512e3f052e825"} Nov 28 06:58:51 crc kubenswrapper[4946]: I1128 06:58:51.512320 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bd5ld" event={"ID":"28162389-233d-44e7-a81c-c92f186e9f94","Type":"ContainerStarted","Data":"2ee3a9b9c633a143ae7ac0e5a831c95dc486da364631443656104ad4767ddf07"} Nov 28 06:58:52 crc kubenswrapper[4946]: I1128 06:58:52.521204 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9s587" event={"ID":"0c331d4e-3f84-4908-8dc4-71701be41cc0","Type":"ContainerStarted","Data":"04a86412e830b5884f9a7a68a2ded6fee13c5e1ba6ddaf238c1de64712425309"} Nov 28 06:58:52 crc kubenswrapper[4946]: I1128 06:58:52.524241 4946 generic.go:334] "Generic (PLEG): container finished" podID="28162389-233d-44e7-a81c-c92f186e9f94" containerID="2ee3a9b9c633a143ae7ac0e5a831c95dc486da364631443656104ad4767ddf07" exitCode=0 Nov 28 06:58:52 crc kubenswrapper[4946]: I1128 06:58:52.524428 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bd5ld" event={"ID":"28162389-233d-44e7-a81c-c92f186e9f94","Type":"ContainerDied","Data":"2ee3a9b9c633a143ae7ac0e5a831c95dc486da364631443656104ad4767ddf07"} Nov 28 06:58:52 crc kubenswrapper[4946]: I1128 06:58:52.527317 4946 generic.go:334] "Generic (PLEG): container finished" podID="affebde7-d83f-4478-85fa-2dcba3ec2499" containerID="4efe97155707deb6860ae9515826ad72caad61f259ecd515ef7536baad8a5ade" exitCode=0 Nov 28 06:58:52 crc kubenswrapper[4946]: I1128 06:58:52.527407 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5rk47" event={"ID":"affebde7-d83f-4478-85fa-2dcba3ec2499","Type":"ContainerDied","Data":"4efe97155707deb6860ae9515826ad72caad61f259ecd515ef7536baad8a5ade"} Nov 28 06:58:52 crc kubenswrapper[4946]: I1128 06:58:52.532108 4946 generic.go:334] "Generic (PLEG): container finished" podID="e8d51c16-8bde-4b97-9ce2-501ddfdaf893" containerID="cfde7adb4969db76be01ca4d76597e03e887a23cb3066b120e65574c9573a0ef" exitCode=0 Nov 28 06:58:52 crc kubenswrapper[4946]: I1128 06:58:52.532147 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h2hpw" event={"ID":"e8d51c16-8bde-4b97-9ce2-501ddfdaf893","Type":"ContainerDied","Data":"cfde7adb4969db76be01ca4d76597e03e887a23cb3066b120e65574c9573a0ef"} Nov 28 06:58:52 crc kubenswrapper[4946]: I1128 06:58:52.550262 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9s587" podStartSLOduration=2.605248804 podStartE2EDuration="6.550240762s" podCreationTimestamp="2025-11-28 06:58:46 +0000 UTC" firstStartedPulling="2025-11-28 06:58:47.474666993 +0000 UTC m=+381.852732104" lastFinishedPulling="2025-11-28 06:58:51.419658931 +0000 UTC m=+385.797724062" observedRunningTime="2025-11-28 06:58:52.545973482 +0000 UTC m=+386.924038593" watchObservedRunningTime="2025-11-28 06:58:52.550240762 +0000 UTC m=+386.928305873" Nov 28 06:58:53 crc kubenswrapper[4946]: I1128 06:58:53.549222 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5rk47" event={"ID":"affebde7-d83f-4478-85fa-2dcba3ec2499","Type":"ContainerStarted","Data":"9c3e47b930804795d4e9b89145622e35b9c34eee790bff2fe27e6bfcf7e83b31"} Nov 28 06:58:53 crc kubenswrapper[4946]: I1128 06:58:53.552872 4946 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h2hpw" event={"ID":"e8d51c16-8bde-4b97-9ce2-501ddfdaf893","Type":"ContainerStarted","Data":"9c0837f3de0096af79f5b913b68b0279e0bde33e4607bd80fa6216f5606a301c"} Nov 28 06:58:53 crc kubenswrapper[4946]: I1128 06:58:53.555669 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bd5ld" event={"ID":"28162389-233d-44e7-a81c-c92f186e9f94","Type":"ContainerStarted","Data":"696a46e7105b499af83bc28770240bc54264f43e8c1d2ee455c773afec333fe5"} Nov 28 06:58:53 crc kubenswrapper[4946]: I1128 06:58:53.592532 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5rk47" podStartSLOduration=2.058909144 podStartE2EDuration="5.592515589s" podCreationTimestamp="2025-11-28 06:58:48 +0000 UTC" firstStartedPulling="2025-11-28 06:58:49.493441504 +0000 UTC m=+383.871506605" lastFinishedPulling="2025-11-28 06:58:53.027047939 +0000 UTC m=+387.405113050" observedRunningTime="2025-11-28 06:58:53.574725569 +0000 UTC m=+387.952790680" watchObservedRunningTime="2025-11-28 06:58:53.592515589 +0000 UTC m=+387.970580700" Nov 28 06:58:53 crc kubenswrapper[4946]: I1128 06:58:53.620642 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bd5ld" podStartSLOduration=2.05563916 podStartE2EDuration="5.620618746s" podCreationTimestamp="2025-11-28 06:58:48 +0000 UTC" firstStartedPulling="2025-11-28 06:58:49.488955508 +0000 UTC m=+383.867020609" lastFinishedPulling="2025-11-28 06:58:53.053935084 +0000 UTC m=+387.432000195" observedRunningTime="2025-11-28 06:58:53.596570774 +0000 UTC m=+387.974635885" watchObservedRunningTime="2025-11-28 06:58:53.620618746 +0000 UTC m=+387.998683857" Nov 28 06:58:53 crc kubenswrapper[4946]: I1128 06:58:53.620965 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-h2hpw" podStartSLOduration=3.016984264 podStartE2EDuration="8.620960645s" podCreationTimestamp="2025-11-28 06:58:45 +0000 UTC" firstStartedPulling="2025-11-28 06:58:47.470085604 +0000 UTC m=+381.848150755" lastFinishedPulling="2025-11-28 06:58:53.074062025 +0000 UTC m=+387.452127136" observedRunningTime="2025-11-28 06:58:53.619887617 +0000 UTC m=+387.997952738" watchObservedRunningTime="2025-11-28 06:58:53.620960645 +0000 UTC m=+387.999025756" Nov 28 06:58:54 crc kubenswrapper[4946]: I1128 06:58:54.731316 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 06:58:54 crc kubenswrapper[4946]: I1128 06:58:54.731391 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 06:58:56 crc kubenswrapper[4946]: I1128 06:58:56.181866 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-h2hpw" Nov 28 06:58:56 crc kubenswrapper[4946]: I1128 06:58:56.182301 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-h2hpw" Nov 28 06:58:56 crc kubenswrapper[4946]: I1128 06:58:56.237181 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-h2hpw" Nov 28 06:58:56 crc kubenswrapper[4946]: I1128 06:58:56.416220 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9s587" Nov 28 06:58:56 crc kubenswrapper[4946]: I1128 06:58:56.416307 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9s587" Nov 28 06:58:57 crc kubenswrapper[4946]: I1128 06:58:57.462394 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9s587" podUID="0c331d4e-3f84-4908-8dc4-71701be41cc0" containerName="registry-server" probeResult="failure" output=< Nov 28 06:58:57 crc kubenswrapper[4946]: timeout: failed to connect service ":50051" within 1s Nov 28 06:58:57 crc kubenswrapper[4946]: > Nov 28 06:58:58 crc kubenswrapper[4946]: I1128 06:58:58.593289 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bd5ld" Nov 28 06:58:58 crc kubenswrapper[4946]: I1128 06:58:58.593937 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bd5ld" Nov 28 06:58:58 crc kubenswrapper[4946]: I1128 06:58:58.650125 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bd5ld" Nov 28 06:58:58 crc kubenswrapper[4946]: I1128 06:58:58.815931 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5rk47" Nov 28 06:58:58 crc kubenswrapper[4946]: I1128 06:58:58.817029 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5rk47" Nov 28 06:58:58 crc kubenswrapper[4946]: I1128 06:58:58.862859 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5rk47" Nov 28 06:58:59 crc kubenswrapper[4946]: I1128 06:58:59.643765 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5rk47" Nov 28 06:58:59 crc kubenswrapper[4946]: I1128 06:58:59.667095 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bd5ld" Nov 28 06:59:06 crc kubenswrapper[4946]: I1128 06:59:06.253178 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-h2hpw" Nov 28 06:59:06 crc kubenswrapper[4946]: I1128 06:59:06.499868 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9s587" Nov 28 06:59:06 crc kubenswrapper[4946]: I1128 06:59:06.575187 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9s587" Nov 28 06:59:24 crc kubenswrapper[4946]: I1128 06:59:24.730647 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 06:59:24 crc kubenswrapper[4946]: I1128 06:59:24.731725 4946 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 06:59:24 crc kubenswrapper[4946]: I1128 06:59:24.731831 4946 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" Nov 28 06:59:24 crc kubenswrapper[4946]: I1128 06:59:24.733044 4946 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"434a29b0903cdd2aa9424d48ae517522642263b1fb83a0aeca569e01bd8b7068"} pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 06:59:24 crc kubenswrapper[4946]: I1128 06:59:24.733146 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" containerID="cri-o://434a29b0903cdd2aa9424d48ae517522642263b1fb83a0aeca569e01bd8b7068" gracePeriod=600 Nov 28 06:59:25 crc kubenswrapper[4946]: I1128 06:59:25.794023 4946 generic.go:334] "Generic (PLEG): container finished" podID="7450befc-262f-45d1-a5f4-f445e540185b" containerID="434a29b0903cdd2aa9424d48ae517522642263b1fb83a0aeca569e01bd8b7068" exitCode=0 Nov 28 06:59:25 crc kubenswrapper[4946]: I1128 06:59:25.794103 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerDied","Data":"434a29b0903cdd2aa9424d48ae517522642263b1fb83a0aeca569e01bd8b7068"} Nov 28 06:59:25 crc kubenswrapper[4946]: I1128 06:59:25.794897 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerStarted","Data":"21fc0485e74b978141bbad8ee7d9fb603a74f8f723790b3bd012d88483e9ed87"} Nov 28 06:59:25 crc kubenswrapper[4946]: I1128 06:59:25.794941 4946 scope.go:117] "RemoveContainer" containerID="174d7788364a980297791c1072b9c05dd827bb3759a851ea4c5dd07981297b67" Nov 28 07:00:00 crc kubenswrapper[4946]: I1128 07:00:00.196863 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405220-67wkg"] Nov 28 07:00:00 crc kubenswrapper[4946]: I1128 07:00:00.198496 4946 util.go:30] "No sandbox for pod can be found. 
Nov 28 07:00:00 crc kubenswrapper[4946]: I1128 07:00:00.201177 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 28 07:00:00 crc kubenswrapper[4946]: I1128 07:00:00.201442 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 28 07:00:00 crc kubenswrapper[4946]: I1128 07:00:00.206634 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405220-67wkg"]
Nov 28 07:00:00 crc kubenswrapper[4946]: I1128 07:00:00.238097 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90ff5457-2e1e-4e21-8fba-a7f0c99dea9f-config-volume\") pod \"collect-profiles-29405220-67wkg\" (UID: \"90ff5457-2e1e-4e21-8fba-a7f0c99dea9f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-67wkg"
Nov 28 07:00:00 crc kubenswrapper[4946]: I1128 07:00:00.238139 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv66m\" (UniqueName: \"kubernetes.io/projected/90ff5457-2e1e-4e21-8fba-a7f0c99dea9f-kube-api-access-bv66m\") pod \"collect-profiles-29405220-67wkg\" (UID: \"90ff5457-2e1e-4e21-8fba-a7f0c99dea9f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-67wkg"
Nov 28 07:00:00 crc kubenswrapper[4946]: I1128 07:00:00.238180 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/90ff5457-2e1e-4e21-8fba-a7f0c99dea9f-secret-volume\") pod \"collect-profiles-29405220-67wkg\" (UID: \"90ff5457-2e1e-4e21-8fba-a7f0c99dea9f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-67wkg"
Nov 28 07:00:00 crc kubenswrapper[4946]: I1128 07:00:00.339417 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/90ff5457-2e1e-4e21-8fba-a7f0c99dea9f-secret-volume\") pod \"collect-profiles-29405220-67wkg\" (UID: \"90ff5457-2e1e-4e21-8fba-a7f0c99dea9f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-67wkg"
Nov 28 07:00:00 crc kubenswrapper[4946]: I1128 07:00:00.339564 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90ff5457-2e1e-4e21-8fba-a7f0c99dea9f-config-volume\") pod \"collect-profiles-29405220-67wkg\" (UID: \"90ff5457-2e1e-4e21-8fba-a7f0c99dea9f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-67wkg"
Nov 28 07:00:00 crc kubenswrapper[4946]: I1128 07:00:00.339603 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bv66m\" (UniqueName: \"kubernetes.io/projected/90ff5457-2e1e-4e21-8fba-a7f0c99dea9f-kube-api-access-bv66m\") pod \"collect-profiles-29405220-67wkg\" (UID: \"90ff5457-2e1e-4e21-8fba-a7f0c99dea9f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-67wkg"
Nov 28 07:00:00 crc kubenswrapper[4946]: I1128 07:00:00.340874 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90ff5457-2e1e-4e21-8fba-a7f0c99dea9f-config-volume\") pod \"collect-profiles-29405220-67wkg\" (UID: \"90ff5457-2e1e-4e21-8fba-a7f0c99dea9f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-67wkg"
\"collect-profiles-29405220-67wkg\" (UID: \"90ff5457-2e1e-4e21-8fba-a7f0c99dea9f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-67wkg" Nov 28 07:00:00 crc kubenswrapper[4946]: I1128 07:00:00.350879 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/90ff5457-2e1e-4e21-8fba-a7f0c99dea9f-secret-volume\") pod \"collect-profiles-29405220-67wkg\" (UID: \"90ff5457-2e1e-4e21-8fba-a7f0c99dea9f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-67wkg" Nov 28 07:00:00 crc kubenswrapper[4946]: I1128 07:00:00.356760 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bv66m\" (UniqueName: \"kubernetes.io/projected/90ff5457-2e1e-4e21-8fba-a7f0c99dea9f-kube-api-access-bv66m\") pod \"collect-profiles-29405220-67wkg\" (UID: \"90ff5457-2e1e-4e21-8fba-a7f0c99dea9f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-67wkg" Nov 28 07:00:00 crc kubenswrapper[4946]: I1128 07:00:00.518770 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-67wkg" Nov 28 07:00:00 crc kubenswrapper[4946]: I1128 07:00:00.749618 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405220-67wkg"] Nov 28 07:00:01 crc kubenswrapper[4946]: I1128 07:00:01.058877 4946 generic.go:334] "Generic (PLEG): container finished" podID="90ff5457-2e1e-4e21-8fba-a7f0c99dea9f" containerID="993815f62811cc23fc8537561e35245840757164bd7c177a8e48f8b06f9fa147" exitCode=0 Nov 28 07:00:01 crc kubenswrapper[4946]: I1128 07:00:01.059311 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-67wkg" event={"ID":"90ff5457-2e1e-4e21-8fba-a7f0c99dea9f","Type":"ContainerDied","Data":"993815f62811cc23fc8537561e35245840757164bd7c177a8e48f8b06f9fa147"} Nov 28 07:00:01 crc kubenswrapper[4946]: I1128 07:00:01.059490 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-67wkg" event={"ID":"90ff5457-2e1e-4e21-8fba-a7f0c99dea9f","Type":"ContainerStarted","Data":"5bbf4b716003124388c3d40309d2c5ddffa96256dfe61c185c2b8e2a04b2f313"} Nov 28 07:00:02 crc kubenswrapper[4946]: I1128 07:00:02.297851 4946 util.go:48] "No ready sandbox for pod can be found. 
Nov 28 07:00:02 crc kubenswrapper[4946]: I1128 07:00:02.363711 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90ff5457-2e1e-4e21-8fba-a7f0c99dea9f-config-volume\") pod \"90ff5457-2e1e-4e21-8fba-a7f0c99dea9f\" (UID: \"90ff5457-2e1e-4e21-8fba-a7f0c99dea9f\") "
Nov 28 07:00:02 crc kubenswrapper[4946]: I1128 07:00:02.363784 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/90ff5457-2e1e-4e21-8fba-a7f0c99dea9f-secret-volume\") pod \"90ff5457-2e1e-4e21-8fba-a7f0c99dea9f\" (UID: \"90ff5457-2e1e-4e21-8fba-a7f0c99dea9f\") "
Nov 28 07:00:02 crc kubenswrapper[4946]: I1128 07:00:02.363829 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bv66m\" (UniqueName: \"kubernetes.io/projected/90ff5457-2e1e-4e21-8fba-a7f0c99dea9f-kube-api-access-bv66m\") pod \"90ff5457-2e1e-4e21-8fba-a7f0c99dea9f\" (UID: \"90ff5457-2e1e-4e21-8fba-a7f0c99dea9f\") "
Nov 28 07:00:02 crc kubenswrapper[4946]: I1128 07:00:02.364184 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90ff5457-2e1e-4e21-8fba-a7f0c99dea9f-config-volume" (OuterVolumeSpecName: "config-volume") pod "90ff5457-2e1e-4e21-8fba-a7f0c99dea9f" (UID: "90ff5457-2e1e-4e21-8fba-a7f0c99dea9f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 07:00:02 crc kubenswrapper[4946]: I1128 07:00:02.370786 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90ff5457-2e1e-4e21-8fba-a7f0c99dea9f-kube-api-access-bv66m" (OuterVolumeSpecName: "kube-api-access-bv66m") pod "90ff5457-2e1e-4e21-8fba-a7f0c99dea9f" (UID: "90ff5457-2e1e-4e21-8fba-a7f0c99dea9f"). InnerVolumeSpecName "kube-api-access-bv66m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 07:00:02 crc kubenswrapper[4946]: I1128 07:00:02.371224 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90ff5457-2e1e-4e21-8fba-a7f0c99dea9f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "90ff5457-2e1e-4e21-8fba-a7f0c99dea9f" (UID: "90ff5457-2e1e-4e21-8fba-a7f0c99dea9f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:00:02 crc kubenswrapper[4946]: I1128 07:00:02.465205 4946 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/90ff5457-2e1e-4e21-8fba-a7f0c99dea9f-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 28 07:00:02 crc kubenswrapper[4946]: I1128 07:00:02.465251 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bv66m\" (UniqueName: \"kubernetes.io/projected/90ff5457-2e1e-4e21-8fba-a7f0c99dea9f-kube-api-access-bv66m\") on node \"crc\" DevicePath \"\"" Nov 28 07:00:02 crc kubenswrapper[4946]: I1128 07:00:02.465262 4946 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90ff5457-2e1e-4e21-8fba-a7f0c99dea9f-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 07:00:03 crc kubenswrapper[4946]: I1128 07:00:03.074195 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-67wkg" event={"ID":"90ff5457-2e1e-4e21-8fba-a7f0c99dea9f","Type":"ContainerDied","Data":"5bbf4b716003124388c3d40309d2c5ddffa96256dfe61c185c2b8e2a04b2f313"} Nov 28 07:00:03 crc kubenswrapper[4946]: I1128 07:00:03.074769 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bbf4b716003124388c3d40309d2c5ddffa96256dfe61c185c2b8e2a04b2f313" Nov 28 07:00:03 crc kubenswrapper[4946]: I1128 07:00:03.074455 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-67wkg" Nov 28 07:01:54 crc kubenswrapper[4946]: I1128 07:01:54.730789 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 07:01:54 crc kubenswrapper[4946]: I1128 07:01:54.731635 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 07:02:24 crc kubenswrapper[4946]: I1128 07:02:24.730834 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 07:02:24 crc kubenswrapper[4946]: I1128 07:02:24.732768 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 07:02:55 crc kubenswrapper[4946]: I1128 07:02:54.730843 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 07:02:55 crc kubenswrapper[4946]: I1128 07:02:54.731613 
Nov 28 07:02:55 crc kubenswrapper[4946]: I1128 07:02:54.731669 4946 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr"
Nov 28 07:02:55 crc kubenswrapper[4946]: I1128 07:02:54.732281 4946 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"21fc0485e74b978141bbad8ee7d9fb603a74f8f723790b3bd012d88483e9ed87"} pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 28 07:02:55 crc kubenswrapper[4946]: I1128 07:02:54.732336 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" containerID="cri-o://21fc0485e74b978141bbad8ee7d9fb603a74f8f723790b3bd012d88483e9ed87" gracePeriod=600
Nov 28 07:02:55 crc kubenswrapper[4946]: I1128 07:02:55.345138 4946 generic.go:334] "Generic (PLEG): container finished" podID="7450befc-262f-45d1-a5f4-f445e540185b" containerID="21fc0485e74b978141bbad8ee7d9fb603a74f8f723790b3bd012d88483e9ed87" exitCode=0
Nov 28 07:02:55 crc kubenswrapper[4946]: I1128 07:02:55.345231 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerDied","Data":"21fc0485e74b978141bbad8ee7d9fb603a74f8f723790b3bd012d88483e9ed87"}
Nov 28 07:02:55 crc kubenswrapper[4946]: I1128 07:02:55.345929 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerStarted","Data":"9312f27097b6f3bbafda147b11eda22821e2c49101e6733241ea36c59af5418f"}
Nov 28 07:02:55 crc kubenswrapper[4946]: I1128 07:02:55.345960 4946 scope.go:117] "RemoveContainer" containerID="434a29b0903cdd2aa9424d48ae517522642263b1fb83a0aeca569e01bd8b7068"
Nov 28 07:04:57 crc kubenswrapper[4946]: I1128 07:04:57.397692 4946 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Nov 28 07:05:23 crc kubenswrapper[4946]: I1128 07:05:23.082977 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-55xst"]
Nov 28 07:05:23 crc kubenswrapper[4946]: E1128 07:05:23.084187 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90ff5457-2e1e-4e21-8fba-a7f0c99dea9f" containerName="collect-profiles"
Nov 28 07:05:23 crc kubenswrapper[4946]: I1128 07:05:23.084207 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="90ff5457-2e1e-4e21-8fba-a7f0c99dea9f" containerName="collect-profiles"
Nov 28 07:05:23 crc kubenswrapper[4946]: I1128 07:05:23.084344 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="90ff5457-2e1e-4e21-8fba-a7f0c99dea9f" containerName="collect-profiles"
Nov 28 07:05:23 crc kubenswrapper[4946]: I1128 07:05:23.085417 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-55xst"
Nov 28 07:05:23 crc kubenswrapper[4946]: I1128 07:05:23.107136 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-55xst"]
Nov 28 07:05:23 crc kubenswrapper[4946]: I1128 07:05:23.171698 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18572852-b419-45b3-999d-aad04d5bc7b7-utilities\") pod \"certified-operators-55xst\" (UID: \"18572852-b419-45b3-999d-aad04d5bc7b7\") " pod="openshift-marketplace/certified-operators-55xst"
Nov 28 07:05:23 crc kubenswrapper[4946]: I1128 07:05:23.171759 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18572852-b419-45b3-999d-aad04d5bc7b7-catalog-content\") pod \"certified-operators-55xst\" (UID: \"18572852-b419-45b3-999d-aad04d5bc7b7\") " pod="openshift-marketplace/certified-operators-55xst"
Nov 28 07:05:23 crc kubenswrapper[4946]: I1128 07:05:23.171852 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hfqz\" (UniqueName: \"kubernetes.io/projected/18572852-b419-45b3-999d-aad04d5bc7b7-kube-api-access-2hfqz\") pod \"certified-operators-55xst\" (UID: \"18572852-b419-45b3-999d-aad04d5bc7b7\") " pod="openshift-marketplace/certified-operators-55xst"
Nov 28 07:05:23 crc kubenswrapper[4946]: I1128 07:05:23.273327 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18572852-b419-45b3-999d-aad04d5bc7b7-utilities\") pod \"certified-operators-55xst\" (UID: \"18572852-b419-45b3-999d-aad04d5bc7b7\") " pod="openshift-marketplace/certified-operators-55xst"
Nov 28 07:05:23 crc kubenswrapper[4946]: I1128 07:05:23.273387 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18572852-b419-45b3-999d-aad04d5bc7b7-catalog-content\") pod \"certified-operators-55xst\" (UID: \"18572852-b419-45b3-999d-aad04d5bc7b7\") " pod="openshift-marketplace/certified-operators-55xst"
Nov 28 07:05:23 crc kubenswrapper[4946]: I1128 07:05:23.273451 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hfqz\" (UniqueName: \"kubernetes.io/projected/18572852-b419-45b3-999d-aad04d5bc7b7-kube-api-access-2hfqz\") pod \"certified-operators-55xst\" (UID: \"18572852-b419-45b3-999d-aad04d5bc7b7\") " pod="openshift-marketplace/certified-operators-55xst"
Nov 28 07:05:23 crc kubenswrapper[4946]: I1128 07:05:23.273970 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18572852-b419-45b3-999d-aad04d5bc7b7-utilities\") pod \"certified-operators-55xst\" (UID: \"18572852-b419-45b3-999d-aad04d5bc7b7\") " pod="openshift-marketplace/certified-operators-55xst"
Nov 28 07:05:23 crc kubenswrapper[4946]: I1128 07:05:23.274145 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18572852-b419-45b3-999d-aad04d5bc7b7-catalog-content\") pod \"certified-operators-55xst\" (UID: \"18572852-b419-45b3-999d-aad04d5bc7b7\") " pod="openshift-marketplace/certified-operators-55xst"
Nov 28 07:05:23 crc kubenswrapper[4946]: I1128 07:05:23.301477 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hfqz\" (UniqueName: \"kubernetes.io/projected/18572852-b419-45b3-999d-aad04d5bc7b7-kube-api-access-2hfqz\") pod \"certified-operators-55xst\" (UID: \"18572852-b419-45b3-999d-aad04d5bc7b7\") " pod="openshift-marketplace/certified-operators-55xst"
"MountVolume.SetUp succeeded for volume \"kube-api-access-2hfqz\" (UniqueName: \"kubernetes.io/projected/18572852-b419-45b3-999d-aad04d5bc7b7-kube-api-access-2hfqz\") pod \"certified-operators-55xst\" (UID: \"18572852-b419-45b3-999d-aad04d5bc7b7\") " pod="openshift-marketplace/certified-operators-55xst" Nov 28 07:05:23 crc kubenswrapper[4946]: I1128 07:05:23.418893 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-55xst" Nov 28 07:05:23 crc kubenswrapper[4946]: I1128 07:05:23.683614 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-55xst"] Nov 28 07:05:24 crc kubenswrapper[4946]: I1128 07:05:24.404941 4946 generic.go:334] "Generic (PLEG): container finished" podID="18572852-b419-45b3-999d-aad04d5bc7b7" containerID="e3d646d481054dddd92d0e3c3fe9438a9e9e541f464e1b4b2a1bb5805ac5d2b7" exitCode=0 Nov 28 07:05:24 crc kubenswrapper[4946]: I1128 07:05:24.405046 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-55xst" event={"ID":"18572852-b419-45b3-999d-aad04d5bc7b7","Type":"ContainerDied","Data":"e3d646d481054dddd92d0e3c3fe9438a9e9e541f464e1b4b2a1bb5805ac5d2b7"} Nov 28 07:05:24 crc kubenswrapper[4946]: I1128 07:05:24.405243 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-55xst" event={"ID":"18572852-b419-45b3-999d-aad04d5bc7b7","Type":"ContainerStarted","Data":"3f6c4571feb208ea040509c143dafcfd6a8bf53456cf2c1dce8f580f8a9c5dd7"} Nov 28 07:05:24 crc kubenswrapper[4946]: I1128 07:05:24.407614 4946 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 07:05:24 crc kubenswrapper[4946]: I1128 07:05:24.731138 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 07:05:24 crc kubenswrapper[4946]: I1128 07:05:24.731230 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 07:05:27 crc kubenswrapper[4946]: I1128 07:05:27.440159 4946 generic.go:334] "Generic (PLEG): container finished" podID="18572852-b419-45b3-999d-aad04d5bc7b7" containerID="689eaa5ba517d0c8f026e981772f3cceb83194b526b726324ba7c66550d171bf" exitCode=0 Nov 28 07:05:27 crc kubenswrapper[4946]: I1128 07:05:27.440557 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-55xst" event={"ID":"18572852-b419-45b3-999d-aad04d5bc7b7","Type":"ContainerDied","Data":"689eaa5ba517d0c8f026e981772f3cceb83194b526b726324ba7c66550d171bf"} Nov 28 07:05:27 crc kubenswrapper[4946]: I1128 07:05:27.443936 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-z7dgb"] Nov 28 07:05:27 crc kubenswrapper[4946]: I1128 07:05:27.445241 4946 util.go:30] "No sandbox for pod can be found. 
Nov 28 07:05:27 crc kubenswrapper[4946]: I1128 07:05:27.479889 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z7dgb"]
Nov 28 07:05:27 crc kubenswrapper[4946]: I1128 07:05:27.540280 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f5d6fef-1331-4efe-ba88-ddf52dea0a1f-catalog-content\") pod \"redhat-operators-z7dgb\" (UID: \"1f5d6fef-1331-4efe-ba88-ddf52dea0a1f\") " pod="openshift-marketplace/redhat-operators-z7dgb"
Nov 28 07:05:27 crc kubenswrapper[4946]: I1128 07:05:27.540340 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmdkr\" (UniqueName: \"kubernetes.io/projected/1f5d6fef-1331-4efe-ba88-ddf52dea0a1f-kube-api-access-kmdkr\") pod \"redhat-operators-z7dgb\" (UID: \"1f5d6fef-1331-4efe-ba88-ddf52dea0a1f\") " pod="openshift-marketplace/redhat-operators-z7dgb"
Nov 28 07:05:27 crc kubenswrapper[4946]: I1128 07:05:27.540455 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f5d6fef-1331-4efe-ba88-ddf52dea0a1f-utilities\") pod \"redhat-operators-z7dgb\" (UID: \"1f5d6fef-1331-4efe-ba88-ddf52dea0a1f\") " pod="openshift-marketplace/redhat-operators-z7dgb"
Nov 28 07:05:27 crc kubenswrapper[4946]: I1128 07:05:27.642572 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f5d6fef-1331-4efe-ba88-ddf52dea0a1f-utilities\") pod \"redhat-operators-z7dgb\" (UID: \"1f5d6fef-1331-4efe-ba88-ddf52dea0a1f\") " pod="openshift-marketplace/redhat-operators-z7dgb"
Nov 28 07:05:27 crc kubenswrapper[4946]: I1128 07:05:27.642654 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmdkr\" (UniqueName: \"kubernetes.io/projected/1f5d6fef-1331-4efe-ba88-ddf52dea0a1f-kube-api-access-kmdkr\") pod \"redhat-operators-z7dgb\" (UID: \"1f5d6fef-1331-4efe-ba88-ddf52dea0a1f\") " pod="openshift-marketplace/redhat-operators-z7dgb"
Nov 28 07:05:27 crc kubenswrapper[4946]: I1128 07:05:27.643199 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f5d6fef-1331-4efe-ba88-ddf52dea0a1f-utilities\") pod \"redhat-operators-z7dgb\" (UID: \"1f5d6fef-1331-4efe-ba88-ddf52dea0a1f\") " pod="openshift-marketplace/redhat-operators-z7dgb"
Nov 28 07:05:27 crc kubenswrapper[4946]: I1128 07:05:27.643226 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f5d6fef-1331-4efe-ba88-ddf52dea0a1f-catalog-content\") pod \"redhat-operators-z7dgb\" (UID: \"1f5d6fef-1331-4efe-ba88-ddf52dea0a1f\") " pod="openshift-marketplace/redhat-operators-z7dgb"
Nov 28 07:05:27 crc kubenswrapper[4946]: I1128 07:05:27.643799 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f5d6fef-1331-4efe-ba88-ddf52dea0a1f-catalog-content\") pod \"redhat-operators-z7dgb\" (UID: \"1f5d6fef-1331-4efe-ba88-ddf52dea0a1f\") " pod="openshift-marketplace/redhat-operators-z7dgb"
Nov 28 07:05:27 crc kubenswrapper[4946]: I1128 07:05:27.665889 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmdkr\" (UniqueName: \"kubernetes.io/projected/1f5d6fef-1331-4efe-ba88-ddf52dea0a1f-kube-api-access-kmdkr\") pod \"redhat-operators-z7dgb\" (UID: \"1f5d6fef-1331-4efe-ba88-ddf52dea0a1f\") " pod="openshift-marketplace/redhat-operators-z7dgb"
\"kube-api-access-kmdkr\" (UniqueName: \"kubernetes.io/projected/1f5d6fef-1331-4efe-ba88-ddf52dea0a1f-kube-api-access-kmdkr\") pod \"redhat-operators-z7dgb\" (UID: \"1f5d6fef-1331-4efe-ba88-ddf52dea0a1f\") " pod="openshift-marketplace/redhat-operators-z7dgb" Nov 28 07:05:27 crc kubenswrapper[4946]: I1128 07:05:27.779281 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z7dgb" Nov 28 07:05:28 crc kubenswrapper[4946]: I1128 07:05:27.999897 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z7dgb"] Nov 28 07:05:28 crc kubenswrapper[4946]: W1128 07:05:28.008723 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f5d6fef_1331_4efe_ba88_ddf52dea0a1f.slice/crio-78bedcd1519aebc6949123e2d07d886a92567cf7fda7d3928bdd94ff580ca55a WatchSource:0}: Error finding container 78bedcd1519aebc6949123e2d07d886a92567cf7fda7d3928bdd94ff580ca55a: Status 404 returned error can't find the container with id 78bedcd1519aebc6949123e2d07d886a92567cf7fda7d3928bdd94ff580ca55a Nov 28 07:05:28 crc kubenswrapper[4946]: I1128 07:05:28.450677 4946 generic.go:334] "Generic (PLEG): container finished" podID="1f5d6fef-1331-4efe-ba88-ddf52dea0a1f" containerID="0376a7c4cbff51829e69ef4938b0f7cc1a01ae7f5eca08a6acfa133839135971" exitCode=0 Nov 28 07:05:28 crc kubenswrapper[4946]: I1128 07:05:28.450784 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z7dgb" event={"ID":"1f5d6fef-1331-4efe-ba88-ddf52dea0a1f","Type":"ContainerDied","Data":"0376a7c4cbff51829e69ef4938b0f7cc1a01ae7f5eca08a6acfa133839135971"} Nov 28 07:05:28 crc kubenswrapper[4946]: I1128 07:05:28.450846 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z7dgb" event={"ID":"1f5d6fef-1331-4efe-ba88-ddf52dea0a1f","Type":"ContainerStarted","Data":"78bedcd1519aebc6949123e2d07d886a92567cf7fda7d3928bdd94ff580ca55a"} Nov 28 07:05:28 crc kubenswrapper[4946]: I1128 07:05:28.454251 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-55xst" event={"ID":"18572852-b419-45b3-999d-aad04d5bc7b7","Type":"ContainerStarted","Data":"f9bfdc38611407eba3572248f72d3e27071f31787949b50d661b336383fd328c"} Nov 28 07:05:28 crc kubenswrapper[4946]: I1128 07:05:28.496363 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-55xst" podStartSLOduration=1.7257285 podStartE2EDuration="5.496337674s" podCreationTimestamp="2025-11-28 07:05:23 +0000 UTC" firstStartedPulling="2025-11-28 07:05:24.407359371 +0000 UTC m=+778.785424482" lastFinishedPulling="2025-11-28 07:05:28.177968545 +0000 UTC m=+782.556033656" observedRunningTime="2025-11-28 07:05:28.492434839 +0000 UTC m=+782.870499950" watchObservedRunningTime="2025-11-28 07:05:28.496337674 +0000 UTC m=+782.874402805" Nov 28 07:05:30 crc kubenswrapper[4946]: I1128 07:05:30.476507 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z7dgb" event={"ID":"1f5d6fef-1331-4efe-ba88-ddf52dea0a1f","Type":"ContainerStarted","Data":"e7a1f35cda7e5ff59df6cb422a9557cd6a125ad161a411b6d1f7d7cbdc12e096"} Nov 28 07:05:31 crc kubenswrapper[4946]: I1128 07:05:31.485065 4946 generic.go:334] "Generic (PLEG): container finished" podID="1f5d6fef-1331-4efe-ba88-ddf52dea0a1f" 
containerID="e7a1f35cda7e5ff59df6cb422a9557cd6a125ad161a411b6d1f7d7cbdc12e096" exitCode=0 Nov 28 07:05:31 crc kubenswrapper[4946]: I1128 07:05:31.485125 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z7dgb" event={"ID":"1f5d6fef-1331-4efe-ba88-ddf52dea0a1f","Type":"ContainerDied","Data":"e7a1f35cda7e5ff59df6cb422a9557cd6a125ad161a411b6d1f7d7cbdc12e096"} Nov 28 07:05:33 crc kubenswrapper[4946]: I1128 07:05:33.419047 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-55xst" Nov 28 07:05:33 crc kubenswrapper[4946]: I1128 07:05:33.419984 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-55xst" Nov 28 07:05:33 crc kubenswrapper[4946]: I1128 07:05:33.470374 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-55xst" Nov 28 07:05:33 crc kubenswrapper[4946]: I1128 07:05:33.502367 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z7dgb" event={"ID":"1f5d6fef-1331-4efe-ba88-ddf52dea0a1f","Type":"ContainerStarted","Data":"4431e521bad5c65019edfa7f9a321d2bea7c79eb1730262b69d5d746ec5246fb"} Nov 28 07:05:33 crc kubenswrapper[4946]: I1128 07:05:33.528694 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-z7dgb" podStartSLOduration=2.897697495 podStartE2EDuration="6.528673535s" podCreationTimestamp="2025-11-28 07:05:27 +0000 UTC" firstStartedPulling="2025-11-28 07:05:28.452386393 +0000 UTC m=+782.830451504" lastFinishedPulling="2025-11-28 07:05:32.083362433 +0000 UTC m=+786.461427544" observedRunningTime="2025-11-28 07:05:33.522337161 +0000 UTC m=+787.900402282" watchObservedRunningTime="2025-11-28 07:05:33.528673535 +0000 UTC m=+787.906738646" Nov 28 07:05:33 crc kubenswrapper[4946]: I1128 07:05:33.560029 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-55xst" Nov 28 07:05:35 crc kubenswrapper[4946]: I1128 07:05:35.623917 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-55xst"] Nov 28 07:05:35 crc kubenswrapper[4946]: I1128 07:05:35.624202 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-55xst" podUID="18572852-b419-45b3-999d-aad04d5bc7b7" containerName="registry-server" containerID="cri-o://f9bfdc38611407eba3572248f72d3e27071f31787949b50d661b336383fd328c" gracePeriod=2 Nov 28 07:05:37 crc kubenswrapper[4946]: I1128 07:05:37.780372 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-z7dgb" Nov 28 07:05:37 crc kubenswrapper[4946]: I1128 07:05:37.780887 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-z7dgb" Nov 28 07:05:38 crc kubenswrapper[4946]: I1128 07:05:38.533343 4946 generic.go:334] "Generic (PLEG): container finished" podID="18572852-b419-45b3-999d-aad04d5bc7b7" containerID="f9bfdc38611407eba3572248f72d3e27071f31787949b50d661b336383fd328c" exitCode=0 Nov 28 07:05:38 crc kubenswrapper[4946]: I1128 07:05:38.533387 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-55xst" 
event={"ID":"18572852-b419-45b3-999d-aad04d5bc7b7","Type":"ContainerDied","Data":"f9bfdc38611407eba3572248f72d3e27071f31787949b50d661b336383fd328c"} Nov 28 07:05:38 crc kubenswrapper[4946]: I1128 07:05:38.819116 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-z7dgb" podUID="1f5d6fef-1331-4efe-ba88-ddf52dea0a1f" containerName="registry-server" probeResult="failure" output=< Nov 28 07:05:38 crc kubenswrapper[4946]: timeout: failed to connect service ":50051" within 1s Nov 28 07:05:38 crc kubenswrapper[4946]: > Nov 28 07:05:40 crc kubenswrapper[4946]: I1128 07:05:40.073720 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-55xst" Nov 28 07:05:40 crc kubenswrapper[4946]: I1128 07:05:40.156318 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hfqz\" (UniqueName: \"kubernetes.io/projected/18572852-b419-45b3-999d-aad04d5bc7b7-kube-api-access-2hfqz\") pod \"18572852-b419-45b3-999d-aad04d5bc7b7\" (UID: \"18572852-b419-45b3-999d-aad04d5bc7b7\") " Nov 28 07:05:40 crc kubenswrapper[4946]: I1128 07:05:40.156438 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18572852-b419-45b3-999d-aad04d5bc7b7-utilities\") pod \"18572852-b419-45b3-999d-aad04d5bc7b7\" (UID: \"18572852-b419-45b3-999d-aad04d5bc7b7\") " Nov 28 07:05:40 crc kubenswrapper[4946]: I1128 07:05:40.156473 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18572852-b419-45b3-999d-aad04d5bc7b7-catalog-content\") pod \"18572852-b419-45b3-999d-aad04d5bc7b7\" (UID: \"18572852-b419-45b3-999d-aad04d5bc7b7\") " Nov 28 07:05:40 crc kubenswrapper[4946]: I1128 07:05:40.157419 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18572852-b419-45b3-999d-aad04d5bc7b7-utilities" (OuterVolumeSpecName: "utilities") pod "18572852-b419-45b3-999d-aad04d5bc7b7" (UID: "18572852-b419-45b3-999d-aad04d5bc7b7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:05:40 crc kubenswrapper[4946]: I1128 07:05:40.162578 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18572852-b419-45b3-999d-aad04d5bc7b7-kube-api-access-2hfqz" (OuterVolumeSpecName: "kube-api-access-2hfqz") pod "18572852-b419-45b3-999d-aad04d5bc7b7" (UID: "18572852-b419-45b3-999d-aad04d5bc7b7"). InnerVolumeSpecName "kube-api-access-2hfqz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:05:40 crc kubenswrapper[4946]: I1128 07:05:40.202524 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18572852-b419-45b3-999d-aad04d5bc7b7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "18572852-b419-45b3-999d-aad04d5bc7b7" (UID: "18572852-b419-45b3-999d-aad04d5bc7b7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:05:40 crc kubenswrapper[4946]: I1128 07:05:40.257904 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hfqz\" (UniqueName: \"kubernetes.io/projected/18572852-b419-45b3-999d-aad04d5bc7b7-kube-api-access-2hfqz\") on node \"crc\" DevicePath \"\"" Nov 28 07:05:40 crc kubenswrapper[4946]: I1128 07:05:40.257950 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18572852-b419-45b3-999d-aad04d5bc7b7-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 07:05:40 crc kubenswrapper[4946]: I1128 07:05:40.257963 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18572852-b419-45b3-999d-aad04d5bc7b7-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 07:05:40 crc kubenswrapper[4946]: I1128 07:05:40.549968 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-55xst" event={"ID":"18572852-b419-45b3-999d-aad04d5bc7b7","Type":"ContainerDied","Data":"3f6c4571feb208ea040509c143dafcfd6a8bf53456cf2c1dce8f580f8a9c5dd7"} Nov 28 07:05:40 crc kubenswrapper[4946]: I1128 07:05:40.550030 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-55xst" Nov 28 07:05:40 crc kubenswrapper[4946]: I1128 07:05:40.550060 4946 scope.go:117] "RemoveContainer" containerID="f9bfdc38611407eba3572248f72d3e27071f31787949b50d661b336383fd328c" Nov 28 07:05:40 crc kubenswrapper[4946]: I1128 07:05:40.576654 4946 scope.go:117] "RemoveContainer" containerID="689eaa5ba517d0c8f026e981772f3cceb83194b526b726324ba7c66550d171bf" Nov 28 07:05:40 crc kubenswrapper[4946]: I1128 07:05:40.598411 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-55xst"] Nov 28 07:05:40 crc kubenswrapper[4946]: I1128 07:05:40.602668 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-55xst"] Nov 28 07:05:40 crc kubenswrapper[4946]: I1128 07:05:40.609605 4946 scope.go:117] "RemoveContainer" containerID="e3d646d481054dddd92d0e3c3fe9438a9e9e541f464e1b4b2a1bb5805ac5d2b7" Nov 28 07:05:41 crc kubenswrapper[4946]: I1128 07:05:41.998042 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18572852-b419-45b3-999d-aad04d5bc7b7" path="/var/lib/kubelet/pods/18572852-b419-45b3-999d-aad04d5bc7b7/volumes" Nov 28 07:05:47 crc kubenswrapper[4946]: I1128 07:05:47.819671 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-z7dgb" Nov 28 07:05:47 crc kubenswrapper[4946]: I1128 07:05:47.868734 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-z7dgb" Nov 28 07:05:48 crc kubenswrapper[4946]: I1128 07:05:48.057421 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z7dgb"] Nov 28 07:05:49 crc kubenswrapper[4946]: I1128 07:05:49.605527 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-z7dgb" podUID="1f5d6fef-1331-4efe-ba88-ddf52dea0a1f" containerName="registry-server" containerID="cri-o://4431e521bad5c65019edfa7f9a321d2bea7c79eb1730262b69d5d746ec5246fb" gracePeriod=2 Nov 28 07:05:49 crc kubenswrapper[4946]: I1128 07:05:49.981672 4946 util.go:48] "No ready sandbox for pod can be found. 
Nov 28 07:05:50 crc kubenswrapper[4946]: I1128 07:05:50.091265 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmdkr\" (UniqueName: \"kubernetes.io/projected/1f5d6fef-1331-4efe-ba88-ddf52dea0a1f-kube-api-access-kmdkr\") pod \"1f5d6fef-1331-4efe-ba88-ddf52dea0a1f\" (UID: \"1f5d6fef-1331-4efe-ba88-ddf52dea0a1f\") "
Nov 28 07:05:50 crc kubenswrapper[4946]: I1128 07:05:50.091348 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f5d6fef-1331-4efe-ba88-ddf52dea0a1f-utilities\") pod \"1f5d6fef-1331-4efe-ba88-ddf52dea0a1f\" (UID: \"1f5d6fef-1331-4efe-ba88-ddf52dea0a1f\") "
Nov 28 07:05:50 crc kubenswrapper[4946]: I1128 07:05:50.091659 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f5d6fef-1331-4efe-ba88-ddf52dea0a1f-catalog-content\") pod \"1f5d6fef-1331-4efe-ba88-ddf52dea0a1f\" (UID: \"1f5d6fef-1331-4efe-ba88-ddf52dea0a1f\") "
Nov 28 07:05:50 crc kubenswrapper[4946]: I1128 07:05:50.092214 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f5d6fef-1331-4efe-ba88-ddf52dea0a1f-utilities" (OuterVolumeSpecName: "utilities") pod "1f5d6fef-1331-4efe-ba88-ddf52dea0a1f" (UID: "1f5d6fef-1331-4efe-ba88-ddf52dea0a1f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 07:05:50 crc kubenswrapper[4946]: I1128 07:05:50.098539 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f5d6fef-1331-4efe-ba88-ddf52dea0a1f-kube-api-access-kmdkr" (OuterVolumeSpecName: "kube-api-access-kmdkr") pod "1f5d6fef-1331-4efe-ba88-ddf52dea0a1f" (UID: "1f5d6fef-1331-4efe-ba88-ddf52dea0a1f"). InnerVolumeSpecName "kube-api-access-kmdkr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 07:05:50 crc kubenswrapper[4946]: I1128 07:05:50.193039 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmdkr\" (UniqueName: \"kubernetes.io/projected/1f5d6fef-1331-4efe-ba88-ddf52dea0a1f-kube-api-access-kmdkr\") on node \"crc\" DevicePath \"\""
Nov 28 07:05:50 crc kubenswrapper[4946]: I1128 07:05:50.193432 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f5d6fef-1331-4efe-ba88-ddf52dea0a1f-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 07:05:50 crc kubenswrapper[4946]: I1128 07:05:50.193726 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f5d6fef-1331-4efe-ba88-ddf52dea0a1f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f5d6fef-1331-4efe-ba88-ddf52dea0a1f" (UID: "1f5d6fef-1331-4efe-ba88-ddf52dea0a1f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:05:50 crc kubenswrapper[4946]: I1128 07:05:50.294857 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f5d6fef-1331-4efe-ba88-ddf52dea0a1f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 07:05:50 crc kubenswrapper[4946]: I1128 07:05:50.617559 4946 generic.go:334] "Generic (PLEG): container finished" podID="1f5d6fef-1331-4efe-ba88-ddf52dea0a1f" containerID="4431e521bad5c65019edfa7f9a321d2bea7c79eb1730262b69d5d746ec5246fb" exitCode=0 Nov 28 07:05:50 crc kubenswrapper[4946]: I1128 07:05:50.617650 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z7dgb" Nov 28 07:05:50 crc kubenswrapper[4946]: I1128 07:05:50.617643 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z7dgb" event={"ID":"1f5d6fef-1331-4efe-ba88-ddf52dea0a1f","Type":"ContainerDied","Data":"4431e521bad5c65019edfa7f9a321d2bea7c79eb1730262b69d5d746ec5246fb"} Nov 28 07:05:50 crc kubenswrapper[4946]: I1128 07:05:50.617896 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z7dgb" event={"ID":"1f5d6fef-1331-4efe-ba88-ddf52dea0a1f","Type":"ContainerDied","Data":"78bedcd1519aebc6949123e2d07d886a92567cf7fda7d3928bdd94ff580ca55a"} Nov 28 07:05:50 crc kubenswrapper[4946]: I1128 07:05:50.617938 4946 scope.go:117] "RemoveContainer" containerID="4431e521bad5c65019edfa7f9a321d2bea7c79eb1730262b69d5d746ec5246fb" Nov 28 07:05:50 crc kubenswrapper[4946]: I1128 07:05:50.642277 4946 scope.go:117] "RemoveContainer" containerID="e7a1f35cda7e5ff59df6cb422a9557cd6a125ad161a411b6d1f7d7cbdc12e096" Nov 28 07:05:50 crc kubenswrapper[4946]: I1128 07:05:50.671149 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z7dgb"] Nov 28 07:05:50 crc kubenswrapper[4946]: I1128 07:05:50.676997 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-z7dgb"] Nov 28 07:05:50 crc kubenswrapper[4946]: I1128 07:05:50.683327 4946 scope.go:117] "RemoveContainer" containerID="0376a7c4cbff51829e69ef4938b0f7cc1a01ae7f5eca08a6acfa133839135971" Nov 28 07:05:50 crc kubenswrapper[4946]: I1128 07:05:50.710177 4946 scope.go:117] "RemoveContainer" containerID="4431e521bad5c65019edfa7f9a321d2bea7c79eb1730262b69d5d746ec5246fb" Nov 28 07:05:50 crc kubenswrapper[4946]: E1128 07:05:50.710714 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4431e521bad5c65019edfa7f9a321d2bea7c79eb1730262b69d5d746ec5246fb\": container with ID starting with 4431e521bad5c65019edfa7f9a321d2bea7c79eb1730262b69d5d746ec5246fb not found: ID does not exist" containerID="4431e521bad5c65019edfa7f9a321d2bea7c79eb1730262b69d5d746ec5246fb" Nov 28 07:05:50 crc kubenswrapper[4946]: I1128 07:05:50.710761 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4431e521bad5c65019edfa7f9a321d2bea7c79eb1730262b69d5d746ec5246fb"} err="failed to get container status \"4431e521bad5c65019edfa7f9a321d2bea7c79eb1730262b69d5d746ec5246fb\": rpc error: code = NotFound desc = could not find container \"4431e521bad5c65019edfa7f9a321d2bea7c79eb1730262b69d5d746ec5246fb\": container with ID starting with 4431e521bad5c65019edfa7f9a321d2bea7c79eb1730262b69d5d746ec5246fb not found: ID does not exist" Nov 28 07:05:50 crc 
Nov 28 07:05:50 crc kubenswrapper[4946]: I1128 07:05:50.710792 4946 scope.go:117] "RemoveContainer" containerID="e7a1f35cda7e5ff59df6cb422a9557cd6a125ad161a411b6d1f7d7cbdc12e096"
Nov 28 07:05:50 crc kubenswrapper[4946]: E1128 07:05:50.711270 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7a1f35cda7e5ff59df6cb422a9557cd6a125ad161a411b6d1f7d7cbdc12e096\": container with ID starting with e7a1f35cda7e5ff59df6cb422a9557cd6a125ad161a411b6d1f7d7cbdc12e096 not found: ID does not exist" containerID="e7a1f35cda7e5ff59df6cb422a9557cd6a125ad161a411b6d1f7d7cbdc12e096"
Nov 28 07:05:50 crc kubenswrapper[4946]: I1128 07:05:50.711303 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7a1f35cda7e5ff59df6cb422a9557cd6a125ad161a411b6d1f7d7cbdc12e096"} err="failed to get container status \"e7a1f35cda7e5ff59df6cb422a9557cd6a125ad161a411b6d1f7d7cbdc12e096\": rpc error: code = NotFound desc = could not find container \"e7a1f35cda7e5ff59df6cb422a9557cd6a125ad161a411b6d1f7d7cbdc12e096\": container with ID starting with e7a1f35cda7e5ff59df6cb422a9557cd6a125ad161a411b6d1f7d7cbdc12e096 not found: ID does not exist"
Nov 28 07:05:50 crc kubenswrapper[4946]: I1128 07:05:50.711325 4946 scope.go:117] "RemoveContainer" containerID="0376a7c4cbff51829e69ef4938b0f7cc1a01ae7f5eca08a6acfa133839135971"
Nov 28 07:05:50 crc kubenswrapper[4946]: E1128 07:05:50.711644 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0376a7c4cbff51829e69ef4938b0f7cc1a01ae7f5eca08a6acfa133839135971\": container with ID starting with 0376a7c4cbff51829e69ef4938b0f7cc1a01ae7f5eca08a6acfa133839135971 not found: ID does not exist" containerID="0376a7c4cbff51829e69ef4938b0f7cc1a01ae7f5eca08a6acfa133839135971"
Nov 28 07:05:50 crc kubenswrapper[4946]: I1128 07:05:50.711674 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0376a7c4cbff51829e69ef4938b0f7cc1a01ae7f5eca08a6acfa133839135971"} err="failed to get container status \"0376a7c4cbff51829e69ef4938b0f7cc1a01ae7f5eca08a6acfa133839135971\": rpc error: code = NotFound desc = could not find container \"0376a7c4cbff51829e69ef4938b0f7cc1a01ae7f5eca08a6acfa133839135971\": container with ID starting with 0376a7c4cbff51829e69ef4938b0f7cc1a01ae7f5eca08a6acfa133839135971 not found: ID does not exist"
Nov 28 07:05:52 crc kubenswrapper[4946]: I1128 07:05:52.061162 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f5d6fef-1331-4efe-ba88-ddf52dea0a1f" path="/var/lib/kubelet/pods/1f5d6fef-1331-4efe-ba88-ddf52dea0a1f/volumes"
Nov 28 07:05:54 crc kubenswrapper[4946]: I1128 07:05:54.731125 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 07:05:54 crc kubenswrapper[4946]: I1128 07:05:54.731614 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 07:06:24 crc kubenswrapper[4946]: I1128 07:06:24.731086 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 07:06:24 crc kubenswrapper[4946]: I1128 07:06:24.732241 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 07:06:24 crc kubenswrapper[4946]: I1128 07:06:24.732342 4946 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr"
Nov 28 07:06:24 crc kubenswrapper[4946]: I1128 07:06:24.733605 4946 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9312f27097b6f3bbafda147b11eda22821e2c49101e6733241ea36c59af5418f"} pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 28 07:06:24 crc kubenswrapper[4946]: I1128 07:06:24.733745 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" containerID="cri-o://9312f27097b6f3bbafda147b11eda22821e2c49101e6733241ea36c59af5418f" gracePeriod=600
Nov 28 07:06:25 crc kubenswrapper[4946]: I1128 07:06:25.858340 4946 generic.go:334] "Generic (PLEG): container finished" podID="7450befc-262f-45d1-a5f4-f445e540185b" containerID="9312f27097b6f3bbafda147b11eda22821e2c49101e6733241ea36c59af5418f" exitCode=0
Nov 28 07:06:25 crc kubenswrapper[4946]: I1128 07:06:25.858428 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerDied","Data":"9312f27097b6f3bbafda147b11eda22821e2c49101e6733241ea36c59af5418f"}
Nov 28 07:06:25 crc kubenswrapper[4946]: I1128 07:06:25.858519 4946 scope.go:117] "RemoveContainer" containerID="21fc0485e74b978141bbad8ee7d9fb603a74f8f723790b3bd012d88483e9ed87"
Nov 28 07:06:26 crc kubenswrapper[4946]: I1128 07:06:26.865289 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerStarted","Data":"85f899527cbb8eb5fccf192c306339421531f2edfd2b109fbf8ff7c7c6545620"}
Nov 28 07:06:44 crc kubenswrapper[4946]: I1128 07:06:44.569131 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jz6dz"]
Nov 28 07:06:44 crc kubenswrapper[4946]: E1128 07:06:44.570644 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18572852-b419-45b3-999d-aad04d5bc7b7" containerName="extract-content"
Nov 28 07:06:44 crc kubenswrapper[4946]: I1128 07:06:44.570680 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="18572852-b419-45b3-999d-aad04d5bc7b7" containerName="extract-content"
Nov 28 07:06:44 crc kubenswrapper[4946]: E1128 07:06:44.570701 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f5d6fef-1331-4efe-ba88-ddf52dea0a1f" containerName="extract-utilities"
containerName="extract-utilities" Nov 28 07:06:44 crc kubenswrapper[4946]: I1128 07:06:44.570716 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f5d6fef-1331-4efe-ba88-ddf52dea0a1f" containerName="extract-utilities" Nov 28 07:06:44 crc kubenswrapper[4946]: E1128 07:06:44.570735 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18572852-b419-45b3-999d-aad04d5bc7b7" containerName="registry-server" Nov 28 07:06:44 crc kubenswrapper[4946]: I1128 07:06:44.570749 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="18572852-b419-45b3-999d-aad04d5bc7b7" containerName="registry-server" Nov 28 07:06:44 crc kubenswrapper[4946]: E1128 07:06:44.570767 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f5d6fef-1331-4efe-ba88-ddf52dea0a1f" containerName="extract-content" Nov 28 07:06:44 crc kubenswrapper[4946]: I1128 07:06:44.570779 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f5d6fef-1331-4efe-ba88-ddf52dea0a1f" containerName="extract-content" Nov 28 07:06:44 crc kubenswrapper[4946]: E1128 07:06:44.570796 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f5d6fef-1331-4efe-ba88-ddf52dea0a1f" containerName="registry-server" Nov 28 07:06:44 crc kubenswrapper[4946]: I1128 07:06:44.570809 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f5d6fef-1331-4efe-ba88-ddf52dea0a1f" containerName="registry-server" Nov 28 07:06:44 crc kubenswrapper[4946]: E1128 07:06:44.570843 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18572852-b419-45b3-999d-aad04d5bc7b7" containerName="extract-utilities" Nov 28 07:06:44 crc kubenswrapper[4946]: I1128 07:06:44.570856 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="18572852-b419-45b3-999d-aad04d5bc7b7" containerName="extract-utilities" Nov 28 07:06:44 crc kubenswrapper[4946]: I1128 07:06:44.571020 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="18572852-b419-45b3-999d-aad04d5bc7b7" containerName="registry-server" Nov 28 07:06:44 crc kubenswrapper[4946]: I1128 07:06:44.571039 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f5d6fef-1331-4efe-ba88-ddf52dea0a1f" containerName="registry-server" Nov 28 07:06:44 crc kubenswrapper[4946]: I1128 07:06:44.572671 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jz6dz" Nov 28 07:06:44 crc kubenswrapper[4946]: I1128 07:06:44.589892 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jz6dz"] Nov 28 07:06:44 crc kubenswrapper[4946]: I1128 07:06:44.733918 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlbxf\" (UniqueName: \"kubernetes.io/projected/b333392b-1388-4fcb-8e11-7acabcd78767-kube-api-access-xlbxf\") pod \"community-operators-jz6dz\" (UID: \"b333392b-1388-4fcb-8e11-7acabcd78767\") " pod="openshift-marketplace/community-operators-jz6dz" Nov 28 07:06:44 crc kubenswrapper[4946]: I1128 07:06:44.734056 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b333392b-1388-4fcb-8e11-7acabcd78767-utilities\") pod \"community-operators-jz6dz\" (UID: \"b333392b-1388-4fcb-8e11-7acabcd78767\") " pod="openshift-marketplace/community-operators-jz6dz" Nov 28 07:06:44 crc kubenswrapper[4946]: I1128 07:06:44.734092 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b333392b-1388-4fcb-8e11-7acabcd78767-catalog-content\") pod \"community-operators-jz6dz\" (UID: \"b333392b-1388-4fcb-8e11-7acabcd78767\") " pod="openshift-marketplace/community-operators-jz6dz" Nov 28 07:06:44 crc kubenswrapper[4946]: I1128 07:06:44.835975 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlbxf\" (UniqueName: \"kubernetes.io/projected/b333392b-1388-4fcb-8e11-7acabcd78767-kube-api-access-xlbxf\") pod \"community-operators-jz6dz\" (UID: \"b333392b-1388-4fcb-8e11-7acabcd78767\") " pod="openshift-marketplace/community-operators-jz6dz" Nov 28 07:06:44 crc kubenswrapper[4946]: I1128 07:06:44.836108 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b333392b-1388-4fcb-8e11-7acabcd78767-utilities\") pod \"community-operators-jz6dz\" (UID: \"b333392b-1388-4fcb-8e11-7acabcd78767\") " pod="openshift-marketplace/community-operators-jz6dz" Nov 28 07:06:44 crc kubenswrapper[4946]: I1128 07:06:44.836150 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b333392b-1388-4fcb-8e11-7acabcd78767-catalog-content\") pod \"community-operators-jz6dz\" (UID: \"b333392b-1388-4fcb-8e11-7acabcd78767\") " pod="openshift-marketplace/community-operators-jz6dz" Nov 28 07:06:44 crc kubenswrapper[4946]: I1128 07:06:44.837260 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b333392b-1388-4fcb-8e11-7acabcd78767-utilities\") pod \"community-operators-jz6dz\" (UID: \"b333392b-1388-4fcb-8e11-7acabcd78767\") " pod="openshift-marketplace/community-operators-jz6dz" Nov 28 07:06:44 crc kubenswrapper[4946]: I1128 07:06:44.837727 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b333392b-1388-4fcb-8e11-7acabcd78767-catalog-content\") pod \"community-operators-jz6dz\" (UID: \"b333392b-1388-4fcb-8e11-7acabcd78767\") " pod="openshift-marketplace/community-operators-jz6dz" Nov 28 07:06:44 crc kubenswrapper[4946]: I1128 07:06:44.858533 4946 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xlbxf\" (UniqueName: \"kubernetes.io/projected/b333392b-1388-4fcb-8e11-7acabcd78767-kube-api-access-xlbxf\") pod \"community-operators-jz6dz\" (UID: \"b333392b-1388-4fcb-8e11-7acabcd78767\") " pod="openshift-marketplace/community-operators-jz6dz" Nov 28 07:06:44 crc kubenswrapper[4946]: I1128 07:06:44.906512 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jz6dz" Nov 28 07:06:45 crc kubenswrapper[4946]: I1128 07:06:45.165182 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jz6dz"] Nov 28 07:06:46 crc kubenswrapper[4946]: I1128 07:06:46.024412 4946 generic.go:334] "Generic (PLEG): container finished" podID="b333392b-1388-4fcb-8e11-7acabcd78767" containerID="9fbea893fc87155a2baa6e9a11fba4366c1dc8890d2ea15168f3f7801644c297" exitCode=0 Nov 28 07:06:46 crc kubenswrapper[4946]: I1128 07:06:46.024557 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jz6dz" event={"ID":"b333392b-1388-4fcb-8e11-7acabcd78767","Type":"ContainerDied","Data":"9fbea893fc87155a2baa6e9a11fba4366c1dc8890d2ea15168f3f7801644c297"} Nov 28 07:06:46 crc kubenswrapper[4946]: I1128 07:06:46.025002 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jz6dz" event={"ID":"b333392b-1388-4fcb-8e11-7acabcd78767","Type":"ContainerStarted","Data":"54d1dcfe8a7db09373dcbb5af429913925171c4f376bc13d488bb82e0498b04b"} Nov 28 07:06:47 crc kubenswrapper[4946]: I1128 07:06:47.032987 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jz6dz" event={"ID":"b333392b-1388-4fcb-8e11-7acabcd78767","Type":"ContainerStarted","Data":"a1a45e1645b3bbfac224157b38aafa47cfa25b64dff113813c6441baca17261c"} Nov 28 07:06:48 crc kubenswrapper[4946]: I1128 07:06:48.043230 4946 generic.go:334] "Generic (PLEG): container finished" podID="b333392b-1388-4fcb-8e11-7acabcd78767" containerID="a1a45e1645b3bbfac224157b38aafa47cfa25b64dff113813c6441baca17261c" exitCode=0 Nov 28 07:06:48 crc kubenswrapper[4946]: I1128 07:06:48.043411 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jz6dz" event={"ID":"b333392b-1388-4fcb-8e11-7acabcd78767","Type":"ContainerDied","Data":"a1a45e1645b3bbfac224157b38aafa47cfa25b64dff113813c6441baca17261c"} Nov 28 07:06:49 crc kubenswrapper[4946]: I1128 07:06:49.053392 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jz6dz" event={"ID":"b333392b-1388-4fcb-8e11-7acabcd78767","Type":"ContainerStarted","Data":"92b7715a3c5f30e9558f3f891e07da9f28a87807a8ad388587d5b8cb0a7eb9e0"} Nov 28 07:06:49 crc kubenswrapper[4946]: I1128 07:06:49.073628 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jz6dz" podStartSLOduration=2.593454495 podStartE2EDuration="5.0736093s" podCreationTimestamp="2025-11-28 07:06:44 +0000 UTC" firstStartedPulling="2025-11-28 07:06:46.026051299 +0000 UTC m=+860.404116410" lastFinishedPulling="2025-11-28 07:06:48.506206084 +0000 UTC m=+862.884271215" observedRunningTime="2025-11-28 07:06:49.070595125 +0000 UTC m=+863.448660236" watchObservedRunningTime="2025-11-28 07:06:49.0736093 +0000 UTC m=+863.451674411" Nov 28 07:06:54 crc kubenswrapper[4946]: I1128 07:06:54.907021 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-jz6dz" Nov 28 07:06:54 crc kubenswrapper[4946]: I1128 07:06:54.907789 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jz6dz" Nov 28 07:06:54 crc kubenswrapper[4946]: I1128 07:06:54.960627 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jz6dz" Nov 28 07:06:55 crc kubenswrapper[4946]: I1128 07:06:55.147232 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jz6dz" Nov 28 07:06:55 crc kubenswrapper[4946]: I1128 07:06:55.214621 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jz6dz"] Nov 28 07:06:57 crc kubenswrapper[4946]: I1128 07:06:57.108564 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jz6dz" podUID="b333392b-1388-4fcb-8e11-7acabcd78767" containerName="registry-server" containerID="cri-o://92b7715a3c5f30e9558f3f891e07da9f28a87807a8ad388587d5b8cb0a7eb9e0" gracePeriod=2 Nov 28 07:06:57 crc kubenswrapper[4946]: I1128 07:06:57.510541 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jz6dz" Nov 28 07:06:57 crc kubenswrapper[4946]: I1128 07:06:57.629774 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlbxf\" (UniqueName: \"kubernetes.io/projected/b333392b-1388-4fcb-8e11-7acabcd78767-kube-api-access-xlbxf\") pod \"b333392b-1388-4fcb-8e11-7acabcd78767\" (UID: \"b333392b-1388-4fcb-8e11-7acabcd78767\") " Nov 28 07:06:57 crc kubenswrapper[4946]: I1128 07:06:57.629888 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b333392b-1388-4fcb-8e11-7acabcd78767-utilities\") pod \"b333392b-1388-4fcb-8e11-7acabcd78767\" (UID: \"b333392b-1388-4fcb-8e11-7acabcd78767\") " Nov 28 07:06:57 crc kubenswrapper[4946]: I1128 07:06:57.629990 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b333392b-1388-4fcb-8e11-7acabcd78767-catalog-content\") pod \"b333392b-1388-4fcb-8e11-7acabcd78767\" (UID: \"b333392b-1388-4fcb-8e11-7acabcd78767\") " Nov 28 07:06:57 crc kubenswrapper[4946]: I1128 07:06:57.630775 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b333392b-1388-4fcb-8e11-7acabcd78767-utilities" (OuterVolumeSpecName: "utilities") pod "b333392b-1388-4fcb-8e11-7acabcd78767" (UID: "b333392b-1388-4fcb-8e11-7acabcd78767"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:06:57 crc kubenswrapper[4946]: I1128 07:06:57.638443 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b333392b-1388-4fcb-8e11-7acabcd78767-kube-api-access-xlbxf" (OuterVolumeSpecName: "kube-api-access-xlbxf") pod "b333392b-1388-4fcb-8e11-7acabcd78767" (UID: "b333392b-1388-4fcb-8e11-7acabcd78767"). InnerVolumeSpecName "kube-api-access-xlbxf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:06:57 crc kubenswrapper[4946]: I1128 07:06:57.695243 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b333392b-1388-4fcb-8e11-7acabcd78767-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b333392b-1388-4fcb-8e11-7acabcd78767" (UID: "b333392b-1388-4fcb-8e11-7acabcd78767"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:06:57 crc kubenswrapper[4946]: I1128 07:06:57.731566 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlbxf\" (UniqueName: \"kubernetes.io/projected/b333392b-1388-4fcb-8e11-7acabcd78767-kube-api-access-xlbxf\") on node \"crc\" DevicePath \"\"" Nov 28 07:06:57 crc kubenswrapper[4946]: I1128 07:06:57.731623 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b333392b-1388-4fcb-8e11-7acabcd78767-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 07:06:57 crc kubenswrapper[4946]: I1128 07:06:57.731638 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b333392b-1388-4fcb-8e11-7acabcd78767-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 07:06:58 crc kubenswrapper[4946]: I1128 07:06:58.118430 4946 generic.go:334] "Generic (PLEG): container finished" podID="b333392b-1388-4fcb-8e11-7acabcd78767" containerID="92b7715a3c5f30e9558f3f891e07da9f28a87807a8ad388587d5b8cb0a7eb9e0" exitCode=0 Nov 28 07:06:58 crc kubenswrapper[4946]: I1128 07:06:58.118574 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jz6dz" Nov 28 07:06:58 crc kubenswrapper[4946]: I1128 07:06:58.119564 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jz6dz" event={"ID":"b333392b-1388-4fcb-8e11-7acabcd78767","Type":"ContainerDied","Data":"92b7715a3c5f30e9558f3f891e07da9f28a87807a8ad388587d5b8cb0a7eb9e0"} Nov 28 07:06:58 crc kubenswrapper[4946]: I1128 07:06:58.119775 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jz6dz" event={"ID":"b333392b-1388-4fcb-8e11-7acabcd78767","Type":"ContainerDied","Data":"54d1dcfe8a7db09373dcbb5af429913925171c4f376bc13d488bb82e0498b04b"} Nov 28 07:06:58 crc kubenswrapper[4946]: I1128 07:06:58.119857 4946 scope.go:117] "RemoveContainer" containerID="92b7715a3c5f30e9558f3f891e07da9f28a87807a8ad388587d5b8cb0a7eb9e0" Nov 28 07:06:58 crc kubenswrapper[4946]: I1128 07:06:58.146536 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jz6dz"] Nov 28 07:06:58 crc kubenswrapper[4946]: I1128 07:06:58.148491 4946 scope.go:117] "RemoveContainer" containerID="a1a45e1645b3bbfac224157b38aafa47cfa25b64dff113813c6441baca17261c" Nov 28 07:06:58 crc kubenswrapper[4946]: I1128 07:06:58.152190 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jz6dz"] Nov 28 07:06:58 crc kubenswrapper[4946]: I1128 07:06:58.174314 4946 scope.go:117] "RemoveContainer" containerID="9fbea893fc87155a2baa6e9a11fba4366c1dc8890d2ea15168f3f7801644c297" Nov 28 07:06:58 crc kubenswrapper[4946]: I1128 07:06:58.194762 4946 scope.go:117] "RemoveContainer" containerID="92b7715a3c5f30e9558f3f891e07da9f28a87807a8ad388587d5b8cb0a7eb9e0" Nov 28 07:06:58 crc kubenswrapper[4946]: E1128 07:06:58.195295 4946 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92b7715a3c5f30e9558f3f891e07da9f28a87807a8ad388587d5b8cb0a7eb9e0\": container with ID starting with 92b7715a3c5f30e9558f3f891e07da9f28a87807a8ad388587d5b8cb0a7eb9e0 not found: ID does not exist" containerID="92b7715a3c5f30e9558f3f891e07da9f28a87807a8ad388587d5b8cb0a7eb9e0" Nov 28 07:06:58 crc kubenswrapper[4946]: I1128 07:06:58.195416 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92b7715a3c5f30e9558f3f891e07da9f28a87807a8ad388587d5b8cb0a7eb9e0"} err="failed to get container status \"92b7715a3c5f30e9558f3f891e07da9f28a87807a8ad388587d5b8cb0a7eb9e0\": rpc error: code = NotFound desc = could not find container \"92b7715a3c5f30e9558f3f891e07da9f28a87807a8ad388587d5b8cb0a7eb9e0\": container with ID starting with 92b7715a3c5f30e9558f3f891e07da9f28a87807a8ad388587d5b8cb0a7eb9e0 not found: ID does not exist" Nov 28 07:06:58 crc kubenswrapper[4946]: I1128 07:06:58.195528 4946 scope.go:117] "RemoveContainer" containerID="a1a45e1645b3bbfac224157b38aafa47cfa25b64dff113813c6441baca17261c" Nov 28 07:06:58 crc kubenswrapper[4946]: E1128 07:06:58.196067 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1a45e1645b3bbfac224157b38aafa47cfa25b64dff113813c6441baca17261c\": container with ID starting with a1a45e1645b3bbfac224157b38aafa47cfa25b64dff113813c6441baca17261c not found: ID does not exist" containerID="a1a45e1645b3bbfac224157b38aafa47cfa25b64dff113813c6441baca17261c" Nov 28 07:06:58 crc kubenswrapper[4946]: I1128 07:06:58.196135 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1a45e1645b3bbfac224157b38aafa47cfa25b64dff113813c6441baca17261c"} err="failed to get container status \"a1a45e1645b3bbfac224157b38aafa47cfa25b64dff113813c6441baca17261c\": rpc error: code = NotFound desc = could not find container \"a1a45e1645b3bbfac224157b38aafa47cfa25b64dff113813c6441baca17261c\": container with ID starting with a1a45e1645b3bbfac224157b38aafa47cfa25b64dff113813c6441baca17261c not found: ID does not exist" Nov 28 07:06:58 crc kubenswrapper[4946]: I1128 07:06:58.196176 4946 scope.go:117] "RemoveContainer" containerID="9fbea893fc87155a2baa6e9a11fba4366c1dc8890d2ea15168f3f7801644c297" Nov 28 07:06:58 crc kubenswrapper[4946]: E1128 07:06:58.196658 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fbea893fc87155a2baa6e9a11fba4366c1dc8890d2ea15168f3f7801644c297\": container with ID starting with 9fbea893fc87155a2baa6e9a11fba4366c1dc8890d2ea15168f3f7801644c297 not found: ID does not exist" containerID="9fbea893fc87155a2baa6e9a11fba4366c1dc8890d2ea15168f3f7801644c297" Nov 28 07:06:58 crc kubenswrapper[4946]: I1128 07:06:58.196790 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fbea893fc87155a2baa6e9a11fba4366c1dc8890d2ea15168f3f7801644c297"} err="failed to get container status \"9fbea893fc87155a2baa6e9a11fba4366c1dc8890d2ea15168f3f7801644c297\": rpc error: code = NotFound desc = could not find container \"9fbea893fc87155a2baa6e9a11fba4366c1dc8890d2ea15168f3f7801644c297\": container with ID starting with 9fbea893fc87155a2baa6e9a11fba4366c1dc8890d2ea15168f3f7801644c297 not found: ID does not exist" Nov 28 07:07:00 crc kubenswrapper[4946]: I1128 07:07:00.002929 4946 kubelet_volumes.go:163] "Cleaned 
Nov 28 07:07:00 crc kubenswrapper[4946]: I1128 07:07:00.002929 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b333392b-1388-4fcb-8e11-7acabcd78767" path="/var/lib/kubelet/pods/b333392b-1388-4fcb-8e11-7acabcd78767/volumes"
Nov 28 07:07:20 crc kubenswrapper[4946]: I1128 07:07:20.601011 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-s5n4x"]
Nov 28 07:07:20 crc kubenswrapper[4946]: E1128 07:07:20.602376 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b333392b-1388-4fcb-8e11-7acabcd78767" containerName="registry-server"
Nov 28 07:07:20 crc kubenswrapper[4946]: I1128 07:07:20.602401 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="b333392b-1388-4fcb-8e11-7acabcd78767" containerName="registry-server"
Nov 28 07:07:20 crc kubenswrapper[4946]: E1128 07:07:20.602427 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b333392b-1388-4fcb-8e11-7acabcd78767" containerName="extract-content"
Nov 28 07:07:20 crc kubenswrapper[4946]: I1128 07:07:20.602438 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="b333392b-1388-4fcb-8e11-7acabcd78767" containerName="extract-content"
Nov 28 07:07:20 crc kubenswrapper[4946]: E1128 07:07:20.604119 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b333392b-1388-4fcb-8e11-7acabcd78767" containerName="extract-utilities"
Nov 28 07:07:20 crc kubenswrapper[4946]: I1128 07:07:20.604145 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="b333392b-1388-4fcb-8e11-7acabcd78767" containerName="extract-utilities"
Nov 28 07:07:20 crc kubenswrapper[4946]: I1128 07:07:20.604315 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="b333392b-1388-4fcb-8e11-7acabcd78767" containerName="registry-server"
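"Cleaned up orphaned pod volumes dir" above is the kubelet's periodic sweep of /var/lib/kubelet/pods for UIDs it no longer tracks. A rough sketch of such a sweep; the root path comes from the log entry, while the liveness check and the commented-out removal are assumptions (the real check is considerably more careful about still-mounted volumes):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// sweep walks the pods directory and flags volume dirs whose pod UID
// is no longer in the active set.
func sweep(root string, active map[string]bool) error {
	entries, err := os.ReadDir(root)
	if err != nil {
		return err
	}
	for _, e := range entries {
		if e.IsDir() && !active[e.Name()] {
			dir := filepath.Join(root, e.Name(), "volumes")
			fmt.Println("would clean orphaned pod volumes dir:", dir)
			// os.RemoveAll(dir) // destructive; shown for shape only
		}
	}
	return nil
}

func main() {
	_ = sweep("/var/lib/kubelet/pods", map[string]bool{"some-live-uid": true})
}
```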
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s5n4x" Nov 28 07:07:20 crc kubenswrapper[4946]: I1128 07:07:20.620651 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s5n4x"] Nov 28 07:07:20 crc kubenswrapper[4946]: I1128 07:07:20.708296 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfpz8\" (UniqueName: \"kubernetes.io/projected/0b188612-3562-413e-9295-d1534c0cd90a-kube-api-access-lfpz8\") pod \"redhat-marketplace-s5n4x\" (UID: \"0b188612-3562-413e-9295-d1534c0cd90a\") " pod="openshift-marketplace/redhat-marketplace-s5n4x" Nov 28 07:07:20 crc kubenswrapper[4946]: I1128 07:07:20.708388 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b188612-3562-413e-9295-d1534c0cd90a-utilities\") pod \"redhat-marketplace-s5n4x\" (UID: \"0b188612-3562-413e-9295-d1534c0cd90a\") " pod="openshift-marketplace/redhat-marketplace-s5n4x" Nov 28 07:07:20 crc kubenswrapper[4946]: I1128 07:07:20.708488 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b188612-3562-413e-9295-d1534c0cd90a-catalog-content\") pod \"redhat-marketplace-s5n4x\" (UID: \"0b188612-3562-413e-9295-d1534c0cd90a\") " pod="openshift-marketplace/redhat-marketplace-s5n4x" Nov 28 07:07:20 crc kubenswrapper[4946]: I1128 07:07:20.809438 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b188612-3562-413e-9295-d1534c0cd90a-utilities\") pod \"redhat-marketplace-s5n4x\" (UID: \"0b188612-3562-413e-9295-d1534c0cd90a\") " pod="openshift-marketplace/redhat-marketplace-s5n4x" Nov 28 07:07:20 crc kubenswrapper[4946]: I1128 07:07:20.809579 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b188612-3562-413e-9295-d1534c0cd90a-catalog-content\") pod \"redhat-marketplace-s5n4x\" (UID: \"0b188612-3562-413e-9295-d1534c0cd90a\") " pod="openshift-marketplace/redhat-marketplace-s5n4x" Nov 28 07:07:20 crc kubenswrapper[4946]: I1128 07:07:20.809633 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfpz8\" (UniqueName: \"kubernetes.io/projected/0b188612-3562-413e-9295-d1534c0cd90a-kube-api-access-lfpz8\") pod \"redhat-marketplace-s5n4x\" (UID: \"0b188612-3562-413e-9295-d1534c0cd90a\") " pod="openshift-marketplace/redhat-marketplace-s5n4x" Nov 28 07:07:20 crc kubenswrapper[4946]: I1128 07:07:20.810131 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b188612-3562-413e-9295-d1534c0cd90a-utilities\") pod \"redhat-marketplace-s5n4x\" (UID: \"0b188612-3562-413e-9295-d1534c0cd90a\") " pod="openshift-marketplace/redhat-marketplace-s5n4x" Nov 28 07:07:20 crc kubenswrapper[4946]: I1128 07:07:20.810274 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b188612-3562-413e-9295-d1534c0cd90a-catalog-content\") pod \"redhat-marketplace-s5n4x\" (UID: \"0b188612-3562-413e-9295-d1534c0cd90a\") " pod="openshift-marketplace/redhat-marketplace-s5n4x" Nov 28 07:07:20 crc kubenswrapper[4946]: I1128 07:07:20.834895 4946 operation_generator.go:637] "MountVolume.SetUp 
Nov 28 07:07:20 crc kubenswrapper[4946]: I1128 07:07:20.834895 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfpz8\" (UniqueName: \"kubernetes.io/projected/0b188612-3562-413e-9295-d1534c0cd90a-kube-api-access-lfpz8\") pod \"redhat-marketplace-s5n4x\" (UID: \"0b188612-3562-413e-9295-d1534c0cd90a\") " pod="openshift-marketplace/redhat-marketplace-s5n4x"
Nov 28 07:07:20 crc kubenswrapper[4946]: I1128 07:07:20.937597 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s5n4x"
Nov 28 07:07:21 crc kubenswrapper[4946]: I1128 07:07:21.188400 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s5n4x"]
Nov 28 07:07:21 crc kubenswrapper[4946]: I1128 07:07:21.286843 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5n4x" event={"ID":"0b188612-3562-413e-9295-d1534c0cd90a","Type":"ContainerStarted","Data":"cbe6dfb6282a087f8b779314bb02c8bfc4d60235059763778df03436b44b212d"}
Nov 28 07:07:22 crc kubenswrapper[4946]: I1128 07:07:22.295994 4946 generic.go:334] "Generic (PLEG): container finished" podID="0b188612-3562-413e-9295-d1534c0cd90a" containerID="1b13a2871b40b166164fed865f462b44cfd0680a737d550a1745bed38ac5d301" exitCode=0
Nov 28 07:07:22 crc kubenswrapper[4946]: I1128 07:07:22.296075 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5n4x" event={"ID":"0b188612-3562-413e-9295-d1534c0cd90a","Type":"ContainerDied","Data":"1b13a2871b40b166164fed865f462b44cfd0680a737d550a1745bed38ac5d301"}
Nov 28 07:07:24 crc kubenswrapper[4946]: I1128 07:07:24.315451 4946 generic.go:334] "Generic (PLEG): container finished" podID="0b188612-3562-413e-9295-d1534c0cd90a" containerID="8753166cb1acfa552900bcae9e3ac44e011bc374d9e944633ebb2f72b029b07c" exitCode=0
Nov 28 07:07:24 crc kubenswrapper[4946]: I1128 07:07:24.315539 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5n4x" event={"ID":"0b188612-3562-413e-9295-d1534c0cd90a","Type":"ContainerDied","Data":"8753166cb1acfa552900bcae9e3ac44e011bc374d9e944633ebb2f72b029b07c"}
Nov 28 07:07:25 crc kubenswrapper[4946]: I1128 07:07:25.328576 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5n4x" event={"ID":"0b188612-3562-413e-9295-d1534c0cd90a","Type":"ContainerStarted","Data":"39e47282d8621c5f2edfc354f7fb49ef392ff9ccf9919ea89f2094deaaf247b3"}
Nov 28 07:07:25 crc kubenswrapper[4946]: I1128 07:07:25.353364 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-s5n4x" podStartSLOduration=3.907609046 podStartE2EDuration="5.353293402s" podCreationTimestamp="2025-11-28 07:07:20 +0000 UTC" firstStartedPulling="2025-11-28 07:07:23.306651611 +0000 UTC m=+897.684716732" lastFinishedPulling="2025-11-28 07:07:24.752335977 +0000 UTC m=+899.130401088" observedRunningTime="2025-11-28 07:07:25.351756504 +0000 UTC m=+899.729821625" watchObservedRunningTime="2025-11-28 07:07:25.353293402 +0000 UTC m=+899.731358533"
Nov 28 07:07:30 crc kubenswrapper[4946]: I1128 07:07:30.938149 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-s5n4x"
Nov 28 07:07:30 crc kubenswrapper[4946]: I1128 07:07:30.938916 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-s5n4x"
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-s5n4x" Nov 28 07:07:31 crc kubenswrapper[4946]: I1128 07:07:31.422297 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-s5n4x" Nov 28 07:07:31 crc kubenswrapper[4946]: I1128 07:07:31.481293 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s5n4x"] Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.102483 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pkknv"] Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.103225 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" podUID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" containerName="ovn-controller" containerID="cri-o://7ed2e660d6d37ec0abfdd8485a2b9cb79b041a214007a5b8832d70a21076c324" gracePeriod=30 Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.103252 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" podUID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://38e04dce69273a5669801d093325e51ed6d3a83d545aaf505f6619b89cb9e3d7" gracePeriod=30 Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.103278 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" podUID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" containerName="nbdb" containerID="cri-o://f9128b2484a66b1b0f5954befe68d1bd1be5a4edbf254d76473d8ee3025b47ce" gracePeriod=30 Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.103345 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" podUID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" containerName="kube-rbac-proxy-node" containerID="cri-o://35308a0d1d457d423060c71fb70b7e8d1723171f575b2545db3b372c4d2a12ca" gracePeriod=30 Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.103375 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" podUID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" containerName="northd" containerID="cri-o://d8b00edadf6d3442d70a476d465733f303f7a48d8998c1a9c881ae515daa264e" gracePeriod=30 Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.103395 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" podUID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" containerName="ovn-acl-logging" containerID="cri-o://a1ba38f4c30e4f09063926783a9fbf7883079d7f077200ee18a7a24c236b025f" gracePeriod=30 Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.103566 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" podUID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" containerName="sbdb" containerID="cri-o://bc681b78e3640f7f8dd8729a32ea393d97f837795e0cccf628724d2b3901739b" gracePeriod=30 Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.161524 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" podUID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" containerName="ovnkube-controller" containerID="cri-o://c30a490f27e222171754b9e68dbf02ac61a2f09d7e76897a6b22805e9582c6c8" 
Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.161524 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" podUID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" containerName="ovnkube-controller" containerID="cri-o://c30a490f27e222171754b9e68dbf02ac61a2f09d7e76897a6b22805e9582c6c8" gracePeriod=30
Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.387738 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9g9w4_857356d2-6585-41c6-9a2c-e06ef45f7303/kube-multus/2.log"
Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.388684 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9g9w4_857356d2-6585-41c6-9a2c-e06ef45f7303/kube-multus/1.log"
Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.388725 4946 generic.go:334] "Generic (PLEG): container finished" podID="857356d2-6585-41c6-9a2c-e06ef45f7303" containerID="550d51e89fd2af743383ca0f5cafb114f9ccfa889b800eabc67d79127fe98802" exitCode=2
Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.388782 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9g9w4" event={"ID":"857356d2-6585-41c6-9a2c-e06ef45f7303","Type":"ContainerDied","Data":"550d51e89fd2af743383ca0f5cafb114f9ccfa889b800eabc67d79127fe98802"}
Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.388822 4946 scope.go:117] "RemoveContainer" containerID="c0a0a595eea30586e0c6859963642e750f7e52a69e98545d3dd7dcaff841e1b1"
Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.389652 4946 scope.go:117] "RemoveContainer" containerID="550d51e89fd2af743383ca0f5cafb114f9ccfa889b800eabc67d79127fe98802"
Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.393820 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pkknv_47e7046d-60dc-4dc0-b63e-f22f4ca5cd51/ovnkube-controller/3.log"
Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.396518 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pkknv_47e7046d-60dc-4dc0-b63e-f22f4ca5cd51/ovn-acl-logging/0.log"
Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.397026 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pkknv_47e7046d-60dc-4dc0-b63e-f22f4ca5cd51/ovn-controller/0.log"
Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.398058 4946 generic.go:334] "Generic (PLEG): container finished" podID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" containerID="c30a490f27e222171754b9e68dbf02ac61a2f09d7e76897a6b22805e9582c6c8" exitCode=0
Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.398090 4946 generic.go:334] "Generic (PLEG): container finished" podID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" containerID="bc681b78e3640f7f8dd8729a32ea393d97f837795e0cccf628724d2b3901739b" exitCode=0
Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.398100 4946 generic.go:334] "Generic (PLEG): container finished" podID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" containerID="f9128b2484a66b1b0f5954befe68d1bd1be5a4edbf254d76473d8ee3025b47ce" exitCode=0
Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.398109 4946 generic.go:334] "Generic (PLEG): container finished" podID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" containerID="38e04dce69273a5669801d093325e51ed6d3a83d545aaf505f6619b89cb9e3d7" exitCode=0
Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.398120 4946 generic.go:334] "Generic (PLEG): container finished" podID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" containerID="35308a0d1d457d423060c71fb70b7e8d1723171f575b2545db3b372c4d2a12ca" exitCode=0
containerID="a1ba38f4c30e4f09063926783a9fbf7883079d7f077200ee18a7a24c236b025f" exitCode=143 Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.398143 4946 generic.go:334] "Generic (PLEG): container finished" podID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" containerID="7ed2e660d6d37ec0abfdd8485a2b9cb79b041a214007a5b8832d70a21076c324" exitCode=143 Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.398117 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" event={"ID":"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51","Type":"ContainerDied","Data":"c30a490f27e222171754b9e68dbf02ac61a2f09d7e76897a6b22805e9582c6c8"} Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.398249 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" event={"ID":"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51","Type":"ContainerDied","Data":"bc681b78e3640f7f8dd8729a32ea393d97f837795e0cccf628724d2b3901739b"} Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.398268 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" event={"ID":"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51","Type":"ContainerDied","Data":"f9128b2484a66b1b0f5954befe68d1bd1be5a4edbf254d76473d8ee3025b47ce"} Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.398280 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" event={"ID":"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51","Type":"ContainerDied","Data":"38e04dce69273a5669801d093325e51ed6d3a83d545aaf505f6619b89cb9e3d7"} Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.398293 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" event={"ID":"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51","Type":"ContainerDied","Data":"35308a0d1d457d423060c71fb70b7e8d1723171f575b2545db3b372c4d2a12ca"} Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.398305 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" event={"ID":"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51","Type":"ContainerDied","Data":"a1ba38f4c30e4f09063926783a9fbf7883079d7f077200ee18a7a24c236b025f"} Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.398317 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" event={"ID":"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51","Type":"ContainerDied","Data":"7ed2e660d6d37ec0abfdd8485a2b9cb79b041a214007a5b8832d70a21076c324"} Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.398448 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-s5n4x" podUID="0b188612-3562-413e-9295-d1534c0cd90a" containerName="registry-server" containerID="cri-o://39e47282d8621c5f2edfc354f7fb49ef392ff9ccf9919ea89f2094deaaf247b3" gracePeriod=2 Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.459825 4946 scope.go:117] "RemoveContainer" containerID="9a4ec6ab6b9acae6303733b64d2906e1bd955b80202cc9af54adc15207de969b" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.530999 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pkknv_47e7046d-60dc-4dc0-b63e-f22f4ca5cd51/ovn-acl-logging/0.log" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.532885 4946 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pkknv_47e7046d-60dc-4dc0-b63e-f22f4ca5cd51/ovn-controller/0.log" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.533529 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.579102 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s5n4x" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.599094 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-host-kubelet\") pod \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.599152 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-log-socket\") pod \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.599190 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-host-slash\") pod \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.599269 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-ovn-node-metrics-cert\") pod \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.599297 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-run-openvswitch\") pod \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.599321 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-var-lib-openvswitch\") pod \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.599379 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-host-run-ovn-kubernetes\") pod \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.599414 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-run-ovn\") pod \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.599440 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-ovnkube-config\") pod \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.599503 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-run-systemd\") pod \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.599543 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-host-var-lib-cni-networks-ovn-kubernetes\") pod \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.599587 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-env-overrides\") pod \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.599642 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-host-run-netns\") pod \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.599682 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmv4h\" (UniqueName: \"kubernetes.io/projected/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-kube-api-access-tmv4h\") pod \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.599705 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-host-cni-bin\") pod \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.599748 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-node-log\") pod \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.599775 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-ovnkube-script-lib\") pod \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.599820 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-host-cni-netd\") pod \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.599847 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-etc-openvswitch\") pod \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.599873 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-systemd-units\") pod \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\" (UID: \"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51\") " Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.600418 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" (UID: "47e7046d-60dc-4dc0-b63e-f22f4ca5cd51"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.600488 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" (UID: "47e7046d-60dc-4dc0-b63e-f22f4ca5cd51"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.600513 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-log-socket" (OuterVolumeSpecName: "log-socket") pod "47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" (UID: "47e7046d-60dc-4dc0-b63e-f22f4ca5cd51"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.600537 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-host-slash" (OuterVolumeSpecName: "host-slash") pod "47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" (UID: "47e7046d-60dc-4dc0-b63e-f22f4ca5cd51"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.607593 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" (UID: "47e7046d-60dc-4dc0-b63e-f22f4ca5cd51"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.607672 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" (UID: "47e7046d-60dc-4dc0-b63e-f22f4ca5cd51"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.607699 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" (UID: "47e7046d-60dc-4dc0-b63e-f22f4ca5cd51"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.607722 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" (UID: "47e7046d-60dc-4dc0-b63e-f22f4ca5cd51"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.608160 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" (UID: "47e7046d-60dc-4dc0-b63e-f22f4ca5cd51"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.608579 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" (UID: "47e7046d-60dc-4dc0-b63e-f22f4ca5cd51"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.608607 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" (UID: "47e7046d-60dc-4dc0-b63e-f22f4ca5cd51"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.608944 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" (UID: "47e7046d-60dc-4dc0-b63e-f22f4ca5cd51"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.608970 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" (UID: "47e7046d-60dc-4dc0-b63e-f22f4ca5cd51"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.610494 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-node-log" (OuterVolumeSpecName: "node-log") pod "47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" (UID: "47e7046d-60dc-4dc0-b63e-f22f4ca5cd51"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.610687 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" (UID: "47e7046d-60dc-4dc0-b63e-f22f4ca5cd51"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.610768 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" (UID: "47e7046d-60dc-4dc0-b63e-f22f4ca5cd51"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.633106 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" (UID: "47e7046d-60dc-4dc0-b63e-f22f4ca5cd51"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.645641 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zjrwp"] Nov 28 07:07:33 crc kubenswrapper[4946]: E1128 07:07:33.646037 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" containerName="kubecfg-setup" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.646079 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" containerName="kubecfg-setup" Nov 28 07:07:33 crc kubenswrapper[4946]: E1128 07:07:33.646090 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" containerName="ovnkube-controller" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.646099 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" containerName="ovnkube-controller" Nov 28 07:07:33 crc kubenswrapper[4946]: E1128 07:07:33.646108 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" containerName="ovn-controller" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.646116 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" containerName="ovn-controller" Nov 28 07:07:33 crc kubenswrapper[4946]: E1128 07:07:33.646130 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" containerName="ovn-acl-logging" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.646138 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" 
containerName="ovn-acl-logging" Nov 28 07:07:33 crc kubenswrapper[4946]: E1128 07:07:33.646147 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" containerName="northd" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.646154 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" containerName="northd" Nov 28 07:07:33 crc kubenswrapper[4946]: E1128 07:07:33.646165 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" containerName="ovnkube-controller" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.646175 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" containerName="ovnkube-controller" Nov 28 07:07:33 crc kubenswrapper[4946]: E1128 07:07:33.646186 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b188612-3562-413e-9295-d1534c0cd90a" containerName="extract-utilities" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.646194 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b188612-3562-413e-9295-d1534c0cd90a" containerName="extract-utilities" Nov 28 07:07:33 crc kubenswrapper[4946]: E1128 07:07:33.646207 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" containerName="sbdb" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.646214 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" containerName="sbdb" Nov 28 07:07:33 crc kubenswrapper[4946]: E1128 07:07:33.646228 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" containerName="nbdb" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.646237 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" containerName="nbdb" Nov 28 07:07:33 crc kubenswrapper[4946]: E1128 07:07:33.646249 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b188612-3562-413e-9295-d1534c0cd90a" containerName="registry-server" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.646256 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b188612-3562-413e-9295-d1534c0cd90a" containerName="registry-server" Nov 28 07:07:33 crc kubenswrapper[4946]: E1128 07:07:33.646266 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" containerName="kube-rbac-proxy-node" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.646273 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" containerName="kube-rbac-proxy-node" Nov 28 07:07:33 crc kubenswrapper[4946]: E1128 07:07:33.646284 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" containerName="kube-rbac-proxy-ovn-metrics" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.646291 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" containerName="kube-rbac-proxy-ovn-metrics" Nov 28 07:07:33 crc kubenswrapper[4946]: E1128 07:07:33.646303 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" containerName="ovnkube-controller" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.646310 4946 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" containerName="ovnkube-controller" Nov 28 07:07:33 crc kubenswrapper[4946]: E1128 07:07:33.646319 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" containerName="ovnkube-controller" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.646331 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" containerName="ovnkube-controller" Nov 28 07:07:33 crc kubenswrapper[4946]: E1128 07:07:33.646341 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b188612-3562-413e-9295-d1534c0cd90a" containerName="extract-content" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.646348 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b188612-3562-413e-9295-d1534c0cd90a" containerName="extract-content" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.646527 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" containerName="nbdb" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.646544 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b188612-3562-413e-9295-d1534c0cd90a" containerName="registry-server" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.646554 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" containerName="ovnkube-controller" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.646564 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" containerName="ovn-acl-logging" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.646576 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" containerName="kube-rbac-proxy-node" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.646586 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" containerName="ovn-controller" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.646595 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" containerName="kube-rbac-proxy-ovn-metrics" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.646604 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" containerName="northd" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.646628 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" containerName="ovnkube-controller" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.646637 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" containerName="ovnkube-controller" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.646645 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" containerName="sbdb" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.646656 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" containerName="ovnkube-controller" Nov 28 07:07:33 crc kubenswrapper[4946]: E1128 07:07:33.646778 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" containerName="ovnkube-controller" Nov 28 07:07:33 crc 
kubenswrapper[4946]: I1128 07:07:33.646791 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" containerName="ovnkube-controller" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.646922 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" containerName="ovnkube-controller" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.656256 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.674980 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" (UID: "47e7046d-60dc-4dc0-b63e-f22f4ca5cd51"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.677058 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-kube-api-access-tmv4h" (OuterVolumeSpecName: "kube-api-access-tmv4h") pod "47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" (UID: "47e7046d-60dc-4dc0-b63e-f22f4ca5cd51"). InnerVolumeSpecName "kube-api-access-tmv4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.678248 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" (UID: "47e7046d-60dc-4dc0-b63e-f22f4ca5cd51"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.717916 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b188612-3562-413e-9295-d1534c0cd90a-catalog-content\") pod \"0b188612-3562-413e-9295-d1534c0cd90a\" (UID: \"0b188612-3562-413e-9295-d1534c0cd90a\") " Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.718325 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b188612-3562-413e-9295-d1534c0cd90a-utilities\") pod \"0b188612-3562-413e-9295-d1534c0cd90a\" (UID: \"0b188612-3562-413e-9295-d1534c0cd90a\") " Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.718373 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfpz8\" (UniqueName: \"kubernetes.io/projected/0b188612-3562-413e-9295-d1534c0cd90a-kube-api-access-lfpz8\") pod \"0b188612-3562-413e-9295-d1534c0cd90a\" (UID: \"0b188612-3562-413e-9295-d1534c0cd90a\") " Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.718839 4946 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-host-run-netns\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.718862 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmv4h\" (UniqueName: \"kubernetes.io/projected/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-kube-api-access-tmv4h\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.718875 4946 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-host-cni-bin\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.718884 4946 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-node-log\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.718893 4946 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.718902 4946 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-host-cni-netd\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.718911 4946 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.718921 4946 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-systemd-units\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.718929 4946 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-host-kubelet\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:33 
crc kubenswrapper[4946]: I1128 07:07:33.718938 4946 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-log-socket\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.718946 4946 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-host-slash\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.718969 4946 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-run-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.718978 4946 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.718986 4946 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.718995 4946 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.719004 4946 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.719013 4946 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.719022 4946 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-run-systemd\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.719031 4946 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.719039 4946 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.722746 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b188612-3562-413e-9295-d1534c0cd90a-utilities" (OuterVolumeSpecName: "utilities") pod "0b188612-3562-413e-9295-d1534c0cd90a" (UID: "0b188612-3562-413e-9295-d1534c0cd90a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.724694 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b188612-3562-413e-9295-d1534c0cd90a-kube-api-access-lfpz8" (OuterVolumeSpecName: "kube-api-access-lfpz8") pod "0b188612-3562-413e-9295-d1534c0cd90a" (UID: "0b188612-3562-413e-9295-d1534c0cd90a"). InnerVolumeSpecName "kube-api-access-lfpz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.765425 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b188612-3562-413e-9295-d1534c0cd90a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0b188612-3562-413e-9295-d1534c0cd90a" (UID: "0b188612-3562-413e-9295-d1534c0cd90a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.820722 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1db2e42e-a744-4e36-a9d0-236601daae20-node-log\") pod \"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.820804 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1db2e42e-a744-4e36-a9d0-236601daae20-run-systemd\") pod \"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.820826 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1db2e42e-a744-4e36-a9d0-236601daae20-etc-openvswitch\") pod \"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.820854 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1db2e42e-a744-4e36-a9d0-236601daae20-ovnkube-script-lib\") pod \"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.820914 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1db2e42e-a744-4e36-a9d0-236601daae20-host-run-netns\") pod \"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.820940 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1db2e42e-a744-4e36-a9d0-236601daae20-env-overrides\") pod \"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.820968 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" 
(UniqueName: \"kubernetes.io/host-path/1db2e42e-a744-4e36-a9d0-236601daae20-run-ovn\") pod \"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.820983 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1db2e42e-a744-4e36-a9d0-236601daae20-host-cni-bin\") pod \"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.821001 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1db2e42e-a744-4e36-a9d0-236601daae20-host-run-ovn-kubernetes\") pod \"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.821053 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1db2e42e-a744-4e36-a9d0-236601daae20-host-cni-netd\") pod \"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.821075 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1db2e42e-a744-4e36-a9d0-236601daae20-log-socket\") pod \"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.821238 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1db2e42e-a744-4e36-a9d0-236601daae20-systemd-units\") pod \"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.821318 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1db2e42e-a744-4e36-a9d0-236601daae20-ovn-node-metrics-cert\") pod \"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.821366 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1db2e42e-a744-4e36-a9d0-236601daae20-host-kubelet\") pod \"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.821407 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1db2e42e-a744-4e36-a9d0-236601daae20-ovnkube-config\") pod \"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.821446 4946 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1db2e42e-a744-4e36-a9d0-236601daae20-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.821619 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1db2e42e-a744-4e36-a9d0-236601daae20-run-openvswitch\") pod \"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.821704 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1db2e42e-a744-4e36-a9d0-236601daae20-var-lib-openvswitch\") pod \"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.821812 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85wrk\" (UniqueName: \"kubernetes.io/projected/1db2e42e-a744-4e36-a9d0-236601daae20-kube-api-access-85wrk\") pod \"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.821879 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1db2e42e-a744-4e36-a9d0-236601daae20-host-slash\") pod \"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.822017 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfpz8\" (UniqueName: \"kubernetes.io/projected/0b188612-3562-413e-9295-d1534c0cd90a-kube-api-access-lfpz8\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.822040 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b188612-3562-413e-9295-d1534c0cd90a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.822056 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b188612-3562-413e-9295-d1534c0cd90a-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.923544 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1db2e42e-a744-4e36-a9d0-236601daae20-host-run-ovn-kubernetes\") pod \"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.923587 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1db2e42e-a744-4e36-a9d0-236601daae20-host-cni-netd\") pod \"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.923634 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1db2e42e-a744-4e36-a9d0-236601daae20-log-socket\") pod \"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.923662 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1db2e42e-a744-4e36-a9d0-236601daae20-systemd-units\") pod \"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.923686 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1db2e42e-a744-4e36-a9d0-236601daae20-ovn-node-metrics-cert\") pod \"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.923707 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1db2e42e-a744-4e36-a9d0-236601daae20-ovnkube-config\") pod \"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.923726 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1db2e42e-a744-4e36-a9d0-236601daae20-host-kubelet\") pod \"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.923745 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1db2e42e-a744-4e36-a9d0-236601daae20-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.923766 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1db2e42e-a744-4e36-a9d0-236601daae20-run-openvswitch\") pod \"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.923785 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1db2e42e-a744-4e36-a9d0-236601daae20-var-lib-openvswitch\") pod \"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.923805 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85wrk\" (UniqueName: \"kubernetes.io/projected/1db2e42e-a744-4e36-a9d0-236601daae20-kube-api-access-85wrk\") pod \"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.923824 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1db2e42e-a744-4e36-a9d0-236601daae20-host-slash\") pod \"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.923842 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1db2e42e-a744-4e36-a9d0-236601daae20-node-log\") pod \"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.923857 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1db2e42e-a744-4e36-a9d0-236601daae20-run-systemd\") pod \"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.923905 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1db2e42e-a744-4e36-a9d0-236601daae20-etc-openvswitch\") pod \"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.923926 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1db2e42e-a744-4e36-a9d0-236601daae20-ovnkube-script-lib\") pod \"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.923944 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1db2e42e-a744-4e36-a9d0-236601daae20-host-run-netns\") pod \"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.923969 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1db2e42e-a744-4e36-a9d0-236601daae20-env-overrides\") pod \"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.923996 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1db2e42e-a744-4e36-a9d0-236601daae20-run-ovn\") pod \"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.924015 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1db2e42e-a744-4e36-a9d0-236601daae20-host-cni-bin\") pod \"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.924088 4946 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1db2e42e-a744-4e36-a9d0-236601daae20-host-cni-bin\") pod \"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.924127 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1db2e42e-a744-4e36-a9d0-236601daae20-host-run-ovn-kubernetes\") pod \"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.924150 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1db2e42e-a744-4e36-a9d0-236601daae20-host-cni-netd\") pod \"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.924170 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1db2e42e-a744-4e36-a9d0-236601daae20-log-socket\") pod \"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.924189 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1db2e42e-a744-4e36-a9d0-236601daae20-systemd-units\") pod \"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.924692 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1db2e42e-a744-4e36-a9d0-236601daae20-host-slash\") pod \"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.924808 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1db2e42e-a744-4e36-a9d0-236601daae20-host-kubelet\") pod \"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.924879 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1db2e42e-a744-4e36-a9d0-236601daae20-node-log\") pod \"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.924976 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1db2e42e-a744-4e36-a9d0-236601daae20-run-openvswitch\") pod \"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.925022 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1db2e42e-a744-4e36-a9d0-236601daae20-run-systemd\") pod 
\"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.925100 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1db2e42e-a744-4e36-a9d0-236601daae20-etc-openvswitch\") pod \"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.925106 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1db2e42e-a744-4e36-a9d0-236601daae20-host-run-netns\") pod \"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.925072 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1db2e42e-a744-4e36-a9d0-236601daae20-run-ovn\") pod \"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.925550 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1db2e42e-a744-4e36-a9d0-236601daae20-env-overrides\") pod \"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.925601 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1db2e42e-a744-4e36-a9d0-236601daae20-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.925662 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1db2e42e-a744-4e36-a9d0-236601daae20-ovnkube-script-lib\") pod \"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.924771 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1db2e42e-a744-4e36-a9d0-236601daae20-var-lib-openvswitch\") pod \"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.925703 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1db2e42e-a744-4e36-a9d0-236601daae20-ovnkube-config\") pod \"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.927766 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1db2e42e-a744-4e36-a9d0-236601daae20-ovn-node-metrics-cert\") pod \"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.943882 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85wrk\" (UniqueName: \"kubernetes.io/projected/1db2e42e-a744-4e36-a9d0-236601daae20-kube-api-access-85wrk\") pod \"ovnkube-node-zjrwp\" (UID: \"1db2e42e-a744-4e36-a9d0-236601daae20\") " pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:33 crc kubenswrapper[4946]: I1128 07:07:33.997825 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:34 crc kubenswrapper[4946]: I1128 07:07:34.406026 4946 generic.go:334] "Generic (PLEG): container finished" podID="0b188612-3562-413e-9295-d1534c0cd90a" containerID="39e47282d8621c5f2edfc354f7fb49ef392ff9ccf9919ea89f2094deaaf247b3" exitCode=0 Nov 28 07:07:34 crc kubenswrapper[4946]: I1128 07:07:34.406104 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s5n4x" Nov 28 07:07:34 crc kubenswrapper[4946]: I1128 07:07:34.406132 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5n4x" event={"ID":"0b188612-3562-413e-9295-d1534c0cd90a","Type":"ContainerDied","Data":"39e47282d8621c5f2edfc354f7fb49ef392ff9ccf9919ea89f2094deaaf247b3"} Nov 28 07:07:34 crc kubenswrapper[4946]: I1128 07:07:34.406524 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5n4x" event={"ID":"0b188612-3562-413e-9295-d1534c0cd90a","Type":"ContainerDied","Data":"cbe6dfb6282a087f8b779314bb02c8bfc4d60235059763778df03436b44b212d"} Nov 28 07:07:34 crc kubenswrapper[4946]: I1128 07:07:34.406557 4946 scope.go:117] "RemoveContainer" containerID="39e47282d8621c5f2edfc354f7fb49ef392ff9ccf9919ea89f2094deaaf247b3" Nov 28 07:07:34 crc kubenswrapper[4946]: I1128 07:07:34.410507 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pkknv_47e7046d-60dc-4dc0-b63e-f22f4ca5cd51/ovn-acl-logging/0.log" Nov 28 07:07:34 crc kubenswrapper[4946]: I1128 07:07:34.411016 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pkknv_47e7046d-60dc-4dc0-b63e-f22f4ca5cd51/ovn-controller/0.log" Nov 28 07:07:34 crc kubenswrapper[4946]: I1128 07:07:34.411382 4946 generic.go:334] "Generic (PLEG): container finished" podID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" containerID="d8b00edadf6d3442d70a476d465733f303f7a48d8998c1a9c881ae515daa264e" exitCode=0 Nov 28 07:07:34 crc kubenswrapper[4946]: I1128 07:07:34.411425 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" event={"ID":"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51","Type":"ContainerDied","Data":"d8b00edadf6d3442d70a476d465733f303f7a48d8998c1a9c881ae515daa264e"} Nov 28 07:07:34 crc kubenswrapper[4946]: I1128 07:07:34.411485 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" event={"ID":"47e7046d-60dc-4dc0-b63e-f22f4ca5cd51","Type":"ContainerDied","Data":"61e3f55bcbc151c49e2b4d5d26793757faebef7293f442e7cc50af03aab8bb11"} Nov 28 07:07:34 crc kubenswrapper[4946]: I1128 07:07:34.411494 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pkknv" Nov 28 07:07:34 crc kubenswrapper[4946]: I1128 07:07:34.413368 4946 generic.go:334] "Generic (PLEG): container finished" podID="1db2e42e-a744-4e36-a9d0-236601daae20" containerID="e5b71b28185cd3bc008034f07fbb0461e05c226803a049cae6e964014cfd2a53" exitCode=0 Nov 28 07:07:34 crc kubenswrapper[4946]: I1128 07:07:34.413427 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" event={"ID":"1db2e42e-a744-4e36-a9d0-236601daae20","Type":"ContainerDied","Data":"e5b71b28185cd3bc008034f07fbb0461e05c226803a049cae6e964014cfd2a53"} Nov 28 07:07:34 crc kubenswrapper[4946]: I1128 07:07:34.413476 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" event={"ID":"1db2e42e-a744-4e36-a9d0-236601daae20","Type":"ContainerStarted","Data":"59cd0d0755f04ac69ff35ff1b3ae961c80ce08273bf216e3da82ecfa29b6b42c"} Nov 28 07:07:34 crc kubenswrapper[4946]: I1128 07:07:34.420125 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9g9w4_857356d2-6585-41c6-9a2c-e06ef45f7303/kube-multus/2.log" Nov 28 07:07:34 crc kubenswrapper[4946]: I1128 07:07:34.420182 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9g9w4" event={"ID":"857356d2-6585-41c6-9a2c-e06ef45f7303","Type":"ContainerStarted","Data":"ebe6dceaa9654323b003711b67b69c991bec493f4850512002cc651e6c5ed783"} Nov 28 07:07:34 crc kubenswrapper[4946]: I1128 07:07:34.423080 4946 scope.go:117] "RemoveContainer" containerID="8753166cb1acfa552900bcae9e3ac44e011bc374d9e944633ebb2f72b029b07c" Nov 28 07:07:34 crc kubenswrapper[4946]: I1128 07:07:34.449021 4946 scope.go:117] "RemoveContainer" containerID="1b13a2871b40b166164fed865f462b44cfd0680a737d550a1745bed38ac5d301" Nov 28 07:07:34 crc kubenswrapper[4946]: I1128 07:07:34.481197 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s5n4x"] Nov 28 07:07:34 crc kubenswrapper[4946]: I1128 07:07:34.509071 4946 scope.go:117] "RemoveContainer" containerID="39e47282d8621c5f2edfc354f7fb49ef392ff9ccf9919ea89f2094deaaf247b3" Nov 28 07:07:34 crc kubenswrapper[4946]: I1128 07:07:34.512143 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-s5n4x"] Nov 28 07:07:34 crc kubenswrapper[4946]: E1128 07:07:34.516147 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39e47282d8621c5f2edfc354f7fb49ef392ff9ccf9919ea89f2094deaaf247b3\": container with ID starting with 39e47282d8621c5f2edfc354f7fb49ef392ff9ccf9919ea89f2094deaaf247b3 not found: ID does not exist" containerID="39e47282d8621c5f2edfc354f7fb49ef392ff9ccf9919ea89f2094deaaf247b3" Nov 28 07:07:34 crc kubenswrapper[4946]: I1128 07:07:34.516210 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39e47282d8621c5f2edfc354f7fb49ef392ff9ccf9919ea89f2094deaaf247b3"} err="failed to get container status \"39e47282d8621c5f2edfc354f7fb49ef392ff9ccf9919ea89f2094deaaf247b3\": rpc error: code = NotFound desc = could not find container \"39e47282d8621c5f2edfc354f7fb49ef392ff9ccf9919ea89f2094deaaf247b3\": container with ID starting with 39e47282d8621c5f2edfc354f7fb49ef392ff9ccf9919ea89f2094deaaf247b3 not found: ID does not exist" Nov 28 07:07:34 crc kubenswrapper[4946]: I1128 07:07:34.516242 4946 scope.go:117] "RemoveContainer" 
containerID="8753166cb1acfa552900bcae9e3ac44e011bc374d9e944633ebb2f72b029b07c" Nov 28 07:07:34 crc kubenswrapper[4946]: E1128 07:07:34.517422 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8753166cb1acfa552900bcae9e3ac44e011bc374d9e944633ebb2f72b029b07c\": container with ID starting with 8753166cb1acfa552900bcae9e3ac44e011bc374d9e944633ebb2f72b029b07c not found: ID does not exist" containerID="8753166cb1acfa552900bcae9e3ac44e011bc374d9e944633ebb2f72b029b07c" Nov 28 07:07:34 crc kubenswrapper[4946]: I1128 07:07:34.517453 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8753166cb1acfa552900bcae9e3ac44e011bc374d9e944633ebb2f72b029b07c"} err="failed to get container status \"8753166cb1acfa552900bcae9e3ac44e011bc374d9e944633ebb2f72b029b07c\": rpc error: code = NotFound desc = could not find container \"8753166cb1acfa552900bcae9e3ac44e011bc374d9e944633ebb2f72b029b07c\": container with ID starting with 8753166cb1acfa552900bcae9e3ac44e011bc374d9e944633ebb2f72b029b07c not found: ID does not exist" Nov 28 07:07:34 crc kubenswrapper[4946]: I1128 07:07:34.517483 4946 scope.go:117] "RemoveContainer" containerID="1b13a2871b40b166164fed865f462b44cfd0680a737d550a1745bed38ac5d301" Nov 28 07:07:34 crc kubenswrapper[4946]: E1128 07:07:34.519034 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b13a2871b40b166164fed865f462b44cfd0680a737d550a1745bed38ac5d301\": container with ID starting with 1b13a2871b40b166164fed865f462b44cfd0680a737d550a1745bed38ac5d301 not found: ID does not exist" containerID="1b13a2871b40b166164fed865f462b44cfd0680a737d550a1745bed38ac5d301" Nov 28 07:07:34 crc kubenswrapper[4946]: I1128 07:07:34.519073 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b13a2871b40b166164fed865f462b44cfd0680a737d550a1745bed38ac5d301"} err="failed to get container status \"1b13a2871b40b166164fed865f462b44cfd0680a737d550a1745bed38ac5d301\": rpc error: code = NotFound desc = could not find container \"1b13a2871b40b166164fed865f462b44cfd0680a737d550a1745bed38ac5d301\": container with ID starting with 1b13a2871b40b166164fed865f462b44cfd0680a737d550a1745bed38ac5d301 not found: ID does not exist" Nov 28 07:07:34 crc kubenswrapper[4946]: I1128 07:07:34.519094 4946 scope.go:117] "RemoveContainer" containerID="c30a490f27e222171754b9e68dbf02ac61a2f09d7e76897a6b22805e9582c6c8" Nov 28 07:07:34 crc kubenswrapper[4946]: I1128 07:07:34.525167 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pkknv"] Nov 28 07:07:34 crc kubenswrapper[4946]: I1128 07:07:34.530304 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pkknv"] Nov 28 07:07:34 crc kubenswrapper[4946]: I1128 07:07:34.544151 4946 scope.go:117] "RemoveContainer" containerID="bc681b78e3640f7f8dd8729a32ea393d97f837795e0cccf628724d2b3901739b" Nov 28 07:07:34 crc kubenswrapper[4946]: I1128 07:07:34.563422 4946 scope.go:117] "RemoveContainer" containerID="f9128b2484a66b1b0f5954befe68d1bd1be5a4edbf254d76473d8ee3025b47ce" Nov 28 07:07:34 crc kubenswrapper[4946]: I1128 07:07:34.586081 4946 scope.go:117] "RemoveContainer" containerID="d8b00edadf6d3442d70a476d465733f303f7a48d8998c1a9c881ae515daa264e" Nov 28 07:07:34 crc kubenswrapper[4946]: I1128 07:07:34.607030 4946 scope.go:117] "RemoveContainer" 
containerID="38e04dce69273a5669801d093325e51ed6d3a83d545aaf505f6619b89cb9e3d7" Nov 28 07:07:34 crc kubenswrapper[4946]: I1128 07:07:34.635966 4946 scope.go:117] "RemoveContainer" containerID="35308a0d1d457d423060c71fb70b7e8d1723171f575b2545db3b372c4d2a12ca" Nov 28 07:07:34 crc kubenswrapper[4946]: I1128 07:07:34.660736 4946 scope.go:117] "RemoveContainer" containerID="a1ba38f4c30e4f09063926783a9fbf7883079d7f077200ee18a7a24c236b025f" Nov 28 07:07:34 crc kubenswrapper[4946]: I1128 07:07:34.683450 4946 scope.go:117] "RemoveContainer" containerID="7ed2e660d6d37ec0abfdd8485a2b9cb79b041a214007a5b8832d70a21076c324" Nov 28 07:07:34 crc kubenswrapper[4946]: I1128 07:07:34.701757 4946 scope.go:117] "RemoveContainer" containerID="5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830" Nov 28 07:07:34 crc kubenswrapper[4946]: I1128 07:07:34.718032 4946 scope.go:117] "RemoveContainer" containerID="c30a490f27e222171754b9e68dbf02ac61a2f09d7e76897a6b22805e9582c6c8" Nov 28 07:07:34 crc kubenswrapper[4946]: E1128 07:07:34.718833 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c30a490f27e222171754b9e68dbf02ac61a2f09d7e76897a6b22805e9582c6c8\": container with ID starting with c30a490f27e222171754b9e68dbf02ac61a2f09d7e76897a6b22805e9582c6c8 not found: ID does not exist" containerID="c30a490f27e222171754b9e68dbf02ac61a2f09d7e76897a6b22805e9582c6c8" Nov 28 07:07:34 crc kubenswrapper[4946]: I1128 07:07:34.718881 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c30a490f27e222171754b9e68dbf02ac61a2f09d7e76897a6b22805e9582c6c8"} err="failed to get container status \"c30a490f27e222171754b9e68dbf02ac61a2f09d7e76897a6b22805e9582c6c8\": rpc error: code = NotFound desc = could not find container \"c30a490f27e222171754b9e68dbf02ac61a2f09d7e76897a6b22805e9582c6c8\": container with ID starting with c30a490f27e222171754b9e68dbf02ac61a2f09d7e76897a6b22805e9582c6c8 not found: ID does not exist" Nov 28 07:07:34 crc kubenswrapper[4946]: I1128 07:07:34.718908 4946 scope.go:117] "RemoveContainer" containerID="bc681b78e3640f7f8dd8729a32ea393d97f837795e0cccf628724d2b3901739b" Nov 28 07:07:34 crc kubenswrapper[4946]: E1128 07:07:34.719512 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc681b78e3640f7f8dd8729a32ea393d97f837795e0cccf628724d2b3901739b\": container with ID starting with bc681b78e3640f7f8dd8729a32ea393d97f837795e0cccf628724d2b3901739b not found: ID does not exist" containerID="bc681b78e3640f7f8dd8729a32ea393d97f837795e0cccf628724d2b3901739b" Nov 28 07:07:34 crc kubenswrapper[4946]: I1128 07:07:34.719549 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc681b78e3640f7f8dd8729a32ea393d97f837795e0cccf628724d2b3901739b"} err="failed to get container status \"bc681b78e3640f7f8dd8729a32ea393d97f837795e0cccf628724d2b3901739b\": rpc error: code = NotFound desc = could not find container \"bc681b78e3640f7f8dd8729a32ea393d97f837795e0cccf628724d2b3901739b\": container with ID starting with bc681b78e3640f7f8dd8729a32ea393d97f837795e0cccf628724d2b3901739b not found: ID does not exist" Nov 28 07:07:34 crc kubenswrapper[4946]: I1128 07:07:34.719570 4946 scope.go:117] "RemoveContainer" containerID="f9128b2484a66b1b0f5954befe68d1bd1be5a4edbf254d76473d8ee3025b47ce" Nov 28 07:07:34 crc kubenswrapper[4946]: E1128 07:07:34.720257 4946 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9128b2484a66b1b0f5954befe68d1bd1be5a4edbf254d76473d8ee3025b47ce\": container with ID starting with f9128b2484a66b1b0f5954befe68d1bd1be5a4edbf254d76473d8ee3025b47ce not found: ID does not exist" containerID="f9128b2484a66b1b0f5954befe68d1bd1be5a4edbf254d76473d8ee3025b47ce" Nov 28 07:07:34 crc kubenswrapper[4946]: I1128 07:07:34.720290 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9128b2484a66b1b0f5954befe68d1bd1be5a4edbf254d76473d8ee3025b47ce"} err="failed to get container status \"f9128b2484a66b1b0f5954befe68d1bd1be5a4edbf254d76473d8ee3025b47ce\": rpc error: code = NotFound desc = could not find container \"f9128b2484a66b1b0f5954befe68d1bd1be5a4edbf254d76473d8ee3025b47ce\": container with ID starting with f9128b2484a66b1b0f5954befe68d1bd1be5a4edbf254d76473d8ee3025b47ce not found: ID does not exist" Nov 28 07:07:34 crc kubenswrapper[4946]: I1128 07:07:34.720717 4946 scope.go:117] "RemoveContainer" containerID="d8b00edadf6d3442d70a476d465733f303f7a48d8998c1a9c881ae515daa264e" Nov 28 07:07:34 crc kubenswrapper[4946]: E1128 07:07:34.721409 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8b00edadf6d3442d70a476d465733f303f7a48d8998c1a9c881ae515daa264e\": container with ID starting with d8b00edadf6d3442d70a476d465733f303f7a48d8998c1a9c881ae515daa264e not found: ID does not exist" containerID="d8b00edadf6d3442d70a476d465733f303f7a48d8998c1a9c881ae515daa264e" Nov 28 07:07:34 crc kubenswrapper[4946]: I1128 07:07:34.721504 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8b00edadf6d3442d70a476d465733f303f7a48d8998c1a9c881ae515daa264e"} err="failed to get container status \"d8b00edadf6d3442d70a476d465733f303f7a48d8998c1a9c881ae515daa264e\": rpc error: code = NotFound desc = could not find container \"d8b00edadf6d3442d70a476d465733f303f7a48d8998c1a9c881ae515daa264e\": container with ID starting with d8b00edadf6d3442d70a476d465733f303f7a48d8998c1a9c881ae515daa264e not found: ID does not exist" Nov 28 07:07:34 crc kubenswrapper[4946]: I1128 07:07:34.721551 4946 scope.go:117] "RemoveContainer" containerID="38e04dce69273a5669801d093325e51ed6d3a83d545aaf505f6619b89cb9e3d7" Nov 28 07:07:34 crc kubenswrapper[4946]: E1128 07:07:34.722232 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38e04dce69273a5669801d093325e51ed6d3a83d545aaf505f6619b89cb9e3d7\": container with ID starting with 38e04dce69273a5669801d093325e51ed6d3a83d545aaf505f6619b89cb9e3d7 not found: ID does not exist" containerID="38e04dce69273a5669801d093325e51ed6d3a83d545aaf505f6619b89cb9e3d7" Nov 28 07:07:34 crc kubenswrapper[4946]: I1128 07:07:34.722268 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38e04dce69273a5669801d093325e51ed6d3a83d545aaf505f6619b89cb9e3d7"} err="failed to get container status \"38e04dce69273a5669801d093325e51ed6d3a83d545aaf505f6619b89cb9e3d7\": rpc error: code = NotFound desc = could not find container \"38e04dce69273a5669801d093325e51ed6d3a83d545aaf505f6619b89cb9e3d7\": container with ID starting with 38e04dce69273a5669801d093325e51ed6d3a83d545aaf505f6619b89cb9e3d7 not found: ID does not exist" Nov 28 07:07:34 crc kubenswrapper[4946]: I1128 07:07:34.722282 4946 scope.go:117] "RemoveContainer" 
containerID="35308a0d1d457d423060c71fb70b7e8d1723171f575b2545db3b372c4d2a12ca" Nov 28 07:07:34 crc kubenswrapper[4946]: E1128 07:07:34.722616 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35308a0d1d457d423060c71fb70b7e8d1723171f575b2545db3b372c4d2a12ca\": container with ID starting with 35308a0d1d457d423060c71fb70b7e8d1723171f575b2545db3b372c4d2a12ca not found: ID does not exist" containerID="35308a0d1d457d423060c71fb70b7e8d1723171f575b2545db3b372c4d2a12ca" Nov 28 07:07:34 crc kubenswrapper[4946]: I1128 07:07:34.722633 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35308a0d1d457d423060c71fb70b7e8d1723171f575b2545db3b372c4d2a12ca"} err="failed to get container status \"35308a0d1d457d423060c71fb70b7e8d1723171f575b2545db3b372c4d2a12ca\": rpc error: code = NotFound desc = could not find container \"35308a0d1d457d423060c71fb70b7e8d1723171f575b2545db3b372c4d2a12ca\": container with ID starting with 35308a0d1d457d423060c71fb70b7e8d1723171f575b2545db3b372c4d2a12ca not found: ID does not exist" Nov 28 07:07:34 crc kubenswrapper[4946]: I1128 07:07:34.722670 4946 scope.go:117] "RemoveContainer" containerID="a1ba38f4c30e4f09063926783a9fbf7883079d7f077200ee18a7a24c236b025f" Nov 28 07:07:34 crc kubenswrapper[4946]: E1128 07:07:34.723868 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1ba38f4c30e4f09063926783a9fbf7883079d7f077200ee18a7a24c236b025f\": container with ID starting with a1ba38f4c30e4f09063926783a9fbf7883079d7f077200ee18a7a24c236b025f not found: ID does not exist" containerID="a1ba38f4c30e4f09063926783a9fbf7883079d7f077200ee18a7a24c236b025f" Nov 28 07:07:34 crc kubenswrapper[4946]: I1128 07:07:34.723884 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1ba38f4c30e4f09063926783a9fbf7883079d7f077200ee18a7a24c236b025f"} err="failed to get container status \"a1ba38f4c30e4f09063926783a9fbf7883079d7f077200ee18a7a24c236b025f\": rpc error: code = NotFound desc = could not find container \"a1ba38f4c30e4f09063926783a9fbf7883079d7f077200ee18a7a24c236b025f\": container with ID starting with a1ba38f4c30e4f09063926783a9fbf7883079d7f077200ee18a7a24c236b025f not found: ID does not exist" Nov 28 07:07:34 crc kubenswrapper[4946]: I1128 07:07:34.723896 4946 scope.go:117] "RemoveContainer" containerID="7ed2e660d6d37ec0abfdd8485a2b9cb79b041a214007a5b8832d70a21076c324" Nov 28 07:07:34 crc kubenswrapper[4946]: E1128 07:07:34.724353 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ed2e660d6d37ec0abfdd8485a2b9cb79b041a214007a5b8832d70a21076c324\": container with ID starting with 7ed2e660d6d37ec0abfdd8485a2b9cb79b041a214007a5b8832d70a21076c324 not found: ID does not exist" containerID="7ed2e660d6d37ec0abfdd8485a2b9cb79b041a214007a5b8832d70a21076c324" Nov 28 07:07:34 crc kubenswrapper[4946]: I1128 07:07:34.724398 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ed2e660d6d37ec0abfdd8485a2b9cb79b041a214007a5b8832d70a21076c324"} err="failed to get container status \"7ed2e660d6d37ec0abfdd8485a2b9cb79b041a214007a5b8832d70a21076c324\": rpc error: code = NotFound desc = could not find container \"7ed2e660d6d37ec0abfdd8485a2b9cb79b041a214007a5b8832d70a21076c324\": container with ID starting with 
7ed2e660d6d37ec0abfdd8485a2b9cb79b041a214007a5b8832d70a21076c324 not found: ID does not exist" Nov 28 07:07:34 crc kubenswrapper[4946]: I1128 07:07:34.724418 4946 scope.go:117] "RemoveContainer" containerID="5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830" Nov 28 07:07:34 crc kubenswrapper[4946]: E1128 07:07:34.726137 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\": container with ID starting with 5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830 not found: ID does not exist" containerID="5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830" Nov 28 07:07:34 crc kubenswrapper[4946]: I1128 07:07:34.726157 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830"} err="failed to get container status \"5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\": rpc error: code = NotFound desc = could not find container \"5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830\": container with ID starting with 5e1dd7ed194f7068efa7608f3259832555abaadb4684fd34ed46850913b21830 not found: ID does not exist" Nov 28 07:07:35 crc kubenswrapper[4946]: I1128 07:07:35.432568 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" event={"ID":"1db2e42e-a744-4e36-a9d0-236601daae20","Type":"ContainerStarted","Data":"67ae1ec26694beff8103052b48df61346fb9a97b07d30b4b98228ed7622a3754"} Nov 28 07:07:35 crc kubenswrapper[4946]: I1128 07:07:35.432623 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" event={"ID":"1db2e42e-a744-4e36-a9d0-236601daae20","Type":"ContainerStarted","Data":"432eb47d00e8906e8506004db0e3b21125eaf36dd5135e8c29e3d478332f6be3"} Nov 28 07:07:35 crc kubenswrapper[4946]: I1128 07:07:35.432638 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" event={"ID":"1db2e42e-a744-4e36-a9d0-236601daae20","Type":"ContainerStarted","Data":"571464f923445fbffdfb4ad5a7778eb14019938786a1f8e9a66ed41f41d8b640"} Nov 28 07:07:35 crc kubenswrapper[4946]: I1128 07:07:35.432649 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" event={"ID":"1db2e42e-a744-4e36-a9d0-236601daae20","Type":"ContainerStarted","Data":"9a73ca1b81ff21fd2cabc9522e15ccf7f280ec35621a2e24bdf573841878446e"} Nov 28 07:07:35 crc kubenswrapper[4946]: I1128 07:07:35.432660 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" event={"ID":"1db2e42e-a744-4e36-a9d0-236601daae20","Type":"ContainerStarted","Data":"c96ef473a9efd98331656ab8819215c15e1cf6244d1e0a12451ac54973f0b643"} Nov 28 07:07:35 crc kubenswrapper[4946]: I1128 07:07:35.432670 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" event={"ID":"1db2e42e-a744-4e36-a9d0-236601daae20","Type":"ContainerStarted","Data":"630eb11c1459fbb5f1f65baccace156b571a43b56eedfbb144e25588306b0c3d"} Nov 28 07:07:35 crc kubenswrapper[4946]: I1128 07:07:35.999264 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b188612-3562-413e-9295-d1534c0cd90a" path="/var/lib/kubelet/pods/0b188612-3562-413e-9295-d1534c0cd90a/volumes" Nov 28 07:07:36 crc kubenswrapper[4946]: I1128 
07:07:36.001631 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47e7046d-60dc-4dc0-b63e-f22f4ca5cd51" path="/var/lib/kubelet/pods/47e7046d-60dc-4dc0-b63e-f22f4ca5cd51/volumes" Nov 28 07:07:37 crc kubenswrapper[4946]: I1128 07:07:37.457227 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" event={"ID":"1db2e42e-a744-4e36-a9d0-236601daae20","Type":"ContainerStarted","Data":"d1cfce261094240325a2add44eadf6af3f36b392972cf0ea6f95f64fbbb4c83d"} Nov 28 07:07:37 crc kubenswrapper[4946]: I1128 07:07:37.900196 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-whwl7"] Nov 28 07:07:37 crc kubenswrapper[4946]: I1128 07:07:37.901607 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-whwl7" Nov 28 07:07:37 crc kubenswrapper[4946]: I1128 07:07:37.903544 4946 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-s7ht6" Nov 28 07:07:37 crc kubenswrapper[4946]: I1128 07:07:37.905202 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Nov 28 07:07:37 crc kubenswrapper[4946]: I1128 07:07:37.905219 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Nov 28 07:07:37 crc kubenswrapper[4946]: I1128 07:07:37.905583 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Nov 28 07:07:37 crc kubenswrapper[4946]: I1128 07:07:37.984417 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/15199c22-6d54-45fd-b47f-e1046b4875be-crc-storage\") pod \"crc-storage-crc-whwl7\" (UID: \"15199c22-6d54-45fd-b47f-e1046b4875be\") " pod="crc-storage/crc-storage-crc-whwl7" Nov 28 07:07:37 crc kubenswrapper[4946]: I1128 07:07:37.984481 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxjb7\" (UniqueName: \"kubernetes.io/projected/15199c22-6d54-45fd-b47f-e1046b4875be-kube-api-access-gxjb7\") pod \"crc-storage-crc-whwl7\" (UID: \"15199c22-6d54-45fd-b47f-e1046b4875be\") " pod="crc-storage/crc-storage-crc-whwl7" Nov 28 07:07:37 crc kubenswrapper[4946]: I1128 07:07:37.984519 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/15199c22-6d54-45fd-b47f-e1046b4875be-node-mnt\") pod \"crc-storage-crc-whwl7\" (UID: \"15199c22-6d54-45fd-b47f-e1046b4875be\") " pod="crc-storage/crc-storage-crc-whwl7" Nov 28 07:07:38 crc kubenswrapper[4946]: I1128 07:07:38.086187 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/15199c22-6d54-45fd-b47f-e1046b4875be-crc-storage\") pod \"crc-storage-crc-whwl7\" (UID: \"15199c22-6d54-45fd-b47f-e1046b4875be\") " pod="crc-storage/crc-storage-crc-whwl7" Nov 28 07:07:38 crc kubenswrapper[4946]: I1128 07:07:38.086252 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxjb7\" (UniqueName: \"kubernetes.io/projected/15199c22-6d54-45fd-b47f-e1046b4875be-kube-api-access-gxjb7\") pod \"crc-storage-crc-whwl7\" (UID: \"15199c22-6d54-45fd-b47f-e1046b4875be\") " pod="crc-storage/crc-storage-crc-whwl7" Nov 28 07:07:38 crc kubenswrapper[4946]: I1128 07:07:38.086342 4946 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/15199c22-6d54-45fd-b47f-e1046b4875be-node-mnt\") pod \"crc-storage-crc-whwl7\" (UID: \"15199c22-6d54-45fd-b47f-e1046b4875be\") " pod="crc-storage/crc-storage-crc-whwl7" Nov 28 07:07:38 crc kubenswrapper[4946]: I1128 07:07:38.086718 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/15199c22-6d54-45fd-b47f-e1046b4875be-node-mnt\") pod \"crc-storage-crc-whwl7\" (UID: \"15199c22-6d54-45fd-b47f-e1046b4875be\") " pod="crc-storage/crc-storage-crc-whwl7" Nov 28 07:07:38 crc kubenswrapper[4946]: I1128 07:07:38.087203 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/15199c22-6d54-45fd-b47f-e1046b4875be-crc-storage\") pod \"crc-storage-crc-whwl7\" (UID: \"15199c22-6d54-45fd-b47f-e1046b4875be\") " pod="crc-storage/crc-storage-crc-whwl7" Nov 28 07:07:38 crc kubenswrapper[4946]: I1128 07:07:38.113633 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxjb7\" (UniqueName: \"kubernetes.io/projected/15199c22-6d54-45fd-b47f-e1046b4875be-kube-api-access-gxjb7\") pod \"crc-storage-crc-whwl7\" (UID: \"15199c22-6d54-45fd-b47f-e1046b4875be\") " pod="crc-storage/crc-storage-crc-whwl7" Nov 28 07:07:38 crc kubenswrapper[4946]: I1128 07:07:38.224259 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-whwl7" Nov 28 07:07:38 crc kubenswrapper[4946]: E1128 07:07:38.269427 4946 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-whwl7_crc-storage_15199c22-6d54-45fd-b47f-e1046b4875be_0(3a6a87cdf7cabe05f30312ff1826134632d69b6f8606387322eb9bb9319eb245): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 28 07:07:38 crc kubenswrapper[4946]: E1128 07:07:38.269559 4946 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-whwl7_crc-storage_15199c22-6d54-45fd-b47f-e1046b4875be_0(3a6a87cdf7cabe05f30312ff1826134632d69b6f8606387322eb9bb9319eb245): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-whwl7" Nov 28 07:07:38 crc kubenswrapper[4946]: E1128 07:07:38.269595 4946 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-whwl7_crc-storage_15199c22-6d54-45fd-b47f-e1046b4875be_0(3a6a87cdf7cabe05f30312ff1826134632d69b6f8606387322eb9bb9319eb245): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="crc-storage/crc-storage-crc-whwl7" Nov 28 07:07:38 crc kubenswrapper[4946]: E1128 07:07:38.269692 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-whwl7_crc-storage(15199c22-6d54-45fd-b47f-e1046b4875be)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-whwl7_crc-storage(15199c22-6d54-45fd-b47f-e1046b4875be)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-whwl7_crc-storage_15199c22-6d54-45fd-b47f-e1046b4875be_0(3a6a87cdf7cabe05f30312ff1826134632d69b6f8606387322eb9bb9319eb245): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-whwl7" podUID="15199c22-6d54-45fd-b47f-e1046b4875be" Nov 28 07:07:40 crc kubenswrapper[4946]: I1128 07:07:40.486964 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" event={"ID":"1db2e42e-a744-4e36-a9d0-236601daae20","Type":"ContainerStarted","Data":"9da649a3af64b7e1d57c7c3a60c8236fae1ec288d09d9e9f23fa6b3afc651c55"} Nov 28 07:07:41 crc kubenswrapper[4946]: I1128 07:07:41.492602 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:41 crc kubenswrapper[4946]: I1128 07:07:41.492664 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:41 crc kubenswrapper[4946]: I1128 07:07:41.492676 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:41 crc kubenswrapper[4946]: I1128 07:07:41.525737 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" podStartSLOduration=8.525717312 podStartE2EDuration="8.525717312s" podCreationTimestamp="2025-11-28 07:07:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:07:41.522962844 +0000 UTC m=+915.901027975" watchObservedRunningTime="2025-11-28 07:07:41.525717312 +0000 UTC m=+915.903782433" Nov 28 07:07:41 crc kubenswrapper[4946]: I1128 07:07:41.532512 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:41 crc kubenswrapper[4946]: I1128 07:07:41.532901 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:07:41 crc kubenswrapper[4946]: I1128 07:07:41.734802 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-whwl7"] Nov 28 07:07:41 crc kubenswrapper[4946]: I1128 07:07:41.735020 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-whwl7" Nov 28 07:07:41 crc kubenswrapper[4946]: I1128 07:07:41.735604 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-whwl7" Nov 28 07:07:41 crc kubenswrapper[4946]: E1128 07:07:41.769190 4946 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-whwl7_crc-storage_15199c22-6d54-45fd-b47f-e1046b4875be_0(9a43db7ae98b16bfc22b093dbda01bc1df702320109009109ef0c1cb75679f76): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Nov 28 07:07:41 crc kubenswrapper[4946]: E1128 07:07:41.769770 4946 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-whwl7_crc-storage_15199c22-6d54-45fd-b47f-e1046b4875be_0(9a43db7ae98b16bfc22b093dbda01bc1df702320109009109ef0c1cb75679f76): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-whwl7" Nov 28 07:07:41 crc kubenswrapper[4946]: E1128 07:07:41.769799 4946 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-whwl7_crc-storage_15199c22-6d54-45fd-b47f-e1046b4875be_0(9a43db7ae98b16bfc22b093dbda01bc1df702320109009109ef0c1cb75679f76): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-whwl7" Nov 28 07:07:41 crc kubenswrapper[4946]: E1128 07:07:41.769866 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-whwl7_crc-storage(15199c22-6d54-45fd-b47f-e1046b4875be)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-whwl7_crc-storage(15199c22-6d54-45fd-b47f-e1046b4875be)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-whwl7_crc-storage_15199c22-6d54-45fd-b47f-e1046b4875be_0(9a43db7ae98b16bfc22b093dbda01bc1df702320109009109ef0c1cb75679f76): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-whwl7" podUID="15199c22-6d54-45fd-b47f-e1046b4875be" Nov 28 07:07:56 crc kubenswrapper[4946]: I1128 07:07:56.989915 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-whwl7" Nov 28 07:07:56 crc kubenswrapper[4946]: I1128 07:07:56.992138 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-whwl7" Nov 28 07:07:57 crc kubenswrapper[4946]: I1128 07:07:57.220629 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-whwl7"] Nov 28 07:07:57 crc kubenswrapper[4946]: I1128 07:07:57.602343 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-whwl7" event={"ID":"15199c22-6d54-45fd-b47f-e1046b4875be","Type":"ContainerStarted","Data":"26f2159bdfeeaf1aefe84ffc226ccf47f3eb4a89c4682b87f0f9ecebc1cf457f"} Nov 28 07:07:59 crc kubenswrapper[4946]: I1128 07:07:59.617662 4946 generic.go:334] "Generic (PLEG): container finished" podID="15199c22-6d54-45fd-b47f-e1046b4875be" containerID="f7f05ce7c4a48ef690cec16b64e2189c4eb184e3722ad7a03a1c13bc025c0fc5" exitCode=0 Nov 28 07:07:59 crc kubenswrapper[4946]: I1128 07:07:59.617768 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-whwl7" event={"ID":"15199c22-6d54-45fd-b47f-e1046b4875be","Type":"ContainerDied","Data":"f7f05ce7c4a48ef690cec16b64e2189c4eb184e3722ad7a03a1c13bc025c0fc5"} Nov 28 07:08:00 crc kubenswrapper[4946]: I1128 07:08:00.837776 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-whwl7" Nov 28 07:08:00 crc kubenswrapper[4946]: I1128 07:08:00.940590 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/15199c22-6d54-45fd-b47f-e1046b4875be-node-mnt\") pod \"15199c22-6d54-45fd-b47f-e1046b4875be\" (UID: \"15199c22-6d54-45fd-b47f-e1046b4875be\") " Nov 28 07:08:00 crc kubenswrapper[4946]: I1128 07:08:00.940668 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/15199c22-6d54-45fd-b47f-e1046b4875be-crc-storage\") pod \"15199c22-6d54-45fd-b47f-e1046b4875be\" (UID: \"15199c22-6d54-45fd-b47f-e1046b4875be\") " Nov 28 07:08:00 crc kubenswrapper[4946]: I1128 07:08:00.940734 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15199c22-6d54-45fd-b47f-e1046b4875be-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "15199c22-6d54-45fd-b47f-e1046b4875be" (UID: "15199c22-6d54-45fd-b47f-e1046b4875be"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 07:08:00 crc kubenswrapper[4946]: I1128 07:08:00.940752 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxjb7\" (UniqueName: \"kubernetes.io/projected/15199c22-6d54-45fd-b47f-e1046b4875be-kube-api-access-gxjb7\") pod \"15199c22-6d54-45fd-b47f-e1046b4875be\" (UID: \"15199c22-6d54-45fd-b47f-e1046b4875be\") " Nov 28 07:08:00 crc kubenswrapper[4946]: I1128 07:08:00.941155 4946 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/15199c22-6d54-45fd-b47f-e1046b4875be-node-mnt\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:00 crc kubenswrapper[4946]: I1128 07:08:00.946922 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15199c22-6d54-45fd-b47f-e1046b4875be-kube-api-access-gxjb7" (OuterVolumeSpecName: "kube-api-access-gxjb7") pod "15199c22-6d54-45fd-b47f-e1046b4875be" (UID: "15199c22-6d54-45fd-b47f-e1046b4875be"). InnerVolumeSpecName "kube-api-access-gxjb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:08:00 crc kubenswrapper[4946]: I1128 07:08:00.954967 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15199c22-6d54-45fd-b47f-e1046b4875be-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "15199c22-6d54-45fd-b47f-e1046b4875be" (UID: "15199c22-6d54-45fd-b47f-e1046b4875be"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:08:01 crc kubenswrapper[4946]: I1128 07:08:01.042550 4946 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/15199c22-6d54-45fd-b47f-e1046b4875be-crc-storage\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:01 crc kubenswrapper[4946]: I1128 07:08:01.042590 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxjb7\" (UniqueName: \"kubernetes.io/projected/15199c22-6d54-45fd-b47f-e1046b4875be-kube-api-access-gxjb7\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:01 crc kubenswrapper[4946]: I1128 07:08:01.631984 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-whwl7" event={"ID":"15199c22-6d54-45fd-b47f-e1046b4875be","Type":"ContainerDied","Data":"26f2159bdfeeaf1aefe84ffc226ccf47f3eb4a89c4682b87f0f9ecebc1cf457f"} Nov 28 07:08:01 crc kubenswrapper[4946]: I1128 07:08:01.632023 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-whwl7" Nov 28 07:08:01 crc kubenswrapper[4946]: I1128 07:08:01.632042 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26f2159bdfeeaf1aefe84ffc226ccf47f3eb4a89c4682b87f0f9ecebc1cf457f" Nov 28 07:08:04 crc kubenswrapper[4946]: I1128 07:08:04.023370 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zjrwp" Nov 28 07:08:08 crc kubenswrapper[4946]: I1128 07:08:08.302592 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7sg4h"] Nov 28 07:08:08 crc kubenswrapper[4946]: E1128 07:08:08.303377 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15199c22-6d54-45fd-b47f-e1046b4875be" containerName="storage" Nov 28 07:08:08 crc kubenswrapper[4946]: I1128 07:08:08.303394 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="15199c22-6d54-45fd-b47f-e1046b4875be" containerName="storage" Nov 28 07:08:08 crc kubenswrapper[4946]: I1128 07:08:08.303526 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="15199c22-6d54-45fd-b47f-e1046b4875be" containerName="storage" Nov 28 07:08:08 crc kubenswrapper[4946]: I1128 07:08:08.304504 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7sg4h" Nov 28 07:08:08 crc kubenswrapper[4946]: I1128 07:08:08.307633 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 28 07:08:08 crc kubenswrapper[4946]: I1128 07:08:08.321048 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7sg4h"] Nov 28 07:08:08 crc kubenswrapper[4946]: I1128 07:08:08.468921 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/79d1ba63-f8be-449c-bd90-3e9eb0276d8e-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7sg4h\" (UID: \"79d1ba63-f8be-449c-bd90-3e9eb0276d8e\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7sg4h" Nov 28 07:08:08 crc kubenswrapper[4946]: I1128 07:08:08.469022 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/79d1ba63-f8be-449c-bd90-3e9eb0276d8e-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7sg4h\" (UID: \"79d1ba63-f8be-449c-bd90-3e9eb0276d8e\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7sg4h" Nov 28 07:08:08 crc kubenswrapper[4946]: I1128 07:08:08.469057 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6sh6\" (UniqueName: \"kubernetes.io/projected/79d1ba63-f8be-449c-bd90-3e9eb0276d8e-kube-api-access-t6sh6\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7sg4h\" (UID: \"79d1ba63-f8be-449c-bd90-3e9eb0276d8e\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7sg4h" Nov 28 07:08:08 crc kubenswrapper[4946]: I1128 07:08:08.570377 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/79d1ba63-f8be-449c-bd90-3e9eb0276d8e-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7sg4h\" (UID: \"79d1ba63-f8be-449c-bd90-3e9eb0276d8e\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7sg4h" Nov 28 07:08:08 crc kubenswrapper[4946]: I1128 07:08:08.570458 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6sh6\" (UniqueName: \"kubernetes.io/projected/79d1ba63-f8be-449c-bd90-3e9eb0276d8e-kube-api-access-t6sh6\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7sg4h\" (UID: \"79d1ba63-f8be-449c-bd90-3e9eb0276d8e\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7sg4h" Nov 28 07:08:08 crc kubenswrapper[4946]: I1128 07:08:08.570598 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/79d1ba63-f8be-449c-bd90-3e9eb0276d8e-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7sg4h\" (UID: \"79d1ba63-f8be-449c-bd90-3e9eb0276d8e\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7sg4h" Nov 28 07:08:08 crc kubenswrapper[4946]: I1128 07:08:08.570933 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/79d1ba63-f8be-449c-bd90-3e9eb0276d8e-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7sg4h\" (UID: \"79d1ba63-f8be-449c-bd90-3e9eb0276d8e\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7sg4h" Nov 28 07:08:08 crc kubenswrapper[4946]: I1128 07:08:08.571338 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/79d1ba63-f8be-449c-bd90-3e9eb0276d8e-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7sg4h\" (UID: \"79d1ba63-f8be-449c-bd90-3e9eb0276d8e\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7sg4h" Nov 28 07:08:08 crc kubenswrapper[4946]: I1128 07:08:08.593621 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6sh6\" (UniqueName: \"kubernetes.io/projected/79d1ba63-f8be-449c-bd90-3e9eb0276d8e-kube-api-access-t6sh6\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7sg4h\" (UID: \"79d1ba63-f8be-449c-bd90-3e9eb0276d8e\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7sg4h" Nov 28 07:08:08 crc kubenswrapper[4946]: I1128 07:08:08.627586 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7sg4h" Nov 28 07:08:08 crc kubenswrapper[4946]: I1128 07:08:08.883090 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7sg4h"] Nov 28 07:08:09 crc kubenswrapper[4946]: I1128 07:08:09.686599 4946 generic.go:334] "Generic (PLEG): container finished" podID="79d1ba63-f8be-449c-bd90-3e9eb0276d8e" containerID="c7c4c2dabb4617993f261945e7949851aaa5576f865d119b288a16de83753f8d" exitCode=0 Nov 28 07:08:09 crc kubenswrapper[4946]: I1128 07:08:09.686680 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7sg4h" event={"ID":"79d1ba63-f8be-449c-bd90-3e9eb0276d8e","Type":"ContainerDied","Data":"c7c4c2dabb4617993f261945e7949851aaa5576f865d119b288a16de83753f8d"} Nov 28 07:08:09 crc kubenswrapper[4946]: I1128 07:08:09.686752 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7sg4h" event={"ID":"79d1ba63-f8be-449c-bd90-3e9eb0276d8e","Type":"ContainerStarted","Data":"1038f20dfbd82cd5a4854152204d2f24ad2ad2245909541928c944572bdfc77b"} Nov 28 07:08:11 crc kubenswrapper[4946]: I1128 07:08:11.702602 4946 generic.go:334] "Generic (PLEG): container finished" podID="79d1ba63-f8be-449c-bd90-3e9eb0276d8e" containerID="ddbe84d02a3292f78722d776e8ec5cf25c762459f186401a2c139efe6cd56c05" exitCode=0 Nov 28 07:08:11 crc kubenswrapper[4946]: I1128 07:08:11.702683 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7sg4h" event={"ID":"79d1ba63-f8be-449c-bd90-3e9eb0276d8e","Type":"ContainerDied","Data":"ddbe84d02a3292f78722d776e8ec5cf25c762459f186401a2c139efe6cd56c05"} Nov 28 07:08:12 crc kubenswrapper[4946]: I1128 07:08:12.713134 4946 generic.go:334] "Generic (PLEG): container finished" podID="79d1ba63-f8be-449c-bd90-3e9eb0276d8e" containerID="b93b8b8ade892990338cf56e39adfaa47eaab5492208609b56c3996571054fff" exitCode=0 Nov 28 07:08:12 crc kubenswrapper[4946]: I1128 
07:08:12.713228 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7sg4h" event={"ID":"79d1ba63-f8be-449c-bd90-3e9eb0276d8e","Type":"ContainerDied","Data":"b93b8b8ade892990338cf56e39adfaa47eaab5492208609b56c3996571054fff"} Nov 28 07:08:13 crc kubenswrapper[4946]: I1128 07:08:13.965934 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7sg4h" Nov 28 07:08:14 crc kubenswrapper[4946]: I1128 07:08:14.054989 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6sh6\" (UniqueName: \"kubernetes.io/projected/79d1ba63-f8be-449c-bd90-3e9eb0276d8e-kube-api-access-t6sh6\") pod \"79d1ba63-f8be-449c-bd90-3e9eb0276d8e\" (UID: \"79d1ba63-f8be-449c-bd90-3e9eb0276d8e\") " Nov 28 07:08:14 crc kubenswrapper[4946]: I1128 07:08:14.055658 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/79d1ba63-f8be-449c-bd90-3e9eb0276d8e-bundle\") pod \"79d1ba63-f8be-449c-bd90-3e9eb0276d8e\" (UID: \"79d1ba63-f8be-449c-bd90-3e9eb0276d8e\") " Nov 28 07:08:14 crc kubenswrapper[4946]: I1128 07:08:14.055815 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/79d1ba63-f8be-449c-bd90-3e9eb0276d8e-util\") pod \"79d1ba63-f8be-449c-bd90-3e9eb0276d8e\" (UID: \"79d1ba63-f8be-449c-bd90-3e9eb0276d8e\") " Nov 28 07:08:14 crc kubenswrapper[4946]: I1128 07:08:14.056268 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79d1ba63-f8be-449c-bd90-3e9eb0276d8e-bundle" (OuterVolumeSpecName: "bundle") pod "79d1ba63-f8be-449c-bd90-3e9eb0276d8e" (UID: "79d1ba63-f8be-449c-bd90-3e9eb0276d8e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:08:14 crc kubenswrapper[4946]: I1128 07:08:14.062260 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79d1ba63-f8be-449c-bd90-3e9eb0276d8e-kube-api-access-t6sh6" (OuterVolumeSpecName: "kube-api-access-t6sh6") pod "79d1ba63-f8be-449c-bd90-3e9eb0276d8e" (UID: "79d1ba63-f8be-449c-bd90-3e9eb0276d8e"). InnerVolumeSpecName "kube-api-access-t6sh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:08:14 crc kubenswrapper[4946]: I1128 07:08:14.069841 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79d1ba63-f8be-449c-bd90-3e9eb0276d8e-util" (OuterVolumeSpecName: "util") pod "79d1ba63-f8be-449c-bd90-3e9eb0276d8e" (UID: "79d1ba63-f8be-449c-bd90-3e9eb0276d8e"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:08:14 crc kubenswrapper[4946]: I1128 07:08:14.157170 4946 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/79d1ba63-f8be-449c-bd90-3e9eb0276d8e-util\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:14 crc kubenswrapper[4946]: I1128 07:08:14.157220 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6sh6\" (UniqueName: \"kubernetes.io/projected/79d1ba63-f8be-449c-bd90-3e9eb0276d8e-kube-api-access-t6sh6\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:14 crc kubenswrapper[4946]: I1128 07:08:14.157238 4946 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/79d1ba63-f8be-449c-bd90-3e9eb0276d8e-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:08:14 crc kubenswrapper[4946]: I1128 07:08:14.729501 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7sg4h" event={"ID":"79d1ba63-f8be-449c-bd90-3e9eb0276d8e","Type":"ContainerDied","Data":"1038f20dfbd82cd5a4854152204d2f24ad2ad2245909541928c944572bdfc77b"} Nov 28 07:08:14 crc kubenswrapper[4946]: I1128 07:08:14.729560 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1038f20dfbd82cd5a4854152204d2f24ad2ad2245909541928c944572bdfc77b" Nov 28 07:08:14 crc kubenswrapper[4946]: I1128 07:08:14.729571 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7sg4h" Nov 28 07:08:15 crc kubenswrapper[4946]: I1128 07:08:15.664146 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-586jl"] Nov 28 07:08:15 crc kubenswrapper[4946]: E1128 07:08:15.664496 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79d1ba63-f8be-449c-bd90-3e9eb0276d8e" containerName="pull" Nov 28 07:08:15 crc kubenswrapper[4946]: I1128 07:08:15.664513 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="79d1ba63-f8be-449c-bd90-3e9eb0276d8e" containerName="pull" Nov 28 07:08:15 crc kubenswrapper[4946]: E1128 07:08:15.664527 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79d1ba63-f8be-449c-bd90-3e9eb0276d8e" containerName="extract" Nov 28 07:08:15 crc kubenswrapper[4946]: I1128 07:08:15.664536 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="79d1ba63-f8be-449c-bd90-3e9eb0276d8e" containerName="extract" Nov 28 07:08:15 crc kubenswrapper[4946]: E1128 07:08:15.664549 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79d1ba63-f8be-449c-bd90-3e9eb0276d8e" containerName="util" Nov 28 07:08:15 crc kubenswrapper[4946]: I1128 07:08:15.664557 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="79d1ba63-f8be-449c-bd90-3e9eb0276d8e" containerName="util" Nov 28 07:08:15 crc kubenswrapper[4946]: I1128 07:08:15.664678 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="79d1ba63-f8be-449c-bd90-3e9eb0276d8e" containerName="extract" Nov 28 07:08:15 crc kubenswrapper[4946]: I1128 07:08:15.665129 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-586jl" Nov 28 07:08:15 crc kubenswrapper[4946]: I1128 07:08:15.667032 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-4qqff" Nov 28 07:08:15 crc kubenswrapper[4946]: I1128 07:08:15.668791 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Nov 28 07:08:15 crc kubenswrapper[4946]: I1128 07:08:15.671169 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Nov 28 07:08:15 crc kubenswrapper[4946]: I1128 07:08:15.678130 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-586jl"] Nov 28 07:08:15 crc kubenswrapper[4946]: I1128 07:08:15.782751 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqfrs\" (UniqueName: \"kubernetes.io/projected/3c0f21c1-4c34-4a8a-9533-2fe283ff8d16-kube-api-access-jqfrs\") pod \"nmstate-operator-5b5b58f5c8-586jl\" (UID: \"3c0f21c1-4c34-4a8a-9533-2fe283ff8d16\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-586jl" Nov 28 07:08:15 crc kubenswrapper[4946]: I1128 07:08:15.883853 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqfrs\" (UniqueName: \"kubernetes.io/projected/3c0f21c1-4c34-4a8a-9533-2fe283ff8d16-kube-api-access-jqfrs\") pod \"nmstate-operator-5b5b58f5c8-586jl\" (UID: \"3c0f21c1-4c34-4a8a-9533-2fe283ff8d16\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-586jl" Nov 28 07:08:15 crc kubenswrapper[4946]: I1128 07:08:15.904616 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqfrs\" (UniqueName: \"kubernetes.io/projected/3c0f21c1-4c34-4a8a-9533-2fe283ff8d16-kube-api-access-jqfrs\") pod \"nmstate-operator-5b5b58f5c8-586jl\" (UID: \"3c0f21c1-4c34-4a8a-9533-2fe283ff8d16\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-586jl" Nov 28 07:08:15 crc kubenswrapper[4946]: I1128 07:08:15.981068 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-586jl" Nov 28 07:08:16 crc kubenswrapper[4946]: I1128 07:08:16.407228 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-586jl"] Nov 28 07:08:16 crc kubenswrapper[4946]: W1128 07:08:16.419317 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c0f21c1_4c34_4a8a_9533_2fe283ff8d16.slice/crio-462af83bfb71b5908f69df777378e4ae4a2973669f4ec24c41f5576a52f496af WatchSource:0}: Error finding container 462af83bfb71b5908f69df777378e4ae4a2973669f4ec24c41f5576a52f496af: Status 404 returned error can't find the container with id 462af83bfb71b5908f69df777378e4ae4a2973669f4ec24c41f5576a52f496af Nov 28 07:08:16 crc kubenswrapper[4946]: I1128 07:08:16.744553 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-586jl" event={"ID":"3c0f21c1-4c34-4a8a-9533-2fe283ff8d16","Type":"ContainerStarted","Data":"462af83bfb71b5908f69df777378e4ae4a2973669f4ec24c41f5576a52f496af"} Nov 28 07:08:20 crc kubenswrapper[4946]: I1128 07:08:20.777352 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-586jl" event={"ID":"3c0f21c1-4c34-4a8a-9533-2fe283ff8d16","Type":"ContainerStarted","Data":"957c1bf051b48aadfc160fe0218ceb563834d1ccabab7d3fdf3ccbd23a8d39c7"} Nov 28 07:08:20 crc kubenswrapper[4946]: I1128 07:08:20.803783 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-586jl" podStartSLOduration=1.874852251 podStartE2EDuration="5.803749117s" podCreationTimestamp="2025-11-28 07:08:15 +0000 UTC" firstStartedPulling="2025-11-28 07:08:16.422253614 +0000 UTC m=+950.800318725" lastFinishedPulling="2025-11-28 07:08:20.35115048 +0000 UTC m=+954.729215591" observedRunningTime="2025-11-28 07:08:20.800979128 +0000 UTC m=+955.179044299" watchObservedRunningTime="2025-11-28 07:08:20.803749117 +0000 UTC m=+955.181814268" Nov 28 07:08:21 crc kubenswrapper[4946]: I1128 07:08:21.816011 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-w7xkb"] Nov 28 07:08:21 crc kubenswrapper[4946]: I1128 07:08:21.817298 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-w7xkb" Nov 28 07:08:21 crc kubenswrapper[4946]: I1128 07:08:21.819860 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-z78z9" Nov 28 07:08:21 crc kubenswrapper[4946]: I1128 07:08:21.829007 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-w7xkb"] Nov 28 07:08:21 crc kubenswrapper[4946]: I1128 07:08:21.845367 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-xnjdw"] Nov 28 07:08:21 crc kubenswrapper[4946]: I1128 07:08:21.846475 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-xnjdw" Nov 28 07:08:21 crc kubenswrapper[4946]: I1128 07:08:21.849772 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Nov 28 07:08:21 crc kubenswrapper[4946]: I1128 07:08:21.872334 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-xnjdw"] Nov 28 07:08:21 crc kubenswrapper[4946]: I1128 07:08:21.882171 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-pbwhg"] Nov 28 07:08:21 crc kubenswrapper[4946]: I1128 07:08:21.893043 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-pbwhg" Nov 28 07:08:21 crc kubenswrapper[4946]: I1128 07:08:21.980253 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-bzbsm"] Nov 28 07:08:21 crc kubenswrapper[4946]: I1128 07:08:21.981260 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-bzbsm" Nov 28 07:08:21 crc kubenswrapper[4946]: I1128 07:08:21.984270 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-drsb8" Nov 28 07:08:21 crc kubenswrapper[4946]: I1128 07:08:21.984783 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Nov 28 07:08:21 crc kubenswrapper[4946]: I1128 07:08:21.985751 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Nov 28 07:08:21 crc kubenswrapper[4946]: I1128 07:08:21.990381 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/4c8ab8df-c0e1-476a-a5aa-edc322f5c62c-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-xnjdw\" (UID: \"4c8ab8df-c0e1-476a-a5aa-edc322f5c62c\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-xnjdw" Nov 28 07:08:21 crc kubenswrapper[4946]: I1128 07:08:21.990496 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzm26\" (UniqueName: \"kubernetes.io/projected/da83efd2-a141-4a4d-86c6-cbf48bbc47d9-kube-api-access-mzm26\") pod \"nmstate-metrics-7f946cbc9-w7xkb\" (UID: \"da83efd2-a141-4a4d-86c6-cbf48bbc47d9\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-w7xkb" Nov 28 07:08:21 crc kubenswrapper[4946]: I1128 07:08:21.990539 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqqns\" (UniqueName: \"kubernetes.io/projected/4c8ab8df-c0e1-476a-a5aa-edc322f5c62c-kube-api-access-qqqns\") pod \"nmstate-webhook-5f6d4c5ccb-xnjdw\" (UID: \"4c8ab8df-c0e1-476a-a5aa-edc322f5c62c\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-xnjdw" Nov 28 07:08:21 crc kubenswrapper[4946]: I1128 07:08:21.990580 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/cda20c55-deee-4488-af64-283f20a5679f-dbus-socket\") pod \"nmstate-handler-pbwhg\" (UID: \"cda20c55-deee-4488-af64-283f20a5679f\") " pod="openshift-nmstate/nmstate-handler-pbwhg" Nov 28 07:08:21 crc kubenswrapper[4946]: I1128 07:08:21.990603 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" 
(UniqueName: \"kubernetes.io/host-path/cda20c55-deee-4488-af64-283f20a5679f-nmstate-lock\") pod \"nmstate-handler-pbwhg\" (UID: \"cda20c55-deee-4488-af64-283f20a5679f\") " pod="openshift-nmstate/nmstate-handler-pbwhg" Nov 28 07:08:21 crc kubenswrapper[4946]: I1128 07:08:21.990707 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtplp\" (UniqueName: \"kubernetes.io/projected/cda20c55-deee-4488-af64-283f20a5679f-kube-api-access-rtplp\") pod \"nmstate-handler-pbwhg\" (UID: \"cda20c55-deee-4488-af64-283f20a5679f\") " pod="openshift-nmstate/nmstate-handler-pbwhg" Nov 28 07:08:21 crc kubenswrapper[4946]: I1128 07:08:21.990743 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/cda20c55-deee-4488-af64-283f20a5679f-ovs-socket\") pod \"nmstate-handler-pbwhg\" (UID: \"cda20c55-deee-4488-af64-283f20a5679f\") " pod="openshift-nmstate/nmstate-handler-pbwhg" Nov 28 07:08:21 crc kubenswrapper[4946]: I1128 07:08:21.999676 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-bzbsm"] Nov 28 07:08:22 crc kubenswrapper[4946]: I1128 07:08:22.092149 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzm26\" (UniqueName: \"kubernetes.io/projected/da83efd2-a141-4a4d-86c6-cbf48bbc47d9-kube-api-access-mzm26\") pod \"nmstate-metrics-7f946cbc9-w7xkb\" (UID: \"da83efd2-a141-4a4d-86c6-cbf48bbc47d9\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-w7xkb" Nov 28 07:08:22 crc kubenswrapper[4946]: I1128 07:08:22.092242 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a7636476-2648-4dcb-8349-15d10c0f5664-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-bzbsm\" (UID: \"a7636476-2648-4dcb-8349-15d10c0f5664\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-bzbsm" Nov 28 07:08:22 crc kubenswrapper[4946]: I1128 07:08:22.092301 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqqns\" (UniqueName: \"kubernetes.io/projected/4c8ab8df-c0e1-476a-a5aa-edc322f5c62c-kube-api-access-qqqns\") pod \"nmstate-webhook-5f6d4c5ccb-xnjdw\" (UID: \"4c8ab8df-c0e1-476a-a5aa-edc322f5c62c\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-xnjdw" Nov 28 07:08:22 crc kubenswrapper[4946]: I1128 07:08:22.092334 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/cda20c55-deee-4488-af64-283f20a5679f-dbus-socket\") pod \"nmstate-handler-pbwhg\" (UID: \"cda20c55-deee-4488-af64-283f20a5679f\") " pod="openshift-nmstate/nmstate-handler-pbwhg" Nov 28 07:08:22 crc kubenswrapper[4946]: I1128 07:08:22.092412 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/cda20c55-deee-4488-af64-283f20a5679f-nmstate-lock\") pod \"nmstate-handler-pbwhg\" (UID: \"cda20c55-deee-4488-af64-283f20a5679f\") " pod="openshift-nmstate/nmstate-handler-pbwhg" Nov 28 07:08:22 crc kubenswrapper[4946]: I1128 07:08:22.092492 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkr8q\" (UniqueName: \"kubernetes.io/projected/a7636476-2648-4dcb-8349-15d10c0f5664-kube-api-access-jkr8q\") pod 
\"nmstate-console-plugin-7fbb5f6569-bzbsm\" (UID: \"a7636476-2648-4dcb-8349-15d10c0f5664\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-bzbsm" Nov 28 07:08:22 crc kubenswrapper[4946]: I1128 07:08:22.092560 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtplp\" (UniqueName: \"kubernetes.io/projected/cda20c55-deee-4488-af64-283f20a5679f-kube-api-access-rtplp\") pod \"nmstate-handler-pbwhg\" (UID: \"cda20c55-deee-4488-af64-283f20a5679f\") " pod="openshift-nmstate/nmstate-handler-pbwhg" Nov 28 07:08:22 crc kubenswrapper[4946]: I1128 07:08:22.092602 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/cda20c55-deee-4488-af64-283f20a5679f-ovs-socket\") pod \"nmstate-handler-pbwhg\" (UID: \"cda20c55-deee-4488-af64-283f20a5679f\") " pod="openshift-nmstate/nmstate-handler-pbwhg" Nov 28 07:08:22 crc kubenswrapper[4946]: I1128 07:08:22.092674 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/cda20c55-deee-4488-af64-283f20a5679f-ovs-socket\") pod \"nmstate-handler-pbwhg\" (UID: \"cda20c55-deee-4488-af64-283f20a5679f\") " pod="openshift-nmstate/nmstate-handler-pbwhg" Nov 28 07:08:22 crc kubenswrapper[4946]: I1128 07:08:22.092728 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/cda20c55-deee-4488-af64-283f20a5679f-nmstate-lock\") pod \"nmstate-handler-pbwhg\" (UID: \"cda20c55-deee-4488-af64-283f20a5679f\") " pod="openshift-nmstate/nmstate-handler-pbwhg" Nov 28 07:08:22 crc kubenswrapper[4946]: I1128 07:08:22.092777 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/4c8ab8df-c0e1-476a-a5aa-edc322f5c62c-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-xnjdw\" (UID: \"4c8ab8df-c0e1-476a-a5aa-edc322f5c62c\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-xnjdw" Nov 28 07:08:22 crc kubenswrapper[4946]: I1128 07:08:22.092826 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/cda20c55-deee-4488-af64-283f20a5679f-dbus-socket\") pod \"nmstate-handler-pbwhg\" (UID: \"cda20c55-deee-4488-af64-283f20a5679f\") " pod="openshift-nmstate/nmstate-handler-pbwhg" Nov 28 07:08:22 crc kubenswrapper[4946]: E1128 07:08:22.092860 4946 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Nov 28 07:08:22 crc kubenswrapper[4946]: I1128 07:08:22.092904 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a7636476-2648-4dcb-8349-15d10c0f5664-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-bzbsm\" (UID: \"a7636476-2648-4dcb-8349-15d10c0f5664\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-bzbsm" Nov 28 07:08:22 crc kubenswrapper[4946]: E1128 07:08:22.092963 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4c8ab8df-c0e1-476a-a5aa-edc322f5c62c-tls-key-pair podName:4c8ab8df-c0e1-476a-a5aa-edc322f5c62c nodeName:}" failed. No retries permitted until 2025-11-28 07:08:22.592933914 +0000 UTC m=+956.970999245 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/4c8ab8df-c0e1-476a-a5aa-edc322f5c62c-tls-key-pair") pod "nmstate-webhook-5f6d4c5ccb-xnjdw" (UID: "4c8ab8df-c0e1-476a-a5aa-edc322f5c62c") : secret "openshift-nmstate-webhook" not found Nov 28 07:08:22 crc kubenswrapper[4946]: I1128 07:08:22.122532 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqqns\" (UniqueName: \"kubernetes.io/projected/4c8ab8df-c0e1-476a-a5aa-edc322f5c62c-kube-api-access-qqqns\") pod \"nmstate-webhook-5f6d4c5ccb-xnjdw\" (UID: \"4c8ab8df-c0e1-476a-a5aa-edc322f5c62c\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-xnjdw" Nov 28 07:08:22 crc kubenswrapper[4946]: I1128 07:08:22.133830 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtplp\" (UniqueName: \"kubernetes.io/projected/cda20c55-deee-4488-af64-283f20a5679f-kube-api-access-rtplp\") pod \"nmstate-handler-pbwhg\" (UID: \"cda20c55-deee-4488-af64-283f20a5679f\") " pod="openshift-nmstate/nmstate-handler-pbwhg" Nov 28 07:08:22 crc kubenswrapper[4946]: I1128 07:08:22.140951 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzm26\" (UniqueName: \"kubernetes.io/projected/da83efd2-a141-4a4d-86c6-cbf48bbc47d9-kube-api-access-mzm26\") pod \"nmstate-metrics-7f946cbc9-w7xkb\" (UID: \"da83efd2-a141-4a4d-86c6-cbf48bbc47d9\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-w7xkb" Nov 28 07:08:22 crc kubenswrapper[4946]: I1128 07:08:22.185378 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-69b754d995-hflc9"] Nov 28 07:08:22 crc kubenswrapper[4946]: I1128 07:08:22.186146 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-69b754d995-hflc9" Nov 28 07:08:22 crc kubenswrapper[4946]: I1128 07:08:22.194929 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a7636476-2648-4dcb-8349-15d10c0f5664-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-bzbsm\" (UID: \"a7636476-2648-4dcb-8349-15d10c0f5664\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-bzbsm" Nov 28 07:08:22 crc kubenswrapper[4946]: I1128 07:08:22.195034 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a7636476-2648-4dcb-8349-15d10c0f5664-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-bzbsm\" (UID: \"a7636476-2648-4dcb-8349-15d10c0f5664\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-bzbsm" Nov 28 07:08:22 crc kubenswrapper[4946]: I1128 07:08:22.195070 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkr8q\" (UniqueName: \"kubernetes.io/projected/a7636476-2648-4dcb-8349-15d10c0f5664-kube-api-access-jkr8q\") pod \"nmstate-console-plugin-7fbb5f6569-bzbsm\" (UID: \"a7636476-2648-4dcb-8349-15d10c0f5664\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-bzbsm" Nov 28 07:08:22 crc kubenswrapper[4946]: I1128 07:08:22.196078 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a7636476-2648-4dcb-8349-15d10c0f5664-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-bzbsm\" (UID: \"a7636476-2648-4dcb-8349-15d10c0f5664\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-bzbsm" Nov 28 07:08:22 crc kubenswrapper[4946]: 
E1128 07:08:22.196226 4946 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Nov 28 07:08:22 crc kubenswrapper[4946]: E1128 07:08:22.196302 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a7636476-2648-4dcb-8349-15d10c0f5664-plugin-serving-cert podName:a7636476-2648-4dcb-8349-15d10c0f5664 nodeName:}" failed. No retries permitted until 2025-11-28 07:08:22.696283589 +0000 UTC m=+957.074348700 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/a7636476-2648-4dcb-8349-15d10c0f5664-plugin-serving-cert") pod "nmstate-console-plugin-7fbb5f6569-bzbsm" (UID: "a7636476-2648-4dcb-8349-15d10c0f5664") : secret "plugin-serving-cert" not found Nov 28 07:08:22 crc kubenswrapper[4946]: I1128 07:08:22.210024 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-pbwhg" Nov 28 07:08:22 crc kubenswrapper[4946]: I1128 07:08:22.231650 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkr8q\" (UniqueName: \"kubernetes.io/projected/a7636476-2648-4dcb-8349-15d10c0f5664-kube-api-access-jkr8q\") pod \"nmstate-console-plugin-7fbb5f6569-bzbsm\" (UID: \"a7636476-2648-4dcb-8349-15d10c0f5664\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-bzbsm" Nov 28 07:08:22 crc kubenswrapper[4946]: W1128 07:08:22.235076 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcda20c55_deee_4488_af64_283f20a5679f.slice/crio-9c0fae1251d6abccca980953f0729c60497790a795e92f99cfebc45e20bf773e WatchSource:0}: Error finding container 9c0fae1251d6abccca980953f0729c60497790a795e92f99cfebc45e20bf773e: Status 404 returned error can't find the container with id 9c0fae1251d6abccca980953f0729c60497790a795e92f99cfebc45e20bf773e Nov 28 07:08:22 crc kubenswrapper[4946]: I1128 07:08:22.256938 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-69b754d995-hflc9"] Nov 28 07:08:22 crc kubenswrapper[4946]: I1128 07:08:22.297168 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz8kn\" (UniqueName: \"kubernetes.io/projected/df584e31-ede0-4ba7-a45a-b96674152d27-kube-api-access-jz8kn\") pod \"console-69b754d995-hflc9\" (UID: \"df584e31-ede0-4ba7-a45a-b96674152d27\") " pod="openshift-console/console-69b754d995-hflc9" Nov 28 07:08:22 crc kubenswrapper[4946]: I1128 07:08:22.297321 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/df584e31-ede0-4ba7-a45a-b96674152d27-oauth-serving-cert\") pod \"console-69b754d995-hflc9\" (UID: \"df584e31-ede0-4ba7-a45a-b96674152d27\") " pod="openshift-console/console-69b754d995-hflc9" Nov 28 07:08:22 crc kubenswrapper[4946]: I1128 07:08:22.297384 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/df584e31-ede0-4ba7-a45a-b96674152d27-service-ca\") pod \"console-69b754d995-hflc9\" (UID: \"df584e31-ede0-4ba7-a45a-b96674152d27\") " pod="openshift-console/console-69b754d995-hflc9" Nov 28 07:08:22 crc kubenswrapper[4946]: I1128 07:08:22.297407 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" 
Nov 28 07:08:22 crc kubenswrapper[4946]: I1128 07:08:22.297619 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/df584e31-ede0-4ba7-a45a-b96674152d27-console-oauth-config\") pod \"console-69b754d995-hflc9\" (UID: \"df584e31-ede0-4ba7-a45a-b96674152d27\") " pod="openshift-console/console-69b754d995-hflc9"
Nov 28 07:08:22 crc kubenswrapper[4946]: I1128 07:08:22.297662 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/df584e31-ede0-4ba7-a45a-b96674152d27-console-config\") pod \"console-69b754d995-hflc9\" (UID: \"df584e31-ede0-4ba7-a45a-b96674152d27\") " pod="openshift-console/console-69b754d995-hflc9"
Nov 28 07:08:22 crc kubenswrapper[4946]: I1128 07:08:22.297858 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/df584e31-ede0-4ba7-a45a-b96674152d27-console-serving-cert\") pod \"console-69b754d995-hflc9\" (UID: \"df584e31-ede0-4ba7-a45a-b96674152d27\") " pod="openshift-console/console-69b754d995-hflc9"
Nov 28 07:08:22 crc kubenswrapper[4946]: I1128 07:08:22.399504 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/df584e31-ede0-4ba7-a45a-b96674152d27-console-oauth-config\") pod \"console-69b754d995-hflc9\" (UID: \"df584e31-ede0-4ba7-a45a-b96674152d27\") " pod="openshift-console/console-69b754d995-hflc9"
Nov 28 07:08:22 crc kubenswrapper[4946]: I1128 07:08:22.399595 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/df584e31-ede0-4ba7-a45a-b96674152d27-console-config\") pod \"console-69b754d995-hflc9\" (UID: \"df584e31-ede0-4ba7-a45a-b96674152d27\") " pod="openshift-console/console-69b754d995-hflc9"
Nov 28 07:08:22 crc kubenswrapper[4946]: I1128 07:08:22.399677 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/df584e31-ede0-4ba7-a45a-b96674152d27-console-serving-cert\") pod \"console-69b754d995-hflc9\" (UID: \"df584e31-ede0-4ba7-a45a-b96674152d27\") " pod="openshift-console/console-69b754d995-hflc9"
Nov 28 07:08:22 crc kubenswrapper[4946]: I1128 07:08:22.399737 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jz8kn\" (UniqueName: \"kubernetes.io/projected/df584e31-ede0-4ba7-a45a-b96674152d27-kube-api-access-jz8kn\") pod \"console-69b754d995-hflc9\" (UID: \"df584e31-ede0-4ba7-a45a-b96674152d27\") " pod="openshift-console/console-69b754d995-hflc9"
Nov 28 07:08:22 crc kubenswrapper[4946]: I1128 07:08:22.400289 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/df584e31-ede0-4ba7-a45a-b96674152d27-oauth-serving-cert\") pod \"console-69b754d995-hflc9\" (UID: \"df584e31-ede0-4ba7-a45a-b96674152d27\") " pod="openshift-console/console-69b754d995-hflc9"
Nov 28 07:08:22 crc kubenswrapper[4946]: I1128 07:08:22.400806 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/df584e31-ede0-4ba7-a45a-b96674152d27-console-config\") pod \"console-69b754d995-hflc9\" (UID: \"df584e31-ede0-4ba7-a45a-b96674152d27\") " pod="openshift-console/console-69b754d995-hflc9"
Nov 28 07:08:22 crc kubenswrapper[4946]: I1128 07:08:22.401274 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/df584e31-ede0-4ba7-a45a-b96674152d27-oauth-serving-cert\") pod \"console-69b754d995-hflc9\" (UID: \"df584e31-ede0-4ba7-a45a-b96674152d27\") " pod="openshift-console/console-69b754d995-hflc9"
Nov 28 07:08:22 crc kubenswrapper[4946]: I1128 07:08:22.401363 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/df584e31-ede0-4ba7-a45a-b96674152d27-service-ca\") pod \"console-69b754d995-hflc9\" (UID: \"df584e31-ede0-4ba7-a45a-b96674152d27\") " pod="openshift-console/console-69b754d995-hflc9"
Nov 28 07:08:22 crc kubenswrapper[4946]: I1128 07:08:22.401394 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df584e31-ede0-4ba7-a45a-b96674152d27-trusted-ca-bundle\") pod \"console-69b754d995-hflc9\" (UID: \"df584e31-ede0-4ba7-a45a-b96674152d27\") " pod="openshift-console/console-69b754d995-hflc9"
Nov 28 07:08:22 crc kubenswrapper[4946]: I1128 07:08:22.402368 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/df584e31-ede0-4ba7-a45a-b96674152d27-service-ca\") pod \"console-69b754d995-hflc9\" (UID: \"df584e31-ede0-4ba7-a45a-b96674152d27\") " pod="openshift-console/console-69b754d995-hflc9"
Nov 28 07:08:22 crc kubenswrapper[4946]: I1128 07:08:22.402596 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df584e31-ede0-4ba7-a45a-b96674152d27-trusted-ca-bundle\") pod \"console-69b754d995-hflc9\" (UID: \"df584e31-ede0-4ba7-a45a-b96674152d27\") " pod="openshift-console/console-69b754d995-hflc9"
Nov 28 07:08:22 crc kubenswrapper[4946]: I1128 07:08:22.404205 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/df584e31-ede0-4ba7-a45a-b96674152d27-console-oauth-config\") pod \"console-69b754d995-hflc9\" (UID: \"df584e31-ede0-4ba7-a45a-b96674152d27\") " pod="openshift-console/console-69b754d995-hflc9"
Nov 28 07:08:22 crc kubenswrapper[4946]: I1128 07:08:22.404437 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/df584e31-ede0-4ba7-a45a-b96674152d27-console-serving-cert\") pod \"console-69b754d995-hflc9\" (UID: \"df584e31-ede0-4ba7-a45a-b96674152d27\") " pod="openshift-console/console-69b754d995-hflc9"
Nov 28 07:08:22 crc kubenswrapper[4946]: I1128 07:08:22.435167 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-w7xkb"
Nov 28 07:08:22 crc kubenswrapper[4946]: I1128 07:08:22.435334 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jz8kn\" (UniqueName: \"kubernetes.io/projected/df584e31-ede0-4ba7-a45a-b96674152d27-kube-api-access-jz8kn\") pod \"console-69b754d995-hflc9\" (UID: \"df584e31-ede0-4ba7-a45a-b96674152d27\") " pod="openshift-console/console-69b754d995-hflc9"
Nov 28 07:08:22 crc kubenswrapper[4946]: I1128 07:08:22.504185 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-69b754d995-hflc9"
Nov 28 07:08:22 crc kubenswrapper[4946]: I1128 07:08:22.606075 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/4c8ab8df-c0e1-476a-a5aa-edc322f5c62c-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-xnjdw\" (UID: \"4c8ab8df-c0e1-476a-a5aa-edc322f5c62c\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-xnjdw"
Nov 28 07:08:22 crc kubenswrapper[4946]: I1128 07:08:22.612121 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/4c8ab8df-c0e1-476a-a5aa-edc322f5c62c-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-xnjdw\" (UID: \"4c8ab8df-c0e1-476a-a5aa-edc322f5c62c\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-xnjdw"
Nov 28 07:08:22 crc kubenswrapper[4946]: I1128 07:08:22.709392 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a7636476-2648-4dcb-8349-15d10c0f5664-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-bzbsm\" (UID: \"a7636476-2648-4dcb-8349-15d10c0f5664\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-bzbsm"
Nov 28 07:08:22 crc kubenswrapper[4946]: I1128 07:08:22.715267 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a7636476-2648-4dcb-8349-15d10c0f5664-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-bzbsm\" (UID: \"a7636476-2648-4dcb-8349-15d10c0f5664\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-bzbsm"
Nov 28 07:08:22 crc kubenswrapper[4946]: I1128 07:08:22.728164 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-69b754d995-hflc9"]
Nov 28 07:08:22 crc kubenswrapper[4946]: I1128 07:08:22.763242 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-xnjdw"
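Every volume above passes through the same three log points: VerifyControllerAttachedVolume (reconciler_common.go:245), MountVolume started (reconciler_common.go:218), and MountVolume.SetUp succeeded (operation_generator.go:637). That is the volume manager's reconcile pattern: compare a desired set of mounts against the actual state and issue operations for the difference. A toy sketch of that loop is below; the types and names are illustrative, not kubelet's actual volume manager.

```go
package main

import "fmt"

// Toy desired-state / actual-state reconcile loop in the spirit of the
// verify -> mount started -> SetUp succeeded sequence logged above.
type volume struct{ name, pod string }

func reconcile(desired []volume, mounted map[string]bool) {
	for _, v := range desired {
		if mounted[v.name] {
			continue // already reflected in the actual state of the world
		}
		fmt.Printf("VerifyControllerAttachedVolume started for volume %q pod %q\n", v.name, v.pod)
		fmt.Printf("MountVolume started for volume %q\n", v.name)
		// ... perform the mount; on success, record it in actual state ...
		mounted[v.name] = true
		fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", v.name)
	}
}

func main() {
	desired := []volume{
		{"console-config", "console-69b754d995-hflc9"},
		{"console-serving-cert", "console-69b754d995-hflc9"},
	}
	reconcile(desired, map[string]bool{})
}
```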
Nov 28 07:08:22 crc kubenswrapper[4946]: I1128 07:08:22.788342 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-69b754d995-hflc9" event={"ID":"df584e31-ede0-4ba7-a45a-b96674152d27","Type":"ContainerStarted","Data":"fd5ca741b8c984aa8804e58a3eb28d9f058b093b87bcb5614e8f21f79f233993"}
Nov 28 07:08:22 crc kubenswrapper[4946]: I1128 07:08:22.790078 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-pbwhg" event={"ID":"cda20c55-deee-4488-af64-283f20a5679f","Type":"ContainerStarted","Data":"9c0fae1251d6abccca980953f0729c60497790a795e92f99cfebc45e20bf773e"}
Nov 28 07:08:22 crc kubenswrapper[4946]: I1128 07:08:22.881662 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-w7xkb"]
Nov 28 07:08:22 crc kubenswrapper[4946]: W1128 07:08:22.894440 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda83efd2_a141_4a4d_86c6_cbf48bbc47d9.slice/crio-801ac00b4ea85ad09e3aca32bda81802e500f0fb202fa62db3189275ec7ff4e8 WatchSource:0}: Error finding container 801ac00b4ea85ad09e3aca32bda81802e500f0fb202fa62db3189275ec7ff4e8: Status 404 returned error can't find the container with id 801ac00b4ea85ad09e3aca32bda81802e500f0fb202fa62db3189275ec7ff4e8
Nov 28 07:08:22 crc kubenswrapper[4946]: I1128 07:08:22.901697 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-bzbsm"
Nov 28 07:08:23 crc kubenswrapper[4946]: I1128 07:08:23.102416 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-bzbsm"]
Nov 28 07:08:23 crc kubenswrapper[4946]: I1128 07:08:23.210020 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-xnjdw"]
Nov 28 07:08:23 crc kubenswrapper[4946]: I1128 07:08:23.802166 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-bzbsm" event={"ID":"a7636476-2648-4dcb-8349-15d10c0f5664","Type":"ContainerStarted","Data":"c583c840cf9fff18d68e57dee8e95e743c20f3f21de0fc3d905fd7d4d4bd65b8"}
Nov 28 07:08:23 crc kubenswrapper[4946]: I1128 07:08:23.804437 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-69b754d995-hflc9" event={"ID":"df584e31-ede0-4ba7-a45a-b96674152d27","Type":"ContainerStarted","Data":"b3cf5311fa28b5cef2e89ec50f2a8e8945202e3505b107df4d70343e1e4ed631"}
Nov 28 07:08:23 crc kubenswrapper[4946]: I1128 07:08:23.806014 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-xnjdw" event={"ID":"4c8ab8df-c0e1-476a-a5aa-edc322f5c62c","Type":"ContainerStarted","Data":"dd83d95d24da888b05c7df393e1812c1e5cddeacfd100860f193b26f6a5f9c86"}
Nov 28 07:08:23 crc kubenswrapper[4946]: I1128 07:08:23.807542 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-w7xkb" event={"ID":"da83efd2-a141-4a4d-86c6-cbf48bbc47d9","Type":"ContainerStarted","Data":"801ac00b4ea85ad09e3aca32bda81802e500f0fb202fa62db3189275ec7ff4e8"}
Nov 28 07:08:23 crc kubenswrapper[4946]: I1128 07:08:23.827782 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-69b754d995-hflc9" podStartSLOduration=1.827764337 podStartE2EDuration="1.827764337s" podCreationTimestamp="2025-11-28 07:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:08:23.823046801 +0000 UTC m=+958.201111912" watchObservedRunningTime="2025-11-28 07:08:23.827764337 +0000 UTC m=+958.205829448"
Nov 28 07:08:26 crc kubenswrapper[4946]: I1128 07:08:26.829988 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-xnjdw" event={"ID":"4c8ab8df-c0e1-476a-a5aa-edc322f5c62c","Type":"ContainerStarted","Data":"97f518980167a593477d57e26fe4803e9c13edf9bc3f499d930de0f70dfa6ad4"}
Nov 28 07:08:26 crc kubenswrapper[4946]: I1128 07:08:26.831103 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-xnjdw"
Nov 28 07:08:26 crc kubenswrapper[4946]: I1128 07:08:26.832998 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-w7xkb" event={"ID":"da83efd2-a141-4a4d-86c6-cbf48bbc47d9","Type":"ContainerStarted","Data":"66486001d26ff83b94914ee83cddf77acd80e4af2cc785d3f2e2c9f6a6478d19"}
Nov 28 07:08:26 crc kubenswrapper[4946]: I1128 07:08:26.836707 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-pbwhg" event={"ID":"cda20c55-deee-4488-af64-283f20a5679f","Type":"ContainerStarted","Data":"100c848f99c372538c172c4a9b4248fac374ec6b80fe128f1e8cad0907bf832d"}
Nov 28 07:08:26 crc kubenswrapper[4946]: I1128 07:08:26.836883 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-pbwhg"
Nov 28 07:08:26 crc kubenswrapper[4946]: I1128 07:08:26.856771 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-xnjdw" podStartSLOduration=2.719158654 podStartE2EDuration="5.8567391s" podCreationTimestamp="2025-11-28 07:08:21 +0000 UTC" firstStartedPulling="2025-11-28 07:08:23.220508667 +0000 UTC m=+957.598573778" lastFinishedPulling="2025-11-28 07:08:26.358089113 +0000 UTC m=+960.736154224" observedRunningTime="2025-11-28 07:08:26.847523092 +0000 UTC m=+961.225588203" watchObservedRunningTime="2025-11-28 07:08:26.8567391 +0000 UTC m=+961.234804221"
Nov 28 07:08:26 crc kubenswrapper[4946]: I1128 07:08:26.878146 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-pbwhg" podStartSLOduration=2.33032948 podStartE2EDuration="5.878090527s" podCreationTimestamp="2025-11-28 07:08:21 +0000 UTC" firstStartedPulling="2025-11-28 07:08:22.237563619 +0000 UTC m=+956.615628730" lastFinishedPulling="2025-11-28 07:08:25.785324666 +0000 UTC m=+960.163389777" observedRunningTime="2025-11-28 07:08:26.870316445 +0000 UTC m=+961.248381576" watchObservedRunningTime="2025-11-28 07:08:26.878090527 +0000 UTC m=+961.256155638"
Nov 28 07:08:27 crc kubenswrapper[4946]: I1128 07:08:27.844340 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-bzbsm" event={"ID":"a7636476-2648-4dcb-8349-15d10c0f5664","Type":"ContainerStarted","Data":"4ad8ece29269a54f98232ef86cddc642e1e80e6766b1430beb24a95cfdcc1931"}
Nov 28 07:08:27 crc kubenswrapper[4946]: I1128 07:08:27.861745 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-bzbsm" podStartSLOduration=2.716138687 podStartE2EDuration="6.86172004s" podCreationTimestamp="2025-11-28 07:08:21 +0000 UTC" firstStartedPulling="2025-11-28 07:08:23.115128572 +0000 UTC m=+957.493193673" lastFinishedPulling="2025-11-28 07:08:27.260709915 +0000 UTC m=+961.638775026" observedRunningTime="2025-11-28 07:08:27.859733991 +0000 UTC m=+962.237799112" watchObservedRunningTime="2025-11-28 07:08:27.86172004 +0000 UTC m=+962.239785151"
Nov 28 07:08:28 crc kubenswrapper[4946]: I1128 07:08:28.855739 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-w7xkb" event={"ID":"da83efd2-a141-4a4d-86c6-cbf48bbc47d9","Type":"ContainerStarted","Data":"28d2afb4f7e39d13bd05bf5e725c0198f49812b1be71336a5e8b69e47fa6ebfb"}
Nov 28 07:08:28 crc kubenswrapper[4946]: I1128 07:08:28.881367 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-w7xkb" podStartSLOduration=2.391125854 podStartE2EDuration="7.881337554s" podCreationTimestamp="2025-11-28 07:08:21 +0000 UTC" firstStartedPulling="2025-11-28 07:08:22.903761457 +0000 UTC m=+957.281826568" lastFinishedPulling="2025-11-28 07:08:28.393973157 +0000 UTC m=+962.772038268" observedRunningTime="2025-11-28 07:08:28.875248544 +0000 UTC m=+963.253313665" watchObservedRunningTime="2025-11-28 07:08:28.881337554 +0000 UTC m=+963.259402675"
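The pod_startup_latency_tracker entries carry two durations: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally subtracts image-pull time (lastFinishedPulling minus firstStartedPulling). The nmstate-metrics entry above checks out: 7.881337554s minus (07:08:28.393973157 − 07:08:22.903761457) = 2.391125854s. The sketch below reproduces that arithmetic with the timestamps copied from the entry; the helper structure is mine, not kubelet's.

```go
package main

import (
	"fmt"
	"time"
)

// Reproduces the SLO arithmetic from the nmstate-metrics startup entry:
// podStartSLOduration = podStartE2EDuration - time spent pulling images.
func main() {
	const layout = "2006-01-02 15:04:05 -0700 MST" // fractional seconds parse implicitly
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-11-28 07:08:21 +0000 UTC")
	running := parse("2025-11-28 07:08:28.881337554 +0000 UTC")
	pullStart := parse("2025-11-28 07:08:22.903761457 +0000 UTC")
	pullEnd := parse("2025-11-28 07:08:28.393973157 +0000 UTC")

	e2e := running.Sub(created)         // 7.881337554s
	slo := e2e - pullEnd.Sub(pullStart) // 2.391125854s
	fmt.Println("podStartE2EDuration:", e2e, "podStartSLOduration:", slo)
}
```

The console pod earlier shows the degenerate case: its image was never pulled (both pull timestamps are the zero time 0001-01-01), so its SLO and E2E durations are identical.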
Nov 28 07:08:32 crc kubenswrapper[4946]: I1128 07:08:32.238108 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-pbwhg"
Nov 28 07:08:32 crc kubenswrapper[4946]: I1128 07:08:32.504838 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-69b754d995-hflc9"
Nov 28 07:08:32 crc kubenswrapper[4946]: I1128 07:08:32.504911 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-69b754d995-hflc9"
Nov 28 07:08:32 crc kubenswrapper[4946]: I1128 07:08:32.509236 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-69b754d995-hflc9"
Nov 28 07:08:32 crc kubenswrapper[4946]: I1128 07:08:32.891934 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-69b754d995-hflc9"
Nov 28 07:08:32 crc kubenswrapper[4946]: I1128 07:08:32.965108 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-r7ztb"]
Nov 28 07:08:42 crc kubenswrapper[4946]: I1128 07:08:42.771274 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-xnjdw"
Nov 28 07:08:54 crc kubenswrapper[4946]: I1128 07:08:54.730911 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 07:08:54 crc kubenswrapper[4946]: I1128 07:08:54.731653 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 07:08:57 crc kubenswrapper[4946]: I1128 07:08:57.510491 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nbfds"]
Nov 28 07:08:57 crc kubenswrapper[4946]: I1128 07:08:57.513889 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nbfds"
Nov 28 07:08:57 crc kubenswrapper[4946]: I1128 07:08:57.518725 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Nov 28 07:08:57 crc kubenswrapper[4946]: I1128 07:08:57.523549 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nbfds"]
Nov 28 07:08:57 crc kubenswrapper[4946]: I1128 07:08:57.638902 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvz7k\" (UniqueName: \"kubernetes.io/projected/0fb75378-f653-400a-8738-81376998a521-kube-api-access-xvz7k\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nbfds\" (UID: \"0fb75378-f653-400a-8738-81376998a521\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nbfds"
Nov 28 07:08:57 crc kubenswrapper[4946]: I1128 07:08:57.639295 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0fb75378-f653-400a-8738-81376998a521-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nbfds\" (UID: \"0fb75378-f653-400a-8738-81376998a521\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nbfds"
Nov 28 07:08:57 crc kubenswrapper[4946]: I1128 07:08:57.639422 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0fb75378-f653-400a-8738-81376998a521-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nbfds\" (UID: \"0fb75378-f653-400a-8738-81376998a521\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nbfds"
Nov 28 07:08:57 crc kubenswrapper[4946]: I1128 07:08:57.740651 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0fb75378-f653-400a-8738-81376998a521-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nbfds\" (UID: \"0fb75378-f653-400a-8738-81376998a521\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nbfds"
Nov 28 07:08:57 crc kubenswrapper[4946]: I1128 07:08:57.740704 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0fb75378-f653-400a-8738-81376998a521-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nbfds\" (UID: \"0fb75378-f653-400a-8738-81376998a521\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nbfds"
Nov 28 07:08:57 crc kubenswrapper[4946]: I1128 07:08:57.740780 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvz7k\" (UniqueName: \"kubernetes.io/projected/0fb75378-f653-400a-8738-81376998a521-kube-api-access-xvz7k\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nbfds\" (UID: \"0fb75378-f653-400a-8738-81376998a521\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nbfds"
Nov 28 07:08:57 crc kubenswrapper[4946]: I1128 07:08:57.741369 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0fb75378-f653-400a-8738-81376998a521-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nbfds\" (UID: \"0fb75378-f653-400a-8738-81376998a521\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nbfds"
Nov 28 07:08:57 crc kubenswrapper[4946]: I1128 07:08:57.741381 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0fb75378-f653-400a-8738-81376998a521-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nbfds\" (UID: \"0fb75378-f653-400a-8738-81376998a521\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nbfds"
Nov 28 07:08:57 crc kubenswrapper[4946]: I1128 07:08:57.761615 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvz7k\" (UniqueName: \"kubernetes.io/projected/0fb75378-f653-400a-8738-81376998a521-kube-api-access-xvz7k\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nbfds\" (UID: \"0fb75378-f653-400a-8738-81376998a521\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nbfds"
Nov 28 07:08:57 crc kubenswrapper[4946]: I1128 07:08:57.835914 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nbfds"
Nov 28 07:08:58 crc kubenswrapper[4946]: I1128 07:08:58.021406 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-r7ztb" podUID="79c0d15c-8fc9-4efd-b1ec-739718f313d9" containerName="console" containerID="cri-o://96045c27504107ddb509a67ac3bd912c5b7570b92a1d10f14ac7150c1754851e" gracePeriod=15
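The "Killing container with a grace period ... gracePeriod=15" entry above is the standard termination sequence: the runtime delivers SIGTERM (or the image's STOPSIGNAL) and escalates to SIGKILL only if the process is still alive when the grace period expires. A minimal sketch of that pattern for a local process follows; it illustrates the TERM-then-KILL escalation with plain OS signals, not the actual CRI calls, and the function name is mine.

```go
package main

import (
	"os"
	"os/exec"
	"syscall"
	"time"
)

// killWithGracePeriod mirrors the TERM-then-KILL pattern behind
// "Killing container with a grace period ... gracePeriod=15".
func killWithGracePeriod(p *os.Process, grace time.Duration) {
	done := make(chan struct{})
	go func() { p.Wait(); close(done) }()

	p.Signal(syscall.SIGTERM) // polite request first
	select {
	case <-done: // process exited within the grace period
	case <-time.After(grace):
		p.Kill() // grace period expired: SIGKILL
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	cmd.Start()
	killWithGracePeriod(cmd.Process, 15*time.Second)
}
```

The exit code 2 logged for the console container a second later (see below) is consistent with the process terminating in response to the signal rather than exiting cleanly.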
Nov 28 07:08:58 crc kubenswrapper[4946]: I1128 07:08:58.104769 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nbfds"]
Nov 28 07:08:58 crc kubenswrapper[4946]: I1128 07:08:58.363779 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-r7ztb_79c0d15c-8fc9-4efd-b1ec-739718f313d9/console/0.log"
Nov 28 07:08:58 crc kubenswrapper[4946]: I1128 07:08:58.364266 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-r7ztb"
Nov 28 07:08:58 crc kubenswrapper[4946]: I1128 07:08:58.465956 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/79c0d15c-8fc9-4efd-b1ec-739718f313d9-console-oauth-config\") pod \"79c0d15c-8fc9-4efd-b1ec-739718f313d9\" (UID: \"79c0d15c-8fc9-4efd-b1ec-739718f313d9\") "
Nov 28 07:08:58 crc kubenswrapper[4946]: I1128 07:08:58.466035 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79c0d15c-8fc9-4efd-b1ec-739718f313d9-trusted-ca-bundle\") pod \"79c0d15c-8fc9-4efd-b1ec-739718f313d9\" (UID: \"79c0d15c-8fc9-4efd-b1ec-739718f313d9\") "
Nov 28 07:08:58 crc kubenswrapper[4946]: I1128 07:08:58.466121 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/79c0d15c-8fc9-4efd-b1ec-739718f313d9-console-serving-cert\") pod \"79c0d15c-8fc9-4efd-b1ec-739718f313d9\" (UID: \"79c0d15c-8fc9-4efd-b1ec-739718f313d9\") "
Nov 28 07:08:58 crc kubenswrapper[4946]: I1128 07:08:58.466166 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/79c0d15c-8fc9-4efd-b1ec-739718f313d9-service-ca\") pod \"79c0d15c-8fc9-4efd-b1ec-739718f313d9\" (UID: \"79c0d15c-8fc9-4efd-b1ec-739718f313d9\") "
Nov 28 07:08:58 crc kubenswrapper[4946]: I1128 07:08:58.467647 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/79c0d15c-8fc9-4efd-b1ec-739718f313d9-oauth-serving-cert\") pod \"79c0d15c-8fc9-4efd-b1ec-739718f313d9\" (UID: \"79c0d15c-8fc9-4efd-b1ec-739718f313d9\") "
Nov 28 07:08:58 crc kubenswrapper[4946]: I1128 07:08:58.467652 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79c0d15c-8fc9-4efd-b1ec-739718f313d9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "79c0d15c-8fc9-4efd-b1ec-739718f313d9" (UID: "79c0d15c-8fc9-4efd-b1ec-739718f313d9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 07:08:58 crc kubenswrapper[4946]: I1128 07:08:58.467699 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/79c0d15c-8fc9-4efd-b1ec-739718f313d9-console-config\") pod \"79c0d15c-8fc9-4efd-b1ec-739718f313d9\" (UID: \"79c0d15c-8fc9-4efd-b1ec-739718f313d9\") "
Nov 28 07:08:58 crc kubenswrapper[4946]: I1128 07:08:58.467697 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79c0d15c-8fc9-4efd-b1ec-739718f313d9-service-ca" (OuterVolumeSpecName: "service-ca") pod "79c0d15c-8fc9-4efd-b1ec-739718f313d9" (UID: "79c0d15c-8fc9-4efd-b1ec-739718f313d9"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 07:08:58 crc kubenswrapper[4946]: I1128 07:08:58.467792 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p64vs\" (UniqueName: \"kubernetes.io/projected/79c0d15c-8fc9-4efd-b1ec-739718f313d9-kube-api-access-p64vs\") pod \"79c0d15c-8fc9-4efd-b1ec-739718f313d9\" (UID: \"79c0d15c-8fc9-4efd-b1ec-739718f313d9\") "
Nov 28 07:08:58 crc kubenswrapper[4946]: I1128 07:08:58.468141 4946 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79c0d15c-8fc9-4efd-b1ec-739718f313d9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 07:08:58 crc kubenswrapper[4946]: I1128 07:08:58.468164 4946 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/79c0d15c-8fc9-4efd-b1ec-739718f313d9-service-ca\") on node \"crc\" DevicePath \"\""
Nov 28 07:08:58 crc kubenswrapper[4946]: I1128 07:08:58.468238 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79c0d15c-8fc9-4efd-b1ec-739718f313d9-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "79c0d15c-8fc9-4efd-b1ec-739718f313d9" (UID: "79c0d15c-8fc9-4efd-b1ec-739718f313d9"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 07:08:58 crc kubenswrapper[4946]: I1128 07:08:58.468871 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79c0d15c-8fc9-4efd-b1ec-739718f313d9-console-config" (OuterVolumeSpecName: "console-config") pod "79c0d15c-8fc9-4efd-b1ec-739718f313d9" (UID: "79c0d15c-8fc9-4efd-b1ec-739718f313d9"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 07:08:58 crc kubenswrapper[4946]: I1128 07:08:58.474260 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79c0d15c-8fc9-4efd-b1ec-739718f313d9-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "79c0d15c-8fc9-4efd-b1ec-739718f313d9" (UID: "79c0d15c-8fc9-4efd-b1ec-739718f313d9"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 07:08:58 crc kubenswrapper[4946]: I1128 07:08:58.474445 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79c0d15c-8fc9-4efd-b1ec-739718f313d9-kube-api-access-p64vs" (OuterVolumeSpecName: "kube-api-access-p64vs") pod "79c0d15c-8fc9-4efd-b1ec-739718f313d9" (UID: "79c0d15c-8fc9-4efd-b1ec-739718f313d9"). InnerVolumeSpecName "kube-api-access-p64vs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 07:08:58 crc kubenswrapper[4946]: I1128 07:08:58.475043 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79c0d15c-8fc9-4efd-b1ec-739718f313d9-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "79c0d15c-8fc9-4efd-b1ec-739718f313d9" (UID: "79c0d15c-8fc9-4efd-b1ec-739718f313d9"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 07:08:58 crc kubenswrapper[4946]: I1128 07:08:58.569423 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p64vs\" (UniqueName: \"kubernetes.io/projected/79c0d15c-8fc9-4efd-b1ec-739718f313d9-kube-api-access-p64vs\") on node \"crc\" DevicePath \"\""
Nov 28 07:08:58 crc kubenswrapper[4946]: I1128 07:08:58.570991 4946 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/79c0d15c-8fc9-4efd-b1ec-739718f313d9-console-oauth-config\") on node \"crc\" DevicePath \"\""
Nov 28 07:08:58 crc kubenswrapper[4946]: I1128 07:08:58.571080 4946 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/79c0d15c-8fc9-4efd-b1ec-739718f313d9-console-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 28 07:08:58 crc kubenswrapper[4946]: I1128 07:08:58.571153 4946 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/79c0d15c-8fc9-4efd-b1ec-739718f313d9-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 28 07:08:58 crc kubenswrapper[4946]: I1128 07:08:58.571223 4946 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/79c0d15c-8fc9-4efd-b1ec-739718f313d9-console-config\") on node \"crc\" DevicePath \"\""
Nov 28 07:08:59 crc kubenswrapper[4946]: I1128 07:08:59.085320 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-r7ztb_79c0d15c-8fc9-4efd-b1ec-739718f313d9/console/0.log"
Nov 28 07:08:59 crc kubenswrapper[4946]: I1128 07:08:59.085409 4946 generic.go:334] "Generic (PLEG): container finished" podID="79c0d15c-8fc9-4efd-b1ec-739718f313d9" containerID="96045c27504107ddb509a67ac3bd912c5b7570b92a1d10f14ac7150c1754851e" exitCode=2
Nov 28 07:08:59 crc kubenswrapper[4946]: I1128 07:08:59.085617 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-r7ztb"
Nov 28 07:08:59 crc kubenswrapper[4946]: I1128 07:08:59.085768 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-r7ztb" event={"ID":"79c0d15c-8fc9-4efd-b1ec-739718f313d9","Type":"ContainerDied","Data":"96045c27504107ddb509a67ac3bd912c5b7570b92a1d10f14ac7150c1754851e"}
Nov 28 07:08:59 crc kubenswrapper[4946]: I1128 07:08:59.085858 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-r7ztb" event={"ID":"79c0d15c-8fc9-4efd-b1ec-739718f313d9","Type":"ContainerDied","Data":"6c161cb54a6d2e2b4039d3f437ec02e41b77e7685954cf10e5515b106fdaca81"}
Nov 28 07:08:59 crc kubenswrapper[4946]: I1128 07:08:59.085886 4946 scope.go:117] "RemoveContainer" containerID="96045c27504107ddb509a67ac3bd912c5b7570b92a1d10f14ac7150c1754851e"
Nov 28 07:08:59 crc kubenswrapper[4946]: I1128 07:08:59.087873 4946 generic.go:334] "Generic (PLEG): container finished" podID="0fb75378-f653-400a-8738-81376998a521" containerID="d36415c0afae7f549591b0207ed61cb618d67415a684b3b4136ca638e61711d5" exitCode=0
Nov 28 07:08:59 crc kubenswrapper[4946]: I1128 07:08:59.087940 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nbfds" event={"ID":"0fb75378-f653-400a-8738-81376998a521","Type":"ContainerDied","Data":"d36415c0afae7f549591b0207ed61cb618d67415a684b3b4136ca638e61711d5"}
Nov 28 07:08:59 crc kubenswrapper[4946]: I1128 07:08:59.087975 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nbfds" event={"ID":"0fb75378-f653-400a-8738-81376998a521","Type":"ContainerStarted","Data":"5e443528966b73c4565deb031569e1807c6777bb5fe2941ba587e6265a0ba619"}
Nov 28 07:08:59 crc kubenswrapper[4946]: I1128 07:08:59.113722 4946 scope.go:117] "RemoveContainer" containerID="96045c27504107ddb509a67ac3bd912c5b7570b92a1d10f14ac7150c1754851e"
Nov 28 07:08:59 crc kubenswrapper[4946]: E1128 07:08:59.117708 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96045c27504107ddb509a67ac3bd912c5b7570b92a1d10f14ac7150c1754851e\": container with ID starting with 96045c27504107ddb509a67ac3bd912c5b7570b92a1d10f14ac7150c1754851e not found: ID does not exist" containerID="96045c27504107ddb509a67ac3bd912c5b7570b92a1d10f14ac7150c1754851e"
Nov 28 07:08:59 crc kubenswrapper[4946]: I1128 07:08:59.117770 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96045c27504107ddb509a67ac3bd912c5b7570b92a1d10f14ac7150c1754851e"} err="failed to get container status \"96045c27504107ddb509a67ac3bd912c5b7570b92a1d10f14ac7150c1754851e\": rpc error: code = NotFound desc = could not find container \"96045c27504107ddb509a67ac3bd912c5b7570b92a1d10f14ac7150c1754851e\": container with ID starting with 96045c27504107ddb509a67ac3bd912c5b7570b92a1d10f14ac7150c1754851e not found: ID does not exist"
Nov 28 07:08:59 crc kubenswrapper[4946]: I1128 07:08:59.137575 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-r7ztb"]
Nov 28 07:08:59 crc kubenswrapper[4946]: I1128 07:08:59.143214 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-r7ztb"]
Nov 28 07:09:00 crc kubenswrapper[4946]: I1128 07:08:59.999974 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79c0d15c-8fc9-4efd-b1ec-739718f313d9" path="/var/lib/kubelet/pods/79c0d15c-8fc9-4efd-b1ec-739718f313d9/volumes"
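The RemoveContainer sequence above is deliberately idempotent: the second removal attempt finds the container already gone, the runtime answers "rpc error: code = NotFound", and kubelet logs the error and moves on instead of retrying. Client code for a CRI-style gRPC API typically special-cases that status code; a hedged sketch follows (the interface and names are mine, not the CRI definitions).

```go
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// runtimeClient stands in for a CRI-style runtime API; only the method
// needed for the example is declared.
type runtimeClient interface {
	RemoveContainer(ctx context.Context, id string) error
}

// removeContainer treats rpc NotFound as success: the desired end state
// ("container does not exist") already holds, as in the log above.
func removeContainer(ctx context.Context, rt runtimeClient, id string) error {
	err := rt.RemoveContainer(ctx, id)
	if status.Code(err) == codes.NotFound {
		fmt.Printf("container %s already gone, nothing to do\n", id)
		return nil
	}
	return err
}

type fakeRuntime struct{}

func (fakeRuntime) RemoveContainer(ctx context.Context, id string) error {
	return status.Error(codes.NotFound, "could not find container "+id)
}

func main() {
	_ = removeContainer(context.Background(), fakeRuntime{}, "96045c2750")
}
```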
Nov 28 07:09:01 crc kubenswrapper[4946]: I1128 07:09:01.112830 4946 generic.go:334] "Generic (PLEG): container finished" podID="0fb75378-f653-400a-8738-81376998a521" containerID="2db19e54de88d7c0f1b0d770cbb23864e58070056e961614a94c4a4f4ae2cd7b" exitCode=0
Nov 28 07:09:01 crc kubenswrapper[4946]: I1128 07:09:01.112880 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nbfds" event={"ID":"0fb75378-f653-400a-8738-81376998a521","Type":"ContainerDied","Data":"2db19e54de88d7c0f1b0d770cbb23864e58070056e961614a94c4a4f4ae2cd7b"}
Nov 28 07:09:02 crc kubenswrapper[4946]: I1128 07:09:02.125071 4946 generic.go:334] "Generic (PLEG): container finished" podID="0fb75378-f653-400a-8738-81376998a521" containerID="148cf07dcce3621e3e68918e05916cb2f31512040925bf1fe2e9bd76e2d185b2" exitCode=0
Nov 28 07:09:02 crc kubenswrapper[4946]: I1128 07:09:02.125168 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nbfds" event={"ID":"0fb75378-f653-400a-8738-81376998a521","Type":"ContainerDied","Data":"148cf07dcce3621e3e68918e05916cb2f31512040925bf1fe2e9bd76e2d185b2"}
Nov 28 07:09:03 crc kubenswrapper[4946]: I1128 07:09:03.380925 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nbfds"
Nov 28 07:09:03 crc kubenswrapper[4946]: I1128 07:09:03.444998 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0fb75378-f653-400a-8738-81376998a521-bundle\") pod \"0fb75378-f653-400a-8738-81376998a521\" (UID: \"0fb75378-f653-400a-8738-81376998a521\") "
Nov 28 07:09:03 crc kubenswrapper[4946]: I1128 07:09:03.445090 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0fb75378-f653-400a-8738-81376998a521-util\") pod \"0fb75378-f653-400a-8738-81376998a521\" (UID: \"0fb75378-f653-400a-8738-81376998a521\") "
Nov 28 07:09:03 crc kubenswrapper[4946]: I1128 07:09:03.445131 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvz7k\" (UniqueName: \"kubernetes.io/projected/0fb75378-f653-400a-8738-81376998a521-kube-api-access-xvz7k\") pod \"0fb75378-f653-400a-8738-81376998a521\" (UID: \"0fb75378-f653-400a-8738-81376998a521\") "
Nov 28 07:09:03 crc kubenswrapper[4946]: I1128 07:09:03.447182 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fb75378-f653-400a-8738-81376998a521-bundle" (OuterVolumeSpecName: "bundle") pod "0fb75378-f653-400a-8738-81376998a521" (UID: "0fb75378-f653-400a-8738-81376998a521"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 07:09:03 crc kubenswrapper[4946]: I1128 07:09:03.451555 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fb75378-f653-400a-8738-81376998a521-kube-api-access-xvz7k" (OuterVolumeSpecName: "kube-api-access-xvz7k") pod "0fb75378-f653-400a-8738-81376998a521" (UID: "0fb75378-f653-400a-8738-81376998a521"). InnerVolumeSpecName "kube-api-access-xvz7k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 07:09:03 crc kubenswrapper[4946]: I1128 07:09:03.463611 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fb75378-f653-400a-8738-81376998a521-util" (OuterVolumeSpecName: "util") pod "0fb75378-f653-400a-8738-81376998a521" (UID: "0fb75378-f653-400a-8738-81376998a521"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 07:09:03 crc kubenswrapper[4946]: I1128 07:09:03.546145 4946 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0fb75378-f653-400a-8738-81376998a521-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 07:09:03 crc kubenswrapper[4946]: I1128 07:09:03.546188 4946 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0fb75378-f653-400a-8738-81376998a521-util\") on node \"crc\" DevicePath \"\""
Nov 28 07:09:03 crc kubenswrapper[4946]: I1128 07:09:03.546209 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvz7k\" (UniqueName: \"kubernetes.io/projected/0fb75378-f653-400a-8738-81376998a521-kube-api-access-xvz7k\") on node \"crc\" DevicePath \"\""
Nov 28 07:09:04 crc kubenswrapper[4946]: I1128 07:09:04.146169 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nbfds" event={"ID":"0fb75378-f653-400a-8738-81376998a521","Type":"ContainerDied","Data":"5e443528966b73c4565deb031569e1807c6777bb5fe2941ba587e6265a0ba619"}
Nov 28 07:09:04 crc kubenswrapper[4946]: I1128 07:09:04.146816 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e443528966b73c4565deb031569e1807c6777bb5fe2941ba587e6265a0ba619"
Nov 28 07:09:04 crc kubenswrapper[4946]: I1128 07:09:04.146252 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nbfds"
Nov 28 07:09:12 crc kubenswrapper[4946]: I1128 07:09:12.806703 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-8fc8b48b5-g6c68"]
Nov 28 07:09:12 crc kubenswrapper[4946]: E1128 07:09:12.807719 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fb75378-f653-400a-8738-81376998a521" containerName="util"
Nov 28 07:09:12 crc kubenswrapper[4946]: I1128 07:09:12.807733 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fb75378-f653-400a-8738-81376998a521" containerName="util"
Nov 28 07:09:12 crc kubenswrapper[4946]: E1128 07:09:12.807746 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fb75378-f653-400a-8738-81376998a521" containerName="pull"
Nov 28 07:09:12 crc kubenswrapper[4946]: I1128 07:09:12.807752 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fb75378-f653-400a-8738-81376998a521" containerName="pull"
Nov 28 07:09:12 crc kubenswrapper[4946]: E1128 07:09:12.807766 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fb75378-f653-400a-8738-81376998a521" containerName="extract"
Nov 28 07:09:12 crc kubenswrapper[4946]: I1128 07:09:12.807774 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fb75378-f653-400a-8738-81376998a521" containerName="extract"
Nov 28 07:09:12 crc kubenswrapper[4946]: E1128 07:09:12.807788 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79c0d15c-8fc9-4efd-b1ec-739718f313d9" containerName="console"
Nov 28 07:09:12 crc kubenswrapper[4946]: I1128 07:09:12.807794 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="79c0d15c-8fc9-4efd-b1ec-739718f313d9" containerName="console"
Nov 28 07:09:12 crc kubenswrapper[4946]: I1128 07:09:12.807887 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fb75378-f653-400a-8738-81376998a521" containerName="extract"
Nov 28 07:09:12 crc kubenswrapper[4946]: I1128 07:09:12.807898 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="79c0d15c-8fc9-4efd-b1ec-739718f313d9" containerName="console"
Nov 28 07:09:12 crc kubenswrapper[4946]: I1128 07:09:12.808411 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-8fc8b48b5-g6c68"
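The RemoveStaleState burst fires when the next pod is admitted: the CPU and memory managers sweep their checkpointed per-container assignments and drop entries for containers that no longer exist (the finished bundle-unpack job's util/pull/extract containers and the deleted console). At its core that sweep is set subtraction over the checkpoint; a toy sketch under that reading, with names and data entirely mine:

```go
package main

import "fmt"

// Toy stale-state sweep in the spirit of cpu_manager's RemoveStaleState:
// drop checkpointed CPU assignments whose pod/container no longer exists.
func removeStaleState(assignments map[string][]int, active map[string]bool) {
	for key := range assignments {
		if !active[key] {
			fmt.Printf("RemoveStaleState: removing container %q\n", key)
			delete(assignments, key) // deleting during range is safe in Go
		}
	}
}

func main() {
	assignments := map[string][]int{
		"0fb75378/util":    {2, 3},
		"79c0d15c/console": {4},
		"5492c299/manager": {1},
	}
	active := map[string]bool{"5492c299/manager": true}
	removeStaleState(assignments, active)
	fmt.Println("remaining:", assignments)
}
```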
Nov 28 07:09:12 crc kubenswrapper[4946]: I1128 07:09:12.813289 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Nov 28 07:09:12 crc kubenswrapper[4946]: I1128 07:09:12.813541 4946 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-xg6z8"
Nov 28 07:09:12 crc kubenswrapper[4946]: I1128 07:09:12.813683 4946 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Nov 28 07:09:12 crc kubenswrapper[4946]: I1128 07:09:12.813767 4946 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Nov 28 07:09:12 crc kubenswrapper[4946]: I1128 07:09:12.813880 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Nov 28 07:09:12 crc kubenswrapper[4946]: I1128 07:09:12.895275 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-8fc8b48b5-g6c68"]
Nov 28 07:09:12 crc kubenswrapper[4946]: I1128 07:09:12.991381 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5492c299-4681-44a7-ba80-3d4fed664c4a-apiservice-cert\") pod \"metallb-operator-controller-manager-8fc8b48b5-g6c68\" (UID: \"5492c299-4681-44a7-ba80-3d4fed664c4a\") " pod="metallb-system/metallb-operator-controller-manager-8fc8b48b5-g6c68"
Nov 28 07:09:12 crc kubenswrapper[4946]: I1128 07:09:12.991478 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9lgc\" (UniqueName: \"kubernetes.io/projected/5492c299-4681-44a7-ba80-3d4fed664c4a-kube-api-access-w9lgc\") pod \"metallb-operator-controller-manager-8fc8b48b5-g6c68\" (UID: \"5492c299-4681-44a7-ba80-3d4fed664c4a\") " pod="metallb-system/metallb-operator-controller-manager-8fc8b48b5-g6c68"
Nov 28 07:09:12 crc kubenswrapper[4946]: I1128 07:09:12.991641 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5492c299-4681-44a7-ba80-3d4fed664c4a-webhook-cert\") pod \"metallb-operator-controller-manager-8fc8b48b5-g6c68\" (UID: \"5492c299-4681-44a7-ba80-3d4fed664c4a\") " pod="metallb-system/metallb-operator-controller-manager-8fc8b48b5-g6c68"
Nov 28 07:09:13 crc kubenswrapper[4946]: I1128 07:09:13.093169 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5492c299-4681-44a7-ba80-3d4fed664c4a-apiservice-cert\") pod \"metallb-operator-controller-manager-8fc8b48b5-g6c68\" (UID: \"5492c299-4681-44a7-ba80-3d4fed664c4a\") " pod="metallb-system/metallb-operator-controller-manager-8fc8b48b5-g6c68"
Nov 28 07:09:13 crc kubenswrapper[4946]: I1128 07:09:13.093282 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9lgc\" (UniqueName: \"kubernetes.io/projected/5492c299-4681-44a7-ba80-3d4fed664c4a-kube-api-access-w9lgc\") pod \"metallb-operator-controller-manager-8fc8b48b5-g6c68\" (UID: \"5492c299-4681-44a7-ba80-3d4fed664c4a\") " pod="metallb-system/metallb-operator-controller-manager-8fc8b48b5-g6c68"
Nov 28 07:09:13 crc kubenswrapper[4946]: I1128 07:09:13.093303 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5492c299-4681-44a7-ba80-3d4fed664c4a-webhook-cert\") pod \"metallb-operator-controller-manager-8fc8b48b5-g6c68\" (UID: \"5492c299-4681-44a7-ba80-3d4fed664c4a\") " pod="metallb-system/metallb-operator-controller-manager-8fc8b48b5-g6c68"
Nov 28 07:09:13 crc kubenswrapper[4946]: I1128 07:09:13.102337 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5492c299-4681-44a7-ba80-3d4fed664c4a-webhook-cert\") pod \"metallb-operator-controller-manager-8fc8b48b5-g6c68\" (UID: \"5492c299-4681-44a7-ba80-3d4fed664c4a\") " pod="metallb-system/metallb-operator-controller-manager-8fc8b48b5-g6c68"
Nov 28 07:09:13 crc kubenswrapper[4946]: I1128 07:09:13.102348 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5492c299-4681-44a7-ba80-3d4fed664c4a-apiservice-cert\") pod \"metallb-operator-controller-manager-8fc8b48b5-g6c68\" (UID: \"5492c299-4681-44a7-ba80-3d4fed664c4a\") " pod="metallb-system/metallb-operator-controller-manager-8fc8b48b5-g6c68"
Nov 28 07:09:13 crc kubenswrapper[4946]: I1128 07:09:13.109294 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-5d845b7ff6-rnvwq"]
Nov 28 07:09:13 crc kubenswrapper[4946]: I1128 07:09:13.110147 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5d845b7ff6-rnvwq"
Nov 28 07:09:13 crc kubenswrapper[4946]: I1128 07:09:13.117982 4946 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-nbg7m"
Nov 28 07:09:13 crc kubenswrapper[4946]: I1128 07:09:13.123472 4946 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Nov 28 07:09:13 crc kubenswrapper[4946]: I1128 07:09:13.124100 4946 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Nov 28 07:09:13 crc kubenswrapper[4946]: I1128 07:09:13.134205 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5d845b7ff6-rnvwq"]
Nov 28 07:09:13 crc kubenswrapper[4946]: I1128 07:09:13.143323 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9lgc\" (UniqueName: \"kubernetes.io/projected/5492c299-4681-44a7-ba80-3d4fed664c4a-kube-api-access-w9lgc\") pod \"metallb-operator-controller-manager-8fc8b48b5-g6c68\" (UID: \"5492c299-4681-44a7-ba80-3d4fed664c4a\") " pod="metallb-system/metallb-operator-controller-manager-8fc8b48b5-g6c68"
Nov 28 07:09:13 crc kubenswrapper[4946]: I1128 07:09:13.295909 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c896810c-7445-4b8a-96a3-7be0f84f3c65-webhook-cert\") pod \"metallb-operator-webhook-server-5d845b7ff6-rnvwq\" (UID: \"c896810c-7445-4b8a-96a3-7be0f84f3c65\") " pod="metallb-system/metallb-operator-webhook-server-5d845b7ff6-rnvwq"
Nov 28 07:09:13 crc kubenswrapper[4946]: I1128 07:09:13.295975 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c896810c-7445-4b8a-96a3-7be0f84f3c65-apiservice-cert\") pod \"metallb-operator-webhook-server-5d845b7ff6-rnvwq\" (UID: \"c896810c-7445-4b8a-96a3-7be0f84f3c65\") " pod="metallb-system/metallb-operator-webhook-server-5d845b7ff6-rnvwq"
Nov 28 07:09:13 crc kubenswrapper[4946]: I1128 07:09:13.296005 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd5sx\" (UniqueName: \"kubernetes.io/projected/c896810c-7445-4b8a-96a3-7be0f84f3c65-kube-api-access-bd5sx\") pod \"metallb-operator-webhook-server-5d845b7ff6-rnvwq\" (UID: \"c896810c-7445-4b8a-96a3-7be0f84f3c65\") " pod="metallb-system/metallb-operator-webhook-server-5d845b7ff6-rnvwq"
Nov 28 07:09:13 crc kubenswrapper[4946]: I1128 07:09:13.397386 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bd5sx\" (UniqueName: \"kubernetes.io/projected/c896810c-7445-4b8a-96a3-7be0f84f3c65-kube-api-access-bd5sx\") pod \"metallb-operator-webhook-server-5d845b7ff6-rnvwq\" (UID: \"c896810c-7445-4b8a-96a3-7be0f84f3c65\") " pod="metallb-system/metallb-operator-webhook-server-5d845b7ff6-rnvwq"
Nov 28 07:09:13 crc kubenswrapper[4946]: I1128 07:09:13.398051 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c896810c-7445-4b8a-96a3-7be0f84f3c65-webhook-cert\") pod \"metallb-operator-webhook-server-5d845b7ff6-rnvwq\" (UID: \"c896810c-7445-4b8a-96a3-7be0f84f3c65\") " pod="metallb-system/metallb-operator-webhook-server-5d845b7ff6-rnvwq"
Nov 28 07:09:13 crc kubenswrapper[4946]: I1128 07:09:13.398208 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c896810c-7445-4b8a-96a3-7be0f84f3c65-apiservice-cert\") pod \"metallb-operator-webhook-server-5d845b7ff6-rnvwq\" (UID: \"c896810c-7445-4b8a-96a3-7be0f84f3c65\") " pod="metallb-system/metallb-operator-webhook-server-5d845b7ff6-rnvwq"
Nov 28 07:09:13 crc kubenswrapper[4946]: I1128 07:09:13.401960 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c896810c-7445-4b8a-96a3-7be0f84f3c65-webhook-cert\") pod \"metallb-operator-webhook-server-5d845b7ff6-rnvwq\" (UID: \"c896810c-7445-4b8a-96a3-7be0f84f3c65\") " pod="metallb-system/metallb-operator-webhook-server-5d845b7ff6-rnvwq"
Nov 28 07:09:13 crc kubenswrapper[4946]: I1128 07:09:13.404672 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c896810c-7445-4b8a-96a3-7be0f84f3c65-apiservice-cert\") pod \"metallb-operator-webhook-server-5d845b7ff6-rnvwq\" (UID: \"c896810c-7445-4b8a-96a3-7be0f84f3c65\") " pod="metallb-system/metallb-operator-webhook-server-5d845b7ff6-rnvwq"
Nov 28 07:09:13 crc kubenswrapper[4946]: I1128 07:09:13.425900 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bd5sx\" (UniqueName: \"kubernetes.io/projected/c896810c-7445-4b8a-96a3-7be0f84f3c65-kube-api-access-bd5sx\") pod \"metallb-operator-webhook-server-5d845b7ff6-rnvwq\" (UID: \"c896810c-7445-4b8a-96a3-7be0f84f3c65\") " pod="metallb-system/metallb-operator-webhook-server-5d845b7ff6-rnvwq"
Nov 28 07:09:13 crc kubenswrapper[4946]: I1128 07:09:13.427611 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-8fc8b48b5-g6c68"
Nov 28 07:09:13 crc kubenswrapper[4946]: I1128 07:09:13.488278 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5d845b7ff6-rnvwq"
Nov 28 07:09:13 crc kubenswrapper[4946]: I1128 07:09:13.736324 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-8fc8b48b5-g6c68"]
Nov 28 07:09:13 crc kubenswrapper[4946]: I1128 07:09:13.825975 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5d845b7ff6-rnvwq"]
Nov 28 07:09:13 crc kubenswrapper[4946]: W1128 07:09:13.837494 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc896810c_7445_4b8a_96a3_7be0f84f3c65.slice/crio-5a07915f75c1a5a98725256aac3f544a879501ef86cf9924fc3d409fc5753f0a WatchSource:0}: Error finding container 5a07915f75c1a5a98725256aac3f544a879501ef86cf9924fc3d409fc5753f0a: Status 404 returned error can't find the container with id 5a07915f75c1a5a98725256aac3f544a879501ef86cf9924fc3d409fc5753f0a
Nov 28 07:09:14 crc kubenswrapper[4946]: I1128 07:09:14.214996 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5d845b7ff6-rnvwq" event={"ID":"c896810c-7445-4b8a-96a3-7be0f84f3c65","Type":"ContainerStarted","Data":"5a07915f75c1a5a98725256aac3f544a879501ef86cf9924fc3d409fc5753f0a"}
Nov 28 07:09:14 crc kubenswrapper[4946]: I1128 07:09:14.216291 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-8fc8b48b5-g6c68" event={"ID":"5492c299-4681-44a7-ba80-3d4fed664c4a","Type":"ContainerStarted","Data":"f8ec19bfe4ead31739a4fea0fc647b01f888ed489ba4b42f0b6c73668a4049c6"}
Nov 28 07:09:20 crc kubenswrapper[4946]: I1128 07:09:20.274129 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-8fc8b48b5-g6c68" event={"ID":"5492c299-4681-44a7-ba80-3d4fed664c4a","Type":"ContainerStarted","Data":"fa1304e474ae1a36fd7d6f5269c37ede5f227600464a22ca8b25133173795732"}
Nov 28 07:09:20 crc kubenswrapper[4946]: I1128 07:09:20.275132 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-8fc8b48b5-g6c68"
Nov 28 07:09:20 crc kubenswrapper[4946]: I1128 07:09:20.308811 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-8fc8b48b5-g6c68" podStartSLOduration=2.727548686 podStartE2EDuration="8.308786275s" podCreationTimestamp="2025-11-28 07:09:12 +0000 UTC" firstStartedPulling="2025-11-28 07:09:13.754263705 +0000 UTC m=+1008.132328826" lastFinishedPulling="2025-11-28 07:09:19.335501304 +0000 UTC m=+1013.713566415" observedRunningTime="2025-11-28 07:09:20.306942489 +0000 UTC m=+1014.685007620" watchObservedRunningTime="2025-11-28 07:09:20.308786275 +0000 UTC m=+1014.686851386"
Nov 28 07:09:21 crc kubenswrapper[4946]: I1128 07:09:21.283336 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5d845b7ff6-rnvwq" event={"ID":"c896810c-7445-4b8a-96a3-7be0f84f3c65","Type":"ContainerStarted","Data":"d5544d883ced0edebc955abd553ea74c95da693d30822fe67c7cd1a16be85acb"}
Nov 28 07:09:21 crc kubenswrapper[4946]: I1128 07:09:21.283635 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-5d845b7ff6-rnvwq"
Nov 28 07:09:21 crc kubenswrapper[4946]: I1128 07:09:21.342532 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-5d845b7ff6-rnvwq" podStartSLOduration=1.9871036850000001 podStartE2EDuration="8.342501872s" podCreationTimestamp="2025-11-28 07:09:13 +0000 UTC" firstStartedPulling="2025-11-28 07:09:13.841740652 +0000 UTC m=+1008.219805763" lastFinishedPulling="2025-11-28 07:09:20.197138839 +0000 UTC m=+1014.575203950" observedRunningTime="2025-11-28 07:09:21.337843707 +0000 UTC m=+1015.715908818" watchObservedRunningTime="2025-11-28 07:09:21.342501872 +0000 UTC m=+1015.720566983"
Nov 28 07:09:24 crc kubenswrapper[4946]: I1128 07:09:24.731669 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 07:09:24 crc kubenswrapper[4946]: I1128 07:09:24.732070 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-mwq2z" Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.170226 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-mwq2z"] Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.195189 4946 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.254074 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-fcw5r"] Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.255108 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-fcw5r" Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.258384 4946 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-2lcj7" Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.258442 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.258730 4946 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.258748 4946 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.265160 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-f8648f98b-z8z62"] Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.266250 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-f8648f98b-z8z62" Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.269188 4946 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.297029 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/8327637b-5c53-4bd6-b8bc-fbe3516bc4fc-reloader\") pod \"frr-k8s-kj7q9\" (UID: \"8327637b-5c53-4bd6-b8bc-fbe3516bc4fc\") " pod="metallb-system/frr-k8s-kj7q9" Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.297095 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4x2m\" (UniqueName: \"kubernetes.io/projected/8327637b-5c53-4bd6-b8bc-fbe3516bc4fc-kube-api-access-g4x2m\") pod \"frr-k8s-kj7q9\" (UID: \"8327637b-5c53-4bd6-b8bc-fbe3516bc4fc\") " pod="metallb-system/frr-k8s-kj7q9" Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.297132 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/8327637b-5c53-4bd6-b8bc-fbe3516bc4fc-frr-conf\") pod \"frr-k8s-kj7q9\" (UID: \"8327637b-5c53-4bd6-b8bc-fbe3516bc4fc\") " pod="metallb-system/frr-k8s-kj7q9" Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.297161 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/8327637b-5c53-4bd6-b8bc-fbe3516bc4fc-frr-sockets\") pod \"frr-k8s-kj7q9\" (UID: \"8327637b-5c53-4bd6-b8bc-fbe3516bc4fc\") " pod="metallb-system/frr-k8s-kj7q9" Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.297188 4946 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/8327637b-5c53-4bd6-b8bc-fbe3516bc4fc-metrics\") pod \"frr-k8s-kj7q9\" (UID: \"8327637b-5c53-4bd6-b8bc-fbe3516bc4fc\") " pod="metallb-system/frr-k8s-kj7q9" Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.297215 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f6c5f04a-e283-4188-afe9-5ff2c46aba47-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-mwq2z\" (UID: \"f6c5f04a-e283-4188-afe9-5ff2c46aba47\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-mwq2z" Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.297238 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8327637b-5c53-4bd6-b8bc-fbe3516bc4fc-metrics-certs\") pod \"frr-k8s-kj7q9\" (UID: \"8327637b-5c53-4bd6-b8bc-fbe3516bc4fc\") " pod="metallb-system/frr-k8s-kj7q9" Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.297443 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/8327637b-5c53-4bd6-b8bc-fbe3516bc4fc-frr-startup\") pod \"frr-k8s-kj7q9\" (UID: \"8327637b-5c53-4bd6-b8bc-fbe3516bc4fc\") " pod="metallb-system/frr-k8s-kj7q9" Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.297559 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2wjd\" (UniqueName: \"kubernetes.io/projected/f6c5f04a-e283-4188-afe9-5ff2c46aba47-kube-api-access-v2wjd\") pod \"frr-k8s-webhook-server-7fcb986d4-mwq2z\" (UID: \"f6c5f04a-e283-4188-afe9-5ff2c46aba47\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-mwq2z" Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.299627 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-z8z62"] Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.398878 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0999fee9-3799-4c5f-8338-9ea0b670bed5-memberlist\") pod \"speaker-fcw5r\" (UID: \"0999fee9-3799-4c5f-8338-9ea0b670bed5\") " pod="metallb-system/speaker-fcw5r" Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.398952 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/115181ac-3a10-44b9-b7b1-998e1fe24938-metrics-certs\") pod \"controller-f8648f98b-z8z62\" (UID: \"115181ac-3a10-44b9-b7b1-998e1fe24938\") " pod="metallb-system/controller-f8648f98b-z8z62" Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.398992 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qw29\" (UniqueName: \"kubernetes.io/projected/115181ac-3a10-44b9-b7b1-998e1fe24938-kube-api-access-5qw29\") pod \"controller-f8648f98b-z8z62\" (UID: \"115181ac-3a10-44b9-b7b1-998e1fe24938\") " pod="metallb-system/controller-f8648f98b-z8z62" Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.399023 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x54hd\" (UniqueName: 
\"kubernetes.io/projected/0999fee9-3799-4c5f-8338-9ea0b670bed5-kube-api-access-x54hd\") pod \"speaker-fcw5r\" (UID: \"0999fee9-3799-4c5f-8338-9ea0b670bed5\") " pod="metallb-system/speaker-fcw5r" Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.399157 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/115181ac-3a10-44b9-b7b1-998e1fe24938-cert\") pod \"controller-f8648f98b-z8z62\" (UID: \"115181ac-3a10-44b9-b7b1-998e1fe24938\") " pod="metallb-system/controller-f8648f98b-z8z62" Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.399271 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0999fee9-3799-4c5f-8338-9ea0b670bed5-metrics-certs\") pod \"speaker-fcw5r\" (UID: \"0999fee9-3799-4c5f-8338-9ea0b670bed5\") " pod="metallb-system/speaker-fcw5r" Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.399349 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/0999fee9-3799-4c5f-8338-9ea0b670bed5-metallb-excludel2\") pod \"speaker-fcw5r\" (UID: \"0999fee9-3799-4c5f-8338-9ea0b670bed5\") " pod="metallb-system/speaker-fcw5r" Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.399404 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2wjd\" (UniqueName: \"kubernetes.io/projected/f6c5f04a-e283-4188-afe9-5ff2c46aba47-kube-api-access-v2wjd\") pod \"frr-k8s-webhook-server-7fcb986d4-mwq2z\" (UID: \"f6c5f04a-e283-4188-afe9-5ff2c46aba47\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-mwq2z" Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.399437 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/8327637b-5c53-4bd6-b8bc-fbe3516bc4fc-frr-startup\") pod \"frr-k8s-kj7q9\" (UID: \"8327637b-5c53-4bd6-b8bc-fbe3516bc4fc\") " pod="metallb-system/frr-k8s-kj7q9" Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.399474 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/8327637b-5c53-4bd6-b8bc-fbe3516bc4fc-reloader\") pod \"frr-k8s-kj7q9\" (UID: \"8327637b-5c53-4bd6-b8bc-fbe3516bc4fc\") " pod="metallb-system/frr-k8s-kj7q9" Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.399543 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4x2m\" (UniqueName: \"kubernetes.io/projected/8327637b-5c53-4bd6-b8bc-fbe3516bc4fc-kube-api-access-g4x2m\") pod \"frr-k8s-kj7q9\" (UID: \"8327637b-5c53-4bd6-b8bc-fbe3516bc4fc\") " pod="metallb-system/frr-k8s-kj7q9" Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.400183 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/8327637b-5c53-4bd6-b8bc-fbe3516bc4fc-reloader\") pod \"frr-k8s-kj7q9\" (UID: \"8327637b-5c53-4bd6-b8bc-fbe3516bc4fc\") " pod="metallb-system/frr-k8s-kj7q9" Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.400589 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/8327637b-5c53-4bd6-b8bc-fbe3516bc4fc-frr-conf\") pod \"frr-k8s-kj7q9\" (UID: \"8327637b-5c53-4bd6-b8bc-fbe3516bc4fc\") " pod="metallb-system/frr-k8s-kj7q9" 
Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.400296 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/8327637b-5c53-4bd6-b8bc-fbe3516bc4fc-frr-conf\") pod \"frr-k8s-kj7q9\" (UID: \"8327637b-5c53-4bd6-b8bc-fbe3516bc4fc\") " pod="metallb-system/frr-k8s-kj7q9"
Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.400678 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/8327637b-5c53-4bd6-b8bc-fbe3516bc4fc-frr-sockets\") pod \"frr-k8s-kj7q9\" (UID: \"8327637b-5c53-4bd6-b8bc-fbe3516bc4fc\") " pod="metallb-system/frr-k8s-kj7q9"
Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.400707 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/8327637b-5c53-4bd6-b8bc-fbe3516bc4fc-metrics\") pod \"frr-k8s-kj7q9\" (UID: \"8327637b-5c53-4bd6-b8bc-fbe3516bc4fc\") " pod="metallb-system/frr-k8s-kj7q9"
Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.401262 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f6c5f04a-e283-4188-afe9-5ff2c46aba47-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-mwq2z\" (UID: \"f6c5f04a-e283-4188-afe9-5ff2c46aba47\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-mwq2z"
Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.401301 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8327637b-5c53-4bd6-b8bc-fbe3516bc4fc-metrics-certs\") pod \"frr-k8s-kj7q9\" (UID: \"8327637b-5c53-4bd6-b8bc-fbe3516bc4fc\") " pod="metallb-system/frr-k8s-kj7q9"
Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.400715 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/8327637b-5c53-4bd6-b8bc-fbe3516bc4fc-frr-startup\") pod \"frr-k8s-kj7q9\" (UID: \"8327637b-5c53-4bd6-b8bc-fbe3516bc4fc\") " pod="metallb-system/frr-k8s-kj7q9"
Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.401199 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/8327637b-5c53-4bd6-b8bc-fbe3516bc4fc-frr-sockets\") pod \"frr-k8s-kj7q9\" (UID: \"8327637b-5c53-4bd6-b8bc-fbe3516bc4fc\") " pod="metallb-system/frr-k8s-kj7q9"
Nov 28 07:09:54 crc kubenswrapper[4946]: E1128 07:09:54.401409 4946 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found
Nov 28 07:09:54 crc kubenswrapper[4946]: E1128 07:09:54.401491 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6c5f04a-e283-4188-afe9-5ff2c46aba47-cert podName:f6c5f04a-e283-4188-afe9-5ff2c46aba47 nodeName:}" failed. No retries permitted until 2025-11-28 07:09:54.901454703 +0000 UTC m=+1049.279519814 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f6c5f04a-e283-4188-afe9-5ff2c46aba47-cert") pod "frr-k8s-webhook-server-7fcb986d4-mwq2z" (UID: "f6c5f04a-e283-4188-afe9-5ff2c46aba47") : secret "frr-k8s-webhook-server-cert" not found
Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.401203 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/8327637b-5c53-4bd6-b8bc-fbe3516bc4fc-metrics\") pod \"frr-k8s-kj7q9\" (UID: \"8327637b-5c53-4bd6-b8bc-fbe3516bc4fc\") " pod="metallb-system/frr-k8s-kj7q9"
Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.409563 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8327637b-5c53-4bd6-b8bc-fbe3516bc4fc-metrics-certs\") pod \"frr-k8s-kj7q9\" (UID: \"8327637b-5c53-4bd6-b8bc-fbe3516bc4fc\") " pod="metallb-system/frr-k8s-kj7q9"
Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.419291 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2wjd\" (UniqueName: \"kubernetes.io/projected/f6c5f04a-e283-4188-afe9-5ff2c46aba47-kube-api-access-v2wjd\") pod \"frr-k8s-webhook-server-7fcb986d4-mwq2z\" (UID: \"f6c5f04a-e283-4188-afe9-5ff2c46aba47\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-mwq2z"
Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.419303 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4x2m\" (UniqueName: \"kubernetes.io/projected/8327637b-5c53-4bd6-b8bc-fbe3516bc4fc-kube-api-access-g4x2m\") pod \"frr-k8s-kj7q9\" (UID: \"8327637b-5c53-4bd6-b8bc-fbe3516bc4fc\") " pod="metallb-system/frr-k8s-kj7q9"
Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.503239 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-kj7q9"
Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.503821 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/115181ac-3a10-44b9-b7b1-998e1fe24938-cert\") pod \"controller-f8648f98b-z8z62\" (UID: \"115181ac-3a10-44b9-b7b1-998e1fe24938\") " pod="metallb-system/controller-f8648f98b-z8z62"
Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.503886 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0999fee9-3799-4c5f-8338-9ea0b670bed5-metrics-certs\") pod \"speaker-fcw5r\" (UID: \"0999fee9-3799-4c5f-8338-9ea0b670bed5\") " pod="metallb-system/speaker-fcw5r"
Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.503928 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/0999fee9-3799-4c5f-8338-9ea0b670bed5-metallb-excludel2\") pod \"speaker-fcw5r\" (UID: \"0999fee9-3799-4c5f-8338-9ea0b670bed5\") " pod="metallb-system/speaker-fcw5r"
Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.504025 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0999fee9-3799-4c5f-8338-9ea0b670bed5-memberlist\") pod \"speaker-fcw5r\" (UID: \"0999fee9-3799-4c5f-8338-9ea0b670bed5\") " pod="metallb-system/speaker-fcw5r"
Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.504056 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/115181ac-3a10-44b9-b7b1-998e1fe24938-metrics-certs\") pod \"controller-f8648f98b-z8z62\" (UID: \"115181ac-3a10-44b9-b7b1-998e1fe24938\") " pod="metallb-system/controller-f8648f98b-z8z62"
Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.504090 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qw29\" (UniqueName: \"kubernetes.io/projected/115181ac-3a10-44b9-b7b1-998e1fe24938-kube-api-access-5qw29\") pod \"controller-f8648f98b-z8z62\" (UID: \"115181ac-3a10-44b9-b7b1-998e1fe24938\") " pod="metallb-system/controller-f8648f98b-z8z62"
Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.504119 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x54hd\" (UniqueName: \"kubernetes.io/projected/0999fee9-3799-4c5f-8338-9ea0b670bed5-kube-api-access-x54hd\") pod \"speaker-fcw5r\" (UID: \"0999fee9-3799-4c5f-8338-9ea0b670bed5\") " pod="metallb-system/speaker-fcw5r"
Nov 28 07:09:54 crc kubenswrapper[4946]: E1128 07:09:54.504335 4946 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Nov 28 07:09:54 crc kubenswrapper[4946]: E1128 07:09:54.504398 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0999fee9-3799-4c5f-8338-9ea0b670bed5-memberlist podName:0999fee9-3799-4c5f-8338-9ea0b670bed5 nodeName:}" failed. No retries permitted until 2025-11-28 07:09:55.004377102 +0000 UTC m=+1049.382442223 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/0999fee9-3799-4c5f-8338-9ea0b670bed5-memberlist") pod "speaker-fcw5r" (UID: "0999fee9-3799-4c5f-8338-9ea0b670bed5") : secret "metallb-memberlist" not found
Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.505663 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/0999fee9-3799-4c5f-8338-9ea0b670bed5-metallb-excludel2\") pod \"speaker-fcw5r\" (UID: \"0999fee9-3799-4c5f-8338-9ea0b670bed5\") " pod="metallb-system/speaker-fcw5r"
Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.507756 4946 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.509067 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/115181ac-3a10-44b9-b7b1-998e1fe24938-metrics-certs\") pod \"controller-f8648f98b-z8z62\" (UID: \"115181ac-3a10-44b9-b7b1-998e1fe24938\") " pod="metallb-system/controller-f8648f98b-z8z62"
Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.513120 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0999fee9-3799-4c5f-8338-9ea0b670bed5-metrics-certs\") pod \"speaker-fcw5r\" (UID: \"0999fee9-3799-4c5f-8338-9ea0b670bed5\") " pod="metallb-system/speaker-fcw5r"
Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.518937 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/115181ac-3a10-44b9-b7b1-998e1fe24938-cert\") pod \"controller-f8648f98b-z8z62\" (UID: \"115181ac-3a10-44b9-b7b1-998e1fe24938\") " pod="metallb-system/controller-f8648f98b-z8z62"
Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.526213 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x54hd\" (UniqueName: \"kubernetes.io/projected/0999fee9-3799-4c5f-8338-9ea0b670bed5-kube-api-access-x54hd\") pod \"speaker-fcw5r\" (UID: \"0999fee9-3799-4c5f-8338-9ea0b670bed5\") " pod="metallb-system/speaker-fcw5r"
Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.526857 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qw29\" (UniqueName: \"kubernetes.io/projected/115181ac-3a10-44b9-b7b1-998e1fe24938-kube-api-access-5qw29\") pod \"controller-f8648f98b-z8z62\" (UID: \"115181ac-3a10-44b9-b7b1-998e1fe24938\") " pod="metallb-system/controller-f8648f98b-z8z62"
Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.589243 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-f8648f98b-z8z62"
Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.731319 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.731385 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.731437 4946 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr"
Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.732306 4946 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"85f899527cbb8eb5fccf192c306339421531f2edfd2b109fbf8ff7c7c6545620"} pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.732360 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" containerID="cri-o://85f899527cbb8eb5fccf192c306339421531f2edfd2b109fbf8ff7c7c6545620" gracePeriod=600
Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.791583 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-z8z62"]
Nov 28 07:09:54 crc kubenswrapper[4946]: W1128 07:09:54.797132 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod115181ac_3a10_44b9_b7b1_998e1fe24938.slice/crio-e7c659322e9a8b8a25dc5c188479857b9d0ce577bbf9565c5c4c2f325512e48d WatchSource:0}: Error finding container e7c659322e9a8b8a25dc5c188479857b9d0ce577bbf9565c5c4c2f325512e48d: Status 404 returned error can't find the container with id e7c659322e9a8b8a25dc5c188479857b9d0ce577bbf9565c5c4c2f325512e48d
Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.910620 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f6c5f04a-e283-4188-afe9-5ff2c46aba47-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-mwq2z\" (UID: \"f6c5f04a-e283-4188-afe9-5ff2c46aba47\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-mwq2z"
Nov 28 07:09:54 crc kubenswrapper[4946]: I1128 07:09:54.917419 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f6c5f04a-e283-4188-afe9-5ff2c46aba47-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-mwq2z\" (UID: \"f6c5f04a-e283-4188-afe9-5ff2c46aba47\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-mwq2z"
Nov 28 07:09:55 crc kubenswrapper[4946]: I1128 07:09:55.013490 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0999fee9-3799-4c5f-8338-9ea0b670bed5-memberlist\") pod \"speaker-fcw5r\" (UID: \"0999fee9-3799-4c5f-8338-9ea0b670bed5\") " pod="metallb-system/speaker-fcw5r"
Nov 28 07:09:55 crc kubenswrapper[4946]: I1128 07:09:55.018680 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0999fee9-3799-4c5f-8338-9ea0b670bed5-memberlist\") pod \"speaker-fcw5r\" (UID: \"0999fee9-3799-4c5f-8338-9ea0b670bed5\") " pod="metallb-system/speaker-fcw5r"
Nov 28 07:09:55 crc kubenswrapper[4946]: I1128 07:09:55.112349 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-mwq2z"
Nov 28 07:09:55 crc kubenswrapper[4946]: I1128 07:09:55.182425 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-fcw5r"
Nov 28 07:09:55 crc kubenswrapper[4946]: I1128 07:09:55.386038 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-mwq2z"]
Nov 28 07:09:55 crc kubenswrapper[4946]: I1128 07:09:55.513208 4946 generic.go:334] "Generic (PLEG): container finished" podID="7450befc-262f-45d1-a5f4-f445e540185b" containerID="85f899527cbb8eb5fccf192c306339421531f2edfd2b109fbf8ff7c7c6545620" exitCode=0
Nov 28 07:09:55 crc kubenswrapper[4946]: I1128 07:09:55.513282 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerDied","Data":"85f899527cbb8eb5fccf192c306339421531f2edfd2b109fbf8ff7c7c6545620"}
Nov 28 07:09:55 crc kubenswrapper[4946]: I1128 07:09:55.513319 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerStarted","Data":"0a6d974443f840af515def5c439a2c40cf5e3449f4043f5c4fe778bb70c9b0fd"}
Nov 28 07:09:55 crc kubenswrapper[4946]: I1128 07:09:55.513339 4946 scope.go:117] "RemoveContainer" containerID="9312f27097b6f3bbafda147b11eda22821e2c49101e6733241ea36c59af5418f"
Nov 28 07:09:55 crc kubenswrapper[4946]: I1128 07:09:55.516539 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-fcw5r" event={"ID":"0999fee9-3799-4c5f-8338-9ea0b670bed5","Type":"ContainerStarted","Data":"adbf8321b304f7b276248c5dede8338b79642df2e6b7795230254b1ad3c317f5"}
Nov 28 07:09:55 crc kubenswrapper[4946]: I1128 07:09:55.516597 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-fcw5r" event={"ID":"0999fee9-3799-4c5f-8338-9ea0b670bed5","Type":"ContainerStarted","Data":"7b3101557e3e392e6c706b311d2e7a9340f27ffe5d8c707c53ed1e14eda46e4e"}
Nov 28 07:09:55 crc kubenswrapper[4946]: I1128 07:09:55.519132 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kj7q9" event={"ID":"8327637b-5c53-4bd6-b8bc-fbe3516bc4fc","Type":"ContainerStarted","Data":"ccc1f8bef22ac00f82a814a2ed9b63671292d8d904f77efad721ee946a6436af"}
Nov 28 07:09:55 crc kubenswrapper[4946]: I1128 07:09:55.521555 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-mwq2z" event={"ID":"f6c5f04a-e283-4188-afe9-5ff2c46aba47","Type":"ContainerStarted","Data":"347910eef2a0fbe080a8e8a2e10cdffba82d04a59cf34a7ebf1d673a19315c81"}
Nov 28 07:09:55 crc kubenswrapper[4946]: I1128 07:09:55.524609 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-z8z62" event={"ID":"115181ac-3a10-44b9-b7b1-998e1fe24938","Type":"ContainerStarted","Data":"a12bb801fa513a5da1c16eb4c157f4cbedfecf3fa7e0cc7d4cbe92bb29feff68"}
Nov 28 07:09:55 crc kubenswrapper[4946]: I1128 07:09:55.524650 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-z8z62" event={"ID":"115181ac-3a10-44b9-b7b1-998e1fe24938","Type":"ContainerStarted","Data":"0dbcd8e0e0fedfb971d5e8325438af82f60099996b38ca0670b6cc0113ac9b0b"}
Nov 28 07:09:55 crc kubenswrapper[4946]: I1128 07:09:55.524661 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-z8z62" event={"ID":"115181ac-3a10-44b9-b7b1-998e1fe24938","Type":"ContainerStarted","Data":"e7c659322e9a8b8a25dc5c188479857b9d0ce577bbf9565c5c4c2f325512e48d"}
Nov 28 07:09:55 crc kubenswrapper[4946]: I1128 07:09:55.525385 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-f8648f98b-z8z62"
Nov 28 07:09:55 crc kubenswrapper[4946]: I1128 07:09:55.566829 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-f8648f98b-z8z62" podStartSLOduration=1.566801151 podStartE2EDuration="1.566801151s" podCreationTimestamp="2025-11-28 07:09:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:09:55.563898409 +0000 UTC m=+1049.941963530" watchObservedRunningTime="2025-11-28 07:09:55.566801151 +0000 UTC m=+1049.944866262"
Nov 28 07:09:56 crc kubenswrapper[4946]: I1128 07:09:56.540918 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-fcw5r" event={"ID":"0999fee9-3799-4c5f-8338-9ea0b670bed5","Type":"ContainerStarted","Data":"b5a16a17e9b339ab0eef488dc767c194eaa3e60ff6de813b7c7cf28e358df84d"}
Nov 28 07:09:56 crc kubenswrapper[4946]: I1128 07:09:56.541494 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-fcw5r"
Nov 28 07:09:56 crc kubenswrapper[4946]: I1128 07:09:56.563656 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-fcw5r" podStartSLOduration=2.563634865 podStartE2EDuration="2.563634865s" podCreationTimestamp="2025-11-28 07:09:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:09:56.562424105 +0000 UTC m=+1050.940489216" watchObservedRunningTime="2025-11-28 07:09:56.563634865 +0000 UTC m=+1050.941699976"
Nov 28 07:10:02 crc kubenswrapper[4946]: I1128 07:10:02.595504 4946 generic.go:334] "Generic (PLEG): container finished" podID="8327637b-5c53-4bd6-b8bc-fbe3516bc4fc" containerID="e4f4f6f479c6b5c800e2d8fcf414c6227853f7e24fb4623491fe25e5031aeb5d" exitCode=0
Nov 28 07:10:02 crc kubenswrapper[4946]: I1128 07:10:02.595612 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kj7q9" event={"ID":"8327637b-5c53-4bd6-b8bc-fbe3516bc4fc","Type":"ContainerDied","Data":"e4f4f6f479c6b5c800e2d8fcf414c6227853f7e24fb4623491fe25e5031aeb5d"}
Nov 28 07:10:02 crc kubenswrapper[4946]: I1128 07:10:02.601148 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-mwq2z" event={"ID":"f6c5f04a-e283-4188-afe9-5ff2c46aba47","Type":"ContainerStarted","Data":"b1848a8eb0f586acf7f5ed201a01e66d19f44394759048f1d85ba973c8d60a03"}
Nov 28 07:10:02 crc kubenswrapper[4946]: I1128 07:10:02.601455 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-mwq2z"
Nov 28 07:10:02 crc kubenswrapper[4946]: I1128 07:10:02.663602 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-mwq2z" podStartSLOduration=2.017719352 podStartE2EDuration="8.663582374s" podCreationTimestamp="2025-11-28 07:09:54 +0000 UTC" firstStartedPulling="2025-11-28 07:09:55.420091207 +0000 UTC m=+1049.798156318" lastFinishedPulling="2025-11-28 07:10:02.065954209 +0000 UTC m=+1056.444019340" observedRunningTime="2025-11-28 07:10:02.662635131 +0000 UTC m=+1057.040700252" watchObservedRunningTime="2025-11-28 07:10:02.663582374 +0000 UTC m=+1057.041647485"
Nov 28 07:10:03 crc kubenswrapper[4946]: I1128 07:10:03.610298 4946 generic.go:334] "Generic (PLEG): container finished" podID="8327637b-5c53-4bd6-b8bc-fbe3516bc4fc" containerID="258791d8c0a12241f8f7f2d9546efa9fcf7a0f9ed52bae4fb157ee39e75bbdcd" exitCode=0
Nov 28 07:10:03 crc kubenswrapper[4946]: I1128 07:10:03.610404 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kj7q9" event={"ID":"8327637b-5c53-4bd6-b8bc-fbe3516bc4fc","Type":"ContainerDied","Data":"258791d8c0a12241f8f7f2d9546efa9fcf7a0f9ed52bae4fb157ee39e75bbdcd"}
Nov 28 07:10:04 crc kubenswrapper[4946]: I1128 07:10:04.595141 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-f8648f98b-z8z62"
Nov 28 07:10:04 crc kubenswrapper[4946]: I1128 07:10:04.648359 4946 generic.go:334] "Generic (PLEG): container finished" podID="8327637b-5c53-4bd6-b8bc-fbe3516bc4fc" containerID="3e1d72feec33400d1cdc8d024a4c49e276bbec6fc75070288eb9a5338b7f57dd" exitCode=0
Nov 28 07:10:04 crc kubenswrapper[4946]: I1128 07:10:04.648443 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kj7q9" event={"ID":"8327637b-5c53-4bd6-b8bc-fbe3516bc4fc","Type":"ContainerDied","Data":"3e1d72feec33400d1cdc8d024a4c49e276bbec6fc75070288eb9a5338b7f57dd"}
Nov 28 07:10:05 crc kubenswrapper[4946]: I1128 07:10:05.189994 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-fcw5r"
Nov 28 07:10:05 crc kubenswrapper[4946]: I1128 07:10:05.687858 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kj7q9" event={"ID":"8327637b-5c53-4bd6-b8bc-fbe3516bc4fc","Type":"ContainerStarted","Data":"33eda089f67c9a9dba80d7a181b52a9cfbc169593193589888a442b44aa97f2b"}
Nov 28 07:10:05 crc kubenswrapper[4946]: I1128 07:10:05.687913 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kj7q9" event={"ID":"8327637b-5c53-4bd6-b8bc-fbe3516bc4fc","Type":"ContainerStarted","Data":"4061c0cca5a77c2da07af3b309d69bf282abc69df0b54c3af4a80b8998459d65"}
Nov 28 07:10:05 crc kubenswrapper[4946]: I1128 07:10:05.687927 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kj7q9" event={"ID":"8327637b-5c53-4bd6-b8bc-fbe3516bc4fc","Type":"ContainerStarted","Data":"f6446dc409f1aff62ccd8b5b1eb126706f301427aa1ed24fcc580e539afe41e1"}
Nov 28 07:10:05 crc kubenswrapper[4946]: I1128 07:10:05.687938 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kj7q9" event={"ID":"8327637b-5c53-4bd6-b8bc-fbe3516bc4fc","Type":"ContainerStarted","Data":"3fcc09692b02c4270bd333b872b14927282419c4e4f3f007d9a46af880898275"}
Nov 28 07:10:06 crc kubenswrapper[4946]: I1128 07:10:06.637006 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am794n"]
Nov 28 07:10:06 crc kubenswrapper[4946]: I1128 07:10:06.638612 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am794n"
Nov 28 07:10:06 crc kubenswrapper[4946]: I1128 07:10:06.641791 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Nov 28 07:10:06 crc kubenswrapper[4946]: I1128 07:10:06.654918 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am794n"]
Nov 28 07:10:06 crc kubenswrapper[4946]: I1128 07:10:06.700351 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kj7q9" event={"ID":"8327637b-5c53-4bd6-b8bc-fbe3516bc4fc","Type":"ContainerStarted","Data":"7cf49293c1317afb3e1ca4d48bade01d3e6a0f6a2a5af9a5153fa6fe2fbbac3a"}
Nov 28 07:10:06 crc kubenswrapper[4946]: I1128 07:10:06.700403 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kj7q9" event={"ID":"8327637b-5c53-4bd6-b8bc-fbe3516bc4fc","Type":"ContainerStarted","Data":"f601248eedcaffbe6644e01b4af11bf32c5aebf0f36020285c774377600fc2df"}
Nov 28 07:10:06 crc kubenswrapper[4946]: I1128 07:10:06.700548 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-kj7q9"
Nov 28 07:10:06 crc kubenswrapper[4946]: I1128 07:10:06.730359 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-kj7q9" podStartSLOduration=5.396185654 podStartE2EDuration="12.730334737s" podCreationTimestamp="2025-11-28 07:09:54 +0000 UTC" firstStartedPulling="2025-11-28 07:09:54.731781156 +0000 UTC m=+1049.109846267" lastFinishedPulling="2025-11-28 07:10:02.065930239 +0000 UTC m=+1056.443995350" observedRunningTime="2025-11-28 07:10:06.722904153 +0000 UTC m=+1061.100969274" watchObservedRunningTime="2025-11-28 07:10:06.730334737 +0000 UTC m=+1061.108399848"
Nov 28 07:10:06 crc kubenswrapper[4946]: I1128 07:10:06.814447 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msqmb\" (UniqueName: \"kubernetes.io/projected/2857b07d-5417-4e58-9de9-cdfd55125727-kube-api-access-msqmb\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am794n\" (UID: \"2857b07d-5417-4e58-9de9-cdfd55125727\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am794n"
Nov 28 07:10:06 crc kubenswrapper[4946]: I1128 07:10:06.814534 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2857b07d-5417-4e58-9de9-cdfd55125727-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am794n\" (UID: \"2857b07d-5417-4e58-9de9-cdfd55125727\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am794n"
Nov 28 07:10:06 crc kubenswrapper[4946]: I1128 07:10:06.814559 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2857b07d-5417-4e58-9de9-cdfd55125727-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am794n\" (UID: \"2857b07d-5417-4e58-9de9-cdfd55125727\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am794n"
Nov 28 07:10:06 crc kubenswrapper[4946]: I1128 07:10:06.917020 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msqmb\" (UniqueName: \"kubernetes.io/projected/2857b07d-5417-4e58-9de9-cdfd55125727-kube-api-access-msqmb\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am794n\" (UID: \"2857b07d-5417-4e58-9de9-cdfd55125727\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am794n"
Nov 28 07:10:06 crc kubenswrapper[4946]: I1128 07:10:06.917127 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2857b07d-5417-4e58-9de9-cdfd55125727-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am794n\" (UID: \"2857b07d-5417-4e58-9de9-cdfd55125727\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am794n"
Nov 28 07:10:06 crc kubenswrapper[4946]: I1128 07:10:06.917189 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2857b07d-5417-4e58-9de9-cdfd55125727-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am794n\" (UID: \"2857b07d-5417-4e58-9de9-cdfd55125727\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am794n"
Nov 28 07:10:06 crc kubenswrapper[4946]: I1128 07:10:06.917776 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2857b07d-5417-4e58-9de9-cdfd55125727-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am794n\" (UID: \"2857b07d-5417-4e58-9de9-cdfd55125727\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am794n"
Nov 28 07:10:06 crc kubenswrapper[4946]: I1128 07:10:06.917982 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2857b07d-5417-4e58-9de9-cdfd55125727-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am794n\" (UID: \"2857b07d-5417-4e58-9de9-cdfd55125727\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am794n"
Nov 28 07:10:06 crc kubenswrapper[4946]: I1128 07:10:06.941028 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msqmb\" (UniqueName: \"kubernetes.io/projected/2857b07d-5417-4e58-9de9-cdfd55125727-kube-api-access-msqmb\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am794n\" (UID: \"2857b07d-5417-4e58-9de9-cdfd55125727\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am794n"
Nov 28 07:10:06 crc kubenswrapper[4946]: I1128 07:10:06.965179 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am794n"
Nov 28 07:10:07 crc kubenswrapper[4946]: I1128 07:10:07.392117 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am794n"]
Nov 28 07:10:07 crc kubenswrapper[4946]: W1128 07:10:07.405079 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2857b07d_5417_4e58_9de9_cdfd55125727.slice/crio-da4c6c1e898d2e51d435d1e88594faa3d6b2c9ea99331eea2854cf71a4d94703 WatchSource:0}: Error finding container da4c6c1e898d2e51d435d1e88594faa3d6b2c9ea99331eea2854cf71a4d94703: Status 404 returned error can't find the container with id da4c6c1e898d2e51d435d1e88594faa3d6b2c9ea99331eea2854cf71a4d94703
Nov 28 07:10:07 crc kubenswrapper[4946]: I1128 07:10:07.708999 4946 generic.go:334] "Generic (PLEG): container finished" podID="2857b07d-5417-4e58-9de9-cdfd55125727" containerID="7b2cd4ca012f9c061c346dab889c273ab9a0299af29c72b3775ad7180f233edd" exitCode=0
Nov 28 07:10:07 crc kubenswrapper[4946]: I1128 07:10:07.709064 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am794n" event={"ID":"2857b07d-5417-4e58-9de9-cdfd55125727","Type":"ContainerDied","Data":"7b2cd4ca012f9c061c346dab889c273ab9a0299af29c72b3775ad7180f233edd"}
Nov 28 07:10:07 crc kubenswrapper[4946]: I1128 07:10:07.709418 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am794n" event={"ID":"2857b07d-5417-4e58-9de9-cdfd55125727","Type":"ContainerStarted","Data":"da4c6c1e898d2e51d435d1e88594faa3d6b2c9ea99331eea2854cf71a4d94703"}
Nov 28 07:10:09 crc kubenswrapper[4946]: I1128 07:10:09.504297 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-kj7q9"
Nov 28 07:10:09 crc kubenswrapper[4946]: I1128 07:10:09.560887 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-kj7q9"
Nov 28 07:10:11 crc kubenswrapper[4946]: I1128 07:10:11.749704 4946 generic.go:334] "Generic (PLEG): container finished" podID="2857b07d-5417-4e58-9de9-cdfd55125727" containerID="b48eff8dfb3d6eb26c0fc7cd0133eac87ec4e40736212213323c131b6c3effa7" exitCode=0
Nov 28 07:10:11 crc kubenswrapper[4946]: I1128 07:10:11.749849 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am794n" event={"ID":"2857b07d-5417-4e58-9de9-cdfd55125727","Type":"ContainerDied","Data":"b48eff8dfb3d6eb26c0fc7cd0133eac87ec4e40736212213323c131b6c3effa7"}
Nov 28 07:10:12 crc kubenswrapper[4946]: I1128 07:10:12.759180 4946 generic.go:334] "Generic (PLEG): container finished" podID="2857b07d-5417-4e58-9de9-cdfd55125727" containerID="257cc3817a5b5818d968c9818f074e8ae14c198eb855fabc8ddc64b0e91021b0" exitCode=0
Nov 28 07:10:12 crc kubenswrapper[4946]: I1128 07:10:12.760017 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am794n" event={"ID":"2857b07d-5417-4e58-9de9-cdfd55125727","Type":"ContainerDied","Data":"257cc3817a5b5818d968c9818f074e8ae14c198eb855fabc8ddc64b0e91021b0"}
Nov 28 07:10:14 crc kubenswrapper[4946]: I1128 07:10:14.172840 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am794n"
Nov 28 07:10:14 crc kubenswrapper[4946]: I1128 07:10:14.227749 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msqmb\" (UniqueName: \"kubernetes.io/projected/2857b07d-5417-4e58-9de9-cdfd55125727-kube-api-access-msqmb\") pod \"2857b07d-5417-4e58-9de9-cdfd55125727\" (UID: \"2857b07d-5417-4e58-9de9-cdfd55125727\") "
Nov 28 07:10:14 crc kubenswrapper[4946]: I1128 07:10:14.227940 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2857b07d-5417-4e58-9de9-cdfd55125727-bundle\") pod \"2857b07d-5417-4e58-9de9-cdfd55125727\" (UID: \"2857b07d-5417-4e58-9de9-cdfd55125727\") "
Nov 28 07:10:14 crc kubenswrapper[4946]: I1128 07:10:14.228026 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2857b07d-5417-4e58-9de9-cdfd55125727-util\") pod \"2857b07d-5417-4e58-9de9-cdfd55125727\" (UID: \"2857b07d-5417-4e58-9de9-cdfd55125727\") "
Nov 28 07:10:14 crc kubenswrapper[4946]: I1128 07:10:14.229637 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2857b07d-5417-4e58-9de9-cdfd55125727-bundle" (OuterVolumeSpecName: "bundle") pod "2857b07d-5417-4e58-9de9-cdfd55125727" (UID: "2857b07d-5417-4e58-9de9-cdfd55125727"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 07:10:14 crc kubenswrapper[4946]: I1128 07:10:14.235092 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2857b07d-5417-4e58-9de9-cdfd55125727-kube-api-access-msqmb" (OuterVolumeSpecName: "kube-api-access-msqmb") pod "2857b07d-5417-4e58-9de9-cdfd55125727" (UID: "2857b07d-5417-4e58-9de9-cdfd55125727"). InnerVolumeSpecName "kube-api-access-msqmb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 07:10:14 crc kubenswrapper[4946]: I1128 07:10:14.252517 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2857b07d-5417-4e58-9de9-cdfd55125727-util" (OuterVolumeSpecName: "util") pod "2857b07d-5417-4e58-9de9-cdfd55125727" (UID: "2857b07d-5417-4e58-9de9-cdfd55125727"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 07:10:14 crc kubenswrapper[4946]: I1128 07:10:14.329339 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msqmb\" (UniqueName: \"kubernetes.io/projected/2857b07d-5417-4e58-9de9-cdfd55125727-kube-api-access-msqmb\") on node \"crc\" DevicePath \"\""
Nov 28 07:10:14 crc kubenswrapper[4946]: I1128 07:10:14.329387 4946 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2857b07d-5417-4e58-9de9-cdfd55125727-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 07:10:14 crc kubenswrapper[4946]: I1128 07:10:14.329411 4946 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2857b07d-5417-4e58-9de9-cdfd55125727-util\") on node \"crc\" DevicePath \"\""
Nov 28 07:10:14 crc kubenswrapper[4946]: I1128 07:10:14.779909 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am794n" event={"ID":"2857b07d-5417-4e58-9de9-cdfd55125727","Type":"ContainerDied","Data":"da4c6c1e898d2e51d435d1e88594faa3d6b2c9ea99331eea2854cf71a4d94703"}
Nov 28 07:10:14 crc kubenswrapper[4946]: I1128 07:10:14.779983 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da4c6c1e898d2e51d435d1e88594faa3d6b2c9ea99331eea2854cf71a4d94703"
Nov 28 07:10:14 crc kubenswrapper[4946]: I1128 07:10:14.780068 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am794n"
Nov 28 07:10:15 crc kubenswrapper[4946]: I1128 07:10:15.122963 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-mwq2z"
Nov 28 07:10:19 crc kubenswrapper[4946]: I1128 07:10:19.667439 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-sp54q"]
Nov 28 07:10:19 crc kubenswrapper[4946]: E1128 07:10:19.668491 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2857b07d-5417-4e58-9de9-cdfd55125727" containerName="extract"
Nov 28 07:10:19 crc kubenswrapper[4946]: I1128 07:10:19.668508 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="2857b07d-5417-4e58-9de9-cdfd55125727" containerName="extract"
Nov 28 07:10:19 crc kubenswrapper[4946]: E1128 07:10:19.668533 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2857b07d-5417-4e58-9de9-cdfd55125727" containerName="pull"
Nov 28 07:10:19 crc kubenswrapper[4946]: I1128 07:10:19.668541 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="2857b07d-5417-4e58-9de9-cdfd55125727" containerName="pull"
Nov 28 07:10:19 crc kubenswrapper[4946]: E1128 07:10:19.668556 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2857b07d-5417-4e58-9de9-cdfd55125727" containerName="util"
Nov 28 07:10:19 crc kubenswrapper[4946]: I1128 07:10:19.668564 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="2857b07d-5417-4e58-9de9-cdfd55125727" containerName="util"
Nov 28 07:10:19 crc kubenswrapper[4946]: I1128 07:10:19.668688 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="2857b07d-5417-4e58-9de9-cdfd55125727" containerName="extract"
Nov 28 07:10:19 crc kubenswrapper[4946]: I1128 07:10:19.669261 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-sp54q"
Nov 28 07:10:19 crc kubenswrapper[4946]: I1128 07:10:19.672424 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt"
Nov 28 07:10:19 crc kubenswrapper[4946]: I1128 07:10:19.672599 4946 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-9tntw"
Nov 28 07:10:19 crc kubenswrapper[4946]: I1128 07:10:19.672956 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt"
Nov 28 07:10:19 crc kubenswrapper[4946]: I1128 07:10:19.681755 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-sp54q"]
Nov 28 07:10:19 crc kubenswrapper[4946]: I1128 07:10:19.805997 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b62cfaf8-9031-4fa5-9f89-f8ddbe19de56-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-sp54q\" (UID: \"b62cfaf8-9031-4fa5-9f89-f8ddbe19de56\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-sp54q"
Nov 28 07:10:19 crc kubenswrapper[4946]: I1128 07:10:19.806075 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxlb7\" (UniqueName: \"kubernetes.io/projected/b62cfaf8-9031-4fa5-9f89-f8ddbe19de56-kube-api-access-zxlb7\") pod \"cert-manager-operator-controller-manager-64cf6dff88-sp54q\" (UID: \"b62cfaf8-9031-4fa5-9f89-f8ddbe19de56\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-sp54q"
Nov 28 07:10:19 crc kubenswrapper[4946]: I1128 07:10:19.907323 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b62cfaf8-9031-4fa5-9f89-f8ddbe19de56-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-sp54q\" (UID: \"b62cfaf8-9031-4fa5-9f89-f8ddbe19de56\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-sp54q"
Nov 28 07:10:19 crc kubenswrapper[4946]: I1128 07:10:19.907396 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxlb7\" (UniqueName: \"kubernetes.io/projected/b62cfaf8-9031-4fa5-9f89-f8ddbe19de56-kube-api-access-zxlb7\") pod \"cert-manager-operator-controller-manager-64cf6dff88-sp54q\" (UID: \"b62cfaf8-9031-4fa5-9f89-f8ddbe19de56\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-sp54q"
Nov 28 07:10:19 crc kubenswrapper[4946]: I1128 07:10:19.908171 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b62cfaf8-9031-4fa5-9f89-f8ddbe19de56-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-sp54q\" (UID: \"b62cfaf8-9031-4fa5-9f89-f8ddbe19de56\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-sp54q"
Nov 28 07:10:19 crc kubenswrapper[4946]: I1128 07:10:19.931286 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxlb7\" (UniqueName: \"kubernetes.io/projected/b62cfaf8-9031-4fa5-9f89-f8ddbe19de56-kube-api-access-zxlb7\") pod \"cert-manager-operator-controller-manager-64cf6dff88-sp54q\" (UID: \"b62cfaf8-9031-4fa5-9f89-f8ddbe19de56\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-sp54q"
Nov 28 07:10:19 crc kubenswrapper[4946]: I1128 07:10:19.987254 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-sp54q"
Nov 28 07:10:20 crc kubenswrapper[4946]: I1128 07:10:20.218497 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-sp54q"]
Nov 28 07:10:20 crc kubenswrapper[4946]: I1128 07:10:20.825899 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-sp54q" event={"ID":"b62cfaf8-9031-4fa5-9f89-f8ddbe19de56","Type":"ContainerStarted","Data":"655bba4c452bd095cef87e5b095e5a266d62304aab107bcb58d46bf06756cdc9"}
Nov 28 07:10:24 crc kubenswrapper[4946]: I1128 07:10:24.507880 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-kj7q9"
Nov 28 07:10:29 crc kubenswrapper[4946]: I1128 07:10:29.907383 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-sp54q" event={"ID":"b62cfaf8-9031-4fa5-9f89-f8ddbe19de56","Type":"ContainerStarted","Data":"d36470fa4ded2cf17e75300f4233ed551f10bee51cb4adec59a67520f6abb0d3"}
Nov 28 07:10:29 crc kubenswrapper[4946]: I1128 07:10:29.947647 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-sp54q" podStartSLOduration=2.419022517 podStartE2EDuration="10.947621359s" podCreationTimestamp="2025-11-28 07:10:19 +0000 UTC" firstStartedPulling="2025-11-28 07:10:20.238649907 +0000 UTC m=+1074.616715018" lastFinishedPulling="2025-11-28 07:10:28.767248749 +0000 UTC m=+1083.145313860" observedRunningTime="2025-11-28 07:10:29.944494732 +0000 UTC m=+1084.322559853" watchObservedRunningTime="2025-11-28 07:10:29.947621359 +0000 UTC m=+1084.325686490"
Nov 28 07:10:33 crc kubenswrapper[4946]: I1128 07:10:33.245198 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-wz2tz"]
Nov 28 07:10:33 crc kubenswrapper[4946]: I1128 07:10:33.246969 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-wz2tz"
Nov 28 07:10:33 crc kubenswrapper[4946]: I1128 07:10:33.250776 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Nov 28 07:10:33 crc kubenswrapper[4946]: I1128 07:10:33.250934 4946 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-98g6l"
Nov 28 07:10:33 crc kubenswrapper[4946]: I1128 07:10:33.251944 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Nov 28 07:10:33 crc kubenswrapper[4946]: I1128 07:10:33.262778 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-wz2tz"]
Nov 28 07:10:33 crc kubenswrapper[4946]: I1128 07:10:33.335260 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brxct\" (UniqueName: \"kubernetes.io/projected/4e12249a-3983-4d04-a19d-57ada923666f-kube-api-access-brxct\") pod \"cert-manager-webhook-f4fb5df64-wz2tz\" (UID: \"4e12249a-3983-4d04-a19d-57ada923666f\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-wz2tz"
Nov 28 07:10:33 crc kubenswrapper[4946]: I1128 07:10:33.335449 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4e12249a-3983-4d04-a19d-57ada923666f-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-wz2tz\" (UID: \"4e12249a-3983-4d04-a19d-57ada923666f\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-wz2tz"
Nov 28 07:10:33 crc kubenswrapper[4946]: I1128 07:10:33.437273 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brxct\" (UniqueName: \"kubernetes.io/projected/4e12249a-3983-4d04-a19d-57ada923666f-kube-api-access-brxct\") pod \"cert-manager-webhook-f4fb5df64-wz2tz\" (UID: \"4e12249a-3983-4d04-a19d-57ada923666f\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-wz2tz"
Nov 28 07:10:33 crc kubenswrapper[4946]: I1128 07:10:33.437379 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4e12249a-3983-4d04-a19d-57ada923666f-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-wz2tz\" (UID: \"4e12249a-3983-4d04-a19d-57ada923666f\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-wz2tz"
Nov 28 07:10:33 crc kubenswrapper[4946]: I1128 07:10:33.463057 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4e12249a-3983-4d04-a19d-57ada923666f-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-wz2tz\" (UID: \"4e12249a-3983-4d04-a19d-57ada923666f\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-wz2tz"
Nov 28 07:10:33 crc kubenswrapper[4946]: I1128 07:10:33.468615 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brxct\" (UniqueName: \"kubernetes.io/projected/4e12249a-3983-4d04-a19d-57ada923666f-kube-api-access-brxct\") pod \"cert-manager-webhook-f4fb5df64-wz2tz\" (UID: \"4e12249a-3983-4d04-a19d-57ada923666f\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-wz2tz"
Nov 28 07:10:33 crc kubenswrapper[4946]: I1128 07:10:33.563822 4946 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-wz2tz" Nov 28 07:10:33 crc kubenswrapper[4946]: I1128 07:10:33.833211 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-wz2tz"] Nov 28 07:10:33 crc kubenswrapper[4946]: I1128 07:10:33.846928 4946 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 07:10:33 crc kubenswrapper[4946]: I1128 07:10:33.953629 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-wz2tz" event={"ID":"4e12249a-3983-4d04-a19d-57ada923666f","Type":"ContainerStarted","Data":"c4c6af28e281e6a771fa8bd1b75d89ee6afd75ca979d3cf1eec90c7fdbc7c9a0"} Nov 28 07:10:34 crc kubenswrapper[4946]: I1128 07:10:34.636427 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-zr7xs"] Nov 28 07:10:34 crc kubenswrapper[4946]: I1128 07:10:34.637847 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-zr7xs" Nov 28 07:10:34 crc kubenswrapper[4946]: I1128 07:10:34.640075 4946 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-rcddn" Nov 28 07:10:34 crc kubenswrapper[4946]: I1128 07:10:34.657383 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-zr7xs"] Nov 28 07:10:34 crc kubenswrapper[4946]: I1128 07:10:34.767025 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0695ed81-d836-4e66-9b94-8335a37e9d46-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-zr7xs\" (UID: \"0695ed81-d836-4e66-9b94-8335a37e9d46\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-zr7xs" Nov 28 07:10:34 crc kubenswrapper[4946]: I1128 07:10:34.767183 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2m4h\" (UniqueName: \"kubernetes.io/projected/0695ed81-d836-4e66-9b94-8335a37e9d46-kube-api-access-b2m4h\") pod \"cert-manager-cainjector-855d9ccff4-zr7xs\" (UID: \"0695ed81-d836-4e66-9b94-8335a37e9d46\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-zr7xs" Nov 28 07:10:34 crc kubenswrapper[4946]: I1128 07:10:34.869116 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0695ed81-d836-4e66-9b94-8335a37e9d46-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-zr7xs\" (UID: \"0695ed81-d836-4e66-9b94-8335a37e9d46\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-zr7xs" Nov 28 07:10:34 crc kubenswrapper[4946]: I1128 07:10:34.869274 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2m4h\" (UniqueName: \"kubernetes.io/projected/0695ed81-d836-4e66-9b94-8335a37e9d46-kube-api-access-b2m4h\") pod \"cert-manager-cainjector-855d9ccff4-zr7xs\" (UID: \"0695ed81-d836-4e66-9b94-8335a37e9d46\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-zr7xs" Nov 28 07:10:34 crc kubenswrapper[4946]: I1128 07:10:34.901599 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0695ed81-d836-4e66-9b94-8335a37e9d46-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-zr7xs\" (UID: 
\"0695ed81-d836-4e66-9b94-8335a37e9d46\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-zr7xs" Nov 28 07:10:34 crc kubenswrapper[4946]: I1128 07:10:34.901695 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2m4h\" (UniqueName: \"kubernetes.io/projected/0695ed81-d836-4e66-9b94-8335a37e9d46-kube-api-access-b2m4h\") pod \"cert-manager-cainjector-855d9ccff4-zr7xs\" (UID: \"0695ed81-d836-4e66-9b94-8335a37e9d46\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-zr7xs" Nov 28 07:10:34 crc kubenswrapper[4946]: I1128 07:10:34.957817 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-zr7xs" Nov 28 07:10:35 crc kubenswrapper[4946]: I1128 07:10:35.195624 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-zr7xs"] Nov 28 07:10:35 crc kubenswrapper[4946]: I1128 07:10:35.973171 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-zr7xs" event={"ID":"0695ed81-d836-4e66-9b94-8335a37e9d46","Type":"ContainerStarted","Data":"96a1507c10744c8d50266a3c4ecdb69b4b9f4b0bf665f4cc03de0d2cbbcee87b"} Nov 28 07:10:44 crc kubenswrapper[4946]: I1128 07:10:44.061178 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-wz2tz" event={"ID":"4e12249a-3983-4d04-a19d-57ada923666f","Type":"ContainerStarted","Data":"1b3932ae5161df44af7dc9268d9dd8954699b12c8c21313560904746359f2ee6"} Nov 28 07:10:44 crc kubenswrapper[4946]: I1128 07:10:44.062051 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-wz2tz" Nov 28 07:10:44 crc kubenswrapper[4946]: I1128 07:10:44.063790 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-zr7xs" event={"ID":"0695ed81-d836-4e66-9b94-8335a37e9d46","Type":"ContainerStarted","Data":"9aaf975cc1b15f56c3178fa63807d71d72d645bd76590c163617ed952d1a95cf"} Nov 28 07:10:44 crc kubenswrapper[4946]: I1128 07:10:44.086426 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-wz2tz" podStartSLOduration=1.846123316 podStartE2EDuration="11.086405408s" podCreationTimestamp="2025-11-28 07:10:33 +0000 UTC" firstStartedPulling="2025-11-28 07:10:33.846610386 +0000 UTC m=+1088.224675497" lastFinishedPulling="2025-11-28 07:10:43.086892478 +0000 UTC m=+1097.464957589" observedRunningTime="2025-11-28 07:10:44.084614533 +0000 UTC m=+1098.462679664" watchObservedRunningTime="2025-11-28 07:10:44.086405408 +0000 UTC m=+1098.464470519" Nov 28 07:10:44 crc kubenswrapper[4946]: I1128 07:10:44.112131 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-zr7xs" podStartSLOduration=2.213049357 podStartE2EDuration="10.112100744s" podCreationTimestamp="2025-11-28 07:10:34 +0000 UTC" firstStartedPulling="2025-11-28 07:10:35.214710197 +0000 UTC m=+1089.592775308" lastFinishedPulling="2025-11-28 07:10:43.113761584 +0000 UTC m=+1097.491826695" observedRunningTime="2025-11-28 07:10:44.106788553 +0000 UTC m=+1098.484853664" watchObservedRunningTime="2025-11-28 07:10:44.112100744 +0000 UTC m=+1098.490165865" Nov 28 07:10:48 crc kubenswrapper[4946]: I1128 07:10:48.568584 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-wz2tz" Nov 
28 07:10:51 crc kubenswrapper[4946]: I1128 07:10:51.190077 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-c25j6"] Nov 28 07:10:51 crc kubenswrapper[4946]: I1128 07:10:51.192905 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-c25j6" Nov 28 07:10:51 crc kubenswrapper[4946]: I1128 07:10:51.197675 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-c25j6"] Nov 28 07:10:51 crc kubenswrapper[4946]: I1128 07:10:51.197951 4946 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-s5cww" Nov 28 07:10:51 crc kubenswrapper[4946]: I1128 07:10:51.291670 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/50b43344-5fec-4bba-b8a0-298c20b951c9-bound-sa-token\") pod \"cert-manager-86cb77c54b-c25j6\" (UID: \"50b43344-5fec-4bba-b8a0-298c20b951c9\") " pod="cert-manager/cert-manager-86cb77c54b-c25j6" Nov 28 07:10:51 crc kubenswrapper[4946]: I1128 07:10:51.292182 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md6k7\" (UniqueName: \"kubernetes.io/projected/50b43344-5fec-4bba-b8a0-298c20b951c9-kube-api-access-md6k7\") pod \"cert-manager-86cb77c54b-c25j6\" (UID: \"50b43344-5fec-4bba-b8a0-298c20b951c9\") " pod="cert-manager/cert-manager-86cb77c54b-c25j6" Nov 28 07:10:51 crc kubenswrapper[4946]: I1128 07:10:51.393617 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-md6k7\" (UniqueName: \"kubernetes.io/projected/50b43344-5fec-4bba-b8a0-298c20b951c9-kube-api-access-md6k7\") pod \"cert-manager-86cb77c54b-c25j6\" (UID: \"50b43344-5fec-4bba-b8a0-298c20b951c9\") " pod="cert-manager/cert-manager-86cb77c54b-c25j6" Nov 28 07:10:51 crc kubenswrapper[4946]: I1128 07:10:51.393794 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/50b43344-5fec-4bba-b8a0-298c20b951c9-bound-sa-token\") pod \"cert-manager-86cb77c54b-c25j6\" (UID: \"50b43344-5fec-4bba-b8a0-298c20b951c9\") " pod="cert-manager/cert-manager-86cb77c54b-c25j6" Nov 28 07:10:51 crc kubenswrapper[4946]: I1128 07:10:51.414347 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/50b43344-5fec-4bba-b8a0-298c20b951c9-bound-sa-token\") pod \"cert-manager-86cb77c54b-c25j6\" (UID: \"50b43344-5fec-4bba-b8a0-298c20b951c9\") " pod="cert-manager/cert-manager-86cb77c54b-c25j6" Nov 28 07:10:51 crc kubenswrapper[4946]: I1128 07:10:51.416809 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-md6k7\" (UniqueName: \"kubernetes.io/projected/50b43344-5fec-4bba-b8a0-298c20b951c9-kube-api-access-md6k7\") pod \"cert-manager-86cb77c54b-c25j6\" (UID: \"50b43344-5fec-4bba-b8a0-298c20b951c9\") " pod="cert-manager/cert-manager-86cb77c54b-c25j6" Nov 28 07:10:51 crc kubenswrapper[4946]: I1128 07:10:51.521826 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-c25j6" Nov 28 07:10:51 crc kubenswrapper[4946]: I1128 07:10:51.857834 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-c25j6"] Nov 28 07:10:51 crc kubenswrapper[4946]: W1128 07:10:51.864779 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50b43344_5fec_4bba_b8a0_298c20b951c9.slice/crio-3f16b470c4c3c5adc4fefe1c977fdf934bffc28ed2e7edece35f830b2b7fa240 WatchSource:0}: Error finding container 3f16b470c4c3c5adc4fefe1c977fdf934bffc28ed2e7edece35f830b2b7fa240: Status 404 returned error can't find the container with id 3f16b470c4c3c5adc4fefe1c977fdf934bffc28ed2e7edece35f830b2b7fa240 Nov 28 07:10:52 crc kubenswrapper[4946]: I1128 07:10:52.115372 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-c25j6" event={"ID":"50b43344-5fec-4bba-b8a0-298c20b951c9","Type":"ContainerStarted","Data":"3f16b470c4c3c5adc4fefe1c977fdf934bffc28ed2e7edece35f830b2b7fa240"} Nov 28 07:10:53 crc kubenswrapper[4946]: I1128 07:10:53.125867 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-c25j6" event={"ID":"50b43344-5fec-4bba-b8a0-298c20b951c9","Type":"ContainerStarted","Data":"98988c1a1aadb9d8a32856da5fb1245e5b83b1a2bcc07a9359929cec723bf04b"} Nov 28 07:10:53 crc kubenswrapper[4946]: I1128 07:10:53.145647 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-c25j6" podStartSLOduration=2.145623799 podStartE2EDuration="2.145623799s" podCreationTimestamp="2025-11-28 07:10:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:10:53.144884511 +0000 UTC m=+1107.522949632" watchObservedRunningTime="2025-11-28 07:10:53.145623799 +0000 UTC m=+1107.523688910" Nov 28 07:11:02 crc kubenswrapper[4946]: I1128 07:11:02.314683 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-bclht"] Nov 28 07:11:02 crc kubenswrapper[4946]: I1128 07:11:02.316597 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-bclht" Nov 28 07:11:02 crc kubenswrapper[4946]: I1128 07:11:02.325722 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-czr8h" Nov 28 07:11:02 crc kubenswrapper[4946]: I1128 07:11:02.325752 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 28 07:11:02 crc kubenswrapper[4946]: I1128 07:11:02.325824 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 28 07:11:02 crc kubenswrapper[4946]: I1128 07:11:02.348999 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-bclht"] Nov 28 07:11:02 crc kubenswrapper[4946]: I1128 07:11:02.365183 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm7gw\" (UniqueName: \"kubernetes.io/projected/5ad53892-5472-484e-83bf-91ade2e9cd93-kube-api-access-gm7gw\") pod \"openstack-operator-index-bclht\" (UID: \"5ad53892-5472-484e-83bf-91ade2e9cd93\") " pod="openstack-operators/openstack-operator-index-bclht" Nov 28 07:11:02 crc kubenswrapper[4946]: I1128 07:11:02.466882 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gm7gw\" (UniqueName: \"kubernetes.io/projected/5ad53892-5472-484e-83bf-91ade2e9cd93-kube-api-access-gm7gw\") pod \"openstack-operator-index-bclht\" (UID: \"5ad53892-5472-484e-83bf-91ade2e9cd93\") " pod="openstack-operators/openstack-operator-index-bclht" Nov 28 07:11:02 crc kubenswrapper[4946]: I1128 07:11:02.487281 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gm7gw\" (UniqueName: \"kubernetes.io/projected/5ad53892-5472-484e-83bf-91ade2e9cd93-kube-api-access-gm7gw\") pod \"openstack-operator-index-bclht\" (UID: \"5ad53892-5472-484e-83bf-91ade2e9cd93\") " pod="openstack-operators/openstack-operator-index-bclht" Nov 28 07:11:02 crc kubenswrapper[4946]: I1128 07:11:02.640291 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-bclht" Nov 28 07:11:02 crc kubenswrapper[4946]: I1128 07:11:02.837282 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-bclht"] Nov 28 07:11:03 crc kubenswrapper[4946]: I1128 07:11:03.202667 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bclht" event={"ID":"5ad53892-5472-484e-83bf-91ade2e9cd93","Type":"ContainerStarted","Data":"7cecf9595ccc65e4bf774d104607e4c0969f22d641bd60b77ba87dde37cf003b"} Nov 28 07:11:06 crc kubenswrapper[4946]: I1128 07:11:06.889308 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-bclht"] Nov 28 07:11:07 crc kubenswrapper[4946]: I1128 07:11:07.497896 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-8shgs"] Nov 28 07:11:07 crc kubenswrapper[4946]: I1128 07:11:07.501401 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-8shgs" Nov 28 07:11:07 crc kubenswrapper[4946]: I1128 07:11:07.506603 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-8shgs"] Nov 28 07:11:07 crc kubenswrapper[4946]: I1128 07:11:07.572778 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2dtl\" (UniqueName: \"kubernetes.io/projected/bb89732c-7c39-48b2-896b-bb18dc44839c-kube-api-access-m2dtl\") pod \"openstack-operator-index-8shgs\" (UID: \"bb89732c-7c39-48b2-896b-bb18dc44839c\") " pod="openstack-operators/openstack-operator-index-8shgs" Nov 28 07:11:07 crc kubenswrapper[4946]: I1128 07:11:07.675083 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2dtl\" (UniqueName: \"kubernetes.io/projected/bb89732c-7c39-48b2-896b-bb18dc44839c-kube-api-access-m2dtl\") pod \"openstack-operator-index-8shgs\" (UID: \"bb89732c-7c39-48b2-896b-bb18dc44839c\") " pod="openstack-operators/openstack-operator-index-8shgs" Nov 28 07:11:07 crc kubenswrapper[4946]: I1128 07:11:07.699174 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2dtl\" (UniqueName: \"kubernetes.io/projected/bb89732c-7c39-48b2-896b-bb18dc44839c-kube-api-access-m2dtl\") pod \"openstack-operator-index-8shgs\" (UID: \"bb89732c-7c39-48b2-896b-bb18dc44839c\") " pod="openstack-operators/openstack-operator-index-8shgs" Nov 28 07:11:07 crc kubenswrapper[4946]: I1128 07:11:07.825581 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-8shgs" Nov 28 07:11:08 crc kubenswrapper[4946]: I1128 07:11:08.221503 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-8shgs"] Nov 28 07:11:10 crc kubenswrapper[4946]: W1128 07:11:10.726895 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb89732c_7c39_48b2_896b_bb18dc44839c.slice/crio-1feb20620e684d00a29b95b17b6c661c0d832ae7a0560b8d88ef19fb6676dffe WatchSource:0}: Error finding container 1feb20620e684d00a29b95b17b6c661c0d832ae7a0560b8d88ef19fb6676dffe: Status 404 returned error can't find the container with id 1feb20620e684d00a29b95b17b6c661c0d832ae7a0560b8d88ef19fb6676dffe Nov 28 07:11:11 crc kubenswrapper[4946]: I1128 07:11:11.260052 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-8shgs" event={"ID":"bb89732c-7c39-48b2-896b-bb18dc44839c","Type":"ContainerStarted","Data":"1feb20620e684d00a29b95b17b6c661c0d832ae7a0560b8d88ef19fb6676dffe"} Nov 28 07:11:11 crc kubenswrapper[4946]: I1128 07:11:11.261901 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bclht" event={"ID":"5ad53892-5472-484e-83bf-91ade2e9cd93","Type":"ContainerStarted","Data":"972d871a80263e71b01d2d461e6ba3dc493057dd8ba59b0bb1b289095ba33d85"} Nov 28 07:11:11 crc kubenswrapper[4946]: I1128 07:11:11.262081 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-bclht" podUID="5ad53892-5472-484e-83bf-91ade2e9cd93" containerName="registry-server" containerID="cri-o://972d871a80263e71b01d2d461e6ba3dc493057dd8ba59b0bb1b289095ba33d85" gracePeriod=2 Nov 28 07:11:11 crc kubenswrapper[4946]: I1128 07:11:11.287425 4946 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-bclht" podStartSLOduration=1.267529607 podStartE2EDuration="9.287402483s" podCreationTimestamp="2025-11-28 07:11:02 +0000 UTC" firstStartedPulling="2025-11-28 07:11:02.848662454 +0000 UTC m=+1117.226727565" lastFinishedPulling="2025-11-28 07:11:10.86853534 +0000 UTC m=+1125.246600441" observedRunningTime="2025-11-28 07:11:11.28206563 +0000 UTC m=+1125.660130741" watchObservedRunningTime="2025-11-28 07:11:11.287402483 +0000 UTC m=+1125.665467594" Nov 28 07:11:11 crc kubenswrapper[4946]: I1128 07:11:11.650373 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-bclht" Nov 28 07:11:11 crc kubenswrapper[4946]: I1128 07:11:11.756597 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gm7gw\" (UniqueName: \"kubernetes.io/projected/5ad53892-5472-484e-83bf-91ade2e9cd93-kube-api-access-gm7gw\") pod \"5ad53892-5472-484e-83bf-91ade2e9cd93\" (UID: \"5ad53892-5472-484e-83bf-91ade2e9cd93\") " Nov 28 07:11:11 crc kubenswrapper[4946]: I1128 07:11:11.763148 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ad53892-5472-484e-83bf-91ade2e9cd93-kube-api-access-gm7gw" (OuterVolumeSpecName: "kube-api-access-gm7gw") pod "5ad53892-5472-484e-83bf-91ade2e9cd93" (UID: "5ad53892-5472-484e-83bf-91ade2e9cd93"). InnerVolumeSpecName "kube-api-access-gm7gw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:11:11 crc kubenswrapper[4946]: I1128 07:11:11.859019 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gm7gw\" (UniqueName: \"kubernetes.io/projected/5ad53892-5472-484e-83bf-91ade2e9cd93-kube-api-access-gm7gw\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:12 crc kubenswrapper[4946]: I1128 07:11:12.272739 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-8shgs" event={"ID":"bb89732c-7c39-48b2-896b-bb18dc44839c","Type":"ContainerStarted","Data":"5477205601d2324a0dc5065e5ab65c8328c65f97159d6bfaa9477bab206cc7e8"} Nov 28 07:11:12 crc kubenswrapper[4946]: I1128 07:11:12.275457 4946 generic.go:334] "Generic (PLEG): container finished" podID="5ad53892-5472-484e-83bf-91ade2e9cd93" containerID="972d871a80263e71b01d2d461e6ba3dc493057dd8ba59b0bb1b289095ba33d85" exitCode=0 Nov 28 07:11:12 crc kubenswrapper[4946]: I1128 07:11:12.275550 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bclht" event={"ID":"5ad53892-5472-484e-83bf-91ade2e9cd93","Type":"ContainerDied","Data":"972d871a80263e71b01d2d461e6ba3dc493057dd8ba59b0bb1b289095ba33d85"} Nov 28 07:11:12 crc kubenswrapper[4946]: I1128 07:11:12.275587 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bclht" event={"ID":"5ad53892-5472-484e-83bf-91ade2e9cd93","Type":"ContainerDied","Data":"7cecf9595ccc65e4bf774d104607e4c0969f22d641bd60b77ba87dde37cf003b"} Nov 28 07:11:12 crc kubenswrapper[4946]: I1128 07:11:12.275614 4946 scope.go:117] "RemoveContainer" containerID="972d871a80263e71b01d2d461e6ba3dc493057dd8ba59b0bb1b289095ba33d85" Nov 28 07:11:12 crc kubenswrapper[4946]: I1128 07:11:12.275773 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-bclht" Nov 28 07:11:12 crc kubenswrapper[4946]: I1128 07:11:12.297270 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-8shgs" podStartSLOduration=4.842291614 podStartE2EDuration="5.297241681s" podCreationTimestamp="2025-11-28 07:11:07 +0000 UTC" firstStartedPulling="2025-11-28 07:11:10.731842484 +0000 UTC m=+1125.109907595" lastFinishedPulling="2025-11-28 07:11:11.186792541 +0000 UTC m=+1125.564857662" observedRunningTime="2025-11-28 07:11:12.29276949 +0000 UTC m=+1126.670834621" watchObservedRunningTime="2025-11-28 07:11:12.297241681 +0000 UTC m=+1126.675306792" Nov 28 07:11:12 crc kubenswrapper[4946]: I1128 07:11:12.318181 4946 scope.go:117] "RemoveContainer" containerID="972d871a80263e71b01d2d461e6ba3dc493057dd8ba59b0bb1b289095ba33d85" Nov 28 07:11:12 crc kubenswrapper[4946]: E1128 07:11:12.319753 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"972d871a80263e71b01d2d461e6ba3dc493057dd8ba59b0bb1b289095ba33d85\": container with ID starting with 972d871a80263e71b01d2d461e6ba3dc493057dd8ba59b0bb1b289095ba33d85 not found: ID does not exist" containerID="972d871a80263e71b01d2d461e6ba3dc493057dd8ba59b0bb1b289095ba33d85" Nov 28 07:11:12 crc kubenswrapper[4946]: I1128 07:11:12.319885 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"972d871a80263e71b01d2d461e6ba3dc493057dd8ba59b0bb1b289095ba33d85"} err="failed to get container status \"972d871a80263e71b01d2d461e6ba3dc493057dd8ba59b0bb1b289095ba33d85\": rpc error: code = NotFound desc = could not find container \"972d871a80263e71b01d2d461e6ba3dc493057dd8ba59b0bb1b289095ba33d85\": container with ID starting with 972d871a80263e71b01d2d461e6ba3dc493057dd8ba59b0bb1b289095ba33d85 not found: ID does not exist" Nov 28 07:11:12 crc kubenswrapper[4946]: I1128 07:11:12.325430 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-bclht"] Nov 28 07:11:12 crc kubenswrapper[4946]: I1128 07:11:12.331365 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-bclht"] Nov 28 07:11:13 crc kubenswrapper[4946]: I1128 07:11:13.998290 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ad53892-5472-484e-83bf-91ade2e9cd93" path="/var/lib/kubelet/pods/5ad53892-5472-484e-83bf-91ade2e9cd93/volumes" Nov 28 07:11:17 crc kubenswrapper[4946]: I1128 07:11:17.826787 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-8shgs" Nov 28 07:11:17 crc kubenswrapper[4946]: I1128 07:11:17.827262 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-8shgs" Nov 28 07:11:17 crc kubenswrapper[4946]: I1128 07:11:17.857608 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-8shgs" Nov 28 07:11:18 crc kubenswrapper[4946]: I1128 07:11:18.368965 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-8shgs" Nov 28 07:11:25 crc kubenswrapper[4946]: I1128 07:11:25.250591 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbeptt2h"] Nov 28 07:11:25 
crc kubenswrapper[4946]: E1128 07:11:25.251765 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ad53892-5472-484e-83bf-91ade2e9cd93" containerName="registry-server" Nov 28 07:11:25 crc kubenswrapper[4946]: I1128 07:11:25.251791 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ad53892-5472-484e-83bf-91ade2e9cd93" containerName="registry-server" Nov 28 07:11:25 crc kubenswrapper[4946]: I1128 07:11:25.252015 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ad53892-5472-484e-83bf-91ade2e9cd93" containerName="registry-server" Nov 28 07:11:25 crc kubenswrapper[4946]: I1128 07:11:25.253563 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbeptt2h" Nov 28 07:11:25 crc kubenswrapper[4946]: I1128 07:11:25.256528 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-5fd96" Nov 28 07:11:25 crc kubenswrapper[4946]: I1128 07:11:25.259980 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbeptt2h"] Nov 28 07:11:25 crc kubenswrapper[4946]: I1128 07:11:25.367371 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4zn5\" (UniqueName: \"kubernetes.io/projected/dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8-kube-api-access-q4zn5\") pod \"b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbeptt2h\" (UID: \"dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8\") " pod="openstack-operators/b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbeptt2h" Nov 28 07:11:25 crc kubenswrapper[4946]: I1128 07:11:25.367493 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8-bundle\") pod \"b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbeptt2h\" (UID: \"dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8\") " pod="openstack-operators/b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbeptt2h" Nov 28 07:11:25 crc kubenswrapper[4946]: I1128 07:11:25.367527 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8-util\") pod \"b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbeptt2h\" (UID: \"dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8\") " pod="openstack-operators/b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbeptt2h" Nov 28 07:11:25 crc kubenswrapper[4946]: I1128 07:11:25.468670 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4zn5\" (UniqueName: \"kubernetes.io/projected/dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8-kube-api-access-q4zn5\") pod \"b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbeptt2h\" (UID: \"dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8\") " pod="openstack-operators/b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbeptt2h" Nov 28 07:11:25 crc kubenswrapper[4946]: I1128 07:11:25.468835 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8-bundle\") pod \"b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbeptt2h\" (UID: \"dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8\") " 
pod="openstack-operators/b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbeptt2h" Nov 28 07:11:25 crc kubenswrapper[4946]: I1128 07:11:25.468882 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8-util\") pod \"b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbeptt2h\" (UID: \"dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8\") " pod="openstack-operators/b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbeptt2h" Nov 28 07:11:25 crc kubenswrapper[4946]: I1128 07:11:25.469964 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8-util\") pod \"b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbeptt2h\" (UID: \"dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8\") " pod="openstack-operators/b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbeptt2h" Nov 28 07:11:25 crc kubenswrapper[4946]: I1128 07:11:25.470019 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8-bundle\") pod \"b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbeptt2h\" (UID: \"dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8\") " pod="openstack-operators/b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbeptt2h" Nov 28 07:11:25 crc kubenswrapper[4946]: I1128 07:11:25.497124 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4zn5\" (UniqueName: \"kubernetes.io/projected/dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8-kube-api-access-q4zn5\") pod \"b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbeptt2h\" (UID: \"dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8\") " pod="openstack-operators/b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbeptt2h" Nov 28 07:11:25 crc kubenswrapper[4946]: I1128 07:11:25.577351 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbeptt2h" Nov 28 07:11:25 crc kubenswrapper[4946]: I1128 07:11:25.842165 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbeptt2h"] Nov 28 07:11:26 crc kubenswrapper[4946]: I1128 07:11:26.395535 4946 generic.go:334] "Generic (PLEG): container finished" podID="dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8" containerID="8d0ef9ecc4a376ca97f4ed1c8dd9470eaff50166eefd3db24d04071146bd2d32" exitCode=0 Nov 28 07:11:26 crc kubenswrapper[4946]: I1128 07:11:26.395603 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbeptt2h" event={"ID":"dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8","Type":"ContainerDied","Data":"8d0ef9ecc4a376ca97f4ed1c8dd9470eaff50166eefd3db24d04071146bd2d32"} Nov 28 07:11:26 crc kubenswrapper[4946]: I1128 07:11:26.397777 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbeptt2h" event={"ID":"dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8","Type":"ContainerStarted","Data":"c0055d30ca29706570b18097768fced743360858cee1512ae03ae12a7ffc5676"} Nov 28 07:11:28 crc kubenswrapper[4946]: I1128 07:11:28.417102 4946 generic.go:334] "Generic (PLEG): container finished" podID="dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8" containerID="d4a905dfb657dafa6a5f42021f4822109af48bb1573b8daf3a5792838272316c" exitCode=0 Nov 28 07:11:28 crc kubenswrapper[4946]: I1128 07:11:28.417196 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbeptt2h" event={"ID":"dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8","Type":"ContainerDied","Data":"d4a905dfb657dafa6a5f42021f4822109af48bb1573b8daf3a5792838272316c"} Nov 28 07:11:29 crc kubenswrapper[4946]: I1128 07:11:29.430344 4946 generic.go:334] "Generic (PLEG): container finished" podID="dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8" containerID="3a14256843a4d0caf13cadbdf85dbdad34092bd0b114e60f96b1c8e6a15e552b" exitCode=0 Nov 28 07:11:29 crc kubenswrapper[4946]: I1128 07:11:29.430414 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbeptt2h" event={"ID":"dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8","Type":"ContainerDied","Data":"3a14256843a4d0caf13cadbdf85dbdad34092bd0b114e60f96b1c8e6a15e552b"} Nov 28 07:11:30 crc kubenswrapper[4946]: I1128 07:11:30.719943 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbeptt2h" Nov 28 07:11:30 crc kubenswrapper[4946]: I1128 07:11:30.867092 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4zn5\" (UniqueName: \"kubernetes.io/projected/dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8-kube-api-access-q4zn5\") pod \"dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8\" (UID: \"dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8\") " Nov 28 07:11:30 crc kubenswrapper[4946]: I1128 07:11:30.867293 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8-bundle\") pod \"dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8\" (UID: \"dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8\") " Nov 28 07:11:30 crc kubenswrapper[4946]: I1128 07:11:30.867330 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8-util\") pod \"dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8\" (UID: \"dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8\") " Nov 28 07:11:30 crc kubenswrapper[4946]: I1128 07:11:30.868355 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8-bundle" (OuterVolumeSpecName: "bundle") pod "dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8" (UID: "dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:11:30 crc kubenswrapper[4946]: I1128 07:11:30.875394 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8-kube-api-access-q4zn5" (OuterVolumeSpecName: "kube-api-access-q4zn5") pod "dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8" (UID: "dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8"). InnerVolumeSpecName "kube-api-access-q4zn5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:11:30 crc kubenswrapper[4946]: I1128 07:11:30.894441 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8-util" (OuterVolumeSpecName: "util") pod "dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8" (UID: "dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:11:30 crc kubenswrapper[4946]: I1128 07:11:30.969396 4946 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:30 crc kubenswrapper[4946]: I1128 07:11:30.969448 4946 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8-util\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:30 crc kubenswrapper[4946]: I1128 07:11:30.969489 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4zn5\" (UniqueName: \"kubernetes.io/projected/dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8-kube-api-access-q4zn5\") on node \"crc\" DevicePath \"\"" Nov 28 07:11:31 crc kubenswrapper[4946]: I1128 07:11:31.462402 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbeptt2h" event={"ID":"dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8","Type":"ContainerDied","Data":"c0055d30ca29706570b18097768fced743360858cee1512ae03ae12a7ffc5676"} Nov 28 07:11:31 crc kubenswrapper[4946]: I1128 07:11:31.462501 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0055d30ca29706570b18097768fced743360858cee1512ae03ae12a7ffc5676" Nov 28 07:11:31 crc kubenswrapper[4946]: I1128 07:11:31.462547 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbeptt2h" Nov 28 07:11:37 crc kubenswrapper[4946]: I1128 07:11:37.699751 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-67d8f6cc56-mmr5q"] Nov 28 07:11:37 crc kubenswrapper[4946]: E1128 07:11:37.700849 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8" containerName="util" Nov 28 07:11:37 crc kubenswrapper[4946]: I1128 07:11:37.700868 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8" containerName="util" Nov 28 07:11:37 crc kubenswrapper[4946]: E1128 07:11:37.700877 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8" containerName="extract" Nov 28 07:11:37 crc kubenswrapper[4946]: I1128 07:11:37.700884 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8" containerName="extract" Nov 28 07:11:37 crc kubenswrapper[4946]: E1128 07:11:37.700895 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8" containerName="pull" Nov 28 07:11:37 crc kubenswrapper[4946]: I1128 07:11:37.700902 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8" containerName="pull" Nov 28 07:11:37 crc kubenswrapper[4946]: I1128 07:11:37.701038 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8" containerName="extract" Nov 28 07:11:37 crc kubenswrapper[4946]: I1128 07:11:37.701588 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-67d8f6cc56-mmr5q" Nov 28 07:11:37 crc kubenswrapper[4946]: I1128 07:11:37.706592 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-98nb9" Nov 28 07:11:37 crc kubenswrapper[4946]: I1128 07:11:37.742553 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-67d8f6cc56-mmr5q"] Nov 28 07:11:37 crc kubenswrapper[4946]: I1128 07:11:37.878156 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxdbj\" (UniqueName: \"kubernetes.io/projected/a7ce130a-7c66-4b3c-9ea1-aca15bb025c0-kube-api-access-kxdbj\") pod \"openstack-operator-controller-operator-67d8f6cc56-mmr5q\" (UID: \"a7ce130a-7c66-4b3c-9ea1-aca15bb025c0\") " pod="openstack-operators/openstack-operator-controller-operator-67d8f6cc56-mmr5q" Nov 28 07:11:37 crc kubenswrapper[4946]: I1128 07:11:37.980051 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxdbj\" (UniqueName: \"kubernetes.io/projected/a7ce130a-7c66-4b3c-9ea1-aca15bb025c0-kube-api-access-kxdbj\") pod \"openstack-operator-controller-operator-67d8f6cc56-mmr5q\" (UID: \"a7ce130a-7c66-4b3c-9ea1-aca15bb025c0\") " pod="openstack-operators/openstack-operator-controller-operator-67d8f6cc56-mmr5q" Nov 28 07:11:38 crc kubenswrapper[4946]: I1128 07:11:38.012936 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxdbj\" (UniqueName: \"kubernetes.io/projected/a7ce130a-7c66-4b3c-9ea1-aca15bb025c0-kube-api-access-kxdbj\") pod \"openstack-operator-controller-operator-67d8f6cc56-mmr5q\" (UID: \"a7ce130a-7c66-4b3c-9ea1-aca15bb025c0\") " pod="openstack-operators/openstack-operator-controller-operator-67d8f6cc56-mmr5q" Nov 28 07:11:38 crc kubenswrapper[4946]: I1128 07:11:38.024834 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-67d8f6cc56-mmr5q" Nov 28 07:11:38 crc kubenswrapper[4946]: W1128 07:11:38.328878 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7ce130a_7c66_4b3c_9ea1_aca15bb025c0.slice/crio-412a1538ead2411b4f54d389c3f9bbd0eca09bfc0c723f5dcd07bf2470e81b3b WatchSource:0}: Error finding container 412a1538ead2411b4f54d389c3f9bbd0eca09bfc0c723f5dcd07bf2470e81b3b: Status 404 returned error can't find the container with id 412a1538ead2411b4f54d389c3f9bbd0eca09bfc0c723f5dcd07bf2470e81b3b Nov 28 07:11:38 crc kubenswrapper[4946]: I1128 07:11:38.347201 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-67d8f6cc56-mmr5q"] Nov 28 07:11:38 crc kubenswrapper[4946]: I1128 07:11:38.524854 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-67d8f6cc56-mmr5q" event={"ID":"a7ce130a-7c66-4b3c-9ea1-aca15bb025c0","Type":"ContainerStarted","Data":"412a1538ead2411b4f54d389c3f9bbd0eca09bfc0c723f5dcd07bf2470e81b3b"} Nov 28 07:11:46 crc kubenswrapper[4946]: I1128 07:11:46.585671 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-67d8f6cc56-mmr5q" event={"ID":"a7ce130a-7c66-4b3c-9ea1-aca15bb025c0","Type":"ContainerStarted","Data":"e9d249b31f548cef754c78011de398ed248e89ba56099be674757e305c331b4b"} Nov 28 07:11:46 crc kubenswrapper[4946]: I1128 07:11:46.586360 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-67d8f6cc56-mmr5q" Nov 28 07:11:46 crc kubenswrapper[4946]: I1128 07:11:46.621017 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-67d8f6cc56-mmr5q" podStartSLOduration=4.884010057 podStartE2EDuration="9.620994265s" podCreationTimestamp="2025-11-28 07:11:37 +0000 UTC" firstStartedPulling="2025-11-28 07:11:38.336042575 +0000 UTC m=+1152.714107686" lastFinishedPulling="2025-11-28 07:11:43.073026783 +0000 UTC m=+1157.451091894" observedRunningTime="2025-11-28 07:11:46.618948065 +0000 UTC m=+1160.997013196" watchObservedRunningTime="2025-11-28 07:11:46.620994265 +0000 UTC m=+1160.999059386" Nov 28 07:11:54 crc kubenswrapper[4946]: I1128 07:11:54.731621 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 07:11:54 crc kubenswrapper[4946]: I1128 07:11:54.732800 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 07:11:58 crc kubenswrapper[4946]: I1128 07:11:58.029924 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-67d8f6cc56-mmr5q" Nov 28 07:12:23 crc kubenswrapper[4946]: I1128 07:12:23.806760 4946 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/barbican-operator-controller-manager-7b64f4fb85-2l7gl"] Nov 28 07:12:23 crc kubenswrapper[4946]: I1128 07:12:23.808692 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-2l7gl" Nov 28 07:12:23 crc kubenswrapper[4946]: I1128 07:12:23.818331 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6b7f75547b-g6gbz"] Nov 28 07:12:23 crc kubenswrapper[4946]: I1128 07:12:23.819438 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-g6gbz" Nov 28 07:12:23 crc kubenswrapper[4946]: I1128 07:12:23.820941 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-bz78v" Nov 28 07:12:23 crc kubenswrapper[4946]: I1128 07:12:23.823857 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-cxn42" Nov 28 07:12:23 crc kubenswrapper[4946]: I1128 07:12:23.826645 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-955677c94-hk9d2"] Nov 28 07:12:23 crc kubenswrapper[4946]: I1128 07:12:23.827806 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-955677c94-hk9d2" Nov 28 07:12:23 crc kubenswrapper[4946]: I1128 07:12:23.829412 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-8nzkl" Nov 28 07:12:23 crc kubenswrapper[4946]: I1128 07:12:23.845832 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6b7f75547b-g6gbz"] Nov 28 07:12:23 crc kubenswrapper[4946]: I1128 07:12:23.868301 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-955677c94-hk9d2"] Nov 28 07:12:23 crc kubenswrapper[4946]: I1128 07:12:23.895273 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-589cbd6b5b-7p86n"] Nov 28 07:12:23 crc kubenswrapper[4946]: I1128 07:12:23.896758 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-7p86n" Nov 28 07:12:23 crc kubenswrapper[4946]: I1128 07:12:23.903335 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b64f4fb85-2l7gl"] Nov 28 07:12:23 crc kubenswrapper[4946]: I1128 07:12:23.904904 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-lm7b9" Nov 28 07:12:23 crc kubenswrapper[4946]: I1128 07:12:23.911309 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-589cbd6b5b-7p86n"] Nov 28 07:12:23 crc kubenswrapper[4946]: I1128 07:12:23.916437 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5b77f656f-zwd8k"] Nov 28 07:12:23 crc kubenswrapper[4946]: I1128 07:12:23.917941 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-zwd8k" Nov 28 07:12:23 crc kubenswrapper[4946]: I1128 07:12:23.921440 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-fs9kz" Nov 28 07:12:23 crc kubenswrapper[4946]: I1128 07:12:23.934362 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5b77f656f-zwd8k"] Nov 28 07:12:23 crc kubenswrapper[4946]: I1128 07:12:23.960245 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5d494799bf-5hbz6"] Nov 28 07:12:23 crc kubenswrapper[4946]: I1128 07:12:23.961582 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-5hbz6" Nov 28 07:12:23 crc kubenswrapper[4946]: I1128 07:12:23.972625 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-t874m" Nov 28 07:12:23 crc kubenswrapper[4946]: I1128 07:12:23.976592 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-dhvm2"] Nov 28 07:12:23 crc kubenswrapper[4946]: I1128 07:12:23.977939 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-dhvm2" Nov 28 07:12:23 crc kubenswrapper[4946]: I1128 07:12:23.982570 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-8cgzm" Nov 28 07:12:23 crc kubenswrapper[4946]: I1128 07:12:23.982839 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Nov 28 07:12:23 crc kubenswrapper[4946]: I1128 07:12:23.985970 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5d494799bf-5hbz6"] Nov 28 07:12:23 crc kubenswrapper[4946]: I1128 07:12:23.989998 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-pvrsb"] Nov 28 07:12:23 crc kubenswrapper[4946]: I1128 07:12:23.995150 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-pvrsb" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.000333 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrt64\" (UniqueName: \"kubernetes.io/projected/353bb4c2-bff5-4749-a149-8a856803b84b-kube-api-access-rrt64\") pod \"barbican-operator-controller-manager-7b64f4fb85-2l7gl\" (UID: \"353bb4c2-bff5-4749-a149-8a856803b84b\") " pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-2l7gl" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.000382 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt9rx\" (UniqueName: \"kubernetes.io/projected/d4188f46-2979-4b90-bfa0-37962da6e3c7-kube-api-access-kt9rx\") pod \"cinder-operator-controller-manager-6b7f75547b-g6gbz\" (UID: \"d4188f46-2979-4b90-bfa0-37962da6e3c7\") " pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-g6gbz" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.000434 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9p2s\" (UniqueName: \"kubernetes.io/projected/dc11bd96-48e8-4613-80e6-ce3b518cea8d-kube-api-access-n9p2s\") pod \"glance-operator-controller-manager-589cbd6b5b-7p86n\" (UID: \"dc11bd96-48e8-4613-80e6-ce3b518cea8d\") " pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-7p86n" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.000498 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h5wl\" (UniqueName: \"kubernetes.io/projected/3fe2b1f1-7ef0-4cc3-9cf1-adefe5dead46-kube-api-access-9h5wl\") pod \"designate-operator-controller-manager-955677c94-hk9d2\" (UID: \"3fe2b1f1-7ef0-4cc3-9cf1-adefe5dead46\") " pod="openstack-operators/designate-operator-controller-manager-955677c94-hk9d2" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.011214 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-8tvmx" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.024697 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-dhvm2"] Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.024737 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-pvrsb"] Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.061625 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-5d499bf58b-2f2dz"] Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.062877 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-2f2dz" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.066511 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-s5bvq" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.111012 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph78w\" (UniqueName: \"kubernetes.io/projected/1b3d3306-e302-4fe9-b50c-8295275ed28c-kube-api-access-ph78w\") pod \"heat-operator-controller-manager-5b77f656f-zwd8k\" (UID: \"1b3d3306-e302-4fe9-b50c-8295275ed28c\") " pod="openstack-operators/heat-operator-controller-manager-5b77f656f-zwd8k" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.111396 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrt64\" (UniqueName: \"kubernetes.io/projected/353bb4c2-bff5-4749-a149-8a856803b84b-kube-api-access-rrt64\") pod \"barbican-operator-controller-manager-7b64f4fb85-2l7gl\" (UID: \"353bb4c2-bff5-4749-a149-8a856803b84b\") " pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-2l7gl" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.111490 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt9rx\" (UniqueName: \"kubernetes.io/projected/d4188f46-2979-4b90-bfa0-37962da6e3c7-kube-api-access-kt9rx\") pod \"cinder-operator-controller-manager-6b7f75547b-g6gbz\" (UID: \"d4188f46-2979-4b90-bfa0-37962da6e3c7\") " pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-g6gbz" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.112291 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9p2s\" (UniqueName: \"kubernetes.io/projected/dc11bd96-48e8-4613-80e6-ce3b518cea8d-kube-api-access-n9p2s\") pod \"glance-operator-controller-manager-589cbd6b5b-7p86n\" (UID: \"dc11bd96-48e8-4613-80e6-ce3b518cea8d\") " pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-7p86n" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.112371 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1c7d18a6-2067-4736-a42f-074f2672a841-cert\") pod \"infra-operator-controller-manager-57548d458d-dhvm2\" (UID: \"1c7d18a6-2067-4736-a42f-074f2672a841\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-dhvm2" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.112482 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9h5wl\" (UniqueName: \"kubernetes.io/projected/3fe2b1f1-7ef0-4cc3-9cf1-adefe5dead46-kube-api-access-9h5wl\") pod \"designate-operator-controller-manager-955677c94-hk9d2\" (UID: \"3fe2b1f1-7ef0-4cc3-9cf1-adefe5dead46\") " pod="openstack-operators/designate-operator-controller-manager-955677c94-hk9d2" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.112525 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4qr2\" (UniqueName: \"kubernetes.io/projected/1c7d18a6-2067-4736-a42f-074f2672a841-kube-api-access-t4qr2\") pod \"infra-operator-controller-manager-57548d458d-dhvm2\" (UID: \"1c7d18a6-2067-4736-a42f-074f2672a841\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-dhvm2" Nov 28 07:12:24 
crc kubenswrapper[4946]: I1128 07:12:24.112576 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngjrv\" (UniqueName: \"kubernetes.io/projected/f1ac7d28-f59d-44e5-aa3e-c6da338fca84-kube-api-access-ngjrv\") pod \"horizon-operator-controller-manager-5d494799bf-5hbz6\" (UID: \"f1ac7d28-f59d-44e5-aa3e-c6da338fca84\") " pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-5hbz6" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.112732 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rlpv\" (UniqueName: \"kubernetes.io/projected/e3d1cda6-c2ac-40fc-8f40-2e4b89dbca81-kube-api-access-2rlpv\") pod \"ironic-operator-controller-manager-67cb4dc6d4-pvrsb\" (UID: \"e3d1cda6-c2ac-40fc-8f40-2e4b89dbca81\") " pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-pvrsb" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.149428 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7b4567c7cf-sqh4v"] Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.157448 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9p2s\" (UniqueName: \"kubernetes.io/projected/dc11bd96-48e8-4613-80e6-ce3b518cea8d-kube-api-access-n9p2s\") pod \"glance-operator-controller-manager-589cbd6b5b-7p86n\" (UID: \"dc11bd96-48e8-4613-80e6-ce3b518cea8d\") " pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-7p86n" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.163724 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrt64\" (UniqueName: \"kubernetes.io/projected/353bb4c2-bff5-4749-a149-8a856803b84b-kube-api-access-rrt64\") pod \"barbican-operator-controller-manager-7b64f4fb85-2l7gl\" (UID: \"353bb4c2-bff5-4749-a149-8a856803b84b\") " pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-2l7gl" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.164476 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9h5wl\" (UniqueName: \"kubernetes.io/projected/3fe2b1f1-7ef0-4cc3-9cf1-adefe5dead46-kube-api-access-9h5wl\") pod \"designate-operator-controller-manager-955677c94-hk9d2\" (UID: \"3fe2b1f1-7ef0-4cc3-9cf1-adefe5dead46\") " pod="openstack-operators/designate-operator-controller-manager-955677c94-hk9d2" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.179128 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt9rx\" (UniqueName: \"kubernetes.io/projected/d4188f46-2979-4b90-bfa0-37962da6e3c7-kube-api-access-kt9rx\") pod \"cinder-operator-controller-manager-6b7f75547b-g6gbz\" (UID: \"d4188f46-2979-4b90-bfa0-37962da6e3c7\") " pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-g6gbz" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.179967 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-sqh4v" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.187392 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-j4bnp" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.195444 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-9b7s9"] Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.196836 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-9b7s9" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.202935 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-s77th" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.215480 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4qr2\" (UniqueName: \"kubernetes.io/projected/1c7d18a6-2067-4736-a42f-074f2672a841-kube-api-access-t4qr2\") pod \"infra-operator-controller-manager-57548d458d-dhvm2\" (UID: \"1c7d18a6-2067-4736-a42f-074f2672a841\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-dhvm2" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.215557 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngjrv\" (UniqueName: \"kubernetes.io/projected/f1ac7d28-f59d-44e5-aa3e-c6da338fca84-kube-api-access-ngjrv\") pod \"horizon-operator-controller-manager-5d494799bf-5hbz6\" (UID: \"f1ac7d28-f59d-44e5-aa3e-c6da338fca84\") " pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-5hbz6" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.215610 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rlpv\" (UniqueName: \"kubernetes.io/projected/e3d1cda6-c2ac-40fc-8f40-2e4b89dbca81-kube-api-access-2rlpv\") pod \"ironic-operator-controller-manager-67cb4dc6d4-pvrsb\" (UID: \"e3d1cda6-c2ac-40fc-8f40-2e4b89dbca81\") " pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-pvrsb" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.215644 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ph78w\" (UniqueName: \"kubernetes.io/projected/1b3d3306-e302-4fe9-b50c-8295275ed28c-kube-api-access-ph78w\") pod \"heat-operator-controller-manager-5b77f656f-zwd8k\" (UID: \"1b3d3306-e302-4fe9-b50c-8295275ed28c\") " pod="openstack-operators/heat-operator-controller-manager-5b77f656f-zwd8k" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.215687 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9l4p\" (UniqueName: \"kubernetes.io/projected/cb30728b-6cd4-4d80-8e1a-bc979410fad6-kube-api-access-f9l4p\") pod \"manila-operator-controller-manager-5d499bf58b-2f2dz\" (UID: \"cb30728b-6cd4-4d80-8e1a-bc979410fad6\") " pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-2f2dz" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.215768 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1c7d18a6-2067-4736-a42f-074f2672a841-cert\") pod \"infra-operator-controller-manager-57548d458d-dhvm2\" (UID: 
\"1c7d18a6-2067-4736-a42f-074f2672a841\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-dhvm2" Nov 28 07:12:24 crc kubenswrapper[4946]: E1128 07:12:24.215949 4946 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 28 07:12:24 crc kubenswrapper[4946]: E1128 07:12:24.216036 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c7d18a6-2067-4736-a42f-074f2672a841-cert podName:1c7d18a6-2067-4736-a42f-074f2672a841 nodeName:}" failed. No retries permitted until 2025-11-28 07:12:24.71600486 +0000 UTC m=+1199.094069971 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1c7d18a6-2067-4736-a42f-074f2672a841-cert") pod "infra-operator-controller-manager-57548d458d-dhvm2" (UID: "1c7d18a6-2067-4736-a42f-074f2672a841") : secret "infra-operator-webhook-server-cert" not found Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.221914 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-7p86n" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.241702 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7b4567c7cf-sqh4v"] Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.246736 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngjrv\" (UniqueName: \"kubernetes.io/projected/f1ac7d28-f59d-44e5-aa3e-c6da338fca84-kube-api-access-ngjrv\") pod \"horizon-operator-controller-manager-5d494799bf-5hbz6\" (UID: \"f1ac7d28-f59d-44e5-aa3e-c6da338fca84\") " pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-5hbz6" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.247145 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rlpv\" (UniqueName: \"kubernetes.io/projected/e3d1cda6-c2ac-40fc-8f40-2e4b89dbca81-kube-api-access-2rlpv\") pod \"ironic-operator-controller-manager-67cb4dc6d4-pvrsb\" (UID: \"e3d1cda6-c2ac-40fc-8f40-2e4b89dbca81\") " pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-pvrsb" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.252668 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ph78w\" (UniqueName: \"kubernetes.io/projected/1b3d3306-e302-4fe9-b50c-8295275ed28c-kube-api-access-ph78w\") pod \"heat-operator-controller-manager-5b77f656f-zwd8k\" (UID: \"1b3d3306-e302-4fe9-b50c-8295275ed28c\") " pod="openstack-operators/heat-operator-controller-manager-5b77f656f-zwd8k" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.254925 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5d499bf58b-2f2dz"] Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.261589 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-9b7s9"] Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.265071 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4qr2\" (UniqueName: \"kubernetes.io/projected/1c7d18a6-2067-4736-a42f-074f2672a841-kube-api-access-t4qr2\") pod \"infra-operator-controller-manager-57548d458d-dhvm2\" (UID: \"1c7d18a6-2067-4736-a42f-074f2672a841\") " 
pod="openstack-operators/infra-operator-controller-manager-57548d458d-dhvm2" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.283144 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-5hbz6" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.317424 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drd5h\" (UniqueName: \"kubernetes.io/projected/8d84e62b-b1cf-4238-b78b-f47a9f2df3ef-kube-api-access-drd5h\") pod \"mariadb-operator-controller-manager-66f4dd4bc7-9b7s9\" (UID: \"8d84e62b-b1cf-4238-b78b-f47a9f2df3ef\") " pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-9b7s9" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.317499 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62vzj\" (UniqueName: \"kubernetes.io/projected/2f759a24-d58b-4aed-8e14-71dec2ff2df6-kube-api-access-62vzj\") pod \"keystone-operator-controller-manager-7b4567c7cf-sqh4v\" (UID: \"2f759a24-d58b-4aed-8e14-71dec2ff2df6\") " pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-sqh4v" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.317572 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9l4p\" (UniqueName: \"kubernetes.io/projected/cb30728b-6cd4-4d80-8e1a-bc979410fad6-kube-api-access-f9l4p\") pod \"manila-operator-controller-manager-5d499bf58b-2f2dz\" (UID: \"cb30728b-6cd4-4d80-8e1a-bc979410fad6\") " pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-2f2dz" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.331062 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-pvrsb" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.329808 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6fdcddb789-nl9r5"] Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.333211 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-nl9r5" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.336962 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-t79ns" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.342426 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9l4p\" (UniqueName: \"kubernetes.io/projected/cb30728b-6cd4-4d80-8e1a-bc979410fad6-kube-api-access-f9l4p\") pod \"manila-operator-controller-manager-5d499bf58b-2f2dz\" (UID: \"cb30728b-6cd4-4d80-8e1a-bc979410fad6\") " pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-2f2dz" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.351161 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-mwdxn"] Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.352553 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-mwdxn" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.354633 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-gqcps" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.383096 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6fdcddb789-nl9r5"] Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.394014 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-mwdxn"] Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.404730 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-2f2dz" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.412148 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-64cdc6ff96-r94kz"] Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.413668 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-r94kz" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.416595 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-64cdc6ff96-r94kz"] Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.419668 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drd5h\" (UniqueName: \"kubernetes.io/projected/8d84e62b-b1cf-4238-b78b-f47a9f2df3ef-kube-api-access-drd5h\") pod \"mariadb-operator-controller-manager-66f4dd4bc7-9b7s9\" (UID: \"8d84e62b-b1cf-4238-b78b-f47a9f2df3ef\") " pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-9b7s9" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.419722 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62vzj\" (UniqueName: \"kubernetes.io/projected/2f759a24-d58b-4aed-8e14-71dec2ff2df6-kube-api-access-62vzj\") pod \"keystone-operator-controller-manager-7b4567c7cf-sqh4v\" (UID: \"2f759a24-d58b-4aed-8e14-71dec2ff2df6\") " pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-sqh4v" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.422235 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5d9f9695dbgtt8n"] Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.423671 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5d9f9695dbgtt8n" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.432107 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-2l7gl" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.435396 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-tkcvf" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.435634 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-56897c768d-9nzjs"] Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.435718 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.436140 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-x96z6" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.437103 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-9nzjs" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.447197 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-vwgdw" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.449858 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-g6gbz" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.455189 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drd5h\" (UniqueName: \"kubernetes.io/projected/8d84e62b-b1cf-4238-b78b-f47a9f2df3ef-kube-api-access-drd5h\") pod \"mariadb-operator-controller-manager-66f4dd4bc7-9b7s9\" (UID: \"8d84e62b-b1cf-4238-b78b-f47a9f2df3ef\") " pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-9b7s9" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.459766 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62vzj\" (UniqueName: \"kubernetes.io/projected/2f759a24-d58b-4aed-8e14-71dec2ff2df6-kube-api-access-62vzj\") pod \"keystone-operator-controller-manager-7b4567c7cf-sqh4v\" (UID: \"2f759a24-d58b-4aed-8e14-71dec2ff2df6\") " pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-sqh4v" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.460636 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-955677c94-hk9d2" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.493394 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-57988cc5b5-4mqd6"] Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.495011 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-4mqd6" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.503190 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-fgdb7" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.510304 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-56897c768d-9nzjs"] Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.521952 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aef07b0c-aae8-48fd-a246-8b5669cccbce-cert\") pod \"openstack-baremetal-operator-controller-manager-5d9f9695dbgtt8n\" (UID: \"aef07b0c-aae8-48fd-a246-8b5669cccbce\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5d9f9695dbgtt8n" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.522172 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5h8g\" (UniqueName: \"kubernetes.io/projected/d37965ca-e557-483c-b195-310b690d9101-kube-api-access-w5h8g\") pod \"octavia-operator-controller-manager-64cdc6ff96-r94kz\" (UID: \"d37965ca-e557-483c-b195-310b690d9101\") " pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-r94kz" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.522277 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvqld\" (UniqueName: \"kubernetes.io/projected/9a41bf27-5186-4bfe-b722-87a604d851c3-kube-api-access-wvqld\") pod \"nova-operator-controller-manager-79556f57fc-mwdxn\" (UID: \"9a41bf27-5186-4bfe-b722-87a604d851c3\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-mwdxn" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.522362 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgkxv\" (UniqueName: \"kubernetes.io/projected/7aeae67c-c1be-4d23-bb62-c8798d9fe052-kube-api-access-dgkxv\") pod \"neutron-operator-controller-manager-6fdcddb789-nl9r5\" (UID: \"7aeae67c-c1be-4d23-bb62-c8798d9fe052\") " pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-nl9r5" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.522400 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4dbm\" (UniqueName: \"kubernetes.io/projected/aef07b0c-aae8-48fd-a246-8b5669cccbce-kube-api-access-g4dbm\") pod \"openstack-baremetal-operator-controller-manager-5d9f9695dbgtt8n\" (UID: \"aef07b0c-aae8-48fd-a246-8b5669cccbce\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5d9f9695dbgtt8n" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.525039 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-d77b94747-2nfmg"] Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.526336 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-d77b94747-2nfmg" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.535131 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-sqh4v" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.535680 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5d9f9695dbgtt8n"] Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.537637 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-zwd8k" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.537937 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-d77b94747-2nfmg"] Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.542027 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-9b7s9" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.551818 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-zz6xk" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.557149 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-qfvfk"] Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.558592 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-qfvfk" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.564910 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-57988cc5b5-4mqd6"] Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.578054 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-5nsv9" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.578988 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5cd6c7f4c8-nqqq9"] Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.583211 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-nqqq9" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.585233 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-b28l5" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.596963 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-qfvfk"] Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.604156 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cd6c7f4c8-nqqq9"] Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.624944 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgkxv\" (UniqueName: \"kubernetes.io/projected/7aeae67c-c1be-4d23-bb62-c8798d9fe052-kube-api-access-dgkxv\") pod \"neutron-operator-controller-manager-6fdcddb789-nl9r5\" (UID: \"7aeae67c-c1be-4d23-bb62-c8798d9fe052\") " pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-nl9r5" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.624982 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4dbm\" (UniqueName: \"kubernetes.io/projected/aef07b0c-aae8-48fd-a246-8b5669cccbce-kube-api-access-g4dbm\") pod \"openstack-baremetal-operator-controller-manager-5d9f9695dbgtt8n\" (UID: \"aef07b0c-aae8-48fd-a246-8b5669cccbce\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5d9f9695dbgtt8n" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.625035 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aef07b0c-aae8-48fd-a246-8b5669cccbce-cert\") pod \"openstack-baremetal-operator-controller-manager-5d9f9695dbgtt8n\" (UID: \"aef07b0c-aae8-48fd-a246-8b5669cccbce\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5d9f9695dbgtt8n" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.625062 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsjnx\" (UniqueName: \"kubernetes.io/projected/3f402388-33d3-4c7b-a1b7-26241b6de58c-kube-api-access-zsjnx\") pod \"swift-operator-controller-manager-d77b94747-2nfmg\" (UID: \"3f402388-33d3-4c7b-a1b7-26241b6de58c\") " pod="openstack-operators/swift-operator-controller-manager-d77b94747-2nfmg" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.625107 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5h8g\" (UniqueName: \"kubernetes.io/projected/d37965ca-e557-483c-b195-310b690d9101-kube-api-access-w5h8g\") pod \"octavia-operator-controller-manager-64cdc6ff96-r94kz\" (UID: \"d37965ca-e557-483c-b195-310b690d9101\") " pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-r94kz" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.625130 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg5lx\" (UniqueName: \"kubernetes.io/projected/b3227d8a-06db-4f44-ae26-f173c27fd3e1-kube-api-access-pg5lx\") pod \"ovn-operator-controller-manager-56897c768d-9nzjs\" (UID: \"b3227d8a-06db-4f44-ae26-f173c27fd3e1\") " pod="openstack-operators/ovn-operator-controller-manager-56897c768d-9nzjs" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 
07:12:24.625167 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvqld\" (UniqueName: \"kubernetes.io/projected/9a41bf27-5186-4bfe-b722-87a604d851c3-kube-api-access-wvqld\") pod \"nova-operator-controller-manager-79556f57fc-mwdxn\" (UID: \"9a41bf27-5186-4bfe-b722-87a604d851c3\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-mwdxn" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.625192 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5qlt\" (UniqueName: \"kubernetes.io/projected/710cbde0-3645-452d-8cda-d4165c8fdd32-kube-api-access-q5qlt\") pod \"placement-operator-controller-manager-57988cc5b5-4mqd6\" (UID: \"710cbde0-3645-452d-8cda-d4165c8fdd32\") " pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-4mqd6" Nov 28 07:12:24 crc kubenswrapper[4946]: E1128 07:12:24.634726 4946 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 07:12:24 crc kubenswrapper[4946]: E1128 07:12:24.634802 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aef07b0c-aae8-48fd-a246-8b5669cccbce-cert podName:aef07b0c-aae8-48fd-a246-8b5669cccbce nodeName:}" failed. No retries permitted until 2025-11-28 07:12:25.134782201 +0000 UTC m=+1199.512847312 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/aef07b0c-aae8-48fd-a246-8b5669cccbce-cert") pod "openstack-baremetal-operator-controller-manager-5d9f9695dbgtt8n" (UID: "aef07b0c-aae8-48fd-a246-8b5669cccbce") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.694885 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4dbm\" (UniqueName: \"kubernetes.io/projected/aef07b0c-aae8-48fd-a246-8b5669cccbce-kube-api-access-g4dbm\") pod \"openstack-baremetal-operator-controller-manager-5d9f9695dbgtt8n\" (UID: \"aef07b0c-aae8-48fd-a246-8b5669cccbce\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5d9f9695dbgtt8n" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.695271 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5h8g\" (UniqueName: \"kubernetes.io/projected/d37965ca-e557-483c-b195-310b690d9101-kube-api-access-w5h8g\") pod \"octavia-operator-controller-manager-64cdc6ff96-r94kz\" (UID: \"d37965ca-e557-483c-b195-310b690d9101\") " pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-r94kz" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.695766 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgkxv\" (UniqueName: \"kubernetes.io/projected/7aeae67c-c1be-4d23-bb62-c8798d9fe052-kube-api-access-dgkxv\") pod \"neutron-operator-controller-manager-6fdcddb789-nl9r5\" (UID: \"7aeae67c-c1be-4d23-bb62-c8798d9fe052\") " pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-nl9r5" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.700430 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvqld\" (UniqueName: \"kubernetes.io/projected/9a41bf27-5186-4bfe-b722-87a604d851c3-kube-api-access-wvqld\") pod \"nova-operator-controller-manager-79556f57fc-mwdxn\" (UID: 
\"9a41bf27-5186-4bfe-b722-87a604d851c3\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-mwdxn" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.730409 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5qlt\" (UniqueName: \"kubernetes.io/projected/710cbde0-3645-452d-8cda-d4165c8fdd32-kube-api-access-q5qlt\") pod \"placement-operator-controller-manager-57988cc5b5-4mqd6\" (UID: \"710cbde0-3645-452d-8cda-d4165c8fdd32\") " pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-4mqd6" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.730497 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1c7d18a6-2067-4736-a42f-074f2672a841-cert\") pod \"infra-operator-controller-manager-57548d458d-dhvm2\" (UID: \"1c7d18a6-2067-4736-a42f-074f2672a841\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-dhvm2" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.730556 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xrwz\" (UniqueName: \"kubernetes.io/projected/989a6399-e7f7-4b7d-bd68-4d44531b3a8e-kube-api-access-9xrwz\") pod \"telemetry-operator-controller-manager-76cc84c6bb-qfvfk\" (UID: \"989a6399-e7f7-4b7d-bd68-4d44531b3a8e\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-qfvfk" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.730594 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxvhl\" (UniqueName: \"kubernetes.io/projected/fda79d01-4f7c-4c88-8567-6c9543ec8b51-kube-api-access-hxvhl\") pod \"test-operator-controller-manager-5cd6c7f4c8-nqqq9\" (UID: \"fda79d01-4f7c-4c88-8567-6c9543ec8b51\") " pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-nqqq9" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.730652 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsjnx\" (UniqueName: \"kubernetes.io/projected/3f402388-33d3-4c7b-a1b7-26241b6de58c-kube-api-access-zsjnx\") pod \"swift-operator-controller-manager-d77b94747-2nfmg\" (UID: \"3f402388-33d3-4c7b-a1b7-26241b6de58c\") " pod="openstack-operators/swift-operator-controller-manager-d77b94747-2nfmg" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.730697 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pg5lx\" (UniqueName: \"kubernetes.io/projected/b3227d8a-06db-4f44-ae26-f173c27fd3e1-kube-api-access-pg5lx\") pod \"ovn-operator-controller-manager-56897c768d-9nzjs\" (UID: \"b3227d8a-06db-4f44-ae26-f173c27fd3e1\") " pod="openstack-operators/ovn-operator-controller-manager-56897c768d-9nzjs" Nov 28 07:12:24 crc kubenswrapper[4946]: E1128 07:12:24.731253 4946 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 28 07:12:24 crc kubenswrapper[4946]: E1128 07:12:24.731341 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c7d18a6-2067-4736-a42f-074f2672a841-cert podName:1c7d18a6-2067-4736-a42f-074f2672a841 nodeName:}" failed. No retries permitted until 2025-11-28 07:12:25.731310642 +0000 UTC m=+1200.109375753 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1c7d18a6-2067-4736-a42f-074f2672a841-cert") pod "infra-operator-controller-manager-57548d458d-dhvm2" (UID: "1c7d18a6-2067-4736-a42f-074f2672a841") : secret "infra-operator-webhook-server-cert" not found Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.732014 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.732092 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.753184 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-656dcb59d4-blpm6"] Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.757072 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-r94kz" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.785216 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsjnx\" (UniqueName: \"kubernetes.io/projected/3f402388-33d3-4c7b-a1b7-26241b6de58c-kube-api-access-zsjnx\") pod \"swift-operator-controller-manager-d77b94747-2nfmg\" (UID: \"3f402388-33d3-4c7b-a1b7-26241b6de58c\") " pod="openstack-operators/swift-operator-controller-manager-d77b94747-2nfmg" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.793055 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pg5lx\" (UniqueName: \"kubernetes.io/projected/b3227d8a-06db-4f44-ae26-f173c27fd3e1-kube-api-access-pg5lx\") pod \"ovn-operator-controller-manager-56897c768d-9nzjs\" (UID: \"b3227d8a-06db-4f44-ae26-f173c27fd3e1\") " pod="openstack-operators/ovn-operator-controller-manager-56897c768d-9nzjs" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.808186 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-blpm6" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.821049 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-656dcb59d4-blpm6"] Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.821771 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-b8bwj" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.833254 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5qlt\" (UniqueName: \"kubernetes.io/projected/710cbde0-3645-452d-8cda-d4165c8fdd32-kube-api-access-q5qlt\") pod \"placement-operator-controller-manager-57988cc5b5-4mqd6\" (UID: \"710cbde0-3645-452d-8cda-d4165c8fdd32\") " pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-4mqd6" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.879557 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xrwz\" (UniqueName: \"kubernetes.io/projected/989a6399-e7f7-4b7d-bd68-4d44531b3a8e-kube-api-access-9xrwz\") pod \"telemetry-operator-controller-manager-76cc84c6bb-qfvfk\" (UID: \"989a6399-e7f7-4b7d-bd68-4d44531b3a8e\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-qfvfk" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.879605 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxvhl\" (UniqueName: \"kubernetes.io/projected/fda79d01-4f7c-4c88-8567-6c9543ec8b51-kube-api-access-hxvhl\") pod \"test-operator-controller-manager-5cd6c7f4c8-nqqq9\" (UID: \"fda79d01-4f7c-4c88-8567-6c9543ec8b51\") " pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-nqqq9" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.880921 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-4mqd6" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.932415 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxvhl\" (UniqueName: \"kubernetes.io/projected/fda79d01-4f7c-4c88-8567-6c9543ec8b51-kube-api-access-hxvhl\") pod \"test-operator-controller-manager-5cd6c7f4c8-nqqq9\" (UID: \"fda79d01-4f7c-4c88-8567-6c9543ec8b51\") " pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-nqqq9" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.938110 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-d77b94747-2nfmg" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.948681 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xrwz\" (UniqueName: \"kubernetes.io/projected/989a6399-e7f7-4b7d-bd68-4d44531b3a8e-kube-api-access-9xrwz\") pod \"telemetry-operator-controller-manager-76cc84c6bb-qfvfk\" (UID: \"989a6399-e7f7-4b7d-bd68-4d44531b3a8e\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-qfvfk" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.960927 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-nl9r5" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.965671 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-qfvfk" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.968311 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-7p86n" event={"ID":"dc11bd96-48e8-4613-80e6-ce3b518cea8d","Type":"ContainerStarted","Data":"e78664d16784d3bd66edf2fdfa93162884cf91186b4d785b7bad74e6b6dadaa0"} Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.974897 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-mwdxn" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.988616 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wskr8\" (UniqueName: \"kubernetes.io/projected/836f3766-a6eb-447f-9337-fa9082bcb62b-kube-api-access-wskr8\") pod \"watcher-operator-controller-manager-656dcb59d4-blpm6\" (UID: \"836f3766-a6eb-447f-9337-fa9082bcb62b\") " pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-blpm6" Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.997716 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-66f75ddbcc-l7w7r"] Nov 28 07:12:24 crc kubenswrapper[4946]: I1128 07:12:24.999004 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-66f75ddbcc-l7w7r" Nov 28 07:12:25 crc kubenswrapper[4946]: I1128 07:12:25.008730 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Nov 28 07:12:25 crc kubenswrapper[4946]: I1128 07:12:25.009048 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-5shdq" Nov 28 07:12:25 crc kubenswrapper[4946]: I1128 07:12:25.009161 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Nov 28 07:12:25 crc kubenswrapper[4946]: I1128 07:12:25.031078 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-nqqq9" Nov 28 07:12:25 crc kubenswrapper[4946]: I1128 07:12:25.051979 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-66f75ddbcc-l7w7r"] Nov 28 07:12:25 crc kubenswrapper[4946]: I1128 07:12:25.078067 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-t5qsk"] Nov 28 07:12:25 crc kubenswrapper[4946]: I1128 07:12:25.079361 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-t5qsk" Nov 28 07:12:25 crc kubenswrapper[4946]: I1128 07:12:25.084871 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-btwkk" Nov 28 07:12:25 crc kubenswrapper[4946]: I1128 07:12:25.085703 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-589cbd6b5b-7p86n"] Nov 28 07:12:25 crc kubenswrapper[4946]: I1128 07:12:25.089960 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5f4eb990-cf89-4f9a-8d22-f016b8894f4f-webhook-certs\") pod \"openstack-operator-controller-manager-66f75ddbcc-l7w7r\" (UID: \"5f4eb990-cf89-4f9a-8d22-f016b8894f4f\") " pod="openstack-operators/openstack-operator-controller-manager-66f75ddbcc-l7w7r" Nov 28 07:12:25 crc kubenswrapper[4946]: I1128 07:12:25.090029 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wskr8\" (UniqueName: \"kubernetes.io/projected/836f3766-a6eb-447f-9337-fa9082bcb62b-kube-api-access-wskr8\") pod \"watcher-operator-controller-manager-656dcb59d4-blpm6\" (UID: \"836f3766-a6eb-447f-9337-fa9082bcb62b\") " pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-blpm6" Nov 28 07:12:25 crc kubenswrapper[4946]: I1128 07:12:25.090054 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5f4eb990-cf89-4f9a-8d22-f016b8894f4f-metrics-certs\") pod \"openstack-operator-controller-manager-66f75ddbcc-l7w7r\" (UID: \"5f4eb990-cf89-4f9a-8d22-f016b8894f4f\") " pod="openstack-operators/openstack-operator-controller-manager-66f75ddbcc-l7w7r" Nov 28 07:12:25 crc kubenswrapper[4946]: I1128 07:12:25.090086 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wflnn\" (UniqueName: \"kubernetes.io/projected/520a099d-3fd5-42f5-b883-c7a1b94dcb70-kube-api-access-wflnn\") pod \"rabbitmq-cluster-operator-manager-668c99d594-t5qsk\" (UID: \"520a099d-3fd5-42f5-b883-c7a1b94dcb70\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-t5qsk" Nov 28 07:12:25 crc kubenswrapper[4946]: I1128 07:12:25.090112 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gbdz\" (UniqueName: \"kubernetes.io/projected/5f4eb990-cf89-4f9a-8d22-f016b8894f4f-kube-api-access-8gbdz\") pod \"openstack-operator-controller-manager-66f75ddbcc-l7w7r\" (UID: \"5f4eb990-cf89-4f9a-8d22-f016b8894f4f\") " pod="openstack-operators/openstack-operator-controller-manager-66f75ddbcc-l7w7r" Nov 28 07:12:25 crc kubenswrapper[4946]: I1128 07:12:25.094312 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-t5qsk"] Nov 28 07:12:25 crc kubenswrapper[4946]: I1128 07:12:25.114347 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wskr8\" (UniqueName: \"kubernetes.io/projected/836f3766-a6eb-447f-9337-fa9082bcb62b-kube-api-access-wskr8\") pod \"watcher-operator-controller-manager-656dcb59d4-blpm6\" (UID: \"836f3766-a6eb-447f-9337-fa9082bcb62b\") " pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-blpm6" Nov 28 07:12:25 crc kubenswrapper[4946]: 
I1128 07:12:25.114735 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-9nzjs" Nov 28 07:12:25 crc kubenswrapper[4946]: I1128 07:12:25.128094 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5d494799bf-5hbz6"] Nov 28 07:12:25 crc kubenswrapper[4946]: W1128 07:12:25.170264 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1ac7d28_f59d_44e5_aa3e_c6da338fca84.slice/crio-4921f0e395727fabed4164f83357a113b8cee7bd26afa9574b27f0e5beb03594 WatchSource:0}: Error finding container 4921f0e395727fabed4164f83357a113b8cee7bd26afa9574b27f0e5beb03594: Status 404 returned error can't find the container with id 4921f0e395727fabed4164f83357a113b8cee7bd26afa9574b27f0e5beb03594 Nov 28 07:12:25 crc kubenswrapper[4946]: I1128 07:12:25.170333 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-blpm6" Nov 28 07:12:25 crc kubenswrapper[4946]: I1128 07:12:25.191635 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5f4eb990-cf89-4f9a-8d22-f016b8894f4f-webhook-certs\") pod \"openstack-operator-controller-manager-66f75ddbcc-l7w7r\" (UID: \"5f4eb990-cf89-4f9a-8d22-f016b8894f4f\") " pod="openstack-operators/openstack-operator-controller-manager-66f75ddbcc-l7w7r" Nov 28 07:12:25 crc kubenswrapper[4946]: I1128 07:12:25.191722 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5f4eb990-cf89-4f9a-8d22-f016b8894f4f-metrics-certs\") pod \"openstack-operator-controller-manager-66f75ddbcc-l7w7r\" (UID: \"5f4eb990-cf89-4f9a-8d22-f016b8894f4f\") " pod="openstack-operators/openstack-operator-controller-manager-66f75ddbcc-l7w7r" Nov 28 07:12:25 crc kubenswrapper[4946]: I1128 07:12:25.191754 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wflnn\" (UniqueName: \"kubernetes.io/projected/520a099d-3fd5-42f5-b883-c7a1b94dcb70-kube-api-access-wflnn\") pod \"rabbitmq-cluster-operator-manager-668c99d594-t5qsk\" (UID: \"520a099d-3fd5-42f5-b883-c7a1b94dcb70\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-t5qsk" Nov 28 07:12:25 crc kubenswrapper[4946]: I1128 07:12:25.191777 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gbdz\" (UniqueName: \"kubernetes.io/projected/5f4eb990-cf89-4f9a-8d22-f016b8894f4f-kube-api-access-8gbdz\") pod \"openstack-operator-controller-manager-66f75ddbcc-l7w7r\" (UID: \"5f4eb990-cf89-4f9a-8d22-f016b8894f4f\") " pod="openstack-operators/openstack-operator-controller-manager-66f75ddbcc-l7w7r" Nov 28 07:12:25 crc kubenswrapper[4946]: I1128 07:12:25.191813 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aef07b0c-aae8-48fd-a246-8b5669cccbce-cert\") pod \"openstack-baremetal-operator-controller-manager-5d9f9695dbgtt8n\" (UID: \"aef07b0c-aae8-48fd-a246-8b5669cccbce\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5d9f9695dbgtt8n" Nov 28 07:12:25 crc kubenswrapper[4946]: E1128 07:12:25.191940 4946 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret 
"webhook-server-cert" not found Nov 28 07:12:25 crc kubenswrapper[4946]: E1128 07:12:25.191975 4946 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 07:12:25 crc kubenswrapper[4946]: E1128 07:12:25.192030 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5f4eb990-cf89-4f9a-8d22-f016b8894f4f-webhook-certs podName:5f4eb990-cf89-4f9a-8d22-f016b8894f4f nodeName:}" failed. No retries permitted until 2025-11-28 07:12:25.692005121 +0000 UTC m=+1200.070070232 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5f4eb990-cf89-4f9a-8d22-f016b8894f4f-webhook-certs") pod "openstack-operator-controller-manager-66f75ddbcc-l7w7r" (UID: "5f4eb990-cf89-4f9a-8d22-f016b8894f4f") : secret "webhook-server-cert" not found Nov 28 07:12:25 crc kubenswrapper[4946]: E1128 07:12:25.192049 4946 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 28 07:12:25 crc kubenswrapper[4946]: E1128 07:12:25.192051 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aef07b0c-aae8-48fd-a246-8b5669cccbce-cert podName:aef07b0c-aae8-48fd-a246-8b5669cccbce nodeName:}" failed. No retries permitted until 2025-11-28 07:12:26.192043002 +0000 UTC m=+1200.570108113 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/aef07b0c-aae8-48fd-a246-8b5669cccbce-cert") pod "openstack-baremetal-operator-controller-manager-5d9f9695dbgtt8n" (UID: "aef07b0c-aae8-48fd-a246-8b5669cccbce") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 07:12:25 crc kubenswrapper[4946]: E1128 07:12:25.192090 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5f4eb990-cf89-4f9a-8d22-f016b8894f4f-metrics-certs podName:5f4eb990-cf89-4f9a-8d22-f016b8894f4f nodeName:}" failed. No retries permitted until 2025-11-28 07:12:25.692072112 +0000 UTC m=+1200.070137223 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5f4eb990-cf89-4f9a-8d22-f016b8894f4f-metrics-certs") pod "openstack-operator-controller-manager-66f75ddbcc-l7w7r" (UID: "5f4eb990-cf89-4f9a-8d22-f016b8894f4f") : secret "metrics-server-cert" not found Nov 28 07:12:25 crc kubenswrapper[4946]: I1128 07:12:25.212543 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wflnn\" (UniqueName: \"kubernetes.io/projected/520a099d-3fd5-42f5-b883-c7a1b94dcb70-kube-api-access-wflnn\") pod \"rabbitmq-cluster-operator-manager-668c99d594-t5qsk\" (UID: \"520a099d-3fd5-42f5-b883-c7a1b94dcb70\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-t5qsk" Nov 28 07:12:25 crc kubenswrapper[4946]: I1128 07:12:25.213452 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gbdz\" (UniqueName: \"kubernetes.io/projected/5f4eb990-cf89-4f9a-8d22-f016b8894f4f-kube-api-access-8gbdz\") pod \"openstack-operator-controller-manager-66f75ddbcc-l7w7r\" (UID: \"5f4eb990-cf89-4f9a-8d22-f016b8894f4f\") " pod="openstack-operators/openstack-operator-controller-manager-66f75ddbcc-l7w7r" Nov 28 07:12:25 crc kubenswrapper[4946]: I1128 07:12:25.295277 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-pvrsb"] Nov 28 07:12:25 crc kubenswrapper[4946]: I1128 07:12:25.449877 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-t5qsk" Nov 28 07:12:25 crc kubenswrapper[4946]: I1128 07:12:25.595355 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6b7f75547b-g6gbz"] Nov 28 07:12:25 crc kubenswrapper[4946]: I1128 07:12:25.621878 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b64f4fb85-2l7gl"] Nov 28 07:12:25 crc kubenswrapper[4946]: W1128 07:12:25.626357 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod353bb4c2_bff5_4749_a149_8a856803b84b.slice/crio-a2f1a9a75b56f9257777ce37f1540330dfba0fea113c9ac6a354665155f747b0 WatchSource:0}: Error finding container a2f1a9a75b56f9257777ce37f1540330dfba0fea113c9ac6a354665155f747b0: Status 404 returned error can't find the container with id a2f1a9a75b56f9257777ce37f1540330dfba0fea113c9ac6a354665155f747b0 Nov 28 07:12:25 crc kubenswrapper[4946]: I1128 07:12:25.711888 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5f4eb990-cf89-4f9a-8d22-f016b8894f4f-metrics-certs\") pod \"openstack-operator-controller-manager-66f75ddbcc-l7w7r\" (UID: \"5f4eb990-cf89-4f9a-8d22-f016b8894f4f\") " pod="openstack-operators/openstack-operator-controller-manager-66f75ddbcc-l7w7r" Nov 28 07:12:25 crc kubenswrapper[4946]: I1128 07:12:25.712172 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5f4eb990-cf89-4f9a-8d22-f016b8894f4f-webhook-certs\") pod \"openstack-operator-controller-manager-66f75ddbcc-l7w7r\" (UID: \"5f4eb990-cf89-4f9a-8d22-f016b8894f4f\") " pod="openstack-operators/openstack-operator-controller-manager-66f75ddbcc-l7w7r" Nov 28 07:12:25 crc kubenswrapper[4946]: E1128 07:12:25.712059 4946 secret.go:188] Couldn't get secret 
openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 28 07:12:25 crc kubenswrapper[4946]: E1128 07:12:25.712347 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5f4eb990-cf89-4f9a-8d22-f016b8894f4f-metrics-certs podName:5f4eb990-cf89-4f9a-8d22-f016b8894f4f nodeName:}" failed. No retries permitted until 2025-11-28 07:12:26.712327426 +0000 UTC m=+1201.090392537 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5f4eb990-cf89-4f9a-8d22-f016b8894f4f-metrics-certs") pod "openstack-operator-controller-manager-66f75ddbcc-l7w7r" (UID: "5f4eb990-cf89-4f9a-8d22-f016b8894f4f") : secret "metrics-server-cert" not found Nov 28 07:12:25 crc kubenswrapper[4946]: E1128 07:12:25.712276 4946 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 28 07:12:25 crc kubenswrapper[4946]: E1128 07:12:25.712750 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5f4eb990-cf89-4f9a-8d22-f016b8894f4f-webhook-certs podName:5f4eb990-cf89-4f9a-8d22-f016b8894f4f nodeName:}" failed. No retries permitted until 2025-11-28 07:12:26.712697535 +0000 UTC m=+1201.090762646 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5f4eb990-cf89-4f9a-8d22-f016b8894f4f-webhook-certs") pod "openstack-operator-controller-manager-66f75ddbcc-l7w7r" (UID: "5f4eb990-cf89-4f9a-8d22-f016b8894f4f") : secret "webhook-server-cert" not found Nov 28 07:12:25 crc kubenswrapper[4946]: I1128 07:12:25.725211 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5d499bf58b-2f2dz"] Nov 28 07:12:25 crc kubenswrapper[4946]: I1128 07:12:25.736246 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5b77f656f-zwd8k"] Nov 28 07:12:25 crc kubenswrapper[4946]: W1128 07:12:25.739187 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b3d3306_e302_4fe9_b50c_8295275ed28c.slice/crio-39451dd699fbf5dcc68413b0b92f6c57d7ef6b5b3655ae512e23e05b98f33a8b WatchSource:0}: Error finding container 39451dd699fbf5dcc68413b0b92f6c57d7ef6b5b3655ae512e23e05b98f33a8b: Status 404 returned error can't find the container with id 39451dd699fbf5dcc68413b0b92f6c57d7ef6b5b3655ae512e23e05b98f33a8b Nov 28 07:12:25 crc kubenswrapper[4946]: I1128 07:12:25.752321 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-955677c94-hk9d2"] Nov 28 07:12:25 crc kubenswrapper[4946]: W1128 07:12:25.755917 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3fe2b1f1_7ef0_4cc3_9cf1_adefe5dead46.slice/crio-b70cab8d9a1e6145cc3a5201b12e225e7464b300cadc7b1410551fd8c2b5b647 WatchSource:0}: Error finding container b70cab8d9a1e6145cc3a5201b12e225e7464b300cadc7b1410551fd8c2b5b647: Status 404 returned error can't find the container with id b70cab8d9a1e6145cc3a5201b12e225e7464b300cadc7b1410551fd8c2b5b647 Nov 28 07:12:25 crc kubenswrapper[4946]: W1128 07:12:25.758333 4946 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb30728b_6cd4_4d80_8e1a_bc979410fad6.slice/crio-85edcffbd7a250561c77785916c548f26c18b01fe59074b19ff745d5cd8a7e70 WatchSource:0}: Error finding container 85edcffbd7a250561c77785916c548f26c18b01fe59074b19ff745d5cd8a7e70: Status 404 returned error can't find the container with id 85edcffbd7a250561c77785916c548f26c18b01fe59074b19ff745d5cd8a7e70 Nov 28 07:12:25 crc kubenswrapper[4946]: I1128 07:12:25.814677 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1c7d18a6-2067-4736-a42f-074f2672a841-cert\") pod \"infra-operator-controller-manager-57548d458d-dhvm2\" (UID: \"1c7d18a6-2067-4736-a42f-074f2672a841\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-dhvm2" Nov 28 07:12:25 crc kubenswrapper[4946]: E1128 07:12:25.815089 4946 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 28 07:12:25 crc kubenswrapper[4946]: E1128 07:12:25.815169 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c7d18a6-2067-4736-a42f-074f2672a841-cert podName:1c7d18a6-2067-4736-a42f-074f2672a841 nodeName:}" failed. No retries permitted until 2025-11-28 07:12:27.815143072 +0000 UTC m=+1202.193208183 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1c7d18a6-2067-4736-a42f-074f2672a841-cert") pod "infra-operator-controller-manager-57548d458d-dhvm2" (UID: "1c7d18a6-2067-4736-a42f-074f2672a841") : secret "infra-operator-webhook-server-cert" not found Nov 28 07:12:25 crc kubenswrapper[4946]: I1128 07:12:25.886611 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-57988cc5b5-4mqd6"] Nov 28 07:12:25 crc kubenswrapper[4946]: I1128 07:12:25.892630 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-9b7s9"] Nov 28 07:12:25 crc kubenswrapper[4946]: I1128 07:12:25.911165 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-d77b94747-2nfmg"] Nov 28 07:12:25 crc kubenswrapper[4946]: W1128 07:12:25.916212 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7aeae67c_c1be_4d23_bb62_c8798d9fe052.slice/crio-eac2dfe140c572eb1d2105dde88cd724061454fa61776efdc41d356eaf8c11ea WatchSource:0}: Error finding container eac2dfe140c572eb1d2105dde88cd724061454fa61776efdc41d356eaf8c11ea: Status 404 returned error can't find the container with id eac2dfe140c572eb1d2105dde88cd724061454fa61776efdc41d356eaf8c11ea Nov 28 07:12:25 crc kubenswrapper[4946]: I1128 07:12:25.917622 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6fdcddb789-nl9r5"] Nov 28 07:12:25 crc kubenswrapper[4946]: I1128 07:12:25.975194 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-4mqd6" event={"ID":"710cbde0-3645-452d-8cda-d4165c8fdd32","Type":"ContainerStarted","Data":"295db82e60e861ec721adfa768caf11d08a8b3b0fb329fa6c0a6a87c73885948"} Nov 28 07:12:25 crc kubenswrapper[4946]: I1128 07:12:25.977470 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-g6gbz" event={"ID":"d4188f46-2979-4b90-bfa0-37962da6e3c7","Type":"ContainerStarted","Data":"57879c4382e732eecdb39a19b828d1785c79fc26e4d41e63de90041f31a2c358"} Nov 28 07:12:25 crc kubenswrapper[4946]: I1128 07:12:25.980037 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-d77b94747-2nfmg" event={"ID":"3f402388-33d3-4c7b-a1b7-26241b6de58c","Type":"ContainerStarted","Data":"9da3b67af102a91f45e1893af5948a45c80d55fe6309c40a71fa560484ef8d05"} Nov 28 07:12:25 crc kubenswrapper[4946]: I1128 07:12:25.981637 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-9b7s9" event={"ID":"8d84e62b-b1cf-4238-b78b-f47a9f2df3ef","Type":"ContainerStarted","Data":"e90bb1dce2f93ee5ebb5b25143612d90fd2ef37647c365d7a01c462250c30b28"} Nov 28 07:12:25 crc kubenswrapper[4946]: I1128 07:12:25.983854 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-nl9r5" event={"ID":"7aeae67c-c1be-4d23-bb62-c8798d9fe052","Type":"ContainerStarted","Data":"eac2dfe140c572eb1d2105dde88cd724061454fa61776efdc41d356eaf8c11ea"} Nov 28 07:12:25 crc kubenswrapper[4946]: I1128 07:12:25.984774 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-pvrsb" event={"ID":"e3d1cda6-c2ac-40fc-8f40-2e4b89dbca81","Type":"ContainerStarted","Data":"6874577284c2b85c20e08b3b7f7899b842e1c0cb88e1ce322cb3ca89a9a2391a"} Nov 28 07:12:25 crc kubenswrapper[4946]: I1128 07:12:25.986058 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-zwd8k" event={"ID":"1b3d3306-e302-4fe9-b50c-8295275ed28c","Type":"ContainerStarted","Data":"39451dd699fbf5dcc68413b0b92f6c57d7ef6b5b3655ae512e23e05b98f33a8b"} Nov 28 07:12:25 crc kubenswrapper[4946]: I1128 07:12:25.986862 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-5hbz6" event={"ID":"f1ac7d28-f59d-44e5-aa3e-c6da338fca84","Type":"ContainerStarted","Data":"4921f0e395727fabed4164f83357a113b8cee7bd26afa9574b27f0e5beb03594"} Nov 28 07:12:26 crc kubenswrapper[4946]: I1128 07:12:26.046614 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-2l7gl" event={"ID":"353bb4c2-bff5-4749-a149-8a856803b84b","Type":"ContainerStarted","Data":"a2f1a9a75b56f9257777ce37f1540330dfba0fea113c9ac6a354665155f747b0"} Nov 28 07:12:26 crc kubenswrapper[4946]: I1128 07:12:26.046831 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-2f2dz" event={"ID":"cb30728b-6cd4-4d80-8e1a-bc979410fad6","Type":"ContainerStarted","Data":"85edcffbd7a250561c77785916c548f26c18b01fe59074b19ff745d5cd8a7e70"} Nov 28 07:12:26 crc kubenswrapper[4946]: I1128 07:12:26.046852 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-955677c94-hk9d2" event={"ID":"3fe2b1f1-7ef0-4cc3-9cf1-adefe5dead46","Type":"ContainerStarted","Data":"b70cab8d9a1e6145cc3a5201b12e225e7464b300cadc7b1410551fd8c2b5b647"} Nov 28 07:12:26 crc kubenswrapper[4946]: I1128 07:12:26.089911 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/test-operator-controller-manager-5cd6c7f4c8-nqqq9"] Nov 28 07:12:26 crc kubenswrapper[4946]: I1128 07:12:26.131034 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-qfvfk"] Nov 28 07:12:26 crc kubenswrapper[4946]: W1128 07:12:26.133676 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd37965ca_e557_483c_b195_310b690d9101.slice/crio-1d288364016ac505a3153d5636ca1126bd5fceac29248433308d118fbb5b4dcd WatchSource:0}: Error finding container 1d288364016ac505a3153d5636ca1126bd5fceac29248433308d118fbb5b4dcd: Status 404 returned error can't find the container with id 1d288364016ac505a3153d5636ca1126bd5fceac29248433308d118fbb5b4dcd Nov 28 07:12:26 crc kubenswrapper[4946]: E1128 07:12:26.137343 4946 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:ddc8a82f05930db8ee7a8d6d189b5a66373060656e4baf71ac302f89c477da4c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w5h8g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-64cdc6ff96-r94kz_openstack-operators(d37965ca-e557-483c-b195-310b690d9101): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 07:12:26 crc kubenswrapper[4946]: E1128 07:12:26.139892 4946 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w5h8g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-64cdc6ff96-r94kz_openstack-operators(d37965ca-e557-483c-b195-310b690d9101): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 07:12:26 crc kubenswrapper[4946]: I1128 07:12:26.141359 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-56897c768d-9nzjs"] Nov 28 07:12:26 crc kubenswrapper[4946]: E1128 07:12:26.141512 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-r94kz" podUID="d37965ca-e557-483c-b195-310b690d9101" Nov 28 07:12:26 crc kubenswrapper[4946]: E1128 07:12:26.141953 4946 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wvqld,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-79556f57fc-mwdxn_openstack-operators(9a41bf27-5186-4bfe-b722-87a604d851c3): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 07:12:26 crc kubenswrapper[4946]: I1128 07:12:26.148290 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-mwdxn"] Nov 28 07:12:26 crc kubenswrapper[4946]: E1128 07:12:26.148925 4946 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wvqld,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-79556f57fc-mwdxn_openstack-operators(9a41bf27-5186-4bfe-b722-87a604d851c3): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 07:12:26 crc kubenswrapper[4946]: E1128 07:12:26.150809 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" 
pod="openstack-operators/nova-operator-controller-manager-79556f57fc-mwdxn" podUID="9a41bf27-5186-4bfe-b722-87a604d851c3" Nov 28 07:12:26 crc kubenswrapper[4946]: W1128 07:12:26.153261 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f759a24_d58b_4aed_8e14_71dec2ff2df6.slice/crio-924a51ef21dcae42920b8b7cf5d9e75cc7eba21a5dc1cdd15155265b3dc4ab1c WatchSource:0}: Error finding container 924a51ef21dcae42920b8b7cf5d9e75cc7eba21a5dc1cdd15155265b3dc4ab1c: Status 404 returned error can't find the container with id 924a51ef21dcae42920b8b7cf5d9e75cc7eba21a5dc1cdd15155265b3dc4ab1c Nov 28 07:12:26 crc kubenswrapper[4946]: I1128 07:12:26.156199 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-64cdc6ff96-r94kz"] Nov 28 07:12:26 crc kubenswrapper[4946]: E1128 07:12:26.160171 4946 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:25faa5b0e4801d4d3b01a28b877ed3188eee71f33ad66f3c2e86b7921758e711,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-62vzj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7b4567c7cf-sqh4v_openstack-operators(2f759a24-d58b-4aed-8e14-71dec2ff2df6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 07:12:26 crc kubenswrapper[4946]: I1128 07:12:26.161231 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/keystone-operator-controller-manager-7b4567c7cf-sqh4v"] Nov 28 07:12:26 crc kubenswrapper[4946]: E1128 07:12:26.161966 4946 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-62vzj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7b4567c7cf-sqh4v_openstack-operators(2f759a24-d58b-4aed-8e14-71dec2ff2df6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 07:12:26 crc kubenswrapper[4946]: E1128 07:12:26.163356 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-sqh4v" podUID="2f759a24-d58b-4aed-8e14-71dec2ff2df6" Nov 28 07:12:26 crc kubenswrapper[4946]: I1128 07:12:26.171577 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-656dcb59d4-blpm6"] Nov 28 07:12:26 crc kubenswrapper[4946]: I1128 07:12:26.178098 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-t5qsk"] Nov 28 07:12:26 crc kubenswrapper[4946]: E1128 07:12:26.190091 4946 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:6bed55b172b9ee8ccc3952cbfc543d8bd44e2690f6db94348a754152fd78f4cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wskr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-656dcb59d4-blpm6_openstack-operators(836f3766-a6eb-447f-9337-fa9082bcb62b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 07:12:26 crc kubenswrapper[4946]: E1128 07:12:26.190511 4946 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wflnn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-t5qsk_openstack-operators(520a099d-3fd5-42f5-b883-c7a1b94dcb70): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 07:12:26 crc kubenswrapper[4946]: E1128 07:12:26.192536 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-t5qsk" podUID="520a099d-3fd5-42f5-b883-c7a1b94dcb70" Nov 28 07:12:26 crc kubenswrapper[4946]: E1128 07:12:26.208592 4946 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wskr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-656dcb59d4-blpm6_openstack-operators(836f3766-a6eb-447f-9337-fa9082bcb62b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 07:12:26 crc kubenswrapper[4946]: E1128 07:12:26.211058 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-blpm6" podUID="836f3766-a6eb-447f-9337-fa9082bcb62b" Nov 28 
07:12:26 crc kubenswrapper[4946]: I1128 07:12:26.222031 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aef07b0c-aae8-48fd-a246-8b5669cccbce-cert\") pod \"openstack-baremetal-operator-controller-manager-5d9f9695dbgtt8n\" (UID: \"aef07b0c-aae8-48fd-a246-8b5669cccbce\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5d9f9695dbgtt8n" Nov 28 07:12:26 crc kubenswrapper[4946]: E1128 07:12:26.222493 4946 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 07:12:26 crc kubenswrapper[4946]: E1128 07:12:26.237458 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aef07b0c-aae8-48fd-a246-8b5669cccbce-cert podName:aef07b0c-aae8-48fd-a246-8b5669cccbce nodeName:}" failed. No retries permitted until 2025-11-28 07:12:28.23742641 +0000 UTC m=+1202.615491521 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/aef07b0c-aae8-48fd-a246-8b5669cccbce-cert") pod "openstack-baremetal-operator-controller-manager-5d9f9695dbgtt8n" (UID: "aef07b0c-aae8-48fd-a246-8b5669cccbce") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 07:12:26 crc kubenswrapper[4946]: I1128 07:12:26.750516 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5f4eb990-cf89-4f9a-8d22-f016b8894f4f-webhook-certs\") pod \"openstack-operator-controller-manager-66f75ddbcc-l7w7r\" (UID: \"5f4eb990-cf89-4f9a-8d22-f016b8894f4f\") " pod="openstack-operators/openstack-operator-controller-manager-66f75ddbcc-l7w7r" Nov 28 07:12:26 crc kubenswrapper[4946]: E1128 07:12:26.750754 4946 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 28 07:12:26 crc kubenswrapper[4946]: I1128 07:12:26.750924 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5f4eb990-cf89-4f9a-8d22-f016b8894f4f-metrics-certs\") pod \"openstack-operator-controller-manager-66f75ddbcc-l7w7r\" (UID: \"5f4eb990-cf89-4f9a-8d22-f016b8894f4f\") " pod="openstack-operators/openstack-operator-controller-manager-66f75ddbcc-l7w7r" Nov 28 07:12:26 crc kubenswrapper[4946]: E1128 07:12:26.750980 4946 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 28 07:12:26 crc kubenswrapper[4946]: E1128 07:12:26.751046 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5f4eb990-cf89-4f9a-8d22-f016b8894f4f-webhook-certs podName:5f4eb990-cf89-4f9a-8d22-f016b8894f4f nodeName:}" failed. No retries permitted until 2025-11-28 07:12:28.750998738 +0000 UTC m=+1203.129063849 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5f4eb990-cf89-4f9a-8d22-f016b8894f4f-webhook-certs") pod "openstack-operator-controller-manager-66f75ddbcc-l7w7r" (UID: "5f4eb990-cf89-4f9a-8d22-f016b8894f4f") : secret "webhook-server-cert" not found Nov 28 07:12:26 crc kubenswrapper[4946]: E1128 07:12:26.751168 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5f4eb990-cf89-4f9a-8d22-f016b8894f4f-metrics-certs podName:5f4eb990-cf89-4f9a-8d22-f016b8894f4f nodeName:}" failed. 
No retries permitted until 2025-11-28 07:12:28.751154942 +0000 UTC m=+1203.129220053 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5f4eb990-cf89-4f9a-8d22-f016b8894f4f-metrics-certs") pod "openstack-operator-controller-manager-66f75ddbcc-l7w7r" (UID: "5f4eb990-cf89-4f9a-8d22-f016b8894f4f") : secret "metrics-server-cert" not found Nov 28 07:12:27 crc kubenswrapper[4946]: I1128 07:12:27.039971 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-blpm6" event={"ID":"836f3766-a6eb-447f-9337-fa9082bcb62b","Type":"ContainerStarted","Data":"c560ce60ad96ed75421abd34796129becb0acc89829fecc8fb03a76ed91e9375"} Nov 28 07:12:27 crc kubenswrapper[4946]: I1128 07:12:27.050862 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-mwdxn" event={"ID":"9a41bf27-5186-4bfe-b722-87a604d851c3","Type":"ContainerStarted","Data":"319a5652ef554f85981c1bdfd2d4cda57e940e83a75655f9fc56a25f3792f6cb"} Nov 28 07:12:27 crc kubenswrapper[4946]: I1128 07:12:27.054256 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-qfvfk" event={"ID":"989a6399-e7f7-4b7d-bd68-4d44531b3a8e","Type":"ContainerStarted","Data":"c6c1002dfb2962dea77b8008edf76bccf88732674b34801f38784bdd59925712"} Nov 28 07:12:27 crc kubenswrapper[4946]: I1128 07:12:27.065375 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-sqh4v" event={"ID":"2f759a24-d58b-4aed-8e14-71dec2ff2df6","Type":"ContainerStarted","Data":"924a51ef21dcae42920b8b7cf5d9e75cc7eba21a5dc1cdd15155265b3dc4ab1c"} Nov 28 07:12:27 crc kubenswrapper[4946]: I1128 07:12:27.072951 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-t5qsk" event={"ID":"520a099d-3fd5-42f5-b883-c7a1b94dcb70","Type":"ContainerStarted","Data":"e52c67570a743c67602a7c1986045d0deeab1c7d9a5ee377e80aa703a4603e2b"} Nov 28 07:12:27 crc kubenswrapper[4946]: I1128 07:12:27.076237 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-9nzjs" event={"ID":"b3227d8a-06db-4f44-ae26-f173c27fd3e1","Type":"ContainerStarted","Data":"6fd0015c3b3783e0969ea82bb20e065a5cd32819ee9d609ca2aa64d5ba24bc3d"} Nov 28 07:12:27 crc kubenswrapper[4946]: E1128 07:12:27.077529 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-mwdxn" podUID="9a41bf27-5186-4bfe-b722-87a604d851c3" Nov 28 07:12:27 crc kubenswrapper[4946]: I1128 07:12:27.081896 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-r94kz" event={"ID":"d37965ca-e557-483c-b195-310b690d9101","Type":"ContainerStarted","Data":"1d288364016ac505a3153d5636ca1126bd5fceac29248433308d118fbb5b4dcd"} Nov 28 07:12:27 crc kubenswrapper[4946]: E1128 07:12:27.082848 4946 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-t5qsk" podUID="520a099d-3fd5-42f5-b883-c7a1b94dcb70" Nov 28 07:12:27 crc kubenswrapper[4946]: E1128 07:12:27.082926 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:6bed55b172b9ee8ccc3952cbfc543d8bd44e2690f6db94348a754152fd78f4cf\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-blpm6" podUID="836f3766-a6eb-447f-9337-fa9082bcb62b" Nov 28 07:12:27 crc kubenswrapper[4946]: E1128 07:12:27.083194 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:25faa5b0e4801d4d3b01a28b877ed3188eee71f33ad66f3c2e86b7921758e711\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-sqh4v" podUID="2f759a24-d58b-4aed-8e14-71dec2ff2df6" Nov 28 07:12:27 crc kubenswrapper[4946]: I1128 07:12:27.088123 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-nqqq9" event={"ID":"fda79d01-4f7c-4c88-8567-6c9543ec8b51","Type":"ContainerStarted","Data":"d74e888f54fe6d945e3e78acc2cfd0eb688e93bd8954dfd03e22fa63ed362a8d"} Nov 28 07:12:27 crc kubenswrapper[4946]: E1128 07:12:27.093905 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ddc8a82f05930db8ee7a8d6d189b5a66373060656e4baf71ac302f89c477da4c\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-r94kz" podUID="d37965ca-e557-483c-b195-310b690d9101" Nov 28 07:12:27 crc kubenswrapper[4946]: I1128 07:12:27.904558 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1c7d18a6-2067-4736-a42f-074f2672a841-cert\") pod \"infra-operator-controller-manager-57548d458d-dhvm2\" (UID: \"1c7d18a6-2067-4736-a42f-074f2672a841\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-dhvm2" Nov 28 07:12:27 crc kubenswrapper[4946]: E1128 07:12:27.904978 4946 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 28 07:12:27 crc kubenswrapper[4946]: E1128 07:12:27.905712 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c7d18a6-2067-4736-a42f-074f2672a841-cert 
podName:1c7d18a6-2067-4736-a42f-074f2672a841 nodeName:}" failed. No retries permitted until 2025-11-28 07:12:31.905686042 +0000 UTC m=+1206.283751153 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1c7d18a6-2067-4736-a42f-074f2672a841-cert") pod "infra-operator-controller-manager-57548d458d-dhvm2" (UID: "1c7d18a6-2067-4736-a42f-074f2672a841") : secret "infra-operator-webhook-server-cert" not found Nov 28 07:12:28 crc kubenswrapper[4946]: E1128 07:12:28.108224 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-t5qsk" podUID="520a099d-3fd5-42f5-b883-c7a1b94dcb70" Nov 28 07:12:28 crc kubenswrapper[4946]: E1128 07:12:28.111729 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-mwdxn" podUID="9a41bf27-5186-4bfe-b722-87a604d851c3" Nov 28 07:12:28 crc kubenswrapper[4946]: E1128 07:12:28.111799 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:6bed55b172b9ee8ccc3952cbfc543d8bd44e2690f6db94348a754152fd78f4cf\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-blpm6" podUID="836f3766-a6eb-447f-9337-fa9082bcb62b" Nov 28 07:12:28 crc kubenswrapper[4946]: E1128 07:12:28.111843 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ddc8a82f05930db8ee7a8d6d189b5a66373060656e4baf71ac302f89c477da4c\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-r94kz" podUID="d37965ca-e557-483c-b195-310b690d9101" Nov 28 07:12:28 crc kubenswrapper[4946]: E1128 07:12:28.123936 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:25faa5b0e4801d4d3b01a28b877ed3188eee71f33ad66f3c2e86b7921758e711\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-sqh4v" podUID="2f759a24-d58b-4aed-8e14-71dec2ff2df6" Nov 28 07:12:28 crc 
kubenswrapper[4946]: I1128 07:12:28.315112 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aef07b0c-aae8-48fd-a246-8b5669cccbce-cert\") pod \"openstack-baremetal-operator-controller-manager-5d9f9695dbgtt8n\" (UID: \"aef07b0c-aae8-48fd-a246-8b5669cccbce\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5d9f9695dbgtt8n" Nov 28 07:12:28 crc kubenswrapper[4946]: E1128 07:12:28.316389 4946 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 07:12:28 crc kubenswrapper[4946]: E1128 07:12:28.316447 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aef07b0c-aae8-48fd-a246-8b5669cccbce-cert podName:aef07b0c-aae8-48fd-a246-8b5669cccbce nodeName:}" failed. No retries permitted until 2025-11-28 07:12:32.316431644 +0000 UTC m=+1206.694496755 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/aef07b0c-aae8-48fd-a246-8b5669cccbce-cert") pod "openstack-baremetal-operator-controller-manager-5d9f9695dbgtt8n" (UID: "aef07b0c-aae8-48fd-a246-8b5669cccbce") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 07:12:28 crc kubenswrapper[4946]: I1128 07:12:28.831770 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5f4eb990-cf89-4f9a-8d22-f016b8894f4f-webhook-certs\") pod \"openstack-operator-controller-manager-66f75ddbcc-l7w7r\" (UID: \"5f4eb990-cf89-4f9a-8d22-f016b8894f4f\") " pod="openstack-operators/openstack-operator-controller-manager-66f75ddbcc-l7w7r" Nov 28 07:12:28 crc kubenswrapper[4946]: I1128 07:12:28.831927 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5f4eb990-cf89-4f9a-8d22-f016b8894f4f-metrics-certs\") pod \"openstack-operator-controller-manager-66f75ddbcc-l7w7r\" (UID: \"5f4eb990-cf89-4f9a-8d22-f016b8894f4f\") " pod="openstack-operators/openstack-operator-controller-manager-66f75ddbcc-l7w7r" Nov 28 07:12:28 crc kubenswrapper[4946]: E1128 07:12:28.832026 4946 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 28 07:12:28 crc kubenswrapper[4946]: E1128 07:12:28.832075 4946 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 28 07:12:28 crc kubenswrapper[4946]: E1128 07:12:28.832144 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5f4eb990-cf89-4f9a-8d22-f016b8894f4f-metrics-certs podName:5f4eb990-cf89-4f9a-8d22-f016b8894f4f nodeName:}" failed. No retries permitted until 2025-11-28 07:12:32.832123535 +0000 UTC m=+1207.210188646 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5f4eb990-cf89-4f9a-8d22-f016b8894f4f-metrics-certs") pod "openstack-operator-controller-manager-66f75ddbcc-l7w7r" (UID: "5f4eb990-cf89-4f9a-8d22-f016b8894f4f") : secret "metrics-server-cert" not found Nov 28 07:12:28 crc kubenswrapper[4946]: E1128 07:12:28.832161 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5f4eb990-cf89-4f9a-8d22-f016b8894f4f-webhook-certs podName:5f4eb990-cf89-4f9a-8d22-f016b8894f4f nodeName:}" failed. 
Nov 28 07:12:28 crc kubenswrapper[4946]: E1128 07:12:28.832161 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5f4eb990-cf89-4f9a-8d22-f016b8894f4f-webhook-certs podName:5f4eb990-cf89-4f9a-8d22-f016b8894f4f nodeName:}" failed. No retries permitted until 2025-11-28 07:12:32.832153786 +0000 UTC m=+1207.210218897 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5f4eb990-cf89-4f9a-8d22-f016b8894f4f-webhook-certs") pod "openstack-operator-controller-manager-66f75ddbcc-l7w7r" (UID: "5f4eb990-cf89-4f9a-8d22-f016b8894f4f") : secret "webhook-server-cert" not found
Nov 28 07:12:31 crc kubenswrapper[4946]: I1128 07:12:31.997089 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1c7d18a6-2067-4736-a42f-074f2672a841-cert\") pod \"infra-operator-controller-manager-57548d458d-dhvm2\" (UID: \"1c7d18a6-2067-4736-a42f-074f2672a841\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-dhvm2"
Nov 28 07:12:31 crc kubenswrapper[4946]: E1128 07:12:31.998053 4946 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Nov 28 07:12:31 crc kubenswrapper[4946]: E1128 07:12:31.998129 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c7d18a6-2067-4736-a42f-074f2672a841-cert podName:1c7d18a6-2067-4736-a42f-074f2672a841 nodeName:}" failed. No retries permitted until 2025-11-28 07:12:39.998112058 +0000 UTC m=+1214.376177169 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1c7d18a6-2067-4736-a42f-074f2672a841-cert") pod "infra-operator-controller-manager-57548d458d-dhvm2" (UID: "1c7d18a6-2067-4736-a42f-074f2672a841") : secret "infra-operator-webhook-server-cert" not found
Nov 28 07:12:32 crc kubenswrapper[4946]: I1128 07:12:32.405373 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aef07b0c-aae8-48fd-a246-8b5669cccbce-cert\") pod \"openstack-baremetal-operator-controller-manager-5d9f9695dbgtt8n\" (UID: \"aef07b0c-aae8-48fd-a246-8b5669cccbce\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5d9f9695dbgtt8n"
Nov 28 07:12:32 crc kubenswrapper[4946]: E1128 07:12:32.405753 4946 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Nov 28 07:12:32 crc kubenswrapper[4946]: E1128 07:12:32.405938 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aef07b0c-aae8-48fd-a246-8b5669cccbce-cert podName:aef07b0c-aae8-48fd-a246-8b5669cccbce nodeName:}" failed. No retries permitted until 2025-11-28 07:12:40.405892166 +0000 UTC m=+1214.783957447 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/aef07b0c-aae8-48fd-a246-8b5669cccbce-cert") pod "openstack-baremetal-operator-controller-manager-5d9f9695dbgtt8n" (UID: "aef07b0c-aae8-48fd-a246-8b5669cccbce") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 07:12:32 crc kubenswrapper[4946]: I1128 07:12:32.915152 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5f4eb990-cf89-4f9a-8d22-f016b8894f4f-webhook-certs\") pod \"openstack-operator-controller-manager-66f75ddbcc-l7w7r\" (UID: \"5f4eb990-cf89-4f9a-8d22-f016b8894f4f\") " pod="openstack-operators/openstack-operator-controller-manager-66f75ddbcc-l7w7r" Nov 28 07:12:32 crc kubenswrapper[4946]: I1128 07:12:32.915268 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5f4eb990-cf89-4f9a-8d22-f016b8894f4f-metrics-certs\") pod \"openstack-operator-controller-manager-66f75ddbcc-l7w7r\" (UID: \"5f4eb990-cf89-4f9a-8d22-f016b8894f4f\") " pod="openstack-operators/openstack-operator-controller-manager-66f75ddbcc-l7w7r" Nov 28 07:12:32 crc kubenswrapper[4946]: E1128 07:12:32.915535 4946 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 28 07:12:32 crc kubenswrapper[4946]: E1128 07:12:32.915605 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5f4eb990-cf89-4f9a-8d22-f016b8894f4f-metrics-certs podName:5f4eb990-cf89-4f9a-8d22-f016b8894f4f nodeName:}" failed. No retries permitted until 2025-11-28 07:12:40.915583759 +0000 UTC m=+1215.293648880 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5f4eb990-cf89-4f9a-8d22-f016b8894f4f-metrics-certs") pod "openstack-operator-controller-manager-66f75ddbcc-l7w7r" (UID: "5f4eb990-cf89-4f9a-8d22-f016b8894f4f") : secret "metrics-server-cert" not found Nov 28 07:12:32 crc kubenswrapper[4946]: E1128 07:12:32.915612 4946 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 28 07:12:32 crc kubenswrapper[4946]: E1128 07:12:32.915777 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5f4eb990-cf89-4f9a-8d22-f016b8894f4f-webhook-certs podName:5f4eb990-cf89-4f9a-8d22-f016b8894f4f nodeName:}" failed. No retries permitted until 2025-11-28 07:12:40.915704311 +0000 UTC m=+1215.293769502 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5f4eb990-cf89-4f9a-8d22-f016b8894f4f-webhook-certs") pod "openstack-operator-controller-manager-66f75ddbcc-l7w7r" (UID: "5f4eb990-cf89-4f9a-8d22-f016b8894f4f") : secret "webhook-server-cert" not found Nov 28 07:12:39 crc kubenswrapper[4946]: E1128 07:12:39.259500 4946 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:210517b918e30df1c95fc7d961c8e57e9a9d1cc2b9fe7eb4dad2034dd53a90aa" Nov 28 07:12:39 crc kubenswrapper[4946]: E1128 07:12:39.260588 4946 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:210517b918e30df1c95fc7d961c8e57e9a9d1cc2b9fe7eb4dad2034dd53a90aa,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hxvhl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5cd6c7f4c8-nqqq9_openstack-operators(fda79d01-4f7c-4c88-8567-6c9543ec8b51): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 07:12:39 crc kubenswrapper[4946]: E1128 07:12:39.779286 4946 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:e00a9ed0ab26c5b745bd804ab1fe6b22428d026f17ea05a05f045e060342f46c" Nov 28 07:12:39 crc 
kubenswrapper[4946]: E1128 07:12:39.779559 4946 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:e00a9ed0ab26c5b745bd804ab1fe6b22428d026f17ea05a05f045e060342f46c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dgkxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-6fdcddb789-nl9r5_openstack-operators(7aeae67c-c1be-4d23-bb62-c8798d9fe052): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 07:12:40 crc kubenswrapper[4946]: I1128 07:12:40.050433 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1c7d18a6-2067-4736-a42f-074f2672a841-cert\") pod \"infra-operator-controller-manager-57548d458d-dhvm2\" (UID: \"1c7d18a6-2067-4736-a42f-074f2672a841\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-dhvm2" Nov 28 07:12:40 crc kubenswrapper[4946]: I1128 07:12:40.056656 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1c7d18a6-2067-4736-a42f-074f2672a841-cert\") pod \"infra-operator-controller-manager-57548d458d-dhvm2\" (UID: \"1c7d18a6-2067-4736-a42f-074f2672a841\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-dhvm2" Nov 28 07:12:40 crc kubenswrapper[4946]: I1128 07:12:40.242117 4946 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-8cgzm" Nov 28 07:12:40 crc kubenswrapper[4946]: I1128 07:12:40.251534 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-dhvm2" Nov 28 07:12:40 crc kubenswrapper[4946]: I1128 07:12:40.458675 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aef07b0c-aae8-48fd-a246-8b5669cccbce-cert\") pod \"openstack-baremetal-operator-controller-manager-5d9f9695dbgtt8n\" (UID: \"aef07b0c-aae8-48fd-a246-8b5669cccbce\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5d9f9695dbgtt8n" Nov 28 07:12:40 crc kubenswrapper[4946]: I1128 07:12:40.472225 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aef07b0c-aae8-48fd-a246-8b5669cccbce-cert\") pod \"openstack-baremetal-operator-controller-manager-5d9f9695dbgtt8n\" (UID: \"aef07b0c-aae8-48fd-a246-8b5669cccbce\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5d9f9695dbgtt8n" Nov 28 07:12:40 crc kubenswrapper[4946]: I1128 07:12:40.682192 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-x96z6" Nov 28 07:12:40 crc kubenswrapper[4946]: I1128 07:12:40.691158 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5d9f9695dbgtt8n" Nov 28 07:12:40 crc kubenswrapper[4946]: I1128 07:12:40.968017 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5f4eb990-cf89-4f9a-8d22-f016b8894f4f-webhook-certs\") pod \"openstack-operator-controller-manager-66f75ddbcc-l7w7r\" (UID: \"5f4eb990-cf89-4f9a-8d22-f016b8894f4f\") " pod="openstack-operators/openstack-operator-controller-manager-66f75ddbcc-l7w7r" Nov 28 07:12:40 crc kubenswrapper[4946]: I1128 07:12:40.968113 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5f4eb990-cf89-4f9a-8d22-f016b8894f4f-metrics-certs\") pod \"openstack-operator-controller-manager-66f75ddbcc-l7w7r\" (UID: \"5f4eb990-cf89-4f9a-8d22-f016b8894f4f\") " pod="openstack-operators/openstack-operator-controller-manager-66f75ddbcc-l7w7r" Nov 28 07:12:40 crc kubenswrapper[4946]: I1128 07:12:40.974145 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5f4eb990-cf89-4f9a-8d22-f016b8894f4f-metrics-certs\") pod \"openstack-operator-controller-manager-66f75ddbcc-l7w7r\" (UID: \"5f4eb990-cf89-4f9a-8d22-f016b8894f4f\") " pod="openstack-operators/openstack-operator-controller-manager-66f75ddbcc-l7w7r" Nov 28 07:12:40 crc kubenswrapper[4946]: I1128 07:12:40.979417 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5f4eb990-cf89-4f9a-8d22-f016b8894f4f-webhook-certs\") pod \"openstack-operator-controller-manager-66f75ddbcc-l7w7r\" (UID: \"5f4eb990-cf89-4f9a-8d22-f016b8894f4f\") " pod="openstack-operators/openstack-operator-controller-manager-66f75ddbcc-l7w7r" Nov 28 07:12:41 crc kubenswrapper[4946]: I1128 07:12:41.017201 4946 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-5shdq" Nov 28 07:12:41 crc kubenswrapper[4946]: I1128 07:12:41.025366 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-66f75ddbcc-l7w7r" Nov 28 07:12:46 crc kubenswrapper[4946]: E1128 07:12:46.890243 4946 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:89910bc3ecceb7590d3207ac294eb7354de358cf39ef03c72323b26c598e50e6" Nov 28 07:12:46 crc kubenswrapper[4946]: E1128 07:12:46.891388 4946 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:89910bc3ecceb7590d3207ac294eb7354de358cf39ef03c72323b26c598e50e6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f9l4p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-5d499bf58b-2f2dz_openstack-operators(cb30728b-6cd4-4d80-8e1a-bc979410fad6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 07:12:47 crc kubenswrapper[4946]: E1128 07:12:47.141688 4946 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:ec4e5c911c1d0f1ea211a04b251a9d2e95b69d141c1caf07a0381693b2d6368b" Nov 28 07:12:47 
crc kubenswrapper[4946]: E1128 07:12:47.143489 4946 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:ec4e5c911c1d0f1ea211a04b251a9d2e95b69d141c1caf07a0381693b2d6368b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9h5wl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-955677c94-hk9d2_openstack-operators(3fe2b1f1-7ef0-4cc3-9cf1-adefe5dead46): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 07:12:47 crc kubenswrapper[4946]: E1128 07:12:47.906373 4946 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385" Nov 28 07:12:47 crc kubenswrapper[4946]: E1128 07:12:47.906678 4946 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9xrwz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cc84c6bb-qfvfk_openstack-operators(989a6399-e7f7-4b7d-bd68-4d44531b3a8e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 07:12:49 crc kubenswrapper[4946]: I1128 07:12:49.513470 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-dhvm2"] Nov 28 07:12:52 crc kubenswrapper[4946]: I1128 07:12:52.029082 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5d9f9695dbgtt8n"] Nov 28 07:12:52 crc kubenswrapper[4946]: I1128 07:12:52.072060 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-66f75ddbcc-l7w7r"] Nov 28 07:12:52 crc kubenswrapper[4946]: I1128 07:12:52.311121 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-dhvm2" event={"ID":"1c7d18a6-2067-4736-a42f-074f2672a841","Type":"ContainerStarted","Data":"98b80fdd99fb0116e0a418d7aaf59d8b173840228bf0b5e02e7277f6268dd3c9"} Nov 28 07:12:52 crc kubenswrapper[4946]: W1128 07:12:52.478247 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaef07b0c_aae8_48fd_a246_8b5669cccbce.slice/crio-f48feb58eb4376639b114057cb6016bb00be273e3ba49ccf04b23ee7d6623e40 WatchSource:0}: Error finding container 
f48feb58eb4376639b114057cb6016bb00be273e3ba49ccf04b23ee7d6623e40: Status 404 returned error can't find the container with id f48feb58eb4376639b114057cb6016bb00be273e3ba49ccf04b23ee7d6623e40 Nov 28 07:12:53 crc kubenswrapper[4946]: I1128 07:12:53.319859 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5d9f9695dbgtt8n" event={"ID":"aef07b0c-aae8-48fd-a246-8b5669cccbce","Type":"ContainerStarted","Data":"f48feb58eb4376639b114057cb6016bb00be273e3ba49ccf04b23ee7d6623e40"} Nov 28 07:12:53 crc kubenswrapper[4946]: I1128 07:12:53.323148 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-66f75ddbcc-l7w7r" event={"ID":"5f4eb990-cf89-4f9a-8d22-f016b8894f4f","Type":"ContainerStarted","Data":"c43bcfc33db63b60b9e7c995163a785609db67d25f297c5d8fc11e1ea7212bd7"} Nov 28 07:12:54 crc kubenswrapper[4946]: I1128 07:12:54.336072 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-pvrsb" event={"ID":"e3d1cda6-c2ac-40fc-8f40-2e4b89dbca81","Type":"ContainerStarted","Data":"7e4b3080e2be49e09651cf5e89e61310d4b016445a773552f36b689813e24020"} Nov 28 07:12:54 crc kubenswrapper[4946]: I1128 07:12:54.360100 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-2l7gl" event={"ID":"353bb4c2-bff5-4749-a149-8a856803b84b","Type":"ContainerStarted","Data":"c48c57570c04b7c7f7e666def85ee1f331ba29d8321ab2c8921b1fe3244cd583"} Nov 28 07:12:54 crc kubenswrapper[4946]: I1128 07:12:54.368901 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-4mqd6" event={"ID":"710cbde0-3645-452d-8cda-d4165c8fdd32","Type":"ContainerStarted","Data":"7d5fd8a3b8237230e130f9e45c75f4ebc042bfe192dfce2edd9419b96fb23683"} Nov 28 07:12:54 crc kubenswrapper[4946]: I1128 07:12:54.374471 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-7p86n" event={"ID":"dc11bd96-48e8-4613-80e6-ce3b518cea8d","Type":"ContainerStarted","Data":"c8a72e58d3fc5fdbbb7cb18655e0ea672d29d399d7f3a56890790785afc472f2"} Nov 28 07:12:54 crc kubenswrapper[4946]: I1128 07:12:54.377050 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-66f75ddbcc-l7w7r" event={"ID":"5f4eb990-cf89-4f9a-8d22-f016b8894f4f","Type":"ContainerStarted","Data":"f7815d4f7dc4610d028b047f50dd2a73c23f50626a2e695146605d8eb98219fb"} Nov 28 07:12:54 crc kubenswrapper[4946]: I1128 07:12:54.377372 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-66f75ddbcc-l7w7r" Nov 28 07:12:54 crc kubenswrapper[4946]: I1128 07:12:54.392500 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-g6gbz" event={"ID":"d4188f46-2979-4b90-bfa0-37962da6e3c7","Type":"ContainerStarted","Data":"72037569967e0f141d98cbe66917a627875ae8eb3e0276faccf93f783cc13b1b"} Nov 28 07:12:54 crc kubenswrapper[4946]: I1128 07:12:54.395263 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-9b7s9" 
event={"ID":"8d84e62b-b1cf-4238-b78b-f47a9f2df3ef","Type":"ContainerStarted","Data":"2bc83d71db9fc90727a99156a02d95e88a70be00dc50035c62c1679043c8e68d"}
Nov 28 07:12:54 crc kubenswrapper[4946]: I1128 07:12:54.400642 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-zwd8k" event={"ID":"1b3d3306-e302-4fe9-b50c-8295275ed28c","Type":"ContainerStarted","Data":"89299b0e8516971d2c3e202d0d631adbeb347b54cfe23af94222c5eae5889476"}
Nov 28 07:12:54 crc kubenswrapper[4946]: I1128 07:12:54.402942 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-5hbz6" event={"ID":"f1ac7d28-f59d-44e5-aa3e-c6da338fca84","Type":"ContainerStarted","Data":"b0e8d5c9b004710616fe76915505311722d9f7af3e3f0b8cf0d99d99f22b8acb"}
Nov 28 07:12:54 crc kubenswrapper[4946]: I1128 07:12:54.404845 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-9nzjs" event={"ID":"b3227d8a-06db-4f44-ae26-f173c27fd3e1","Type":"ContainerStarted","Data":"e735c2dde7fb5fc2a8305bba4f6dcc4b131775c355f54e7abd84fcdc8d33b481"}
Nov 28 07:12:54 crc kubenswrapper[4946]: I1128 07:12:54.415790 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-d77b94747-2nfmg" event={"ID":"3f402388-33d3-4c7b-a1b7-26241b6de58c","Type":"ContainerStarted","Data":"12d3abb161e0dd11594f2ace7aa492c95f82ed6261f54c0032b6d21cc284fe97"}
Nov 28 07:12:54 crc kubenswrapper[4946]: I1128 07:12:54.437118 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-66f75ddbcc-l7w7r" podStartSLOduration=30.437099969 podStartE2EDuration="30.437099969s" podCreationTimestamp="2025-11-28 07:12:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:12:54.431853379 +0000 UTC m=+1228.809918490" watchObservedRunningTime="2025-11-28 07:12:54.437099969 +0000 UTC m=+1228.815165080"
Nov 28 07:12:54 crc kubenswrapper[4946]: I1128 07:12:54.731122 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 07:12:54 crc kubenswrapper[4946]: I1128 07:12:54.731204 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 07:12:54 crc kubenswrapper[4946]: I1128 07:12:54.731280 4946 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr"
Nov 28 07:12:54 crc kubenswrapper[4946]: I1128 07:12:54.731915 4946 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0a6d974443f840af515def5c439a2c40cf5e3449f4043f5c4fe778bb70c9b0fd"} pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
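The machine-config-daemon sequence above is the liveness-probe contract end to end: the kubelet's GET on http://127.0.0.1:8798/health is refused, the probe flips to unhealthy, and the container is killed with its 600-second grace period and restarted (the ContainerDied/ContainerStarted pair follows at 07:12:55 and 07:12:58). The operator managers in this log declare the same pattern on :8081, HTTPGet /healthz for liveness and /readyz for readiness, per the container specs dumped earlier. A minimal Go sketch of such a probe server; the port and paths follow those specs, the handlers are illustrative:

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Liveness: answer 200 while the process is alive. If this listener
	// is down, probes fail with "connection refused" exactly as in the
	// machine-config-daemon entries above.
	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "ok")
	})
	// Readiness: gates traffic; a real server would return non-200 here
	// until its dependencies are up.
	http.HandleFunc("/readyz", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "ok")
	})
	if err := http.ListenAndServe(":8081", nil); err != nil {
		panic(err)
	}
}
```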
Nov 28 07:12:54 crc kubenswrapper[4946]: I1128 07:12:54.731978 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" containerID="cri-o://0a6d974443f840af515def5c439a2c40cf5e3449f4043f5c4fe778bb70c9b0fd" gracePeriod=600
Nov 28 07:12:55 crc kubenswrapper[4946]: I1128 07:12:55.453279 4946 generic.go:334] "Generic (PLEG): container finished" podID="7450befc-262f-45d1-a5f4-f445e540185b" containerID="0a6d974443f840af515def5c439a2c40cf5e3449f4043f5c4fe778bb70c9b0fd" exitCode=0
Nov 28 07:12:55 crc kubenswrapper[4946]: I1128 07:12:55.453380 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerDied","Data":"0a6d974443f840af515def5c439a2c40cf5e3449f4043f5c4fe778bb70c9b0fd"}
Nov 28 07:12:55 crc kubenswrapper[4946]: I1128 07:12:55.453478 4946 scope.go:117] "RemoveContainer" containerID="85f899527cbb8eb5fccf192c306339421531f2edfd2b109fbf8ff7c7c6545620"
Nov 28 07:12:57 crc kubenswrapper[4946]: I1128 07:12:57.476991 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-mwdxn" event={"ID":"9a41bf27-5186-4bfe-b722-87a604d851c3","Type":"ContainerStarted","Data":"815c4b72684101bf295cddbfefb40479f1c6419d888823c0032737f9b1b0c9fd"}
Nov 28 07:12:57 crc kubenswrapper[4946]: I1128 07:12:57.481005 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-r94kz" event={"ID":"d37965ca-e557-483c-b195-310b690d9101","Type":"ContainerStarted","Data":"43e048688f283526f304a8060da91a135a380372e7565ce056be485326b05746"}
Nov 28 07:12:57 crc kubenswrapper[4946]: I1128 07:12:57.482749 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-blpm6" event={"ID":"836f3766-a6eb-447f-9337-fa9082bcb62b","Type":"ContainerStarted","Data":"d77dc8fc1cd705374f6f8835b698f77cd56fc40ea285aa830d88de6db3ca3ba9"}
Nov 28 07:12:58 crc kubenswrapper[4946]: I1128 07:12:58.495319 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-dhvm2" event={"ID":"1c7d18a6-2067-4736-a42f-074f2672a841","Type":"ContainerStarted","Data":"1b20cdcd60d695395c8e4c6d91fbf9b186bd6c7399df61abd79c03f26b972439"}
Nov 28 07:12:58 crc kubenswrapper[4946]: I1128 07:12:58.499482 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerStarted","Data":"a1b65860bba4b7422a1bd44c20f73ab6d26e45cd22f0c4eba1bdbae4c38acc18"}
Nov 28 07:12:58 crc kubenswrapper[4946]: I1128 07:12:58.504216 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-sqh4v" event={"ID":"2f759a24-d58b-4aed-8e14-71dec2ff2df6","Type":"ContainerStarted","Data":"0613770a591247937fb5c26bf68210bf6828ea8ea38f11afd99446e1badf6817"}
Nov 28 07:12:58 crc kubenswrapper[4946]: I1128 07:12:58.506802 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-t5qsk" event={"ID":"520a099d-3fd5-42f5-b883-c7a1b94dcb70","Type":"ContainerStarted","Data":"f542b9ad7566cea4eefbf78890b37b7f419add3d74b528b0368f8fcb78fe9d30"}
Nov 28 07:12:58 crc kubenswrapper[4946]: I1128 07:12:58.559073 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-t5qsk" podStartSLOduration=7.056317629 podStartE2EDuration="34.559037859s" podCreationTimestamp="2025-11-28 07:12:24 +0000 UTC" firstStartedPulling="2025-11-28 07:12:26.190280392 +0000 UTC m=+1200.568345513" lastFinishedPulling="2025-11-28 07:12:53.693000632 +0000 UTC m=+1228.071065743" observedRunningTime="2025-11-28 07:12:58.548316326 +0000 UTC m=+1232.926381437" watchObservedRunningTime="2025-11-28 07:12:58.559037859 +0000 UTC m=+1232.937102970"
Nov 28 07:12:58 crc kubenswrapper[4946]: E1128 07:12:58.612320 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-2f2dz" podUID="cb30728b-6cd4-4d80-8e1a-bc979410fad6"
Nov 28 07:12:58 crc kubenswrapper[4946]: E1128 07:12:58.640611 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-nl9r5" podUID="7aeae67c-c1be-4d23-bb62-c8798d9fe052"
Nov 28 07:12:59 crc kubenswrapper[4946]: I1128 07:12:59.514854 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-r94kz" event={"ID":"d37965ca-e557-483c-b195-310b690d9101","Type":"ContainerStarted","Data":"0830ab68484f4f792f15fd58eac4e79145188bd7daa9705968e01681784c965d"}
Nov 28 07:12:59 crc kubenswrapper[4946]: I1128 07:12:59.515803 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-r94kz"
Nov 28 07:12:59 crc kubenswrapper[4946]: I1128 07:12:59.519334 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-2l7gl" event={"ID":"353bb4c2-bff5-4749-a149-8a856803b84b","Type":"ContainerStarted","Data":"59906942bd27efe016a715dce08439cf7401c2802c99197fb388c9a63813b12b"}
Nov 28 07:12:59 crc kubenswrapper[4946]: I1128 07:12:59.520964 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-2l7gl"
Nov 28 07:12:59 crc kubenswrapper[4946]: I1128 07:12:59.529957 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-2l7gl"
Nov 28 07:12:59 crc kubenswrapper[4946]: I1128 07:12:59.530016 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-2f2dz" event={"ID":"cb30728b-6cd4-4d80-8e1a-bc979410fad6","Type":"ContainerStarted","Data":"516154c34b5c417388be37ffa6ecbbf98d058dea571e77748470a897a7e19c4d"}
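The pod_startup_latency_tracker entry above for rabbitmq-cluster-operator-manager is internally consistent: podStartE2EDuration is observedRunningTime minus podCreationTimestamp (07:12:58.559037859 - 07:12:24 = 34.559037859s), and podStartSLOduration is, to within the last digits (the kubelet computes on the monotonic clock), that figure minus the image-pull window lastFinishedPulling - firstStartedPulling. A small Go check of the arithmetic, with the timestamps copied from the entry:

```go
package main

import (
	"fmt"
	"time"
)

// layout matches the log's "2025-11-28 07:12:58.559037859 +0000 UTC" form;
// the fractional-second field is optional when parsing.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func ts(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := ts("2025-11-28 07:12:24 +0000 UTC")
	firstPull := ts("2025-11-28 07:12:26.190280392 +0000 UTC")
	lastPull := ts("2025-11-28 07:12:53.693000632 +0000 UTC")
	running := ts("2025-11-28 07:12:58.559037859 +0000 UTC")

	e2e := running.Sub(created)          // 34.559037859s = podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // ~7.056317619s vs. logged 7.056317629
	fmt.Println(e2e, slo)
}
```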
Nov 28 07:12:59 crc kubenswrapper[4946]: I1128 07:12:59.549515 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-mwdxn" event={"ID":"9a41bf27-5186-4bfe-b722-87a604d851c3","Type":"ContainerStarted","Data":"4e9d9f001293547fb1263bd5616d0a4698687902324ccd784fd3da2dd9e92ae0"}
Nov 28 07:12:59 crc kubenswrapper[4946]: I1128 07:12:59.550480 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-mwdxn"
Nov 28 07:12:59 crc kubenswrapper[4946]: I1128 07:12:59.577744 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-g6gbz" event={"ID":"d4188f46-2979-4b90-bfa0-37962da6e3c7","Type":"ContainerStarted","Data":"3b2595dc581899c03b8c5189f7f9a838700bd93b1ea2bb2d9bf6affd876eb0e8"}
Nov 28 07:12:59 crc kubenswrapper[4946]: I1128 07:12:59.579860 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-g6gbz"
Nov 28 07:12:59 crc kubenswrapper[4946]: I1128 07:12:59.582808 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-g6gbz"
Nov 28 07:12:59 crc kubenswrapper[4946]: I1128 07:12:59.583058 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-r94kz" podStartSLOduration=3.31365908 podStartE2EDuration="35.58304475s" podCreationTimestamp="2025-11-28 07:12:24 +0000 UTC" firstStartedPulling="2025-11-28 07:12:26.137040794 +0000 UTC m=+1200.515105905" lastFinishedPulling="2025-11-28 07:12:58.406426464 +0000 UTC m=+1232.784491575" observedRunningTime="2025-11-28 07:12:59.565918779 +0000 UTC m=+1233.943983890" watchObservedRunningTime="2025-11-28 07:12:59.58304475 +0000 UTC m=+1233.961109861"
Nov 28 07:12:59 crc kubenswrapper[4946]: I1128 07:12:59.589179 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-9b7s9" event={"ID":"8d84e62b-b1cf-4238-b78b-f47a9f2df3ef","Type":"ContainerStarted","Data":"ed4c13e87585f48e624a4e5a397cda1b0d7a9f300ff038274468e019ce3292d1"}
Nov 28 07:12:59 crc kubenswrapper[4946]: I1128 07:12:59.591427 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-9b7s9"
Nov 28 07:12:59 crc kubenswrapper[4946]: I1128 07:12:59.596538 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-9b7s9"
Nov 28 07:12:59 crc kubenswrapper[4946]: I1128 07:12:59.620129 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-2l7gl" podStartSLOduration=3.847560219 podStartE2EDuration="36.620110869s" podCreationTimestamp="2025-11-28 07:12:23 +0000 UTC" firstStartedPulling="2025-11-28 07:12:25.63251855 +0000 UTC m=+1200.010583661" lastFinishedPulling="2025-11-28 07:12:58.40506921 +0000 UTC m=+1232.783134311" observedRunningTime="2025-11-28 07:12:59.618990042 +0000 UTC m=+1233.997055153" watchObservedRunningTime="2025-11-28 07:12:59.620110869 +0000 UTC m=+1233.998175980"
Nov 28 07:12:59 crc kubenswrapper[4946]: I1128 07:12:59.620832 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-9nzjs"
event={"ID":"b3227d8a-06db-4f44-ae26-f173c27fd3e1","Type":"ContainerStarted","Data":"90d48633a663db2ef5fa45fe407fcbb93b4f96d5c1a4b78ded08ec907e8f90ea"} Nov 28 07:12:59 crc kubenswrapper[4946]: I1128 07:12:59.620884 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-9nzjs" Nov 28 07:12:59 crc kubenswrapper[4946]: I1128 07:12:59.624105 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-9nzjs" Nov 28 07:12:59 crc kubenswrapper[4946]: I1128 07:12:59.627083 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-dhvm2" event={"ID":"1c7d18a6-2067-4736-a42f-074f2672a841","Type":"ContainerStarted","Data":"fb59d6542971b72c43bc249d396969ceb34c3af96325e38488bf198b9b2d292f"} Nov 28 07:12:59 crc kubenswrapper[4946]: I1128 07:12:59.627643 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-57548d458d-dhvm2" Nov 28 07:12:59 crc kubenswrapper[4946]: I1128 07:12:59.635142 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5d9f9695dbgtt8n" event={"ID":"aef07b0c-aae8-48fd-a246-8b5669cccbce","Type":"ContainerStarted","Data":"0a5c473404498d4aedc5053a80234ac8792747df3923c9f0c11ad4c485e3bfc1"} Nov 28 07:12:59 crc kubenswrapper[4946]: I1128 07:12:59.635749 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5d9f9695dbgtt8n" Nov 28 07:12:59 crc kubenswrapper[4946]: I1128 07:12:59.647669 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-nl9r5" event={"ID":"7aeae67c-c1be-4d23-bb62-c8798d9fe052","Type":"ContainerStarted","Data":"c6107d44bdb5cf335b9108f4f7da62e30231fa76e67b3d34b86fe38ed3a89165"} Nov 28 07:12:59 crc kubenswrapper[4946]: I1128 07:12:59.661685 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-mwdxn" podStartSLOduration=3.184821689 podStartE2EDuration="35.661664529s" podCreationTimestamp="2025-11-28 07:12:24 +0000 UTC" firstStartedPulling="2025-11-28 07:12:26.141690299 +0000 UTC m=+1200.519755410" lastFinishedPulling="2025-11-28 07:12:58.618533149 +0000 UTC m=+1232.996598250" observedRunningTime="2025-11-28 07:12:59.655340584 +0000 UTC m=+1234.033405705" watchObservedRunningTime="2025-11-28 07:12:59.661664529 +0000 UTC m=+1234.039729640" Nov 28 07:12:59 crc kubenswrapper[4946]: I1128 07:12:59.697516 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-57548d458d-dhvm2" podStartSLOduration=31.459973306 podStartE2EDuration="36.697496568s" podCreationTimestamp="2025-11-28 07:12:23 +0000 UTC" firstStartedPulling="2025-11-28 07:12:51.814211536 +0000 UTC m=+1226.192276647" lastFinishedPulling="2025-11-28 07:12:57.051734798 +0000 UTC m=+1231.429799909" observedRunningTime="2025-11-28 07:12:59.692933857 +0000 UTC m=+1234.070998978" watchObservedRunningTime="2025-11-28 07:12:59.697496568 +0000 UTC m=+1234.075561679" Nov 28 07:12:59 crc kubenswrapper[4946]: I1128 07:12:59.707222 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-5hbz6" event={"ID":"f1ac7d28-f59d-44e5-aa3e-c6da338fca84","Type":"ContainerStarted","Data":"03b42fb6d3da23af67a3f0629d2af49e46e12387606326983802708729faccb5"} Nov 28 07:12:59 crc kubenswrapper[4946]: I1128 07:12:59.707279 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-5hbz6" Nov 28 07:12:59 crc kubenswrapper[4946]: I1128 07:12:59.736170 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-5hbz6" Nov 28 07:12:59 crc kubenswrapper[4946]: I1128 07:12:59.757431 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-g6gbz" podStartSLOduration=4.098244641 podStartE2EDuration="36.757411448s" podCreationTimestamp="2025-11-28 07:12:23 +0000 UTC" firstStartedPulling="2025-11-28 07:12:25.633248418 +0000 UTC m=+1200.011313529" lastFinishedPulling="2025-11-28 07:12:58.292415225 +0000 UTC m=+1232.670480336" observedRunningTime="2025-11-28 07:12:59.752952868 +0000 UTC m=+1234.131017989" watchObservedRunningTime="2025-11-28 07:12:59.757411448 +0000 UTC m=+1234.135476559" Nov 28 07:12:59 crc kubenswrapper[4946]: I1128 07:12:59.791748 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-9b7s9" podStartSLOduration=3.139192634 podStartE2EDuration="35.791697529s" podCreationTimestamp="2025-11-28 07:12:24 +0000 UTC" firstStartedPulling="2025-11-28 07:12:25.899359568 +0000 UTC m=+1200.277424679" lastFinishedPulling="2025-11-28 07:12:58.551864473 +0000 UTC m=+1232.929929574" observedRunningTime="2025-11-28 07:12:59.784565394 +0000 UTC m=+1234.162630505" watchObservedRunningTime="2025-11-28 07:12:59.791697529 +0000 UTC m=+1234.169762640" Nov 28 07:12:59 crc kubenswrapper[4946]: I1128 07:12:59.823966 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-9nzjs" podStartSLOduration=3.6722225120000003 podStartE2EDuration="35.82392371s" podCreationTimestamp="2025-11-28 07:12:24 +0000 UTC" firstStartedPulling="2025-11-28 07:12:26.108322183 +0000 UTC m=+1200.486387294" lastFinishedPulling="2025-11-28 07:12:58.260023381 +0000 UTC m=+1232.638088492" observedRunningTime="2025-11-28 07:12:59.822509336 +0000 UTC m=+1234.200574457" watchObservedRunningTime="2025-11-28 07:12:59.82392371 +0000 UTC m=+1234.201988821" Nov 28 07:12:59 crc kubenswrapper[4946]: E1128 07:12:59.850342 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-955677c94-hk9d2" podUID="3fe2b1f1-7ef0-4cc3-9cf1-adefe5dead46" Nov 28 07:12:59 crc kubenswrapper[4946]: I1128 07:12:59.883332 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5d9f9695dbgtt8n" podStartSLOduration=31.312183029 podStartE2EDuration="35.883312758s" podCreationTimestamp="2025-11-28 07:12:24 +0000 UTC" firstStartedPulling="2025-11-28 07:12:52.481423169 +0000 UTC m=+1226.859488280" lastFinishedPulling="2025-11-28 07:12:57.052552898 +0000 UTC m=+1231.430618009" 
observedRunningTime="2025-11-28 07:12:59.879244248 +0000 UTC m=+1234.257309359" watchObservedRunningTime="2025-11-28 07:12:59.883312758 +0000 UTC m=+1234.261377869"
Nov 28 07:12:59 crc kubenswrapper[4946]: I1128 07:12:59.917356 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-5hbz6" podStartSLOduration=3.375649325 podStartE2EDuration="36.917335493s" podCreationTimestamp="2025-11-28 07:12:23 +0000 UTC" firstStartedPulling="2025-11-28 07:12:25.176147438 +0000 UTC m=+1199.554212549" lastFinishedPulling="2025-11-28 07:12:58.717833606 +0000 UTC m=+1233.095898717" observedRunningTime="2025-11-28 07:12:59.902106499 +0000 UTC m=+1234.280171610" watchObservedRunningTime="2025-11-28 07:12:59.917335493 +0000 UTC m=+1234.295400604"
Nov 28 07:12:59 crc kubenswrapper[4946]: E1128 07:12:59.983675 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-nqqq9" podUID="fda79d01-4f7c-4c88-8567-6c9543ec8b51"
Nov 28 07:13:00 crc kubenswrapper[4946]: E1128 07:13:00.130595 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-qfvfk" podUID="989a6399-e7f7-4b7d-bd68-4d44531b3a8e"
Nov 28 07:13:00 crc kubenswrapper[4946]: I1128 07:13:00.716417 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-nl9r5" event={"ID":"7aeae67c-c1be-4d23-bb62-c8798d9fe052","Type":"ContainerStarted","Data":"1e33bc2d2e638d832027873cf79591104b8027c956977458116a42b231492721"}
Nov 28 07:13:00 crc kubenswrapper[4946]: I1128 07:13:00.717043 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-nl9r5"
Nov 28 07:13:00 crc kubenswrapper[4946]: I1128 07:13:00.719701 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-zwd8k" event={"ID":"1b3d3306-e302-4fe9-b50c-8295275ed28c","Type":"ContainerStarted","Data":"7cbe96ab928cad44985b127bef859454bdb17b6b735d6d97125370b82850b2f7"}
Nov 28 07:13:00 crc kubenswrapper[4946]: I1128 07:13:00.719917 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-zwd8k"
Nov 28 07:13:00 crc kubenswrapper[4946]: I1128 07:13:00.722351 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-nqqq9" event={"ID":"fda79d01-4f7c-4c88-8567-6c9543ec8b51","Type":"ContainerStarted","Data":"6860f9852c94e4b2ae13f1de0117bb4cee36b3165e86451f7ec5ea38c942eb52"}
Nov 28 07:13:00 crc kubenswrapper[4946]: I1128 07:13:00.722670 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-zwd8k"
Nov 28 07:13:00 crc kubenswrapper[4946]: I1128 07:13:00.725484 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-blpm6" event={"ID":"836f3766-a6eb-447f-9337-fa9082bcb62b","Type":"ContainerStarted","Data":"d4507d51fd4556b72aaab194cc0adf8aad1833408a6cf4c7c5444f9d62aeb160"}
Nov 28 07:13:00 crc kubenswrapper[4946]: I1128 07:13:00.725524 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-blpm6"
Nov 28 07:13:00 crc kubenswrapper[4946]: I1128 07:13:00.728008 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-2f2dz" event={"ID":"cb30728b-6cd4-4d80-8e1a-bc979410fad6","Type":"ContainerStarted","Data":"353ae53a7941c1a4d33bb7418e0a271434980586a57aa7d31dd0f19eb5578104"}
Nov 28 07:13:00 crc kubenswrapper[4946]: I1128 07:13:00.728365 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-2f2dz"
Nov 28 07:13:00 crc kubenswrapper[4946]: I1128 07:13:00.730814 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-qfvfk" event={"ID":"989a6399-e7f7-4b7d-bd68-4d44531b3a8e","Type":"ContainerStarted","Data":"fda700db2725052eee428d1036c88d530803680be203ad4ea157a6a904a3717f"}
Nov 28 07:13:00 crc kubenswrapper[4946]: I1128 07:13:00.733297 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-955677c94-hk9d2" event={"ID":"3fe2b1f1-7ef0-4cc3-9cf1-adefe5dead46","Type":"ContainerStarted","Data":"225aa47f88f3521d35473214859505d5873c5ee5f52ef75c176b2b5a9cf39f2e"}
Nov 28 07:13:00 crc kubenswrapper[4946]: I1128 07:13:00.742637 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-d77b94747-2nfmg" event={"ID":"3f402388-33d3-4c7b-a1b7-26241b6de58c","Type":"ContainerStarted","Data":"a0623293e2959f42db6bb90e9c3f2d87e96ddbea8da2df263273807567b0c897"}
Nov 28 07:13:00 crc kubenswrapper[4946]: I1128 07:13:00.743087 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-d77b94747-2nfmg"
Nov 28 07:13:00 crc kubenswrapper[4946]: I1128 07:13:00.745702 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-d77b94747-2nfmg"
Nov 28 07:13:00 crc kubenswrapper[4946]: I1128 07:13:00.750938 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-nl9r5" podStartSLOduration=2.32345721 podStartE2EDuration="36.75092064s" podCreationTimestamp="2025-11-28 07:12:24 +0000 UTC" firstStartedPulling="2025-11-28 07:12:25.917952468 +0000 UTC m=+1200.296017579" lastFinishedPulling="2025-11-28 07:13:00.345415898 +0000 UTC m=+1234.723481009" observedRunningTime="2025-11-28 07:13:00.744862042 +0000 UTC m=+1235.122927153" watchObservedRunningTime="2025-11-28 07:13:00.75092064 +0000 UTC m=+1235.128985751"
Nov 28 07:13:00 crc kubenswrapper[4946]: I1128 07:13:00.761289 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-7p86n" event={"ID":"dc11bd96-48e8-4613-80e6-ce3b518cea8d","Type":"ContainerStarted","Data":"3e2bfc63494c9dbcd942fd33496bea3d78a3c216abb60347e0c95ab7cb3d33da"}
Nov 28 07:13:00 crc kubenswrapper[4946]: I1128 07:13:00.762704 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-7p86n"
Nov 28 07:13:00 crc kubenswrapper[4946]: I1128 07:13:00.771075 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5d9f9695dbgtt8n" event={"ID":"aef07b0c-aae8-48fd-a246-8b5669cccbce","Type":"ContainerStarted","Data":"cf0c2db1d8e994f5916dedb92e7fc25bd6476cddafd143413e863130827bd2fa"}
Nov 28 07:13:00 crc kubenswrapper[4946]: I1128 07:13:00.775309 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-7p86n"
Nov 28 07:13:00 crc kubenswrapper[4946]: I1128 07:13:00.781084 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-pvrsb" event={"ID":"e3d1cda6-c2ac-40fc-8f40-2e4b89dbca81","Type":"ContainerStarted","Data":"065bbadd03ab49c4d1077a65916e092fdf4261a13bff88db10b9567f1fc3ce8b"}
Nov 28 07:13:00 crc kubenswrapper[4946]: I1128 07:13:00.781307 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-pvrsb"
Nov 28 07:13:00 crc kubenswrapper[4946]: I1128 07:13:00.784721 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-pvrsb"
Nov 28 07:13:00 crc kubenswrapper[4946]: I1128 07:13:00.787654 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-sqh4v" event={"ID":"2f759a24-d58b-4aed-8e14-71dec2ff2df6","Type":"ContainerStarted","Data":"936fe79d1e83cea47b8851b1009659d002c39e216576103a0f7d1b83c2671e54"}
Nov 28 07:13:00 crc kubenswrapper[4946]: I1128 07:13:00.787821 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-sqh4v"
Nov 28 07:13:00 crc kubenswrapper[4946]: I1128 07:13:00.797764 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-4mqd6" event={"ID":"710cbde0-3645-452d-8cda-d4165c8fdd32","Type":"ContainerStarted","Data":"caa8e2a2366cac6619e063a84e7e9e797bdc0ea19d875822421fac2469bdb900"}
Nov 28 07:13:00 crc kubenswrapper[4946]: I1128 07:13:00.884816 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-blpm6" podStartSLOduration=3.831024748 podStartE2EDuration="36.884779995s" podCreationTimestamp="2025-11-28 07:12:24 +0000 UTC" firstStartedPulling="2025-11-28 07:12:26.189926484 +0000 UTC m=+1200.567991595" lastFinishedPulling="2025-11-28 07:12:59.243681731 +0000 UTC m=+1233.621746842" observedRunningTime="2025-11-28 07:13:00.877422585 +0000 UTC m=+1235.255487696" watchObservedRunningTime="2025-11-28 07:13:00.884779995 +0000 UTC m=+1235.262845096"
Nov 28 07:13:00 crc kubenswrapper[4946]: I1128 07:13:00.903132 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-2f2dz" podStartSLOduration=3.582880134 podStartE2EDuration="37.903107445s" podCreationTimestamp="2025-11-28 07:12:23 +0000 UTC" firstStartedPulling="2025-11-28 07:12:25.77467928 +0000 UTC m=+1200.152744391" lastFinishedPulling="2025-11-28 07:13:00.094906591 +0000 UTC m=+1234.472971702" observedRunningTime="2025-11-28 07:13:00.895022317 +0000 UTC m=+1235.273087448" watchObservedRunningTime="2025-11-28 07:13:00.903107445 +0000 UTC m=+1235.281172556"
Nov 28 07:13:00 crc kubenswrapper[4946]: I1128 07:13:00.922490 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-zwd8k" podStartSLOduration=4.798170242 podStartE2EDuration="37.9224441s" podCreationTimestamp="2025-11-28 07:12:23 +0000 UTC" firstStartedPulling="2025-11-28 07:12:25.742604346 +0000 UTC m=+1200.120669457" lastFinishedPulling="2025-11-28 07:12:58.866878204 +0000 UTC m=+1233.244943315" observedRunningTime="2025-11-28 07:13:00.9171761 +0000 UTC m=+1235.295241221" watchObservedRunningTime="2025-11-28 07:13:00.9224441 +0000 UTC m=+1235.300509221"
Nov 28 07:13:00 crc kubenswrapper[4946]: I1128 07:13:00.948234 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-d77b94747-2nfmg" podStartSLOduration=3.985900811 podStartE2EDuration="36.948205072s" podCreationTimestamp="2025-11-28 07:12:24 +0000 UTC" firstStartedPulling="2025-11-28 07:12:25.931548105 +0000 UTC m=+1200.309613216" lastFinishedPulling="2025-11-28 07:12:58.893852366 +0000 UTC m=+1233.271917477" observedRunningTime="2025-11-28 07:13:00.941652971 +0000 UTC m=+1235.319718092" watchObservedRunningTime="2025-11-28 07:13:00.948205072 +0000 UTC m=+1235.326270183"
Nov 28 07:13:00 crc kubenswrapper[4946]: I1128 07:13:00.964415 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-4mqd6" podStartSLOduration=3.199643999 podStartE2EDuration="36.964384009s" podCreationTimestamp="2025-11-28 07:12:24 +0000 UTC" firstStartedPulling="2025-11-28 07:12:25.902920516 +0000 UTC m=+1200.280985627" lastFinishedPulling="2025-11-28 07:12:59.667660526 +0000 UTC m=+1234.045725637" observedRunningTime="2025-11-28 07:13:00.95994143 +0000 UTC m=+1235.338006541" watchObservedRunningTime="2025-11-28 07:13:00.964384009 +0000 UTC m=+1235.342449120"
Nov 28 07:13:01 crc kubenswrapper[4946]: I1128 07:13:01.016796 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-7p86n" podStartSLOduration=3.233293556 podStartE2EDuration="38.016765154s" podCreationTimestamp="2025-11-28 07:12:23 +0000 UTC" firstStartedPulling="2025-11-28 07:12:24.886547376 +0000 UTC m=+1199.264612487" lastFinishedPulling="2025-11-28 07:12:59.670018974 +0000 UTC m=+1234.048084085" observedRunningTime="2025-11-28 07:13:01.010154752 +0000 UTC m=+1235.388219873" watchObservedRunningTime="2025-11-28 07:13:01.016765154 +0000 UTC m=+1235.394830275"
Nov 28 07:13:01 crc kubenswrapper[4946]: I1128 07:13:01.037222 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-66f75ddbcc-l7w7r"
Nov 28 07:13:01 crc kubenswrapper[4946]: I1128 07:13:01.045665 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-sqh4v" podStartSLOduration=5.439999496 podStartE2EDuration="38.045611202s" podCreationTimestamp="2025-11-28 07:12:23 +0000 UTC" firstStartedPulling="2025-11-28 07:12:26.159980222 +0000 UTC m=+1200.538045333" lastFinishedPulling="2025-11-28 07:12:58.765591938 +0000 UTC m=+1233.143657039" observedRunningTime="2025-11-28 07:13:01.036935759 +0000 UTC m=+1235.415000880" watchObservedRunningTime="2025-11-28 07:13:01.045611202 +0000 UTC m=+1235.423676323"
Nov 28 07:13:01 crc kubenswrapper[4946]: I1128 07:13:01.102921 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-pvrsb" podStartSLOduration=4.906422993 podStartE2EDuration="38.102889758s" podCreationTimestamp="2025-11-28 07:12:23 +0000 UTC" firstStartedPulling="2025-11-28 07:12:25.315756815 +0000 UTC m=+1199.693821926" lastFinishedPulling="2025-11-28 07:12:58.51222358 +0000 UTC m=+1232.890288691" observedRunningTime="2025-11-28 07:13:01.053600118 +0000 UTC m=+1235.431665239" watchObservedRunningTime="2025-11-28 07:13:01.102889758 +0000 UTC m=+1235.480954869"
Nov 28 07:13:01 crc kubenswrapper[4946]: I1128 07:13:01.808855 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-nqqq9" event={"ID":"fda79d01-4f7c-4c88-8567-6c9543ec8b51","Type":"ContainerStarted","Data":"be9804c1e83813199d863ca9d174df1b8187530134194ebba1c596d1605e8a35"}
Nov 28 07:13:01 crc kubenswrapper[4946]: I1128 07:13:01.810238 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-nqqq9"
Nov 28 07:13:01 crc kubenswrapper[4946]: I1128 07:13:01.811931 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-qfvfk" event={"ID":"989a6399-e7f7-4b7d-bd68-4d44531b3a8e","Type":"ContainerStarted","Data":"c729ce1b61950d4029f9268d664bd87e921edb29212c59c71279d19220f102fd"}
Nov 28 07:13:01 crc kubenswrapper[4946]: I1128 07:13:01.812491 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-qfvfk"
Nov 28 07:13:01 crc kubenswrapper[4946]: I1128 07:13:01.817079 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-955677c94-hk9d2" event={"ID":"3fe2b1f1-7ef0-4cc3-9cf1-adefe5dead46","Type":"ContainerStarted","Data":"a3b815733526c0bc8120e5c74b2feae8c06ef19537d900498f0cf4596d3839d2"}
Nov 28 07:13:01 crc kubenswrapper[4946]: I1128 07:13:01.818932 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-4mqd6"
Nov 28 07:13:01 crc kubenswrapper[4946]: I1128 07:13:01.822047 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-4mqd6"
Nov 28 07:13:01 crc kubenswrapper[4946]: I1128 07:13:01.845308 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-nqqq9" podStartSLOduration=2.800445513 podStartE2EDuration="37.845277698s" podCreationTimestamp="2025-11-28 07:12:24 +0000 UTC" firstStartedPulling="2025-11-28 07:12:26.125752904 +0000 UTC m=+1200.503818015" lastFinishedPulling="2025-11-28 07:13:01.170585089 +0000 UTC m=+1235.548650200" observedRunningTime="2025-11-28 07:13:01.838383038 +0000 UTC m=+1236.216448159" watchObservedRunningTime="2025-11-28 07:13:01.845277698 +0000 UTC m=+1236.223342809"
Nov 28 07:13:01 crc kubenswrapper[4946]: I1128 07:13:01.932159 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-955677c94-hk9d2" podStartSLOduration=3.458206301 podStartE2EDuration="38.932137239s" podCreationTimestamp="2025-11-28 07:12:23 +0000 UTC" firstStartedPulling="2025-11-28 07:12:25.759094224 +0000 UTC m=+1200.137159335" lastFinishedPulling="2025-11-28 07:13:01.233025162 +0000 UTC m=+1235.611090273" observedRunningTime="2025-11-28 07:13:01.9073076 +0000 UTC m=+1236.285372711" watchObservedRunningTime="2025-11-28 07:13:01.932137239 +0000 UTC m=+1236.310202350"
Nov 28 07:13:01 crc kubenswrapper[4946]: I1128 07:13:01.934518 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-qfvfk" podStartSLOduration=2.673537942 podStartE2EDuration="37.934511348s" podCreationTimestamp="2025-11-28 07:12:24 +0000 UTC" firstStartedPulling="2025-11-28 07:12:26.096430518 +0000 UTC m=+1200.474495629" lastFinishedPulling="2025-11-28 07:13:01.357403924 +0000 UTC m=+1235.735469035" observedRunningTime="2025-11-28 07:13:01.931017712 +0000 UTC m=+1236.309082833" watchObservedRunningTime="2025-11-28 07:13:01.934511348 +0000 UTC m=+1236.312576459"
Nov 28 07:13:02 crc kubenswrapper[4946]: I1128 07:13:02.826920 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-955677c94-hk9d2"
Nov 28 07:13:04 crc kubenswrapper[4946]: I1128 07:13:04.542580 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-sqh4v"
Nov 28 07:13:04 crc kubenswrapper[4946]: I1128 07:13:04.763037 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-r94kz"
Nov 28 07:13:04 crc kubenswrapper[4946]: I1128 07:13:04.978274 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-mwdxn"
Nov 28 07:13:05 crc kubenswrapper[4946]: I1128 07:13:05.176149 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-blpm6"
Nov 28 07:13:10 crc kubenswrapper[4946]: I1128 07:13:10.260850 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-57548d458d-dhvm2"
Nov 28 07:13:10 crc kubenswrapper[4946]: I1128 07:13:10.697949 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5d9f9695dbgtt8n"
Nov 28 07:13:14 crc kubenswrapper[4946]: I1128 07:13:14.408387 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-2f2dz"
Nov 28 07:13:14 crc kubenswrapper[4946]: I1128 07:13:14.483939 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-955677c94-hk9d2"
Nov 28 07:13:14 crc kubenswrapper[4946]: I1128 07:13:14.965431 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-nl9r5"
Nov 28 07:13:14 crc kubenswrapper[4946]: I1128 07:13:14.971083 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-qfvfk"
Nov 28 07:13:15 crc kubenswrapper[4946]: I1128 07:13:15.039294 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-nqqq9"
Nov 28 07:13:30 crc kubenswrapper[4946]: I1128 07:13:30.385164 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-557f57d995-g9hls"]
Nov 28 07:13:30 crc kubenswrapper[4946]: I1128 07:13:30.388766 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-557f57d995-g9hls"
Nov 28 07:13:30 crc kubenswrapper[4946]: I1128 07:13:30.393830 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Nov 28 07:13:30 crc kubenswrapper[4946]: I1128 07:13:30.395249 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Nov 28 07:13:30 crc kubenswrapper[4946]: I1128 07:13:30.395561 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Nov 28 07:13:30 crc kubenswrapper[4946]: I1128 07:13:30.396157 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-jmdz9"
Nov 28 07:13:30 crc kubenswrapper[4946]: I1128 07:13:30.400721 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-557f57d995-g9hls"]
Nov 28 07:13:30 crc kubenswrapper[4946]: I1128 07:13:30.445660 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-766fdc659c-gbcph"]
Nov 28 07:13:30 crc kubenswrapper[4946]: I1128 07:13:30.447809 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-766fdc659c-gbcph"
Nov 28 07:13:30 crc kubenswrapper[4946]: I1128 07:13:30.457344 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Nov 28 07:13:30 crc kubenswrapper[4946]: I1128 07:13:30.467324 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-766fdc659c-gbcph"]
Nov 28 07:13:30 crc kubenswrapper[4946]: I1128 07:13:30.523262 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/909ce070-90e9-424e-89bd-6825071c0d2e-config\") pod \"dnsmasq-dns-557f57d995-g9hls\" (UID: \"909ce070-90e9-424e-89bd-6825071c0d2e\") " pod="openstack/dnsmasq-dns-557f57d995-g9hls"
Nov 28 07:13:30 crc kubenswrapper[4946]: I1128 07:13:30.523881 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9nlp\" (UniqueName: \"kubernetes.io/projected/909ce070-90e9-424e-89bd-6825071c0d2e-kube-api-access-c9nlp\") pod \"dnsmasq-dns-557f57d995-g9hls\" (UID: \"909ce070-90e9-424e-89bd-6825071c0d2e\") " pod="openstack/dnsmasq-dns-557f57d995-g9hls"
Nov 28 07:13:30 crc kubenswrapper[4946]: I1128 07:13:30.625236 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a041fa4-f481-403d-95ac-c2f32f1f5980-config\") pod \"dnsmasq-dns-766fdc659c-gbcph\" (UID: \"1a041fa4-f481-403d-95ac-c2f32f1f5980\") " pod="openstack/dnsmasq-dns-766fdc659c-gbcph"
Nov 28 07:13:30 crc kubenswrapper[4946]: I1128 07:13:30.625540 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6tmz\" (UniqueName: \"kubernetes.io/projected/1a041fa4-f481-403d-95ac-c2f32f1f5980-kube-api-access-b6tmz\") pod \"dnsmasq-dns-766fdc659c-gbcph\" (UID: \"1a041fa4-f481-403d-95ac-c2f32f1f5980\") " pod="openstack/dnsmasq-dns-766fdc659c-gbcph"
Nov 28 07:13:30 crc kubenswrapper[4946]: I1128 07:13:30.625645 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/909ce070-90e9-424e-89bd-6825071c0d2e-config\") pod \"dnsmasq-dns-557f57d995-g9hls\" (UID: \"909ce070-90e9-424e-89bd-6825071c0d2e\") " pod="openstack/dnsmasq-dns-557f57d995-g9hls"
Nov 28 07:13:30 crc kubenswrapper[4946]: I1128 07:13:30.625693 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a041fa4-f481-403d-95ac-c2f32f1f5980-dns-svc\") pod \"dnsmasq-dns-766fdc659c-gbcph\" (UID: \"1a041fa4-f481-403d-95ac-c2f32f1f5980\") " pod="openstack/dnsmasq-dns-766fdc659c-gbcph"
Nov 28 07:13:30 crc kubenswrapper[4946]: I1128 07:13:30.625718 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9nlp\" (UniqueName: \"kubernetes.io/projected/909ce070-90e9-424e-89bd-6825071c0d2e-kube-api-access-c9nlp\") pod \"dnsmasq-dns-557f57d995-g9hls\" (UID: \"909ce070-90e9-424e-89bd-6825071c0d2e\") " pod="openstack/dnsmasq-dns-557f57d995-g9hls"
Nov 28 07:13:30 crc kubenswrapper[4946]: I1128 07:13:30.626906 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/909ce070-90e9-424e-89bd-6825071c0d2e-config\") pod \"dnsmasq-dns-557f57d995-g9hls\" (UID: \"909ce070-90e9-424e-89bd-6825071c0d2e\") " pod="openstack/dnsmasq-dns-557f57d995-g9hls"
Nov 28 07:13:30 crc kubenswrapper[4946]: I1128 07:13:30.669946 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9nlp\" (UniqueName: \"kubernetes.io/projected/909ce070-90e9-424e-89bd-6825071c0d2e-kube-api-access-c9nlp\") pod \"dnsmasq-dns-557f57d995-g9hls\" (UID: \"909ce070-90e9-424e-89bd-6825071c0d2e\") " pod="openstack/dnsmasq-dns-557f57d995-g9hls"
Nov 28 07:13:30 crc kubenswrapper[4946]: I1128 07:13:30.710515 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-557f57d995-g9hls"
Nov 28 07:13:30 crc kubenswrapper[4946]: I1128 07:13:30.731574 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6tmz\" (UniqueName: \"kubernetes.io/projected/1a041fa4-f481-403d-95ac-c2f32f1f5980-kube-api-access-b6tmz\") pod \"dnsmasq-dns-766fdc659c-gbcph\" (UID: \"1a041fa4-f481-403d-95ac-c2f32f1f5980\") " pod="openstack/dnsmasq-dns-766fdc659c-gbcph"
Nov 28 07:13:30 crc kubenswrapper[4946]: I1128 07:13:30.731652 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a041fa4-f481-403d-95ac-c2f32f1f5980-dns-svc\") pod \"dnsmasq-dns-766fdc659c-gbcph\" (UID: \"1a041fa4-f481-403d-95ac-c2f32f1f5980\") " pod="openstack/dnsmasq-dns-766fdc659c-gbcph"
Nov 28 07:13:30 crc kubenswrapper[4946]: I1128 07:13:30.731711 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a041fa4-f481-403d-95ac-c2f32f1f5980-config\") pod \"dnsmasq-dns-766fdc659c-gbcph\" (UID: \"1a041fa4-f481-403d-95ac-c2f32f1f5980\") " pod="openstack/dnsmasq-dns-766fdc659c-gbcph"
Nov 28 07:13:30 crc kubenswrapper[4946]: I1128 07:13:30.732968 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a041fa4-f481-403d-95ac-c2f32f1f5980-config\") pod \"dnsmasq-dns-766fdc659c-gbcph\" (UID: \"1a041fa4-f481-403d-95ac-c2f32f1f5980\") " pod="openstack/dnsmasq-dns-766fdc659c-gbcph"
Nov 28 07:13:30 crc kubenswrapper[4946]: I1128 07:13:30.733677 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a041fa4-f481-403d-95ac-c2f32f1f5980-dns-svc\") pod \"dnsmasq-dns-766fdc659c-gbcph\" (UID: \"1a041fa4-f481-403d-95ac-c2f32f1f5980\") " pod="openstack/dnsmasq-dns-766fdc659c-gbcph"
Nov 28 07:13:30 crc kubenswrapper[4946]: I1128 07:13:30.770686 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6tmz\" (UniqueName: \"kubernetes.io/projected/1a041fa4-f481-403d-95ac-c2f32f1f5980-kube-api-access-b6tmz\") pod \"dnsmasq-dns-766fdc659c-gbcph\" (UID: \"1a041fa4-f481-403d-95ac-c2f32f1f5980\") " pod="openstack/dnsmasq-dns-766fdc659c-gbcph"
Nov 28 07:13:31 crc kubenswrapper[4946]: I1128 07:13:31.070599 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-766fdc659c-gbcph"
Nov 28 07:13:31 crc kubenswrapper[4946]: I1128 07:13:31.179639 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-557f57d995-g9hls"]
Nov 28 07:13:31 crc kubenswrapper[4946]: W1128 07:13:31.554511 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a041fa4_f481_403d_95ac_c2f32f1f5980.slice/crio-e0fb4c7f5cf3d119a4e2703dcec8dc1a4f7188e95ec5772c9e4079ad357014d7 WatchSource:0}: Error finding container e0fb4c7f5cf3d119a4e2703dcec8dc1a4f7188e95ec5772c9e4079ad357014d7: Status 404 returned error can't find the container with id e0fb4c7f5cf3d119a4e2703dcec8dc1a4f7188e95ec5772c9e4079ad357014d7
Nov 28 07:13:31 crc kubenswrapper[4946]: I1128 07:13:31.554843 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-766fdc659c-gbcph"]
Nov 28 07:13:32 crc kubenswrapper[4946]: I1128 07:13:32.201728 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-766fdc659c-gbcph" event={"ID":"1a041fa4-f481-403d-95ac-c2f32f1f5980","Type":"ContainerStarted","Data":"e0fb4c7f5cf3d119a4e2703dcec8dc1a4f7188e95ec5772c9e4079ad357014d7"}
Nov 28 07:13:32 crc kubenswrapper[4946]: I1128 07:13:32.203029 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557f57d995-g9hls" event={"ID":"909ce070-90e9-424e-89bd-6825071c0d2e","Type":"ContainerStarted","Data":"d018bf96c9ce2b0b28a334a2d4f56b05f8888a82ccc840e2516c844bf2e75a14"}
Nov 28 07:13:32 crc kubenswrapper[4946]: I1128 07:13:32.582323 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-557f57d995-g9hls"]
Nov 28 07:13:32 crc kubenswrapper[4946]: I1128 07:13:32.610433 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57dc4c6697-lx6ht"]
Nov 28 07:13:32 crc kubenswrapper[4946]: I1128 07:13:32.612879 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57dc4c6697-lx6ht"
Nov 28 07:13:32 crc kubenswrapper[4946]: I1128 07:13:32.630683 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57dc4c6697-lx6ht"]
Nov 28 07:13:32 crc kubenswrapper[4946]: I1128 07:13:32.666124 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ce6fbc6-22c3-4e15-97c0-690b30c63673-config\") pod \"dnsmasq-dns-57dc4c6697-lx6ht\" (UID: \"1ce6fbc6-22c3-4e15-97c0-690b30c63673\") " pod="openstack/dnsmasq-dns-57dc4c6697-lx6ht"
Nov 28 07:13:32 crc kubenswrapper[4946]: I1128 07:13:32.666347 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-956fk\" (UniqueName: \"kubernetes.io/projected/1ce6fbc6-22c3-4e15-97c0-690b30c63673-kube-api-access-956fk\") pod \"dnsmasq-dns-57dc4c6697-lx6ht\" (UID: \"1ce6fbc6-22c3-4e15-97c0-690b30c63673\") " pod="openstack/dnsmasq-dns-57dc4c6697-lx6ht"
Nov 28 07:13:32 crc kubenswrapper[4946]: I1128 07:13:32.666394 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ce6fbc6-22c3-4e15-97c0-690b30c63673-dns-svc\") pod \"dnsmasq-dns-57dc4c6697-lx6ht\" (UID: \"1ce6fbc6-22c3-4e15-97c0-690b30c63673\") " pod="openstack/dnsmasq-dns-57dc4c6697-lx6ht"
Nov 28 07:13:32 crc kubenswrapper[4946]: I1128 07:13:32.767403 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-956fk\" (UniqueName: \"kubernetes.io/projected/1ce6fbc6-22c3-4e15-97c0-690b30c63673-kube-api-access-956fk\") pod \"dnsmasq-dns-57dc4c6697-lx6ht\" (UID: \"1ce6fbc6-22c3-4e15-97c0-690b30c63673\") " pod="openstack/dnsmasq-dns-57dc4c6697-lx6ht"
Nov 28 07:13:32 crc kubenswrapper[4946]: I1128 07:13:32.767487 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ce6fbc6-22c3-4e15-97c0-690b30c63673-dns-svc\") pod \"dnsmasq-dns-57dc4c6697-lx6ht\" (UID: \"1ce6fbc6-22c3-4e15-97c0-690b30c63673\") " pod="openstack/dnsmasq-dns-57dc4c6697-lx6ht"
Nov 28 07:13:32 crc kubenswrapper[4946]: I1128 07:13:32.767546 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ce6fbc6-22c3-4e15-97c0-690b30c63673-config\") pod \"dnsmasq-dns-57dc4c6697-lx6ht\" (UID: \"1ce6fbc6-22c3-4e15-97c0-690b30c63673\") " pod="openstack/dnsmasq-dns-57dc4c6697-lx6ht"
Nov 28 07:13:32 crc kubenswrapper[4946]: I1128 07:13:32.768632 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ce6fbc6-22c3-4e15-97c0-690b30c63673-config\") pod \"dnsmasq-dns-57dc4c6697-lx6ht\" (UID: \"1ce6fbc6-22c3-4e15-97c0-690b30c63673\") " pod="openstack/dnsmasq-dns-57dc4c6697-lx6ht"
Nov 28 07:13:32 crc kubenswrapper[4946]: I1128 07:13:32.768675 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ce6fbc6-22c3-4e15-97c0-690b30c63673-dns-svc\") pod \"dnsmasq-dns-57dc4c6697-lx6ht\" (UID: \"1ce6fbc6-22c3-4e15-97c0-690b30c63673\") " pod="openstack/dnsmasq-dns-57dc4c6697-lx6ht"
Nov 28 07:13:32 crc kubenswrapper[4946]: I1128 07:13:32.802391 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-956fk\" (UniqueName: \"kubernetes.io/projected/1ce6fbc6-22c3-4e15-97c0-690b30c63673-kube-api-access-956fk\") pod \"dnsmasq-dns-57dc4c6697-lx6ht\" (UID: \"1ce6fbc6-22c3-4e15-97c0-690b30c63673\") " pod="openstack/dnsmasq-dns-57dc4c6697-lx6ht"
Nov 28 07:13:32 crc kubenswrapper[4946]: I1128 07:13:32.923855 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-766fdc659c-gbcph"]
Nov 28 07:13:32 crc kubenswrapper[4946]: I1128 07:13:32.939006 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57dc4c6697-lx6ht"
Nov 28 07:13:32 crc kubenswrapper[4946]: I1128 07:13:32.972776 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8446fd7c75-kwwmc"]
Nov 28 07:13:32 crc kubenswrapper[4946]: I1128 07:13:32.991885 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8446fd7c75-kwwmc"
Nov 28 07:13:33 crc kubenswrapper[4946]: I1128 07:13:33.038950 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8446fd7c75-kwwmc"]
Nov 28 07:13:33 crc kubenswrapper[4946]: I1128 07:13:33.180334 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cbb552d-dc8c-407a-82b2-248bdfb5efe3-dns-svc\") pod \"dnsmasq-dns-8446fd7c75-kwwmc\" (UID: \"6cbb552d-dc8c-407a-82b2-248bdfb5efe3\") " pod="openstack/dnsmasq-dns-8446fd7c75-kwwmc"
Nov 28 07:13:33 crc kubenswrapper[4946]: I1128 07:13:33.180577 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cbb552d-dc8c-407a-82b2-248bdfb5efe3-config\") pod \"dnsmasq-dns-8446fd7c75-kwwmc\" (UID: \"6cbb552d-dc8c-407a-82b2-248bdfb5efe3\") " pod="openstack/dnsmasq-dns-8446fd7c75-kwwmc"
Nov 28 07:13:33 crc kubenswrapper[4946]: I1128 07:13:33.180621 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpcd5\" (UniqueName: \"kubernetes.io/projected/6cbb552d-dc8c-407a-82b2-248bdfb5efe3-kube-api-access-mpcd5\") pod \"dnsmasq-dns-8446fd7c75-kwwmc\" (UID: \"6cbb552d-dc8c-407a-82b2-248bdfb5efe3\") " pod="openstack/dnsmasq-dns-8446fd7c75-kwwmc"
Nov 28 07:13:33 crc kubenswrapper[4946]: I1128 07:13:33.288732 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cbb552d-dc8c-407a-82b2-248bdfb5efe3-dns-svc\") pod \"dnsmasq-dns-8446fd7c75-kwwmc\" (UID: \"6cbb552d-dc8c-407a-82b2-248bdfb5efe3\") " pod="openstack/dnsmasq-dns-8446fd7c75-kwwmc"
Nov 28 07:13:33 crc kubenswrapper[4946]: I1128 07:13:33.290385 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cbb552d-dc8c-407a-82b2-248bdfb5efe3-config\") pod \"dnsmasq-dns-8446fd7c75-kwwmc\" (UID: \"6cbb552d-dc8c-407a-82b2-248bdfb5efe3\") " pod="openstack/dnsmasq-dns-8446fd7c75-kwwmc"
Nov 28 07:13:33 crc kubenswrapper[4946]: I1128 07:13:33.290501 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpcd5\" (UniqueName: \"kubernetes.io/projected/6cbb552d-dc8c-407a-82b2-248bdfb5efe3-kube-api-access-mpcd5\") pod \"dnsmasq-dns-8446fd7c75-kwwmc\" (UID: \"6cbb552d-dc8c-407a-82b2-248bdfb5efe3\") " pod="openstack/dnsmasq-dns-8446fd7c75-kwwmc"
Nov 28 07:13:33 crc kubenswrapper[4946]: I1128 07:13:33.291028 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cbb552d-dc8c-407a-82b2-248bdfb5efe3-dns-svc\") pod \"dnsmasq-dns-8446fd7c75-kwwmc\" (UID: \"6cbb552d-dc8c-407a-82b2-248bdfb5efe3\") " pod="openstack/dnsmasq-dns-8446fd7c75-kwwmc"
Nov 28 07:13:33 crc kubenswrapper[4946]: I1128 07:13:33.291507 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cbb552d-dc8c-407a-82b2-248bdfb5efe3-config\") pod \"dnsmasq-dns-8446fd7c75-kwwmc\" (UID: \"6cbb552d-dc8c-407a-82b2-248bdfb5efe3\") " pod="openstack/dnsmasq-dns-8446fd7c75-kwwmc"
Nov 28 07:13:33 crc kubenswrapper[4946]: I1128 07:13:33.324381 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpcd5\" (UniqueName: \"kubernetes.io/projected/6cbb552d-dc8c-407a-82b2-248bdfb5efe3-kube-api-access-mpcd5\") pod \"dnsmasq-dns-8446fd7c75-kwwmc\" (UID: \"6cbb552d-dc8c-407a-82b2-248bdfb5efe3\") " pod="openstack/dnsmasq-dns-8446fd7c75-kwwmc"
Nov 28 07:13:33 crc kubenswrapper[4946]: I1128 07:13:33.369884 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8446fd7c75-kwwmc"
Nov 28 07:13:33 crc kubenswrapper[4946]: I1128 07:13:33.497915 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57dc4c6697-lx6ht"]
Nov 28 07:13:33 crc kubenswrapper[4946]: W1128 07:13:33.511299 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ce6fbc6_22c3_4e15_97c0_690b30c63673.slice/crio-18c78b57ec644363bf33b67ea5566bfdfacb929068489e47d6c4a41a5bf766f2 WatchSource:0}: Error finding container 18c78b57ec644363bf33b67ea5566bfdfacb929068489e47d6c4a41a5bf766f2: Status 404 returned error can't find the container with id 18c78b57ec644363bf33b67ea5566bfdfacb929068489e47d6c4a41a5bf766f2
Nov 28 07:13:33 crc kubenswrapper[4946]: I1128 07:13:33.699436 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8446fd7c75-kwwmc"]
Nov 28 07:13:33 crc kubenswrapper[4946]: W1128 07:13:33.711835 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6cbb552d_dc8c_407a_82b2_248bdfb5efe3.slice/crio-ffd9460489657d36a01512b68ba5684325b493fabeb1ac82929a44abfd32e4fa WatchSource:0}: Error finding container ffd9460489657d36a01512b68ba5684325b493fabeb1ac82929a44abfd32e4fa: Status 404 returned error can't find the container with id ffd9460489657d36a01512b68ba5684325b493fabeb1ac82929a44abfd32e4fa
Nov 28 07:13:33 crc kubenswrapper[4946]: I1128 07:13:33.851804 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 28 07:13:33 crc kubenswrapper[4946]: I1128 07:13:33.853236 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Nov 28 07:13:33 crc kubenswrapper[4946]: I1128 07:13:33.856567 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Nov 28 07:13:33 crc kubenswrapper[4946]: I1128 07:13:33.856906 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Nov 28 07:13:33 crc kubenswrapper[4946]: I1128 07:13:33.857146 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Nov 28 07:13:33 crc kubenswrapper[4946]: I1128 07:13:33.857325 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Nov 28 07:13:33 crc kubenswrapper[4946]: I1128 07:13:33.857512 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Nov 28 07:13:33 crc kubenswrapper[4946]: I1128 07:13:33.858428 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-x24hm"
Nov 28 07:13:33 crc kubenswrapper[4946]: I1128 07:13:33.860261 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Nov 28 07:13:33 crc kubenswrapper[4946]: I1128 07:13:33.870825 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 28 07:13:33 crc kubenswrapper[4946]: I1128 07:13:33.903684 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/59fdca77-b333-44be-ab8c-96a2f4bcc340-pod-info\") pod \"rabbitmq-server-0\" (UID: \"59fdca77-b333-44be-ab8c-96a2f4bcc340\") " pod="openstack/rabbitmq-server-0"
Nov 28 07:13:33 crc kubenswrapper[4946]: I1128 07:13:33.903741 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"59fdca77-b333-44be-ab8c-96a2f4bcc340\") " pod="openstack/rabbitmq-server-0"
Nov 28 07:13:33 crc kubenswrapper[4946]: I1128 07:13:33.903764 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/59fdca77-b333-44be-ab8c-96a2f4bcc340-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"59fdca77-b333-44be-ab8c-96a2f4bcc340\") " pod="openstack/rabbitmq-server-0"
Nov 28 07:13:33 crc kubenswrapper[4946]: I1128 07:13:33.904046 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgx22\" (UniqueName: \"kubernetes.io/projected/59fdca77-b333-44be-ab8c-96a2f4bcc340-kube-api-access-tgx22\") pod \"rabbitmq-server-0\" (UID: \"59fdca77-b333-44be-ab8c-96a2f4bcc340\") " pod="openstack/rabbitmq-server-0"
Nov 28 07:13:33 crc kubenswrapper[4946]: I1128 07:13:33.904222 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/59fdca77-b333-44be-ab8c-96a2f4bcc340-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"59fdca77-b333-44be-ab8c-96a2f4bcc340\") " pod="openstack/rabbitmq-server-0"
Nov 28 07:13:33 crc kubenswrapper[4946]: I1128 07:13:33.904304 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/59fdca77-b333-44be-ab8c-96a2f4bcc340-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"59fdca77-b333-44be-ab8c-96a2f4bcc340\") " pod="openstack/rabbitmq-server-0"
Nov 28 07:13:33 crc kubenswrapper[4946]: I1128 07:13:33.904373 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/59fdca77-b333-44be-ab8c-96a2f4bcc340-config-data\") pod \"rabbitmq-server-0\" (UID: \"59fdca77-b333-44be-ab8c-96a2f4bcc340\") " pod="openstack/rabbitmq-server-0"
Nov 28 07:13:33 crc kubenswrapper[4946]: I1128 07:13:33.904437 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/59fdca77-b333-44be-ab8c-96a2f4bcc340-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"59fdca77-b333-44be-ab8c-96a2f4bcc340\") " pod="openstack/rabbitmq-server-0"
Nov 28 07:13:33 crc kubenswrapper[4946]: I1128 07:13:33.904512 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/59fdca77-b333-44be-ab8c-96a2f4bcc340-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"59fdca77-b333-44be-ab8c-96a2f4bcc340\") " pod="openstack/rabbitmq-server-0"
Nov 28 07:13:33 crc kubenswrapper[4946]: I1128 07:13:33.904535 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/59fdca77-b333-44be-ab8c-96a2f4bcc340-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"59fdca77-b333-44be-ab8c-96a2f4bcc340\") " pod="openstack/rabbitmq-server-0"
Nov 28 07:13:33 crc kubenswrapper[4946]: I1128 07:13:33.905126 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/59fdca77-b333-44be-ab8c-96a2f4bcc340-server-conf\") pod \"rabbitmq-server-0\" (UID: \"59fdca77-b333-44be-ab8c-96a2f4bcc340\") " pod="openstack/rabbitmq-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.007011 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgx22\" (UniqueName: \"kubernetes.io/projected/59fdca77-b333-44be-ab8c-96a2f4bcc340-kube-api-access-tgx22\") pod \"rabbitmq-server-0\" (UID: \"59fdca77-b333-44be-ab8c-96a2f4bcc340\") " pod="openstack/rabbitmq-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.010242 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/59fdca77-b333-44be-ab8c-96a2f4bcc340-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"59fdca77-b333-44be-ab8c-96a2f4bcc340\") " pod="openstack/rabbitmq-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.010414 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/59fdca77-b333-44be-ab8c-96a2f4bcc340-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"59fdca77-b333-44be-ab8c-96a2f4bcc340\") " pod="openstack/rabbitmq-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.011945 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/59fdca77-b333-44be-ab8c-96a2f4bcc340-config-data\") pod \"rabbitmq-server-0\" (UID: \"59fdca77-b333-44be-ab8c-96a2f4bcc340\") " pod="openstack/rabbitmq-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.012016 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/59fdca77-b333-44be-ab8c-96a2f4bcc340-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"59fdca77-b333-44be-ab8c-96a2f4bcc340\") " pod="openstack/rabbitmq-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.012088 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/59fdca77-b333-44be-ab8c-96a2f4bcc340-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"59fdca77-b333-44be-ab8c-96a2f4bcc340\") " pod="openstack/rabbitmq-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.012113 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/59fdca77-b333-44be-ab8c-96a2f4bcc340-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"59fdca77-b333-44be-ab8c-96a2f4bcc340\") " pod="openstack/rabbitmq-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.012202 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/59fdca77-b333-44be-ab8c-96a2f4bcc340-server-conf\") pod \"rabbitmq-server-0\" (UID: \"59fdca77-b333-44be-ab8c-96a2f4bcc340\") " pod="openstack/rabbitmq-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.012271 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/59fdca77-b333-44be-ab8c-96a2f4bcc340-pod-info\") pod \"rabbitmq-server-0\" (UID: \"59fdca77-b333-44be-ab8c-96a2f4bcc340\") " pod="openstack/rabbitmq-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.012320 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"59fdca77-b333-44be-ab8c-96a2f4bcc340\") " pod="openstack/rabbitmq-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.012345 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/59fdca77-b333-44be-ab8c-96a2f4bcc340-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"59fdca77-b333-44be-ab8c-96a2f4bcc340\") " pod="openstack/rabbitmq-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.014193 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/59fdca77-b333-44be-ab8c-96a2f4bcc340-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"59fdca77-b333-44be-ab8c-96a2f4bcc340\") " pod="openstack/rabbitmq-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.014470 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/59fdca77-b333-44be-ab8c-96a2f4bcc340-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"59fdca77-b333-44be-ab8c-96a2f4bcc340\") " pod="openstack/rabbitmq-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.015442 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/59fdca77-b333-44be-ab8c-96a2f4bcc340-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"59fdca77-b333-44be-ab8c-96a2f4bcc340\") " pod="openstack/rabbitmq-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.016127 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/59fdca77-b333-44be-ab8c-96a2f4bcc340-server-conf\") pod \"rabbitmq-server-0\" (UID: \"59fdca77-b333-44be-ab8c-96a2f4bcc340\") " pod="openstack/rabbitmq-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.016366 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/59fdca77-b333-44be-ab8c-96a2f4bcc340-config-data\") pod \"rabbitmq-server-0\" (UID: \"59fdca77-b333-44be-ab8c-96a2f4bcc340\") " pod="openstack/rabbitmq-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.017023 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/59fdca77-b333-44be-ab8c-96a2f4bcc340-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"59fdca77-b333-44be-ab8c-96a2f4bcc340\") " pod="openstack/rabbitmq-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.017278 4946 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"59fdca77-b333-44be-ab8c-96a2f4bcc340\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.018805 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/59fdca77-b333-44be-ab8c-96a2f4bcc340-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"59fdca77-b333-44be-ab8c-96a2f4bcc340\") " pod="openstack/rabbitmq-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.022985 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/59fdca77-b333-44be-ab8c-96a2f4bcc340-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"59fdca77-b333-44be-ab8c-96a2f4bcc340\") " pod="openstack/rabbitmq-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.023655 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/59fdca77-b333-44be-ab8c-96a2f4bcc340-pod-info\") pod \"rabbitmq-server-0\" (UID: \"59fdca77-b333-44be-ab8c-96a2f4bcc340\") " pod="openstack/rabbitmq-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.026617 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgx22\" (UniqueName: \"kubernetes.io/projected/59fdca77-b333-44be-ab8c-96a2f4bcc340-kube-api-access-tgx22\") pod \"rabbitmq-server-0\" (UID: \"59fdca77-b333-44be-ab8c-96a2f4bcc340\") " pod="openstack/rabbitmq-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.065080 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"59fdca77-b333-44be-ab8c-96a2f4bcc340\") " pod="openstack/rabbitmq-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.099221 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.100811 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.106819 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.107072 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.107220 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.107490 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.110972 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.111188 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.111324 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-276r6"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.143710 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.191030 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.215341 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3521840d-60d0-450c-8c05-7e2ad0fc4e97-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"3521840d-60d0-450c-8c05-7e2ad0fc4e97\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.215432 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3521840d-60d0-450c-8c05-7e2ad0fc4e97-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"3521840d-60d0-450c-8c05-7e2ad0fc4e97\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.215536 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3521840d-60d0-450c-8c05-7e2ad0fc4e97-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"3521840d-60d0-450c-8c05-7e2ad0fc4e97\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.215603 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3521840d-60d0-450c-8c05-7e2ad0fc4e97-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3521840d-60d0-450c-8c05-7e2ad0fc4e97\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.215649 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r5q4\" (UniqueName: \"kubernetes.io/projected/3521840d-60d0-450c-8c05-7e2ad0fc4e97-kube-api-access-8r5q4\") pod \"rabbitmq-cell1-server-0\" (UID: \"3521840d-60d0-450c-8c05-7e2ad0fc4e97\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.215679 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3521840d-60d0-450c-8c05-7e2ad0fc4e97-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3521840d-60d0-450c-8c05-7e2ad0fc4e97\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.216062 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3521840d-60d0-450c-8c05-7e2ad0fc4e97-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"3521840d-60d0-450c-8c05-7e2ad0fc4e97\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.216117 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3521840d-60d0-450c-8c05-7e2ad0fc4e97-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"3521840d-60d0-450c-8c05-7e2ad0fc4e97\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.216135 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3521840d-60d0-450c-8c05-7e2ad0fc4e97-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"3521840d-60d0-450c-8c05-7e2ad0fc4e97\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.217486 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3521840d-60d0-450c-8c05-7e2ad0fc4e97-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"3521840d-60d0-450c-8c05-7e2ad0fc4e97\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.217676 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"3521840d-60d0-450c-8c05-7e2ad0fc4e97\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.243592 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57dc4c6697-lx6ht" event={"ID":"1ce6fbc6-22c3-4e15-97c0-690b30c63673","Type":"ContainerStarted","Data":"18c78b57ec644363bf33b67ea5566bfdfacb929068489e47d6c4a41a5bf766f2"}
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.254687 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8446fd7c75-kwwmc" event={"ID":"6cbb552d-dc8c-407a-82b2-248bdfb5efe3","Type":"ContainerStarted","Data":"ffd9460489657d36a01512b68ba5684325b493fabeb1ac82929a44abfd32e4fa"}
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.319811 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3521840d-60d0-450c-8c05-7e2ad0fc4e97-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3521840d-60d0-450c-8c05-7e2ad0fc4e97\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.319876 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r5q4\" (UniqueName: \"kubernetes.io/projected/3521840d-60d0-450c-8c05-7e2ad0fc4e97-kube-api-access-8r5q4\") pod \"rabbitmq-cell1-server-0\" (UID: \"3521840d-60d0-450c-8c05-7e2ad0fc4e97\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.319905 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3521840d-60d0-450c-8c05-7e2ad0fc4e97-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3521840d-60d0-450c-8c05-7e2ad0fc4e97\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.319953 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3521840d-60d0-450c-8c05-7e2ad0fc4e97-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"3521840d-60d0-450c-8c05-7e2ad0fc4e97\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.319985 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3521840d-60d0-450c-8c05-7e2ad0fc4e97-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"3521840d-60d0-450c-8c05-7e2ad0fc4e97\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.320029 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3521840d-60d0-450c-8c05-7e2ad0fc4e97-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"3521840d-60d0-450c-8c05-7e2ad0fc4e97\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.320083 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3521840d-60d0-450c-8c05-7e2ad0fc4e97-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"3521840d-60d0-450c-8c05-7e2ad0fc4e97\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.320115 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"3521840d-60d0-450c-8c05-7e2ad0fc4e97\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.320150 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3521840d-60d0-450c-8c05-7e2ad0fc4e97-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"3521840d-60d0-450c-8c05-7e2ad0fc4e97\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.320175 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3521840d-60d0-450c-8c05-7e2ad0fc4e97-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"3521840d-60d0-450c-8c05-7e2ad0fc4e97\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.320199 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3521840d-60d0-450c-8c05-7e2ad0fc4e97-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"3521840d-60d0-450c-8c05-7e2ad0fc4e97\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.320655 4946 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"3521840d-60d0-450c-8c05-7e2ad0fc4e97\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-cell1-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.320811 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3521840d-60d0-450c-8c05-7e2ad0fc4e97-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"3521840d-60d0-450c-8c05-7e2ad0fc4e97\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.322120 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3521840d-60d0-450c-8c05-7e2ad0fc4e97-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"3521840d-60d0-450c-8c05-7e2ad0fc4e97\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.323995 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3521840d-60d0-450c-8c05-7e2ad0fc4e97-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3521840d-60d0-450c-8c05-7e2ad0fc4e97\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.324580 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3521840d-60d0-450c-8c05-7e2ad0fc4e97-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"3521840d-60d0-450c-8c05-7e2ad0fc4e97\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.326534 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3521840d-60d0-450c-8c05-7e2ad0fc4e97-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3521840d-60d0-450c-8c05-7e2ad0fc4e97\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.335443 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3521840d-60d0-450c-8c05-7e2ad0fc4e97-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"3521840d-60d0-450c-8c05-7e2ad0fc4e97\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.336010 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3521840d-60d0-450c-8c05-7e2ad0fc4e97-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"3521840d-60d0-450c-8c05-7e2ad0fc4e97\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.336226 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3521840d-60d0-450c-8c05-7e2ad0fc4e97-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"3521840d-60d0-450c-8c05-7e2ad0fc4e97\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.337252 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3521840d-60d0-450c-8c05-7e2ad0fc4e97-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"3521840d-60d0-450c-8c05-7e2ad0fc4e97\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.344287 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r5q4\" (UniqueName: \"kubernetes.io/projected/3521840d-60d0-450c-8c05-7e2ad0fc4e97-kube-api-access-8r5q4\") pod \"rabbitmq-cell1-server-0\" (UID: \"3521840d-60d0-450c-8c05-7e2ad0fc4e97\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.394755 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"3521840d-60d0-450c-8c05-7e2ad0fc4e97\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.445509 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Nov 28 07:13:34 crc kubenswrapper[4946]: I1128 07:13:34.881745 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 28 07:13:35 crc kubenswrapper[4946]: I1128 07:13:35.113234 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 28 07:13:35 crc kubenswrapper[4946]: I1128 07:13:35.279111 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"59fdca77-b333-44be-ab8c-96a2f4bcc340","Type":"ContainerStarted","Data":"301e2cda2647a234e7bedf243c381acf80fcde3f36946e5a978cd12c8fc3475a"}
Nov 28 07:13:35 crc kubenswrapper[4946]: I1128 07:13:35.287318 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3521840d-60d0-450c-8c05-7e2ad0fc4e97","Type":"ContainerStarted","Data":"43803043a516ccf50c5a19a1e0ce796d958a757afcf62c3b93dce03f0af09f7e"}
Nov 28 07:13:35 crc kubenswrapper[4946]: I1128 07:13:35.288895 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Nov 28 07:13:35 crc kubenswrapper[4946]: I1128 07:13:35.291222 4946 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/openstack-galera-0" Nov 28 07:13:35 crc kubenswrapper[4946]: I1128 07:13:35.297575 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 28 07:13:35 crc kubenswrapper[4946]: I1128 07:13:35.321303 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 28 07:13:35 crc kubenswrapper[4946]: I1128 07:13:35.321832 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 28 07:13:35 crc kubenswrapper[4946]: I1128 07:13:35.322167 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-g9947" Nov 28 07:13:35 crc kubenswrapper[4946]: I1128 07:13:35.323597 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 28 07:13:35 crc kubenswrapper[4946]: I1128 07:13:35.333435 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 28 07:13:35 crc kubenswrapper[4946]: I1128 07:13:35.451194 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"13d10c33-0ca9-47d5-ac49-19391cebfb39\") " pod="openstack/openstack-galera-0" Nov 28 07:13:35 crc kubenswrapper[4946]: I1128 07:13:35.451282 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/13d10c33-0ca9-47d5-ac49-19391cebfb39-config-data-default\") pod \"openstack-galera-0\" (UID: \"13d10c33-0ca9-47d5-ac49-19391cebfb39\") " pod="openstack/openstack-galera-0" Nov 28 07:13:35 crc kubenswrapper[4946]: I1128 07:13:35.451320 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/13d10c33-0ca9-47d5-ac49-19391cebfb39-kolla-config\") pod \"openstack-galera-0\" (UID: \"13d10c33-0ca9-47d5-ac49-19391cebfb39\") " pod="openstack/openstack-galera-0" Nov 28 07:13:35 crc kubenswrapper[4946]: I1128 07:13:35.451350 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13d10c33-0ca9-47d5-ac49-19391cebfb39-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"13d10c33-0ca9-47d5-ac49-19391cebfb39\") " pod="openstack/openstack-galera-0" Nov 28 07:13:35 crc kubenswrapper[4946]: I1128 07:13:35.451367 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13d10c33-0ca9-47d5-ac49-19391cebfb39-operator-scripts\") pod \"openstack-galera-0\" (UID: \"13d10c33-0ca9-47d5-ac49-19391cebfb39\") " pod="openstack/openstack-galera-0" Nov 28 07:13:35 crc kubenswrapper[4946]: I1128 07:13:35.451389 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf9c5\" (UniqueName: \"kubernetes.io/projected/13d10c33-0ca9-47d5-ac49-19391cebfb39-kube-api-access-cf9c5\") pod \"openstack-galera-0\" (UID: \"13d10c33-0ca9-47d5-ac49-19391cebfb39\") " pod="openstack/openstack-galera-0" Nov 28 07:13:35 crc kubenswrapper[4946]: I1128 07:13:35.451404 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/13d10c33-0ca9-47d5-ac49-19391cebfb39-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"13d10c33-0ca9-47d5-ac49-19391cebfb39\") " pod="openstack/openstack-galera-0" Nov 28 07:13:35 crc kubenswrapper[4946]: I1128 07:13:35.451429 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/13d10c33-0ca9-47d5-ac49-19391cebfb39-config-data-generated\") pod \"openstack-galera-0\" (UID: \"13d10c33-0ca9-47d5-ac49-19391cebfb39\") " pod="openstack/openstack-galera-0" Nov 28 07:13:35 crc kubenswrapper[4946]: I1128 07:13:35.553703 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/13d10c33-0ca9-47d5-ac49-19391cebfb39-config-data-default\") pod \"openstack-galera-0\" (UID: \"13d10c33-0ca9-47d5-ac49-19391cebfb39\") " pod="openstack/openstack-galera-0" Nov 28 07:13:35 crc kubenswrapper[4946]: I1128 07:13:35.553776 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/13d10c33-0ca9-47d5-ac49-19391cebfb39-kolla-config\") pod \"openstack-galera-0\" (UID: \"13d10c33-0ca9-47d5-ac49-19391cebfb39\") " pod="openstack/openstack-galera-0" Nov 28 07:13:35 crc kubenswrapper[4946]: I1128 07:13:35.553813 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13d10c33-0ca9-47d5-ac49-19391cebfb39-operator-scripts\") pod \"openstack-galera-0\" (UID: \"13d10c33-0ca9-47d5-ac49-19391cebfb39\") " pod="openstack/openstack-galera-0" Nov 28 07:13:35 crc kubenswrapper[4946]: I1128 07:13:35.553833 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13d10c33-0ca9-47d5-ac49-19391cebfb39-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"13d10c33-0ca9-47d5-ac49-19391cebfb39\") " pod="openstack/openstack-galera-0" Nov 28 07:13:35 crc kubenswrapper[4946]: I1128 07:13:35.553857 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf9c5\" (UniqueName: \"kubernetes.io/projected/13d10c33-0ca9-47d5-ac49-19391cebfb39-kube-api-access-cf9c5\") pod \"openstack-galera-0\" (UID: \"13d10c33-0ca9-47d5-ac49-19391cebfb39\") " pod="openstack/openstack-galera-0" Nov 28 07:13:35 crc kubenswrapper[4946]: I1128 07:13:35.553878 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/13d10c33-0ca9-47d5-ac49-19391cebfb39-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"13d10c33-0ca9-47d5-ac49-19391cebfb39\") " pod="openstack/openstack-galera-0" Nov 28 07:13:35 crc kubenswrapper[4946]: I1128 07:13:35.553905 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/13d10c33-0ca9-47d5-ac49-19391cebfb39-config-data-generated\") pod \"openstack-galera-0\" (UID: \"13d10c33-0ca9-47d5-ac49-19391cebfb39\") " pod="openstack/openstack-galera-0" Nov 28 07:13:35 crc kubenswrapper[4946]: I1128 07:13:35.553975 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: 
\"13d10c33-0ca9-47d5-ac49-19391cebfb39\") " pod="openstack/openstack-galera-0" Nov 28 07:13:35 crc kubenswrapper[4946]: I1128 07:13:35.554366 4946 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"13d10c33-0ca9-47d5-ac49-19391cebfb39\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/openstack-galera-0" Nov 28 07:13:35 crc kubenswrapper[4946]: I1128 07:13:35.555003 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/13d10c33-0ca9-47d5-ac49-19391cebfb39-config-data-generated\") pod \"openstack-galera-0\" (UID: \"13d10c33-0ca9-47d5-ac49-19391cebfb39\") " pod="openstack/openstack-galera-0" Nov 28 07:13:35 crc kubenswrapper[4946]: I1128 07:13:35.555042 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/13d10c33-0ca9-47d5-ac49-19391cebfb39-kolla-config\") pod \"openstack-galera-0\" (UID: \"13d10c33-0ca9-47d5-ac49-19391cebfb39\") " pod="openstack/openstack-galera-0" Nov 28 07:13:35 crc kubenswrapper[4946]: I1128 07:13:35.555068 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/13d10c33-0ca9-47d5-ac49-19391cebfb39-config-data-default\") pod \"openstack-galera-0\" (UID: \"13d10c33-0ca9-47d5-ac49-19391cebfb39\") " pod="openstack/openstack-galera-0" Nov 28 07:13:35 crc kubenswrapper[4946]: I1128 07:13:35.562572 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13d10c33-0ca9-47d5-ac49-19391cebfb39-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"13d10c33-0ca9-47d5-ac49-19391cebfb39\") " pod="openstack/openstack-galera-0" Nov 28 07:13:35 crc kubenswrapper[4946]: I1128 07:13:35.562992 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13d10c33-0ca9-47d5-ac49-19391cebfb39-operator-scripts\") pod \"openstack-galera-0\" (UID: \"13d10c33-0ca9-47d5-ac49-19391cebfb39\") " pod="openstack/openstack-galera-0" Nov 28 07:13:35 crc kubenswrapper[4946]: I1128 07:13:35.575821 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"13d10c33-0ca9-47d5-ac49-19391cebfb39\") " pod="openstack/openstack-galera-0" Nov 28 07:13:35 crc kubenswrapper[4946]: I1128 07:13:35.577525 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf9c5\" (UniqueName: \"kubernetes.io/projected/13d10c33-0ca9-47d5-ac49-19391cebfb39-kube-api-access-cf9c5\") pod \"openstack-galera-0\" (UID: \"13d10c33-0ca9-47d5-ac49-19391cebfb39\") " pod="openstack/openstack-galera-0" Nov 28 07:13:35 crc kubenswrapper[4946]: I1128 07:13:35.581484 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/13d10c33-0ca9-47d5-ac49-19391cebfb39-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"13d10c33-0ca9-47d5-ac49-19391cebfb39\") " pod="openstack/openstack-galera-0" Nov 28 07:13:35 crc kubenswrapper[4946]: I1128 07:13:35.642508 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Nov 28 07:13:36 crc kubenswrapper[4946]: I1128 07:13:36.398532 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 28 07:13:36 crc kubenswrapper[4946]: I1128 07:13:36.747543 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 28 07:13:36 crc kubenswrapper[4946]: I1128 07:13:36.750309 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 28 07:13:36 crc kubenswrapper[4946]: I1128 07:13:36.755561 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 28 07:13:36 crc kubenswrapper[4946]: I1128 07:13:36.755824 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-b6fsh" Nov 28 07:13:36 crc kubenswrapper[4946]: I1128 07:13:36.756001 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 28 07:13:36 crc kubenswrapper[4946]: I1128 07:13:36.756190 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 28 07:13:36 crc kubenswrapper[4946]: I1128 07:13:36.762690 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 28 07:13:36 crc kubenswrapper[4946]: I1128 07:13:36.888185 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e87edf72-a3a2-4df0-8249-5902f158998d-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e87edf72-a3a2-4df0-8249-5902f158998d\") " pod="openstack/openstack-cell1-galera-0" Nov 28 07:13:36 crc kubenswrapper[4946]: I1128 07:13:36.888239 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e87edf72-a3a2-4df0-8249-5902f158998d-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e87edf72-a3a2-4df0-8249-5902f158998d\") " pod="openstack/openstack-cell1-galera-0" Nov 28 07:13:36 crc kubenswrapper[4946]: I1128 07:13:36.888290 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"e87edf72-a3a2-4df0-8249-5902f158998d\") " pod="openstack/openstack-cell1-galera-0" Nov 28 07:13:36 crc kubenswrapper[4946]: I1128 07:13:36.888327 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e87edf72-a3a2-4df0-8249-5902f158998d-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e87edf72-a3a2-4df0-8249-5902f158998d\") " pod="openstack/openstack-cell1-galera-0" Nov 28 07:13:36 crc kubenswrapper[4946]: I1128 07:13:36.888618 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e87edf72-a3a2-4df0-8249-5902f158998d-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e87edf72-a3a2-4df0-8249-5902f158998d\") " pod="openstack/openstack-cell1-galera-0" Nov 28 07:13:36 crc kubenswrapper[4946]: I1128 07:13:36.890201 4946 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e87edf72-a3a2-4df0-8249-5902f158998d-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e87edf72-a3a2-4df0-8249-5902f158998d\") " pod="openstack/openstack-cell1-galera-0" Nov 28 07:13:36 crc kubenswrapper[4946]: I1128 07:13:36.891016 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9pfn\" (UniqueName: \"kubernetes.io/projected/e87edf72-a3a2-4df0-8249-5902f158998d-kube-api-access-l9pfn\") pod \"openstack-cell1-galera-0\" (UID: \"e87edf72-a3a2-4df0-8249-5902f158998d\") " pod="openstack/openstack-cell1-galera-0" Nov 28 07:13:36 crc kubenswrapper[4946]: I1128 07:13:36.891280 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e87edf72-a3a2-4df0-8249-5902f158998d-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e87edf72-a3a2-4df0-8249-5902f158998d\") " pod="openstack/openstack-cell1-galera-0" Nov 28 07:13:36 crc kubenswrapper[4946]: I1128 07:13:36.996260 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e87edf72-a3a2-4df0-8249-5902f158998d-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e87edf72-a3a2-4df0-8249-5902f158998d\") " pod="openstack/openstack-cell1-galera-0" Nov 28 07:13:36 crc kubenswrapper[4946]: I1128 07:13:36.996331 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e87edf72-a3a2-4df0-8249-5902f158998d-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e87edf72-a3a2-4df0-8249-5902f158998d\") " pod="openstack/openstack-cell1-galera-0" Nov 28 07:13:36 crc kubenswrapper[4946]: I1128 07:13:36.996382 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e87edf72-a3a2-4df0-8249-5902f158998d-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e87edf72-a3a2-4df0-8249-5902f158998d\") " pod="openstack/openstack-cell1-galera-0" Nov 28 07:13:36 crc kubenswrapper[4946]: I1128 07:13:36.996456 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"e87edf72-a3a2-4df0-8249-5902f158998d\") " pod="openstack/openstack-cell1-galera-0" Nov 28 07:13:36 crc kubenswrapper[4946]: I1128 07:13:36.996518 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e87edf72-a3a2-4df0-8249-5902f158998d-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e87edf72-a3a2-4df0-8249-5902f158998d\") " pod="openstack/openstack-cell1-galera-0" Nov 28 07:13:36 crc kubenswrapper[4946]: I1128 07:13:36.996607 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e87edf72-a3a2-4df0-8249-5902f158998d-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e87edf72-a3a2-4df0-8249-5902f158998d\") " pod="openstack/openstack-cell1-galera-0" Nov 28 07:13:36 crc kubenswrapper[4946]: I1128 07:13:36.996665 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e87edf72-a3a2-4df0-8249-5902f158998d-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e87edf72-a3a2-4df0-8249-5902f158998d\") " pod="openstack/openstack-cell1-galera-0" Nov 28 07:13:36 crc kubenswrapper[4946]: I1128 07:13:36.996734 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9pfn\" (UniqueName: \"kubernetes.io/projected/e87edf72-a3a2-4df0-8249-5902f158998d-kube-api-access-l9pfn\") pod \"openstack-cell1-galera-0\" (UID: \"e87edf72-a3a2-4df0-8249-5902f158998d\") " pod="openstack/openstack-cell1-galera-0" Nov 28 07:13:36 crc kubenswrapper[4946]: I1128 07:13:36.997043 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e87edf72-a3a2-4df0-8249-5902f158998d-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e87edf72-a3a2-4df0-8249-5902f158998d\") " pod="openstack/openstack-cell1-galera-0" Nov 28 07:13:36 crc kubenswrapper[4946]: I1128 07:13:36.997076 4946 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"e87edf72-a3a2-4df0-8249-5902f158998d\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-cell1-galera-0" Nov 28 07:13:36 crc kubenswrapper[4946]: I1128 07:13:36.997633 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e87edf72-a3a2-4df0-8249-5902f158998d-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e87edf72-a3a2-4df0-8249-5902f158998d\") " pod="openstack/openstack-cell1-galera-0" Nov 28 07:13:36 crc kubenswrapper[4946]: I1128 07:13:36.998085 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e87edf72-a3a2-4df0-8249-5902f158998d-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e87edf72-a3a2-4df0-8249-5902f158998d\") " pod="openstack/openstack-cell1-galera-0" Nov 28 07:13:36 crc kubenswrapper[4946]: I1128 07:13:36.999283 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e87edf72-a3a2-4df0-8249-5902f158998d-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e87edf72-a3a2-4df0-8249-5902f158998d\") " pod="openstack/openstack-cell1-galera-0" Nov 28 07:13:37 crc kubenswrapper[4946]: I1128 07:13:37.007635 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e87edf72-a3a2-4df0-8249-5902f158998d-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e87edf72-a3a2-4df0-8249-5902f158998d\") " pod="openstack/openstack-cell1-galera-0" Nov 28 07:13:37 crc kubenswrapper[4946]: I1128 07:13:37.008762 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e87edf72-a3a2-4df0-8249-5902f158998d-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e87edf72-a3a2-4df0-8249-5902f158998d\") " pod="openstack/openstack-cell1-galera-0" Nov 28 07:13:37 crc kubenswrapper[4946]: I1128 07:13:37.036567 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9pfn\" (UniqueName: 
\"kubernetes.io/projected/e87edf72-a3a2-4df0-8249-5902f158998d-kube-api-access-l9pfn\") pod \"openstack-cell1-galera-0\" (UID: \"e87edf72-a3a2-4df0-8249-5902f158998d\") " pod="openstack/openstack-cell1-galera-0" Nov 28 07:13:37 crc kubenswrapper[4946]: I1128 07:13:37.045260 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Nov 28 07:13:37 crc kubenswrapper[4946]: I1128 07:13:37.051402 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 28 07:13:37 crc kubenswrapper[4946]: I1128 07:13:37.054548 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 28 07:13:37 crc kubenswrapper[4946]: I1128 07:13:37.054901 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-6ptjc" Nov 28 07:13:37 crc kubenswrapper[4946]: I1128 07:13:37.055059 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 28 07:13:37 crc kubenswrapper[4946]: I1128 07:13:37.063244 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"e87edf72-a3a2-4df0-8249-5902f158998d\") " pod="openstack/openstack-cell1-galera-0" Nov 28 07:13:37 crc kubenswrapper[4946]: I1128 07:13:37.089405 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 28 07:13:37 crc kubenswrapper[4946]: I1128 07:13:37.097843 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 28 07:13:37 crc kubenswrapper[4946]: I1128 07:13:37.207068 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/62fdad5e-59c5-4d8f-87da-79b384fb82be-kolla-config\") pod \"memcached-0\" (UID: \"62fdad5e-59c5-4d8f-87da-79b384fb82be\") " pod="openstack/memcached-0" Nov 28 07:13:37 crc kubenswrapper[4946]: I1128 07:13:37.207120 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2blsv\" (UniqueName: \"kubernetes.io/projected/62fdad5e-59c5-4d8f-87da-79b384fb82be-kube-api-access-2blsv\") pod \"memcached-0\" (UID: \"62fdad5e-59c5-4d8f-87da-79b384fb82be\") " pod="openstack/memcached-0" Nov 28 07:13:37 crc kubenswrapper[4946]: I1128 07:13:37.207200 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62fdad5e-59c5-4d8f-87da-79b384fb82be-combined-ca-bundle\") pod \"memcached-0\" (UID: \"62fdad5e-59c5-4d8f-87da-79b384fb82be\") " pod="openstack/memcached-0" Nov 28 07:13:37 crc kubenswrapper[4946]: I1128 07:13:37.207219 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/62fdad5e-59c5-4d8f-87da-79b384fb82be-config-data\") pod \"memcached-0\" (UID: \"62fdad5e-59c5-4d8f-87da-79b384fb82be\") " pod="openstack/memcached-0" Nov 28 07:13:37 crc kubenswrapper[4946]: I1128 07:13:37.207249 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/62fdad5e-59c5-4d8f-87da-79b384fb82be-memcached-tls-certs\") pod \"memcached-0\" (UID: 
\"62fdad5e-59c5-4d8f-87da-79b384fb82be\") " pod="openstack/memcached-0" Nov 28 07:13:37 crc kubenswrapper[4946]: I1128 07:13:37.308697 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62fdad5e-59c5-4d8f-87da-79b384fb82be-combined-ca-bundle\") pod \"memcached-0\" (UID: \"62fdad5e-59c5-4d8f-87da-79b384fb82be\") " pod="openstack/memcached-0" Nov 28 07:13:37 crc kubenswrapper[4946]: I1128 07:13:37.309125 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/62fdad5e-59c5-4d8f-87da-79b384fb82be-config-data\") pod \"memcached-0\" (UID: \"62fdad5e-59c5-4d8f-87da-79b384fb82be\") " pod="openstack/memcached-0" Nov 28 07:13:37 crc kubenswrapper[4946]: I1128 07:13:37.309166 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/62fdad5e-59c5-4d8f-87da-79b384fb82be-memcached-tls-certs\") pod \"memcached-0\" (UID: \"62fdad5e-59c5-4d8f-87da-79b384fb82be\") " pod="openstack/memcached-0" Nov 28 07:13:37 crc kubenswrapper[4946]: I1128 07:13:37.309225 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/62fdad5e-59c5-4d8f-87da-79b384fb82be-kolla-config\") pod \"memcached-0\" (UID: \"62fdad5e-59c5-4d8f-87da-79b384fb82be\") " pod="openstack/memcached-0" Nov 28 07:13:37 crc kubenswrapper[4946]: I1128 07:13:37.309252 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2blsv\" (UniqueName: \"kubernetes.io/projected/62fdad5e-59c5-4d8f-87da-79b384fb82be-kube-api-access-2blsv\") pod \"memcached-0\" (UID: \"62fdad5e-59c5-4d8f-87da-79b384fb82be\") " pod="openstack/memcached-0" Nov 28 07:13:37 crc kubenswrapper[4946]: I1128 07:13:37.310329 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/62fdad5e-59c5-4d8f-87da-79b384fb82be-config-data\") pod \"memcached-0\" (UID: \"62fdad5e-59c5-4d8f-87da-79b384fb82be\") " pod="openstack/memcached-0" Nov 28 07:13:37 crc kubenswrapper[4946]: I1128 07:13:37.311934 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/62fdad5e-59c5-4d8f-87da-79b384fb82be-kolla-config\") pod \"memcached-0\" (UID: \"62fdad5e-59c5-4d8f-87da-79b384fb82be\") " pod="openstack/memcached-0" Nov 28 07:13:37 crc kubenswrapper[4946]: I1128 07:13:37.317376 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62fdad5e-59c5-4d8f-87da-79b384fb82be-combined-ca-bundle\") pod \"memcached-0\" (UID: \"62fdad5e-59c5-4d8f-87da-79b384fb82be\") " pod="openstack/memcached-0" Nov 28 07:13:37 crc kubenswrapper[4946]: I1128 07:13:37.321117 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/62fdad5e-59c5-4d8f-87da-79b384fb82be-memcached-tls-certs\") pod \"memcached-0\" (UID: \"62fdad5e-59c5-4d8f-87da-79b384fb82be\") " pod="openstack/memcached-0" Nov 28 07:13:37 crc kubenswrapper[4946]: I1128 07:13:37.327340 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2blsv\" (UniqueName: \"kubernetes.io/projected/62fdad5e-59c5-4d8f-87da-79b384fb82be-kube-api-access-2blsv\") pod \"memcached-0\" (UID: 
\"62fdad5e-59c5-4d8f-87da-79b384fb82be\") " pod="openstack/memcached-0" Nov 28 07:13:37 crc kubenswrapper[4946]: I1128 07:13:37.331613 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"13d10c33-0ca9-47d5-ac49-19391cebfb39","Type":"ContainerStarted","Data":"3f22782a39a881891b144df25fbafd85c9abad50dd1c8d021984523d44369e27"} Nov 28 07:13:37 crc kubenswrapper[4946]: I1128 07:13:37.427961 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 28 07:13:37 crc kubenswrapper[4946]: I1128 07:13:37.689442 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 28 07:13:37 crc kubenswrapper[4946]: I1128 07:13:37.948450 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 28 07:13:38 crc kubenswrapper[4946]: I1128 07:13:38.379070 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e87edf72-a3a2-4df0-8249-5902f158998d","Type":"ContainerStarted","Data":"e45402f668a49491304a9bc6126942f95b60d84665e172d1983b256b50b631e6"} Nov 28 07:13:38 crc kubenswrapper[4946]: I1128 07:13:38.382333 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"62fdad5e-59c5-4d8f-87da-79b384fb82be","Type":"ContainerStarted","Data":"f388c24b8d434f7346a8e73ac37b7bd99ad101a75b092c600bac08281ec01d0e"} Nov 28 07:13:39 crc kubenswrapper[4946]: I1128 07:13:39.092527 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 07:13:39 crc kubenswrapper[4946]: I1128 07:13:39.105289 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 28 07:13:39 crc kubenswrapper[4946]: I1128 07:13:39.105796 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 07:13:39 crc kubenswrapper[4946]: I1128 07:13:39.111171 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-b4hm2" Nov 28 07:13:39 crc kubenswrapper[4946]: I1128 07:13:39.162971 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vg69\" (UniqueName: \"kubernetes.io/projected/2307ed21-fd67-403c-ae9d-acc822502e42-kube-api-access-9vg69\") pod \"kube-state-metrics-0\" (UID: \"2307ed21-fd67-403c-ae9d-acc822502e42\") " pod="openstack/kube-state-metrics-0" Nov 28 07:13:39 crc kubenswrapper[4946]: I1128 07:13:39.265133 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vg69\" (UniqueName: \"kubernetes.io/projected/2307ed21-fd67-403c-ae9d-acc822502e42-kube-api-access-9vg69\") pod \"kube-state-metrics-0\" (UID: \"2307ed21-fd67-403c-ae9d-acc822502e42\") " pod="openstack/kube-state-metrics-0" Nov 28 07:13:39 crc kubenswrapper[4946]: I1128 07:13:39.292765 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vg69\" (UniqueName: \"kubernetes.io/projected/2307ed21-fd67-403c-ae9d-acc822502e42-kube-api-access-9vg69\") pod \"kube-state-metrics-0\" (UID: \"2307ed21-fd67-403c-ae9d-acc822502e42\") " pod="openstack/kube-state-metrics-0" Nov 28 07:13:39 crc kubenswrapper[4946]: I1128 07:13:39.450823 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 28 07:13:40 crc kubenswrapper[4946]: I1128 07:13:40.036119 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 07:13:42 crc kubenswrapper[4946]: I1128 07:13:42.617484 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-xzzn6"] Nov 28 07:13:42 crc kubenswrapper[4946]: I1128 07:13:42.618680 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xzzn6" Nov 28 07:13:42 crc kubenswrapper[4946]: I1128 07:13:42.625103 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 28 07:13:42 crc kubenswrapper[4946]: I1128 07:13:42.625960 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Nov 28 07:13:42 crc kubenswrapper[4946]: I1128 07:13:42.626266 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-dh7lk" Nov 28 07:13:42 crc kubenswrapper[4946]: I1128 07:13:42.633627 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-gpnz5"] Nov 28 07:13:42 crc kubenswrapper[4946]: I1128 07:13:42.635599 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-gpnz5" Nov 28 07:13:42 crc kubenswrapper[4946]: I1128 07:13:42.646109 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xzzn6"] Nov 28 07:13:42 crc kubenswrapper[4946]: I1128 07:13:42.666773 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-gpnz5"] Nov 28 07:13:42 crc kubenswrapper[4946]: I1128 07:13:42.739074 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5feb905d-9c23-4603-b118-fdc05a237848-var-run\") pod \"ovn-controller-xzzn6\" (UID: \"5feb905d-9c23-4603-b118-fdc05a237848\") " pod="openstack/ovn-controller-xzzn6" Nov 28 07:13:42 crc kubenswrapper[4946]: I1128 07:13:42.739170 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/944abeae-3f1d-4391-a375-b64ed9c17b14-etc-ovs\") pod \"ovn-controller-ovs-gpnz5\" (UID: \"944abeae-3f1d-4391-a375-b64ed9c17b14\") " pod="openstack/ovn-controller-ovs-gpnz5" Nov 28 07:13:42 crc kubenswrapper[4946]: I1128 07:13:42.739209 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/5feb905d-9c23-4603-b118-fdc05a237848-ovn-controller-tls-certs\") pod \"ovn-controller-xzzn6\" (UID: \"5feb905d-9c23-4603-b118-fdc05a237848\") " pod="openstack/ovn-controller-xzzn6" Nov 28 07:13:42 crc kubenswrapper[4946]: I1128 07:13:42.739238 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/944abeae-3f1d-4391-a375-b64ed9c17b14-scripts\") pod \"ovn-controller-ovs-gpnz5\" (UID: \"944abeae-3f1d-4391-a375-b64ed9c17b14\") " pod="openstack/ovn-controller-ovs-gpnz5" Nov 28 07:13:42 crc kubenswrapper[4946]: I1128 07:13:42.739268 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4zb9\" (UniqueName: 
\"kubernetes.io/projected/5feb905d-9c23-4603-b118-fdc05a237848-kube-api-access-s4zb9\") pod \"ovn-controller-xzzn6\" (UID: \"5feb905d-9c23-4603-b118-fdc05a237848\") " pod="openstack/ovn-controller-xzzn6" Nov 28 07:13:42 crc kubenswrapper[4946]: I1128 07:13:42.739291 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/944abeae-3f1d-4391-a375-b64ed9c17b14-var-lib\") pod \"ovn-controller-ovs-gpnz5\" (UID: \"944abeae-3f1d-4391-a375-b64ed9c17b14\") " pod="openstack/ovn-controller-ovs-gpnz5" Nov 28 07:13:42 crc kubenswrapper[4946]: I1128 07:13:42.739314 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5feb905d-9c23-4603-b118-fdc05a237848-var-run-ovn\") pod \"ovn-controller-xzzn6\" (UID: \"5feb905d-9c23-4603-b118-fdc05a237848\") " pod="openstack/ovn-controller-xzzn6" Nov 28 07:13:42 crc kubenswrapper[4946]: I1128 07:13:42.739333 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5feb905d-9c23-4603-b118-fdc05a237848-scripts\") pod \"ovn-controller-xzzn6\" (UID: \"5feb905d-9c23-4603-b118-fdc05a237848\") " pod="openstack/ovn-controller-xzzn6" Nov 28 07:13:42 crc kubenswrapper[4946]: I1128 07:13:42.739359 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/944abeae-3f1d-4391-a375-b64ed9c17b14-var-log\") pod \"ovn-controller-ovs-gpnz5\" (UID: \"944abeae-3f1d-4391-a375-b64ed9c17b14\") " pod="openstack/ovn-controller-ovs-gpnz5" Nov 28 07:13:42 crc kubenswrapper[4946]: I1128 07:13:42.739375 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5feb905d-9c23-4603-b118-fdc05a237848-var-log-ovn\") pod \"ovn-controller-xzzn6\" (UID: \"5feb905d-9c23-4603-b118-fdc05a237848\") " pod="openstack/ovn-controller-xzzn6" Nov 28 07:13:42 crc kubenswrapper[4946]: I1128 07:13:42.739400 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5feb905d-9c23-4603-b118-fdc05a237848-combined-ca-bundle\") pod \"ovn-controller-xzzn6\" (UID: \"5feb905d-9c23-4603-b118-fdc05a237848\") " pod="openstack/ovn-controller-xzzn6" Nov 28 07:13:42 crc kubenswrapper[4946]: I1128 07:13:42.739417 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/944abeae-3f1d-4391-a375-b64ed9c17b14-var-run\") pod \"ovn-controller-ovs-gpnz5\" (UID: \"944abeae-3f1d-4391-a375-b64ed9c17b14\") " pod="openstack/ovn-controller-ovs-gpnz5" Nov 28 07:13:42 crc kubenswrapper[4946]: I1128 07:13:42.739443 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcq7l\" (UniqueName: \"kubernetes.io/projected/944abeae-3f1d-4391-a375-b64ed9c17b14-kube-api-access-tcq7l\") pod \"ovn-controller-ovs-gpnz5\" (UID: \"944abeae-3f1d-4391-a375-b64ed9c17b14\") " pod="openstack/ovn-controller-ovs-gpnz5" Nov 28 07:13:42 crc kubenswrapper[4946]: I1128 07:13:42.841516 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcq7l\" (UniqueName: 
\"kubernetes.io/projected/944abeae-3f1d-4391-a375-b64ed9c17b14-kube-api-access-tcq7l\") pod \"ovn-controller-ovs-gpnz5\" (UID: \"944abeae-3f1d-4391-a375-b64ed9c17b14\") " pod="openstack/ovn-controller-ovs-gpnz5" Nov 28 07:13:42 crc kubenswrapper[4946]: I1128 07:13:42.841608 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5feb905d-9c23-4603-b118-fdc05a237848-var-run\") pod \"ovn-controller-xzzn6\" (UID: \"5feb905d-9c23-4603-b118-fdc05a237848\") " pod="openstack/ovn-controller-xzzn6" Nov 28 07:13:42 crc kubenswrapper[4946]: I1128 07:13:42.841731 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/944abeae-3f1d-4391-a375-b64ed9c17b14-etc-ovs\") pod \"ovn-controller-ovs-gpnz5\" (UID: \"944abeae-3f1d-4391-a375-b64ed9c17b14\") " pod="openstack/ovn-controller-ovs-gpnz5" Nov 28 07:13:42 crc kubenswrapper[4946]: I1128 07:13:42.842427 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5feb905d-9c23-4603-b118-fdc05a237848-var-run\") pod \"ovn-controller-xzzn6\" (UID: \"5feb905d-9c23-4603-b118-fdc05a237848\") " pod="openstack/ovn-controller-xzzn6" Nov 28 07:13:42 crc kubenswrapper[4946]: I1128 07:13:42.842477 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/944abeae-3f1d-4391-a375-b64ed9c17b14-etc-ovs\") pod \"ovn-controller-ovs-gpnz5\" (UID: \"944abeae-3f1d-4391-a375-b64ed9c17b14\") " pod="openstack/ovn-controller-ovs-gpnz5" Nov 28 07:13:42 crc kubenswrapper[4946]: I1128 07:13:42.842565 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/5feb905d-9c23-4603-b118-fdc05a237848-ovn-controller-tls-certs\") pod \"ovn-controller-xzzn6\" (UID: \"5feb905d-9c23-4603-b118-fdc05a237848\") " pod="openstack/ovn-controller-xzzn6" Nov 28 07:13:42 crc kubenswrapper[4946]: I1128 07:13:42.842605 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/944abeae-3f1d-4391-a375-b64ed9c17b14-scripts\") pod \"ovn-controller-ovs-gpnz5\" (UID: \"944abeae-3f1d-4391-a375-b64ed9c17b14\") " pod="openstack/ovn-controller-ovs-gpnz5" Nov 28 07:13:42 crc kubenswrapper[4946]: I1128 07:13:42.842649 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4zb9\" (UniqueName: \"kubernetes.io/projected/5feb905d-9c23-4603-b118-fdc05a237848-kube-api-access-s4zb9\") pod \"ovn-controller-xzzn6\" (UID: \"5feb905d-9c23-4603-b118-fdc05a237848\") " pod="openstack/ovn-controller-xzzn6" Nov 28 07:13:42 crc kubenswrapper[4946]: I1128 07:13:42.842683 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/944abeae-3f1d-4391-a375-b64ed9c17b14-var-lib\") pod \"ovn-controller-ovs-gpnz5\" (UID: \"944abeae-3f1d-4391-a375-b64ed9c17b14\") " pod="openstack/ovn-controller-ovs-gpnz5" Nov 28 07:13:42 crc kubenswrapper[4946]: I1128 07:13:42.842722 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5feb905d-9c23-4603-b118-fdc05a237848-var-run-ovn\") pod \"ovn-controller-xzzn6\" (UID: \"5feb905d-9c23-4603-b118-fdc05a237848\") " pod="openstack/ovn-controller-xzzn6" Nov 28 07:13:42 crc 
kubenswrapper[4946]: I1128 07:13:42.842744 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5feb905d-9c23-4603-b118-fdc05a237848-scripts\") pod \"ovn-controller-xzzn6\" (UID: \"5feb905d-9c23-4603-b118-fdc05a237848\") " pod="openstack/ovn-controller-xzzn6" Nov 28 07:13:42 crc kubenswrapper[4946]: I1128 07:13:42.842766 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/944abeae-3f1d-4391-a375-b64ed9c17b14-var-log\") pod \"ovn-controller-ovs-gpnz5\" (UID: \"944abeae-3f1d-4391-a375-b64ed9c17b14\") " pod="openstack/ovn-controller-ovs-gpnz5" Nov 28 07:13:42 crc kubenswrapper[4946]: I1128 07:13:42.842784 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5feb905d-9c23-4603-b118-fdc05a237848-var-log-ovn\") pod \"ovn-controller-xzzn6\" (UID: \"5feb905d-9c23-4603-b118-fdc05a237848\") " pod="openstack/ovn-controller-xzzn6" Nov 28 07:13:42 crc kubenswrapper[4946]: I1128 07:13:42.842815 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5feb905d-9c23-4603-b118-fdc05a237848-combined-ca-bundle\") pod \"ovn-controller-xzzn6\" (UID: \"5feb905d-9c23-4603-b118-fdc05a237848\") " pod="openstack/ovn-controller-xzzn6" Nov 28 07:13:42 crc kubenswrapper[4946]: I1128 07:13:42.842841 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/944abeae-3f1d-4391-a375-b64ed9c17b14-var-run\") pod \"ovn-controller-ovs-gpnz5\" (UID: \"944abeae-3f1d-4391-a375-b64ed9c17b14\") " pod="openstack/ovn-controller-ovs-gpnz5" Nov 28 07:13:42 crc kubenswrapper[4946]: I1128 07:13:42.842931 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/944abeae-3f1d-4391-a375-b64ed9c17b14-var-run\") pod \"ovn-controller-ovs-gpnz5\" (UID: \"944abeae-3f1d-4391-a375-b64ed9c17b14\") " pod="openstack/ovn-controller-ovs-gpnz5" Nov 28 07:13:42 crc kubenswrapper[4946]: I1128 07:13:42.844046 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5feb905d-9c23-4603-b118-fdc05a237848-var-run-ovn\") pod \"ovn-controller-xzzn6\" (UID: \"5feb905d-9c23-4603-b118-fdc05a237848\") " pod="openstack/ovn-controller-xzzn6" Nov 28 07:13:42 crc kubenswrapper[4946]: I1128 07:13:42.844241 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/944abeae-3f1d-4391-a375-b64ed9c17b14-var-lib\") pod \"ovn-controller-ovs-gpnz5\" (UID: \"944abeae-3f1d-4391-a375-b64ed9c17b14\") " pod="openstack/ovn-controller-ovs-gpnz5" Nov 28 07:13:42 crc kubenswrapper[4946]: I1128 07:13:42.845137 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/944abeae-3f1d-4391-a375-b64ed9c17b14-var-log\") pod \"ovn-controller-ovs-gpnz5\" (UID: \"944abeae-3f1d-4391-a375-b64ed9c17b14\") " pod="openstack/ovn-controller-ovs-gpnz5" Nov 28 07:13:42 crc kubenswrapper[4946]: I1128 07:13:42.845350 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5feb905d-9c23-4603-b118-fdc05a237848-var-log-ovn\") pod \"ovn-controller-xzzn6\" (UID: 
\"5feb905d-9c23-4603-b118-fdc05a237848\") " pod="openstack/ovn-controller-xzzn6" Nov 28 07:13:42 crc kubenswrapper[4946]: I1128 07:13:42.848136 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5feb905d-9c23-4603-b118-fdc05a237848-scripts\") pod \"ovn-controller-xzzn6\" (UID: \"5feb905d-9c23-4603-b118-fdc05a237848\") " pod="openstack/ovn-controller-xzzn6" Nov 28 07:13:42 crc kubenswrapper[4946]: I1128 07:13:42.849818 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/944abeae-3f1d-4391-a375-b64ed9c17b14-scripts\") pod \"ovn-controller-ovs-gpnz5\" (UID: \"944abeae-3f1d-4391-a375-b64ed9c17b14\") " pod="openstack/ovn-controller-ovs-gpnz5" Nov 28 07:13:42 crc kubenswrapper[4946]: I1128 07:13:42.850045 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/5feb905d-9c23-4603-b118-fdc05a237848-ovn-controller-tls-certs\") pod \"ovn-controller-xzzn6\" (UID: \"5feb905d-9c23-4603-b118-fdc05a237848\") " pod="openstack/ovn-controller-xzzn6" Nov 28 07:13:42 crc kubenswrapper[4946]: I1128 07:13:42.860354 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5feb905d-9c23-4603-b118-fdc05a237848-combined-ca-bundle\") pod \"ovn-controller-xzzn6\" (UID: \"5feb905d-9c23-4603-b118-fdc05a237848\") " pod="openstack/ovn-controller-xzzn6" Nov 28 07:13:42 crc kubenswrapper[4946]: I1128 07:13:42.866598 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcq7l\" (UniqueName: \"kubernetes.io/projected/944abeae-3f1d-4391-a375-b64ed9c17b14-kube-api-access-tcq7l\") pod \"ovn-controller-ovs-gpnz5\" (UID: \"944abeae-3f1d-4391-a375-b64ed9c17b14\") " pod="openstack/ovn-controller-ovs-gpnz5" Nov 28 07:13:42 crc kubenswrapper[4946]: I1128 07:13:42.867372 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4zb9\" (UniqueName: \"kubernetes.io/projected/5feb905d-9c23-4603-b118-fdc05a237848-kube-api-access-s4zb9\") pod \"ovn-controller-xzzn6\" (UID: \"5feb905d-9c23-4603-b118-fdc05a237848\") " pod="openstack/ovn-controller-xzzn6" Nov 28 07:13:42 crc kubenswrapper[4946]: I1128 07:13:42.942172 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xzzn6" Nov 28 07:13:42 crc kubenswrapper[4946]: I1128 07:13:42.972090 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-gpnz5" Nov 28 07:13:43 crc kubenswrapper[4946]: I1128 07:13:43.513348 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 28 07:13:43 crc kubenswrapper[4946]: I1128 07:13:43.516603 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 28 07:13:43 crc kubenswrapper[4946]: I1128 07:13:43.522975 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Nov 28 07:13:43 crc kubenswrapper[4946]: I1128 07:13:43.523192 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-mqtx2" Nov 28 07:13:43 crc kubenswrapper[4946]: I1128 07:13:43.522975 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Nov 28 07:13:43 crc kubenswrapper[4946]: I1128 07:13:43.523416 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 28 07:13:43 crc kubenswrapper[4946]: I1128 07:13:43.523429 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 28 07:13:43 crc kubenswrapper[4946]: I1128 07:13:43.551567 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 28 07:13:43 crc kubenswrapper[4946]: I1128 07:13:43.658717 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/926ec930-a8f3-4c87-9963-39779e7309cc-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"926ec930-a8f3-4c87-9963-39779e7309cc\") " pod="openstack/ovsdbserver-nb-0" Nov 28 07:13:43 crc kubenswrapper[4946]: I1128 07:13:43.658820 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/926ec930-a8f3-4c87-9963-39779e7309cc-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"926ec930-a8f3-4c87-9963-39779e7309cc\") " pod="openstack/ovsdbserver-nb-0" Nov 28 07:13:43 crc kubenswrapper[4946]: I1128 07:13:43.658863 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/926ec930-a8f3-4c87-9963-39779e7309cc-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"926ec930-a8f3-4c87-9963-39779e7309cc\") " pod="openstack/ovsdbserver-nb-0" Nov 28 07:13:43 crc kubenswrapper[4946]: I1128 07:13:43.658893 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"926ec930-a8f3-4c87-9963-39779e7309cc\") " pod="openstack/ovsdbserver-nb-0" Nov 28 07:13:43 crc kubenswrapper[4946]: I1128 07:13:43.658914 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/926ec930-a8f3-4c87-9963-39779e7309cc-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"926ec930-a8f3-4c87-9963-39779e7309cc\") " pod="openstack/ovsdbserver-nb-0" Nov 28 07:13:43 crc kubenswrapper[4946]: I1128 07:13:43.659078 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qtt6\" (UniqueName: \"kubernetes.io/projected/926ec930-a8f3-4c87-9963-39779e7309cc-kube-api-access-4qtt6\") pod \"ovsdbserver-nb-0\" (UID: \"926ec930-a8f3-4c87-9963-39779e7309cc\") " pod="openstack/ovsdbserver-nb-0" Nov 28 07:13:43 crc kubenswrapper[4946]: I1128 07:13:43.659148 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/926ec930-a8f3-4c87-9963-39779e7309cc-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"926ec930-a8f3-4c87-9963-39779e7309cc\") " pod="openstack/ovsdbserver-nb-0" Nov 28 07:13:43 crc kubenswrapper[4946]: I1128 07:13:43.659367 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/926ec930-a8f3-4c87-9963-39779e7309cc-config\") pod \"ovsdbserver-nb-0\" (UID: \"926ec930-a8f3-4c87-9963-39779e7309cc\") " pod="openstack/ovsdbserver-nb-0" Nov 28 07:13:43 crc kubenswrapper[4946]: I1128 07:13:43.761268 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qtt6\" (UniqueName: \"kubernetes.io/projected/926ec930-a8f3-4c87-9963-39779e7309cc-kube-api-access-4qtt6\") pod \"ovsdbserver-nb-0\" (UID: \"926ec930-a8f3-4c87-9963-39779e7309cc\") " pod="openstack/ovsdbserver-nb-0" Nov 28 07:13:43 crc kubenswrapper[4946]: I1128 07:13:43.761329 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/926ec930-a8f3-4c87-9963-39779e7309cc-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"926ec930-a8f3-4c87-9963-39779e7309cc\") " pod="openstack/ovsdbserver-nb-0" Nov 28 07:13:43 crc kubenswrapper[4946]: I1128 07:13:43.761363 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/926ec930-a8f3-4c87-9963-39779e7309cc-config\") pod \"ovsdbserver-nb-0\" (UID: \"926ec930-a8f3-4c87-9963-39779e7309cc\") " pod="openstack/ovsdbserver-nb-0" Nov 28 07:13:43 crc kubenswrapper[4946]: I1128 07:13:43.761416 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/926ec930-a8f3-4c87-9963-39779e7309cc-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"926ec930-a8f3-4c87-9963-39779e7309cc\") " pod="openstack/ovsdbserver-nb-0" Nov 28 07:13:43 crc kubenswrapper[4946]: I1128 07:13:43.761502 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/926ec930-a8f3-4c87-9963-39779e7309cc-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"926ec930-a8f3-4c87-9963-39779e7309cc\") " pod="openstack/ovsdbserver-nb-0" Nov 28 07:13:43 crc kubenswrapper[4946]: I1128 07:13:43.761540 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/926ec930-a8f3-4c87-9963-39779e7309cc-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"926ec930-a8f3-4c87-9963-39779e7309cc\") " pod="openstack/ovsdbserver-nb-0" Nov 28 07:13:43 crc kubenswrapper[4946]: I1128 07:13:43.761574 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"926ec930-a8f3-4c87-9963-39779e7309cc\") " pod="openstack/ovsdbserver-nb-0" Nov 28 07:13:43 crc kubenswrapper[4946]: I1128 07:13:43.761619 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/926ec930-a8f3-4c87-9963-39779e7309cc-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"926ec930-a8f3-4c87-9963-39779e7309cc\") " pod="openstack/ovsdbserver-nb-0" Nov 28 07:13:43 crc kubenswrapper[4946]: I1128 
07:13:43.762114 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/926ec930-a8f3-4c87-9963-39779e7309cc-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"926ec930-a8f3-4c87-9963-39779e7309cc\") " pod="openstack/ovsdbserver-nb-0" Nov 28 07:13:43 crc kubenswrapper[4946]: I1128 07:13:43.762202 4946 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"926ec930-a8f3-4c87-9963-39779e7309cc\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/ovsdbserver-nb-0" Nov 28 07:13:43 crc kubenswrapper[4946]: I1128 07:13:43.762516 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/926ec930-a8f3-4c87-9963-39779e7309cc-config\") pod \"ovsdbserver-nb-0\" (UID: \"926ec930-a8f3-4c87-9963-39779e7309cc\") " pod="openstack/ovsdbserver-nb-0" Nov 28 07:13:43 crc kubenswrapper[4946]: I1128 07:13:43.763240 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/926ec930-a8f3-4c87-9963-39779e7309cc-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"926ec930-a8f3-4c87-9963-39779e7309cc\") " pod="openstack/ovsdbserver-nb-0" Nov 28 07:13:43 crc kubenswrapper[4946]: I1128 07:13:43.768060 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/926ec930-a8f3-4c87-9963-39779e7309cc-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"926ec930-a8f3-4c87-9963-39779e7309cc\") " pod="openstack/ovsdbserver-nb-0" Nov 28 07:13:43 crc kubenswrapper[4946]: I1128 07:13:43.770311 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/926ec930-a8f3-4c87-9963-39779e7309cc-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"926ec930-a8f3-4c87-9963-39779e7309cc\") " pod="openstack/ovsdbserver-nb-0" Nov 28 07:13:43 crc kubenswrapper[4946]: I1128 07:13:43.773054 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/926ec930-a8f3-4c87-9963-39779e7309cc-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"926ec930-a8f3-4c87-9963-39779e7309cc\") " pod="openstack/ovsdbserver-nb-0" Nov 28 07:13:43 crc kubenswrapper[4946]: I1128 07:13:43.780073 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qtt6\" (UniqueName: \"kubernetes.io/projected/926ec930-a8f3-4c87-9963-39779e7309cc-kube-api-access-4qtt6\") pod \"ovsdbserver-nb-0\" (UID: \"926ec930-a8f3-4c87-9963-39779e7309cc\") " pod="openstack/ovsdbserver-nb-0" Nov 28 07:13:43 crc kubenswrapper[4946]: I1128 07:13:43.825499 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"926ec930-a8f3-4c87-9963-39779e7309cc\") " pod="openstack/ovsdbserver-nb-0" Nov 28 07:13:43 crc kubenswrapper[4946]: I1128 07:13:43.850789 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Nov 28 07:13:46 crc kubenswrapper[4946]: I1128 07:13:46.253795 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Nov 28 07:13:46 crc kubenswrapper[4946]: I1128 07:13:46.262722 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Nov 28 07:13:46 crc kubenswrapper[4946]: I1128 07:13:46.265351 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Nov 28 07:13:46 crc kubenswrapper[4946]: I1128 07:13:46.265714 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-g7lq9"
Nov 28 07:13:46 crc kubenswrapper[4946]: I1128 07:13:46.266046 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Nov 28 07:13:46 crc kubenswrapper[4946]: I1128 07:13:46.267029 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Nov 28 07:13:46 crc kubenswrapper[4946]: I1128 07:13:46.279157 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Nov 28 07:13:46 crc kubenswrapper[4946]: I1128 07:13:46.331398 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt9kr\" (UniqueName: \"kubernetes.io/projected/a2e2a06b-b43f-4a3b-985d-964e177c3c06-kube-api-access-wt9kr\") pod \"ovsdbserver-sb-0\" (UID: \"a2e2a06b-b43f-4a3b-985d-964e177c3c06\") " pod="openstack/ovsdbserver-sb-0"
Nov 28 07:13:46 crc kubenswrapper[4946]: I1128 07:13:46.331483 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2e2a06b-b43f-4a3b-985d-964e177c3c06-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a2e2a06b-b43f-4a3b-985d-964e177c3c06\") " pod="openstack/ovsdbserver-sb-0"
Nov 28 07:13:46 crc kubenswrapper[4946]: I1128 07:13:46.331508 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2e2a06b-b43f-4a3b-985d-964e177c3c06-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a2e2a06b-b43f-4a3b-985d-964e177c3c06\") " pod="openstack/ovsdbserver-sb-0"
Nov 28 07:13:46 crc kubenswrapper[4946]: I1128 07:13:46.331537 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2e2a06b-b43f-4a3b-985d-964e177c3c06-config\") pod \"ovsdbserver-sb-0\" (UID: \"a2e2a06b-b43f-4a3b-985d-964e177c3c06\") " pod="openstack/ovsdbserver-sb-0"
Nov 28 07:13:46 crc kubenswrapper[4946]: I1128 07:13:46.331617 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a2e2a06b-b43f-4a3b-985d-964e177c3c06-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a2e2a06b-b43f-4a3b-985d-964e177c3c06\") " pod="openstack/ovsdbserver-sb-0"
Nov 28 07:13:46 crc kubenswrapper[4946]: I1128 07:13:46.331646 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a2e2a06b-b43f-4a3b-985d-964e177c3c06-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a2e2a06b-b43f-4a3b-985d-964e177c3c06\") " pod="openstack/ovsdbserver-sb-0"
Nov 28 07:13:46 crc kubenswrapper[4946]: I1128 07:13:46.331681 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a2e2a06b-b43f-4a3b-985d-964e177c3c06\") " pod="openstack/ovsdbserver-sb-0"
Nov 28 07:13:46 crc kubenswrapper[4946]: I1128 07:13:46.331717 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2e2a06b-b43f-4a3b-985d-964e177c3c06-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a2e2a06b-b43f-4a3b-985d-964e177c3c06\") " pod="openstack/ovsdbserver-sb-0"
Nov 28 07:13:46 crc kubenswrapper[4946]: I1128 07:13:46.435309 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wt9kr\" (UniqueName: \"kubernetes.io/projected/a2e2a06b-b43f-4a3b-985d-964e177c3c06-kube-api-access-wt9kr\") pod \"ovsdbserver-sb-0\" (UID: \"a2e2a06b-b43f-4a3b-985d-964e177c3c06\") " pod="openstack/ovsdbserver-sb-0"
Nov 28 07:13:46 crc kubenswrapper[4946]: I1128 07:13:46.435405 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2e2a06b-b43f-4a3b-985d-964e177c3c06-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a2e2a06b-b43f-4a3b-985d-964e177c3c06\") " pod="openstack/ovsdbserver-sb-0"
Nov 28 07:13:46 crc kubenswrapper[4946]: I1128 07:13:46.435445 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2e2a06b-b43f-4a3b-985d-964e177c3c06-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a2e2a06b-b43f-4a3b-985d-964e177c3c06\") " pod="openstack/ovsdbserver-sb-0"
Nov 28 07:13:46 crc kubenswrapper[4946]: I1128 07:13:46.435499 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2e2a06b-b43f-4a3b-985d-964e177c3c06-config\") pod \"ovsdbserver-sb-0\" (UID: \"a2e2a06b-b43f-4a3b-985d-964e177c3c06\") " pod="openstack/ovsdbserver-sb-0"
Nov 28 07:13:46 crc kubenswrapper[4946]: I1128 07:13:46.435606 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a2e2a06b-b43f-4a3b-985d-964e177c3c06-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a2e2a06b-b43f-4a3b-985d-964e177c3c06\") " pod="openstack/ovsdbserver-sb-0"
Nov 28 07:13:46 crc kubenswrapper[4946]: I1128 07:13:46.435656 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a2e2a06b-b43f-4a3b-985d-964e177c3c06-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a2e2a06b-b43f-4a3b-985d-964e177c3c06\") " pod="openstack/ovsdbserver-sb-0"
Nov 28 07:13:46 crc kubenswrapper[4946]: I1128 07:13:46.435692 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a2e2a06b-b43f-4a3b-985d-964e177c3c06\") " pod="openstack/ovsdbserver-sb-0"
Nov 28 07:13:46 crc kubenswrapper[4946]: I1128 07:13:46.435733 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2e2a06b-b43f-4a3b-985d-964e177c3c06-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a2e2a06b-b43f-4a3b-985d-964e177c3c06\") " pod="openstack/ovsdbserver-sb-0"
Nov 28 07:13:46 crc kubenswrapper[4946]: I1128 07:13:46.436290 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a2e2a06b-b43f-4a3b-985d-964e177c3c06-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a2e2a06b-b43f-4a3b-985d-964e177c3c06\") " pod="openstack/ovsdbserver-sb-0"
Nov 28 07:13:46 crc kubenswrapper[4946]: I1128 07:13:46.436820 4946 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a2e2a06b-b43f-4a3b-985d-964e177c3c06\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/ovsdbserver-sb-0"
Nov 28 07:13:46 crc kubenswrapper[4946]: I1128 07:13:46.437766 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a2e2a06b-b43f-4a3b-985d-964e177c3c06-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a2e2a06b-b43f-4a3b-985d-964e177c3c06\") " pod="openstack/ovsdbserver-sb-0"
Nov 28 07:13:46 crc kubenswrapper[4946]: I1128 07:13:46.438020 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2e2a06b-b43f-4a3b-985d-964e177c3c06-config\") pod \"ovsdbserver-sb-0\" (UID: \"a2e2a06b-b43f-4a3b-985d-964e177c3c06\") " pod="openstack/ovsdbserver-sb-0"
Nov 28 07:13:46 crc kubenswrapper[4946]: I1128 07:13:46.451755 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2e2a06b-b43f-4a3b-985d-964e177c3c06-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a2e2a06b-b43f-4a3b-985d-964e177c3c06\") " pod="openstack/ovsdbserver-sb-0"
Nov 28 07:13:46 crc kubenswrapper[4946]: I1128 07:13:46.453405 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2e2a06b-b43f-4a3b-985d-964e177c3c06-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a2e2a06b-b43f-4a3b-985d-964e177c3c06\") " pod="openstack/ovsdbserver-sb-0"
Nov 28 07:13:46 crc kubenswrapper[4946]: I1128 07:13:46.460713 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a2e2a06b-b43f-4a3b-985d-964e177c3c06\") " pod="openstack/ovsdbserver-sb-0"
Nov 28 07:13:46 crc kubenswrapper[4946]: I1128 07:13:46.461956 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2e2a06b-b43f-4a3b-985d-964e177c3c06-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a2e2a06b-b43f-4a3b-985d-964e177c3c06\") " pod="openstack/ovsdbserver-sb-0"
Nov 28 07:13:46 crc kubenswrapper[4946]: I1128 07:13:46.475504 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt9kr\" (UniqueName: \"kubernetes.io/projected/a2e2a06b-b43f-4a3b-985d-964e177c3c06-kube-api-access-wt9kr\") pod \"ovsdbserver-sb-0\" (UID: \"a2e2a06b-b43f-4a3b-985d-964e177c3c06\") " pod="openstack/ovsdbserver-sb-0"
Nov 28 07:13:46 crc kubenswrapper[4946]: I1128 07:13:46.594942 4946 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Nov 28 07:13:47 crc kubenswrapper[4946]: I1128 07:13:47.500948 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2307ed21-fd67-403c-ae9d-acc822502e42","Type":"ContainerStarted","Data":"f79bc806764927e9705860d7bfe49d6ea7c1e6ce03da5f3ca44e08fd7e344a8c"}
Nov 28 07:13:59 crc kubenswrapper[4946]: E1128 07:13:59.700524 4946 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:4218330ae90f65f4a2c1d93334812c4d04a4ed1d46013269252aba16e1138627"
Nov 28 07:13:59 crc kubenswrapper[4946]: E1128 07:13:59.701493 4946 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:4218330ae90f65f4a2c1d93334812c4d04a4ed1d46013269252aba16e1138627,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c9nlp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-557f57d995-g9hls_openstack(909ce070-90e9-424e-89bd-6825071c0d2e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 28 07:13:59 crc kubenswrapper[4946]: E1128 07:13:59.702635 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-557f57d995-g9hls" podUID="909ce070-90e9-424e-89bd-6825071c0d2e"
Nov 28 07:14:01 crc kubenswrapper[4946]: E1128 07:14:01.438853 4946 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb@sha256:5526be2fd8d8cdc035078fdbcb7de6b02c081147295a13f2b1e50e281ef17f52"
Nov 28 07:14:01 crc kubenswrapper[4946]: E1128 07:14:01.439482 4946 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:5526be2fd8d8cdc035078fdbcb7de6b02c081147295a13f2b1e50e281ef17f52,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cf9c5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(13d10c33-0ca9-47d5-ac49-19391cebfb39): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 28 07:14:01 crc kubenswrapper[4946]: E1128 07:14:01.440912 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="13d10c33-0ca9-47d5-ac49-19391cebfb39"
Nov 28 07:14:01 crc kubenswrapper[4946]: E1128 07:14:01.659351 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb@sha256:5526be2fd8d8cdc035078fdbcb7de6b02c081147295a13f2b1e50e281ef17f52\\\"\"" pod="openstack/openstack-galera-0" podUID="13d10c33-0ca9-47d5-ac49-19391cebfb39"
Nov 28 07:14:02 crc kubenswrapper[4946]: E1128 07:14:02.479524 4946 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:c64e18fe0ecb6900e763e6cf6be0ca8f71b5c8af9e078a543238a505cf88ae46"
Nov 28 07:14:02 crc kubenswrapper[4946]: E1128 07:14:02.480396 4946 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:c64e18fe0ecb6900e763e6cf6be0ca8f71b5c8af9e078a543238a505cf88ae46,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8r5q4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(3521840d-60d0-450c-8c05-7e2ad0fc4e97): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 28 07:14:02 crc kubenswrapper[4946]: E1128 07:14:02.481639 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="3521840d-60d0-450c-8c05-7e2ad0fc4e97"
Nov 28 07:14:02 crc kubenswrapper[4946]: E1128 07:14:02.666416 4946
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:c64e18fe0ecb6900e763e6cf6be0ca8f71b5c8af9e078a543238a505cf88ae46\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="3521840d-60d0-450c-8c05-7e2ad0fc4e97"
Nov 28 07:14:03 crc kubenswrapper[4946]: E1128 07:14:03.163340 4946 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached@sha256:0e00f2303db35259ffcd3d034f38ab9eb4cb089e268305a4165b5f86a18fce6c"
Nov 28 07:14:03 crc kubenswrapper[4946]: E1128 07:14:03.163697 4946 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached@sha256:0e00f2303db35259ffcd3d034f38ab9eb4cb089e268305a4165b5f86a18fce6c,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n6dh56h5dbh675h684h75h646h5c9h598h666h55bh5cfh664h5d5h67fh659h666h6fh79h58fh5d4h9bh5c7h5c7h74h5d4h649h654h56h7bh5c8hc5q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2blsv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(62fdad5e-59c5-4d8f-87da-79b384fb82be): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 28 07:14:03 crc kubenswrapper[4946]: E1128 07:14:03.165184 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="62fdad5e-59c5-4d8f-87da-79b384fb82be"
Nov 28 07:14:03 crc kubenswrapper[4946]: E1128 07:14:03.170347 4946 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb@sha256:5526be2fd8d8cdc035078fdbcb7de6b02c081147295a13f2b1e50e281ef17f52"
Nov 28 07:14:03 crc kubenswrapper[4946]: E1128 07:14:03.170494 4946 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:5526be2fd8d8cdc035078fdbcb7de6b02c081147295a13f2b1e50e281ef17f52,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l9pfn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(e87edf72-a3a2-4df0-8249-5902f158998d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 28 07:14:03 crc kubenswrapper[4946]: E1128 07:14:03.171726 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="e87edf72-a3a2-4df0-8249-5902f158998d"
Nov 28 07:14:03 crc kubenswrapper[4946]: E1128 07:14:03.199153 4946 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:4218330ae90f65f4a2c1d93334812c4d04a4ed1d46013269252aba16e1138627"
Nov 28 07:14:03 crc kubenswrapper[4946]: E1128 07:14:03.199501 4946 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:4218330ae90f65f4a2c1d93334812c4d04a4ed1d46013269252aba16e1138627,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mpcd5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-8446fd7c75-kwwmc_openstack(6cbb552d-dc8c-407a-82b2-248bdfb5efe3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 28 07:14:03 crc kubenswrapper[4946]: E1128 07:14:03.200729 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-8446fd7c75-kwwmc" podUID="6cbb552d-dc8c-407a-82b2-248bdfb5efe3"
Nov 28 07:14:03 crc kubenswrapper[4946]: E1128 07:14:03.205685 4946 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:4218330ae90f65f4a2c1d93334812c4d04a4ed1d46013269252aba16e1138627"
Nov 28 07:14:03 crc kubenswrapper[4946]: E1128 07:14:03.205971 4946 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:4218330ae90f65f4a2c1d93334812c4d04a4ed1d46013269252aba16e1138627,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-956fk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57dc4c6697-lx6ht_openstack(1ce6fbc6-22c3-4e15-97c0-690b30c63673): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 28 07:14:03 crc kubenswrapper[4946]: E1128 07:14:03.207198 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57dc4c6697-lx6ht" podUID="1ce6fbc6-22c3-4e15-97c0-690b30c63673"
Nov 28 07:14:03 crc kubenswrapper[4946]: E1128 07:14:03.222019 4946 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:c64e18fe0ecb6900e763e6cf6be0ca8f71b5c8af9e078a543238a505cf88ae46"
Nov 28 07:14:03 crc kubenswrapper[4946]: E1128 07:14:03.222570 4946 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:c64e18fe0ecb6900e763e6cf6be0ca8f71b5c8af9e078a543238a505cf88ae46,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tgx22,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(59fdca77-b333-44be-ab8c-96a2f4bcc340): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 28 07:14:03 crc kubenswrapper[4946]: E1128 07:14:03.223785 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="59fdca77-b333-44be-ab8c-96a2f4bcc340"
Nov 28 07:14:03 crc kubenswrapper[4946]: E1128 07:14:03.261811 4946 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:4218330ae90f65f4a2c1d93334812c4d04a4ed1d46013269252aba16e1138627"
Nov 28 07:14:03 crc kubenswrapper[4946]: E1128 07:14:03.262039 4946 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:4218330ae90f65f4a2c1d93334812c4d04a4ed1d46013269252aba16e1138627,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b6tmz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-766fdc659c-gbcph_openstack(1a041fa4-f481-403d-95ac-c2f32f1f5980): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 28 07:14:03 crc kubenswrapper[4946]: E1128 07:14:03.263436 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-766fdc659c-gbcph" podUID="1a041fa4-f481-403d-95ac-c2f32f1f5980"
Nov 28 07:14:03 crc kubenswrapper[4946]: I1128 07:14:03.282679 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-557f57d995-g9hls"
Nov 28 07:14:03 crc kubenswrapper[4946]: I1128 07:14:03.383840 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/909ce070-90e9-424e-89bd-6825071c0d2e-config\") pod \"909ce070-90e9-424e-89bd-6825071c0d2e\" (UID: \"909ce070-90e9-424e-89bd-6825071c0d2e\") "
Nov 28 07:14:03 crc kubenswrapper[4946]: I1128 07:14:03.387384 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9nlp\" (UniqueName: \"kubernetes.io/projected/909ce070-90e9-424e-89bd-6825071c0d2e-kube-api-access-c9nlp\") pod \"909ce070-90e9-424e-89bd-6825071c0d2e\" (UID: \"909ce070-90e9-424e-89bd-6825071c0d2e\") "
Nov 28 07:14:03 crc kubenswrapper[4946]: I1128 07:14:03.386902 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/909ce070-90e9-424e-89bd-6825071c0d2e-config" (OuterVolumeSpecName: "config") pod "909ce070-90e9-424e-89bd-6825071c0d2e" (UID: "909ce070-90e9-424e-89bd-6825071c0d2e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 07:14:03 crc kubenswrapper[4946]: I1128 07:14:03.399476 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/909ce070-90e9-424e-89bd-6825071c0d2e-kube-api-access-c9nlp" (OuterVolumeSpecName: "kube-api-access-c9nlp") pod "909ce070-90e9-424e-89bd-6825071c0d2e" (UID: "909ce070-90e9-424e-89bd-6825071c0d2e"). InnerVolumeSpecName "kube-api-access-c9nlp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 07:14:03 crc kubenswrapper[4946]: I1128 07:14:03.490015 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9nlp\" (UniqueName: \"kubernetes.io/projected/909ce070-90e9-424e-89bd-6825071c0d2e-kube-api-access-c9nlp\") on node \"crc\" DevicePath \"\""
Nov 28 07:14:03 crc kubenswrapper[4946]: I1128 07:14:03.490057 4946 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/909ce070-90e9-424e-89bd-6825071c0d2e-config\") on node \"crc\" DevicePath \"\""
Nov 28 07:14:03 crc kubenswrapper[4946]: I1128 07:14:03.680134 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557f57d995-g9hls" event={"ID":"909ce070-90e9-424e-89bd-6825071c0d2e","Type":"ContainerDied","Data":"d018bf96c9ce2b0b28a334a2d4f56b05f8888a82ccc840e2516c844bf2e75a14"}
Nov 28 07:14:03 crc kubenswrapper[4946]: I1128 07:14:03.680397 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-557f57d995-g9hls"
Nov 28 07:14:03 crc kubenswrapper[4946]: E1128 07:14:03.685720 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached@sha256:0e00f2303db35259ffcd3d034f38ab9eb4cb089e268305a4165b5f86a18fce6c\\\"\"" pod="openstack/memcached-0" podUID="62fdad5e-59c5-4d8f-87da-79b384fb82be"
Nov 28 07:14:03 crc kubenswrapper[4946]: E1128 07:14:03.685782 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:4218330ae90f65f4a2c1d93334812c4d04a4ed1d46013269252aba16e1138627\\\"\"" pod="openstack/dnsmasq-dns-8446fd7c75-kwwmc" podUID="6cbb552d-dc8c-407a-82b2-248bdfb5efe3"
Nov 28 07:14:03 crc kubenswrapper[4946]: E1128 07:14:03.685757 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:4218330ae90f65f4a2c1d93334812c4d04a4ed1d46013269252aba16e1138627\\\"\"" pod="openstack/dnsmasq-dns-57dc4c6697-lx6ht" podUID="1ce6fbc6-22c3-4e15-97c0-690b30c63673"
Nov 28 07:14:03 crc kubenswrapper[4946]: E1128 07:14:03.686669 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:c64e18fe0ecb6900e763e6cf6be0ca8f71b5c8af9e078a543238a505cf88ae46\\\"\"" pod="openstack/rabbitmq-server-0" podUID="59fdca77-b333-44be-ab8c-96a2f4bcc340"
Nov 28 07:14:03 crc kubenswrapper[4946]: E1128 07:14:03.686286 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb@sha256:5526be2fd8d8cdc035078fdbcb7de6b02c081147295a13f2b1e50e281ef17f52\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="e87edf72-a3a2-4df0-8249-5902f158998d"
Nov 28 07:14:03 crc kubenswrapper[4946]: I1128 07:14:03.696734 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xzzn6"]
Nov 28 07:14:03 crc kubenswrapper[4946]: I1128 07:14:03.912155 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-557f57d995-g9hls"]
Nov 28 07:14:03 crc kubenswrapper[4946]: I1128 07:14:03.923645 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-557f57d995-g9hls"]
Nov 28 07:14:03 crc kubenswrapper[4946]: I1128 07:14:03.936322 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Nov 28 07:14:04 crc kubenswrapper[4946]: I1128 07:14:04.002771 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="909ce070-90e9-424e-89bd-6825071c0d2e" path="/var/lib/kubelet/pods/909ce070-90e9-424e-89bd-6825071c0d2e/volumes"
Nov 28 07:14:04 crc kubenswrapper[4946]: I1128 07:14:04.271129 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-766fdc659c-gbcph"
Nov 28 07:14:04 crc kubenswrapper[4946]: I1128 07:14:04.407169 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a041fa4-f481-403d-95ac-c2f32f1f5980-config\") pod \"1a041fa4-f481-403d-95ac-c2f32f1f5980\" (UID: \"1a041fa4-f481-403d-95ac-c2f32f1f5980\") "
Nov 28 07:14:04 crc kubenswrapper[4946]: I1128 07:14:04.407748 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a041fa4-f481-403d-95ac-c2f32f1f5980-dns-svc\") pod \"1a041fa4-f481-403d-95ac-c2f32f1f5980\" (UID: \"1a041fa4-f481-403d-95ac-c2f32f1f5980\") "
Nov 28 07:14:04 crc kubenswrapper[4946]: I1128 07:14:04.407742 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a041fa4-f481-403d-95ac-c2f32f1f5980-config" (OuterVolumeSpecName: "config") pod "1a041fa4-f481-403d-95ac-c2f32f1f5980" (UID: "1a041fa4-f481-403d-95ac-c2f32f1f5980"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 07:14:04 crc kubenswrapper[4946]: I1128 07:14:04.407991 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6tmz\" (UniqueName: \"kubernetes.io/projected/1a041fa4-f481-403d-95ac-c2f32f1f5980-kube-api-access-b6tmz\") pod \"1a041fa4-f481-403d-95ac-c2f32f1f5980\" (UID: \"1a041fa4-f481-403d-95ac-c2f32f1f5980\") "
Nov 28 07:14:04 crc kubenswrapper[4946]: I1128 07:14:04.408120 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a041fa4-f481-403d-95ac-c2f32f1f5980-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1a041fa4-f481-403d-95ac-c2f32f1f5980" (UID: "1a041fa4-f481-403d-95ac-c2f32f1f5980"). InnerVolumeSpecName "dns-svc".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:14:04 crc kubenswrapper[4946]: I1128 07:14:04.408876 4946 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a041fa4-f481-403d-95ac-c2f32f1f5980-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:14:04 crc kubenswrapper[4946]: I1128 07:14:04.408895 4946 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a041fa4-f481-403d-95ac-c2f32f1f5980-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 07:14:04 crc kubenswrapper[4946]: I1128 07:14:04.415429 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a041fa4-f481-403d-95ac-c2f32f1f5980-kube-api-access-b6tmz" (OuterVolumeSpecName: "kube-api-access-b6tmz") pod "1a041fa4-f481-403d-95ac-c2f32f1f5980" (UID: "1a041fa4-f481-403d-95ac-c2f32f1f5980"). InnerVolumeSpecName "kube-api-access-b6tmz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:14:04 crc kubenswrapper[4946]: I1128 07:14:04.511049 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6tmz\" (UniqueName: \"kubernetes.io/projected/1a041fa4-f481-403d-95ac-c2f32f1f5980-kube-api-access-b6tmz\") on node \"crc\" DevicePath \"\"" Nov 28 07:14:04 crc kubenswrapper[4946]: I1128 07:14:04.583055 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 28 07:14:04 crc kubenswrapper[4946]: I1128 07:14:04.689611 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xzzn6" event={"ID":"5feb905d-9c23-4603-b118-fdc05a237848","Type":"ContainerStarted","Data":"de1f85f32262dd8a4c62e3a5822bb8e7386c76210efceeb02306ca683145848a"} Nov 28 07:14:04 crc kubenswrapper[4946]: I1128 07:14:04.691942 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a2e2a06b-b43f-4a3b-985d-964e177c3c06","Type":"ContainerStarted","Data":"3480e8458f642a16249d203a2be0807dba908b4e9d3c38c8c944e916316103b5"} Nov 28 07:14:04 crc kubenswrapper[4946]: I1128 07:14:04.694909 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-766fdc659c-gbcph" event={"ID":"1a041fa4-f481-403d-95ac-c2f32f1f5980","Type":"ContainerDied","Data":"e0fb4c7f5cf3d119a4e2703dcec8dc1a4f7188e95ec5772c9e4079ad357014d7"} Nov 28 07:14:04 crc kubenswrapper[4946]: I1128 07:14:04.695006 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-766fdc659c-gbcph" Nov 28 07:14:04 crc kubenswrapper[4946]: W1128 07:14:04.712103 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod926ec930_a8f3_4c87_9963_39779e7309cc.slice/crio-de1fb84e8a63cbf2aac0c218842162ab70e03c113fdf0099c21d8f1ecb56d669 WatchSource:0}: Error finding container de1fb84e8a63cbf2aac0c218842162ab70e03c113fdf0099c21d8f1ecb56d669: Status 404 returned error can't find the container with id de1fb84e8a63cbf2aac0c218842162ab70e03c113fdf0099c21d8f1ecb56d669 Nov 28 07:14:04 crc kubenswrapper[4946]: I1128 07:14:04.813954 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-766fdc659c-gbcph"] Nov 28 07:14:04 crc kubenswrapper[4946]: I1128 07:14:04.820854 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-766fdc659c-gbcph"] Nov 28 07:14:04 crc kubenswrapper[4946]: I1128 07:14:04.965764 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-gpnz5"] Nov 28 07:14:05 crc kubenswrapper[4946]: I1128 07:14:05.706907 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"926ec930-a8f3-4c87-9963-39779e7309cc","Type":"ContainerStarted","Data":"de1fb84e8a63cbf2aac0c218842162ab70e03c113fdf0099c21d8f1ecb56d669"} Nov 28 07:14:05 crc kubenswrapper[4946]: I1128 07:14:05.709974 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2307ed21-fd67-403c-ae9d-acc822502e42","Type":"ContainerStarted","Data":"ca77bf6d5c0d4d76bcac958857c9dc6a681f5bd3946886d84896f5797455fbcc"} Nov 28 07:14:05 crc kubenswrapper[4946]: I1128 07:14:05.710161 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 28 07:14:05 crc kubenswrapper[4946]: I1128 07:14:05.711981 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-gpnz5" event={"ID":"944abeae-3f1d-4391-a375-b64ed9c17b14","Type":"ContainerStarted","Data":"2b165cc1856e910a507071e4463eeac3507f736c920a23b80f36be1c7358cf8b"} Nov 28 07:14:05 crc kubenswrapper[4946]: I1128 07:14:05.728321 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=8.55021459 podStartE2EDuration="26.728302335s" podCreationTimestamp="2025-11-28 07:13:39 +0000 UTC" firstStartedPulling="2025-11-28 07:13:47.081896167 +0000 UTC m=+1281.459961278" lastFinishedPulling="2025-11-28 07:14:05.259983912 +0000 UTC m=+1299.638049023" observedRunningTime="2025-11-28 07:14:05.725367683 +0000 UTC m=+1300.103432794" watchObservedRunningTime="2025-11-28 07:14:05.728302335 +0000 UTC m=+1300.106367446" Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.000368 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a041fa4-f481-403d-95ac-c2f32f1f5980" path="/var/lib/kubelet/pods/1a041fa4-f481-403d-95ac-c2f32f1f5980/volumes" Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.144796 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-wm9fv"] Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.146318 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-wm9fv" Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.150933 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.160347 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-wm9fv"] Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.244745 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-wm9fv\" (UID: \"7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08\") " pod="openstack/ovn-controller-metrics-wm9fv" Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.244819 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08-config\") pod \"ovn-controller-metrics-wm9fv\" (UID: \"7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08\") " pod="openstack/ovn-controller-metrics-wm9fv" Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.244850 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82g9r\" (UniqueName: \"kubernetes.io/projected/7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08-kube-api-access-82g9r\") pod \"ovn-controller-metrics-wm9fv\" (UID: \"7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08\") " pod="openstack/ovn-controller-metrics-wm9fv" Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.244877 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08-ovs-rundir\") pod \"ovn-controller-metrics-wm9fv\" (UID: \"7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08\") " pod="openstack/ovn-controller-metrics-wm9fv" Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.244918 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08-ovn-rundir\") pod \"ovn-controller-metrics-wm9fv\" (UID: \"7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08\") " pod="openstack/ovn-controller-metrics-wm9fv" Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.244966 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08-combined-ca-bundle\") pod \"ovn-controller-metrics-wm9fv\" (UID: \"7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08\") " pod="openstack/ovn-controller-metrics-wm9fv" Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.287354 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57dc4c6697-lx6ht"] Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.338265 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-684dc5d7df-f27ds"] Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.340814 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-684dc5d7df-f27ds" Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.348284 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08-combined-ca-bundle\") pod \"ovn-controller-metrics-wm9fv\" (UID: \"7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08\") " pod="openstack/ovn-controller-metrics-wm9fv" Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.348741 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-wm9fv\" (UID: \"7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08\") " pod="openstack/ovn-controller-metrics-wm9fv" Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.350518 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08-config\") pod \"ovn-controller-metrics-wm9fv\" (UID: \"7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08\") " pod="openstack/ovn-controller-metrics-wm9fv" Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.351377 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82g9r\" (UniqueName: \"kubernetes.io/projected/7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08-kube-api-access-82g9r\") pod \"ovn-controller-metrics-wm9fv\" (UID: \"7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08\") " pod="openstack/ovn-controller-metrics-wm9fv" Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.351638 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08-ovs-rundir\") pod \"ovn-controller-metrics-wm9fv\" (UID: \"7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08\") " pod="openstack/ovn-controller-metrics-wm9fv" Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.351852 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08-ovn-rundir\") pod \"ovn-controller-metrics-wm9fv\" (UID: \"7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08\") " pod="openstack/ovn-controller-metrics-wm9fv" Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.352299 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08-ovn-rundir\") pod \"ovn-controller-metrics-wm9fv\" (UID: \"7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08\") " pod="openstack/ovn-controller-metrics-wm9fv" Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.349623 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-684dc5d7df-f27ds"] Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.349172 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.353638 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08-ovs-rundir\") pod \"ovn-controller-metrics-wm9fv\" (UID: \"7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08\") " pod="openstack/ovn-controller-metrics-wm9fv" Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.351313 4946 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08-config\") pod \"ovn-controller-metrics-wm9fv\" (UID: \"7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08\") " pod="openstack/ovn-controller-metrics-wm9fv" Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.363669 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08-combined-ca-bundle\") pod \"ovn-controller-metrics-wm9fv\" (UID: \"7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08\") " pod="openstack/ovn-controller-metrics-wm9fv" Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.368062 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-wm9fv\" (UID: \"7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08\") " pod="openstack/ovn-controller-metrics-wm9fv" Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.389861 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82g9r\" (UniqueName: \"kubernetes.io/projected/7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08-kube-api-access-82g9r\") pod \"ovn-controller-metrics-wm9fv\" (UID: \"7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08\") " pod="openstack/ovn-controller-metrics-wm9fv" Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.454716 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb0f7bd4-9675-4486-b870-4bee6cbb97cd-config\") pod \"dnsmasq-dns-684dc5d7df-f27ds\" (UID: \"eb0f7bd4-9675-4486-b870-4bee6cbb97cd\") " pod="openstack/dnsmasq-dns-684dc5d7df-f27ds" Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.454831 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb0f7bd4-9675-4486-b870-4bee6cbb97cd-ovsdbserver-nb\") pod \"dnsmasq-dns-684dc5d7df-f27ds\" (UID: \"eb0f7bd4-9675-4486-b870-4bee6cbb97cd\") " pod="openstack/dnsmasq-dns-684dc5d7df-f27ds" Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.454886 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb0f7bd4-9675-4486-b870-4bee6cbb97cd-dns-svc\") pod \"dnsmasq-dns-684dc5d7df-f27ds\" (UID: \"eb0f7bd4-9675-4486-b870-4bee6cbb97cd\") " pod="openstack/dnsmasq-dns-684dc5d7df-f27ds" Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.454943 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4qkv\" (UniqueName: \"kubernetes.io/projected/eb0f7bd4-9675-4486-b870-4bee6cbb97cd-kube-api-access-d4qkv\") pod \"dnsmasq-dns-684dc5d7df-f27ds\" (UID: \"eb0f7bd4-9675-4486-b870-4bee6cbb97cd\") " pod="openstack/dnsmasq-dns-684dc5d7df-f27ds" Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.480163 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-wm9fv" Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.512430 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8446fd7c75-kwwmc"] Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.556683 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4qkv\" (UniqueName: \"kubernetes.io/projected/eb0f7bd4-9675-4486-b870-4bee6cbb97cd-kube-api-access-d4qkv\") pod \"dnsmasq-dns-684dc5d7df-f27ds\" (UID: \"eb0f7bd4-9675-4486-b870-4bee6cbb97cd\") " pod="openstack/dnsmasq-dns-684dc5d7df-f27ds" Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.557024 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb0f7bd4-9675-4486-b870-4bee6cbb97cd-config\") pod \"dnsmasq-dns-684dc5d7df-f27ds\" (UID: \"eb0f7bd4-9675-4486-b870-4bee6cbb97cd\") " pod="openstack/dnsmasq-dns-684dc5d7df-f27ds" Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.557108 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb0f7bd4-9675-4486-b870-4bee6cbb97cd-ovsdbserver-nb\") pod \"dnsmasq-dns-684dc5d7df-f27ds\" (UID: \"eb0f7bd4-9675-4486-b870-4bee6cbb97cd\") " pod="openstack/dnsmasq-dns-684dc5d7df-f27ds" Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.557153 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb0f7bd4-9675-4486-b870-4bee6cbb97cd-dns-svc\") pod \"dnsmasq-dns-684dc5d7df-f27ds\" (UID: \"eb0f7bd4-9675-4486-b870-4bee6cbb97cd\") " pod="openstack/dnsmasq-dns-684dc5d7df-f27ds" Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.558043 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb0f7bd4-9675-4486-b870-4bee6cbb97cd-dns-svc\") pod \"dnsmasq-dns-684dc5d7df-f27ds\" (UID: \"eb0f7bd4-9675-4486-b870-4bee6cbb97cd\") " pod="openstack/dnsmasq-dns-684dc5d7df-f27ds" Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.558945 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb0f7bd4-9675-4486-b870-4bee6cbb97cd-config\") pod \"dnsmasq-dns-684dc5d7df-f27ds\" (UID: \"eb0f7bd4-9675-4486-b870-4bee6cbb97cd\") " pod="openstack/dnsmasq-dns-684dc5d7df-f27ds" Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.560925 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58bd875f97-67m8z"] Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.569549 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb0f7bd4-9675-4486-b870-4bee6cbb97cd-ovsdbserver-nb\") pod \"dnsmasq-dns-684dc5d7df-f27ds\" (UID: \"eb0f7bd4-9675-4486-b870-4bee6cbb97cd\") " pod="openstack/dnsmasq-dns-684dc5d7df-f27ds" Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.570126 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58bd875f97-67m8z" Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.577329 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.589301 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58bd875f97-67m8z"] Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.589981 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4qkv\" (UniqueName: \"kubernetes.io/projected/eb0f7bd4-9675-4486-b870-4bee6cbb97cd-kube-api-access-d4qkv\") pod \"dnsmasq-dns-684dc5d7df-f27ds\" (UID: \"eb0f7bd4-9675-4486-b870-4bee6cbb97cd\") " pod="openstack/dnsmasq-dns-684dc5d7df-f27ds" Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.660203 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvdfp\" (UniqueName: \"kubernetes.io/projected/ded10611-80cf-497e-87d5-3dc3af54962c-kube-api-access-kvdfp\") pod \"dnsmasq-dns-58bd875f97-67m8z\" (UID: \"ded10611-80cf-497e-87d5-3dc3af54962c\") " pod="openstack/dnsmasq-dns-58bd875f97-67m8z" Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.660268 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ded10611-80cf-497e-87d5-3dc3af54962c-config\") pod \"dnsmasq-dns-58bd875f97-67m8z\" (UID: \"ded10611-80cf-497e-87d5-3dc3af54962c\") " pod="openstack/dnsmasq-dns-58bd875f97-67m8z" Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.660363 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ded10611-80cf-497e-87d5-3dc3af54962c-dns-svc\") pod \"dnsmasq-dns-58bd875f97-67m8z\" (UID: \"ded10611-80cf-497e-87d5-3dc3af54962c\") " pod="openstack/dnsmasq-dns-58bd875f97-67m8z" Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.660403 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ded10611-80cf-497e-87d5-3dc3af54962c-ovsdbserver-nb\") pod \"dnsmasq-dns-58bd875f97-67m8z\" (UID: \"ded10611-80cf-497e-87d5-3dc3af54962c\") " pod="openstack/dnsmasq-dns-58bd875f97-67m8z" Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.660445 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ded10611-80cf-497e-87d5-3dc3af54962c-ovsdbserver-sb\") pod \"dnsmasq-dns-58bd875f97-67m8z\" (UID: \"ded10611-80cf-497e-87d5-3dc3af54962c\") " pod="openstack/dnsmasq-dns-58bd875f97-67m8z" Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.751091 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-684dc5d7df-f27ds" Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.764688 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvdfp\" (UniqueName: \"kubernetes.io/projected/ded10611-80cf-497e-87d5-3dc3af54962c-kube-api-access-kvdfp\") pod \"dnsmasq-dns-58bd875f97-67m8z\" (UID: \"ded10611-80cf-497e-87d5-3dc3af54962c\") " pod="openstack/dnsmasq-dns-58bd875f97-67m8z" Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.764771 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ded10611-80cf-497e-87d5-3dc3af54962c-config\") pod \"dnsmasq-dns-58bd875f97-67m8z\" (UID: \"ded10611-80cf-497e-87d5-3dc3af54962c\") " pod="openstack/dnsmasq-dns-58bd875f97-67m8z" Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.764879 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ded10611-80cf-497e-87d5-3dc3af54962c-dns-svc\") pod \"dnsmasq-dns-58bd875f97-67m8z\" (UID: \"ded10611-80cf-497e-87d5-3dc3af54962c\") " pod="openstack/dnsmasq-dns-58bd875f97-67m8z" Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.764914 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ded10611-80cf-497e-87d5-3dc3af54962c-ovsdbserver-nb\") pod \"dnsmasq-dns-58bd875f97-67m8z\" (UID: \"ded10611-80cf-497e-87d5-3dc3af54962c\") " pod="openstack/dnsmasq-dns-58bd875f97-67m8z" Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.764989 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ded10611-80cf-497e-87d5-3dc3af54962c-ovsdbserver-sb\") pod \"dnsmasq-dns-58bd875f97-67m8z\" (UID: \"ded10611-80cf-497e-87d5-3dc3af54962c\") " pod="openstack/dnsmasq-dns-58bd875f97-67m8z" Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.765772 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ded10611-80cf-497e-87d5-3dc3af54962c-config\") pod \"dnsmasq-dns-58bd875f97-67m8z\" (UID: \"ded10611-80cf-497e-87d5-3dc3af54962c\") " pod="openstack/dnsmasq-dns-58bd875f97-67m8z" Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.766827 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ded10611-80cf-497e-87d5-3dc3af54962c-dns-svc\") pod \"dnsmasq-dns-58bd875f97-67m8z\" (UID: \"ded10611-80cf-497e-87d5-3dc3af54962c\") " pod="openstack/dnsmasq-dns-58bd875f97-67m8z" Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.767233 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ded10611-80cf-497e-87d5-3dc3af54962c-ovsdbserver-sb\") pod \"dnsmasq-dns-58bd875f97-67m8z\" (UID: \"ded10611-80cf-497e-87d5-3dc3af54962c\") " pod="openstack/dnsmasq-dns-58bd875f97-67m8z" Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.767358 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ded10611-80cf-497e-87d5-3dc3af54962c-ovsdbserver-nb\") pod \"dnsmasq-dns-58bd875f97-67m8z\" (UID: \"ded10611-80cf-497e-87d5-3dc3af54962c\") " pod="openstack/dnsmasq-dns-58bd875f97-67m8z" Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 
07:14:06.801163 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvdfp\" (UniqueName: \"kubernetes.io/projected/ded10611-80cf-497e-87d5-3dc3af54962c-kube-api-access-kvdfp\") pod \"dnsmasq-dns-58bd875f97-67m8z\" (UID: \"ded10611-80cf-497e-87d5-3dc3af54962c\") " pod="openstack/dnsmasq-dns-58bd875f97-67m8z" Nov 28 07:14:06 crc kubenswrapper[4946]: I1128 07:14:06.966017 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58bd875f97-67m8z" Nov 28 07:14:07 crc kubenswrapper[4946]: I1128 07:14:07.084332 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-wm9fv"] Nov 28 07:14:08 crc kubenswrapper[4946]: W1128 07:14:08.344879 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c1972c6_dc6e_46b8_a2a8_dcbc4a873a08.slice/crio-b564078b348c2bcf44f711ce76b205e521ced0d4dacf8dd440c610f0a366acbb WatchSource:0}: Error finding container b564078b348c2bcf44f711ce76b205e521ced0d4dacf8dd440c610f0a366acbb: Status 404 returned error can't find the container with id b564078b348c2bcf44f711ce76b205e521ced0d4dacf8dd440c610f0a366acbb Nov 28 07:14:08 crc kubenswrapper[4946]: I1128 07:14:08.449137 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8446fd7c75-kwwmc" Nov 28 07:14:08 crc kubenswrapper[4946]: I1128 07:14:08.462941 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57dc4c6697-lx6ht" Nov 28 07:14:08 crc kubenswrapper[4946]: I1128 07:14:08.501608 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpcd5\" (UniqueName: \"kubernetes.io/projected/6cbb552d-dc8c-407a-82b2-248bdfb5efe3-kube-api-access-mpcd5\") pod \"6cbb552d-dc8c-407a-82b2-248bdfb5efe3\" (UID: \"6cbb552d-dc8c-407a-82b2-248bdfb5efe3\") " Nov 28 07:14:08 crc kubenswrapper[4946]: I1128 07:14:08.501695 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cbb552d-dc8c-407a-82b2-248bdfb5efe3-config\") pod \"6cbb552d-dc8c-407a-82b2-248bdfb5efe3\" (UID: \"6cbb552d-dc8c-407a-82b2-248bdfb5efe3\") " Nov 28 07:14:08 crc kubenswrapper[4946]: I1128 07:14:08.501926 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cbb552d-dc8c-407a-82b2-248bdfb5efe3-dns-svc\") pod \"6cbb552d-dc8c-407a-82b2-248bdfb5efe3\" (UID: \"6cbb552d-dc8c-407a-82b2-248bdfb5efe3\") " Nov 28 07:14:08 crc kubenswrapper[4946]: I1128 07:14:08.502524 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cbb552d-dc8c-407a-82b2-248bdfb5efe3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6cbb552d-dc8c-407a-82b2-248bdfb5efe3" (UID: "6cbb552d-dc8c-407a-82b2-248bdfb5efe3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:14:08 crc kubenswrapper[4946]: I1128 07:14:08.502691 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cbb552d-dc8c-407a-82b2-248bdfb5efe3-config" (OuterVolumeSpecName: "config") pod "6cbb552d-dc8c-407a-82b2-248bdfb5efe3" (UID: "6cbb552d-dc8c-407a-82b2-248bdfb5efe3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:14:08 crc kubenswrapper[4946]: I1128 07:14:08.509860 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cbb552d-dc8c-407a-82b2-248bdfb5efe3-kube-api-access-mpcd5" (OuterVolumeSpecName: "kube-api-access-mpcd5") pod "6cbb552d-dc8c-407a-82b2-248bdfb5efe3" (UID: "6cbb552d-dc8c-407a-82b2-248bdfb5efe3"). InnerVolumeSpecName "kube-api-access-mpcd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:14:08 crc kubenswrapper[4946]: I1128 07:14:08.604023 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ce6fbc6-22c3-4e15-97c0-690b30c63673-dns-svc\") pod \"1ce6fbc6-22c3-4e15-97c0-690b30c63673\" (UID: \"1ce6fbc6-22c3-4e15-97c0-690b30c63673\") " Nov 28 07:14:08 crc kubenswrapper[4946]: I1128 07:14:08.604092 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-956fk\" (UniqueName: \"kubernetes.io/projected/1ce6fbc6-22c3-4e15-97c0-690b30c63673-kube-api-access-956fk\") pod \"1ce6fbc6-22c3-4e15-97c0-690b30c63673\" (UID: \"1ce6fbc6-22c3-4e15-97c0-690b30c63673\") " Nov 28 07:14:08 crc kubenswrapper[4946]: I1128 07:14:08.604261 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ce6fbc6-22c3-4e15-97c0-690b30c63673-config\") pod \"1ce6fbc6-22c3-4e15-97c0-690b30c63673\" (UID: \"1ce6fbc6-22c3-4e15-97c0-690b30c63673\") " Nov 28 07:14:08 crc kubenswrapper[4946]: I1128 07:14:08.604575 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpcd5\" (UniqueName: \"kubernetes.io/projected/6cbb552d-dc8c-407a-82b2-248bdfb5efe3-kube-api-access-mpcd5\") on node \"crc\" DevicePath \"\"" Nov 28 07:14:08 crc kubenswrapper[4946]: I1128 07:14:08.604596 4946 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cbb552d-dc8c-407a-82b2-248bdfb5efe3-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:14:08 crc kubenswrapper[4946]: I1128 07:14:08.604607 4946 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cbb552d-dc8c-407a-82b2-248bdfb5efe3-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 07:14:08 crc kubenswrapper[4946]: I1128 07:14:08.604611 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ce6fbc6-22c3-4e15-97c0-690b30c63673-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1ce6fbc6-22c3-4e15-97c0-690b30c63673" (UID: "1ce6fbc6-22c3-4e15-97c0-690b30c63673"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:14:08 crc kubenswrapper[4946]: I1128 07:14:08.604814 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ce6fbc6-22c3-4e15-97c0-690b30c63673-config" (OuterVolumeSpecName: "config") pod "1ce6fbc6-22c3-4e15-97c0-690b30c63673" (UID: "1ce6fbc6-22c3-4e15-97c0-690b30c63673"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:14:08 crc kubenswrapper[4946]: I1128 07:14:08.608119 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ce6fbc6-22c3-4e15-97c0-690b30c63673-kube-api-access-956fk" (OuterVolumeSpecName: "kube-api-access-956fk") pod "1ce6fbc6-22c3-4e15-97c0-690b30c63673" (UID: "1ce6fbc6-22c3-4e15-97c0-690b30c63673"). 
InnerVolumeSpecName "kube-api-access-956fk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:14:08 crc kubenswrapper[4946]: I1128 07:14:08.706109 4946 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ce6fbc6-22c3-4e15-97c0-690b30c63673-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 07:14:08 crc kubenswrapper[4946]: I1128 07:14:08.706139 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-956fk\" (UniqueName: \"kubernetes.io/projected/1ce6fbc6-22c3-4e15-97c0-690b30c63673-kube-api-access-956fk\") on node \"crc\" DevicePath \"\"" Nov 28 07:14:08 crc kubenswrapper[4946]: I1128 07:14:08.706151 4946 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ce6fbc6-22c3-4e15-97c0-690b30c63673-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:14:08 crc kubenswrapper[4946]: I1128 07:14:08.746602 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-wm9fv" event={"ID":"7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08","Type":"ContainerStarted","Data":"b564078b348c2bcf44f711ce76b205e521ced0d4dacf8dd440c610f0a366acbb"} Nov 28 07:14:08 crc kubenswrapper[4946]: I1128 07:14:08.748322 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57dc4c6697-lx6ht" event={"ID":"1ce6fbc6-22c3-4e15-97c0-690b30c63673","Type":"ContainerDied","Data":"18c78b57ec644363bf33b67ea5566bfdfacb929068489e47d6c4a41a5bf766f2"} Nov 28 07:14:08 crc kubenswrapper[4946]: I1128 07:14:08.748371 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57dc4c6697-lx6ht" Nov 28 07:14:08 crc kubenswrapper[4946]: I1128 07:14:08.750957 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8446fd7c75-kwwmc" event={"ID":"6cbb552d-dc8c-407a-82b2-248bdfb5efe3","Type":"ContainerDied","Data":"ffd9460489657d36a01512b68ba5684325b493fabeb1ac82929a44abfd32e4fa"} Nov 28 07:14:08 crc kubenswrapper[4946]: I1128 07:14:08.751109 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8446fd7c75-kwwmc" Nov 28 07:14:08 crc kubenswrapper[4946]: I1128 07:14:08.838607 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8446fd7c75-kwwmc"] Nov 28 07:14:08 crc kubenswrapper[4946]: I1128 07:14:08.844481 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8446fd7c75-kwwmc"] Nov 28 07:14:08 crc kubenswrapper[4946]: I1128 07:14:08.878123 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57dc4c6697-lx6ht"] Nov 28 07:14:08 crc kubenswrapper[4946]: I1128 07:14:08.887043 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57dc4c6697-lx6ht"] Nov 28 07:14:09 crc kubenswrapper[4946]: I1128 07:14:09.700793 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-684dc5d7df-f27ds"] Nov 28 07:14:09 crc kubenswrapper[4946]: W1128 07:14:09.709860 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb0f7bd4_9675_4486_b870_4bee6cbb97cd.slice/crio-8c84694d3d5921764b8efd79cd50913dc86b974d9427bc58a52145812c970b8f WatchSource:0}: Error finding container 8c84694d3d5921764b8efd79cd50913dc86b974d9427bc58a52145812c970b8f: Status 404 returned error can't find the container with id 8c84694d3d5921764b8efd79cd50913dc86b974d9427bc58a52145812c970b8f Nov 28 07:14:09 crc kubenswrapper[4946]: I1128 07:14:09.801484 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"926ec930-a8f3-4c87-9963-39779e7309cc","Type":"ContainerStarted","Data":"87bce839fcc780921b732faa53136928a6e9541af2325b83cd8ed770c4841758"} Nov 28 07:14:09 crc kubenswrapper[4946]: I1128 07:14:09.804418 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xzzn6" event={"ID":"5feb905d-9c23-4603-b118-fdc05a237848","Type":"ContainerStarted","Data":"304cacbd4eab8b01e9777876a67dcba73de4883771064616952125d6fef1d4cb"} Nov 28 07:14:09 crc kubenswrapper[4946]: I1128 07:14:09.804756 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-xzzn6" Nov 28 07:14:09 crc kubenswrapper[4946]: I1128 07:14:09.806599 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a2e2a06b-b43f-4a3b-985d-964e177c3c06","Type":"ContainerStarted","Data":"89293c54aab4564bb3c1f382ee28896a803e4970c73eac5d5bec925d87394c71"} Nov 28 07:14:09 crc kubenswrapper[4946]: I1128 07:14:09.808567 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-684dc5d7df-f27ds" event={"ID":"eb0f7bd4-9675-4486-b870-4bee6cbb97cd","Type":"ContainerStarted","Data":"8c84694d3d5921764b8efd79cd50913dc86b974d9427bc58a52145812c970b8f"} Nov 28 07:14:09 crc kubenswrapper[4946]: I1128 07:14:09.811022 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-gpnz5" event={"ID":"944abeae-3f1d-4391-a375-b64ed9c17b14","Type":"ContainerStarted","Data":"c4ddf22e9c5893825d45cbc30aa68a2bbb399585a0d6a7695769865ef74f1816"} Nov 28 07:14:09 crc kubenswrapper[4946]: I1128 07:14:09.829630 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-xzzn6" podStartSLOduration=22.696196685 podStartE2EDuration="27.829606807s" podCreationTimestamp="2025-11-28 07:13:42 +0000 UTC" firstStartedPulling="2025-11-28 07:14:04.180703244 +0000 UTC m=+1298.558768355" lastFinishedPulling="2025-11-28 
07:14:09.314113366 +0000 UTC m=+1303.692178477" observedRunningTime="2025-11-28 07:14:09.829262268 +0000 UTC m=+1304.207327389" watchObservedRunningTime="2025-11-28 07:14:09.829606807 +0000 UTC m=+1304.207671918" Nov 28 07:14:09 crc kubenswrapper[4946]: W1128 07:14:09.855263 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podded10611_80cf_497e_87d5_3dc3af54962c.slice/crio-2b75eb00bbc05748fe8a15427b97d21245b4ff660e3fbd1d8757f9bf0bfd52c4 WatchSource:0}: Error finding container 2b75eb00bbc05748fe8a15427b97d21245b4ff660e3fbd1d8757f9bf0bfd52c4: Status 404 returned error can't find the container with id 2b75eb00bbc05748fe8a15427b97d21245b4ff660e3fbd1d8757f9bf0bfd52c4 Nov 28 07:14:09 crc kubenswrapper[4946]: I1128 07:14:09.855646 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58bd875f97-67m8z"] Nov 28 07:14:10 crc kubenswrapper[4946]: I1128 07:14:10.003591 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ce6fbc6-22c3-4e15-97c0-690b30c63673" path="/var/lib/kubelet/pods/1ce6fbc6-22c3-4e15-97c0-690b30c63673/volumes" Nov 28 07:14:10 crc kubenswrapper[4946]: I1128 07:14:10.003967 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cbb552d-dc8c-407a-82b2-248bdfb5efe3" path="/var/lib/kubelet/pods/6cbb552d-dc8c-407a-82b2-248bdfb5efe3/volumes" Nov 28 07:14:10 crc kubenswrapper[4946]: I1128 07:14:10.823232 4946 generic.go:334] "Generic (PLEG): container finished" podID="944abeae-3f1d-4391-a375-b64ed9c17b14" containerID="c4ddf22e9c5893825d45cbc30aa68a2bbb399585a0d6a7695769865ef74f1816" exitCode=0 Nov 28 07:14:10 crc kubenswrapper[4946]: I1128 07:14:10.823321 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-gpnz5" event={"ID":"944abeae-3f1d-4391-a375-b64ed9c17b14","Type":"ContainerDied","Data":"c4ddf22e9c5893825d45cbc30aa68a2bbb399585a0d6a7695769865ef74f1816"} Nov 28 07:14:10 crc kubenswrapper[4946]: I1128 07:14:10.827084 4946 generic.go:334] "Generic (PLEG): container finished" podID="ded10611-80cf-497e-87d5-3dc3af54962c" containerID="968a28598656b20deee60eb65370884e6f3e1cb7430d5e21a6a164687cb048a5" exitCode=0 Nov 28 07:14:10 crc kubenswrapper[4946]: I1128 07:14:10.827160 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bd875f97-67m8z" event={"ID":"ded10611-80cf-497e-87d5-3dc3af54962c","Type":"ContainerDied","Data":"968a28598656b20deee60eb65370884e6f3e1cb7430d5e21a6a164687cb048a5"} Nov 28 07:14:10 crc kubenswrapper[4946]: I1128 07:14:10.827314 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bd875f97-67m8z" event={"ID":"ded10611-80cf-497e-87d5-3dc3af54962c","Type":"ContainerStarted","Data":"2b75eb00bbc05748fe8a15427b97d21245b4ff660e3fbd1d8757f9bf0bfd52c4"} Nov 28 07:14:10 crc kubenswrapper[4946]: I1128 07:14:10.829755 4946 generic.go:334] "Generic (PLEG): container finished" podID="eb0f7bd4-9675-4486-b870-4bee6cbb97cd" containerID="ac6a50a3cb3d71e5236b997ae2743f38138b9d14ff9179c365531407db0e35a3" exitCode=0 Nov 28 07:14:10 crc kubenswrapper[4946]: I1128 07:14:10.830020 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-684dc5d7df-f27ds" event={"ID":"eb0f7bd4-9675-4486-b870-4bee6cbb97cd","Type":"ContainerDied","Data":"ac6a50a3cb3d71e5236b997ae2743f38138b9d14ff9179c365531407db0e35a3"} Nov 28 07:14:12 crc kubenswrapper[4946]: I1128 07:14:12.848564 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-58bd875f97-67m8z" event={"ID":"ded10611-80cf-497e-87d5-3dc3af54962c","Type":"ContainerStarted","Data":"02954776648d9637d7d33b8892b12ba998a24466bd49a009f1e8210276eaf130"} Nov 28 07:14:12 crc kubenswrapper[4946]: I1128 07:14:12.850876 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58bd875f97-67m8z" Nov 28 07:14:12 crc kubenswrapper[4946]: I1128 07:14:12.852732 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a2e2a06b-b43f-4a3b-985d-964e177c3c06","Type":"ContainerStarted","Data":"46e527156036f2db6b68d08e685dc075879a5f56a11d96918d1c98628885e7e8"} Nov 28 07:14:12 crc kubenswrapper[4946]: I1128 07:14:12.857135 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-684dc5d7df-f27ds" event={"ID":"eb0f7bd4-9675-4486-b870-4bee6cbb97cd","Type":"ContainerStarted","Data":"d22817c5653e9d397c31d0190af427f92a2555c9cfe99039486b62525e6b98cc"} Nov 28 07:14:12 crc kubenswrapper[4946]: I1128 07:14:12.858124 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-684dc5d7df-f27ds" Nov 28 07:14:12 crc kubenswrapper[4946]: I1128 07:14:12.859520 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-wm9fv" event={"ID":"7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08","Type":"ContainerStarted","Data":"7daeb0c76b280f50561eb7b7c2864caf3e13cb79b3ecceb81295252f5c19b84a"} Nov 28 07:14:12 crc kubenswrapper[4946]: I1128 07:14:12.862490 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-gpnz5" event={"ID":"944abeae-3f1d-4391-a375-b64ed9c17b14","Type":"ContainerStarted","Data":"b12ef267dfaa1906a3e883237d5228860410b06e258081a1cabe06d004313ce6"} Nov 28 07:14:12 crc kubenswrapper[4946]: I1128 07:14:12.862568 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-gpnz5" event={"ID":"944abeae-3f1d-4391-a375-b64ed9c17b14","Type":"ContainerStarted","Data":"c5e052846b04c3f56ae570cc17f4288914d22c94a98566c8edd7f06e4a7f7443"} Nov 28 07:14:12 crc kubenswrapper[4946]: I1128 07:14:12.863151 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-gpnz5" Nov 28 07:14:12 crc kubenswrapper[4946]: I1128 07:14:12.863289 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-gpnz5" Nov 28 07:14:12 crc kubenswrapper[4946]: I1128 07:14:12.864830 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"926ec930-a8f3-4c87-9963-39779e7309cc","Type":"ContainerStarted","Data":"ca0d99baba41d46f00aebe709dcc220db85de34c057c9dcc89055c2491a77711"} Nov 28 07:14:12 crc kubenswrapper[4946]: I1128 07:14:12.884225 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58bd875f97-67m8z" podStartSLOduration=6.225819503 podStartE2EDuration="6.88419272s" podCreationTimestamp="2025-11-28 07:14:06 +0000 UTC" firstStartedPulling="2025-11-28 07:14:09.862663448 +0000 UTC m=+1304.240728579" lastFinishedPulling="2025-11-28 07:14:10.521036685 +0000 UTC m=+1304.899101796" observedRunningTime="2025-11-28 07:14:12.87481977 +0000 UTC m=+1307.252884921" watchObservedRunningTime="2025-11-28 07:14:12.88419272 +0000 UTC m=+1307.262257861" Nov 28 07:14:12 crc kubenswrapper[4946]: I1128 07:14:12.919316 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ovn-controller-metrics-wm9fv" podStartSLOduration=3.018529752 podStartE2EDuration="6.919276721s" podCreationTimestamp="2025-11-28 07:14:06 +0000 UTC" firstStartedPulling="2025-11-28 07:14:08.353511721 +0000 UTC m=+1302.731576832" lastFinishedPulling="2025-11-28 07:14:12.25425868 +0000 UTC m=+1306.632323801" observedRunningTime="2025-11-28 07:14:12.902671054 +0000 UTC m=+1307.280736215" watchObservedRunningTime="2025-11-28 07:14:12.919276721 +0000 UTC m=+1307.297341862" Nov 28 07:14:12 crc kubenswrapper[4946]: I1128 07:14:12.942845 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-684dc5d7df-f27ds" podStartSLOduration=6.455218402 podStartE2EDuration="6.942800398s" podCreationTimestamp="2025-11-28 07:14:06 +0000 UTC" firstStartedPulling="2025-11-28 07:14:09.712550884 +0000 UTC m=+1304.090615985" lastFinishedPulling="2025-11-28 07:14:10.20013287 +0000 UTC m=+1304.578197981" observedRunningTime="2025-11-28 07:14:12.929529833 +0000 UTC m=+1307.307594974" watchObservedRunningTime="2025-11-28 07:14:12.942800398 +0000 UTC m=+1307.320865549" Nov 28 07:14:12 crc kubenswrapper[4946]: I1128 07:14:12.985604 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=19.894950993 podStartE2EDuration="27.985579798s" podCreationTimestamp="2025-11-28 07:13:45 +0000 UTC" firstStartedPulling="2025-11-28 07:14:04.203191286 +0000 UTC m=+1298.581256397" lastFinishedPulling="2025-11-28 07:14:12.293820061 +0000 UTC m=+1306.671885202" observedRunningTime="2025-11-28 07:14:12.971278937 +0000 UTC m=+1307.349344128" watchObservedRunningTime="2025-11-28 07:14:12.985579798 +0000 UTC m=+1307.363644899" Nov 28 07:14:13 crc kubenswrapper[4946]: I1128 07:14:13.022189 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=23.445629568 podStartE2EDuration="31.019908521s" podCreationTimestamp="2025-11-28 07:13:42 +0000 UTC" firstStartedPulling="2025-11-28 07:14:04.71552309 +0000 UTC m=+1299.093588251" lastFinishedPulling="2025-11-28 07:14:12.289802093 +0000 UTC m=+1306.667867204" observedRunningTime="2025-11-28 07:14:13.011498884 +0000 UTC m=+1307.389563995" watchObservedRunningTime="2025-11-28 07:14:13.019908521 +0000 UTC m=+1307.400210857" Nov 28 07:14:13 crc kubenswrapper[4946]: I1128 07:14:13.046333 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-gpnz5" podStartSLOduration=26.947050028 podStartE2EDuration="31.046312119s" podCreationTimestamp="2025-11-28 07:13:42 +0000 UTC" firstStartedPulling="2025-11-28 07:14:05.201031655 +0000 UTC m=+1299.579096766" lastFinishedPulling="2025-11-28 07:14:09.300293736 +0000 UTC m=+1303.678358857" observedRunningTime="2025-11-28 07:14:13.037773399 +0000 UTC m=+1307.415838500" watchObservedRunningTime="2025-11-28 07:14:13.046312119 +0000 UTC m=+1307.424377230" Nov 28 07:14:13 crc kubenswrapper[4946]: I1128 07:14:13.596107 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Nov 28 07:14:13 crc kubenswrapper[4946]: I1128 07:14:13.650434 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Nov 28 07:14:13 crc kubenswrapper[4946]: I1128 07:14:13.850941 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Nov 28 07:14:13 crc kubenswrapper[4946]: I1128 07:14:13.851344 4946 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Nov 28 07:14:13 crc kubenswrapper[4946]: I1128 07:14:13.873796 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Nov 28 07:14:13 crc kubenswrapper[4946]: I1128 07:14:13.895712 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Nov 28 07:14:14 crc kubenswrapper[4946]: I1128 07:14:14.881004 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"62fdad5e-59c5-4d8f-87da-79b384fb82be","Type":"ContainerStarted","Data":"9030866a771edcec9dbaa59d8cf162cc3924cacd011d8fb010810714f827e7e0"} Nov 28 07:14:14 crc kubenswrapper[4946]: I1128 07:14:14.881758 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Nov 28 07:14:14 crc kubenswrapper[4946]: I1128 07:14:14.882923 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"13d10c33-0ca9-47d5-ac49-19391cebfb39","Type":"ContainerStarted","Data":"f4f9909c167a1c79bb8b42fbdbe2ecd7c4b12b8dfe0b0cc573d4848c3dfb98d4"} Nov 28 07:14:14 crc kubenswrapper[4946]: I1128 07:14:14.907359 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=1.361085526 podStartE2EDuration="37.90730156s" podCreationTimestamp="2025-11-28 07:13:37 +0000 UTC" firstStartedPulling="2025-11-28 07:13:37.955701148 +0000 UTC m=+1272.333766259" lastFinishedPulling="2025-11-28 07:14:14.501917162 +0000 UTC m=+1308.879982293" observedRunningTime="2025-11-28 07:14:14.900100763 +0000 UTC m=+1309.278165874" watchObservedRunningTime="2025-11-28 07:14:14.90730156 +0000 UTC m=+1309.285366701" Nov 28 07:14:14 crc kubenswrapper[4946]: I1128 07:14:14.929790 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Nov 28 07:14:14 crc kubenswrapper[4946]: I1128 07:14:14.957996 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Nov 28 07:14:15 crc kubenswrapper[4946]: I1128 07:14:15.314744 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Nov 28 07:14:15 crc kubenswrapper[4946]: I1128 07:14:15.316823 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 28 07:14:15 crc kubenswrapper[4946]: I1128 07:14:15.321232 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Nov 28 07:14:15 crc kubenswrapper[4946]: I1128 07:14:15.321435 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Nov 28 07:14:15 crc kubenswrapper[4946]: I1128 07:14:15.321512 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-p47lq" Nov 28 07:14:15 crc kubenswrapper[4946]: I1128 07:14:15.321974 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Nov 28 07:14:15 crc kubenswrapper[4946]: I1128 07:14:15.328765 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 28 07:14:15 crc kubenswrapper[4946]: I1128 07:14:15.464134 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c274eefa-3598-470b-9b07-25928903d425-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"c274eefa-3598-470b-9b07-25928903d425\") " pod="openstack/ovn-northd-0" Nov 28 07:14:15 crc kubenswrapper[4946]: I1128 07:14:15.465864 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c274eefa-3598-470b-9b07-25928903d425-config\") pod \"ovn-northd-0\" (UID: \"c274eefa-3598-470b-9b07-25928903d425\") " pod="openstack/ovn-northd-0" Nov 28 07:14:15 crc kubenswrapper[4946]: I1128 07:14:15.466029 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c274eefa-3598-470b-9b07-25928903d425-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"c274eefa-3598-470b-9b07-25928903d425\") " pod="openstack/ovn-northd-0" Nov 28 07:14:15 crc kubenswrapper[4946]: I1128 07:14:15.466188 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/c274eefa-3598-470b-9b07-25928903d425-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"c274eefa-3598-470b-9b07-25928903d425\") " pod="openstack/ovn-northd-0" Nov 28 07:14:15 crc kubenswrapper[4946]: I1128 07:14:15.466539 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c274eefa-3598-470b-9b07-25928903d425-scripts\") pod \"ovn-northd-0\" (UID: \"c274eefa-3598-470b-9b07-25928903d425\") " pod="openstack/ovn-northd-0" Nov 28 07:14:15 crc kubenswrapper[4946]: I1128 07:14:15.466672 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ggpl\" (UniqueName: \"kubernetes.io/projected/c274eefa-3598-470b-9b07-25928903d425-kube-api-access-5ggpl\") pod \"ovn-northd-0\" (UID: \"c274eefa-3598-470b-9b07-25928903d425\") " pod="openstack/ovn-northd-0" Nov 28 07:14:15 crc kubenswrapper[4946]: I1128 07:14:15.466723 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c274eefa-3598-470b-9b07-25928903d425-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"c274eefa-3598-470b-9b07-25928903d425\") " pod="openstack/ovn-northd-0" Nov 28 07:14:15 crc kubenswrapper[4946]: 
I1128 07:14:15.568822 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c274eefa-3598-470b-9b07-25928903d425-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"c274eefa-3598-470b-9b07-25928903d425\") " pod="openstack/ovn-northd-0"
Nov 28 07:14:15 crc kubenswrapper[4946]: I1128 07:14:15.568906 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c274eefa-3598-470b-9b07-25928903d425-config\") pod \"ovn-northd-0\" (UID: \"c274eefa-3598-470b-9b07-25928903d425\") " pod="openstack/ovn-northd-0"
Nov 28 07:14:15 crc kubenswrapper[4946]: I1128 07:14:15.569061 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c274eefa-3598-470b-9b07-25928903d425-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"c274eefa-3598-470b-9b07-25928903d425\") " pod="openstack/ovn-northd-0"
Nov 28 07:14:15 crc kubenswrapper[4946]: I1128 07:14:15.569094 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/c274eefa-3598-470b-9b07-25928903d425-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"c274eefa-3598-470b-9b07-25928903d425\") " pod="openstack/ovn-northd-0"
Nov 28 07:14:15 crc kubenswrapper[4946]: I1128 07:14:15.569127 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c274eefa-3598-470b-9b07-25928903d425-scripts\") pod \"ovn-northd-0\" (UID: \"c274eefa-3598-470b-9b07-25928903d425\") " pod="openstack/ovn-northd-0"
Nov 28 07:14:15 crc kubenswrapper[4946]: I1128 07:14:15.569153 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ggpl\" (UniqueName: \"kubernetes.io/projected/c274eefa-3598-470b-9b07-25928903d425-kube-api-access-5ggpl\") pod \"ovn-northd-0\" (UID: \"c274eefa-3598-470b-9b07-25928903d425\") " pod="openstack/ovn-northd-0"
Nov 28 07:14:15 crc kubenswrapper[4946]: I1128 07:14:15.569174 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c274eefa-3598-470b-9b07-25928903d425-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"c274eefa-3598-470b-9b07-25928903d425\") " pod="openstack/ovn-northd-0"
Nov 28 07:14:15 crc kubenswrapper[4946]: I1128 07:14:15.569648 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c274eefa-3598-470b-9b07-25928903d425-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"c274eefa-3598-470b-9b07-25928903d425\") " pod="openstack/ovn-northd-0"
Nov 28 07:14:15 crc kubenswrapper[4946]: I1128 07:14:15.570583 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c274eefa-3598-470b-9b07-25928903d425-scripts\") pod \"ovn-northd-0\" (UID: \"c274eefa-3598-470b-9b07-25928903d425\") " pod="openstack/ovn-northd-0"
Nov 28 07:14:15 crc kubenswrapper[4946]: I1128 07:14:15.570588 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c274eefa-3598-470b-9b07-25928903d425-config\") pod \"ovn-northd-0\" (UID: \"c274eefa-3598-470b-9b07-25928903d425\") " pod="openstack/ovn-northd-0"
Nov 28 07:14:15 crc kubenswrapper[4946]: I1128 07:14:15.575293 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c274eefa-3598-470b-9b07-25928903d425-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"c274eefa-3598-470b-9b07-25928903d425\") " pod="openstack/ovn-northd-0"
Nov 28 07:14:15 crc kubenswrapper[4946]: I1128 07:14:15.578569 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/c274eefa-3598-470b-9b07-25928903d425-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"c274eefa-3598-470b-9b07-25928903d425\") " pod="openstack/ovn-northd-0"
Nov 28 07:14:15 crc kubenswrapper[4946]: I1128 07:14:15.579437 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c274eefa-3598-470b-9b07-25928903d425-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"c274eefa-3598-470b-9b07-25928903d425\") " pod="openstack/ovn-northd-0"
Nov 28 07:14:15 crc kubenswrapper[4946]: I1128 07:14:15.590198 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ggpl\" (UniqueName: \"kubernetes.io/projected/c274eefa-3598-470b-9b07-25928903d425-kube-api-access-5ggpl\") pod \"ovn-northd-0\" (UID: \"c274eefa-3598-470b-9b07-25928903d425\") " pod="openstack/ovn-northd-0"
Nov 28 07:14:15 crc kubenswrapper[4946]: I1128 07:14:15.653177 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Nov 28 07:14:15 crc kubenswrapper[4946]: I1128 07:14:15.898453 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e87edf72-a3a2-4df0-8249-5902f158998d","Type":"ContainerStarted","Data":"fa977f3dc63e5d9bc6c7308001c1dcfb76ff1420cbaefe8d12d5d89989d89e5c"}
Nov 28 07:14:16 crc kubenswrapper[4946]: I1128 07:14:16.130822 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Nov 28 07:14:16 crc kubenswrapper[4946]: I1128 07:14:16.912434 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"c274eefa-3598-470b-9b07-25928903d425","Type":"ContainerStarted","Data":"987f48672497b27e918fb98693d89ce4a72d2e67cde202c100496eda8ae7515c"}
Nov 28 07:14:17 crc kubenswrapper[4946]: I1128 07:14:17.922622 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"c274eefa-3598-470b-9b07-25928903d425","Type":"ContainerStarted","Data":"f8b2e10c106db832955230f63c48caf7a4ee259e84e9b344f03d06060ead1493"}
Nov 28 07:14:17 crc kubenswrapper[4946]: I1128 07:14:17.923546 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"c274eefa-3598-470b-9b07-25928903d425","Type":"ContainerStarted","Data":"7e494d19861cee1bf5cc7147950495eba143f3dbb2bec8c94fbf0c3c3c16f2dd"}
Nov 28 07:14:17 crc kubenswrapper[4946]: I1128 07:14:17.923568 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Nov 28 07:14:17 crc kubenswrapper[4946]: I1128 07:14:17.924527 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3521840d-60d0-450c-8c05-7e2ad0fc4e97","Type":"ContainerStarted","Data":"88c96e9b6c9b6c9a01377b5eb6cd235cde6a0cea15f68f81f0dba3c64839e047"}
Nov 28 07:14:17 crc kubenswrapper[4946]: I1128 07:14:17.925994 4946 generic.go:334] "Generic (PLEG): container finished" podID="13d10c33-0ca9-47d5-ac49-19391cebfb39" containerID="f4f9909c167a1c79bb8b42fbdbe2ecd7c4b12b8dfe0b0cc573d4848c3dfb98d4" exitCode=0
Nov 28 07:14:17 crc kubenswrapper[4946]: I1128 07:14:17.926038 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"13d10c33-0ca9-47d5-ac49-19391cebfb39","Type":"ContainerDied","Data":"f4f9909c167a1c79bb8b42fbdbe2ecd7c4b12b8dfe0b0cc573d4848c3dfb98d4"}
Nov 28 07:14:17 crc kubenswrapper[4946]: I1128 07:14:17.956880 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=1.711648151 podStartE2EDuration="2.956852471s" podCreationTimestamp="2025-11-28 07:14:15 +0000 UTC" firstStartedPulling="2025-11-28 07:14:16.136962097 +0000 UTC m=+1310.515027218" lastFinishedPulling="2025-11-28 07:14:17.382166417 +0000 UTC m=+1311.760231538" observedRunningTime="2025-11-28 07:14:17.939338451 +0000 UTC m=+1312.317403582" watchObservedRunningTime="2025-11-28 07:14:17.956852471 +0000 UTC m=+1312.334917592"
Nov 28 07:14:18 crc kubenswrapper[4946]: I1128 07:14:18.938105 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"13d10c33-0ca9-47d5-ac49-19391cebfb39","Type":"ContainerStarted","Data":"63032ee424994bedde2af09f7484105017c4f440ba2229de43942a3d82430207"}
Nov 28 07:14:18 crc kubenswrapper[4946]: I1128 07:14:18.940976 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"59fdca77-b333-44be-ab8c-96a2f4bcc340","Type":"ContainerStarted","Data":"a6bb9670947ddc29c15771ad78d06a158426d4a0b2e7d5a9827785ca30e28082"}
Nov 28 07:14:18 crc kubenswrapper[4946]: I1128 07:14:18.972346 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=7.907799457 podStartE2EDuration="44.972317321s" podCreationTimestamp="2025-11-28 07:13:34 +0000 UTC" firstStartedPulling="2025-11-28 07:13:36.451989384 +0000 UTC m=+1270.830054495" lastFinishedPulling="2025-11-28 07:14:13.516507238 +0000 UTC m=+1307.894572359" observedRunningTime="2025-11-28 07:14:18.964447798 +0000 UTC m=+1313.342512949" watchObservedRunningTime="2025-11-28 07:14:18.972317321 +0000 UTC m=+1313.350382442"
Nov 28 07:14:19 crc kubenswrapper[4946]: I1128 07:14:19.461319 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Nov 28 07:14:19 crc kubenswrapper[4946]: I1128 07:14:19.966176 4946 generic.go:334] "Generic (PLEG): container finished" podID="e87edf72-a3a2-4df0-8249-5902f158998d" containerID="fa977f3dc63e5d9bc6c7308001c1dcfb76ff1420cbaefe8d12d5d89989d89e5c" exitCode=0
Nov 28 07:14:19 crc kubenswrapper[4946]: I1128 07:14:19.966632 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e87edf72-a3a2-4df0-8249-5902f158998d","Type":"ContainerDied","Data":"fa977f3dc63e5d9bc6c7308001c1dcfb76ff1420cbaefe8d12d5d89989d89e5c"}
Nov 28 07:14:20 crc kubenswrapper[4946]: I1128 07:14:20.979691 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e87edf72-a3a2-4df0-8249-5902f158998d","Type":"ContainerStarted","Data":"3f80615bde2b6e4140684945a44377c8842d66297f035126f4b7271272f03968"}
Nov 28 07:14:21 crc kubenswrapper[4946]: I1128 07:14:21.010168 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=-9223371990.844645 podStartE2EDuration="46.010130552s" podCreationTimestamp="2025-11-28 07:13:35 +0000 UTC" firstStartedPulling="2025-11-28 07:13:37.740903817 +0000 UTC m=+1272.118968928" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:14:21.004422962 +0000 UTC m=+1315.382488143" watchObservedRunningTime="2025-11-28 07:14:21.010130552 +0000 UTC m=+1315.388195703"
Nov 28 07:14:21 crc kubenswrapper[4946]: I1128 07:14:21.752718 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-684dc5d7df-f27ds"
Nov 28 07:14:21 crc kubenswrapper[4946]: I1128 07:14:21.968930 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58bd875f97-67m8z"
Nov 28 07:14:22 crc kubenswrapper[4946]: I1128 07:14:22.058495 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-684dc5d7df-f27ds"]
Nov 28 07:14:22 crc kubenswrapper[4946]: I1128 07:14:22.058880 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-684dc5d7df-f27ds" podUID="eb0f7bd4-9675-4486-b870-4bee6cbb97cd" containerName="dnsmasq-dns" containerID="cri-o://d22817c5653e9d397c31d0190af427f92a2555c9cfe99039486b62525e6b98cc" gracePeriod=10
Nov 28 07:14:22 crc kubenswrapper[4946]: I1128 07:14:22.429724 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Nov 28 07:14:22 crc kubenswrapper[4946]: I1128 07:14:22.618446 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-684dc5d7df-f27ds"
Nov 28 07:14:22 crc kubenswrapper[4946]: I1128 07:14:22.734147 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb0f7bd4-9675-4486-b870-4bee6cbb97cd-dns-svc\") pod \"eb0f7bd4-9675-4486-b870-4bee6cbb97cd\" (UID: \"eb0f7bd4-9675-4486-b870-4bee6cbb97cd\") "
Nov 28 07:14:22 crc kubenswrapper[4946]: I1128 07:14:22.734302 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb0f7bd4-9675-4486-b870-4bee6cbb97cd-config\") pod \"eb0f7bd4-9675-4486-b870-4bee6cbb97cd\" (UID: \"eb0f7bd4-9675-4486-b870-4bee6cbb97cd\") "
Nov 28 07:14:22 crc kubenswrapper[4946]: I1128 07:14:22.734326 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb0f7bd4-9675-4486-b870-4bee6cbb97cd-ovsdbserver-nb\") pod \"eb0f7bd4-9675-4486-b870-4bee6cbb97cd\" (UID: \"eb0f7bd4-9675-4486-b870-4bee6cbb97cd\") "
Nov 28 07:14:22 crc kubenswrapper[4946]: I1128 07:14:22.734349 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4qkv\" (UniqueName: \"kubernetes.io/projected/eb0f7bd4-9675-4486-b870-4bee6cbb97cd-kube-api-access-d4qkv\") pod \"eb0f7bd4-9675-4486-b870-4bee6cbb97cd\" (UID: \"eb0f7bd4-9675-4486-b870-4bee6cbb97cd\") "
Nov 28 07:14:22 crc kubenswrapper[4946]: I1128 07:14:22.770617 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb0f7bd4-9675-4486-b870-4bee6cbb97cd-kube-api-access-d4qkv" (OuterVolumeSpecName: "kube-api-access-d4qkv") pod "eb0f7bd4-9675-4486-b870-4bee6cbb97cd" (UID: "eb0f7bd4-9675-4486-b870-4bee6cbb97cd"). InnerVolumeSpecName "kube-api-access-d4qkv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 07:14:22 crc kubenswrapper[4946]: I1128 07:14:22.798989 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb0f7bd4-9675-4486-b870-4bee6cbb97cd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "eb0f7bd4-9675-4486-b870-4bee6cbb97cd" (UID: "eb0f7bd4-9675-4486-b870-4bee6cbb97cd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 07:14:22 crc kubenswrapper[4946]: I1128 07:14:22.802027 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb0f7bd4-9675-4486-b870-4bee6cbb97cd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eb0f7bd4-9675-4486-b870-4bee6cbb97cd" (UID: "eb0f7bd4-9675-4486-b870-4bee6cbb97cd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 07:14:22 crc kubenswrapper[4946]: I1128 07:14:22.830069 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb0f7bd4-9675-4486-b870-4bee6cbb97cd-config" (OuterVolumeSpecName: "config") pod "eb0f7bd4-9675-4486-b870-4bee6cbb97cd" (UID: "eb0f7bd4-9675-4486-b870-4bee6cbb97cd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 07:14:22 crc kubenswrapper[4946]: I1128 07:14:22.836356 4946 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb0f7bd4-9675-4486-b870-4bee6cbb97cd-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 28 07:14:22 crc kubenswrapper[4946]: I1128 07:14:22.836383 4946 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb0f7bd4-9675-4486-b870-4bee6cbb97cd-config\") on node \"crc\" DevicePath \"\""
Nov 28 07:14:22 crc kubenswrapper[4946]: I1128 07:14:22.836393 4946 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb0f7bd4-9675-4486-b870-4bee6cbb97cd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 28 07:14:22 crc kubenswrapper[4946]: I1128 07:14:22.836409 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4qkv\" (UniqueName: \"kubernetes.io/projected/eb0f7bd4-9675-4486-b870-4bee6cbb97cd-kube-api-access-d4qkv\") on node \"crc\" DevicePath \"\""
Nov 28 07:14:23 crc kubenswrapper[4946]: I1128 07:14:23.003037 4946 generic.go:334] "Generic (PLEG): container finished" podID="eb0f7bd4-9675-4486-b870-4bee6cbb97cd" containerID="d22817c5653e9d397c31d0190af427f92a2555c9cfe99039486b62525e6b98cc" exitCode=0
Nov 28 07:14:23 crc kubenswrapper[4946]: I1128 07:14:23.003105 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-684dc5d7df-f27ds" event={"ID":"eb0f7bd4-9675-4486-b870-4bee6cbb97cd","Type":"ContainerDied","Data":"d22817c5653e9d397c31d0190af427f92a2555c9cfe99039486b62525e6b98cc"}
Nov 28 07:14:23 crc kubenswrapper[4946]: I1128 07:14:23.003152 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-684dc5d7df-f27ds" event={"ID":"eb0f7bd4-9675-4486-b870-4bee6cbb97cd","Type":"ContainerDied","Data":"8c84694d3d5921764b8efd79cd50913dc86b974d9427bc58a52145812c970b8f"}
Nov 28 07:14:23 crc kubenswrapper[4946]: I1128 07:14:23.003186 4946 scope.go:117] "RemoveContainer" containerID="d22817c5653e9d397c31d0190af427f92a2555c9cfe99039486b62525e6b98cc"
Nov 28 07:14:23 crc kubenswrapper[4946]: I1128 07:14:23.003357 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-684dc5d7df-f27ds"
Nov 28 07:14:23 crc kubenswrapper[4946]: I1128 07:14:23.035026 4946 scope.go:117] "RemoveContainer" containerID="ac6a50a3cb3d71e5236b997ae2743f38138b9d14ff9179c365531407db0e35a3"
Nov 28 07:14:23 crc kubenswrapper[4946]: I1128 07:14:23.053846 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-684dc5d7df-f27ds"]
Nov 28 07:14:23 crc kubenswrapper[4946]: I1128 07:14:23.058981 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-684dc5d7df-f27ds"]
Nov 28 07:14:23 crc kubenswrapper[4946]: I1128 07:14:23.061753 4946 scope.go:117] "RemoveContainer" containerID="d22817c5653e9d397c31d0190af427f92a2555c9cfe99039486b62525e6b98cc"
Nov 28 07:14:23 crc kubenswrapper[4946]: E1128 07:14:23.062373 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d22817c5653e9d397c31d0190af427f92a2555c9cfe99039486b62525e6b98cc\": container with ID starting with d22817c5653e9d397c31d0190af427f92a2555c9cfe99039486b62525e6b98cc not found: ID does not exist" containerID="d22817c5653e9d397c31d0190af427f92a2555c9cfe99039486b62525e6b98cc"
Nov 28 07:14:23 crc kubenswrapper[4946]: I1128 07:14:23.062436 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d22817c5653e9d397c31d0190af427f92a2555c9cfe99039486b62525e6b98cc"} err="failed to get container status \"d22817c5653e9d397c31d0190af427f92a2555c9cfe99039486b62525e6b98cc\": rpc error: code = NotFound desc = could not find container \"d22817c5653e9d397c31d0190af427f92a2555c9cfe99039486b62525e6b98cc\": container with ID starting with d22817c5653e9d397c31d0190af427f92a2555c9cfe99039486b62525e6b98cc not found: ID does not exist"
Nov 28 07:14:23 crc kubenswrapper[4946]: I1128 07:14:23.062503 4946 scope.go:117] "RemoveContainer" containerID="ac6a50a3cb3d71e5236b997ae2743f38138b9d14ff9179c365531407db0e35a3"
Nov 28 07:14:23 crc kubenswrapper[4946]: E1128 07:14:23.062907 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac6a50a3cb3d71e5236b997ae2743f38138b9d14ff9179c365531407db0e35a3\": container with ID starting with ac6a50a3cb3d71e5236b997ae2743f38138b9d14ff9179c365531407db0e35a3 not found: ID does not exist" containerID="ac6a50a3cb3d71e5236b997ae2743f38138b9d14ff9179c365531407db0e35a3"
Nov 28 07:14:23 crc kubenswrapper[4946]: I1128 07:14:23.062949 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac6a50a3cb3d71e5236b997ae2743f38138b9d14ff9179c365531407db0e35a3"} err="failed to get container status \"ac6a50a3cb3d71e5236b997ae2743f38138b9d14ff9179c365531407db0e35a3\": rpc error: code = NotFound desc = could not find container \"ac6a50a3cb3d71e5236b997ae2743f38138b9d14ff9179c365531407db0e35a3\": container with ID starting with ac6a50a3cb3d71e5236b997ae2743f38138b9d14ff9179c365531407db0e35a3 not found: ID does not exist"
Nov 28 07:14:24 crc kubenswrapper[4946]: I1128 07:14:24.017134 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb0f7bd4-9675-4486-b870-4bee6cbb97cd" path="/var/lib/kubelet/pods/eb0f7bd4-9675-4486-b870-4bee6cbb97cd/volumes"
Nov 28 07:14:25 crc kubenswrapper[4946]: I1128 07:14:25.642920 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Nov 28 07:14:25 crc kubenswrapper[4946]: I1128 07:14:25.644314 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Nov 28 07:14:25 crc kubenswrapper[4946]: I1128 07:14:25.721142 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Nov 28 07:14:26 crc kubenswrapper[4946]: I1128 07:14:26.148047 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Nov 28 07:14:27 crc kubenswrapper[4946]: I1128 07:14:27.082898 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-7d4w7"]
Nov 28 07:14:27 crc kubenswrapper[4946]: E1128 07:14:27.083972 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb0f7bd4-9675-4486-b870-4bee6cbb97cd" containerName="init"
Nov 28 07:14:27 crc kubenswrapper[4946]: I1128 07:14:27.084001 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb0f7bd4-9675-4486-b870-4bee6cbb97cd" containerName="init"
Nov 28 07:14:27 crc kubenswrapper[4946]: E1128 07:14:27.084036 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb0f7bd4-9675-4486-b870-4bee6cbb97cd" containerName="dnsmasq-dns"
Nov 28 07:14:27 crc kubenswrapper[4946]: I1128 07:14:27.084047 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb0f7bd4-9675-4486-b870-4bee6cbb97cd" containerName="dnsmasq-dns"
Nov 28 07:14:27 crc kubenswrapper[4946]: I1128 07:14:27.084328 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb0f7bd4-9675-4486-b870-4bee6cbb97cd" containerName="dnsmasq-dns"
Nov 28 07:14:27 crc kubenswrapper[4946]: I1128 07:14:27.085223 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-7d4w7"
Nov 28 07:14:27 crc kubenswrapper[4946]: I1128 07:14:27.090648 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Nov 28 07:14:27 crc kubenswrapper[4946]: I1128 07:14:27.090704 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Nov 28 07:14:27 crc kubenswrapper[4946]: I1128 07:14:27.098240 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7786-account-create-update-jrfgb"]
Nov 28 07:14:27 crc kubenswrapper[4946]: I1128 07:14:27.099886 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7786-account-create-update-jrfgb"
Nov 28 07:14:27 crc kubenswrapper[4946]: I1128 07:14:27.108295 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-7d4w7"]
Nov 28 07:14:27 crc kubenswrapper[4946]: I1128 07:14:27.144051 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Nov 28 07:14:27 crc kubenswrapper[4946]: I1128 07:14:27.145305 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e85173bc-690d-4513-924c-0ba5c540d432-operator-scripts\") pod \"keystone-db-create-7d4w7\" (UID: \"e85173bc-690d-4513-924c-0ba5c540d432\") " pod="openstack/keystone-db-create-7d4w7"
Nov 28 07:14:27 crc kubenswrapper[4946]: I1128 07:14:27.145561 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kt2h\" (UniqueName: \"kubernetes.io/projected/e85173bc-690d-4513-924c-0ba5c540d432-kube-api-access-2kt2h\") pod \"keystone-db-create-7d4w7\" (UID: \"e85173bc-690d-4513-924c-0ba5c540d432\") " pod="openstack/keystone-db-create-7d4w7"
Nov 28 07:14:27 crc kubenswrapper[4946]: I1128 07:14:27.175201 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7786-account-create-update-jrfgb"]
Nov 28 07:14:27 crc kubenswrapper[4946]: I1128 07:14:27.220704 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Nov 28 07:14:27 crc kubenswrapper[4946]: I1128 07:14:27.247629 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e85173bc-690d-4513-924c-0ba5c540d432-operator-scripts\") pod \"keystone-db-create-7d4w7\" (UID: \"e85173bc-690d-4513-924c-0ba5c540d432\") " pod="openstack/keystone-db-create-7d4w7"
Nov 28 07:14:27 crc kubenswrapper[4946]: I1128 07:14:27.247821 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kt2h\" (UniqueName: \"kubernetes.io/projected/e85173bc-690d-4513-924c-0ba5c540d432-kube-api-access-2kt2h\") pod \"keystone-db-create-7d4w7\" (UID: \"e85173bc-690d-4513-924c-0ba5c540d432\") " pod="openstack/keystone-db-create-7d4w7"
Nov 28 07:14:27 crc kubenswrapper[4946]: I1128 07:14:27.249154 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e85173bc-690d-4513-924c-0ba5c540d432-operator-scripts\") pod \"keystone-db-create-7d4w7\" (UID: \"e85173bc-690d-4513-924c-0ba5c540d432\") " pod="openstack/keystone-db-create-7d4w7"
Nov 28 07:14:27 crc kubenswrapper[4946]: I1128 07:14:27.273790 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kt2h\" (UniqueName: \"kubernetes.io/projected/e85173bc-690d-4513-924c-0ba5c540d432-kube-api-access-2kt2h\") pod \"keystone-db-create-7d4w7\" (UID: \"e85173bc-690d-4513-924c-0ba5c540d432\") " pod="openstack/keystone-db-create-7d4w7"
Nov 28 07:14:27 crc kubenswrapper[4946]: I1128 07:14:27.280397 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-5ncqm"]
Nov 28 07:14:27 crc kubenswrapper[4946]: I1128 07:14:27.282146 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-5ncqm"
Nov 28 07:14:27 crc kubenswrapper[4946]: I1128 07:14:27.288108 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-5ncqm"]
Nov 28 07:14:27 crc kubenswrapper[4946]: I1128 07:14:27.349862 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ssch\" (UniqueName: \"kubernetes.io/projected/3dcd796a-a7cc-4e4a-866b-c26819be5e92-kube-api-access-7ssch\") pod \"keystone-7786-account-create-update-jrfgb\" (UID: \"3dcd796a-a7cc-4e4a-866b-c26819be5e92\") " pod="openstack/keystone-7786-account-create-update-jrfgb"
Nov 28 07:14:27 crc kubenswrapper[4946]: I1128 07:14:27.350198 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3dcd796a-a7cc-4e4a-866b-c26819be5e92-operator-scripts\") pod \"keystone-7786-account-create-update-jrfgb\" (UID: \"3dcd796a-a7cc-4e4a-866b-c26819be5e92\") " pod="openstack/keystone-7786-account-create-update-jrfgb"
Nov 28 07:14:27 crc kubenswrapper[4946]: I1128 07:14:27.452058 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrjx7\" (UniqueName: \"kubernetes.io/projected/5b2a9fdb-501b-4dad-a136-ca4a932e64d0-kube-api-access-rrjx7\") pod \"placement-db-create-5ncqm\" (UID: \"5b2a9fdb-501b-4dad-a136-ca4a932e64d0\") " pod="openstack/placement-db-create-5ncqm"
Nov 28 07:14:27 crc kubenswrapper[4946]: I1128 07:14:27.452294 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3dcd796a-a7cc-4e4a-866b-c26819be5e92-operator-scripts\") pod \"keystone-7786-account-create-update-jrfgb\" (UID: \"3dcd796a-a7cc-4e4a-866b-c26819be5e92\") " pod="openstack/keystone-7786-account-create-update-jrfgb"
Nov 28 07:14:27 crc kubenswrapper[4946]: I1128 07:14:27.452411 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ssch\" (UniqueName: \"kubernetes.io/projected/3dcd796a-a7cc-4e4a-866b-c26819be5e92-kube-api-access-7ssch\") pod \"keystone-7786-account-create-update-jrfgb\" (UID: \"3dcd796a-a7cc-4e4a-866b-c26819be5e92\") " pod="openstack/keystone-7786-account-create-update-jrfgb"
Nov 28 07:14:27 crc kubenswrapper[4946]: I1128 07:14:27.452503 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b2a9fdb-501b-4dad-a136-ca4a932e64d0-operator-scripts\") pod \"placement-db-create-5ncqm\" (UID: \"5b2a9fdb-501b-4dad-a136-ca4a932e64d0\") " pod="openstack/placement-db-create-5ncqm"
Nov 28 07:14:27 crc kubenswrapper[4946]: I1128 07:14:27.453594 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3dcd796a-a7cc-4e4a-866b-c26819be5e92-operator-scripts\") pod \"keystone-7786-account-create-update-jrfgb\" (UID: \"3dcd796a-a7cc-4e4a-866b-c26819be5e92\") " pod="openstack/keystone-7786-account-create-update-jrfgb"
Nov 28 07:14:27 crc kubenswrapper[4946]: I1128 07:14:27.459232 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-7d4w7"
Nov 28 07:14:27 crc kubenswrapper[4946]: I1128 07:14:27.490432 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ssch\" (UniqueName: \"kubernetes.io/projected/3dcd796a-a7cc-4e4a-866b-c26819be5e92-kube-api-access-7ssch\") pod \"keystone-7786-account-create-update-jrfgb\" (UID: \"3dcd796a-a7cc-4e4a-866b-c26819be5e92\") " pod="openstack/keystone-7786-account-create-update-jrfgb"
Nov 28 07:14:27 crc kubenswrapper[4946]: I1128 07:14:27.490492 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-d9cc-account-create-update-vqrs4"]
Nov 28 07:14:27 crc kubenswrapper[4946]: I1128 07:14:27.496996 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d9cc-account-create-update-vqrs4"
Nov 28 07:14:27 crc kubenswrapper[4946]: I1128 07:14:27.499481 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Nov 28 07:14:27 crc kubenswrapper[4946]: I1128 07:14:27.505661 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d9cc-account-create-update-vqrs4"]
Nov 28 07:14:27 crc kubenswrapper[4946]: I1128 07:14:27.554425 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b2a9fdb-501b-4dad-a136-ca4a932e64d0-operator-scripts\") pod \"placement-db-create-5ncqm\" (UID: \"5b2a9fdb-501b-4dad-a136-ca4a932e64d0\") " pod="openstack/placement-db-create-5ncqm"
Nov 28 07:14:27 crc kubenswrapper[4946]: I1128 07:14:27.554749 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrjx7\" (UniqueName: \"kubernetes.io/projected/5b2a9fdb-501b-4dad-a136-ca4a932e64d0-kube-api-access-rrjx7\") pod \"placement-db-create-5ncqm\" (UID: \"5b2a9fdb-501b-4dad-a136-ca4a932e64d0\") " pod="openstack/placement-db-create-5ncqm"
Nov 28 07:14:27 crc kubenswrapper[4946]: I1128 07:14:27.555912 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b2a9fdb-501b-4dad-a136-ca4a932e64d0-operator-scripts\") pod \"placement-db-create-5ncqm\" (UID: \"5b2a9fdb-501b-4dad-a136-ca4a932e64d0\") " pod="openstack/placement-db-create-5ncqm"
Nov 28 07:14:27 crc kubenswrapper[4946]: I1128 07:14:27.579720 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrjx7\" (UniqueName: \"kubernetes.io/projected/5b2a9fdb-501b-4dad-a136-ca4a932e64d0-kube-api-access-rrjx7\") pod \"placement-db-create-5ncqm\" (UID: \"5b2a9fdb-501b-4dad-a136-ca4a932e64d0\") " pod="openstack/placement-db-create-5ncqm"
Nov 28 07:14:27 crc kubenswrapper[4946]: I1128 07:14:27.639505 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-5ncqm"
Nov 28 07:14:27 crc kubenswrapper[4946]: I1128 07:14:27.658009 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnqmj\" (UniqueName: \"kubernetes.io/projected/feb2d0e9-f8e7-4c39-be09-05728a90c0e9-kube-api-access-mnqmj\") pod \"placement-d9cc-account-create-update-vqrs4\" (UID: \"feb2d0e9-f8e7-4c39-be09-05728a90c0e9\") " pod="openstack/placement-d9cc-account-create-update-vqrs4"
Nov 28 07:14:27 crc kubenswrapper[4946]: I1128 07:14:27.658100 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/feb2d0e9-f8e7-4c39-be09-05728a90c0e9-operator-scripts\") pod \"placement-d9cc-account-create-update-vqrs4\" (UID: \"feb2d0e9-f8e7-4c39-be09-05728a90c0e9\") " pod="openstack/placement-d9cc-account-create-update-vqrs4"
Nov 28 07:14:27 crc kubenswrapper[4946]: I1128 07:14:27.759860 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnqmj\" (UniqueName: \"kubernetes.io/projected/feb2d0e9-f8e7-4c39-be09-05728a90c0e9-kube-api-access-mnqmj\") pod \"placement-d9cc-account-create-update-vqrs4\" (UID: \"feb2d0e9-f8e7-4c39-be09-05728a90c0e9\") " pod="openstack/placement-d9cc-account-create-update-vqrs4"
Nov 28 07:14:27 crc kubenswrapper[4946]: I1128 07:14:27.759981 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/feb2d0e9-f8e7-4c39-be09-05728a90c0e9-operator-scripts\") pod \"placement-d9cc-account-create-update-vqrs4\" (UID: \"feb2d0e9-f8e7-4c39-be09-05728a90c0e9\") " pod="openstack/placement-d9cc-account-create-update-vqrs4"
Nov 28 07:14:27 crc kubenswrapper[4946]: I1128 07:14:27.762226 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/feb2d0e9-f8e7-4c39-be09-05728a90c0e9-operator-scripts\") pod \"placement-d9cc-account-create-update-vqrs4\" (UID: \"feb2d0e9-f8e7-4c39-be09-05728a90c0e9\") " pod="openstack/placement-d9cc-account-create-update-vqrs4"
Nov 28 07:14:27 crc kubenswrapper[4946]: I1128 07:14:27.776171 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7786-account-create-update-jrfgb"
Nov 28 07:14:27 crc kubenswrapper[4946]: I1128 07:14:27.781471 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnqmj\" (UniqueName: \"kubernetes.io/projected/feb2d0e9-f8e7-4c39-be09-05728a90c0e9-kube-api-access-mnqmj\") pod \"placement-d9cc-account-create-update-vqrs4\" (UID: \"feb2d0e9-f8e7-4c39-be09-05728a90c0e9\") " pod="openstack/placement-d9cc-account-create-update-vqrs4"
Nov 28 07:14:27 crc kubenswrapper[4946]: I1128 07:14:27.881003 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d9cc-account-create-update-vqrs4"
Nov 28 07:14:27 crc kubenswrapper[4946]: I1128 07:14:27.939876 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-7d4w7"]
Nov 28 07:14:27 crc kubenswrapper[4946]: W1128 07:14:27.947958 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode85173bc_690d_4513_924c_0ba5c540d432.slice/crio-de1242199d1be0bfaed85814a3a08a785072af661353c0db409eda7928b60f9d WatchSource:0}: Error finding container de1242199d1be0bfaed85814a3a08a785072af661353c0db409eda7928b60f9d: Status 404 returned error can't find the container with id de1242199d1be0bfaed85814a3a08a785072af661353c0db409eda7928b60f9d
Nov 28 07:14:28 crc kubenswrapper[4946]: W1128 07:14:28.060123 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3dcd796a_a7cc_4e4a_866b_c26819be5e92.slice/crio-3fd4964e5848f677d6521dd19010b2a366eecc2be9a28fecccb4b4a492a34b21 WatchSource:0}: Error finding container 3fd4964e5848f677d6521dd19010b2a366eecc2be9a28fecccb4b4a492a34b21: Status 404 returned error can't find the container with id 3fd4964e5848f677d6521dd19010b2a366eecc2be9a28fecccb4b4a492a34b21
Nov 28 07:14:28 crc kubenswrapper[4946]: I1128 07:14:28.061620 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7786-account-create-update-jrfgb"]
Nov 28 07:14:28 crc kubenswrapper[4946]: I1128 07:14:28.069741 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-7d4w7" event={"ID":"e85173bc-690d-4513-924c-0ba5c540d432","Type":"ContainerStarted","Data":"de1242199d1be0bfaed85814a3a08a785072af661353c0db409eda7928b60f9d"}
Nov 28 07:14:28 crc kubenswrapper[4946]: I1128 07:14:28.094165 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-5ncqm"]
Nov 28 07:14:28 crc kubenswrapper[4946]: W1128 07:14:28.117015 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b2a9fdb_501b_4dad_a136_ca4a932e64d0.slice/crio-4b223df1816e09227fd8c6f166f78204cb49685f29444e296d6aa7df57d28369 WatchSource:0}: Error finding container 4b223df1816e09227fd8c6f166f78204cb49685f29444e296d6aa7df57d28369: Status 404 returned error can't find the container with id 4b223df1816e09227fd8c6f166f78204cb49685f29444e296d6aa7df57d28369
Nov 28 07:14:28 crc kubenswrapper[4946]: I1128 07:14:28.184375 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Nov 28 07:14:28 crc kubenswrapper[4946]: I1128 07:14:28.377082 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d9cc-account-create-update-vqrs4"]
Nov 28 07:14:28 crc kubenswrapper[4946]: W1128 07:14:28.378372 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfeb2d0e9_f8e7_4c39_be09_05728a90c0e9.slice/crio-f4d9e41c3ad39835dec895e55b4f2b15cf129c7984b74692bde4a73fe06e3949 WatchSource:0}: Error finding container f4d9e41c3ad39835dec895e55b4f2b15cf129c7984b74692bde4a73fe06e3949: Status 404 returned error can't find the container with id f4d9e41c3ad39835dec895e55b4f2b15cf129c7984b74692bde4a73fe06e3949
Nov 28 07:14:29 crc kubenswrapper[4946]: I1128 07:14:29.085164 4946 generic.go:334] "Generic (PLEG): container finished" podID="5b2a9fdb-501b-4dad-a136-ca4a932e64d0" containerID="c1578a212bfd245eec3e5a729841471bea41a9a76672a4ed4f1a0685f80cbb7f" exitCode=0
Nov 28 07:14:29 crc kubenswrapper[4946]: I1128 07:14:29.085249 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-5ncqm" event={"ID":"5b2a9fdb-501b-4dad-a136-ca4a932e64d0","Type":"ContainerDied","Data":"c1578a212bfd245eec3e5a729841471bea41a9a76672a4ed4f1a0685f80cbb7f"}
Nov 28 07:14:29 crc kubenswrapper[4946]: I1128 07:14:29.085285 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-5ncqm" event={"ID":"5b2a9fdb-501b-4dad-a136-ca4a932e64d0","Type":"ContainerStarted","Data":"4b223df1816e09227fd8c6f166f78204cb49685f29444e296d6aa7df57d28369"}
Nov 28 07:14:29 crc kubenswrapper[4946]: I1128 07:14:29.088341 4946 generic.go:334] "Generic (PLEG): container finished" podID="e85173bc-690d-4513-924c-0ba5c540d432" containerID="aa1deedf4ec4d42dcf2f044cbe783e61d0b64e879b3d079f45f29b27749d5e49" exitCode=0
Nov 28 07:14:29 crc kubenswrapper[4946]: I1128 07:14:29.088412 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-7d4w7" event={"ID":"e85173bc-690d-4513-924c-0ba5c540d432","Type":"ContainerDied","Data":"aa1deedf4ec4d42dcf2f044cbe783e61d0b64e879b3d079f45f29b27749d5e49"}
Nov 28 07:14:29 crc kubenswrapper[4946]: I1128 07:14:29.091350 4946 generic.go:334] "Generic (PLEG): container finished" podID="3dcd796a-a7cc-4e4a-866b-c26819be5e92" containerID="9d86a0bc4007c7ecc8d7c768fbb98e17ac09a5daa0a1ecf769ccf31eead5582f" exitCode=0
Nov 28 07:14:29 crc kubenswrapper[4946]: I1128 07:14:29.091483 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7786-account-create-update-jrfgb" event={"ID":"3dcd796a-a7cc-4e4a-866b-c26819be5e92","Type":"ContainerDied","Data":"9d86a0bc4007c7ecc8d7c768fbb98e17ac09a5daa0a1ecf769ccf31eead5582f"}
Nov 28 07:14:29 crc kubenswrapper[4946]: I1128 07:14:29.091562 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7786-account-create-update-jrfgb" event={"ID":"3dcd796a-a7cc-4e4a-866b-c26819be5e92","Type":"ContainerStarted","Data":"3fd4964e5848f677d6521dd19010b2a366eecc2be9a28fecccb4b4a492a34b21"}
Nov 28 07:14:29 crc kubenswrapper[4946]: I1128 07:14:29.093637 4946 generic.go:334] "Generic (PLEG): container finished" podID="feb2d0e9-f8e7-4c39-be09-05728a90c0e9" containerID="4537c2e5ab62fa6caf7b8e50f18eb201daa55d3f5e4e0b453aef1b2b5a9efd81" exitCode=0
Nov 28 07:14:29 crc kubenswrapper[4946]: I1128 07:14:29.093829 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d9cc-account-create-update-vqrs4" event={"ID":"feb2d0e9-f8e7-4c39-be09-05728a90c0e9","Type":"ContainerDied","Data":"4537c2e5ab62fa6caf7b8e50f18eb201daa55d3f5e4e0b453aef1b2b5a9efd81"}
Nov 28 07:14:29 crc kubenswrapper[4946]: I1128 07:14:29.093899 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d9cc-account-create-update-vqrs4" event={"ID":"feb2d0e9-f8e7-4c39-be09-05728a90c0e9","Type":"ContainerStarted","Data":"f4d9e41c3ad39835dec895e55b4f2b15cf129c7984b74692bde4a73fe06e3949"}
Nov 28 07:14:29 crc kubenswrapper[4946]: I1128 07:14:29.212558 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c8cb8df65-wcsz4"]
Nov 28 07:14:29 crc kubenswrapper[4946]: I1128 07:14:29.214559 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c8cb8df65-wcsz4"
Nov 28 07:14:29 crc kubenswrapper[4946]: I1128 07:14:29.260623 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c8cb8df65-wcsz4"]
Nov 28 07:14:29 crc kubenswrapper[4946]: I1128 07:14:29.301266 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91e352b8-41dc-44ca-8f6d-77c1e7de5469-dns-svc\") pod \"dnsmasq-dns-7c8cb8df65-wcsz4\" (UID: \"91e352b8-41dc-44ca-8f6d-77c1e7de5469\") " pod="openstack/dnsmasq-dns-7c8cb8df65-wcsz4"
Nov 28 07:14:29 crc kubenswrapper[4946]: I1128 07:14:29.301550 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/91e352b8-41dc-44ca-8f6d-77c1e7de5469-ovsdbserver-nb\") pod \"dnsmasq-dns-7c8cb8df65-wcsz4\" (UID: \"91e352b8-41dc-44ca-8f6d-77c1e7de5469\") " pod="openstack/dnsmasq-dns-7c8cb8df65-wcsz4"
Nov 28 07:14:29 crc kubenswrapper[4946]: I1128 07:14:29.301727 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91e352b8-41dc-44ca-8f6d-77c1e7de5469-config\") pod \"dnsmasq-dns-7c8cb8df65-wcsz4\" (UID: \"91e352b8-41dc-44ca-8f6d-77c1e7de5469\") " pod="openstack/dnsmasq-dns-7c8cb8df65-wcsz4"
Nov 28 07:14:29 crc kubenswrapper[4946]: I1128 07:14:29.302011 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/91e352b8-41dc-44ca-8f6d-77c1e7de5469-ovsdbserver-sb\") pod \"dnsmasq-dns-7c8cb8df65-wcsz4\" (UID: \"91e352b8-41dc-44ca-8f6d-77c1e7de5469\") " pod="openstack/dnsmasq-dns-7c8cb8df65-wcsz4"
Nov 28 07:14:29 crc kubenswrapper[4946]: I1128 07:14:29.302148 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7ntw\" (UniqueName: \"kubernetes.io/projected/91e352b8-41dc-44ca-8f6d-77c1e7de5469-kube-api-access-k7ntw\") pod \"dnsmasq-dns-7c8cb8df65-wcsz4\" (UID: \"91e352b8-41dc-44ca-8f6d-77c1e7de5469\") " pod="openstack/dnsmasq-dns-7c8cb8df65-wcsz4"
Nov 28 07:14:29 crc kubenswrapper[4946]: I1128 07:14:29.403498 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91e352b8-41dc-44ca-8f6d-77c1e7de5469-dns-svc\") pod \"dnsmasq-dns-7c8cb8df65-wcsz4\" (UID: \"91e352b8-41dc-44ca-8f6d-77c1e7de5469\") " pod="openstack/dnsmasq-dns-7c8cb8df65-wcsz4"
Nov 28 07:14:29 crc kubenswrapper[4946]: I1128 07:14:29.403578 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/91e352b8-41dc-44ca-8f6d-77c1e7de5469-ovsdbserver-nb\") pod \"dnsmasq-dns-7c8cb8df65-wcsz4\" (UID: \"91e352b8-41dc-44ca-8f6d-77c1e7de5469\") " pod="openstack/dnsmasq-dns-7c8cb8df65-wcsz4"
Nov 28 07:14:29 crc kubenswrapper[4946]: I1128 07:14:29.403618 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91e352b8-41dc-44ca-8f6d-77c1e7de5469-config\") pod \"dnsmasq-dns-7c8cb8df65-wcsz4\" (UID: \"91e352b8-41dc-44ca-8f6d-77c1e7de5469\") " pod="openstack/dnsmasq-dns-7c8cb8df65-wcsz4"
Nov 28 07:14:29 crc kubenswrapper[4946]: I1128 07:14:29.403666 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/91e352b8-41dc-44ca-8f6d-77c1e7de5469-ovsdbserver-sb\") pod \"dnsmasq-dns-7c8cb8df65-wcsz4\" (UID: \"91e352b8-41dc-44ca-8f6d-77c1e7de5469\") " pod="openstack/dnsmasq-dns-7c8cb8df65-wcsz4"
Nov 28 07:14:29 crc kubenswrapper[4946]: I1128 07:14:29.403711 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7ntw\" (UniqueName: \"kubernetes.io/projected/91e352b8-41dc-44ca-8f6d-77c1e7de5469-kube-api-access-k7ntw\") pod \"dnsmasq-dns-7c8cb8df65-wcsz4\" (UID: \"91e352b8-41dc-44ca-8f6d-77c1e7de5469\") " pod="openstack/dnsmasq-dns-7c8cb8df65-wcsz4"
Nov 28 07:14:29 crc kubenswrapper[4946]: I1128 07:14:29.404744 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/91e352b8-41dc-44ca-8f6d-77c1e7de5469-ovsdbserver-nb\") pod \"dnsmasq-dns-7c8cb8df65-wcsz4\" (UID: \"91e352b8-41dc-44ca-8f6d-77c1e7de5469\") " pod="openstack/dnsmasq-dns-7c8cb8df65-wcsz4"
Nov 28 07:14:29 crc kubenswrapper[4946]: I1128 07:14:29.404769 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/91e352b8-41dc-44ca-8f6d-77c1e7de5469-ovsdbserver-sb\") pod \"dnsmasq-dns-7c8cb8df65-wcsz4\" (UID: \"91e352b8-41dc-44ca-8f6d-77c1e7de5469\") " pod="openstack/dnsmasq-dns-7c8cb8df65-wcsz4"
Nov 28 07:14:29 crc kubenswrapper[4946]: I1128 07:14:29.404747 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91e352b8-41dc-44ca-8f6d-77c1e7de5469-config\") pod \"dnsmasq-dns-7c8cb8df65-wcsz4\" (UID: \"91e352b8-41dc-44ca-8f6d-77c1e7de5469\") " pod="openstack/dnsmasq-dns-7c8cb8df65-wcsz4"
Nov 28 07:14:29 crc kubenswrapper[4946]: I1128 07:14:29.405483 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91e352b8-41dc-44ca-8f6d-77c1e7de5469-dns-svc\") pod \"dnsmasq-dns-7c8cb8df65-wcsz4\" (UID: \"91e352b8-41dc-44ca-8f6d-77c1e7de5469\") " pod="openstack/dnsmasq-dns-7c8cb8df65-wcsz4"
Nov 28 07:14:29 crc kubenswrapper[4946]: I1128 07:14:29.429318 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7ntw\" (UniqueName: \"kubernetes.io/projected/91e352b8-41dc-44ca-8f6d-77c1e7de5469-kube-api-access-k7ntw\") pod \"dnsmasq-dns-7c8cb8df65-wcsz4\" (UID: \"91e352b8-41dc-44ca-8f6d-77c1e7de5469\") " pod="openstack/dnsmasq-dns-7c8cb8df65-wcsz4"
Nov 28 07:14:29 crc kubenswrapper[4946]: I1128 07:14:29.542607 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c8cb8df65-wcsz4"
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.006850 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c8cb8df65-wcsz4"]
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.108564 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c8cb8df65-wcsz4" event={"ID":"91e352b8-41dc-44ca-8f6d-77c1e7de5469","Type":"ContainerStarted","Data":"1dfabc0c89786705e3e67d967a2bac9e427cf78cf69d19465fa9c48ffa70df6a"}
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.386102 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"]
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.399683 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.402159 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-gfg2h"
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.402168 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf"
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.408415 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data"
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.411747 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files"
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.434346 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.526022 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b7d310ee-b686-4e3d-b554-393fc09a770d-etc-swift\") pod \"swift-storage-0\" (UID: \"b7d310ee-b686-4e3d-b554-393fc09a770d\") " pod="openstack/swift-storage-0"
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.526127 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"b7d310ee-b686-4e3d-b554-393fc09a770d\") " pod="openstack/swift-storage-0"
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.526192 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b7d310ee-b686-4e3d-b554-393fc09a770d-cache\") pod \"swift-storage-0\" (UID: \"b7d310ee-b686-4e3d-b554-393fc09a770d\") " pod="openstack/swift-storage-0"
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.526225 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq5d5\" (UniqueName: \"kubernetes.io/projected/b7d310ee-b686-4e3d-b554-393fc09a770d-kube-api-access-zq5d5\") pod \"swift-storage-0\" (UID: \"b7d310ee-b686-4e3d-b554-393fc09a770d\") " pod="openstack/swift-storage-0"
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.526307 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b7d310ee-b686-4e3d-b554-393fc09a770d-lock\") pod \"swift-storage-0\" (UID: \"b7d310ee-b686-4e3d-b554-393fc09a770d\") " pod="openstack/swift-storage-0"
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.577108 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-5ncqm"
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.627449 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b2a9fdb-501b-4dad-a136-ca4a932e64d0-operator-scripts\") pod \"5b2a9fdb-501b-4dad-a136-ca4a932e64d0\" (UID: \"5b2a9fdb-501b-4dad-a136-ca4a932e64d0\") "
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.627804 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrjx7\" (UniqueName: \"kubernetes.io/projected/5b2a9fdb-501b-4dad-a136-ca4a932e64d0-kube-api-access-rrjx7\") pod \"5b2a9fdb-501b-4dad-a136-ca4a932e64d0\" (UID: \"5b2a9fdb-501b-4dad-a136-ca4a932e64d0\") "
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.628039 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b7d310ee-b686-4e3d-b554-393fc09a770d-lock\") pod \"swift-storage-0\" (UID: \"b7d310ee-b686-4e3d-b554-393fc09a770d\") " pod="openstack/swift-storage-0"
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.628109 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b7d310ee-b686-4e3d-b554-393fc09a770d-etc-swift\") pod \"swift-storage-0\" (UID: \"b7d310ee-b686-4e3d-b554-393fc09a770d\") " pod="openstack/swift-storage-0"
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.628153 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"b7d310ee-b686-4e3d-b554-393fc09a770d\") " pod="openstack/swift-storage-0"
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.628194 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b7d310ee-b686-4e3d-b554-393fc09a770d-cache\") pod \"swift-storage-0\" (UID: \"b7d310ee-b686-4e3d-b554-393fc09a770d\") " pod="openstack/swift-storage-0"
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.628214 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zq5d5\" (UniqueName: \"kubernetes.io/projected/b7d310ee-b686-4e3d-b554-393fc09a770d-kube-api-access-zq5d5\") pod \"swift-storage-0\" (UID: \"b7d310ee-b686-4e3d-b554-393fc09a770d\") " pod="openstack/swift-storage-0"
Nov 28 07:14:30 crc kubenswrapper[4946]: E1128 07:14:30.628502 4946 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Nov 28 07:14:30 crc kubenswrapper[4946]: E1128 07:14:30.628553 4946 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Nov 28 07:14:30 crc kubenswrapper[4946]: E1128 07:14:30.628643 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b7d310ee-b686-4e3d-b554-393fc09a770d-etc-swift podName:b7d310ee-b686-4e3d-b554-393fc09a770d nodeName:}" failed. No retries permitted until 2025-11-28 07:14:31.128609052 +0000 UTC m=+1325.506674363 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b7d310ee-b686-4e3d-b554-393fc09a770d-etc-swift") pod "swift-storage-0" (UID: "b7d310ee-b686-4e3d-b554-393fc09a770d") : configmap "swift-ring-files" not found
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.629141 4946 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"b7d310ee-b686-4e3d-b554-393fc09a770d\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/swift-storage-0"
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.629227 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b7d310ee-b686-4e3d-b554-393fc09a770d-lock\") pod \"swift-storage-0\" (UID: \"b7d310ee-b686-4e3d-b554-393fc09a770d\") " pod="openstack/swift-storage-0"
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.629813 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b2a9fdb-501b-4dad-a136-ca4a932e64d0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5b2a9fdb-501b-4dad-a136-ca4a932e64d0" (UID: "5b2a9fdb-501b-4dad-a136-ca4a932e64d0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.634727 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b7d310ee-b686-4e3d-b554-393fc09a770d-cache\") pod \"swift-storage-0\" (UID: \"b7d310ee-b686-4e3d-b554-393fc09a770d\") " pod="openstack/swift-storage-0"
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.652757 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b2a9fdb-501b-4dad-a136-ca4a932e64d0-kube-api-access-rrjx7" (OuterVolumeSpecName: "kube-api-access-rrjx7") pod "5b2a9fdb-501b-4dad-a136-ca4a932e64d0" (UID: "5b2a9fdb-501b-4dad-a136-ca4a932e64d0"). InnerVolumeSpecName "kube-api-access-rrjx7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.657494 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"b7d310ee-b686-4e3d-b554-393fc09a770d\") " pod="openstack/swift-storage-0"
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.665350 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zq5d5\" (UniqueName: \"kubernetes.io/projected/b7d310ee-b686-4e3d-b554-393fc09a770d-kube-api-access-zq5d5\") pod \"swift-storage-0\" (UID: \"b7d310ee-b686-4e3d-b554-393fc09a770d\") " pod="openstack/swift-storage-0"
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.728417 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-nmgj8"]
Nov 28 07:14:30 crc kubenswrapper[4946]: E1128 07:14:30.729073 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b2a9fdb-501b-4dad-a136-ca4a932e64d0" containerName="mariadb-database-create"
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.729097 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b2a9fdb-501b-4dad-a136-ca4a932e64d0" containerName="mariadb-database-create"
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.729338 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b2a9fdb-501b-4dad-a136-ca4a932e64d0" containerName="mariadb-database-create"
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.729588 4946 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b2a9fdb-501b-4dad-a136-ca4a932e64d0-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.729653 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrjx7\" (UniqueName: \"kubernetes.io/projected/5b2a9fdb-501b-4dad-a136-ca4a932e64d0-kube-api-access-rrjx7\") on node \"crc\" DevicePath \"\""
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.730178 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-nmgj8"
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.738760 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.738933 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.741988 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.757500 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d9cc-account-create-update-vqrs4"
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.759831 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-nmgj8"]
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.766932 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-7d4w7"
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.767829 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.784627 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7786-account-create-update-jrfgb"
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.833175 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e85173bc-690d-4513-924c-0ba5c540d432-operator-scripts\") pod \"e85173bc-690d-4513-924c-0ba5c540d432\" (UID: \"e85173bc-690d-4513-924c-0ba5c540d432\") "
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.833265 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kt2h\" (UniqueName: \"kubernetes.io/projected/e85173bc-690d-4513-924c-0ba5c540d432-kube-api-access-2kt2h\") pod \"e85173bc-690d-4513-924c-0ba5c540d432\" (UID: \"e85173bc-690d-4513-924c-0ba5c540d432\") "
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.833382 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnqmj\" (UniqueName: \"kubernetes.io/projected/feb2d0e9-f8e7-4c39-be09-05728a90c0e9-kube-api-access-mnqmj\") pod \"feb2d0e9-f8e7-4c39-be09-05728a90c0e9\" (UID: \"feb2d0e9-f8e7-4c39-be09-05728a90c0e9\") "
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.833470 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/feb2d0e9-f8e7-4c39-be09-05728a90c0e9-operator-scripts\") pod \"feb2d0e9-f8e7-4c39-be09-05728a90c0e9\" (UID: \"feb2d0e9-f8e7-4c39-be09-05728a90c0e9\") "
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.833976 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d1b70933-a95b-4637-8c02-d77b9ef7a980-dispersionconf\") pod \"swift-ring-rebalance-nmgj8\" (UID: \"d1b70933-a95b-4637-8c02-d77b9ef7a980\") " pod="openstack/swift-ring-rebalance-nmgj8"
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.834016 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d1b70933-a95b-4637-8c02-d77b9ef7a980-ring-data-devices\") pod \"swift-ring-rebalance-nmgj8\" (UID: \"d1b70933-a95b-4637-8c02-d77b9ef7a980\") " pod="openstack/swift-ring-rebalance-nmgj8"
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.834038 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d1b70933-a95b-4637-8c02-d77b9ef7a980-etc-swift\") pod \"swift-ring-rebalance-nmgj8\" (UID: \"d1b70933-a95b-4637-8c02-d77b9ef7a980\") " pod="openstack/swift-ring-rebalance-nmgj8"
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.834064 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1b70933-a95b-4637-8c02-d77b9ef7a980-combined-ca-bundle\") pod \"swift-ring-rebalance-nmgj8\" (UID: \"d1b70933-a95b-4637-8c02-d77b9ef7a980\") " pod="openstack/swift-ring-rebalance-nmgj8"
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.834102 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1b70933-a95b-4637-8c02-d77b9ef7a980-scripts\") pod \"swift-ring-rebalance-nmgj8\" (UID: \"d1b70933-a95b-4637-8c02-d77b9ef7a980\") " pod="openstack/swift-ring-rebalance-nmgj8"
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.834158 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt64r\" (UniqueName: \"kubernetes.io/projected/d1b70933-a95b-4637-8c02-d77b9ef7a980-kube-api-access-xt64r\") pod \"swift-ring-rebalance-nmgj8\" (UID: \"d1b70933-a95b-4637-8c02-d77b9ef7a980\") " pod="openstack/swift-ring-rebalance-nmgj8"
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.834192 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d1b70933-a95b-4637-8c02-d77b9ef7a980-swiftconf\") pod \"swift-ring-rebalance-nmgj8\" (UID: \"d1b70933-a95b-4637-8c02-d77b9ef7a980\") " pod="openstack/swift-ring-rebalance-nmgj8"
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.835991 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/feb2d0e9-f8e7-4c39-be09-05728a90c0e9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "feb2d0e9-f8e7-4c39-be09-05728a90c0e9" (UID: "feb2d0e9-f8e7-4c39-be09-05728a90c0e9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.837228 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e85173bc-690d-4513-924c-0ba5c540d432-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e85173bc-690d-4513-924c-0ba5c540d432" (UID: "e85173bc-690d-4513-924c-0ba5c540d432"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.841114 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/feb2d0e9-f8e7-4c39-be09-05728a90c0e9-kube-api-access-mnqmj" (OuterVolumeSpecName: "kube-api-access-mnqmj") pod "feb2d0e9-f8e7-4c39-be09-05728a90c0e9" (UID: "feb2d0e9-f8e7-4c39-be09-05728a90c0e9"). InnerVolumeSpecName "kube-api-access-mnqmj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.841407 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e85173bc-690d-4513-924c-0ba5c540d432-kube-api-access-2kt2h" (OuterVolumeSpecName: "kube-api-access-2kt2h") pod "e85173bc-690d-4513-924c-0ba5c540d432" (UID: "e85173bc-690d-4513-924c-0ba5c540d432"). InnerVolumeSpecName "kube-api-access-2kt2h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.935394 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ssch\" (UniqueName: \"kubernetes.io/projected/3dcd796a-a7cc-4e4a-866b-c26819be5e92-kube-api-access-7ssch\") pod \"3dcd796a-a7cc-4e4a-866b-c26819be5e92\" (UID: \"3dcd796a-a7cc-4e4a-866b-c26819be5e92\") "
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.935522 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3dcd796a-a7cc-4e4a-866b-c26819be5e92-operator-scripts\") pod \"3dcd796a-a7cc-4e4a-866b-c26819be5e92\" (UID: \"3dcd796a-a7cc-4e4a-866b-c26819be5e92\") "
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.935797 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d1b70933-a95b-4637-8c02-d77b9ef7a980-swiftconf\") pod \"swift-ring-rebalance-nmgj8\" (UID: \"d1b70933-a95b-4637-8c02-d77b9ef7a980\") " pod="openstack/swift-ring-rebalance-nmgj8"
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.935901 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d1b70933-a95b-4637-8c02-d77b9ef7a980-dispersionconf\") pod \"swift-ring-rebalance-nmgj8\" (UID: \"d1b70933-a95b-4637-8c02-d77b9ef7a980\") " pod="openstack/swift-ring-rebalance-nmgj8"
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.935930 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d1b70933-a95b-4637-8c02-d77b9ef7a980-ring-data-devices\") pod \"swift-ring-rebalance-nmgj8\" (UID: \"d1b70933-a95b-4637-8c02-d77b9ef7a980\") " pod="openstack/swift-ring-rebalance-nmgj8"
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.935950 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d1b70933-a95b-4637-8c02-d77b9ef7a980-etc-swift\") pod \"swift-ring-rebalance-nmgj8\" (UID: \"d1b70933-a95b-4637-8c02-d77b9ef7a980\") " pod="openstack/swift-ring-rebalance-nmgj8"
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.935970 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1b70933-a95b-4637-8c02-d77b9ef7a980-combined-ca-bundle\") pod \"swift-ring-rebalance-nmgj8\" (UID: \"d1b70933-a95b-4637-8c02-d77b9ef7a980\") " pod="openstack/swift-ring-rebalance-nmgj8"
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.935998 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1b70933-a95b-4637-8c02-d77b9ef7a980-scripts\") pod \"swift-ring-rebalance-nmgj8\" (UID: \"d1b70933-a95b-4637-8c02-d77b9ef7a980\") " pod="openstack/swift-ring-rebalance-nmgj8"
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.936038 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xt64r\" (UniqueName: \"kubernetes.io/projected/d1b70933-a95b-4637-8c02-d77b9ef7a980-kube-api-access-xt64r\") pod \"swift-ring-rebalance-nmgj8\" (UID: \"d1b70933-a95b-4637-8c02-d77b9ef7a980\") " pod="openstack/swift-ring-rebalance-nmgj8"
Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.936086 4946
reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e85173bc-690d-4513-924c-0ba5c540d432-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.936097 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kt2h\" (UniqueName: \"kubernetes.io/projected/e85173bc-690d-4513-924c-0ba5c540d432-kube-api-access-2kt2h\") on node \"crc\" DevicePath \"\"" Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.936109 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnqmj\" (UniqueName: \"kubernetes.io/projected/feb2d0e9-f8e7-4c39-be09-05728a90c0e9-kube-api-access-mnqmj\") on node \"crc\" DevicePath \"\"" Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.936121 4946 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/feb2d0e9-f8e7-4c39-be09-05728a90c0e9-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.938773 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d1b70933-a95b-4637-8c02-d77b9ef7a980-ring-data-devices\") pod \"swift-ring-rebalance-nmgj8\" (UID: \"d1b70933-a95b-4637-8c02-d77b9ef7a980\") " pod="openstack/swift-ring-rebalance-nmgj8" Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.941496 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3dcd796a-a7cc-4e4a-866b-c26819be5e92-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3dcd796a-a7cc-4e4a-866b-c26819be5e92" (UID: "3dcd796a-a7cc-4e4a-866b-c26819be5e92"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.944669 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d1b70933-a95b-4637-8c02-d77b9ef7a980-swiftconf\") pod \"swift-ring-rebalance-nmgj8\" (UID: \"d1b70933-a95b-4637-8c02-d77b9ef7a980\") " pod="openstack/swift-ring-rebalance-nmgj8" Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.946740 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3dcd796a-a7cc-4e4a-866b-c26819be5e92-kube-api-access-7ssch" (OuterVolumeSpecName: "kube-api-access-7ssch") pod "3dcd796a-a7cc-4e4a-866b-c26819be5e92" (UID: "3dcd796a-a7cc-4e4a-866b-c26819be5e92"). InnerVolumeSpecName "kube-api-access-7ssch". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.947378 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1b70933-a95b-4637-8c02-d77b9ef7a980-scripts\") pod \"swift-ring-rebalance-nmgj8\" (UID: \"d1b70933-a95b-4637-8c02-d77b9ef7a980\") " pod="openstack/swift-ring-rebalance-nmgj8" Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.947679 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d1b70933-a95b-4637-8c02-d77b9ef7a980-etc-swift\") pod \"swift-ring-rebalance-nmgj8\" (UID: \"d1b70933-a95b-4637-8c02-d77b9ef7a980\") " pod="openstack/swift-ring-rebalance-nmgj8" Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.948720 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d1b70933-a95b-4637-8c02-d77b9ef7a980-dispersionconf\") pod \"swift-ring-rebalance-nmgj8\" (UID: \"d1b70933-a95b-4637-8c02-d77b9ef7a980\") " pod="openstack/swift-ring-rebalance-nmgj8" Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.956371 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1b70933-a95b-4637-8c02-d77b9ef7a980-combined-ca-bundle\") pod \"swift-ring-rebalance-nmgj8\" (UID: \"d1b70933-a95b-4637-8c02-d77b9ef7a980\") " pod="openstack/swift-ring-rebalance-nmgj8" Nov 28 07:14:30 crc kubenswrapper[4946]: I1128 07:14:30.979561 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xt64r\" (UniqueName: \"kubernetes.io/projected/d1b70933-a95b-4637-8c02-d77b9ef7a980-kube-api-access-xt64r\") pod \"swift-ring-rebalance-nmgj8\" (UID: \"d1b70933-a95b-4637-8c02-d77b9ef7a980\") " pod="openstack/swift-ring-rebalance-nmgj8" Nov 28 07:14:31 crc kubenswrapper[4946]: I1128 07:14:31.039932 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ssch\" (UniqueName: \"kubernetes.io/projected/3dcd796a-a7cc-4e4a-866b-c26819be5e92-kube-api-access-7ssch\") on node \"crc\" DevicePath \"\"" Nov 28 07:14:31 crc kubenswrapper[4946]: I1128 07:14:31.039968 4946 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3dcd796a-a7cc-4e4a-866b-c26819be5e92-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:14:31 crc kubenswrapper[4946]: I1128 07:14:31.091719 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-nmgj8" Nov 28 07:14:31 crc kubenswrapper[4946]: I1128 07:14:31.121753 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d9cc-account-create-update-vqrs4" event={"ID":"feb2d0e9-f8e7-4c39-be09-05728a90c0e9","Type":"ContainerDied","Data":"f4d9e41c3ad39835dec895e55b4f2b15cf129c7984b74692bde4a73fe06e3949"} Nov 28 07:14:31 crc kubenswrapper[4946]: I1128 07:14:31.121807 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4d9e41c3ad39835dec895e55b4f2b15cf129c7984b74692bde4a73fe06e3949" Nov 28 07:14:31 crc kubenswrapper[4946]: I1128 07:14:31.121780 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-d9cc-account-create-update-vqrs4" Nov 28 07:14:31 crc kubenswrapper[4946]: I1128 07:14:31.124698 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-5ncqm" event={"ID":"5b2a9fdb-501b-4dad-a136-ca4a932e64d0","Type":"ContainerDied","Data":"4b223df1816e09227fd8c6f166f78204cb49685f29444e296d6aa7df57d28369"} Nov 28 07:14:31 crc kubenswrapper[4946]: I1128 07:14:31.124749 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b223df1816e09227fd8c6f166f78204cb49685f29444e296d6aa7df57d28369" Nov 28 07:14:31 crc kubenswrapper[4946]: I1128 07:14:31.124836 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-5ncqm" Nov 28 07:14:31 crc kubenswrapper[4946]: I1128 07:14:31.129519 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-7d4w7" event={"ID":"e85173bc-690d-4513-924c-0ba5c540d432","Type":"ContainerDied","Data":"de1242199d1be0bfaed85814a3a08a785072af661353c0db409eda7928b60f9d"} Nov 28 07:14:31 crc kubenswrapper[4946]: I1128 07:14:31.129568 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de1242199d1be0bfaed85814a3a08a785072af661353c0db409eda7928b60f9d" Nov 28 07:14:31 crc kubenswrapper[4946]: I1128 07:14:31.129655 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-7d4w7" Nov 28 07:14:31 crc kubenswrapper[4946]: I1128 07:14:31.131856 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7786-account-create-update-jrfgb" event={"ID":"3dcd796a-a7cc-4e4a-866b-c26819be5e92","Type":"ContainerDied","Data":"3fd4964e5848f677d6521dd19010b2a366eecc2be9a28fecccb4b4a492a34b21"} Nov 28 07:14:31 crc kubenswrapper[4946]: I1128 07:14:31.131907 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fd4964e5848f677d6521dd19010b2a366eecc2be9a28fecccb4b4a492a34b21" Nov 28 07:14:31 crc kubenswrapper[4946]: I1128 07:14:31.131925 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7786-account-create-update-jrfgb" Nov 28 07:14:31 crc kubenswrapper[4946]: I1128 07:14:31.141870 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b7d310ee-b686-4e3d-b554-393fc09a770d-etc-swift\") pod \"swift-storage-0\" (UID: \"b7d310ee-b686-4e3d-b554-393fc09a770d\") " pod="openstack/swift-storage-0" Nov 28 07:14:31 crc kubenswrapper[4946]: E1128 07:14:31.142084 4946 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 28 07:14:31 crc kubenswrapper[4946]: E1128 07:14:31.142124 4946 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 28 07:14:31 crc kubenswrapper[4946]: E1128 07:14:31.142213 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b7d310ee-b686-4e3d-b554-393fc09a770d-etc-swift podName:b7d310ee-b686-4e3d-b554-393fc09a770d nodeName:}" failed. No retries permitted until 2025-11-28 07:14:32.142172016 +0000 UTC m=+1326.520237127 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b7d310ee-b686-4e3d-b554-393fc09a770d-etc-swift") pod "swift-storage-0" (UID: "b7d310ee-b686-4e3d-b554-393fc09a770d") : configmap "swift-ring-files" not found Nov 28 07:14:31 crc kubenswrapper[4946]: I1128 07:14:31.592562 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-nmgj8"] Nov 28 07:14:32 crc kubenswrapper[4946]: I1128 07:14:32.146933 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-nmgj8" event={"ID":"d1b70933-a95b-4637-8c02-d77b9ef7a980","Type":"ContainerStarted","Data":"def9048b1504c92c4c41d5f89efc25780c648d4a1a1f1dbed282d2bf48fe1e05"} Nov 28 07:14:32 crc kubenswrapper[4946]: I1128 07:14:32.163120 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b7d310ee-b686-4e3d-b554-393fc09a770d-etc-swift\") pod \"swift-storage-0\" (UID: \"b7d310ee-b686-4e3d-b554-393fc09a770d\") " pod="openstack/swift-storage-0" Nov 28 07:14:32 crc kubenswrapper[4946]: E1128 07:14:32.163397 4946 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 28 07:14:32 crc kubenswrapper[4946]: E1128 07:14:32.163424 4946 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 28 07:14:32 crc kubenswrapper[4946]: E1128 07:14:32.163519 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b7d310ee-b686-4e3d-b554-393fc09a770d-etc-swift podName:b7d310ee-b686-4e3d-b554-393fc09a770d nodeName:}" failed. No retries permitted until 2025-11-28 07:14:34.163493561 +0000 UTC m=+1328.541558692 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b7d310ee-b686-4e3d-b554-393fc09a770d-etc-swift") pod "swift-storage-0" (UID: "b7d310ee-b686-4e3d-b554-393fc09a770d") : configmap "swift-ring-files" not found Nov 28 07:14:32 crc kubenswrapper[4946]: I1128 07:14:32.688524 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-kd7m8"] Nov 28 07:14:32 crc kubenswrapper[4946]: E1128 07:14:32.688952 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e85173bc-690d-4513-924c-0ba5c540d432" containerName="mariadb-database-create" Nov 28 07:14:32 crc kubenswrapper[4946]: I1128 07:14:32.688977 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="e85173bc-690d-4513-924c-0ba5c540d432" containerName="mariadb-database-create" Nov 28 07:14:32 crc kubenswrapper[4946]: E1128 07:14:32.688997 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feb2d0e9-f8e7-4c39-be09-05728a90c0e9" containerName="mariadb-account-create-update" Nov 28 07:14:32 crc kubenswrapper[4946]: I1128 07:14:32.689003 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="feb2d0e9-f8e7-4c39-be09-05728a90c0e9" containerName="mariadb-account-create-update" Nov 28 07:14:32 crc kubenswrapper[4946]: E1128 07:14:32.689020 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dcd796a-a7cc-4e4a-866b-c26819be5e92" containerName="mariadb-account-create-update" Nov 28 07:14:32 crc kubenswrapper[4946]: I1128 07:14:32.689027 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dcd796a-a7cc-4e4a-866b-c26819be5e92" containerName="mariadb-account-create-update" Nov 28 07:14:32 crc kubenswrapper[4946]: I1128 07:14:32.689244 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="e85173bc-690d-4513-924c-0ba5c540d432" containerName="mariadb-database-create" Nov 28 07:14:32 crc kubenswrapper[4946]: I1128 07:14:32.689279 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="3dcd796a-a7cc-4e4a-866b-c26819be5e92" containerName="mariadb-account-create-update" Nov 28 07:14:32 crc kubenswrapper[4946]: I1128 07:14:32.689296 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="feb2d0e9-f8e7-4c39-be09-05728a90c0e9" containerName="mariadb-account-create-update" Nov 28 07:14:32 crc kubenswrapper[4946]: I1128 07:14:32.690406 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-kd7m8" Nov 28 07:14:32 crc kubenswrapper[4946]: I1128 07:14:32.703158 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-kd7m8"] Nov 28 07:14:32 crc kubenswrapper[4946]: I1128 07:14:32.770977 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-b763-account-create-update-vz6db"] Nov 28 07:14:32 crc kubenswrapper[4946]: I1128 07:14:32.774626 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b763-account-create-update-vz6db" Nov 28 07:14:32 crc kubenswrapper[4946]: I1128 07:14:32.776323 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bab5772e-0e4e-4425-a851-10e59b8b602c-operator-scripts\") pod \"glance-db-create-kd7m8\" (UID: \"bab5772e-0e4e-4425-a851-10e59b8b602c\") " pod="openstack/glance-db-create-kd7m8" Nov 28 07:14:32 crc kubenswrapper[4946]: I1128 07:14:32.776395 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzvmr\" (UniqueName: \"kubernetes.io/projected/bab5772e-0e4e-4425-a851-10e59b8b602c-kube-api-access-rzvmr\") pod \"glance-db-create-kd7m8\" (UID: \"bab5772e-0e4e-4425-a851-10e59b8b602c\") " pod="openstack/glance-db-create-kd7m8" Nov 28 07:14:32 crc kubenswrapper[4946]: I1128 07:14:32.777419 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Nov 28 07:14:32 crc kubenswrapper[4946]: I1128 07:14:32.782485 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b763-account-create-update-vz6db"] Nov 28 07:14:32 crc kubenswrapper[4946]: I1128 07:14:32.878841 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lct62\" (UniqueName: \"kubernetes.io/projected/e44e658f-9e9e-4ee3-98d3-134ba6c8a6ee-kube-api-access-lct62\") pod \"glance-b763-account-create-update-vz6db\" (UID: \"e44e658f-9e9e-4ee3-98d3-134ba6c8a6ee\") " pod="openstack/glance-b763-account-create-update-vz6db" Nov 28 07:14:32 crc kubenswrapper[4946]: I1128 07:14:32.878896 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e44e658f-9e9e-4ee3-98d3-134ba6c8a6ee-operator-scripts\") pod \"glance-b763-account-create-update-vz6db\" (UID: \"e44e658f-9e9e-4ee3-98d3-134ba6c8a6ee\") " pod="openstack/glance-b763-account-create-update-vz6db" Nov 28 07:14:32 crc kubenswrapper[4946]: I1128 07:14:32.879201 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bab5772e-0e4e-4425-a851-10e59b8b602c-operator-scripts\") pod \"glance-db-create-kd7m8\" (UID: \"bab5772e-0e4e-4425-a851-10e59b8b602c\") " pod="openstack/glance-db-create-kd7m8" Nov 28 07:14:32 crc kubenswrapper[4946]: I1128 07:14:32.879351 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzvmr\" (UniqueName: \"kubernetes.io/projected/bab5772e-0e4e-4425-a851-10e59b8b602c-kube-api-access-rzvmr\") pod \"glance-db-create-kd7m8\" (UID: \"bab5772e-0e4e-4425-a851-10e59b8b602c\") " pod="openstack/glance-db-create-kd7m8" Nov 28 07:14:32 crc kubenswrapper[4946]: I1128 07:14:32.881928 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bab5772e-0e4e-4425-a851-10e59b8b602c-operator-scripts\") pod \"glance-db-create-kd7m8\" (UID: \"bab5772e-0e4e-4425-a851-10e59b8b602c\") " pod="openstack/glance-db-create-kd7m8" Nov 28 07:14:32 crc kubenswrapper[4946]: I1128 07:14:32.910843 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzvmr\" (UniqueName: \"kubernetes.io/projected/bab5772e-0e4e-4425-a851-10e59b8b602c-kube-api-access-rzvmr\") pod \"glance-db-create-kd7m8\" (UID: 
\"bab5772e-0e4e-4425-a851-10e59b8b602c\") " pod="openstack/glance-db-create-kd7m8" Nov 28 07:14:32 crc kubenswrapper[4946]: I1128 07:14:32.981640 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lct62\" (UniqueName: \"kubernetes.io/projected/e44e658f-9e9e-4ee3-98d3-134ba6c8a6ee-kube-api-access-lct62\") pod \"glance-b763-account-create-update-vz6db\" (UID: \"e44e658f-9e9e-4ee3-98d3-134ba6c8a6ee\") " pod="openstack/glance-b763-account-create-update-vz6db" Nov 28 07:14:32 crc kubenswrapper[4946]: I1128 07:14:32.981729 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e44e658f-9e9e-4ee3-98d3-134ba6c8a6ee-operator-scripts\") pod \"glance-b763-account-create-update-vz6db\" (UID: \"e44e658f-9e9e-4ee3-98d3-134ba6c8a6ee\") " pod="openstack/glance-b763-account-create-update-vz6db" Nov 28 07:14:32 crc kubenswrapper[4946]: I1128 07:14:32.982843 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e44e658f-9e9e-4ee3-98d3-134ba6c8a6ee-operator-scripts\") pod \"glance-b763-account-create-update-vz6db\" (UID: \"e44e658f-9e9e-4ee3-98d3-134ba6c8a6ee\") " pod="openstack/glance-b763-account-create-update-vz6db" Nov 28 07:14:33 crc kubenswrapper[4946]: I1128 07:14:33.002690 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lct62\" (UniqueName: \"kubernetes.io/projected/e44e658f-9e9e-4ee3-98d3-134ba6c8a6ee-kube-api-access-lct62\") pod \"glance-b763-account-create-update-vz6db\" (UID: \"e44e658f-9e9e-4ee3-98d3-134ba6c8a6ee\") " pod="openstack/glance-b763-account-create-update-vz6db" Nov 28 07:14:33 crc kubenswrapper[4946]: I1128 07:14:33.015758 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-kd7m8" Nov 28 07:14:33 crc kubenswrapper[4946]: I1128 07:14:33.173110 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b763-account-create-update-vz6db" Nov 28 07:14:33 crc kubenswrapper[4946]: I1128 07:14:33.177350 4946 generic.go:334] "Generic (PLEG): container finished" podID="91e352b8-41dc-44ca-8f6d-77c1e7de5469" containerID="f30c67527b275a897b7f74c6d77350b5369816fdfe63a2e4bed944f133e98f9e" exitCode=0 Nov 28 07:14:33 crc kubenswrapper[4946]: I1128 07:14:33.177393 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c8cb8df65-wcsz4" event={"ID":"91e352b8-41dc-44ca-8f6d-77c1e7de5469","Type":"ContainerDied","Data":"f30c67527b275a897b7f74c6d77350b5369816fdfe63a2e4bed944f133e98f9e"} Nov 28 07:14:33 crc kubenswrapper[4946]: I1128 07:14:33.522041 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-kd7m8"] Nov 28 07:14:33 crc kubenswrapper[4946]: W1128 07:14:33.533093 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbab5772e_0e4e_4425_a851_10e59b8b602c.slice/crio-887ed96ba62de8a02b5e7a83221afe6b48abd28295d832deb5d3b0978a51b414 WatchSource:0}: Error finding container 887ed96ba62de8a02b5e7a83221afe6b48abd28295d832deb5d3b0978a51b414: Status 404 returned error can't find the container with id 887ed96ba62de8a02b5e7a83221afe6b48abd28295d832deb5d3b0978a51b414 Nov 28 07:14:33 crc kubenswrapper[4946]: I1128 07:14:33.711786 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b763-account-create-update-vz6db"] Nov 28 07:14:34 crc kubenswrapper[4946]: I1128 07:14:34.190057 4946 generic.go:334] "Generic (PLEG): container finished" podID="bab5772e-0e4e-4425-a851-10e59b8b602c" containerID="17f6467df1ccf59fb978ae9c2f548d312c3f97527025965f75f479954b2b0652" exitCode=0 Nov 28 07:14:34 crc kubenswrapper[4946]: I1128 07:14:34.190115 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-kd7m8" event={"ID":"bab5772e-0e4e-4425-a851-10e59b8b602c","Type":"ContainerDied","Data":"17f6467df1ccf59fb978ae9c2f548d312c3f97527025965f75f479954b2b0652"} Nov 28 07:14:34 crc kubenswrapper[4946]: I1128 07:14:34.190182 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-kd7m8" event={"ID":"bab5772e-0e4e-4425-a851-10e59b8b602c","Type":"ContainerStarted","Data":"887ed96ba62de8a02b5e7a83221afe6b48abd28295d832deb5d3b0978a51b414"} Nov 28 07:14:34 crc kubenswrapper[4946]: I1128 07:14:34.193571 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c8cb8df65-wcsz4" event={"ID":"91e352b8-41dc-44ca-8f6d-77c1e7de5469","Type":"ContainerStarted","Data":"c943cffe443eec957964cd2a79890f077db273d84828eb15f3af69c5ce24c40a"} Nov 28 07:14:34 crc kubenswrapper[4946]: I1128 07:14:34.193785 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7c8cb8df65-wcsz4" Nov 28 07:14:34 crc kubenswrapper[4946]: I1128 07:14:34.218242 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b7d310ee-b686-4e3d-b554-393fc09a770d-etc-swift\") pod \"swift-storage-0\" (UID: \"b7d310ee-b686-4e3d-b554-393fc09a770d\") " pod="openstack/swift-storage-0" Nov 28 07:14:34 crc kubenswrapper[4946]: E1128 07:14:34.218846 4946 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 28 07:14:34 crc kubenswrapper[4946]: E1128 07:14:34.218887 4946 projected.go:194] Error preparing data for projected volume etc-swift 
for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 28 07:14:34 crc kubenswrapper[4946]: E1128 07:14:34.218987 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b7d310ee-b686-4e3d-b554-393fc09a770d-etc-swift podName:b7d310ee-b686-4e3d-b554-393fc09a770d nodeName:}" failed. No retries permitted until 2025-11-28 07:14:38.218962634 +0000 UTC m=+1332.597027745 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b7d310ee-b686-4e3d-b554-393fc09a770d-etc-swift") pod "swift-storage-0" (UID: "b7d310ee-b686-4e3d-b554-393fc09a770d") : configmap "swift-ring-files" not found Nov 28 07:14:34 crc kubenswrapper[4946]: I1128 07:14:34.246498 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7c8cb8df65-wcsz4" podStartSLOduration=5.24647025 podStartE2EDuration="5.24647025s" podCreationTimestamp="2025-11-28 07:14:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:14:34.243976158 +0000 UTC m=+1328.622041269" watchObservedRunningTime="2025-11-28 07:14:34.24647025 +0000 UTC m=+1328.624535361" Nov 28 07:14:38 crc kubenswrapper[4946]: I1128 07:14:38.313537 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b7d310ee-b686-4e3d-b554-393fc09a770d-etc-swift\") pod \"swift-storage-0\" (UID: \"b7d310ee-b686-4e3d-b554-393fc09a770d\") " pod="openstack/swift-storage-0" Nov 28 07:14:38 crc kubenswrapper[4946]: E1128 07:14:38.314033 4946 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 28 07:14:38 crc kubenswrapper[4946]: E1128 07:14:38.314371 4946 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 28 07:14:38 crc kubenswrapper[4946]: E1128 07:14:38.314511 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b7d310ee-b686-4e3d-b554-393fc09a770d-etc-swift podName:b7d310ee-b686-4e3d-b554-393fc09a770d nodeName:}" failed. No retries permitted until 2025-11-28 07:14:46.314433713 +0000 UTC m=+1340.692498854 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b7d310ee-b686-4e3d-b554-393fc09a770d-etc-swift") pod "swift-storage-0" (UID: "b7d310ee-b686-4e3d-b554-393fc09a770d") : configmap "swift-ring-files" not found Nov 28 07:14:39 crc kubenswrapper[4946]: I1128 07:14:39.153590 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-kd7m8" Nov 28 07:14:39 crc kubenswrapper[4946]: I1128 07:14:39.232938 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzvmr\" (UniqueName: \"kubernetes.io/projected/bab5772e-0e4e-4425-a851-10e59b8b602c-kube-api-access-rzvmr\") pod \"bab5772e-0e4e-4425-a851-10e59b8b602c\" (UID: \"bab5772e-0e4e-4425-a851-10e59b8b602c\") " Nov 28 07:14:39 crc kubenswrapper[4946]: I1128 07:14:39.233959 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bab5772e-0e4e-4425-a851-10e59b8b602c-operator-scripts\") pod \"bab5772e-0e4e-4425-a851-10e59b8b602c\" (UID: \"bab5772e-0e4e-4425-a851-10e59b8b602c\") " Nov 28 07:14:39 crc kubenswrapper[4946]: I1128 07:14:39.235674 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bab5772e-0e4e-4425-a851-10e59b8b602c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bab5772e-0e4e-4425-a851-10e59b8b602c" (UID: "bab5772e-0e4e-4425-a851-10e59b8b602c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:14:39 crc kubenswrapper[4946]: I1128 07:14:39.240639 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bab5772e-0e4e-4425-a851-10e59b8b602c-kube-api-access-rzvmr" (OuterVolumeSpecName: "kube-api-access-rzvmr") pod "bab5772e-0e4e-4425-a851-10e59b8b602c" (UID: "bab5772e-0e4e-4425-a851-10e59b8b602c"). InnerVolumeSpecName "kube-api-access-rzvmr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:14:39 crc kubenswrapper[4946]: I1128 07:14:39.249197 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-kd7m8" event={"ID":"bab5772e-0e4e-4425-a851-10e59b8b602c","Type":"ContainerDied","Data":"887ed96ba62de8a02b5e7a83221afe6b48abd28295d832deb5d3b0978a51b414"} Nov 28 07:14:39 crc kubenswrapper[4946]: I1128 07:14:39.249248 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="887ed96ba62de8a02b5e7a83221afe6b48abd28295d832deb5d3b0978a51b414" Nov 28 07:14:39 crc kubenswrapper[4946]: I1128 07:14:39.249325 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-kd7m8" Nov 28 07:14:39 crc kubenswrapper[4946]: I1128 07:14:39.253153 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b763-account-create-update-vz6db" event={"ID":"e44e658f-9e9e-4ee3-98d3-134ba6c8a6ee","Type":"ContainerStarted","Data":"cea2f17c1917d0d07fb2d8cf63e2db9017030126a323ab59fefd53a5634c194e"} Nov 28 07:14:39 crc kubenswrapper[4946]: I1128 07:14:39.337063 4946 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bab5772e-0e4e-4425-a851-10e59b8b602c-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:14:39 crc kubenswrapper[4946]: I1128 07:14:39.337106 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzvmr\" (UniqueName: \"kubernetes.io/projected/bab5772e-0e4e-4425-a851-10e59b8b602c-kube-api-access-rzvmr\") on node \"crc\" DevicePath \"\"" Nov 28 07:14:39 crc kubenswrapper[4946]: I1128 07:14:39.543663 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7c8cb8df65-wcsz4" Nov 28 07:14:39 crc kubenswrapper[4946]: I1128 07:14:39.641541 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58bd875f97-67m8z"] Nov 28 07:14:39 crc kubenswrapper[4946]: I1128 07:14:39.641921 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58bd875f97-67m8z" podUID="ded10611-80cf-497e-87d5-3dc3af54962c" containerName="dnsmasq-dns" containerID="cri-o://02954776648d9637d7d33b8892b12ba998a24466bd49a009f1e8210276eaf130" gracePeriod=10 Nov 28 07:14:40 crc kubenswrapper[4946]: I1128 07:14:40.129822 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58bd875f97-67m8z" Nov 28 07:14:40 crc kubenswrapper[4946]: I1128 07:14:40.160837 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ded10611-80cf-497e-87d5-3dc3af54962c-ovsdbserver-nb\") pod \"ded10611-80cf-497e-87d5-3dc3af54962c\" (UID: \"ded10611-80cf-497e-87d5-3dc3af54962c\") " Nov 28 07:14:40 crc kubenswrapper[4946]: I1128 07:14:40.161300 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ded10611-80cf-497e-87d5-3dc3af54962c-ovsdbserver-sb\") pod \"ded10611-80cf-497e-87d5-3dc3af54962c\" (UID: \"ded10611-80cf-497e-87d5-3dc3af54962c\") " Nov 28 07:14:40 crc kubenswrapper[4946]: I1128 07:14:40.161447 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ded10611-80cf-497e-87d5-3dc3af54962c-config\") pod \"ded10611-80cf-497e-87d5-3dc3af54962c\" (UID: \"ded10611-80cf-497e-87d5-3dc3af54962c\") " Nov 28 07:14:40 crc kubenswrapper[4946]: I1128 07:14:40.161591 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ded10611-80cf-497e-87d5-3dc3af54962c-dns-svc\") pod \"ded10611-80cf-497e-87d5-3dc3af54962c\" (UID: \"ded10611-80cf-497e-87d5-3dc3af54962c\") " Nov 28 07:14:40 crc kubenswrapper[4946]: I1128 07:14:40.161716 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvdfp\" (UniqueName: \"kubernetes.io/projected/ded10611-80cf-497e-87d5-3dc3af54962c-kube-api-access-kvdfp\") pod \"ded10611-80cf-497e-87d5-3dc3af54962c\" (UID: 
\"ded10611-80cf-497e-87d5-3dc3af54962c\") " Nov 28 07:14:40 crc kubenswrapper[4946]: I1128 07:14:40.171672 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ded10611-80cf-497e-87d5-3dc3af54962c-kube-api-access-kvdfp" (OuterVolumeSpecName: "kube-api-access-kvdfp") pod "ded10611-80cf-497e-87d5-3dc3af54962c" (UID: "ded10611-80cf-497e-87d5-3dc3af54962c"). InnerVolumeSpecName "kube-api-access-kvdfp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:14:40 crc kubenswrapper[4946]: I1128 07:14:40.225546 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ded10611-80cf-497e-87d5-3dc3af54962c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ded10611-80cf-497e-87d5-3dc3af54962c" (UID: "ded10611-80cf-497e-87d5-3dc3af54962c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:14:40 crc kubenswrapper[4946]: I1128 07:14:40.227890 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ded10611-80cf-497e-87d5-3dc3af54962c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ded10611-80cf-497e-87d5-3dc3af54962c" (UID: "ded10611-80cf-497e-87d5-3dc3af54962c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:14:40 crc kubenswrapper[4946]: I1128 07:14:40.230841 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ded10611-80cf-497e-87d5-3dc3af54962c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ded10611-80cf-497e-87d5-3dc3af54962c" (UID: "ded10611-80cf-497e-87d5-3dc3af54962c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:14:40 crc kubenswrapper[4946]: I1128 07:14:40.233434 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ded10611-80cf-497e-87d5-3dc3af54962c-config" (OuterVolumeSpecName: "config") pod "ded10611-80cf-497e-87d5-3dc3af54962c" (UID: "ded10611-80cf-497e-87d5-3dc3af54962c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:14:40 crc kubenswrapper[4946]: I1128 07:14:40.264389 4946 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ded10611-80cf-497e-87d5-3dc3af54962c-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:14:40 crc kubenswrapper[4946]: I1128 07:14:40.264430 4946 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ded10611-80cf-497e-87d5-3dc3af54962c-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 07:14:40 crc kubenswrapper[4946]: I1128 07:14:40.264454 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvdfp\" (UniqueName: \"kubernetes.io/projected/ded10611-80cf-497e-87d5-3dc3af54962c-kube-api-access-kvdfp\") on node \"crc\" DevicePath \"\"" Nov 28 07:14:40 crc kubenswrapper[4946]: I1128 07:14:40.264500 4946 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ded10611-80cf-497e-87d5-3dc3af54962c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 07:14:40 crc kubenswrapper[4946]: I1128 07:14:40.264516 4946 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ded10611-80cf-497e-87d5-3dc3af54962c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 07:14:40 crc kubenswrapper[4946]: I1128 07:14:40.267094 4946 generic.go:334] "Generic (PLEG): container finished" podID="ded10611-80cf-497e-87d5-3dc3af54962c" containerID="02954776648d9637d7d33b8892b12ba998a24466bd49a009f1e8210276eaf130" exitCode=0 Nov 28 07:14:40 crc kubenswrapper[4946]: I1128 07:14:40.267204 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bd875f97-67m8z" event={"ID":"ded10611-80cf-497e-87d5-3dc3af54962c","Type":"ContainerDied","Data":"02954776648d9637d7d33b8892b12ba998a24466bd49a009f1e8210276eaf130"} Nov 28 07:14:40 crc kubenswrapper[4946]: I1128 07:14:40.267288 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bd875f97-67m8z" event={"ID":"ded10611-80cf-497e-87d5-3dc3af54962c","Type":"ContainerDied","Data":"2b75eb00bbc05748fe8a15427b97d21245b4ff660e3fbd1d8757f9bf0bfd52c4"} Nov 28 07:14:40 crc kubenswrapper[4946]: I1128 07:14:40.267319 4946 scope.go:117] "RemoveContainer" containerID="02954776648d9637d7d33b8892b12ba998a24466bd49a009f1e8210276eaf130" Nov 28 07:14:40 crc kubenswrapper[4946]: I1128 07:14:40.267160 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58bd875f97-67m8z" Nov 28 07:14:40 crc kubenswrapper[4946]: I1128 07:14:40.271130 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-nmgj8" event={"ID":"d1b70933-a95b-4637-8c02-d77b9ef7a980","Type":"ContainerStarted","Data":"275e074cabc40bc932fd3ba12868723a484346ce4531f93073acde75bcb63621"} Nov 28 07:14:40 crc kubenswrapper[4946]: I1128 07:14:40.275229 4946 generic.go:334] "Generic (PLEG): container finished" podID="e44e658f-9e9e-4ee3-98d3-134ba6c8a6ee" containerID="8651895f1595b9ee0a8ba4d22caea8c416ed8eea4ef61868ade64414a8501ff6" exitCode=0 Nov 28 07:14:40 crc kubenswrapper[4946]: I1128 07:14:40.275292 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b763-account-create-update-vz6db" event={"ID":"e44e658f-9e9e-4ee3-98d3-134ba6c8a6ee","Type":"ContainerDied","Data":"8651895f1595b9ee0a8ba4d22caea8c416ed8eea4ef61868ade64414a8501ff6"} Nov 28 07:14:40 crc kubenswrapper[4946]: I1128 07:14:40.300772 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-nmgj8" podStartSLOduration=2.846945213 podStartE2EDuration="10.300742129s" podCreationTimestamp="2025-11-28 07:14:30 +0000 UTC" firstStartedPulling="2025-11-28 07:14:31.589843693 +0000 UTC m=+1325.967908794" lastFinishedPulling="2025-11-28 07:14:39.043640579 +0000 UTC m=+1333.421705710" observedRunningTime="2025-11-28 07:14:40.298108575 +0000 UTC m=+1334.676173706" watchObservedRunningTime="2025-11-28 07:14:40.300742129 +0000 UTC m=+1334.678807260" Nov 28 07:14:40 crc kubenswrapper[4946]: I1128 07:14:40.307143 4946 scope.go:117] "RemoveContainer" containerID="968a28598656b20deee60eb65370884e6f3e1cb7430d5e21a6a164687cb048a5" Nov 28 07:14:40 crc kubenswrapper[4946]: I1128 07:14:40.341192 4946 scope.go:117] "RemoveContainer" containerID="02954776648d9637d7d33b8892b12ba998a24466bd49a009f1e8210276eaf130" Nov 28 07:14:40 crc kubenswrapper[4946]: E1128 07:14:40.341713 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02954776648d9637d7d33b8892b12ba998a24466bd49a009f1e8210276eaf130\": container with ID starting with 02954776648d9637d7d33b8892b12ba998a24466bd49a009f1e8210276eaf130 not found: ID does not exist" containerID="02954776648d9637d7d33b8892b12ba998a24466bd49a009f1e8210276eaf130" Nov 28 07:14:40 crc kubenswrapper[4946]: I1128 07:14:40.341745 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02954776648d9637d7d33b8892b12ba998a24466bd49a009f1e8210276eaf130"} err="failed to get container status \"02954776648d9637d7d33b8892b12ba998a24466bd49a009f1e8210276eaf130\": rpc error: code = NotFound desc = could not find container \"02954776648d9637d7d33b8892b12ba998a24466bd49a009f1e8210276eaf130\": container with ID starting with 02954776648d9637d7d33b8892b12ba998a24466bd49a009f1e8210276eaf130 not found: ID does not exist" Nov 28 07:14:40 crc kubenswrapper[4946]: I1128 07:14:40.341773 4946 scope.go:117] "RemoveContainer" containerID="968a28598656b20deee60eb65370884e6f3e1cb7430d5e21a6a164687cb048a5" Nov 28 07:14:40 crc kubenswrapper[4946]: E1128 07:14:40.342089 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"968a28598656b20deee60eb65370884e6f3e1cb7430d5e21a6a164687cb048a5\": container with ID starting with 968a28598656b20deee60eb65370884e6f3e1cb7430d5e21a6a164687cb048a5 not found: ID 
does not exist" containerID="968a28598656b20deee60eb65370884e6f3e1cb7430d5e21a6a164687cb048a5" Nov 28 07:14:40 crc kubenswrapper[4946]: I1128 07:14:40.342144 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"968a28598656b20deee60eb65370884e6f3e1cb7430d5e21a6a164687cb048a5"} err="failed to get container status \"968a28598656b20deee60eb65370884e6f3e1cb7430d5e21a6a164687cb048a5\": rpc error: code = NotFound desc = could not find container \"968a28598656b20deee60eb65370884e6f3e1cb7430d5e21a6a164687cb048a5\": container with ID starting with 968a28598656b20deee60eb65370884e6f3e1cb7430d5e21a6a164687cb048a5 not found: ID does not exist" Nov 28 07:14:40 crc kubenswrapper[4946]: I1128 07:14:40.347975 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58bd875f97-67m8z"] Nov 28 07:14:40 crc kubenswrapper[4946]: I1128 07:14:40.356609 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58bd875f97-67m8z"] Nov 28 07:14:41 crc kubenswrapper[4946]: I1128 07:14:41.673948 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b763-account-create-update-vz6db" Nov 28 07:14:41 crc kubenswrapper[4946]: I1128 07:14:41.695017 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e44e658f-9e9e-4ee3-98d3-134ba6c8a6ee-operator-scripts\") pod \"e44e658f-9e9e-4ee3-98d3-134ba6c8a6ee\" (UID: \"e44e658f-9e9e-4ee3-98d3-134ba6c8a6ee\") " Nov 28 07:14:41 crc kubenswrapper[4946]: I1128 07:14:41.695535 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lct62\" (UniqueName: \"kubernetes.io/projected/e44e658f-9e9e-4ee3-98d3-134ba6c8a6ee-kube-api-access-lct62\") pod \"e44e658f-9e9e-4ee3-98d3-134ba6c8a6ee\" (UID: \"e44e658f-9e9e-4ee3-98d3-134ba6c8a6ee\") " Nov 28 07:14:41 crc kubenswrapper[4946]: I1128 07:14:41.696084 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e44e658f-9e9e-4ee3-98d3-134ba6c8a6ee-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e44e658f-9e9e-4ee3-98d3-134ba6c8a6ee" (UID: "e44e658f-9e9e-4ee3-98d3-134ba6c8a6ee"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:14:41 crc kubenswrapper[4946]: I1128 07:14:41.703238 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e44e658f-9e9e-4ee3-98d3-134ba6c8a6ee-kube-api-access-lct62" (OuterVolumeSpecName: "kube-api-access-lct62") pod "e44e658f-9e9e-4ee3-98d3-134ba6c8a6ee" (UID: "e44e658f-9e9e-4ee3-98d3-134ba6c8a6ee"). InnerVolumeSpecName "kube-api-access-lct62". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:14:41 crc kubenswrapper[4946]: I1128 07:14:41.798828 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lct62\" (UniqueName: \"kubernetes.io/projected/e44e658f-9e9e-4ee3-98d3-134ba6c8a6ee-kube-api-access-lct62\") on node \"crc\" DevicePath \"\"" Nov 28 07:14:41 crc kubenswrapper[4946]: I1128 07:14:41.798888 4946 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e44e658f-9e9e-4ee3-98d3-134ba6c8a6ee-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:14:42 crc kubenswrapper[4946]: I1128 07:14:42.005987 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ded10611-80cf-497e-87d5-3dc3af54962c" path="/var/lib/kubelet/pods/ded10611-80cf-497e-87d5-3dc3af54962c/volumes" Nov 28 07:14:42 crc kubenswrapper[4946]: I1128 07:14:42.300619 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b763-account-create-update-vz6db" event={"ID":"e44e658f-9e9e-4ee3-98d3-134ba6c8a6ee","Type":"ContainerDied","Data":"cea2f17c1917d0d07fb2d8cf63e2db9017030126a323ab59fefd53a5634c194e"} Nov 28 07:14:42 crc kubenswrapper[4946]: I1128 07:14:42.300685 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cea2f17c1917d0d07fb2d8cf63e2db9017030126a323ab59fefd53a5634c194e" Nov 28 07:14:42 crc kubenswrapper[4946]: I1128 07:14:42.300726 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b763-account-create-update-vz6db" Nov 28 07:14:42 crc kubenswrapper[4946]: I1128 07:14:42.962304 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-5xxm2"] Nov 28 07:14:42 crc kubenswrapper[4946]: E1128 07:14:42.963320 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bab5772e-0e4e-4425-a851-10e59b8b602c" containerName="mariadb-database-create" Nov 28 07:14:42 crc kubenswrapper[4946]: I1128 07:14:42.963338 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="bab5772e-0e4e-4425-a851-10e59b8b602c" containerName="mariadb-database-create" Nov 28 07:14:42 crc kubenswrapper[4946]: E1128 07:14:42.963365 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ded10611-80cf-497e-87d5-3dc3af54962c" containerName="dnsmasq-dns" Nov 28 07:14:42 crc kubenswrapper[4946]: I1128 07:14:42.963376 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="ded10611-80cf-497e-87d5-3dc3af54962c" containerName="dnsmasq-dns" Nov 28 07:14:42 crc kubenswrapper[4946]: E1128 07:14:42.963386 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e44e658f-9e9e-4ee3-98d3-134ba6c8a6ee" containerName="mariadb-account-create-update" Nov 28 07:14:42 crc kubenswrapper[4946]: I1128 07:14:42.963394 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="e44e658f-9e9e-4ee3-98d3-134ba6c8a6ee" containerName="mariadb-account-create-update" Nov 28 07:14:42 crc kubenswrapper[4946]: E1128 07:14:42.963406 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ded10611-80cf-497e-87d5-3dc3af54962c" containerName="init" Nov 28 07:14:42 crc kubenswrapper[4946]: I1128 07:14:42.963413 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="ded10611-80cf-497e-87d5-3dc3af54962c" containerName="init" Nov 28 07:14:42 crc kubenswrapper[4946]: I1128 07:14:42.963668 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="bab5772e-0e4e-4425-a851-10e59b8b602c" 
containerName="mariadb-database-create" Nov 28 07:14:42 crc kubenswrapper[4946]: I1128 07:14:42.963686 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="ded10611-80cf-497e-87d5-3dc3af54962c" containerName="dnsmasq-dns" Nov 28 07:14:42 crc kubenswrapper[4946]: I1128 07:14:42.963695 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="e44e658f-9e9e-4ee3-98d3-134ba6c8a6ee" containerName="mariadb-account-create-update" Nov 28 07:14:42 crc kubenswrapper[4946]: I1128 07:14:42.964518 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-5xxm2" Nov 28 07:14:42 crc kubenswrapper[4946]: I1128 07:14:42.969804 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-4ftvr" Nov 28 07:14:42 crc kubenswrapper[4946]: I1128 07:14:42.970197 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Nov 28 07:14:42 crc kubenswrapper[4946]: I1128 07:14:42.992614 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-5xxm2"] Nov 28 07:14:43 crc kubenswrapper[4946]: I1128 07:14:43.011866 4946 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-xzzn6" podUID="5feb905d-9c23-4603-b118-fdc05a237848" containerName="ovn-controller" probeResult="failure" output=< Nov 28 07:14:43 crc kubenswrapper[4946]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 28 07:14:43 crc kubenswrapper[4946]: > Nov 28 07:14:43 crc kubenswrapper[4946]: I1128 07:14:43.024365 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-gpnz5" Nov 28 07:14:43 crc kubenswrapper[4946]: I1128 07:14:43.026434 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/57401bfe-4d01-4983-8703-d78c50a9886e-db-sync-config-data\") pod \"glance-db-sync-5xxm2\" (UID: \"57401bfe-4d01-4983-8703-d78c50a9886e\") " pod="openstack/glance-db-sync-5xxm2" Nov 28 07:14:43 crc kubenswrapper[4946]: I1128 07:14:43.026593 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57401bfe-4d01-4983-8703-d78c50a9886e-combined-ca-bundle\") pod \"glance-db-sync-5xxm2\" (UID: \"57401bfe-4d01-4983-8703-d78c50a9886e\") " pod="openstack/glance-db-sync-5xxm2" Nov 28 07:14:43 crc kubenswrapper[4946]: I1128 07:14:43.026629 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzcmd\" (UniqueName: \"kubernetes.io/projected/57401bfe-4d01-4983-8703-d78c50a9886e-kube-api-access-vzcmd\") pod \"glance-db-sync-5xxm2\" (UID: \"57401bfe-4d01-4983-8703-d78c50a9886e\") " pod="openstack/glance-db-sync-5xxm2" Nov 28 07:14:43 crc kubenswrapper[4946]: I1128 07:14:43.026706 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57401bfe-4d01-4983-8703-d78c50a9886e-config-data\") pod \"glance-db-sync-5xxm2\" (UID: \"57401bfe-4d01-4983-8703-d78c50a9886e\") " pod="openstack/glance-db-sync-5xxm2" Nov 28 07:14:43 crc kubenswrapper[4946]: I1128 07:14:43.061621 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-gpnz5" Nov 28 07:14:43 crc kubenswrapper[4946]: I1128 
07:14:43.132105 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57401bfe-4d01-4983-8703-d78c50a9886e-config-data\") pod \"glance-db-sync-5xxm2\" (UID: \"57401bfe-4d01-4983-8703-d78c50a9886e\") " pod="openstack/glance-db-sync-5xxm2" Nov 28 07:14:43 crc kubenswrapper[4946]: I1128 07:14:43.133046 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/57401bfe-4d01-4983-8703-d78c50a9886e-db-sync-config-data\") pod \"glance-db-sync-5xxm2\" (UID: \"57401bfe-4d01-4983-8703-d78c50a9886e\") " pod="openstack/glance-db-sync-5xxm2" Nov 28 07:14:43 crc kubenswrapper[4946]: I1128 07:14:43.133154 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57401bfe-4d01-4983-8703-d78c50a9886e-combined-ca-bundle\") pod \"glance-db-sync-5xxm2\" (UID: \"57401bfe-4d01-4983-8703-d78c50a9886e\") " pod="openstack/glance-db-sync-5xxm2" Nov 28 07:14:43 crc kubenswrapper[4946]: I1128 07:14:43.133176 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzcmd\" (UniqueName: \"kubernetes.io/projected/57401bfe-4d01-4983-8703-d78c50a9886e-kube-api-access-vzcmd\") pod \"glance-db-sync-5xxm2\" (UID: \"57401bfe-4d01-4983-8703-d78c50a9886e\") " pod="openstack/glance-db-sync-5xxm2" Nov 28 07:14:43 crc kubenswrapper[4946]: I1128 07:14:43.137485 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57401bfe-4d01-4983-8703-d78c50a9886e-config-data\") pod \"glance-db-sync-5xxm2\" (UID: \"57401bfe-4d01-4983-8703-d78c50a9886e\") " pod="openstack/glance-db-sync-5xxm2" Nov 28 07:14:43 crc kubenswrapper[4946]: I1128 07:14:43.138069 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57401bfe-4d01-4983-8703-d78c50a9886e-combined-ca-bundle\") pod \"glance-db-sync-5xxm2\" (UID: \"57401bfe-4d01-4983-8703-d78c50a9886e\") " pod="openstack/glance-db-sync-5xxm2" Nov 28 07:14:43 crc kubenswrapper[4946]: I1128 07:14:43.138816 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/57401bfe-4d01-4983-8703-d78c50a9886e-db-sync-config-data\") pod \"glance-db-sync-5xxm2\" (UID: \"57401bfe-4d01-4983-8703-d78c50a9886e\") " pod="openstack/glance-db-sync-5xxm2" Nov 28 07:14:43 crc kubenswrapper[4946]: I1128 07:14:43.149440 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzcmd\" (UniqueName: \"kubernetes.io/projected/57401bfe-4d01-4983-8703-d78c50a9886e-kube-api-access-vzcmd\") pod \"glance-db-sync-5xxm2\" (UID: \"57401bfe-4d01-4983-8703-d78c50a9886e\") " pod="openstack/glance-db-sync-5xxm2" Nov 28 07:14:43 crc kubenswrapper[4946]: I1128 07:14:43.277547 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-xzzn6-config-vcrzc"] Nov 28 07:14:43 crc kubenswrapper[4946]: I1128 07:14:43.279283 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-xzzn6-config-vcrzc" Nov 28 07:14:43 crc kubenswrapper[4946]: I1128 07:14:43.284320 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 28 07:14:43 crc kubenswrapper[4946]: I1128 07:14:43.291752 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xzzn6-config-vcrzc"] Nov 28 07:14:43 crc kubenswrapper[4946]: I1128 07:14:43.298726 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-5xxm2" Nov 28 07:14:43 crc kubenswrapper[4946]: I1128 07:14:43.336369 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8d040768-befa-41c4-9cd0-d1c3f10e1bc9-additional-scripts\") pod \"ovn-controller-xzzn6-config-vcrzc\" (UID: \"8d040768-befa-41c4-9cd0-d1c3f10e1bc9\") " pod="openstack/ovn-controller-xzzn6-config-vcrzc" Nov 28 07:14:43 crc kubenswrapper[4946]: I1128 07:14:43.336489 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8d040768-befa-41c4-9cd0-d1c3f10e1bc9-var-log-ovn\") pod \"ovn-controller-xzzn6-config-vcrzc\" (UID: \"8d040768-befa-41c4-9cd0-d1c3f10e1bc9\") " pod="openstack/ovn-controller-xzzn6-config-vcrzc" Nov 28 07:14:43 crc kubenswrapper[4946]: I1128 07:14:43.336576 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8d040768-befa-41c4-9cd0-d1c3f10e1bc9-var-run-ovn\") pod \"ovn-controller-xzzn6-config-vcrzc\" (UID: \"8d040768-befa-41c4-9cd0-d1c3f10e1bc9\") " pod="openstack/ovn-controller-xzzn6-config-vcrzc" Nov 28 07:14:43 crc kubenswrapper[4946]: I1128 07:14:43.336601 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f55x7\" (UniqueName: \"kubernetes.io/projected/8d040768-befa-41c4-9cd0-d1c3f10e1bc9-kube-api-access-f55x7\") pod \"ovn-controller-xzzn6-config-vcrzc\" (UID: \"8d040768-befa-41c4-9cd0-d1c3f10e1bc9\") " pod="openstack/ovn-controller-xzzn6-config-vcrzc" Nov 28 07:14:43 crc kubenswrapper[4946]: I1128 07:14:43.336700 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d040768-befa-41c4-9cd0-d1c3f10e1bc9-scripts\") pod \"ovn-controller-xzzn6-config-vcrzc\" (UID: \"8d040768-befa-41c4-9cd0-d1c3f10e1bc9\") " pod="openstack/ovn-controller-xzzn6-config-vcrzc" Nov 28 07:14:43 crc kubenswrapper[4946]: I1128 07:14:43.336743 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8d040768-befa-41c4-9cd0-d1c3f10e1bc9-var-run\") pod \"ovn-controller-xzzn6-config-vcrzc\" (UID: \"8d040768-befa-41c4-9cd0-d1c3f10e1bc9\") " pod="openstack/ovn-controller-xzzn6-config-vcrzc" Nov 28 07:14:43 crc kubenswrapper[4946]: I1128 07:14:43.439248 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8d040768-befa-41c4-9cd0-d1c3f10e1bc9-var-log-ovn\") pod \"ovn-controller-xzzn6-config-vcrzc\" (UID: \"8d040768-befa-41c4-9cd0-d1c3f10e1bc9\") " pod="openstack/ovn-controller-xzzn6-config-vcrzc" Nov 28 07:14:43 crc kubenswrapper[4946]: I1128 07:14:43.439795 4946 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8d040768-befa-41c4-9cd0-d1c3f10e1bc9-var-run-ovn\") pod \"ovn-controller-xzzn6-config-vcrzc\" (UID: \"8d040768-befa-41c4-9cd0-d1c3f10e1bc9\") " pod="openstack/ovn-controller-xzzn6-config-vcrzc" Nov 28 07:14:43 crc kubenswrapper[4946]: I1128 07:14:43.439814 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8d040768-befa-41c4-9cd0-d1c3f10e1bc9-var-log-ovn\") pod \"ovn-controller-xzzn6-config-vcrzc\" (UID: \"8d040768-befa-41c4-9cd0-d1c3f10e1bc9\") " pod="openstack/ovn-controller-xzzn6-config-vcrzc" Nov 28 07:14:43 crc kubenswrapper[4946]: I1128 07:14:43.439830 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f55x7\" (UniqueName: \"kubernetes.io/projected/8d040768-befa-41c4-9cd0-d1c3f10e1bc9-kube-api-access-f55x7\") pod \"ovn-controller-xzzn6-config-vcrzc\" (UID: \"8d040768-befa-41c4-9cd0-d1c3f10e1bc9\") " pod="openstack/ovn-controller-xzzn6-config-vcrzc" Nov 28 07:14:43 crc kubenswrapper[4946]: I1128 07:14:43.439921 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8d040768-befa-41c4-9cd0-d1c3f10e1bc9-var-run-ovn\") pod \"ovn-controller-xzzn6-config-vcrzc\" (UID: \"8d040768-befa-41c4-9cd0-d1c3f10e1bc9\") " pod="openstack/ovn-controller-xzzn6-config-vcrzc" Nov 28 07:14:43 crc kubenswrapper[4946]: I1128 07:14:43.440010 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d040768-befa-41c4-9cd0-d1c3f10e1bc9-scripts\") pod \"ovn-controller-xzzn6-config-vcrzc\" (UID: \"8d040768-befa-41c4-9cd0-d1c3f10e1bc9\") " pod="openstack/ovn-controller-xzzn6-config-vcrzc" Nov 28 07:14:43 crc kubenswrapper[4946]: I1128 07:14:43.440057 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8d040768-befa-41c4-9cd0-d1c3f10e1bc9-var-run\") pod \"ovn-controller-xzzn6-config-vcrzc\" (UID: \"8d040768-befa-41c4-9cd0-d1c3f10e1bc9\") " pod="openstack/ovn-controller-xzzn6-config-vcrzc" Nov 28 07:14:43 crc kubenswrapper[4946]: I1128 07:14:43.440137 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8d040768-befa-41c4-9cd0-d1c3f10e1bc9-additional-scripts\") pod \"ovn-controller-xzzn6-config-vcrzc\" (UID: \"8d040768-befa-41c4-9cd0-d1c3f10e1bc9\") " pod="openstack/ovn-controller-xzzn6-config-vcrzc" Nov 28 07:14:43 crc kubenswrapper[4946]: I1128 07:14:43.440257 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8d040768-befa-41c4-9cd0-d1c3f10e1bc9-var-run\") pod \"ovn-controller-xzzn6-config-vcrzc\" (UID: \"8d040768-befa-41c4-9cd0-d1c3f10e1bc9\") " pod="openstack/ovn-controller-xzzn6-config-vcrzc" Nov 28 07:14:43 crc kubenswrapper[4946]: I1128 07:14:43.441981 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8d040768-befa-41c4-9cd0-d1c3f10e1bc9-additional-scripts\") pod \"ovn-controller-xzzn6-config-vcrzc\" (UID: \"8d040768-befa-41c4-9cd0-d1c3f10e1bc9\") " pod="openstack/ovn-controller-xzzn6-config-vcrzc" Nov 28 07:14:43 crc kubenswrapper[4946]: I1128 07:14:43.442867 4946 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d040768-befa-41c4-9cd0-d1c3f10e1bc9-scripts\") pod \"ovn-controller-xzzn6-config-vcrzc\" (UID: \"8d040768-befa-41c4-9cd0-d1c3f10e1bc9\") " pod="openstack/ovn-controller-xzzn6-config-vcrzc" Nov 28 07:14:43 crc kubenswrapper[4946]: I1128 07:14:43.457369 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f55x7\" (UniqueName: \"kubernetes.io/projected/8d040768-befa-41c4-9cd0-d1c3f10e1bc9-kube-api-access-f55x7\") pod \"ovn-controller-xzzn6-config-vcrzc\" (UID: \"8d040768-befa-41c4-9cd0-d1c3f10e1bc9\") " pod="openstack/ovn-controller-xzzn6-config-vcrzc" Nov 28 07:14:43 crc kubenswrapper[4946]: I1128 07:14:43.611748 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xzzn6-config-vcrzc" Nov 28 07:14:43 crc kubenswrapper[4946]: I1128 07:14:43.906473 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-5xxm2"] Nov 28 07:14:43 crc kubenswrapper[4946]: I1128 07:14:43.943330 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xzzn6-config-vcrzc"] Nov 28 07:14:43 crc kubenswrapper[4946]: W1128 07:14:43.946802 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d040768_befa_41c4_9cd0_d1c3f10e1bc9.slice/crio-fdd921f57597b732e79429050f97f145a278d84f7faad67e1fbe90de9cd08486 WatchSource:0}: Error finding container fdd921f57597b732e79429050f97f145a278d84f7faad67e1fbe90de9cd08486: Status 404 returned error can't find the container with id fdd921f57597b732e79429050f97f145a278d84f7faad67e1fbe90de9cd08486 Nov 28 07:14:44 crc kubenswrapper[4946]: I1128 07:14:44.324369 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-5xxm2" event={"ID":"57401bfe-4d01-4983-8703-d78c50a9886e","Type":"ContainerStarted","Data":"235e2ac4782d0c869b3dcf661d3c3d5fed93d0093eaccc2e5882a881ab8aa943"} Nov 28 07:14:44 crc kubenswrapper[4946]: I1128 07:14:44.327874 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xzzn6-config-vcrzc" event={"ID":"8d040768-befa-41c4-9cd0-d1c3f10e1bc9","Type":"ContainerStarted","Data":"b3ab86ce5a317099367dac8c17c5175422e65311efedf83f30335372fd066f0e"} Nov 28 07:14:44 crc kubenswrapper[4946]: I1128 07:14:44.327909 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xzzn6-config-vcrzc" event={"ID":"8d040768-befa-41c4-9cd0-d1c3f10e1bc9","Type":"ContainerStarted","Data":"fdd921f57597b732e79429050f97f145a278d84f7faad67e1fbe90de9cd08486"} Nov 28 07:14:44 crc kubenswrapper[4946]: I1128 07:14:44.355834 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-xzzn6-config-vcrzc" podStartSLOduration=1.355812867 podStartE2EDuration="1.355812867s" podCreationTimestamp="2025-11-28 07:14:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:14:44.346496988 +0000 UTC m=+1338.724562119" watchObservedRunningTime="2025-11-28 07:14:44.355812867 +0000 UTC m=+1338.733877978" Nov 28 07:14:45 crc kubenswrapper[4946]: I1128 07:14:45.339113 4946 generic.go:334] "Generic (PLEG): container finished" podID="8d040768-befa-41c4-9cd0-d1c3f10e1bc9" containerID="b3ab86ce5a317099367dac8c17c5175422e65311efedf83f30335372fd066f0e" exitCode=0 Nov 28 
07:14:45 crc kubenswrapper[4946]: I1128 07:14:45.339218 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xzzn6-config-vcrzc" event={"ID":"8d040768-befa-41c4-9cd0-d1c3f10e1bc9","Type":"ContainerDied","Data":"b3ab86ce5a317099367dac8c17c5175422e65311efedf83f30335372fd066f0e"} Nov 28 07:14:46 crc kubenswrapper[4946]: I1128 07:14:46.408217 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b7d310ee-b686-4e3d-b554-393fc09a770d-etc-swift\") pod \"swift-storage-0\" (UID: \"b7d310ee-b686-4e3d-b554-393fc09a770d\") " pod="openstack/swift-storage-0" Nov 28 07:14:46 crc kubenswrapper[4946]: E1128 07:14:46.408895 4946 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 28 07:14:46 crc kubenswrapper[4946]: E1128 07:14:46.408912 4946 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 28 07:14:46 crc kubenswrapper[4946]: E1128 07:14:46.408966 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b7d310ee-b686-4e3d-b554-393fc09a770d-etc-swift podName:b7d310ee-b686-4e3d-b554-393fc09a770d nodeName:}" failed. No retries permitted until 2025-11-28 07:15:02.408949584 +0000 UTC m=+1356.787014695 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b7d310ee-b686-4e3d-b554-393fc09a770d-etc-swift") pod "swift-storage-0" (UID: "b7d310ee-b686-4e3d-b554-393fc09a770d") : configmap "swift-ring-files" not found Nov 28 07:14:46 crc kubenswrapper[4946]: I1128 07:14:46.729571 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-xzzn6-config-vcrzc" Nov 28 07:14:46 crc kubenswrapper[4946]: I1128 07:14:46.820512 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8d040768-befa-41c4-9cd0-d1c3f10e1bc9-additional-scripts\") pod \"8d040768-befa-41c4-9cd0-d1c3f10e1bc9\" (UID: \"8d040768-befa-41c4-9cd0-d1c3f10e1bc9\") " Nov 28 07:14:46 crc kubenswrapper[4946]: I1128 07:14:46.820672 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8d040768-befa-41c4-9cd0-d1c3f10e1bc9-var-run-ovn\") pod \"8d040768-befa-41c4-9cd0-d1c3f10e1bc9\" (UID: \"8d040768-befa-41c4-9cd0-d1c3f10e1bc9\") " Nov 28 07:14:46 crc kubenswrapper[4946]: I1128 07:14:46.820754 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d040768-befa-41c4-9cd0-d1c3f10e1bc9-scripts\") pod \"8d040768-befa-41c4-9cd0-d1c3f10e1bc9\" (UID: \"8d040768-befa-41c4-9cd0-d1c3f10e1bc9\") " Nov 28 07:14:46 crc kubenswrapper[4946]: I1128 07:14:46.820875 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8d040768-befa-41c4-9cd0-d1c3f10e1bc9-var-log-ovn\") pod \"8d040768-befa-41c4-9cd0-d1c3f10e1bc9\" (UID: \"8d040768-befa-41c4-9cd0-d1c3f10e1bc9\") " Nov 28 07:14:46 crc kubenswrapper[4946]: I1128 07:14:46.820887 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d040768-befa-41c4-9cd0-d1c3f10e1bc9-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "8d040768-befa-41c4-9cd0-d1c3f10e1bc9" (UID: "8d040768-befa-41c4-9cd0-d1c3f10e1bc9"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 07:14:46 crc kubenswrapper[4946]: I1128 07:14:46.820968 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f55x7\" (UniqueName: \"kubernetes.io/projected/8d040768-befa-41c4-9cd0-d1c3f10e1bc9-kube-api-access-f55x7\") pod \"8d040768-befa-41c4-9cd0-d1c3f10e1bc9\" (UID: \"8d040768-befa-41c4-9cd0-d1c3f10e1bc9\") " Nov 28 07:14:46 crc kubenswrapper[4946]: I1128 07:14:46.821071 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d040768-befa-41c4-9cd0-d1c3f10e1bc9-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "8d040768-befa-41c4-9cd0-d1c3f10e1bc9" (UID: "8d040768-befa-41c4-9cd0-d1c3f10e1bc9"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 07:14:46 crc kubenswrapper[4946]: I1128 07:14:46.821115 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8d040768-befa-41c4-9cd0-d1c3f10e1bc9-var-run\") pod \"8d040768-befa-41c4-9cd0-d1c3f10e1bc9\" (UID: \"8d040768-befa-41c4-9cd0-d1c3f10e1bc9\") " Nov 28 07:14:46 crc kubenswrapper[4946]: I1128 07:14:46.821281 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d040768-befa-41c4-9cd0-d1c3f10e1bc9-var-run" (OuterVolumeSpecName: "var-run") pod "8d040768-befa-41c4-9cd0-d1c3f10e1bc9" (UID: "8d040768-befa-41c4-9cd0-d1c3f10e1bc9"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 07:14:46 crc kubenswrapper[4946]: I1128 07:14:46.821676 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d040768-befa-41c4-9cd0-d1c3f10e1bc9-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "8d040768-befa-41c4-9cd0-d1c3f10e1bc9" (UID: "8d040768-befa-41c4-9cd0-d1c3f10e1bc9"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:14:46 crc kubenswrapper[4946]: I1128 07:14:46.821751 4946 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8d040768-befa-41c4-9cd0-d1c3f10e1bc9-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 28 07:14:46 crc kubenswrapper[4946]: I1128 07:14:46.821774 4946 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8d040768-befa-41c4-9cd0-d1c3f10e1bc9-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 28 07:14:46 crc kubenswrapper[4946]: I1128 07:14:46.821790 4946 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8d040768-befa-41c4-9cd0-d1c3f10e1bc9-var-run\") on node \"crc\" DevicePath \"\"" Nov 28 07:14:46 crc kubenswrapper[4946]: I1128 07:14:46.823290 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d040768-befa-41c4-9cd0-d1c3f10e1bc9-scripts" (OuterVolumeSpecName: "scripts") pod "8d040768-befa-41c4-9cd0-d1c3f10e1bc9" (UID: "8d040768-befa-41c4-9cd0-d1c3f10e1bc9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:14:46 crc kubenswrapper[4946]: I1128 07:14:46.827377 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d040768-befa-41c4-9cd0-d1c3f10e1bc9-kube-api-access-f55x7" (OuterVolumeSpecName: "kube-api-access-f55x7") pod "8d040768-befa-41c4-9cd0-d1c3f10e1bc9" (UID: "8d040768-befa-41c4-9cd0-d1c3f10e1bc9"). InnerVolumeSpecName "kube-api-access-f55x7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:14:46 crc kubenswrapper[4946]: I1128 07:14:46.924292 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f55x7\" (UniqueName: \"kubernetes.io/projected/8d040768-befa-41c4-9cd0-d1c3f10e1bc9-kube-api-access-f55x7\") on node \"crc\" DevicePath \"\"" Nov 28 07:14:46 crc kubenswrapper[4946]: I1128 07:14:46.924693 4946 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8d040768-befa-41c4-9cd0-d1c3f10e1bc9-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:14:46 crc kubenswrapper[4946]: I1128 07:14:46.924714 4946 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d040768-befa-41c4-9cd0-d1c3f10e1bc9-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:14:47 crc kubenswrapper[4946]: I1128 07:14:47.364568 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xzzn6-config-vcrzc" event={"ID":"8d040768-befa-41c4-9cd0-d1c3f10e1bc9","Type":"ContainerDied","Data":"fdd921f57597b732e79429050f97f145a278d84f7faad67e1fbe90de9cd08486"} Nov 28 07:14:47 crc kubenswrapper[4946]: I1128 07:14:47.364618 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-xzzn6-config-vcrzc" Nov 28 07:14:47 crc kubenswrapper[4946]: I1128 07:14:47.364625 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fdd921f57597b732e79429050f97f145a278d84f7faad67e1fbe90de9cd08486" Nov 28 07:14:47 crc kubenswrapper[4946]: I1128 07:14:47.467542 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-xzzn6-config-vcrzc"] Nov 28 07:14:47 crc kubenswrapper[4946]: I1128 07:14:47.478148 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-xzzn6-config-vcrzc"] Nov 28 07:14:48 crc kubenswrapper[4946]: I1128 07:14:48.003732 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d040768-befa-41c4-9cd0-d1c3f10e1bc9" path="/var/lib/kubelet/pods/8d040768-befa-41c4-9cd0-d1c3f10e1bc9/volumes" Nov 28 07:14:48 crc kubenswrapper[4946]: I1128 07:14:48.004586 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-xzzn6" Nov 28 07:14:48 crc kubenswrapper[4946]: I1128 07:14:48.382605 4946 generic.go:334] "Generic (PLEG): container finished" podID="d1b70933-a95b-4637-8c02-d77b9ef7a980" containerID="275e074cabc40bc932fd3ba12868723a484346ce4531f93073acde75bcb63621" exitCode=0 Nov 28 07:14:48 crc kubenswrapper[4946]: I1128 07:14:48.382723 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-nmgj8" event={"ID":"d1b70933-a95b-4637-8c02-d77b9ef7a980","Type":"ContainerDied","Data":"275e074cabc40bc932fd3ba12868723a484346ce4531f93073acde75bcb63621"} Nov 28 07:14:50 crc kubenswrapper[4946]: I1128 07:14:50.406659 4946 generic.go:334] "Generic (PLEG): container finished" podID="3521840d-60d0-450c-8c05-7e2ad0fc4e97" containerID="88c96e9b6c9b6c9a01377b5eb6cd235cde6a0cea15f68f81f0dba3c64839e047" exitCode=0 Nov 28 07:14:50 crc kubenswrapper[4946]: I1128 07:14:50.406850 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3521840d-60d0-450c-8c05-7e2ad0fc4e97","Type":"ContainerDied","Data":"88c96e9b6c9b6c9a01377b5eb6cd235cde6a0cea15f68f81f0dba3c64839e047"} Nov 28 07:14:52 crc kubenswrapper[4946]: I1128 07:14:52.446423 4946 generic.go:334] "Generic (PLEG): container finished" podID="59fdca77-b333-44be-ab8c-96a2f4bcc340" containerID="a6bb9670947ddc29c15771ad78d06a158426d4a0b2e7d5a9827785ca30e28082" exitCode=0 Nov 28 07:14:52 crc kubenswrapper[4946]: I1128 07:14:52.446604 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"59fdca77-b333-44be-ab8c-96a2f4bcc340","Type":"ContainerDied","Data":"a6bb9670947ddc29c15771ad78d06a158426d4a0b2e7d5a9827785ca30e28082"} Nov 28 07:14:56 crc kubenswrapper[4946]: I1128 07:14:56.311368 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-nmgj8" Nov 28 07:14:56 crc kubenswrapper[4946]: I1128 07:14:56.370487 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xt64r\" (UniqueName: \"kubernetes.io/projected/d1b70933-a95b-4637-8c02-d77b9ef7a980-kube-api-access-xt64r\") pod \"d1b70933-a95b-4637-8c02-d77b9ef7a980\" (UID: \"d1b70933-a95b-4637-8c02-d77b9ef7a980\") " Nov 28 07:14:56 crc kubenswrapper[4946]: I1128 07:14:56.370666 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d1b70933-a95b-4637-8c02-d77b9ef7a980-ring-data-devices\") pod \"d1b70933-a95b-4637-8c02-d77b9ef7a980\" (UID: \"d1b70933-a95b-4637-8c02-d77b9ef7a980\") " Nov 28 07:14:56 crc kubenswrapper[4946]: I1128 07:14:56.370730 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d1b70933-a95b-4637-8c02-d77b9ef7a980-dispersionconf\") pod \"d1b70933-a95b-4637-8c02-d77b9ef7a980\" (UID: \"d1b70933-a95b-4637-8c02-d77b9ef7a980\") " Nov 28 07:14:56 crc kubenswrapper[4946]: I1128 07:14:56.370826 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1b70933-a95b-4637-8c02-d77b9ef7a980-combined-ca-bundle\") pod \"d1b70933-a95b-4637-8c02-d77b9ef7a980\" (UID: \"d1b70933-a95b-4637-8c02-d77b9ef7a980\") " Nov 28 07:14:56 crc kubenswrapper[4946]: I1128 07:14:56.370860 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d1b70933-a95b-4637-8c02-d77b9ef7a980-etc-swift\") pod \"d1b70933-a95b-4637-8c02-d77b9ef7a980\" (UID: \"d1b70933-a95b-4637-8c02-d77b9ef7a980\") " Nov 28 07:14:56 crc kubenswrapper[4946]: I1128 07:14:56.370892 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d1b70933-a95b-4637-8c02-d77b9ef7a980-swiftconf\") pod \"d1b70933-a95b-4637-8c02-d77b9ef7a980\" (UID: \"d1b70933-a95b-4637-8c02-d77b9ef7a980\") " Nov 28 07:14:56 crc kubenswrapper[4946]: I1128 07:14:56.371002 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1b70933-a95b-4637-8c02-d77b9ef7a980-scripts\") pod \"d1b70933-a95b-4637-8c02-d77b9ef7a980\" (UID: \"d1b70933-a95b-4637-8c02-d77b9ef7a980\") " Nov 28 07:14:56 crc kubenswrapper[4946]: I1128 07:14:56.372221 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1b70933-a95b-4637-8c02-d77b9ef7a980-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "d1b70933-a95b-4637-8c02-d77b9ef7a980" (UID: "d1b70933-a95b-4637-8c02-d77b9ef7a980"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:14:56 crc kubenswrapper[4946]: I1128 07:14:56.384341 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1b70933-a95b-4637-8c02-d77b9ef7a980-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "d1b70933-a95b-4637-8c02-d77b9ef7a980" (UID: "d1b70933-a95b-4637-8c02-d77b9ef7a980"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:14:56 crc kubenswrapper[4946]: I1128 07:14:56.385598 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1b70933-a95b-4637-8c02-d77b9ef7a980-kube-api-access-xt64r" (OuterVolumeSpecName: "kube-api-access-xt64r") pod "d1b70933-a95b-4637-8c02-d77b9ef7a980" (UID: "d1b70933-a95b-4637-8c02-d77b9ef7a980"). InnerVolumeSpecName "kube-api-access-xt64r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:14:56 crc kubenswrapper[4946]: I1128 07:14:56.393641 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1b70933-a95b-4637-8c02-d77b9ef7a980-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "d1b70933-a95b-4637-8c02-d77b9ef7a980" (UID: "d1b70933-a95b-4637-8c02-d77b9ef7a980"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:14:56 crc kubenswrapper[4946]: I1128 07:14:56.418278 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1b70933-a95b-4637-8c02-d77b9ef7a980-scripts" (OuterVolumeSpecName: "scripts") pod "d1b70933-a95b-4637-8c02-d77b9ef7a980" (UID: "d1b70933-a95b-4637-8c02-d77b9ef7a980"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:14:56 crc kubenswrapper[4946]: I1128 07:14:56.424273 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1b70933-a95b-4637-8c02-d77b9ef7a980-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d1b70933-a95b-4637-8c02-d77b9ef7a980" (UID: "d1b70933-a95b-4637-8c02-d77b9ef7a980"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:14:56 crc kubenswrapper[4946]: I1128 07:14:56.428703 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1b70933-a95b-4637-8c02-d77b9ef7a980-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "d1b70933-a95b-4637-8c02-d77b9ef7a980" (UID: "d1b70933-a95b-4637-8c02-d77b9ef7a980"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:14:56 crc kubenswrapper[4946]: I1128 07:14:56.472817 4946 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d1b70933-a95b-4637-8c02-d77b9ef7a980-dispersionconf\") on node \"crc\" DevicePath \"\"" Nov 28 07:14:56 crc kubenswrapper[4946]: I1128 07:14:56.472842 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1b70933-a95b-4637-8c02-d77b9ef7a980-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:14:56 crc kubenswrapper[4946]: I1128 07:14:56.472852 4946 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d1b70933-a95b-4637-8c02-d77b9ef7a980-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 28 07:14:56 crc kubenswrapper[4946]: I1128 07:14:56.472861 4946 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d1b70933-a95b-4637-8c02-d77b9ef7a980-swiftconf\") on node \"crc\" DevicePath \"\"" Nov 28 07:14:56 crc kubenswrapper[4946]: I1128 07:14:56.472869 4946 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1b70933-a95b-4637-8c02-d77b9ef7a980-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:14:56 crc kubenswrapper[4946]: I1128 07:14:56.472877 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xt64r\" (UniqueName: \"kubernetes.io/projected/d1b70933-a95b-4637-8c02-d77b9ef7a980-kube-api-access-xt64r\") on node \"crc\" DevicePath \"\"" Nov 28 07:14:56 crc kubenswrapper[4946]: I1128 07:14:56.472888 4946 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d1b70933-a95b-4637-8c02-d77b9ef7a980-ring-data-devices\") on node \"crc\" DevicePath \"\"" Nov 28 07:14:56 crc kubenswrapper[4946]: I1128 07:14:56.492509 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-nmgj8" event={"ID":"d1b70933-a95b-4637-8c02-d77b9ef7a980","Type":"ContainerDied","Data":"def9048b1504c92c4c41d5f89efc25780c648d4a1a1f1dbed282d2bf48fe1e05"} Nov 28 07:14:56 crc kubenswrapper[4946]: I1128 07:14:56.492573 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="def9048b1504c92c4c41d5f89efc25780c648d4a1a1f1dbed282d2bf48fe1e05" Nov 28 07:14:56 crc kubenswrapper[4946]: I1128 07:14:56.492646 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-nmgj8" Nov 28 07:14:57 crc kubenswrapper[4946]: I1128 07:14:57.505960 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3521840d-60d0-450c-8c05-7e2ad0fc4e97","Type":"ContainerStarted","Data":"5a518b7e7d038229a500bd8709ec0d601f6bd6d8f0d81ac3077b20e90a835629"} Nov 28 07:14:57 crc kubenswrapper[4946]: I1128 07:14:57.506910 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 28 07:14:57 crc kubenswrapper[4946]: I1128 07:14:57.508228 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-5xxm2" event={"ID":"57401bfe-4d01-4983-8703-d78c50a9886e","Type":"ContainerStarted","Data":"9f56c9f71e0ce25a35f7cf818a1d4df2217737ffb3cd13d7f1a49bcebcfe6150"} Nov 28 07:14:57 crc kubenswrapper[4946]: I1128 07:14:57.511605 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"59fdca77-b333-44be-ab8c-96a2f4bcc340","Type":"ContainerStarted","Data":"ff38770809a612b7b65b4599d06925432273b10bf7ef83575ef3bffae3781506"} Nov 28 07:14:57 crc kubenswrapper[4946]: I1128 07:14:57.512217 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 28 07:14:57 crc kubenswrapper[4946]: I1128 07:14:57.552869 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=44.040029481 podStartE2EDuration="1m24.55283898s" podCreationTimestamp="2025-11-28 07:13:33 +0000 UTC" firstStartedPulling="2025-11-28 07:13:35.13809666 +0000 UTC m=+1269.516161771" lastFinishedPulling="2025-11-28 07:14:15.650906159 +0000 UTC m=+1310.028971270" observedRunningTime="2025-11-28 07:14:57.546368401 +0000 UTC m=+1351.924433522" watchObservedRunningTime="2025-11-28 07:14:57.55283898 +0000 UTC m=+1351.930904121" Nov 28 07:14:57 crc kubenswrapper[4946]: I1128 07:14:57.576323 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371951.278479 podStartE2EDuration="1m25.576296816s" podCreationTimestamp="2025-11-28 07:13:32 +0000 UTC" firstStartedPulling="2025-11-28 07:13:34.909325446 +0000 UTC m=+1269.287390557" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:14:57.574154633 +0000 UTC m=+1351.952219754" watchObservedRunningTime="2025-11-28 07:14:57.576296816 +0000 UTC m=+1351.954361927" Nov 28 07:14:57 crc kubenswrapper[4946]: I1128 07:14:57.599780 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-5xxm2" podStartSLOduration=3.344785297 podStartE2EDuration="15.599749151s" podCreationTimestamp="2025-11-28 07:14:42 +0000 UTC" firstStartedPulling="2025-11-28 07:14:43.926926451 +0000 UTC m=+1338.304991562" lastFinishedPulling="2025-11-28 07:14:56.181890295 +0000 UTC m=+1350.559955416" observedRunningTime="2025-11-28 07:14:57.591716364 +0000 UTC m=+1351.969781475" watchObservedRunningTime="2025-11-28 07:14:57.599749151 +0000 UTC m=+1351.977814262" Nov 28 07:15:00 crc kubenswrapper[4946]: I1128 07:15:00.143616 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405235-7j8pc"] Nov 28 07:15:00 crc kubenswrapper[4946]: E1128 07:15:00.144665 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d040768-befa-41c4-9cd0-d1c3f10e1bc9" 
containerName="ovn-config" Nov 28 07:15:00 crc kubenswrapper[4946]: I1128 07:15:00.144688 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d040768-befa-41c4-9cd0-d1c3f10e1bc9" containerName="ovn-config" Nov 28 07:15:00 crc kubenswrapper[4946]: E1128 07:15:00.144708 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1b70933-a95b-4637-8c02-d77b9ef7a980" containerName="swift-ring-rebalance" Nov 28 07:15:00 crc kubenswrapper[4946]: I1128 07:15:00.144717 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1b70933-a95b-4637-8c02-d77b9ef7a980" containerName="swift-ring-rebalance" Nov 28 07:15:00 crc kubenswrapper[4946]: I1128 07:15:00.144949 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1b70933-a95b-4637-8c02-d77b9ef7a980" containerName="swift-ring-rebalance" Nov 28 07:15:00 crc kubenswrapper[4946]: I1128 07:15:00.144969 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d040768-befa-41c4-9cd0-d1c3f10e1bc9" containerName="ovn-config" Nov 28 07:15:00 crc kubenswrapper[4946]: I1128 07:15:00.145862 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405235-7j8pc" Nov 28 07:15:00 crc kubenswrapper[4946]: I1128 07:15:00.149166 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 28 07:15:00 crc kubenswrapper[4946]: I1128 07:15:00.149518 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 28 07:15:00 crc kubenswrapper[4946]: I1128 07:15:00.154449 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405235-7j8pc"] Nov 28 07:15:00 crc kubenswrapper[4946]: I1128 07:15:00.251917 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f58b01ab-52cf-4bd2-a799-8ca6b0de39c4-config-volume\") pod \"collect-profiles-29405235-7j8pc\" (UID: \"f58b01ab-52cf-4bd2-a799-8ca6b0de39c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405235-7j8pc" Nov 28 07:15:00 crc kubenswrapper[4946]: I1128 07:15:00.252003 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5hpk\" (UniqueName: \"kubernetes.io/projected/f58b01ab-52cf-4bd2-a799-8ca6b0de39c4-kube-api-access-b5hpk\") pod \"collect-profiles-29405235-7j8pc\" (UID: \"f58b01ab-52cf-4bd2-a799-8ca6b0de39c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405235-7j8pc" Nov 28 07:15:00 crc kubenswrapper[4946]: I1128 07:15:00.252729 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f58b01ab-52cf-4bd2-a799-8ca6b0de39c4-secret-volume\") pod \"collect-profiles-29405235-7j8pc\" (UID: \"f58b01ab-52cf-4bd2-a799-8ca6b0de39c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405235-7j8pc" Nov 28 07:15:00 crc kubenswrapper[4946]: I1128 07:15:00.355120 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f58b01ab-52cf-4bd2-a799-8ca6b0de39c4-config-volume\") pod \"collect-profiles-29405235-7j8pc\" (UID: \"f58b01ab-52cf-4bd2-a799-8ca6b0de39c4\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29405235-7j8pc" Nov 28 07:15:00 crc kubenswrapper[4946]: I1128 07:15:00.355196 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5hpk\" (UniqueName: \"kubernetes.io/projected/f58b01ab-52cf-4bd2-a799-8ca6b0de39c4-kube-api-access-b5hpk\") pod \"collect-profiles-29405235-7j8pc\" (UID: \"f58b01ab-52cf-4bd2-a799-8ca6b0de39c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405235-7j8pc" Nov 28 07:15:00 crc kubenswrapper[4946]: I1128 07:15:00.355364 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f58b01ab-52cf-4bd2-a799-8ca6b0de39c4-secret-volume\") pod \"collect-profiles-29405235-7j8pc\" (UID: \"f58b01ab-52cf-4bd2-a799-8ca6b0de39c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405235-7j8pc" Nov 28 07:15:00 crc kubenswrapper[4946]: I1128 07:15:00.356376 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f58b01ab-52cf-4bd2-a799-8ca6b0de39c4-config-volume\") pod \"collect-profiles-29405235-7j8pc\" (UID: \"f58b01ab-52cf-4bd2-a799-8ca6b0de39c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405235-7j8pc" Nov 28 07:15:00 crc kubenswrapper[4946]: I1128 07:15:00.361986 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f58b01ab-52cf-4bd2-a799-8ca6b0de39c4-secret-volume\") pod \"collect-profiles-29405235-7j8pc\" (UID: \"f58b01ab-52cf-4bd2-a799-8ca6b0de39c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405235-7j8pc" Nov 28 07:15:00 crc kubenswrapper[4946]: I1128 07:15:00.399677 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5hpk\" (UniqueName: \"kubernetes.io/projected/f58b01ab-52cf-4bd2-a799-8ca6b0de39c4-kube-api-access-b5hpk\") pod \"collect-profiles-29405235-7j8pc\" (UID: \"f58b01ab-52cf-4bd2-a799-8ca6b0de39c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405235-7j8pc" Nov 28 07:15:00 crc kubenswrapper[4946]: I1128 07:15:00.471453 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405235-7j8pc" Nov 28 07:15:00 crc kubenswrapper[4946]: I1128 07:15:00.953924 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405235-7j8pc"] Nov 28 07:15:01 crc kubenswrapper[4946]: I1128 07:15:01.545720 4946 generic.go:334] "Generic (PLEG): container finished" podID="f58b01ab-52cf-4bd2-a799-8ca6b0de39c4" containerID="10cb33614b5154ddfb6e59d821c3c8ec7bd2185825823036d0b7bbf3af9f8614" exitCode=0 Nov 28 07:15:01 crc kubenswrapper[4946]: I1128 07:15:01.545772 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405235-7j8pc" event={"ID":"f58b01ab-52cf-4bd2-a799-8ca6b0de39c4","Type":"ContainerDied","Data":"10cb33614b5154ddfb6e59d821c3c8ec7bd2185825823036d0b7bbf3af9f8614"} Nov 28 07:15:01 crc kubenswrapper[4946]: I1128 07:15:01.545804 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405235-7j8pc" event={"ID":"f58b01ab-52cf-4bd2-a799-8ca6b0de39c4","Type":"ContainerStarted","Data":"a95691415eb25a3dd5c0402b31e4d3ba02055bb9cfb01e6790694ceb403a0747"} Nov 28 07:15:02 crc kubenswrapper[4946]: I1128 07:15:02.501727 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b7d310ee-b686-4e3d-b554-393fc09a770d-etc-swift\") pod \"swift-storage-0\" (UID: \"b7d310ee-b686-4e3d-b554-393fc09a770d\") " pod="openstack/swift-storage-0" Nov 28 07:15:02 crc kubenswrapper[4946]: I1128 07:15:02.511269 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b7d310ee-b686-4e3d-b554-393fc09a770d-etc-swift\") pod \"swift-storage-0\" (UID: \"b7d310ee-b686-4e3d-b554-393fc09a770d\") " pod="openstack/swift-storage-0" Nov 28 07:15:02 crc kubenswrapper[4946]: I1128 07:15:02.571969 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Nov 28 07:15:02 crc kubenswrapper[4946]: I1128 07:15:02.982920 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405235-7j8pc" Nov 28 07:15:03 crc kubenswrapper[4946]: I1128 07:15:03.116511 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5hpk\" (UniqueName: \"kubernetes.io/projected/f58b01ab-52cf-4bd2-a799-8ca6b0de39c4-kube-api-access-b5hpk\") pod \"f58b01ab-52cf-4bd2-a799-8ca6b0de39c4\" (UID: \"f58b01ab-52cf-4bd2-a799-8ca6b0de39c4\") " Nov 28 07:15:03 crc kubenswrapper[4946]: I1128 07:15:03.116634 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f58b01ab-52cf-4bd2-a799-8ca6b0de39c4-config-volume\") pod \"f58b01ab-52cf-4bd2-a799-8ca6b0de39c4\" (UID: \"f58b01ab-52cf-4bd2-a799-8ca6b0de39c4\") " Nov 28 07:15:03 crc kubenswrapper[4946]: I1128 07:15:03.116782 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f58b01ab-52cf-4bd2-a799-8ca6b0de39c4-secret-volume\") pod \"f58b01ab-52cf-4bd2-a799-8ca6b0de39c4\" (UID: \"f58b01ab-52cf-4bd2-a799-8ca6b0de39c4\") " Nov 28 07:15:03 crc kubenswrapper[4946]: I1128 07:15:03.117638 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f58b01ab-52cf-4bd2-a799-8ca6b0de39c4-config-volume" (OuterVolumeSpecName: "config-volume") pod "f58b01ab-52cf-4bd2-a799-8ca6b0de39c4" (UID: "f58b01ab-52cf-4bd2-a799-8ca6b0de39c4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:15:03 crc kubenswrapper[4946]: I1128 07:15:03.122711 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f58b01ab-52cf-4bd2-a799-8ca6b0de39c4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f58b01ab-52cf-4bd2-a799-8ca6b0de39c4" (UID: "f58b01ab-52cf-4bd2-a799-8ca6b0de39c4"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:15:03 crc kubenswrapper[4946]: I1128 07:15:03.130309 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f58b01ab-52cf-4bd2-a799-8ca6b0de39c4-kube-api-access-b5hpk" (OuterVolumeSpecName: "kube-api-access-b5hpk") pod "f58b01ab-52cf-4bd2-a799-8ca6b0de39c4" (UID: "f58b01ab-52cf-4bd2-a799-8ca6b0de39c4"). InnerVolumeSpecName "kube-api-access-b5hpk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:15:03 crc kubenswrapper[4946]: I1128 07:15:03.219501 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5hpk\" (UniqueName: \"kubernetes.io/projected/f58b01ab-52cf-4bd2-a799-8ca6b0de39c4-kube-api-access-b5hpk\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:03 crc kubenswrapper[4946]: I1128 07:15:03.219534 4946 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f58b01ab-52cf-4bd2-a799-8ca6b0de39c4-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:03 crc kubenswrapper[4946]: I1128 07:15:03.219547 4946 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f58b01ab-52cf-4bd2-a799-8ca6b0de39c4-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:03 crc kubenswrapper[4946]: W1128 07:15:03.222889 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7d310ee_b686_4e3d_b554_393fc09a770d.slice/crio-0dc3aa7184828a9c2dc73fcaaeda830932197a16333d625c40726297d605a9b2 WatchSource:0}: Error finding container 0dc3aa7184828a9c2dc73fcaaeda830932197a16333d625c40726297d605a9b2: Status 404 returned error can't find the container with id 0dc3aa7184828a9c2dc73fcaaeda830932197a16333d625c40726297d605a9b2 Nov 28 07:15:03 crc kubenswrapper[4946]: I1128 07:15:03.239888 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 28 07:15:03 crc kubenswrapper[4946]: I1128 07:15:03.565678 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b7d310ee-b686-4e3d-b554-393fc09a770d","Type":"ContainerStarted","Data":"0dc3aa7184828a9c2dc73fcaaeda830932197a16333d625c40726297d605a9b2"} Nov 28 07:15:03 crc kubenswrapper[4946]: I1128 07:15:03.567479 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405235-7j8pc" event={"ID":"f58b01ab-52cf-4bd2-a799-8ca6b0de39c4","Type":"ContainerDied","Data":"a95691415eb25a3dd5c0402b31e4d3ba02055bb9cfb01e6790694ceb403a0747"} Nov 28 07:15:03 crc kubenswrapper[4946]: I1128 07:15:03.567512 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a95691415eb25a3dd5c0402b31e4d3ba02055bb9cfb01e6790694ceb403a0747" Nov 28 07:15:03 crc kubenswrapper[4946]: I1128 07:15:03.567551 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405235-7j8pc" Nov 28 07:15:04 crc kubenswrapper[4946]: I1128 07:15:04.585886 4946 generic.go:334] "Generic (PLEG): container finished" podID="57401bfe-4d01-4983-8703-d78c50a9886e" containerID="9f56c9f71e0ce25a35f7cf818a1d4df2217737ffb3cd13d7f1a49bcebcfe6150" exitCode=0 Nov 28 07:15:04 crc kubenswrapper[4946]: I1128 07:15:04.586152 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-5xxm2" event={"ID":"57401bfe-4d01-4983-8703-d78c50a9886e","Type":"ContainerDied","Data":"9f56c9f71e0ce25a35f7cf818a1d4df2217737ffb3cd13d7f1a49bcebcfe6150"} Nov 28 07:15:05 crc kubenswrapper[4946]: I1128 07:15:05.597617 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b7d310ee-b686-4e3d-b554-393fc09a770d","Type":"ContainerStarted","Data":"1d5e25dca53ee666bbf126081c00a7502572f05a1dfc6f018a9664b36c521ea0"} Nov 28 07:15:05 crc kubenswrapper[4946]: I1128 07:15:05.598127 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b7d310ee-b686-4e3d-b554-393fc09a770d","Type":"ContainerStarted","Data":"2ac3d6bf342fed17048fda402f024c8913172e1208e753608700d74e53cfe4f3"} Nov 28 07:15:05 crc kubenswrapper[4946]: I1128 07:15:05.598143 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b7d310ee-b686-4e3d-b554-393fc09a770d","Type":"ContainerStarted","Data":"847a01abfc1b8567574b77c168b044fe726b365c808d4ec665d2ff16afd92625"} Nov 28 07:15:05 crc kubenswrapper[4946]: I1128 07:15:05.598153 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b7d310ee-b686-4e3d-b554-393fc09a770d","Type":"ContainerStarted","Data":"849cb349e2d8d74c8a5c873c020253aa6650e18316daa4e4cda33facfa08c64a"} Nov 28 07:15:06 crc kubenswrapper[4946]: I1128 07:15:06.036928 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-5xxm2" Nov 28 07:15:06 crc kubenswrapper[4946]: I1128 07:15:06.182030 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57401bfe-4d01-4983-8703-d78c50a9886e-config-data\") pod \"57401bfe-4d01-4983-8703-d78c50a9886e\" (UID: \"57401bfe-4d01-4983-8703-d78c50a9886e\") " Nov 28 07:15:06 crc kubenswrapper[4946]: I1128 07:15:06.182107 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzcmd\" (UniqueName: \"kubernetes.io/projected/57401bfe-4d01-4983-8703-d78c50a9886e-kube-api-access-vzcmd\") pod \"57401bfe-4d01-4983-8703-d78c50a9886e\" (UID: \"57401bfe-4d01-4983-8703-d78c50a9886e\") " Nov 28 07:15:06 crc kubenswrapper[4946]: I1128 07:15:06.182206 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57401bfe-4d01-4983-8703-d78c50a9886e-combined-ca-bundle\") pod \"57401bfe-4d01-4983-8703-d78c50a9886e\" (UID: \"57401bfe-4d01-4983-8703-d78c50a9886e\") " Nov 28 07:15:06 crc kubenswrapper[4946]: I1128 07:15:06.182391 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/57401bfe-4d01-4983-8703-d78c50a9886e-db-sync-config-data\") pod \"57401bfe-4d01-4983-8703-d78c50a9886e\" (UID: \"57401bfe-4d01-4983-8703-d78c50a9886e\") " Nov 28 07:15:06 crc kubenswrapper[4946]: I1128 07:15:06.188646 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57401bfe-4d01-4983-8703-d78c50a9886e-kube-api-access-vzcmd" (OuterVolumeSpecName: "kube-api-access-vzcmd") pod "57401bfe-4d01-4983-8703-d78c50a9886e" (UID: "57401bfe-4d01-4983-8703-d78c50a9886e"). InnerVolumeSpecName "kube-api-access-vzcmd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:15:06 crc kubenswrapper[4946]: I1128 07:15:06.203409 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57401bfe-4d01-4983-8703-d78c50a9886e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "57401bfe-4d01-4983-8703-d78c50a9886e" (UID: "57401bfe-4d01-4983-8703-d78c50a9886e"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:15:06 crc kubenswrapper[4946]: I1128 07:15:06.223356 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57401bfe-4d01-4983-8703-d78c50a9886e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "57401bfe-4d01-4983-8703-d78c50a9886e" (UID: "57401bfe-4d01-4983-8703-d78c50a9886e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:15:06 crc kubenswrapper[4946]: I1128 07:15:06.262956 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57401bfe-4d01-4983-8703-d78c50a9886e-config-data" (OuterVolumeSpecName: "config-data") pod "57401bfe-4d01-4983-8703-d78c50a9886e" (UID: "57401bfe-4d01-4983-8703-d78c50a9886e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:15:06 crc kubenswrapper[4946]: I1128 07:15:06.286842 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57401bfe-4d01-4983-8703-d78c50a9886e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:06 crc kubenswrapper[4946]: I1128 07:15:06.286883 4946 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/57401bfe-4d01-4983-8703-d78c50a9886e-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:06 crc kubenswrapper[4946]: I1128 07:15:06.286897 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57401bfe-4d01-4983-8703-d78c50a9886e-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:06 crc kubenswrapper[4946]: I1128 07:15:06.286912 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzcmd\" (UniqueName: \"kubernetes.io/projected/57401bfe-4d01-4983-8703-d78c50a9886e-kube-api-access-vzcmd\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:06 crc kubenswrapper[4946]: I1128 07:15:06.608669 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-5xxm2" event={"ID":"57401bfe-4d01-4983-8703-d78c50a9886e","Type":"ContainerDied","Data":"235e2ac4782d0c869b3dcf661d3c3d5fed93d0093eaccc2e5882a881ab8aa943"} Nov 28 07:15:06 crc kubenswrapper[4946]: I1128 07:15:06.608732 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="235e2ac4782d0c869b3dcf661d3c3d5fed93d0093eaccc2e5882a881ab8aa943" Nov 28 07:15:06 crc kubenswrapper[4946]: I1128 07:15:06.608830 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-5xxm2" Nov 28 07:15:07 crc kubenswrapper[4946]: I1128 07:15:07.058786 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7dd4845ddf-vgnn4"] Nov 28 07:15:07 crc kubenswrapper[4946]: E1128 07:15:07.059657 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57401bfe-4d01-4983-8703-d78c50a9886e" containerName="glance-db-sync" Nov 28 07:15:07 crc kubenswrapper[4946]: I1128 07:15:07.059680 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="57401bfe-4d01-4983-8703-d78c50a9886e" containerName="glance-db-sync" Nov 28 07:15:07 crc kubenswrapper[4946]: E1128 07:15:07.059702 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f58b01ab-52cf-4bd2-a799-8ca6b0de39c4" containerName="collect-profiles" Nov 28 07:15:07 crc kubenswrapper[4946]: I1128 07:15:07.059710 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="f58b01ab-52cf-4bd2-a799-8ca6b0de39c4" containerName="collect-profiles" Nov 28 07:15:07 crc kubenswrapper[4946]: I1128 07:15:07.059870 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="57401bfe-4d01-4983-8703-d78c50a9886e" containerName="glance-db-sync" Nov 28 07:15:07 crc kubenswrapper[4946]: I1128 07:15:07.059901 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="f58b01ab-52cf-4bd2-a799-8ca6b0de39c4" containerName="collect-profiles" Nov 28 07:15:07 crc kubenswrapper[4946]: I1128 07:15:07.060945 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7dd4845ddf-vgnn4" Nov 28 07:15:07 crc kubenswrapper[4946]: I1128 07:15:07.088084 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7dd4845ddf-vgnn4"] Nov 28 07:15:07 crc kubenswrapper[4946]: I1128 07:15:07.207793 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b506f165-10fc-4e19-aafa-6a8f3f5d28ad-config\") pod \"dnsmasq-dns-7dd4845ddf-vgnn4\" (UID: \"b506f165-10fc-4e19-aafa-6a8f3f5d28ad\") " pod="openstack/dnsmasq-dns-7dd4845ddf-vgnn4" Nov 28 07:15:07 crc kubenswrapper[4946]: I1128 07:15:07.208139 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b506f165-10fc-4e19-aafa-6a8f3f5d28ad-ovsdbserver-sb\") pod \"dnsmasq-dns-7dd4845ddf-vgnn4\" (UID: \"b506f165-10fc-4e19-aafa-6a8f3f5d28ad\") " pod="openstack/dnsmasq-dns-7dd4845ddf-vgnn4" Nov 28 07:15:07 crc kubenswrapper[4946]: I1128 07:15:07.208167 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b506f165-10fc-4e19-aafa-6a8f3f5d28ad-dns-svc\") pod \"dnsmasq-dns-7dd4845ddf-vgnn4\" (UID: \"b506f165-10fc-4e19-aafa-6a8f3f5d28ad\") " pod="openstack/dnsmasq-dns-7dd4845ddf-vgnn4" Nov 28 07:15:07 crc kubenswrapper[4946]: I1128 07:15:07.208189 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b506f165-10fc-4e19-aafa-6a8f3f5d28ad-ovsdbserver-nb\") pod \"dnsmasq-dns-7dd4845ddf-vgnn4\" (UID: \"b506f165-10fc-4e19-aafa-6a8f3f5d28ad\") " pod="openstack/dnsmasq-dns-7dd4845ddf-vgnn4" Nov 28 07:15:07 crc kubenswrapper[4946]: I1128 07:15:07.208538 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qczl\" (UniqueName: \"kubernetes.io/projected/b506f165-10fc-4e19-aafa-6a8f3f5d28ad-kube-api-access-8qczl\") pod \"dnsmasq-dns-7dd4845ddf-vgnn4\" (UID: \"b506f165-10fc-4e19-aafa-6a8f3f5d28ad\") " pod="openstack/dnsmasq-dns-7dd4845ddf-vgnn4" Nov 28 07:15:07 crc kubenswrapper[4946]: I1128 07:15:07.310568 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b506f165-10fc-4e19-aafa-6a8f3f5d28ad-config\") pod \"dnsmasq-dns-7dd4845ddf-vgnn4\" (UID: \"b506f165-10fc-4e19-aafa-6a8f3f5d28ad\") " pod="openstack/dnsmasq-dns-7dd4845ddf-vgnn4" Nov 28 07:15:07 crc kubenswrapper[4946]: I1128 07:15:07.310692 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b506f165-10fc-4e19-aafa-6a8f3f5d28ad-ovsdbserver-sb\") pod \"dnsmasq-dns-7dd4845ddf-vgnn4\" (UID: \"b506f165-10fc-4e19-aafa-6a8f3f5d28ad\") " pod="openstack/dnsmasq-dns-7dd4845ddf-vgnn4" Nov 28 07:15:07 crc kubenswrapper[4946]: I1128 07:15:07.310747 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b506f165-10fc-4e19-aafa-6a8f3f5d28ad-dns-svc\") pod \"dnsmasq-dns-7dd4845ddf-vgnn4\" (UID: \"b506f165-10fc-4e19-aafa-6a8f3f5d28ad\") " pod="openstack/dnsmasq-dns-7dd4845ddf-vgnn4" Nov 28 07:15:07 crc kubenswrapper[4946]: I1128 07:15:07.310796 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b506f165-10fc-4e19-aafa-6a8f3f5d28ad-ovsdbserver-nb\") pod \"dnsmasq-dns-7dd4845ddf-vgnn4\" (UID: \"b506f165-10fc-4e19-aafa-6a8f3f5d28ad\") " pod="openstack/dnsmasq-dns-7dd4845ddf-vgnn4" Nov 28 07:15:07 crc kubenswrapper[4946]: I1128 07:15:07.311123 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qczl\" (UniqueName: \"kubernetes.io/projected/b506f165-10fc-4e19-aafa-6a8f3f5d28ad-kube-api-access-8qczl\") pod \"dnsmasq-dns-7dd4845ddf-vgnn4\" (UID: \"b506f165-10fc-4e19-aafa-6a8f3f5d28ad\") " pod="openstack/dnsmasq-dns-7dd4845ddf-vgnn4" Nov 28 07:15:07 crc kubenswrapper[4946]: I1128 07:15:07.311659 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b506f165-10fc-4e19-aafa-6a8f3f5d28ad-config\") pod \"dnsmasq-dns-7dd4845ddf-vgnn4\" (UID: \"b506f165-10fc-4e19-aafa-6a8f3f5d28ad\") " pod="openstack/dnsmasq-dns-7dd4845ddf-vgnn4" Nov 28 07:15:07 crc kubenswrapper[4946]: I1128 07:15:07.311715 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b506f165-10fc-4e19-aafa-6a8f3f5d28ad-ovsdbserver-sb\") pod \"dnsmasq-dns-7dd4845ddf-vgnn4\" (UID: \"b506f165-10fc-4e19-aafa-6a8f3f5d28ad\") " pod="openstack/dnsmasq-dns-7dd4845ddf-vgnn4" Nov 28 07:15:07 crc kubenswrapper[4946]: I1128 07:15:07.312399 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b506f165-10fc-4e19-aafa-6a8f3f5d28ad-dns-svc\") pod \"dnsmasq-dns-7dd4845ddf-vgnn4\" (UID: \"b506f165-10fc-4e19-aafa-6a8f3f5d28ad\") " pod="openstack/dnsmasq-dns-7dd4845ddf-vgnn4" Nov 28 07:15:07 crc kubenswrapper[4946]: I1128 07:15:07.312487 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b506f165-10fc-4e19-aafa-6a8f3f5d28ad-ovsdbserver-nb\") pod \"dnsmasq-dns-7dd4845ddf-vgnn4\" (UID: \"b506f165-10fc-4e19-aafa-6a8f3f5d28ad\") " pod="openstack/dnsmasq-dns-7dd4845ddf-vgnn4" Nov 28 07:15:07 crc kubenswrapper[4946]: I1128 07:15:07.347091 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qczl\" (UniqueName: \"kubernetes.io/projected/b506f165-10fc-4e19-aafa-6a8f3f5d28ad-kube-api-access-8qczl\") pod \"dnsmasq-dns-7dd4845ddf-vgnn4\" (UID: \"b506f165-10fc-4e19-aafa-6a8f3f5d28ad\") " pod="openstack/dnsmasq-dns-7dd4845ddf-vgnn4" Nov 28 07:15:07 crc kubenswrapper[4946]: I1128 07:15:07.390915 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7dd4845ddf-vgnn4" Nov 28 07:15:07 crc kubenswrapper[4946]: I1128 07:15:07.634245 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b7d310ee-b686-4e3d-b554-393fc09a770d","Type":"ContainerStarted","Data":"d6be15027db8aca6a49663ad2705ef701b99e1ed3611ca1bdf517ec488caf40d"} Nov 28 07:15:07 crc kubenswrapper[4946]: I1128 07:15:07.634795 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b7d310ee-b686-4e3d-b554-393fc09a770d","Type":"ContainerStarted","Data":"67bb3d551d212ca961ad8ea7d743990298fb96bccbac103b400c071fec12a04a"} Nov 28 07:15:07 crc kubenswrapper[4946]: I1128 07:15:07.634842 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b7d310ee-b686-4e3d-b554-393fc09a770d","Type":"ContainerStarted","Data":"0b64d0bffa35322fe586ccacf74d1f4ab7472d93d29af359db4d89db97ef491e"} Nov 28 07:15:07 crc kubenswrapper[4946]: I1128 07:15:07.937802 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7dd4845ddf-vgnn4"] Nov 28 07:15:07 crc kubenswrapper[4946]: W1128 07:15:07.947630 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb506f165_10fc_4e19_aafa_6a8f3f5d28ad.slice/crio-e71affe939b408b10f6a0a72ef36abebb033c19499e2a86a1e3271a76af624f9 WatchSource:0}: Error finding container e71affe939b408b10f6a0a72ef36abebb033c19499e2a86a1e3271a76af624f9: Status 404 returned error can't find the container with id e71affe939b408b10f6a0a72ef36abebb033c19499e2a86a1e3271a76af624f9 Nov 28 07:15:08 crc kubenswrapper[4946]: I1128 07:15:08.654334 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b7d310ee-b686-4e3d-b554-393fc09a770d","Type":"ContainerStarted","Data":"61e1d296d26ca679c99de8f8b9f1a8d780c7ec16fe7bd59626b4da870f354696"} Nov 28 07:15:08 crc kubenswrapper[4946]: I1128 07:15:08.657681 4946 generic.go:334] "Generic (PLEG): container finished" podID="b506f165-10fc-4e19-aafa-6a8f3f5d28ad" containerID="1cdfb48546206067f3b29f1ff7671d08f3be3fe8bf0f8f865f21719cadc2dc96" exitCode=0 Nov 28 07:15:08 crc kubenswrapper[4946]: I1128 07:15:08.657739 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dd4845ddf-vgnn4" event={"ID":"b506f165-10fc-4e19-aafa-6a8f3f5d28ad","Type":"ContainerDied","Data":"1cdfb48546206067f3b29f1ff7671d08f3be3fe8bf0f8f865f21719cadc2dc96"} Nov 28 07:15:08 crc kubenswrapper[4946]: I1128 07:15:08.657775 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dd4845ddf-vgnn4" event={"ID":"b506f165-10fc-4e19-aafa-6a8f3f5d28ad","Type":"ContainerStarted","Data":"e71affe939b408b10f6a0a72ef36abebb033c19499e2a86a1e3271a76af624f9"} Nov 28 07:15:09 crc kubenswrapper[4946]: I1128 07:15:09.677522 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b7d310ee-b686-4e3d-b554-393fc09a770d","Type":"ContainerStarted","Data":"3bb83e39c083e832f8a225589956ac89495d0a3270463d35cd5097cffffacde0"} Nov 28 07:15:09 crc kubenswrapper[4946]: I1128 07:15:09.678255 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b7d310ee-b686-4e3d-b554-393fc09a770d","Type":"ContainerStarted","Data":"7be129de7a59c6e25c3ad9da58ad52ccaea20aa7256f4fab80d19e1a02a9f707"} Nov 28 07:15:09 crc kubenswrapper[4946]: I1128 07:15:09.682332 4946 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dd4845ddf-vgnn4" event={"ID":"b506f165-10fc-4e19-aafa-6a8f3f5d28ad","Type":"ContainerStarted","Data":"ba37f521e12d57906b66dc94e1cd692b617295298c1764201bc28f0cfb340437"} Nov 28 07:15:09 crc kubenswrapper[4946]: I1128 07:15:09.682613 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7dd4845ddf-vgnn4" Nov 28 07:15:09 crc kubenswrapper[4946]: I1128 07:15:09.705996 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7dd4845ddf-vgnn4" podStartSLOduration=2.705974233 podStartE2EDuration="2.705974233s" podCreationTimestamp="2025-11-28 07:15:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:15:09.704029376 +0000 UTC m=+1364.082094487" watchObservedRunningTime="2025-11-28 07:15:09.705974233 +0000 UTC m=+1364.084039344" Nov 28 07:15:10 crc kubenswrapper[4946]: I1128 07:15:10.751705 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b7d310ee-b686-4e3d-b554-393fc09a770d","Type":"ContainerStarted","Data":"5bd491e35ece677c5f005652efd7b988e650a41c30c0108a8430aaaf6dcac5e9"} Nov 28 07:15:10 crc kubenswrapper[4946]: I1128 07:15:10.753066 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b7d310ee-b686-4e3d-b554-393fc09a770d","Type":"ContainerStarted","Data":"1c43b5f235fc838d55b3e3138910a90b5396838761a0454dd53a66990464a1e9"} Nov 28 07:15:10 crc kubenswrapper[4946]: I1128 07:15:10.753143 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b7d310ee-b686-4e3d-b554-393fc09a770d","Type":"ContainerStarted","Data":"179205cba904e1173a81ebd776bc1a0a5b564f12e0185b57e69ccb4e5be0d40a"} Nov 28 07:15:10 crc kubenswrapper[4946]: I1128 07:15:10.753195 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b7d310ee-b686-4e3d-b554-393fc09a770d","Type":"ContainerStarted","Data":"882d75cc76bc05932cfdf4ca19f83e0f38f9b3f3c27568b1f8c72bd6dd29c2f0"} Nov 28 07:15:10 crc kubenswrapper[4946]: I1128 07:15:10.753250 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b7d310ee-b686-4e3d-b554-393fc09a770d","Type":"ContainerStarted","Data":"08e99b903087a98fbf737f58badd6a71a2d6baa168caa8e67046f8c35f351fbf"} Nov 28 07:15:10 crc kubenswrapper[4946]: I1128 07:15:10.833590 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=36.011997014 podStartE2EDuration="41.833558674s" podCreationTimestamp="2025-11-28 07:14:29 +0000 UTC" firstStartedPulling="2025-11-28 07:15:03.226270604 +0000 UTC m=+1357.604335735" lastFinishedPulling="2025-11-28 07:15:09.047832274 +0000 UTC m=+1363.425897395" observedRunningTime="2025-11-28 07:15:10.825199449 +0000 UTC m=+1365.203264560" watchObservedRunningTime="2025-11-28 07:15:10.833558674 +0000 UTC m=+1365.211623785" Nov 28 07:15:11 crc kubenswrapper[4946]: I1128 07:15:11.139484 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7dd4845ddf-vgnn4"] Nov 28 07:15:11 crc kubenswrapper[4946]: I1128 07:15:11.171757 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b6bbf7467-w7p6s"] Nov 28 07:15:11 crc kubenswrapper[4946]: I1128 07:15:11.174018 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b6bbf7467-w7p6s" Nov 28 07:15:11 crc kubenswrapper[4946]: I1128 07:15:11.177138 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Nov 28 07:15:11 crc kubenswrapper[4946]: I1128 07:15:11.180549 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b6bbf7467-w7p6s"] Nov 28 07:15:11 crc kubenswrapper[4946]: I1128 07:15:11.312083 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a-config\") pod \"dnsmasq-dns-b6bbf7467-w7p6s\" (UID: \"1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a\") " pod="openstack/dnsmasq-dns-b6bbf7467-w7p6s" Nov 28 07:15:11 crc kubenswrapper[4946]: I1128 07:15:11.312148 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82p6n\" (UniqueName: \"kubernetes.io/projected/1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a-kube-api-access-82p6n\") pod \"dnsmasq-dns-b6bbf7467-w7p6s\" (UID: \"1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a\") " pod="openstack/dnsmasq-dns-b6bbf7467-w7p6s" Nov 28 07:15:11 crc kubenswrapper[4946]: I1128 07:15:11.312400 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a-ovsdbserver-nb\") pod \"dnsmasq-dns-b6bbf7467-w7p6s\" (UID: \"1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a\") " pod="openstack/dnsmasq-dns-b6bbf7467-w7p6s" Nov 28 07:15:11 crc kubenswrapper[4946]: I1128 07:15:11.312550 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a-ovsdbserver-sb\") pod \"dnsmasq-dns-b6bbf7467-w7p6s\" (UID: \"1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a\") " pod="openstack/dnsmasq-dns-b6bbf7467-w7p6s" Nov 28 07:15:11 crc kubenswrapper[4946]: I1128 07:15:11.312605 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a-dns-svc\") pod \"dnsmasq-dns-b6bbf7467-w7p6s\" (UID: \"1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a\") " pod="openstack/dnsmasq-dns-b6bbf7467-w7p6s" Nov 28 07:15:11 crc kubenswrapper[4946]: I1128 07:15:11.312770 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a-dns-swift-storage-0\") pod \"dnsmasq-dns-b6bbf7467-w7p6s\" (UID: \"1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a\") " pod="openstack/dnsmasq-dns-b6bbf7467-w7p6s" Nov 28 07:15:11 crc kubenswrapper[4946]: I1128 07:15:11.414630 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82p6n\" (UniqueName: \"kubernetes.io/projected/1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a-kube-api-access-82p6n\") pod \"dnsmasq-dns-b6bbf7467-w7p6s\" (UID: \"1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a\") " pod="openstack/dnsmasq-dns-b6bbf7467-w7p6s" Nov 28 07:15:11 crc kubenswrapper[4946]: I1128 07:15:11.415207 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a-ovsdbserver-nb\") pod \"dnsmasq-dns-b6bbf7467-w7p6s\" (UID: 
\"1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a\") " pod="openstack/dnsmasq-dns-b6bbf7467-w7p6s" Nov 28 07:15:11 crc kubenswrapper[4946]: I1128 07:15:11.415267 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a-ovsdbserver-sb\") pod \"dnsmasq-dns-b6bbf7467-w7p6s\" (UID: \"1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a\") " pod="openstack/dnsmasq-dns-b6bbf7467-w7p6s" Nov 28 07:15:11 crc kubenswrapper[4946]: I1128 07:15:11.415312 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a-dns-svc\") pod \"dnsmasq-dns-b6bbf7467-w7p6s\" (UID: \"1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a\") " pod="openstack/dnsmasq-dns-b6bbf7467-w7p6s" Nov 28 07:15:11 crc kubenswrapper[4946]: I1128 07:15:11.415388 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a-dns-swift-storage-0\") pod \"dnsmasq-dns-b6bbf7467-w7p6s\" (UID: \"1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a\") " pod="openstack/dnsmasq-dns-b6bbf7467-w7p6s" Nov 28 07:15:11 crc kubenswrapper[4946]: I1128 07:15:11.415528 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a-config\") pod \"dnsmasq-dns-b6bbf7467-w7p6s\" (UID: \"1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a\") " pod="openstack/dnsmasq-dns-b6bbf7467-w7p6s" Nov 28 07:15:11 crc kubenswrapper[4946]: I1128 07:15:11.416160 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a-ovsdbserver-nb\") pod \"dnsmasq-dns-b6bbf7467-w7p6s\" (UID: \"1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a\") " pod="openstack/dnsmasq-dns-b6bbf7467-w7p6s" Nov 28 07:15:11 crc kubenswrapper[4946]: I1128 07:15:11.416372 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a-ovsdbserver-sb\") pod \"dnsmasq-dns-b6bbf7467-w7p6s\" (UID: \"1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a\") " pod="openstack/dnsmasq-dns-b6bbf7467-w7p6s" Nov 28 07:15:11 crc kubenswrapper[4946]: I1128 07:15:11.416553 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a-dns-swift-storage-0\") pod \"dnsmasq-dns-b6bbf7467-w7p6s\" (UID: \"1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a\") " pod="openstack/dnsmasq-dns-b6bbf7467-w7p6s" Nov 28 07:15:11 crc kubenswrapper[4946]: I1128 07:15:11.417148 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a-config\") pod \"dnsmasq-dns-b6bbf7467-w7p6s\" (UID: \"1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a\") " pod="openstack/dnsmasq-dns-b6bbf7467-w7p6s" Nov 28 07:15:11 crc kubenswrapper[4946]: I1128 07:15:11.417266 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a-dns-svc\") pod \"dnsmasq-dns-b6bbf7467-w7p6s\" (UID: \"1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a\") " pod="openstack/dnsmasq-dns-b6bbf7467-w7p6s" Nov 28 07:15:11 crc kubenswrapper[4946]: I1128 07:15:11.439479 
4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82p6n\" (UniqueName: \"kubernetes.io/projected/1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a-kube-api-access-82p6n\") pod \"dnsmasq-dns-b6bbf7467-w7p6s\" (UID: \"1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a\") " pod="openstack/dnsmasq-dns-b6bbf7467-w7p6s" Nov 28 07:15:11 crc kubenswrapper[4946]: I1128 07:15:11.498987 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b6bbf7467-w7p6s" Nov 28 07:15:11 crc kubenswrapper[4946]: I1128 07:15:11.761352 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7dd4845ddf-vgnn4" podUID="b506f165-10fc-4e19-aafa-6a8f3f5d28ad" containerName="dnsmasq-dns" containerID="cri-o://ba37f521e12d57906b66dc94e1cd692b617295298c1764201bc28f0cfb340437" gracePeriod=10 Nov 28 07:15:11 crc kubenswrapper[4946]: I1128 07:15:11.967068 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b6bbf7467-w7p6s"] Nov 28 07:15:12 crc kubenswrapper[4946]: I1128 07:15:12.180521 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7dd4845ddf-vgnn4" Nov 28 07:15:12 crc kubenswrapper[4946]: I1128 07:15:12.339681 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b506f165-10fc-4e19-aafa-6a8f3f5d28ad-ovsdbserver-nb\") pod \"b506f165-10fc-4e19-aafa-6a8f3f5d28ad\" (UID: \"b506f165-10fc-4e19-aafa-6a8f3f5d28ad\") " Nov 28 07:15:12 crc kubenswrapper[4946]: I1128 07:15:12.340121 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b506f165-10fc-4e19-aafa-6a8f3f5d28ad-dns-svc\") pod \"b506f165-10fc-4e19-aafa-6a8f3f5d28ad\" (UID: \"b506f165-10fc-4e19-aafa-6a8f3f5d28ad\") " Nov 28 07:15:12 crc kubenswrapper[4946]: I1128 07:15:12.340169 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qczl\" (UniqueName: \"kubernetes.io/projected/b506f165-10fc-4e19-aafa-6a8f3f5d28ad-kube-api-access-8qczl\") pod \"b506f165-10fc-4e19-aafa-6a8f3f5d28ad\" (UID: \"b506f165-10fc-4e19-aafa-6a8f3f5d28ad\") " Nov 28 07:15:12 crc kubenswrapper[4946]: I1128 07:15:12.340218 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b506f165-10fc-4e19-aafa-6a8f3f5d28ad-config\") pod \"b506f165-10fc-4e19-aafa-6a8f3f5d28ad\" (UID: \"b506f165-10fc-4e19-aafa-6a8f3f5d28ad\") " Nov 28 07:15:12 crc kubenswrapper[4946]: I1128 07:15:12.340395 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b506f165-10fc-4e19-aafa-6a8f3f5d28ad-ovsdbserver-sb\") pod \"b506f165-10fc-4e19-aafa-6a8f3f5d28ad\" (UID: \"b506f165-10fc-4e19-aafa-6a8f3f5d28ad\") " Nov 28 07:15:12 crc kubenswrapper[4946]: I1128 07:15:12.344961 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b506f165-10fc-4e19-aafa-6a8f3f5d28ad-kube-api-access-8qczl" (OuterVolumeSpecName: "kube-api-access-8qczl") pod "b506f165-10fc-4e19-aafa-6a8f3f5d28ad" (UID: "b506f165-10fc-4e19-aafa-6a8f3f5d28ad"). InnerVolumeSpecName "kube-api-access-8qczl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:15:12 crc kubenswrapper[4946]: I1128 07:15:12.382698 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b506f165-10fc-4e19-aafa-6a8f3f5d28ad-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b506f165-10fc-4e19-aafa-6a8f3f5d28ad" (UID: "b506f165-10fc-4e19-aafa-6a8f3f5d28ad"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:15:12 crc kubenswrapper[4946]: I1128 07:15:12.397166 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b506f165-10fc-4e19-aafa-6a8f3f5d28ad-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b506f165-10fc-4e19-aafa-6a8f3f5d28ad" (UID: "b506f165-10fc-4e19-aafa-6a8f3f5d28ad"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:15:12 crc kubenswrapper[4946]: I1128 07:15:12.400307 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b506f165-10fc-4e19-aafa-6a8f3f5d28ad-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b506f165-10fc-4e19-aafa-6a8f3f5d28ad" (UID: "b506f165-10fc-4e19-aafa-6a8f3f5d28ad"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:15:12 crc kubenswrapper[4946]: I1128 07:15:12.404384 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b506f165-10fc-4e19-aafa-6a8f3f5d28ad-config" (OuterVolumeSpecName: "config") pod "b506f165-10fc-4e19-aafa-6a8f3f5d28ad" (UID: "b506f165-10fc-4e19-aafa-6a8f3f5d28ad"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:15:12 crc kubenswrapper[4946]: I1128 07:15:12.442847 4946 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b506f165-10fc-4e19-aafa-6a8f3f5d28ad-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:12 crc kubenswrapper[4946]: I1128 07:15:12.442888 4946 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b506f165-10fc-4e19-aafa-6a8f3f5d28ad-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:12 crc kubenswrapper[4946]: I1128 07:15:12.442903 4946 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b506f165-10fc-4e19-aafa-6a8f3f5d28ad-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:12 crc kubenswrapper[4946]: I1128 07:15:12.442916 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qczl\" (UniqueName: \"kubernetes.io/projected/b506f165-10fc-4e19-aafa-6a8f3f5d28ad-kube-api-access-8qczl\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:12 crc kubenswrapper[4946]: I1128 07:15:12.442932 4946 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b506f165-10fc-4e19-aafa-6a8f3f5d28ad-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:12 crc kubenswrapper[4946]: I1128 07:15:12.774574 4946 generic.go:334] "Generic (PLEG): container finished" podID="1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a" containerID="5ddbb291fca54747893c71ebe000f837f96598a320bce3b00dd58884b9fe427d" exitCode=0 Nov 28 07:15:12 crc kubenswrapper[4946]: I1128 07:15:12.774691 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b6bbf7467-w7p6s" 
event={"ID":"1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a","Type":"ContainerDied","Data":"5ddbb291fca54747893c71ebe000f837f96598a320bce3b00dd58884b9fe427d"} Nov 28 07:15:12 crc kubenswrapper[4946]: I1128 07:15:12.774763 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b6bbf7467-w7p6s" event={"ID":"1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a","Type":"ContainerStarted","Data":"d1e11c781d2b65af34be06feebe590bdb655d9e8daed7c8ee33198544f1fc802"} Nov 28 07:15:12 crc kubenswrapper[4946]: I1128 07:15:12.777882 4946 generic.go:334] "Generic (PLEG): container finished" podID="b506f165-10fc-4e19-aafa-6a8f3f5d28ad" containerID="ba37f521e12d57906b66dc94e1cd692b617295298c1764201bc28f0cfb340437" exitCode=0 Nov 28 07:15:12 crc kubenswrapper[4946]: I1128 07:15:12.777940 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7dd4845ddf-vgnn4" Nov 28 07:15:12 crc kubenswrapper[4946]: I1128 07:15:12.778058 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dd4845ddf-vgnn4" event={"ID":"b506f165-10fc-4e19-aafa-6a8f3f5d28ad","Type":"ContainerDied","Data":"ba37f521e12d57906b66dc94e1cd692b617295298c1764201bc28f0cfb340437"} Nov 28 07:15:12 crc kubenswrapper[4946]: I1128 07:15:12.778097 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dd4845ddf-vgnn4" event={"ID":"b506f165-10fc-4e19-aafa-6a8f3f5d28ad","Type":"ContainerDied","Data":"e71affe939b408b10f6a0a72ef36abebb033c19499e2a86a1e3271a76af624f9"} Nov 28 07:15:12 crc kubenswrapper[4946]: I1128 07:15:12.778119 4946 scope.go:117] "RemoveContainer" containerID="ba37f521e12d57906b66dc94e1cd692b617295298c1764201bc28f0cfb340437" Nov 28 07:15:12 crc kubenswrapper[4946]: I1128 07:15:12.825071 4946 scope.go:117] "RemoveContainer" containerID="1cdfb48546206067f3b29f1ff7671d08f3be3fe8bf0f8f865f21719cadc2dc96" Nov 28 07:15:12 crc kubenswrapper[4946]: I1128 07:15:12.936998 4946 scope.go:117] "RemoveContainer" containerID="ba37f521e12d57906b66dc94e1cd692b617295298c1764201bc28f0cfb340437" Nov 28 07:15:12 crc kubenswrapper[4946]: E1128 07:15:12.939532 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba37f521e12d57906b66dc94e1cd692b617295298c1764201bc28f0cfb340437\": container with ID starting with ba37f521e12d57906b66dc94e1cd692b617295298c1764201bc28f0cfb340437 not found: ID does not exist" containerID="ba37f521e12d57906b66dc94e1cd692b617295298c1764201bc28f0cfb340437" Nov 28 07:15:12 crc kubenswrapper[4946]: I1128 07:15:12.939591 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba37f521e12d57906b66dc94e1cd692b617295298c1764201bc28f0cfb340437"} err="failed to get container status \"ba37f521e12d57906b66dc94e1cd692b617295298c1764201bc28f0cfb340437\": rpc error: code = NotFound desc = could not find container \"ba37f521e12d57906b66dc94e1cd692b617295298c1764201bc28f0cfb340437\": container with ID starting with ba37f521e12d57906b66dc94e1cd692b617295298c1764201bc28f0cfb340437 not found: ID does not exist" Nov 28 07:15:12 crc kubenswrapper[4946]: I1128 07:15:12.939619 4946 scope.go:117] "RemoveContainer" containerID="1cdfb48546206067f3b29f1ff7671d08f3be3fe8bf0f8f865f21719cadc2dc96" Nov 28 07:15:12 crc kubenswrapper[4946]: E1128 07:15:12.944117 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"1cdfb48546206067f3b29f1ff7671d08f3be3fe8bf0f8f865f21719cadc2dc96\": container with ID starting with 1cdfb48546206067f3b29f1ff7671d08f3be3fe8bf0f8f865f21719cadc2dc96 not found: ID does not exist" containerID="1cdfb48546206067f3b29f1ff7671d08f3be3fe8bf0f8f865f21719cadc2dc96" Nov 28 07:15:12 crc kubenswrapper[4946]: I1128 07:15:12.944206 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cdfb48546206067f3b29f1ff7671d08f3be3fe8bf0f8f865f21719cadc2dc96"} err="failed to get container status \"1cdfb48546206067f3b29f1ff7671d08f3be3fe8bf0f8f865f21719cadc2dc96\": rpc error: code = NotFound desc = could not find container \"1cdfb48546206067f3b29f1ff7671d08f3be3fe8bf0f8f865f21719cadc2dc96\": container with ID starting with 1cdfb48546206067f3b29f1ff7671d08f3be3fe8bf0f8f865f21719cadc2dc96 not found: ID does not exist" Nov 28 07:15:13 crc kubenswrapper[4946]: I1128 07:15:13.019885 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7dd4845ddf-vgnn4"] Nov 28 07:15:13 crc kubenswrapper[4946]: I1128 07:15:13.027962 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7dd4845ddf-vgnn4"] Nov 28 07:15:13 crc kubenswrapper[4946]: I1128 07:15:13.793254 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b6bbf7467-w7p6s" event={"ID":"1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a","Type":"ContainerStarted","Data":"7cd49cf4518b0ea942b229b274255151d8e9f64339d104b51c8bcf7aad882348"} Nov 28 07:15:13 crc kubenswrapper[4946]: I1128 07:15:13.793453 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b6bbf7467-w7p6s" Nov 28 07:15:13 crc kubenswrapper[4946]: I1128 07:15:13.832003 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b6bbf7467-w7p6s" podStartSLOduration=2.831967191 podStartE2EDuration="2.831967191s" podCreationTimestamp="2025-11-28 07:15:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:15:13.827746267 +0000 UTC m=+1368.205811418" watchObservedRunningTime="2025-11-28 07:15:13.831967191 +0000 UTC m=+1368.210032332" Nov 28 07:15:14 crc kubenswrapper[4946]: I1128 07:15:14.004275 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b506f165-10fc-4e19-aafa-6a8f3f5d28ad" path="/var/lib/kubelet/pods/b506f165-10fc-4e19-aafa-6a8f3f5d28ad/volumes" Nov 28 07:15:14 crc kubenswrapper[4946]: I1128 07:15:14.195672 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 28 07:15:14 crc kubenswrapper[4946]: I1128 07:15:14.451706 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 28 07:15:14 crc kubenswrapper[4946]: I1128 07:15:14.707708 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-cwsts"] Nov 28 07:15:14 crc kubenswrapper[4946]: E1128 07:15:14.708158 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b506f165-10fc-4e19-aafa-6a8f3f5d28ad" containerName="dnsmasq-dns" Nov 28 07:15:14 crc kubenswrapper[4946]: I1128 07:15:14.708175 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="b506f165-10fc-4e19-aafa-6a8f3f5d28ad" containerName="dnsmasq-dns" Nov 28 07:15:14 crc kubenswrapper[4946]: E1128 07:15:14.708185 4946 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b506f165-10fc-4e19-aafa-6a8f3f5d28ad" containerName="init" Nov 28 07:15:14 crc kubenswrapper[4946]: I1128 07:15:14.708192 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="b506f165-10fc-4e19-aafa-6a8f3f5d28ad" containerName="init" Nov 28 07:15:14 crc kubenswrapper[4946]: I1128 07:15:14.708392 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="b506f165-10fc-4e19-aafa-6a8f3f5d28ad" containerName="dnsmasq-dns" Nov 28 07:15:14 crc kubenswrapper[4946]: I1128 07:15:14.709079 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-cwsts" Nov 28 07:15:14 crc kubenswrapper[4946]: I1128 07:15:14.724175 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-cwsts"] Nov 28 07:15:14 crc kubenswrapper[4946]: I1128 07:15:14.792665 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c6b3d2d-881d-4a19-8551-140a9c02fe4f-operator-scripts\") pod \"cinder-db-create-cwsts\" (UID: \"0c6b3d2d-881d-4a19-8551-140a9c02fe4f\") " pod="openstack/cinder-db-create-cwsts" Nov 28 07:15:14 crc kubenswrapper[4946]: I1128 07:15:14.792753 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5m8k\" (UniqueName: \"kubernetes.io/projected/0c6b3d2d-881d-4a19-8551-140a9c02fe4f-kube-api-access-l5m8k\") pod \"cinder-db-create-cwsts\" (UID: \"0c6b3d2d-881d-4a19-8551-140a9c02fe4f\") " pod="openstack/cinder-db-create-cwsts" Nov 28 07:15:14 crc kubenswrapper[4946]: I1128 07:15:14.810876 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-zbgwb"] Nov 28 07:15:14 crc kubenswrapper[4946]: I1128 07:15:14.812322 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-zbgwb" Nov 28 07:15:14 crc kubenswrapper[4946]: I1128 07:15:14.829535 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-1925-account-create-update-877c2"] Nov 28 07:15:14 crc kubenswrapper[4946]: I1128 07:15:14.832594 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-1925-account-create-update-877c2" Nov 28 07:15:14 crc kubenswrapper[4946]: I1128 07:15:14.836869 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Nov 28 07:15:14 crc kubenswrapper[4946]: I1128 07:15:14.846652 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-1925-account-create-update-877c2"] Nov 28 07:15:14 crc kubenswrapper[4946]: I1128 07:15:14.873103 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-zbgwb"] Nov 28 07:15:14 crc kubenswrapper[4946]: I1128 07:15:14.896602 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/802d0ec6-164b-4033-91d5-514bbd50bc23-operator-scripts\") pod \"barbican-db-create-zbgwb\" (UID: \"802d0ec6-164b-4033-91d5-514bbd50bc23\") " pod="openstack/barbican-db-create-zbgwb" Nov 28 07:15:14 crc kubenswrapper[4946]: I1128 07:15:14.896678 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjrqs\" (UniqueName: \"kubernetes.io/projected/802d0ec6-164b-4033-91d5-514bbd50bc23-kube-api-access-sjrqs\") pod \"barbican-db-create-zbgwb\" (UID: \"802d0ec6-164b-4033-91d5-514bbd50bc23\") " pod="openstack/barbican-db-create-zbgwb" Nov 28 07:15:14 crc kubenswrapper[4946]: I1128 07:15:14.896731 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c480cce-4090-47dd-8337-99546f661b9d-operator-scripts\") pod \"barbican-1925-account-create-update-877c2\" (UID: \"1c480cce-4090-47dd-8337-99546f661b9d\") " pod="openstack/barbican-1925-account-create-update-877c2" Nov 28 07:15:14 crc kubenswrapper[4946]: I1128 07:15:14.896798 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c6b3d2d-881d-4a19-8551-140a9c02fe4f-operator-scripts\") pod \"cinder-db-create-cwsts\" (UID: \"0c6b3d2d-881d-4a19-8551-140a9c02fe4f\") " pod="openstack/cinder-db-create-cwsts" Nov 28 07:15:14 crc kubenswrapper[4946]: I1128 07:15:14.896846 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5m8k\" (UniqueName: \"kubernetes.io/projected/0c6b3d2d-881d-4a19-8551-140a9c02fe4f-kube-api-access-l5m8k\") pod \"cinder-db-create-cwsts\" (UID: \"0c6b3d2d-881d-4a19-8551-140a9c02fe4f\") " pod="openstack/cinder-db-create-cwsts" Nov 28 07:15:14 crc kubenswrapper[4946]: I1128 07:15:14.896871 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jpg5\" (UniqueName: \"kubernetes.io/projected/1c480cce-4090-47dd-8337-99546f661b9d-kube-api-access-6jpg5\") pod \"barbican-1925-account-create-update-877c2\" (UID: \"1c480cce-4090-47dd-8337-99546f661b9d\") " pod="openstack/barbican-1925-account-create-update-877c2" Nov 28 07:15:14 crc kubenswrapper[4946]: I1128 07:15:14.897916 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c6b3d2d-881d-4a19-8551-140a9c02fe4f-operator-scripts\") pod \"cinder-db-create-cwsts\" (UID: \"0c6b3d2d-881d-4a19-8551-140a9c02fe4f\") " pod="openstack/cinder-db-create-cwsts" Nov 28 07:15:14 crc kubenswrapper[4946]: I1128 07:15:14.908739 4946 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/cinder-8ebd-account-create-update-mn7h9"] Nov 28 07:15:14 crc kubenswrapper[4946]: I1128 07:15:14.909987 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-8ebd-account-create-update-mn7h9" Nov 28 07:15:14 crc kubenswrapper[4946]: I1128 07:15:14.914200 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Nov 28 07:15:14 crc kubenswrapper[4946]: I1128 07:15:14.924289 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-8ebd-account-create-update-mn7h9"] Nov 28 07:15:14 crc kubenswrapper[4946]: I1128 07:15:14.927070 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5m8k\" (UniqueName: \"kubernetes.io/projected/0c6b3d2d-881d-4a19-8551-140a9c02fe4f-kube-api-access-l5m8k\") pod \"cinder-db-create-cwsts\" (UID: \"0c6b3d2d-881d-4a19-8551-140a9c02fe4f\") " pod="openstack/cinder-db-create-cwsts" Nov 28 07:15:14 crc kubenswrapper[4946]: I1128 07:15:14.989814 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-zrmg4"] Nov 28 07:15:14 crc kubenswrapper[4946]: I1128 07:15:14.991324 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-zrmg4" Nov 28 07:15:14 crc kubenswrapper[4946]: I1128 07:15:14.994957 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 28 07:15:14 crc kubenswrapper[4946]: I1128 07:15:14.995187 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-hmftc" Nov 28 07:15:14 crc kubenswrapper[4946]: I1128 07:15:14.995401 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 28 07:15:14 crc kubenswrapper[4946]: I1128 07:15:14.995610 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 28 07:15:14 crc kubenswrapper[4946]: I1128 07:15:14.998040 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jpg5\" (UniqueName: \"kubernetes.io/projected/1c480cce-4090-47dd-8337-99546f661b9d-kube-api-access-6jpg5\") pod \"barbican-1925-account-create-update-877c2\" (UID: \"1c480cce-4090-47dd-8337-99546f661b9d\") " pod="openstack/barbican-1925-account-create-update-877c2" Nov 28 07:15:14 crc kubenswrapper[4946]: I1128 07:15:14.998191 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/759da62f-4114-48b9-9eb0-1fad429f3044-operator-scripts\") pod \"cinder-8ebd-account-create-update-mn7h9\" (UID: \"759da62f-4114-48b9-9eb0-1fad429f3044\") " pod="openstack/cinder-8ebd-account-create-update-mn7h9" Nov 28 07:15:14 crc kubenswrapper[4946]: I1128 07:15:14.998288 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/802d0ec6-164b-4033-91d5-514bbd50bc23-operator-scripts\") pod \"barbican-db-create-zbgwb\" (UID: \"802d0ec6-164b-4033-91d5-514bbd50bc23\") " pod="openstack/barbican-db-create-zbgwb" Nov 28 07:15:14 crc kubenswrapper[4946]: I1128 07:15:14.998392 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjrqs\" (UniqueName: \"kubernetes.io/projected/802d0ec6-164b-4033-91d5-514bbd50bc23-kube-api-access-sjrqs\") pod \"barbican-db-create-zbgwb\" (UID: 
\"802d0ec6-164b-4033-91d5-514bbd50bc23\") " pod="openstack/barbican-db-create-zbgwb" Nov 28 07:15:14 crc kubenswrapper[4946]: I1128 07:15:14.998487 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c480cce-4090-47dd-8337-99546f661b9d-operator-scripts\") pod \"barbican-1925-account-create-update-877c2\" (UID: \"1c480cce-4090-47dd-8337-99546f661b9d\") " pod="openstack/barbican-1925-account-create-update-877c2" Nov 28 07:15:14 crc kubenswrapper[4946]: I1128 07:15:14.998587 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86gsj\" (UniqueName: \"kubernetes.io/projected/759da62f-4114-48b9-9eb0-1fad429f3044-kube-api-access-86gsj\") pod \"cinder-8ebd-account-create-update-mn7h9\" (UID: \"759da62f-4114-48b9-9eb0-1fad429f3044\") " pod="openstack/cinder-8ebd-account-create-update-mn7h9" Nov 28 07:15:14 crc kubenswrapper[4946]: I1128 07:15:14.999354 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/802d0ec6-164b-4033-91d5-514bbd50bc23-operator-scripts\") pod \"barbican-db-create-zbgwb\" (UID: \"802d0ec6-164b-4033-91d5-514bbd50bc23\") " pod="openstack/barbican-db-create-zbgwb" Nov 28 07:15:14 crc kubenswrapper[4946]: I1128 07:15:14.999801 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c480cce-4090-47dd-8337-99546f661b9d-operator-scripts\") pod \"barbican-1925-account-create-update-877c2\" (UID: \"1c480cce-4090-47dd-8337-99546f661b9d\") " pod="openstack/barbican-1925-account-create-update-877c2" Nov 28 07:15:15 crc kubenswrapper[4946]: I1128 07:15:14.999894 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-zrmg4"] Nov 28 07:15:15 crc kubenswrapper[4946]: I1128 07:15:15.015546 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjrqs\" (UniqueName: \"kubernetes.io/projected/802d0ec6-164b-4033-91d5-514bbd50bc23-kube-api-access-sjrqs\") pod \"barbican-db-create-zbgwb\" (UID: \"802d0ec6-164b-4033-91d5-514bbd50bc23\") " pod="openstack/barbican-db-create-zbgwb" Nov 28 07:15:15 crc kubenswrapper[4946]: I1128 07:15:15.017028 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jpg5\" (UniqueName: \"kubernetes.io/projected/1c480cce-4090-47dd-8337-99546f661b9d-kube-api-access-6jpg5\") pod \"barbican-1925-account-create-update-877c2\" (UID: \"1c480cce-4090-47dd-8337-99546f661b9d\") " pod="openstack/barbican-1925-account-create-update-877c2" Nov 28 07:15:15 crc kubenswrapper[4946]: I1128 07:15:15.047098 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-cwsts" Nov 28 07:15:15 crc kubenswrapper[4946]: I1128 07:15:15.101003 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91d18445-e4a1-4a57-abd5-222cade7df9f-config-data\") pod \"keystone-db-sync-zrmg4\" (UID: \"91d18445-e4a1-4a57-abd5-222cade7df9f\") " pod="openstack/keystone-db-sync-zrmg4" Nov 28 07:15:15 crc kubenswrapper[4946]: I1128 07:15:15.101296 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86gsj\" (UniqueName: \"kubernetes.io/projected/759da62f-4114-48b9-9eb0-1fad429f3044-kube-api-access-86gsj\") pod \"cinder-8ebd-account-create-update-mn7h9\" (UID: \"759da62f-4114-48b9-9eb0-1fad429f3044\") " pod="openstack/cinder-8ebd-account-create-update-mn7h9" Nov 28 07:15:15 crc kubenswrapper[4946]: I1128 07:15:15.101560 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/759da62f-4114-48b9-9eb0-1fad429f3044-operator-scripts\") pod \"cinder-8ebd-account-create-update-mn7h9\" (UID: \"759da62f-4114-48b9-9eb0-1fad429f3044\") " pod="openstack/cinder-8ebd-account-create-update-mn7h9" Nov 28 07:15:15 crc kubenswrapper[4946]: I1128 07:15:15.101595 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j657f\" (UniqueName: \"kubernetes.io/projected/91d18445-e4a1-4a57-abd5-222cade7df9f-kube-api-access-j657f\") pod \"keystone-db-sync-zrmg4\" (UID: \"91d18445-e4a1-4a57-abd5-222cade7df9f\") " pod="openstack/keystone-db-sync-zrmg4" Nov 28 07:15:15 crc kubenswrapper[4946]: I1128 07:15:15.101684 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91d18445-e4a1-4a57-abd5-222cade7df9f-combined-ca-bundle\") pod \"keystone-db-sync-zrmg4\" (UID: \"91d18445-e4a1-4a57-abd5-222cade7df9f\") " pod="openstack/keystone-db-sync-zrmg4" Nov 28 07:15:15 crc kubenswrapper[4946]: I1128 07:15:15.103201 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/759da62f-4114-48b9-9eb0-1fad429f3044-operator-scripts\") pod \"cinder-8ebd-account-create-update-mn7h9\" (UID: \"759da62f-4114-48b9-9eb0-1fad429f3044\") " pod="openstack/cinder-8ebd-account-create-update-mn7h9" Nov 28 07:15:15 crc kubenswrapper[4946]: I1128 07:15:15.119965 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-dghw2"] Nov 28 07:15:15 crc kubenswrapper[4946]: I1128 07:15:15.121801 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-dghw2" Nov 28 07:15:15 crc kubenswrapper[4946]: I1128 07:15:15.124410 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86gsj\" (UniqueName: \"kubernetes.io/projected/759da62f-4114-48b9-9eb0-1fad429f3044-kube-api-access-86gsj\") pod \"cinder-8ebd-account-create-update-mn7h9\" (UID: \"759da62f-4114-48b9-9eb0-1fad429f3044\") " pod="openstack/cinder-8ebd-account-create-update-mn7h9" Nov 28 07:15:15 crc kubenswrapper[4946]: I1128 07:15:15.137820 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-zbgwb" Nov 28 07:15:15 crc kubenswrapper[4946]: I1128 07:15:15.163538 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-dghw2"] Nov 28 07:15:15 crc kubenswrapper[4946]: I1128 07:15:15.164598 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-1925-account-create-update-877c2" Nov 28 07:15:15 crc kubenswrapper[4946]: I1128 07:15:15.205213 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j657f\" (UniqueName: \"kubernetes.io/projected/91d18445-e4a1-4a57-abd5-222cade7df9f-kube-api-access-j657f\") pod \"keystone-db-sync-zrmg4\" (UID: \"91d18445-e4a1-4a57-abd5-222cade7df9f\") " pod="openstack/keystone-db-sync-zrmg4" Nov 28 07:15:15 crc kubenswrapper[4946]: I1128 07:15:15.205733 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91d18445-e4a1-4a57-abd5-222cade7df9f-combined-ca-bundle\") pod \"keystone-db-sync-zrmg4\" (UID: \"91d18445-e4a1-4a57-abd5-222cade7df9f\") " pod="openstack/keystone-db-sync-zrmg4" Nov 28 07:15:15 crc kubenswrapper[4946]: I1128 07:15:15.215553 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91d18445-e4a1-4a57-abd5-222cade7df9f-config-data\") pod \"keystone-db-sync-zrmg4\" (UID: \"91d18445-e4a1-4a57-abd5-222cade7df9f\") " pod="openstack/keystone-db-sync-zrmg4" Nov 28 07:15:15 crc kubenswrapper[4946]: I1128 07:15:15.215684 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba455d60-e55b-40f4-9e67-a171de4223f0-operator-scripts\") pod \"neutron-db-create-dghw2\" (UID: \"ba455d60-e55b-40f4-9e67-a171de4223f0\") " pod="openstack/neutron-db-create-dghw2" Nov 28 07:15:15 crc kubenswrapper[4946]: I1128 07:15:15.215955 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwd7w\" (UniqueName: \"kubernetes.io/projected/ba455d60-e55b-40f4-9e67-a171de4223f0-kube-api-access-xwd7w\") pod \"neutron-db-create-dghw2\" (UID: \"ba455d60-e55b-40f4-9e67-a171de4223f0\") " pod="openstack/neutron-db-create-dghw2" Nov 28 07:15:15 crc kubenswrapper[4946]: I1128 07:15:15.219112 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91d18445-e4a1-4a57-abd5-222cade7df9f-combined-ca-bundle\") pod \"keystone-db-sync-zrmg4\" (UID: \"91d18445-e4a1-4a57-abd5-222cade7df9f\") " pod="openstack/keystone-db-sync-zrmg4" Nov 28 07:15:15 crc kubenswrapper[4946]: I1128 07:15:15.219688 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91d18445-e4a1-4a57-abd5-222cade7df9f-config-data\") pod \"keystone-db-sync-zrmg4\" (UID: \"91d18445-e4a1-4a57-abd5-222cade7df9f\") " pod="openstack/keystone-db-sync-zrmg4" Nov 28 07:15:15 crc kubenswrapper[4946]: I1128 07:15:15.234724 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-8ebd-account-create-update-mn7h9" Nov 28 07:15:15 crc kubenswrapper[4946]: I1128 07:15:15.259028 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j657f\" (UniqueName: \"kubernetes.io/projected/91d18445-e4a1-4a57-abd5-222cade7df9f-kube-api-access-j657f\") pod \"keystone-db-sync-zrmg4\" (UID: \"91d18445-e4a1-4a57-abd5-222cade7df9f\") " pod="openstack/keystone-db-sync-zrmg4" Nov 28 07:15:15 crc kubenswrapper[4946]: I1128 07:15:15.266808 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-c822-account-create-update-lr7q8"] Nov 28 07:15:15 crc kubenswrapper[4946]: I1128 07:15:15.273055 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c822-account-create-update-lr7q8" Nov 28 07:15:15 crc kubenswrapper[4946]: I1128 07:15:15.277619 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Nov 28 07:15:15 crc kubenswrapper[4946]: I1128 07:15:15.281276 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c822-account-create-update-lr7q8"] Nov 28 07:15:15 crc kubenswrapper[4946]: I1128 07:15:15.317384 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba455d60-e55b-40f4-9e67-a171de4223f0-operator-scripts\") pod \"neutron-db-create-dghw2\" (UID: \"ba455d60-e55b-40f4-9e67-a171de4223f0\") " pod="openstack/neutron-db-create-dghw2" Nov 28 07:15:15 crc kubenswrapper[4946]: I1128 07:15:15.317490 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f619657b-649a-4152-886e-9357b34fced2-operator-scripts\") pod \"neutron-c822-account-create-update-lr7q8\" (UID: \"f619657b-649a-4152-886e-9357b34fced2\") " pod="openstack/neutron-c822-account-create-update-lr7q8" Nov 28 07:15:15 crc kubenswrapper[4946]: I1128 07:15:15.317521 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwd7w\" (UniqueName: \"kubernetes.io/projected/ba455d60-e55b-40f4-9e67-a171de4223f0-kube-api-access-xwd7w\") pod \"neutron-db-create-dghw2\" (UID: \"ba455d60-e55b-40f4-9e67-a171de4223f0\") " pod="openstack/neutron-db-create-dghw2" Nov 28 07:15:15 crc kubenswrapper[4946]: I1128 07:15:15.317569 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzr7m\" (UniqueName: \"kubernetes.io/projected/f619657b-649a-4152-886e-9357b34fced2-kube-api-access-pzr7m\") pod \"neutron-c822-account-create-update-lr7q8\" (UID: \"f619657b-649a-4152-886e-9357b34fced2\") " pod="openstack/neutron-c822-account-create-update-lr7q8" Nov 28 07:15:15 crc kubenswrapper[4946]: I1128 07:15:15.318897 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba455d60-e55b-40f4-9e67-a171de4223f0-operator-scripts\") pod \"neutron-db-create-dghw2\" (UID: \"ba455d60-e55b-40f4-9e67-a171de4223f0\") " pod="openstack/neutron-db-create-dghw2" Nov 28 07:15:15 crc kubenswrapper[4946]: I1128 07:15:15.344286 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwd7w\" (UniqueName: \"kubernetes.io/projected/ba455d60-e55b-40f4-9e67-a171de4223f0-kube-api-access-xwd7w\") pod \"neutron-db-create-dghw2\" (UID: \"ba455d60-e55b-40f4-9e67-a171de4223f0\") " 
pod="openstack/neutron-db-create-dghw2" Nov 28 07:15:15 crc kubenswrapper[4946]: I1128 07:15:15.419594 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f619657b-649a-4152-886e-9357b34fced2-operator-scripts\") pod \"neutron-c822-account-create-update-lr7q8\" (UID: \"f619657b-649a-4152-886e-9357b34fced2\") " pod="openstack/neutron-c822-account-create-update-lr7q8" Nov 28 07:15:15 crc kubenswrapper[4946]: I1128 07:15:15.419703 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzr7m\" (UniqueName: \"kubernetes.io/projected/f619657b-649a-4152-886e-9357b34fced2-kube-api-access-pzr7m\") pod \"neutron-c822-account-create-update-lr7q8\" (UID: \"f619657b-649a-4152-886e-9357b34fced2\") " pod="openstack/neutron-c822-account-create-update-lr7q8" Nov 28 07:15:15 crc kubenswrapper[4946]: I1128 07:15:15.421019 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f619657b-649a-4152-886e-9357b34fced2-operator-scripts\") pod \"neutron-c822-account-create-update-lr7q8\" (UID: \"f619657b-649a-4152-886e-9357b34fced2\") " pod="openstack/neutron-c822-account-create-update-lr7q8" Nov 28 07:15:15 crc kubenswrapper[4946]: I1128 07:15:15.440547 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzr7m\" (UniqueName: \"kubernetes.io/projected/f619657b-649a-4152-886e-9357b34fced2-kube-api-access-pzr7m\") pod \"neutron-c822-account-create-update-lr7q8\" (UID: \"f619657b-649a-4152-886e-9357b34fced2\") " pod="openstack/neutron-c822-account-create-update-lr7q8" Nov 28 07:15:15 crc kubenswrapper[4946]: I1128 07:15:15.493283 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-zrmg4" Nov 28 07:15:15 crc kubenswrapper[4946]: I1128 07:15:15.552894 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-dghw2" Nov 28 07:15:15 crc kubenswrapper[4946]: I1128 07:15:15.604973 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-c822-account-create-update-lr7q8" Nov 28 07:15:15 crc kubenswrapper[4946]: W1128 07:15:15.730978 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c6b3d2d_881d_4a19_8551_140a9c02fe4f.slice/crio-b04b715a5ee7589eaecb7dbd436657ef860f94bee2d826fe19604b26ee9115d1 WatchSource:0}: Error finding container b04b715a5ee7589eaecb7dbd436657ef860f94bee2d826fe19604b26ee9115d1: Status 404 returned error can't find the container with id b04b715a5ee7589eaecb7dbd436657ef860f94bee2d826fe19604b26ee9115d1 Nov 28 07:15:15 crc kubenswrapper[4946]: I1128 07:15:15.743558 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-cwsts"] Nov 28 07:15:15 crc kubenswrapper[4946]: I1128 07:15:15.815452 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-zbgwb"] Nov 28 07:15:15 crc kubenswrapper[4946]: I1128 07:15:15.821794 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-cwsts" event={"ID":"0c6b3d2d-881d-4a19-8551-140a9c02fe4f","Type":"ContainerStarted","Data":"b04b715a5ee7589eaecb7dbd436657ef860f94bee2d826fe19604b26ee9115d1"} Nov 28 07:15:15 crc kubenswrapper[4946]: I1128 07:15:15.889391 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-1925-account-create-update-877c2"] Nov 28 07:15:15 crc kubenswrapper[4946]: W1128 07:15:15.907820 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c480cce_4090_47dd_8337_99546f661b9d.slice/crio-5262c6aa2e47804aee6e1ca71a05d725f6843c79127393a63ae2c43aaebf5ae3 WatchSource:0}: Error finding container 5262c6aa2e47804aee6e1ca71a05d725f6843c79127393a63ae2c43aaebf5ae3: Status 404 returned error can't find the container with id 5262c6aa2e47804aee6e1ca71a05d725f6843c79127393a63ae2c43aaebf5ae3 Nov 28 07:15:16 crc kubenswrapper[4946]: W1128 07:15:16.051665 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod759da62f_4114_48b9_9eb0_1fad429f3044.slice/crio-17e256ad44d5c18ef31df8f97e950a4028b5051b21c0d0ddf7b88831660b9a7b WatchSource:0}: Error finding container 17e256ad44d5c18ef31df8f97e950a4028b5051b21c0d0ddf7b88831660b9a7b: Status 404 returned error can't find the container with id 17e256ad44d5c18ef31df8f97e950a4028b5051b21c0d0ddf7b88831660b9a7b Nov 28 07:15:16 crc kubenswrapper[4946]: I1128 07:15:16.080418 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-8ebd-account-create-update-mn7h9"] Nov 28 07:15:16 crc kubenswrapper[4946]: I1128 07:15:16.136863 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-zrmg4"] Nov 28 07:15:16 crc kubenswrapper[4946]: I1128 07:15:16.146254 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-dghw2"] Nov 28 07:15:16 crc kubenswrapper[4946]: W1128 07:15:16.151397 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91d18445_e4a1_4a57_abd5_222cade7df9f.slice/crio-7c9ba4b2fb1c21d31dd972a627917211a011efdb730e87425ca854e72a53c179 WatchSource:0}: Error finding container 7c9ba4b2fb1c21d31dd972a627917211a011efdb730e87425ca854e72a53c179: Status 404 returned error can't find the container with id 7c9ba4b2fb1c21d31dd972a627917211a011efdb730e87425ca854e72a53c179 Nov 28 
07:15:16 crc kubenswrapper[4946]: I1128 07:15:16.310991 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c822-account-create-update-lr7q8"] Nov 28 07:15:16 crc kubenswrapper[4946]: W1128 07:15:16.315667 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf619657b_649a_4152_886e_9357b34fced2.slice/crio-6c0ee5c6c1bfb955ce260570ffdabc5b3bb5489824a6d5f609376b3abe2358a7 WatchSource:0}: Error finding container 6c0ee5c6c1bfb955ce260570ffdabc5b3bb5489824a6d5f609376b3abe2358a7: Status 404 returned error can't find the container with id 6c0ee5c6c1bfb955ce260570ffdabc5b3bb5489824a6d5f609376b3abe2358a7 Nov 28 07:15:16 crc kubenswrapper[4946]: I1128 07:15:16.832442 4946 generic.go:334] "Generic (PLEG): container finished" podID="ba455d60-e55b-40f4-9e67-a171de4223f0" containerID="a4520b1e79a227ffd054996a0b1d87713c95eb613cdf51e4e33231939731796d" exitCode=0 Nov 28 07:15:16 crc kubenswrapper[4946]: I1128 07:15:16.832647 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-dghw2" event={"ID":"ba455d60-e55b-40f4-9e67-a171de4223f0","Type":"ContainerDied","Data":"a4520b1e79a227ffd054996a0b1d87713c95eb613cdf51e4e33231939731796d"} Nov 28 07:15:16 crc kubenswrapper[4946]: I1128 07:15:16.832912 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-dghw2" event={"ID":"ba455d60-e55b-40f4-9e67-a171de4223f0","Type":"ContainerStarted","Data":"942bd72d388376b9c1dec6b14f3f5e88aabdf0f06f5ee0a72f60c49eeb9c2ba3"} Nov 28 07:15:16 crc kubenswrapper[4946]: I1128 07:15:16.836174 4946 generic.go:334] "Generic (PLEG): container finished" podID="1c480cce-4090-47dd-8337-99546f661b9d" containerID="033021f7f40b9435bea47524398b2573a4b1eaaa8d23bc44625948305eabd648" exitCode=0 Nov 28 07:15:16 crc kubenswrapper[4946]: I1128 07:15:16.836309 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-1925-account-create-update-877c2" event={"ID":"1c480cce-4090-47dd-8337-99546f661b9d","Type":"ContainerDied","Data":"033021f7f40b9435bea47524398b2573a4b1eaaa8d23bc44625948305eabd648"} Nov 28 07:15:16 crc kubenswrapper[4946]: I1128 07:15:16.836353 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-1925-account-create-update-877c2" event={"ID":"1c480cce-4090-47dd-8337-99546f661b9d","Type":"ContainerStarted","Data":"5262c6aa2e47804aee6e1ca71a05d725f6843c79127393a63ae2c43aaebf5ae3"} Nov 28 07:15:16 crc kubenswrapper[4946]: I1128 07:15:16.839090 4946 generic.go:334] "Generic (PLEG): container finished" podID="f619657b-649a-4152-886e-9357b34fced2" containerID="441402af77e9f4e15b43afd73dd8a933edb223fe92fcbbad3e9635df9b92743c" exitCode=0 Nov 28 07:15:16 crc kubenswrapper[4946]: I1128 07:15:16.839151 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c822-account-create-update-lr7q8" event={"ID":"f619657b-649a-4152-886e-9357b34fced2","Type":"ContainerDied","Data":"441402af77e9f4e15b43afd73dd8a933edb223fe92fcbbad3e9635df9b92743c"} Nov 28 07:15:16 crc kubenswrapper[4946]: I1128 07:15:16.839223 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c822-account-create-update-lr7q8" event={"ID":"f619657b-649a-4152-886e-9357b34fced2","Type":"ContainerStarted","Data":"6c0ee5c6c1bfb955ce260570ffdabc5b3bb5489824a6d5f609376b3abe2358a7"} Nov 28 07:15:16 crc kubenswrapper[4946]: I1128 07:15:16.840777 4946 generic.go:334] "Generic (PLEG): container finished" 
podID="759da62f-4114-48b9-9eb0-1fad429f3044" containerID="dc3c127513486f5382ffc7404818074fd53d094defda55359502651b1c8d18ef" exitCode=0 Nov 28 07:15:16 crc kubenswrapper[4946]: I1128 07:15:16.840821 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8ebd-account-create-update-mn7h9" event={"ID":"759da62f-4114-48b9-9eb0-1fad429f3044","Type":"ContainerDied","Data":"dc3c127513486f5382ffc7404818074fd53d094defda55359502651b1c8d18ef"} Nov 28 07:15:16 crc kubenswrapper[4946]: I1128 07:15:16.840847 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8ebd-account-create-update-mn7h9" event={"ID":"759da62f-4114-48b9-9eb0-1fad429f3044","Type":"ContainerStarted","Data":"17e256ad44d5c18ef31df8f97e950a4028b5051b21c0d0ddf7b88831660b9a7b"} Nov 28 07:15:16 crc kubenswrapper[4946]: I1128 07:15:16.843041 4946 generic.go:334] "Generic (PLEG): container finished" podID="802d0ec6-164b-4033-91d5-514bbd50bc23" containerID="a70612598df74c451158c1e4c0bf961c5ebacf1837e0399d558530ad6df9f0d1" exitCode=0 Nov 28 07:15:16 crc kubenswrapper[4946]: I1128 07:15:16.843113 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-zbgwb" event={"ID":"802d0ec6-164b-4033-91d5-514bbd50bc23","Type":"ContainerDied","Data":"a70612598df74c451158c1e4c0bf961c5ebacf1837e0399d558530ad6df9f0d1"} Nov 28 07:15:16 crc kubenswrapper[4946]: I1128 07:15:16.843137 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-zbgwb" event={"ID":"802d0ec6-164b-4033-91d5-514bbd50bc23","Type":"ContainerStarted","Data":"8824972bb0ec61a13023b5dabac09b716f16d607efb58e076cffae51ce220bbb"} Nov 28 07:15:16 crc kubenswrapper[4946]: I1128 07:15:16.845367 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-zrmg4" event={"ID":"91d18445-e4a1-4a57-abd5-222cade7df9f","Type":"ContainerStarted","Data":"7c9ba4b2fb1c21d31dd972a627917211a011efdb730e87425ca854e72a53c179"} Nov 28 07:15:16 crc kubenswrapper[4946]: I1128 07:15:16.851792 4946 generic.go:334] "Generic (PLEG): container finished" podID="0c6b3d2d-881d-4a19-8551-140a9c02fe4f" containerID="708604ae7d5da79105e4d57c3239ec79662e44de189acba45fef8858a84a3729" exitCode=0 Nov 28 07:15:16 crc kubenswrapper[4946]: I1128 07:15:16.851876 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-cwsts" event={"ID":"0c6b3d2d-881d-4a19-8551-140a9c02fe4f","Type":"ContainerDied","Data":"708604ae7d5da79105e4d57c3239ec79662e44de189acba45fef8858a84a3729"} Nov 28 07:15:16 crc kubenswrapper[4946]: E1128 07:15:16.864403 4946 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba455d60_e55b_40f4_9e67_a171de4223f0.slice/crio-a4520b1e79a227ffd054996a0b1d87713c95eb613cdf51e4e33231939731796d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf619657b_649a_4152_886e_9357b34fced2.slice/crio-conmon-441402af77e9f4e15b43afd73dd8a933edb223fe92fcbbad3e9635df9b92743c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba455d60_e55b_40f4_9e67_a171de4223f0.slice/crio-conmon-a4520b1e79a227ffd054996a0b1d87713c95eb613cdf51e4e33231939731796d.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod759da62f_4114_48b9_9eb0_1fad429f3044.slice/crio-conmon-dc3c127513486f5382ffc7404818074fd53d094defda55359502651b1c8d18ef.scope\": RecentStats: unable to find data in memory cache]" Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.271435 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c822-account-create-update-lr7q8" Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.389093 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzr7m\" (UniqueName: \"kubernetes.io/projected/f619657b-649a-4152-886e-9357b34fced2-kube-api-access-pzr7m\") pod \"f619657b-649a-4152-886e-9357b34fced2\" (UID: \"f619657b-649a-4152-886e-9357b34fced2\") " Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.389263 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f619657b-649a-4152-886e-9357b34fced2-operator-scripts\") pod \"f619657b-649a-4152-886e-9357b34fced2\" (UID: \"f619657b-649a-4152-886e-9357b34fced2\") " Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.390773 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f619657b-649a-4152-886e-9357b34fced2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f619657b-649a-4152-886e-9357b34fced2" (UID: "f619657b-649a-4152-886e-9357b34fced2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.400759 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f619657b-649a-4152-886e-9357b34fced2-kube-api-access-pzr7m" (OuterVolumeSpecName: "kube-api-access-pzr7m") pod "f619657b-649a-4152-886e-9357b34fced2" (UID: "f619657b-649a-4152-886e-9357b34fced2"). InnerVolumeSpecName "kube-api-access-pzr7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.491608 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzr7m\" (UniqueName: \"kubernetes.io/projected/f619657b-649a-4152-886e-9357b34fced2-kube-api-access-pzr7m\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.491652 4946 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f619657b-649a-4152-886e-9357b34fced2-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.502866 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-1925-account-create-update-877c2" Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.530235 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-zbgwb" Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.544728 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-8ebd-account-create-update-mn7h9" Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.593216 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/759da62f-4114-48b9-9eb0-1fad429f3044-operator-scripts\") pod \"759da62f-4114-48b9-9eb0-1fad429f3044\" (UID: \"759da62f-4114-48b9-9eb0-1fad429f3044\") " Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.593267 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c480cce-4090-47dd-8337-99546f661b9d-operator-scripts\") pod \"1c480cce-4090-47dd-8337-99546f661b9d\" (UID: \"1c480cce-4090-47dd-8337-99546f661b9d\") " Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.593300 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjrqs\" (UniqueName: \"kubernetes.io/projected/802d0ec6-164b-4033-91d5-514bbd50bc23-kube-api-access-sjrqs\") pod \"802d0ec6-164b-4033-91d5-514bbd50bc23\" (UID: \"802d0ec6-164b-4033-91d5-514bbd50bc23\") " Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.593357 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/802d0ec6-164b-4033-91d5-514bbd50bc23-operator-scripts\") pod \"802d0ec6-164b-4033-91d5-514bbd50bc23\" (UID: \"802d0ec6-164b-4033-91d5-514bbd50bc23\") " Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.593390 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jpg5\" (UniqueName: \"kubernetes.io/projected/1c480cce-4090-47dd-8337-99546f661b9d-kube-api-access-6jpg5\") pod \"1c480cce-4090-47dd-8337-99546f661b9d\" (UID: \"1c480cce-4090-47dd-8337-99546f661b9d\") " Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.593613 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86gsj\" (UniqueName: \"kubernetes.io/projected/759da62f-4114-48b9-9eb0-1fad429f3044-kube-api-access-86gsj\") pod \"759da62f-4114-48b9-9eb0-1fad429f3044\" (UID: \"759da62f-4114-48b9-9eb0-1fad429f3044\") " Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.597621 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/802d0ec6-164b-4033-91d5-514bbd50bc23-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "802d0ec6-164b-4033-91d5-514bbd50bc23" (UID: "802d0ec6-164b-4033-91d5-514bbd50bc23"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.597652 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/759da62f-4114-48b9-9eb0-1fad429f3044-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "759da62f-4114-48b9-9eb0-1fad429f3044" (UID: "759da62f-4114-48b9-9eb0-1fad429f3044"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.598144 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c480cce-4090-47dd-8337-99546f661b9d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1c480cce-4090-47dd-8337-99546f661b9d" (UID: "1c480cce-4090-47dd-8337-99546f661b9d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.600109 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c480cce-4090-47dd-8337-99546f661b9d-kube-api-access-6jpg5" (OuterVolumeSpecName: "kube-api-access-6jpg5") pod "1c480cce-4090-47dd-8337-99546f661b9d" (UID: "1c480cce-4090-47dd-8337-99546f661b9d"). InnerVolumeSpecName "kube-api-access-6jpg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.600187 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/759da62f-4114-48b9-9eb0-1fad429f3044-kube-api-access-86gsj" (OuterVolumeSpecName: "kube-api-access-86gsj") pod "759da62f-4114-48b9-9eb0-1fad429f3044" (UID: "759da62f-4114-48b9-9eb0-1fad429f3044"). InnerVolumeSpecName "kube-api-access-86gsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.600214 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/802d0ec6-164b-4033-91d5-514bbd50bc23-kube-api-access-sjrqs" (OuterVolumeSpecName: "kube-api-access-sjrqs") pod "802d0ec6-164b-4033-91d5-514bbd50bc23" (UID: "802d0ec6-164b-4033-91d5-514bbd50bc23"). InnerVolumeSpecName "kube-api-access-sjrqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.615581 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-cwsts" Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.622281 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-dghw2" Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.695609 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba455d60-e55b-40f4-9e67-a171de4223f0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ba455d60-e55b-40f4-9e67-a171de4223f0" (UID: "ba455d60-e55b-40f4-9e67-a171de4223f0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.694934 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba455d60-e55b-40f4-9e67-a171de4223f0-operator-scripts\") pod \"ba455d60-e55b-40f4-9e67-a171de4223f0\" (UID: \"ba455d60-e55b-40f4-9e67-a171de4223f0\") " Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.695724 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c6b3d2d-881d-4a19-8551-140a9c02fe4f-operator-scripts\") pod \"0c6b3d2d-881d-4a19-8551-140a9c02fe4f\" (UID: \"0c6b3d2d-881d-4a19-8551-140a9c02fe4f\") " Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.696260 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c6b3d2d-881d-4a19-8551-140a9c02fe4f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0c6b3d2d-881d-4a19-8551-140a9c02fe4f" (UID: "0c6b3d2d-881d-4a19-8551-140a9c02fe4f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.695751 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5m8k\" (UniqueName: \"kubernetes.io/projected/0c6b3d2d-881d-4a19-8551-140a9c02fe4f-kube-api-access-l5m8k\") pod \"0c6b3d2d-881d-4a19-8551-140a9c02fe4f\" (UID: \"0c6b3d2d-881d-4a19-8551-140a9c02fe4f\") " Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.696383 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwd7w\" (UniqueName: \"kubernetes.io/projected/ba455d60-e55b-40f4-9e67-a171de4223f0-kube-api-access-xwd7w\") pod \"ba455d60-e55b-40f4-9e67-a171de4223f0\" (UID: \"ba455d60-e55b-40f4-9e67-a171de4223f0\") " Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.696997 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86gsj\" (UniqueName: \"kubernetes.io/projected/759da62f-4114-48b9-9eb0-1fad429f3044-kube-api-access-86gsj\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.697019 4946 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/759da62f-4114-48b9-9eb0-1fad429f3044-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.697034 4946 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c480cce-4090-47dd-8337-99546f661b9d-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.697048 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjrqs\" (UniqueName: \"kubernetes.io/projected/802d0ec6-164b-4033-91d5-514bbd50bc23-kube-api-access-sjrqs\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.697061 4946 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/802d0ec6-164b-4033-91d5-514bbd50bc23-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.697072 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jpg5\" (UniqueName: \"kubernetes.io/projected/1c480cce-4090-47dd-8337-99546f661b9d-kube-api-access-6jpg5\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.697085 4946 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba455d60-e55b-40f4-9e67-a171de4223f0-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.697096 4946 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c6b3d2d-881d-4a19-8551-140a9c02fe4f-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.699023 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c6b3d2d-881d-4a19-8551-140a9c02fe4f-kube-api-access-l5m8k" (OuterVolumeSpecName: "kube-api-access-l5m8k") pod "0c6b3d2d-881d-4a19-8551-140a9c02fe4f" (UID: "0c6b3d2d-881d-4a19-8551-140a9c02fe4f"). InnerVolumeSpecName "kube-api-access-l5m8k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.700542 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba455d60-e55b-40f4-9e67-a171de4223f0-kube-api-access-xwd7w" (OuterVolumeSpecName: "kube-api-access-xwd7w") pod "ba455d60-e55b-40f4-9e67-a171de4223f0" (UID: "ba455d60-e55b-40f4-9e67-a171de4223f0"). InnerVolumeSpecName "kube-api-access-xwd7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.798603 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5m8k\" (UniqueName: \"kubernetes.io/projected/0c6b3d2d-881d-4a19-8551-140a9c02fe4f-kube-api-access-l5m8k\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.798641 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwd7w\" (UniqueName: \"kubernetes.io/projected/ba455d60-e55b-40f4-9e67-a171de4223f0-kube-api-access-xwd7w\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.874611 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8ebd-account-create-update-mn7h9" event={"ID":"759da62f-4114-48b9-9eb0-1fad429f3044","Type":"ContainerDied","Data":"17e256ad44d5c18ef31df8f97e950a4028b5051b21c0d0ddf7b88831660b9a7b"} Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.874673 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17e256ad44d5c18ef31df8f97e950a4028b5051b21c0d0ddf7b88831660b9a7b" Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.874683 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-8ebd-account-create-update-mn7h9" Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.879093 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-zbgwb" event={"ID":"802d0ec6-164b-4033-91d5-514bbd50bc23","Type":"ContainerDied","Data":"8824972bb0ec61a13023b5dabac09b716f16d607efb58e076cffae51ce220bbb"} Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.879135 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8824972bb0ec61a13023b5dabac09b716f16d607efb58e076cffae51ce220bbb" Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.879203 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-zbgwb" Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.893687 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-cwsts" Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.893705 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-cwsts" event={"ID":"0c6b3d2d-881d-4a19-8551-140a9c02fe4f","Type":"ContainerDied","Data":"b04b715a5ee7589eaecb7dbd436657ef860f94bee2d826fe19604b26ee9115d1"} Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.893764 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b04b715a5ee7589eaecb7dbd436657ef860f94bee2d826fe19604b26ee9115d1" Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.898938 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-dghw2" event={"ID":"ba455d60-e55b-40f4-9e67-a171de4223f0","Type":"ContainerDied","Data":"942bd72d388376b9c1dec6b14f3f5e88aabdf0f06f5ee0a72f60c49eeb9c2ba3"} Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.898976 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="942bd72d388376b9c1dec6b14f3f5e88aabdf0f06f5ee0a72f60c49eeb9c2ba3" Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.898987 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-dghw2" Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.902127 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-1925-account-create-update-877c2" event={"ID":"1c480cce-4090-47dd-8337-99546f661b9d","Type":"ContainerDied","Data":"5262c6aa2e47804aee6e1ca71a05d725f6843c79127393a63ae2c43aaebf5ae3"} Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.902175 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-1925-account-create-update-877c2" Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.902181 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5262c6aa2e47804aee6e1ca71a05d725f6843c79127393a63ae2c43aaebf5ae3" Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.903784 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c822-account-create-update-lr7q8" event={"ID":"f619657b-649a-4152-886e-9357b34fced2","Type":"ContainerDied","Data":"6c0ee5c6c1bfb955ce260570ffdabc5b3bb5489824a6d5f609376b3abe2358a7"} Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.903822 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c0ee5c6c1bfb955ce260570ffdabc5b3bb5489824a6d5f609376b3abe2358a7" Nov 28 07:15:18 crc kubenswrapper[4946]: I1128 07:15:18.903845 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-c822-account-create-update-lr7q8" Nov 28 07:15:21 crc kubenswrapper[4946]: I1128 07:15:21.500573 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b6bbf7467-w7p6s" Nov 28 07:15:21 crc kubenswrapper[4946]: I1128 07:15:21.588586 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c8cb8df65-wcsz4"] Nov 28 07:15:21 crc kubenswrapper[4946]: I1128 07:15:21.588952 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7c8cb8df65-wcsz4" podUID="91e352b8-41dc-44ca-8f6d-77c1e7de5469" containerName="dnsmasq-dns" containerID="cri-o://c943cffe443eec957964cd2a79890f077db273d84828eb15f3af69c5ce24c40a" gracePeriod=10 Nov 28 07:15:21 crc kubenswrapper[4946]: I1128 07:15:21.938334 4946 generic.go:334] "Generic (PLEG): container finished" podID="91e352b8-41dc-44ca-8f6d-77c1e7de5469" containerID="c943cffe443eec957964cd2a79890f077db273d84828eb15f3af69c5ce24c40a" exitCode=0 Nov 28 07:15:21 crc kubenswrapper[4946]: I1128 07:15:21.938540 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c8cb8df65-wcsz4" event={"ID":"91e352b8-41dc-44ca-8f6d-77c1e7de5469","Type":"ContainerDied","Data":"c943cffe443eec957964cd2a79890f077db273d84828eb15f3af69c5ce24c40a"} Nov 28 07:15:22 crc kubenswrapper[4946]: I1128 07:15:22.327918 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c8cb8df65-wcsz4" Nov 28 07:15:22 crc kubenswrapper[4946]: I1128 07:15:22.380223 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91e352b8-41dc-44ca-8f6d-77c1e7de5469-config\") pod \"91e352b8-41dc-44ca-8f6d-77c1e7de5469\" (UID: \"91e352b8-41dc-44ca-8f6d-77c1e7de5469\") " Nov 28 07:15:22 crc kubenswrapper[4946]: I1128 07:15:22.380451 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/91e352b8-41dc-44ca-8f6d-77c1e7de5469-ovsdbserver-nb\") pod \"91e352b8-41dc-44ca-8f6d-77c1e7de5469\" (UID: \"91e352b8-41dc-44ca-8f6d-77c1e7de5469\") " Nov 28 07:15:22 crc kubenswrapper[4946]: I1128 07:15:22.380558 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7ntw\" (UniqueName: \"kubernetes.io/projected/91e352b8-41dc-44ca-8f6d-77c1e7de5469-kube-api-access-k7ntw\") pod \"91e352b8-41dc-44ca-8f6d-77c1e7de5469\" (UID: \"91e352b8-41dc-44ca-8f6d-77c1e7de5469\") " Nov 28 07:15:22 crc kubenswrapper[4946]: I1128 07:15:22.380719 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91e352b8-41dc-44ca-8f6d-77c1e7de5469-dns-svc\") pod \"91e352b8-41dc-44ca-8f6d-77c1e7de5469\" (UID: \"91e352b8-41dc-44ca-8f6d-77c1e7de5469\") " Nov 28 07:15:22 crc kubenswrapper[4946]: I1128 07:15:22.380800 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/91e352b8-41dc-44ca-8f6d-77c1e7de5469-ovsdbserver-sb\") pod \"91e352b8-41dc-44ca-8f6d-77c1e7de5469\" (UID: \"91e352b8-41dc-44ca-8f6d-77c1e7de5469\") " Nov 28 07:15:22 crc kubenswrapper[4946]: I1128 07:15:22.391544 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91e352b8-41dc-44ca-8f6d-77c1e7de5469-kube-api-access-k7ntw" (OuterVolumeSpecName: 
"kube-api-access-k7ntw") pod "91e352b8-41dc-44ca-8f6d-77c1e7de5469" (UID: "91e352b8-41dc-44ca-8f6d-77c1e7de5469"). InnerVolumeSpecName "kube-api-access-k7ntw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:15:22 crc kubenswrapper[4946]: I1128 07:15:22.425025 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91e352b8-41dc-44ca-8f6d-77c1e7de5469-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "91e352b8-41dc-44ca-8f6d-77c1e7de5469" (UID: "91e352b8-41dc-44ca-8f6d-77c1e7de5469"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:15:22 crc kubenswrapper[4946]: I1128 07:15:22.435324 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91e352b8-41dc-44ca-8f6d-77c1e7de5469-config" (OuterVolumeSpecName: "config") pod "91e352b8-41dc-44ca-8f6d-77c1e7de5469" (UID: "91e352b8-41dc-44ca-8f6d-77c1e7de5469"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:15:22 crc kubenswrapper[4946]: I1128 07:15:22.435917 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91e352b8-41dc-44ca-8f6d-77c1e7de5469-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "91e352b8-41dc-44ca-8f6d-77c1e7de5469" (UID: "91e352b8-41dc-44ca-8f6d-77c1e7de5469"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:15:22 crc kubenswrapper[4946]: I1128 07:15:22.445657 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91e352b8-41dc-44ca-8f6d-77c1e7de5469-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "91e352b8-41dc-44ca-8f6d-77c1e7de5469" (UID: "91e352b8-41dc-44ca-8f6d-77c1e7de5469"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:15:22 crc kubenswrapper[4946]: I1128 07:15:22.483231 4946 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91e352b8-41dc-44ca-8f6d-77c1e7de5469-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:22 crc kubenswrapper[4946]: I1128 07:15:22.483276 4946 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/91e352b8-41dc-44ca-8f6d-77c1e7de5469-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:22 crc kubenswrapper[4946]: I1128 07:15:22.483291 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7ntw\" (UniqueName: \"kubernetes.io/projected/91e352b8-41dc-44ca-8f6d-77c1e7de5469-kube-api-access-k7ntw\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:22 crc kubenswrapper[4946]: I1128 07:15:22.483303 4946 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91e352b8-41dc-44ca-8f6d-77c1e7de5469-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:22 crc kubenswrapper[4946]: I1128 07:15:22.483312 4946 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/91e352b8-41dc-44ca-8f6d-77c1e7de5469-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:22 crc kubenswrapper[4946]: I1128 07:15:22.955444 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-zrmg4" event={"ID":"91d18445-e4a1-4a57-abd5-222cade7df9f","Type":"ContainerStarted","Data":"7c1543b88dc71b87f31608f10e009dc744e9ea56762eb51a9a779b1da61ce75b"} Nov 28 07:15:22 crc kubenswrapper[4946]: I1128 07:15:22.964852 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c8cb8df65-wcsz4" event={"ID":"91e352b8-41dc-44ca-8f6d-77c1e7de5469","Type":"ContainerDied","Data":"1dfabc0c89786705e3e67d967a2bac9e427cf78cf69d19465fa9c48ffa70df6a"} Nov 28 07:15:22 crc kubenswrapper[4946]: I1128 07:15:22.965107 4946 scope.go:117] "RemoveContainer" containerID="c943cffe443eec957964cd2a79890f077db273d84828eb15f3af69c5ce24c40a" Nov 28 07:15:22 crc kubenswrapper[4946]: I1128 07:15:22.965918 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c8cb8df65-wcsz4" Nov 28 07:15:22 crc kubenswrapper[4946]: I1128 07:15:22.987475 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-zrmg4" podStartSLOduration=3.165480494 podStartE2EDuration="8.98743139s" podCreationTimestamp="2025-11-28 07:15:14 +0000 UTC" firstStartedPulling="2025-11-28 07:15:16.168133128 +0000 UTC m=+1370.546198239" lastFinishedPulling="2025-11-28 07:15:21.990084004 +0000 UTC m=+1376.368149135" observedRunningTime="2025-11-28 07:15:22.981503995 +0000 UTC m=+1377.359569116" watchObservedRunningTime="2025-11-28 07:15:22.98743139 +0000 UTC m=+1377.365496501" Nov 28 07:15:23 crc kubenswrapper[4946]: I1128 07:15:23.024734 4946 scope.go:117] "RemoveContainer" containerID="f30c67527b275a897b7f74c6d77350b5369816fdfe63a2e4bed944f133e98f9e" Nov 28 07:15:23 crc kubenswrapper[4946]: I1128 07:15:23.034758 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c8cb8df65-wcsz4"] Nov 28 07:15:23 crc kubenswrapper[4946]: I1128 07:15:23.049622 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c8cb8df65-wcsz4"] Nov 28 07:15:24 crc kubenswrapper[4946]: I1128 07:15:24.005543 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91e352b8-41dc-44ca-8f6d-77c1e7de5469" path="/var/lib/kubelet/pods/91e352b8-41dc-44ca-8f6d-77c1e7de5469/volumes" Nov 28 07:15:24 crc kubenswrapper[4946]: I1128 07:15:24.731111 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 07:15:24 crc kubenswrapper[4946]: I1128 07:15:24.731650 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 07:15:25 crc kubenswrapper[4946]: I1128 07:15:25.997798 4946 generic.go:334] "Generic (PLEG): container finished" podID="91d18445-e4a1-4a57-abd5-222cade7df9f" containerID="7c1543b88dc71b87f31608f10e009dc744e9ea56762eb51a9a779b1da61ce75b" exitCode=0 Nov 28 07:15:26 crc kubenswrapper[4946]: I1128 07:15:26.005495 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-zrmg4" event={"ID":"91d18445-e4a1-4a57-abd5-222cade7df9f","Type":"ContainerDied","Data":"7c1543b88dc71b87f31608f10e009dc744e9ea56762eb51a9a779b1da61ce75b"} Nov 28 07:15:27 crc kubenswrapper[4946]: I1128 07:15:27.450880 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-zrmg4" Nov 28 07:15:27 crc kubenswrapper[4946]: I1128 07:15:27.500722 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91d18445-e4a1-4a57-abd5-222cade7df9f-config-data\") pod \"91d18445-e4a1-4a57-abd5-222cade7df9f\" (UID: \"91d18445-e4a1-4a57-abd5-222cade7df9f\") " Nov 28 07:15:27 crc kubenswrapper[4946]: I1128 07:15:27.500792 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j657f\" (UniqueName: \"kubernetes.io/projected/91d18445-e4a1-4a57-abd5-222cade7df9f-kube-api-access-j657f\") pod \"91d18445-e4a1-4a57-abd5-222cade7df9f\" (UID: \"91d18445-e4a1-4a57-abd5-222cade7df9f\") " Nov 28 07:15:27 crc kubenswrapper[4946]: I1128 07:15:27.500978 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91d18445-e4a1-4a57-abd5-222cade7df9f-combined-ca-bundle\") pod \"91d18445-e4a1-4a57-abd5-222cade7df9f\" (UID: \"91d18445-e4a1-4a57-abd5-222cade7df9f\") " Nov 28 07:15:27 crc kubenswrapper[4946]: I1128 07:15:27.506712 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91d18445-e4a1-4a57-abd5-222cade7df9f-kube-api-access-j657f" (OuterVolumeSpecName: "kube-api-access-j657f") pod "91d18445-e4a1-4a57-abd5-222cade7df9f" (UID: "91d18445-e4a1-4a57-abd5-222cade7df9f"). InnerVolumeSpecName "kube-api-access-j657f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:15:27 crc kubenswrapper[4946]: I1128 07:15:27.528672 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91d18445-e4a1-4a57-abd5-222cade7df9f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "91d18445-e4a1-4a57-abd5-222cade7df9f" (UID: "91d18445-e4a1-4a57-abd5-222cade7df9f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:15:27 crc kubenswrapper[4946]: I1128 07:15:27.545884 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91d18445-e4a1-4a57-abd5-222cade7df9f-config-data" (OuterVolumeSpecName: "config-data") pod "91d18445-e4a1-4a57-abd5-222cade7df9f" (UID: "91d18445-e4a1-4a57-abd5-222cade7df9f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:15:27 crc kubenswrapper[4946]: I1128 07:15:27.604052 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91d18445-e4a1-4a57-abd5-222cade7df9f-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:27 crc kubenswrapper[4946]: I1128 07:15:27.604093 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j657f\" (UniqueName: \"kubernetes.io/projected/91d18445-e4a1-4a57-abd5-222cade7df9f-kube-api-access-j657f\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:27 crc kubenswrapper[4946]: I1128 07:15:27.604107 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91d18445-e4a1-4a57-abd5-222cade7df9f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.020049 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-zrmg4" event={"ID":"91d18445-e4a1-4a57-abd5-222cade7df9f","Type":"ContainerDied","Data":"7c9ba4b2fb1c21d31dd972a627917211a011efdb730e87425ca854e72a53c179"} Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.020432 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c9ba4b2fb1c21d31dd972a627917211a011efdb730e87425ca854e72a53c179" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.020190 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-zrmg4" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.334248 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-wvnxz"] Nov 28 07:15:28 crc kubenswrapper[4946]: E1128 07:15:28.336560 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="802d0ec6-164b-4033-91d5-514bbd50bc23" containerName="mariadb-database-create" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.336587 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="802d0ec6-164b-4033-91d5-514bbd50bc23" containerName="mariadb-database-create" Nov 28 07:15:28 crc kubenswrapper[4946]: E1128 07:15:28.336612 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91e352b8-41dc-44ca-8f6d-77c1e7de5469" containerName="init" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.336620 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="91e352b8-41dc-44ca-8f6d-77c1e7de5469" containerName="init" Nov 28 07:15:28 crc kubenswrapper[4946]: E1128 07:15:28.336632 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f619657b-649a-4152-886e-9357b34fced2" containerName="mariadb-account-create-update" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.336639 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="f619657b-649a-4152-886e-9357b34fced2" containerName="mariadb-account-create-update" Nov 28 07:15:28 crc kubenswrapper[4946]: E1128 07:15:28.336653 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="759da62f-4114-48b9-9eb0-1fad429f3044" containerName="mariadb-account-create-update" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.336661 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="759da62f-4114-48b9-9eb0-1fad429f3044" containerName="mariadb-account-create-update" Nov 28 07:15:28 crc kubenswrapper[4946]: E1128 07:15:28.336673 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba455d60-e55b-40f4-9e67-a171de4223f0" 
containerName="mariadb-database-create" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.336679 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba455d60-e55b-40f4-9e67-a171de4223f0" containerName="mariadb-database-create" Nov 28 07:15:28 crc kubenswrapper[4946]: E1128 07:15:28.336690 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91e352b8-41dc-44ca-8f6d-77c1e7de5469" containerName="dnsmasq-dns" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.336697 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="91e352b8-41dc-44ca-8f6d-77c1e7de5469" containerName="dnsmasq-dns" Nov 28 07:15:28 crc kubenswrapper[4946]: E1128 07:15:28.336710 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c6b3d2d-881d-4a19-8551-140a9c02fe4f" containerName="mariadb-database-create" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.336717 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c6b3d2d-881d-4a19-8551-140a9c02fe4f" containerName="mariadb-database-create" Nov 28 07:15:28 crc kubenswrapper[4946]: E1128 07:15:28.336735 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c480cce-4090-47dd-8337-99546f661b9d" containerName="mariadb-account-create-update" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.336741 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c480cce-4090-47dd-8337-99546f661b9d" containerName="mariadb-account-create-update" Nov 28 07:15:28 crc kubenswrapper[4946]: E1128 07:15:28.336750 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91d18445-e4a1-4a57-abd5-222cade7df9f" containerName="keystone-db-sync" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.336756 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="91d18445-e4a1-4a57-abd5-222cade7df9f" containerName="keystone-db-sync" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.336987 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="759da62f-4114-48b9-9eb0-1fad429f3044" containerName="mariadb-account-create-update" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.337005 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="f619657b-649a-4152-886e-9357b34fced2" containerName="mariadb-account-create-update" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.337021 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="91e352b8-41dc-44ca-8f6d-77c1e7de5469" containerName="dnsmasq-dns" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.337032 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="802d0ec6-164b-4033-91d5-514bbd50bc23" containerName="mariadb-database-create" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.337046 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba455d60-e55b-40f4-9e67-a171de4223f0" containerName="mariadb-database-create" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.337057 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="91d18445-e4a1-4a57-abd5-222cade7df9f" containerName="keystone-db-sync" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.337070 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c480cce-4090-47dd-8337-99546f661b9d" containerName="mariadb-account-create-update" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.337079 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c6b3d2d-881d-4a19-8551-140a9c02fe4f" containerName="mariadb-database-create" 
Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.337702 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wvnxz" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.339970 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.340231 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.340354 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.340601 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.341220 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-hmftc" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.352374 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wvnxz"] Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.386861 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76bb7864cf-xh4zj"] Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.390025 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76bb7864cf-xh4zj" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.422812 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db6faea9-e5b3-4621-b198-1d323a1825d4-dns-svc\") pod \"dnsmasq-dns-76bb7864cf-xh4zj\" (UID: \"db6faea9-e5b3-4621-b198-1d323a1825d4\") " pod="openstack/dnsmasq-dns-76bb7864cf-xh4zj" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.422912 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db6faea9-e5b3-4621-b198-1d323a1825d4-ovsdbserver-nb\") pod \"dnsmasq-dns-76bb7864cf-xh4zj\" (UID: \"db6faea9-e5b3-4621-b198-1d323a1825d4\") " pod="openstack/dnsmasq-dns-76bb7864cf-xh4zj" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.422947 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-562d2\" (UniqueName: \"kubernetes.io/projected/db6faea9-e5b3-4621-b198-1d323a1825d4-kube-api-access-562d2\") pod \"dnsmasq-dns-76bb7864cf-xh4zj\" (UID: \"db6faea9-e5b3-4621-b198-1d323a1825d4\") " pod="openstack/dnsmasq-dns-76bb7864cf-xh4zj" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.423021 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b16ae262-483b-4ed8-9ab8-15dad5215dbc-combined-ca-bundle\") pod \"keystone-bootstrap-wvnxz\" (UID: \"b16ae262-483b-4ed8-9ab8-15dad5215dbc\") " pod="openstack/keystone-bootstrap-wvnxz" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.423153 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db6faea9-e5b3-4621-b198-1d323a1825d4-config\") pod \"dnsmasq-dns-76bb7864cf-xh4zj\" (UID: \"db6faea9-e5b3-4621-b198-1d323a1825d4\") " pod="openstack/dnsmasq-dns-76bb7864cf-xh4zj" Nov 28 07:15:28 crc
kubenswrapper[4946]: I1128 07:15:28.423290 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b16ae262-483b-4ed8-9ab8-15dad5215dbc-config-data\") pod \"keystone-bootstrap-wvnxz\" (UID: \"b16ae262-483b-4ed8-9ab8-15dad5215dbc\") " pod="openstack/keystone-bootstrap-wvnxz" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.423334 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b16ae262-483b-4ed8-9ab8-15dad5215dbc-fernet-keys\") pod \"keystone-bootstrap-wvnxz\" (UID: \"b16ae262-483b-4ed8-9ab8-15dad5215dbc\") " pod="openstack/keystone-bootstrap-wvnxz" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.423363 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6rqh\" (UniqueName: \"kubernetes.io/projected/b16ae262-483b-4ed8-9ab8-15dad5215dbc-kube-api-access-f6rqh\") pod \"keystone-bootstrap-wvnxz\" (UID: \"b16ae262-483b-4ed8-9ab8-15dad5215dbc\") " pod="openstack/keystone-bootstrap-wvnxz" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.423387 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/db6faea9-e5b3-4621-b198-1d323a1825d4-dns-swift-storage-0\") pod \"dnsmasq-dns-76bb7864cf-xh4zj\" (UID: \"db6faea9-e5b3-4621-b198-1d323a1825d4\") " pod="openstack/dnsmasq-dns-76bb7864cf-xh4zj" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.423447 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b16ae262-483b-4ed8-9ab8-15dad5215dbc-scripts\") pod \"keystone-bootstrap-wvnxz\" (UID: \"b16ae262-483b-4ed8-9ab8-15dad5215dbc\") " pod="openstack/keystone-bootstrap-wvnxz" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.423561 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b16ae262-483b-4ed8-9ab8-15dad5215dbc-credential-keys\") pod \"keystone-bootstrap-wvnxz\" (UID: \"b16ae262-483b-4ed8-9ab8-15dad5215dbc\") " pod="openstack/keystone-bootstrap-wvnxz" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.423593 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db6faea9-e5b3-4621-b198-1d323a1825d4-ovsdbserver-sb\") pod \"dnsmasq-dns-76bb7864cf-xh4zj\" (UID: \"db6faea9-e5b3-4621-b198-1d323a1825d4\") " pod="openstack/dnsmasq-dns-76bb7864cf-xh4zj" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.478550 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76bb7864cf-xh4zj"] Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.525125 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b16ae262-483b-4ed8-9ab8-15dad5215dbc-config-data\") pod \"keystone-bootstrap-wvnxz\" (UID: \"b16ae262-483b-4ed8-9ab8-15dad5215dbc\") " pod="openstack/keystone-bootstrap-wvnxz" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.525202 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/b16ae262-483b-4ed8-9ab8-15dad5215dbc-fernet-keys\") pod \"keystone-bootstrap-wvnxz\" (UID: \"b16ae262-483b-4ed8-9ab8-15dad5215dbc\") " pod="openstack/keystone-bootstrap-wvnxz" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.525237 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6rqh\" (UniqueName: \"kubernetes.io/projected/b16ae262-483b-4ed8-9ab8-15dad5215dbc-kube-api-access-f6rqh\") pod \"keystone-bootstrap-wvnxz\" (UID: \"b16ae262-483b-4ed8-9ab8-15dad5215dbc\") " pod="openstack/keystone-bootstrap-wvnxz" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.525262 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/db6faea9-e5b3-4621-b198-1d323a1825d4-dns-swift-storage-0\") pod \"dnsmasq-dns-76bb7864cf-xh4zj\" (UID: \"db6faea9-e5b3-4621-b198-1d323a1825d4\") " pod="openstack/dnsmasq-dns-76bb7864cf-xh4zj" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.525304 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b16ae262-483b-4ed8-9ab8-15dad5215dbc-scripts\") pod \"keystone-bootstrap-wvnxz\" (UID: \"b16ae262-483b-4ed8-9ab8-15dad5215dbc\") " pod="openstack/keystone-bootstrap-wvnxz" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.525333 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b16ae262-483b-4ed8-9ab8-15dad5215dbc-credential-keys\") pod \"keystone-bootstrap-wvnxz\" (UID: \"b16ae262-483b-4ed8-9ab8-15dad5215dbc\") " pod="openstack/keystone-bootstrap-wvnxz" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.525360 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db6faea9-e5b3-4621-b198-1d323a1825d4-ovsdbserver-sb\") pod \"dnsmasq-dns-76bb7864cf-xh4zj\" (UID: \"db6faea9-e5b3-4621-b198-1d323a1825d4\") " pod="openstack/dnsmasq-dns-76bb7864cf-xh4zj" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.525398 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db6faea9-e5b3-4621-b198-1d323a1825d4-dns-svc\") pod \"dnsmasq-dns-76bb7864cf-xh4zj\" (UID: \"db6faea9-e5b3-4621-b198-1d323a1825d4\") " pod="openstack/dnsmasq-dns-76bb7864cf-xh4zj" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.525436 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db6faea9-e5b3-4621-b198-1d323a1825d4-ovsdbserver-nb\") pod \"dnsmasq-dns-76bb7864cf-xh4zj\" (UID: \"db6faea9-e5b3-4621-b198-1d323a1825d4\") " pod="openstack/dnsmasq-dns-76bb7864cf-xh4zj" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.525480 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-562d2\" (UniqueName: \"kubernetes.io/projected/db6faea9-e5b3-4621-b198-1d323a1825d4-kube-api-access-562d2\") pod \"dnsmasq-dns-76bb7864cf-xh4zj\" (UID: \"db6faea9-e5b3-4621-b198-1d323a1825d4\") " pod="openstack/dnsmasq-dns-76bb7864cf-xh4zj" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.525518 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b16ae262-483b-4ed8-9ab8-15dad5215dbc-combined-ca-bundle\") 
pod \"keystone-bootstrap-wvnxz\" (UID: \"b16ae262-483b-4ed8-9ab8-15dad5215dbc\") " pod="openstack/keystone-bootstrap-wvnxz" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.525588 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db6faea9-e5b3-4621-b198-1d323a1825d4-config\") pod \"dnsmasq-dns-76bb7864cf-xh4zj\" (UID: \"db6faea9-e5b3-4621-b198-1d323a1825d4\") " pod="openstack/dnsmasq-dns-76bb7864cf-xh4zj" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.526599 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db6faea9-e5b3-4621-b198-1d323a1825d4-config\") pod \"dnsmasq-dns-76bb7864cf-xh4zj\" (UID: \"db6faea9-e5b3-4621-b198-1d323a1825d4\") " pod="openstack/dnsmasq-dns-76bb7864cf-xh4zj" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.526602 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db6faea9-e5b3-4621-b198-1d323a1825d4-ovsdbserver-nb\") pod \"dnsmasq-dns-76bb7864cf-xh4zj\" (UID: \"db6faea9-e5b3-4621-b198-1d323a1825d4\") " pod="openstack/dnsmasq-dns-76bb7864cf-xh4zj" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.527304 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db6faea9-e5b3-4621-b198-1d323a1825d4-ovsdbserver-sb\") pod \"dnsmasq-dns-76bb7864cf-xh4zj\" (UID: \"db6faea9-e5b3-4621-b198-1d323a1825d4\") " pod="openstack/dnsmasq-dns-76bb7864cf-xh4zj" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.527932 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db6faea9-e5b3-4621-b198-1d323a1825d4-dns-svc\") pod \"dnsmasq-dns-76bb7864cf-xh4zj\" (UID: \"db6faea9-e5b3-4621-b198-1d323a1825d4\") " pod="openstack/dnsmasq-dns-76bb7864cf-xh4zj" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.529689 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/db6faea9-e5b3-4621-b198-1d323a1825d4-dns-swift-storage-0\") pod \"dnsmasq-dns-76bb7864cf-xh4zj\" (UID: \"db6faea9-e5b3-4621-b198-1d323a1825d4\") " pod="openstack/dnsmasq-dns-76bb7864cf-xh4zj" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.533321 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b16ae262-483b-4ed8-9ab8-15dad5215dbc-fernet-keys\") pod \"keystone-bootstrap-wvnxz\" (UID: \"b16ae262-483b-4ed8-9ab8-15dad5215dbc\") " pod="openstack/keystone-bootstrap-wvnxz" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.547472 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b16ae262-483b-4ed8-9ab8-15dad5215dbc-combined-ca-bundle\") pod \"keystone-bootstrap-wvnxz\" (UID: \"b16ae262-483b-4ed8-9ab8-15dad5215dbc\") " pod="openstack/keystone-bootstrap-wvnxz" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.548184 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b16ae262-483b-4ed8-9ab8-15dad5215dbc-credential-keys\") pod \"keystone-bootstrap-wvnxz\" (UID: \"b16ae262-483b-4ed8-9ab8-15dad5215dbc\") " pod="openstack/keystone-bootstrap-wvnxz" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 
Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.552104 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b16ae262-483b-4ed8-9ab8-15dad5215dbc-scripts\") pod \"keystone-bootstrap-wvnxz\" (UID: \"b16ae262-483b-4ed8-9ab8-15dad5215dbc\") " pod="openstack/keystone-bootstrap-wvnxz"
Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.565833 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6rqh\" (UniqueName: \"kubernetes.io/projected/b16ae262-483b-4ed8-9ab8-15dad5215dbc-kube-api-access-f6rqh\") pod \"keystone-bootstrap-wvnxz\" (UID: \"b16ae262-483b-4ed8-9ab8-15dad5215dbc\") " pod="openstack/keystone-bootstrap-wvnxz"
Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.581057 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-562d2\" (UniqueName: \"kubernetes.io/projected/db6faea9-e5b3-4621-b198-1d323a1825d4-kube-api-access-562d2\") pod \"dnsmasq-dns-76bb7864cf-xh4zj\" (UID: \"db6faea9-e5b3-4621-b198-1d323a1825d4\") " pod="openstack/dnsmasq-dns-76bb7864cf-xh4zj"
Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.629672 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.644445 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.644622 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.651395 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.651685 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.661384 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wvnxz"
Need to start a new one" pod="openstack/keystone-bootstrap-wvnxz" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.734384 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8310da02-e0c7-4dab-bc26-7139ca576c2c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8310da02-e0c7-4dab-bc26-7139ca576c2c\") " pod="openstack/ceilometer-0" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.734942 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8310da02-e0c7-4dab-bc26-7139ca576c2c-config-data\") pod \"ceilometer-0\" (UID: \"8310da02-e0c7-4dab-bc26-7139ca576c2c\") " pod="openstack/ceilometer-0" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.734994 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8310da02-e0c7-4dab-bc26-7139ca576c2c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8310da02-e0c7-4dab-bc26-7139ca576c2c\") " pod="openstack/ceilometer-0" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.735023 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8310da02-e0c7-4dab-bc26-7139ca576c2c-log-httpd\") pod \"ceilometer-0\" (UID: \"8310da02-e0c7-4dab-bc26-7139ca576c2c\") " pod="openstack/ceilometer-0" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.735048 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g8xr\" (UniqueName: \"kubernetes.io/projected/8310da02-e0c7-4dab-bc26-7139ca576c2c-kube-api-access-5g8xr\") pod \"ceilometer-0\" (UID: \"8310da02-e0c7-4dab-bc26-7139ca576c2c\") " pod="openstack/ceilometer-0" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.735099 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8310da02-e0c7-4dab-bc26-7139ca576c2c-scripts\") pod \"ceilometer-0\" (UID: \"8310da02-e0c7-4dab-bc26-7139ca576c2c\") " pod="openstack/ceilometer-0" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.735115 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8310da02-e0c7-4dab-bc26-7139ca576c2c-run-httpd\") pod \"ceilometer-0\" (UID: \"8310da02-e0c7-4dab-bc26-7139ca576c2c\") " pod="openstack/ceilometer-0" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.741807 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-9p5hs"] Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.743101 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-9p5hs" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.749019 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76bb7864cf-xh4zj" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.749894 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-fpfpf" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.750169 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.756650 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.779576 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-9p5hs"] Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.801257 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-8wk8q"] Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.802668 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-8wk8q" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.814533 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.815004 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-qfgjc" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.815179 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.837492 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-l7pvv"] Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.838797 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8310da02-e0c7-4dab-bc26-7139ca576c2c-config-data\") pod \"ceilometer-0\" (UID: \"8310da02-e0c7-4dab-bc26-7139ca576c2c\") " pod="openstack/ceilometer-0" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.838837 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d0d856a-c2e4-4ccf-adfb-70391210f8d9-config-data\") pod \"cinder-db-sync-8wk8q\" (UID: \"8d0d856a-c2e4-4ccf-adfb-70391210f8d9\") " pod="openstack/cinder-db-sync-8wk8q" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.838871 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/97fa13e8-c7b4-4612-912c-2976861bce81-config\") pod \"neutron-db-sync-9p5hs\" (UID: \"97fa13e8-c7b4-4612-912c-2976861bce81\") " pod="openstack/neutron-db-sync-9p5hs" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.838955 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8310da02-e0c7-4dab-bc26-7139ca576c2c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8310da02-e0c7-4dab-bc26-7139ca576c2c\") " pod="openstack/ceilometer-0" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.838976 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8d0d856a-c2e4-4ccf-adfb-70391210f8d9-etc-machine-id\") pod \"cinder-db-sync-8wk8q\" (UID: \"8d0d856a-c2e4-4ccf-adfb-70391210f8d9\") " 
pod="openstack/cinder-db-sync-8wk8q" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.839009 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8310da02-e0c7-4dab-bc26-7139ca576c2c-log-httpd\") pod \"ceilometer-0\" (UID: \"8310da02-e0c7-4dab-bc26-7139ca576c2c\") " pod="openstack/ceilometer-0" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.839030 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-954wx\" (UniqueName: \"kubernetes.io/projected/97fa13e8-c7b4-4612-912c-2976861bce81-kube-api-access-954wx\") pod \"neutron-db-sync-9p5hs\" (UID: \"97fa13e8-c7b4-4612-912c-2976861bce81\") " pod="openstack/neutron-db-sync-9p5hs" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.839053 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5g8xr\" (UniqueName: \"kubernetes.io/projected/8310da02-e0c7-4dab-bc26-7139ca576c2c-kube-api-access-5g8xr\") pod \"ceilometer-0\" (UID: \"8310da02-e0c7-4dab-bc26-7139ca576c2c\") " pod="openstack/ceilometer-0" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.839074 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97fa13e8-c7b4-4612-912c-2976861bce81-combined-ca-bundle\") pod \"neutron-db-sync-9p5hs\" (UID: \"97fa13e8-c7b4-4612-912c-2976861bce81\") " pod="openstack/neutron-db-sync-9p5hs" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.839112 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8d0d856a-c2e4-4ccf-adfb-70391210f8d9-db-sync-config-data\") pod \"cinder-db-sync-8wk8q\" (UID: \"8d0d856a-c2e4-4ccf-adfb-70391210f8d9\") " pod="openstack/cinder-db-sync-8wk8q" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.839144 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5z6v\" (UniqueName: \"kubernetes.io/projected/8d0d856a-c2e4-4ccf-adfb-70391210f8d9-kube-api-access-m5z6v\") pod \"cinder-db-sync-8wk8q\" (UID: \"8d0d856a-c2e4-4ccf-adfb-70391210f8d9\") " pod="openstack/cinder-db-sync-8wk8q" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.839164 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d0d856a-c2e4-4ccf-adfb-70391210f8d9-combined-ca-bundle\") pod \"cinder-db-sync-8wk8q\" (UID: \"8d0d856a-c2e4-4ccf-adfb-70391210f8d9\") " pod="openstack/cinder-db-sync-8wk8q" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.839183 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8310da02-e0c7-4dab-bc26-7139ca576c2c-scripts\") pod \"ceilometer-0\" (UID: \"8310da02-e0c7-4dab-bc26-7139ca576c2c\") " pod="openstack/ceilometer-0" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.839200 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8310da02-e0c7-4dab-bc26-7139ca576c2c-run-httpd\") pod \"ceilometer-0\" (UID: \"8310da02-e0c7-4dab-bc26-7139ca576c2c\") " pod="openstack/ceilometer-0" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.839242 4946 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d0d856a-c2e4-4ccf-adfb-70391210f8d9-scripts\") pod \"cinder-db-sync-8wk8q\" (UID: \"8d0d856a-c2e4-4ccf-adfb-70391210f8d9\") " pod="openstack/cinder-db-sync-8wk8q" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.839267 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8310da02-e0c7-4dab-bc26-7139ca576c2c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8310da02-e0c7-4dab-bc26-7139ca576c2c\") " pod="openstack/ceilometer-0" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.839784 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-l7pvv" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.840545 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8310da02-e0c7-4dab-bc26-7139ca576c2c-run-httpd\") pod \"ceilometer-0\" (UID: \"8310da02-e0c7-4dab-bc26-7139ca576c2c\") " pod="openstack/ceilometer-0" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.848988 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.849212 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-pcsnk" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.849674 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8310da02-e0c7-4dab-bc26-7139ca576c2c-log-httpd\") pod \"ceilometer-0\" (UID: \"8310da02-e0c7-4dab-bc26-7139ca576c2c\") " pod="openstack/ceilometer-0" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.852034 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-l7pvv"] Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.852284 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8310da02-e0c7-4dab-bc26-7139ca576c2c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8310da02-e0c7-4dab-bc26-7139ca576c2c\") " pod="openstack/ceilometer-0" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.858169 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8310da02-e0c7-4dab-bc26-7139ca576c2c-config-data\") pod \"ceilometer-0\" (UID: \"8310da02-e0c7-4dab-bc26-7139ca576c2c\") " pod="openstack/ceilometer-0" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.858625 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8310da02-e0c7-4dab-bc26-7139ca576c2c-scripts\") pod \"ceilometer-0\" (UID: \"8310da02-e0c7-4dab-bc26-7139ca576c2c\") " pod="openstack/ceilometer-0" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.864786 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8310da02-e0c7-4dab-bc26-7139ca576c2c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8310da02-e0c7-4dab-bc26-7139ca576c2c\") " pod="openstack/ceilometer-0" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.866095 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-8wk8q"] Nov 28 07:15:28 crc 
Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.918544 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-cbhjz"]
Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.920203 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-cbhjz"
Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.929792 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-4dxs2"
Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.929984 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.929997 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.941519 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d0d856a-c2e4-4ccf-adfb-70391210f8d9-config-data\") pod \"cinder-db-sync-8wk8q\" (UID: \"8d0d856a-c2e4-4ccf-adfb-70391210f8d9\") " pod="openstack/cinder-db-sync-8wk8q"
Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.941582 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/97fa13e8-c7b4-4612-912c-2976861bce81-config\") pod \"neutron-db-sync-9p5hs\" (UID: \"97fa13e8-c7b4-4612-912c-2976861bce81\") " pod="openstack/neutron-db-sync-9p5hs"
Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.941655 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8d0d856a-c2e4-4ccf-adfb-70391210f8d9-etc-machine-id\") pod \"cinder-db-sync-8wk8q\" (UID: \"8d0d856a-c2e4-4ccf-adfb-70391210f8d9\") " pod="openstack/cinder-db-sync-8wk8q"
Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.941717 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-954wx\" (UniqueName: \"kubernetes.io/projected/97fa13e8-c7b4-4612-912c-2976861bce81-kube-api-access-954wx\") pod \"neutron-db-sync-9p5hs\" (UID: \"97fa13e8-c7b4-4612-912c-2976861bce81\") " pod="openstack/neutron-db-sync-9p5hs"
Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.941749 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97fa13e8-c7b4-4612-912c-2976861bce81-combined-ca-bundle\") pod \"neutron-db-sync-9p5hs\" (UID: \"97fa13e8-c7b4-4612-912c-2976861bce81\") " pod="openstack/neutron-db-sync-9p5hs"
Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.941809 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8d0d856a-c2e4-4ccf-adfb-70391210f8d9-db-sync-config-data\") pod \"cinder-db-sync-8wk8q\" (UID: \"8d0d856a-c2e4-4ccf-adfb-70391210f8d9\") " pod="openstack/cinder-db-sync-8wk8q"
Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.941854 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5z6v\" (UniqueName: \"kubernetes.io/projected/8d0d856a-c2e4-4ccf-adfb-70391210f8d9-kube-api-access-m5z6v\") pod \"cinder-db-sync-8wk8q\" (UID: \"8d0d856a-c2e4-4ccf-adfb-70391210f8d9\") " pod="openstack/cinder-db-sync-8wk8q"
started for volume \"kube-api-access-m5z6v\" (UniqueName: \"kubernetes.io/projected/8d0d856a-c2e4-4ccf-adfb-70391210f8d9-kube-api-access-m5z6v\") pod \"cinder-db-sync-8wk8q\" (UID: \"8d0d856a-c2e4-4ccf-adfb-70391210f8d9\") " pod="openstack/cinder-db-sync-8wk8q" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.941882 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d0d856a-c2e4-4ccf-adfb-70391210f8d9-combined-ca-bundle\") pod \"cinder-db-sync-8wk8q\" (UID: \"8d0d856a-c2e4-4ccf-adfb-70391210f8d9\") " pod="openstack/cinder-db-sync-8wk8q" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.941914 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d0d856a-c2e4-4ccf-adfb-70391210f8d9-scripts\") pod \"cinder-db-sync-8wk8q\" (UID: \"8d0d856a-c2e4-4ccf-adfb-70391210f8d9\") " pod="openstack/cinder-db-sync-8wk8q" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.943065 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8d0d856a-c2e4-4ccf-adfb-70391210f8d9-etc-machine-id\") pod \"cinder-db-sync-8wk8q\" (UID: \"8d0d856a-c2e4-4ccf-adfb-70391210f8d9\") " pod="openstack/cinder-db-sync-8wk8q" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.950939 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97fa13e8-c7b4-4612-912c-2976861bce81-combined-ca-bundle\") pod \"neutron-db-sync-9p5hs\" (UID: \"97fa13e8-c7b4-4612-912c-2976861bce81\") " pod="openstack/neutron-db-sync-9p5hs" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.951859 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d0d856a-c2e4-4ccf-adfb-70391210f8d9-scripts\") pod \"cinder-db-sync-8wk8q\" (UID: \"8d0d856a-c2e4-4ccf-adfb-70391210f8d9\") " pod="openstack/cinder-db-sync-8wk8q" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.963217 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d0d856a-c2e4-4ccf-adfb-70391210f8d9-config-data\") pod \"cinder-db-sync-8wk8q\" (UID: \"8d0d856a-c2e4-4ccf-adfb-70391210f8d9\") " pod="openstack/cinder-db-sync-8wk8q" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.966212 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d0d856a-c2e4-4ccf-adfb-70391210f8d9-combined-ca-bundle\") pod \"cinder-db-sync-8wk8q\" (UID: \"8d0d856a-c2e4-4ccf-adfb-70391210f8d9\") " pod="openstack/cinder-db-sync-8wk8q" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.966803 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-cbhjz"] Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.973219 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/97fa13e8-c7b4-4612-912c-2976861bce81-config\") pod \"neutron-db-sync-9p5hs\" (UID: \"97fa13e8-c7b4-4612-912c-2976861bce81\") " pod="openstack/neutron-db-sync-9p5hs" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.973783 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.977744 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8d0d856a-c2e4-4ccf-adfb-70391210f8d9-db-sync-config-data\") pod \"cinder-db-sync-8wk8q\" (UID: \"8d0d856a-c2e4-4ccf-adfb-70391210f8d9\") " pod="openstack/cinder-db-sync-8wk8q" Nov 28 07:15:28 crc kubenswrapper[4946]: I1128 07:15:28.978507 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5z6v\" (UniqueName: \"kubernetes.io/projected/8d0d856a-c2e4-4ccf-adfb-70391210f8d9-kube-api-access-m5z6v\") pod \"cinder-db-sync-8wk8q\" (UID: \"8d0d856a-c2e4-4ccf-adfb-70391210f8d9\") " pod="openstack/cinder-db-sync-8wk8q" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.009206 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-954wx\" (UniqueName: \"kubernetes.io/projected/97fa13e8-c7b4-4612-912c-2976861bce81-kube-api-access-954wx\") pod \"neutron-db-sync-9p5hs\" (UID: \"97fa13e8-c7b4-4612-912c-2976861bce81\") " pod="openstack/neutron-db-sync-9p5hs" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.035725 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76bb7864cf-xh4zj"] Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.049164 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cbfc817-e82c-436d-bc3a-6a3a94ee82e8-combined-ca-bundle\") pod \"barbican-db-sync-l7pvv\" (UID: \"0cbfc817-e82c-436d-bc3a-6a3a94ee82e8\") " pod="openstack/barbican-db-sync-l7pvv" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.049245 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0cbfc817-e82c-436d-bc3a-6a3a94ee82e8-db-sync-config-data\") pod \"barbican-db-sync-l7pvv\" (UID: \"0cbfc817-e82c-436d-bc3a-6a3a94ee82e8\") " pod="openstack/barbican-db-sync-l7pvv" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.049272 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zzwr\" (UniqueName: \"kubernetes.io/projected/a4229e67-36fc-40ba-8d90-af5b0d95743f-kube-api-access-4zzwr\") pod \"placement-db-sync-cbhjz\" (UID: \"a4229e67-36fc-40ba-8d90-af5b0d95743f\") " pod="openstack/placement-db-sync-cbhjz" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.049343 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4229e67-36fc-40ba-8d90-af5b0d95743f-config-data\") pod \"placement-db-sync-cbhjz\" (UID: \"a4229e67-36fc-40ba-8d90-af5b0d95743f\") " pod="openstack/placement-db-sync-cbhjz" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.049375 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4229e67-36fc-40ba-8d90-af5b0d95743f-combined-ca-bundle\") pod \"placement-db-sync-cbhjz\" (UID: \"a4229e67-36fc-40ba-8d90-af5b0d95743f\") " pod="openstack/placement-db-sync-cbhjz" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.055828 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hppf\" 
(UniqueName: \"kubernetes.io/projected/0cbfc817-e82c-436d-bc3a-6a3a94ee82e8-kube-api-access-8hppf\") pod \"barbican-db-sync-l7pvv\" (UID: \"0cbfc817-e82c-436d-bc3a-6a3a94ee82e8\") " pod="openstack/barbican-db-sync-l7pvv" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.055891 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4229e67-36fc-40ba-8d90-af5b0d95743f-scripts\") pod \"placement-db-sync-cbhjz\" (UID: \"a4229e67-36fc-40ba-8d90-af5b0d95743f\") " pod="openstack/placement-db-sync-cbhjz" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.055918 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4229e67-36fc-40ba-8d90-af5b0d95743f-logs\") pod \"placement-db-sync-cbhjz\" (UID: \"a4229e67-36fc-40ba-8d90-af5b0d95743f\") " pod="openstack/placement-db-sync-cbhjz" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.071872 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f5d458b55-225m9"] Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.073545 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f5d458b55-225m9" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.080544 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f5d458b55-225m9"] Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.159157 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cbfc817-e82c-436d-bc3a-6a3a94ee82e8-combined-ca-bundle\") pod \"barbican-db-sync-l7pvv\" (UID: \"0cbfc817-e82c-436d-bc3a-6a3a94ee82e8\") " pod="openstack/barbican-db-sync-l7pvv" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.159235 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0cbfc817-e82c-436d-bc3a-6a3a94ee82e8-db-sync-config-data\") pod \"barbican-db-sync-l7pvv\" (UID: \"0cbfc817-e82c-436d-bc3a-6a3a94ee82e8\") " pod="openstack/barbican-db-sync-l7pvv" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.159253 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zzwr\" (UniqueName: \"kubernetes.io/projected/a4229e67-36fc-40ba-8d90-af5b0d95743f-kube-api-access-4zzwr\") pod \"placement-db-sync-cbhjz\" (UID: \"a4229e67-36fc-40ba-8d90-af5b0d95743f\") " pod="openstack/placement-db-sync-cbhjz" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.159310 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4229e67-36fc-40ba-8d90-af5b0d95743f-config-data\") pod \"placement-db-sync-cbhjz\" (UID: \"a4229e67-36fc-40ba-8d90-af5b0d95743f\") " pod="openstack/placement-db-sync-cbhjz" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.159346 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4229e67-36fc-40ba-8d90-af5b0d95743f-combined-ca-bundle\") pod \"placement-db-sync-cbhjz\" (UID: \"a4229e67-36fc-40ba-8d90-af5b0d95743f\") " pod="openstack/placement-db-sync-cbhjz" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.159387 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-8hppf\" (UniqueName: \"kubernetes.io/projected/0cbfc817-e82c-436d-bc3a-6a3a94ee82e8-kube-api-access-8hppf\") pod \"barbican-db-sync-l7pvv\" (UID: \"0cbfc817-e82c-436d-bc3a-6a3a94ee82e8\") " pod="openstack/barbican-db-sync-l7pvv" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.159432 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4229e67-36fc-40ba-8d90-af5b0d95743f-scripts\") pod \"placement-db-sync-cbhjz\" (UID: \"a4229e67-36fc-40ba-8d90-af5b0d95743f\") " pod="openstack/placement-db-sync-cbhjz" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.159470 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4229e67-36fc-40ba-8d90-af5b0d95743f-logs\") pod \"placement-db-sync-cbhjz\" (UID: \"a4229e67-36fc-40ba-8d90-af5b0d95743f\") " pod="openstack/placement-db-sync-cbhjz" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.164969 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4229e67-36fc-40ba-8d90-af5b0d95743f-logs\") pod \"placement-db-sync-cbhjz\" (UID: \"a4229e67-36fc-40ba-8d90-af5b0d95743f\") " pod="openstack/placement-db-sync-cbhjz" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.165507 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4229e67-36fc-40ba-8d90-af5b0d95743f-scripts\") pod \"placement-db-sync-cbhjz\" (UID: \"a4229e67-36fc-40ba-8d90-af5b0d95743f\") " pod="openstack/placement-db-sync-cbhjz" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.166400 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-9p5hs" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.167881 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4229e67-36fc-40ba-8d90-af5b0d95743f-combined-ca-bundle\") pod \"placement-db-sync-cbhjz\" (UID: \"a4229e67-36fc-40ba-8d90-af5b0d95743f\") " pod="openstack/placement-db-sync-cbhjz" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.168279 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0cbfc817-e82c-436d-bc3a-6a3a94ee82e8-db-sync-config-data\") pod \"barbican-db-sync-l7pvv\" (UID: \"0cbfc817-e82c-436d-bc3a-6a3a94ee82e8\") " pod="openstack/barbican-db-sync-l7pvv" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.195770 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zzwr\" (UniqueName: \"kubernetes.io/projected/a4229e67-36fc-40ba-8d90-af5b0d95743f-kube-api-access-4zzwr\") pod \"placement-db-sync-cbhjz\" (UID: \"a4229e67-36fc-40ba-8d90-af5b0d95743f\") " pod="openstack/placement-db-sync-cbhjz" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.198643 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cbfc817-e82c-436d-bc3a-6a3a94ee82e8-combined-ca-bundle\") pod \"barbican-db-sync-l7pvv\" (UID: \"0cbfc817-e82c-436d-bc3a-6a3a94ee82e8\") " pod="openstack/barbican-db-sync-l7pvv" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.209026 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4229e67-36fc-40ba-8d90-af5b0d95743f-config-data\") pod \"placement-db-sync-cbhjz\" (UID: \"a4229e67-36fc-40ba-8d90-af5b0d95743f\") " pod="openstack/placement-db-sync-cbhjz" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.209164 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hppf\" (UniqueName: \"kubernetes.io/projected/0cbfc817-e82c-436d-bc3a-6a3a94ee82e8-kube-api-access-8hppf\") pod \"barbican-db-sync-l7pvv\" (UID: \"0cbfc817-e82c-436d-bc3a-6a3a94ee82e8\") " pod="openstack/barbican-db-sync-l7pvv" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.210541 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-8wk8q" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.235019 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-l7pvv" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.251325 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wvnxz"] Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.261397 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/be5ff997-7c05-4d71-8a40-1cc01a423bc3-dns-swift-storage-0\") pod \"dnsmasq-dns-5f5d458b55-225m9\" (UID: \"be5ff997-7c05-4d71-8a40-1cc01a423bc3\") " pod="openstack/dnsmasq-dns-5f5d458b55-225m9" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.261729 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be5ff997-7c05-4d71-8a40-1cc01a423bc3-dns-svc\") pod \"dnsmasq-dns-5f5d458b55-225m9\" (UID: \"be5ff997-7c05-4d71-8a40-1cc01a423bc3\") " pod="openstack/dnsmasq-dns-5f5d458b55-225m9" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.261888 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7br7z\" (UniqueName: \"kubernetes.io/projected/be5ff997-7c05-4d71-8a40-1cc01a423bc3-kube-api-access-7br7z\") pod \"dnsmasq-dns-5f5d458b55-225m9\" (UID: \"be5ff997-7c05-4d71-8a40-1cc01a423bc3\") " pod="openstack/dnsmasq-dns-5f5d458b55-225m9" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.262002 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be5ff997-7c05-4d71-8a40-1cc01a423bc3-ovsdbserver-nb\") pod \"dnsmasq-dns-5f5d458b55-225m9\" (UID: \"be5ff997-7c05-4d71-8a40-1cc01a423bc3\") " pod="openstack/dnsmasq-dns-5f5d458b55-225m9" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.262100 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/be5ff997-7c05-4d71-8a40-1cc01a423bc3-ovsdbserver-sb\") pod \"dnsmasq-dns-5f5d458b55-225m9\" (UID: \"be5ff997-7c05-4d71-8a40-1cc01a423bc3\") " pod="openstack/dnsmasq-dns-5f5d458b55-225m9" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.262248 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be5ff997-7c05-4d71-8a40-1cc01a423bc3-config\") pod \"dnsmasq-dns-5f5d458b55-225m9\" (UID: \"be5ff997-7c05-4d71-8a40-1cc01a423bc3\") " pod="openstack/dnsmasq-dns-5f5d458b55-225m9" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.270746 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-cbhjz" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.363952 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/be5ff997-7c05-4d71-8a40-1cc01a423bc3-dns-swift-storage-0\") pod \"dnsmasq-dns-5f5d458b55-225m9\" (UID: \"be5ff997-7c05-4d71-8a40-1cc01a423bc3\") " pod="openstack/dnsmasq-dns-5f5d458b55-225m9" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.364587 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be5ff997-7c05-4d71-8a40-1cc01a423bc3-dns-svc\") pod \"dnsmasq-dns-5f5d458b55-225m9\" (UID: \"be5ff997-7c05-4d71-8a40-1cc01a423bc3\") " pod="openstack/dnsmasq-dns-5f5d458b55-225m9" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.364752 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7br7z\" (UniqueName: \"kubernetes.io/projected/be5ff997-7c05-4d71-8a40-1cc01a423bc3-kube-api-access-7br7z\") pod \"dnsmasq-dns-5f5d458b55-225m9\" (UID: \"be5ff997-7c05-4d71-8a40-1cc01a423bc3\") " pod="openstack/dnsmasq-dns-5f5d458b55-225m9" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.364819 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be5ff997-7c05-4d71-8a40-1cc01a423bc3-ovsdbserver-nb\") pod \"dnsmasq-dns-5f5d458b55-225m9\" (UID: \"be5ff997-7c05-4d71-8a40-1cc01a423bc3\") " pod="openstack/dnsmasq-dns-5f5d458b55-225m9" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.364869 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/be5ff997-7c05-4d71-8a40-1cc01a423bc3-ovsdbserver-sb\") pod \"dnsmasq-dns-5f5d458b55-225m9\" (UID: \"be5ff997-7c05-4d71-8a40-1cc01a423bc3\") " pod="openstack/dnsmasq-dns-5f5d458b55-225m9" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.365219 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be5ff997-7c05-4d71-8a40-1cc01a423bc3-config\") pod \"dnsmasq-dns-5f5d458b55-225m9\" (UID: \"be5ff997-7c05-4d71-8a40-1cc01a423bc3\") " pod="openstack/dnsmasq-dns-5f5d458b55-225m9" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.365791 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/be5ff997-7c05-4d71-8a40-1cc01a423bc3-dns-swift-storage-0\") pod \"dnsmasq-dns-5f5d458b55-225m9\" (UID: \"be5ff997-7c05-4d71-8a40-1cc01a423bc3\") " pod="openstack/dnsmasq-dns-5f5d458b55-225m9" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.367008 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be5ff997-7c05-4d71-8a40-1cc01a423bc3-dns-svc\") pod \"dnsmasq-dns-5f5d458b55-225m9\" (UID: \"be5ff997-7c05-4d71-8a40-1cc01a423bc3\") " pod="openstack/dnsmasq-dns-5f5d458b55-225m9" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.367297 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be5ff997-7c05-4d71-8a40-1cc01a423bc3-ovsdbserver-nb\") pod \"dnsmasq-dns-5f5d458b55-225m9\" (UID: \"be5ff997-7c05-4d71-8a40-1cc01a423bc3\") " pod="openstack/dnsmasq-dns-5f5d458b55-225m9" Nov 28 07:15:29 crc 
Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.368421 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be5ff997-7c05-4d71-8a40-1cc01a423bc3-config\") pod \"dnsmasq-dns-5f5d458b55-225m9\" (UID: \"be5ff997-7c05-4d71-8a40-1cc01a423bc3\") " pod="openstack/dnsmasq-dns-5f5d458b55-225m9"
Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.368813 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/be5ff997-7c05-4d71-8a40-1cc01a423bc3-ovsdbserver-sb\") pod \"dnsmasq-dns-5f5d458b55-225m9\" (UID: \"be5ff997-7c05-4d71-8a40-1cc01a423bc3\") " pod="openstack/dnsmasq-dns-5f5d458b55-225m9"
Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.396198 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7br7z\" (UniqueName: \"kubernetes.io/projected/be5ff997-7c05-4d71-8a40-1cc01a423bc3-kube-api-access-7br7z\") pod \"dnsmasq-dns-5f5d458b55-225m9\" (UID: \"be5ff997-7c05-4d71-8a40-1cc01a423bc3\") " pod="openstack/dnsmasq-dns-5f5d458b55-225m9"
Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.435093 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f5d458b55-225m9"
Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.467945 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.469563 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.473406 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-4ftvr"
Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.473715 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.477482 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.477925 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.560333 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.572847 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bbd63f33-9cb3-47ec-ab44-04be204321ff-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"bbd63f33-9cb3-47ec-ab44-04be204321ff\") " pod="openstack/glance-default-external-api-0"
Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.572960 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"bbd63f33-9cb3-47ec-ab44-04be204321ff\") " pod="openstack/glance-default-external-api-0"
Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.573017 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbd63f33-9cb3-47ec-ab44-04be204321ff-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bbd63f33-9cb3-47ec-ab44-04be204321ff\") " pod="openstack/glance-default-external-api-0"
\"kubernetes.io/secret/bbd63f33-9cb3-47ec-ab44-04be204321ff-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bbd63f33-9cb3-47ec-ab44-04be204321ff\") " pod="openstack/glance-default-external-api-0" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.573056 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbd63f33-9cb3-47ec-ab44-04be204321ff-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"bbd63f33-9cb3-47ec-ab44-04be204321ff\") " pod="openstack/glance-default-external-api-0" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.573079 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6tc2\" (UniqueName: \"kubernetes.io/projected/bbd63f33-9cb3-47ec-ab44-04be204321ff-kube-api-access-d6tc2\") pod \"glance-default-external-api-0\" (UID: \"bbd63f33-9cb3-47ec-ab44-04be204321ff\") " pod="openstack/glance-default-external-api-0" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.573107 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbd63f33-9cb3-47ec-ab44-04be204321ff-config-data\") pod \"glance-default-external-api-0\" (UID: \"bbd63f33-9cb3-47ec-ab44-04be204321ff\") " pod="openstack/glance-default-external-api-0" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.573169 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbd63f33-9cb3-47ec-ab44-04be204321ff-logs\") pod \"glance-default-external-api-0\" (UID: \"bbd63f33-9cb3-47ec-ab44-04be204321ff\") " pod="openstack/glance-default-external-api-0" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.573216 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbd63f33-9cb3-47ec-ab44-04be204321ff-scripts\") pod \"glance-default-external-api-0\" (UID: \"bbd63f33-9cb3-47ec-ab44-04be204321ff\") " pod="openstack/glance-default-external-api-0" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.610064 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.611778 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.639762 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76bb7864cf-xh4zj"] Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.651346 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.651615 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.674972 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbd63f33-9cb3-47ec-ab44-04be204321ff-scripts\") pod \"glance-default-external-api-0\" (UID: \"bbd63f33-9cb3-47ec-ab44-04be204321ff\") " pod="openstack/glance-default-external-api-0" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.675076 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bbd63f33-9cb3-47ec-ab44-04be204321ff-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"bbd63f33-9cb3-47ec-ab44-04be204321ff\") " pod="openstack/glance-default-external-api-0" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.675099 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"bbd63f33-9cb3-47ec-ab44-04be204321ff\") " pod="openstack/glance-default-external-api-0" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.675139 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbd63f33-9cb3-47ec-ab44-04be204321ff-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bbd63f33-9cb3-47ec-ab44-04be204321ff\") " pod="openstack/glance-default-external-api-0" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.675171 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbd63f33-9cb3-47ec-ab44-04be204321ff-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"bbd63f33-9cb3-47ec-ab44-04be204321ff\") " pod="openstack/glance-default-external-api-0" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.675197 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6tc2\" (UniqueName: \"kubernetes.io/projected/bbd63f33-9cb3-47ec-ab44-04be204321ff-kube-api-access-d6tc2\") pod \"glance-default-external-api-0\" (UID: \"bbd63f33-9cb3-47ec-ab44-04be204321ff\") " pod="openstack/glance-default-external-api-0" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.675219 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbd63f33-9cb3-47ec-ab44-04be204321ff-config-data\") pod \"glance-default-external-api-0\" (UID: \"bbd63f33-9cb3-47ec-ab44-04be204321ff\") " pod="openstack/glance-default-external-api-0" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.675261 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbd63f33-9cb3-47ec-ab44-04be204321ff-logs\") pod 
\"glance-default-external-api-0\" (UID: \"bbd63f33-9cb3-47ec-ab44-04be204321ff\") " pod="openstack/glance-default-external-api-0" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.675843 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bbd63f33-9cb3-47ec-ab44-04be204321ff-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"bbd63f33-9cb3-47ec-ab44-04be204321ff\") " pod="openstack/glance-default-external-api-0" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.676777 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbd63f33-9cb3-47ec-ab44-04be204321ff-logs\") pod \"glance-default-external-api-0\" (UID: \"bbd63f33-9cb3-47ec-ab44-04be204321ff\") " pod="openstack/glance-default-external-api-0" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.679300 4946 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"bbd63f33-9cb3-47ec-ab44-04be204321ff\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.680242 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbd63f33-9cb3-47ec-ab44-04be204321ff-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"bbd63f33-9cb3-47ec-ab44-04be204321ff\") " pod="openstack/glance-default-external-api-0" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.686769 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.688017 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbd63f33-9cb3-47ec-ab44-04be204321ff-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bbd63f33-9cb3-47ec-ab44-04be204321ff\") " pod="openstack/glance-default-external-api-0" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.691084 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbd63f33-9cb3-47ec-ab44-04be204321ff-scripts\") pod \"glance-default-external-api-0\" (UID: \"bbd63f33-9cb3-47ec-ab44-04be204321ff\") " pod="openstack/glance-default-external-api-0" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.705016 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbd63f33-9cb3-47ec-ab44-04be204321ff-config-data\") pod \"glance-default-external-api-0\" (UID: \"bbd63f33-9cb3-47ec-ab44-04be204321ff\") " pod="openstack/glance-default-external-api-0" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.720054 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6tc2\" (UniqueName: \"kubernetes.io/projected/bbd63f33-9cb3-47ec-ab44-04be204321ff-kube-api-access-d6tc2\") pod \"glance-default-external-api-0\" (UID: \"bbd63f33-9cb3-47ec-ab44-04be204321ff\") " pod="openstack/glance-default-external-api-0" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.748572 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"bbd63f33-9cb3-47ec-ab44-04be204321ff\") " pod="openstack/glance-default-external-api-0" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.752919 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.777051 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/53c012db-de22-4e6c-9240-89b78163eff1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"53c012db-de22-4e6c-9240-89b78163eff1\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.777110 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53c012db-de22-4e6c-9240-89b78163eff1-logs\") pod \"glance-default-internal-api-0\" (UID: \"53c012db-de22-4e6c-9240-89b78163eff1\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.777173 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9bcg\" (UniqueName: \"kubernetes.io/projected/53c012db-de22-4e6c-9240-89b78163eff1-kube-api-access-l9bcg\") pod \"glance-default-internal-api-0\" (UID: \"53c012db-de22-4e6c-9240-89b78163eff1\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.777202 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53c012db-de22-4e6c-9240-89b78163eff1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"53c012db-de22-4e6c-9240-89b78163eff1\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.777235 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53c012db-de22-4e6c-9240-89b78163eff1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"53c012db-de22-4e6c-9240-89b78163eff1\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.777266 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"53c012db-de22-4e6c-9240-89b78163eff1\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.777283 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/53c012db-de22-4e6c-9240-89b78163eff1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"53c012db-de22-4e6c-9240-89b78163eff1\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.777305 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53c012db-de22-4e6c-9240-89b78163eff1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"53c012db-de22-4e6c-9240-89b78163eff1\") " 
pod="openstack/glance-default-internal-api-0" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.807726 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.838435 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-9p5hs"] Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.879248 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9bcg\" (UniqueName: \"kubernetes.io/projected/53c012db-de22-4e6c-9240-89b78163eff1-kube-api-access-l9bcg\") pod \"glance-default-internal-api-0\" (UID: \"53c012db-de22-4e6c-9240-89b78163eff1\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.879454 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53c012db-de22-4e6c-9240-89b78163eff1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"53c012db-de22-4e6c-9240-89b78163eff1\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.879508 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53c012db-de22-4e6c-9240-89b78163eff1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"53c012db-de22-4e6c-9240-89b78163eff1\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.879543 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"53c012db-de22-4e6c-9240-89b78163eff1\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.879560 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/53c012db-de22-4e6c-9240-89b78163eff1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"53c012db-de22-4e6c-9240-89b78163eff1\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.879585 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53c012db-de22-4e6c-9240-89b78163eff1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"53c012db-de22-4e6c-9240-89b78163eff1\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.879652 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/53c012db-de22-4e6c-9240-89b78163eff1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"53c012db-de22-4e6c-9240-89b78163eff1\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.879682 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53c012db-de22-4e6c-9240-89b78163eff1-logs\") pod \"glance-default-internal-api-0\" (UID: \"53c012db-de22-4e6c-9240-89b78163eff1\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.880112 4946 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53c012db-de22-4e6c-9240-89b78163eff1-logs\") pod \"glance-default-internal-api-0\" (UID: \"53c012db-de22-4e6c-9240-89b78163eff1\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.880347 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/53c012db-de22-4e6c-9240-89b78163eff1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"53c012db-de22-4e6c-9240-89b78163eff1\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.880454 4946 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"53c012db-de22-4e6c-9240-89b78163eff1\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.885485 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53c012db-de22-4e6c-9240-89b78163eff1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"53c012db-de22-4e6c-9240-89b78163eff1\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.900380 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53c012db-de22-4e6c-9240-89b78163eff1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"53c012db-de22-4e6c-9240-89b78163eff1\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.905985 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/53c012db-de22-4e6c-9240-89b78163eff1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"53c012db-de22-4e6c-9240-89b78163eff1\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.916292 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53c012db-de22-4e6c-9240-89b78163eff1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"53c012db-de22-4e6c-9240-89b78163eff1\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.919504 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9bcg\" (UniqueName: \"kubernetes.io/projected/53c012db-de22-4e6c-9240-89b78163eff1-kube-api-access-l9bcg\") pod \"glance-default-internal-api-0\" (UID: \"53c012db-de22-4e6c-9240-89b78163eff1\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.951730 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"53c012db-de22-4e6c-9240-89b78163eff1\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:15:29 crc kubenswrapper[4946]: I1128 07:15:29.986385 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-l7pvv"] Nov 28 07:15:30 crc kubenswrapper[4946]: I1128 
07:15:30.136682 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8310da02-e0c7-4dab-bc26-7139ca576c2c","Type":"ContainerStarted","Data":"c26e837a2742c742a2d43b1c36855f77af9ae27aa510959b01bc2b1b75beac70"} Nov 28 07:15:30 crc kubenswrapper[4946]: I1128 07:15:30.150566 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 07:15:30 crc kubenswrapper[4946]: I1128 07:15:30.159665 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-l7pvv" event={"ID":"0cbfc817-e82c-436d-bc3a-6a3a94ee82e8","Type":"ContainerStarted","Data":"e040daeaf40cd7bfe5dec09a9b17b151e13f1dc4fbe084f3e257ca5fa6fa03f2"} Nov 28 07:15:30 crc kubenswrapper[4946]: I1128 07:15:30.175546 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-cbhjz"] Nov 28 07:15:30 crc kubenswrapper[4946]: I1128 07:15:30.208057 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-8wk8q"] Nov 28 07:15:30 crc kubenswrapper[4946]: I1128 07:15:30.208819 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wvnxz" event={"ID":"b16ae262-483b-4ed8-9ab8-15dad5215dbc","Type":"ContainerStarted","Data":"d18f3a1fbfca471cddca95a6a9f466632a71cd94221c0b60e67c5785129513a6"} Nov 28 07:15:30 crc kubenswrapper[4946]: I1128 07:15:30.208895 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wvnxz" event={"ID":"b16ae262-483b-4ed8-9ab8-15dad5215dbc","Type":"ContainerStarted","Data":"ad955b0e79c969460e184e4f65ccf3e34e10221d589515ceab29b708e4831842"} Nov 28 07:15:30 crc kubenswrapper[4946]: I1128 07:15:30.242918 4946 generic.go:334] "Generic (PLEG): container finished" podID="db6faea9-e5b3-4621-b198-1d323a1825d4" containerID="71920607a74ed95ce6ac4a57ff6247d2ee44fbb9ce824096a60b3f1d757c7c2e" exitCode=0 Nov 28 07:15:30 crc kubenswrapper[4946]: I1128 07:15:30.243018 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76bb7864cf-xh4zj" event={"ID":"db6faea9-e5b3-4621-b198-1d323a1825d4","Type":"ContainerDied","Data":"71920607a74ed95ce6ac4a57ff6247d2ee44fbb9ce824096a60b3f1d757c7c2e"} Nov 28 07:15:30 crc kubenswrapper[4946]: I1128 07:15:30.243051 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76bb7864cf-xh4zj" event={"ID":"db6faea9-e5b3-4621-b198-1d323a1825d4","Type":"ContainerStarted","Data":"f248fb41095309c2b0db58bbabf527a0e771529df64011c2dc5d145a881f09ec"} Nov 28 07:15:30 crc kubenswrapper[4946]: W1128 07:15:30.266112 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d0d856a_c2e4_4ccf_adfb_70391210f8d9.slice/crio-cf8b41b8a4e8be9d4c4cddf896de608a53cba6eca033c6775fadf6d6900660c4 WatchSource:0}: Error finding container cf8b41b8a4e8be9d4c4cddf896de608a53cba6eca033c6775fadf6d6900660c4: Status 404 returned error can't find the container with id cf8b41b8a4e8be9d4c4cddf896de608a53cba6eca033c6775fadf6d6900660c4 Nov 28 07:15:30 crc kubenswrapper[4946]: I1128 07:15:30.267907 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-wvnxz" podStartSLOduration=2.267880591 podStartE2EDuration="2.267880591s" podCreationTimestamp="2025-11-28 07:15:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 
07:15:30.245707577 +0000 UTC m=+1384.623772678" watchObservedRunningTime="2025-11-28 07:15:30.267880591 +0000 UTC m=+1384.645945702" Nov 28 07:15:30 crc kubenswrapper[4946]: I1128 07:15:30.325693 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9p5hs" event={"ID":"97fa13e8-c7b4-4612-912c-2976861bce81","Type":"ContainerStarted","Data":"e9075c5decdb3425d24b6f806097a34c41ef6dfc68cae3e4911a14e504d1d3ab"} Nov 28 07:15:30 crc kubenswrapper[4946]: I1128 07:15:30.354948 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f5d458b55-225m9"] Nov 28 07:15:30 crc kubenswrapper[4946]: W1128 07:15:30.387568 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe5ff997_7c05_4d71_8a40_1cc01a423bc3.slice/crio-03a25ab23f06b7da69cb8471d0fddf30a08f6e86d8b3370a0b92699a2762b055 WatchSource:0}: Error finding container 03a25ab23f06b7da69cb8471d0fddf30a08f6e86d8b3370a0b92699a2762b055: Status 404 returned error can't find the container with id 03a25ab23f06b7da69cb8471d0fddf30a08f6e86d8b3370a0b92699a2762b055 Nov 28 07:15:30 crc kubenswrapper[4946]: I1128 07:15:30.521086 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 07:15:30 crc kubenswrapper[4946]: W1128 07:15:30.560071 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbd63f33_9cb3_47ec_ab44_04be204321ff.slice/crio-a5fedd4f3c77699494e8a29335f19cbfc8a0933ce615542113792453428ea43f WatchSource:0}: Error finding container a5fedd4f3c77699494e8a29335f19cbfc8a0933ce615542113792453428ea43f: Status 404 returned error can't find the container with id a5fedd4f3c77699494e8a29335f19cbfc8a0933ce615542113792453428ea43f Nov 28 07:15:30 crc kubenswrapper[4946]: I1128 07:15:30.835564 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76bb7864cf-xh4zj" Nov 28 07:15:30 crc kubenswrapper[4946]: I1128 07:15:30.938886 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 07:15:30 crc kubenswrapper[4946]: I1128 07:15:30.942377 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db6faea9-e5b3-4621-b198-1d323a1825d4-ovsdbserver-sb\") pod \"db6faea9-e5b3-4621-b198-1d323a1825d4\" (UID: \"db6faea9-e5b3-4621-b198-1d323a1825d4\") " Nov 28 07:15:30 crc kubenswrapper[4946]: I1128 07:15:30.942728 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-562d2\" (UniqueName: \"kubernetes.io/projected/db6faea9-e5b3-4621-b198-1d323a1825d4-kube-api-access-562d2\") pod \"db6faea9-e5b3-4621-b198-1d323a1825d4\" (UID: \"db6faea9-e5b3-4621-b198-1d323a1825d4\") " Nov 28 07:15:30 crc kubenswrapper[4946]: I1128 07:15:30.942766 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/db6faea9-e5b3-4621-b198-1d323a1825d4-dns-swift-storage-0\") pod \"db6faea9-e5b3-4621-b198-1d323a1825d4\" (UID: \"db6faea9-e5b3-4621-b198-1d323a1825d4\") " Nov 28 07:15:30 crc kubenswrapper[4946]: I1128 07:15:30.942791 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db6faea9-e5b3-4621-b198-1d323a1825d4-config\") pod \"db6faea9-e5b3-4621-b198-1d323a1825d4\" (UID: \"db6faea9-e5b3-4621-b198-1d323a1825d4\") " Nov 28 07:15:30 crc kubenswrapper[4946]: I1128 07:15:30.942841 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db6faea9-e5b3-4621-b198-1d323a1825d4-ovsdbserver-nb\") pod \"db6faea9-e5b3-4621-b198-1d323a1825d4\" (UID: \"db6faea9-e5b3-4621-b198-1d323a1825d4\") " Nov 28 07:15:30 crc kubenswrapper[4946]: I1128 07:15:30.942869 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db6faea9-e5b3-4621-b198-1d323a1825d4-dns-svc\") pod \"db6faea9-e5b3-4621-b198-1d323a1825d4\" (UID: \"db6faea9-e5b3-4621-b198-1d323a1825d4\") " Nov 28 07:15:30 crc kubenswrapper[4946]: I1128 07:15:30.950448 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db6faea9-e5b3-4621-b198-1d323a1825d4-kube-api-access-562d2" (OuterVolumeSpecName: "kube-api-access-562d2") pod "db6faea9-e5b3-4621-b198-1d323a1825d4" (UID: "db6faea9-e5b3-4621-b198-1d323a1825d4"). InnerVolumeSpecName "kube-api-access-562d2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:15:30 crc kubenswrapper[4946]: I1128 07:15:30.966967 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db6faea9-e5b3-4621-b198-1d323a1825d4-config" (OuterVolumeSpecName: "config") pod "db6faea9-e5b3-4621-b198-1d323a1825d4" (UID: "db6faea9-e5b3-4621-b198-1d323a1825d4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:15:30 crc kubenswrapper[4946]: I1128 07:15:30.977438 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db6faea9-e5b3-4621-b198-1d323a1825d4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "db6faea9-e5b3-4621-b198-1d323a1825d4" (UID: "db6faea9-e5b3-4621-b198-1d323a1825d4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:15:30 crc kubenswrapper[4946]: I1128 07:15:30.995382 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db6faea9-e5b3-4621-b198-1d323a1825d4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "db6faea9-e5b3-4621-b198-1d323a1825d4" (UID: "db6faea9-e5b3-4621-b198-1d323a1825d4"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:15:31 crc kubenswrapper[4946]: I1128 07:15:31.004293 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db6faea9-e5b3-4621-b198-1d323a1825d4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "db6faea9-e5b3-4621-b198-1d323a1825d4" (UID: "db6faea9-e5b3-4621-b198-1d323a1825d4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:15:31 crc kubenswrapper[4946]: I1128 07:15:31.006886 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db6faea9-e5b3-4621-b198-1d323a1825d4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "db6faea9-e5b3-4621-b198-1d323a1825d4" (UID: "db6faea9-e5b3-4621-b198-1d323a1825d4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:15:31 crc kubenswrapper[4946]: I1128 07:15:31.045953 4946 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db6faea9-e5b3-4621-b198-1d323a1825d4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:31 crc kubenswrapper[4946]: I1128 07:15:31.045981 4946 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db6faea9-e5b3-4621-b198-1d323a1825d4-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:31 crc kubenswrapper[4946]: I1128 07:15:31.045991 4946 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db6faea9-e5b3-4621-b198-1d323a1825d4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:31 crc kubenswrapper[4946]: I1128 07:15:31.046000 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-562d2\" (UniqueName: \"kubernetes.io/projected/db6faea9-e5b3-4621-b198-1d323a1825d4-kube-api-access-562d2\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:31 crc kubenswrapper[4946]: I1128 07:15:31.046011 4946 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/db6faea9-e5b3-4621-b198-1d323a1825d4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:31 crc kubenswrapper[4946]: I1128 07:15:31.046020 4946 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db6faea9-e5b3-4621-b198-1d323a1825d4-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:31 crc kubenswrapper[4946]: I1128 07:15:31.345117 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-db-sync-9p5hs" event={"ID":"97fa13e8-c7b4-4612-912c-2976861bce81","Type":"ContainerStarted","Data":"fc88598dd6761393b295b22d6e88c16126d2718d69bd736288f3b924ac60358d"} Nov 28 07:15:31 crc kubenswrapper[4946]: I1128 07:15:31.358794 4946 generic.go:334] "Generic (PLEG): container finished" podID="be5ff997-7c05-4d71-8a40-1cc01a423bc3" containerID="133486b76b76281205b48295c969bee20c80a954b6722df2a9d7436c4c620478" exitCode=0 Nov 28 07:15:31 crc kubenswrapper[4946]: I1128 07:15:31.358907 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f5d458b55-225m9" event={"ID":"be5ff997-7c05-4d71-8a40-1cc01a423bc3","Type":"ContainerDied","Data":"133486b76b76281205b48295c969bee20c80a954b6722df2a9d7436c4c620478"} Nov 28 07:15:31 crc kubenswrapper[4946]: I1128 07:15:31.358941 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f5d458b55-225m9" event={"ID":"be5ff997-7c05-4d71-8a40-1cc01a423bc3","Type":"ContainerStarted","Data":"03a25ab23f06b7da69cb8471d0fddf30a08f6e86d8b3370a0b92699a2762b055"} Nov 28 07:15:31 crc kubenswrapper[4946]: I1128 07:15:31.380234 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-cbhjz" event={"ID":"a4229e67-36fc-40ba-8d90-af5b0d95743f","Type":"ContainerStarted","Data":"5e648766bc17e861d464e00b88b45c7346d8d528bd289794550a29e666ea25f3"} Nov 28 07:15:31 crc kubenswrapper[4946]: I1128 07:15:31.386511 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-8wk8q" event={"ID":"8d0d856a-c2e4-4ccf-adfb-70391210f8d9","Type":"ContainerStarted","Data":"cf8b41b8a4e8be9d4c4cddf896de608a53cba6eca033c6775fadf6d6900660c4"} Nov 28 07:15:31 crc kubenswrapper[4946]: I1128 07:15:31.392639 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"53c012db-de22-4e6c-9240-89b78163eff1","Type":"ContainerStarted","Data":"7cd39ee81110b59de0b88a9c81eb0acee047e5fc0244b1827986d2f218cfe664"} Nov 28 07:15:31 crc kubenswrapper[4946]: I1128 07:15:31.431985 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76bb7864cf-xh4zj" event={"ID":"db6faea9-e5b3-4621-b198-1d323a1825d4","Type":"ContainerDied","Data":"f248fb41095309c2b0db58bbabf527a0e771529df64011c2dc5d145a881f09ec"} Nov 28 07:15:31 crc kubenswrapper[4946]: I1128 07:15:31.432060 4946 scope.go:117] "RemoveContainer" containerID="71920607a74ed95ce6ac4a57ff6247d2ee44fbb9ce824096a60b3f1d757c7c2e" Nov 28 07:15:31 crc kubenswrapper[4946]: I1128 07:15:31.432244 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76bb7864cf-xh4zj" Nov 28 07:15:31 crc kubenswrapper[4946]: I1128 07:15:31.458286 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bbd63f33-9cb3-47ec-ab44-04be204321ff","Type":"ContainerStarted","Data":"a5fedd4f3c77699494e8a29335f19cbfc8a0933ce615542113792453428ea43f"} Nov 28 07:15:31 crc kubenswrapper[4946]: I1128 07:15:31.478553 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-9p5hs" podStartSLOduration=3.478531758 podStartE2EDuration="3.478531758s" podCreationTimestamp="2025-11-28 07:15:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:15:31.377983223 +0000 UTC m=+1385.756048334" watchObservedRunningTime="2025-11-28 07:15:31.478531758 +0000 UTC m=+1385.856596869" Nov 28 07:15:31 crc kubenswrapper[4946]: I1128 07:15:31.593228 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 07:15:31 crc kubenswrapper[4946]: I1128 07:15:31.630795 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 07:15:31 crc kubenswrapper[4946]: I1128 07:15:31.641063 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:15:31 crc kubenswrapper[4946]: I1128 07:15:31.645824 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76bb7864cf-xh4zj"] Nov 28 07:15:31 crc kubenswrapper[4946]: I1128 07:15:31.649759 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-76bb7864cf-xh4zj"] Nov 28 07:15:32 crc kubenswrapper[4946]: I1128 07:15:32.002703 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db6faea9-e5b3-4621-b198-1d323a1825d4" path="/var/lib/kubelet/pods/db6faea9-e5b3-4621-b198-1d323a1825d4/volumes" Nov 28 07:15:32 crc kubenswrapper[4946]: I1128 07:15:32.473104 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"53c012db-de22-4e6c-9240-89b78163eff1","Type":"ContainerStarted","Data":"217b73121230eedf6bc2104ad4dab0acd6ffb3e896c76f6fa3d4e9132d016e74"} Nov 28 07:15:32 crc kubenswrapper[4946]: I1128 07:15:32.485090 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bbd63f33-9cb3-47ec-ab44-04be204321ff","Type":"ContainerStarted","Data":"4d4b0729302163f9765842d3c8ebafc303b6afa3579e50271b80e282415492ba"} Nov 28 07:15:32 crc kubenswrapper[4946]: I1128 07:15:32.492572 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f5d458b55-225m9" event={"ID":"be5ff997-7c05-4d71-8a40-1cc01a423bc3","Type":"ContainerStarted","Data":"ad9154061ecdccee32327c0040940e4bf6b7afef612d378da3b9ffd35cf3e8fe"} Nov 28 07:15:32 crc kubenswrapper[4946]: I1128 07:15:32.492718 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f5d458b55-225m9" Nov 28 07:15:32 crc kubenswrapper[4946]: I1128 07:15:32.523354 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f5d458b55-225m9" podStartSLOduration=4.523300648 podStartE2EDuration="4.523300648s" podCreationTimestamp="2025-11-28 07:15:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-28 07:15:32.512197655 +0000 UTC m=+1386.890262776" watchObservedRunningTime="2025-11-28 07:15:32.523300648 +0000 UTC m=+1386.901365759" Nov 28 07:15:33 crc kubenswrapper[4946]: I1128 07:15:33.503801 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"53c012db-de22-4e6c-9240-89b78163eff1","Type":"ContainerStarted","Data":"ccf44d8757a6df17cbb0907a8941dedb546ba9d919bc8c5210c862fcd792e74e"} Nov 28 07:15:33 crc kubenswrapper[4946]: I1128 07:15:33.503963 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="53c012db-de22-4e6c-9240-89b78163eff1" containerName="glance-log" containerID="cri-o://217b73121230eedf6bc2104ad4dab0acd6ffb3e896c76f6fa3d4e9132d016e74" gracePeriod=30 Nov 28 07:15:33 crc kubenswrapper[4946]: I1128 07:15:33.504014 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="53c012db-de22-4e6c-9240-89b78163eff1" containerName="glance-httpd" containerID="cri-o://ccf44d8757a6df17cbb0907a8941dedb546ba9d919bc8c5210c862fcd792e74e" gracePeriod=30 Nov 28 07:15:33 crc kubenswrapper[4946]: I1128 07:15:33.507494 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bbd63f33-9cb3-47ec-ab44-04be204321ff","Type":"ContainerStarted","Data":"e839daac9174d1eb9d5063219d04f6c22be7758193fb8693f7b09a6392c2aadd"} Nov 28 07:15:33 crc kubenswrapper[4946]: I1128 07:15:33.507753 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="bbd63f33-9cb3-47ec-ab44-04be204321ff" containerName="glance-log" containerID="cri-o://4d4b0729302163f9765842d3c8ebafc303b6afa3579e50271b80e282415492ba" gracePeriod=30 Nov 28 07:15:33 crc kubenswrapper[4946]: I1128 07:15:33.507812 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="bbd63f33-9cb3-47ec-ab44-04be204321ff" containerName="glance-httpd" containerID="cri-o://e839daac9174d1eb9d5063219d04f6c22be7758193fb8693f7b09a6392c2aadd" gracePeriod=30 Nov 28 07:15:33 crc kubenswrapper[4946]: I1128 07:15:33.536722 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.536701798 podStartE2EDuration="5.536701798s" podCreationTimestamp="2025-11-28 07:15:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:15:33.535368225 +0000 UTC m=+1387.913433336" watchObservedRunningTime="2025-11-28 07:15:33.536701798 +0000 UTC m=+1387.914766909" Nov 28 07:15:33 crc kubenswrapper[4946]: I1128 07:15:33.574970 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.574945196 podStartE2EDuration="5.574945196s" podCreationTimestamp="2025-11-28 07:15:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:15:33.568204521 +0000 UTC m=+1387.946269632" watchObservedRunningTime="2025-11-28 07:15:33.574945196 +0000 UTC m=+1387.953010307" Nov 28 07:15:34 crc kubenswrapper[4946]: I1128 07:15:34.520017 4946 generic.go:334] "Generic (PLEG): container finished" podID="b16ae262-483b-4ed8-9ab8-15dad5215dbc" 
containerID="d18f3a1fbfca471cddca95a6a9f466632a71cd94221c0b60e67c5785129513a6" exitCode=0 Nov 28 07:15:34 crc kubenswrapper[4946]: I1128 07:15:34.520575 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wvnxz" event={"ID":"b16ae262-483b-4ed8-9ab8-15dad5215dbc","Type":"ContainerDied","Data":"d18f3a1fbfca471cddca95a6a9f466632a71cd94221c0b60e67c5785129513a6"} Nov 28 07:15:34 crc kubenswrapper[4946]: I1128 07:15:34.523674 4946 generic.go:334] "Generic (PLEG): container finished" podID="53c012db-de22-4e6c-9240-89b78163eff1" containerID="ccf44d8757a6df17cbb0907a8941dedb546ba9d919bc8c5210c862fcd792e74e" exitCode=0 Nov 28 07:15:34 crc kubenswrapper[4946]: I1128 07:15:34.523700 4946 generic.go:334] "Generic (PLEG): container finished" podID="53c012db-de22-4e6c-9240-89b78163eff1" containerID="217b73121230eedf6bc2104ad4dab0acd6ffb3e896c76f6fa3d4e9132d016e74" exitCode=143 Nov 28 07:15:34 crc kubenswrapper[4946]: I1128 07:15:34.523751 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"53c012db-de22-4e6c-9240-89b78163eff1","Type":"ContainerDied","Data":"ccf44d8757a6df17cbb0907a8941dedb546ba9d919bc8c5210c862fcd792e74e"} Nov 28 07:15:34 crc kubenswrapper[4946]: I1128 07:15:34.523778 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"53c012db-de22-4e6c-9240-89b78163eff1","Type":"ContainerDied","Data":"217b73121230eedf6bc2104ad4dab0acd6ffb3e896c76f6fa3d4e9132d016e74"} Nov 28 07:15:34 crc kubenswrapper[4946]: I1128 07:15:34.537354 4946 generic.go:334] "Generic (PLEG): container finished" podID="bbd63f33-9cb3-47ec-ab44-04be204321ff" containerID="e839daac9174d1eb9d5063219d04f6c22be7758193fb8693f7b09a6392c2aadd" exitCode=0 Nov 28 07:15:34 crc kubenswrapper[4946]: I1128 07:15:34.537484 4946 generic.go:334] "Generic (PLEG): container finished" podID="bbd63f33-9cb3-47ec-ab44-04be204321ff" containerID="4d4b0729302163f9765842d3c8ebafc303b6afa3579e50271b80e282415492ba" exitCode=143 Nov 28 07:15:34 crc kubenswrapper[4946]: I1128 07:15:34.537525 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bbd63f33-9cb3-47ec-ab44-04be204321ff","Type":"ContainerDied","Data":"e839daac9174d1eb9d5063219d04f6c22be7758193fb8693f7b09a6392c2aadd"} Nov 28 07:15:34 crc kubenswrapper[4946]: I1128 07:15:34.537561 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bbd63f33-9cb3-47ec-ab44-04be204321ff","Type":"ContainerDied","Data":"4d4b0729302163f9765842d3c8ebafc303b6afa3579e50271b80e282415492ba"} Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.508673 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.510059 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wvnxz" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.515853 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.559018 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"53c012db-de22-4e6c-9240-89b78163eff1","Type":"ContainerDied","Data":"7cd39ee81110b59de0b88a9c81eb0acee047e5fc0244b1827986d2f218cfe664"} Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.559068 4946 scope.go:117] "RemoveContainer" containerID="ccf44d8757a6df17cbb0907a8941dedb546ba9d919bc8c5210c862fcd792e74e" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.559189 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.561316 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bbd63f33-9cb3-47ec-ab44-04be204321ff","Type":"ContainerDied","Data":"a5fedd4f3c77699494e8a29335f19cbfc8a0933ce615542113792453428ea43f"} Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.561335 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.562364 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wvnxz" event={"ID":"b16ae262-483b-4ed8-9ab8-15dad5215dbc","Type":"ContainerDied","Data":"ad955b0e79c969460e184e4f65ccf3e34e10221d589515ceab29b708e4831842"} Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.562390 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad955b0e79c969460e184e4f65ccf3e34e10221d589515ceab29b708e4831842" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.562438 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-wvnxz" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.603877 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/53c012db-de22-4e6c-9240-89b78163eff1-internal-tls-certs\") pod \"53c012db-de22-4e6c-9240-89b78163eff1\" (UID: \"53c012db-de22-4e6c-9240-89b78163eff1\") " Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.603937 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbd63f33-9cb3-47ec-ab44-04be204321ff-combined-ca-bundle\") pod \"bbd63f33-9cb3-47ec-ab44-04be204321ff\" (UID: \"bbd63f33-9cb3-47ec-ab44-04be204321ff\") " Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.603971 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b16ae262-483b-4ed8-9ab8-15dad5215dbc-scripts\") pod \"b16ae262-483b-4ed8-9ab8-15dad5215dbc\" (UID: \"b16ae262-483b-4ed8-9ab8-15dad5215dbc\") " Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.603989 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"53c012db-de22-4e6c-9240-89b78163eff1\" (UID: \"53c012db-de22-4e6c-9240-89b78163eff1\") " Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.604012 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b16ae262-483b-4ed8-9ab8-15dad5215dbc-combined-ca-bundle\") pod \"b16ae262-483b-4ed8-9ab8-15dad5215dbc\" (UID: \"b16ae262-483b-4ed8-9ab8-15dad5215dbc\") " Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.604074 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b16ae262-483b-4ed8-9ab8-15dad5215dbc-config-data\") pod \"b16ae262-483b-4ed8-9ab8-15dad5215dbc\" (UID: \"b16ae262-483b-4ed8-9ab8-15dad5215dbc\") " Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.604164 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6rqh\" (UniqueName: \"kubernetes.io/projected/b16ae262-483b-4ed8-9ab8-15dad5215dbc-kube-api-access-f6rqh\") pod \"b16ae262-483b-4ed8-9ab8-15dad5215dbc\" (UID: \"b16ae262-483b-4ed8-9ab8-15dad5215dbc\") " Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.604185 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bbd63f33-9cb3-47ec-ab44-04be204321ff-httpd-run\") pod \"bbd63f33-9cb3-47ec-ab44-04be204321ff\" (UID: \"bbd63f33-9cb3-47ec-ab44-04be204321ff\") " Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.604221 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"bbd63f33-9cb3-47ec-ab44-04be204321ff\" (UID: \"bbd63f33-9cb3-47ec-ab44-04be204321ff\") " Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.604256 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbd63f33-9cb3-47ec-ab44-04be204321ff-config-data\") pod \"bbd63f33-9cb3-47ec-ab44-04be204321ff\" (UID: \"bbd63f33-9cb3-47ec-ab44-04be204321ff\") " Nov 28 07:15:36 crc 
kubenswrapper[4946]: I1128 07:15:36.604281 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/53c012db-de22-4e6c-9240-89b78163eff1-httpd-run\") pod \"53c012db-de22-4e6c-9240-89b78163eff1\" (UID: \"53c012db-de22-4e6c-9240-89b78163eff1\") " Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.604297 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbd63f33-9cb3-47ec-ab44-04be204321ff-scripts\") pod \"bbd63f33-9cb3-47ec-ab44-04be204321ff\" (UID: \"bbd63f33-9cb3-47ec-ab44-04be204321ff\") " Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.604318 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6tc2\" (UniqueName: \"kubernetes.io/projected/bbd63f33-9cb3-47ec-ab44-04be204321ff-kube-api-access-d6tc2\") pod \"bbd63f33-9cb3-47ec-ab44-04be204321ff\" (UID: \"bbd63f33-9cb3-47ec-ab44-04be204321ff\") " Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.604335 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53c012db-de22-4e6c-9240-89b78163eff1-config-data\") pod \"53c012db-de22-4e6c-9240-89b78163eff1\" (UID: \"53c012db-de22-4e6c-9240-89b78163eff1\") " Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.604355 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b16ae262-483b-4ed8-9ab8-15dad5215dbc-credential-keys\") pod \"b16ae262-483b-4ed8-9ab8-15dad5215dbc\" (UID: \"b16ae262-483b-4ed8-9ab8-15dad5215dbc\") " Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.604378 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53c012db-de22-4e6c-9240-89b78163eff1-logs\") pod \"53c012db-de22-4e6c-9240-89b78163eff1\" (UID: \"53c012db-de22-4e6c-9240-89b78163eff1\") " Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.604400 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9bcg\" (UniqueName: \"kubernetes.io/projected/53c012db-de22-4e6c-9240-89b78163eff1-kube-api-access-l9bcg\") pod \"53c012db-de22-4e6c-9240-89b78163eff1\" (UID: \"53c012db-de22-4e6c-9240-89b78163eff1\") " Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.604428 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b16ae262-483b-4ed8-9ab8-15dad5215dbc-fernet-keys\") pod \"b16ae262-483b-4ed8-9ab8-15dad5215dbc\" (UID: \"b16ae262-483b-4ed8-9ab8-15dad5215dbc\") " Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.604473 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53c012db-de22-4e6c-9240-89b78163eff1-combined-ca-bundle\") pod \"53c012db-de22-4e6c-9240-89b78163eff1\" (UID: \"53c012db-de22-4e6c-9240-89b78163eff1\") " Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.604512 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbd63f33-9cb3-47ec-ab44-04be204321ff-public-tls-certs\") pod \"bbd63f33-9cb3-47ec-ab44-04be204321ff\" (UID: \"bbd63f33-9cb3-47ec-ab44-04be204321ff\") " Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.604527 
4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbd63f33-9cb3-47ec-ab44-04be204321ff-logs\") pod \"bbd63f33-9cb3-47ec-ab44-04be204321ff\" (UID: \"bbd63f33-9cb3-47ec-ab44-04be204321ff\") " Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.604547 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53c012db-de22-4e6c-9240-89b78163eff1-scripts\") pod \"53c012db-de22-4e6c-9240-89b78163eff1\" (UID: \"53c012db-de22-4e6c-9240-89b78163eff1\") " Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.607636 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbd63f33-9cb3-47ec-ab44-04be204321ff-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "bbd63f33-9cb3-47ec-ab44-04be204321ff" (UID: "bbd63f33-9cb3-47ec-ab44-04be204321ff"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.613929 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53c012db-de22-4e6c-9240-89b78163eff1-kube-api-access-l9bcg" (OuterVolumeSpecName: "kube-api-access-l9bcg") pod "53c012db-de22-4e6c-9240-89b78163eff1" (UID: "53c012db-de22-4e6c-9240-89b78163eff1"). InnerVolumeSpecName "kube-api-access-l9bcg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.616752 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b16ae262-483b-4ed8-9ab8-15dad5215dbc-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "b16ae262-483b-4ed8-9ab8-15dad5215dbc" (UID: "b16ae262-483b-4ed8-9ab8-15dad5215dbc"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.617082 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53c012db-de22-4e6c-9240-89b78163eff1-logs" (OuterVolumeSpecName: "logs") pod "53c012db-de22-4e6c-9240-89b78163eff1" (UID: "53c012db-de22-4e6c-9240-89b78163eff1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.617423 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53c012db-de22-4e6c-9240-89b78163eff1-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "53c012db-de22-4e6c-9240-89b78163eff1" (UID: "53c012db-de22-4e6c-9240-89b78163eff1"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.618253 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "53c012db-de22-4e6c-9240-89b78163eff1" (UID: "53c012db-de22-4e6c-9240-89b78163eff1"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.618331 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "bbd63f33-9cb3-47ec-ab44-04be204321ff" (UID: "bbd63f33-9cb3-47ec-ab44-04be204321ff"). InnerVolumeSpecName "local-storage09-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.618388 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbd63f33-9cb3-47ec-ab44-04be204321ff-scripts" (OuterVolumeSpecName: "scripts") pod "bbd63f33-9cb3-47ec-ab44-04be204321ff" (UID: "bbd63f33-9cb3-47ec-ab44-04be204321ff"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.618973 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbd63f33-9cb3-47ec-ab44-04be204321ff-kube-api-access-d6tc2" (OuterVolumeSpecName: "kube-api-access-d6tc2") pod "bbd63f33-9cb3-47ec-ab44-04be204321ff" (UID: "bbd63f33-9cb3-47ec-ab44-04be204321ff"). InnerVolumeSpecName "kube-api-access-d6tc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.620999 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b16ae262-483b-4ed8-9ab8-15dad5215dbc-kube-api-access-f6rqh" (OuterVolumeSpecName: "kube-api-access-f6rqh") pod "b16ae262-483b-4ed8-9ab8-15dad5215dbc" (UID: "b16ae262-483b-4ed8-9ab8-15dad5215dbc"). InnerVolumeSpecName "kube-api-access-f6rqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.621673 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbd63f33-9cb3-47ec-ab44-04be204321ff-logs" (OuterVolumeSpecName: "logs") pod "bbd63f33-9cb3-47ec-ab44-04be204321ff" (UID: "bbd63f33-9cb3-47ec-ab44-04be204321ff"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.623797 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53c012db-de22-4e6c-9240-89b78163eff1-scripts" (OuterVolumeSpecName: "scripts") pod "53c012db-de22-4e6c-9240-89b78163eff1" (UID: "53c012db-de22-4e6c-9240-89b78163eff1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.638360 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b16ae262-483b-4ed8-9ab8-15dad5215dbc-scripts" (OuterVolumeSpecName: "scripts") pod "b16ae262-483b-4ed8-9ab8-15dad5215dbc" (UID: "b16ae262-483b-4ed8-9ab8-15dad5215dbc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.645673 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b16ae262-483b-4ed8-9ab8-15dad5215dbc-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b16ae262-483b-4ed8-9ab8-15dad5215dbc" (UID: "b16ae262-483b-4ed8-9ab8-15dad5215dbc"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.664217 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53c012db-de22-4e6c-9240-89b78163eff1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "53c012db-de22-4e6c-9240-89b78163eff1" (UID: "53c012db-de22-4e6c-9240-89b78163eff1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.679487 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b16ae262-483b-4ed8-9ab8-15dad5215dbc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b16ae262-483b-4ed8-9ab8-15dad5215dbc" (UID: "b16ae262-483b-4ed8-9ab8-15dad5215dbc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.696621 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b16ae262-483b-4ed8-9ab8-15dad5215dbc-config-data" (OuterVolumeSpecName: "config-data") pod "b16ae262-483b-4ed8-9ab8-15dad5215dbc" (UID: "b16ae262-483b-4ed8-9ab8-15dad5215dbc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.697653 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53c012db-de22-4e6c-9240-89b78163eff1-config-data" (OuterVolumeSpecName: "config-data") pod "53c012db-de22-4e6c-9240-89b78163eff1" (UID: "53c012db-de22-4e6c-9240-89b78163eff1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.707011 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53c012db-de22-4e6c-9240-89b78163eff1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.707042 4946 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbd63f33-9cb3-47ec-ab44-04be204321ff-logs\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.707054 4946 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53c012db-de22-4e6c-9240-89b78163eff1-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.707074 4946 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.707084 4946 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b16ae262-483b-4ed8-9ab8-15dad5215dbc-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.707094 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b16ae262-483b-4ed8-9ab8-15dad5215dbc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.707102 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b16ae262-483b-4ed8-9ab8-15dad5215dbc-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.707113 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6rqh\" (UniqueName: \"kubernetes.io/projected/b16ae262-483b-4ed8-9ab8-15dad5215dbc-kube-api-access-f6rqh\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.707124 4946 reconciler_common.go:293] "Volume 
detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bbd63f33-9cb3-47ec-ab44-04be204321ff-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.707138 4946 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.707146 4946 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/53c012db-de22-4e6c-9240-89b78163eff1-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.707157 4946 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbd63f33-9cb3-47ec-ab44-04be204321ff-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.707166 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6tc2\" (UniqueName: \"kubernetes.io/projected/bbd63f33-9cb3-47ec-ab44-04be204321ff-kube-api-access-d6tc2\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.707174 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53c012db-de22-4e6c-9240-89b78163eff1-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.707192 4946 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b16ae262-483b-4ed8-9ab8-15dad5215dbc-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.707200 4946 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53c012db-de22-4e6c-9240-89b78163eff1-logs\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.707209 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9bcg\" (UniqueName: \"kubernetes.io/projected/53c012db-de22-4e6c-9240-89b78163eff1-kube-api-access-l9bcg\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.707216 4946 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b16ae262-483b-4ed8-9ab8-15dad5215dbc-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.715521 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-wvnxz"] Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.720848 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbd63f33-9cb3-47ec-ab44-04be204321ff-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "bbd63f33-9cb3-47ec-ab44-04be204321ff" (UID: "bbd63f33-9cb3-47ec-ab44-04be204321ff"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.722967 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbd63f33-9cb3-47ec-ab44-04be204321ff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bbd63f33-9cb3-47ec-ab44-04be204321ff" (UID: "bbd63f33-9cb3-47ec-ab44-04be204321ff"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.725179 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-wvnxz"] Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.730909 4946 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.737725 4946 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.741166 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbd63f33-9cb3-47ec-ab44-04be204321ff-config-data" (OuterVolumeSpecName: "config-data") pod "bbd63f33-9cb3-47ec-ab44-04be204321ff" (UID: "bbd63f33-9cb3-47ec-ab44-04be204321ff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.751549 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53c012db-de22-4e6c-9240-89b78163eff1-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "53c012db-de22-4e6c-9240-89b78163eff1" (UID: "53c012db-de22-4e6c-9240-89b78163eff1"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.814062 4946 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbd63f33-9cb3-47ec-ab44-04be204321ff-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.814107 4946 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/53c012db-de22-4e6c-9240-89b78163eff1-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.814148 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbd63f33-9cb3-47ec-ab44-04be204321ff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.814164 4946 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.814178 4946 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.814190 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbd63f33-9cb3-47ec-ab44-04be204321ff-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.826123 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-djhf5"] Nov 28 07:15:36 crc kubenswrapper[4946]: E1128 07:15:36.826911 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbd63f33-9cb3-47ec-ab44-04be204321ff" containerName="glance-log" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.827000 4946 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="bbd63f33-9cb3-47ec-ab44-04be204321ff" containerName="glance-log" Nov 28 07:15:36 crc kubenswrapper[4946]: E1128 07:15:36.827081 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53c012db-de22-4e6c-9240-89b78163eff1" containerName="glance-log" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.827148 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c012db-de22-4e6c-9240-89b78163eff1" containerName="glance-log" Nov 28 07:15:36 crc kubenswrapper[4946]: E1128 07:15:36.827215 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53c012db-de22-4e6c-9240-89b78163eff1" containerName="glance-httpd" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.827308 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c012db-de22-4e6c-9240-89b78163eff1" containerName="glance-httpd" Nov 28 07:15:36 crc kubenswrapper[4946]: E1128 07:15:36.827384 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbd63f33-9cb3-47ec-ab44-04be204321ff" containerName="glance-httpd" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.827448 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbd63f33-9cb3-47ec-ab44-04be204321ff" containerName="glance-httpd" Nov 28 07:15:36 crc kubenswrapper[4946]: E1128 07:15:36.828099 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db6faea9-e5b3-4621-b198-1d323a1825d4" containerName="init" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.828206 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="db6faea9-e5b3-4621-b198-1d323a1825d4" containerName="init" Nov 28 07:15:36 crc kubenswrapper[4946]: E1128 07:15:36.828301 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b16ae262-483b-4ed8-9ab8-15dad5215dbc" containerName="keystone-bootstrap" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.828376 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="b16ae262-483b-4ed8-9ab8-15dad5215dbc" containerName="keystone-bootstrap" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.840575 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="53c012db-de22-4e6c-9240-89b78163eff1" containerName="glance-log" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.840650 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="db6faea9-e5b3-4621-b198-1d323a1825d4" containerName="init" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.840675 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="53c012db-de22-4e6c-9240-89b78163eff1" containerName="glance-httpd" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.840695 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbd63f33-9cb3-47ec-ab44-04be204321ff" containerName="glance-log" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.840742 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="b16ae262-483b-4ed8-9ab8-15dad5215dbc" containerName="keystone-bootstrap" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.840761 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbd63f33-9cb3-47ec-ab44-04be204321ff" containerName="glance-httpd" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.841973 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-djhf5" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.874266 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-djhf5"] Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.928237 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldkrq\" (UniqueName: \"kubernetes.io/projected/000349d5-5671-46a1-b1d2-1954a9facc3e-kube-api-access-ldkrq\") pod \"keystone-bootstrap-djhf5\" (UID: \"000349d5-5671-46a1-b1d2-1954a9facc3e\") " pod="openstack/keystone-bootstrap-djhf5" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.928477 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/000349d5-5671-46a1-b1d2-1954a9facc3e-fernet-keys\") pod \"keystone-bootstrap-djhf5\" (UID: \"000349d5-5671-46a1-b1d2-1954a9facc3e\") " pod="openstack/keystone-bootstrap-djhf5" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.928556 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/000349d5-5671-46a1-b1d2-1954a9facc3e-credential-keys\") pod \"keystone-bootstrap-djhf5\" (UID: \"000349d5-5671-46a1-b1d2-1954a9facc3e\") " pod="openstack/keystone-bootstrap-djhf5" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.928949 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/000349d5-5671-46a1-b1d2-1954a9facc3e-combined-ca-bundle\") pod \"keystone-bootstrap-djhf5\" (UID: \"000349d5-5671-46a1-b1d2-1954a9facc3e\") " pod="openstack/keystone-bootstrap-djhf5" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.929010 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/000349d5-5671-46a1-b1d2-1954a9facc3e-scripts\") pod \"keystone-bootstrap-djhf5\" (UID: \"000349d5-5671-46a1-b1d2-1954a9facc3e\") " pod="openstack/keystone-bootstrap-djhf5" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.929063 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/000349d5-5671-46a1-b1d2-1954a9facc3e-config-data\") pod \"keystone-bootstrap-djhf5\" (UID: \"000349d5-5671-46a1-b1d2-1954a9facc3e\") " pod="openstack/keystone-bootstrap-djhf5" Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.931741 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 07:15:36 crc kubenswrapper[4946]: I1128 07:15:36.958944 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.001536 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.010958 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.021853 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.023723 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.026893 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.026945 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-4ftvr" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.026892 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.027370 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.034292 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.036092 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/000349d5-5671-46a1-b1d2-1954a9facc3e-fernet-keys\") pod \"keystone-bootstrap-djhf5\" (UID: \"000349d5-5671-46a1-b1d2-1954a9facc3e\") " pod="openstack/keystone-bootstrap-djhf5" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.036168 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/000349d5-5671-46a1-b1d2-1954a9facc3e-credential-keys\") pod \"keystone-bootstrap-djhf5\" (UID: \"000349d5-5671-46a1-b1d2-1954a9facc3e\") " pod="openstack/keystone-bootstrap-djhf5" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.036255 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/000349d5-5671-46a1-b1d2-1954a9facc3e-combined-ca-bundle\") pod \"keystone-bootstrap-djhf5\" (UID: \"000349d5-5671-46a1-b1d2-1954a9facc3e\") " pod="openstack/keystone-bootstrap-djhf5" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.036293 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/000349d5-5671-46a1-b1d2-1954a9facc3e-scripts\") pod \"keystone-bootstrap-djhf5\" (UID: \"000349d5-5671-46a1-b1d2-1954a9facc3e\") " pod="openstack/keystone-bootstrap-djhf5" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.036333 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/000349d5-5671-46a1-b1d2-1954a9facc3e-config-data\") pod \"keystone-bootstrap-djhf5\" (UID: \"000349d5-5671-46a1-b1d2-1954a9facc3e\") " pod="openstack/keystone-bootstrap-djhf5" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.036583 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldkrq\" (UniqueName: \"kubernetes.io/projected/000349d5-5671-46a1-b1d2-1954a9facc3e-kube-api-access-ldkrq\") pod \"keystone-bootstrap-djhf5\" (UID: \"000349d5-5671-46a1-b1d2-1954a9facc3e\") " pod="openstack/keystone-bootstrap-djhf5" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.040509 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/000349d5-5671-46a1-b1d2-1954a9facc3e-scripts\") pod \"keystone-bootstrap-djhf5\" (UID: \"000349d5-5671-46a1-b1d2-1954a9facc3e\") " pod="openstack/keystone-bootstrap-djhf5" Nov 
28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.041167 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/000349d5-5671-46a1-b1d2-1954a9facc3e-config-data\") pod \"keystone-bootstrap-djhf5\" (UID: \"000349d5-5671-46a1-b1d2-1954a9facc3e\") " pod="openstack/keystone-bootstrap-djhf5" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.042737 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/000349d5-5671-46a1-b1d2-1954a9facc3e-credential-keys\") pod \"keystone-bootstrap-djhf5\" (UID: \"000349d5-5671-46a1-b1d2-1954a9facc3e\") " pod="openstack/keystone-bootstrap-djhf5" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.046989 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/000349d5-5671-46a1-b1d2-1954a9facc3e-combined-ca-bundle\") pod \"keystone-bootstrap-djhf5\" (UID: \"000349d5-5671-46a1-b1d2-1954a9facc3e\") " pod="openstack/keystone-bootstrap-djhf5" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.049672 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.050790 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/000349d5-5671-46a1-b1d2-1954a9facc3e-fernet-keys\") pod \"keystone-bootstrap-djhf5\" (UID: \"000349d5-5671-46a1-b1d2-1954a9facc3e\") " pod="openstack/keystone-bootstrap-djhf5" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.052045 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.054810 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.054947 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.065486 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.068942 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldkrq\" (UniqueName: \"kubernetes.io/projected/000349d5-5671-46a1-b1d2-1954a9facc3e-kube-api-access-ldkrq\") pod \"keystone-bootstrap-djhf5\" (UID: \"000349d5-5671-46a1-b1d2-1954a9facc3e\") " pod="openstack/keystone-bootstrap-djhf5" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.138174 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f7498cb-2048-4c04-b1ce-50a236db88a6-config-data\") pod \"glance-default-external-api-0\" (UID: \"1f7498cb-2048-4c04-b1ce-50a236db88a6\") " pod="openstack/glance-default-external-api-0" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.138233 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f7498cb-2048-4c04-b1ce-50a236db88a6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1f7498cb-2048-4c04-b1ce-50a236db88a6\") " pod="openstack/glance-default-external-api-0" Nov 28 
07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.138268 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f7498cb-2048-4c04-b1ce-50a236db88a6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1f7498cb-2048-4c04-b1ce-50a236db88a6\") " pod="openstack/glance-default-external-api-0" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.138308 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bf0f217-858e-49f9-8730-7376f77c6d4f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6bf0f217-858e-49f9-8730-7376f77c6d4f\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.138338 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bf0f217-858e-49f9-8730-7376f77c6d4f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6bf0f217-858e-49f9-8730-7376f77c6d4f\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.138369 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"6bf0f217-858e-49f9-8730-7376f77c6d4f\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.138388 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f7498cb-2048-4c04-b1ce-50a236db88a6-logs\") pod \"glance-default-external-api-0\" (UID: \"1f7498cb-2048-4c04-b1ce-50a236db88a6\") " pod="openstack/glance-default-external-api-0" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.138440 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1f7498cb-2048-4c04-b1ce-50a236db88a6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1f7498cb-2048-4c04-b1ce-50a236db88a6\") " pod="openstack/glance-default-external-api-0" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.138476 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-496gw\" (UniqueName: \"kubernetes.io/projected/6bf0f217-858e-49f9-8730-7376f77c6d4f-kube-api-access-496gw\") pod \"glance-default-internal-api-0\" (UID: \"6bf0f217-858e-49f9-8730-7376f77c6d4f\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.138494 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bf0f217-858e-49f9-8730-7376f77c6d4f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6bf0f217-858e-49f9-8730-7376f77c6d4f\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.138519 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: 
\"1f7498cb-2048-4c04-b1ce-50a236db88a6\") " pod="openstack/glance-default-external-api-0" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.138560 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bf0f217-858e-49f9-8730-7376f77c6d4f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6bf0f217-858e-49f9-8730-7376f77c6d4f\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.138585 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqbkl\" (UniqueName: \"kubernetes.io/projected/1f7498cb-2048-4c04-b1ce-50a236db88a6-kube-api-access-rqbkl\") pod \"glance-default-external-api-0\" (UID: \"1f7498cb-2048-4c04-b1ce-50a236db88a6\") " pod="openstack/glance-default-external-api-0" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.138619 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bf0f217-858e-49f9-8730-7376f77c6d4f-logs\") pod \"glance-default-internal-api-0\" (UID: \"6bf0f217-858e-49f9-8730-7376f77c6d4f\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.138645 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f7498cb-2048-4c04-b1ce-50a236db88a6-scripts\") pod \"glance-default-external-api-0\" (UID: \"1f7498cb-2048-4c04-b1ce-50a236db88a6\") " pod="openstack/glance-default-external-api-0" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.138681 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6bf0f217-858e-49f9-8730-7376f77c6d4f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6bf0f217-858e-49f9-8730-7376f77c6d4f\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.165373 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-djhf5" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.242748 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"6bf0f217-858e-49f9-8730-7376f77c6d4f\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.242809 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f7498cb-2048-4c04-b1ce-50a236db88a6-logs\") pod \"glance-default-external-api-0\" (UID: \"1f7498cb-2048-4c04-b1ce-50a236db88a6\") " pod="openstack/glance-default-external-api-0" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.242894 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1f7498cb-2048-4c04-b1ce-50a236db88a6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1f7498cb-2048-4c04-b1ce-50a236db88a6\") " pod="openstack/glance-default-external-api-0" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.242922 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-496gw\" (UniqueName: \"kubernetes.io/projected/6bf0f217-858e-49f9-8730-7376f77c6d4f-kube-api-access-496gw\") pod \"glance-default-internal-api-0\" (UID: \"6bf0f217-858e-49f9-8730-7376f77c6d4f\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.242949 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bf0f217-858e-49f9-8730-7376f77c6d4f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6bf0f217-858e-49f9-8730-7376f77c6d4f\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.242983 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"1f7498cb-2048-4c04-b1ce-50a236db88a6\") " pod="openstack/glance-default-external-api-0" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.243031 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bf0f217-858e-49f9-8730-7376f77c6d4f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6bf0f217-858e-49f9-8730-7376f77c6d4f\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.243049 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqbkl\" (UniqueName: \"kubernetes.io/projected/1f7498cb-2048-4c04-b1ce-50a236db88a6-kube-api-access-rqbkl\") pod \"glance-default-external-api-0\" (UID: \"1f7498cb-2048-4c04-b1ce-50a236db88a6\") " pod="openstack/glance-default-external-api-0" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.243080 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bf0f217-858e-49f9-8730-7376f77c6d4f-logs\") pod \"glance-default-internal-api-0\" (UID: \"6bf0f217-858e-49f9-8730-7376f77c6d4f\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:15:37 crc kubenswrapper[4946]: 
I1128 07:15:37.243101 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f7498cb-2048-4c04-b1ce-50a236db88a6-scripts\") pod \"glance-default-external-api-0\" (UID: \"1f7498cb-2048-4c04-b1ce-50a236db88a6\") " pod="openstack/glance-default-external-api-0" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.243131 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6bf0f217-858e-49f9-8730-7376f77c6d4f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6bf0f217-858e-49f9-8730-7376f77c6d4f\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.243160 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f7498cb-2048-4c04-b1ce-50a236db88a6-config-data\") pod \"glance-default-external-api-0\" (UID: \"1f7498cb-2048-4c04-b1ce-50a236db88a6\") " pod="openstack/glance-default-external-api-0" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.243182 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f7498cb-2048-4c04-b1ce-50a236db88a6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1f7498cb-2048-4c04-b1ce-50a236db88a6\") " pod="openstack/glance-default-external-api-0" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.243208 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f7498cb-2048-4c04-b1ce-50a236db88a6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1f7498cb-2048-4c04-b1ce-50a236db88a6\") " pod="openstack/glance-default-external-api-0" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.243237 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bf0f217-858e-49f9-8730-7376f77c6d4f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6bf0f217-858e-49f9-8730-7376f77c6d4f\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.243268 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bf0f217-858e-49f9-8730-7376f77c6d4f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6bf0f217-858e-49f9-8730-7376f77c6d4f\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.244760 4946 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"1f7498cb-2048-4c04-b1ce-50a236db88a6\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.244852 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f7498cb-2048-4c04-b1ce-50a236db88a6-logs\") pod \"glance-default-external-api-0\" (UID: \"1f7498cb-2048-4c04-b1ce-50a236db88a6\") " pod="openstack/glance-default-external-api-0" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.245229 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1f7498cb-2048-4c04-b1ce-50a236db88a6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1f7498cb-2048-4c04-b1ce-50a236db88a6\") " pod="openstack/glance-default-external-api-0" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.245476 4946 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"6bf0f217-858e-49f9-8730-7376f77c6d4f\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.245910 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bf0f217-858e-49f9-8730-7376f77c6d4f-logs\") pod \"glance-default-internal-api-0\" (UID: \"6bf0f217-858e-49f9-8730-7376f77c6d4f\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.248996 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6bf0f217-858e-49f9-8730-7376f77c6d4f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6bf0f217-858e-49f9-8730-7376f77c6d4f\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.254450 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bf0f217-858e-49f9-8730-7376f77c6d4f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6bf0f217-858e-49f9-8730-7376f77c6d4f\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.254578 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f7498cb-2048-4c04-b1ce-50a236db88a6-scripts\") pod \"glance-default-external-api-0\" (UID: \"1f7498cb-2048-4c04-b1ce-50a236db88a6\") " pod="openstack/glance-default-external-api-0" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.254612 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bf0f217-858e-49f9-8730-7376f77c6d4f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6bf0f217-858e-49f9-8730-7376f77c6d4f\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.256789 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bf0f217-858e-49f9-8730-7376f77c6d4f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6bf0f217-858e-49f9-8730-7376f77c6d4f\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.262193 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f7498cb-2048-4c04-b1ce-50a236db88a6-config-data\") pod \"glance-default-external-api-0\" (UID: \"1f7498cb-2048-4c04-b1ce-50a236db88a6\") " pod="openstack/glance-default-external-api-0" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.262418 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqbkl\" (UniqueName: \"kubernetes.io/projected/1f7498cb-2048-4c04-b1ce-50a236db88a6-kube-api-access-rqbkl\") pod 
\"glance-default-external-api-0\" (UID: \"1f7498cb-2048-4c04-b1ce-50a236db88a6\") " pod="openstack/glance-default-external-api-0" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.266530 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bf0f217-858e-49f9-8730-7376f77c6d4f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6bf0f217-858e-49f9-8730-7376f77c6d4f\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.275865 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f7498cb-2048-4c04-b1ce-50a236db88a6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1f7498cb-2048-4c04-b1ce-50a236db88a6\") " pod="openstack/glance-default-external-api-0" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.276739 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f7498cb-2048-4c04-b1ce-50a236db88a6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1f7498cb-2048-4c04-b1ce-50a236db88a6\") " pod="openstack/glance-default-external-api-0" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.283612 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-496gw\" (UniqueName: \"kubernetes.io/projected/6bf0f217-858e-49f9-8730-7376f77c6d4f-kube-api-access-496gw\") pod \"glance-default-internal-api-0\" (UID: \"6bf0f217-858e-49f9-8730-7376f77c6d4f\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.340212 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"1f7498cb-2048-4c04-b1ce-50a236db88a6\") " pod="openstack/glance-default-external-api-0" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.340324 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"6bf0f217-858e-49f9-8730-7376f77c6d4f\") " pod="openstack/glance-default-internal-api-0" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.343339 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 07:15:37 crc kubenswrapper[4946]: I1128 07:15:37.384548 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 07:15:38 crc kubenswrapper[4946]: I1128 07:15:38.007732 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53c012db-de22-4e6c-9240-89b78163eff1" path="/var/lib/kubelet/pods/53c012db-de22-4e6c-9240-89b78163eff1/volumes" Nov 28 07:15:38 crc kubenswrapper[4946]: I1128 07:15:38.012749 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b16ae262-483b-4ed8-9ab8-15dad5215dbc" path="/var/lib/kubelet/pods/b16ae262-483b-4ed8-9ab8-15dad5215dbc/volumes" Nov 28 07:15:38 crc kubenswrapper[4946]: I1128 07:15:38.013366 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbd63f33-9cb3-47ec-ab44-04be204321ff" path="/var/lib/kubelet/pods/bbd63f33-9cb3-47ec-ab44-04be204321ff/volumes" Nov 28 07:15:39 crc kubenswrapper[4946]: I1128 07:15:39.437594 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5f5d458b55-225m9" Nov 28 07:15:39 crc kubenswrapper[4946]: I1128 07:15:39.496845 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b6bbf7467-w7p6s"] Nov 28 07:15:39 crc kubenswrapper[4946]: I1128 07:15:39.497161 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b6bbf7467-w7p6s" podUID="1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a" containerName="dnsmasq-dns" containerID="cri-o://7cd49cf4518b0ea942b229b274255151d8e9f64339d104b51c8bcf7aad882348" gracePeriod=10 Nov 28 07:15:40 crc kubenswrapper[4946]: I1128 07:15:40.606344 4946 generic.go:334] "Generic (PLEG): container finished" podID="1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a" containerID="7cd49cf4518b0ea942b229b274255151d8e9f64339d104b51c8bcf7aad882348" exitCode=0 Nov 28 07:15:40 crc kubenswrapper[4946]: I1128 07:15:40.606400 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b6bbf7467-w7p6s" event={"ID":"1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a","Type":"ContainerDied","Data":"7cd49cf4518b0ea942b229b274255151d8e9f64339d104b51c8bcf7aad882348"} Nov 28 07:15:46 crc kubenswrapper[4946]: I1128 07:15:46.501095 4946 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b6bbf7467-w7p6s" podUID="1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.124:5353: i/o timeout" Nov 28 07:15:48 crc kubenswrapper[4946]: I1128 07:15:48.695153 4946 generic.go:334] "Generic (PLEG): container finished" podID="97fa13e8-c7b4-4612-912c-2976861bce81" containerID="fc88598dd6761393b295b22d6e88c16126d2718d69bd736288f3b924ac60358d" exitCode=0 Nov 28 07:15:48 crc kubenswrapper[4946]: I1128 07:15:48.695245 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9p5hs" event={"ID":"97fa13e8-c7b4-4612-912c-2976861bce81","Type":"ContainerDied","Data":"fc88598dd6761393b295b22d6e88c16126d2718d69bd736288f3b924ac60358d"} Nov 28 07:15:51 crc kubenswrapper[4946]: I1128 07:15:51.502206 4946 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b6bbf7467-w7p6s" podUID="1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.124:5353: i/o timeout" Nov 28 07:15:52 crc kubenswrapper[4946]: I1128 07:15:52.060434 4946 scope.go:117] "RemoveContainer" containerID="217b73121230eedf6bc2104ad4dab0acd6ffb3e896c76f6fa3d4e9132d016e74" Nov 28 07:15:52 crc kubenswrapper[4946]: I1128 07:15:52.190827 4946 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b6bbf7467-w7p6s" Nov 28 07:15:52 crc kubenswrapper[4946]: I1128 07:15:52.222662 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a-ovsdbserver-sb\") pod \"1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a\" (UID: \"1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a\") " Nov 28 07:15:52 crc kubenswrapper[4946]: I1128 07:15:52.222784 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82p6n\" (UniqueName: \"kubernetes.io/projected/1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a-kube-api-access-82p6n\") pod \"1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a\" (UID: \"1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a\") " Nov 28 07:15:52 crc kubenswrapper[4946]: I1128 07:15:52.222838 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a-ovsdbserver-nb\") pod \"1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a\" (UID: \"1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a\") " Nov 28 07:15:52 crc kubenswrapper[4946]: I1128 07:15:52.222898 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a-dns-svc\") pod \"1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a\" (UID: \"1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a\") " Nov 28 07:15:52 crc kubenswrapper[4946]: I1128 07:15:52.223082 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a-config\") pod \"1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a\" (UID: \"1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a\") " Nov 28 07:15:52 crc kubenswrapper[4946]: I1128 07:15:52.223139 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a-dns-swift-storage-0\") pod \"1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a\" (UID: \"1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a\") " Nov 28 07:15:52 crc kubenswrapper[4946]: I1128 07:15:52.261604 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a-kube-api-access-82p6n" (OuterVolumeSpecName: "kube-api-access-82p6n") pod "1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a" (UID: "1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a"). InnerVolumeSpecName "kube-api-access-82p6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:15:52 crc kubenswrapper[4946]: I1128 07:15:52.283756 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a" (UID: "1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:15:52 crc kubenswrapper[4946]: I1128 07:15:52.295289 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a" (UID: "1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:15:52 crc kubenswrapper[4946]: I1128 07:15:52.295518 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a-config" (OuterVolumeSpecName: "config") pod "1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a" (UID: "1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:15:52 crc kubenswrapper[4946]: I1128 07:15:52.295628 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a" (UID: "1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:15:52 crc kubenswrapper[4946]: I1128 07:15:52.303684 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a" (UID: "1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:15:52 crc kubenswrapper[4946]: I1128 07:15:52.326618 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82p6n\" (UniqueName: \"kubernetes.io/projected/1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a-kube-api-access-82p6n\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:52 crc kubenswrapper[4946]: I1128 07:15:52.326659 4946 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:52 crc kubenswrapper[4946]: I1128 07:15:52.326672 4946 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:52 crc kubenswrapper[4946]: I1128 07:15:52.326682 4946 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:52 crc kubenswrapper[4946]: I1128 07:15:52.326693 4946 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:52 crc kubenswrapper[4946]: I1128 07:15:52.326702 4946 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:52 crc kubenswrapper[4946]: I1128 07:15:52.735213 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b6bbf7467-w7p6s" event={"ID":"1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a","Type":"ContainerDied","Data":"d1e11c781d2b65af34be06feebe590bdb655d9e8daed7c8ee33198544f1fc802"} Nov 28 07:15:52 crc kubenswrapper[4946]: I1128 07:15:52.735309 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b6bbf7467-w7p6s" Nov 28 07:15:52 crc kubenswrapper[4946]: I1128 07:15:52.774295 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b6bbf7467-w7p6s"] Nov 28 07:15:52 crc kubenswrapper[4946]: I1128 07:15:52.790925 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b6bbf7467-w7p6s"] Nov 28 07:15:52 crc kubenswrapper[4946]: E1128 07:15:52.864892 4946 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central@sha256:2051e26a441f1ce22aeb8daa0137559d89bded994db8141c11dd580ae6d07a23" Nov 28 07:15:52 crc kubenswrapper[4946]: E1128 07:15:52.865347 4946 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central@sha256:2051e26a441f1ce22aeb8daa0137559d89bded994db8141c11dd580ae6d07a23,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nbbhc7h665h546h575h64dhcdh57h695h59bh7fh698h687h5c9hfdhd6hb6h5cdh574h584h5d9h86hc9h598h66h5c8hfh559h97hdbh88h7q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5g8xr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(8310da02-e0c7-4dab-bc26-7139ca576c2c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 07:15:52 crc kubenswrapper[4946]: I1128 07:15:52.869371 4946 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Nov 28 07:15:52 crc kubenswrapper[4946]: I1128 07:15:52.885024 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-9p5hs" Nov 28 07:15:52 crc kubenswrapper[4946]: I1128 07:15:52.940329 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/97fa13e8-c7b4-4612-912c-2976861bce81-config\") pod \"97fa13e8-c7b4-4612-912c-2976861bce81\" (UID: \"97fa13e8-c7b4-4612-912c-2976861bce81\") " Nov 28 07:15:52 crc kubenswrapper[4946]: I1128 07:15:52.940840 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97fa13e8-c7b4-4612-912c-2976861bce81-combined-ca-bundle\") pod \"97fa13e8-c7b4-4612-912c-2976861bce81\" (UID: \"97fa13e8-c7b4-4612-912c-2976861bce81\") " Nov 28 07:15:52 crc kubenswrapper[4946]: I1128 07:15:52.941098 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-954wx\" (UniqueName: \"kubernetes.io/projected/97fa13e8-c7b4-4612-912c-2976861bce81-kube-api-access-954wx\") pod \"97fa13e8-c7b4-4612-912c-2976861bce81\" (UID: \"97fa13e8-c7b4-4612-912c-2976861bce81\") " Nov 28 07:15:52 crc kubenswrapper[4946]: I1128 07:15:52.946715 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97fa13e8-c7b4-4612-912c-2976861bce81-kube-api-access-954wx" (OuterVolumeSpecName: "kube-api-access-954wx") pod "97fa13e8-c7b4-4612-912c-2976861bce81" (UID: "97fa13e8-c7b4-4612-912c-2976861bce81"). InnerVolumeSpecName "kube-api-access-954wx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:15:53 crc kubenswrapper[4946]: I1128 07:15:53.007203 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97fa13e8-c7b4-4612-912c-2976861bce81-config" (OuterVolumeSpecName: "config") pod "97fa13e8-c7b4-4612-912c-2976861bce81" (UID: "97fa13e8-c7b4-4612-912c-2976861bce81"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:15:53 crc kubenswrapper[4946]: I1128 07:15:53.013967 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97fa13e8-c7b4-4612-912c-2976861bce81-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "97fa13e8-c7b4-4612-912c-2976861bce81" (UID: "97fa13e8-c7b4-4612-912c-2976861bce81"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:15:53 crc kubenswrapper[4946]: I1128 07:15:53.044056 4946 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/97fa13e8-c7b4-4612-912c-2976861bce81-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:53 crc kubenswrapper[4946]: I1128 07:15:53.044102 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97fa13e8-c7b4-4612-912c-2976861bce81-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:53 crc kubenswrapper[4946]: I1128 07:15:53.044119 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-954wx\" (UniqueName: \"kubernetes.io/projected/97fa13e8-c7b4-4612-912c-2976861bce81-kube-api-access-954wx\") on node \"crc\" DevicePath \"\"" Nov 28 07:15:53 crc kubenswrapper[4946]: I1128 07:15:53.749226 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9p5hs" event={"ID":"97fa13e8-c7b4-4612-912c-2976861bce81","Type":"ContainerDied","Data":"e9075c5decdb3425d24b6f806097a34c41ef6dfc68cae3e4911a14e504d1d3ab"} Nov 28 07:15:53 crc kubenswrapper[4946]: I1128 07:15:53.749519 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9075c5decdb3425d24b6f806097a34c41ef6dfc68cae3e4911a14e504d1d3ab" Nov 28 07:15:53 crc kubenswrapper[4946]: I1128 07:15:53.749325 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-9p5hs" Nov 28 07:15:54 crc kubenswrapper[4946]: I1128 07:15:54.011100 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a" path="/var/lib/kubelet/pods/1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a/volumes" Nov 28 07:15:54 crc kubenswrapper[4946]: I1128 07:15:54.168037 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f8dc44d89-xffzr"] Nov 28 07:15:54 crc kubenswrapper[4946]: E1128 07:15:54.168536 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a" containerName="init" Nov 28 07:15:54 crc kubenswrapper[4946]: I1128 07:15:54.168562 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a" containerName="init" Nov 28 07:15:54 crc kubenswrapper[4946]: E1128 07:15:54.168589 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a" containerName="dnsmasq-dns" Nov 28 07:15:54 crc kubenswrapper[4946]: I1128 07:15:54.168599 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a" containerName="dnsmasq-dns" Nov 28 07:15:54 crc kubenswrapper[4946]: E1128 07:15:54.168619 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97fa13e8-c7b4-4612-912c-2976861bce81" containerName="neutron-db-sync" Nov 28 07:15:54 crc kubenswrapper[4946]: I1128 07:15:54.168627 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="97fa13e8-c7b4-4612-912c-2976861bce81" containerName="neutron-db-sync" Nov 28 07:15:54 crc kubenswrapper[4946]: I1128 07:15:54.168861 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="97fa13e8-c7b4-4612-912c-2976861bce81" containerName="neutron-db-sync" Nov 28 07:15:54 crc kubenswrapper[4946]: I1128 07:15:54.168890 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a" containerName="dnsmasq-dns" Nov 28 07:15:54 
crc kubenswrapper[4946]: I1128 07:15:54.170164 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f8dc44d89-xffzr" Nov 28 07:15:54 crc kubenswrapper[4946]: I1128 07:15:54.210076 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f8dc44d89-xffzr"] Nov 28 07:15:54 crc kubenswrapper[4946]: I1128 07:15:54.261901 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-bbd4d5c56-h9gwc"] Nov 28 07:15:54 crc kubenswrapper[4946]: I1128 07:15:54.263962 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bbd4d5c56-h9gwc" Nov 28 07:15:54 crc kubenswrapper[4946]: I1128 07:15:54.267655 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 28 07:15:54 crc kubenswrapper[4946]: I1128 07:15:54.267922 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 28 07:15:54 crc kubenswrapper[4946]: I1128 07:15:54.268825 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-fpfpf" Nov 28 07:15:54 crc kubenswrapper[4946]: I1128 07:15:54.269512 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Nov 28 07:15:54 crc kubenswrapper[4946]: I1128 07:15:54.273602 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de236db3-71ed-4308-b783-aab4e13225b5-ovsdbserver-nb\") pod \"dnsmasq-dns-f8dc44d89-xffzr\" (UID: \"de236db3-71ed-4308-b783-aab4e13225b5\") " pod="openstack/dnsmasq-dns-f8dc44d89-xffzr" Nov 28 07:15:54 crc kubenswrapper[4946]: I1128 07:15:54.273946 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/de236db3-71ed-4308-b783-aab4e13225b5-dns-swift-storage-0\") pod \"dnsmasq-dns-f8dc44d89-xffzr\" (UID: \"de236db3-71ed-4308-b783-aab4e13225b5\") " pod="openstack/dnsmasq-dns-f8dc44d89-xffzr" Nov 28 07:15:54 crc kubenswrapper[4946]: I1128 07:15:54.273991 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmwf5\" (UniqueName: \"kubernetes.io/projected/de236db3-71ed-4308-b783-aab4e13225b5-kube-api-access-cmwf5\") pod \"dnsmasq-dns-f8dc44d89-xffzr\" (UID: \"de236db3-71ed-4308-b783-aab4e13225b5\") " pod="openstack/dnsmasq-dns-f8dc44d89-xffzr" Nov 28 07:15:54 crc kubenswrapper[4946]: I1128 07:15:54.274315 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de236db3-71ed-4308-b783-aab4e13225b5-config\") pod \"dnsmasq-dns-f8dc44d89-xffzr\" (UID: \"de236db3-71ed-4308-b783-aab4e13225b5\") " pod="openstack/dnsmasq-dns-f8dc44d89-xffzr" Nov 28 07:15:54 crc kubenswrapper[4946]: I1128 07:15:54.274480 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de236db3-71ed-4308-b783-aab4e13225b5-dns-svc\") pod \"dnsmasq-dns-f8dc44d89-xffzr\" (UID: \"de236db3-71ed-4308-b783-aab4e13225b5\") " pod="openstack/dnsmasq-dns-f8dc44d89-xffzr" Nov 28 07:15:54 crc kubenswrapper[4946]: I1128 07:15:54.274555 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/de236db3-71ed-4308-b783-aab4e13225b5-ovsdbserver-sb\") pod \"dnsmasq-dns-f8dc44d89-xffzr\" (UID: \"de236db3-71ed-4308-b783-aab4e13225b5\") " pod="openstack/dnsmasq-dns-f8dc44d89-xffzr" Nov 28 07:15:54 crc kubenswrapper[4946]: I1128 07:15:54.295938 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-bbd4d5c56-h9gwc"] Nov 28 07:15:54 crc kubenswrapper[4946]: I1128 07:15:54.376145 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9a8bc084-fa5e-4975-972d-9beb506babc1-config\") pod \"neutron-bbd4d5c56-h9gwc\" (UID: \"9a8bc084-fa5e-4975-972d-9beb506babc1\") " pod="openstack/neutron-bbd4d5c56-h9gwc" Nov 28 07:15:54 crc kubenswrapper[4946]: I1128 07:15:54.376202 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a8bc084-fa5e-4975-972d-9beb506babc1-combined-ca-bundle\") pod \"neutron-bbd4d5c56-h9gwc\" (UID: \"9a8bc084-fa5e-4975-972d-9beb506babc1\") " pod="openstack/neutron-bbd4d5c56-h9gwc" Nov 28 07:15:54 crc kubenswrapper[4946]: I1128 07:15:54.376243 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/de236db3-71ed-4308-b783-aab4e13225b5-dns-swift-storage-0\") pod \"dnsmasq-dns-f8dc44d89-xffzr\" (UID: \"de236db3-71ed-4308-b783-aab4e13225b5\") " pod="openstack/dnsmasq-dns-f8dc44d89-xffzr" Nov 28 07:15:54 crc kubenswrapper[4946]: I1128 07:15:54.376260 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmwf5\" (UniqueName: \"kubernetes.io/projected/de236db3-71ed-4308-b783-aab4e13225b5-kube-api-access-cmwf5\") pod \"dnsmasq-dns-f8dc44d89-xffzr\" (UID: \"de236db3-71ed-4308-b783-aab4e13225b5\") " pod="openstack/dnsmasq-dns-f8dc44d89-xffzr" Nov 28 07:15:54 crc kubenswrapper[4946]: I1128 07:15:54.376306 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de236db3-71ed-4308-b783-aab4e13225b5-config\") pod \"dnsmasq-dns-f8dc44d89-xffzr\" (UID: \"de236db3-71ed-4308-b783-aab4e13225b5\") " pod="openstack/dnsmasq-dns-f8dc44d89-xffzr" Nov 28 07:15:54 crc kubenswrapper[4946]: I1128 07:15:54.376343 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfqgr\" (UniqueName: \"kubernetes.io/projected/9a8bc084-fa5e-4975-972d-9beb506babc1-kube-api-access-xfqgr\") pod \"neutron-bbd4d5c56-h9gwc\" (UID: \"9a8bc084-fa5e-4975-972d-9beb506babc1\") " pod="openstack/neutron-bbd4d5c56-h9gwc" Nov 28 07:15:54 crc kubenswrapper[4946]: I1128 07:15:54.376369 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9a8bc084-fa5e-4975-972d-9beb506babc1-httpd-config\") pod \"neutron-bbd4d5c56-h9gwc\" (UID: \"9a8bc084-fa5e-4975-972d-9beb506babc1\") " pod="openstack/neutron-bbd4d5c56-h9gwc" Nov 28 07:15:54 crc kubenswrapper[4946]: I1128 07:15:54.376394 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de236db3-71ed-4308-b783-aab4e13225b5-dns-svc\") pod \"dnsmasq-dns-f8dc44d89-xffzr\" (UID: \"de236db3-71ed-4308-b783-aab4e13225b5\") " pod="openstack/dnsmasq-dns-f8dc44d89-xffzr" Nov 28 07:15:54 crc kubenswrapper[4946]: I1128 
07:15:54.376413 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a8bc084-fa5e-4975-972d-9beb506babc1-ovndb-tls-certs\") pod \"neutron-bbd4d5c56-h9gwc\" (UID: \"9a8bc084-fa5e-4975-972d-9beb506babc1\") " pod="openstack/neutron-bbd4d5c56-h9gwc" Nov 28 07:15:54 crc kubenswrapper[4946]: I1128 07:15:54.376431 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de236db3-71ed-4308-b783-aab4e13225b5-ovsdbserver-sb\") pod \"dnsmasq-dns-f8dc44d89-xffzr\" (UID: \"de236db3-71ed-4308-b783-aab4e13225b5\") " pod="openstack/dnsmasq-dns-f8dc44d89-xffzr" Nov 28 07:15:54 crc kubenswrapper[4946]: I1128 07:15:54.376486 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de236db3-71ed-4308-b783-aab4e13225b5-ovsdbserver-nb\") pod \"dnsmasq-dns-f8dc44d89-xffzr\" (UID: \"de236db3-71ed-4308-b783-aab4e13225b5\") " pod="openstack/dnsmasq-dns-f8dc44d89-xffzr" Nov 28 07:15:54 crc kubenswrapper[4946]: I1128 07:15:54.377286 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de236db3-71ed-4308-b783-aab4e13225b5-ovsdbserver-nb\") pod \"dnsmasq-dns-f8dc44d89-xffzr\" (UID: \"de236db3-71ed-4308-b783-aab4e13225b5\") " pod="openstack/dnsmasq-dns-f8dc44d89-xffzr" Nov 28 07:15:54 crc kubenswrapper[4946]: I1128 07:15:54.377860 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/de236db3-71ed-4308-b783-aab4e13225b5-dns-swift-storage-0\") pod \"dnsmasq-dns-f8dc44d89-xffzr\" (UID: \"de236db3-71ed-4308-b783-aab4e13225b5\") " pod="openstack/dnsmasq-dns-f8dc44d89-xffzr" Nov 28 07:15:54 crc kubenswrapper[4946]: I1128 07:15:54.378829 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de236db3-71ed-4308-b783-aab4e13225b5-config\") pod \"dnsmasq-dns-f8dc44d89-xffzr\" (UID: \"de236db3-71ed-4308-b783-aab4e13225b5\") " pod="openstack/dnsmasq-dns-f8dc44d89-xffzr" Nov 28 07:15:54 crc kubenswrapper[4946]: I1128 07:15:54.379491 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de236db3-71ed-4308-b783-aab4e13225b5-dns-svc\") pod \"dnsmasq-dns-f8dc44d89-xffzr\" (UID: \"de236db3-71ed-4308-b783-aab4e13225b5\") " pod="openstack/dnsmasq-dns-f8dc44d89-xffzr" Nov 28 07:15:54 crc kubenswrapper[4946]: I1128 07:15:54.380036 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de236db3-71ed-4308-b783-aab4e13225b5-ovsdbserver-sb\") pod \"dnsmasq-dns-f8dc44d89-xffzr\" (UID: \"de236db3-71ed-4308-b783-aab4e13225b5\") " pod="openstack/dnsmasq-dns-f8dc44d89-xffzr" Nov 28 07:15:54 crc kubenswrapper[4946]: I1128 07:15:54.402311 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmwf5\" (UniqueName: \"kubernetes.io/projected/de236db3-71ed-4308-b783-aab4e13225b5-kube-api-access-cmwf5\") pod \"dnsmasq-dns-f8dc44d89-xffzr\" (UID: \"de236db3-71ed-4308-b783-aab4e13225b5\") " pod="openstack/dnsmasq-dns-f8dc44d89-xffzr" Nov 28 07:15:54 crc kubenswrapper[4946]: I1128 07:15:54.480807 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-xfqgr\" (UniqueName: \"kubernetes.io/projected/9a8bc084-fa5e-4975-972d-9beb506babc1-kube-api-access-xfqgr\") pod \"neutron-bbd4d5c56-h9gwc\" (UID: \"9a8bc084-fa5e-4975-972d-9beb506babc1\") " pod="openstack/neutron-bbd4d5c56-h9gwc" Nov 28 07:15:54 crc kubenswrapper[4946]: I1128 07:15:54.481124 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9a8bc084-fa5e-4975-972d-9beb506babc1-httpd-config\") pod \"neutron-bbd4d5c56-h9gwc\" (UID: \"9a8bc084-fa5e-4975-972d-9beb506babc1\") " pod="openstack/neutron-bbd4d5c56-h9gwc" Nov 28 07:15:54 crc kubenswrapper[4946]: I1128 07:15:54.481160 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a8bc084-fa5e-4975-972d-9beb506babc1-ovndb-tls-certs\") pod \"neutron-bbd4d5c56-h9gwc\" (UID: \"9a8bc084-fa5e-4975-972d-9beb506babc1\") " pod="openstack/neutron-bbd4d5c56-h9gwc" Nov 28 07:15:54 crc kubenswrapper[4946]: I1128 07:15:54.481395 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9a8bc084-fa5e-4975-972d-9beb506babc1-config\") pod \"neutron-bbd4d5c56-h9gwc\" (UID: \"9a8bc084-fa5e-4975-972d-9beb506babc1\") " pod="openstack/neutron-bbd4d5c56-h9gwc" Nov 28 07:15:54 crc kubenswrapper[4946]: I1128 07:15:54.481416 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a8bc084-fa5e-4975-972d-9beb506babc1-combined-ca-bundle\") pod \"neutron-bbd4d5c56-h9gwc\" (UID: \"9a8bc084-fa5e-4975-972d-9beb506babc1\") " pod="openstack/neutron-bbd4d5c56-h9gwc" Nov 28 07:15:54 crc kubenswrapper[4946]: I1128 07:15:54.489521 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9a8bc084-fa5e-4975-972d-9beb506babc1-httpd-config\") pod \"neutron-bbd4d5c56-h9gwc\" (UID: \"9a8bc084-fa5e-4975-972d-9beb506babc1\") " pod="openstack/neutron-bbd4d5c56-h9gwc" Nov 28 07:15:54 crc kubenswrapper[4946]: I1128 07:15:54.490238 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a8bc084-fa5e-4975-972d-9beb506babc1-combined-ca-bundle\") pod \"neutron-bbd4d5c56-h9gwc\" (UID: \"9a8bc084-fa5e-4975-972d-9beb506babc1\") " pod="openstack/neutron-bbd4d5c56-h9gwc" Nov 28 07:15:54 crc kubenswrapper[4946]: I1128 07:15:54.494998 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/9a8bc084-fa5e-4975-972d-9beb506babc1-config\") pod \"neutron-bbd4d5c56-h9gwc\" (UID: \"9a8bc084-fa5e-4975-972d-9beb506babc1\") " pod="openstack/neutron-bbd4d5c56-h9gwc" Nov 28 07:15:54 crc kubenswrapper[4946]: I1128 07:15:54.498176 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a8bc084-fa5e-4975-972d-9beb506babc1-ovndb-tls-certs\") pod \"neutron-bbd4d5c56-h9gwc\" (UID: \"9a8bc084-fa5e-4975-972d-9beb506babc1\") " pod="openstack/neutron-bbd4d5c56-h9gwc" Nov 28 07:15:54 crc kubenswrapper[4946]: I1128 07:15:54.508732 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f8dc44d89-xffzr" Nov 28 07:15:54 crc kubenswrapper[4946]: I1128 07:15:54.530564 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfqgr\" (UniqueName: \"kubernetes.io/projected/9a8bc084-fa5e-4975-972d-9beb506babc1-kube-api-access-xfqgr\") pod \"neutron-bbd4d5c56-h9gwc\" (UID: \"9a8bc084-fa5e-4975-972d-9beb506babc1\") " pod="openstack/neutron-bbd4d5c56-h9gwc" Nov 28 07:15:54 crc kubenswrapper[4946]: I1128 07:15:54.567987 4946 scope.go:117] "RemoveContainer" containerID="e839daac9174d1eb9d5063219d04f6c22be7758193fb8693f7b09a6392c2aadd" Nov 28 07:15:54 crc kubenswrapper[4946]: I1128 07:15:54.588259 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bbd4d5c56-h9gwc" Nov 28 07:15:54 crc kubenswrapper[4946]: E1128 07:15:54.642476 4946 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b5266c9a26766fce2b92f95dff52d362a760f7baf1474cdcb33bd68570e096c0" Nov 28 07:15:54 crc kubenswrapper[4946]: E1128 07:15:54.642848 4946 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b5266c9a26766fce2b92f95dff52d362a760f7baf1474cdcb33bd68570e096c0,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m5z6v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},
TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-8wk8q_openstack(8d0d856a-c2e4-4ccf-adfb-70391210f8d9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 07:15:54 crc kubenswrapper[4946]: E1128 07:15:54.643969 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-8wk8q" podUID="8d0d856a-c2e4-4ccf-adfb-70391210f8d9" Nov 28 07:15:54 crc kubenswrapper[4946]: I1128 07:15:54.674368 4946 scope.go:117] "RemoveContainer" containerID="4d4b0729302163f9765842d3c8ebafc303b6afa3579e50271b80e282415492ba" Nov 28 07:15:54 crc kubenswrapper[4946]: I1128 07:15:54.731763 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 07:15:54 crc kubenswrapper[4946]: I1128 07:15:54.731839 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 07:15:54 crc kubenswrapper[4946]: I1128 07:15:54.735219 4946 scope.go:117] "RemoveContainer" containerID="7cd49cf4518b0ea942b229b274255151d8e9f64339d104b51c8bcf7aad882348" Nov 28 07:15:54 crc kubenswrapper[4946]: I1128 07:15:54.806125 4946 scope.go:117] "RemoveContainer" containerID="5ddbb291fca54747893c71ebe000f837f96598a320bce3b00dd58884b9fe427d" Nov 28 07:15:54 crc kubenswrapper[4946]: E1128 07:15:54.812091 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b5266c9a26766fce2b92f95dff52d362a760f7baf1474cdcb33bd68570e096c0\\\"\"" pod="openstack/cinder-db-sync-8wk8q" podUID="8d0d856a-c2e4-4ccf-adfb-70391210f8d9" Nov 28 07:15:55 crc kubenswrapper[4946]: I1128 07:15:55.065420 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-djhf5"] Nov 28 07:15:55 crc kubenswrapper[4946]: I1128 07:15:55.277528 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f8dc44d89-xffzr"] Nov 28 07:15:55 crc kubenswrapper[4946]: I1128 07:15:55.374742 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 07:15:55 crc kubenswrapper[4946]: I1128 07:15:55.412181 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lpcd7"] Nov 28 07:15:55 crc kubenswrapper[4946]: I1128 07:15:55.415710 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lpcd7" Nov 28 07:15:55 crc kubenswrapper[4946]: I1128 07:15:55.465732 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lpcd7"] Nov 28 07:15:55 crc kubenswrapper[4946]: I1128 07:15:55.504739 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh22g\" (UniqueName: \"kubernetes.io/projected/b9bec43f-4374-47e6-968e-858a39bfa527-kube-api-access-zh22g\") pod \"certified-operators-lpcd7\" (UID: \"b9bec43f-4374-47e6-968e-858a39bfa527\") " pod="openshift-marketplace/certified-operators-lpcd7" Nov 28 07:15:55 crc kubenswrapper[4946]: I1128 07:15:55.504782 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9bec43f-4374-47e6-968e-858a39bfa527-utilities\") pod \"certified-operators-lpcd7\" (UID: \"b9bec43f-4374-47e6-968e-858a39bfa527\") " pod="openshift-marketplace/certified-operators-lpcd7" Nov 28 07:15:55 crc kubenswrapper[4946]: I1128 07:15:55.504902 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9bec43f-4374-47e6-968e-858a39bfa527-catalog-content\") pod \"certified-operators-lpcd7\" (UID: \"b9bec43f-4374-47e6-968e-858a39bfa527\") " pod="openshift-marketplace/certified-operators-lpcd7" Nov 28 07:15:55 crc kubenswrapper[4946]: I1128 07:15:55.565221 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-bbd4d5c56-h9gwc"] Nov 28 07:15:55 crc kubenswrapper[4946]: I1128 07:15:55.606802 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9bec43f-4374-47e6-968e-858a39bfa527-catalog-content\") pod \"certified-operators-lpcd7\" (UID: \"b9bec43f-4374-47e6-968e-858a39bfa527\") " pod="openshift-marketplace/certified-operators-lpcd7" Nov 28 07:15:55 crc kubenswrapper[4946]: I1128 07:15:55.606932 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh22g\" (UniqueName: \"kubernetes.io/projected/b9bec43f-4374-47e6-968e-858a39bfa527-kube-api-access-zh22g\") pod \"certified-operators-lpcd7\" (UID: \"b9bec43f-4374-47e6-968e-858a39bfa527\") " pod="openshift-marketplace/certified-operators-lpcd7" Nov 28 07:15:55 crc kubenswrapper[4946]: I1128 07:15:55.606954 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9bec43f-4374-47e6-968e-858a39bfa527-utilities\") pod \"certified-operators-lpcd7\" (UID: \"b9bec43f-4374-47e6-968e-858a39bfa527\") " pod="openshift-marketplace/certified-operators-lpcd7" Nov 28 07:15:55 crc kubenswrapper[4946]: I1128 07:15:55.607241 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9bec43f-4374-47e6-968e-858a39bfa527-catalog-content\") pod \"certified-operators-lpcd7\" (UID: \"b9bec43f-4374-47e6-968e-858a39bfa527\") " pod="openshift-marketplace/certified-operators-lpcd7" Nov 28 07:15:55 crc kubenswrapper[4946]: I1128 07:15:55.607384 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9bec43f-4374-47e6-968e-858a39bfa527-utilities\") pod \"certified-operators-lpcd7\" (UID: 
\"b9bec43f-4374-47e6-968e-858a39bfa527\") " pod="openshift-marketplace/certified-operators-lpcd7" Nov 28 07:15:55 crc kubenswrapper[4946]: I1128 07:15:55.642397 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh22g\" (UniqueName: \"kubernetes.io/projected/b9bec43f-4374-47e6-968e-858a39bfa527-kube-api-access-zh22g\") pod \"certified-operators-lpcd7\" (UID: \"b9bec43f-4374-47e6-968e-858a39bfa527\") " pod="openshift-marketplace/certified-operators-lpcd7" Nov 28 07:15:55 crc kubenswrapper[4946]: I1128 07:15:55.789236 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lpcd7" Nov 28 07:15:55 crc kubenswrapper[4946]: I1128 07:15:55.818305 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-cbhjz" event={"ID":"a4229e67-36fc-40ba-8d90-af5b0d95743f","Type":"ContainerStarted","Data":"ac71999f7195041c7350010409e92caba1128a1523d424f3e5e2d979a06edf34"} Nov 28 07:15:55 crc kubenswrapper[4946]: I1128 07:15:55.824824 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-djhf5" event={"ID":"000349d5-5671-46a1-b1d2-1954a9facc3e","Type":"ContainerStarted","Data":"0f5395605e2863d1e20f2cb9839024fbafbfd2d53603e3ecacdf196ff0806098"} Nov 28 07:15:55 crc kubenswrapper[4946]: I1128 07:15:55.824875 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-djhf5" event={"ID":"000349d5-5671-46a1-b1d2-1954a9facc3e","Type":"ContainerStarted","Data":"eed8198dd99ab1d2c78a70b9b6047479aef2bb9257551a555accdcdc44f83a4b"} Nov 28 07:15:55 crc kubenswrapper[4946]: I1128 07:15:55.826986 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-l7pvv" event={"ID":"0cbfc817-e82c-436d-bc3a-6a3a94ee82e8","Type":"ContainerStarted","Data":"8e951a45b4b9ae985f7ede9827a6d2b21c5d9745ebabcd97342388504f3849c8"} Nov 28 07:15:55 crc kubenswrapper[4946]: I1128 07:15:55.855803 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-cbhjz" podStartSLOduration=5.990984129 podStartE2EDuration="27.855776764s" podCreationTimestamp="2025-11-28 07:15:28 +0000 UTC" firstStartedPulling="2025-11-28 07:15:30.234571784 +0000 UTC m=+1384.612636895" lastFinishedPulling="2025-11-28 07:15:52.099364409 +0000 UTC m=+1406.477429530" observedRunningTime="2025-11-28 07:15:55.83808279 +0000 UTC m=+1410.216147901" watchObservedRunningTime="2025-11-28 07:15:55.855776764 +0000 UTC m=+1410.233841885" Nov 28 07:15:55 crc kubenswrapper[4946]: I1128 07:15:55.861801 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-l7pvv" podStartSLOduration=3.356756723 podStartE2EDuration="27.861781041s" podCreationTimestamp="2025-11-28 07:15:28 +0000 UTC" firstStartedPulling="2025-11-28 07:15:29.997629815 +0000 UTC m=+1384.375694926" lastFinishedPulling="2025-11-28 07:15:54.502654143 +0000 UTC m=+1408.880719244" observedRunningTime="2025-11-28 07:15:55.852979525 +0000 UTC m=+1410.231044636" watchObservedRunningTime="2025-11-28 07:15:55.861781041 +0000 UTC m=+1410.239846152" Nov 28 07:15:55 crc kubenswrapper[4946]: I1128 07:15:55.880862 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-djhf5" podStartSLOduration=19.880843248 podStartE2EDuration="19.880843248s" podCreationTimestamp="2025-11-28 07:15:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:15:55.876097842 +0000 UTC m=+1410.254162953" watchObservedRunningTime="2025-11-28 07:15:55.880843248 +0000 UTC m=+1410.258908349" Nov 28 07:15:56 crc kubenswrapper[4946]: I1128 07:15:56.236039 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 07:15:56 crc kubenswrapper[4946]: I1128 07:15:56.510692 4946 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b6bbf7467-w7p6s" podUID="1a08a8da-4cf6-4aeb-aa37-a8ef59b4307a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.124:5353: i/o timeout" Nov 28 07:15:56 crc kubenswrapper[4946]: I1128 07:15:56.856809 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f8dc44d89-xffzr" event={"ID":"de236db3-71ed-4308-b783-aab4e13225b5","Type":"ContainerStarted","Data":"f7822fbbff9ab1de4e040d30b74c78746be0bfb171266e7223c1139875e63371"} Nov 28 07:15:56 crc kubenswrapper[4946]: I1128 07:15:56.858963 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1f7498cb-2048-4c04-b1ce-50a236db88a6","Type":"ContainerStarted","Data":"e21836b37220847731c77e60896743728b9129f1309dff4c359c55d25d676eec"} Nov 28 07:15:56 crc kubenswrapper[4946]: I1128 07:15:56.868896 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6bf0f217-858e-49f9-8730-7376f77c6d4f","Type":"ContainerStarted","Data":"a94ef9f9dcc0ff7b1f98598d34efd23b035470f5d2eb68d0179378a644cc09f6"} Nov 28 07:15:56 crc kubenswrapper[4946]: I1128 07:15:56.883953 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bbd4d5c56-h9gwc" event={"ID":"9a8bc084-fa5e-4975-972d-9beb506babc1","Type":"ContainerStarted","Data":"d120a6c789d441af9598ce92474c7e9a0116e5f4611a0232e22d939ca4af4a3b"} Nov 28 07:15:56 crc kubenswrapper[4946]: I1128 07:15:56.967543 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lpcd7"] Nov 28 07:15:57 crc kubenswrapper[4946]: I1128 07:15:57.348110 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-777c4856b5-mgnhk"] Nov 28 07:15:57 crc kubenswrapper[4946]: I1128 07:15:57.362607 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-777c4856b5-mgnhk"] Nov 28 07:15:57 crc kubenswrapper[4946]: I1128 07:15:57.362703 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-777c4856b5-mgnhk" Nov 28 07:15:57 crc kubenswrapper[4946]: I1128 07:15:57.364881 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 28 07:15:57 crc kubenswrapper[4946]: I1128 07:15:57.365402 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 28 07:15:57 crc kubenswrapper[4946]: I1128 07:15:57.464553 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1578c84-1d87-41b2-bfa7-637c3b53366f-internal-tls-certs\") pod \"neutron-777c4856b5-mgnhk\" (UID: \"d1578c84-1d87-41b2-bfa7-637c3b53366f\") " pod="openstack/neutron-777c4856b5-mgnhk" Nov 28 07:15:57 crc kubenswrapper[4946]: I1128 07:15:57.465422 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1578c84-1d87-41b2-bfa7-637c3b53366f-public-tls-certs\") pod \"neutron-777c4856b5-mgnhk\" (UID: \"d1578c84-1d87-41b2-bfa7-637c3b53366f\") " pod="openstack/neutron-777c4856b5-mgnhk" Nov 28 07:15:57 crc kubenswrapper[4946]: I1128 07:15:57.465554 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1578c84-1d87-41b2-bfa7-637c3b53366f-combined-ca-bundle\") pod \"neutron-777c4856b5-mgnhk\" (UID: \"d1578c84-1d87-41b2-bfa7-637c3b53366f\") " pod="openstack/neutron-777c4856b5-mgnhk" Nov 28 07:15:57 crc kubenswrapper[4946]: I1128 07:15:57.465660 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4rxb\" (UniqueName: \"kubernetes.io/projected/d1578c84-1d87-41b2-bfa7-637c3b53366f-kube-api-access-d4rxb\") pod \"neutron-777c4856b5-mgnhk\" (UID: \"d1578c84-1d87-41b2-bfa7-637c3b53366f\") " pod="openstack/neutron-777c4856b5-mgnhk" Nov 28 07:15:57 crc kubenswrapper[4946]: I1128 07:15:57.465715 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d1578c84-1d87-41b2-bfa7-637c3b53366f-config\") pod \"neutron-777c4856b5-mgnhk\" (UID: \"d1578c84-1d87-41b2-bfa7-637c3b53366f\") " pod="openstack/neutron-777c4856b5-mgnhk" Nov 28 07:15:57 crc kubenswrapper[4946]: I1128 07:15:57.465848 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d1578c84-1d87-41b2-bfa7-637c3b53366f-httpd-config\") pod \"neutron-777c4856b5-mgnhk\" (UID: \"d1578c84-1d87-41b2-bfa7-637c3b53366f\") " pod="openstack/neutron-777c4856b5-mgnhk" Nov 28 07:15:57 crc kubenswrapper[4946]: I1128 07:15:57.465896 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1578c84-1d87-41b2-bfa7-637c3b53366f-ovndb-tls-certs\") pod \"neutron-777c4856b5-mgnhk\" (UID: \"d1578c84-1d87-41b2-bfa7-637c3b53366f\") " pod="openstack/neutron-777c4856b5-mgnhk" Nov 28 07:15:57 crc kubenswrapper[4946]: I1128 07:15:57.568333 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1578c84-1d87-41b2-bfa7-637c3b53366f-public-tls-certs\") pod \"neutron-777c4856b5-mgnhk\" (UID: \"d1578c84-1d87-41b2-bfa7-637c3b53366f\") " 
pod="openstack/neutron-777c4856b5-mgnhk" Nov 28 07:15:57 crc kubenswrapper[4946]: I1128 07:15:57.568450 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1578c84-1d87-41b2-bfa7-637c3b53366f-combined-ca-bundle\") pod \"neutron-777c4856b5-mgnhk\" (UID: \"d1578c84-1d87-41b2-bfa7-637c3b53366f\") " pod="openstack/neutron-777c4856b5-mgnhk" Nov 28 07:15:57 crc kubenswrapper[4946]: I1128 07:15:57.568510 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4rxb\" (UniqueName: \"kubernetes.io/projected/d1578c84-1d87-41b2-bfa7-637c3b53366f-kube-api-access-d4rxb\") pod \"neutron-777c4856b5-mgnhk\" (UID: \"d1578c84-1d87-41b2-bfa7-637c3b53366f\") " pod="openstack/neutron-777c4856b5-mgnhk" Nov 28 07:15:57 crc kubenswrapper[4946]: I1128 07:15:57.568546 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d1578c84-1d87-41b2-bfa7-637c3b53366f-config\") pod \"neutron-777c4856b5-mgnhk\" (UID: \"d1578c84-1d87-41b2-bfa7-637c3b53366f\") " pod="openstack/neutron-777c4856b5-mgnhk" Nov 28 07:15:57 crc kubenswrapper[4946]: I1128 07:15:57.568641 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d1578c84-1d87-41b2-bfa7-637c3b53366f-httpd-config\") pod \"neutron-777c4856b5-mgnhk\" (UID: \"d1578c84-1d87-41b2-bfa7-637c3b53366f\") " pod="openstack/neutron-777c4856b5-mgnhk" Nov 28 07:15:57 crc kubenswrapper[4946]: I1128 07:15:57.568676 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1578c84-1d87-41b2-bfa7-637c3b53366f-ovndb-tls-certs\") pod \"neutron-777c4856b5-mgnhk\" (UID: \"d1578c84-1d87-41b2-bfa7-637c3b53366f\") " pod="openstack/neutron-777c4856b5-mgnhk" Nov 28 07:15:57 crc kubenswrapper[4946]: I1128 07:15:57.568709 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1578c84-1d87-41b2-bfa7-637c3b53366f-internal-tls-certs\") pod \"neutron-777c4856b5-mgnhk\" (UID: \"d1578c84-1d87-41b2-bfa7-637c3b53366f\") " pod="openstack/neutron-777c4856b5-mgnhk" Nov 28 07:15:57 crc kubenswrapper[4946]: I1128 07:15:57.577816 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1578c84-1d87-41b2-bfa7-637c3b53366f-ovndb-tls-certs\") pod \"neutron-777c4856b5-mgnhk\" (UID: \"d1578c84-1d87-41b2-bfa7-637c3b53366f\") " pod="openstack/neutron-777c4856b5-mgnhk" Nov 28 07:15:57 crc kubenswrapper[4946]: I1128 07:15:57.578186 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d1578c84-1d87-41b2-bfa7-637c3b53366f-httpd-config\") pod \"neutron-777c4856b5-mgnhk\" (UID: \"d1578c84-1d87-41b2-bfa7-637c3b53366f\") " pod="openstack/neutron-777c4856b5-mgnhk" Nov 28 07:15:57 crc kubenswrapper[4946]: I1128 07:15:57.583746 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1578c84-1d87-41b2-bfa7-637c3b53366f-public-tls-certs\") pod \"neutron-777c4856b5-mgnhk\" (UID: \"d1578c84-1d87-41b2-bfa7-637c3b53366f\") " pod="openstack/neutron-777c4856b5-mgnhk" Nov 28 07:15:57 crc kubenswrapper[4946]: I1128 07:15:57.585798 4946 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d1578c84-1d87-41b2-bfa7-637c3b53366f-config\") pod \"neutron-777c4856b5-mgnhk\" (UID: \"d1578c84-1d87-41b2-bfa7-637c3b53366f\") " pod="openstack/neutron-777c4856b5-mgnhk" Nov 28 07:15:57 crc kubenswrapper[4946]: I1128 07:15:57.588655 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4rxb\" (UniqueName: \"kubernetes.io/projected/d1578c84-1d87-41b2-bfa7-637c3b53366f-kube-api-access-d4rxb\") pod \"neutron-777c4856b5-mgnhk\" (UID: \"d1578c84-1d87-41b2-bfa7-637c3b53366f\") " pod="openstack/neutron-777c4856b5-mgnhk" Nov 28 07:15:57 crc kubenswrapper[4946]: I1128 07:15:57.592728 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1578c84-1d87-41b2-bfa7-637c3b53366f-internal-tls-certs\") pod \"neutron-777c4856b5-mgnhk\" (UID: \"d1578c84-1d87-41b2-bfa7-637c3b53366f\") " pod="openstack/neutron-777c4856b5-mgnhk" Nov 28 07:15:57 crc kubenswrapper[4946]: I1128 07:15:57.596725 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1578c84-1d87-41b2-bfa7-637c3b53366f-combined-ca-bundle\") pod \"neutron-777c4856b5-mgnhk\" (UID: \"d1578c84-1d87-41b2-bfa7-637c3b53366f\") " pod="openstack/neutron-777c4856b5-mgnhk" Nov 28 07:15:57 crc kubenswrapper[4946]: I1128 07:15:57.698696 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-777c4856b5-mgnhk" Nov 28 07:15:57 crc kubenswrapper[4946]: I1128 07:15:57.892320 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lpcd7" event={"ID":"b9bec43f-4374-47e6-968e-858a39bfa527","Type":"ContainerStarted","Data":"562964e86d74d3b02ee17bf1342fba7afbe650fe5ebc18f1be5e2a77aea4e1a0"} Nov 28 07:15:58 crc kubenswrapper[4946]: I1128 07:15:58.287838 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-777c4856b5-mgnhk"] Nov 28 07:15:58 crc kubenswrapper[4946]: I1128 07:15:58.901097 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-777c4856b5-mgnhk" event={"ID":"d1578c84-1d87-41b2-bfa7-637c3b53366f","Type":"ContainerStarted","Data":"f93b507d96c3a2691c211c317621d608fd981f14cc9e4fa676c3d903d95671ae"} Nov 28 07:15:59 crc kubenswrapper[4946]: I1128 07:15:59.919765 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1f7498cb-2048-4c04-b1ce-50a236db88a6","Type":"ContainerStarted","Data":"b8f930faf3c00d4c11e4852daae6d3873a790d09d2e2b47665d2dcb7fd73ec0c"} Nov 28 07:15:59 crc kubenswrapper[4946]: I1128 07:15:59.924258 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-777c4856b5-mgnhk" event={"ID":"d1578c84-1d87-41b2-bfa7-637c3b53366f","Type":"ContainerStarted","Data":"1b4c77565cb683565551995a4fa5e6f14515e5125068ec275e329eaccc6d274a"} Nov 28 07:15:59 crc kubenswrapper[4946]: I1128 07:15:59.924314 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-777c4856b5-mgnhk" event={"ID":"d1578c84-1d87-41b2-bfa7-637c3b53366f","Type":"ContainerStarted","Data":"a520f59984f4e2b3696712434dc811010686939bd64077205d50d0ba4e29000f"} Nov 28 07:15:59 crc kubenswrapper[4946]: I1128 07:15:59.924417 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-777c4856b5-mgnhk" Nov 28 07:15:59 crc kubenswrapper[4946]: I1128 07:15:59.926715 4946 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6bf0f217-858e-49f9-8730-7376f77c6d4f","Type":"ContainerStarted","Data":"528448ade13d44b6e5c0e461e67cc76aee6c8f3216d6cef11cdf76b3702e2960"} Nov 28 07:15:59 crc kubenswrapper[4946]: I1128 07:15:59.930422 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bbd4d5c56-h9gwc" event={"ID":"9a8bc084-fa5e-4975-972d-9beb506babc1","Type":"ContainerStarted","Data":"065455f92a5371e9320f2881f9f4e8ad18f6c24f2a8ffb2a70fa6ce0026a58f6"} Nov 28 07:15:59 crc kubenswrapper[4946]: I1128 07:15:59.930453 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bbd4d5c56-h9gwc" event={"ID":"9a8bc084-fa5e-4975-972d-9beb506babc1","Type":"ContainerStarted","Data":"26bde06bf1c10c3c0f9429e58a4d550cc56eee29e9b26fa9be3a3166679b520d"} Nov 28 07:15:59 crc kubenswrapper[4946]: I1128 07:15:59.930612 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-bbd4d5c56-h9gwc" Nov 28 07:15:59 crc kubenswrapper[4946]: I1128 07:15:59.933447 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8310da02-e0c7-4dab-bc26-7139ca576c2c","Type":"ContainerStarted","Data":"f4240250529f6e7ff2ca469668b93e5b7f799dc70a5f6bfd1d23fc4836a7ef0b"} Nov 28 07:15:59 crc kubenswrapper[4946]: I1128 07:15:59.934880 4946 generic.go:334] "Generic (PLEG): container finished" podID="de236db3-71ed-4308-b783-aab4e13225b5" containerID="f5677399d8071635fd901f66ea39012572a4a6378ffd8fb45891d25f4bc1e80a" exitCode=0 Nov 28 07:15:59 crc kubenswrapper[4946]: I1128 07:15:59.934943 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f8dc44d89-xffzr" event={"ID":"de236db3-71ed-4308-b783-aab4e13225b5","Type":"ContainerDied","Data":"f5677399d8071635fd901f66ea39012572a4a6378ffd8fb45891d25f4bc1e80a"} Nov 28 07:15:59 crc kubenswrapper[4946]: I1128 07:15:59.936372 4946 generic.go:334] "Generic (PLEG): container finished" podID="b9bec43f-4374-47e6-968e-858a39bfa527" containerID="cca5dc3fdfd20cbb0d08b4649f9aca583e073316daade6a505f6df5453ac58d2" exitCode=0 Nov 28 07:15:59 crc kubenswrapper[4946]: I1128 07:15:59.936417 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lpcd7" event={"ID":"b9bec43f-4374-47e6-968e-858a39bfa527","Type":"ContainerDied","Data":"cca5dc3fdfd20cbb0d08b4649f9aca583e073316daade6a505f6df5453ac58d2"} Nov 28 07:15:59 crc kubenswrapper[4946]: I1128 07:15:59.956541 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-777c4856b5-mgnhk" podStartSLOduration=2.956517682 podStartE2EDuration="2.956517682s" podCreationTimestamp="2025-11-28 07:15:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:15:59.946413274 +0000 UTC m=+1414.324478385" watchObservedRunningTime="2025-11-28 07:15:59.956517682 +0000 UTC m=+1414.334582793" Nov 28 07:16:00 crc kubenswrapper[4946]: I1128 07:16:00.024907 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-bbd4d5c56-h9gwc" podStartSLOduration=6.024880248 podStartE2EDuration="6.024880248s" podCreationTimestamp="2025-11-28 07:15:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:16:00.010842224 +0000 UTC m=+1414.388907345" watchObservedRunningTime="2025-11-28 
07:16:00.024880248 +0000 UTC m=+1414.402945359" Nov 28 07:16:00 crc kubenswrapper[4946]: I1128 07:16:00.954004 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1f7498cb-2048-4c04-b1ce-50a236db88a6","Type":"ContainerStarted","Data":"b8582a0aeb08beaedc43c27127396d887ec2f6515c784b520e60cb8b5aec7db4"} Nov 28 07:16:00 crc kubenswrapper[4946]: I1128 07:16:00.956388 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6bf0f217-858e-49f9-8730-7376f77c6d4f","Type":"ContainerStarted","Data":"ecb1db803400e94fb374b1b0dec90f106d9649db844264b53700f85dbf61d268"} Nov 28 07:16:00 crc kubenswrapper[4946]: I1128 07:16:00.958372 4946 generic.go:334] "Generic (PLEG): container finished" podID="a4229e67-36fc-40ba-8d90-af5b0d95743f" containerID="ac71999f7195041c7350010409e92caba1128a1523d424f3e5e2d979a06edf34" exitCode=0 Nov 28 07:16:00 crc kubenswrapper[4946]: I1128 07:16:00.958439 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-cbhjz" event={"ID":"a4229e67-36fc-40ba-8d90-af5b0d95743f","Type":"ContainerDied","Data":"ac71999f7195041c7350010409e92caba1128a1523d424f3e5e2d979a06edf34"} Nov 28 07:16:00 crc kubenswrapper[4946]: I1128 07:16:00.962045 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f8dc44d89-xffzr" event={"ID":"de236db3-71ed-4308-b783-aab4e13225b5","Type":"ContainerStarted","Data":"8a7733a102e3c067aaba74d858d59b2ac5ee773f7059dc7d5d37060effd43f34"} Nov 28 07:16:00 crc kubenswrapper[4946]: I1128 07:16:00.962070 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f8dc44d89-xffzr" Nov 28 07:16:00 crc kubenswrapper[4946]: I1128 07:16:00.991328 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=24.991304347 podStartE2EDuration="24.991304347s" podCreationTimestamp="2025-11-28 07:15:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:16:00.978300468 +0000 UTC m=+1415.356365579" watchObservedRunningTime="2025-11-28 07:16:00.991304347 +0000 UTC m=+1415.369369458" Nov 28 07:16:01 crc kubenswrapper[4946]: I1128 07:16:01.040820 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=25.040802811 podStartE2EDuration="25.040802811s" podCreationTimestamp="2025-11-28 07:15:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:16:01.033329268 +0000 UTC m=+1415.411394379" watchObservedRunningTime="2025-11-28 07:16:01.040802811 +0000 UTC m=+1415.418867922" Nov 28 07:16:01 crc kubenswrapper[4946]: I1128 07:16:01.074902 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f8dc44d89-xffzr" podStartSLOduration=7.074879817 podStartE2EDuration="7.074879817s" podCreationTimestamp="2025-11-28 07:15:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:16:01.057630484 +0000 UTC m=+1415.435695595" watchObservedRunningTime="2025-11-28 07:16:01.074879817 +0000 UTC m=+1415.452944928" Nov 28 07:16:01 crc kubenswrapper[4946]: I1128 07:16:01.974328 4946 generic.go:334] "Generic (PLEG): 
container finished" podID="b9bec43f-4374-47e6-968e-858a39bfa527" containerID="f3a83246ec59447909c7b33851f31529ea71c070ce34a671a8edd57ba76a4bc8" exitCode=0 Nov 28 07:16:01 crc kubenswrapper[4946]: I1128 07:16:01.974421 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lpcd7" event={"ID":"b9bec43f-4374-47e6-968e-858a39bfa527","Type":"ContainerDied","Data":"f3a83246ec59447909c7b33851f31529ea71c070ce34a671a8edd57ba76a4bc8"} Nov 28 07:16:01 crc kubenswrapper[4946]: I1128 07:16:01.979071 4946 generic.go:334] "Generic (PLEG): container finished" podID="000349d5-5671-46a1-b1d2-1954a9facc3e" containerID="0f5395605e2863d1e20f2cb9839024fbafbfd2d53603e3ecacdf196ff0806098" exitCode=0 Nov 28 07:16:01 crc kubenswrapper[4946]: I1128 07:16:01.979122 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-djhf5" event={"ID":"000349d5-5671-46a1-b1d2-1954a9facc3e","Type":"ContainerDied","Data":"0f5395605e2863d1e20f2cb9839024fbafbfd2d53603e3ecacdf196ff0806098"} Nov 28 07:16:02 crc kubenswrapper[4946]: I1128 07:16:02.021882 4946 generic.go:334] "Generic (PLEG): container finished" podID="0cbfc817-e82c-436d-bc3a-6a3a94ee82e8" containerID="8e951a45b4b9ae985f7ede9827a6d2b21c5d9745ebabcd97342388504f3849c8" exitCode=0 Nov 28 07:16:02 crc kubenswrapper[4946]: I1128 07:16:02.022197 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-l7pvv" event={"ID":"0cbfc817-e82c-436d-bc3a-6a3a94ee82e8","Type":"ContainerDied","Data":"8e951a45b4b9ae985f7ede9827a6d2b21c5d9745ebabcd97342388504f3849c8"} Nov 28 07:16:02 crc kubenswrapper[4946]: I1128 07:16:02.947803 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-cbhjz" Nov 28 07:16:03 crc kubenswrapper[4946]: I1128 07:16:03.010886 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4229e67-36fc-40ba-8d90-af5b0d95743f-logs\") pod \"a4229e67-36fc-40ba-8d90-af5b0d95743f\" (UID: \"a4229e67-36fc-40ba-8d90-af5b0d95743f\") " Nov 28 07:16:03 crc kubenswrapper[4946]: I1128 07:16:03.010977 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4229e67-36fc-40ba-8d90-af5b0d95743f-scripts\") pod \"a4229e67-36fc-40ba-8d90-af5b0d95743f\" (UID: \"a4229e67-36fc-40ba-8d90-af5b0d95743f\") " Nov 28 07:16:03 crc kubenswrapper[4946]: I1128 07:16:03.011012 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zzwr\" (UniqueName: \"kubernetes.io/projected/a4229e67-36fc-40ba-8d90-af5b0d95743f-kube-api-access-4zzwr\") pod \"a4229e67-36fc-40ba-8d90-af5b0d95743f\" (UID: \"a4229e67-36fc-40ba-8d90-af5b0d95743f\") " Nov 28 07:16:03 crc kubenswrapper[4946]: I1128 07:16:03.011033 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4229e67-36fc-40ba-8d90-af5b0d95743f-config-data\") pod \"a4229e67-36fc-40ba-8d90-af5b0d95743f\" (UID: \"a4229e67-36fc-40ba-8d90-af5b0d95743f\") " Nov 28 07:16:03 crc kubenswrapper[4946]: I1128 07:16:03.011290 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4229e67-36fc-40ba-8d90-af5b0d95743f-combined-ca-bundle\") pod \"a4229e67-36fc-40ba-8d90-af5b0d95743f\" (UID: \"a4229e67-36fc-40ba-8d90-af5b0d95743f\") " Nov 28 07:16:03 crc 
kubenswrapper[4946]: I1128 07:16:03.012613 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4229e67-36fc-40ba-8d90-af5b0d95743f-logs" (OuterVolumeSpecName: "logs") pod "a4229e67-36fc-40ba-8d90-af5b0d95743f" (UID: "a4229e67-36fc-40ba-8d90-af5b0d95743f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:16:03 crc kubenswrapper[4946]: I1128 07:16:03.043743 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4229e67-36fc-40ba-8d90-af5b0d95743f-scripts" (OuterVolumeSpecName: "scripts") pod "a4229e67-36fc-40ba-8d90-af5b0d95743f" (UID: "a4229e67-36fc-40ba-8d90-af5b0d95743f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:16:03 crc kubenswrapper[4946]: I1128 07:16:03.051730 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4229e67-36fc-40ba-8d90-af5b0d95743f-kube-api-access-4zzwr" (OuterVolumeSpecName: "kube-api-access-4zzwr") pod "a4229e67-36fc-40ba-8d90-af5b0d95743f" (UID: "a4229e67-36fc-40ba-8d90-af5b0d95743f"). InnerVolumeSpecName "kube-api-access-4zzwr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:16:03 crc kubenswrapper[4946]: I1128 07:16:03.082609 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4229e67-36fc-40ba-8d90-af5b0d95743f-config-data" (OuterVolumeSpecName: "config-data") pod "a4229e67-36fc-40ba-8d90-af5b0d95743f" (UID: "a4229e67-36fc-40ba-8d90-af5b0d95743f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:16:03 crc kubenswrapper[4946]: I1128 07:16:03.082656 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4229e67-36fc-40ba-8d90-af5b0d95743f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a4229e67-36fc-40ba-8d90-af5b0d95743f" (UID: "a4229e67-36fc-40ba-8d90-af5b0d95743f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:16:03 crc kubenswrapper[4946]: I1128 07:16:03.083159 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-cbhjz" Nov 28 07:16:03 crc kubenswrapper[4946]: I1128 07:16:03.084551 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-cbhjz" event={"ID":"a4229e67-36fc-40ba-8d90-af5b0d95743f","Type":"ContainerDied","Data":"5e648766bc17e861d464e00b88b45c7346d8d528bd289794550a29e666ea25f3"} Nov 28 07:16:03 crc kubenswrapper[4946]: I1128 07:16:03.084602 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e648766bc17e861d464e00b88b45c7346d8d528bd289794550a29e666ea25f3" Nov 28 07:16:03 crc kubenswrapper[4946]: I1128 07:16:03.113876 4946 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4229e67-36fc-40ba-8d90-af5b0d95743f-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:16:03 crc kubenswrapper[4946]: I1128 07:16:03.113918 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zzwr\" (UniqueName: \"kubernetes.io/projected/a4229e67-36fc-40ba-8d90-af5b0d95743f-kube-api-access-4zzwr\") on node \"crc\" DevicePath \"\"" Nov 28 07:16:03 crc kubenswrapper[4946]: I1128 07:16:03.113930 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4229e67-36fc-40ba-8d90-af5b0d95743f-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:16:03 crc kubenswrapper[4946]: I1128 07:16:03.113941 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4229e67-36fc-40ba-8d90-af5b0d95743f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:16:03 crc kubenswrapper[4946]: I1128 07:16:03.113954 4946 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4229e67-36fc-40ba-8d90-af5b0d95743f-logs\") on node \"crc\" DevicePath \"\"" Nov 28 07:16:03 crc kubenswrapper[4946]: I1128 07:16:03.144574 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-96fb6f878-56tfz"] Nov 28 07:16:03 crc kubenswrapper[4946]: E1128 07:16:03.145068 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4229e67-36fc-40ba-8d90-af5b0d95743f" containerName="placement-db-sync" Nov 28 07:16:03 crc kubenswrapper[4946]: I1128 07:16:03.145087 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4229e67-36fc-40ba-8d90-af5b0d95743f" containerName="placement-db-sync" Nov 28 07:16:03 crc kubenswrapper[4946]: I1128 07:16:03.145475 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4229e67-36fc-40ba-8d90-af5b0d95743f" containerName="placement-db-sync" Nov 28 07:16:03 crc kubenswrapper[4946]: I1128 07:16:03.148222 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-96fb6f878-56tfz" Nov 28 07:16:03 crc kubenswrapper[4946]: I1128 07:16:03.152396 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 28 07:16:03 crc kubenswrapper[4946]: I1128 07:16:03.152767 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-4dxs2" Nov 28 07:16:03 crc kubenswrapper[4946]: I1128 07:16:03.156845 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Nov 28 07:16:03 crc kubenswrapper[4946]: I1128 07:16:03.156981 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Nov 28 07:16:03 crc kubenswrapper[4946]: I1128 07:16:03.157198 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 28 07:16:03 crc kubenswrapper[4946]: I1128 07:16:03.171157 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-96fb6f878-56tfz"] Nov 28 07:16:03 crc kubenswrapper[4946]: I1128 07:16:03.218269 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/52101de8-a25c-4372-9df3-3f090167ff5f-internal-tls-certs\") pod \"placement-96fb6f878-56tfz\" (UID: \"52101de8-a25c-4372-9df3-3f090167ff5f\") " pod="openstack/placement-96fb6f878-56tfz" Nov 28 07:16:03 crc kubenswrapper[4946]: I1128 07:16:03.218354 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52101de8-a25c-4372-9df3-3f090167ff5f-scripts\") pod \"placement-96fb6f878-56tfz\" (UID: \"52101de8-a25c-4372-9df3-3f090167ff5f\") " pod="openstack/placement-96fb6f878-56tfz" Nov 28 07:16:03 crc kubenswrapper[4946]: I1128 07:16:03.218380 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnggb\" (UniqueName: \"kubernetes.io/projected/52101de8-a25c-4372-9df3-3f090167ff5f-kube-api-access-bnggb\") pod \"placement-96fb6f878-56tfz\" (UID: \"52101de8-a25c-4372-9df3-3f090167ff5f\") " pod="openstack/placement-96fb6f878-56tfz" Nov 28 07:16:03 crc kubenswrapper[4946]: I1128 07:16:03.218418 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52101de8-a25c-4372-9df3-3f090167ff5f-combined-ca-bundle\") pod \"placement-96fb6f878-56tfz\" (UID: \"52101de8-a25c-4372-9df3-3f090167ff5f\") " pod="openstack/placement-96fb6f878-56tfz" Nov 28 07:16:03 crc kubenswrapper[4946]: I1128 07:16:03.218693 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52101de8-a25c-4372-9df3-3f090167ff5f-logs\") pod \"placement-96fb6f878-56tfz\" (UID: \"52101de8-a25c-4372-9df3-3f090167ff5f\") " pod="openstack/placement-96fb6f878-56tfz" Nov 28 07:16:03 crc kubenswrapper[4946]: I1128 07:16:03.218751 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/52101de8-a25c-4372-9df3-3f090167ff5f-public-tls-certs\") pod \"placement-96fb6f878-56tfz\" (UID: \"52101de8-a25c-4372-9df3-3f090167ff5f\") " pod="openstack/placement-96fb6f878-56tfz" Nov 28 07:16:03 crc kubenswrapper[4946]: I1128 07:16:03.218829 4946 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52101de8-a25c-4372-9df3-3f090167ff5f-config-data\") pod \"placement-96fb6f878-56tfz\" (UID: \"52101de8-a25c-4372-9df3-3f090167ff5f\") " pod="openstack/placement-96fb6f878-56tfz"
Nov 28 07:16:03 crc kubenswrapper[4946]: I1128 07:16:03.322040 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52101de8-a25c-4372-9df3-3f090167ff5f-logs\") pod \"placement-96fb6f878-56tfz\" (UID: \"52101de8-a25c-4372-9df3-3f090167ff5f\") " pod="openstack/placement-96fb6f878-56tfz"
Nov 28 07:16:03 crc kubenswrapper[4946]: I1128 07:16:03.322104 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/52101de8-a25c-4372-9df3-3f090167ff5f-public-tls-certs\") pod \"placement-96fb6f878-56tfz\" (UID: \"52101de8-a25c-4372-9df3-3f090167ff5f\") " pod="openstack/placement-96fb6f878-56tfz"
Nov 28 07:16:03 crc kubenswrapper[4946]: I1128 07:16:03.322197 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52101de8-a25c-4372-9df3-3f090167ff5f-config-data\") pod \"placement-96fb6f878-56tfz\" (UID: \"52101de8-a25c-4372-9df3-3f090167ff5f\") " pod="openstack/placement-96fb6f878-56tfz"
Nov 28 07:16:03 crc kubenswrapper[4946]: I1128 07:16:03.322474 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/52101de8-a25c-4372-9df3-3f090167ff5f-internal-tls-certs\") pod \"placement-96fb6f878-56tfz\" (UID: \"52101de8-a25c-4372-9df3-3f090167ff5f\") " pod="openstack/placement-96fb6f878-56tfz"
Nov 28 07:16:03 crc kubenswrapper[4946]: I1128 07:16:03.322498 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52101de8-a25c-4372-9df3-3f090167ff5f-scripts\") pod \"placement-96fb6f878-56tfz\" (UID: \"52101de8-a25c-4372-9df3-3f090167ff5f\") " pod="openstack/placement-96fb6f878-56tfz"
Nov 28 07:16:03 crc kubenswrapper[4946]: I1128 07:16:03.322522 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnggb\" (UniqueName: \"kubernetes.io/projected/52101de8-a25c-4372-9df3-3f090167ff5f-kube-api-access-bnggb\") pod \"placement-96fb6f878-56tfz\" (UID: \"52101de8-a25c-4372-9df3-3f090167ff5f\") " pod="openstack/placement-96fb6f878-56tfz"
Nov 28 07:16:03 crc kubenswrapper[4946]: I1128 07:16:03.322563 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52101de8-a25c-4372-9df3-3f090167ff5f-combined-ca-bundle\") pod \"placement-96fb6f878-56tfz\" (UID: \"52101de8-a25c-4372-9df3-3f090167ff5f\") " pod="openstack/placement-96fb6f878-56tfz"
Nov 28 07:16:03 crc kubenswrapper[4946]: I1128 07:16:03.322642 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52101de8-a25c-4372-9df3-3f090167ff5f-logs\") pod \"placement-96fb6f878-56tfz\" (UID: \"52101de8-a25c-4372-9df3-3f090167ff5f\") " pod="openstack/placement-96fb6f878-56tfz"
Nov 28 07:16:03 crc kubenswrapper[4946]: I1128 07:16:03.326350 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52101de8-a25c-4372-9df3-3f090167ff5f-scripts\") pod \"placement-96fb6f878-56tfz\" (UID: \"52101de8-a25c-4372-9df3-3f090167ff5f\") " pod="openstack/placement-96fb6f878-56tfz"
Nov 28 07:16:03 crc kubenswrapper[4946]: I1128 07:16:03.326914 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/52101de8-a25c-4372-9df3-3f090167ff5f-internal-tls-certs\") pod \"placement-96fb6f878-56tfz\" (UID: \"52101de8-a25c-4372-9df3-3f090167ff5f\") " pod="openstack/placement-96fb6f878-56tfz"
Nov 28 07:16:03 crc kubenswrapper[4946]: I1128 07:16:03.330140 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/52101de8-a25c-4372-9df3-3f090167ff5f-public-tls-certs\") pod \"placement-96fb6f878-56tfz\" (UID: \"52101de8-a25c-4372-9df3-3f090167ff5f\") " pod="openstack/placement-96fb6f878-56tfz"
Nov 28 07:16:03 crc kubenswrapper[4946]: I1128 07:16:03.330301 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52101de8-a25c-4372-9df3-3f090167ff5f-combined-ca-bundle\") pod \"placement-96fb6f878-56tfz\" (UID: \"52101de8-a25c-4372-9df3-3f090167ff5f\") " pod="openstack/placement-96fb6f878-56tfz"
Nov 28 07:16:03 crc kubenswrapper[4946]: I1128 07:16:03.337274 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52101de8-a25c-4372-9df3-3f090167ff5f-config-data\") pod \"placement-96fb6f878-56tfz\" (UID: \"52101de8-a25c-4372-9df3-3f090167ff5f\") " pod="openstack/placement-96fb6f878-56tfz"
Nov 28 07:16:03 crc kubenswrapper[4946]: I1128 07:16:03.343662 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnggb\" (UniqueName: \"kubernetes.io/projected/52101de8-a25c-4372-9df3-3f090167ff5f-kube-api-access-bnggb\") pod \"placement-96fb6f878-56tfz\" (UID: \"52101de8-a25c-4372-9df3-3f090167ff5f\") " pod="openstack/placement-96fb6f878-56tfz"
Nov 28 07:16:03 crc kubenswrapper[4946]: I1128 07:16:03.483294 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-96fb6f878-56tfz"
Nov 28 07:16:05 crc kubenswrapper[4946]: I1128 07:16:05.998744 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-l7pvv"
Nov 28 07:16:06 crc kubenswrapper[4946]: I1128 07:16:06.032026 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-djhf5"
Nov 28 07:16:06 crc kubenswrapper[4946]: I1128 07:16:06.133537 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8310da02-e0c7-4dab-bc26-7139ca576c2c","Type":"ContainerStarted","Data":"4708373a46192f99732c4354806f10e8a24e49a44a638af9271cd3c51380f78a"}
Nov 28 07:16:06 crc kubenswrapper[4946]: I1128 07:16:06.139225 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lpcd7" event={"ID":"b9bec43f-4374-47e6-968e-858a39bfa527","Type":"ContainerStarted","Data":"0564c31d1504747c43e38bd341896264381cf50d36d80baec84079cb38b524ab"}
Nov 28 07:16:06 crc kubenswrapper[4946]: I1128 07:16:06.142790 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldkrq\" (UniqueName: \"kubernetes.io/projected/000349d5-5671-46a1-b1d2-1954a9facc3e-kube-api-access-ldkrq\") pod \"000349d5-5671-46a1-b1d2-1954a9facc3e\" (UID: \"000349d5-5671-46a1-b1d2-1954a9facc3e\") "
Nov 28 07:16:06 crc kubenswrapper[4946]: I1128 07:16:06.143682 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/000349d5-5671-46a1-b1d2-1954a9facc3e-credential-keys\") pod \"000349d5-5671-46a1-b1d2-1954a9facc3e\" (UID: \"000349d5-5671-46a1-b1d2-1954a9facc3e\") "
Nov 28 07:16:06 crc kubenswrapper[4946]: I1128 07:16:06.143767 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/000349d5-5671-46a1-b1d2-1954a9facc3e-combined-ca-bundle\") pod \"000349d5-5671-46a1-b1d2-1954a9facc3e\" (UID: \"000349d5-5671-46a1-b1d2-1954a9facc3e\") "
Nov 28 07:16:06 crc kubenswrapper[4946]: I1128 07:16:06.143798 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0cbfc817-e82c-436d-bc3a-6a3a94ee82e8-db-sync-config-data\") pod \"0cbfc817-e82c-436d-bc3a-6a3a94ee82e8\" (UID: \"0cbfc817-e82c-436d-bc3a-6a3a94ee82e8\") "
Nov 28 07:16:06 crc kubenswrapper[4946]: I1128 07:16:06.143818 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/000349d5-5671-46a1-b1d2-1954a9facc3e-scripts\") pod \"000349d5-5671-46a1-b1d2-1954a9facc3e\" (UID: \"000349d5-5671-46a1-b1d2-1954a9facc3e\") "
Nov 28 07:16:06 crc kubenswrapper[4946]: I1128 07:16:06.143844 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hppf\" (UniqueName: \"kubernetes.io/projected/0cbfc817-e82c-436d-bc3a-6a3a94ee82e8-kube-api-access-8hppf\") pod \"0cbfc817-e82c-436d-bc3a-6a3a94ee82e8\" (UID: \"0cbfc817-e82c-436d-bc3a-6a3a94ee82e8\") "
Nov 28 07:16:06 crc kubenswrapper[4946]: I1128 07:16:06.143862 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/000349d5-5671-46a1-b1d2-1954a9facc3e-config-data\") pod \"000349d5-5671-46a1-b1d2-1954a9facc3e\" (UID: \"000349d5-5671-46a1-b1d2-1954a9facc3e\") "
Nov 28 07:16:06 crc kubenswrapper[4946]: I1128 07:16:06.143893 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cbfc817-e82c-436d-bc3a-6a3a94ee82e8-combined-ca-bundle\") pod \"0cbfc817-e82c-436d-bc3a-6a3a94ee82e8\" (UID: \"0cbfc817-e82c-436d-bc3a-6a3a94ee82e8\") "
Nov 28 07:16:06 crc kubenswrapper[4946]: I1128 07:16:06.143946 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/000349d5-5671-46a1-b1d2-1954a9facc3e-fernet-keys\") pod \"000349d5-5671-46a1-b1d2-1954a9facc3e\" (UID: \"000349d5-5671-46a1-b1d2-1954a9facc3e\") "
Nov 28 07:16:06 crc kubenswrapper[4946]: I1128 07:16:06.146393 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-djhf5"
Nov 28 07:16:06 crc kubenswrapper[4946]: I1128 07:16:06.146614 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-djhf5" event={"ID":"000349d5-5671-46a1-b1d2-1954a9facc3e","Type":"ContainerDied","Data":"eed8198dd99ab1d2c78a70b9b6047479aef2bb9257551a555accdcdc44f83a4b"}
Nov 28 07:16:06 crc kubenswrapper[4946]: I1128 07:16:06.146703 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eed8198dd99ab1d2c78a70b9b6047479aef2bb9257551a555accdcdc44f83a4b"
Nov 28 07:16:06 crc kubenswrapper[4946]: I1128 07:16:06.150565 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/000349d5-5671-46a1-b1d2-1954a9facc3e-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "000349d5-5671-46a1-b1d2-1954a9facc3e" (UID: "000349d5-5671-46a1-b1d2-1954a9facc3e"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 07:16:06 crc kubenswrapper[4946]: I1128 07:16:06.150864 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cbfc817-e82c-436d-bc3a-6a3a94ee82e8-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "0cbfc817-e82c-436d-bc3a-6a3a94ee82e8" (UID: "0cbfc817-e82c-436d-bc3a-6a3a94ee82e8"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 07:16:06 crc kubenswrapper[4946]: I1128 07:16:06.151567 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/000349d5-5671-46a1-b1d2-1954a9facc3e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "000349d5-5671-46a1-b1d2-1954a9facc3e" (UID: "000349d5-5671-46a1-b1d2-1954a9facc3e"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 07:16:06 crc kubenswrapper[4946]: I1128 07:16:06.152197 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/000349d5-5671-46a1-b1d2-1954a9facc3e-kube-api-access-ldkrq" (OuterVolumeSpecName: "kube-api-access-ldkrq") pod "000349d5-5671-46a1-b1d2-1954a9facc3e" (UID: "000349d5-5671-46a1-b1d2-1954a9facc3e"). InnerVolumeSpecName "kube-api-access-ldkrq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 07:16:06 crc kubenswrapper[4946]: I1128 07:16:06.153011 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cbfc817-e82c-436d-bc3a-6a3a94ee82e8-kube-api-access-8hppf" (OuterVolumeSpecName: "kube-api-access-8hppf") pod "0cbfc817-e82c-436d-bc3a-6a3a94ee82e8" (UID: "0cbfc817-e82c-436d-bc3a-6a3a94ee82e8"). InnerVolumeSpecName "kube-api-access-8hppf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 07:16:06 crc kubenswrapper[4946]: I1128 07:16:06.155211 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/000349d5-5671-46a1-b1d2-1954a9facc3e-scripts" (OuterVolumeSpecName: "scripts") pod "000349d5-5671-46a1-b1d2-1954a9facc3e" (UID: "000349d5-5671-46a1-b1d2-1954a9facc3e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 07:16:06 crc kubenswrapper[4946]: I1128 07:16:06.155748 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-l7pvv" event={"ID":"0cbfc817-e82c-436d-bc3a-6a3a94ee82e8","Type":"ContainerDied","Data":"e040daeaf40cd7bfe5dec09a9b17b151e13f1dc4fbe084f3e257ca5fa6fa03f2"}
Nov 28 07:16:06 crc kubenswrapper[4946]: I1128 07:16:06.155861 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e040daeaf40cd7bfe5dec09a9b17b151e13f1dc4fbe084f3e257ca5fa6fa03f2"
Nov 28 07:16:06 crc kubenswrapper[4946]: I1128 07:16:06.155943 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-l7pvv"
Nov 28 07:16:06 crc kubenswrapper[4946]: I1128 07:16:06.162709 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lpcd7" podStartSLOduration=5.291456765 podStartE2EDuration="11.162687739s" podCreationTimestamp="2025-11-28 07:15:55 +0000 UTC" firstStartedPulling="2025-11-28 07:15:59.941170416 +0000 UTC m=+1414.319235527" lastFinishedPulling="2025-11-28 07:16:05.81240139 +0000 UTC m=+1420.190466501" observedRunningTime="2025-11-28 07:16:06.158126707 +0000 UTC m=+1420.536191818" watchObservedRunningTime="2025-11-28 07:16:06.162687739 +0000 UTC m=+1420.540752850"
Nov 28 07:16:06 crc kubenswrapper[4946]: I1128 07:16:06.178615 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/000349d5-5671-46a1-b1d2-1954a9facc3e-config-data" (OuterVolumeSpecName: "config-data") pod "000349d5-5671-46a1-b1d2-1954a9facc3e" (UID: "000349d5-5671-46a1-b1d2-1954a9facc3e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 07:16:06 crc kubenswrapper[4946]: I1128 07:16:06.188924 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cbfc817-e82c-436d-bc3a-6a3a94ee82e8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0cbfc817-e82c-436d-bc3a-6a3a94ee82e8" (UID: "0cbfc817-e82c-436d-bc3a-6a3a94ee82e8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 07:16:06 crc kubenswrapper[4946]: I1128 07:16:06.201300 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/000349d5-5671-46a1-b1d2-1954a9facc3e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "000349d5-5671-46a1-b1d2-1954a9facc3e" (UID: "000349d5-5671-46a1-b1d2-1954a9facc3e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 07:16:06 crc kubenswrapper[4946]: I1128 07:16:06.246299 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldkrq\" (UniqueName: \"kubernetes.io/projected/000349d5-5671-46a1-b1d2-1954a9facc3e-kube-api-access-ldkrq\") on node \"crc\" DevicePath \"\""
Nov 28 07:16:06 crc kubenswrapper[4946]: I1128 07:16:06.246351 4946 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/000349d5-5671-46a1-b1d2-1954a9facc3e-credential-keys\") on node \"crc\" DevicePath \"\""
Nov 28 07:16:06 crc kubenswrapper[4946]: I1128 07:16:06.246362 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/000349d5-5671-46a1-b1d2-1954a9facc3e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 07:16:06 crc kubenswrapper[4946]: I1128 07:16:06.246371 4946 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0cbfc817-e82c-436d-bc3a-6a3a94ee82e8-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Nov 28 07:16:06 crc kubenswrapper[4946]: I1128 07:16:06.246384 4946 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/000349d5-5671-46a1-b1d2-1954a9facc3e-scripts\") on node \"crc\" DevicePath \"\""
Nov 28 07:16:06 crc kubenswrapper[4946]: I1128 07:16:06.246393 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hppf\" (UniqueName: \"kubernetes.io/projected/0cbfc817-e82c-436d-bc3a-6a3a94ee82e8-kube-api-access-8hppf\") on node \"crc\" DevicePath \"\""
Nov 28 07:16:06 crc kubenswrapper[4946]: I1128 07:16:06.246401 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/000349d5-5671-46a1-b1d2-1954a9facc3e-config-data\") on node \"crc\" DevicePath \"\""
Nov 28 07:16:06 crc kubenswrapper[4946]: I1128 07:16:06.246426 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cbfc817-e82c-436d-bc3a-6a3a94ee82e8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 07:16:06 crc kubenswrapper[4946]: I1128 07:16:06.246437 4946 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/000349d5-5671-46a1-b1d2-1954a9facc3e-fernet-keys\") on node \"crc\" DevicePath \"\""
Nov 28 07:16:06 crc kubenswrapper[4946]: I1128 07:16:06.342280 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-96fb6f878-56tfz"]
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.169428 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-96fb6f878-56tfz" event={"ID":"52101de8-a25c-4372-9df3-3f090167ff5f","Type":"ContainerStarted","Data":"6b8d481b71ffab080960ca39f72a71bb4c2e8f178d73020eb2dfb24175734884"}
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.169857 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-96fb6f878-56tfz" event={"ID":"52101de8-a25c-4372-9df3-3f090167ff5f","Type":"ContainerStarted","Data":"a2f882b052a314819dd4340b85645a14b6914096d138c6a1d81c6036bd6013fb"}
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.169870 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-96fb6f878-56tfz" event={"ID":"52101de8-a25c-4372-9df3-3f090167ff5f","Type":"ContainerStarted","Data":"44d720134c5b40834abb0dc5b55058a116e10f41445bf90bdff8941986aa4dd5"}
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.257348 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-96fb6f878-56tfz" podStartSLOduration=4.257319572 podStartE2EDuration="4.257319572s" podCreationTimestamp="2025-11-28 07:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:16:07.193400274 +0000 UTC m=+1421.571465385" watchObservedRunningTime="2025-11-28 07:16:07.257319572 +0000 UTC m=+1421.635384683"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.258775 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-77dfbb6d46-cf289"]
Nov 28 07:16:07 crc kubenswrapper[4946]: E1128 07:16:07.259292 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cbfc817-e82c-436d-bc3a-6a3a94ee82e8" containerName="barbican-db-sync"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.259315 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cbfc817-e82c-436d-bc3a-6a3a94ee82e8" containerName="barbican-db-sync"
Nov 28 07:16:07 crc kubenswrapper[4946]: E1128 07:16:07.259355 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="000349d5-5671-46a1-b1d2-1954a9facc3e" containerName="keystone-bootstrap"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.259366 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="000349d5-5671-46a1-b1d2-1954a9facc3e" containerName="keystone-bootstrap"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.259649 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="000349d5-5671-46a1-b1d2-1954a9facc3e" containerName="keystone-bootstrap"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.259690 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cbfc817-e82c-436d-bc3a-6a3a94ee82e8" containerName="barbican-db-sync"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.260777 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-77dfbb6d46-cf289"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.279623 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.279886 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.280123 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.280577 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.280763 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.281003 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-hmftc"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.338337 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-77dfbb6d46-cf289"]
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.351598 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.351672 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.351684 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.351695 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.381086 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37-scripts\") pod \"keystone-77dfbb6d46-cf289\" (UID: \"2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37\") " pod="openstack/keystone-77dfbb6d46-cf289"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.381438 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92kmg\" (UniqueName: \"kubernetes.io/projected/2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37-kube-api-access-92kmg\") pod \"keystone-77dfbb6d46-cf289\" (UID: \"2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37\") " pod="openstack/keystone-77dfbb6d46-cf289"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.381598 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37-internal-tls-certs\") pod \"keystone-77dfbb6d46-cf289\" (UID: \"2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37\") " pod="openstack/keystone-77dfbb6d46-cf289"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.381685 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37-credential-keys\") pod \"keystone-77dfbb6d46-cf289\" (UID: \"2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37\") " pod="openstack/keystone-77dfbb6d46-cf289"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.381714 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37-combined-ca-bundle\") pod \"keystone-77dfbb6d46-cf289\" (UID: \"2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37\") " pod="openstack/keystone-77dfbb6d46-cf289"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.381758 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37-public-tls-certs\") pod \"keystone-77dfbb6d46-cf289\" (UID: \"2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37\") " pod="openstack/keystone-77dfbb6d46-cf289"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.381775 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37-config-data\") pod \"keystone-77dfbb6d46-cf289\" (UID: \"2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37\") " pod="openstack/keystone-77dfbb6d46-cf289"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.381790 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37-fernet-keys\") pod \"keystone-77dfbb6d46-cf289\" (UID: \"2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37\") " pod="openstack/keystone-77dfbb6d46-cf289"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.389844 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.397666 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.399599 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.406201 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.436474 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-bc95b876b-t8r9q"]
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.438740 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-bc95b876b-t8r9q"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.444491 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.453010 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-bc95b876b-t8r9q"]
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.456601 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-pcsnk"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.457963 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.486927 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37-config-data\") pod \"keystone-77dfbb6d46-cf289\" (UID: \"2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37\") " pod="openstack/keystone-77dfbb6d46-cf289"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.486976 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37-fernet-keys\") pod \"keystone-77dfbb6d46-cf289\" (UID: \"2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37\") " pod="openstack/keystone-77dfbb6d46-cf289"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.487041 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37-scripts\") pod \"keystone-77dfbb6d46-cf289\" (UID: \"2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37\") " pod="openstack/keystone-77dfbb6d46-cf289"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.487096 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92kmg\" (UniqueName: \"kubernetes.io/projected/2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37-kube-api-access-92kmg\") pod \"keystone-77dfbb6d46-cf289\" (UID: \"2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37\") " pod="openstack/keystone-77dfbb6d46-cf289"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.487138 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37-internal-tls-certs\") pod \"keystone-77dfbb6d46-cf289\" (UID: \"2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37\") " pod="openstack/keystone-77dfbb6d46-cf289"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.487212 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37-credential-keys\") pod \"keystone-77dfbb6d46-cf289\" (UID: \"2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37\") " pod="openstack/keystone-77dfbb6d46-cf289"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.487241 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37-combined-ca-bundle\") pod \"keystone-77dfbb6d46-cf289\" (UID: \"2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37\") " pod="openstack/keystone-77dfbb6d46-cf289"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.487275 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37-public-tls-certs\") pod \"keystone-77dfbb6d46-cf289\" (UID: \"2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37\") " pod="openstack/keystone-77dfbb6d46-cf289"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.492524 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-97bfd767f-7zg9s"]
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.503268 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37-public-tls-certs\") pod \"keystone-77dfbb6d46-cf289\" (UID: \"2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37\") " pod="openstack/keystone-77dfbb6d46-cf289"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.506491 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37-internal-tls-certs\") pod \"keystone-77dfbb6d46-cf289\" (UID: \"2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37\") " pod="openstack/keystone-77dfbb6d46-cf289"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.514333 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.514455 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-97bfd767f-7zg9s"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.533749 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.537079 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37-combined-ca-bundle\") pod \"keystone-77dfbb6d46-cf289\" (UID: \"2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37\") " pod="openstack/keystone-77dfbb6d46-cf289"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.544122 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37-fernet-keys\") pod \"keystone-77dfbb6d46-cf289\" (UID: \"2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37\") " pod="openstack/keystone-77dfbb6d46-cf289"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.544353 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37-scripts\") pod \"keystone-77dfbb6d46-cf289\" (UID: \"2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37\") " pod="openstack/keystone-77dfbb6d46-cf289"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.544921 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92kmg\" (UniqueName: \"kubernetes.io/projected/2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37-kube-api-access-92kmg\") pod \"keystone-77dfbb6d46-cf289\" (UID: \"2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37\") " pod="openstack/keystone-77dfbb6d46-cf289"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.555201 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.561898 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37-credential-keys\") pod \"keystone-77dfbb6d46-cf289\" (UID: \"2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37\") " pod="openstack/keystone-77dfbb6d46-cf289"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.563034 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37-config-data\") pod \"keystone-77dfbb6d46-cf289\" (UID: \"2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37\") " pod="openstack/keystone-77dfbb6d46-cf289"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.586533 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-97bfd767f-7zg9s"]
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.650534 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f8dc44d89-xffzr"]
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.650886 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-f8dc44d89-xffzr" podUID="de236db3-71ed-4308-b783-aab4e13225b5" containerName="dnsmasq-dns" containerID="cri-o://8a7733a102e3c067aaba74d858d59b2ac5ee773f7059dc7d5d37060effd43f34" gracePeriod=10
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.656669 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f8dc44d89-xffzr"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.698567 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-77dfbb6d46-cf289"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.699299 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxjpz\" (UniqueName: \"kubernetes.io/projected/4619b857-5e70-4ab3-807d-d233c9d9223c-kube-api-access-xxjpz\") pod \"barbican-worker-97bfd767f-7zg9s\" (UID: \"4619b857-5e70-4ab3-807d-d233c9d9223c\") " pod="openstack/barbican-worker-97bfd767f-7zg9s"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.699379 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c41f96d0-0ffc-424b-afa7-d7b87f2bf11d-logs\") pod \"barbican-keystone-listener-bc95b876b-t8r9q\" (UID: \"c41f96d0-0ffc-424b-afa7-d7b87f2bf11d\") " pod="openstack/barbican-keystone-listener-bc95b876b-t8r9q"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.699415 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c41f96d0-0ffc-424b-afa7-d7b87f2bf11d-config-data-custom\") pod \"barbican-keystone-listener-bc95b876b-t8r9q\" (UID: \"c41f96d0-0ffc-424b-afa7-d7b87f2bf11d\") " pod="openstack/barbican-keystone-listener-bc95b876b-t8r9q"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.699449 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c41f96d0-0ffc-424b-afa7-d7b87f2bf11d-combined-ca-bundle\") pod \"barbican-keystone-listener-bc95b876b-t8r9q\" (UID: \"c41f96d0-0ffc-424b-afa7-d7b87f2bf11d\") " pod="openstack/barbican-keystone-listener-bc95b876b-t8r9q"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.699522 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4619b857-5e70-4ab3-807d-d233c9d9223c-logs\") pod \"barbican-worker-97bfd767f-7zg9s\" (UID: \"4619b857-5e70-4ab3-807d-d233c9d9223c\") " pod="openstack/barbican-worker-97bfd767f-7zg9s"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.699545 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4619b857-5e70-4ab3-807d-d233c9d9223c-config-data\") pod \"barbican-worker-97bfd767f-7zg9s\" (UID: \"4619b857-5e70-4ab3-807d-d233c9d9223c\") " pod="openstack/barbican-worker-97bfd767f-7zg9s"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.699569 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c41f96d0-0ffc-424b-afa7-d7b87f2bf11d-config-data\") pod \"barbican-keystone-listener-bc95b876b-t8r9q\" (UID: \"c41f96d0-0ffc-424b-afa7-d7b87f2bf11d\") " pod="openstack/barbican-keystone-listener-bc95b876b-t8r9q"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.699646 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x5vh\" (UniqueName: \"kubernetes.io/projected/c41f96d0-0ffc-424b-afa7-d7b87f2bf11d-kube-api-access-6x5vh\") pod \"barbican-keystone-listener-bc95b876b-t8r9q\" (UID: \"c41f96d0-0ffc-424b-afa7-d7b87f2bf11d\") " pod="openstack/barbican-keystone-listener-bc95b876b-t8r9q"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.699694 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4619b857-5e70-4ab3-807d-d233c9d9223c-config-data-custom\") pod \"barbican-worker-97bfd767f-7zg9s\" (UID: \"4619b857-5e70-4ab3-807d-d233c9d9223c\") " pod="openstack/barbican-worker-97bfd767f-7zg9s"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.699914 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4619b857-5e70-4ab3-807d-d233c9d9223c-combined-ca-bundle\") pod \"barbican-worker-97bfd767f-7zg9s\" (UID: \"4619b857-5e70-4ab3-807d-d233c9d9223c\") " pod="openstack/barbican-worker-97bfd767f-7zg9s"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.754245 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.788892 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cf96b7dc5-svc2b"]
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.790613 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf96b7dc5-svc2b"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.801641 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4619b857-5e70-4ab3-807d-d233c9d9223c-config-data-custom\") pod \"barbican-worker-97bfd767f-7zg9s\" (UID: \"4619b857-5e70-4ab3-807d-d233c9d9223c\") " pod="openstack/barbican-worker-97bfd767f-7zg9s"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.801927 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4619b857-5e70-4ab3-807d-d233c9d9223c-combined-ca-bundle\") pod \"barbican-worker-97bfd767f-7zg9s\" (UID: \"4619b857-5e70-4ab3-807d-d233c9d9223c\") " pod="openstack/barbican-worker-97bfd767f-7zg9s"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.802548 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxjpz\" (UniqueName: \"kubernetes.io/projected/4619b857-5e70-4ab3-807d-d233c9d9223c-kube-api-access-xxjpz\") pod \"barbican-worker-97bfd767f-7zg9s\" (UID: \"4619b857-5e70-4ab3-807d-d233c9d9223c\") " pod="openstack/barbican-worker-97bfd767f-7zg9s"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.802634 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c41f96d0-0ffc-424b-afa7-d7b87f2bf11d-logs\") pod \"barbican-keystone-listener-bc95b876b-t8r9q\" (UID: \"c41f96d0-0ffc-424b-afa7-d7b87f2bf11d\") " pod="openstack/barbican-keystone-listener-bc95b876b-t8r9q"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.812047 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c41f96d0-0ffc-424b-afa7-d7b87f2bf11d-config-data-custom\") pod \"barbican-keystone-listener-bc95b876b-t8r9q\" (UID: \"c41f96d0-0ffc-424b-afa7-d7b87f2bf11d\") " pod="openstack/barbican-keystone-listener-bc95b876b-t8r9q"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.812150 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c41f96d0-0ffc-424b-afa7-d7b87f2bf11d-logs\") pod \"barbican-keystone-listener-bc95b876b-t8r9q\" (UID: \"c41f96d0-0ffc-424b-afa7-d7b87f2bf11d\") " pod="openstack/barbican-keystone-listener-bc95b876b-t8r9q"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.812248 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cf96b7dc5-svc2b"]
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.812335 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c41f96d0-0ffc-424b-afa7-d7b87f2bf11d-combined-ca-bundle\") pod \"barbican-keystone-listener-bc95b876b-t8r9q\" (UID: \"c41f96d0-0ffc-424b-afa7-d7b87f2bf11d\") " pod="openstack/barbican-keystone-listener-bc95b876b-t8r9q"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.812415 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4619b857-5e70-4ab3-807d-d233c9d9223c-logs\") pod \"barbican-worker-97bfd767f-7zg9s\" (UID: \"4619b857-5e70-4ab3-807d-d233c9d9223c\") " pod="openstack/barbican-worker-97bfd767f-7zg9s"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.812507 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4619b857-5e70-4ab3-807d-d233c9d9223c-config-data\") pod \"barbican-worker-97bfd767f-7zg9s\" (UID: \"4619b857-5e70-4ab3-807d-d233c9d9223c\") " pod="openstack/barbican-worker-97bfd767f-7zg9s"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.812613 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c41f96d0-0ffc-424b-afa7-d7b87f2bf11d-config-data\") pod \"barbican-keystone-listener-bc95b876b-t8r9q\" (UID: \"c41f96d0-0ffc-424b-afa7-d7b87f2bf11d\") " pod="openstack/barbican-keystone-listener-bc95b876b-t8r9q"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.812808 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x5vh\" (UniqueName: \"kubernetes.io/projected/c41f96d0-0ffc-424b-afa7-d7b87f2bf11d-kube-api-access-6x5vh\") pod \"barbican-keystone-listener-bc95b876b-t8r9q\" (UID: \"c41f96d0-0ffc-424b-afa7-d7b87f2bf11d\") " pod="openstack/barbican-keystone-listener-bc95b876b-t8r9q"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.814244 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4619b857-5e70-4ab3-807d-d233c9d9223c-logs\") pod \"barbican-worker-97bfd767f-7zg9s\" (UID: \"4619b857-5e70-4ab3-807d-d233c9d9223c\") " pod="openstack/barbican-worker-97bfd767f-7zg9s"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.834589 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4619b857-5e70-4ab3-807d-d233c9d9223c-config-data-custom\") pod \"barbican-worker-97bfd767f-7zg9s\" (UID: \"4619b857-5e70-4ab3-807d-d233c9d9223c\") " pod="openstack/barbican-worker-97bfd767f-7zg9s"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.835833 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4619b857-5e70-4ab3-807d-d233c9d9223c-combined-ca-bundle\") pod \"barbican-worker-97bfd767f-7zg9s\" (UID: \"4619b857-5e70-4ab3-807d-d233c9d9223c\") " pod="openstack/barbican-worker-97bfd767f-7zg9s"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.841096 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x5vh\" (UniqueName: \"kubernetes.io/projected/c41f96d0-0ffc-424b-afa7-d7b87f2bf11d-kube-api-access-6x5vh\") pod \"barbican-keystone-listener-bc95b876b-t8r9q\" (UID: \"c41f96d0-0ffc-424b-afa7-d7b87f2bf11d\") " pod="openstack/barbican-keystone-listener-bc95b876b-t8r9q"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.842935 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4619b857-5e70-4ab3-807d-d233c9d9223c-config-data\") pod \"barbican-worker-97bfd767f-7zg9s\" (UID: \"4619b857-5e70-4ab3-807d-d233c9d9223c\") " pod="openstack/barbican-worker-97bfd767f-7zg9s"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.844861 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c41f96d0-0ffc-424b-afa7-d7b87f2bf11d-config-data\") pod \"barbican-keystone-listener-bc95b876b-t8r9q\" (UID: \"c41f96d0-0ffc-424b-afa7-d7b87f2bf11d\") " pod="openstack/barbican-keystone-listener-bc95b876b-t8r9q"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.858973 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c41f96d0-0ffc-424b-afa7-d7b87f2bf11d-combined-ca-bundle\") pod \"barbican-keystone-listener-bc95b876b-t8r9q\" (UID: \"c41f96d0-0ffc-424b-afa7-d7b87f2bf11d\") " pod="openstack/barbican-keystone-listener-bc95b876b-t8r9q"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.879171 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c41f96d0-0ffc-424b-afa7-d7b87f2bf11d-config-data-custom\") pod \"barbican-keystone-listener-bc95b876b-t8r9q\" (UID: \"c41f96d0-0ffc-424b-afa7-d7b87f2bf11d\") " pod="openstack/barbican-keystone-listener-bc95b876b-t8r9q"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.914264 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxjpz\" (UniqueName: \"kubernetes.io/projected/4619b857-5e70-4ab3-807d-d233c9d9223c-kube-api-access-xxjpz\") pod \"barbican-worker-97bfd767f-7zg9s\" (UID: \"4619b857-5e70-4ab3-807d-d233c9d9223c\") " pod="openstack/barbican-worker-97bfd767f-7zg9s"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.915517 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkqdv\" (UniqueName: \"kubernetes.io/projected/bc4b8b86-40ad-4ff6-8866-207cb02c182d-kube-api-access-qkqdv\") pod \"dnsmasq-dns-cf96b7dc5-svc2b\" (UID: \"bc4b8b86-40ad-4ff6-8866-207cb02c182d\") " pod="openstack/dnsmasq-dns-cf96b7dc5-svc2b"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.915598 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc4b8b86-40ad-4ff6-8866-207cb02c182d-config\") pod \"dnsmasq-dns-cf96b7dc5-svc2b\" (UID: \"bc4b8b86-40ad-4ff6-8866-207cb02c182d\") " pod="openstack/dnsmasq-dns-cf96b7dc5-svc2b"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.915623 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc4b8b86-40ad-4ff6-8866-207cb02c182d-ovsdbserver-nb\") pod \"dnsmasq-dns-cf96b7dc5-svc2b\" (UID: \"bc4b8b86-40ad-4ff6-8866-207cb02c182d\") " pod="openstack/dnsmasq-dns-cf96b7dc5-svc2b"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.915649 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc4b8b86-40ad-4ff6-8866-207cb02c182d-dns-svc\") pod \"dnsmasq-dns-cf96b7dc5-svc2b\" (UID: \"bc4b8b86-40ad-4ff6-8866-207cb02c182d\") " pod="openstack/dnsmasq-dns-cf96b7dc5-svc2b"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.915668 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc4b8b86-40ad-4ff6-8866-207cb02c182d-ovsdbserver-sb\") pod \"dnsmasq-dns-cf96b7dc5-svc2b\" (UID: \"bc4b8b86-40ad-4ff6-8866-207cb02c182d\") " pod="openstack/dnsmasq-dns-cf96b7dc5-svc2b"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.915691 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc4b8b86-40ad-4ff6-8866-207cb02c182d-dns-swift-storage-0\") pod \"dnsmasq-dns-cf96b7dc5-svc2b\" (UID: \"bc4b8b86-40ad-4ff6-8866-207cb02c182d\") " pod="openstack/dnsmasq-dns-cf96b7dc5-svc2b"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.976830 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-bc95b876b-t8r9q"
Nov 28 07:16:07 crc kubenswrapper[4946]: I1128 07:16:07.988636 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-97bfd767f-7zg9s"
Nov 28 07:16:08 crc kubenswrapper[4946]: I1128 07:16:08.022264 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc4b8b86-40ad-4ff6-8866-207cb02c182d-config\") pod \"dnsmasq-dns-cf96b7dc5-svc2b\" (UID: \"bc4b8b86-40ad-4ff6-8866-207cb02c182d\") " pod="openstack/dnsmasq-dns-cf96b7dc5-svc2b"
Nov 28 07:16:08 crc kubenswrapper[4946]: I1128 07:16:08.022320 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc4b8b86-40ad-4ff6-8866-207cb02c182d-ovsdbserver-nb\") pod \"dnsmasq-dns-cf96b7dc5-svc2b\" (UID: \"bc4b8b86-40ad-4ff6-8866-207cb02c182d\") " pod="openstack/dnsmasq-dns-cf96b7dc5-svc2b"
Nov 28 07:16:08 crc kubenswrapper[4946]: I1128 07:16:08.022355 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc4b8b86-40ad-4ff6-8866-207cb02c182d-dns-svc\") pod \"dnsmasq-dns-cf96b7dc5-svc2b\" (UID: \"bc4b8b86-40ad-4ff6-8866-207cb02c182d\") " pod="openstack/dnsmasq-dns-cf96b7dc5-svc2b"
Nov 28 07:16:08 crc kubenswrapper[4946]: I1128 07:16:08.022375 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc4b8b86-40ad-4ff6-8866-207cb02c182d-ovsdbserver-sb\") pod \"dnsmasq-dns-cf96b7dc5-svc2b\" (UID: \"bc4b8b86-40ad-4ff6-8866-207cb02c182d\") " pod="openstack/dnsmasq-dns-cf96b7dc5-svc2b"
Nov 28 07:16:08 crc kubenswrapper[4946]: I1128 07:16:08.022406 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc4b8b86-40ad-4ff6-8866-207cb02c182d-dns-swift-storage-0\") pod \"dnsmasq-dns-cf96b7dc5-svc2b\" (UID: \"bc4b8b86-40ad-4ff6-8866-207cb02c182d\") " pod="openstack/dnsmasq-dns-cf96b7dc5-svc2b"
Nov 28 07:16:08 crc kubenswrapper[4946]: I1128 07:16:08.022505 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkqdv\" (UniqueName: \"kubernetes.io/projected/bc4b8b86-40ad-4ff6-8866-207cb02c182d-kube-api-access-qkqdv\") pod \"dnsmasq-dns-cf96b7dc5-svc2b\" (UID: \"bc4b8b86-40ad-4ff6-8866-207cb02c182d\") " pod="openstack/dnsmasq-dns-cf96b7dc5-svc2b"
Nov 28 07:16:08 crc kubenswrapper[4946]: I1128 07:16:08.068778 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-97cf8d846-2fqhs"]
Nov 28 07:16:08 crc kubenswrapper[4946]: I1128 07:16:08.070506 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Nov 28 07:16:08 crc kubenswrapper[4946]: I1128 07:16:08.070872 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-97cf8d846-2fqhs"
Nov 28 07:16:08 crc kubenswrapper[4946]: I1128 07:16:08.075810 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data"
Nov 28 07:16:08 crc kubenswrapper[4946]: I1128 07:16:08.095849 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc4b8b86-40ad-4ff6-8866-207cb02c182d-config\") pod \"dnsmasq-dns-cf96b7dc5-svc2b\" (UID: \"bc4b8b86-40ad-4ff6-8866-207cb02c182d\") " pod="openstack/dnsmasq-dns-cf96b7dc5-svc2b"
Nov 28 07:16:08 crc kubenswrapper[4946]: I1128 07:16:08.097553 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc4b8b86-40ad-4ff6-8866-207cb02c182d-dns-svc\") pod \"dnsmasq-dns-cf96b7dc5-svc2b\" (UID: \"bc4b8b86-40ad-4ff6-8866-207cb02c182d\") " pod="openstack/dnsmasq-dns-cf96b7dc5-svc2b"
Nov 28 07:16:08 crc kubenswrapper[4946]: I1128 07:16:08.102581 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc4b8b86-40ad-4ff6-8866-207cb02c182d-ovsdbserver-sb\") pod \"dnsmasq-dns-cf96b7dc5-svc2b\" (UID: \"bc4b8b86-40ad-4ff6-8866-207cb02c182d\") " pod="openstack/dnsmasq-dns-cf96b7dc5-svc2b"
Nov 28 07:16:08 crc kubenswrapper[4946]: I1128 07:16:08.103244 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc4b8b86-40ad-4ff6-8866-207cb02c182d-ovsdbserver-nb\") pod \"dnsmasq-dns-cf96b7dc5-svc2b\" (UID: \"bc4b8b86-40ad-4ff6-8866-207cb02c182d\") " pod="openstack/dnsmasq-dns-cf96b7dc5-svc2b"
Nov 28 07:16:08 crc kubenswrapper[4946]: I1128 07:16:08.103376 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc4b8b86-40ad-4ff6-8866-207cb02c182d-dns-swift-storage-0\") pod \"dnsmasq-dns-cf96b7dc5-svc2b\" (UID: \"bc4b8b86-40ad-4ff6-8866-207cb02c182d\") " pod="openstack/dnsmasq-dns-cf96b7dc5-svc2b"
Nov 28 07:16:08 crc kubenswrapper[4946]: I1128 07:16:08.111207 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkqdv\" (UniqueName: \"kubernetes.io/projected/bc4b8b86-40ad-4ff6-8866-207cb02c182d-kube-api-access-qkqdv\") pod \"dnsmasq-dns-cf96b7dc5-svc2b\" (UID: \"bc4b8b86-40ad-4ff6-8866-207cb02c182d\") " pod="openstack/dnsmasq-dns-cf96b7dc5-svc2b"
Nov 28 07:16:08 crc kubenswrapper[4946]: I1128 07:16:08.170508 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-97cf8d846-2fqhs"]
Nov 28 07:16:08 crc kubenswrapper[4946]: I1128 07:16:08.206674 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-96fb6f878-56tfz"
Nov 28 07:16:08 crc kubenswrapper[4946]: I1128 07:16:08.220877 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-96fb6f878-56tfz"
Nov 28 07:16:08 crc kubenswrapper[4946]: I1128 07:16:08.236499 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755-config-data\") pod \"barbican-api-97cf8d846-2fqhs\" (UID: \"fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755\") " pod="openstack/barbican-api-97cf8d846-2fqhs"
Nov 28 07:16:08 crc kubenswrapper[4946]: I1128 07:16:08.236548 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755-config-data-custom\") pod \"barbican-api-97cf8d846-2fqhs\" (UID: \"fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755\") " pod="openstack/barbican-api-97cf8d846-2fqhs"
Nov 28 07:16:08 crc kubenswrapper[4946]: I1128 07:16:08.236573 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkvxg\" (UniqueName: \"kubernetes.io/projected/fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755-kube-api-access-fkvxg\") pod \"barbican-api-97cf8d846-2fqhs\" (UID: \"fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755\") " pod="openstack/barbican-api-97cf8d846-2fqhs"
Nov 28 07:16:08 crc kubenswrapper[4946]: I1128 07:16:08.236702 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755-logs\") pod \"barbican-api-97cf8d846-2fqhs\" (UID: \"fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755\") " pod="openstack/barbican-api-97cf8d846-2fqhs"
Nov 28 07:16:08 crc kubenswrapper[4946]: I1128 07:16:08.236720 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755-combined-ca-bundle\") pod \"barbican-api-97cf8d846-2fqhs\" (UID: \"fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755\") " pod="openstack/barbican-api-97cf8d846-2fqhs"
Nov 28 07:16:08 crc kubenswrapper[4946]: I1128 07:16:08.338752 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755-logs\") pod \"barbican-api-97cf8d846-2fqhs\" (UID: \"fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755\") " pod="openstack/barbican-api-97cf8d846-2fqhs"
Nov 28 07:16:08 crc kubenswrapper[4946]: I1128 07:16:08.338807 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755-combined-ca-bundle\") pod \"barbican-api-97cf8d846-2fqhs\" (UID: \"fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755\") " pod="openstack/barbican-api-97cf8d846-2fqhs"
Nov 28 07:16:08 crc kubenswrapper[4946]: I1128 07:16:08.339074 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755-config-data\") pod \"barbican-api-97cf8d846-2fqhs\" (UID: \"fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755\") " pod="openstack/barbican-api-97cf8d846-2fqhs"
Nov 28 07:16:08 crc kubenswrapper[4946]: I1128 07:16:08.339093 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755-config-data-custom\") pod \"barbican-api-97cf8d846-2fqhs\" (UID: \"fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755\") " pod="openstack/barbican-api-97cf8d846-2fqhs"
Nov 28 07:16:08 crc kubenswrapper[4946]: I1128 07:16:08.339141 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkvxg\" (UniqueName: \"kubernetes.io/projected/fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755-kube-api-access-fkvxg\") pod \"barbican-api-97cf8d846-2fqhs\" (UID: \"fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755\") " pod="openstack/barbican-api-97cf8d846-2fqhs"
Nov 28 07:16:08 crc kubenswrapper[4946]: I1128 07:16:08.355166 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755-config-data-custom\") pod \"barbican-api-97cf8d846-2fqhs\" (UID: \"fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755\") " pod="openstack/barbican-api-97cf8d846-2fqhs"
Nov 28 07:16:08 crc kubenswrapper[4946]: I1128 07:16:08.361497 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755-logs\") pod \"barbican-api-97cf8d846-2fqhs\" (UID: \"fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755\") " pod="openstack/barbican-api-97cf8d846-2fqhs"
Nov 28 07:16:08 crc kubenswrapper[4946]: I1128 07:16:08.366022 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755-combined-ca-bundle\") pod \"barbican-api-97cf8d846-2fqhs\" (UID: \"fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755\") " pod="openstack/barbican-api-97cf8d846-2fqhs"
Nov 28 07:16:08 crc kubenswrapper[4946]: I1128 07:16:08.366151 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755-config-data\") pod \"barbican-api-97cf8d846-2fqhs\" (UID: \"fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755\") " pod="openstack/barbican-api-97cf8d846-2fqhs"
Nov 28 07:16:08 crc kubenswrapper[4946]: I1128 07:16:08.377556 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkvxg\" (UniqueName: \"kubernetes.io/projected/fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755-kube-api-access-fkvxg\") pod \"barbican-api-97cf8d846-2fqhs\" (UID: \"fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755\") " pod="openstack/barbican-api-97cf8d846-2fqhs"
Nov 28 07:16:08 crc kubenswrapper[4946]: I1128 07:16:08.409639 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf96b7dc5-svc2b"
Nov 28 07:16:08 crc kubenswrapper[4946]: I1128 07:16:08.439008 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-97cf8d846-2fqhs"
Nov 28 07:16:08 crc kubenswrapper[4946]: E1128 07:16:08.684208 4946 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde236db3_71ed_4308_b783_aab4e13225b5.slice/crio-conmon-8a7733a102e3c067aaba74d858d59b2ac5ee773f7059dc7d5d37060effd43f34.scope\": RecentStats: unable to find data in memory cache]"
Nov 28 07:16:08 crc kubenswrapper[4946]: I1128 07:16:08.772834 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-97bfd767f-7zg9s"]
Nov 28 07:16:08 crc kubenswrapper[4946]: I1128 07:16:08.934851 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f8dc44d89-xffzr"
Nov 28 07:16:08 crc kubenswrapper[4946]: I1128 07:16:08.954152 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-77dfbb6d46-cf289"]
Nov 28 07:16:08 crc kubenswrapper[4946]: I1128 07:16:08.959564 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmwf5\" (UniqueName: \"kubernetes.io/projected/de236db3-71ed-4308-b783-aab4e13225b5-kube-api-access-cmwf5\") pod \"de236db3-71ed-4308-b783-aab4e13225b5\" (UID: \"de236db3-71ed-4308-b783-aab4e13225b5\") "
Nov 28 07:16:08 crc kubenswrapper[4946]: I1128 07:16:08.959661 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de236db3-71ed-4308-b783-aab4e13225b5-dns-svc\") pod \"de236db3-71ed-4308-b783-aab4e13225b5\" (UID: \"de236db3-71ed-4308-b783-aab4e13225b5\") "
Nov 28 07:16:08 crc kubenswrapper[4946]: I1128 07:16:08.959715 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de236db3-71ed-4308-b783-aab4e13225b5-ovsdbserver-nb\") pod \"de236db3-71ed-4308-b783-aab4e13225b5\" (UID: \"de236db3-71ed-4308-b783-aab4e13225b5\") "
Nov 28 07:16:08 crc kubenswrapper[4946]: I1128 07:16:08.959744 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/de236db3-71ed-4308-b783-aab4e13225b5-dns-swift-storage-0\") pod \"de236db3-71ed-4308-b783-aab4e13225b5\" (UID: \"de236db3-71ed-4308-b783-aab4e13225b5\") "
Nov 28 07:16:08 crc kubenswrapper[4946]: I1128 07:16:08.959956 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de236db3-71ed-4308-b783-aab4e13225b5-ovsdbserver-sb\") pod \"de236db3-71ed-4308-b783-aab4e13225b5\" (UID: \"de236db3-71ed-4308-b783-aab4e13225b5\") "
Nov 28 07:16:08 crc kubenswrapper[4946]: I1128 07:16:08.960100 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de236db3-71ed-4308-b783-aab4e13225b5-config\") pod \"de236db3-71ed-4308-b783-aab4e13225b5\" (UID: \"de236db3-71ed-4308-b783-aab4e13225b5\") "
Nov 28 07:16:08 crc kubenswrapper[4946]: I1128 07:16:08.974638 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de236db3-71ed-4308-b783-aab4e13225b5-kube-api-access-cmwf5" (OuterVolumeSpecName: "kube-api-access-cmwf5") pod "de236db3-71ed-4308-b783-aab4e13225b5" (UID: "de236db3-71ed-4308-b783-aab4e13225b5"). InnerVolumeSpecName "kube-api-access-cmwf5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 07:16:09 crc kubenswrapper[4946]: I1128 07:16:09.024937 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-bc95b876b-t8r9q"]
Nov 28 07:16:09 crc kubenswrapper[4946]: I1128 07:16:09.046217 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de236db3-71ed-4308-b783-aab4e13225b5-config" (OuterVolumeSpecName: "config") pod "de236db3-71ed-4308-b783-aab4e13225b5" (UID: "de236db3-71ed-4308-b783-aab4e13225b5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 07:16:09 crc kubenswrapper[4946]: I1128 07:16:09.063566 4946 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de236db3-71ed-4308-b783-aab4e13225b5-config\") on node \"crc\" DevicePath \"\""
Nov 28 07:16:09 crc kubenswrapper[4946]: I1128 07:16:09.063599 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmwf5\" (UniqueName: \"kubernetes.io/projected/de236db3-71ed-4308-b783-aab4e13225b5-kube-api-access-cmwf5\") on node \"crc\" DevicePath \"\""
Nov 28 07:16:09 crc kubenswrapper[4946]: I1128 07:16:09.093077 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de236db3-71ed-4308-b783-aab4e13225b5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "de236db3-71ed-4308-b783-aab4e13225b5" (UID: "de236db3-71ed-4308-b783-aab4e13225b5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 07:16:09 crc kubenswrapper[4946]: I1128 07:16:09.098554 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de236db3-71ed-4308-b783-aab4e13225b5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "de236db3-71ed-4308-b783-aab4e13225b5" (UID: "de236db3-71ed-4308-b783-aab4e13225b5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 07:16:09 crc kubenswrapper[4946]: I1128 07:16:09.102012 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de236db3-71ed-4308-b783-aab4e13225b5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "de236db3-71ed-4308-b783-aab4e13225b5" (UID: "de236db3-71ed-4308-b783-aab4e13225b5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 07:16:09 crc kubenswrapper[4946]: I1128 07:16:09.102630 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de236db3-71ed-4308-b783-aab4e13225b5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "de236db3-71ed-4308-b783-aab4e13225b5" (UID: "de236db3-71ed-4308-b783-aab4e13225b5"). InnerVolumeSpecName "ovsdbserver-nb".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:16:09 crc kubenswrapper[4946]: I1128 07:16:09.165883 4946 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de236db3-71ed-4308-b783-aab4e13225b5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 07:16:09 crc kubenswrapper[4946]: I1128 07:16:09.165929 4946 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de236db3-71ed-4308-b783-aab4e13225b5-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 07:16:09 crc kubenswrapper[4946]: I1128 07:16:09.165938 4946 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de236db3-71ed-4308-b783-aab4e13225b5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 07:16:09 crc kubenswrapper[4946]: I1128 07:16:09.166080 4946 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/de236db3-71ed-4308-b783-aab4e13225b5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 07:16:09 crc kubenswrapper[4946]: I1128 07:16:09.236825 4946 generic.go:334] "Generic (PLEG): container finished" podID="de236db3-71ed-4308-b783-aab4e13225b5" containerID="8a7733a102e3c067aaba74d858d59b2ac5ee773f7059dc7d5d37060effd43f34" exitCode=0 Nov 28 07:16:09 crc kubenswrapper[4946]: I1128 07:16:09.236895 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f8dc44d89-xffzr" event={"ID":"de236db3-71ed-4308-b783-aab4e13225b5","Type":"ContainerDied","Data":"8a7733a102e3c067aaba74d858d59b2ac5ee773f7059dc7d5d37060effd43f34"} Nov 28 07:16:09 crc kubenswrapper[4946]: I1128 07:16:09.236930 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f8dc44d89-xffzr" event={"ID":"de236db3-71ed-4308-b783-aab4e13225b5","Type":"ContainerDied","Data":"f7822fbbff9ab1de4e040d30b74c78746be0bfb171266e7223c1139875e63371"} Nov 28 07:16:09 crc kubenswrapper[4946]: I1128 07:16:09.236950 4946 scope.go:117] "RemoveContainer" containerID="8a7733a102e3c067aaba74d858d59b2ac5ee773f7059dc7d5d37060effd43f34" Nov 28 07:16:09 crc kubenswrapper[4946]: I1128 07:16:09.237085 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f8dc44d89-xffzr" Nov 28 07:16:09 crc kubenswrapper[4946]: I1128 07:16:09.253342 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-97bfd767f-7zg9s" event={"ID":"4619b857-5e70-4ab3-807d-d233c9d9223c","Type":"ContainerStarted","Data":"af404d73e71434bdd034c7fc0628178eb1c85dfa9f673efa7e2c157a4887187f"} Nov 28 07:16:09 crc kubenswrapper[4946]: I1128 07:16:09.272602 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-77dfbb6d46-cf289" event={"ID":"2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37","Type":"ContainerStarted","Data":"361c58d9d9eda81e7b6f10f7ce55798b58eb61a8af2c6578a63a57f1664a299b"} Nov 28 07:16:09 crc kubenswrapper[4946]: I1128 07:16:09.288434 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-97cf8d846-2fqhs"] Nov 28 07:16:09 crc kubenswrapper[4946]: I1128 07:16:09.288735 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-bc95b876b-t8r9q" event={"ID":"c41f96d0-0ffc-424b-afa7-d7b87f2bf11d","Type":"ContainerStarted","Data":"ed0857116183a45137ef9772b65b50928baca9419eea15636a75437e54f008b5"} Nov 28 07:16:09 crc kubenswrapper[4946]: I1128 07:16:09.309757 4946 scope.go:117] "RemoveContainer" containerID="f5677399d8071635fd901f66ea39012572a4a6378ffd8fb45891d25f4bc1e80a" Nov 28 07:16:09 crc kubenswrapper[4946]: I1128 07:16:09.327640 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f8dc44d89-xffzr"] Nov 28 07:16:09 crc kubenswrapper[4946]: I1128 07:16:09.352744 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f8dc44d89-xffzr"] Nov 28 07:16:09 crc kubenswrapper[4946]: I1128 07:16:09.430850 4946 scope.go:117] "RemoveContainer" containerID="8a7733a102e3c067aaba74d858d59b2ac5ee773f7059dc7d5d37060effd43f34" Nov 28 07:16:09 crc kubenswrapper[4946]: E1128 07:16:09.434780 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a7733a102e3c067aaba74d858d59b2ac5ee773f7059dc7d5d37060effd43f34\": container with ID starting with 8a7733a102e3c067aaba74d858d59b2ac5ee773f7059dc7d5d37060effd43f34 not found: ID does not exist" containerID="8a7733a102e3c067aaba74d858d59b2ac5ee773f7059dc7d5d37060effd43f34" Nov 28 07:16:09 crc kubenswrapper[4946]: I1128 07:16:09.434840 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a7733a102e3c067aaba74d858d59b2ac5ee773f7059dc7d5d37060effd43f34"} err="failed to get container status \"8a7733a102e3c067aaba74d858d59b2ac5ee773f7059dc7d5d37060effd43f34\": rpc error: code = NotFound desc = could not find container \"8a7733a102e3c067aaba74d858d59b2ac5ee773f7059dc7d5d37060effd43f34\": container with ID starting with 8a7733a102e3c067aaba74d858d59b2ac5ee773f7059dc7d5d37060effd43f34 not found: ID does not exist" Nov 28 07:16:09 crc kubenswrapper[4946]: I1128 07:16:09.434878 4946 scope.go:117] "RemoveContainer" containerID="f5677399d8071635fd901f66ea39012572a4a6378ffd8fb45891d25f4bc1e80a" Nov 28 07:16:09 crc kubenswrapper[4946]: E1128 07:16:09.437306 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5677399d8071635fd901f66ea39012572a4a6378ffd8fb45891d25f4bc1e80a\": container with ID starting with f5677399d8071635fd901f66ea39012572a4a6378ffd8fb45891d25f4bc1e80a not found: ID does not exist" 
containerID="f5677399d8071635fd901f66ea39012572a4a6378ffd8fb45891d25f4bc1e80a" Nov 28 07:16:09 crc kubenswrapper[4946]: I1128 07:16:09.437382 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5677399d8071635fd901f66ea39012572a4a6378ffd8fb45891d25f4bc1e80a"} err="failed to get container status \"f5677399d8071635fd901f66ea39012572a4a6378ffd8fb45891d25f4bc1e80a\": rpc error: code = NotFound desc = could not find container \"f5677399d8071635fd901f66ea39012572a4a6378ffd8fb45891d25f4bc1e80a\": container with ID starting with f5677399d8071635fd901f66ea39012572a4a6378ffd8fb45891d25f4bc1e80a not found: ID does not exist" Nov 28 07:16:09 crc kubenswrapper[4946]: I1128 07:16:09.461087 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cf96b7dc5-svc2b"] Nov 28 07:16:10 crc kubenswrapper[4946]: I1128 07:16:10.002693 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de236db3-71ed-4308-b783-aab4e13225b5" path="/var/lib/kubelet/pods/de236db3-71ed-4308-b783-aab4e13225b5/volumes" Nov 28 07:16:10 crc kubenswrapper[4946]: I1128 07:16:10.361326 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-77dfbb6d46-cf289" event={"ID":"2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37","Type":"ContainerStarted","Data":"d7ff90ba4ae10a7961b71df89adf06b1201b956ef3dcb71ad66f05d75ef803ef"} Nov 28 07:16:10 crc kubenswrapper[4946]: I1128 07:16:10.362146 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-77dfbb6d46-cf289" Nov 28 07:16:10 crc kubenswrapper[4946]: I1128 07:16:10.367406 4946 generic.go:334] "Generic (PLEG): container finished" podID="bc4b8b86-40ad-4ff6-8866-207cb02c182d" containerID="1b8a285ff6ab56bae635b5be65d3cbe9f74311d36c411030deed6954dac2d335" exitCode=0 Nov 28 07:16:10 crc kubenswrapper[4946]: I1128 07:16:10.367506 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf96b7dc5-svc2b" event={"ID":"bc4b8b86-40ad-4ff6-8866-207cb02c182d","Type":"ContainerDied","Data":"1b8a285ff6ab56bae635b5be65d3cbe9f74311d36c411030deed6954dac2d335"} Nov 28 07:16:10 crc kubenswrapper[4946]: I1128 07:16:10.367532 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf96b7dc5-svc2b" event={"ID":"bc4b8b86-40ad-4ff6-8866-207cb02c182d","Type":"ContainerStarted","Data":"fa596f55dfd5fc55df52db3001e0bc401f1c96d0aff4171ab2cd69df0be819b3"} Nov 28 07:16:10 crc kubenswrapper[4946]: I1128 07:16:10.381736 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-8wk8q" event={"ID":"8d0d856a-c2e4-4ccf-adfb-70391210f8d9","Type":"ContainerStarted","Data":"3d5eeaae441aa25568927d194f96b2d53b36fd0d2f9a8f7955ca367e5464cb34"} Nov 28 07:16:10 crc kubenswrapper[4946]: I1128 07:16:10.452397 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-97cf8d846-2fqhs" event={"ID":"fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755","Type":"ContainerStarted","Data":"849c8a042b3728a44b8c1f9369f45c135c112afac0133076b60c395d456ec422"} Nov 28 07:16:10 crc kubenswrapper[4946]: I1128 07:16:10.452663 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-97cf8d846-2fqhs" Nov 28 07:16:10 crc kubenswrapper[4946]: I1128 07:16:10.452727 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-97cf8d846-2fqhs" 
event={"ID":"fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755","Type":"ContainerStarted","Data":"421e9052d27407d15140d6d912af7b4e4e29caf6daa496715ddb295e6cdf2b4b"} Nov 28 07:16:10 crc kubenswrapper[4946]: I1128 07:16:10.452751 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-97cf8d846-2fqhs" event={"ID":"fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755","Type":"ContainerStarted","Data":"b99f314ec79311d335e481b1065a56fee4046c57a9bbf6229882104752629f75"} Nov 28 07:16:10 crc kubenswrapper[4946]: I1128 07:16:10.452783 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-97cf8d846-2fqhs" Nov 28 07:16:10 crc kubenswrapper[4946]: I1128 07:16:10.460218 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-77dfbb6d46-cf289" podStartSLOduration=3.460192952 podStartE2EDuration="3.460192952s" podCreationTimestamp="2025-11-28 07:16:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:16:10.396824068 +0000 UTC m=+1424.774889179" watchObservedRunningTime="2025-11-28 07:16:10.460192952 +0000 UTC m=+1424.838258063" Nov 28 07:16:10 crc kubenswrapper[4946]: I1128 07:16:10.476248 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-8wk8q" podStartSLOduration=5.147559016 podStartE2EDuration="42.476229656s" podCreationTimestamp="2025-11-28 07:15:28 +0000 UTC" firstStartedPulling="2025-11-28 07:15:30.34370845 +0000 UTC m=+1384.721773561" lastFinishedPulling="2025-11-28 07:16:07.67237909 +0000 UTC m=+1422.050444201" observedRunningTime="2025-11-28 07:16:10.452211897 +0000 UTC m=+1424.830277018" watchObservedRunningTime="2025-11-28 07:16:10.476229656 +0000 UTC m=+1424.854294767" Nov 28 07:16:10 crc kubenswrapper[4946]: I1128 07:16:10.497143 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-97cf8d846-2fqhs" podStartSLOduration=3.497121808 podStartE2EDuration="3.497121808s" podCreationTimestamp="2025-11-28 07:16:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:16:10.475022856 +0000 UTC m=+1424.853087967" watchObservedRunningTime="2025-11-28 07:16:10.497121808 +0000 UTC m=+1424.875186919" Nov 28 07:16:11 crc kubenswrapper[4946]: I1128 07:16:11.467156 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf96b7dc5-svc2b" event={"ID":"bc4b8b86-40ad-4ff6-8866-207cb02c182d","Type":"ContainerStarted","Data":"2fe9aa1a15d8263e269473bdce62508133ea57ffd23294ebe19d208918be43f3"} Nov 28 07:16:11 crc kubenswrapper[4946]: I1128 07:16:11.468263 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cf96b7dc5-svc2b" Nov 28 07:16:11 crc kubenswrapper[4946]: I1128 07:16:11.505895 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cf96b7dc5-svc2b" podStartSLOduration=4.505866305 podStartE2EDuration="4.505866305s" podCreationTimestamp="2025-11-28 07:16:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:16:11.496518465 +0000 UTC m=+1425.874583596" watchObservedRunningTime="2025-11-28 07:16:11.505866305 +0000 UTC m=+1425.883931426" Nov 28 07:16:11 crc kubenswrapper[4946]: I1128 07:16:11.819032 4946 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 28 07:16:11 crc kubenswrapper[4946]: I1128 07:16:11.819179 4946 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 07:16:11 crc kubenswrapper[4946]: I1128 07:16:11.876502 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-576769594d-lbv64"] Nov 28 07:16:11 crc kubenswrapper[4946]: E1128 07:16:11.877096 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de236db3-71ed-4308-b783-aab4e13225b5" containerName="init" Nov 28 07:16:11 crc kubenswrapper[4946]: I1128 07:16:11.877117 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="de236db3-71ed-4308-b783-aab4e13225b5" containerName="init" Nov 28 07:16:11 crc kubenswrapper[4946]: E1128 07:16:11.877136 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de236db3-71ed-4308-b783-aab4e13225b5" containerName="dnsmasq-dns" Nov 28 07:16:11 crc kubenswrapper[4946]: I1128 07:16:11.877146 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="de236db3-71ed-4308-b783-aab4e13225b5" containerName="dnsmasq-dns" Nov 28 07:16:11 crc kubenswrapper[4946]: I1128 07:16:11.877380 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="de236db3-71ed-4308-b783-aab4e13225b5" containerName="dnsmasq-dns" Nov 28 07:16:11 crc kubenswrapper[4946]: I1128 07:16:11.878659 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-576769594d-lbv64" Nov 28 07:16:11 crc kubenswrapper[4946]: I1128 07:16:11.882715 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 28 07:16:11 crc kubenswrapper[4946]: I1128 07:16:11.882777 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 28 07:16:11 crc kubenswrapper[4946]: I1128 07:16:11.886572 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-576769594d-lbv64"] Nov 28 07:16:11 crc kubenswrapper[4946]: I1128 07:16:11.975400 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/313c5837-e776-49ef-8689-14f6f70d31a1-config-data\") pod \"barbican-api-576769594d-lbv64\" (UID: \"313c5837-e776-49ef-8689-14f6f70d31a1\") " pod="openstack/barbican-api-576769594d-lbv64" Nov 28 07:16:11 crc kubenswrapper[4946]: I1128 07:16:11.975526 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gw8z\" (UniqueName: \"kubernetes.io/projected/313c5837-e776-49ef-8689-14f6f70d31a1-kube-api-access-6gw8z\") pod \"barbican-api-576769594d-lbv64\" (UID: \"313c5837-e776-49ef-8689-14f6f70d31a1\") " pod="openstack/barbican-api-576769594d-lbv64" Nov 28 07:16:11 crc kubenswrapper[4946]: I1128 07:16:11.975591 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/313c5837-e776-49ef-8689-14f6f70d31a1-public-tls-certs\") pod \"barbican-api-576769594d-lbv64\" (UID: \"313c5837-e776-49ef-8689-14f6f70d31a1\") " pod="openstack/barbican-api-576769594d-lbv64" Nov 28 07:16:11 crc kubenswrapper[4946]: I1128 07:16:11.975651 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/313c5837-e776-49ef-8689-14f6f70d31a1-internal-tls-certs\") pod \"barbican-api-576769594d-lbv64\" (UID: \"313c5837-e776-49ef-8689-14f6f70d31a1\") " pod="openstack/barbican-api-576769594d-lbv64" Nov 28 07:16:11 crc kubenswrapper[4946]: I1128 07:16:11.975675 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/313c5837-e776-49ef-8689-14f6f70d31a1-combined-ca-bundle\") pod \"barbican-api-576769594d-lbv64\" (UID: \"313c5837-e776-49ef-8689-14f6f70d31a1\") " pod="openstack/barbican-api-576769594d-lbv64" Nov 28 07:16:11 crc kubenswrapper[4946]: I1128 07:16:11.976455 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/313c5837-e776-49ef-8689-14f6f70d31a1-logs\") pod \"barbican-api-576769594d-lbv64\" (UID: \"313c5837-e776-49ef-8689-14f6f70d31a1\") " pod="openstack/barbican-api-576769594d-lbv64" Nov 28 07:16:11 crc kubenswrapper[4946]: I1128 07:16:11.976940 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/313c5837-e776-49ef-8689-14f6f70d31a1-config-data-custom\") pod \"barbican-api-576769594d-lbv64\" (UID: \"313c5837-e776-49ef-8689-14f6f70d31a1\") " pod="openstack/barbican-api-576769594d-lbv64" Nov 28 07:16:11 crc kubenswrapper[4946]: I1128 07:16:11.982118 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 28 07:16:11 crc kubenswrapper[4946]: I1128 07:16:11.983123 4946 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 07:16:11 crc kubenswrapper[4946]: I1128 07:16:11.987763 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 28 07:16:12 crc kubenswrapper[4946]: I1128 07:16:12.079272 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/313c5837-e776-49ef-8689-14f6f70d31a1-public-tls-certs\") pod \"barbican-api-576769594d-lbv64\" (UID: \"313c5837-e776-49ef-8689-14f6f70d31a1\") " pod="openstack/barbican-api-576769594d-lbv64" Nov 28 07:16:12 crc kubenswrapper[4946]: I1128 07:16:12.079380 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/313c5837-e776-49ef-8689-14f6f70d31a1-internal-tls-certs\") pod \"barbican-api-576769594d-lbv64\" (UID: \"313c5837-e776-49ef-8689-14f6f70d31a1\") " pod="openstack/barbican-api-576769594d-lbv64" Nov 28 07:16:12 crc kubenswrapper[4946]: I1128 07:16:12.079407 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/313c5837-e776-49ef-8689-14f6f70d31a1-combined-ca-bundle\") pod \"barbican-api-576769594d-lbv64\" (UID: \"313c5837-e776-49ef-8689-14f6f70d31a1\") " pod="openstack/barbican-api-576769594d-lbv64" Nov 28 07:16:12 crc kubenswrapper[4946]: I1128 07:16:12.079526 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/313c5837-e776-49ef-8689-14f6f70d31a1-logs\") pod \"barbican-api-576769594d-lbv64\" (UID: \"313c5837-e776-49ef-8689-14f6f70d31a1\") " pod="openstack/barbican-api-576769594d-lbv64" Nov 28 07:16:12 crc kubenswrapper[4946]: I1128 
07:16:12.079558 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/313c5837-e776-49ef-8689-14f6f70d31a1-config-data-custom\") pod \"barbican-api-576769594d-lbv64\" (UID: \"313c5837-e776-49ef-8689-14f6f70d31a1\") " pod="openstack/barbican-api-576769594d-lbv64" Nov 28 07:16:12 crc kubenswrapper[4946]: I1128 07:16:12.079591 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/313c5837-e776-49ef-8689-14f6f70d31a1-config-data\") pod \"barbican-api-576769594d-lbv64\" (UID: \"313c5837-e776-49ef-8689-14f6f70d31a1\") " pod="openstack/barbican-api-576769594d-lbv64" Nov 28 07:16:12 crc kubenswrapper[4946]: I1128 07:16:12.079624 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gw8z\" (UniqueName: \"kubernetes.io/projected/313c5837-e776-49ef-8689-14f6f70d31a1-kube-api-access-6gw8z\") pod \"barbican-api-576769594d-lbv64\" (UID: \"313c5837-e776-49ef-8689-14f6f70d31a1\") " pod="openstack/barbican-api-576769594d-lbv64" Nov 28 07:16:12 crc kubenswrapper[4946]: I1128 07:16:12.080987 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/313c5837-e776-49ef-8689-14f6f70d31a1-logs\") pod \"barbican-api-576769594d-lbv64\" (UID: \"313c5837-e776-49ef-8689-14f6f70d31a1\") " pod="openstack/barbican-api-576769594d-lbv64" Nov 28 07:16:12 crc kubenswrapper[4946]: I1128 07:16:12.082832 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 28 07:16:12 crc kubenswrapper[4946]: I1128 07:16:12.089274 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/313c5837-e776-49ef-8689-14f6f70d31a1-public-tls-certs\") pod \"barbican-api-576769594d-lbv64\" (UID: \"313c5837-e776-49ef-8689-14f6f70d31a1\") " pod="openstack/barbican-api-576769594d-lbv64" Nov 28 07:16:12 crc kubenswrapper[4946]: I1128 07:16:12.091100 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/313c5837-e776-49ef-8689-14f6f70d31a1-config-data-custom\") pod \"barbican-api-576769594d-lbv64\" (UID: \"313c5837-e776-49ef-8689-14f6f70d31a1\") " pod="openstack/barbican-api-576769594d-lbv64" Nov 28 07:16:12 crc kubenswrapper[4946]: I1128 07:16:12.094567 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/313c5837-e776-49ef-8689-14f6f70d31a1-combined-ca-bundle\") pod \"barbican-api-576769594d-lbv64\" (UID: \"313c5837-e776-49ef-8689-14f6f70d31a1\") " pod="openstack/barbican-api-576769594d-lbv64" Nov 28 07:16:12 crc kubenswrapper[4946]: I1128 07:16:12.099557 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/313c5837-e776-49ef-8689-14f6f70d31a1-internal-tls-certs\") pod \"barbican-api-576769594d-lbv64\" (UID: \"313c5837-e776-49ef-8689-14f6f70d31a1\") " pod="openstack/barbican-api-576769594d-lbv64" Nov 28 07:16:12 crc kubenswrapper[4946]: I1128 07:16:12.105367 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/313c5837-e776-49ef-8689-14f6f70d31a1-config-data\") pod \"barbican-api-576769594d-lbv64\" (UID: 
\"313c5837-e776-49ef-8689-14f6f70d31a1\") " pod="openstack/barbican-api-576769594d-lbv64" Nov 28 07:16:12 crc kubenswrapper[4946]: I1128 07:16:12.106936 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gw8z\" (UniqueName: \"kubernetes.io/projected/313c5837-e776-49ef-8689-14f6f70d31a1-kube-api-access-6gw8z\") pod \"barbican-api-576769594d-lbv64\" (UID: \"313c5837-e776-49ef-8689-14f6f70d31a1\") " pod="openstack/barbican-api-576769594d-lbv64" Nov 28 07:16:12 crc kubenswrapper[4946]: I1128 07:16:12.202404 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-576769594d-lbv64" Nov 28 07:16:13 crc kubenswrapper[4946]: I1128 07:16:13.214226 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-576769594d-lbv64"] Nov 28 07:16:13 crc kubenswrapper[4946]: W1128 07:16:13.223232 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod313c5837_e776_49ef_8689_14f6f70d31a1.slice/crio-6f40f8c684db37068a0770b44fa872fcdd30e6d8c9cadc9dfd30e9eaf286c527 WatchSource:0}: Error finding container 6f40f8c684db37068a0770b44fa872fcdd30e6d8c9cadc9dfd30e9eaf286c527: Status 404 returned error can't find the container with id 6f40f8c684db37068a0770b44fa872fcdd30e6d8c9cadc9dfd30e9eaf286c527 Nov 28 07:16:13 crc kubenswrapper[4946]: I1128 07:16:13.494750 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-bc95b876b-t8r9q" event={"ID":"c41f96d0-0ffc-424b-afa7-d7b87f2bf11d","Type":"ContainerStarted","Data":"0f786e996c300ebfa42ac27a855c0fd4efcf8b18e7989e6e4ee76f3ce6556d28"} Nov 28 07:16:13 crc kubenswrapper[4946]: I1128 07:16:13.495134 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-bc95b876b-t8r9q" event={"ID":"c41f96d0-0ffc-424b-afa7-d7b87f2bf11d","Type":"ContainerStarted","Data":"0eb88c99eb48005972cbcbed71e1bfbdae3511e830c076af9bb07fc162c1abd1"} Nov 28 07:16:13 crc kubenswrapper[4946]: I1128 07:16:13.507872 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-97bfd767f-7zg9s" event={"ID":"4619b857-5e70-4ab3-807d-d233c9d9223c","Type":"ContainerStarted","Data":"1bd51decff33c36bbc5dd5c23b04d649356ea9e1d0ccc8a4e364d1512e184087"} Nov 28 07:16:13 crc kubenswrapper[4946]: I1128 07:16:13.507934 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-97bfd767f-7zg9s" event={"ID":"4619b857-5e70-4ab3-807d-d233c9d9223c","Type":"ContainerStarted","Data":"9517940817e1a7d770b2a8b25720d839e772ec8d3fd3f428c2debea57ae43b63"} Nov 28 07:16:13 crc kubenswrapper[4946]: I1128 07:16:13.518062 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-bc95b876b-t8r9q" podStartSLOduration=2.963465371 podStartE2EDuration="6.518045865s" podCreationTimestamp="2025-11-28 07:16:07 +0000 UTC" firstStartedPulling="2025-11-28 07:16:09.08128857 +0000 UTC m=+1423.459353681" lastFinishedPulling="2025-11-28 07:16:12.635869064 +0000 UTC m=+1427.013934175" observedRunningTime="2025-11-28 07:16:13.515982275 +0000 UTC m=+1427.894047386" watchObservedRunningTime="2025-11-28 07:16:13.518045865 +0000 UTC m=+1427.896110976" Nov 28 07:16:13 crc kubenswrapper[4946]: I1128 07:16:13.528397 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-576769594d-lbv64" 
event={"ID":"313c5837-e776-49ef-8689-14f6f70d31a1","Type":"ContainerStarted","Data":"6f40f8c684db37068a0770b44fa872fcdd30e6d8c9cadc9dfd30e9eaf286c527"} Nov 28 07:16:13 crc kubenswrapper[4946]: I1128 07:16:13.550826 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-97bfd767f-7zg9s" podStartSLOduration=2.834256643 podStartE2EDuration="6.550804589s" podCreationTimestamp="2025-11-28 07:16:07 +0000 UTC" firstStartedPulling="2025-11-28 07:16:08.920398184 +0000 UTC m=+1423.298463295" lastFinishedPulling="2025-11-28 07:16:12.63694613 +0000 UTC m=+1427.015011241" observedRunningTime="2025-11-28 07:16:13.540949867 +0000 UTC m=+1427.919014978" watchObservedRunningTime="2025-11-28 07:16:13.550804589 +0000 UTC m=+1427.928869700" Nov 28 07:16:14 crc kubenswrapper[4946]: I1128 07:16:14.543893 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-576769594d-lbv64" event={"ID":"313c5837-e776-49ef-8689-14f6f70d31a1","Type":"ContainerStarted","Data":"ed90e8dccea49daff0a79a0d11bad0590ddbd5f45981e375a7dee23f1a208e56"} Nov 28 07:16:14 crc kubenswrapper[4946]: I1128 07:16:14.544203 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-576769594d-lbv64" event={"ID":"313c5837-e776-49ef-8689-14f6f70d31a1","Type":"ContainerStarted","Data":"7154bbd8ae6506bb7231f4ac45fc4ff0e5022062ff798b35ec9d298de493ab9f"} Nov 28 07:16:15 crc kubenswrapper[4946]: I1128 07:16:15.552413 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-576769594d-lbv64" Nov 28 07:16:15 crc kubenswrapper[4946]: I1128 07:16:15.552476 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-576769594d-lbv64" Nov 28 07:16:15 crc kubenswrapper[4946]: I1128 07:16:15.789418 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lpcd7" Nov 28 07:16:15 crc kubenswrapper[4946]: I1128 07:16:15.789842 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lpcd7" Nov 28 07:16:15 crc kubenswrapper[4946]: I1128 07:16:15.854496 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lpcd7" Nov 28 07:16:15 crc kubenswrapper[4946]: I1128 07:16:15.891051 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-576769594d-lbv64" podStartSLOduration=4.891023426 podStartE2EDuration="4.891023426s" podCreationTimestamp="2025-11-28 07:16:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:16:14.581091793 +0000 UTC m=+1428.959156904" watchObservedRunningTime="2025-11-28 07:16:15.891023426 +0000 UTC m=+1430.269088537" Nov 28 07:16:16 crc kubenswrapper[4946]: I1128 07:16:16.566137 4946 generic.go:334] "Generic (PLEG): container finished" podID="8d0d856a-c2e4-4ccf-adfb-70391210f8d9" containerID="3d5eeaae441aa25568927d194f96b2d53b36fd0d2f9a8f7955ca367e5464cb34" exitCode=0 Nov 28 07:16:16 crc kubenswrapper[4946]: I1128 07:16:16.566240 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-8wk8q" event={"ID":"8d0d856a-c2e4-4ccf-adfb-70391210f8d9","Type":"ContainerDied","Data":"3d5eeaae441aa25568927d194f96b2d53b36fd0d2f9a8f7955ca367e5464cb34"} Nov 28 07:16:16 crc kubenswrapper[4946]: I1128 07:16:16.655207 4946 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lpcd7" Nov 28 07:16:16 crc kubenswrapper[4946]: I1128 07:16:16.714723 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lpcd7"] Nov 28 07:16:18 crc kubenswrapper[4946]: I1128 07:16:18.411637 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cf96b7dc5-svc2b" Nov 28 07:16:18 crc kubenswrapper[4946]: I1128 07:16:18.498453 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f5d458b55-225m9"] Nov 28 07:16:18 crc kubenswrapper[4946]: I1128 07:16:18.499634 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f5d458b55-225m9" podUID="be5ff997-7c05-4d71-8a40-1cc01a423bc3" containerName="dnsmasq-dns" containerID="cri-o://ad9154061ecdccee32327c0040940e4bf6b7afef612d378da3b9ffd35cf3e8fe" gracePeriod=10 Nov 28 07:16:18 crc kubenswrapper[4946]: I1128 07:16:18.533564 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kbnxr"] Nov 28 07:16:18 crc kubenswrapper[4946]: I1128 07:16:18.536580 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kbnxr" Nov 28 07:16:18 crc kubenswrapper[4946]: I1128 07:16:18.574841 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kbnxr"] Nov 28 07:16:18 crc kubenswrapper[4946]: I1128 07:16:18.609537 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lpcd7" podUID="b9bec43f-4374-47e6-968e-858a39bfa527" containerName="registry-server" containerID="cri-o://0564c31d1504747c43e38bd341896264381cf50d36d80baec84079cb38b524ab" gracePeriod=2 Nov 28 07:16:18 crc kubenswrapper[4946]: I1128 07:16:18.710592 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljq9p\" (UniqueName: \"kubernetes.io/projected/5c4d27ca-8c37-4a47-b12f-cc0e8846eb5d-kube-api-access-ljq9p\") pod \"redhat-operators-kbnxr\" (UID: \"5c4d27ca-8c37-4a47-b12f-cc0e8846eb5d\") " pod="openshift-marketplace/redhat-operators-kbnxr" Nov 28 07:16:18 crc kubenswrapper[4946]: I1128 07:16:18.710706 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c4d27ca-8c37-4a47-b12f-cc0e8846eb5d-catalog-content\") pod \"redhat-operators-kbnxr\" (UID: \"5c4d27ca-8c37-4a47-b12f-cc0e8846eb5d\") " pod="openshift-marketplace/redhat-operators-kbnxr" Nov 28 07:16:18 crc kubenswrapper[4946]: I1128 07:16:18.710733 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c4d27ca-8c37-4a47-b12f-cc0e8846eb5d-utilities\") pod \"redhat-operators-kbnxr\" (UID: \"5c4d27ca-8c37-4a47-b12f-cc0e8846eb5d\") " pod="openshift-marketplace/redhat-operators-kbnxr" Nov 28 07:16:18 crc kubenswrapper[4946]: I1128 07:16:18.813495 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c4d27ca-8c37-4a47-b12f-cc0e8846eb5d-catalog-content\") pod \"redhat-operators-kbnxr\" (UID: \"5c4d27ca-8c37-4a47-b12f-cc0e8846eb5d\") " pod="openshift-marketplace/redhat-operators-kbnxr" Nov 28 07:16:18 crc kubenswrapper[4946]: I1128 
07:16:18.813802 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c4d27ca-8c37-4a47-b12f-cc0e8846eb5d-utilities\") pod \"redhat-operators-kbnxr\" (UID: \"5c4d27ca-8c37-4a47-b12f-cc0e8846eb5d\") " pod="openshift-marketplace/redhat-operators-kbnxr" Nov 28 07:16:18 crc kubenswrapper[4946]: I1128 07:16:18.813962 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljq9p\" (UniqueName: \"kubernetes.io/projected/5c4d27ca-8c37-4a47-b12f-cc0e8846eb5d-kube-api-access-ljq9p\") pod \"redhat-operators-kbnxr\" (UID: \"5c4d27ca-8c37-4a47-b12f-cc0e8846eb5d\") " pod="openshift-marketplace/redhat-operators-kbnxr" Nov 28 07:16:18 crc kubenswrapper[4946]: I1128 07:16:18.814527 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c4d27ca-8c37-4a47-b12f-cc0e8846eb5d-catalog-content\") pod \"redhat-operators-kbnxr\" (UID: \"5c4d27ca-8c37-4a47-b12f-cc0e8846eb5d\") " pod="openshift-marketplace/redhat-operators-kbnxr" Nov 28 07:16:18 crc kubenswrapper[4946]: I1128 07:16:18.814768 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c4d27ca-8c37-4a47-b12f-cc0e8846eb5d-utilities\") pod \"redhat-operators-kbnxr\" (UID: \"5c4d27ca-8c37-4a47-b12f-cc0e8846eb5d\") " pod="openshift-marketplace/redhat-operators-kbnxr" Nov 28 07:16:18 crc kubenswrapper[4946]: I1128 07:16:18.842274 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljq9p\" (UniqueName: \"kubernetes.io/projected/5c4d27ca-8c37-4a47-b12f-cc0e8846eb5d-kube-api-access-ljq9p\") pod \"redhat-operators-kbnxr\" (UID: \"5c4d27ca-8c37-4a47-b12f-cc0e8846eb5d\") " pod="openshift-marketplace/redhat-operators-kbnxr" Nov 28 07:16:18 crc kubenswrapper[4946]: I1128 07:16:18.885906 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kbnxr" Nov 28 07:16:19 crc kubenswrapper[4946]: I1128 07:16:19.438329 4946 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f5d458b55-225m9" podUID="be5ff997-7c05-4d71-8a40-1cc01a423bc3" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.139:5353: connect: connection refused" Nov 28 07:16:19 crc kubenswrapper[4946]: I1128 07:16:19.620339 4946 generic.go:334] "Generic (PLEG): container finished" podID="b9bec43f-4374-47e6-968e-858a39bfa527" containerID="0564c31d1504747c43e38bd341896264381cf50d36d80baec84079cb38b524ab" exitCode=0 Nov 28 07:16:19 crc kubenswrapper[4946]: I1128 07:16:19.620424 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lpcd7" event={"ID":"b9bec43f-4374-47e6-968e-858a39bfa527","Type":"ContainerDied","Data":"0564c31d1504747c43e38bd341896264381cf50d36d80baec84079cb38b524ab"} Nov 28 07:16:19 crc kubenswrapper[4946]: I1128 07:16:19.624124 4946 generic.go:334] "Generic (PLEG): container finished" podID="be5ff997-7c05-4d71-8a40-1cc01a423bc3" containerID="ad9154061ecdccee32327c0040940e4bf6b7afef612d378da3b9ffd35cf3e8fe" exitCode=0 Nov 28 07:16:19 crc kubenswrapper[4946]: I1128 07:16:19.624152 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f5d458b55-225m9" event={"ID":"be5ff997-7c05-4d71-8a40-1cc01a423bc3","Type":"ContainerDied","Data":"ad9154061ecdccee32327c0040940e4bf6b7afef612d378da3b9ffd35cf3e8fe"} Nov 28 07:16:20 crc kubenswrapper[4946]: I1128 07:16:20.151807 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-97cf8d846-2fqhs" Nov 28 07:16:20 crc kubenswrapper[4946]: I1128 07:16:20.426474 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-97cf8d846-2fqhs" Nov 28 07:16:20 crc kubenswrapper[4946]: I1128 07:16:20.600686 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-8wk8q" Nov 28 07:16:20 crc kubenswrapper[4946]: I1128 07:16:20.647970 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-8wk8q" Nov 28 07:16:20 crc kubenswrapper[4946]: I1128 07:16:20.648135 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-8wk8q" event={"ID":"8d0d856a-c2e4-4ccf-adfb-70391210f8d9","Type":"ContainerDied","Data":"cf8b41b8a4e8be9d4c4cddf896de608a53cba6eca033c6775fadf6d6900660c4"} Nov 28 07:16:20 crc kubenswrapper[4946]: I1128 07:16:20.648181 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf8b41b8a4e8be9d4c4cddf896de608a53cba6eca033c6775fadf6d6900660c4" Nov 28 07:16:20 crc kubenswrapper[4946]: I1128 07:16:20.758318 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d0d856a-c2e4-4ccf-adfb-70391210f8d9-scripts\") pod \"8d0d856a-c2e4-4ccf-adfb-70391210f8d9\" (UID: \"8d0d856a-c2e4-4ccf-adfb-70391210f8d9\") " Nov 28 07:16:20 crc kubenswrapper[4946]: I1128 07:16:20.758368 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d0d856a-c2e4-4ccf-adfb-70391210f8d9-combined-ca-bundle\") pod \"8d0d856a-c2e4-4ccf-adfb-70391210f8d9\" (UID: \"8d0d856a-c2e4-4ccf-adfb-70391210f8d9\") " Nov 28 07:16:20 crc kubenswrapper[4946]: I1128 07:16:20.758457 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5z6v\" (UniqueName: \"kubernetes.io/projected/8d0d856a-c2e4-4ccf-adfb-70391210f8d9-kube-api-access-m5z6v\") pod \"8d0d856a-c2e4-4ccf-adfb-70391210f8d9\" (UID: \"8d0d856a-c2e4-4ccf-adfb-70391210f8d9\") " Nov 28 07:16:20 crc kubenswrapper[4946]: I1128 07:16:20.758571 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8d0d856a-c2e4-4ccf-adfb-70391210f8d9-etc-machine-id\") pod \"8d0d856a-c2e4-4ccf-adfb-70391210f8d9\" (UID: \"8d0d856a-c2e4-4ccf-adfb-70391210f8d9\") " Nov 28 07:16:20 crc kubenswrapper[4946]: I1128 07:16:20.758650 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8d0d856a-c2e4-4ccf-adfb-70391210f8d9-db-sync-config-data\") pod \"8d0d856a-c2e4-4ccf-adfb-70391210f8d9\" (UID: \"8d0d856a-c2e4-4ccf-adfb-70391210f8d9\") " Nov 28 07:16:20 crc kubenswrapper[4946]: I1128 07:16:20.758680 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d0d856a-c2e4-4ccf-adfb-70391210f8d9-config-data\") pod \"8d0d856a-c2e4-4ccf-adfb-70391210f8d9\" (UID: \"8d0d856a-c2e4-4ccf-adfb-70391210f8d9\") " Nov 28 07:16:20 crc kubenswrapper[4946]: I1128 07:16:20.763090 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d0d856a-c2e4-4ccf-adfb-70391210f8d9-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "8d0d856a-c2e4-4ccf-adfb-70391210f8d9" (UID: "8d0d856a-c2e4-4ccf-adfb-70391210f8d9"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:16:20 crc kubenswrapper[4946]: I1128 07:16:20.763167 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d0d856a-c2e4-4ccf-adfb-70391210f8d9-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8d0d856a-c2e4-4ccf-adfb-70391210f8d9" (UID: "8d0d856a-c2e4-4ccf-adfb-70391210f8d9"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 07:16:20 crc kubenswrapper[4946]: I1128 07:16:20.764206 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d0d856a-c2e4-4ccf-adfb-70391210f8d9-scripts" (OuterVolumeSpecName: "scripts") pod "8d0d856a-c2e4-4ccf-adfb-70391210f8d9" (UID: "8d0d856a-c2e4-4ccf-adfb-70391210f8d9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:16:20 crc kubenswrapper[4946]: I1128 07:16:20.769602 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d0d856a-c2e4-4ccf-adfb-70391210f8d9-kube-api-access-m5z6v" (OuterVolumeSpecName: "kube-api-access-m5z6v") pod "8d0d856a-c2e4-4ccf-adfb-70391210f8d9" (UID: "8d0d856a-c2e4-4ccf-adfb-70391210f8d9"). InnerVolumeSpecName "kube-api-access-m5z6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:16:20 crc kubenswrapper[4946]: I1128 07:16:20.834519 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d0d856a-c2e4-4ccf-adfb-70391210f8d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d0d856a-c2e4-4ccf-adfb-70391210f8d9" (UID: "8d0d856a-c2e4-4ccf-adfb-70391210f8d9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:16:20 crc kubenswrapper[4946]: I1128 07:16:20.840746 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d0d856a-c2e4-4ccf-adfb-70391210f8d9-config-data" (OuterVolumeSpecName: "config-data") pod "8d0d856a-c2e4-4ccf-adfb-70391210f8d9" (UID: "8d0d856a-c2e4-4ccf-adfb-70391210f8d9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:16:20 crc kubenswrapper[4946]: I1128 07:16:20.862969 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d0d856a-c2e4-4ccf-adfb-70391210f8d9-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:16:20 crc kubenswrapper[4946]: I1128 07:16:20.863303 4946 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d0d856a-c2e4-4ccf-adfb-70391210f8d9-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:16:20 crc kubenswrapper[4946]: I1128 07:16:20.863314 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d0d856a-c2e4-4ccf-adfb-70391210f8d9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:16:20 crc kubenswrapper[4946]: I1128 07:16:20.863327 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5z6v\" (UniqueName: \"kubernetes.io/projected/8d0d856a-c2e4-4ccf-adfb-70391210f8d9-kube-api-access-m5z6v\") on node \"crc\" DevicePath \"\"" Nov 28 07:16:20 crc kubenswrapper[4946]: I1128 07:16:20.863338 4946 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8d0d856a-c2e4-4ccf-adfb-70391210f8d9-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 28 07:16:20 crc kubenswrapper[4946]: I1128 07:16:20.863347 4946 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8d0d856a-c2e4-4ccf-adfb-70391210f8d9-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:16:20 crc kubenswrapper[4946]: I1128 07:16:20.915408 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lpcd7"
Nov 28 07:16:20 crc kubenswrapper[4946]: E1128 07:16:20.953886 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="8310da02-e0c7-4dab-bc26-7139ca576c2c"
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.018565 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f5d458b55-225m9"
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.071269 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9bec43f-4374-47e6-968e-858a39bfa527-catalog-content\") pod \"b9bec43f-4374-47e6-968e-858a39bfa527\" (UID: \"b9bec43f-4374-47e6-968e-858a39bfa527\") "
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.071954 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zh22g\" (UniqueName: \"kubernetes.io/projected/b9bec43f-4374-47e6-968e-858a39bfa527-kube-api-access-zh22g\") pod \"b9bec43f-4374-47e6-968e-858a39bfa527\" (UID: \"b9bec43f-4374-47e6-968e-858a39bfa527\") "
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.072769 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be5ff997-7c05-4d71-8a40-1cc01a423bc3-dns-svc\") pod \"be5ff997-7c05-4d71-8a40-1cc01a423bc3\" (UID: \"be5ff997-7c05-4d71-8a40-1cc01a423bc3\") "
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.072848 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9bec43f-4374-47e6-968e-858a39bfa527-utilities\") pod \"b9bec43f-4374-47e6-968e-858a39bfa527\" (UID: \"b9bec43f-4374-47e6-968e-858a39bfa527\") "
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.072979 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/be5ff997-7c05-4d71-8a40-1cc01a423bc3-dns-swift-storage-0\") pod \"be5ff997-7c05-4d71-8a40-1cc01a423bc3\" (UID: \"be5ff997-7c05-4d71-8a40-1cc01a423bc3\") "
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.074985 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9bec43f-4374-47e6-968e-858a39bfa527-utilities" (OuterVolumeSpecName: "utilities") pod "b9bec43f-4374-47e6-968e-858a39bfa527" (UID: "b9bec43f-4374-47e6-968e-858a39bfa527"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.083746 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9bec43f-4374-47e6-968e-858a39bfa527-kube-api-access-zh22g" (OuterVolumeSpecName: "kube-api-access-zh22g") pod "b9bec43f-4374-47e6-968e-858a39bfa527" (UID: "b9bec43f-4374-47e6-968e-858a39bfa527"). InnerVolumeSpecName "kube-api-access-zh22g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.112021 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9bec43f-4374-47e6-968e-858a39bfa527-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b9bec43f-4374-47e6-968e-858a39bfa527" (UID: "b9bec43f-4374-47e6-968e-858a39bfa527"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.178913 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7br7z\" (UniqueName: \"kubernetes.io/projected/be5ff997-7c05-4d71-8a40-1cc01a423bc3-kube-api-access-7br7z\") pod \"be5ff997-7c05-4d71-8a40-1cc01a423bc3\" (UID: \"be5ff997-7c05-4d71-8a40-1cc01a423bc3\") "
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.179026 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/be5ff997-7c05-4d71-8a40-1cc01a423bc3-ovsdbserver-sb\") pod \"be5ff997-7c05-4d71-8a40-1cc01a423bc3\" (UID: \"be5ff997-7c05-4d71-8a40-1cc01a423bc3\") "
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.179159 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be5ff997-7c05-4d71-8a40-1cc01a423bc3-config\") pod \"be5ff997-7c05-4d71-8a40-1cc01a423bc3\" (UID: \"be5ff997-7c05-4d71-8a40-1cc01a423bc3\") "
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.179179 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be5ff997-7c05-4d71-8a40-1cc01a423bc3-ovsdbserver-nb\") pod \"be5ff997-7c05-4d71-8a40-1cc01a423bc3\" (UID: \"be5ff997-7c05-4d71-8a40-1cc01a423bc3\") "
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.179732 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9bec43f-4374-47e6-968e-858a39bfa527-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.179744 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9bec43f-4374-47e6-968e-858a39bfa527-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.179755 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zh22g\" (UniqueName: \"kubernetes.io/projected/b9bec43f-4374-47e6-968e-858a39bfa527-kube-api-access-zh22g\") on node \"crc\" DevicePath \"\""
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.187831 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kbnxr"]
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.204336 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be5ff997-7c05-4d71-8a40-1cc01a423bc3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "be5ff997-7c05-4d71-8a40-1cc01a423bc3" (UID: "be5ff997-7c05-4d71-8a40-1cc01a423bc3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.213644 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be5ff997-7c05-4d71-8a40-1cc01a423bc3-kube-api-access-7br7z" (OuterVolumeSpecName: "kube-api-access-7br7z") pod "be5ff997-7c05-4d71-8a40-1cc01a423bc3" (UID: "be5ff997-7c05-4d71-8a40-1cc01a423bc3"). InnerVolumeSpecName "kube-api-access-7br7z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.259707 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be5ff997-7c05-4d71-8a40-1cc01a423bc3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "be5ff997-7c05-4d71-8a40-1cc01a423bc3" (UID: "be5ff997-7c05-4d71-8a40-1cc01a423bc3"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.286108 4946 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be5ff997-7c05-4d71-8a40-1cc01a423bc3-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.286145 4946 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/be5ff997-7c05-4d71-8a40-1cc01a423bc3-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.286159 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7br7z\" (UniqueName: \"kubernetes.io/projected/be5ff997-7c05-4d71-8a40-1cc01a423bc3-kube-api-access-7br7z\") on node \"crc\" DevicePath \"\""
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.313368 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be5ff997-7c05-4d71-8a40-1cc01a423bc3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "be5ff997-7c05-4d71-8a40-1cc01a423bc3" (UID: "be5ff997-7c05-4d71-8a40-1cc01a423bc3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.326809 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be5ff997-7c05-4d71-8a40-1cc01a423bc3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "be5ff997-7c05-4d71-8a40-1cc01a423bc3" (UID: "be5ff997-7c05-4d71-8a40-1cc01a423bc3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.362636 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be5ff997-7c05-4d71-8a40-1cc01a423bc3-config" (OuterVolumeSpecName: "config") pod "be5ff997-7c05-4d71-8a40-1cc01a423bc3" (UID: "be5ff997-7c05-4d71-8a40-1cc01a423bc3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.388446 4946 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be5ff997-7c05-4d71-8a40-1cc01a423bc3-config\") on node \"crc\" DevicePath \"\""
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.388503 4946 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be5ff997-7c05-4d71-8a40-1cc01a423bc3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.388515 4946 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/be5ff997-7c05-4d71-8a40-1cc01a423bc3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.672619 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f5d458b55-225m9" event={"ID":"be5ff997-7c05-4d71-8a40-1cc01a423bc3","Type":"ContainerDied","Data":"03a25ab23f06b7da69cb8471d0fddf30a08f6e86d8b3370a0b92699a2762b055"}
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.672710 4946 scope.go:117] "RemoveContainer" containerID="ad9154061ecdccee32327c0040940e4bf6b7afef612d378da3b9ffd35cf3e8fe"
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.672914 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f5d458b55-225m9"
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.679854 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8310da02-e0c7-4dab-bc26-7139ca576c2c","Type":"ContainerStarted","Data":"df82ef63a87274d71a3d085faf073489fcdcd234b6c1bb73ccdce3974a20228a"}
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.680089 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8310da02-e0c7-4dab-bc26-7139ca576c2c" containerName="ceilometer-notification-agent" containerID="cri-o://f4240250529f6e7ff2ca469668b93e5b7f799dc70a5f6bfd1d23fc4836a7ef0b" gracePeriod=30
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.680227 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8310da02-e0c7-4dab-bc26-7139ca576c2c" containerName="proxy-httpd" containerID="cri-o://df82ef63a87274d71a3d085faf073489fcdcd234b6c1bb73ccdce3974a20228a" gracePeriod=30
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.680357 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.680304 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8310da02-e0c7-4dab-bc26-7139ca576c2c" containerName="sg-core" containerID="cri-o://4708373a46192f99732c4354806f10e8a24e49a44a638af9271cd3c51380f78a" gracePeriod=30
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.692011 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lpcd7" event={"ID":"b9bec43f-4374-47e6-968e-858a39bfa527","Type":"ContainerDied","Data":"562964e86d74d3b02ee17bf1342fba7afbe650fe5ebc18f1be5e2a77aea4e1a0"}
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.692139 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lpcd7"
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.708670 4946 generic.go:334] "Generic (PLEG): container finished" podID="5c4d27ca-8c37-4a47-b12f-cc0e8846eb5d" containerID="250ceef6a6804465c2153239acd760587c5d1fe5a1fe6ac7ee045eac80fab151" exitCode=0
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.708731 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kbnxr" event={"ID":"5c4d27ca-8c37-4a47-b12f-cc0e8846eb5d","Type":"ContainerDied","Data":"250ceef6a6804465c2153239acd760587c5d1fe5a1fe6ac7ee045eac80fab151"}
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.708771 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kbnxr" event={"ID":"5c4d27ca-8c37-4a47-b12f-cc0e8846eb5d","Type":"ContainerStarted","Data":"d7fa153e40767b019d22e338d9302fffb0aa7f84a435543bb353ad55bceb8903"}
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.735822 4946 scope.go:117] "RemoveContainer" containerID="133486b76b76281205b48295c969bee20c80a954b6722df2a9d7436c4c620478"
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.771525 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f5d458b55-225m9"]
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.780334 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f5d458b55-225m9"]
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.794690 4946 scope.go:117] "RemoveContainer" containerID="0564c31d1504747c43e38bd341896264381cf50d36d80baec84079cb38b524ab"
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.837671 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lpcd7"]
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.852066 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lpcd7"]
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.935480 4946 scope.go:117] "RemoveContainer" containerID="f3a83246ec59447909c7b33851f31529ea71c070ce34a671a8edd57ba76a4bc8"
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.981362 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 28 07:16:21 crc kubenswrapper[4946]: E1128 07:16:21.981848 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9bec43f-4374-47e6-968e-858a39bfa527" containerName="registry-server"
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.981868 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9bec43f-4374-47e6-968e-858a39bfa527" containerName="registry-server"
Nov 28 07:16:21 crc kubenswrapper[4946]: E1128 07:16:21.981897 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be5ff997-7c05-4d71-8a40-1cc01a423bc3" containerName="dnsmasq-dns"
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.981909 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="be5ff997-7c05-4d71-8a40-1cc01a423bc3" containerName="dnsmasq-dns"
Nov 28 07:16:21 crc kubenswrapper[4946]: E1128 07:16:21.981937 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be5ff997-7c05-4d71-8a40-1cc01a423bc3" containerName="init"
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.981945 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="be5ff997-7c05-4d71-8a40-1cc01a423bc3" containerName="init"
Nov 28 07:16:21 crc kubenswrapper[4946]: E1128 07:16:21.981957 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d0d856a-c2e4-4ccf-adfb-70391210f8d9" containerName="cinder-db-sync"
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.981964 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d0d856a-c2e4-4ccf-adfb-70391210f8d9" containerName="cinder-db-sync"
Nov 28 07:16:21 crc kubenswrapper[4946]: E1128 07:16:21.981990 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9bec43f-4374-47e6-968e-858a39bfa527" containerName="extract-content"
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.981999 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9bec43f-4374-47e6-968e-858a39bfa527" containerName="extract-content"
Nov 28 07:16:21 crc kubenswrapper[4946]: E1128 07:16:21.982011 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9bec43f-4374-47e6-968e-858a39bfa527" containerName="extract-utilities"
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.982017 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9bec43f-4374-47e6-968e-858a39bfa527" containerName="extract-utilities"
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.982197 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9bec43f-4374-47e6-968e-858a39bfa527" containerName="registry-server"
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.982223 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d0d856a-c2e4-4ccf-adfb-70391210f8d9" containerName="cinder-db-sync"
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.982245 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="be5ff997-7c05-4d71-8a40-1cc01a423bc3" containerName="dnsmasq-dns"
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.983405 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.989952 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.990231 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.990330 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-qfgjc"
Nov 28 07:16:21 crc kubenswrapper[4946]: I1128 07:16:21.990479 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.017052 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9bec43f-4374-47e6-968e-858a39bfa527" path="/var/lib/kubelet/pods/b9bec43f-4374-47e6-968e-858a39bfa527/volumes"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.017703 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be5ff997-7c05-4d71-8a40-1cc01a423bc3" path="/var/lib/kubelet/pods/be5ff997-7c05-4d71-8a40-1cc01a423bc3/volumes"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.023769 4946 scope.go:117] "RemoveContainer" containerID="cca5dc3fdfd20cbb0d08b4649f9aca583e073316daade6a505f6df5453ac58d2"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.025669 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhrs8\" (UniqueName: \"kubernetes.io/projected/3ea7cbfe-2490-4cdb-b331-fc9905773c54-kube-api-access-fhrs8\") pod \"cinder-scheduler-0\" (UID: \"3ea7cbfe-2490-4cdb-b331-fc9905773c54\") " pod="openstack/cinder-scheduler-0"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.025720 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ea7cbfe-2490-4cdb-b331-fc9905773c54-scripts\") pod \"cinder-scheduler-0\" (UID: \"3ea7cbfe-2490-4cdb-b331-fc9905773c54\") " pod="openstack/cinder-scheduler-0"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.025828 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3ea7cbfe-2490-4cdb-b331-fc9905773c54-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3ea7cbfe-2490-4cdb-b331-fc9905773c54\") " pod="openstack/cinder-scheduler-0"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.025846 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ea7cbfe-2490-4cdb-b331-fc9905773c54-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3ea7cbfe-2490-4cdb-b331-fc9905773c54\") " pod="openstack/cinder-scheduler-0"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.025863 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3ea7cbfe-2490-4cdb-b331-fc9905773c54-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3ea7cbfe-2490-4cdb-b331-fc9905773c54\") " pod="openstack/cinder-scheduler-0"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.025881 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ea7cbfe-2490-4cdb-b331-fc9905773c54-config-data\") pod \"cinder-scheduler-0\" (UID: \"3ea7cbfe-2490-4cdb-b331-fc9905773c54\") " pod="openstack/cinder-scheduler-0"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.069759 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.092378 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-c55f6679-r8nsx"]
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.094288 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c55f6679-r8nsx"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.121434 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c55f6679-r8nsx"]
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.135306 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3ea7cbfe-2490-4cdb-b331-fc9905773c54-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3ea7cbfe-2490-4cdb-b331-fc9905773c54\") " pod="openstack/cinder-scheduler-0"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.135357 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ea7cbfe-2490-4cdb-b331-fc9905773c54-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3ea7cbfe-2490-4cdb-b331-fc9905773c54\") " pod="openstack/cinder-scheduler-0"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.135381 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3ea7cbfe-2490-4cdb-b331-fc9905773c54-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3ea7cbfe-2490-4cdb-b331-fc9905773c54\") " pod="openstack/cinder-scheduler-0"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.135403 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ea7cbfe-2490-4cdb-b331-fc9905773c54-config-data\") pod \"cinder-scheduler-0\" (UID: \"3ea7cbfe-2490-4cdb-b331-fc9905773c54\") " pod="openstack/cinder-scheduler-0"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.135472 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhrs8\" (UniqueName: \"kubernetes.io/projected/3ea7cbfe-2490-4cdb-b331-fc9905773c54-kube-api-access-fhrs8\") pod \"cinder-scheduler-0\" (UID: \"3ea7cbfe-2490-4cdb-b331-fc9905773c54\") " pod="openstack/cinder-scheduler-0"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.135510 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ea7cbfe-2490-4cdb-b331-fc9905773c54-scripts\") pod \"cinder-scheduler-0\" (UID: \"3ea7cbfe-2490-4cdb-b331-fc9905773c54\") " pod="openstack/cinder-scheduler-0"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.135518 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3ea7cbfe-2490-4cdb-b331-fc9905773c54-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3ea7cbfe-2490-4cdb-b331-fc9905773c54\") " pod="openstack/cinder-scheduler-0"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.145299 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ea7cbfe-2490-4cdb-b331-fc9905773c54-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3ea7cbfe-2490-4cdb-b331-fc9905773c54\") " pod="openstack/cinder-scheduler-0"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.146297 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ea7cbfe-2490-4cdb-b331-fc9905773c54-scripts\") pod \"cinder-scheduler-0\" (UID: \"3ea7cbfe-2490-4cdb-b331-fc9905773c54\") " pod="openstack/cinder-scheduler-0"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.148690 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ea7cbfe-2490-4cdb-b331-fc9905773c54-config-data\") pod \"cinder-scheduler-0\" (UID: \"3ea7cbfe-2490-4cdb-b331-fc9905773c54\") " pod="openstack/cinder-scheduler-0"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.152914 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3ea7cbfe-2490-4cdb-b331-fc9905773c54-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3ea7cbfe-2490-4cdb-b331-fc9905773c54\") " pod="openstack/cinder-scheduler-0"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.175162 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhrs8\" (UniqueName: \"kubernetes.io/projected/3ea7cbfe-2490-4cdb-b331-fc9905773c54-kube-api-access-fhrs8\") pod \"cinder-scheduler-0\" (UID: \"3ea7cbfe-2490-4cdb-b331-fc9905773c54\") " pod="openstack/cinder-scheduler-0"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.238784 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4fd09876-590d-478f-a778-015719592efb-dns-svc\") pod \"dnsmasq-dns-c55f6679-r8nsx\" (UID: \"4fd09876-590d-478f-a778-015719592efb\") " pod="openstack/dnsmasq-dns-c55f6679-r8nsx"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.238894 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55t89\" (UniqueName: \"kubernetes.io/projected/4fd09876-590d-478f-a778-015719592efb-kube-api-access-55t89\") pod \"dnsmasq-dns-c55f6679-r8nsx\" (UID: \"4fd09876-590d-478f-a778-015719592efb\") " pod="openstack/dnsmasq-dns-c55f6679-r8nsx"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.238941 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4fd09876-590d-478f-a778-015719592efb-dns-swift-storage-0\") pod \"dnsmasq-dns-c55f6679-r8nsx\" (UID: \"4fd09876-590d-478f-a778-015719592efb\") " pod="openstack/dnsmasq-dns-c55f6679-r8nsx"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.239016 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fd09876-590d-478f-a778-015719592efb-config\") pod \"dnsmasq-dns-c55f6679-r8nsx\" (UID: \"4fd09876-590d-478f-a778-015719592efb\") " pod="openstack/dnsmasq-dns-c55f6679-r8nsx"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.239042 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4fd09876-590d-478f-a778-015719592efb-ovsdbserver-nb\") pod \"dnsmasq-dns-c55f6679-r8nsx\" (UID: \"4fd09876-590d-478f-a778-015719592efb\") " pod="openstack/dnsmasq-dns-c55f6679-r8nsx"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.239070 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4fd09876-590d-478f-a778-015719592efb-ovsdbserver-sb\") pod \"dnsmasq-dns-c55f6679-r8nsx\" (UID: \"4fd09876-590d-478f-a778-015719592efb\") " pod="openstack/dnsmasq-dns-c55f6679-r8nsx"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.341479 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4fd09876-590d-478f-a778-015719592efb-dns-svc\") pod \"dnsmasq-dns-c55f6679-r8nsx\" (UID: \"4fd09876-590d-478f-a778-015719592efb\") " pod="openstack/dnsmasq-dns-c55f6679-r8nsx"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.341551 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55t89\" (UniqueName: \"kubernetes.io/projected/4fd09876-590d-478f-a778-015719592efb-kube-api-access-55t89\") pod \"dnsmasq-dns-c55f6679-r8nsx\" (UID: \"4fd09876-590d-478f-a778-015719592efb\") " pod="openstack/dnsmasq-dns-c55f6679-r8nsx"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.341579 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4fd09876-590d-478f-a778-015719592efb-dns-swift-storage-0\") pod \"dnsmasq-dns-c55f6679-r8nsx\" (UID: \"4fd09876-590d-478f-a778-015719592efb\") " pod="openstack/dnsmasq-dns-c55f6679-r8nsx"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.341641 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fd09876-590d-478f-a778-015719592efb-config\") pod \"dnsmasq-dns-c55f6679-r8nsx\" (UID: \"4fd09876-590d-478f-a778-015719592efb\") " pod="openstack/dnsmasq-dns-c55f6679-r8nsx"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.341662 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4fd09876-590d-478f-a778-015719592efb-ovsdbserver-nb\") pod \"dnsmasq-dns-c55f6679-r8nsx\" (UID: \"4fd09876-590d-478f-a778-015719592efb\") " pod="openstack/dnsmasq-dns-c55f6679-r8nsx"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.341685 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4fd09876-590d-478f-a778-015719592efb-ovsdbserver-sb\") pod \"dnsmasq-dns-c55f6679-r8nsx\" (UID: \"4fd09876-590d-478f-a778-015719592efb\") " pod="openstack/dnsmasq-dns-c55f6679-r8nsx"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.342937 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fd09876-590d-478f-a778-015719592efb-config\") pod \"dnsmasq-dns-c55f6679-r8nsx\" (UID: \"4fd09876-590d-478f-a778-015719592efb\") " pod="openstack/dnsmasq-dns-c55f6679-r8nsx"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.342978 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4fd09876-590d-478f-a778-015719592efb-dns-svc\") pod \"dnsmasq-dns-c55f6679-r8nsx\" (UID: \"4fd09876-590d-478f-a778-015719592efb\") " pod="openstack/dnsmasq-dns-c55f6679-r8nsx"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.342997 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4fd09876-590d-478f-a778-015719592efb-ovsdbserver-sb\") pod \"dnsmasq-dns-c55f6679-r8nsx\" (UID: \"4fd09876-590d-478f-a778-015719592efb\") " pod="openstack/dnsmasq-dns-c55f6679-r8nsx"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.343000 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4fd09876-590d-478f-a778-015719592efb-dns-swift-storage-0\") pod \"dnsmasq-dns-c55f6679-r8nsx\" (UID: \"4fd09876-590d-478f-a778-015719592efb\") " pod="openstack/dnsmasq-dns-c55f6679-r8nsx"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.343094 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4fd09876-590d-478f-a778-015719592efb-ovsdbserver-nb\") pod \"dnsmasq-dns-c55f6679-r8nsx\" (UID: \"4fd09876-590d-478f-a778-015719592efb\") " pod="openstack/dnsmasq-dns-c55f6679-r8nsx"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.360236 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55t89\" (UniqueName: \"kubernetes.io/projected/4fd09876-590d-478f-a778-015719592efb-kube-api-access-55t89\") pod \"dnsmasq-dns-c55f6679-r8nsx\" (UID: \"4fd09876-590d-478f-a778-015719592efb\") " pod="openstack/dnsmasq-dns-c55f6679-r8nsx"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.384499 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.395734 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.397727 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.403173 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.418810 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c55f6679-r8nsx"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.423115 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.445525 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g5wz\" (UniqueName: \"kubernetes.io/projected/2845cc2c-3bb4-4083-b59b-cdd5ef7da024-kube-api-access-9g5wz\") pod \"cinder-api-0\" (UID: \"2845cc2c-3bb4-4083-b59b-cdd5ef7da024\") " pod="openstack/cinder-api-0"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.445569 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2845cc2c-3bb4-4083-b59b-cdd5ef7da024-logs\") pod \"cinder-api-0\" (UID: \"2845cc2c-3bb4-4083-b59b-cdd5ef7da024\") " pod="openstack/cinder-api-0"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.445604 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2845cc2c-3bb4-4083-b59b-cdd5ef7da024-scripts\") pod \"cinder-api-0\" (UID: \"2845cc2c-3bb4-4083-b59b-cdd5ef7da024\") " pod="openstack/cinder-api-0"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.445628 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2845cc2c-3bb4-4083-b59b-cdd5ef7da024-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2845cc2c-3bb4-4083-b59b-cdd5ef7da024\") " pod="openstack/cinder-api-0"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.445685 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2845cc2c-3bb4-4083-b59b-cdd5ef7da024-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2845cc2c-3bb4-4083-b59b-cdd5ef7da024\") " pod="openstack/cinder-api-0"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.445700 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2845cc2c-3bb4-4083-b59b-cdd5ef7da024-config-data\") pod \"cinder-api-0\" (UID: \"2845cc2c-3bb4-4083-b59b-cdd5ef7da024\") " pod="openstack/cinder-api-0"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.445729 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2845cc2c-3bb4-4083-b59b-cdd5ef7da024-config-data-custom\") pod \"cinder-api-0\" (UID: \"2845cc2c-3bb4-4083-b59b-cdd5ef7da024\") " pod="openstack/cinder-api-0"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.547122 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2845cc2c-3bb4-4083-b59b-cdd5ef7da024-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2845cc2c-3bb4-4083-b59b-cdd5ef7da024\") " pod="openstack/cinder-api-0"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.547167 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2845cc2c-3bb4-4083-b59b-cdd5ef7da024-config-data\") pod \"cinder-api-0\" (UID: \"2845cc2c-3bb4-4083-b59b-cdd5ef7da024\") " pod="openstack/cinder-api-0"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.547200 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2845cc2c-3bb4-4083-b59b-cdd5ef7da024-config-data-custom\") pod \"cinder-api-0\" (UID: \"2845cc2c-3bb4-4083-b59b-cdd5ef7da024\") " pod="openstack/cinder-api-0"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.547263 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9g5wz\" (UniqueName: \"kubernetes.io/projected/2845cc2c-3bb4-4083-b59b-cdd5ef7da024-kube-api-access-9g5wz\") pod \"cinder-api-0\" (UID: \"2845cc2c-3bb4-4083-b59b-cdd5ef7da024\") " pod="openstack/cinder-api-0"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.547289 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2845cc2c-3bb4-4083-b59b-cdd5ef7da024-logs\") pod \"cinder-api-0\" (UID: \"2845cc2c-3bb4-4083-b59b-cdd5ef7da024\") " pod="openstack/cinder-api-0"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.547320 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2845cc2c-3bb4-4083-b59b-cdd5ef7da024-scripts\") pod \"cinder-api-0\" (UID: \"2845cc2c-3bb4-4083-b59b-cdd5ef7da024\") " pod="openstack/cinder-api-0"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.547342 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2845cc2c-3bb4-4083-b59b-cdd5ef7da024-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2845cc2c-3bb4-4083-b59b-cdd5ef7da024\") " pod="openstack/cinder-api-0"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.547803 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2845cc2c-3bb4-4083-b59b-cdd5ef7da024-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2845cc2c-3bb4-4083-b59b-cdd5ef7da024\") " pod="openstack/cinder-api-0"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.548167 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2845cc2c-3bb4-4083-b59b-cdd5ef7da024-logs\") pod \"cinder-api-0\" (UID: \"2845cc2c-3bb4-4083-b59b-cdd5ef7da024\") " pod="openstack/cinder-api-0"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.558047 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2845cc2c-3bb4-4083-b59b-cdd5ef7da024-config-data-custom\") pod \"cinder-api-0\" (UID: \"2845cc2c-3bb4-4083-b59b-cdd5ef7da024\") " pod="openstack/cinder-api-0"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.563871 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2845cc2c-3bb4-4083-b59b-cdd5ef7da024-config-data\") pod \"cinder-api-0\" (UID: \"2845cc2c-3bb4-4083-b59b-cdd5ef7da024\") " pod="openstack/cinder-api-0"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.572260 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2845cc2c-3bb4-4083-b59b-cdd5ef7da024-scripts\") pod \"cinder-api-0\" (UID: \"2845cc2c-3bb4-4083-b59b-cdd5ef7da024\") " pod="openstack/cinder-api-0"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.572387 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2845cc2c-3bb4-4083-b59b-cdd5ef7da024-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2845cc2c-3bb4-4083-b59b-cdd5ef7da024\") " pod="openstack/cinder-api-0"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.582325 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9g5wz\" (UniqueName: \"kubernetes.io/projected/2845cc2c-3bb4-4083-b59b-cdd5ef7da024-kube-api-access-9g5wz\") pod \"cinder-api-0\" (UID: \"2845cc2c-3bb4-4083-b59b-cdd5ef7da024\") " pod="openstack/cinder-api-0"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.725012 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.761888 4946 generic.go:334] "Generic (PLEG): container finished" podID="8310da02-e0c7-4dab-bc26-7139ca576c2c" containerID="df82ef63a87274d71a3d085faf073489fcdcd234b6c1bb73ccdce3974a20228a" exitCode=0
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.761933 4946 generic.go:334] "Generic (PLEG): container finished" podID="8310da02-e0c7-4dab-bc26-7139ca576c2c" containerID="4708373a46192f99732c4354806f10e8a24e49a44a638af9271cd3c51380f78a" exitCode=2
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.762379 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8310da02-e0c7-4dab-bc26-7139ca576c2c","Type":"ContainerDied","Data":"df82ef63a87274d71a3d085faf073489fcdcd234b6c1bb73ccdce3974a20228a"}
Nov 28 07:16:22 crc kubenswrapper[4946]: I1128 07:16:22.762415 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8310da02-e0c7-4dab-bc26-7139ca576c2c","Type":"ContainerDied","Data":"4708373a46192f99732c4354806f10e8a24e49a44a638af9271cd3c51380f78a"}
Nov 28 07:16:23 crc kubenswrapper[4946]: I1128 07:16:23.003667 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 28 07:16:23 crc kubenswrapper[4946]: I1128 07:16:23.046448 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c55f6679-r8nsx"]
Nov 28 07:16:23 crc kubenswrapper[4946]: W1128 07:16:23.337654 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2845cc2c_3bb4_4083_b59b_cdd5ef7da024.slice/crio-4f4172a48ecf82d1b3a49e6808180501957709c894389f8d92c2a63e5a88cff5 WatchSource:0}: Error finding container 4f4172a48ecf82d1b3a49e6808180501957709c894389f8d92c2a63e5a88cff5: Status 404 returned error can't find the container with id 4f4172a48ecf82d1b3a49e6808180501957709c894389f8d92c2a63e5a88cff5
Nov 28 07:16:23 crc kubenswrapper[4946]: I1128 07:16:23.339921 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Nov 28 07:16:23 crc kubenswrapper[4946]: I1128 07:16:23.844367 4946 generic.go:334] "Generic (PLEG): container finished" podID="4fd09876-590d-478f-a778-015719592efb" containerID="9fa0bd89f0751c220d859de95dbe385d4777f8e6e5ba41138a10428806de2aa9" exitCode=0
Nov 28 07:16:23 crc kubenswrapper[4946]: I1128 07:16:23.844497 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c55f6679-r8nsx" event={"ID":"4fd09876-590d-478f-a778-015719592efb","Type":"ContainerDied","Data":"9fa0bd89f0751c220d859de95dbe385d4777f8e6e5ba41138a10428806de2aa9"}
Nov 28 07:16:23 crc kubenswrapper[4946]: I1128 07:16:23.844881 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c55f6679-r8nsx" event={"ID":"4fd09876-590d-478f-a778-015719592efb","Type":"ContainerStarted","Data":"aab1c61afa9f795a50db4a5314a583eddcf2e9f3aabc3b80865d42149bc1e5b8"}
Nov 28 07:16:23 crc kubenswrapper[4946]: I1128 07:16:23.905768 4946 generic.go:334] "Generic (PLEG): container finished" podID="5c4d27ca-8c37-4a47-b12f-cc0e8846eb5d" containerID="01116e135b1ce24cfe3a49e468bed58e4df8ab1116492bb38ecd02092646d9f8" exitCode=0
Nov 28 07:16:23 crc kubenswrapper[4946]: I1128 07:16:23.905884 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kbnxr" event={"ID":"5c4d27ca-8c37-4a47-b12f-cc0e8846eb5d","Type":"ContainerDied","Data":"01116e135b1ce24cfe3a49e468bed58e4df8ab1116492bb38ecd02092646d9f8"}
Nov 28 07:16:23 crc kubenswrapper[4946]: I1128 07:16:23.925795 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3ea7cbfe-2490-4cdb-b331-fc9905773c54","Type":"ContainerStarted","Data":"c73e13019cca89229e273d007f3b6b0bec53f4cda7a1e5804b8de69da1573226"}
Nov 28 07:16:23 crc kubenswrapper[4946]: I1128 07:16:23.951637 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2845cc2c-3bb4-4083-b59b-cdd5ef7da024","Type":"ContainerStarted","Data":"4f4172a48ecf82d1b3a49e6808180501957709c894389f8d92c2a63e5a88cff5"}
Nov 28 07:16:24 crc kubenswrapper[4946]: I1128 07:16:24.037079 4946 generic.go:334] "Generic (PLEG): container finished" podID="8310da02-e0c7-4dab-bc26-7139ca576c2c" containerID="f4240250529f6e7ff2ca469668b93e5b7f799dc70a5f6bfd1d23fc4836a7ef0b" exitCode=0
Nov 28 07:16:24 crc kubenswrapper[4946]: I1128 07:16:24.038382 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8310da02-e0c7-4dab-bc26-7139ca576c2c","Type":"ContainerDied","Data":"f4240250529f6e7ff2ca469668b93e5b7f799dc70a5f6bfd1d23fc4836a7ef0b"}
Nov 28 07:16:24 crc kubenswrapper[4946]: I1128 07:16:24.603629 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-bbd4d5c56-h9gwc"
Nov 28 07:16:24 crc kubenswrapper[4946]: I1128 07:16:24.722753 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-576769594d-lbv64"
Nov 28 07:16:24 crc kubenswrapper[4946]: I1128 07:16:24.731147 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 07:16:24 crc kubenswrapper[4946]: I1128 07:16:24.731196 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 07:16:24 crc kubenswrapper[4946]: I1128 07:16:24.731237 4946 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr"
Nov 28 07:16:24 crc kubenswrapper[4946]: I1128 07:16:24.732094 4946 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a1b65860bba4b7422a1bd44c20f73ab6d26e45cd22f0c4eba1bdbae4c38acc18"} pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 28 07:16:24 crc kubenswrapper[4946]: I1128 07:16:24.732157 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" containerID="cri-o://a1b65860bba4b7422a1bd44c20f73ab6d26e45cd22f0c4eba1bdbae4c38acc18" gracePeriod=600
Nov 28 07:16:24 crc kubenswrapper[4946]: I1128 07:16:24.740840 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 28 07:16:24 crc kubenswrapper[4946]: I1128 07:16:24.857324 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5g8xr\" (UniqueName: \"kubernetes.io/projected/8310da02-e0c7-4dab-bc26-7139ca576c2c-kube-api-access-5g8xr\") pod \"8310da02-e0c7-4dab-bc26-7139ca576c2c\" (UID: \"8310da02-e0c7-4dab-bc26-7139ca576c2c\") "
Nov 28 07:16:24 crc kubenswrapper[4946]: I1128 07:16:24.857486 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8310da02-e0c7-4dab-bc26-7139ca576c2c-config-data\") pod \"8310da02-e0c7-4dab-bc26-7139ca576c2c\" (UID: \"8310da02-e0c7-4dab-bc26-7139ca576c2c\") "
Nov 28 07:16:24 crc kubenswrapper[4946]: I1128 07:16:24.857530 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8310da02-e0c7-4dab-bc26-7139ca576c2c-run-httpd\") pod \"8310da02-e0c7-4dab-bc26-7139ca576c2c\" (UID: \"8310da02-e0c7-4dab-bc26-7139ca576c2c\") "
Nov 28 07:16:24 crc kubenswrapper[4946]: I1128 07:16:24.859038 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8310da02-e0c7-4dab-bc26-7139ca576c2c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8310da02-e0c7-4dab-bc26-7139ca576c2c" (UID: "8310da02-e0c7-4dab-bc26-7139ca576c2c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 07:16:24 crc kubenswrapper[4946]: I1128 07:16:24.859238 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8310da02-e0c7-4dab-bc26-7139ca576c2c-sg-core-conf-yaml\") pod \"8310da02-e0c7-4dab-bc26-7139ca576c2c\" (UID: \"8310da02-e0c7-4dab-bc26-7139ca576c2c\") "
Nov 28 07:16:24 crc kubenswrapper[4946]: I1128 07:16:24.859277 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8310da02-e0c7-4dab-bc26-7139ca576c2c-scripts\") pod \"8310da02-e0c7-4dab-bc26-7139ca576c2c\" (UID: \"8310da02-e0c7-4dab-bc26-7139ca576c2c\") "
Nov 28 07:16:24 crc kubenswrapper[4946]: I1128 07:16:24.859383 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8310da02-e0c7-4dab-bc26-7139ca576c2c-log-httpd\") pod \"8310da02-e0c7-4dab-bc26-7139ca576c2c\" (UID: \"8310da02-e0c7-4dab-bc26-7139ca576c2c\") "
Nov 28 07:16:24 crc kubenswrapper[4946]: I1128 07:16:24.859414 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8310da02-e0c7-4dab-bc26-7139ca576c2c-combined-ca-bundle\") pod \"8310da02-e0c7-4dab-bc26-7139ca576c2c\" (UID: \"8310da02-e0c7-4dab-bc26-7139ca576c2c\") "
Nov 28 07:16:24 crc kubenswrapper[4946]: I1128 07:16:24.862085 4946 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8310da02-e0c7-4dab-bc26-7139ca576c2c-run-httpd\") on node \"crc\" DevicePath \"\""
Nov 28 07:16:24 crc kubenswrapper[4946]: I1128 07:16:24.862446 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8310da02-e0c7-4dab-bc26-7139ca576c2c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8310da02-e0c7-4dab-bc26-7139ca576c2c" (UID: "8310da02-e0c7-4dab-bc26-7139ca576c2c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 07:16:24 crc kubenswrapper[4946]: I1128 07:16:24.863900 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8310da02-e0c7-4dab-bc26-7139ca576c2c-kube-api-access-5g8xr" (OuterVolumeSpecName: "kube-api-access-5g8xr") pod "8310da02-e0c7-4dab-bc26-7139ca576c2c" (UID: "8310da02-e0c7-4dab-bc26-7139ca576c2c"). InnerVolumeSpecName "kube-api-access-5g8xr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 07:16:24 crc kubenswrapper[4946]: I1128 07:16:24.876671 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8310da02-e0c7-4dab-bc26-7139ca576c2c-scripts" (OuterVolumeSpecName: "scripts") pod "8310da02-e0c7-4dab-bc26-7139ca576c2c" (UID: "8310da02-e0c7-4dab-bc26-7139ca576c2c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 07:16:24 crc kubenswrapper[4946]: I1128 07:16:24.876791 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-576769594d-lbv64"
Nov 28 07:16:24 crc kubenswrapper[4946]: I1128 07:16:24.887096 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8310da02-e0c7-4dab-bc26-7139ca576c2c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8310da02-e0c7-4dab-bc26-7139ca576c2c" (UID: "8310da02-e0c7-4dab-bc26-7139ca576c2c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 07:16:24 crc kubenswrapper[4946]: I1128 07:16:24.967390 4946 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8310da02-e0c7-4dab-bc26-7139ca576c2c-log-httpd\") on node \"crc\" DevicePath \"\""
Nov 28 07:16:24 crc kubenswrapper[4946]: I1128 07:16:24.967426 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5g8xr\" (UniqueName: \"kubernetes.io/projected/8310da02-e0c7-4dab-bc26-7139ca576c2c-kube-api-access-5g8xr\") on node \"crc\" DevicePath \"\""
Nov 28 07:16:24 crc kubenswrapper[4946]: I1128 07:16:24.967439 4946 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8310da02-e0c7-4dab-bc26-7139ca576c2c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Nov 28 07:16:24 crc kubenswrapper[4946]: I1128 07:16:24.967450 4946 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8310da02-e0c7-4dab-bc26-7139ca576c2c-scripts\") on node \"crc\" DevicePath \"\""
Nov 28 07:16:25 crc kubenswrapper[4946]: I1128 07:16:25.022544 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-97cf8d846-2fqhs"]
Nov 28 07:16:25 crc kubenswrapper[4946]: I1128 07:16:25.023356 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-97cf8d846-2fqhs" podUID="fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755" containerName="barbican-api-log" containerID="cri-o://421e9052d27407d15140d6d912af7b4e4e29caf6daa496715ddb295e6cdf2b4b" gracePeriod=30
Nov 28 07:16:25 crc kubenswrapper[4946]: I1128 07:16:25.023519 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-97cf8d846-2fqhs" podUID="fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755" containerName="barbican-api" containerID="cri-o://849c8a042b3728a44b8c1f9369f45c135c112afac0133076b60c395d456ec422" gracePeriod=30
Nov 28 07:16:25 crc kubenswrapper[4946]: I1128 07:16:25.029603 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8310da02-e0c7-4dab-bc26-7139ca576c2c-config-data" (OuterVolumeSpecName: "config-data") pod "8310da02-e0c7-4dab-bc26-7139ca576c2c" (UID: "8310da02-e0c7-4dab-bc26-7139ca576c2c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 07:16:25 crc kubenswrapper[4946]: I1128 07:16:25.032660 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8310da02-e0c7-4dab-bc26-7139ca576c2c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8310da02-e0c7-4dab-bc26-7139ca576c2c" (UID: "8310da02-e0c7-4dab-bc26-7139ca576c2c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 07:16:25 crc kubenswrapper[4946]: I1128 07:16:25.068880 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8310da02-e0c7-4dab-bc26-7139ca576c2c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 07:16:25 crc kubenswrapper[4946]: I1128 07:16:25.068914 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8310da02-e0c7-4dab-bc26-7139ca576c2c-config-data\") on node \"crc\" DevicePath \"\""
Nov 28 07:16:25 crc kubenswrapper[4946]: I1128 07:16:25.106353 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2845cc2c-3bb4-4083-b59b-cdd5ef7da024","Type":"ContainerStarted","Data":"f288f35c6b7ca8028c817841aa085cfc8a9a408e9b62414e319fa0acad933417"}
Nov 28 07:16:25 crc kubenswrapper[4946]: I1128 07:16:25.134433 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Nov 28 07:16:25 crc kubenswrapper[4946]: I1128 07:16:25.146343 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8310da02-e0c7-4dab-bc26-7139ca576c2c","Type":"ContainerDied","Data":"c26e837a2742c742a2d43b1c36855f77af9ae27aa510959b01bc2b1b75beac70"}
Nov 28 07:16:25 crc kubenswrapper[4946]: I1128 07:16:25.146418 4946 scope.go:117] "RemoveContainer" containerID="df82ef63a87274d71a3d085faf073489fcdcd234b6c1bb73ccdce3974a20228a"
Nov 28 07:16:25 crc kubenswrapper[4946]: I1128 07:16:25.146670 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 28 07:16:25 crc kubenswrapper[4946]: I1128 07:16:25.163903 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c55f6679-r8nsx" event={"ID":"4fd09876-590d-478f-a778-015719592efb","Type":"ContainerStarted","Data":"1473945a8753d1a76f9a5b69ee8837fb93f7c74574e19abb8e3fbd27fb2d1a13"}
Nov 28 07:16:25 crc kubenswrapper[4946]: I1128 07:16:25.164892 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-c55f6679-r8nsx"
Nov 28 07:16:25 crc kubenswrapper[4946]: I1128 07:16:25.181562 4946 generic.go:334] "Generic (PLEG): container finished" podID="7450befc-262f-45d1-a5f4-f445e540185b" containerID="a1b65860bba4b7422a1bd44c20f73ab6d26e45cd22f0c4eba1bdbae4c38acc18" exitCode=0
Nov 28 07:16:25 crc kubenswrapper[4946]: I1128 07:16:25.182113 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerDied","Data":"a1b65860bba4b7422a1bd44c20f73ab6d26e45cd22f0c4eba1bdbae4c38acc18"}
Nov 28 07:16:25 crc kubenswrapper[4946]: I1128 07:16:25.255542 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 28 07:16:25 crc kubenswrapper[4946]: I1128 07:16:25.272589 4946 scope.go:117] "RemoveContainer" containerID="4708373a46192f99732c4354806f10e8a24e49a44a638af9271cd3c51380f78a"
Nov 28 07:16:25 crc kubenswrapper[4946]: I1128 07:16:25.285040 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Nov 28 07:16:25 crc kubenswrapper[4946]: I1128 07:16:25.299674 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-c55f6679-r8nsx" podStartSLOduration=3.299644813 podStartE2EDuration="3.299644813s" podCreationTimestamp="2025-11-28 07:16:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:16:25.239064367 +0000 UTC m=+1439.617129478" watchObservedRunningTime="2025-11-28 07:16:25.299644813 +0000 UTC m=+1439.677709924"
Nov 28 07:16:25 crc kubenswrapper[4946]: I1128 07:16:25.326630 4946 scope.go:117] "RemoveContainer" containerID="f4240250529f6e7ff2ca469668b93e5b7f799dc70a5f6bfd1d23fc4836a7ef0b"
Nov 28 07:16:25 crc kubenswrapper[4946]: I1128 07:16:25.337263 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Nov 28 07:16:25 crc kubenswrapper[4946]: E1128 07:16:25.338083 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8310da02-e0c7-4dab-bc26-7139ca576c2c" containerName="sg-core"
Nov 28 07:16:25 crc kubenswrapper[4946]: I1128 07:16:25.338113 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="8310da02-e0c7-4dab-bc26-7139ca576c2c" containerName="sg-core"
Nov 28 07:16:25 crc kubenswrapper[4946]: E1128 07:16:25.338153 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8310da02-e0c7-4dab-bc26-7139ca576c2c" containerName="proxy-httpd"
Nov 28 07:16:25 crc kubenswrapper[4946]: I1128 07:16:25.338163 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="8310da02-e0c7-4dab-bc26-7139ca576c2c" containerName="proxy-httpd"
Nov 28 07:16:25 crc kubenswrapper[4946]: E1128 07:16:25.338178 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8310da02-e0c7-4dab-bc26-7139ca576c2c" containerName="ceilometer-notification-agent"
Nov 28 07:16:25 crc kubenswrapper[4946]: I1128 07:16:25.338187 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="8310da02-e0c7-4dab-bc26-7139ca576c2c" containerName="ceilometer-notification-agent"
Nov 28 07:16:25 crc kubenswrapper[4946]: I1128 07:16:25.338438 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="8310da02-e0c7-4dab-bc26-7139ca576c2c" containerName="proxy-httpd"
Nov 28 07:16:25 crc kubenswrapper[4946]: I1128 07:16:25.338480 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="8310da02-e0c7-4dab-bc26-7139ca576c2c" containerName="ceilometer-notification-agent"
Nov 28 07:16:25 crc kubenswrapper[4946]: I1128 07:16:25.338501 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="8310da02-e0c7-4dab-bc26-7139ca576c2c" containerName="sg-core"
Nov 28 07:16:25 crc kubenswrapper[4946]: I1128 07:16:25.341265 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 28 07:16:25 crc kubenswrapper[4946]: I1128 07:16:25.356823 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Nov 28 07:16:25 crc kubenswrapper[4946]: I1128 07:16:25.357718 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Nov 28 07:16:25 crc kubenswrapper[4946]: I1128 07:16:25.418572 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 28 07:16:25 crc kubenswrapper[4946]: I1128 07:16:25.429859 4946 scope.go:117] "RemoveContainer" containerID="0a6d974443f840af515def5c439a2c40cf5e3449f4043f5c4fe778bb70c9b0fd"
Nov 28 07:16:25 crc kubenswrapper[4946]: I1128 07:16:25.561926 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/506e968b-dc84-4f39-b5f4-427270eb7e9c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"506e968b-dc84-4f39-b5f4-427270eb7e9c\") " pod="openstack/ceilometer-0"
Nov 28 07:16:25 crc kubenswrapper[4946]: I1128 07:16:25.561997 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/506e968b-dc84-4f39-b5f4-427270eb7e9c-scripts\") pod \"ceilometer-0\" (UID: \"506e968b-dc84-4f39-b5f4-427270eb7e9c\") " pod="openstack/ceilometer-0"
Nov 28 07:16:25 crc kubenswrapper[4946]: I1128 07:16:25.562017 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/506e968b-dc84-4f39-b5f4-427270eb7e9c-run-httpd\") pod \"ceilometer-0\" (UID: \"506e968b-dc84-4f39-b5f4-427270eb7e9c\") " pod="openstack/ceilometer-0"
Nov 28 07:16:25 crc kubenswrapper[4946]: I1128 07:16:25.562078 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsckp\" (UniqueName: \"kubernetes.io/projected/506e968b-dc84-4f39-b5f4-427270eb7e9c-kube-api-access-lsckp\") pod \"ceilometer-0\" (UID: \"506e968b-dc84-4f39-b5f4-427270eb7e9c\") " pod="openstack/ceilometer-0"
Nov 28 07:16:25 crc kubenswrapper[4946]: I1128 07:16:25.562126 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/506e968b-dc84-4f39-b5f4-427270eb7e9c-config-data\") pod \"ceilometer-0\" (UID: \"506e968b-dc84-4f39-b5f4-427270eb7e9c\") " pod="openstack/ceilometer-0"
Nov 28 07:16:25 crc kubenswrapper[4946]: I1128 07:16:25.562145 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/506e968b-dc84-4f39-b5f4-427270eb7e9c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"506e968b-dc84-4f39-b5f4-427270eb7e9c\") " pod="openstack/ceilometer-0"
Nov 28 07:16:25 crc kubenswrapper[4946]: I1128 07:16:25.562165 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/506e968b-dc84-4f39-b5f4-427270eb7e9c-log-httpd\") pod \"ceilometer-0\" (UID: \"506e968b-dc84-4f39-b5f4-427270eb7e9c\") " pod="openstack/ceilometer-0"
Nov 28 07:16:25 crc kubenswrapper[4946]: I1128 07:16:25.664892 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/506e968b-dc84-4f39-b5f4-427270eb7e9c-scripts\") pod \"ceilometer-0\" (UID: \"506e968b-dc84-4f39-b5f4-427270eb7e9c\") " pod="openstack/ceilometer-0"
Nov 28 07:16:25 crc kubenswrapper[4946]: I1128 07:16:25.664936 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/506e968b-dc84-4f39-b5f4-427270eb7e9c-run-httpd\") pod \"ceilometer-0\" (UID: \"506e968b-dc84-4f39-b5f4-427270eb7e9c\") " pod="openstack/ceilometer-0"
Nov 28 07:16:25 crc kubenswrapper[4946]: I1128 07:16:25.665026 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsckp\" (UniqueName: \"kubernetes.io/projected/506e968b-dc84-4f39-b5f4-427270eb7e9c-kube-api-access-lsckp\") pod \"ceilometer-0\" (UID: \"506e968b-dc84-4f39-b5f4-427270eb7e9c\") " pod="openstack/ceilometer-0"
Nov 28 07:16:25 crc kubenswrapper[4946]: I1128 07:16:25.665086 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/506e968b-dc84-4f39-b5f4-427270eb7e9c-config-data\") pod \"ceilometer-0\" (UID: \"506e968b-dc84-4f39-b5f4-427270eb7e9c\") " pod="openstack/ceilometer-0"
Nov 28 07:16:25 crc kubenswrapper[4946]: I1128 07:16:25.665110 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/506e968b-dc84-4f39-b5f4-427270eb7e9c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"506e968b-dc84-4f39-b5f4-427270eb7e9c\") " pod="openstack/ceilometer-0"
Nov 28 07:16:25 crc kubenswrapper[4946]: I1128 07:16:25.665131 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/506e968b-dc84-4f39-b5f4-427270eb7e9c-log-httpd\") pod \"ceilometer-0\" (UID: \"506e968b-dc84-4f39-b5f4-427270eb7e9c\") " pod="openstack/ceilometer-0"
Nov 28 07:16:25 crc kubenswrapper[4946]: I1128 07:16:25.665161 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/506e968b-dc84-4f39-b5f4-427270eb7e9c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"506e968b-dc84-4f39-b5f4-427270eb7e9c\") " pod="openstack/ceilometer-0"
Nov 28 07:16:25 crc kubenswrapper[4946]: I1128 07:16:25.668928 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/506e968b-dc84-4f39-b5f4-427270eb7e9c-run-httpd\") pod \"ceilometer-0\" (UID: \"506e968b-dc84-4f39-b5f4-427270eb7e9c\") " pod="openstack/ceilometer-0"
Nov 28 07:16:25 crc kubenswrapper[4946]: I1128 07:16:25.669164 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/506e968b-dc84-4f39-b5f4-427270eb7e9c-log-httpd\") pod \"ceilometer-0\" (UID: \"506e968b-dc84-4f39-b5f4-427270eb7e9c\") " pod="openstack/ceilometer-0"
Nov 28 07:16:25 crc kubenswrapper[4946]: I1128 07:16:25.671104 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/506e968b-dc84-4f39-b5f4-427270eb7e9c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"506e968b-dc84-4f39-b5f4-427270eb7e9c\") " pod="openstack/ceilometer-0"
Nov 28 07:16:25 crc kubenswrapper[4946]: I1128 07:16:25.674813 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName:
\"kubernetes.io/secret/506e968b-dc84-4f39-b5f4-427270eb7e9c-scripts\") pod \"ceilometer-0\" (UID: \"506e968b-dc84-4f39-b5f4-427270eb7e9c\") " pod="openstack/ceilometer-0" Nov 28 07:16:25 crc kubenswrapper[4946]: I1128 07:16:25.679493 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/506e968b-dc84-4f39-b5f4-427270eb7e9c-config-data\") pod \"ceilometer-0\" (UID: \"506e968b-dc84-4f39-b5f4-427270eb7e9c\") " pod="openstack/ceilometer-0" Nov 28 07:16:25 crc kubenswrapper[4946]: I1128 07:16:25.693271 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsckp\" (UniqueName: \"kubernetes.io/projected/506e968b-dc84-4f39-b5f4-427270eb7e9c-kube-api-access-lsckp\") pod \"ceilometer-0\" (UID: \"506e968b-dc84-4f39-b5f4-427270eb7e9c\") " pod="openstack/ceilometer-0" Nov 28 07:16:25 crc kubenswrapper[4946]: I1128 07:16:25.694821 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/506e968b-dc84-4f39-b5f4-427270eb7e9c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"506e968b-dc84-4f39-b5f4-427270eb7e9c\") " pod="openstack/ceilometer-0" Nov 28 07:16:25 crc kubenswrapper[4946]: I1128 07:16:25.976429 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 07:16:26 crc kubenswrapper[4946]: I1128 07:16:26.023861 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8310da02-e0c7-4dab-bc26-7139ca576c2c" path="/var/lib/kubelet/pods/8310da02-e0c7-4dab-bc26-7139ca576c2c/volumes" Nov 28 07:16:26 crc kubenswrapper[4946]: I1128 07:16:26.233145 4946 generic.go:334] "Generic (PLEG): container finished" podID="fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755" containerID="421e9052d27407d15140d6d912af7b4e4e29caf6daa496715ddb295e6cdf2b4b" exitCode=143 Nov 28 07:16:26 crc kubenswrapper[4946]: I1128 07:16:26.233437 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-97cf8d846-2fqhs" event={"ID":"fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755","Type":"ContainerDied","Data":"421e9052d27407d15140d6d912af7b4e4e29caf6daa496715ddb295e6cdf2b4b"} Nov 28 07:16:26 crc kubenswrapper[4946]: I1128 07:16:26.236151 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerStarted","Data":"589c494ccdc76b2885f05d8ee6cc9d6370d1839c40f46f9651cf6d69eb740365"} Nov 28 07:16:26 crc kubenswrapper[4946]: I1128 07:16:26.250904 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kbnxr" event={"ID":"5c4d27ca-8c37-4a47-b12f-cc0e8846eb5d","Type":"ContainerStarted","Data":"53115fdb47e1622c7cc72bcac9916685906f7b9c04068a7aea488cf98bb750f2"} Nov 28 07:16:26 crc kubenswrapper[4946]: I1128 07:16:26.260591 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3ea7cbfe-2490-4cdb-b331-fc9905773c54","Type":"ContainerStarted","Data":"fbbfae95be4a445583685f21f5948fc029c254c9c247edc82e1e930792a47b75"} Nov 28 07:16:26 crc kubenswrapper[4946]: I1128 07:16:26.262477 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2845cc2c-3bb4-4083-b59b-cdd5ef7da024","Type":"ContainerStarted","Data":"7466b459b8431b0e3726f353873313895f6df0f78ec7a77143d0da560abb72eb"} Nov 28 07:16:26 crc kubenswrapper[4946]: I1128 07:16:26.262652 4946 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="2845cc2c-3bb4-4083-b59b-cdd5ef7da024" containerName="cinder-api-log" containerID="cri-o://f288f35c6b7ca8028c817841aa085cfc8a9a408e9b62414e319fa0acad933417" gracePeriod=30 Nov 28 07:16:26 crc kubenswrapper[4946]: I1128 07:16:26.262924 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 28 07:16:26 crc kubenswrapper[4946]: I1128 07:16:26.262963 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="2845cc2c-3bb4-4083-b59b-cdd5ef7da024" containerName="cinder-api" containerID="cri-o://7466b459b8431b0e3726f353873313895f6df0f78ec7a77143d0da560abb72eb" gracePeriod=30 Nov 28 07:16:26 crc kubenswrapper[4946]: I1128 07:16:26.288193 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kbnxr" podStartSLOduration=5.296477491 podStartE2EDuration="8.288157743s" podCreationTimestamp="2025-11-28 07:16:18 +0000 UTC" firstStartedPulling="2025-11-28 07:16:21.714961399 +0000 UTC m=+1436.093026520" lastFinishedPulling="2025-11-28 07:16:24.706641661 +0000 UTC m=+1439.084706772" observedRunningTime="2025-11-28 07:16:26.284932764 +0000 UTC m=+1440.662997875" watchObservedRunningTime="2025-11-28 07:16:26.288157743 +0000 UTC m=+1440.666222854" Nov 28 07:16:26 crc kubenswrapper[4946]: I1128 07:16:26.319070 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.319046311 podStartE2EDuration="4.319046311s" podCreationTimestamp="2025-11-28 07:16:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:16:26.30881184 +0000 UTC m=+1440.686876951" watchObservedRunningTime="2025-11-28 07:16:26.319046311 +0000 UTC m=+1440.697111412" Nov 28 07:16:26 crc kubenswrapper[4946]: I1128 07:16:26.637712 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:16:26 crc kubenswrapper[4946]: W1128 07:16:26.680054 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod506e968b_dc84_4f39_b5f4_427270eb7e9c.slice/crio-0d7f66408cf0fc2b2d8c88331ac074691b5790f57056d5eba6b24d0e3a2a6774 WatchSource:0}: Error finding container 0d7f66408cf0fc2b2d8c88331ac074691b5790f57056d5eba6b24d0e3a2a6774: Status 404 returned error can't find the container with id 0d7f66408cf0fc2b2d8c88331ac074691b5790f57056d5eba6b24d0e3a2a6774 Nov 28 07:16:27 crc kubenswrapper[4946]: I1128 07:16:27.304922 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3ea7cbfe-2490-4cdb-b331-fc9905773c54","Type":"ContainerStarted","Data":"1316a0c17874e27f705f7919a0b1d106021686bbb735182511ae5849559e0f64"} Nov 28 07:16:27 crc kubenswrapper[4946]: I1128 07:16:27.307502 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"506e968b-dc84-4f39-b5f4-427270eb7e9c","Type":"ContainerStarted","Data":"0d7f66408cf0fc2b2d8c88331ac074691b5790f57056d5eba6b24d0e3a2a6774"} Nov 28 07:16:27 crc kubenswrapper[4946]: I1128 07:16:27.318994 4946 generic.go:334] "Generic (PLEG): container finished" podID="2845cc2c-3bb4-4083-b59b-cdd5ef7da024" containerID="7466b459b8431b0e3726f353873313895f6df0f78ec7a77143d0da560abb72eb" exitCode=0 Nov 28 07:16:27 crc 
kubenswrapper[4946]: I1128 07:16:27.319028 4946 generic.go:334] "Generic (PLEG): container finished" podID="2845cc2c-3bb4-4083-b59b-cdd5ef7da024" containerID="f288f35c6b7ca8028c817841aa085cfc8a9a408e9b62414e319fa0acad933417" exitCode=143 Nov 28 07:16:27 crc kubenswrapper[4946]: I1128 07:16:27.320407 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2845cc2c-3bb4-4083-b59b-cdd5ef7da024","Type":"ContainerDied","Data":"7466b459b8431b0e3726f353873313895f6df0f78ec7a77143d0da560abb72eb"} Nov 28 07:16:27 crc kubenswrapper[4946]: I1128 07:16:27.320442 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2845cc2c-3bb4-4083-b59b-cdd5ef7da024","Type":"ContainerDied","Data":"f288f35c6b7ca8028c817841aa085cfc8a9a408e9b62414e319fa0acad933417"} Nov 28 07:16:27 crc kubenswrapper[4946]: I1128 07:16:27.349566 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.690450115 podStartE2EDuration="6.348388041s" podCreationTimestamp="2025-11-28 07:16:21 +0000 UTC" firstStartedPulling="2025-11-28 07:16:23.047180818 +0000 UTC m=+1437.425245929" lastFinishedPulling="2025-11-28 07:16:24.705118744 +0000 UTC m=+1439.083183855" observedRunningTime="2025-11-28 07:16:27.328664789 +0000 UTC m=+1441.706729910" watchObservedRunningTime="2025-11-28 07:16:27.348388041 +0000 UTC m=+1441.726453162" Nov 28 07:16:27 crc kubenswrapper[4946]: I1128 07:16:27.385416 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 28 07:16:27 crc kubenswrapper[4946]: I1128 07:16:27.564208 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 28 07:16:27 crc kubenswrapper[4946]: I1128 07:16:27.718098 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-777c4856b5-mgnhk" Nov 28 07:16:27 crc kubenswrapper[4946]: I1128 07:16:27.743442 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9g5wz\" (UniqueName: \"kubernetes.io/projected/2845cc2c-3bb4-4083-b59b-cdd5ef7da024-kube-api-access-9g5wz\") pod \"2845cc2c-3bb4-4083-b59b-cdd5ef7da024\" (UID: \"2845cc2c-3bb4-4083-b59b-cdd5ef7da024\") " Nov 28 07:16:27 crc kubenswrapper[4946]: I1128 07:16:27.743558 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2845cc2c-3bb4-4083-b59b-cdd5ef7da024-config-data\") pod \"2845cc2c-3bb4-4083-b59b-cdd5ef7da024\" (UID: \"2845cc2c-3bb4-4083-b59b-cdd5ef7da024\") " Nov 28 07:16:27 crc kubenswrapper[4946]: I1128 07:16:27.743699 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2845cc2c-3bb4-4083-b59b-cdd5ef7da024-scripts\") pod \"2845cc2c-3bb4-4083-b59b-cdd5ef7da024\" (UID: \"2845cc2c-3bb4-4083-b59b-cdd5ef7da024\") " Nov 28 07:16:27 crc kubenswrapper[4946]: I1128 07:16:27.743747 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2845cc2c-3bb4-4083-b59b-cdd5ef7da024-logs\") pod \"2845cc2c-3bb4-4083-b59b-cdd5ef7da024\" (UID: \"2845cc2c-3bb4-4083-b59b-cdd5ef7da024\") " Nov 28 07:16:27 crc kubenswrapper[4946]: I1128 07:16:27.743835 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/2845cc2c-3bb4-4083-b59b-cdd5ef7da024-etc-machine-id\") pod \"2845cc2c-3bb4-4083-b59b-cdd5ef7da024\" (UID: \"2845cc2c-3bb4-4083-b59b-cdd5ef7da024\") " Nov 28 07:16:27 crc kubenswrapper[4946]: I1128 07:16:27.743905 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2845cc2c-3bb4-4083-b59b-cdd5ef7da024-combined-ca-bundle\") pod \"2845cc2c-3bb4-4083-b59b-cdd5ef7da024\" (UID: \"2845cc2c-3bb4-4083-b59b-cdd5ef7da024\") " Nov 28 07:16:27 crc kubenswrapper[4946]: I1128 07:16:27.743959 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2845cc2c-3bb4-4083-b59b-cdd5ef7da024-config-data-custom\") pod \"2845cc2c-3bb4-4083-b59b-cdd5ef7da024\" (UID: \"2845cc2c-3bb4-4083-b59b-cdd5ef7da024\") " Nov 28 07:16:27 crc kubenswrapper[4946]: I1128 07:16:27.744692 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2845cc2c-3bb4-4083-b59b-cdd5ef7da024-logs" (OuterVolumeSpecName: "logs") pod "2845cc2c-3bb4-4083-b59b-cdd5ef7da024" (UID: "2845cc2c-3bb4-4083-b59b-cdd5ef7da024"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:16:27 crc kubenswrapper[4946]: I1128 07:16:27.744956 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2845cc2c-3bb4-4083-b59b-cdd5ef7da024-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2845cc2c-3bb4-4083-b59b-cdd5ef7da024" (UID: "2845cc2c-3bb4-4083-b59b-cdd5ef7da024"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 07:16:27 crc kubenswrapper[4946]: I1128 07:16:27.754227 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2845cc2c-3bb4-4083-b59b-cdd5ef7da024-kube-api-access-9g5wz" (OuterVolumeSpecName: "kube-api-access-9g5wz") pod "2845cc2c-3bb4-4083-b59b-cdd5ef7da024" (UID: "2845cc2c-3bb4-4083-b59b-cdd5ef7da024"). InnerVolumeSpecName "kube-api-access-9g5wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:16:27 crc kubenswrapper[4946]: I1128 07:16:27.754642 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2845cc2c-3bb4-4083-b59b-cdd5ef7da024-scripts" (OuterVolumeSpecName: "scripts") pod "2845cc2c-3bb4-4083-b59b-cdd5ef7da024" (UID: "2845cc2c-3bb4-4083-b59b-cdd5ef7da024"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:16:27 crc kubenswrapper[4946]: I1128 07:16:27.759152 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2845cc2c-3bb4-4083-b59b-cdd5ef7da024-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2845cc2c-3bb4-4083-b59b-cdd5ef7da024" (UID: "2845cc2c-3bb4-4083-b59b-cdd5ef7da024"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:16:27 crc kubenswrapper[4946]: I1128 07:16:27.808564 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2845cc2c-3bb4-4083-b59b-cdd5ef7da024-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2845cc2c-3bb4-4083-b59b-cdd5ef7da024" (UID: "2845cc2c-3bb4-4083-b59b-cdd5ef7da024"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:16:27 crc kubenswrapper[4946]: I1128 07:16:27.810223 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-bbd4d5c56-h9gwc"] Nov 28 07:16:27 crc kubenswrapper[4946]: I1128 07:16:27.810511 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-bbd4d5c56-h9gwc" podUID="9a8bc084-fa5e-4975-972d-9beb506babc1" containerName="neutron-api" containerID="cri-o://26bde06bf1c10c3c0f9429e58a4d550cc56eee29e9b26fa9be3a3166679b520d" gracePeriod=30 Nov 28 07:16:27 crc kubenswrapper[4946]: I1128 07:16:27.810931 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-bbd4d5c56-h9gwc" podUID="9a8bc084-fa5e-4975-972d-9beb506babc1" containerName="neutron-httpd" containerID="cri-o://065455f92a5371e9320f2881f9f4e8ad18f6c24f2a8ffb2a70fa6ce0026a58f6" gracePeriod=30 Nov 28 07:16:27 crc kubenswrapper[4946]: I1128 07:16:27.846166 4946 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2845cc2c-3bb4-4083-b59b-cdd5ef7da024-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:16:27 crc kubenswrapper[4946]: I1128 07:16:27.846416 4946 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2845cc2c-3bb4-4083-b59b-cdd5ef7da024-logs\") on node \"crc\" DevicePath \"\"" Nov 28 07:16:27 crc kubenswrapper[4946]: I1128 07:16:27.846425 4946 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2845cc2c-3bb4-4083-b59b-cdd5ef7da024-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 28 07:16:27 crc kubenswrapper[4946]: I1128 07:16:27.846434 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2845cc2c-3bb4-4083-b59b-cdd5ef7da024-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:16:27 crc kubenswrapper[4946]: I1128 07:16:27.846443 4946 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2845cc2c-3bb4-4083-b59b-cdd5ef7da024-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 28 07:16:27 crc kubenswrapper[4946]: I1128 07:16:27.846453 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9g5wz\" (UniqueName: \"kubernetes.io/projected/2845cc2c-3bb4-4083-b59b-cdd5ef7da024-kube-api-access-9g5wz\") on node \"crc\" DevicePath \"\"" Nov 28 07:16:27 crc kubenswrapper[4946]: I1128 07:16:27.890654 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2845cc2c-3bb4-4083-b59b-cdd5ef7da024-config-data" (OuterVolumeSpecName: "config-data") pod "2845cc2c-3bb4-4083-b59b-cdd5ef7da024" (UID: "2845cc2c-3bb4-4083-b59b-cdd5ef7da024"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:16:27 crc kubenswrapper[4946]: I1128 07:16:27.948601 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2845cc2c-3bb4-4083-b59b-cdd5ef7da024-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:16:28 crc kubenswrapper[4946]: I1128 07:16:28.331029 4946 generic.go:334] "Generic (PLEG): container finished" podID="9a8bc084-fa5e-4975-972d-9beb506babc1" containerID="065455f92a5371e9320f2881f9f4e8ad18f6c24f2a8ffb2a70fa6ce0026a58f6" exitCode=0 Nov 28 07:16:28 crc kubenswrapper[4946]: I1128 07:16:28.331140 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bbd4d5c56-h9gwc" event={"ID":"9a8bc084-fa5e-4975-972d-9beb506babc1","Type":"ContainerDied","Data":"065455f92a5371e9320f2881f9f4e8ad18f6c24f2a8ffb2a70fa6ce0026a58f6"} Nov 28 07:16:28 crc kubenswrapper[4946]: I1128 07:16:28.332821 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"506e968b-dc84-4f39-b5f4-427270eb7e9c","Type":"ContainerStarted","Data":"68a56de70cb709f5d0187a644270625a74e7d861db2aad2931c42c8c544e513c"} Nov 28 07:16:28 crc kubenswrapper[4946]: I1128 07:16:28.335254 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2845cc2c-3bb4-4083-b59b-cdd5ef7da024","Type":"ContainerDied","Data":"4f4172a48ecf82d1b3a49e6808180501957709c894389f8d92c2a63e5a88cff5"} Nov 28 07:16:28 crc kubenswrapper[4946]: I1128 07:16:28.335291 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 28 07:16:28 crc kubenswrapper[4946]: I1128 07:16:28.335310 4946 scope.go:117] "RemoveContainer" containerID="7466b459b8431b0e3726f353873313895f6df0f78ec7a77143d0da560abb72eb" Nov 28 07:16:28 crc kubenswrapper[4946]: I1128 07:16:28.366021 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 28 07:16:28 crc kubenswrapper[4946]: I1128 07:16:28.375974 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 28 07:16:28 crc kubenswrapper[4946]: I1128 07:16:28.379839 4946 scope.go:117] "RemoveContainer" containerID="f288f35c6b7ca8028c817841aa085cfc8a9a408e9b62414e319fa0acad933417" Nov 28 07:16:28 crc kubenswrapper[4946]: I1128 07:16:28.404988 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 28 07:16:28 crc kubenswrapper[4946]: E1128 07:16:28.405487 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2845cc2c-3bb4-4083-b59b-cdd5ef7da024" containerName="cinder-api-log" Nov 28 07:16:28 crc kubenswrapper[4946]: I1128 07:16:28.405506 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="2845cc2c-3bb4-4083-b59b-cdd5ef7da024" containerName="cinder-api-log" Nov 28 07:16:28 crc kubenswrapper[4946]: E1128 07:16:28.405517 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2845cc2c-3bb4-4083-b59b-cdd5ef7da024" containerName="cinder-api" Nov 28 07:16:28 crc kubenswrapper[4946]: I1128 07:16:28.405524 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="2845cc2c-3bb4-4083-b59b-cdd5ef7da024" containerName="cinder-api" Nov 28 07:16:28 crc kubenswrapper[4946]: I1128 07:16:28.405704 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="2845cc2c-3bb4-4083-b59b-cdd5ef7da024" containerName="cinder-api" Nov 28 07:16:28 crc kubenswrapper[4946]: I1128 07:16:28.405737 4946 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2845cc2c-3bb4-4083-b59b-cdd5ef7da024" containerName="cinder-api-log" Nov 28 07:16:28 crc kubenswrapper[4946]: I1128 07:16:28.406809 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 28 07:16:28 crc kubenswrapper[4946]: I1128 07:16:28.412523 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Nov 28 07:16:28 crc kubenswrapper[4946]: I1128 07:16:28.412711 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 28 07:16:28 crc kubenswrapper[4946]: I1128 07:16:28.412858 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 28 07:16:28 crc kubenswrapper[4946]: I1128 07:16:28.420381 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 28 07:16:28 crc kubenswrapper[4946]: I1128 07:16:28.532986 4946 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-97cf8d846-2fqhs" podUID="fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.154:9311/healthcheck\": read tcp 10.217.0.2:50428->10.217.0.154:9311: read: connection reset by peer" Nov 28 07:16:28 crc kubenswrapper[4946]: I1128 07:16:28.533082 4946 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-97cf8d846-2fqhs" podUID="fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.154:9311/healthcheck\": read tcp 10.217.0.2:50432->10.217.0.154:9311: read: connection reset by peer" Nov 28 07:16:28 crc kubenswrapper[4946]: I1128 07:16:28.559686 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb4q5\" (UniqueName: \"kubernetes.io/projected/05997c14-3116-4439-8e63-230bf0e5c411-kube-api-access-zb4q5\") pod \"cinder-api-0\" (UID: \"05997c14-3116-4439-8e63-230bf0e5c411\") " pod="openstack/cinder-api-0" Nov 28 07:16:28 crc kubenswrapper[4946]: I1128 07:16:28.559750 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05997c14-3116-4439-8e63-230bf0e5c411-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"05997c14-3116-4439-8e63-230bf0e5c411\") " pod="openstack/cinder-api-0" Nov 28 07:16:28 crc kubenswrapper[4946]: I1128 07:16:28.559798 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05997c14-3116-4439-8e63-230bf0e5c411-config-data\") pod \"cinder-api-0\" (UID: \"05997c14-3116-4439-8e63-230bf0e5c411\") " pod="openstack/cinder-api-0" Nov 28 07:16:28 crc kubenswrapper[4946]: I1128 07:16:28.559901 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/05997c14-3116-4439-8e63-230bf0e5c411-config-data-custom\") pod \"cinder-api-0\" (UID: \"05997c14-3116-4439-8e63-230bf0e5c411\") " pod="openstack/cinder-api-0" Nov 28 07:16:28 crc kubenswrapper[4946]: I1128 07:16:28.559931 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/05997c14-3116-4439-8e63-230bf0e5c411-etc-machine-id\") pod \"cinder-api-0\" (UID: \"05997c14-3116-4439-8e63-230bf0e5c411\") " 
pod="openstack/cinder-api-0" Nov 28 07:16:28 crc kubenswrapper[4946]: I1128 07:16:28.559973 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05997c14-3116-4439-8e63-230bf0e5c411-scripts\") pod \"cinder-api-0\" (UID: \"05997c14-3116-4439-8e63-230bf0e5c411\") " pod="openstack/cinder-api-0" Nov 28 07:16:28 crc kubenswrapper[4946]: I1128 07:16:28.559994 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05997c14-3116-4439-8e63-230bf0e5c411-logs\") pod \"cinder-api-0\" (UID: \"05997c14-3116-4439-8e63-230bf0e5c411\") " pod="openstack/cinder-api-0" Nov 28 07:16:28 crc kubenswrapper[4946]: I1128 07:16:28.560312 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05997c14-3116-4439-8e63-230bf0e5c411-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"05997c14-3116-4439-8e63-230bf0e5c411\") " pod="openstack/cinder-api-0" Nov 28 07:16:28 crc kubenswrapper[4946]: I1128 07:16:28.560421 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05997c14-3116-4439-8e63-230bf0e5c411-public-tls-certs\") pod \"cinder-api-0\" (UID: \"05997c14-3116-4439-8e63-230bf0e5c411\") " pod="openstack/cinder-api-0" Nov 28 07:16:28 crc kubenswrapper[4946]: I1128 07:16:28.662533 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05997c14-3116-4439-8e63-230bf0e5c411-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"05997c14-3116-4439-8e63-230bf0e5c411\") " pod="openstack/cinder-api-0" Nov 28 07:16:28 crc kubenswrapper[4946]: I1128 07:16:28.662836 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05997c14-3116-4439-8e63-230bf0e5c411-public-tls-certs\") pod \"cinder-api-0\" (UID: \"05997c14-3116-4439-8e63-230bf0e5c411\") " pod="openstack/cinder-api-0" Nov 28 07:16:28 crc kubenswrapper[4946]: I1128 07:16:28.662950 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zb4q5\" (UniqueName: \"kubernetes.io/projected/05997c14-3116-4439-8e63-230bf0e5c411-kube-api-access-zb4q5\") pod \"cinder-api-0\" (UID: \"05997c14-3116-4439-8e63-230bf0e5c411\") " pod="openstack/cinder-api-0" Nov 28 07:16:28 crc kubenswrapper[4946]: I1128 07:16:28.663066 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05997c14-3116-4439-8e63-230bf0e5c411-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"05997c14-3116-4439-8e63-230bf0e5c411\") " pod="openstack/cinder-api-0" Nov 28 07:16:28 crc kubenswrapper[4946]: I1128 07:16:28.663201 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05997c14-3116-4439-8e63-230bf0e5c411-config-data\") pod \"cinder-api-0\" (UID: \"05997c14-3116-4439-8e63-230bf0e5c411\") " pod="openstack/cinder-api-0" Nov 28 07:16:28 crc kubenswrapper[4946]: I1128 07:16:28.663335 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/05997c14-3116-4439-8e63-230bf0e5c411-config-data-custom\") pod \"cinder-api-0\" (UID: \"05997c14-3116-4439-8e63-230bf0e5c411\") " pod="openstack/cinder-api-0" Nov 28 07:16:28 crc kubenswrapper[4946]: I1128 07:16:28.663421 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/05997c14-3116-4439-8e63-230bf0e5c411-etc-machine-id\") pod \"cinder-api-0\" (UID: \"05997c14-3116-4439-8e63-230bf0e5c411\") " pod="openstack/cinder-api-0" Nov 28 07:16:28 crc kubenswrapper[4946]: I1128 07:16:28.663522 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/05997c14-3116-4439-8e63-230bf0e5c411-etc-machine-id\") pod \"cinder-api-0\" (UID: \"05997c14-3116-4439-8e63-230bf0e5c411\") " pod="openstack/cinder-api-0" Nov 28 07:16:28 crc kubenswrapper[4946]: I1128 07:16:28.663620 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05997c14-3116-4439-8e63-230bf0e5c411-scripts\") pod \"cinder-api-0\" (UID: \"05997c14-3116-4439-8e63-230bf0e5c411\") " pod="openstack/cinder-api-0" Nov 28 07:16:28 crc kubenswrapper[4946]: I1128 07:16:28.663710 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05997c14-3116-4439-8e63-230bf0e5c411-logs\") pod \"cinder-api-0\" (UID: \"05997c14-3116-4439-8e63-230bf0e5c411\") " pod="openstack/cinder-api-0" Nov 28 07:16:28 crc kubenswrapper[4946]: I1128 07:16:28.664125 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05997c14-3116-4439-8e63-230bf0e5c411-logs\") pod \"cinder-api-0\" (UID: \"05997c14-3116-4439-8e63-230bf0e5c411\") " pod="openstack/cinder-api-0" Nov 28 07:16:28 crc kubenswrapper[4946]: I1128 07:16:28.671199 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/05997c14-3116-4439-8e63-230bf0e5c411-config-data-custom\") pod \"cinder-api-0\" (UID: \"05997c14-3116-4439-8e63-230bf0e5c411\") " pod="openstack/cinder-api-0" Nov 28 07:16:28 crc kubenswrapper[4946]: I1128 07:16:28.673308 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05997c14-3116-4439-8e63-230bf0e5c411-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"05997c14-3116-4439-8e63-230bf0e5c411\") " pod="openstack/cinder-api-0" Nov 28 07:16:28 crc kubenswrapper[4946]: I1128 07:16:28.673546 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05997c14-3116-4439-8e63-230bf0e5c411-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"05997c14-3116-4439-8e63-230bf0e5c411\") " pod="openstack/cinder-api-0" Nov 28 07:16:28 crc kubenswrapper[4946]: I1128 07:16:28.674967 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05997c14-3116-4439-8e63-230bf0e5c411-scripts\") pod \"cinder-api-0\" (UID: \"05997c14-3116-4439-8e63-230bf0e5c411\") " pod="openstack/cinder-api-0" Nov 28 07:16:28 crc kubenswrapper[4946]: I1128 07:16:28.675340 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05997c14-3116-4439-8e63-230bf0e5c411-config-data\") pod \"cinder-api-0\" (UID: 
\"05997c14-3116-4439-8e63-230bf0e5c411\") " pod="openstack/cinder-api-0" Nov 28 07:16:28 crc kubenswrapper[4946]: I1128 07:16:28.675553 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05997c14-3116-4439-8e63-230bf0e5c411-public-tls-certs\") pod \"cinder-api-0\" (UID: \"05997c14-3116-4439-8e63-230bf0e5c411\") " pod="openstack/cinder-api-0" Nov 28 07:16:28 crc kubenswrapper[4946]: I1128 07:16:28.697560 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zb4q5\" (UniqueName: \"kubernetes.io/projected/05997c14-3116-4439-8e63-230bf0e5c411-kube-api-access-zb4q5\") pod \"cinder-api-0\" (UID: \"05997c14-3116-4439-8e63-230bf0e5c411\") " pod="openstack/cinder-api-0" Nov 28 07:16:28 crc kubenswrapper[4946]: I1128 07:16:28.735295 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 28 07:16:28 crc kubenswrapper[4946]: I1128 07:16:28.886579 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kbnxr" Nov 28 07:16:28 crc kubenswrapper[4946]: I1128 07:16:28.886859 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kbnxr" Nov 28 07:16:29 crc kubenswrapper[4946]: I1128 07:16:29.080493 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-97cf8d846-2fqhs" Nov 28 07:16:29 crc kubenswrapper[4946]: I1128 07:16:29.178652 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkvxg\" (UniqueName: \"kubernetes.io/projected/fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755-kube-api-access-fkvxg\") pod \"fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755\" (UID: \"fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755\") " Nov 28 07:16:29 crc kubenswrapper[4946]: I1128 07:16:29.178743 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755-logs\") pod \"fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755\" (UID: \"fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755\") " Nov 28 07:16:29 crc kubenswrapper[4946]: I1128 07:16:29.178769 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755-combined-ca-bundle\") pod \"fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755\" (UID: \"fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755\") " Nov 28 07:16:29 crc kubenswrapper[4946]: I1128 07:16:29.178948 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755-config-data\") pod \"fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755\" (UID: \"fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755\") " Nov 28 07:16:29 crc kubenswrapper[4946]: I1128 07:16:29.179114 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755-config-data-custom\") pod \"fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755\" (UID: \"fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755\") " Nov 28 07:16:29 crc kubenswrapper[4946]: I1128 07:16:29.179535 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755-logs" (OuterVolumeSpecName: "logs") pod 
"fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755" (UID: "fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:16:29 crc kubenswrapper[4946]: I1128 07:16:29.179970 4946 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755-logs\") on node \"crc\" DevicePath \"\"" Nov 28 07:16:29 crc kubenswrapper[4946]: I1128 07:16:29.184761 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755-kube-api-access-fkvxg" (OuterVolumeSpecName: "kube-api-access-fkvxg") pod "fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755" (UID: "fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755"). InnerVolumeSpecName "kube-api-access-fkvxg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:16:29 crc kubenswrapper[4946]: I1128 07:16:29.188078 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755" (UID: "fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:16:29 crc kubenswrapper[4946]: I1128 07:16:29.210775 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755" (UID: "fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:16:29 crc kubenswrapper[4946]: I1128 07:16:29.225645 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755-config-data" (OuterVolumeSpecName: "config-data") pod "fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755" (UID: "fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:16:29 crc kubenswrapper[4946]: I1128 07:16:29.282027 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:16:29 crc kubenswrapper[4946]: I1128 07:16:29.282062 4946 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 28 07:16:29 crc kubenswrapper[4946]: I1128 07:16:29.282080 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkvxg\" (UniqueName: \"kubernetes.io/projected/fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755-kube-api-access-fkvxg\") on node \"crc\" DevicePath \"\"" Nov 28 07:16:29 crc kubenswrapper[4946]: I1128 07:16:29.282096 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:16:29 crc kubenswrapper[4946]: I1128 07:16:29.343282 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 28 07:16:29 crc kubenswrapper[4946]: W1128 07:16:29.343524 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05997c14_3116_4439_8e63_230bf0e5c411.slice/crio-05c10eca8555675d751aaa6aff59f9834ee91a9a4feea01b254bf563ec7cf1cb WatchSource:0}: Error finding container 05c10eca8555675d751aaa6aff59f9834ee91a9a4feea01b254bf563ec7cf1cb: Status 404 returned error can't find the container with id 05c10eca8555675d751aaa6aff59f9834ee91a9a4feea01b254bf563ec7cf1cb Nov 28 07:16:29 crc kubenswrapper[4946]: I1128 07:16:29.353265 4946 generic.go:334] "Generic (PLEG): container finished" podID="fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755" containerID="849c8a042b3728a44b8c1f9369f45c135c112afac0133076b60c395d456ec422" exitCode=0 Nov 28 07:16:29 crc kubenswrapper[4946]: I1128 07:16:29.353430 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-97cf8d846-2fqhs" event={"ID":"fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755","Type":"ContainerDied","Data":"849c8a042b3728a44b8c1f9369f45c135c112afac0133076b60c395d456ec422"} Nov 28 07:16:29 crc kubenswrapper[4946]: I1128 07:16:29.353571 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-97cf8d846-2fqhs" event={"ID":"fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755","Type":"ContainerDied","Data":"b99f314ec79311d335e481b1065a56fee4046c57a9bbf6229882104752629f75"} Nov 28 07:16:29 crc kubenswrapper[4946]: I1128 07:16:29.353596 4946 scope.go:117] "RemoveContainer" containerID="849c8a042b3728a44b8c1f9369f45c135c112afac0133076b60c395d456ec422" Nov 28 07:16:29 crc kubenswrapper[4946]: I1128 07:16:29.353484 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-97cf8d846-2fqhs" Nov 28 07:16:29 crc kubenswrapper[4946]: I1128 07:16:29.357351 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"506e968b-dc84-4f39-b5f4-427270eb7e9c","Type":"ContainerStarted","Data":"414d86369bbeef63fdb08294c4966a3eb5f1af050a501cc968a5e5f54e419bc2"} Nov 28 07:16:29 crc kubenswrapper[4946]: I1128 07:16:29.415366 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-97cf8d846-2fqhs"] Nov 28 07:16:29 crc kubenswrapper[4946]: I1128 07:16:29.415568 4946 scope.go:117] "RemoveContainer" containerID="421e9052d27407d15140d6d912af7b4e4e29caf6daa496715ddb295e6cdf2b4b" Nov 28 07:16:29 crc kubenswrapper[4946]: I1128 07:16:29.444070 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-97cf8d846-2fqhs"] Nov 28 07:16:29 crc kubenswrapper[4946]: I1128 07:16:29.445787 4946 scope.go:117] "RemoveContainer" containerID="849c8a042b3728a44b8c1f9369f45c135c112afac0133076b60c395d456ec422" Nov 28 07:16:29 crc kubenswrapper[4946]: E1128 07:16:29.446329 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"849c8a042b3728a44b8c1f9369f45c135c112afac0133076b60c395d456ec422\": container with ID starting with 849c8a042b3728a44b8c1f9369f45c135c112afac0133076b60c395d456ec422 not found: ID does not exist" containerID="849c8a042b3728a44b8c1f9369f45c135c112afac0133076b60c395d456ec422" Nov 28 07:16:29 crc kubenswrapper[4946]: I1128 07:16:29.446372 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"849c8a042b3728a44b8c1f9369f45c135c112afac0133076b60c395d456ec422"} err="failed to get container status \"849c8a042b3728a44b8c1f9369f45c135c112afac0133076b60c395d456ec422\": rpc error: code = NotFound desc = could not find container \"849c8a042b3728a44b8c1f9369f45c135c112afac0133076b60c395d456ec422\": container with ID starting with 849c8a042b3728a44b8c1f9369f45c135c112afac0133076b60c395d456ec422 not found: ID does not exist" Nov 28 07:16:29 crc kubenswrapper[4946]: I1128 07:16:29.446416 4946 scope.go:117] "RemoveContainer" containerID="421e9052d27407d15140d6d912af7b4e4e29caf6daa496715ddb295e6cdf2b4b" Nov 28 07:16:29 crc kubenswrapper[4946]: E1128 07:16:29.446740 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"421e9052d27407d15140d6d912af7b4e4e29caf6daa496715ddb295e6cdf2b4b\": container with ID starting with 421e9052d27407d15140d6d912af7b4e4e29caf6daa496715ddb295e6cdf2b4b not found: ID does not exist" containerID="421e9052d27407d15140d6d912af7b4e4e29caf6daa496715ddb295e6cdf2b4b" Nov 28 07:16:29 crc kubenswrapper[4946]: I1128 07:16:29.446763 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"421e9052d27407d15140d6d912af7b4e4e29caf6daa496715ddb295e6cdf2b4b"} err="failed to get container status \"421e9052d27407d15140d6d912af7b4e4e29caf6daa496715ddb295e6cdf2b4b\": rpc error: code = NotFound desc = could not find container \"421e9052d27407d15140d6d912af7b4e4e29caf6daa496715ddb295e6cdf2b4b\": container with ID starting with 421e9052d27407d15140d6d912af7b4e4e29caf6daa496715ddb295e6cdf2b4b not found: ID does not exist" Nov 28 07:16:29 crc kubenswrapper[4946]: I1128 07:16:29.936636 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kbnxr" 
podUID="5c4d27ca-8c37-4a47-b12f-cc0e8846eb5d" containerName="registry-server" probeResult="failure" output=< Nov 28 07:16:29 crc kubenswrapper[4946]: timeout: failed to connect service ":50051" within 1s Nov 28 07:16:29 crc kubenswrapper[4946]: > Nov 28 07:16:30 crc kubenswrapper[4946]: I1128 07:16:30.003425 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2845cc2c-3bb4-4083-b59b-cdd5ef7da024" path="/var/lib/kubelet/pods/2845cc2c-3bb4-4083-b59b-cdd5ef7da024/volumes" Nov 28 07:16:30 crc kubenswrapper[4946]: I1128 07:16:30.004403 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755" path="/var/lib/kubelet/pods/fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755/volumes" Nov 28 07:16:30 crc kubenswrapper[4946]: I1128 07:16:30.404776 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"05997c14-3116-4439-8e63-230bf0e5c411","Type":"ContainerStarted","Data":"e5f294dc1328491a34656832803a089d67378b093e4cd726077d224138730910"} Nov 28 07:16:30 crc kubenswrapper[4946]: I1128 07:16:30.405118 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"05997c14-3116-4439-8e63-230bf0e5c411","Type":"ContainerStarted","Data":"05c10eca8555675d751aaa6aff59f9834ee91a9a4feea01b254bf563ec7cf1cb"} Nov 28 07:16:30 crc kubenswrapper[4946]: I1128 07:16:30.443753 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"506e968b-dc84-4f39-b5f4-427270eb7e9c","Type":"ContainerStarted","Data":"73d415b2419edfb827b945bae4c83cc23d1a1aa0e94c0f1ff22de2faaa179733"} Nov 28 07:16:31 crc kubenswrapper[4946]: I1128 07:16:31.470326 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"05997c14-3116-4439-8e63-230bf0e5c411","Type":"ContainerStarted","Data":"bf9f1ffb702a1544bb3f60c1bb7628caa63240849c8027e55a2a252bcc1fc987"} Nov 28 07:16:31 crc kubenswrapper[4946]: I1128 07:16:31.472735 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 28 07:16:31 crc kubenswrapper[4946]: I1128 07:16:31.478348 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"506e968b-dc84-4f39-b5f4-427270eb7e9c","Type":"ContainerStarted","Data":"17db7cd5c15e535b59ab8806e0f4bb49dedce33fb10156ef3f49284307894ecb"} Nov 28 07:16:31 crc kubenswrapper[4946]: I1128 07:16:31.478724 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 28 07:16:31 crc kubenswrapper[4946]: I1128 07:16:31.505949 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.505915322 podStartE2EDuration="3.505915322s" podCreationTimestamp="2025-11-28 07:16:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:16:31.501393531 +0000 UTC m=+1445.879458652" watchObservedRunningTime="2025-11-28 07:16:31.505915322 +0000 UTC m=+1445.883980453" Nov 28 07:16:32 crc kubenswrapper[4946]: I1128 07:16:32.420716 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-c55f6679-r8nsx" Nov 28 07:16:32 crc kubenswrapper[4946]: I1128 07:16:32.458332 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.231611131 podStartE2EDuration="7.458309467s" 
podCreationTimestamp="2025-11-28 07:16:25 +0000 UTC" firstStartedPulling="2025-11-28 07:16:26.69175616 +0000 UTC m=+1441.069821271" lastFinishedPulling="2025-11-28 07:16:30.918454486 +0000 UTC m=+1445.296519607" observedRunningTime="2025-11-28 07:16:31.541040884 +0000 UTC m=+1445.919105995" watchObservedRunningTime="2025-11-28 07:16:32.458309467 +0000 UTC m=+1446.836374578" Nov 28 07:16:32 crc kubenswrapper[4946]: I1128 07:16:32.528798 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cf96b7dc5-svc2b"] Nov 28 07:16:32 crc kubenswrapper[4946]: I1128 07:16:32.529129 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cf96b7dc5-svc2b" podUID="bc4b8b86-40ad-4ff6-8866-207cb02c182d" containerName="dnsmasq-dns" containerID="cri-o://2fe9aa1a15d8263e269473bdce62508133ea57ffd23294ebe19d208918be43f3" gracePeriod=10 Nov 28 07:16:32 crc kubenswrapper[4946]: I1128 07:16:32.754445 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 28 07:16:32 crc kubenswrapper[4946]: I1128 07:16:32.822279 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.157436 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf96b7dc5-svc2b" Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.218617 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bbd4d5c56-h9gwc" Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.291338 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9a8bc084-fa5e-4975-972d-9beb506babc1-httpd-config\") pod \"9a8bc084-fa5e-4975-972d-9beb506babc1\" (UID: \"9a8bc084-fa5e-4975-972d-9beb506babc1\") " Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.291404 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a8bc084-fa5e-4975-972d-9beb506babc1-ovndb-tls-certs\") pod \"9a8bc084-fa5e-4975-972d-9beb506babc1\" (UID: \"9a8bc084-fa5e-4975-972d-9beb506babc1\") " Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.291445 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfqgr\" (UniqueName: \"kubernetes.io/projected/9a8bc084-fa5e-4975-972d-9beb506babc1-kube-api-access-xfqgr\") pod \"9a8bc084-fa5e-4975-972d-9beb506babc1\" (UID: \"9a8bc084-fa5e-4975-972d-9beb506babc1\") " Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.291516 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a8bc084-fa5e-4975-972d-9beb506babc1-combined-ca-bundle\") pod \"9a8bc084-fa5e-4975-972d-9beb506babc1\" (UID: \"9a8bc084-fa5e-4975-972d-9beb506babc1\") " Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.292360 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkqdv\" (UniqueName: \"kubernetes.io/projected/bc4b8b86-40ad-4ff6-8866-207cb02c182d-kube-api-access-qkqdv\") pod \"bc4b8b86-40ad-4ff6-8866-207cb02c182d\" (UID: \"bc4b8b86-40ad-4ff6-8866-207cb02c182d\") " Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.292420 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/bc4b8b86-40ad-4ff6-8866-207cb02c182d-config\") pod \"bc4b8b86-40ad-4ff6-8866-207cb02c182d\" (UID: \"bc4b8b86-40ad-4ff6-8866-207cb02c182d\") " Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.292611 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc4b8b86-40ad-4ff6-8866-207cb02c182d-ovsdbserver-sb\") pod \"bc4b8b86-40ad-4ff6-8866-207cb02c182d\" (UID: \"bc4b8b86-40ad-4ff6-8866-207cb02c182d\") " Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.292748 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9a8bc084-fa5e-4975-972d-9beb506babc1-config\") pod \"9a8bc084-fa5e-4975-972d-9beb506babc1\" (UID: \"9a8bc084-fa5e-4975-972d-9beb506babc1\") " Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.292775 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc4b8b86-40ad-4ff6-8866-207cb02c182d-ovsdbserver-nb\") pod \"bc4b8b86-40ad-4ff6-8866-207cb02c182d\" (UID: \"bc4b8b86-40ad-4ff6-8866-207cb02c182d\") " Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.292840 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc4b8b86-40ad-4ff6-8866-207cb02c182d-dns-swift-storage-0\") pod \"bc4b8b86-40ad-4ff6-8866-207cb02c182d\" (UID: \"bc4b8b86-40ad-4ff6-8866-207cb02c182d\") " Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.292918 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc4b8b86-40ad-4ff6-8866-207cb02c182d-dns-svc\") pod \"bc4b8b86-40ad-4ff6-8866-207cb02c182d\" (UID: \"bc4b8b86-40ad-4ff6-8866-207cb02c182d\") " Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.301266 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc4b8b86-40ad-4ff6-8866-207cb02c182d-kube-api-access-qkqdv" (OuterVolumeSpecName: "kube-api-access-qkqdv") pod "bc4b8b86-40ad-4ff6-8866-207cb02c182d" (UID: "bc4b8b86-40ad-4ff6-8866-207cb02c182d"). InnerVolumeSpecName "kube-api-access-qkqdv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.301882 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a8bc084-fa5e-4975-972d-9beb506babc1-kube-api-access-xfqgr" (OuterVolumeSpecName: "kube-api-access-xfqgr") pod "9a8bc084-fa5e-4975-972d-9beb506babc1" (UID: "9a8bc084-fa5e-4975-972d-9beb506babc1"). InnerVolumeSpecName "kube-api-access-xfqgr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.310091 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a8bc084-fa5e-4975-972d-9beb506babc1-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "9a8bc084-fa5e-4975-972d-9beb506babc1" (UID: "9a8bc084-fa5e-4975-972d-9beb506babc1"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.349172 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc4b8b86-40ad-4ff6-8866-207cb02c182d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bc4b8b86-40ad-4ff6-8866-207cb02c182d" (UID: "bc4b8b86-40ad-4ff6-8866-207cb02c182d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.353086 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc4b8b86-40ad-4ff6-8866-207cb02c182d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bc4b8b86-40ad-4ff6-8866-207cb02c182d" (UID: "bc4b8b86-40ad-4ff6-8866-207cb02c182d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.361002 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a8bc084-fa5e-4975-972d-9beb506babc1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9a8bc084-fa5e-4975-972d-9beb506babc1" (UID: "9a8bc084-fa5e-4975-972d-9beb506babc1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.361793 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc4b8b86-40ad-4ff6-8866-207cb02c182d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bc4b8b86-40ad-4ff6-8866-207cb02c182d" (UID: "bc4b8b86-40ad-4ff6-8866-207cb02c182d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.371671 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a8bc084-fa5e-4975-972d-9beb506babc1-config" (OuterVolumeSpecName: "config") pod "9a8bc084-fa5e-4975-972d-9beb506babc1" (UID: "9a8bc084-fa5e-4975-972d-9beb506babc1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.374799 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc4b8b86-40ad-4ff6-8866-207cb02c182d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bc4b8b86-40ad-4ff6-8866-207cb02c182d" (UID: "bc4b8b86-40ad-4ff6-8866-207cb02c182d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.382399 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a8bc084-fa5e-4975-972d-9beb506babc1-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "9a8bc084-fa5e-4975-972d-9beb506babc1" (UID: "9a8bc084-fa5e-4975-972d-9beb506babc1"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.387269 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc4b8b86-40ad-4ff6-8866-207cb02c182d-config" (OuterVolumeSpecName: "config") pod "bc4b8b86-40ad-4ff6-8866-207cb02c182d" (UID: "bc4b8b86-40ad-4ff6-8866-207cb02c182d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.395940 4946 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9a8bc084-fa5e-4975-972d-9beb506babc1-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.395980 4946 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a8bc084-fa5e-4975-972d-9beb506babc1-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.395993 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfqgr\" (UniqueName: \"kubernetes.io/projected/9a8bc084-fa5e-4975-972d-9beb506babc1-kube-api-access-xfqgr\") on node \"crc\" DevicePath \"\"" Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.396003 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a8bc084-fa5e-4975-972d-9beb506babc1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.396038 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkqdv\" (UniqueName: \"kubernetes.io/projected/bc4b8b86-40ad-4ff6-8866-207cb02c182d-kube-api-access-qkqdv\") on node \"crc\" DevicePath \"\"" Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.396049 4946 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc4b8b86-40ad-4ff6-8866-207cb02c182d-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.396058 4946 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc4b8b86-40ad-4ff6-8866-207cb02c182d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.396067 4946 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/9a8bc084-fa5e-4975-972d-9beb506babc1-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.396075 4946 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc4b8b86-40ad-4ff6-8866-207cb02c182d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.396084 4946 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc4b8b86-40ad-4ff6-8866-207cb02c182d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.396105 4946 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc4b8b86-40ad-4ff6-8866-207cb02c182d-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.543183 4946 generic.go:334] "Generic (PLEG): container finished" podID="9a8bc084-fa5e-4975-972d-9beb506babc1" containerID="26bde06bf1c10c3c0f9429e58a4d550cc56eee29e9b26fa9be3a3166679b520d" exitCode=0 Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.543928 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-bbd4d5c56-h9gwc" Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.547577 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bbd4d5c56-h9gwc" event={"ID":"9a8bc084-fa5e-4975-972d-9beb506babc1","Type":"ContainerDied","Data":"26bde06bf1c10c3c0f9429e58a4d550cc56eee29e9b26fa9be3a3166679b520d"} Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.547635 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bbd4d5c56-h9gwc" event={"ID":"9a8bc084-fa5e-4975-972d-9beb506babc1","Type":"ContainerDied","Data":"d120a6c789d441af9598ce92474c7e9a0116e5f4611a0232e22d939ca4af4a3b"} Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.547657 4946 scope.go:117] "RemoveContainer" containerID="065455f92a5371e9320f2881f9f4e8ad18f6c24f2a8ffb2a70fa6ce0026a58f6" Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.549610 4946 generic.go:334] "Generic (PLEG): container finished" podID="bc4b8b86-40ad-4ff6-8866-207cb02c182d" containerID="2fe9aa1a15d8263e269473bdce62508133ea57ffd23294ebe19d208918be43f3" exitCode=0 Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.549693 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf96b7dc5-svc2b" event={"ID":"bc4b8b86-40ad-4ff6-8866-207cb02c182d","Type":"ContainerDied","Data":"2fe9aa1a15d8263e269473bdce62508133ea57ffd23294ebe19d208918be43f3"} Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.549756 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf96b7dc5-svc2b" event={"ID":"bc4b8b86-40ad-4ff6-8866-207cb02c182d","Type":"ContainerDied","Data":"fa596f55dfd5fc55df52db3001e0bc401f1c96d0aff4171ab2cd69df0be819b3"} Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.549839 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="3ea7cbfe-2490-4cdb-b331-fc9905773c54" containerName="cinder-scheduler" containerID="cri-o://fbbfae95be4a445583685f21f5948fc029c254c9c247edc82e1e930792a47b75" gracePeriod=30 Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.549947 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cf96b7dc5-svc2b" Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.550032 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="3ea7cbfe-2490-4cdb-b331-fc9905773c54" containerName="probe" containerID="cri-o://1316a0c17874e27f705f7919a0b1d106021686bbb735182511ae5849559e0f64" gracePeriod=30 Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.600302 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-bbd4d5c56-h9gwc"] Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.601254 4946 scope.go:117] "RemoveContainer" containerID="26bde06bf1c10c3c0f9429e58a4d550cc56eee29e9b26fa9be3a3166679b520d" Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.611445 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-bbd4d5c56-h9gwc"] Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.655334 4946 scope.go:117] "RemoveContainer" containerID="065455f92a5371e9320f2881f9f4e8ad18f6c24f2a8ffb2a70fa6ce0026a58f6" Nov 28 07:16:33 crc kubenswrapper[4946]: E1128 07:16:33.658510 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"065455f92a5371e9320f2881f9f4e8ad18f6c24f2a8ffb2a70fa6ce0026a58f6\": container with ID starting with 065455f92a5371e9320f2881f9f4e8ad18f6c24f2a8ffb2a70fa6ce0026a58f6 not found: ID does not exist" containerID="065455f92a5371e9320f2881f9f4e8ad18f6c24f2a8ffb2a70fa6ce0026a58f6" Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.658563 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"065455f92a5371e9320f2881f9f4e8ad18f6c24f2a8ffb2a70fa6ce0026a58f6"} err="failed to get container status \"065455f92a5371e9320f2881f9f4e8ad18f6c24f2a8ffb2a70fa6ce0026a58f6\": rpc error: code = NotFound desc = could not find container \"065455f92a5371e9320f2881f9f4e8ad18f6c24f2a8ffb2a70fa6ce0026a58f6\": container with ID starting with 065455f92a5371e9320f2881f9f4e8ad18f6c24f2a8ffb2a70fa6ce0026a58f6 not found: ID does not exist" Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.658594 4946 scope.go:117] "RemoveContainer" containerID="26bde06bf1c10c3c0f9429e58a4d550cc56eee29e9b26fa9be3a3166679b520d" Nov 28 07:16:33 crc kubenswrapper[4946]: E1128 07:16:33.658895 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26bde06bf1c10c3c0f9429e58a4d550cc56eee29e9b26fa9be3a3166679b520d\": container with ID starting with 26bde06bf1c10c3c0f9429e58a4d550cc56eee29e9b26fa9be3a3166679b520d not found: ID does not exist" containerID="26bde06bf1c10c3c0f9429e58a4d550cc56eee29e9b26fa9be3a3166679b520d" Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.658923 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26bde06bf1c10c3c0f9429e58a4d550cc56eee29e9b26fa9be3a3166679b520d"} err="failed to get container status \"26bde06bf1c10c3c0f9429e58a4d550cc56eee29e9b26fa9be3a3166679b520d\": rpc error: code = NotFound desc = could not find container \"26bde06bf1c10c3c0f9429e58a4d550cc56eee29e9b26fa9be3a3166679b520d\": container with ID starting with 26bde06bf1c10c3c0f9429e58a4d550cc56eee29e9b26fa9be3a3166679b520d not found: ID does not exist" Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.658940 4946 scope.go:117] "RemoveContainer" 
containerID="2fe9aa1a15d8263e269473bdce62508133ea57ffd23294ebe19d208918be43f3" Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.664918 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cf96b7dc5-svc2b"] Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.674081 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cf96b7dc5-svc2b"] Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.696272 4946 scope.go:117] "RemoveContainer" containerID="1b8a285ff6ab56bae635b5be65d3cbe9f74311d36c411030deed6954dac2d335" Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.731069 4946 scope.go:117] "RemoveContainer" containerID="2fe9aa1a15d8263e269473bdce62508133ea57ffd23294ebe19d208918be43f3" Nov 28 07:16:33 crc kubenswrapper[4946]: E1128 07:16:33.737020 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fe9aa1a15d8263e269473bdce62508133ea57ffd23294ebe19d208918be43f3\": container with ID starting with 2fe9aa1a15d8263e269473bdce62508133ea57ffd23294ebe19d208918be43f3 not found: ID does not exist" containerID="2fe9aa1a15d8263e269473bdce62508133ea57ffd23294ebe19d208918be43f3" Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.737074 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fe9aa1a15d8263e269473bdce62508133ea57ffd23294ebe19d208918be43f3"} err="failed to get container status \"2fe9aa1a15d8263e269473bdce62508133ea57ffd23294ebe19d208918be43f3\": rpc error: code = NotFound desc = could not find container \"2fe9aa1a15d8263e269473bdce62508133ea57ffd23294ebe19d208918be43f3\": container with ID starting with 2fe9aa1a15d8263e269473bdce62508133ea57ffd23294ebe19d208918be43f3 not found: ID does not exist" Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.737110 4946 scope.go:117] "RemoveContainer" containerID="1b8a285ff6ab56bae635b5be65d3cbe9f74311d36c411030deed6954dac2d335" Nov 28 07:16:33 crc kubenswrapper[4946]: E1128 07:16:33.737573 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b8a285ff6ab56bae635b5be65d3cbe9f74311d36c411030deed6954dac2d335\": container with ID starting with 1b8a285ff6ab56bae635b5be65d3cbe9f74311d36c411030deed6954dac2d335 not found: ID does not exist" containerID="1b8a285ff6ab56bae635b5be65d3cbe9f74311d36c411030deed6954dac2d335" Nov 28 07:16:33 crc kubenswrapper[4946]: I1128 07:16:33.737600 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b8a285ff6ab56bae635b5be65d3cbe9f74311d36c411030deed6954dac2d335"} err="failed to get container status \"1b8a285ff6ab56bae635b5be65d3cbe9f74311d36c411030deed6954dac2d335\": rpc error: code = NotFound desc = could not find container \"1b8a285ff6ab56bae635b5be65d3cbe9f74311d36c411030deed6954dac2d335\": container with ID starting with 1b8a285ff6ab56bae635b5be65d3cbe9f74311d36c411030deed6954dac2d335 not found: ID does not exist" Nov 28 07:16:34 crc kubenswrapper[4946]: I1128 07:16:34.001350 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a8bc084-fa5e-4975-972d-9beb506babc1" path="/var/lib/kubelet/pods/9a8bc084-fa5e-4975-972d-9beb506babc1/volumes" Nov 28 07:16:34 crc kubenswrapper[4946]: I1128 07:16:34.001949 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc4b8b86-40ad-4ff6-8866-207cb02c182d" 
path="/var/lib/kubelet/pods/bc4b8b86-40ad-4ff6-8866-207cb02c182d/volumes" Nov 28 07:16:34 crc kubenswrapper[4946]: I1128 07:16:34.564310 4946 generic.go:334] "Generic (PLEG): container finished" podID="3ea7cbfe-2490-4cdb-b331-fc9905773c54" containerID="1316a0c17874e27f705f7919a0b1d106021686bbb735182511ae5849559e0f64" exitCode=0 Nov 28 07:16:34 crc kubenswrapper[4946]: I1128 07:16:34.564382 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3ea7cbfe-2490-4cdb-b331-fc9905773c54","Type":"ContainerDied","Data":"1316a0c17874e27f705f7919a0b1d106021686bbb735182511ae5849559e0f64"} Nov 28 07:16:34 crc kubenswrapper[4946]: I1128 07:16:34.895148 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-96fb6f878-56tfz" Nov 28 07:16:34 crc kubenswrapper[4946]: I1128 07:16:34.895508 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-96fb6f878-56tfz" Nov 28 07:16:38 crc kubenswrapper[4946]: I1128 07:16:38.522316 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 28 07:16:38 crc kubenswrapper[4946]: I1128 07:16:38.612154 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ea7cbfe-2490-4cdb-b331-fc9905773c54-scripts\") pod \"3ea7cbfe-2490-4cdb-b331-fc9905773c54\" (UID: \"3ea7cbfe-2490-4cdb-b331-fc9905773c54\") " Nov 28 07:16:38 crc kubenswrapper[4946]: I1128 07:16:38.612239 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3ea7cbfe-2490-4cdb-b331-fc9905773c54-etc-machine-id\") pod \"3ea7cbfe-2490-4cdb-b331-fc9905773c54\" (UID: \"3ea7cbfe-2490-4cdb-b331-fc9905773c54\") " Nov 28 07:16:38 crc kubenswrapper[4946]: I1128 07:16:38.612265 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhrs8\" (UniqueName: \"kubernetes.io/projected/3ea7cbfe-2490-4cdb-b331-fc9905773c54-kube-api-access-fhrs8\") pod \"3ea7cbfe-2490-4cdb-b331-fc9905773c54\" (UID: \"3ea7cbfe-2490-4cdb-b331-fc9905773c54\") " Nov 28 07:16:38 crc kubenswrapper[4946]: I1128 07:16:38.612366 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3ea7cbfe-2490-4cdb-b331-fc9905773c54-config-data-custom\") pod \"3ea7cbfe-2490-4cdb-b331-fc9905773c54\" (UID: \"3ea7cbfe-2490-4cdb-b331-fc9905773c54\") " Nov 28 07:16:38 crc kubenswrapper[4946]: I1128 07:16:38.612495 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ea7cbfe-2490-4cdb-b331-fc9905773c54-combined-ca-bundle\") pod \"3ea7cbfe-2490-4cdb-b331-fc9905773c54\" (UID: \"3ea7cbfe-2490-4cdb-b331-fc9905773c54\") " Nov 28 07:16:38 crc kubenswrapper[4946]: I1128 07:16:38.612599 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ea7cbfe-2490-4cdb-b331-fc9905773c54-config-data\") pod \"3ea7cbfe-2490-4cdb-b331-fc9905773c54\" (UID: \"3ea7cbfe-2490-4cdb-b331-fc9905773c54\") " Nov 28 07:16:38 crc kubenswrapper[4946]: I1128 07:16:38.615193 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ea7cbfe-2490-4cdb-b331-fc9905773c54-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod 
"3ea7cbfe-2490-4cdb-b331-fc9905773c54" (UID: "3ea7cbfe-2490-4cdb-b331-fc9905773c54"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 07:16:38 crc kubenswrapper[4946]: I1128 07:16:38.621115 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ea7cbfe-2490-4cdb-b331-fc9905773c54-kube-api-access-fhrs8" (OuterVolumeSpecName: "kube-api-access-fhrs8") pod "3ea7cbfe-2490-4cdb-b331-fc9905773c54" (UID: "3ea7cbfe-2490-4cdb-b331-fc9905773c54"). InnerVolumeSpecName "kube-api-access-fhrs8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:16:38 crc kubenswrapper[4946]: I1128 07:16:38.623068 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ea7cbfe-2490-4cdb-b331-fc9905773c54-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3ea7cbfe-2490-4cdb-b331-fc9905773c54" (UID: "3ea7cbfe-2490-4cdb-b331-fc9905773c54"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:16:38 crc kubenswrapper[4946]: I1128 07:16:38.627530 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ea7cbfe-2490-4cdb-b331-fc9905773c54-scripts" (OuterVolumeSpecName: "scripts") pod "3ea7cbfe-2490-4cdb-b331-fc9905773c54" (UID: "3ea7cbfe-2490-4cdb-b331-fc9905773c54"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:16:38 crc kubenswrapper[4946]: I1128 07:16:38.653439 4946 generic.go:334] "Generic (PLEG): container finished" podID="3ea7cbfe-2490-4cdb-b331-fc9905773c54" containerID="fbbfae95be4a445583685f21f5948fc029c254c9c247edc82e1e930792a47b75" exitCode=0 Nov 28 07:16:38 crc kubenswrapper[4946]: I1128 07:16:38.653589 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 28 07:16:38 crc kubenswrapper[4946]: I1128 07:16:38.653603 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3ea7cbfe-2490-4cdb-b331-fc9905773c54","Type":"ContainerDied","Data":"fbbfae95be4a445583685f21f5948fc029c254c9c247edc82e1e930792a47b75"} Nov 28 07:16:38 crc kubenswrapper[4946]: I1128 07:16:38.655564 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3ea7cbfe-2490-4cdb-b331-fc9905773c54","Type":"ContainerDied","Data":"c73e13019cca89229e273d007f3b6b0bec53f4cda7a1e5804b8de69da1573226"} Nov 28 07:16:38 crc kubenswrapper[4946]: I1128 07:16:38.655590 4946 scope.go:117] "RemoveContainer" containerID="1316a0c17874e27f705f7919a0b1d106021686bbb735182511ae5849559e0f64" Nov 28 07:16:38 crc kubenswrapper[4946]: I1128 07:16:38.691764 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ea7cbfe-2490-4cdb-b331-fc9905773c54-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ea7cbfe-2490-4cdb-b331-fc9905773c54" (UID: "3ea7cbfe-2490-4cdb-b331-fc9905773c54"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:16:38 crc kubenswrapper[4946]: I1128 07:16:38.715999 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhrs8\" (UniqueName: \"kubernetes.io/projected/3ea7cbfe-2490-4cdb-b331-fc9905773c54-kube-api-access-fhrs8\") on node \"crc\" DevicePath \"\"" Nov 28 07:16:38 crc kubenswrapper[4946]: I1128 07:16:38.716053 4946 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3ea7cbfe-2490-4cdb-b331-fc9905773c54-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 28 07:16:38 crc kubenswrapper[4946]: I1128 07:16:38.716063 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ea7cbfe-2490-4cdb-b331-fc9905773c54-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:16:38 crc kubenswrapper[4946]: I1128 07:16:38.716073 4946 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ea7cbfe-2490-4cdb-b331-fc9905773c54-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:16:38 crc kubenswrapper[4946]: I1128 07:16:38.716082 4946 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3ea7cbfe-2490-4cdb-b331-fc9905773c54-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 28 07:16:38 crc kubenswrapper[4946]: I1128 07:16:38.776591 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ea7cbfe-2490-4cdb-b331-fc9905773c54-config-data" (OuterVolumeSpecName: "config-data") pod "3ea7cbfe-2490-4cdb-b331-fc9905773c54" (UID: "3ea7cbfe-2490-4cdb-b331-fc9905773c54"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:16:38 crc kubenswrapper[4946]: I1128 07:16:38.819876 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ea7cbfe-2490-4cdb-b331-fc9905773c54-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:16:38 crc kubenswrapper[4946]: I1128 07:16:38.885311 4946 scope.go:117] "RemoveContainer" containerID="fbbfae95be4a445583685f21f5948fc029c254c9c247edc82e1e930792a47b75" Nov 28 07:16:38 crc kubenswrapper[4946]: I1128 07:16:38.922977 4946 scope.go:117] "RemoveContainer" containerID="1316a0c17874e27f705f7919a0b1d106021686bbb735182511ae5849559e0f64" Nov 28 07:16:38 crc kubenswrapper[4946]: E1128 07:16:38.923882 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1316a0c17874e27f705f7919a0b1d106021686bbb735182511ae5849559e0f64\": container with ID starting with 1316a0c17874e27f705f7919a0b1d106021686bbb735182511ae5849559e0f64 not found: ID does not exist" containerID="1316a0c17874e27f705f7919a0b1d106021686bbb735182511ae5849559e0f64" Nov 28 07:16:38 crc kubenswrapper[4946]: I1128 07:16:38.924017 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1316a0c17874e27f705f7919a0b1d106021686bbb735182511ae5849559e0f64"} err="failed to get container status \"1316a0c17874e27f705f7919a0b1d106021686bbb735182511ae5849559e0f64\": rpc error: code = NotFound desc = could not find container \"1316a0c17874e27f705f7919a0b1d106021686bbb735182511ae5849559e0f64\": container with ID starting with 1316a0c17874e27f705f7919a0b1d106021686bbb735182511ae5849559e0f64 not found: ID does not exist" Nov 28 07:16:38 crc kubenswrapper[4946]: I1128 
07:16:38.924127 4946 scope.go:117] "RemoveContainer" containerID="fbbfae95be4a445583685f21f5948fc029c254c9c247edc82e1e930792a47b75" Nov 28 07:16:38 crc kubenswrapper[4946]: E1128 07:16:38.924809 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbbfae95be4a445583685f21f5948fc029c254c9c247edc82e1e930792a47b75\": container with ID starting with fbbfae95be4a445583685f21f5948fc029c254c9c247edc82e1e930792a47b75 not found: ID does not exist" containerID="fbbfae95be4a445583685f21f5948fc029c254c9c247edc82e1e930792a47b75" Nov 28 07:16:38 crc kubenswrapper[4946]: I1128 07:16:38.924843 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbbfae95be4a445583685f21f5948fc029c254c9c247edc82e1e930792a47b75"} err="failed to get container status \"fbbfae95be4a445583685f21f5948fc029c254c9c247edc82e1e930792a47b75\": rpc error: code = NotFound desc = could not find container \"fbbfae95be4a445583685f21f5948fc029c254c9c247edc82e1e930792a47b75\": container with ID starting with fbbfae95be4a445583685f21f5948fc029c254c9c247edc82e1e930792a47b75 not found: ID does not exist" Nov 28 07:16:38 crc kubenswrapper[4946]: I1128 07:16:38.945163 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kbnxr" Nov 28 07:16:39 crc kubenswrapper[4946]: I1128 07:16:39.005317 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 07:16:39 crc kubenswrapper[4946]: I1128 07:16:39.013243 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 07:16:39 crc kubenswrapper[4946]: I1128 07:16:39.026023 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 07:16:39 crc kubenswrapper[4946]: E1128 07:16:39.026432 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755" containerName="barbican-api-log" Nov 28 07:16:39 crc kubenswrapper[4946]: I1128 07:16:39.026450 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755" containerName="barbican-api-log" Nov 28 07:16:39 crc kubenswrapper[4946]: E1128 07:16:39.026499 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755" containerName="barbican-api" Nov 28 07:16:39 crc kubenswrapper[4946]: I1128 07:16:39.026506 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755" containerName="barbican-api" Nov 28 07:16:39 crc kubenswrapper[4946]: E1128 07:16:39.026513 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ea7cbfe-2490-4cdb-b331-fc9905773c54" containerName="cinder-scheduler" Nov 28 07:16:39 crc kubenswrapper[4946]: I1128 07:16:39.026519 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ea7cbfe-2490-4cdb-b331-fc9905773c54" containerName="cinder-scheduler" Nov 28 07:16:39 crc kubenswrapper[4946]: E1128 07:16:39.026535 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a8bc084-fa5e-4975-972d-9beb506babc1" containerName="neutron-api" Nov 28 07:16:39 crc kubenswrapper[4946]: I1128 07:16:39.026540 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a8bc084-fa5e-4975-972d-9beb506babc1" containerName="neutron-api" Nov 28 07:16:39 crc kubenswrapper[4946]: E1128 07:16:39.026548 4946 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="bc4b8b86-40ad-4ff6-8866-207cb02c182d" containerName="dnsmasq-dns" Nov 28 07:16:39 crc kubenswrapper[4946]: I1128 07:16:39.026553 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc4b8b86-40ad-4ff6-8866-207cb02c182d" containerName="dnsmasq-dns" Nov 28 07:16:39 crc kubenswrapper[4946]: E1128 07:16:39.026563 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc4b8b86-40ad-4ff6-8866-207cb02c182d" containerName="init" Nov 28 07:16:39 crc kubenswrapper[4946]: I1128 07:16:39.026569 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc4b8b86-40ad-4ff6-8866-207cb02c182d" containerName="init" Nov 28 07:16:39 crc kubenswrapper[4946]: E1128 07:16:39.026583 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ea7cbfe-2490-4cdb-b331-fc9905773c54" containerName="probe" Nov 28 07:16:39 crc kubenswrapper[4946]: I1128 07:16:39.026589 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ea7cbfe-2490-4cdb-b331-fc9905773c54" containerName="probe" Nov 28 07:16:39 crc kubenswrapper[4946]: E1128 07:16:39.026602 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a8bc084-fa5e-4975-972d-9beb506babc1" containerName="neutron-httpd" Nov 28 07:16:39 crc kubenswrapper[4946]: I1128 07:16:39.026610 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a8bc084-fa5e-4975-972d-9beb506babc1" containerName="neutron-httpd" Nov 28 07:16:39 crc kubenswrapper[4946]: I1128 07:16:39.026960 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a8bc084-fa5e-4975-972d-9beb506babc1" containerName="neutron-httpd" Nov 28 07:16:39 crc kubenswrapper[4946]: I1128 07:16:39.026978 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ea7cbfe-2490-4cdb-b331-fc9905773c54" containerName="cinder-scheduler" Nov 28 07:16:39 crc kubenswrapper[4946]: I1128 07:16:39.026987 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a8bc084-fa5e-4975-972d-9beb506babc1" containerName="neutron-api" Nov 28 07:16:39 crc kubenswrapper[4946]: I1128 07:16:39.026998 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc4b8b86-40ad-4ff6-8866-207cb02c182d" containerName="dnsmasq-dns" Nov 28 07:16:39 crc kubenswrapper[4946]: I1128 07:16:39.027008 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755" containerName="barbican-api-log" Nov 28 07:16:39 crc kubenswrapper[4946]: I1128 07:16:39.027015 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ea7cbfe-2490-4cdb-b331-fc9905773c54" containerName="probe" Nov 28 07:16:39 crc kubenswrapper[4946]: I1128 07:16:39.027031 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa490eca-2f0a-4ebe-b3c9-e0c0b4d66755" containerName="barbican-api" Nov 28 07:16:39 crc kubenswrapper[4946]: I1128 07:16:39.028012 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 28 07:16:39 crc kubenswrapper[4946]: I1128 07:16:39.028746 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kbnxr" Nov 28 07:16:39 crc kubenswrapper[4946]: I1128 07:16:39.031375 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 28 07:16:39 crc kubenswrapper[4946]: I1128 07:16:39.037699 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 07:16:39 crc kubenswrapper[4946]: I1128 07:16:39.132892 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l79zk\" (UniqueName: \"kubernetes.io/projected/c485c360-55fc-49da-851d-ab74f7c7fc98-kube-api-access-l79zk\") pod \"cinder-scheduler-0\" (UID: \"c485c360-55fc-49da-851d-ab74f7c7fc98\") " pod="openstack/cinder-scheduler-0" Nov 28 07:16:39 crc kubenswrapper[4946]: I1128 07:16:39.132988 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c485c360-55fc-49da-851d-ab74f7c7fc98-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c485c360-55fc-49da-851d-ab74f7c7fc98\") " pod="openstack/cinder-scheduler-0" Nov 28 07:16:39 crc kubenswrapper[4946]: I1128 07:16:39.133033 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c485c360-55fc-49da-851d-ab74f7c7fc98-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c485c360-55fc-49da-851d-ab74f7c7fc98\") " pod="openstack/cinder-scheduler-0" Nov 28 07:16:39 crc kubenswrapper[4946]: I1128 07:16:39.133067 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c485c360-55fc-49da-851d-ab74f7c7fc98-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c485c360-55fc-49da-851d-ab74f7c7fc98\") " pod="openstack/cinder-scheduler-0" Nov 28 07:16:39 crc kubenswrapper[4946]: I1128 07:16:39.133149 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c485c360-55fc-49da-851d-ab74f7c7fc98-config-data\") pod \"cinder-scheduler-0\" (UID: \"c485c360-55fc-49da-851d-ab74f7c7fc98\") " pod="openstack/cinder-scheduler-0" Nov 28 07:16:39 crc kubenswrapper[4946]: I1128 07:16:39.133169 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c485c360-55fc-49da-851d-ab74f7c7fc98-scripts\") pod \"cinder-scheduler-0\" (UID: \"c485c360-55fc-49da-851d-ab74f7c7fc98\") " pod="openstack/cinder-scheduler-0" Nov 28 07:16:39 crc kubenswrapper[4946]: I1128 07:16:39.199344 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kbnxr"] Nov 28 07:16:39 crc kubenswrapper[4946]: I1128 07:16:39.234720 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c485c360-55fc-49da-851d-ab74f7c7fc98-config-data\") pod \"cinder-scheduler-0\" (UID: \"c485c360-55fc-49da-851d-ab74f7c7fc98\") " pod="openstack/cinder-scheduler-0" Nov 28 07:16:39 crc kubenswrapper[4946]: I1128 07:16:39.234783 4946 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c485c360-55fc-49da-851d-ab74f7c7fc98-scripts\") pod \"cinder-scheduler-0\" (UID: \"c485c360-55fc-49da-851d-ab74f7c7fc98\") " pod="openstack/cinder-scheduler-0" Nov 28 07:16:39 crc kubenswrapper[4946]: I1128 07:16:39.234875 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l79zk\" (UniqueName: \"kubernetes.io/projected/c485c360-55fc-49da-851d-ab74f7c7fc98-kube-api-access-l79zk\") pod \"cinder-scheduler-0\" (UID: \"c485c360-55fc-49da-851d-ab74f7c7fc98\") " pod="openstack/cinder-scheduler-0" Nov 28 07:16:39 crc kubenswrapper[4946]: I1128 07:16:39.234950 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c485c360-55fc-49da-851d-ab74f7c7fc98-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c485c360-55fc-49da-851d-ab74f7c7fc98\") " pod="openstack/cinder-scheduler-0" Nov 28 07:16:39 crc kubenswrapper[4946]: I1128 07:16:39.235016 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c485c360-55fc-49da-851d-ab74f7c7fc98-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c485c360-55fc-49da-851d-ab74f7c7fc98\") " pod="openstack/cinder-scheduler-0" Nov 28 07:16:39 crc kubenswrapper[4946]: I1128 07:16:39.235043 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c485c360-55fc-49da-851d-ab74f7c7fc98-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c485c360-55fc-49da-851d-ab74f7c7fc98\") " pod="openstack/cinder-scheduler-0" Nov 28 07:16:39 crc kubenswrapper[4946]: I1128 07:16:39.235228 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c485c360-55fc-49da-851d-ab74f7c7fc98-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c485c360-55fc-49da-851d-ab74f7c7fc98\") " pod="openstack/cinder-scheduler-0" Nov 28 07:16:39 crc kubenswrapper[4946]: I1128 07:16:39.239252 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c485c360-55fc-49da-851d-ab74f7c7fc98-config-data\") pod \"cinder-scheduler-0\" (UID: \"c485c360-55fc-49da-851d-ab74f7c7fc98\") " pod="openstack/cinder-scheduler-0" Nov 28 07:16:39 crc kubenswrapper[4946]: I1128 07:16:39.249098 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c485c360-55fc-49da-851d-ab74f7c7fc98-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c485c360-55fc-49da-851d-ab74f7c7fc98\") " pod="openstack/cinder-scheduler-0" Nov 28 07:16:39 crc kubenswrapper[4946]: I1128 07:16:39.249102 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c485c360-55fc-49da-851d-ab74f7c7fc98-scripts\") pod \"cinder-scheduler-0\" (UID: \"c485c360-55fc-49da-851d-ab74f7c7fc98\") " pod="openstack/cinder-scheduler-0" Nov 28 07:16:39 crc kubenswrapper[4946]: I1128 07:16:39.249102 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c485c360-55fc-49da-851d-ab74f7c7fc98-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c485c360-55fc-49da-851d-ab74f7c7fc98\") " pod="openstack/cinder-scheduler-0" Nov 28 
07:16:39 crc kubenswrapper[4946]: I1128 07:16:39.256980 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l79zk\" (UniqueName: \"kubernetes.io/projected/c485c360-55fc-49da-851d-ab74f7c7fc98-kube-api-access-l79zk\") pod \"cinder-scheduler-0\" (UID: \"c485c360-55fc-49da-851d-ab74f7c7fc98\") " pod="openstack/cinder-scheduler-0" Nov 28 07:16:39 crc kubenswrapper[4946]: I1128 07:16:39.355364 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 28 07:16:39 crc kubenswrapper[4946]: I1128 07:16:39.684495 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-77dfbb6d46-cf289" Nov 28 07:16:39 crc kubenswrapper[4946]: W1128 07:16:39.933470 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc485c360_55fc_49da_851d_ab74f7c7fc98.slice/crio-55a75649ed94bef4b1e825d44fc73179cbcd34309cb1d3de52b730787c61102a WatchSource:0}: Error finding container 55a75649ed94bef4b1e825d44fc73179cbcd34309cb1d3de52b730787c61102a: Status 404 returned error can't find the container with id 55a75649ed94bef4b1e825d44fc73179cbcd34309cb1d3de52b730787c61102a Nov 28 07:16:39 crc kubenswrapper[4946]: I1128 07:16:39.936947 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 07:16:40 crc kubenswrapper[4946]: I1128 07:16:40.001624 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ea7cbfe-2490-4cdb-b331-fc9905773c54" path="/var/lib/kubelet/pods/3ea7cbfe-2490-4cdb-b331-fc9905773c54/volumes" Nov 28 07:16:40 crc kubenswrapper[4946]: I1128 07:16:40.686092 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kbnxr" podUID="5c4d27ca-8c37-4a47-b12f-cc0e8846eb5d" containerName="registry-server" containerID="cri-o://53115fdb47e1622c7cc72bcac9916685906f7b9c04068a7aea488cf98bb750f2" gracePeriod=2 Nov 28 07:16:40 crc kubenswrapper[4946]: I1128 07:16:40.687146 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c485c360-55fc-49da-851d-ab74f7c7fc98","Type":"ContainerStarted","Data":"a144d6e25c9728943275e991de8aeab0546d7b79b5798dc6dd8598ae23f2baf4"} Nov 28 07:16:40 crc kubenswrapper[4946]: I1128 07:16:40.687203 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c485c360-55fc-49da-851d-ab74f7c7fc98","Type":"ContainerStarted","Data":"55a75649ed94bef4b1e825d44fc73179cbcd34309cb1d3de52b730787c61102a"} Nov 28 07:16:41 crc kubenswrapper[4946]: I1128 07:16:41.263415 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kbnxr" Nov 28 07:16:41 crc kubenswrapper[4946]: I1128 07:16:41.394816 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljq9p\" (UniqueName: \"kubernetes.io/projected/5c4d27ca-8c37-4a47-b12f-cc0e8846eb5d-kube-api-access-ljq9p\") pod \"5c4d27ca-8c37-4a47-b12f-cc0e8846eb5d\" (UID: \"5c4d27ca-8c37-4a47-b12f-cc0e8846eb5d\") " Nov 28 07:16:41 crc kubenswrapper[4946]: I1128 07:16:41.394912 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c4d27ca-8c37-4a47-b12f-cc0e8846eb5d-catalog-content\") pod \"5c4d27ca-8c37-4a47-b12f-cc0e8846eb5d\" (UID: \"5c4d27ca-8c37-4a47-b12f-cc0e8846eb5d\") " Nov 28 07:16:41 crc kubenswrapper[4946]: I1128 07:16:41.395001 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c4d27ca-8c37-4a47-b12f-cc0e8846eb5d-utilities\") pod \"5c4d27ca-8c37-4a47-b12f-cc0e8846eb5d\" (UID: \"5c4d27ca-8c37-4a47-b12f-cc0e8846eb5d\") " Nov 28 07:16:41 crc kubenswrapper[4946]: I1128 07:16:41.395847 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c4d27ca-8c37-4a47-b12f-cc0e8846eb5d-utilities" (OuterVolumeSpecName: "utilities") pod "5c4d27ca-8c37-4a47-b12f-cc0e8846eb5d" (UID: "5c4d27ca-8c37-4a47-b12f-cc0e8846eb5d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:16:41 crc kubenswrapper[4946]: I1128 07:16:41.402385 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c4d27ca-8c37-4a47-b12f-cc0e8846eb5d-kube-api-access-ljq9p" (OuterVolumeSpecName: "kube-api-access-ljq9p") pod "5c4d27ca-8c37-4a47-b12f-cc0e8846eb5d" (UID: "5c4d27ca-8c37-4a47-b12f-cc0e8846eb5d"). InnerVolumeSpecName "kube-api-access-ljq9p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:16:41 crc kubenswrapper[4946]: I1128 07:16:41.486519 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 28 07:16:41 crc kubenswrapper[4946]: I1128 07:16:41.497403 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljq9p\" (UniqueName: \"kubernetes.io/projected/5c4d27ca-8c37-4a47-b12f-cc0e8846eb5d-kube-api-access-ljq9p\") on node \"crc\" DevicePath \"\"" Nov 28 07:16:41 crc kubenswrapper[4946]: I1128 07:16:41.497438 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c4d27ca-8c37-4a47-b12f-cc0e8846eb5d-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 07:16:41 crc kubenswrapper[4946]: I1128 07:16:41.521160 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c4d27ca-8c37-4a47-b12f-cc0e8846eb5d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5c4d27ca-8c37-4a47-b12f-cc0e8846eb5d" (UID: "5c4d27ca-8c37-4a47-b12f-cc0e8846eb5d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:16:41 crc kubenswrapper[4946]: I1128 07:16:41.601889 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c4d27ca-8c37-4a47-b12f-cc0e8846eb5d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 07:16:41 crc kubenswrapper[4946]: I1128 07:16:41.696370 4946 generic.go:334] "Generic (PLEG): container finished" podID="5c4d27ca-8c37-4a47-b12f-cc0e8846eb5d" containerID="53115fdb47e1622c7cc72bcac9916685906f7b9c04068a7aea488cf98bb750f2" exitCode=0 Nov 28 07:16:41 crc kubenswrapper[4946]: I1128 07:16:41.696434 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kbnxr" event={"ID":"5c4d27ca-8c37-4a47-b12f-cc0e8846eb5d","Type":"ContainerDied","Data":"53115fdb47e1622c7cc72bcac9916685906f7b9c04068a7aea488cf98bb750f2"} Nov 28 07:16:41 crc kubenswrapper[4946]: I1128 07:16:41.696480 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kbnxr" event={"ID":"5c4d27ca-8c37-4a47-b12f-cc0e8846eb5d","Type":"ContainerDied","Data":"d7fa153e40767b019d22e338d9302fffb0aa7f84a435543bb353ad55bceb8903"} Nov 28 07:16:41 crc kubenswrapper[4946]: I1128 07:16:41.696503 4946 scope.go:117] "RemoveContainer" containerID="53115fdb47e1622c7cc72bcac9916685906f7b9c04068a7aea488cf98bb750f2" Nov 28 07:16:41 crc kubenswrapper[4946]: I1128 07:16:41.696643 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kbnxr" Nov 28 07:16:41 crc kubenswrapper[4946]: I1128 07:16:41.700906 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c485c360-55fc-49da-851d-ab74f7c7fc98","Type":"ContainerStarted","Data":"5edec769179e5fbde067252513cb9a91acd14bb765488eebcacb651d47671cea"} Nov 28 07:16:41 crc kubenswrapper[4946]: I1128 07:16:41.718766 4946 scope.go:117] "RemoveContainer" containerID="01116e135b1ce24cfe3a49e468bed58e4df8ab1116492bb38ecd02092646d9f8" Nov 28 07:16:41 crc kubenswrapper[4946]: I1128 07:16:41.726412 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=2.726392938 podStartE2EDuration="2.726392938s" podCreationTimestamp="2025-11-28 07:16:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:16:41.722182925 +0000 UTC m=+1456.100248036" watchObservedRunningTime="2025-11-28 07:16:41.726392938 +0000 UTC m=+1456.104458049" Nov 28 07:16:41 crc kubenswrapper[4946]: I1128 07:16:41.746859 4946 scope.go:117] "RemoveContainer" containerID="250ceef6a6804465c2153239acd760587c5d1fe5a1fe6ac7ee045eac80fab151" Nov 28 07:16:41 crc kubenswrapper[4946]: I1128 07:16:41.760178 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kbnxr"] Nov 28 07:16:41 crc kubenswrapper[4946]: I1128 07:16:41.771429 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kbnxr"] Nov 28 07:16:41 crc kubenswrapper[4946]: I1128 07:16:41.792591 4946 scope.go:117] "RemoveContainer" containerID="53115fdb47e1622c7cc72bcac9916685906f7b9c04068a7aea488cf98bb750f2" Nov 28 07:16:41 crc kubenswrapper[4946]: E1128 07:16:41.794326 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"53115fdb47e1622c7cc72bcac9916685906f7b9c04068a7aea488cf98bb750f2\": container with ID starting with 53115fdb47e1622c7cc72bcac9916685906f7b9c04068a7aea488cf98bb750f2 not found: ID does not exist" containerID="53115fdb47e1622c7cc72bcac9916685906f7b9c04068a7aea488cf98bb750f2" Nov 28 07:16:41 crc kubenswrapper[4946]: I1128 07:16:41.794418 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53115fdb47e1622c7cc72bcac9916685906f7b9c04068a7aea488cf98bb750f2"} err="failed to get container status \"53115fdb47e1622c7cc72bcac9916685906f7b9c04068a7aea488cf98bb750f2\": rpc error: code = NotFound desc = could not find container \"53115fdb47e1622c7cc72bcac9916685906f7b9c04068a7aea488cf98bb750f2\": container with ID starting with 53115fdb47e1622c7cc72bcac9916685906f7b9c04068a7aea488cf98bb750f2 not found: ID does not exist" Nov 28 07:16:41 crc kubenswrapper[4946]: I1128 07:16:41.794495 4946 scope.go:117] "RemoveContainer" containerID="01116e135b1ce24cfe3a49e468bed58e4df8ab1116492bb38ecd02092646d9f8" Nov 28 07:16:41 crc kubenswrapper[4946]: E1128 07:16:41.794909 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01116e135b1ce24cfe3a49e468bed58e4df8ab1116492bb38ecd02092646d9f8\": container with ID starting with 01116e135b1ce24cfe3a49e468bed58e4df8ab1116492bb38ecd02092646d9f8 not found: ID does not exist" containerID="01116e135b1ce24cfe3a49e468bed58e4df8ab1116492bb38ecd02092646d9f8" Nov 28 07:16:41 crc kubenswrapper[4946]: I1128 07:16:41.794952 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01116e135b1ce24cfe3a49e468bed58e4df8ab1116492bb38ecd02092646d9f8"} err="failed to get container status \"01116e135b1ce24cfe3a49e468bed58e4df8ab1116492bb38ecd02092646d9f8\": rpc error: code = NotFound desc = could not find container \"01116e135b1ce24cfe3a49e468bed58e4df8ab1116492bb38ecd02092646d9f8\": container with ID starting with 01116e135b1ce24cfe3a49e468bed58e4df8ab1116492bb38ecd02092646d9f8 not found: ID does not exist" Nov 28 07:16:41 crc kubenswrapper[4946]: I1128 07:16:41.794977 4946 scope.go:117] "RemoveContainer" containerID="250ceef6a6804465c2153239acd760587c5d1fe5a1fe6ac7ee045eac80fab151" Nov 28 07:16:41 crc kubenswrapper[4946]: E1128 07:16:41.798395 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"250ceef6a6804465c2153239acd760587c5d1fe5a1fe6ac7ee045eac80fab151\": container with ID starting with 250ceef6a6804465c2153239acd760587c5d1fe5a1fe6ac7ee045eac80fab151 not found: ID does not exist" containerID="250ceef6a6804465c2153239acd760587c5d1fe5a1fe6ac7ee045eac80fab151" Nov 28 07:16:41 crc kubenswrapper[4946]: I1128 07:16:41.798425 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"250ceef6a6804465c2153239acd760587c5d1fe5a1fe6ac7ee045eac80fab151"} err="failed to get container status \"250ceef6a6804465c2153239acd760587c5d1fe5a1fe6ac7ee045eac80fab151\": rpc error: code = NotFound desc = could not find container \"250ceef6a6804465c2153239acd760587c5d1fe5a1fe6ac7ee045eac80fab151\": container with ID starting with 250ceef6a6804465c2153239acd760587c5d1fe5a1fe6ac7ee045eac80fab151 not found: ID does not exist" Nov 28 07:16:42 crc kubenswrapper[4946]: I1128 07:16:42.001673 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c4d27ca-8c37-4a47-b12f-cc0e8846eb5d" 
path="/var/lib/kubelet/pods/5c4d27ca-8c37-4a47-b12f-cc0e8846eb5d/volumes" Nov 28 07:16:43 crc kubenswrapper[4946]: I1128 07:16:43.779330 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 28 07:16:43 crc kubenswrapper[4946]: E1128 07:16:43.780713 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c4d27ca-8c37-4a47-b12f-cc0e8846eb5d" containerName="extract-content" Nov 28 07:16:43 crc kubenswrapper[4946]: I1128 07:16:43.780734 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c4d27ca-8c37-4a47-b12f-cc0e8846eb5d" containerName="extract-content" Nov 28 07:16:43 crc kubenswrapper[4946]: E1128 07:16:43.780782 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c4d27ca-8c37-4a47-b12f-cc0e8846eb5d" containerName="extract-utilities" Nov 28 07:16:43 crc kubenswrapper[4946]: I1128 07:16:43.780790 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c4d27ca-8c37-4a47-b12f-cc0e8846eb5d" containerName="extract-utilities" Nov 28 07:16:43 crc kubenswrapper[4946]: E1128 07:16:43.780807 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c4d27ca-8c37-4a47-b12f-cc0e8846eb5d" containerName="registry-server" Nov 28 07:16:43 crc kubenswrapper[4946]: I1128 07:16:43.780815 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c4d27ca-8c37-4a47-b12f-cc0e8846eb5d" containerName="registry-server" Nov 28 07:16:43 crc kubenswrapper[4946]: I1128 07:16:43.781067 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c4d27ca-8c37-4a47-b12f-cc0e8846eb5d" containerName="registry-server" Nov 28 07:16:43 crc kubenswrapper[4946]: I1128 07:16:43.782907 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 28 07:16:43 crc kubenswrapper[4946]: I1128 07:16:43.787622 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 28 07:16:43 crc kubenswrapper[4946]: I1128 07:16:43.787624 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-8mb88" Nov 28 07:16:43 crc kubenswrapper[4946]: I1128 07:16:43.792446 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 28 07:16:43 crc kubenswrapper[4946]: I1128 07:16:43.823647 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 28 07:16:43 crc kubenswrapper[4946]: I1128 07:16:43.894201 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq926\" (UniqueName: \"kubernetes.io/projected/1a71b82d-c922-4d23-b816-f662cc5539ec-kube-api-access-tq926\") pod \"openstackclient\" (UID: \"1a71b82d-c922-4d23-b816-f662cc5539ec\") " pod="openstack/openstackclient" Nov 28 07:16:43 crc kubenswrapper[4946]: I1128 07:16:43.894279 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1a71b82d-c922-4d23-b816-f662cc5539ec-openstack-config-secret\") pod \"openstackclient\" (UID: \"1a71b82d-c922-4d23-b816-f662cc5539ec\") " pod="openstack/openstackclient" Nov 28 07:16:43 crc kubenswrapper[4946]: I1128 07:16:43.894313 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a71b82d-c922-4d23-b816-f662cc5539ec-combined-ca-bundle\") pod 
\"openstackclient\" (UID: \"1a71b82d-c922-4d23-b816-f662cc5539ec\") " pod="openstack/openstackclient" Nov 28 07:16:43 crc kubenswrapper[4946]: I1128 07:16:43.894362 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1a71b82d-c922-4d23-b816-f662cc5539ec-openstack-config\") pod \"openstackclient\" (UID: \"1a71b82d-c922-4d23-b816-f662cc5539ec\") " pod="openstack/openstackclient" Nov 28 07:16:43 crc kubenswrapper[4946]: I1128 07:16:43.996152 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tq926\" (UniqueName: \"kubernetes.io/projected/1a71b82d-c922-4d23-b816-f662cc5539ec-kube-api-access-tq926\") pod \"openstackclient\" (UID: \"1a71b82d-c922-4d23-b816-f662cc5539ec\") " pod="openstack/openstackclient" Nov 28 07:16:43 crc kubenswrapper[4946]: I1128 07:16:43.996304 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1a71b82d-c922-4d23-b816-f662cc5539ec-openstack-config-secret\") pod \"openstackclient\" (UID: \"1a71b82d-c922-4d23-b816-f662cc5539ec\") " pod="openstack/openstackclient" Nov 28 07:16:43 crc kubenswrapper[4946]: I1128 07:16:43.996375 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a71b82d-c922-4d23-b816-f662cc5539ec-combined-ca-bundle\") pod \"openstackclient\" (UID: \"1a71b82d-c922-4d23-b816-f662cc5539ec\") " pod="openstack/openstackclient" Nov 28 07:16:43 crc kubenswrapper[4946]: I1128 07:16:43.996559 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1a71b82d-c922-4d23-b816-f662cc5539ec-openstack-config\") pod \"openstackclient\" (UID: \"1a71b82d-c922-4d23-b816-f662cc5539ec\") " pod="openstack/openstackclient" Nov 28 07:16:43 crc kubenswrapper[4946]: I1128 07:16:43.997908 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1a71b82d-c922-4d23-b816-f662cc5539ec-openstack-config\") pod \"openstackclient\" (UID: \"1a71b82d-c922-4d23-b816-f662cc5539ec\") " pod="openstack/openstackclient" Nov 28 07:16:44 crc kubenswrapper[4946]: I1128 07:16:44.003948 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a71b82d-c922-4d23-b816-f662cc5539ec-combined-ca-bundle\") pod \"openstackclient\" (UID: \"1a71b82d-c922-4d23-b816-f662cc5539ec\") " pod="openstack/openstackclient" Nov 28 07:16:44 crc kubenswrapper[4946]: I1128 07:16:44.004125 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1a71b82d-c922-4d23-b816-f662cc5539ec-openstack-config-secret\") pod \"openstackclient\" (UID: \"1a71b82d-c922-4d23-b816-f662cc5539ec\") " pod="openstack/openstackclient" Nov 28 07:16:44 crc kubenswrapper[4946]: I1128 07:16:44.020989 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tq926\" (UniqueName: \"kubernetes.io/projected/1a71b82d-c922-4d23-b816-f662cc5539ec-kube-api-access-tq926\") pod \"openstackclient\" (UID: \"1a71b82d-c922-4d23-b816-f662cc5539ec\") " pod="openstack/openstackclient" Nov 28 07:16:44 crc kubenswrapper[4946]: I1128 07:16:44.104124 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 28 07:16:44 crc kubenswrapper[4946]: I1128 07:16:44.356599 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 28 07:16:44 crc kubenswrapper[4946]: I1128 07:16:44.816321 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 28 07:16:44 crc kubenswrapper[4946]: W1128 07:16:44.816828 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a71b82d_c922_4d23_b816_f662cc5539ec.slice/crio-e5043347215f2701d7f3613b53fa5708c40226d71ee327ce9b26ae090308796b WatchSource:0}: Error finding container e5043347215f2701d7f3613b53fa5708c40226d71ee327ce9b26ae090308796b: Status 404 returned error can't find the container with id e5043347215f2701d7f3613b53fa5708c40226d71ee327ce9b26ae090308796b Nov 28 07:16:45 crc kubenswrapper[4946]: I1128 07:16:45.749433 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"1a71b82d-c922-4d23-b816-f662cc5539ec","Type":"ContainerStarted","Data":"e5043347215f2701d7f3613b53fa5708c40226d71ee327ce9b26ae090308796b"} Nov 28 07:16:47 crc kubenswrapper[4946]: I1128 07:16:47.772856 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-65f88f985c-d964v"] Nov 28 07:16:47 crc kubenswrapper[4946]: I1128 07:16:47.775510 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-65f88f985c-d964v" Nov 28 07:16:47 crc kubenswrapper[4946]: I1128 07:16:47.783636 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Nov 28 07:16:47 crc kubenswrapper[4946]: I1128 07:16:47.783880 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 28 07:16:47 crc kubenswrapper[4946]: I1128 07:16:47.784095 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Nov 28 07:16:47 crc kubenswrapper[4946]: I1128 07:16:47.788974 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-65f88f985c-d964v"] Nov 28 07:16:47 crc kubenswrapper[4946]: I1128 07:16:47.911112 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea608140-3ad6-4c56-9754-ec74fc292781-run-httpd\") pod \"swift-proxy-65f88f985c-d964v\" (UID: \"ea608140-3ad6-4c56-9754-ec74fc292781\") " pod="openstack/swift-proxy-65f88f985c-d964v" Nov 28 07:16:47 crc kubenswrapper[4946]: I1128 07:16:47.911249 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ea608140-3ad6-4c56-9754-ec74fc292781-etc-swift\") pod \"swift-proxy-65f88f985c-d964v\" (UID: \"ea608140-3ad6-4c56-9754-ec74fc292781\") " pod="openstack/swift-proxy-65f88f985c-d964v" Nov 28 07:16:47 crc kubenswrapper[4946]: I1128 07:16:47.911274 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea608140-3ad6-4c56-9754-ec74fc292781-log-httpd\") pod \"swift-proxy-65f88f985c-d964v\" (UID: \"ea608140-3ad6-4c56-9754-ec74fc292781\") " pod="openstack/swift-proxy-65f88f985c-d964v" Nov 28 07:16:47 crc kubenswrapper[4946]: I1128 07:16:47.911308 4946 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea608140-3ad6-4c56-9754-ec74fc292781-public-tls-certs\") pod \"swift-proxy-65f88f985c-d964v\" (UID: \"ea608140-3ad6-4c56-9754-ec74fc292781\") " pod="openstack/swift-proxy-65f88f985c-d964v" Nov 28 07:16:47 crc kubenswrapper[4946]: I1128 07:16:47.911590 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea608140-3ad6-4c56-9754-ec74fc292781-internal-tls-certs\") pod \"swift-proxy-65f88f985c-d964v\" (UID: \"ea608140-3ad6-4c56-9754-ec74fc292781\") " pod="openstack/swift-proxy-65f88f985c-d964v" Nov 28 07:16:47 crc kubenswrapper[4946]: I1128 07:16:47.911746 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpdv2\" (UniqueName: \"kubernetes.io/projected/ea608140-3ad6-4c56-9754-ec74fc292781-kube-api-access-vpdv2\") pod \"swift-proxy-65f88f985c-d964v\" (UID: \"ea608140-3ad6-4c56-9754-ec74fc292781\") " pod="openstack/swift-proxy-65f88f985c-d964v" Nov 28 07:16:47 crc kubenswrapper[4946]: I1128 07:16:47.911794 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea608140-3ad6-4c56-9754-ec74fc292781-config-data\") pod \"swift-proxy-65f88f985c-d964v\" (UID: \"ea608140-3ad6-4c56-9754-ec74fc292781\") " pod="openstack/swift-proxy-65f88f985c-d964v" Nov 28 07:16:47 crc kubenswrapper[4946]: I1128 07:16:47.912024 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea608140-3ad6-4c56-9754-ec74fc292781-combined-ca-bundle\") pod \"swift-proxy-65f88f985c-d964v\" (UID: \"ea608140-3ad6-4c56-9754-ec74fc292781\") " pod="openstack/swift-proxy-65f88f985c-d964v" Nov 28 07:16:48 crc kubenswrapper[4946]: I1128 07:16:48.014927 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ea608140-3ad6-4c56-9754-ec74fc292781-etc-swift\") pod \"swift-proxy-65f88f985c-d964v\" (UID: \"ea608140-3ad6-4c56-9754-ec74fc292781\") " pod="openstack/swift-proxy-65f88f985c-d964v" Nov 28 07:16:48 crc kubenswrapper[4946]: I1128 07:16:48.015749 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea608140-3ad6-4c56-9754-ec74fc292781-log-httpd\") pod \"swift-proxy-65f88f985c-d964v\" (UID: \"ea608140-3ad6-4c56-9754-ec74fc292781\") " pod="openstack/swift-proxy-65f88f985c-d964v" Nov 28 07:16:48 crc kubenswrapper[4946]: I1128 07:16:48.015892 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea608140-3ad6-4c56-9754-ec74fc292781-public-tls-certs\") pod \"swift-proxy-65f88f985c-d964v\" (UID: \"ea608140-3ad6-4c56-9754-ec74fc292781\") " pod="openstack/swift-proxy-65f88f985c-d964v" Nov 28 07:16:48 crc kubenswrapper[4946]: I1128 07:16:48.016016 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea608140-3ad6-4c56-9754-ec74fc292781-internal-tls-certs\") pod \"swift-proxy-65f88f985c-d964v\" (UID: \"ea608140-3ad6-4c56-9754-ec74fc292781\") " pod="openstack/swift-proxy-65f88f985c-d964v" Nov 28 07:16:48 crc kubenswrapper[4946]: I1128 
07:16:48.016110 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpdv2\" (UniqueName: \"kubernetes.io/projected/ea608140-3ad6-4c56-9754-ec74fc292781-kube-api-access-vpdv2\") pod \"swift-proxy-65f88f985c-d964v\" (UID: \"ea608140-3ad6-4c56-9754-ec74fc292781\") " pod="openstack/swift-proxy-65f88f985c-d964v" Nov 28 07:16:48 crc kubenswrapper[4946]: I1128 07:16:48.016189 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea608140-3ad6-4c56-9754-ec74fc292781-config-data\") pod \"swift-proxy-65f88f985c-d964v\" (UID: \"ea608140-3ad6-4c56-9754-ec74fc292781\") " pod="openstack/swift-proxy-65f88f985c-d964v" Nov 28 07:16:48 crc kubenswrapper[4946]: I1128 07:16:48.016309 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea608140-3ad6-4c56-9754-ec74fc292781-combined-ca-bundle\") pod \"swift-proxy-65f88f985c-d964v\" (UID: \"ea608140-3ad6-4c56-9754-ec74fc292781\") " pod="openstack/swift-proxy-65f88f985c-d964v" Nov 28 07:16:48 crc kubenswrapper[4946]: I1128 07:16:48.016440 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea608140-3ad6-4c56-9754-ec74fc292781-run-httpd\") pod \"swift-proxy-65f88f985c-d964v\" (UID: \"ea608140-3ad6-4c56-9754-ec74fc292781\") " pod="openstack/swift-proxy-65f88f985c-d964v" Nov 28 07:16:48 crc kubenswrapper[4946]: I1128 07:16:48.016902 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea608140-3ad6-4c56-9754-ec74fc292781-log-httpd\") pod \"swift-proxy-65f88f985c-d964v\" (UID: \"ea608140-3ad6-4c56-9754-ec74fc292781\") " pod="openstack/swift-proxy-65f88f985c-d964v" Nov 28 07:16:48 crc kubenswrapper[4946]: I1128 07:16:48.017624 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea608140-3ad6-4c56-9754-ec74fc292781-run-httpd\") pod \"swift-proxy-65f88f985c-d964v\" (UID: \"ea608140-3ad6-4c56-9754-ec74fc292781\") " pod="openstack/swift-proxy-65f88f985c-d964v" Nov 28 07:16:48 crc kubenswrapper[4946]: I1128 07:16:48.023734 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea608140-3ad6-4c56-9754-ec74fc292781-internal-tls-certs\") pod \"swift-proxy-65f88f985c-d964v\" (UID: \"ea608140-3ad6-4c56-9754-ec74fc292781\") " pod="openstack/swift-proxy-65f88f985c-d964v" Nov 28 07:16:48 crc kubenswrapper[4946]: I1128 07:16:48.024474 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea608140-3ad6-4c56-9754-ec74fc292781-config-data\") pod \"swift-proxy-65f88f985c-d964v\" (UID: \"ea608140-3ad6-4c56-9754-ec74fc292781\") " pod="openstack/swift-proxy-65f88f985c-d964v" Nov 28 07:16:48 crc kubenswrapper[4946]: I1128 07:16:48.025548 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ea608140-3ad6-4c56-9754-ec74fc292781-etc-swift\") pod \"swift-proxy-65f88f985c-d964v\" (UID: \"ea608140-3ad6-4c56-9754-ec74fc292781\") " pod="openstack/swift-proxy-65f88f985c-d964v" Nov 28 07:16:48 crc kubenswrapper[4946]: I1128 07:16:48.026855 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ea608140-3ad6-4c56-9754-ec74fc292781-combined-ca-bundle\") pod \"swift-proxy-65f88f985c-d964v\" (UID: \"ea608140-3ad6-4c56-9754-ec74fc292781\") " pod="openstack/swift-proxy-65f88f985c-d964v" Nov 28 07:16:48 crc kubenswrapper[4946]: I1128 07:16:48.027953 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea608140-3ad6-4c56-9754-ec74fc292781-public-tls-certs\") pod \"swift-proxy-65f88f985c-d964v\" (UID: \"ea608140-3ad6-4c56-9754-ec74fc292781\") " pod="openstack/swift-proxy-65f88f985c-d964v" Nov 28 07:16:48 crc kubenswrapper[4946]: I1128 07:16:48.036054 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpdv2\" (UniqueName: \"kubernetes.io/projected/ea608140-3ad6-4c56-9754-ec74fc292781-kube-api-access-vpdv2\") pod \"swift-proxy-65f88f985c-d964v\" (UID: \"ea608140-3ad6-4c56-9754-ec74fc292781\") " pod="openstack/swift-proxy-65f88f985c-d964v" Nov 28 07:16:48 crc kubenswrapper[4946]: I1128 07:16:48.131707 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-65f88f985c-d964v" Nov 28 07:16:48 crc kubenswrapper[4946]: I1128 07:16:48.558968 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:16:48 crc kubenswrapper[4946]: I1128 07:16:48.559520 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="506e968b-dc84-4f39-b5f4-427270eb7e9c" containerName="ceilometer-central-agent" containerID="cri-o://68a56de70cb709f5d0187a644270625a74e7d861db2aad2931c42c8c544e513c" gracePeriod=30 Nov 28 07:16:48 crc kubenswrapper[4946]: I1128 07:16:48.560339 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="506e968b-dc84-4f39-b5f4-427270eb7e9c" containerName="proxy-httpd" containerID="cri-o://17db7cd5c15e535b59ab8806e0f4bb49dedce33fb10156ef3f49284307894ecb" gracePeriod=30 Nov 28 07:16:48 crc kubenswrapper[4946]: I1128 07:16:48.560399 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="506e968b-dc84-4f39-b5f4-427270eb7e9c" containerName="sg-core" containerID="cri-o://73d415b2419edfb827b945bae4c83cc23d1a1aa0e94c0f1ff22de2faaa179733" gracePeriod=30 Nov 28 07:16:48 crc kubenswrapper[4946]: I1128 07:16:48.560438 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="506e968b-dc84-4f39-b5f4-427270eb7e9c" containerName="ceilometer-notification-agent" containerID="cri-o://414d86369bbeef63fdb08294c4966a3eb5f1af050a501cc968a5e5f54e419bc2" gracePeriod=30 Nov 28 07:16:48 crc kubenswrapper[4946]: I1128 07:16:48.571066 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 28 07:16:48 crc kubenswrapper[4946]: I1128 07:16:48.725828 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-65f88f985c-d964v"] Nov 28 07:16:48 crc kubenswrapper[4946]: I1128 07:16:48.806023 4946 generic.go:334] "Generic (PLEG): container finished" podID="506e968b-dc84-4f39-b5f4-427270eb7e9c" containerID="17db7cd5c15e535b59ab8806e0f4bb49dedce33fb10156ef3f49284307894ecb" exitCode=0 Nov 28 07:16:48 crc kubenswrapper[4946]: I1128 07:16:48.806059 4946 generic.go:334] "Generic (PLEG): container finished" podID="506e968b-dc84-4f39-b5f4-427270eb7e9c" containerID="73d415b2419edfb827b945bae4c83cc23d1a1aa0e94c0f1ff22de2faaa179733" 
Nov 28 07:16:48 crc kubenswrapper[4946]: I1128 07:16:48.806112 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"506e968b-dc84-4f39-b5f4-427270eb7e9c","Type":"ContainerDied","Data":"17db7cd5c15e535b59ab8806e0f4bb49dedce33fb10156ef3f49284307894ecb"} Nov 28 07:16:48 crc kubenswrapper[4946]: I1128 07:16:48.806144 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"506e968b-dc84-4f39-b5f4-427270eb7e9c","Type":"ContainerDied","Data":"73d415b2419edfb827b945bae4c83cc23d1a1aa0e94c0f1ff22de2faaa179733"} Nov 28 07:16:48 crc kubenswrapper[4946]: I1128 07:16:48.809923 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-65f88f985c-d964v" event={"ID":"ea608140-3ad6-4c56-9754-ec74fc292781","Type":"ContainerStarted","Data":"b59860fc9a7096098acaccd50b27b487d6bc78e0ad06a88ec3e27543fabe4326"} Nov 28 07:16:49 crc kubenswrapper[4946]: I1128 07:16:49.666791 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 28 07:16:49 crc kubenswrapper[4946]: I1128 07:16:49.840138 4946 generic.go:334] "Generic (PLEG): container finished" podID="506e968b-dc84-4f39-b5f4-427270eb7e9c" containerID="68a56de70cb709f5d0187a644270625a74e7d861db2aad2931c42c8c544e513c" exitCode=0 Nov 28 07:16:49 crc kubenswrapper[4946]: I1128 07:16:49.840260 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"506e968b-dc84-4f39-b5f4-427270eb7e9c","Type":"ContainerDied","Data":"68a56de70cb709f5d0187a644270625a74e7d861db2aad2931c42c8c544e513c"} Nov 28 07:16:49 crc kubenswrapper[4946]: I1128 07:16:49.846357 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-65f88f985c-d964v" event={"ID":"ea608140-3ad6-4c56-9754-ec74fc292781","Type":"ContainerStarted","Data":"0c66dae97a36e8c460ce04d5fac2b27cca2e32b76e908a748791b556732f4db4"} Nov 28 07:16:50 crc kubenswrapper[4946]: I1128 07:16:50.883778 4946 generic.go:334] "Generic (PLEG): container finished" podID="506e968b-dc84-4f39-b5f4-427270eb7e9c" containerID="414d86369bbeef63fdb08294c4966a3eb5f1af050a501cc968a5e5f54e419bc2" exitCode=0 Nov 28 07:16:50 crc kubenswrapper[4946]: I1128 07:16:50.883883 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"506e968b-dc84-4f39-b5f4-427270eb7e9c","Type":"ContainerDied","Data":"414d86369bbeef63fdb08294c4966a3eb5f1af050a501cc968a5e5f54e419bc2"} Nov 28 07:16:53 crc kubenswrapper[4946]: I1128 07:16:53.934556 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 07:16:53 crc kubenswrapper[4946]: I1128 07:16:53.935079 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="6bf0f217-858e-49f9-8730-7376f77c6d4f" containerName="glance-log" containerID="cri-o://528448ade13d44b6e5c0e461e67cc76aee6c8f3216d6cef11cdf76b3702e2960" gracePeriod=30 Nov 28 07:16:53 crc kubenswrapper[4946]: I1128 07:16:53.935457 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="6bf0f217-858e-49f9-8730-7376f77c6d4f" containerName="glance-httpd" containerID="cri-o://ecb1db803400e94fb374b1b0dec90f106d9649db844264b53700f85dbf61d268" gracePeriod=30 Nov 28 07:16:54 crc kubenswrapper[4946]: I1128 07:16:54.696228 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/glance-default-external-api-0"] Nov 28 07:16:54 crc kubenswrapper[4946]: I1128 07:16:54.699443 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1f7498cb-2048-4c04-b1ce-50a236db88a6" containerName="glance-log" containerID="cri-o://b8f930faf3c00d4c11e4852daae6d3873a790d09d2e2b47665d2dcb7fd73ec0c" gracePeriod=30 Nov 28 07:16:54 crc kubenswrapper[4946]: I1128 07:16:54.703106 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1f7498cb-2048-4c04-b1ce-50a236db88a6" containerName="glance-httpd" containerID="cri-o://b8582a0aeb08beaedc43c27127396d887ec2f6515c784b520e60cb8b5aec7db4" gracePeriod=30 Nov 28 07:16:54 crc kubenswrapper[4946]: I1128 07:16:54.804424 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 07:16:54 crc kubenswrapper[4946]: I1128 07:16:54.936271 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-65f88f985c-d964v" event={"ID":"ea608140-3ad6-4c56-9754-ec74fc292781","Type":"ContainerStarted","Data":"d78e71107b7394d2525bc8feb2c6598f678c038511392ededa5a41827a995dcb"} Nov 28 07:16:54 crc kubenswrapper[4946]: I1128 07:16:54.936375 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-65f88f985c-d964v" Nov 28 07:16:54 crc kubenswrapper[4946]: I1128 07:16:54.939870 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"1a71b82d-c922-4d23-b816-f662cc5539ec","Type":"ContainerStarted","Data":"2456162c80891db11f4df38895a38626b9a5d79c2067609e9aa578de23df284f"} Nov 28 07:16:54 crc kubenswrapper[4946]: I1128 07:16:54.945102 4946 generic.go:334] "Generic (PLEG): container finished" podID="1f7498cb-2048-4c04-b1ce-50a236db88a6" containerID="b8f930faf3c00d4c11e4852daae6d3873a790d09d2e2b47665d2dcb7fd73ec0c" exitCode=143 Nov 28 07:16:54 crc kubenswrapper[4946]: I1128 07:16:54.945173 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1f7498cb-2048-4c04-b1ce-50a236db88a6","Type":"ContainerDied","Data":"b8f930faf3c00d4c11e4852daae6d3873a790d09d2e2b47665d2dcb7fd73ec0c"} Nov 28 07:16:54 crc kubenswrapper[4946]: I1128 07:16:54.956531 4946 generic.go:334] "Generic (PLEG): container finished" podID="6bf0f217-858e-49f9-8730-7376f77c6d4f" containerID="528448ade13d44b6e5c0e461e67cc76aee6c8f3216d6cef11cdf76b3702e2960" exitCode=143 Nov 28 07:16:54 crc kubenswrapper[4946]: I1128 07:16:54.956616 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6bf0f217-858e-49f9-8730-7376f77c6d4f","Type":"ContainerDied","Data":"528448ade13d44b6e5c0e461e67cc76aee6c8f3216d6cef11cdf76b3702e2960"} Nov 28 07:16:54 crc kubenswrapper[4946]: I1128 07:16:54.966792 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"506e968b-dc84-4f39-b5f4-427270eb7e9c","Type":"ContainerDied","Data":"0d7f66408cf0fc2b2d8c88331ac074691b5790f57056d5eba6b24d0e3a2a6774"} Nov 28 07:16:54 crc kubenswrapper[4946]: I1128 07:16:54.966862 4946 scope.go:117] "RemoveContainer" containerID="17db7cd5c15e535b59ab8806e0f4bb49dedce33fb10156ef3f49284307894ecb" Nov 28 07:16:54 crc kubenswrapper[4946]: I1128 07:16:54.967085 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 07:16:54 crc kubenswrapper[4946]: I1128 07:16:54.969134 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-65f88f985c-d964v" podStartSLOduration=7.969115585 podStartE2EDuration="7.969115585s" podCreationTimestamp="2025-11-28 07:16:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:16:54.955977393 +0000 UTC m=+1469.334042504" watchObservedRunningTime="2025-11-28 07:16:54.969115585 +0000 UTC m=+1469.347180696" Nov 28 07:16:54 crc kubenswrapper[4946]: I1128 07:16:54.983920 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.3114274200000002 podStartE2EDuration="11.983900897s" podCreationTimestamp="2025-11-28 07:16:43 +0000 UTC" firstStartedPulling="2025-11-28 07:16:44.819425446 +0000 UTC m=+1459.197490557" lastFinishedPulling="2025-11-28 07:16:54.491898913 +0000 UTC m=+1468.869964034" observedRunningTime="2025-11-28 07:16:54.981821016 +0000 UTC m=+1469.359886127" watchObservedRunningTime="2025-11-28 07:16:54.983900897 +0000 UTC m=+1469.361966008" Nov 28 07:16:54 crc kubenswrapper[4946]: I1128 07:16:54.983949 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/506e968b-dc84-4f39-b5f4-427270eb7e9c-scripts\") pod \"506e968b-dc84-4f39-b5f4-427270eb7e9c\" (UID: \"506e968b-dc84-4f39-b5f4-427270eb7e9c\") " Nov 28 07:16:54 crc kubenswrapper[4946]: I1128 07:16:54.984702 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/506e968b-dc84-4f39-b5f4-427270eb7e9c-sg-core-conf-yaml\") pod \"506e968b-dc84-4f39-b5f4-427270eb7e9c\" (UID: \"506e968b-dc84-4f39-b5f4-427270eb7e9c\") " Nov 28 07:16:54 crc kubenswrapper[4946]: I1128 07:16:54.984815 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/506e968b-dc84-4f39-b5f4-427270eb7e9c-run-httpd\") pod \"506e968b-dc84-4f39-b5f4-427270eb7e9c\" (UID: \"506e968b-dc84-4f39-b5f4-427270eb7e9c\") " Nov 28 07:16:54 crc kubenswrapper[4946]: I1128 07:16:54.984896 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lsckp\" (UniqueName: \"kubernetes.io/projected/506e968b-dc84-4f39-b5f4-427270eb7e9c-kube-api-access-lsckp\") pod \"506e968b-dc84-4f39-b5f4-427270eb7e9c\" (UID: \"506e968b-dc84-4f39-b5f4-427270eb7e9c\") " Nov 28 07:16:54 crc kubenswrapper[4946]: I1128 07:16:54.985033 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/506e968b-dc84-4f39-b5f4-427270eb7e9c-config-data\") pod \"506e968b-dc84-4f39-b5f4-427270eb7e9c\" (UID: \"506e968b-dc84-4f39-b5f4-427270eb7e9c\") " Nov 28 07:16:54 crc kubenswrapper[4946]: I1128 07:16:54.985065 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/506e968b-dc84-4f39-b5f4-427270eb7e9c-log-httpd\") pod \"506e968b-dc84-4f39-b5f4-427270eb7e9c\" (UID: \"506e968b-dc84-4f39-b5f4-427270eb7e9c\") " Nov 28 07:16:54 crc kubenswrapper[4946]: I1128 07:16:54.985135 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/506e968b-dc84-4f39-b5f4-427270eb7e9c-combined-ca-bundle\") pod \"506e968b-dc84-4f39-b5f4-427270eb7e9c\" (UID: \"506e968b-dc84-4f39-b5f4-427270eb7e9c\") " Nov 28 07:16:54 crc kubenswrapper[4946]: I1128 07:16:54.985379 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/506e968b-dc84-4f39-b5f4-427270eb7e9c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "506e968b-dc84-4f39-b5f4-427270eb7e9c" (UID: "506e968b-dc84-4f39-b5f4-427270eb7e9c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:16:54 crc kubenswrapper[4946]: I1128 07:16:54.986234 4946 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/506e968b-dc84-4f39-b5f4-427270eb7e9c-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 07:16:54 crc kubenswrapper[4946]: I1128 07:16:54.987157 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/506e968b-dc84-4f39-b5f4-427270eb7e9c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "506e968b-dc84-4f39-b5f4-427270eb7e9c" (UID: "506e968b-dc84-4f39-b5f4-427270eb7e9c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:16:54 crc kubenswrapper[4946]: I1128 07:16:54.989995 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/506e968b-dc84-4f39-b5f4-427270eb7e9c-scripts" (OuterVolumeSpecName: "scripts") pod "506e968b-dc84-4f39-b5f4-427270eb7e9c" (UID: "506e968b-dc84-4f39-b5f4-427270eb7e9c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:16:54 crc kubenswrapper[4946]: I1128 07:16:54.995897 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/506e968b-dc84-4f39-b5f4-427270eb7e9c-kube-api-access-lsckp" (OuterVolumeSpecName: "kube-api-access-lsckp") pod "506e968b-dc84-4f39-b5f4-427270eb7e9c" (UID: "506e968b-dc84-4f39-b5f4-427270eb7e9c"). InnerVolumeSpecName "kube-api-access-lsckp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:16:55 crc kubenswrapper[4946]: I1128 07:16:55.002814 4946 scope.go:117] "RemoveContainer" containerID="73d415b2419edfb827b945bae4c83cc23d1a1aa0e94c0f1ff22de2faaa179733" Nov 28 07:16:55 crc kubenswrapper[4946]: I1128 07:16:55.021903 4946 scope.go:117] "RemoveContainer" containerID="414d86369bbeef63fdb08294c4966a3eb5f1af050a501cc968a5e5f54e419bc2" Nov 28 07:16:55 crc kubenswrapper[4946]: I1128 07:16:55.030590 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/506e968b-dc84-4f39-b5f4-427270eb7e9c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "506e968b-dc84-4f39-b5f4-427270eb7e9c" (UID: "506e968b-dc84-4f39-b5f4-427270eb7e9c"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:16:55 crc kubenswrapper[4946]: I1128 07:16:55.054266 4946 scope.go:117] "RemoveContainer" containerID="68a56de70cb709f5d0187a644270625a74e7d861db2aad2931c42c8c544e513c" Nov 28 07:16:55 crc kubenswrapper[4946]: I1128 07:16:55.088868 4946 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/506e968b-dc84-4f39-b5f4-427270eb7e9c-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 07:16:55 crc kubenswrapper[4946]: I1128 07:16:55.089114 4946 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/506e968b-dc84-4f39-b5f4-427270eb7e9c-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:16:55 crc kubenswrapper[4946]: I1128 07:16:55.089171 4946 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/506e968b-dc84-4f39-b5f4-427270eb7e9c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 28 07:16:55 crc kubenswrapper[4946]: I1128 07:16:55.089234 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lsckp\" (UniqueName: \"kubernetes.io/projected/506e968b-dc84-4f39-b5f4-427270eb7e9c-kube-api-access-lsckp\") on node \"crc\" DevicePath \"\"" Nov 28 07:16:55 crc kubenswrapper[4946]: I1128 07:16:55.097200 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/506e968b-dc84-4f39-b5f4-427270eb7e9c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "506e968b-dc84-4f39-b5f4-427270eb7e9c" (UID: "506e968b-dc84-4f39-b5f4-427270eb7e9c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:16:55 crc kubenswrapper[4946]: I1128 07:16:55.136000 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/506e968b-dc84-4f39-b5f4-427270eb7e9c-config-data" (OuterVolumeSpecName: "config-data") pod "506e968b-dc84-4f39-b5f4-427270eb7e9c" (UID: "506e968b-dc84-4f39-b5f4-427270eb7e9c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:16:55 crc kubenswrapper[4946]: I1128 07:16:55.191757 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/506e968b-dc84-4f39-b5f4-427270eb7e9c-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:16:55 crc kubenswrapper[4946]: I1128 07:16:55.191794 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/506e968b-dc84-4f39-b5f4-427270eb7e9c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:16:55 crc kubenswrapper[4946]: I1128 07:16:55.305944 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:16:55 crc kubenswrapper[4946]: I1128 07:16:55.315681 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:16:55 crc kubenswrapper[4946]: I1128 07:16:55.339433 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:16:55 crc kubenswrapper[4946]: E1128 07:16:55.339887 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="506e968b-dc84-4f39-b5f4-427270eb7e9c" containerName="ceilometer-notification-agent" Nov 28 07:16:55 crc kubenswrapper[4946]: I1128 07:16:55.339905 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="506e968b-dc84-4f39-b5f4-427270eb7e9c" containerName="ceilometer-notification-agent" Nov 28 07:16:55 crc kubenswrapper[4946]: E1128 07:16:55.339926 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="506e968b-dc84-4f39-b5f4-427270eb7e9c" containerName="sg-core" Nov 28 07:16:55 crc kubenswrapper[4946]: I1128 07:16:55.339933 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="506e968b-dc84-4f39-b5f4-427270eb7e9c" containerName="sg-core" Nov 28 07:16:55 crc kubenswrapper[4946]: E1128 07:16:55.339948 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="506e968b-dc84-4f39-b5f4-427270eb7e9c" containerName="proxy-httpd" Nov 28 07:16:55 crc kubenswrapper[4946]: I1128 07:16:55.339955 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="506e968b-dc84-4f39-b5f4-427270eb7e9c" containerName="proxy-httpd" Nov 28 07:16:55 crc kubenswrapper[4946]: E1128 07:16:55.339975 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="506e968b-dc84-4f39-b5f4-427270eb7e9c" containerName="ceilometer-central-agent" Nov 28 07:16:55 crc kubenswrapper[4946]: I1128 07:16:55.339983 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="506e968b-dc84-4f39-b5f4-427270eb7e9c" containerName="ceilometer-central-agent" Nov 28 07:16:55 crc kubenswrapper[4946]: I1128 07:16:55.340160 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="506e968b-dc84-4f39-b5f4-427270eb7e9c" containerName="ceilometer-notification-agent" Nov 28 07:16:55 crc kubenswrapper[4946]: I1128 07:16:55.340180 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="506e968b-dc84-4f39-b5f4-427270eb7e9c" containerName="sg-core" Nov 28 07:16:55 crc kubenswrapper[4946]: I1128 07:16:55.340193 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="506e968b-dc84-4f39-b5f4-427270eb7e9c" containerName="ceilometer-central-agent" Nov 28 07:16:55 crc kubenswrapper[4946]: I1128 07:16:55.340209 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="506e968b-dc84-4f39-b5f4-427270eb7e9c" containerName="proxy-httpd" Nov 28 07:16:55 crc kubenswrapper[4946]: I1128 07:16:55.341936 4946 util.go:30] "No sandbox for 
Nov 28 07:16:55 crc kubenswrapper[4946]: I1128 07:16:55.346072 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 07:16:55 crc kubenswrapper[4946]: I1128 07:16:55.358612 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 07:16:55 crc kubenswrapper[4946]: I1128 07:16:55.360423 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:16:55 crc kubenswrapper[4946]: I1128 07:16:55.497494 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3fb7f76-db33-4ff6-b613-f60c7a64be38-scripts\") pod \"ceilometer-0\" (UID: \"e3fb7f76-db33-4ff6-b613-f60c7a64be38\") " pod="openstack/ceilometer-0" Nov 28 07:16:55 crc kubenswrapper[4946]: I1128 07:16:55.497557 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llkl7\" (UniqueName: \"kubernetes.io/projected/e3fb7f76-db33-4ff6-b613-f60c7a64be38-kube-api-access-llkl7\") pod \"ceilometer-0\" (UID: \"e3fb7f76-db33-4ff6-b613-f60c7a64be38\") " pod="openstack/ceilometer-0" Nov 28 07:16:55 crc kubenswrapper[4946]: I1128 07:16:55.497606 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3fb7f76-db33-4ff6-b613-f60c7a64be38-log-httpd\") pod \"ceilometer-0\" (UID: \"e3fb7f76-db33-4ff6-b613-f60c7a64be38\") " pod="openstack/ceilometer-0" Nov 28 07:16:55 crc kubenswrapper[4946]: I1128 07:16:55.497669 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3fb7f76-db33-4ff6-b613-f60c7a64be38-config-data\") pod \"ceilometer-0\" (UID: \"e3fb7f76-db33-4ff6-b613-f60c7a64be38\") " pod="openstack/ceilometer-0" Nov 28 07:16:55 crc kubenswrapper[4946]: I1128 07:16:55.497718 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e3fb7f76-db33-4ff6-b613-f60c7a64be38-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e3fb7f76-db33-4ff6-b613-f60c7a64be38\") " pod="openstack/ceilometer-0" Nov 28 07:16:55 crc kubenswrapper[4946]: I1128 07:16:55.497738 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3fb7f76-db33-4ff6-b613-f60c7a64be38-run-httpd\") pod \"ceilometer-0\" (UID: \"e3fb7f76-db33-4ff6-b613-f60c7a64be38\") " pod="openstack/ceilometer-0" Nov 28 07:16:55 crc kubenswrapper[4946]: I1128 07:16:55.497763 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3fb7f76-db33-4ff6-b613-f60c7a64be38-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e3fb7f76-db33-4ff6-b613-f60c7a64be38\") " pod="openstack/ceilometer-0" Nov 28 07:16:55 crc kubenswrapper[4946]: I1128 07:16:55.599290 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e3fb7f76-db33-4ff6-b613-f60c7a64be38-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e3fb7f76-db33-4ff6-b613-f60c7a64be38\") " pod="openstack/ceilometer-0" Nov 28 07:16:55 crc 
kubenswrapper[4946]: I1128 07:16:55.599349 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3fb7f76-db33-4ff6-b613-f60c7a64be38-run-httpd\") pod \"ceilometer-0\" (UID: \"e3fb7f76-db33-4ff6-b613-f60c7a64be38\") " pod="openstack/ceilometer-0" Nov 28 07:16:55 crc kubenswrapper[4946]: I1128 07:16:55.599386 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3fb7f76-db33-4ff6-b613-f60c7a64be38-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e3fb7f76-db33-4ff6-b613-f60c7a64be38\") " pod="openstack/ceilometer-0" Nov 28 07:16:55 crc kubenswrapper[4946]: I1128 07:16:55.599484 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3fb7f76-db33-4ff6-b613-f60c7a64be38-scripts\") pod \"ceilometer-0\" (UID: \"e3fb7f76-db33-4ff6-b613-f60c7a64be38\") " pod="openstack/ceilometer-0" Nov 28 07:16:55 crc kubenswrapper[4946]: I1128 07:16:55.599523 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llkl7\" (UniqueName: \"kubernetes.io/projected/e3fb7f76-db33-4ff6-b613-f60c7a64be38-kube-api-access-llkl7\") pod \"ceilometer-0\" (UID: \"e3fb7f76-db33-4ff6-b613-f60c7a64be38\") " pod="openstack/ceilometer-0" Nov 28 07:16:55 crc kubenswrapper[4946]: I1128 07:16:55.599568 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3fb7f76-db33-4ff6-b613-f60c7a64be38-log-httpd\") pod \"ceilometer-0\" (UID: \"e3fb7f76-db33-4ff6-b613-f60c7a64be38\") " pod="openstack/ceilometer-0" Nov 28 07:16:55 crc kubenswrapper[4946]: I1128 07:16:55.599661 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3fb7f76-db33-4ff6-b613-f60c7a64be38-config-data\") pod \"ceilometer-0\" (UID: \"e3fb7f76-db33-4ff6-b613-f60c7a64be38\") " pod="openstack/ceilometer-0" Nov 28 07:16:55 crc kubenswrapper[4946]: I1128 07:16:55.599958 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3fb7f76-db33-4ff6-b613-f60c7a64be38-run-httpd\") pod \"ceilometer-0\" (UID: \"e3fb7f76-db33-4ff6-b613-f60c7a64be38\") " pod="openstack/ceilometer-0" Nov 28 07:16:55 crc kubenswrapper[4946]: I1128 07:16:55.600255 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3fb7f76-db33-4ff6-b613-f60c7a64be38-log-httpd\") pod \"ceilometer-0\" (UID: \"e3fb7f76-db33-4ff6-b613-f60c7a64be38\") " pod="openstack/ceilometer-0" Nov 28 07:16:55 crc kubenswrapper[4946]: I1128 07:16:55.604730 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e3fb7f76-db33-4ff6-b613-f60c7a64be38-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e3fb7f76-db33-4ff6-b613-f60c7a64be38\") " pod="openstack/ceilometer-0" Nov 28 07:16:55 crc kubenswrapper[4946]: I1128 07:16:55.607564 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3fb7f76-db33-4ff6-b613-f60c7a64be38-scripts\") pod \"ceilometer-0\" (UID: \"e3fb7f76-db33-4ff6-b613-f60c7a64be38\") " pod="openstack/ceilometer-0" Nov 28 07:16:55 crc kubenswrapper[4946]: I1128 07:16:55.609425 4946 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3fb7f76-db33-4ff6-b613-f60c7a64be38-config-data\") pod \"ceilometer-0\" (UID: \"e3fb7f76-db33-4ff6-b613-f60c7a64be38\") " pod="openstack/ceilometer-0" Nov 28 07:16:55 crc kubenswrapper[4946]: I1128 07:16:55.617325 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3fb7f76-db33-4ff6-b613-f60c7a64be38-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e3fb7f76-db33-4ff6-b613-f60c7a64be38\") " pod="openstack/ceilometer-0" Nov 28 07:16:55 crc kubenswrapper[4946]: I1128 07:16:55.629621 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llkl7\" (UniqueName: \"kubernetes.io/projected/e3fb7f76-db33-4ff6-b613-f60c7a64be38-kube-api-access-llkl7\") pod \"ceilometer-0\" (UID: \"e3fb7f76-db33-4ff6-b613-f60c7a64be38\") " pod="openstack/ceilometer-0" Nov 28 07:16:55 crc kubenswrapper[4946]: I1128 07:16:55.754344 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:16:55 crc kubenswrapper[4946]: I1128 07:16:55.755693 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 07:16:56 crc kubenswrapper[4946]: I1128 07:16:56.023873 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="506e968b-dc84-4f39-b5f4-427270eb7e9c" path="/var/lib/kubelet/pods/506e968b-dc84-4f39-b5f4-427270eb7e9c/volumes" Nov 28 07:16:56 crc kubenswrapper[4946]: I1128 07:16:56.025964 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-65f88f985c-d964v" Nov 28 07:16:56 crc kubenswrapper[4946]: W1128 07:16:56.256596 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3fb7f76_db33_4ff6_b613_f60c7a64be38.slice/crio-958a9823d8315e86934ab8a599203bf5dd22e9ed0bf342c82ff2c5f232ad5ff8 WatchSource:0}: Error finding container 958a9823d8315e86934ab8a599203bf5dd22e9ed0bf342c82ff2c5f232ad5ff8: Status 404 returned error can't find the container with id 958a9823d8315e86934ab8a599203bf5dd22e9ed0bf342c82ff2c5f232ad5ff8 Nov 28 07:16:56 crc kubenswrapper[4946]: I1128 07:16:56.258761 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:16:56 crc kubenswrapper[4946]: I1128 07:16:56.260027 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-65f88f985c-d964v" Nov 28 07:16:56 crc kubenswrapper[4946]: I1128 07:16:56.999608 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3fb7f76-db33-4ff6-b613-f60c7a64be38","Type":"ContainerStarted","Data":"958a9823d8315e86934ab8a599203bf5dd22e9ed0bf342c82ff2c5f232ad5ff8"} Nov 28 07:16:57 crc kubenswrapper[4946]: I1128 07:16:57.718144 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 07:16:57 crc kubenswrapper[4946]: I1128 07:16:57.870063 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bf0f217-858e-49f9-8730-7376f77c6d4f-internal-tls-certs\") pod \"6bf0f217-858e-49f9-8730-7376f77c6d4f\" (UID: \"6bf0f217-858e-49f9-8730-7376f77c6d4f\") " Nov 28 07:16:57 crc kubenswrapper[4946]: I1128 07:16:57.870151 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"6bf0f217-858e-49f9-8730-7376f77c6d4f\" (UID: \"6bf0f217-858e-49f9-8730-7376f77c6d4f\") " Nov 28 07:16:57 crc kubenswrapper[4946]: I1128 07:16:57.870189 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bf0f217-858e-49f9-8730-7376f77c6d4f-scripts\") pod \"6bf0f217-858e-49f9-8730-7376f77c6d4f\" (UID: \"6bf0f217-858e-49f9-8730-7376f77c6d4f\") " Nov 28 07:16:57 crc kubenswrapper[4946]: I1128 07:16:57.870256 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bf0f217-858e-49f9-8730-7376f77c6d4f-combined-ca-bundle\") pod \"6bf0f217-858e-49f9-8730-7376f77c6d4f\" (UID: \"6bf0f217-858e-49f9-8730-7376f77c6d4f\") " Nov 28 07:16:57 crc kubenswrapper[4946]: I1128 07:16:57.870336 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bf0f217-858e-49f9-8730-7376f77c6d4f-config-data\") pod \"6bf0f217-858e-49f9-8730-7376f77c6d4f\" (UID: \"6bf0f217-858e-49f9-8730-7376f77c6d4f\") " Nov 28 07:16:57 crc kubenswrapper[4946]: I1128 07:16:57.870372 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bf0f217-858e-49f9-8730-7376f77c6d4f-logs\") pod \"6bf0f217-858e-49f9-8730-7376f77c6d4f\" (UID: \"6bf0f217-858e-49f9-8730-7376f77c6d4f\") " Nov 28 07:16:57 crc kubenswrapper[4946]: I1128 07:16:57.870480 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6bf0f217-858e-49f9-8730-7376f77c6d4f-httpd-run\") pod \"6bf0f217-858e-49f9-8730-7376f77c6d4f\" (UID: \"6bf0f217-858e-49f9-8730-7376f77c6d4f\") " Nov 28 07:16:57 crc kubenswrapper[4946]: I1128 07:16:57.870560 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-496gw\" (UniqueName: \"kubernetes.io/projected/6bf0f217-858e-49f9-8730-7376f77c6d4f-kube-api-access-496gw\") pod \"6bf0f217-858e-49f9-8730-7376f77c6d4f\" (UID: \"6bf0f217-858e-49f9-8730-7376f77c6d4f\") " Nov 28 07:16:57 crc kubenswrapper[4946]: I1128 07:16:57.870998 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bf0f217-858e-49f9-8730-7376f77c6d4f-logs" (OuterVolumeSpecName: "logs") pod "6bf0f217-858e-49f9-8730-7376f77c6d4f" (UID: "6bf0f217-858e-49f9-8730-7376f77c6d4f"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:16:57 crc kubenswrapper[4946]: I1128 07:16:57.871188 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bf0f217-858e-49f9-8730-7376f77c6d4f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "6bf0f217-858e-49f9-8730-7376f77c6d4f" (UID: "6bf0f217-858e-49f9-8730-7376f77c6d4f"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:16:57 crc kubenswrapper[4946]: I1128 07:16:57.875396 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bf0f217-858e-49f9-8730-7376f77c6d4f-kube-api-access-496gw" (OuterVolumeSpecName: "kube-api-access-496gw") pod "6bf0f217-858e-49f9-8730-7376f77c6d4f" (UID: "6bf0f217-858e-49f9-8730-7376f77c6d4f"). InnerVolumeSpecName "kube-api-access-496gw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:16:57 crc kubenswrapper[4946]: I1128 07:16:57.875730 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bf0f217-858e-49f9-8730-7376f77c6d4f-scripts" (OuterVolumeSpecName: "scripts") pod "6bf0f217-858e-49f9-8730-7376f77c6d4f" (UID: "6bf0f217-858e-49f9-8730-7376f77c6d4f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:16:57 crc kubenswrapper[4946]: I1128 07:16:57.875895 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "6bf0f217-858e-49f9-8730-7376f77c6d4f" (UID: "6bf0f217-858e-49f9-8730-7376f77c6d4f"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 07:16:57 crc kubenswrapper[4946]: I1128 07:16:57.904486 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bf0f217-858e-49f9-8730-7376f77c6d4f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6bf0f217-858e-49f9-8730-7376f77c6d4f" (UID: "6bf0f217-858e-49f9-8730-7376f77c6d4f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:16:57 crc kubenswrapper[4946]: I1128 07:16:57.931590 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bf0f217-858e-49f9-8730-7376f77c6d4f-config-data" (OuterVolumeSpecName: "config-data") pod "6bf0f217-858e-49f9-8730-7376f77c6d4f" (UID: "6bf0f217-858e-49f9-8730-7376f77c6d4f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:16:57 crc kubenswrapper[4946]: I1128 07:16:57.941406 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bf0f217-858e-49f9-8730-7376f77c6d4f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "6bf0f217-858e-49f9-8730-7376f77c6d4f" (UID: "6bf0f217-858e-49f9-8730-7376f77c6d4f"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 07:16:57 crc kubenswrapper[4946]: I1128 07:16:57.972740 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bf0f217-858e-49f9-8730-7376f77c6d4f-config-data\") on node \"crc\" DevicePath \"\""
Nov 28 07:16:57 crc kubenswrapper[4946]: I1128 07:16:57.972776 4946 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bf0f217-858e-49f9-8730-7376f77c6d4f-logs\") on node \"crc\" DevicePath \"\""
Nov 28 07:16:57 crc kubenswrapper[4946]: I1128 07:16:57.972789 4946 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6bf0f217-858e-49f9-8730-7376f77c6d4f-httpd-run\") on node \"crc\" DevicePath \"\""
Nov 28 07:16:57 crc kubenswrapper[4946]: I1128 07:16:57.972804 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-496gw\" (UniqueName: \"kubernetes.io/projected/6bf0f217-858e-49f9-8730-7376f77c6d4f-kube-api-access-496gw\") on node \"crc\" DevicePath \"\""
Nov 28 07:16:57 crc kubenswrapper[4946]: I1128 07:16:57.972834 4946 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bf0f217-858e-49f9-8730-7376f77c6d4f-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 28 07:16:57 crc kubenswrapper[4946]: I1128 07:16:57.972864 4946 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" "
Nov 28 07:16:57 crc kubenswrapper[4946]: I1128 07:16:57.972872 4946 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bf0f217-858e-49f9-8730-7376f77c6d4f-scripts\") on node \"crc\" DevicePath \"\""
Nov 28 07:16:57 crc kubenswrapper[4946]: I1128 07:16:57.972881 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bf0f217-858e-49f9-8730-7376f77c6d4f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 07:16:57 crc kubenswrapper[4946]: I1128 07:16:57.992541 4946 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc"
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.038694 4946 generic.go:334] "Generic (PLEG): container finished" podID="1f7498cb-2048-4c04-b1ce-50a236db88a6" containerID="b8582a0aeb08beaedc43c27127396d887ec2f6515c784b520e60cb8b5aec7db4" exitCode=0
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.038828 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1f7498cb-2048-4c04-b1ce-50a236db88a6","Type":"ContainerDied","Data":"b8582a0aeb08beaedc43c27127396d887ec2f6515c784b520e60cb8b5aec7db4"}
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.043735 4946 generic.go:334] "Generic (PLEG): container finished" podID="6bf0f217-858e-49f9-8730-7376f77c6d4f" containerID="ecb1db803400e94fb374b1b0dec90f106d9649db844264b53700f85dbf61d268" exitCode=0
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.043890 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6bf0f217-858e-49f9-8730-7376f77c6d4f","Type":"ContainerDied","Data":"ecb1db803400e94fb374b1b0dec90f106d9649db844264b53700f85dbf61d268"}
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.043975 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6bf0f217-858e-49f9-8730-7376f77c6d4f","Type":"ContainerDied","Data":"a94ef9f9dcc0ff7b1f98598d34efd23b035470f5d2eb68d0179378a644cc09f6"}
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.044039 4946 scope.go:117] "RemoveContainer" containerID="ecb1db803400e94fb374b1b0dec90f106d9649db844264b53700f85dbf61d268"
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.044081 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.060533 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3fb7f76-db33-4ff6-b613-f60c7a64be38","Type":"ContainerStarted","Data":"0dd7ceecbb40811425d322c6721c5aa9fee8214c0c3c536f96eb6a228e8a1dfb"}
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.075311 4946 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\""
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.138083 4946 scope.go:117] "RemoveContainer" containerID="528448ade13d44b6e5c0e461e67cc76aee6c8f3216d6cef11cdf76b3702e2960"
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.192770 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.211570 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.214757 4946 scope.go:117] "RemoveContainer" containerID="ecb1db803400e94fb374b1b0dec90f106d9649db844264b53700f85dbf61d268"
Nov 28 07:16:58 crc kubenswrapper[4946]: E1128 07:16:58.217861 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecb1db803400e94fb374b1b0dec90f106d9649db844264b53700f85dbf61d268\": container with ID starting with ecb1db803400e94fb374b1b0dec90f106d9649db844264b53700f85dbf61d268 not found: ID does not exist" containerID="ecb1db803400e94fb374b1b0dec90f106d9649db844264b53700f85dbf61d268"
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.217906 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecb1db803400e94fb374b1b0dec90f106d9649db844264b53700f85dbf61d268"} err="failed to get container status \"ecb1db803400e94fb374b1b0dec90f106d9649db844264b53700f85dbf61d268\": rpc error: code = NotFound desc = could not find container \"ecb1db803400e94fb374b1b0dec90f106d9649db844264b53700f85dbf61d268\": container with ID starting with ecb1db803400e94fb374b1b0dec90f106d9649db844264b53700f85dbf61d268 not found: ID does not exist"
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.217932 4946 scope.go:117] "RemoveContainer" containerID="528448ade13d44b6e5c0e461e67cc76aee6c8f3216d6cef11cdf76b3702e2960"
Nov 28 07:16:58 crc kubenswrapper[4946]: E1128 07:16:58.218268 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"528448ade13d44b6e5c0e461e67cc76aee6c8f3216d6cef11cdf76b3702e2960\": container with ID starting with 528448ade13d44b6e5c0e461e67cc76aee6c8f3216d6cef11cdf76b3702e2960 not found: ID does not exist" containerID="528448ade13d44b6e5c0e461e67cc76aee6c8f3216d6cef11cdf76b3702e2960"
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.218317 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"528448ade13d44b6e5c0e461e67cc76aee6c8f3216d6cef11cdf76b3702e2960"} err="failed to get container status \"528448ade13d44b6e5c0e461e67cc76aee6c8f3216d6cef11cdf76b3702e2960\": rpc error: code = NotFound desc = could not find container \"528448ade13d44b6e5c0e461e67cc76aee6c8f3216d6cef11cdf76b3702e2960\": container with ID starting with 528448ade13d44b6e5c0e461e67cc76aee6c8f3216d6cef11cdf76b3702e2960 not found: ID does not exist"
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.246815 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 28 07:16:58 crc kubenswrapper[4946]: E1128 07:16:58.247452 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bf0f217-858e-49f9-8730-7376f77c6d4f" containerName="glance-log"
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.247531 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bf0f217-858e-49f9-8730-7376f77c6d4f" containerName="glance-log"
Nov 28 07:16:58 crc kubenswrapper[4946]: E1128 07:16:58.247572 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bf0f217-858e-49f9-8730-7376f77c6d4f" containerName="glance-httpd"
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.247581 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bf0f217-858e-49f9-8730-7376f77c6d4f" containerName="glance-httpd"
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.247813 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bf0f217-858e-49f9-8730-7376f77c6d4f" containerName="glance-httpd"
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.247844 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bf0f217-858e-49f9-8730-7376f77c6d4f" containerName="glance-log"
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.249166 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.257138 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.257401 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.262294 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.387787 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/acadbe07-94b0-4a5d-ac42-6524f0e4ce61-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"acadbe07-94b0-4a5d-ac42-6524f0e4ce61\") " pod="openstack/glance-default-internal-api-0"
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.387858 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/acadbe07-94b0-4a5d-ac42-6524f0e4ce61-logs\") pod \"glance-default-internal-api-0\" (UID: \"acadbe07-94b0-4a5d-ac42-6524f0e4ce61\") " pod="openstack/glance-default-internal-api-0"
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.387885 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acadbe07-94b0-4a5d-ac42-6524f0e4ce61-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"acadbe07-94b0-4a5d-ac42-6524f0e4ce61\") " pod="openstack/glance-default-internal-api-0"
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.387924 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ztn2\" (UniqueName: \"kubernetes.io/projected/acadbe07-94b0-4a5d-ac42-6524f0e4ce61-kube-api-access-5ztn2\") pod \"glance-default-internal-api-0\" (UID: \"acadbe07-94b0-4a5d-ac42-6524f0e4ce61\") " pod="openstack/glance-default-internal-api-0"
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.387947 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/acadbe07-94b0-4a5d-ac42-6524f0e4ce61-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"acadbe07-94b0-4a5d-ac42-6524f0e4ce61\") " pod="openstack/glance-default-internal-api-0"
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.387977 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acadbe07-94b0-4a5d-ac42-6524f0e4ce61-scripts\") pod \"glance-default-internal-api-0\" (UID: \"acadbe07-94b0-4a5d-ac42-6524f0e4ce61\") " pod="openstack/glance-default-internal-api-0"
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.388003 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"acadbe07-94b0-4a5d-ac42-6524f0e4ce61\") " pod="openstack/glance-default-internal-api-0"
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.388022 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acadbe07-94b0-4a5d-ac42-6524f0e4ce61-config-data\") pod \"glance-default-internal-api-0\" (UID: \"acadbe07-94b0-4a5d-ac42-6524f0e4ce61\") " pod="openstack/glance-default-internal-api-0"
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.489819 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ztn2\" (UniqueName: \"kubernetes.io/projected/acadbe07-94b0-4a5d-ac42-6524f0e4ce61-kube-api-access-5ztn2\") pod \"glance-default-internal-api-0\" (UID: \"acadbe07-94b0-4a5d-ac42-6524f0e4ce61\") " pod="openstack/glance-default-internal-api-0"
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.489880 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/acadbe07-94b0-4a5d-ac42-6524f0e4ce61-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"acadbe07-94b0-4a5d-ac42-6524f0e4ce61\") " pod="openstack/glance-default-internal-api-0"
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.489911 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acadbe07-94b0-4a5d-ac42-6524f0e4ce61-scripts\") pod \"glance-default-internal-api-0\" (UID: \"acadbe07-94b0-4a5d-ac42-6524f0e4ce61\") " pod="openstack/glance-default-internal-api-0"
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.489938 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"acadbe07-94b0-4a5d-ac42-6524f0e4ce61\") " pod="openstack/glance-default-internal-api-0"
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.489960 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acadbe07-94b0-4a5d-ac42-6524f0e4ce61-config-data\") pod \"glance-default-internal-api-0\" (UID: \"acadbe07-94b0-4a5d-ac42-6524f0e4ce61\") " pod="openstack/glance-default-internal-api-0"
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.490047 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/acadbe07-94b0-4a5d-ac42-6524f0e4ce61-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"acadbe07-94b0-4a5d-ac42-6524f0e4ce61\") " pod="openstack/glance-default-internal-api-0"
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.490082 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/acadbe07-94b0-4a5d-ac42-6524f0e4ce61-logs\") pod \"glance-default-internal-api-0\" (UID: \"acadbe07-94b0-4a5d-ac42-6524f0e4ce61\") " pod="openstack/glance-default-internal-api-0"
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.490103 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acadbe07-94b0-4a5d-ac42-6524f0e4ce61-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"acadbe07-94b0-4a5d-ac42-6524f0e4ce61\") " pod="openstack/glance-default-internal-api-0"
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.490626 4946 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"acadbe07-94b0-4a5d-ac42-6524f0e4ce61\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0"
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.491050 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/acadbe07-94b0-4a5d-ac42-6524f0e4ce61-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"acadbe07-94b0-4a5d-ac42-6524f0e4ce61\") " pod="openstack/glance-default-internal-api-0"
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.495834 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acadbe07-94b0-4a5d-ac42-6524f0e4ce61-config-data\") pod \"glance-default-internal-api-0\" (UID: \"acadbe07-94b0-4a5d-ac42-6524f0e4ce61\") " pod="openstack/glance-default-internal-api-0"
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.496679 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acadbe07-94b0-4a5d-ac42-6524f0e4ce61-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"acadbe07-94b0-4a5d-ac42-6524f0e4ce61\") " pod="openstack/glance-default-internal-api-0"
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.498036 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/acadbe07-94b0-4a5d-ac42-6524f0e4ce61-logs\") pod \"glance-default-internal-api-0\" (UID: \"acadbe07-94b0-4a5d-ac42-6524f0e4ce61\") " pod="openstack/glance-default-internal-api-0"
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.506697 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/acadbe07-94b0-4a5d-ac42-6524f0e4ce61-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"acadbe07-94b0-4a5d-ac42-6524f0e4ce61\") " pod="openstack/glance-default-internal-api-0"
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.528145 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acadbe07-94b0-4a5d-ac42-6524f0e4ce61-scripts\") pod \"glance-default-internal-api-0\" (UID: \"acadbe07-94b0-4a5d-ac42-6524f0e4ce61\") " pod="openstack/glance-default-internal-api-0"
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.530332 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ztn2\" (UniqueName: \"kubernetes.io/projected/acadbe07-94b0-4a5d-ac42-6524f0e4ce61-kube-api-access-5ztn2\") pod \"glance-default-internal-api-0\" (UID: \"acadbe07-94b0-4a5d-ac42-6524f0e4ce61\") " pod="openstack/glance-default-internal-api-0"
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.576598 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"acadbe07-94b0-4a5d-ac42-6524f0e4ce61\") " pod="openstack/glance-default-internal-api-0"
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.605009 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.697246 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1f7498cb-2048-4c04-b1ce-50a236db88a6-httpd-run\") pod \"1f7498cb-2048-4c04-b1ce-50a236db88a6\" (UID: \"1f7498cb-2048-4c04-b1ce-50a236db88a6\") "
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.697323 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f7498cb-2048-4c04-b1ce-50a236db88a6-logs\") pod \"1f7498cb-2048-4c04-b1ce-50a236db88a6\" (UID: \"1f7498cb-2048-4c04-b1ce-50a236db88a6\") "
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.697351 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"1f7498cb-2048-4c04-b1ce-50a236db88a6\" (UID: \"1f7498cb-2048-4c04-b1ce-50a236db88a6\") "
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.697415 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqbkl\" (UniqueName: \"kubernetes.io/projected/1f7498cb-2048-4c04-b1ce-50a236db88a6-kube-api-access-rqbkl\") pod \"1f7498cb-2048-4c04-b1ce-50a236db88a6\" (UID: \"1f7498cb-2048-4c04-b1ce-50a236db88a6\") "
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.697440 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f7498cb-2048-4c04-b1ce-50a236db88a6-public-tls-certs\") pod \"1f7498cb-2048-4c04-b1ce-50a236db88a6\" (UID: \"1f7498cb-2048-4c04-b1ce-50a236db88a6\") "
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.697653 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f7498cb-2048-4c04-b1ce-50a236db88a6-combined-ca-bundle\") pod \"1f7498cb-2048-4c04-b1ce-50a236db88a6\" (UID: \"1f7498cb-2048-4c04-b1ce-50a236db88a6\") "
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.697671 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f7498cb-2048-4c04-b1ce-50a236db88a6-scripts\") pod \"1f7498cb-2048-4c04-b1ce-50a236db88a6\" (UID: \"1f7498cb-2048-4c04-b1ce-50a236db88a6\") "
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.697711 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f7498cb-2048-4c04-b1ce-50a236db88a6-config-data\") pod \"1f7498cb-2048-4c04-b1ce-50a236db88a6\" (UID: \"1f7498cb-2048-4c04-b1ce-50a236db88a6\") "
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.697845 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f7498cb-2048-4c04-b1ce-50a236db88a6-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1f7498cb-2048-4c04-b1ce-50a236db88a6" (UID: "1f7498cb-2048-4c04-b1ce-50a236db88a6"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.697962 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f7498cb-2048-4c04-b1ce-50a236db88a6-logs" (OuterVolumeSpecName: "logs") pod "1f7498cb-2048-4c04-b1ce-50a236db88a6" (UID: "1f7498cb-2048-4c04-b1ce-50a236db88a6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.698499 4946 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1f7498cb-2048-4c04-b1ce-50a236db88a6-httpd-run\") on node \"crc\" DevicePath \"\""
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.698516 4946 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f7498cb-2048-4c04-b1ce-50a236db88a6-logs\") on node \"crc\" DevicePath \"\""
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.706868 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "1f7498cb-2048-4c04-b1ce-50a236db88a6" (UID: "1f7498cb-2048-4c04-b1ce-50a236db88a6"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.719638 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f7498cb-2048-4c04-b1ce-50a236db88a6-scripts" (OuterVolumeSpecName: "scripts") pod "1f7498cb-2048-4c04-b1ce-50a236db88a6" (UID: "1f7498cb-2048-4c04-b1ce-50a236db88a6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.719800 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f7498cb-2048-4c04-b1ce-50a236db88a6-kube-api-access-rqbkl" (OuterVolumeSpecName: "kube-api-access-rqbkl") pod "1f7498cb-2048-4c04-b1ce-50a236db88a6" (UID: "1f7498cb-2048-4c04-b1ce-50a236db88a6"). InnerVolumeSpecName "kube-api-access-rqbkl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.765064 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f7498cb-2048-4c04-b1ce-50a236db88a6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f7498cb-2048-4c04-b1ce-50a236db88a6" (UID: "1f7498cb-2048-4c04-b1ce-50a236db88a6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.768666 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f7498cb-2048-4c04-b1ce-50a236db88a6-config-data" (OuterVolumeSpecName: "config-data") pod "1f7498cb-2048-4c04-b1ce-50a236db88a6" (UID: "1f7498cb-2048-4c04-b1ce-50a236db88a6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.785517 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f7498cb-2048-4c04-b1ce-50a236db88a6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1f7498cb-2048-4c04-b1ce-50a236db88a6" (UID: "1f7498cb-2048-4c04-b1ce-50a236db88a6"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.800835 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f7498cb-2048-4c04-b1ce-50a236db88a6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.800872 4946 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f7498cb-2048-4c04-b1ce-50a236db88a6-scripts\") on node \"crc\" DevicePath \"\""
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.800888 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f7498cb-2048-4c04-b1ce-50a236db88a6-config-data\") on node \"crc\" DevicePath \"\""
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.800921 4946 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" "
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.800931 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqbkl\" (UniqueName: \"kubernetes.io/projected/1f7498cb-2048-4c04-b1ce-50a236db88a6-kube-api-access-rqbkl\") on node \"crc\" DevicePath \"\""
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.800942 4946 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f7498cb-2048-4c04-b1ce-50a236db88a6-public-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.834006 4946 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc"
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.871450 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Nov 28 07:16:58 crc kubenswrapper[4946]: I1128 07:16:58.905902 4946 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\""
Nov 28 07:16:59 crc kubenswrapper[4946]: I1128 07:16:59.090499 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 28 07:16:59 crc kubenswrapper[4946]: I1128 07:16:59.090863 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1f7498cb-2048-4c04-b1ce-50a236db88a6","Type":"ContainerDied","Data":"e21836b37220847731c77e60896743728b9129f1309dff4c359c55d25d676eec"}
Nov 28 07:16:59 crc kubenswrapper[4946]: I1128 07:16:59.090917 4946 scope.go:117] "RemoveContainer" containerID="b8582a0aeb08beaedc43c27127396d887ec2f6515c784b520e60cb8b5aec7db4"
Nov 28 07:16:59 crc kubenswrapper[4946]: I1128 07:16:59.112111 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3fb7f76-db33-4ff6-b613-f60c7a64be38","Type":"ContainerStarted","Data":"d9ce4709d1cb40bce3dab99a4b8e4867e80357a705bfcf4276ab80331db85044"}
Nov 28 07:16:59 crc kubenswrapper[4946]: I1128 07:16:59.143518 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 28 07:16:59 crc kubenswrapper[4946]: I1128 07:16:59.153483 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 28 07:16:59 crc kubenswrapper[4946]: I1128 07:16:59.157938 4946 scope.go:117] "RemoveContainer" containerID="b8f930faf3c00d4c11e4852daae6d3873a790d09d2e2b47665d2dcb7fd73ec0c"
Nov 28 07:16:59 crc kubenswrapper[4946]: I1128 07:16:59.167520 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 28 07:16:59 crc kubenswrapper[4946]: E1128 07:16:59.168027 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f7498cb-2048-4c04-b1ce-50a236db88a6" containerName="glance-log"
Nov 28 07:16:59 crc kubenswrapper[4946]: I1128 07:16:59.168046 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f7498cb-2048-4c04-b1ce-50a236db88a6" containerName="glance-log"
Nov 28 07:16:59 crc kubenswrapper[4946]: E1128 07:16:59.168060 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f7498cb-2048-4c04-b1ce-50a236db88a6" containerName="glance-httpd"
Nov 28 07:16:59 crc kubenswrapper[4946]: I1128 07:16:59.168066 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f7498cb-2048-4c04-b1ce-50a236db88a6" containerName="glance-httpd"
Nov 28 07:16:59 crc kubenswrapper[4946]: I1128 07:16:59.168261 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f7498cb-2048-4c04-b1ce-50a236db88a6" containerName="glance-log"
Nov 28 07:16:59 crc kubenswrapper[4946]: I1128 07:16:59.168285 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f7498cb-2048-4c04-b1ce-50a236db88a6" containerName="glance-httpd"
Nov 28 07:16:59 crc kubenswrapper[4946]: I1128 07:16:59.177909 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 28 07:16:59 crc kubenswrapper[4946]: I1128 07:16:59.188061 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Nov 28 07:16:59 crc kubenswrapper[4946]: I1128 07:16:59.188218 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Nov 28 07:16:59 crc kubenswrapper[4946]: I1128 07:16:59.195188 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 28 07:16:59 crc kubenswrapper[4946]: I1128 07:16:59.314871 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"813134a8-463b-4f7d-8160-ceb1c5a96853\") " pod="openstack/glance-default-external-api-0"
Nov 28 07:16:59 crc kubenswrapper[4946]: I1128 07:16:59.314933 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/813134a8-463b-4f7d-8160-ceb1c5a96853-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"813134a8-463b-4f7d-8160-ceb1c5a96853\") " pod="openstack/glance-default-external-api-0"
Nov 28 07:16:59 crc kubenswrapper[4946]: I1128 07:16:59.315024 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/813134a8-463b-4f7d-8160-ceb1c5a96853-scripts\") pod \"glance-default-external-api-0\" (UID: \"813134a8-463b-4f7d-8160-ceb1c5a96853\") " pod="openstack/glance-default-external-api-0"
Nov 28 07:16:59 crc kubenswrapper[4946]: I1128 07:16:59.315060 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/813134a8-463b-4f7d-8160-ceb1c5a96853-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"813134a8-463b-4f7d-8160-ceb1c5a96853\") " pod="openstack/glance-default-external-api-0"
Nov 28 07:16:59 crc kubenswrapper[4946]: I1128 07:16:59.315132 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/813134a8-463b-4f7d-8160-ceb1c5a96853-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"813134a8-463b-4f7d-8160-ceb1c5a96853\") " pod="openstack/glance-default-external-api-0"
Nov 28 07:16:59 crc kubenswrapper[4946]: I1128 07:16:59.315181 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/813134a8-463b-4f7d-8160-ceb1c5a96853-logs\") pod \"glance-default-external-api-0\" (UID: \"813134a8-463b-4f7d-8160-ceb1c5a96853\") " pod="openstack/glance-default-external-api-0"
Nov 28 07:16:59 crc kubenswrapper[4946]: I1128 07:16:59.315216 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/813134a8-463b-4f7d-8160-ceb1c5a96853-config-data\") pod \"glance-default-external-api-0\" (UID: \"813134a8-463b-4f7d-8160-ceb1c5a96853\") " pod="openstack/glance-default-external-api-0"
Nov 28 07:16:59 crc kubenswrapper[4946]: I1128 07:16:59.315240 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqk9h\" (UniqueName: \"kubernetes.io/projected/813134a8-463b-4f7d-8160-ceb1c5a96853-kube-api-access-lqk9h\") pod \"glance-default-external-api-0\" (UID: \"813134a8-463b-4f7d-8160-ceb1c5a96853\") " pod="openstack/glance-default-external-api-0"
Nov 28 07:16:59 crc kubenswrapper[4946]: I1128 07:16:59.417715 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/813134a8-463b-4f7d-8160-ceb1c5a96853-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"813134a8-463b-4f7d-8160-ceb1c5a96853\") " pod="openstack/glance-default-external-api-0"
Nov 28 07:16:59 crc kubenswrapper[4946]: I1128 07:16:59.417789 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/813134a8-463b-4f7d-8160-ceb1c5a96853-logs\") pod \"glance-default-external-api-0\" (UID: \"813134a8-463b-4f7d-8160-ceb1c5a96853\") " pod="openstack/glance-default-external-api-0"
Nov 28 07:16:59 crc kubenswrapper[4946]: I1128 07:16:59.417825 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/813134a8-463b-4f7d-8160-ceb1c5a96853-config-data\") pod \"glance-default-external-api-0\" (UID: \"813134a8-463b-4f7d-8160-ceb1c5a96853\") " pod="openstack/glance-default-external-api-0"
Nov 28 07:16:59 crc kubenswrapper[4946]: I1128 07:16:59.417853 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqk9h\" (UniqueName: \"kubernetes.io/projected/813134a8-463b-4f7d-8160-ceb1c5a96853-kube-api-access-lqk9h\") pod \"glance-default-external-api-0\" (UID: \"813134a8-463b-4f7d-8160-ceb1c5a96853\") " pod="openstack/glance-default-external-api-0"
Nov 28 07:16:59 crc kubenswrapper[4946]: I1128 07:16:59.417906 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"813134a8-463b-4f7d-8160-ceb1c5a96853\") " pod="openstack/glance-default-external-api-0"
Nov 28 07:16:59 crc kubenswrapper[4946]: I1128 07:16:59.417945 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/813134a8-463b-4f7d-8160-ceb1c5a96853-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"813134a8-463b-4f7d-8160-ceb1c5a96853\") " pod="openstack/glance-default-external-api-0"
Nov 28 07:16:59 crc kubenswrapper[4946]: I1128 07:16:59.418016 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/813134a8-463b-4f7d-8160-ceb1c5a96853-scripts\") pod \"glance-default-external-api-0\" (UID: \"813134a8-463b-4f7d-8160-ceb1c5a96853\") " pod="openstack/glance-default-external-api-0"
Nov 28 07:16:59 crc kubenswrapper[4946]: I1128 07:16:59.418043 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/813134a8-463b-4f7d-8160-ceb1c5a96853-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"813134a8-463b-4f7d-8160-ceb1c5a96853\") " pod="openstack/glance-default-external-api-0"
Nov 28 07:16:59 crc kubenswrapper[4946]: I1128 07:16:59.419289 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/813134a8-463b-4f7d-8160-ceb1c5a96853-logs\") pod \"glance-default-external-api-0\" (UID: \"813134a8-463b-4f7d-8160-ceb1c5a96853\") " pod="openstack/glance-default-external-api-0"
Nov 28 07:16:59 crc kubenswrapper[4946]: I1128 07:16:59.419587 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/813134a8-463b-4f7d-8160-ceb1c5a96853-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"813134a8-463b-4f7d-8160-ceb1c5a96853\") " pod="openstack/glance-default-external-api-0"
Nov 28 07:16:59 crc kubenswrapper[4946]: I1128 07:16:59.420047 4946 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"813134a8-463b-4f7d-8160-ceb1c5a96853\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0"
Nov 28 07:16:59 crc kubenswrapper[4946]: I1128 07:16:59.425147 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/813134a8-463b-4f7d-8160-ceb1c5a96853-scripts\") pod \"glance-default-external-api-0\" (UID: \"813134a8-463b-4f7d-8160-ceb1c5a96853\") " pod="openstack/glance-default-external-api-0"
Nov 28 07:16:59 crc kubenswrapper[4946]: I1128 07:16:59.425354 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/813134a8-463b-4f7d-8160-ceb1c5a96853-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"813134a8-463b-4f7d-8160-ceb1c5a96853\") " pod="openstack/glance-default-external-api-0"
Nov 28 07:16:59 crc kubenswrapper[4946]: I1128 07:16:59.425492 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/813134a8-463b-4f7d-8160-ceb1c5a96853-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"813134a8-463b-4f7d-8160-ceb1c5a96853\") " pod="openstack/glance-default-external-api-0"
Nov 28 07:16:59 crc kubenswrapper[4946]: I1128 07:16:59.425643 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/813134a8-463b-4f7d-8160-ceb1c5a96853-config-data\") pod \"glance-default-external-api-0\" (UID: \"813134a8-463b-4f7d-8160-ceb1c5a96853\") " pod="openstack/glance-default-external-api-0"
Nov 28 07:16:59 crc kubenswrapper[4946]: I1128 07:16:59.440585 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqk9h\" (UniqueName: \"kubernetes.io/projected/813134a8-463b-4f7d-8160-ceb1c5a96853-kube-api-access-lqk9h\") pod \"glance-default-external-api-0\" (UID: \"813134a8-463b-4f7d-8160-ceb1c5a96853\") " pod="openstack/glance-default-external-api-0"
Nov 28 07:16:59 crc kubenswrapper[4946]: I1128 07:16:59.444857 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 28 07:16:59 crc kubenswrapper[4946]: W1128 07:16:59.448689 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podacadbe07_94b0_4a5d_ac42_6524f0e4ce61.slice/crio-d98d8dd8dcc0fc29e123ccc773426df78e1b72e165951ad2e1b448184d5ad6d8 WatchSource:0}: Error finding container d98d8dd8dcc0fc29e123ccc773426df78e1b72e165951ad2e1b448184d5ad6d8: Status 404 returned error can't find the container with id d98d8dd8dcc0fc29e123ccc773426df78e1b72e165951ad2e1b448184d5ad6d8
Nov 28 07:16:59 crc kubenswrapper[4946]: I1128 07:16:59.454826 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"813134a8-463b-4f7d-8160-ceb1c5a96853\") " pod="openstack/glance-default-external-api-0"
Nov 28 07:16:59 crc kubenswrapper[4946]: I1128 07:16:59.536289 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 28 07:17:00 crc kubenswrapper[4946]: I1128 07:17:00.010592 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f7498cb-2048-4c04-b1ce-50a236db88a6" path="/var/lib/kubelet/pods/1f7498cb-2048-4c04-b1ce-50a236db88a6/volumes"
Nov 28 07:17:00 crc kubenswrapper[4946]: I1128 07:17:00.012332 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bf0f217-858e-49f9-8730-7376f77c6d4f" path="/var/lib/kubelet/pods/6bf0f217-858e-49f9-8730-7376f77c6d4f/volumes"
Nov 28 07:17:00 crc kubenswrapper[4946]: I1128 07:17:00.144990 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 28 07:17:00 crc kubenswrapper[4946]: W1128 07:17:00.168444 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod813134a8_463b_4f7d_8160_ceb1c5a96853.slice/crio-8378ae4ecbb1d3ada07991bd1f3c82f2762b1a98faa009c8fab02ce25130ddde WatchSource:0}: Error finding container 8378ae4ecbb1d3ada07991bd1f3c82f2762b1a98faa009c8fab02ce25130ddde: Status 404 returned error can't find the container with id 8378ae4ecbb1d3ada07991bd1f3c82f2762b1a98faa009c8fab02ce25130ddde
Nov 28 07:17:00 crc kubenswrapper[4946]: I1128 07:17:00.171840 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3fb7f76-db33-4ff6-b613-f60c7a64be38","Type":"ContainerStarted","Data":"787af9da38e7f9ea7c2d782a0a03c764e1ffa2af9e7e27e0eb8c6049ba138058"}
Nov 28 07:17:00 crc kubenswrapper[4946]: I1128 07:17:00.174976 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"acadbe07-94b0-4a5d-ac42-6524f0e4ce61","Type":"ContainerStarted","Data":"364195b07b396d49046a6d8594cb12442c0047908fd530817352c055ef2e1319"}
Nov 28 07:17:00 crc kubenswrapper[4946]: I1128 07:17:00.175016 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"acadbe07-94b0-4a5d-ac42-6524f0e4ce61","Type":"ContainerStarted","Data":"d98d8dd8dcc0fc29e123ccc773426df78e1b72e165951ad2e1b448184d5ad6d8"}
Nov 28 07:17:01 crc kubenswrapper[4946]: I1128 07:17:01.190158 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"813134a8-463b-4f7d-8160-ceb1c5a96853","Type":"ContainerStarted","Data":"53c5b6d4bf9c96653c84e3fcb9a4ea1b7994f3a98935936a45c752c693dcee48"}
Nov 28 07:17:01 crc kubenswrapper[4946]: I1128 07:17:01.191934 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"813134a8-463b-4f7d-8160-ceb1c5a96853","Type":"ContainerStarted","Data":"8378ae4ecbb1d3ada07991bd1f3c82f2762b1a98faa009c8fab02ce25130ddde"}
Nov 28 07:17:01 crc kubenswrapper[4946]: I1128 07:17:01.195164 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3fb7f76-db33-4ff6-b613-f60c7a64be38","Type":"ContainerStarted","Data":"d67738d9866152125af3a1266102ca4656a4f5012f313fb1146ddac533f56549"}
Nov 28 07:17:01 crc kubenswrapper[4946]: I1128 07:17:01.195363 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e3fb7f76-db33-4ff6-b613-f60c7a64be38" containerName="ceilometer-central-agent" containerID="cri-o://0dd7ceecbb40811425d322c6721c5aa9fee8214c0c3c536f96eb6a228e8a1dfb" gracePeriod=30
Nov 28 07:17:01 crc kubenswrapper[4946]: I1128 07:17:01.195675 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Nov 28 07:17:01 crc kubenswrapper[4946]: I1128 07:17:01.195985 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e3fb7f76-db33-4ff6-b613-f60c7a64be38" containerName="proxy-httpd" containerID="cri-o://d67738d9866152125af3a1266102ca4656a4f5012f313fb1146ddac533f56549" gracePeriod=30
Nov 28 07:17:01 crc kubenswrapper[4946]: I1128 07:17:01.196036 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e3fb7f76-db33-4ff6-b613-f60c7a64be38" containerName="sg-core" containerID="cri-o://787af9da38e7f9ea7c2d782a0a03c764e1ffa2af9e7e27e0eb8c6049ba138058" gracePeriod=30
Nov 28 07:17:01 crc kubenswrapper[4946]: I1128 07:17:01.196080 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e3fb7f76-db33-4ff6-b613-f60c7a64be38" containerName="ceilometer-notification-agent" containerID="cri-o://d9ce4709d1cb40bce3dab99a4b8e4867e80357a705bfcf4276ab80331db85044" gracePeriod=30
Nov 28 07:17:01 crc kubenswrapper[4946]: I1128 07:17:01.208916 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"acadbe07-94b0-4a5d-ac42-6524f0e4ce61","Type":"ContainerStarted","Data":"b1f1ded874bdaaff5d6be92e1099409af1ca775953c8f30295a4c606005bdf30"}
Nov 28 07:17:01 crc kubenswrapper[4946]: I1128 07:17:01.236336 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.6332314810000002 podStartE2EDuration="6.236313547s" podCreationTimestamp="2025-11-28 07:16:55 +0000 UTC" firstStartedPulling="2025-11-28 07:16:56.259614529 +0000 UTC m=+1470.637679650" lastFinishedPulling="2025-11-28 07:17:00.862696605 +0000 UTC m=+1475.240761716" observedRunningTime="2025-11-28 07:17:01.217400904 +0000 UTC m=+1475.595466015" watchObservedRunningTime="2025-11-28 07:17:01.236313547 +0000 UTC m=+1475.614378658"
Nov 28 07:17:01 crc kubenswrapper[4946]: I1128 07:17:01.266483 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.266451266 podStartE2EDuration="3.266451266s" podCreationTimestamp="2025-11-28 07:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:17:01.2482385 +0000 UTC m=+1475.626303611" watchObservedRunningTime="2025-11-28 07:17:01.266451266 +0000 UTC m=+1475.644516367"
Nov 28 07:17:02 crc kubenswrapper[4946]: I1128 07:17:02.218312 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"813134a8-463b-4f7d-8160-ceb1c5a96853","Type":"ContainerStarted","Data":"a5bc002da109da3ebb39a470b77cd1c3a84898e95c562ea010269e7e486161bb"}
Nov 28 07:17:02 crc kubenswrapper[4946]: I1128 07:17:02.222488 4946 generic.go:334] "Generic (PLEG): container finished" podID="e3fb7f76-db33-4ff6-b613-f60c7a64be38" containerID="787af9da38e7f9ea7c2d782a0a03c764e1ffa2af9e7e27e0eb8c6049ba138058" exitCode=2
Nov 28 07:17:02 crc kubenswrapper[4946]: I1128 07:17:02.222518 4946 generic.go:334] "Generic (PLEG): container finished" podID="e3fb7f76-db33-4ff6-b613-f60c7a64be38" containerID="d9ce4709d1cb40bce3dab99a4b8e4867e80357a705bfcf4276ab80331db85044" exitCode=0
Nov 28 07:17:02 crc kubenswrapper[4946]: I1128 07:17:02.223069 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3fb7f76-db33-4ff6-b613-f60c7a64be38","Type":"ContainerDied","Data":"787af9da38e7f9ea7c2d782a0a03c764e1ffa2af9e7e27e0eb8c6049ba138058"}
Nov 28 07:17:02 crc kubenswrapper[4946]: I1128 07:17:02.223099 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3fb7f76-db33-4ff6-b613-f60c7a64be38","Type":"ContainerDied","Data":"d9ce4709d1cb40bce3dab99a4b8e4867e80357a705bfcf4276ab80331db85044"}
Nov 28 07:17:02 crc kubenswrapper[4946]: I1128 07:17:02.246917 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.246892499 podStartE2EDuration="3.246892499s" podCreationTimestamp="2025-11-28 07:16:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:17:02.236352751 +0000 UTC m=+1476.614417862" watchObservedRunningTime="2025-11-28 07:17:02.246892499 +0000 UTC m=+1476.624957610"
Nov 28 07:17:03 crc kubenswrapper[4946]: I1128 07:17:03.138684 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-65f88f985c-d964v"
Nov 28 07:17:07 crc kubenswrapper[4946]: I1128 07:17:07.285665 4946 generic.go:334] "Generic (PLEG): container finished" podID="e3fb7f76-db33-4ff6-b613-f60c7a64be38" containerID="0dd7ceecbb40811425d322c6721c5aa9fee8214c0c3c536f96eb6a228e8a1dfb" exitCode=0
Nov 28 07:17:07 crc kubenswrapper[4946]: I1128 07:17:07.285776 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3fb7f76-db33-4ff6-b613-f60c7a64be38","Type":"ContainerDied","Data":"0dd7ceecbb40811425d322c6721c5aa9fee8214c0c3c536f96eb6a228e8a1dfb"}
Nov 28 07:17:08 crc kubenswrapper[4946]: I1128 07:17:08.871819 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Nov 28 07:17:08 crc kubenswrapper[4946]: I1128 07:17:08.872128 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Nov 28 07:17:08 crc kubenswrapper[4946]: I1128 07:17:08.910341 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Nov 28 07:17:08 crc kubenswrapper[4946]: I1128 07:17:08.923856 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Nov 28 07:17:09 crc kubenswrapper[4946]: I1128 07:17:09.309686 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Nov 28 07:17:09 crc kubenswrapper[4946]: I1128 07:17:09.309777 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Nov 28 07:17:09 crc kubenswrapper[4946]: I1128 07:17:09.537431 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Nov 28 07:17:09 crc kubenswrapper[4946]: I1128 07:17:09.537556 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Nov 28 07:17:09 crc kubenswrapper[4946]: I1128 07:17:09.576023 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Nov 28 07:17:09 crc kubenswrapper[4946]: I1128 07:17:09.614226 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.166735 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-wkths"]
Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.169583 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-wkths"
Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.178970 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-wkths"]
Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.269552 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-6n69v"]
Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.270991 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-6n69v"
Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.292008 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-d713-account-create-update-5fhps"]
Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.293383 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-d713-account-create-update-5fhps"
Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.309848 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.325339 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.325376 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.332259 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4e29c9b-9e25-4304-a314-54943d88f953-operator-scripts\") pod \"nova-api-db-create-wkths\" (UID: \"d4e29c9b-9e25-4304-a314-54943d88f953\") " pod="openstack/nova-api-db-create-wkths"
Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.333116 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vr6q\" (UniqueName: \"kubernetes.io/projected/d4e29c9b-9e25-4304-a314-54943d88f953-kube-api-access-6vr6q\") pod \"nova-api-db-create-wkths\" (UID: \"d4e29c9b-9e25-4304-a314-54943d88f953\") " pod="openstack/nova-api-db-create-wkths"
Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.341406 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-6n69v"]
Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.354540 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-d713-account-create-update-5fhps"]
Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.439139 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-xbl8m"]
Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.440176 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv7m4\" (UniqueName: \"kubernetes.io/projected/8b8aece4-9268-47a9-a9cb-b312c604a23a-kube-api-access-bv7m4\") pod \"nova-cell0-db-create-6n69v\" (UID: \"8b8aece4-9268-47a9-a9cb-b312c604a23a\") " pod="openstack/nova-cell0-db-create-6n69v"
Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.440291 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b8aece4-9268-47a9-a9cb-b312c604a23a-operator-scripts\") pod \"nova-cell0-db-create-6n69v\" (UID: \"8b8aece4-9268-47a9-a9cb-b312c604a23a\") " pod="openstack/nova-cell0-db-create-6n69v"
Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.440900 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vr6q\" (UniqueName: \"kubernetes.io/projected/d4e29c9b-9e25-4304-a314-54943d88f953-kube-api-access-6vr6q\") pod \"nova-api-db-create-wkths\" (UID: \"d4e29c9b-9e25-4304-a314-54943d88f953\") " pod="openstack/nova-api-db-create-wkths"
Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.441318 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-xbl8m"
Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.446693 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38655c7a-a82f-485f-ae38-75634417e780-operator-scripts\") pod \"nova-api-d713-account-create-update-5fhps\" (UID: \"38655c7a-a82f-485f-ae38-75634417e780\") " pod="openstack/nova-api-d713-account-create-update-5fhps"
Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.446932 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5fbb\" (UniqueName: \"kubernetes.io/projected/38655c7a-a82f-485f-ae38-75634417e780-kube-api-access-p5fbb\") pod \"nova-api-d713-account-create-update-5fhps\" (UID: \"38655c7a-a82f-485f-ae38-75634417e780\") " pod="openstack/nova-api-d713-account-create-update-5fhps"
Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.447055 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4e29c9b-9e25-4304-a314-54943d88f953-operator-scripts\") pod \"nova-api-db-create-wkths\" (UID: \"d4e29c9b-9e25-4304-a314-54943d88f953\") " pod="openstack/nova-api-db-create-wkths"
Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.450288 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4e29c9b-9e25-4304-a314-54943d88f953-operator-scripts\") pod \"nova-api-db-create-wkths\" (UID: \"d4e29c9b-9e25-4304-a314-54943d88f953\") " pod="openstack/nova-api-db-create-wkths"
Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.474853 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-xbl8m"]
Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.487094 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vr6q\" (UniqueName: \"kubernetes.io/projected/d4e29c9b-9e25-4304-a314-54943d88f953-kube-api-access-6vr6q\") pod \"nova-api-db-create-wkths\" (UID: \"d4e29c9b-9e25-4304-a314-54943d88f953\") " pod="openstack/nova-api-db-create-wkths"
Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.509005 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-wkths"
Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.529281 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-d35d-account-create-update-8j294"]
Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.531938 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-d35d-account-create-update-8j294"
Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.537806 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.550755 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-d35d-account-create-update-8j294"]
Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.551240 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bbkl\" (UniqueName: \"kubernetes.io/projected/a209c4a0-a388-463f-a49c-cd24fc3b3ca8-kube-api-access-7bbkl\") pod \"nova-cell1-db-create-xbl8m\" (UID: \"a209c4a0-a388-463f-a49c-cd24fc3b3ca8\") " pod="openstack/nova-cell1-db-create-xbl8m"
Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.551350 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38655c7a-a82f-485f-ae38-75634417e780-operator-scripts\") pod \"nova-api-d713-account-create-update-5fhps\" (UID: \"38655c7a-a82f-485f-ae38-75634417e780\") " pod="openstack/nova-api-d713-account-create-update-5fhps"
Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.551483 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5fbb\" (UniqueName: \"kubernetes.io/projected/38655c7a-a82f-485f-ae38-75634417e780-kube-api-access-p5fbb\") pod \"nova-api-d713-account-create-update-5fhps\" (UID: \"38655c7a-a82f-485f-ae38-75634417e780\") " pod="openstack/nova-api-d713-account-create-update-5fhps"
Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.551669 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a209c4a0-a388-463f-a49c-cd24fc3b3ca8-operator-scripts\") pod \"nova-cell1-db-create-xbl8m\" (UID: \"a209c4a0-a388-463f-a49c-cd24fc3b3ca8\") " pod="openstack/nova-cell1-db-create-xbl8m"
Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.551705 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bv7m4\" (UniqueName: \"kubernetes.io/projected/8b8aece4-9268-47a9-a9cb-b312c604a23a-kube-api-access-bv7m4\") pod \"nova-cell0-db-create-6n69v\" (UID: \"8b8aece4-9268-47a9-a9cb-b312c604a23a\") " pod="openstack/nova-cell0-db-create-6n69v"
Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.551778 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b8aece4-9268-47a9-a9cb-b312c604a23a-operator-scripts\") pod \"nova-cell0-db-create-6n69v\" (UID: \"8b8aece4-9268-47a9-a9cb-b312c604a23a\") " pod="openstack/nova-cell0-db-create-6n69v"
Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.558073 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38655c7a-a82f-485f-ae38-75634417e780-operator-scripts\") pod \"nova-api-d713-account-create-update-5fhps\" (UID: \"38655c7a-a82f-485f-ae38-75634417e780\") " pod="openstack/nova-api-d713-account-create-update-5fhps"
Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.558699 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b8aece4-9268-47a9-a9cb-b312c604a23a-operator-scripts\") pod \"nova-cell0-db-create-6n69v\" (UID: \"8b8aece4-9268-47a9-a9cb-b312c604a23a\") " pod="openstack/nova-cell0-db-create-6n69v"
Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.583073 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bv7m4\" (UniqueName: \"kubernetes.io/projected/8b8aece4-9268-47a9-a9cb-b312c604a23a-kube-api-access-bv7m4\") pod \"nova-cell0-db-create-6n69v\" (UID: \"8b8aece4-9268-47a9-a9cb-b312c604a23a\") " pod="openstack/nova-cell0-db-create-6n69v"
Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.585019 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5fbb\" (UniqueName: \"kubernetes.io/projected/38655c7a-a82f-485f-ae38-75634417e780-kube-api-access-p5fbb\") pod \"nova-api-d713-account-create-update-5fhps\" (UID: \"38655c7a-a82f-485f-ae38-75634417e780\") " pod="openstack/nova-api-d713-account-create-update-5fhps"
Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.595953 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-6n69v"
Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.622002 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-d713-account-create-update-5fhps"
Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.657165 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bef3c13-6018-44b9-b9ac-7620eee2ddb0-operator-scripts\") pod \"nova-cell0-d35d-account-create-update-8j294\" (UID: \"3bef3c13-6018-44b9-b9ac-7620eee2ddb0\") " pod="openstack/nova-cell0-d35d-account-create-update-8j294"
Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.657208 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bbkl\" (UniqueName: \"kubernetes.io/projected/a209c4a0-a388-463f-a49c-cd24fc3b3ca8-kube-api-access-7bbkl\") pod \"nova-cell1-db-create-xbl8m\" (UID: \"a209c4a0-a388-463f-a49c-cd24fc3b3ca8\") " pod="openstack/nova-cell1-db-create-xbl8m"
Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.657326 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvpp9\" (UniqueName: \"kubernetes.io/projected/3bef3c13-6018-44b9-b9ac-7620eee2ddb0-kube-api-access-dvpp9\") pod \"nova-cell0-d35d-account-create-update-8j294\" (UID: \"3bef3c13-6018-44b9-b9ac-7620eee2ddb0\") " pod="openstack/nova-cell0-d35d-account-create-update-8j294"
Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.657381 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a209c4a0-a388-463f-a49c-cd24fc3b3ca8-operator-scripts\") pod \"nova-cell1-db-create-xbl8m\" (UID: \"a209c4a0-a388-463f-a49c-cd24fc3b3ca8\") " pod="openstack/nova-cell1-db-create-xbl8m"
Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.658259 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a209c4a0-a388-463f-a49c-cd24fc3b3ca8-operator-scripts\") pod \"nova-cell1-db-create-xbl8m\" (UID: \"a209c4a0-a388-463f-a49c-cd24fc3b3ca8\") " pod="openstack/nova-cell1-db-create-xbl8m"
Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.674652 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-0c3a-account-create-update-vmw8c"]
Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.676116 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-0c3a-account-create-update-vmw8c"
Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.680290 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.688106 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bbkl\" (UniqueName: \"kubernetes.io/projected/a209c4a0-a388-463f-a49c-cd24fc3b3ca8-kube-api-access-7bbkl\") pod \"nova-cell1-db-create-xbl8m\" (UID: \"a209c4a0-a388-463f-a49c-cd24fc3b3ca8\") " pod="openstack/nova-cell1-db-create-xbl8m"
Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.720435 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-0c3a-account-create-update-vmw8c"]
Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.759132 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bef3c13-6018-44b9-b9ac-7620eee2ddb0-operator-scripts\") pod \"nova-cell0-d35d-account-create-update-8j294\" (UID: \"3bef3c13-6018-44b9-b9ac-7620eee2ddb0\") " pod="openstack/nova-cell0-d35d-account-create-update-8j294"
Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.759195 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szmp7\" (UniqueName: \"kubernetes.io/projected/52295fb5-c0ec-4910-ad42-413e574375bd-kube-api-access-szmp7\") pod \"nova-cell1-0c3a-account-create-update-vmw8c\" (UID: \"52295fb5-c0ec-4910-ad42-413e574375bd\") " pod="openstack/nova-cell1-0c3a-account-create-update-vmw8c"
Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.759234 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52295fb5-c0ec-4910-ad42-413e574375bd-operator-scripts\") pod \"nova-cell1-0c3a-account-create-update-vmw8c\" (UID: \"52295fb5-c0ec-4910-ad42-413e574375bd\") " pod="openstack/nova-cell1-0c3a-account-create-update-vmw8c"
Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.759312 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvpp9\" (UniqueName: \"kubernetes.io/projected/3bef3c13-6018-44b9-b9ac-7620eee2ddb0-kube-api-access-dvpp9\") pod \"nova-cell0-d35d-account-create-update-8j294\" (UID: \"3bef3c13-6018-44b9-b9ac-7620eee2ddb0\") " pod="openstack/nova-cell0-d35d-account-create-update-8j294"
Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.760377 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bef3c13-6018-44b9-b9ac-7620eee2ddb0-operator-scripts\") pod \"nova-cell0-d35d-account-create-update-8j294\" (UID: \"3bef3c13-6018-44b9-b9ac-7620eee2ddb0\") " pod="openstack/nova-cell0-d35d-account-create-update-8j294"
Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.795880 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvpp9\" (UniqueName: \"kubernetes.io/projected/3bef3c13-6018-44b9-b9ac-7620eee2ddb0-kube-api-access-dvpp9\") pod \"nova-cell0-d35d-account-create-update-8j294\" (UID: \"3bef3c13-6018-44b9-b9ac-7620eee2ddb0\") "
pod="openstack/nova-cell0-d35d-account-create-update-8j294" Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.840669 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-xbl8m" Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.861204 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szmp7\" (UniqueName: \"kubernetes.io/projected/52295fb5-c0ec-4910-ad42-413e574375bd-kube-api-access-szmp7\") pod \"nova-cell1-0c3a-account-create-update-vmw8c\" (UID: \"52295fb5-c0ec-4910-ad42-413e574375bd\") " pod="openstack/nova-cell1-0c3a-account-create-update-vmw8c" Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.861256 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52295fb5-c0ec-4910-ad42-413e574375bd-operator-scripts\") pod \"nova-cell1-0c3a-account-create-update-vmw8c\" (UID: \"52295fb5-c0ec-4910-ad42-413e574375bd\") " pod="openstack/nova-cell1-0c3a-account-create-update-vmw8c" Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.862833 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52295fb5-c0ec-4910-ad42-413e574375bd-operator-scripts\") pod \"nova-cell1-0c3a-account-create-update-vmw8c\" (UID: \"52295fb5-c0ec-4910-ad42-413e574375bd\") " pod="openstack/nova-cell1-0c3a-account-create-update-vmw8c" Nov 28 07:17:10 crc kubenswrapper[4946]: I1128 07:17:10.886795 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szmp7\" (UniqueName: \"kubernetes.io/projected/52295fb5-c0ec-4910-ad42-413e574375bd-kube-api-access-szmp7\") pod \"nova-cell1-0c3a-account-create-update-vmw8c\" (UID: \"52295fb5-c0ec-4910-ad42-413e574375bd\") " pod="openstack/nova-cell1-0c3a-account-create-update-vmw8c" Nov 28 07:17:11 crc kubenswrapper[4946]: I1128 07:17:11.049387 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-d35d-account-create-update-8j294" Nov 28 07:17:11 crc kubenswrapper[4946]: I1128 07:17:11.139751 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-0c3a-account-create-update-vmw8c" Nov 28 07:17:11 crc kubenswrapper[4946]: I1128 07:17:11.231492 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-d713-account-create-update-5fhps"] Nov 28 07:17:11 crc kubenswrapper[4946]: I1128 07:17:11.239052 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-wkths"] Nov 28 07:17:11 crc kubenswrapper[4946]: I1128 07:17:11.290636 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-6n69v"] Nov 28 07:17:11 crc kubenswrapper[4946]: I1128 07:17:11.349776 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d713-account-create-update-5fhps" event={"ID":"38655c7a-a82f-485f-ae38-75634417e780","Type":"ContainerStarted","Data":"c477d8a6607a4e743e006527160692b23ec8bc71a5a0eb84fabe7a091846c129"} Nov 28 07:17:11 crc kubenswrapper[4946]: I1128 07:17:11.355029 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-wkths" event={"ID":"d4e29c9b-9e25-4304-a314-54943d88f953","Type":"ContainerStarted","Data":"8bb1cc327c5ad55f87f2274e84729020a462192c2152cc990975388432eec8ab"} Nov 28 07:17:11 crc kubenswrapper[4946]: I1128 07:17:11.503524 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-xbl8m"] Nov 28 07:17:11 crc kubenswrapper[4946]: I1128 07:17:11.626566 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-d35d-account-create-update-8j294"] Nov 28 07:17:11 crc kubenswrapper[4946]: I1128 07:17:11.715439 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-0c3a-account-create-update-vmw8c"] Nov 28 07:17:12 crc kubenswrapper[4946]: I1128 07:17:12.002803 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 28 07:17:12 crc kubenswrapper[4946]: I1128 07:17:12.002953 4946 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 07:17:12 crc kubenswrapper[4946]: I1128 07:17:12.240826 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 28 07:17:12 crc kubenswrapper[4946]: I1128 07:17:12.367047 4946 generic.go:334] "Generic (PLEG): container finished" podID="38655c7a-a82f-485f-ae38-75634417e780" containerID="9df069bb1b0f7f202b6a0360d34734cc278c73b6fc9f9ccab947c2326a62097b" exitCode=0 Nov 28 07:17:12 crc kubenswrapper[4946]: I1128 07:17:12.367112 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d713-account-create-update-5fhps" event={"ID":"38655c7a-a82f-485f-ae38-75634417e780","Type":"ContainerDied","Data":"9df069bb1b0f7f202b6a0360d34734cc278c73b6fc9f9ccab947c2326a62097b"} Nov 28 07:17:12 crc kubenswrapper[4946]: I1128 07:17:12.368680 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-d35d-account-create-update-8j294" event={"ID":"3bef3c13-6018-44b9-b9ac-7620eee2ddb0","Type":"ContainerStarted","Data":"c6e6d37ec18105c33db4cdfae75f73255b47ec55fc3ee1889fa5c3ca22ae96de"} Nov 28 07:17:12 crc kubenswrapper[4946]: I1128 07:17:12.368704 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-d35d-account-create-update-8j294" event={"ID":"3bef3c13-6018-44b9-b9ac-7620eee2ddb0","Type":"ContainerStarted","Data":"d19c47529b697a999b9bd549fea18920fef6106f736418ca4b994e0a4b0868ff"} Nov 28 07:17:12 crc kubenswrapper[4946]: I1128 07:17:12.374454 
4946 generic.go:334] "Generic (PLEG): container finished" podID="d4e29c9b-9e25-4304-a314-54943d88f953" containerID="33cf1521825b65d2a855bb33ed486535726448c3aa1cde0589bb5e6245126d61" exitCode=0 Nov 28 07:17:12 crc kubenswrapper[4946]: I1128 07:17:12.374650 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-wkths" event={"ID":"d4e29c9b-9e25-4304-a314-54943d88f953","Type":"ContainerDied","Data":"33cf1521825b65d2a855bb33ed486535726448c3aa1cde0589bb5e6245126d61"} Nov 28 07:17:12 crc kubenswrapper[4946]: I1128 07:17:12.382032 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-0c3a-account-create-update-vmw8c" event={"ID":"52295fb5-c0ec-4910-ad42-413e574375bd","Type":"ContainerStarted","Data":"703b3ac1dfd5613d3a24ce5b3cd50e11a43939a141bc6b9d66c051b90300a96b"} Nov 28 07:17:12 crc kubenswrapper[4946]: I1128 07:17:12.382085 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-0c3a-account-create-update-vmw8c" event={"ID":"52295fb5-c0ec-4910-ad42-413e574375bd","Type":"ContainerStarted","Data":"baef953169d993adb51aa0192f4f3dbfef6e915b7cde861bae9e8814927ae352"} Nov 28 07:17:12 crc kubenswrapper[4946]: I1128 07:17:12.383488 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-xbl8m" event={"ID":"a209c4a0-a388-463f-a49c-cd24fc3b3ca8","Type":"ContainerStarted","Data":"84092b1962d9db4523e0c74821365501324a13449cfcf66d36f378c07b1fecea"} Nov 28 07:17:12 crc kubenswrapper[4946]: I1128 07:17:12.383518 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-xbl8m" event={"ID":"a209c4a0-a388-463f-a49c-cd24fc3b3ca8","Type":"ContainerStarted","Data":"ebde893d841ae37479bb866ebf7bcc54ad91477ffe4f758f0aae0be25c0eee64"} Nov 28 07:17:12 crc kubenswrapper[4946]: I1128 07:17:12.384668 4946 generic.go:334] "Generic (PLEG): container finished" podID="8b8aece4-9268-47a9-a9cb-b312c604a23a" containerID="34c03cab919d7afdbbafc7c5ec8251dff41d8908c91a4de2fb931f02e29401a2" exitCode=0 Nov 28 07:17:12 crc kubenswrapper[4946]: I1128 07:17:12.385641 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6n69v" event={"ID":"8b8aece4-9268-47a9-a9cb-b312c604a23a","Type":"ContainerDied","Data":"34c03cab919d7afdbbafc7c5ec8251dff41d8908c91a4de2fb931f02e29401a2"} Nov 28 07:17:12 crc kubenswrapper[4946]: I1128 07:17:12.385666 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6n69v" event={"ID":"8b8aece4-9268-47a9-a9cb-b312c604a23a","Type":"ContainerStarted","Data":"bfa9271c0f5efe91e1efb9a5f585cd7af9265b57fea671356bc44023a4a64d9b"} Nov 28 07:17:12 crc kubenswrapper[4946]: I1128 07:17:12.907255 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 28 07:17:12 crc kubenswrapper[4946]: I1128 07:17:12.907400 4946 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 07:17:12 crc kubenswrapper[4946]: I1128 07:17:12.924031 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 28 07:17:12 crc kubenswrapper[4946]: I1128 07:17:12.932848 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-d35d-account-create-update-8j294" podStartSLOduration=2.932830159 podStartE2EDuration="2.932830159s" podCreationTimestamp="2025-11-28 07:17:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:17:12.47696499 +0000 UTC m=+1486.855030101" watchObservedRunningTime="2025-11-28 07:17:12.932830159 +0000 UTC m=+1487.310895270" Nov 28 07:17:13 crc kubenswrapper[4946]: I1128 07:17:13.394800 4946 generic.go:334] "Generic (PLEG): container finished" podID="52295fb5-c0ec-4910-ad42-413e574375bd" containerID="703b3ac1dfd5613d3a24ce5b3cd50e11a43939a141bc6b9d66c051b90300a96b" exitCode=0 Nov 28 07:17:13 crc kubenswrapper[4946]: I1128 07:17:13.394972 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-0c3a-account-create-update-vmw8c" event={"ID":"52295fb5-c0ec-4910-ad42-413e574375bd","Type":"ContainerDied","Data":"703b3ac1dfd5613d3a24ce5b3cd50e11a43939a141bc6b9d66c051b90300a96b"} Nov 28 07:17:13 crc kubenswrapper[4946]: I1128 07:17:13.396430 4946 generic.go:334] "Generic (PLEG): container finished" podID="a209c4a0-a388-463f-a49c-cd24fc3b3ca8" containerID="84092b1962d9db4523e0c74821365501324a13449cfcf66d36f378c07b1fecea" exitCode=0 Nov 28 07:17:13 crc kubenswrapper[4946]: I1128 07:17:13.396498 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-xbl8m" event={"ID":"a209c4a0-a388-463f-a49c-cd24fc3b3ca8","Type":"ContainerDied","Data":"84092b1962d9db4523e0c74821365501324a13449cfcf66d36f378c07b1fecea"} Nov 28 07:17:13 crc kubenswrapper[4946]: I1128 07:17:13.397937 4946 generic.go:334] "Generic (PLEG): container finished" podID="3bef3c13-6018-44b9-b9ac-7620eee2ddb0" containerID="c6e6d37ec18105c33db4cdfae75f73255b47ec55fc3ee1889fa5c3ca22ae96de" exitCode=0 Nov 28 07:17:13 crc kubenswrapper[4946]: I1128 07:17:13.397978 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-d35d-account-create-update-8j294" event={"ID":"3bef3c13-6018-44b9-b9ac-7620eee2ddb0","Type":"ContainerDied","Data":"c6e6d37ec18105c33db4cdfae75f73255b47ec55fc3ee1889fa5c3ca22ae96de"} Nov 28 07:17:13 crc kubenswrapper[4946]: I1128 07:17:13.832800 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-wkths" Nov 28 07:17:13 crc kubenswrapper[4946]: I1128 07:17:13.962431 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vr6q\" (UniqueName: \"kubernetes.io/projected/d4e29c9b-9e25-4304-a314-54943d88f953-kube-api-access-6vr6q\") pod \"d4e29c9b-9e25-4304-a314-54943d88f953\" (UID: \"d4e29c9b-9e25-4304-a314-54943d88f953\") " Nov 28 07:17:13 crc kubenswrapper[4946]: I1128 07:17:13.962503 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4e29c9b-9e25-4304-a314-54943d88f953-operator-scripts\") pod \"d4e29c9b-9e25-4304-a314-54943d88f953\" (UID: \"d4e29c9b-9e25-4304-a314-54943d88f953\") " Nov 28 07:17:13 crc kubenswrapper[4946]: I1128 07:17:13.964040 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4e29c9b-9e25-4304-a314-54943d88f953-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d4e29c9b-9e25-4304-a314-54943d88f953" (UID: "d4e29c9b-9e25-4304-a314-54943d88f953"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:17:13 crc kubenswrapper[4946]: I1128 07:17:13.978840 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4e29c9b-9e25-4304-a314-54943d88f953-kube-api-access-6vr6q" (OuterVolumeSpecName: "kube-api-access-6vr6q") pod "d4e29c9b-9e25-4304-a314-54943d88f953" (UID: "d4e29c9b-9e25-4304-a314-54943d88f953"). InnerVolumeSpecName "kube-api-access-6vr6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:17:14 crc kubenswrapper[4946]: I1128 07:17:14.065982 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vr6q\" (UniqueName: \"kubernetes.io/projected/d4e29c9b-9e25-4304-a314-54943d88f953-kube-api-access-6vr6q\") on node \"crc\" DevicePath \"\"" Nov 28 07:17:14 crc kubenswrapper[4946]: I1128 07:17:14.066023 4946 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4e29c9b-9e25-4304-a314-54943d88f953-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:17:14 crc kubenswrapper[4946]: I1128 07:17:14.094802 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-xbl8m" Nov 28 07:17:14 crc kubenswrapper[4946]: I1128 07:17:14.100486 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-0c3a-account-create-update-vmw8c" Nov 28 07:17:14 crc kubenswrapper[4946]: I1128 07:17:14.111342 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-d713-account-create-update-5fhps" Nov 28 07:17:14 crc kubenswrapper[4946]: I1128 07:17:14.119155 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-6n69v" Nov 28 07:17:14 crc kubenswrapper[4946]: I1128 07:17:14.269405 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38655c7a-a82f-485f-ae38-75634417e780-operator-scripts\") pod \"38655c7a-a82f-485f-ae38-75634417e780\" (UID: \"38655c7a-a82f-485f-ae38-75634417e780\") " Nov 28 07:17:14 crc kubenswrapper[4946]: I1128 07:17:14.269470 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bv7m4\" (UniqueName: \"kubernetes.io/projected/8b8aece4-9268-47a9-a9cb-b312c604a23a-kube-api-access-bv7m4\") pod \"8b8aece4-9268-47a9-a9cb-b312c604a23a\" (UID: \"8b8aece4-9268-47a9-a9cb-b312c604a23a\") " Nov 28 07:17:14 crc kubenswrapper[4946]: I1128 07:17:14.269561 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5fbb\" (UniqueName: \"kubernetes.io/projected/38655c7a-a82f-485f-ae38-75634417e780-kube-api-access-p5fbb\") pod \"38655c7a-a82f-485f-ae38-75634417e780\" (UID: \"38655c7a-a82f-485f-ae38-75634417e780\") " Nov 28 07:17:14 crc kubenswrapper[4946]: I1128 07:17:14.269675 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bbkl\" (UniqueName: \"kubernetes.io/projected/a209c4a0-a388-463f-a49c-cd24fc3b3ca8-kube-api-access-7bbkl\") pod \"a209c4a0-a388-463f-a49c-cd24fc3b3ca8\" (UID: \"a209c4a0-a388-463f-a49c-cd24fc3b3ca8\") " Nov 28 07:17:14 crc kubenswrapper[4946]: I1128 07:17:14.269732 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a209c4a0-a388-463f-a49c-cd24fc3b3ca8-operator-scripts\") pod \"a209c4a0-a388-463f-a49c-cd24fc3b3ca8\" (UID: \"a209c4a0-a388-463f-a49c-cd24fc3b3ca8\") " Nov 28 07:17:14 crc kubenswrapper[4946]: I1128 07:17:14.269788 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b8aece4-9268-47a9-a9cb-b312c604a23a-operator-scripts\") pod \"8b8aece4-9268-47a9-a9cb-b312c604a23a\" (UID: \"8b8aece4-9268-47a9-a9cb-b312c604a23a\") " Nov 28 07:17:14 crc kubenswrapper[4946]: I1128 07:17:14.269869 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szmp7\" (UniqueName: \"kubernetes.io/projected/52295fb5-c0ec-4910-ad42-413e574375bd-kube-api-access-szmp7\") pod \"52295fb5-c0ec-4910-ad42-413e574375bd\" (UID: \"52295fb5-c0ec-4910-ad42-413e574375bd\") " Nov 28 07:17:14 crc kubenswrapper[4946]: I1128 07:17:14.270299 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38655c7a-a82f-485f-ae38-75634417e780-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "38655c7a-a82f-485f-ae38-75634417e780" (UID: "38655c7a-a82f-485f-ae38-75634417e780"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:17:14 crc kubenswrapper[4946]: I1128 07:17:14.270354 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52295fb5-c0ec-4910-ad42-413e574375bd-operator-scripts\") pod \"52295fb5-c0ec-4910-ad42-413e574375bd\" (UID: \"52295fb5-c0ec-4910-ad42-413e574375bd\") " Nov 28 07:17:14 crc kubenswrapper[4946]: I1128 07:17:14.270437 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b8aece4-9268-47a9-a9cb-b312c604a23a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8b8aece4-9268-47a9-a9cb-b312c604a23a" (UID: "8b8aece4-9268-47a9-a9cb-b312c604a23a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:17:14 crc kubenswrapper[4946]: I1128 07:17:14.270821 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52295fb5-c0ec-4910-ad42-413e574375bd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "52295fb5-c0ec-4910-ad42-413e574375bd" (UID: "52295fb5-c0ec-4910-ad42-413e574375bd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:17:14 crc kubenswrapper[4946]: I1128 07:17:14.270841 4946 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38655c7a-a82f-485f-ae38-75634417e780-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:17:14 crc kubenswrapper[4946]: I1128 07:17:14.270856 4946 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b8aece4-9268-47a9-a9cb-b312c604a23a-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:17:14 crc kubenswrapper[4946]: I1128 07:17:14.271056 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a209c4a0-a388-463f-a49c-cd24fc3b3ca8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a209c4a0-a388-463f-a49c-cd24fc3b3ca8" (UID: "a209c4a0-a388-463f-a49c-cd24fc3b3ca8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:17:14 crc kubenswrapper[4946]: I1128 07:17:14.273899 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38655c7a-a82f-485f-ae38-75634417e780-kube-api-access-p5fbb" (OuterVolumeSpecName: "kube-api-access-p5fbb") pod "38655c7a-a82f-485f-ae38-75634417e780" (UID: "38655c7a-a82f-485f-ae38-75634417e780"). InnerVolumeSpecName "kube-api-access-p5fbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:17:14 crc kubenswrapper[4946]: I1128 07:17:14.273941 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b8aece4-9268-47a9-a9cb-b312c604a23a-kube-api-access-bv7m4" (OuterVolumeSpecName: "kube-api-access-bv7m4") pod "8b8aece4-9268-47a9-a9cb-b312c604a23a" (UID: "8b8aece4-9268-47a9-a9cb-b312c604a23a"). InnerVolumeSpecName "kube-api-access-bv7m4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:17:14 crc kubenswrapper[4946]: I1128 07:17:14.273958 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a209c4a0-a388-463f-a49c-cd24fc3b3ca8-kube-api-access-7bbkl" (OuterVolumeSpecName: "kube-api-access-7bbkl") pod "a209c4a0-a388-463f-a49c-cd24fc3b3ca8" (UID: "a209c4a0-a388-463f-a49c-cd24fc3b3ca8"). InnerVolumeSpecName "kube-api-access-7bbkl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:17:14 crc kubenswrapper[4946]: I1128 07:17:14.276417 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52295fb5-c0ec-4910-ad42-413e574375bd-kube-api-access-szmp7" (OuterVolumeSpecName: "kube-api-access-szmp7") pod "52295fb5-c0ec-4910-ad42-413e574375bd" (UID: "52295fb5-c0ec-4910-ad42-413e574375bd"). InnerVolumeSpecName "kube-api-access-szmp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:17:14 crc kubenswrapper[4946]: I1128 07:17:14.374371 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bbkl\" (UniqueName: \"kubernetes.io/projected/a209c4a0-a388-463f-a49c-cd24fc3b3ca8-kube-api-access-7bbkl\") on node \"crc\" DevicePath \"\"" Nov 28 07:17:14 crc kubenswrapper[4946]: I1128 07:17:14.374429 4946 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a209c4a0-a388-463f-a49c-cd24fc3b3ca8-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:17:14 crc kubenswrapper[4946]: I1128 07:17:14.374449 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szmp7\" (UniqueName: \"kubernetes.io/projected/52295fb5-c0ec-4910-ad42-413e574375bd-kube-api-access-szmp7\") on node \"crc\" DevicePath \"\"" Nov 28 07:17:14 crc kubenswrapper[4946]: I1128 07:17:14.374489 4946 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52295fb5-c0ec-4910-ad42-413e574375bd-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:17:14 crc kubenswrapper[4946]: I1128 07:17:14.374509 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bv7m4\" (UniqueName: \"kubernetes.io/projected/8b8aece4-9268-47a9-a9cb-b312c604a23a-kube-api-access-bv7m4\") on node \"crc\" DevicePath \"\"" Nov 28 07:17:14 crc kubenswrapper[4946]: I1128 07:17:14.374526 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5fbb\" (UniqueName: \"kubernetes.io/projected/38655c7a-a82f-485f-ae38-75634417e780-kube-api-access-p5fbb\") on node \"crc\" DevicePath \"\"" Nov 28 07:17:14 crc kubenswrapper[4946]: I1128 07:17:14.413322 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-0c3a-account-create-update-vmw8c" event={"ID":"52295fb5-c0ec-4910-ad42-413e574375bd","Type":"ContainerDied","Data":"baef953169d993adb51aa0192f4f3dbfef6e915b7cde861bae9e8814927ae352"} Nov 28 07:17:14 crc kubenswrapper[4946]: I1128 07:17:14.413416 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="baef953169d993adb51aa0192f4f3dbfef6e915b7cde861bae9e8814927ae352" Nov 28 07:17:14 crc kubenswrapper[4946]: I1128 07:17:14.413551 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-0c3a-account-create-update-vmw8c" Nov 28 07:17:14 crc kubenswrapper[4946]: I1128 07:17:14.422660 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-xbl8m" Nov 28 07:17:14 crc kubenswrapper[4946]: I1128 07:17:14.438106 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-xbl8m" event={"ID":"a209c4a0-a388-463f-a49c-cd24fc3b3ca8","Type":"ContainerDied","Data":"ebde893d841ae37479bb866ebf7bcc54ad91477ffe4f758f0aae0be25c0eee64"} Nov 28 07:17:14 crc kubenswrapper[4946]: I1128 07:17:14.438180 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ebde893d841ae37479bb866ebf7bcc54ad91477ffe4f758f0aae0be25c0eee64" Nov 28 07:17:14 crc kubenswrapper[4946]: I1128 07:17:14.443927 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-6n69v" Nov 28 07:17:14 crc kubenswrapper[4946]: I1128 07:17:14.446482 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6n69v" event={"ID":"8b8aece4-9268-47a9-a9cb-b312c604a23a","Type":"ContainerDied","Data":"bfa9271c0f5efe91e1efb9a5f585cd7af9265b57fea671356bc44023a4a64d9b"} Nov 28 07:17:14 crc kubenswrapper[4946]: I1128 07:17:14.446535 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfa9271c0f5efe91e1efb9a5f585cd7af9265b57fea671356bc44023a4a64d9b" Nov 28 07:17:14 crc kubenswrapper[4946]: I1128 07:17:14.451365 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-d713-account-create-update-5fhps" Nov 28 07:17:14 crc kubenswrapper[4946]: I1128 07:17:14.451475 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d713-account-create-update-5fhps" event={"ID":"38655c7a-a82f-485f-ae38-75634417e780","Type":"ContainerDied","Data":"c477d8a6607a4e743e006527160692b23ec8bc71a5a0eb84fabe7a091846c129"} Nov 28 07:17:14 crc kubenswrapper[4946]: I1128 07:17:14.451528 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c477d8a6607a4e743e006527160692b23ec8bc71a5a0eb84fabe7a091846c129" Nov 28 07:17:14 crc kubenswrapper[4946]: I1128 07:17:14.462024 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-wkths" Nov 28 07:17:14 crc kubenswrapper[4946]: I1128 07:17:14.462090 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-wkths" event={"ID":"d4e29c9b-9e25-4304-a314-54943d88f953","Type":"ContainerDied","Data":"8bb1cc327c5ad55f87f2274e84729020a462192c2152cc990975388432eec8ab"} Nov 28 07:17:14 crc kubenswrapper[4946]: I1128 07:17:14.462137 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bb1cc327c5ad55f87f2274e84729020a462192c2152cc990975388432eec8ab" Nov 28 07:17:14 crc kubenswrapper[4946]: I1128 07:17:14.712196 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-d35d-account-create-update-8j294" Nov 28 07:17:14 crc kubenswrapper[4946]: I1128 07:17:14.897606 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bef3c13-6018-44b9-b9ac-7620eee2ddb0-operator-scripts\") pod \"3bef3c13-6018-44b9-b9ac-7620eee2ddb0\" (UID: \"3bef3c13-6018-44b9-b9ac-7620eee2ddb0\") " Nov 28 07:17:14 crc kubenswrapper[4946]: I1128 07:17:14.897913 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvpp9\" (UniqueName: \"kubernetes.io/projected/3bef3c13-6018-44b9-b9ac-7620eee2ddb0-kube-api-access-dvpp9\") pod \"3bef3c13-6018-44b9-b9ac-7620eee2ddb0\" (UID: \"3bef3c13-6018-44b9-b9ac-7620eee2ddb0\") " Nov 28 07:17:14 crc kubenswrapper[4946]: I1128 07:17:14.898094 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bef3c13-6018-44b9-b9ac-7620eee2ddb0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3bef3c13-6018-44b9-b9ac-7620eee2ddb0" (UID: "3bef3c13-6018-44b9-b9ac-7620eee2ddb0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:17:14 crc kubenswrapper[4946]: I1128 07:17:14.898413 4946 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bef3c13-6018-44b9-b9ac-7620eee2ddb0-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:17:14 crc kubenswrapper[4946]: I1128 07:17:14.902684 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bef3c13-6018-44b9-b9ac-7620eee2ddb0-kube-api-access-dvpp9" (OuterVolumeSpecName: "kube-api-access-dvpp9") pod "3bef3c13-6018-44b9-b9ac-7620eee2ddb0" (UID: "3bef3c13-6018-44b9-b9ac-7620eee2ddb0"). InnerVolumeSpecName "kube-api-access-dvpp9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:17:15 crc kubenswrapper[4946]: I1128 07:17:15.000470 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvpp9\" (UniqueName: \"kubernetes.io/projected/3bef3c13-6018-44b9-b9ac-7620eee2ddb0-kube-api-access-dvpp9\") on node \"crc\" DevicePath \"\"" Nov 28 07:17:15 crc kubenswrapper[4946]: I1128 07:17:15.474423 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-d35d-account-create-update-8j294" event={"ID":"3bef3c13-6018-44b9-b9ac-7620eee2ddb0","Type":"ContainerDied","Data":"d19c47529b697a999b9bd549fea18920fef6106f736418ca4b994e0a4b0868ff"} Nov 28 07:17:15 crc kubenswrapper[4946]: I1128 07:17:15.474724 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d19c47529b697a999b9bd549fea18920fef6106f736418ca4b994e0a4b0868ff" Nov 28 07:17:15 crc kubenswrapper[4946]: I1128 07:17:15.474656 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-d35d-account-create-update-8j294" Nov 28 07:17:20 crc kubenswrapper[4946]: I1128 07:17:20.800283 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-5k9zq"] Nov 28 07:17:20 crc kubenswrapper[4946]: E1128 07:17:20.801223 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4e29c9b-9e25-4304-a314-54943d88f953" containerName="mariadb-database-create" Nov 28 07:17:20 crc kubenswrapper[4946]: I1128 07:17:20.801242 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4e29c9b-9e25-4304-a314-54943d88f953" containerName="mariadb-database-create" Nov 28 07:17:20 crc kubenswrapper[4946]: E1128 07:17:20.801276 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b8aece4-9268-47a9-a9cb-b312c604a23a" containerName="mariadb-database-create" Nov 28 07:17:20 crc kubenswrapper[4946]: I1128 07:17:20.801284 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b8aece4-9268-47a9-a9cb-b312c604a23a" containerName="mariadb-database-create" Nov 28 07:17:20 crc kubenswrapper[4946]: E1128 07:17:20.801299 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52295fb5-c0ec-4910-ad42-413e574375bd" containerName="mariadb-account-create-update" Nov 28 07:17:20 crc kubenswrapper[4946]: I1128 07:17:20.801308 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="52295fb5-c0ec-4910-ad42-413e574375bd" containerName="mariadb-account-create-update" Nov 28 07:17:20 crc kubenswrapper[4946]: E1128 07:17:20.801318 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a209c4a0-a388-463f-a49c-cd24fc3b3ca8" containerName="mariadb-database-create" Nov 28 07:17:20 crc kubenswrapper[4946]: I1128 07:17:20.801326 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="a209c4a0-a388-463f-a49c-cd24fc3b3ca8" containerName="mariadb-database-create" Nov 28 07:17:20 crc kubenswrapper[4946]: E1128 07:17:20.801339 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bef3c13-6018-44b9-b9ac-7620eee2ddb0" containerName="mariadb-account-create-update" Nov 28 07:17:20 crc kubenswrapper[4946]: I1128 07:17:20.801344 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bef3c13-6018-44b9-b9ac-7620eee2ddb0" containerName="mariadb-account-create-update" Nov 28 07:17:20 crc kubenswrapper[4946]: E1128 07:17:20.801371 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38655c7a-a82f-485f-ae38-75634417e780" containerName="mariadb-account-create-update" Nov 28 07:17:20 crc kubenswrapper[4946]: I1128 07:17:20.801377 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="38655c7a-a82f-485f-ae38-75634417e780" containerName="mariadb-account-create-update" Nov 28 07:17:20 crc kubenswrapper[4946]: I1128 07:17:20.801586 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b8aece4-9268-47a9-a9cb-b312c604a23a" containerName="mariadb-database-create" Nov 28 07:17:20 crc kubenswrapper[4946]: I1128 07:17:20.801599 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="52295fb5-c0ec-4910-ad42-413e574375bd" containerName="mariadb-account-create-update" Nov 28 07:17:20 crc kubenswrapper[4946]: I1128 07:17:20.801615 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4e29c9b-9e25-4304-a314-54943d88f953" containerName="mariadb-database-create" Nov 28 07:17:20 crc kubenswrapper[4946]: I1128 07:17:20.801637 4946 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="38655c7a-a82f-485f-ae38-75634417e780" containerName="mariadb-account-create-update" Nov 28 07:17:20 crc kubenswrapper[4946]: I1128 07:17:20.801650 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="a209c4a0-a388-463f-a49c-cd24fc3b3ca8" containerName="mariadb-database-create" Nov 28 07:17:20 crc kubenswrapper[4946]: I1128 07:17:20.801662 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bef3c13-6018-44b9-b9ac-7620eee2ddb0" containerName="mariadb-account-create-update" Nov 28 07:17:20 crc kubenswrapper[4946]: I1128 07:17:20.802369 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-5k9zq" Nov 28 07:17:20 crc kubenswrapper[4946]: I1128 07:17:20.818657 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Nov 28 07:17:20 crc kubenswrapper[4946]: I1128 07:17:20.819349 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-nr9c9" Nov 28 07:17:20 crc kubenswrapper[4946]: I1128 07:17:20.819396 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 28 07:17:20 crc kubenswrapper[4946]: I1128 07:17:20.829518 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-5k9zq"] Nov 28 07:17:20 crc kubenswrapper[4946]: I1128 07:17:20.938192 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cb45337-5b52-46b3-b7b3-bbe5123d34e5-scripts\") pod \"nova-cell0-conductor-db-sync-5k9zq\" (UID: \"2cb45337-5b52-46b3-b7b3-bbe5123d34e5\") " pod="openstack/nova-cell0-conductor-db-sync-5k9zq" Nov 28 07:17:20 crc kubenswrapper[4946]: I1128 07:17:20.938339 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cb45337-5b52-46b3-b7b3-bbe5123d34e5-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-5k9zq\" (UID: \"2cb45337-5b52-46b3-b7b3-bbe5123d34e5\") " pod="openstack/nova-cell0-conductor-db-sync-5k9zq" Nov 28 07:17:20 crc kubenswrapper[4946]: I1128 07:17:20.938385 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwp4q\" (UniqueName: \"kubernetes.io/projected/2cb45337-5b52-46b3-b7b3-bbe5123d34e5-kube-api-access-fwp4q\") pod \"nova-cell0-conductor-db-sync-5k9zq\" (UID: \"2cb45337-5b52-46b3-b7b3-bbe5123d34e5\") " pod="openstack/nova-cell0-conductor-db-sync-5k9zq" Nov 28 07:17:20 crc kubenswrapper[4946]: I1128 07:17:20.938564 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cb45337-5b52-46b3-b7b3-bbe5123d34e5-config-data\") pod \"nova-cell0-conductor-db-sync-5k9zq\" (UID: \"2cb45337-5b52-46b3-b7b3-bbe5123d34e5\") " pod="openstack/nova-cell0-conductor-db-sync-5k9zq" Nov 28 07:17:21 crc kubenswrapper[4946]: I1128 07:17:21.040543 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cb45337-5b52-46b3-b7b3-bbe5123d34e5-config-data\") pod \"nova-cell0-conductor-db-sync-5k9zq\" (UID: \"2cb45337-5b52-46b3-b7b3-bbe5123d34e5\") " pod="openstack/nova-cell0-conductor-db-sync-5k9zq" Nov 28 07:17:21 crc kubenswrapper[4946]: I1128 07:17:21.040611 4946 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cb45337-5b52-46b3-b7b3-bbe5123d34e5-scripts\") pod \"nova-cell0-conductor-db-sync-5k9zq\" (UID: \"2cb45337-5b52-46b3-b7b3-bbe5123d34e5\") " pod="openstack/nova-cell0-conductor-db-sync-5k9zq" Nov 28 07:17:21 crc kubenswrapper[4946]: I1128 07:17:21.040669 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cb45337-5b52-46b3-b7b3-bbe5123d34e5-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-5k9zq\" (UID: \"2cb45337-5b52-46b3-b7b3-bbe5123d34e5\") " pod="openstack/nova-cell0-conductor-db-sync-5k9zq" Nov 28 07:17:21 crc kubenswrapper[4946]: I1128 07:17:21.040694 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwp4q\" (UniqueName: \"kubernetes.io/projected/2cb45337-5b52-46b3-b7b3-bbe5123d34e5-kube-api-access-fwp4q\") pod \"nova-cell0-conductor-db-sync-5k9zq\" (UID: \"2cb45337-5b52-46b3-b7b3-bbe5123d34e5\") " pod="openstack/nova-cell0-conductor-db-sync-5k9zq" Nov 28 07:17:21 crc kubenswrapper[4946]: I1128 07:17:21.052613 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cb45337-5b52-46b3-b7b3-bbe5123d34e5-config-data\") pod \"nova-cell0-conductor-db-sync-5k9zq\" (UID: \"2cb45337-5b52-46b3-b7b3-bbe5123d34e5\") " pod="openstack/nova-cell0-conductor-db-sync-5k9zq" Nov 28 07:17:21 crc kubenswrapper[4946]: I1128 07:17:21.061851 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cb45337-5b52-46b3-b7b3-bbe5123d34e5-scripts\") pod \"nova-cell0-conductor-db-sync-5k9zq\" (UID: \"2cb45337-5b52-46b3-b7b3-bbe5123d34e5\") " pod="openstack/nova-cell0-conductor-db-sync-5k9zq" Nov 28 07:17:21 crc kubenswrapper[4946]: I1128 07:17:21.062433 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cb45337-5b52-46b3-b7b3-bbe5123d34e5-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-5k9zq\" (UID: \"2cb45337-5b52-46b3-b7b3-bbe5123d34e5\") " pod="openstack/nova-cell0-conductor-db-sync-5k9zq" Nov 28 07:17:21 crc kubenswrapper[4946]: I1128 07:17:21.071285 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwp4q\" (UniqueName: \"kubernetes.io/projected/2cb45337-5b52-46b3-b7b3-bbe5123d34e5-kube-api-access-fwp4q\") pod \"nova-cell0-conductor-db-sync-5k9zq\" (UID: \"2cb45337-5b52-46b3-b7b3-bbe5123d34e5\") " pod="openstack/nova-cell0-conductor-db-sync-5k9zq" Nov 28 07:17:21 crc kubenswrapper[4946]: I1128 07:17:21.135123 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-5k9zq" Nov 28 07:17:21 crc kubenswrapper[4946]: I1128 07:17:21.594407 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-5k9zq"] Nov 28 07:17:21 crc kubenswrapper[4946]: W1128 07:17:21.595760 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2cb45337_5b52_46b3_b7b3_bbe5123d34e5.slice/crio-9943f23a83e53d697695f9443c5b0abef1b07410900caafbc18233a1e0aa4220 WatchSource:0}: Error finding container 9943f23a83e53d697695f9443c5b0abef1b07410900caafbc18233a1e0aa4220: Status 404 returned error can't find the container with id 9943f23a83e53d697695f9443c5b0abef1b07410900caafbc18233a1e0aa4220 Nov 28 07:17:22 crc kubenswrapper[4946]: I1128 07:17:22.554703 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-5k9zq" event={"ID":"2cb45337-5b52-46b3-b7b3-bbe5123d34e5","Type":"ContainerStarted","Data":"9943f23a83e53d697695f9443c5b0abef1b07410900caafbc18233a1e0aa4220"} Nov 28 07:17:25 crc kubenswrapper[4946]: I1128 07:17:25.759843 4946 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="e3fb7f76-db33-4ff6-b613-f60c7a64be38" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 28 07:17:28 crc kubenswrapper[4946]: I1128 07:17:28.121184 4946 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod6bf0f217-858e-49f9-8730-7376f77c6d4f"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod6bf0f217-858e-49f9-8730-7376f77c6d4f] : Timed out while waiting for systemd to remove kubepods-besteffort-pod6bf0f217_858e_49f9_8730_7376f77c6d4f.slice" Nov 28 07:17:30 crc kubenswrapper[4946]: I1128 07:17:30.646341 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-5k9zq" event={"ID":"2cb45337-5b52-46b3-b7b3-bbe5123d34e5","Type":"ContainerStarted","Data":"94714a0d2515a34c244c85744cb38ff42cd4784182d2bc23af41f47d94d54c00"} Nov 28 07:17:30 crc kubenswrapper[4946]: I1128 07:17:30.721530 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-5k9zq" podStartSLOduration=2.896515117 podStartE2EDuration="10.721502031s" podCreationTimestamp="2025-11-28 07:17:20 +0000 UTC" firstStartedPulling="2025-11-28 07:17:21.598734433 +0000 UTC m=+1495.976799544" lastFinishedPulling="2025-11-28 07:17:29.423721347 +0000 UTC m=+1503.801786458" observedRunningTime="2025-11-28 07:17:30.706740849 +0000 UTC m=+1505.084805960" watchObservedRunningTime="2025-11-28 07:17:30.721502031 +0000 UTC m=+1505.099567162" Nov 28 07:17:31 crc kubenswrapper[4946]: I1128 07:17:31.660213 4946 generic.go:334] "Generic (PLEG): container finished" podID="e3fb7f76-db33-4ff6-b613-f60c7a64be38" containerID="d67738d9866152125af3a1266102ca4656a4f5012f313fb1146ddac533f56549" exitCode=137 Nov 28 07:17:31 crc kubenswrapper[4946]: I1128 07:17:31.660256 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3fb7f76-db33-4ff6-b613-f60c7a64be38","Type":"ContainerDied","Data":"d67738d9866152125af3a1266102ca4656a4f5012f313fb1146ddac533f56549"} Nov 28 07:17:31 crc kubenswrapper[4946]: I1128 07:17:31.660502 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"e3fb7f76-db33-4ff6-b613-f60c7a64be38","Type":"ContainerDied","Data":"958a9823d8315e86934ab8a599203bf5dd22e9ed0bf342c82ff2c5f232ad5ff8"} Nov 28 07:17:31 crc kubenswrapper[4946]: I1128 07:17:31.660534 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="958a9823d8315e86934ab8a599203bf5dd22e9ed0bf342c82ff2c5f232ad5ff8" Nov 28 07:17:31 crc kubenswrapper[4946]: I1128 07:17:31.668184 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 07:17:31 crc kubenswrapper[4946]: I1128 07:17:31.836757 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e3fb7f76-db33-4ff6-b613-f60c7a64be38-sg-core-conf-yaml\") pod \"e3fb7f76-db33-4ff6-b613-f60c7a64be38\" (UID: \"e3fb7f76-db33-4ff6-b613-f60c7a64be38\") " Nov 28 07:17:31 crc kubenswrapper[4946]: I1128 07:17:31.836818 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3fb7f76-db33-4ff6-b613-f60c7a64be38-run-httpd\") pod \"e3fb7f76-db33-4ff6-b613-f60c7a64be38\" (UID: \"e3fb7f76-db33-4ff6-b613-f60c7a64be38\") " Nov 28 07:17:31 crc kubenswrapper[4946]: I1128 07:17:31.836886 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3fb7f76-db33-4ff6-b613-f60c7a64be38-scripts\") pod \"e3fb7f76-db33-4ff6-b613-f60c7a64be38\" (UID: \"e3fb7f76-db33-4ff6-b613-f60c7a64be38\") " Nov 28 07:17:31 crc kubenswrapper[4946]: I1128 07:17:31.836979 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3fb7f76-db33-4ff6-b613-f60c7a64be38-log-httpd\") pod \"e3fb7f76-db33-4ff6-b613-f60c7a64be38\" (UID: \"e3fb7f76-db33-4ff6-b613-f60c7a64be38\") " Nov 28 07:17:31 crc kubenswrapper[4946]: I1128 07:17:31.837022 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3fb7f76-db33-4ff6-b613-f60c7a64be38-combined-ca-bundle\") pod \"e3fb7f76-db33-4ff6-b613-f60c7a64be38\" (UID: \"e3fb7f76-db33-4ff6-b613-f60c7a64be38\") " Nov 28 07:17:31 crc kubenswrapper[4946]: I1128 07:17:31.837078 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3fb7f76-db33-4ff6-b613-f60c7a64be38-config-data\") pod \"e3fb7f76-db33-4ff6-b613-f60c7a64be38\" (UID: \"e3fb7f76-db33-4ff6-b613-f60c7a64be38\") " Nov 28 07:17:31 crc kubenswrapper[4946]: I1128 07:17:31.837123 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llkl7\" (UniqueName: \"kubernetes.io/projected/e3fb7f76-db33-4ff6-b613-f60c7a64be38-kube-api-access-llkl7\") pod \"e3fb7f76-db33-4ff6-b613-f60c7a64be38\" (UID: \"e3fb7f76-db33-4ff6-b613-f60c7a64be38\") " Nov 28 07:17:31 crc kubenswrapper[4946]: I1128 07:17:31.838126 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3fb7f76-db33-4ff6-b613-f60c7a64be38-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e3fb7f76-db33-4ff6-b613-f60c7a64be38" (UID: "e3fb7f76-db33-4ff6-b613-f60c7a64be38"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:17:31 crc kubenswrapper[4946]: I1128 07:17:31.840151 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3fb7f76-db33-4ff6-b613-f60c7a64be38-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e3fb7f76-db33-4ff6-b613-f60c7a64be38" (UID: "e3fb7f76-db33-4ff6-b613-f60c7a64be38"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:17:31 crc kubenswrapper[4946]: I1128 07:17:31.855810 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3fb7f76-db33-4ff6-b613-f60c7a64be38-scripts" (OuterVolumeSpecName: "scripts") pod "e3fb7f76-db33-4ff6-b613-f60c7a64be38" (UID: "e3fb7f76-db33-4ff6-b613-f60c7a64be38"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:17:31 crc kubenswrapper[4946]: I1128 07:17:31.859228 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3fb7f76-db33-4ff6-b613-f60c7a64be38-kube-api-access-llkl7" (OuterVolumeSpecName: "kube-api-access-llkl7") pod "e3fb7f76-db33-4ff6-b613-f60c7a64be38" (UID: "e3fb7f76-db33-4ff6-b613-f60c7a64be38"). InnerVolumeSpecName "kube-api-access-llkl7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:17:31 crc kubenswrapper[4946]: I1128 07:17:31.880233 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3fb7f76-db33-4ff6-b613-f60c7a64be38-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e3fb7f76-db33-4ff6-b613-f60c7a64be38" (UID: "e3fb7f76-db33-4ff6-b613-f60c7a64be38"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:17:31 crc kubenswrapper[4946]: I1128 07:17:31.932626 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3fb7f76-db33-4ff6-b613-f60c7a64be38-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e3fb7f76-db33-4ff6-b613-f60c7a64be38" (UID: "e3fb7f76-db33-4ff6-b613-f60c7a64be38"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:17:31 crc kubenswrapper[4946]: I1128 07:17:31.940533 4946 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3fb7f76-db33-4ff6-b613-f60c7a64be38-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 07:17:31 crc kubenswrapper[4946]: I1128 07:17:31.940761 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3fb7f76-db33-4ff6-b613-f60c7a64be38-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:17:31 crc kubenswrapper[4946]: I1128 07:17:31.940824 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llkl7\" (UniqueName: \"kubernetes.io/projected/e3fb7f76-db33-4ff6-b613-f60c7a64be38-kube-api-access-llkl7\") on node \"crc\" DevicePath \"\"" Nov 28 07:17:31 crc kubenswrapper[4946]: I1128 07:17:31.940897 4946 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e3fb7f76-db33-4ff6-b613-f60c7a64be38-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 28 07:17:31 crc kubenswrapper[4946]: I1128 07:17:31.940950 4946 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3fb7f76-db33-4ff6-b613-f60c7a64be38-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 07:17:31 crc kubenswrapper[4946]: I1128 07:17:31.941011 4946 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3fb7f76-db33-4ff6-b613-f60c7a64be38-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:17:31 crc kubenswrapper[4946]: I1128 07:17:31.955968 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3fb7f76-db33-4ff6-b613-f60c7a64be38-config-data" (OuterVolumeSpecName: "config-data") pod "e3fb7f76-db33-4ff6-b613-f60c7a64be38" (UID: "e3fb7f76-db33-4ff6-b613-f60c7a64be38"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:17:32 crc kubenswrapper[4946]: I1128 07:17:32.053109 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3fb7f76-db33-4ff6-b613-f60c7a64be38-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:17:32 crc kubenswrapper[4946]: I1128 07:17:32.676588 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 07:17:32 crc kubenswrapper[4946]: I1128 07:17:32.719773 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:17:32 crc kubenswrapper[4946]: I1128 07:17:32.733827 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:17:32 crc kubenswrapper[4946]: I1128 07:17:32.757385 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:17:32 crc kubenswrapper[4946]: E1128 07:17:32.758168 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3fb7f76-db33-4ff6-b613-f60c7a64be38" containerName="ceilometer-notification-agent" Nov 28 07:17:32 crc kubenswrapper[4946]: I1128 07:17:32.758219 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3fb7f76-db33-4ff6-b613-f60c7a64be38" containerName="ceilometer-notification-agent" Nov 28 07:17:32 crc kubenswrapper[4946]: E1128 07:17:32.758239 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3fb7f76-db33-4ff6-b613-f60c7a64be38" containerName="proxy-httpd" Nov 28 07:17:32 crc kubenswrapper[4946]: I1128 07:17:32.758250 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3fb7f76-db33-4ff6-b613-f60c7a64be38" containerName="proxy-httpd" Nov 28 07:17:32 crc kubenswrapper[4946]: E1128 07:17:32.758269 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3fb7f76-db33-4ff6-b613-f60c7a64be38" containerName="sg-core" Nov 28 07:17:32 crc kubenswrapper[4946]: I1128 07:17:32.758278 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3fb7f76-db33-4ff6-b613-f60c7a64be38" containerName="sg-core" Nov 28 07:17:32 crc kubenswrapper[4946]: E1128 07:17:32.758301 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3fb7f76-db33-4ff6-b613-f60c7a64be38" containerName="ceilometer-central-agent" Nov 28 07:17:32 crc kubenswrapper[4946]: I1128 07:17:32.758310 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3fb7f76-db33-4ff6-b613-f60c7a64be38" containerName="ceilometer-central-agent" Nov 28 07:17:32 crc kubenswrapper[4946]: I1128 07:17:32.758610 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3fb7f76-db33-4ff6-b613-f60c7a64be38" containerName="ceilometer-central-agent" Nov 28 07:17:32 crc kubenswrapper[4946]: I1128 07:17:32.758627 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3fb7f76-db33-4ff6-b613-f60c7a64be38" containerName="ceilometer-notification-agent" Nov 28 07:17:32 crc kubenswrapper[4946]: I1128 07:17:32.758649 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3fb7f76-db33-4ff6-b613-f60c7a64be38" containerName="sg-core" Nov 28 07:17:32 crc kubenswrapper[4946]: I1128 07:17:32.758669 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3fb7f76-db33-4ff6-b613-f60c7a64be38" containerName="proxy-httpd" Nov 28 07:17:32 crc kubenswrapper[4946]: I1128 07:17:32.761143 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 07:17:32 crc kubenswrapper[4946]: I1128 07:17:32.766802 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 07:17:32 crc kubenswrapper[4946]: I1128 07:17:32.767267 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 07:17:32 crc kubenswrapper[4946]: I1128 07:17:32.785887 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:17:32 crc kubenswrapper[4946]: I1128 07:17:32.868796 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsd9n\" (UniqueName: \"kubernetes.io/projected/9f63766d-f8ca-45f4-a9b0-12a48917af68-kube-api-access-jsd9n\") pod \"ceilometer-0\" (UID: \"9f63766d-f8ca-45f4-a9b0-12a48917af68\") " pod="openstack/ceilometer-0" Nov 28 07:17:32 crc kubenswrapper[4946]: I1128 07:17:32.868887 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f63766d-f8ca-45f4-a9b0-12a48917af68-config-data\") pod \"ceilometer-0\" (UID: \"9f63766d-f8ca-45f4-a9b0-12a48917af68\") " pod="openstack/ceilometer-0" Nov 28 07:17:32 crc kubenswrapper[4946]: I1128 07:17:32.868949 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f63766d-f8ca-45f4-a9b0-12a48917af68-scripts\") pod \"ceilometer-0\" (UID: \"9f63766d-f8ca-45f4-a9b0-12a48917af68\") " pod="openstack/ceilometer-0" Nov 28 07:17:32 crc kubenswrapper[4946]: I1128 07:17:32.868981 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f63766d-f8ca-45f4-a9b0-12a48917af68-log-httpd\") pod \"ceilometer-0\" (UID: \"9f63766d-f8ca-45f4-a9b0-12a48917af68\") " pod="openstack/ceilometer-0" Nov 28 07:17:32 crc kubenswrapper[4946]: I1128 07:17:32.869011 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9f63766d-f8ca-45f4-a9b0-12a48917af68-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9f63766d-f8ca-45f4-a9b0-12a48917af68\") " pod="openstack/ceilometer-0" Nov 28 07:17:32 crc kubenswrapper[4946]: I1128 07:17:32.869049 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f63766d-f8ca-45f4-a9b0-12a48917af68-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9f63766d-f8ca-45f4-a9b0-12a48917af68\") " pod="openstack/ceilometer-0" Nov 28 07:17:32 crc kubenswrapper[4946]: I1128 07:17:32.869075 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f63766d-f8ca-45f4-a9b0-12a48917af68-run-httpd\") pod \"ceilometer-0\" (UID: \"9f63766d-f8ca-45f4-a9b0-12a48917af68\") " pod="openstack/ceilometer-0" Nov 28 07:17:32 crc kubenswrapper[4946]: I1128 07:17:32.970649 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f63766d-f8ca-45f4-a9b0-12a48917af68-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9f63766d-f8ca-45f4-a9b0-12a48917af68\") " pod="openstack/ceilometer-0" Nov 28 07:17:32 crc kubenswrapper[4946]: I1128 
07:17:32.970703 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f63766d-f8ca-45f4-a9b0-12a48917af68-run-httpd\") pod \"ceilometer-0\" (UID: \"9f63766d-f8ca-45f4-a9b0-12a48917af68\") " pod="openstack/ceilometer-0" Nov 28 07:17:32 crc kubenswrapper[4946]: I1128 07:17:32.970772 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsd9n\" (UniqueName: \"kubernetes.io/projected/9f63766d-f8ca-45f4-a9b0-12a48917af68-kube-api-access-jsd9n\") pod \"ceilometer-0\" (UID: \"9f63766d-f8ca-45f4-a9b0-12a48917af68\") " pod="openstack/ceilometer-0" Nov 28 07:17:32 crc kubenswrapper[4946]: I1128 07:17:32.970858 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f63766d-f8ca-45f4-a9b0-12a48917af68-config-data\") pod \"ceilometer-0\" (UID: \"9f63766d-f8ca-45f4-a9b0-12a48917af68\") " pod="openstack/ceilometer-0" Nov 28 07:17:32 crc kubenswrapper[4946]: I1128 07:17:32.971093 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f63766d-f8ca-45f4-a9b0-12a48917af68-scripts\") pod \"ceilometer-0\" (UID: \"9f63766d-f8ca-45f4-a9b0-12a48917af68\") " pod="openstack/ceilometer-0" Nov 28 07:17:32 crc kubenswrapper[4946]: I1128 07:17:32.971131 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f63766d-f8ca-45f4-a9b0-12a48917af68-log-httpd\") pod \"ceilometer-0\" (UID: \"9f63766d-f8ca-45f4-a9b0-12a48917af68\") " pod="openstack/ceilometer-0" Nov 28 07:17:32 crc kubenswrapper[4946]: I1128 07:17:32.971161 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9f63766d-f8ca-45f4-a9b0-12a48917af68-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9f63766d-f8ca-45f4-a9b0-12a48917af68\") " pod="openstack/ceilometer-0" Nov 28 07:17:32 crc kubenswrapper[4946]: I1128 07:17:32.971211 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f63766d-f8ca-45f4-a9b0-12a48917af68-run-httpd\") pod \"ceilometer-0\" (UID: \"9f63766d-f8ca-45f4-a9b0-12a48917af68\") " pod="openstack/ceilometer-0" Nov 28 07:17:32 crc kubenswrapper[4946]: I1128 07:17:32.972000 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f63766d-f8ca-45f4-a9b0-12a48917af68-log-httpd\") pod \"ceilometer-0\" (UID: \"9f63766d-f8ca-45f4-a9b0-12a48917af68\") " pod="openstack/ceilometer-0" Nov 28 07:17:32 crc kubenswrapper[4946]: I1128 07:17:32.985921 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9f63766d-f8ca-45f4-a9b0-12a48917af68-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9f63766d-f8ca-45f4-a9b0-12a48917af68\") " pod="openstack/ceilometer-0" Nov 28 07:17:32 crc kubenswrapper[4946]: I1128 07:17:32.986561 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f63766d-f8ca-45f4-a9b0-12a48917af68-config-data\") pod \"ceilometer-0\" (UID: \"9f63766d-f8ca-45f4-a9b0-12a48917af68\") " pod="openstack/ceilometer-0" Nov 28 07:17:32 crc kubenswrapper[4946]: I1128 07:17:32.986602 4946 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f63766d-f8ca-45f4-a9b0-12a48917af68-scripts\") pod \"ceilometer-0\" (UID: \"9f63766d-f8ca-45f4-a9b0-12a48917af68\") " pod="openstack/ceilometer-0" Nov 28 07:17:32 crc kubenswrapper[4946]: I1128 07:17:32.994303 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsd9n\" (UniqueName: \"kubernetes.io/projected/9f63766d-f8ca-45f4-a9b0-12a48917af68-kube-api-access-jsd9n\") pod \"ceilometer-0\" (UID: \"9f63766d-f8ca-45f4-a9b0-12a48917af68\") " pod="openstack/ceilometer-0" Nov 28 07:17:33 crc kubenswrapper[4946]: I1128 07:17:33.000659 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f63766d-f8ca-45f4-a9b0-12a48917af68-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9f63766d-f8ca-45f4-a9b0-12a48917af68\") " pod="openstack/ceilometer-0" Nov 28 07:17:33 crc kubenswrapper[4946]: I1128 07:17:33.091538 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 07:17:33 crc kubenswrapper[4946]: I1128 07:17:33.551979 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:17:33 crc kubenswrapper[4946]: I1128 07:17:33.688355 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f63766d-f8ca-45f4-a9b0-12a48917af68","Type":"ContainerStarted","Data":"3f5d9148f2efecd3e1062e2ef7ba17ea95c195e089ae596aac3d6e7dd0b6c3fd"} Nov 28 07:17:34 crc kubenswrapper[4946]: I1128 07:17:34.006258 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3fb7f76-db33-4ff6-b613-f60c7a64be38" path="/var/lib/kubelet/pods/e3fb7f76-db33-4ff6-b613-f60c7a64be38/volumes" Nov 28 07:17:34 crc kubenswrapper[4946]: I1128 07:17:34.702675 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f63766d-f8ca-45f4-a9b0-12a48917af68","Type":"ContainerStarted","Data":"6c7d264059d252430d87f0b0e94282ba9f85fc631b483117759d6db21b5d7867"} Nov 28 07:17:35 crc kubenswrapper[4946]: I1128 07:17:35.719894 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f63766d-f8ca-45f4-a9b0-12a48917af68","Type":"ContainerStarted","Data":"9d8270d9349b37ee7bca9009e6a1267b7660fdda5e573f31309b147bfbf1f0dd"} Nov 28 07:17:36 crc kubenswrapper[4946]: I1128 07:17:36.734805 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f63766d-f8ca-45f4-a9b0-12a48917af68","Type":"ContainerStarted","Data":"83cec6a77a4e88418903dfaf539eba1629b9e34b8cde6e47872b3a75d523845f"} Nov 28 07:17:37 crc kubenswrapper[4946]: I1128 07:17:37.747713 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f63766d-f8ca-45f4-a9b0-12a48917af68","Type":"ContainerStarted","Data":"d88b97a54777bafa1a703658388dacffc37ec6f4337066f1021a59a5567b8ad5"} Nov 28 07:17:37 crc kubenswrapper[4946]: I1128 07:17:37.748054 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 28 07:17:37 crc kubenswrapper[4946]: I1128 07:17:37.781080 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.9297972140000001 podStartE2EDuration="5.781053374s" podCreationTimestamp="2025-11-28 07:17:32 +0000 UTC" firstStartedPulling="2025-11-28 07:17:33.561337269 +0000 UTC m=+1507.939402380" 
lastFinishedPulling="2025-11-28 07:17:37.412593429 +0000 UTC m=+1511.790658540" observedRunningTime="2025-11-28 07:17:37.770539766 +0000 UTC m=+1512.148604907" watchObservedRunningTime="2025-11-28 07:17:37.781053374 +0000 UTC m=+1512.159118505" Nov 28 07:17:41 crc kubenswrapper[4946]: I1128 07:17:41.804172 4946 generic.go:334] "Generic (PLEG): container finished" podID="2cb45337-5b52-46b3-b7b3-bbe5123d34e5" containerID="94714a0d2515a34c244c85744cb38ff42cd4784182d2bc23af41f47d94d54c00" exitCode=0 Nov 28 07:17:41 crc kubenswrapper[4946]: I1128 07:17:41.804356 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-5k9zq" event={"ID":"2cb45337-5b52-46b3-b7b3-bbe5123d34e5","Type":"ContainerDied","Data":"94714a0d2515a34c244c85744cb38ff42cd4784182d2bc23af41f47d94d54c00"} Nov 28 07:17:43 crc kubenswrapper[4946]: I1128 07:17:43.234900 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-5k9zq" Nov 28 07:17:43 crc kubenswrapper[4946]: I1128 07:17:43.432428 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cb45337-5b52-46b3-b7b3-bbe5123d34e5-combined-ca-bundle\") pod \"2cb45337-5b52-46b3-b7b3-bbe5123d34e5\" (UID: \"2cb45337-5b52-46b3-b7b3-bbe5123d34e5\") " Nov 28 07:17:43 crc kubenswrapper[4946]: I1128 07:17:43.432498 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cb45337-5b52-46b3-b7b3-bbe5123d34e5-scripts\") pod \"2cb45337-5b52-46b3-b7b3-bbe5123d34e5\" (UID: \"2cb45337-5b52-46b3-b7b3-bbe5123d34e5\") " Nov 28 07:17:43 crc kubenswrapper[4946]: I1128 07:17:43.432647 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwp4q\" (UniqueName: \"kubernetes.io/projected/2cb45337-5b52-46b3-b7b3-bbe5123d34e5-kube-api-access-fwp4q\") pod \"2cb45337-5b52-46b3-b7b3-bbe5123d34e5\" (UID: \"2cb45337-5b52-46b3-b7b3-bbe5123d34e5\") " Nov 28 07:17:43 crc kubenswrapper[4946]: I1128 07:17:43.432717 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cb45337-5b52-46b3-b7b3-bbe5123d34e5-config-data\") pod \"2cb45337-5b52-46b3-b7b3-bbe5123d34e5\" (UID: \"2cb45337-5b52-46b3-b7b3-bbe5123d34e5\") " Nov 28 07:17:43 crc kubenswrapper[4946]: I1128 07:17:43.444412 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cb45337-5b52-46b3-b7b3-bbe5123d34e5-scripts" (OuterVolumeSpecName: "scripts") pod "2cb45337-5b52-46b3-b7b3-bbe5123d34e5" (UID: "2cb45337-5b52-46b3-b7b3-bbe5123d34e5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:17:43 crc kubenswrapper[4946]: I1128 07:17:43.444596 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cb45337-5b52-46b3-b7b3-bbe5123d34e5-kube-api-access-fwp4q" (OuterVolumeSpecName: "kube-api-access-fwp4q") pod "2cb45337-5b52-46b3-b7b3-bbe5123d34e5" (UID: "2cb45337-5b52-46b3-b7b3-bbe5123d34e5"). InnerVolumeSpecName "kube-api-access-fwp4q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:17:43 crc kubenswrapper[4946]: I1128 07:17:43.460081 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cb45337-5b52-46b3-b7b3-bbe5123d34e5-config-data" (OuterVolumeSpecName: "config-data") pod "2cb45337-5b52-46b3-b7b3-bbe5123d34e5" (UID: "2cb45337-5b52-46b3-b7b3-bbe5123d34e5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:17:43 crc kubenswrapper[4946]: I1128 07:17:43.475050 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cb45337-5b52-46b3-b7b3-bbe5123d34e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2cb45337-5b52-46b3-b7b3-bbe5123d34e5" (UID: "2cb45337-5b52-46b3-b7b3-bbe5123d34e5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:17:43 crc kubenswrapper[4946]: I1128 07:17:43.536119 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cb45337-5b52-46b3-b7b3-bbe5123d34e5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:17:43 crc kubenswrapper[4946]: I1128 07:17:43.536167 4946 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cb45337-5b52-46b3-b7b3-bbe5123d34e5-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:17:43 crc kubenswrapper[4946]: I1128 07:17:43.536187 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwp4q\" (UniqueName: \"kubernetes.io/projected/2cb45337-5b52-46b3-b7b3-bbe5123d34e5-kube-api-access-fwp4q\") on node \"crc\" DevicePath \"\"" Nov 28 07:17:43 crc kubenswrapper[4946]: I1128 07:17:43.536207 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cb45337-5b52-46b3-b7b3-bbe5123d34e5-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:17:43 crc kubenswrapper[4946]: I1128 07:17:43.829095 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-5k9zq" event={"ID":"2cb45337-5b52-46b3-b7b3-bbe5123d34e5","Type":"ContainerDied","Data":"9943f23a83e53d697695f9443c5b0abef1b07410900caafbc18233a1e0aa4220"} Nov 28 07:17:43 crc kubenswrapper[4946]: I1128 07:17:43.829159 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9943f23a83e53d697695f9443c5b0abef1b07410900caafbc18233a1e0aa4220" Nov 28 07:17:43 crc kubenswrapper[4946]: I1128 07:17:43.829166 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-5k9zq" Nov 28 07:17:43 crc kubenswrapper[4946]: I1128 07:17:43.968273 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 28 07:17:43 crc kubenswrapper[4946]: E1128 07:17:43.969072 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cb45337-5b52-46b3-b7b3-bbe5123d34e5" containerName="nova-cell0-conductor-db-sync" Nov 28 07:17:43 crc kubenswrapper[4946]: I1128 07:17:43.969092 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cb45337-5b52-46b3-b7b3-bbe5123d34e5" containerName="nova-cell0-conductor-db-sync" Nov 28 07:17:43 crc kubenswrapper[4946]: I1128 07:17:43.969291 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cb45337-5b52-46b3-b7b3-bbe5123d34e5" containerName="nova-cell0-conductor-db-sync" Nov 28 07:17:43 crc kubenswrapper[4946]: I1128 07:17:43.970042 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 28 07:17:43 crc kubenswrapper[4946]: I1128 07:17:43.978267 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 28 07:17:43 crc kubenswrapper[4946]: I1128 07:17:43.978293 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-nr9c9" Nov 28 07:17:43 crc kubenswrapper[4946]: I1128 07:17:43.980177 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 28 07:17:44 crc kubenswrapper[4946]: I1128 07:17:44.046930 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9847c34-a2be-405c-8bd8-34ba251d218d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"b9847c34-a2be-405c-8bd8-34ba251d218d\") " pod="openstack/nova-cell0-conductor-0" Nov 28 07:17:44 crc kubenswrapper[4946]: I1128 07:17:44.047044 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9847c34-a2be-405c-8bd8-34ba251d218d-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"b9847c34-a2be-405c-8bd8-34ba251d218d\") " pod="openstack/nova-cell0-conductor-0" Nov 28 07:17:44 crc kubenswrapper[4946]: I1128 07:17:44.047075 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p6lc\" (UniqueName: \"kubernetes.io/projected/b9847c34-a2be-405c-8bd8-34ba251d218d-kube-api-access-9p6lc\") pod \"nova-cell0-conductor-0\" (UID: \"b9847c34-a2be-405c-8bd8-34ba251d218d\") " pod="openstack/nova-cell0-conductor-0" Nov 28 07:17:44 crc kubenswrapper[4946]: I1128 07:17:44.149718 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9847c34-a2be-405c-8bd8-34ba251d218d-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"b9847c34-a2be-405c-8bd8-34ba251d218d\") " pod="openstack/nova-cell0-conductor-0" Nov 28 07:17:44 crc kubenswrapper[4946]: I1128 07:17:44.149793 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9p6lc\" (UniqueName: \"kubernetes.io/projected/b9847c34-a2be-405c-8bd8-34ba251d218d-kube-api-access-9p6lc\") pod \"nova-cell0-conductor-0\" (UID: \"b9847c34-a2be-405c-8bd8-34ba251d218d\") " pod="openstack/nova-cell0-conductor-0" Nov 28 07:17:44 crc kubenswrapper[4946]: 
I1128 07:17:44.149980 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9847c34-a2be-405c-8bd8-34ba251d218d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"b9847c34-a2be-405c-8bd8-34ba251d218d\") " pod="openstack/nova-cell0-conductor-0" Nov 28 07:17:44 crc kubenswrapper[4946]: I1128 07:17:44.155535 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9847c34-a2be-405c-8bd8-34ba251d218d-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"b9847c34-a2be-405c-8bd8-34ba251d218d\") " pod="openstack/nova-cell0-conductor-0" Nov 28 07:17:44 crc kubenswrapper[4946]: I1128 07:17:44.155983 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9847c34-a2be-405c-8bd8-34ba251d218d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"b9847c34-a2be-405c-8bd8-34ba251d218d\") " pod="openstack/nova-cell0-conductor-0" Nov 28 07:17:44 crc kubenswrapper[4946]: I1128 07:17:44.169077 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9p6lc\" (UniqueName: \"kubernetes.io/projected/b9847c34-a2be-405c-8bd8-34ba251d218d-kube-api-access-9p6lc\") pod \"nova-cell0-conductor-0\" (UID: \"b9847c34-a2be-405c-8bd8-34ba251d218d\") " pod="openstack/nova-cell0-conductor-0" Nov 28 07:17:44 crc kubenswrapper[4946]: I1128 07:17:44.329452 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 28 07:17:44 crc kubenswrapper[4946]: W1128 07:17:44.883696 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb9847c34_a2be_405c_8bd8_34ba251d218d.slice/crio-34edb443746b43a19279746540d0fd55b7126f2ec8f1aa208214ccb503a41cac WatchSource:0}: Error finding container 34edb443746b43a19279746540d0fd55b7126f2ec8f1aa208214ccb503a41cac: Status 404 returned error can't find the container with id 34edb443746b43a19279746540d0fd55b7126f2ec8f1aa208214ccb503a41cac Nov 28 07:17:44 crc kubenswrapper[4946]: I1128 07:17:44.885653 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 28 07:17:45 crc kubenswrapper[4946]: I1128 07:17:45.854114 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"b9847c34-a2be-405c-8bd8-34ba251d218d","Type":"ContainerStarted","Data":"c93f03ceec40d0cc831cb034e30236277dd7917f19967a2a82daec8f7e2ea5a1"} Nov 28 07:17:45 crc kubenswrapper[4946]: I1128 07:17:45.854581 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"b9847c34-a2be-405c-8bd8-34ba251d218d","Type":"ContainerStarted","Data":"34edb443746b43a19279746540d0fd55b7126f2ec8f1aa208214ccb503a41cac"} Nov 28 07:17:45 crc kubenswrapper[4946]: I1128 07:17:45.854617 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 28 07:17:45 crc kubenswrapper[4946]: I1128 07:17:45.887688 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.887664074 podStartE2EDuration="2.887664074s" podCreationTimestamp="2025-11-28 07:17:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 
07:17:45.882029376 +0000 UTC m=+1520.260094487" watchObservedRunningTime="2025-11-28 07:17:45.887664074 +0000 UTC m=+1520.265729185" Nov 28 07:17:54 crc kubenswrapper[4946]: I1128 07:17:54.364533 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 28 07:17:54 crc kubenswrapper[4946]: I1128 07:17:54.960716 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-mr4lq"] Nov 28 07:17:54 crc kubenswrapper[4946]: I1128 07:17:54.963327 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mr4lq" Nov 28 07:17:54 crc kubenswrapper[4946]: I1128 07:17:54.968858 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Nov 28 07:17:54 crc kubenswrapper[4946]: I1128 07:17:54.970880 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Nov 28 07:17:54 crc kubenswrapper[4946]: I1128 07:17:54.977291 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-mr4lq"] Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.123777 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz4km\" (UniqueName: \"kubernetes.io/projected/fc547561-a892-4d64-83cd-35f61e46ffbc-kube-api-access-mz4km\") pod \"nova-cell0-cell-mapping-mr4lq\" (UID: \"fc547561-a892-4d64-83cd-35f61e46ffbc\") " pod="openstack/nova-cell0-cell-mapping-mr4lq" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.124281 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc547561-a892-4d64-83cd-35f61e46ffbc-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-mr4lq\" (UID: \"fc547561-a892-4d64-83cd-35f61e46ffbc\") " pod="openstack/nova-cell0-cell-mapping-mr4lq" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.124423 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc547561-a892-4d64-83cd-35f61e46ffbc-scripts\") pod \"nova-cell0-cell-mapping-mr4lq\" (UID: \"fc547561-a892-4d64-83cd-35f61e46ffbc\") " pod="openstack/nova-cell0-cell-mapping-mr4lq" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.124696 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc547561-a892-4d64-83cd-35f61e46ffbc-config-data\") pod \"nova-cell0-cell-mapping-mr4lq\" (UID: \"fc547561-a892-4d64-83cd-35f61e46ffbc\") " pod="openstack/nova-cell0-cell-mapping-mr4lq" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.216024 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.220589 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.223579 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.227514 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mz4km\" (UniqueName: \"kubernetes.io/projected/fc547561-a892-4d64-83cd-35f61e46ffbc-kube-api-access-mz4km\") pod \"nova-cell0-cell-mapping-mr4lq\" (UID: \"fc547561-a892-4d64-83cd-35f61e46ffbc\") " pod="openstack/nova-cell0-cell-mapping-mr4lq" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.227551 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc547561-a892-4d64-83cd-35f61e46ffbc-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-mr4lq\" (UID: \"fc547561-a892-4d64-83cd-35f61e46ffbc\") " pod="openstack/nova-cell0-cell-mapping-mr4lq" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.227601 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc547561-a892-4d64-83cd-35f61e46ffbc-scripts\") pod \"nova-cell0-cell-mapping-mr4lq\" (UID: \"fc547561-a892-4d64-83cd-35f61e46ffbc\") " pod="openstack/nova-cell0-cell-mapping-mr4lq" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.227650 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc547561-a892-4d64-83cd-35f61e46ffbc-config-data\") pod \"nova-cell0-cell-mapping-mr4lq\" (UID: \"fc547561-a892-4d64-83cd-35f61e46ffbc\") " pod="openstack/nova-cell0-cell-mapping-mr4lq" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.241690 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc547561-a892-4d64-83cd-35f61e46ffbc-config-data\") pod \"nova-cell0-cell-mapping-mr4lq\" (UID: \"fc547561-a892-4d64-83cd-35f61e46ffbc\") " pod="openstack/nova-cell0-cell-mapping-mr4lq" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.242649 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc547561-a892-4d64-83cd-35f61e46ffbc-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-mr4lq\" (UID: \"fc547561-a892-4d64-83cd-35f61e46ffbc\") " pod="openstack/nova-cell0-cell-mapping-mr4lq" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.260087 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc547561-a892-4d64-83cd-35f61e46ffbc-scripts\") pod \"nova-cell0-cell-mapping-mr4lq\" (UID: \"fc547561-a892-4d64-83cd-35f61e46ffbc\") " pod="openstack/nova-cell0-cell-mapping-mr4lq" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.261087 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mz4km\" (UniqueName: \"kubernetes.io/projected/fc547561-a892-4d64-83cd-35f61e46ffbc-kube-api-access-mz4km\") pod \"nova-cell0-cell-mapping-mr4lq\" (UID: \"fc547561-a892-4d64-83cd-35f61e46ffbc\") " pod="openstack/nova-cell0-cell-mapping-mr4lq" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.273046 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.275392 4946 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.281989 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.301166 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.303601 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mr4lq" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.328155 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.329758 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0210eecd-3bef-4e57-9c69-1c7d8e8426df-logs\") pod \"nova-metadata-0\" (UID: \"0210eecd-3bef-4e57-9c69-1c7d8e8426df\") " pod="openstack/nova-metadata-0" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.329852 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0210eecd-3bef-4e57-9c69-1c7d8e8426df-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0210eecd-3bef-4e57-9c69-1c7d8e8426df\") " pod="openstack/nova-metadata-0" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.329883 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0210eecd-3bef-4e57-9c69-1c7d8e8426df-config-data\") pod \"nova-metadata-0\" (UID: \"0210eecd-3bef-4e57-9c69-1c7d8e8426df\") " pod="openstack/nova-metadata-0" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.329906 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5bsg\" (UniqueName: \"kubernetes.io/projected/0210eecd-3bef-4e57-9c69-1c7d8e8426df-kube-api-access-m5bsg\") pod \"nova-metadata-0\" (UID: \"0210eecd-3bef-4e57-9c69-1c7d8e8426df\") " pod="openstack/nova-metadata-0" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.387531 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.389280 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.400786 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.429484 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.431454 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3c04d77-b056-4082-a3eb-64f9f4679296-config-data\") pod \"nova-api-0\" (UID: \"a3c04d77-b056-4082-a3eb-64f9f4679296\") " pod="openstack/nova-api-0" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.431544 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3c04d77-b056-4082-a3eb-64f9f4679296-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a3c04d77-b056-4082-a3eb-64f9f4679296\") " pod="openstack/nova-api-0" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.431578 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0210eecd-3bef-4e57-9c69-1c7d8e8426df-logs\") pod \"nova-metadata-0\" (UID: \"0210eecd-3bef-4e57-9c69-1c7d8e8426df\") " pod="openstack/nova-metadata-0" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.431659 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0210eecd-3bef-4e57-9c69-1c7d8e8426df-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0210eecd-3bef-4e57-9c69-1c7d8e8426df\") " pod="openstack/nova-metadata-0" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.431700 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0210eecd-3bef-4e57-9c69-1c7d8e8426df-config-data\") pod \"nova-metadata-0\" (UID: \"0210eecd-3bef-4e57-9c69-1c7d8e8426df\") " pod="openstack/nova-metadata-0" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.431724 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5bsg\" (UniqueName: \"kubernetes.io/projected/0210eecd-3bef-4e57-9c69-1c7d8e8426df-kube-api-access-m5bsg\") pod \"nova-metadata-0\" (UID: \"0210eecd-3bef-4e57-9c69-1c7d8e8426df\") " pod="openstack/nova-metadata-0" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.431789 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3c04d77-b056-4082-a3eb-64f9f4679296-logs\") pod \"nova-api-0\" (UID: \"a3c04d77-b056-4082-a3eb-64f9f4679296\") " pod="openstack/nova-api-0" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.431845 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd7ww\" (UniqueName: \"kubernetes.io/projected/a3c04d77-b056-4082-a3eb-64f9f4679296-kube-api-access-kd7ww\") pod \"nova-api-0\" (UID: \"a3c04d77-b056-4082-a3eb-64f9f4679296\") " pod="openstack/nova-api-0" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.432337 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0210eecd-3bef-4e57-9c69-1c7d8e8426df-logs\") pod \"nova-metadata-0\" 
(UID: \"0210eecd-3bef-4e57-9c69-1c7d8e8426df\") " pod="openstack/nova-metadata-0" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.450166 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.451211 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0210eecd-3bef-4e57-9c69-1c7d8e8426df-config-data\") pod \"nova-metadata-0\" (UID: \"0210eecd-3bef-4e57-9c69-1c7d8e8426df\") " pod="openstack/nova-metadata-0" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.451786 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.452187 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0210eecd-3bef-4e57-9c69-1c7d8e8426df-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0210eecd-3bef-4e57-9c69-1c7d8e8426df\") " pod="openstack/nova-metadata-0" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.454157 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.466106 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5bsg\" (UniqueName: \"kubernetes.io/projected/0210eecd-3bef-4e57-9c69-1c7d8e8426df-kube-api-access-m5bsg\") pod \"nova-metadata-0\" (UID: \"0210eecd-3bef-4e57-9c69-1c7d8e8426df\") " pod="openstack/nova-metadata-0" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.466253 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75df6cf455-7ctpw"] Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.468070 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75df6cf455-7ctpw" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.501626 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.531723 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75df6cf455-7ctpw"] Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.535006 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5919ecc3-7d40-4914-9e48-bde9efff3853-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5919ecc3-7d40-4914-9e48-bde9efff3853\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.535080 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ws54\" (UniqueName: \"kubernetes.io/projected/db67bc15-ec43-4391-8aa9-b0c214813024-kube-api-access-4ws54\") pod \"nova-scheduler-0\" (UID: \"db67bc15-ec43-4391-8aa9-b0c214813024\") " pod="openstack/nova-scheduler-0" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.535134 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3c04d77-b056-4082-a3eb-64f9f4679296-logs\") pod \"nova-api-0\" (UID: \"a3c04d77-b056-4082-a3eb-64f9f4679296\") " pod="openstack/nova-api-0" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.535181 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db67bc15-ec43-4391-8aa9-b0c214813024-config-data\") pod \"nova-scheduler-0\" (UID: \"db67bc15-ec43-4391-8aa9-b0c214813024\") " pod="openstack/nova-scheduler-0" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.535200 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kd7ww\" (UniqueName: \"kubernetes.io/projected/a3c04d77-b056-4082-a3eb-64f9f4679296-kube-api-access-kd7ww\") pod \"nova-api-0\" (UID: \"a3c04d77-b056-4082-a3eb-64f9f4679296\") " pod="openstack/nova-api-0" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.535221 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db67bc15-ec43-4391-8aa9-b0c214813024-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"db67bc15-ec43-4391-8aa9-b0c214813024\") " pod="openstack/nova-scheduler-0" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.535245 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j2ss\" (UniqueName: \"kubernetes.io/projected/5919ecc3-7d40-4914-9e48-bde9efff3853-kube-api-access-6j2ss\") pod \"nova-cell1-novncproxy-0\" (UID: \"5919ecc3-7d40-4914-9e48-bde9efff3853\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.535266 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5919ecc3-7d40-4914-9e48-bde9efff3853-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5919ecc3-7d40-4914-9e48-bde9efff3853\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.535311 4946 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3c04d77-b056-4082-a3eb-64f9f4679296-config-data\") pod \"nova-api-0\" (UID: \"a3c04d77-b056-4082-a3eb-64f9f4679296\") " pod="openstack/nova-api-0" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.535341 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3c04d77-b056-4082-a3eb-64f9f4679296-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a3c04d77-b056-4082-a3eb-64f9f4679296\") " pod="openstack/nova-api-0" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.539645 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3c04d77-b056-4082-a3eb-64f9f4679296-logs\") pod \"nova-api-0\" (UID: \"a3c04d77-b056-4082-a3eb-64f9f4679296\") " pod="openstack/nova-api-0" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.544638 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3c04d77-b056-4082-a3eb-64f9f4679296-config-data\") pod \"nova-api-0\" (UID: \"a3c04d77-b056-4082-a3eb-64f9f4679296\") " pod="openstack/nova-api-0" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.547616 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3c04d77-b056-4082-a3eb-64f9f4679296-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a3c04d77-b056-4082-a3eb-64f9f4679296\") " pod="openstack/nova-api-0" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.558093 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd7ww\" (UniqueName: \"kubernetes.io/projected/a3c04d77-b056-4082-a3eb-64f9f4679296-kube-api-access-kd7ww\") pod \"nova-api-0\" (UID: \"a3c04d77-b056-4082-a3eb-64f9f4679296\") " pod="openstack/nova-api-0" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.637863 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db67bc15-ec43-4391-8aa9-b0c214813024-config-data\") pod \"nova-scheduler-0\" (UID: \"db67bc15-ec43-4391-8aa9-b0c214813024\") " pod="openstack/nova-scheduler-0" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.638017 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db67bc15-ec43-4391-8aa9-b0c214813024-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"db67bc15-ec43-4391-8aa9-b0c214813024\") " pod="openstack/nova-scheduler-0" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.638055 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6j2ss\" (UniqueName: \"kubernetes.io/projected/5919ecc3-7d40-4914-9e48-bde9efff3853-kube-api-access-6j2ss\") pod \"nova-cell1-novncproxy-0\" (UID: \"5919ecc3-7d40-4914-9e48-bde9efff3853\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.638097 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5919ecc3-7d40-4914-9e48-bde9efff3853-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5919ecc3-7d40-4914-9e48-bde9efff3853\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.638147 4946 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/01e161c0-19be-45e2-9e1c-939bf287bd3e-ovsdbserver-nb\") pod \"dnsmasq-dns-75df6cf455-7ctpw\" (UID: \"01e161c0-19be-45e2-9e1c-939bf287bd3e\") " pod="openstack/dnsmasq-dns-75df6cf455-7ctpw" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.638199 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01e161c0-19be-45e2-9e1c-939bf287bd3e-config\") pod \"dnsmasq-dns-75df6cf455-7ctpw\" (UID: \"01e161c0-19be-45e2-9e1c-939bf287bd3e\") " pod="openstack/dnsmasq-dns-75df6cf455-7ctpw" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.638221 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp5bz\" (UniqueName: \"kubernetes.io/projected/01e161c0-19be-45e2-9e1c-939bf287bd3e-kube-api-access-sp5bz\") pod \"dnsmasq-dns-75df6cf455-7ctpw\" (UID: \"01e161c0-19be-45e2-9e1c-939bf287bd3e\") " pod="openstack/dnsmasq-dns-75df6cf455-7ctpw" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.638262 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/01e161c0-19be-45e2-9e1c-939bf287bd3e-dns-swift-storage-0\") pod \"dnsmasq-dns-75df6cf455-7ctpw\" (UID: \"01e161c0-19be-45e2-9e1c-939bf287bd3e\") " pod="openstack/dnsmasq-dns-75df6cf455-7ctpw" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.638342 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/01e161c0-19be-45e2-9e1c-939bf287bd3e-dns-svc\") pod \"dnsmasq-dns-75df6cf455-7ctpw\" (UID: \"01e161c0-19be-45e2-9e1c-939bf287bd3e\") " pod="openstack/dnsmasq-dns-75df6cf455-7ctpw" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.638403 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5919ecc3-7d40-4914-9e48-bde9efff3853-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5919ecc3-7d40-4914-9e48-bde9efff3853\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.638443 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ws54\" (UniqueName: \"kubernetes.io/projected/db67bc15-ec43-4391-8aa9-b0c214813024-kube-api-access-4ws54\") pod \"nova-scheduler-0\" (UID: \"db67bc15-ec43-4391-8aa9-b0c214813024\") " pod="openstack/nova-scheduler-0" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.638520 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/01e161c0-19be-45e2-9e1c-939bf287bd3e-ovsdbserver-sb\") pod \"dnsmasq-dns-75df6cf455-7ctpw\" (UID: \"01e161c0-19be-45e2-9e1c-939bf287bd3e\") " pod="openstack/dnsmasq-dns-75df6cf455-7ctpw" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.643716 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5919ecc3-7d40-4914-9e48-bde9efff3853-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5919ecc3-7d40-4914-9e48-bde9efff3853\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 
07:17:55.643733 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db67bc15-ec43-4391-8aa9-b0c214813024-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"db67bc15-ec43-4391-8aa9-b0c214813024\") " pod="openstack/nova-scheduler-0" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.647714 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db67bc15-ec43-4391-8aa9-b0c214813024-config-data\") pod \"nova-scheduler-0\" (UID: \"db67bc15-ec43-4391-8aa9-b0c214813024\") " pod="openstack/nova-scheduler-0" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.655644 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5919ecc3-7d40-4914-9e48-bde9efff3853-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5919ecc3-7d40-4914-9e48-bde9efff3853\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.659057 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ws54\" (UniqueName: \"kubernetes.io/projected/db67bc15-ec43-4391-8aa9-b0c214813024-kube-api-access-4ws54\") pod \"nova-scheduler-0\" (UID: \"db67bc15-ec43-4391-8aa9-b0c214813024\") " pod="openstack/nova-scheduler-0" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.660398 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6j2ss\" (UniqueName: \"kubernetes.io/projected/5919ecc3-7d40-4914-9e48-bde9efff3853-kube-api-access-6j2ss\") pod \"nova-cell1-novncproxy-0\" (UID: \"5919ecc3-7d40-4914-9e48-bde9efff3853\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.712413 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.739722 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.740637 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/01e161c0-19be-45e2-9e1c-939bf287bd3e-dns-svc\") pod \"dnsmasq-dns-75df6cf455-7ctpw\" (UID: \"01e161c0-19be-45e2-9e1c-939bf287bd3e\") " pod="openstack/dnsmasq-dns-75df6cf455-7ctpw" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.740835 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/01e161c0-19be-45e2-9e1c-939bf287bd3e-ovsdbserver-sb\") pod \"dnsmasq-dns-75df6cf455-7ctpw\" (UID: \"01e161c0-19be-45e2-9e1c-939bf287bd3e\") " pod="openstack/dnsmasq-dns-75df6cf455-7ctpw" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.741035 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/01e161c0-19be-45e2-9e1c-939bf287bd3e-ovsdbserver-nb\") pod \"dnsmasq-dns-75df6cf455-7ctpw\" (UID: \"01e161c0-19be-45e2-9e1c-939bf287bd3e\") " pod="openstack/dnsmasq-dns-75df6cf455-7ctpw" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.741079 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01e161c0-19be-45e2-9e1c-939bf287bd3e-config\") pod \"dnsmasq-dns-75df6cf455-7ctpw\" (UID: \"01e161c0-19be-45e2-9e1c-939bf287bd3e\") " pod="openstack/dnsmasq-dns-75df6cf455-7ctpw" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.741114 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp5bz\" (UniqueName: \"kubernetes.io/projected/01e161c0-19be-45e2-9e1c-939bf287bd3e-kube-api-access-sp5bz\") pod \"dnsmasq-dns-75df6cf455-7ctpw\" (UID: \"01e161c0-19be-45e2-9e1c-939bf287bd3e\") " pod="openstack/dnsmasq-dns-75df6cf455-7ctpw" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.741146 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/01e161c0-19be-45e2-9e1c-939bf287bd3e-dns-swift-storage-0\") pod \"dnsmasq-dns-75df6cf455-7ctpw\" (UID: \"01e161c0-19be-45e2-9e1c-939bf287bd3e\") " pod="openstack/dnsmasq-dns-75df6cf455-7ctpw" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.741444 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/01e161c0-19be-45e2-9e1c-939bf287bd3e-dns-svc\") pod \"dnsmasq-dns-75df6cf455-7ctpw\" (UID: \"01e161c0-19be-45e2-9e1c-939bf287bd3e\") " pod="openstack/dnsmasq-dns-75df6cf455-7ctpw" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.742198 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/01e161c0-19be-45e2-9e1c-939bf287bd3e-dns-swift-storage-0\") pod \"dnsmasq-dns-75df6cf455-7ctpw\" (UID: \"01e161c0-19be-45e2-9e1c-939bf287bd3e\") " pod="openstack/dnsmasq-dns-75df6cf455-7ctpw" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.742393 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01e161c0-19be-45e2-9e1c-939bf287bd3e-config\") pod \"dnsmasq-dns-75df6cf455-7ctpw\" (UID: \"01e161c0-19be-45e2-9e1c-939bf287bd3e\") " pod="openstack/dnsmasq-dns-75df6cf455-7ctpw" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 
07:17:55.743716 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/01e161c0-19be-45e2-9e1c-939bf287bd3e-ovsdbserver-sb\") pod \"dnsmasq-dns-75df6cf455-7ctpw\" (UID: \"01e161c0-19be-45e2-9e1c-939bf287bd3e\") " pod="openstack/dnsmasq-dns-75df6cf455-7ctpw" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.743856 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/01e161c0-19be-45e2-9e1c-939bf287bd3e-ovsdbserver-nb\") pod \"dnsmasq-dns-75df6cf455-7ctpw\" (UID: \"01e161c0-19be-45e2-9e1c-939bf287bd3e\") " pod="openstack/dnsmasq-dns-75df6cf455-7ctpw" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.759801 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp5bz\" (UniqueName: \"kubernetes.io/projected/01e161c0-19be-45e2-9e1c-939bf287bd3e-kube-api-access-sp5bz\") pod \"dnsmasq-dns-75df6cf455-7ctpw\" (UID: \"01e161c0-19be-45e2-9e1c-939bf287bd3e\") " pod="openstack/dnsmasq-dns-75df6cf455-7ctpw" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.869575 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.887084 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.890778 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75df6cf455-7ctpw" Nov 28 07:17:55 crc kubenswrapper[4946]: I1128 07:17:55.937836 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-mr4lq"] Nov 28 07:17:56 crc kubenswrapper[4946]: I1128 07:17:56.007477 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mr4lq" event={"ID":"fc547561-a892-4d64-83cd-35f61e46ffbc","Type":"ContainerStarted","Data":"3ea584477ef82817fe8b52de2912e4c1d327a990a4f217a5ffa32fa988a74297"} Nov 28 07:17:56 crc kubenswrapper[4946]: I1128 07:17:56.126717 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wxxkg"] Nov 28 07:17:56 crc kubenswrapper[4946]: I1128 07:17:56.132163 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-wxxkg" Nov 28 07:17:56 crc kubenswrapper[4946]: I1128 07:17:56.134871 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 28 07:17:56 crc kubenswrapper[4946]: I1128 07:17:56.137740 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Nov 28 07:17:56 crc kubenswrapper[4946]: I1128 07:17:56.162359 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wxxkg"] Nov 28 07:17:56 crc kubenswrapper[4946]: I1128 07:17:56.235615 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 07:17:56 crc kubenswrapper[4946]: I1128 07:17:56.284056 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d80e1b8-ad91-4449-bd5a-07c7c15ce996-scripts\") pod \"nova-cell1-conductor-db-sync-wxxkg\" (UID: \"6d80e1b8-ad91-4449-bd5a-07c7c15ce996\") " pod="openstack/nova-cell1-conductor-db-sync-wxxkg" Nov 28 07:17:56 crc kubenswrapper[4946]: I1128 07:17:56.284309 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r77vm\" (UniqueName: \"kubernetes.io/projected/6d80e1b8-ad91-4449-bd5a-07c7c15ce996-kube-api-access-r77vm\") pod \"nova-cell1-conductor-db-sync-wxxkg\" (UID: \"6d80e1b8-ad91-4449-bd5a-07c7c15ce996\") " pod="openstack/nova-cell1-conductor-db-sync-wxxkg" Nov 28 07:17:56 crc kubenswrapper[4946]: I1128 07:17:56.284351 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d80e1b8-ad91-4449-bd5a-07c7c15ce996-config-data\") pod \"nova-cell1-conductor-db-sync-wxxkg\" (UID: \"6d80e1b8-ad91-4449-bd5a-07c7c15ce996\") " pod="openstack/nova-cell1-conductor-db-sync-wxxkg" Nov 28 07:17:56 crc kubenswrapper[4946]: I1128 07:17:56.284375 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d80e1b8-ad91-4449-bd5a-07c7c15ce996-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-wxxkg\" (UID: \"6d80e1b8-ad91-4449-bd5a-07c7c15ce996\") " pod="openstack/nova-cell1-conductor-db-sync-wxxkg" Nov 28 07:17:56 crc kubenswrapper[4946]: I1128 07:17:56.347927 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 07:17:56 crc kubenswrapper[4946]: W1128 07:17:56.350779 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3c04d77_b056_4082_a3eb_64f9f4679296.slice/crio-616ec6c456a2c6a547d1ac40346cc50e3a45227e3cbff97e070f431e8fb2b4bf WatchSource:0}: Error finding container 616ec6c456a2c6a547d1ac40346cc50e3a45227e3cbff97e070f431e8fb2b4bf: Status 404 returned error can't find the container with id 616ec6c456a2c6a547d1ac40346cc50e3a45227e3cbff97e070f431e8fb2b4bf Nov 28 07:17:56 crc kubenswrapper[4946]: I1128 07:17:56.387990 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d80e1b8-ad91-4449-bd5a-07c7c15ce996-scripts\") pod \"nova-cell1-conductor-db-sync-wxxkg\" (UID: \"6d80e1b8-ad91-4449-bd5a-07c7c15ce996\") " pod="openstack/nova-cell1-conductor-db-sync-wxxkg" Nov 28 07:17:56 crc kubenswrapper[4946]: I1128 07:17:56.388048 
4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r77vm\" (UniqueName: \"kubernetes.io/projected/6d80e1b8-ad91-4449-bd5a-07c7c15ce996-kube-api-access-r77vm\") pod \"nova-cell1-conductor-db-sync-wxxkg\" (UID: \"6d80e1b8-ad91-4449-bd5a-07c7c15ce996\") " pod="openstack/nova-cell1-conductor-db-sync-wxxkg" Nov 28 07:17:56 crc kubenswrapper[4946]: I1128 07:17:56.388096 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d80e1b8-ad91-4449-bd5a-07c7c15ce996-config-data\") pod \"nova-cell1-conductor-db-sync-wxxkg\" (UID: \"6d80e1b8-ad91-4449-bd5a-07c7c15ce996\") " pod="openstack/nova-cell1-conductor-db-sync-wxxkg" Nov 28 07:17:56 crc kubenswrapper[4946]: I1128 07:17:56.388121 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d80e1b8-ad91-4449-bd5a-07c7c15ce996-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-wxxkg\" (UID: \"6d80e1b8-ad91-4449-bd5a-07c7c15ce996\") " pod="openstack/nova-cell1-conductor-db-sync-wxxkg" Nov 28 07:17:56 crc kubenswrapper[4946]: I1128 07:17:56.394339 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d80e1b8-ad91-4449-bd5a-07c7c15ce996-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-wxxkg\" (UID: \"6d80e1b8-ad91-4449-bd5a-07c7c15ce996\") " pod="openstack/nova-cell1-conductor-db-sync-wxxkg" Nov 28 07:17:56 crc kubenswrapper[4946]: I1128 07:17:56.396477 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d80e1b8-ad91-4449-bd5a-07c7c15ce996-config-data\") pod \"nova-cell1-conductor-db-sync-wxxkg\" (UID: \"6d80e1b8-ad91-4449-bd5a-07c7c15ce996\") " pod="openstack/nova-cell1-conductor-db-sync-wxxkg" Nov 28 07:17:56 crc kubenswrapper[4946]: I1128 07:17:56.409039 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d80e1b8-ad91-4449-bd5a-07c7c15ce996-scripts\") pod \"nova-cell1-conductor-db-sync-wxxkg\" (UID: \"6d80e1b8-ad91-4449-bd5a-07c7c15ce996\") " pod="openstack/nova-cell1-conductor-db-sync-wxxkg" Nov 28 07:17:56 crc kubenswrapper[4946]: I1128 07:17:56.409143 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r77vm\" (UniqueName: \"kubernetes.io/projected/6d80e1b8-ad91-4449-bd5a-07c7c15ce996-kube-api-access-r77vm\") pod \"nova-cell1-conductor-db-sync-wxxkg\" (UID: \"6d80e1b8-ad91-4449-bd5a-07c7c15ce996\") " pod="openstack/nova-cell1-conductor-db-sync-wxxkg" Nov 28 07:17:56 crc kubenswrapper[4946]: I1128 07:17:56.476561 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-wxxkg" Nov 28 07:17:56 crc kubenswrapper[4946]: I1128 07:17:56.508309 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 07:17:56 crc kubenswrapper[4946]: W1128 07:17:56.516023 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5919ecc3_7d40_4914_9e48_bde9efff3853.slice/crio-9c068f43c2649bc5200afbe71381373c5ba019a06fa119274747b0e2c8856986 WatchSource:0}: Error finding container 9c068f43c2649bc5200afbe71381373c5ba019a06fa119274747b0e2c8856986: Status 404 returned error can't find the container with id 9c068f43c2649bc5200afbe71381373c5ba019a06fa119274747b0e2c8856986 Nov 28 07:17:56 crc kubenswrapper[4946]: I1128 07:17:56.518028 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 07:17:56 crc kubenswrapper[4946]: I1128 07:17:56.697499 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75df6cf455-7ctpw"] Nov 28 07:17:56 crc kubenswrapper[4946]: I1128 07:17:56.950871 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wxxkg"] Nov 28 07:17:56 crc kubenswrapper[4946]: W1128 07:17:56.958402 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d80e1b8_ad91_4449_bd5a_07c7c15ce996.slice/crio-b736e42244ac71b3db4aad31e0b40382bc2551e00f501e30852a69a57d7fdb10 WatchSource:0}: Error finding container b736e42244ac71b3db4aad31e0b40382bc2551e00f501e30852a69a57d7fdb10: Status 404 returned error can't find the container with id b736e42244ac71b3db4aad31e0b40382bc2551e00f501e30852a69a57d7fdb10 Nov 28 07:17:57 crc kubenswrapper[4946]: I1128 07:17:57.001455 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"db67bc15-ec43-4391-8aa9-b0c214813024","Type":"ContainerStarted","Data":"a31056c2513c9f8e756ef66d5720920aa1e6fc9de230ef9050cb4ad030009c48"} Nov 28 07:17:57 crc kubenswrapper[4946]: I1128 07:17:57.003580 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"5919ecc3-7d40-4914-9e48-bde9efff3853","Type":"ContainerStarted","Data":"9c068f43c2649bc5200afbe71381373c5ba019a06fa119274747b0e2c8856986"} Nov 28 07:17:57 crc kubenswrapper[4946]: I1128 07:17:57.006558 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0210eecd-3bef-4e57-9c69-1c7d8e8426df","Type":"ContainerStarted","Data":"69749b5fba71f1f79415863a4b6edb0a1fa059d58e45d8a03f0da9a7fea0aeb8"} Nov 28 07:17:57 crc kubenswrapper[4946]: I1128 07:17:57.008677 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mr4lq" event={"ID":"fc547561-a892-4d64-83cd-35f61e46ffbc","Type":"ContainerStarted","Data":"d1a874f7dc46f5856603f1816207ac7f5219e5bf8113380c75e03dd02f1318d2"} Nov 28 07:17:57 crc kubenswrapper[4946]: I1128 07:17:57.013591 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-wxxkg" event={"ID":"6d80e1b8-ad91-4449-bd5a-07c7c15ce996","Type":"ContainerStarted","Data":"b736e42244ac71b3db4aad31e0b40382bc2551e00f501e30852a69a57d7fdb10"} Nov 28 07:17:57 crc kubenswrapper[4946]: I1128 07:17:57.016390 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"a3c04d77-b056-4082-a3eb-64f9f4679296","Type":"ContainerStarted","Data":"616ec6c456a2c6a547d1ac40346cc50e3a45227e3cbff97e070f431e8fb2b4bf"} Nov 28 07:17:57 crc kubenswrapper[4946]: I1128 07:17:57.025309 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75df6cf455-7ctpw" event={"ID":"01e161c0-19be-45e2-9e1c-939bf287bd3e","Type":"ContainerStarted","Data":"a348da2efed5cb9f971db2057ab00687cdd610c28bf4bcbcd484d5b3b192867c"} Nov 28 07:17:57 crc kubenswrapper[4946]: I1128 07:17:57.025386 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75df6cf455-7ctpw" event={"ID":"01e161c0-19be-45e2-9e1c-939bf287bd3e","Type":"ContainerStarted","Data":"a334b93cb025df01de10265472779d3a75245878cd83853812ffecfd9e06d4e3"} Nov 28 07:17:57 crc kubenswrapper[4946]: I1128 07:17:57.027775 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-mr4lq" podStartSLOduration=3.027750629 podStartE2EDuration="3.027750629s" podCreationTimestamp="2025-11-28 07:17:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:17:57.02491893 +0000 UTC m=+1531.402984041" watchObservedRunningTime="2025-11-28 07:17:57.027750629 +0000 UTC m=+1531.405815740" Nov 28 07:17:58 crc kubenswrapper[4946]: I1128 07:17:58.044141 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-wxxkg" event={"ID":"6d80e1b8-ad91-4449-bd5a-07c7c15ce996","Type":"ContainerStarted","Data":"9a6b1a4ad119bfed32568355cec70a94a7fcf22a0b1f5a06ffdb17c6c3ad1bf9"} Nov 28 07:17:58 crc kubenswrapper[4946]: I1128 07:17:58.047486 4946 generic.go:334] "Generic (PLEG): container finished" podID="01e161c0-19be-45e2-9e1c-939bf287bd3e" containerID="a348da2efed5cb9f971db2057ab00687cdd610c28bf4bcbcd484d5b3b192867c" exitCode=0 Nov 28 07:17:58 crc kubenswrapper[4946]: I1128 07:17:58.047610 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75df6cf455-7ctpw" event={"ID":"01e161c0-19be-45e2-9e1c-939bf287bd3e","Type":"ContainerDied","Data":"a348da2efed5cb9f971db2057ab00687cdd610c28bf4bcbcd484d5b3b192867c"} Nov 28 07:17:58 crc kubenswrapper[4946]: I1128 07:17:58.064002 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-wxxkg" podStartSLOduration=2.06398621 podStartE2EDuration="2.06398621s" podCreationTimestamp="2025-11-28 07:17:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:17:58.05828929 +0000 UTC m=+1532.436354411" watchObservedRunningTime="2025-11-28 07:17:58.06398621 +0000 UTC m=+1532.442051321" Nov 28 07:17:59 crc kubenswrapper[4946]: I1128 07:17:59.113387 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 07:17:59 crc kubenswrapper[4946]: I1128 07:17:59.126389 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 07:18:00 crc kubenswrapper[4946]: I1128 07:18:00.082408 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"5919ecc3-7d40-4914-9e48-bde9efff3853","Type":"ContainerStarted","Data":"1dc38591f3fe43d534ca3a32b55c4cc20f1ed9c36d9e889d2540e36f214004ac"} Nov 28 07:18:00 crc kubenswrapper[4946]: I1128 07:18:00.082452 4946 kuberuntime_container.go:808] "Killing container with 
a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="5919ecc3-7d40-4914-9e48-bde9efff3853" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://1dc38591f3fe43d534ca3a32b55c4cc20f1ed9c36d9e889d2540e36f214004ac" gracePeriod=30 Nov 28 07:18:00 crc kubenswrapper[4946]: I1128 07:18:00.091336 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75df6cf455-7ctpw" event={"ID":"01e161c0-19be-45e2-9e1c-939bf287bd3e","Type":"ContainerStarted","Data":"013a463e5332a4ac23726892ef20f93b135d9eefa71f004d0bc295ed866d6e69"} Nov 28 07:18:00 crc kubenswrapper[4946]: I1128 07:18:00.091795 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75df6cf455-7ctpw" Nov 28 07:18:00 crc kubenswrapper[4946]: I1128 07:18:00.111806 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.933873308 podStartE2EDuration="5.111776476s" podCreationTimestamp="2025-11-28 07:17:55 +0000 UTC" firstStartedPulling="2025-11-28 07:17:56.518936783 +0000 UTC m=+1530.897001894" lastFinishedPulling="2025-11-28 07:17:59.696839951 +0000 UTC m=+1534.074905062" observedRunningTime="2025-11-28 07:18:00.110407613 +0000 UTC m=+1534.488472724" watchObservedRunningTime="2025-11-28 07:18:00.111776476 +0000 UTC m=+1534.489841587" Nov 28 07:18:00 crc kubenswrapper[4946]: I1128 07:18:00.136900 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75df6cf455-7ctpw" podStartSLOduration=5.136872432 podStartE2EDuration="5.136872432s" podCreationTimestamp="2025-11-28 07:17:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:18:00.132966226 +0000 UTC m=+1534.511031347" watchObservedRunningTime="2025-11-28 07:18:00.136872432 +0000 UTC m=+1534.514937543" Nov 28 07:18:00 crc kubenswrapper[4946]: I1128 07:18:00.887778 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:18:01 crc kubenswrapper[4946]: I1128 07:18:01.102927 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a3c04d77-b056-4082-a3eb-64f9f4679296","Type":"ContainerStarted","Data":"eb0a5ac8d697a88614fd21d06b02ba39f498724f65d07996731adc13cd44ad4b"} Nov 28 07:18:01 crc kubenswrapper[4946]: I1128 07:18:01.102992 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a3c04d77-b056-4082-a3eb-64f9f4679296","Type":"ContainerStarted","Data":"5e04565953fdb740a56d881500149e6fb75f2cb1d1d6ee266392ee03ee31197f"} Nov 28 07:18:01 crc kubenswrapper[4946]: I1128 07:18:01.104769 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"db67bc15-ec43-4391-8aa9-b0c214813024","Type":"ContainerStarted","Data":"01f3959a3fddadfeb293023b6dbdd4f890722a5e2dfba684d15de2a137b654ab"} Nov 28 07:18:01 crc kubenswrapper[4946]: I1128 07:18:01.106278 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0210eecd-3bef-4e57-9c69-1c7d8e8426df","Type":"ContainerStarted","Data":"4c2c8d2655ab9c14120bbeccac21a43c5c13106406c56524978ce0f1de8c1724"} Nov 28 07:18:01 crc kubenswrapper[4946]: I1128 07:18:01.106449 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"0210eecd-3bef-4e57-9c69-1c7d8e8426df","Type":"ContainerStarted","Data":"c6eb564d5747d4f1e745fee7227243768688658dd993d82dcc37a9fee9bfec18"} Nov 28 07:18:01 crc kubenswrapper[4946]: I1128 07:18:01.106447 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0210eecd-3bef-4e57-9c69-1c7d8e8426df" containerName="nova-metadata-metadata" containerID="cri-o://c6eb564d5747d4f1e745fee7227243768688658dd993d82dcc37a9fee9bfec18" gracePeriod=30 Nov 28 07:18:01 crc kubenswrapper[4946]: I1128 07:18:01.106394 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0210eecd-3bef-4e57-9c69-1c7d8e8426df" containerName="nova-metadata-log" containerID="cri-o://4c2c8d2655ab9c14120bbeccac21a43c5c13106406c56524978ce0f1de8c1724" gracePeriod=30 Nov 28 07:18:01 crc kubenswrapper[4946]: I1128 07:18:01.131050 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.786968347 podStartE2EDuration="6.13102711s" podCreationTimestamp="2025-11-28 07:17:55 +0000 UTC" firstStartedPulling="2025-11-28 07:17:56.354396768 +0000 UTC m=+1530.732461879" lastFinishedPulling="2025-11-28 07:17:59.698455531 +0000 UTC m=+1534.076520642" observedRunningTime="2025-11-28 07:18:01.12369413 +0000 UTC m=+1535.501759231" watchObservedRunningTime="2025-11-28 07:18:01.13102711 +0000 UTC m=+1535.509092221" Nov 28 07:18:01 crc kubenswrapper[4946]: I1128 07:18:01.147824 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.978355851 podStartE2EDuration="6.147793941s" podCreationTimestamp="2025-11-28 07:17:55 +0000 UTC" firstStartedPulling="2025-11-28 07:17:56.527444492 +0000 UTC m=+1530.905509603" lastFinishedPulling="2025-11-28 07:17:59.696882582 +0000 UTC m=+1534.074947693" observedRunningTime="2025-11-28 07:18:01.145142896 +0000 UTC m=+1535.523208007" watchObservedRunningTime="2025-11-28 07:18:01.147793941 +0000 UTC m=+1535.525859052" Nov 28 07:18:01 crc kubenswrapper[4946]: I1128 07:18:01.163285 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.7152329269999997 podStartE2EDuration="6.16326464s" podCreationTimestamp="2025-11-28 07:17:55 +0000 UTC" firstStartedPulling="2025-11-28 07:17:56.248788328 +0000 UTC m=+1530.626853439" lastFinishedPulling="2025-11-28 07:17:59.696820041 +0000 UTC m=+1534.074885152" observedRunningTime="2025-11-28 07:18:01.162210184 +0000 UTC m=+1535.540275305" watchObservedRunningTime="2025-11-28 07:18:01.16326464 +0000 UTC m=+1535.541329751" Nov 28 07:18:01 crc kubenswrapper[4946]: I1128 07:18:01.723766 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 07:18:01 crc kubenswrapper[4946]: I1128 07:18:01.827945 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0210eecd-3bef-4e57-9c69-1c7d8e8426df-combined-ca-bundle\") pod \"0210eecd-3bef-4e57-9c69-1c7d8e8426df\" (UID: \"0210eecd-3bef-4e57-9c69-1c7d8e8426df\") " Nov 28 07:18:01 crc kubenswrapper[4946]: I1128 07:18:01.827996 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5bsg\" (UniqueName: \"kubernetes.io/projected/0210eecd-3bef-4e57-9c69-1c7d8e8426df-kube-api-access-m5bsg\") pod \"0210eecd-3bef-4e57-9c69-1c7d8e8426df\" (UID: \"0210eecd-3bef-4e57-9c69-1c7d8e8426df\") " Nov 28 07:18:01 crc kubenswrapper[4946]: I1128 07:18:01.828164 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0210eecd-3bef-4e57-9c69-1c7d8e8426df-config-data\") pod \"0210eecd-3bef-4e57-9c69-1c7d8e8426df\" (UID: \"0210eecd-3bef-4e57-9c69-1c7d8e8426df\") " Nov 28 07:18:01 crc kubenswrapper[4946]: I1128 07:18:01.828185 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0210eecd-3bef-4e57-9c69-1c7d8e8426df-logs\") pod \"0210eecd-3bef-4e57-9c69-1c7d8e8426df\" (UID: \"0210eecd-3bef-4e57-9c69-1c7d8e8426df\") " Nov 28 07:18:01 crc kubenswrapper[4946]: I1128 07:18:01.829110 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0210eecd-3bef-4e57-9c69-1c7d8e8426df-logs" (OuterVolumeSpecName: "logs") pod "0210eecd-3bef-4e57-9c69-1c7d8e8426df" (UID: "0210eecd-3bef-4e57-9c69-1c7d8e8426df"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:18:01 crc kubenswrapper[4946]: I1128 07:18:01.850599 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0210eecd-3bef-4e57-9c69-1c7d8e8426df-kube-api-access-m5bsg" (OuterVolumeSpecName: "kube-api-access-m5bsg") pod "0210eecd-3bef-4e57-9c69-1c7d8e8426df" (UID: "0210eecd-3bef-4e57-9c69-1c7d8e8426df"). InnerVolumeSpecName "kube-api-access-m5bsg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:18:01 crc kubenswrapper[4946]: I1128 07:18:01.863585 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0210eecd-3bef-4e57-9c69-1c7d8e8426df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0210eecd-3bef-4e57-9c69-1c7d8e8426df" (UID: "0210eecd-3bef-4e57-9c69-1c7d8e8426df"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:18:01 crc kubenswrapper[4946]: I1128 07:18:01.871522 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0210eecd-3bef-4e57-9c69-1c7d8e8426df-config-data" (OuterVolumeSpecName: "config-data") pod "0210eecd-3bef-4e57-9c69-1c7d8e8426df" (UID: "0210eecd-3bef-4e57-9c69-1c7d8e8426df"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:18:01 crc kubenswrapper[4946]: I1128 07:18:01.931405 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0210eecd-3bef-4e57-9c69-1c7d8e8426df-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:01 crc kubenswrapper[4946]: I1128 07:18:01.931451 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5bsg\" (UniqueName: \"kubernetes.io/projected/0210eecd-3bef-4e57-9c69-1c7d8e8426df-kube-api-access-m5bsg\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:01 crc kubenswrapper[4946]: I1128 07:18:01.931490 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0210eecd-3bef-4e57-9c69-1c7d8e8426df-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:01 crc kubenswrapper[4946]: I1128 07:18:01.931505 4946 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0210eecd-3bef-4e57-9c69-1c7d8e8426df-logs\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:02 crc kubenswrapper[4946]: I1128 07:18:02.117903 4946 generic.go:334] "Generic (PLEG): container finished" podID="0210eecd-3bef-4e57-9c69-1c7d8e8426df" containerID="c6eb564d5747d4f1e745fee7227243768688658dd993d82dcc37a9fee9bfec18" exitCode=0 Nov 28 07:18:02 crc kubenswrapper[4946]: I1128 07:18:02.117943 4946 generic.go:334] "Generic (PLEG): container finished" podID="0210eecd-3bef-4e57-9c69-1c7d8e8426df" containerID="4c2c8d2655ab9c14120bbeccac21a43c5c13106406c56524978ce0f1de8c1724" exitCode=143 Nov 28 07:18:02 crc kubenswrapper[4946]: I1128 07:18:02.119056 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 07:18:02 crc kubenswrapper[4946]: I1128 07:18:02.119533 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0210eecd-3bef-4e57-9c69-1c7d8e8426df","Type":"ContainerDied","Data":"c6eb564d5747d4f1e745fee7227243768688658dd993d82dcc37a9fee9bfec18"} Nov 28 07:18:02 crc kubenswrapper[4946]: I1128 07:18:02.119562 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0210eecd-3bef-4e57-9c69-1c7d8e8426df","Type":"ContainerDied","Data":"4c2c8d2655ab9c14120bbeccac21a43c5c13106406c56524978ce0f1de8c1724"} Nov 28 07:18:02 crc kubenswrapper[4946]: I1128 07:18:02.119576 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0210eecd-3bef-4e57-9c69-1c7d8e8426df","Type":"ContainerDied","Data":"69749b5fba71f1f79415863a4b6edb0a1fa059d58e45d8a03f0da9a7fea0aeb8"} Nov 28 07:18:02 crc kubenswrapper[4946]: I1128 07:18:02.119593 4946 scope.go:117] "RemoveContainer" containerID="c6eb564d5747d4f1e745fee7227243768688658dd993d82dcc37a9fee9bfec18" Nov 28 07:18:02 crc kubenswrapper[4946]: I1128 07:18:02.151802 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 07:18:02 crc kubenswrapper[4946]: I1128 07:18:02.163945 4946 scope.go:117] "RemoveContainer" containerID="4c2c8d2655ab9c14120bbeccac21a43c5c13106406c56524978ce0f1de8c1724" Nov 28 07:18:02 crc kubenswrapper[4946]: I1128 07:18:02.167676 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 07:18:02 crc kubenswrapper[4946]: I1128 07:18:02.195072 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 28 07:18:02 crc kubenswrapper[4946]: 
E1128 07:18:02.195795 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0210eecd-3bef-4e57-9c69-1c7d8e8426df" containerName="nova-metadata-log" Nov 28 07:18:02 crc kubenswrapper[4946]: I1128 07:18:02.195823 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="0210eecd-3bef-4e57-9c69-1c7d8e8426df" containerName="nova-metadata-log" Nov 28 07:18:02 crc kubenswrapper[4946]: E1128 07:18:02.195853 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0210eecd-3bef-4e57-9c69-1c7d8e8426df" containerName="nova-metadata-metadata" Nov 28 07:18:02 crc kubenswrapper[4946]: I1128 07:18:02.195903 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="0210eecd-3bef-4e57-9c69-1c7d8e8426df" containerName="nova-metadata-metadata" Nov 28 07:18:02 crc kubenswrapper[4946]: I1128 07:18:02.196127 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="0210eecd-3bef-4e57-9c69-1c7d8e8426df" containerName="nova-metadata-metadata" Nov 28 07:18:02 crc kubenswrapper[4946]: I1128 07:18:02.196170 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="0210eecd-3bef-4e57-9c69-1c7d8e8426df" containerName="nova-metadata-log" Nov 28 07:18:02 crc kubenswrapper[4946]: I1128 07:18:02.197713 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 07:18:02 crc kubenswrapper[4946]: I1128 07:18:02.206369 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 28 07:18:02 crc kubenswrapper[4946]: I1128 07:18:02.206511 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 28 07:18:02 crc kubenswrapper[4946]: I1128 07:18:02.231327 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 07:18:02 crc kubenswrapper[4946]: I1128 07:18:02.237521 4946 scope.go:117] "RemoveContainer" containerID="c6eb564d5747d4f1e745fee7227243768688658dd993d82dcc37a9fee9bfec18" Nov 28 07:18:02 crc kubenswrapper[4946]: E1128 07:18:02.238383 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6eb564d5747d4f1e745fee7227243768688658dd993d82dcc37a9fee9bfec18\": container with ID starting with c6eb564d5747d4f1e745fee7227243768688658dd993d82dcc37a9fee9bfec18 not found: ID does not exist" containerID="c6eb564d5747d4f1e745fee7227243768688658dd993d82dcc37a9fee9bfec18" Nov 28 07:18:02 crc kubenswrapper[4946]: I1128 07:18:02.238437 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6eb564d5747d4f1e745fee7227243768688658dd993d82dcc37a9fee9bfec18"} err="failed to get container status \"c6eb564d5747d4f1e745fee7227243768688658dd993d82dcc37a9fee9bfec18\": rpc error: code = NotFound desc = could not find container \"c6eb564d5747d4f1e745fee7227243768688658dd993d82dcc37a9fee9bfec18\": container with ID starting with c6eb564d5747d4f1e745fee7227243768688658dd993d82dcc37a9fee9bfec18 not found: ID does not exist" Nov 28 07:18:02 crc kubenswrapper[4946]: I1128 07:18:02.238547 4946 scope.go:117] "RemoveContainer" containerID="4c2c8d2655ab9c14120bbeccac21a43c5c13106406c56524978ce0f1de8c1724" Nov 28 07:18:02 crc kubenswrapper[4946]: E1128 07:18:02.239097 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c2c8d2655ab9c14120bbeccac21a43c5c13106406c56524978ce0f1de8c1724\": container with ID 
starting with 4c2c8d2655ab9c14120bbeccac21a43c5c13106406c56524978ce0f1de8c1724 not found: ID does not exist" containerID="4c2c8d2655ab9c14120bbeccac21a43c5c13106406c56524978ce0f1de8c1724" Nov 28 07:18:02 crc kubenswrapper[4946]: I1128 07:18:02.239115 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c2c8d2655ab9c14120bbeccac21a43c5c13106406c56524978ce0f1de8c1724"} err="failed to get container status \"4c2c8d2655ab9c14120bbeccac21a43c5c13106406c56524978ce0f1de8c1724\": rpc error: code = NotFound desc = could not find container \"4c2c8d2655ab9c14120bbeccac21a43c5c13106406c56524978ce0f1de8c1724\": container with ID starting with 4c2c8d2655ab9c14120bbeccac21a43c5c13106406c56524978ce0f1de8c1724 not found: ID does not exist" Nov 28 07:18:02 crc kubenswrapper[4946]: I1128 07:18:02.239128 4946 scope.go:117] "RemoveContainer" containerID="c6eb564d5747d4f1e745fee7227243768688658dd993d82dcc37a9fee9bfec18" Nov 28 07:18:02 crc kubenswrapper[4946]: I1128 07:18:02.239449 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6eb564d5747d4f1e745fee7227243768688658dd993d82dcc37a9fee9bfec18"} err="failed to get container status \"c6eb564d5747d4f1e745fee7227243768688658dd993d82dcc37a9fee9bfec18\": rpc error: code = NotFound desc = could not find container \"c6eb564d5747d4f1e745fee7227243768688658dd993d82dcc37a9fee9bfec18\": container with ID starting with c6eb564d5747d4f1e745fee7227243768688658dd993d82dcc37a9fee9bfec18 not found: ID does not exist" Nov 28 07:18:02 crc kubenswrapper[4946]: I1128 07:18:02.239498 4946 scope.go:117] "RemoveContainer" containerID="4c2c8d2655ab9c14120bbeccac21a43c5c13106406c56524978ce0f1de8c1724" Nov 28 07:18:02 crc kubenswrapper[4946]: I1128 07:18:02.239793 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c2c8d2655ab9c14120bbeccac21a43c5c13106406c56524978ce0f1de8c1724"} err="failed to get container status \"4c2c8d2655ab9c14120bbeccac21a43c5c13106406c56524978ce0f1de8c1724\": rpc error: code = NotFound desc = could not find container \"4c2c8d2655ab9c14120bbeccac21a43c5c13106406c56524978ce0f1de8c1724\": container with ID starting with 4c2c8d2655ab9c14120bbeccac21a43c5c13106406c56524978ce0f1de8c1724 not found: ID does not exist" Nov 28 07:18:02 crc kubenswrapper[4946]: I1128 07:18:02.351977 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2ptl\" (UniqueName: \"kubernetes.io/projected/31368578-5450-4e83-a5f4-f4075b195972-kube-api-access-v2ptl\") pod \"nova-metadata-0\" (UID: \"31368578-5450-4e83-a5f4-f4075b195972\") " pod="openstack/nova-metadata-0" Nov 28 07:18:02 crc kubenswrapper[4946]: I1128 07:18:02.352438 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31368578-5450-4e83-a5f4-f4075b195972-config-data\") pod \"nova-metadata-0\" (UID: \"31368578-5450-4e83-a5f4-f4075b195972\") " pod="openstack/nova-metadata-0" Nov 28 07:18:02 crc kubenswrapper[4946]: I1128 07:18:02.352544 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/31368578-5450-4e83-a5f4-f4075b195972-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"31368578-5450-4e83-a5f4-f4075b195972\") " pod="openstack/nova-metadata-0" Nov 28 07:18:02 crc kubenswrapper[4946]: I1128 07:18:02.352577 
4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31368578-5450-4e83-a5f4-f4075b195972-logs\") pod \"nova-metadata-0\" (UID: \"31368578-5450-4e83-a5f4-f4075b195972\") " pod="openstack/nova-metadata-0" Nov 28 07:18:02 crc kubenswrapper[4946]: I1128 07:18:02.352800 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31368578-5450-4e83-a5f4-f4075b195972-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"31368578-5450-4e83-a5f4-f4075b195972\") " pod="openstack/nova-metadata-0" Nov 28 07:18:02 crc kubenswrapper[4946]: I1128 07:18:02.467780 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2ptl\" (UniqueName: \"kubernetes.io/projected/31368578-5450-4e83-a5f4-f4075b195972-kube-api-access-v2ptl\") pod \"nova-metadata-0\" (UID: \"31368578-5450-4e83-a5f4-f4075b195972\") " pod="openstack/nova-metadata-0" Nov 28 07:18:02 crc kubenswrapper[4946]: I1128 07:18:02.467888 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31368578-5450-4e83-a5f4-f4075b195972-config-data\") pod \"nova-metadata-0\" (UID: \"31368578-5450-4e83-a5f4-f4075b195972\") " pod="openstack/nova-metadata-0" Nov 28 07:18:02 crc kubenswrapper[4946]: I1128 07:18:02.468294 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/31368578-5450-4e83-a5f4-f4075b195972-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"31368578-5450-4e83-a5f4-f4075b195972\") " pod="openstack/nova-metadata-0" Nov 28 07:18:02 crc kubenswrapper[4946]: I1128 07:18:02.468381 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31368578-5450-4e83-a5f4-f4075b195972-logs\") pod \"nova-metadata-0\" (UID: \"31368578-5450-4e83-a5f4-f4075b195972\") " pod="openstack/nova-metadata-0" Nov 28 07:18:02 crc kubenswrapper[4946]: I1128 07:18:02.468501 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31368578-5450-4e83-a5f4-f4075b195972-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"31368578-5450-4e83-a5f4-f4075b195972\") " pod="openstack/nova-metadata-0" Nov 28 07:18:02 crc kubenswrapper[4946]: I1128 07:18:02.468972 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31368578-5450-4e83-a5f4-f4075b195972-logs\") pod \"nova-metadata-0\" (UID: \"31368578-5450-4e83-a5f4-f4075b195972\") " pod="openstack/nova-metadata-0" Nov 28 07:18:02 crc kubenswrapper[4946]: I1128 07:18:02.478115 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/31368578-5450-4e83-a5f4-f4075b195972-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"31368578-5450-4e83-a5f4-f4075b195972\") " pod="openstack/nova-metadata-0" Nov 28 07:18:02 crc kubenswrapper[4946]: I1128 07:18:02.480255 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31368578-5450-4e83-a5f4-f4075b195972-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"31368578-5450-4e83-a5f4-f4075b195972\") " 
pod="openstack/nova-metadata-0" Nov 28 07:18:02 crc kubenswrapper[4946]: I1128 07:18:02.481186 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31368578-5450-4e83-a5f4-f4075b195972-config-data\") pod \"nova-metadata-0\" (UID: \"31368578-5450-4e83-a5f4-f4075b195972\") " pod="openstack/nova-metadata-0" Nov 28 07:18:02 crc kubenswrapper[4946]: I1128 07:18:02.489899 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2ptl\" (UniqueName: \"kubernetes.io/projected/31368578-5450-4e83-a5f4-f4075b195972-kube-api-access-v2ptl\") pod \"nova-metadata-0\" (UID: \"31368578-5450-4e83-a5f4-f4075b195972\") " pod="openstack/nova-metadata-0" Nov 28 07:18:02 crc kubenswrapper[4946]: I1128 07:18:02.542368 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 07:18:03 crc kubenswrapper[4946]: I1128 07:18:03.049503 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 07:18:03 crc kubenswrapper[4946]: W1128 07:18:03.065381 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31368578_5450_4e83_a5f4_f4075b195972.slice/crio-829a1dea75791ce8e666c689b8963bfdbedec1047152529f63a222040ad7fb21 WatchSource:0}: Error finding container 829a1dea75791ce8e666c689b8963bfdbedec1047152529f63a222040ad7fb21: Status 404 returned error can't find the container with id 829a1dea75791ce8e666c689b8963bfdbedec1047152529f63a222040ad7fb21 Nov 28 07:18:03 crc kubenswrapper[4946]: I1128 07:18:03.102576 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 28 07:18:03 crc kubenswrapper[4946]: I1128 07:18:03.162583 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"31368578-5450-4e83-a5f4-f4075b195972","Type":"ContainerStarted","Data":"829a1dea75791ce8e666c689b8963bfdbedec1047152529f63a222040ad7fb21"} Nov 28 07:18:04 crc kubenswrapper[4946]: I1128 07:18:04.007807 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0210eecd-3bef-4e57-9c69-1c7d8e8426df" path="/var/lib/kubelet/pods/0210eecd-3bef-4e57-9c69-1c7d8e8426df/volumes" Nov 28 07:18:04 crc kubenswrapper[4946]: I1128 07:18:04.204370 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"31368578-5450-4e83-a5f4-f4075b195972","Type":"ContainerStarted","Data":"d94b8f37be39b22b692fe08643c204d440a890ca5a43ac70f1ed35f81a3bc25f"} Nov 28 07:18:04 crc kubenswrapper[4946]: I1128 07:18:04.204447 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"31368578-5450-4e83-a5f4-f4075b195972","Type":"ContainerStarted","Data":"d250a9708f5498f5795040bb9e841f3fbd772ccf9d0d473b05f432aa4bdb7fde"} Nov 28 07:18:04 crc kubenswrapper[4946]: I1128 07:18:04.239890 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.239817183 podStartE2EDuration="2.239817183s" podCreationTimestamp="2025-11-28 07:18:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:18:04.231606902 +0000 UTC m=+1538.609672023" watchObservedRunningTime="2025-11-28 07:18:04.239817183 +0000 UTC m=+1538.617882294" Nov 28 07:18:05 crc kubenswrapper[4946]: I1128 07:18:05.219286 
4946 generic.go:334] "Generic (PLEG): container finished" podID="fc547561-a892-4d64-83cd-35f61e46ffbc" containerID="d1a874f7dc46f5856603f1816207ac7f5219e5bf8113380c75e03dd02f1318d2" exitCode=0 Nov 28 07:18:05 crc kubenswrapper[4946]: I1128 07:18:05.219375 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mr4lq" event={"ID":"fc547561-a892-4d64-83cd-35f61e46ffbc","Type":"ContainerDied","Data":"d1a874f7dc46f5856603f1816207ac7f5219e5bf8113380c75e03dd02f1318d2"} Nov 28 07:18:05 crc kubenswrapper[4946]: I1128 07:18:05.223182 4946 generic.go:334] "Generic (PLEG): container finished" podID="6d80e1b8-ad91-4449-bd5a-07c7c15ce996" containerID="9a6b1a4ad119bfed32568355cec70a94a7fcf22a0b1f5a06ffdb17c6c3ad1bf9" exitCode=0 Nov 28 07:18:05 crc kubenswrapper[4946]: I1128 07:18:05.223282 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-wxxkg" event={"ID":"6d80e1b8-ad91-4449-bd5a-07c7c15ce996","Type":"ContainerDied","Data":"9a6b1a4ad119bfed32568355cec70a94a7fcf22a0b1f5a06ffdb17c6c3ad1bf9"} Nov 28 07:18:05 crc kubenswrapper[4946]: I1128 07:18:05.740748 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 28 07:18:05 crc kubenswrapper[4946]: I1128 07:18:05.740807 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 28 07:18:05 crc kubenswrapper[4946]: I1128 07:18:05.870532 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 28 07:18:05 crc kubenswrapper[4946]: I1128 07:18:05.870582 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 28 07:18:05 crc kubenswrapper[4946]: I1128 07:18:05.892632 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-75df6cf455-7ctpw" Nov 28 07:18:05 crc kubenswrapper[4946]: I1128 07:18:05.908425 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 28 07:18:05 crc kubenswrapper[4946]: I1128 07:18:05.968500 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c55f6679-r8nsx"] Nov 28 07:18:05 crc kubenswrapper[4946]: I1128 07:18:05.968803 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-c55f6679-r8nsx" podUID="4fd09876-590d-478f-a778-015719592efb" containerName="dnsmasq-dns" containerID="cri-o://1473945a8753d1a76f9a5b69ee8837fb93f7c74574e19abb8e3fbd27fb2d1a13" gracePeriod=10 Nov 28 07:18:06 crc kubenswrapper[4946]: I1128 07:18:06.243375 4946 generic.go:334] "Generic (PLEG): container finished" podID="4fd09876-590d-478f-a778-015719592efb" containerID="1473945a8753d1a76f9a5b69ee8837fb93f7c74574e19abb8e3fbd27fb2d1a13" exitCode=0 Nov 28 07:18:06 crc kubenswrapper[4946]: I1128 07:18:06.243490 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c55f6679-r8nsx" event={"ID":"4fd09876-590d-478f-a778-015719592efb","Type":"ContainerDied","Data":"1473945a8753d1a76f9a5b69ee8837fb93f7c74574e19abb8e3fbd27fb2d1a13"} Nov 28 07:18:06 crc kubenswrapper[4946]: I1128 07:18:06.298165 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 28 07:18:06 crc kubenswrapper[4946]: I1128 07:18:06.634968 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c55f6679-r8nsx" Nov 28 07:18:06 crc kubenswrapper[4946]: I1128 07:18:06.719260 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4fd09876-590d-478f-a778-015719592efb-ovsdbserver-nb\") pod \"4fd09876-590d-478f-a778-015719592efb\" (UID: \"4fd09876-590d-478f-a778-015719592efb\") " Nov 28 07:18:06 crc kubenswrapper[4946]: I1128 07:18:06.719327 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4fd09876-590d-478f-a778-015719592efb-dns-swift-storage-0\") pod \"4fd09876-590d-478f-a778-015719592efb\" (UID: \"4fd09876-590d-478f-a778-015719592efb\") " Nov 28 07:18:06 crc kubenswrapper[4946]: I1128 07:18:06.719418 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55t89\" (UniqueName: \"kubernetes.io/projected/4fd09876-590d-478f-a778-015719592efb-kube-api-access-55t89\") pod \"4fd09876-590d-478f-a778-015719592efb\" (UID: \"4fd09876-590d-478f-a778-015719592efb\") " Nov 28 07:18:06 crc kubenswrapper[4946]: I1128 07:18:06.719451 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4fd09876-590d-478f-a778-015719592efb-dns-svc\") pod \"4fd09876-590d-478f-a778-015719592efb\" (UID: \"4fd09876-590d-478f-a778-015719592efb\") " Nov 28 07:18:06 crc kubenswrapper[4946]: I1128 07:18:06.719554 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4fd09876-590d-478f-a778-015719592efb-ovsdbserver-sb\") pod \"4fd09876-590d-478f-a778-015719592efb\" (UID: \"4fd09876-590d-478f-a778-015719592efb\") " Nov 28 07:18:06 crc kubenswrapper[4946]: I1128 07:18:06.719696 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fd09876-590d-478f-a778-015719592efb-config\") pod \"4fd09876-590d-478f-a778-015719592efb\" (UID: \"4fd09876-590d-478f-a778-015719592efb\") " Nov 28 07:18:06 crc kubenswrapper[4946]: I1128 07:18:06.741810 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a3c04d77-b056-4082-a3eb-64f9f4679296" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.179:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 28 07:18:06 crc kubenswrapper[4946]: I1128 07:18:06.742128 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a3c04d77-b056-4082-a3eb-64f9f4679296" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.179:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 28 07:18:06 crc kubenswrapper[4946]: I1128 07:18:06.760977 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fd09876-590d-478f-a778-015719592efb-kube-api-access-55t89" (OuterVolumeSpecName: "kube-api-access-55t89") pod "4fd09876-590d-478f-a778-015719592efb" (UID: "4fd09876-590d-478f-a778-015719592efb"). InnerVolumeSpecName "kube-api-access-55t89". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:18:06 crc kubenswrapper[4946]: I1128 07:18:06.799907 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fd09876-590d-478f-a778-015719592efb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4fd09876-590d-478f-a778-015719592efb" (UID: "4fd09876-590d-478f-a778-015719592efb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:18:06 crc kubenswrapper[4946]: I1128 07:18:06.818688 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fd09876-590d-478f-a778-015719592efb-config" (OuterVolumeSpecName: "config") pod "4fd09876-590d-478f-a778-015719592efb" (UID: "4fd09876-590d-478f-a778-015719592efb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:18:06 crc kubenswrapper[4946]: I1128 07:18:06.820962 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fd09876-590d-478f-a778-015719592efb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4fd09876-590d-478f-a778-015719592efb" (UID: "4fd09876-590d-478f-a778-015719592efb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:18:06 crc kubenswrapper[4946]: I1128 07:18:06.821152 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4fd09876-590d-478f-a778-015719592efb-ovsdbserver-nb\") pod \"4fd09876-590d-478f-a778-015719592efb\" (UID: \"4fd09876-590d-478f-a778-015719592efb\") " Nov 28 07:18:06 crc kubenswrapper[4946]: W1128 07:18:06.821251 4946 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/4fd09876-590d-478f-a778-015719592efb/volumes/kubernetes.io~configmap/ovsdbserver-nb Nov 28 07:18:06 crc kubenswrapper[4946]: I1128 07:18:06.821282 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fd09876-590d-478f-a778-015719592efb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4fd09876-590d-478f-a778-015719592efb" (UID: "4fd09876-590d-478f-a778-015719592efb"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:18:06 crc kubenswrapper[4946]: I1128 07:18:06.821992 4946 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4fd09876-590d-478f-a778-015719592efb-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:06 crc kubenswrapper[4946]: I1128 07:18:06.822008 4946 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fd09876-590d-478f-a778-015719592efb-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:06 crc kubenswrapper[4946]: I1128 07:18:06.822017 4946 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4fd09876-590d-478f-a778-015719592efb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:06 crc kubenswrapper[4946]: I1128 07:18:06.822028 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55t89\" (UniqueName: \"kubernetes.io/projected/4fd09876-590d-478f-a778-015719592efb-kube-api-access-55t89\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:06 crc kubenswrapper[4946]: I1128 07:18:06.836659 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fd09876-590d-478f-a778-015719592efb-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4fd09876-590d-478f-a778-015719592efb" (UID: "4fd09876-590d-478f-a778-015719592efb"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:18:06 crc kubenswrapper[4946]: I1128 07:18:06.847153 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-wxxkg" Nov 28 07:18:06 crc kubenswrapper[4946]: I1128 07:18:06.855232 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mr4lq" Nov 28 07:18:06 crc kubenswrapper[4946]: I1128 07:18:06.896915 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fd09876-590d-478f-a778-015719592efb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4fd09876-590d-478f-a778-015719592efb" (UID: "4fd09876-590d-478f-a778-015719592efb"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:18:06 crc kubenswrapper[4946]: I1128 07:18:06.923527 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc547561-a892-4d64-83cd-35f61e46ffbc-combined-ca-bundle\") pod \"fc547561-a892-4d64-83cd-35f61e46ffbc\" (UID: \"fc547561-a892-4d64-83cd-35f61e46ffbc\") " Nov 28 07:18:06 crc kubenswrapper[4946]: I1128 07:18:06.923586 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mz4km\" (UniqueName: \"kubernetes.io/projected/fc547561-a892-4d64-83cd-35f61e46ffbc-kube-api-access-mz4km\") pod \"fc547561-a892-4d64-83cd-35f61e46ffbc\" (UID: \"fc547561-a892-4d64-83cd-35f61e46ffbc\") " Nov 28 07:18:06 crc kubenswrapper[4946]: I1128 07:18:06.923716 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc547561-a892-4d64-83cd-35f61e46ffbc-config-data\") pod \"fc547561-a892-4d64-83cd-35f61e46ffbc\" (UID: \"fc547561-a892-4d64-83cd-35f61e46ffbc\") " Nov 28 07:18:06 crc kubenswrapper[4946]: I1128 07:18:06.923824 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d80e1b8-ad91-4449-bd5a-07c7c15ce996-scripts\") pod \"6d80e1b8-ad91-4449-bd5a-07c7c15ce996\" (UID: \"6d80e1b8-ad91-4449-bd5a-07c7c15ce996\") " Nov 28 07:18:06 crc kubenswrapper[4946]: I1128 07:18:06.923868 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d80e1b8-ad91-4449-bd5a-07c7c15ce996-combined-ca-bundle\") pod \"6d80e1b8-ad91-4449-bd5a-07c7c15ce996\" (UID: \"6d80e1b8-ad91-4449-bd5a-07c7c15ce996\") " Nov 28 07:18:06 crc kubenswrapper[4946]: I1128 07:18:06.923886 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r77vm\" (UniqueName: \"kubernetes.io/projected/6d80e1b8-ad91-4449-bd5a-07c7c15ce996-kube-api-access-r77vm\") pod \"6d80e1b8-ad91-4449-bd5a-07c7c15ce996\" (UID: \"6d80e1b8-ad91-4449-bd5a-07c7c15ce996\") " Nov 28 07:18:06 crc kubenswrapper[4946]: I1128 07:18:06.923913 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d80e1b8-ad91-4449-bd5a-07c7c15ce996-config-data\") pod \"6d80e1b8-ad91-4449-bd5a-07c7c15ce996\" (UID: \"6d80e1b8-ad91-4449-bd5a-07c7c15ce996\") " Nov 28 07:18:06 crc kubenswrapper[4946]: I1128 07:18:06.923944 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc547561-a892-4d64-83cd-35f61e46ffbc-scripts\") pod \"fc547561-a892-4d64-83cd-35f61e46ffbc\" (UID: \"fc547561-a892-4d64-83cd-35f61e46ffbc\") " Nov 28 07:18:06 crc kubenswrapper[4946]: I1128 07:18:06.924382 4946 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4fd09876-590d-478f-a778-015719592efb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:06 crc kubenswrapper[4946]: I1128 07:18:06.924398 4946 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4fd09876-590d-478f-a778-015719592efb-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:06 crc kubenswrapper[4946]: I1128 07:18:06.928587 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/projected/6d80e1b8-ad91-4449-bd5a-07c7c15ce996-kube-api-access-r77vm" (OuterVolumeSpecName: "kube-api-access-r77vm") pod "6d80e1b8-ad91-4449-bd5a-07c7c15ce996" (UID: "6d80e1b8-ad91-4449-bd5a-07c7c15ce996"). InnerVolumeSpecName "kube-api-access-r77vm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:18:06 crc kubenswrapper[4946]: I1128 07:18:06.929745 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc547561-a892-4d64-83cd-35f61e46ffbc-kube-api-access-mz4km" (OuterVolumeSpecName: "kube-api-access-mz4km") pod "fc547561-a892-4d64-83cd-35f61e46ffbc" (UID: "fc547561-a892-4d64-83cd-35f61e46ffbc"). InnerVolumeSpecName "kube-api-access-mz4km". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:18:06 crc kubenswrapper[4946]: I1128 07:18:06.929975 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc547561-a892-4d64-83cd-35f61e46ffbc-scripts" (OuterVolumeSpecName: "scripts") pod "fc547561-a892-4d64-83cd-35f61e46ffbc" (UID: "fc547561-a892-4d64-83cd-35f61e46ffbc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:18:06 crc kubenswrapper[4946]: I1128 07:18:06.934240 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d80e1b8-ad91-4449-bd5a-07c7c15ce996-scripts" (OuterVolumeSpecName: "scripts") pod "6d80e1b8-ad91-4449-bd5a-07c7c15ce996" (UID: "6d80e1b8-ad91-4449-bd5a-07c7c15ce996"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:18:06 crc kubenswrapper[4946]: I1128 07:18:06.952395 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d80e1b8-ad91-4449-bd5a-07c7c15ce996-config-data" (OuterVolumeSpecName: "config-data") pod "6d80e1b8-ad91-4449-bd5a-07c7c15ce996" (UID: "6d80e1b8-ad91-4449-bd5a-07c7c15ce996"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:18:06 crc kubenswrapper[4946]: I1128 07:18:06.963674 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d80e1b8-ad91-4449-bd5a-07c7c15ce996-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6d80e1b8-ad91-4449-bd5a-07c7c15ce996" (UID: "6d80e1b8-ad91-4449-bd5a-07c7c15ce996"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:18:06 crc kubenswrapper[4946]: I1128 07:18:06.968704 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc547561-a892-4d64-83cd-35f61e46ffbc-config-data" (OuterVolumeSpecName: "config-data") pod "fc547561-a892-4d64-83cd-35f61e46ffbc" (UID: "fc547561-a892-4d64-83cd-35f61e46ffbc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:18:06 crc kubenswrapper[4946]: I1128 07:18:06.969895 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc547561-a892-4d64-83cd-35f61e46ffbc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fc547561-a892-4d64-83cd-35f61e46ffbc" (UID: "fc547561-a892-4d64-83cd-35f61e46ffbc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:18:07 crc kubenswrapper[4946]: I1128 07:18:07.026628 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc547561-a892-4d64-83cd-35f61e46ffbc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:07 crc kubenswrapper[4946]: I1128 07:18:07.026671 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mz4km\" (UniqueName: \"kubernetes.io/projected/fc547561-a892-4d64-83cd-35f61e46ffbc-kube-api-access-mz4km\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:07 crc kubenswrapper[4946]: I1128 07:18:07.026688 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc547561-a892-4d64-83cd-35f61e46ffbc-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:07 crc kubenswrapper[4946]: I1128 07:18:07.026702 4946 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d80e1b8-ad91-4449-bd5a-07c7c15ce996-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:07 crc kubenswrapper[4946]: I1128 07:18:07.026713 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d80e1b8-ad91-4449-bd5a-07c7c15ce996-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:07 crc kubenswrapper[4946]: I1128 07:18:07.026724 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r77vm\" (UniqueName: \"kubernetes.io/projected/6d80e1b8-ad91-4449-bd5a-07c7c15ce996-kube-api-access-r77vm\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:07 crc kubenswrapper[4946]: I1128 07:18:07.026735 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d80e1b8-ad91-4449-bd5a-07c7c15ce996-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:07 crc kubenswrapper[4946]: I1128 07:18:07.026749 4946 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc547561-a892-4d64-83cd-35f61e46ffbc-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:07 crc kubenswrapper[4946]: I1128 07:18:07.263723 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c55f6679-r8nsx" event={"ID":"4fd09876-590d-478f-a778-015719592efb","Type":"ContainerDied","Data":"aab1c61afa9f795a50db4a5314a583eddcf2e9f3aabc3b80865d42149bc1e5b8"} Nov 28 07:18:07 crc kubenswrapper[4946]: I1128 07:18:07.264046 4946 scope.go:117] "RemoveContainer" containerID="1473945a8753d1a76f9a5b69ee8837fb93f7c74574e19abb8e3fbd27fb2d1a13" Nov 28 07:18:07 crc kubenswrapper[4946]: I1128 07:18:07.264299 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c55f6679-r8nsx" Nov 28 07:18:07 crc kubenswrapper[4946]: I1128 07:18:07.277219 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-wxxkg" event={"ID":"6d80e1b8-ad91-4449-bd5a-07c7c15ce996","Type":"ContainerDied","Data":"b736e42244ac71b3db4aad31e0b40382bc2551e00f501e30852a69a57d7fdb10"} Nov 28 07:18:07 crc kubenswrapper[4946]: I1128 07:18:07.277273 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b736e42244ac71b3db4aad31e0b40382bc2551e00f501e30852a69a57d7fdb10" Nov 28 07:18:07 crc kubenswrapper[4946]: I1128 07:18:07.277416 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-wxxkg" Nov 28 07:18:07 crc kubenswrapper[4946]: I1128 07:18:07.279968 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mr4lq" event={"ID":"fc547561-a892-4d64-83cd-35f61e46ffbc","Type":"ContainerDied","Data":"3ea584477ef82817fe8b52de2912e4c1d327a990a4f217a5ffa32fa988a74297"} Nov 28 07:18:07 crc kubenswrapper[4946]: I1128 07:18:07.280092 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ea584477ef82817fe8b52de2912e4c1d327a990a4f217a5ffa32fa988a74297" Nov 28 07:18:07 crc kubenswrapper[4946]: I1128 07:18:07.282234 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mr4lq" Nov 28 07:18:07 crc kubenswrapper[4946]: I1128 07:18:07.319485 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 07:18:07 crc kubenswrapper[4946]: I1128 07:18:07.321185 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="2307ed21-fd67-403c-ae9d-acc822502e42" containerName="kube-state-metrics" containerID="cri-o://ca77bf6d5c0d4d76bcac958857c9dc6a681f5bd3946886d84896f5797455fbcc" gracePeriod=30 Nov 28 07:18:07 crc kubenswrapper[4946]: I1128 07:18:07.339307 4946 scope.go:117] "RemoveContainer" containerID="9fa0bd89f0751c220d859de95dbe385d4777f8e6e5ba41138a10428806de2aa9" Nov 28 07:18:07 crc kubenswrapper[4946]: I1128 07:18:07.372298 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 28 07:18:07 crc kubenswrapper[4946]: E1128 07:18:07.372850 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fd09876-590d-478f-a778-015719592efb" containerName="init" Nov 28 07:18:07 crc kubenswrapper[4946]: I1128 07:18:07.372876 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fd09876-590d-478f-a778-015719592efb" containerName="init" Nov 28 07:18:07 crc kubenswrapper[4946]: E1128 07:18:07.372918 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc547561-a892-4d64-83cd-35f61e46ffbc" containerName="nova-manage" Nov 28 07:18:07 crc kubenswrapper[4946]: I1128 07:18:07.372926 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc547561-a892-4d64-83cd-35f61e46ffbc" containerName="nova-manage" Nov 28 07:18:07 crc kubenswrapper[4946]: E1128 07:18:07.372952 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d80e1b8-ad91-4449-bd5a-07c7c15ce996" containerName="nova-cell1-conductor-db-sync" Nov 28 07:18:07 crc kubenswrapper[4946]: I1128 07:18:07.372959 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d80e1b8-ad91-4449-bd5a-07c7c15ce996" containerName="nova-cell1-conductor-db-sync" Nov 28 07:18:07 crc kubenswrapper[4946]: E1128 07:18:07.372975 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fd09876-590d-478f-a778-015719592efb" containerName="dnsmasq-dns" Nov 28 07:18:07 crc kubenswrapper[4946]: I1128 07:18:07.372983 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fd09876-590d-478f-a778-015719592efb" containerName="dnsmasq-dns" Nov 28 07:18:07 crc kubenswrapper[4946]: I1128 07:18:07.373185 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc547561-a892-4d64-83cd-35f61e46ffbc" containerName="nova-manage" Nov 28 07:18:07 crc kubenswrapper[4946]: I1128 07:18:07.373204 4946 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="4fd09876-590d-478f-a778-015719592efb" containerName="dnsmasq-dns" Nov 28 07:18:07 crc kubenswrapper[4946]: I1128 07:18:07.373235 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d80e1b8-ad91-4449-bd5a-07c7c15ce996" containerName="nova-cell1-conductor-db-sync" Nov 28 07:18:07 crc kubenswrapper[4946]: I1128 07:18:07.374054 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 28 07:18:07 crc kubenswrapper[4946]: I1128 07:18:07.376224 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 28 07:18:07 crc kubenswrapper[4946]: I1128 07:18:07.376876 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 28 07:18:07 crc kubenswrapper[4946]: I1128 07:18:07.446663 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db7322c8-b99d-4970-85c0-218d683f1ca3-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"db7322c8-b99d-4970-85c0-218d683f1ca3\") " pod="openstack/nova-cell1-conductor-0" Nov 28 07:18:07 crc kubenswrapper[4946]: I1128 07:18:07.446812 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tcv2\" (UniqueName: \"kubernetes.io/projected/db7322c8-b99d-4970-85c0-218d683f1ca3-kube-api-access-6tcv2\") pod \"nova-cell1-conductor-0\" (UID: \"db7322c8-b99d-4970-85c0-218d683f1ca3\") " pod="openstack/nova-cell1-conductor-0" Nov 28 07:18:07 crc kubenswrapper[4946]: I1128 07:18:07.446931 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db7322c8-b99d-4970-85c0-218d683f1ca3-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"db7322c8-b99d-4970-85c0-218d683f1ca3\") " pod="openstack/nova-cell1-conductor-0" Nov 28 07:18:07 crc kubenswrapper[4946]: I1128 07:18:07.483953 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 28 07:18:07 crc kubenswrapper[4946]: I1128 07:18:07.484293 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a3c04d77-b056-4082-a3eb-64f9f4679296" containerName="nova-api-api" containerID="cri-o://eb0a5ac8d697a88614fd21d06b02ba39f498724f65d07996731adc13cd44ad4b" gracePeriod=30 Nov 28 07:18:07 crc kubenswrapper[4946]: I1128 07:18:07.484234 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a3c04d77-b056-4082-a3eb-64f9f4679296" containerName="nova-api-log" containerID="cri-o://5e04565953fdb740a56d881500149e6fb75f2cb1d1d6ee266392ee03ee31197f" gracePeriod=30 Nov 28 07:18:07 crc kubenswrapper[4946]: I1128 07:18:07.507022 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 07:18:07 crc kubenswrapper[4946]: I1128 07:18:07.542585 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 28 07:18:07 crc kubenswrapper[4946]: I1128 07:18:07.542637 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 28 07:18:07 crc kubenswrapper[4946]: I1128 07:18:07.550154 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/db7322c8-b99d-4970-85c0-218d683f1ca3-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"db7322c8-b99d-4970-85c0-218d683f1ca3\") " pod="openstack/nova-cell1-conductor-0" Nov 28 07:18:07 crc kubenswrapper[4946]: I1128 07:18:07.550215 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db7322c8-b99d-4970-85c0-218d683f1ca3-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"db7322c8-b99d-4970-85c0-218d683f1ca3\") " pod="openstack/nova-cell1-conductor-0" Nov 28 07:18:07 crc kubenswrapper[4946]: I1128 07:18:07.550305 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tcv2\" (UniqueName: \"kubernetes.io/projected/db7322c8-b99d-4970-85c0-218d683f1ca3-kube-api-access-6tcv2\") pod \"nova-cell1-conductor-0\" (UID: \"db7322c8-b99d-4970-85c0-218d683f1ca3\") " pod="openstack/nova-cell1-conductor-0" Nov 28 07:18:07 crc kubenswrapper[4946]: I1128 07:18:07.558142 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c55f6679-r8nsx"] Nov 28 07:18:07 crc kubenswrapper[4946]: I1128 07:18:07.573447 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db7322c8-b99d-4970-85c0-218d683f1ca3-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"db7322c8-b99d-4970-85c0-218d683f1ca3\") " pod="openstack/nova-cell1-conductor-0" Nov 28 07:18:07 crc kubenswrapper[4946]: I1128 07:18:07.575553 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db7322c8-b99d-4970-85c0-218d683f1ca3-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"db7322c8-b99d-4970-85c0-218d683f1ca3\") " pod="openstack/nova-cell1-conductor-0" Nov 28 07:18:07 crc kubenswrapper[4946]: I1128 07:18:07.582049 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tcv2\" (UniqueName: \"kubernetes.io/projected/db7322c8-b99d-4970-85c0-218d683f1ca3-kube-api-access-6tcv2\") pod \"nova-cell1-conductor-0\" (UID: \"db7322c8-b99d-4970-85c0-218d683f1ca3\") " pod="openstack/nova-cell1-conductor-0" Nov 28 07:18:07 crc kubenswrapper[4946]: I1128 07:18:07.603599 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-c55f6679-r8nsx"] Nov 28 07:18:07 crc kubenswrapper[4946]: I1128 07:18:07.652251 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 07:18:07 crc kubenswrapper[4946]: I1128 07:18:07.830746 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 28 07:18:07 crc kubenswrapper[4946]: I1128 07:18:07.846961 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 28 07:18:07 crc kubenswrapper[4946]: I1128 07:18:07.958935 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vg69\" (UniqueName: \"kubernetes.io/projected/2307ed21-fd67-403c-ae9d-acc822502e42-kube-api-access-9vg69\") pod \"2307ed21-fd67-403c-ae9d-acc822502e42\" (UID: \"2307ed21-fd67-403c-ae9d-acc822502e42\") " Nov 28 07:18:07 crc kubenswrapper[4946]: I1128 07:18:07.966579 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2307ed21-fd67-403c-ae9d-acc822502e42-kube-api-access-9vg69" (OuterVolumeSpecName: "kube-api-access-9vg69") pod "2307ed21-fd67-403c-ae9d-acc822502e42" (UID: "2307ed21-fd67-403c-ae9d-acc822502e42"). InnerVolumeSpecName "kube-api-access-9vg69". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:18:08 crc kubenswrapper[4946]: I1128 07:18:08.042220 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fd09876-590d-478f-a778-015719592efb" path="/var/lib/kubelet/pods/4fd09876-590d-478f-a778-015719592efb/volumes" Nov 28 07:18:08 crc kubenswrapper[4946]: I1128 07:18:08.064878 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vg69\" (UniqueName: \"kubernetes.io/projected/2307ed21-fd67-403c-ae9d-acc822502e42-kube-api-access-9vg69\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:08 crc kubenswrapper[4946]: I1128 07:18:08.291562 4946 generic.go:334] "Generic (PLEG): container finished" podID="2307ed21-fd67-403c-ae9d-acc822502e42" containerID="ca77bf6d5c0d4d76bcac958857c9dc6a681f5bd3946886d84896f5797455fbcc" exitCode=2 Nov 28 07:18:08 crc kubenswrapper[4946]: I1128 07:18:08.291640 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 28 07:18:08 crc kubenswrapper[4946]: I1128 07:18:08.291675 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2307ed21-fd67-403c-ae9d-acc822502e42","Type":"ContainerDied","Data":"ca77bf6d5c0d4d76bcac958857c9dc6a681f5bd3946886d84896f5797455fbcc"} Nov 28 07:18:08 crc kubenswrapper[4946]: I1128 07:18:08.294147 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2307ed21-fd67-403c-ae9d-acc822502e42","Type":"ContainerDied","Data":"f79bc806764927e9705860d7bfe49d6ea7c1e6ce03da5f3ca44e08fd7e344a8c"} Nov 28 07:18:08 crc kubenswrapper[4946]: I1128 07:18:08.294175 4946 scope.go:117] "RemoveContainer" containerID="ca77bf6d5c0d4d76bcac958857c9dc6a681f5bd3946886d84896f5797455fbcc" Nov 28 07:18:08 crc kubenswrapper[4946]: I1128 07:18:08.307401 4946 generic.go:334] "Generic (PLEG): container finished" podID="a3c04d77-b056-4082-a3eb-64f9f4679296" containerID="5e04565953fdb740a56d881500149e6fb75f2cb1d1d6ee266392ee03ee31197f" exitCode=143 Nov 28 07:18:08 crc kubenswrapper[4946]: I1128 07:18:08.307519 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a3c04d77-b056-4082-a3eb-64f9f4679296","Type":"ContainerDied","Data":"5e04565953fdb740a56d881500149e6fb75f2cb1d1d6ee266392ee03ee31197f"} Nov 28 07:18:08 crc kubenswrapper[4946]: I1128 07:18:08.325873 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="31368578-5450-4e83-a5f4-f4075b195972" containerName="nova-metadata-log" containerID="cri-o://d250a9708f5498f5795040bb9e841f3fbd772ccf9d0d473b05f432aa4bdb7fde" gracePeriod=30 Nov 28 07:18:08 crc kubenswrapper[4946]: I1128 07:18:08.326066 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="31368578-5450-4e83-a5f4-f4075b195972" containerName="nova-metadata-metadata" containerID="cri-o://d94b8f37be39b22b692fe08643c204d440a890ca5a43ac70f1ed35f81a3bc25f" gracePeriod=30 Nov 28 07:18:08 crc kubenswrapper[4946]: I1128 07:18:08.328957 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="db67bc15-ec43-4391-8aa9-b0c214813024" containerName="nova-scheduler-scheduler" containerID="cri-o://01f3959a3fddadfeb293023b6dbdd4f890722a5e2dfba684d15de2a137b654ab" gracePeriod=30 Nov 28 07:18:08 crc kubenswrapper[4946]: I1128 07:18:08.329189 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 07:18:08 crc kubenswrapper[4946]: I1128 07:18:08.334429 4946 scope.go:117] "RemoveContainer" containerID="ca77bf6d5c0d4d76bcac958857c9dc6a681f5bd3946886d84896f5797455fbcc" Nov 28 07:18:08 crc kubenswrapper[4946]: E1128 07:18:08.338540 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca77bf6d5c0d4d76bcac958857c9dc6a681f5bd3946886d84896f5797455fbcc\": container with ID starting with ca77bf6d5c0d4d76bcac958857c9dc6a681f5bd3946886d84896f5797455fbcc not found: ID does not exist" containerID="ca77bf6d5c0d4d76bcac958857c9dc6a681f5bd3946886d84896f5797455fbcc" Nov 28 07:18:08 crc kubenswrapper[4946]: I1128 07:18:08.338547 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 07:18:08 crc kubenswrapper[4946]: I1128 07:18:08.338583 4946 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"ca77bf6d5c0d4d76bcac958857c9dc6a681f5bd3946886d84896f5797455fbcc"} err="failed to get container status \"ca77bf6d5c0d4d76bcac958857c9dc6a681f5bd3946886d84896f5797455fbcc\": rpc error: code = NotFound desc = could not find container \"ca77bf6d5c0d4d76bcac958857c9dc6a681f5bd3946886d84896f5797455fbcc\": container with ID starting with ca77bf6d5c0d4d76bcac958857c9dc6a681f5bd3946886d84896f5797455fbcc not found: ID does not exist" Nov 28 07:18:08 crc kubenswrapper[4946]: I1128 07:18:08.349720 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 07:18:08 crc kubenswrapper[4946]: E1128 07:18:08.350943 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2307ed21-fd67-403c-ae9d-acc822502e42" containerName="kube-state-metrics" Nov 28 07:18:08 crc kubenswrapper[4946]: I1128 07:18:08.351085 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="2307ed21-fd67-403c-ae9d-acc822502e42" containerName="kube-state-metrics" Nov 28 07:18:08 crc kubenswrapper[4946]: I1128 07:18:08.351705 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="2307ed21-fd67-403c-ae9d-acc822502e42" containerName="kube-state-metrics" Nov 28 07:18:08 crc kubenswrapper[4946]: I1128 07:18:08.355344 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 28 07:18:08 crc kubenswrapper[4946]: I1128 07:18:08.365019 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Nov 28 07:18:08 crc kubenswrapper[4946]: I1128 07:18:08.365315 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Nov 28 07:18:08 crc kubenswrapper[4946]: I1128 07:18:08.377330 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 07:18:08 crc kubenswrapper[4946]: W1128 07:18:08.442122 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb7322c8_b99d_4970_85c0_218d683f1ca3.slice/crio-8bdb5f4fee871ff9da23960598cc8a268908f38bdfb5abdf2c002b509d1ec125 WatchSource:0}: Error finding container 8bdb5f4fee871ff9da23960598cc8a268908f38bdfb5abdf2c002b509d1ec125: Status 404 returned error can't find the container with id 8bdb5f4fee871ff9da23960598cc8a268908f38bdfb5abdf2c002b509d1ec125 Nov 28 07:18:08 crc kubenswrapper[4946]: I1128 07:18:08.455979 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 28 07:18:08 crc kubenswrapper[4946]: I1128 07:18:08.473832 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/376693c8-e03f-4085-9be2-0ef9a0e27c5c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"376693c8-e03f-4085-9be2-0ef9a0e27c5c\") " pod="openstack/kube-state-metrics-0" Nov 28 07:18:08 crc kubenswrapper[4946]: I1128 07:18:08.473886 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/376693c8-e03f-4085-9be2-0ef9a0e27c5c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"376693c8-e03f-4085-9be2-0ef9a0e27c5c\") " pod="openstack/kube-state-metrics-0" Nov 28 07:18:08 crc kubenswrapper[4946]: I1128 07:18:08.474210 4946 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/376693c8-e03f-4085-9be2-0ef9a0e27c5c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"376693c8-e03f-4085-9be2-0ef9a0e27c5c\") " pod="openstack/kube-state-metrics-0" Nov 28 07:18:08 crc kubenswrapper[4946]: I1128 07:18:08.474275 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxpd9\" (UniqueName: \"kubernetes.io/projected/376693c8-e03f-4085-9be2-0ef9a0e27c5c-kube-api-access-dxpd9\") pod \"kube-state-metrics-0\" (UID: \"376693c8-e03f-4085-9be2-0ef9a0e27c5c\") " pod="openstack/kube-state-metrics-0" Nov 28 07:18:08 crc kubenswrapper[4946]: I1128 07:18:08.577765 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxpd9\" (UniqueName: \"kubernetes.io/projected/376693c8-e03f-4085-9be2-0ef9a0e27c5c-kube-api-access-dxpd9\") pod \"kube-state-metrics-0\" (UID: \"376693c8-e03f-4085-9be2-0ef9a0e27c5c\") " pod="openstack/kube-state-metrics-0" Nov 28 07:18:08 crc kubenswrapper[4946]: I1128 07:18:08.577877 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/376693c8-e03f-4085-9be2-0ef9a0e27c5c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"376693c8-e03f-4085-9be2-0ef9a0e27c5c\") " pod="openstack/kube-state-metrics-0" Nov 28 07:18:08 crc kubenswrapper[4946]: I1128 07:18:08.577943 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/376693c8-e03f-4085-9be2-0ef9a0e27c5c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"376693c8-e03f-4085-9be2-0ef9a0e27c5c\") " pod="openstack/kube-state-metrics-0" Nov 28 07:18:08 crc kubenswrapper[4946]: I1128 07:18:08.579145 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/376693c8-e03f-4085-9be2-0ef9a0e27c5c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"376693c8-e03f-4085-9be2-0ef9a0e27c5c\") " pod="openstack/kube-state-metrics-0" Nov 28 07:18:08 crc kubenswrapper[4946]: I1128 07:18:08.582883 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/376693c8-e03f-4085-9be2-0ef9a0e27c5c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"376693c8-e03f-4085-9be2-0ef9a0e27c5c\") " pod="openstack/kube-state-metrics-0" Nov 28 07:18:08 crc kubenswrapper[4946]: I1128 07:18:08.584260 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/376693c8-e03f-4085-9be2-0ef9a0e27c5c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"376693c8-e03f-4085-9be2-0ef9a0e27c5c\") " pod="openstack/kube-state-metrics-0" Nov 28 07:18:08 crc kubenswrapper[4946]: I1128 07:18:08.589054 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/376693c8-e03f-4085-9be2-0ef9a0e27c5c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"376693c8-e03f-4085-9be2-0ef9a0e27c5c\") " pod="openstack/kube-state-metrics-0" Nov 28 07:18:08 crc kubenswrapper[4946]: I1128 07:18:08.598967 4946 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dxpd9\" (UniqueName: \"kubernetes.io/projected/376693c8-e03f-4085-9be2-0ef9a0e27c5c-kube-api-access-dxpd9\") pod \"kube-state-metrics-0\" (UID: \"376693c8-e03f-4085-9be2-0ef9a0e27c5c\") " pod="openstack/kube-state-metrics-0" Nov 28 07:18:08 crc kubenswrapper[4946]: I1128 07:18:08.685212 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 28 07:18:08 crc kubenswrapper[4946]: I1128 07:18:08.884272 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 07:18:08 crc kubenswrapper[4946]: I1128 07:18:08.987284 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/31368578-5450-4e83-a5f4-f4075b195972-nova-metadata-tls-certs\") pod \"31368578-5450-4e83-a5f4-f4075b195972\" (UID: \"31368578-5450-4e83-a5f4-f4075b195972\") " Nov 28 07:18:08 crc kubenswrapper[4946]: I1128 07:18:08.987679 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31368578-5450-4e83-a5f4-f4075b195972-logs\") pod \"31368578-5450-4e83-a5f4-f4075b195972\" (UID: \"31368578-5450-4e83-a5f4-f4075b195972\") " Nov 28 07:18:08 crc kubenswrapper[4946]: I1128 07:18:08.987871 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31368578-5450-4e83-a5f4-f4075b195972-config-data\") pod \"31368578-5450-4e83-a5f4-f4075b195972\" (UID: \"31368578-5450-4e83-a5f4-f4075b195972\") " Nov 28 07:18:08 crc kubenswrapper[4946]: I1128 07:18:08.987968 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31368578-5450-4e83-a5f4-f4075b195972-combined-ca-bundle\") pod \"31368578-5450-4e83-a5f4-f4075b195972\" (UID: \"31368578-5450-4e83-a5f4-f4075b195972\") " Nov 28 07:18:08 crc kubenswrapper[4946]: I1128 07:18:08.988021 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2ptl\" (UniqueName: \"kubernetes.io/projected/31368578-5450-4e83-a5f4-f4075b195972-kube-api-access-v2ptl\") pod \"31368578-5450-4e83-a5f4-f4075b195972\" (UID: \"31368578-5450-4e83-a5f4-f4075b195972\") " Nov 28 07:18:08 crc kubenswrapper[4946]: I1128 07:18:08.989036 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31368578-5450-4e83-a5f4-f4075b195972-logs" (OuterVolumeSpecName: "logs") pod "31368578-5450-4e83-a5f4-f4075b195972" (UID: "31368578-5450-4e83-a5f4-f4075b195972"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:18:08 crc kubenswrapper[4946]: I1128 07:18:08.998846 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31368578-5450-4e83-a5f4-f4075b195972-kube-api-access-v2ptl" (OuterVolumeSpecName: "kube-api-access-v2ptl") pod "31368578-5450-4e83-a5f4-f4075b195972" (UID: "31368578-5450-4e83-a5f4-f4075b195972"). InnerVolumeSpecName "kube-api-access-v2ptl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.018292 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31368578-5450-4e83-a5f4-f4075b195972-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "31368578-5450-4e83-a5f4-f4075b195972" (UID: "31368578-5450-4e83-a5f4-f4075b195972"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.023114 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31368578-5450-4e83-a5f4-f4075b195972-config-data" (OuterVolumeSpecName: "config-data") pod "31368578-5450-4e83-a5f4-f4075b195972" (UID: "31368578-5450-4e83-a5f4-f4075b195972"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.064918 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31368578-5450-4e83-a5f4-f4075b195972-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "31368578-5450-4e83-a5f4-f4075b195972" (UID: "31368578-5450-4e83-a5f4-f4075b195972"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.091409 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2ptl\" (UniqueName: \"kubernetes.io/projected/31368578-5450-4e83-a5f4-f4075b195972-kube-api-access-v2ptl\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.091449 4946 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/31368578-5450-4e83-a5f4-f4075b195972-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.091978 4946 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31368578-5450-4e83-a5f4-f4075b195972-logs\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.092439 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31368578-5450-4e83-a5f4-f4075b195972-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.092495 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31368578-5450-4e83-a5f4-f4075b195972-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.202080 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.340693 4946 generic.go:334] "Generic (PLEG): container finished" podID="31368578-5450-4e83-a5f4-f4075b195972" containerID="d94b8f37be39b22b692fe08643c204d440a890ca5a43ac70f1ed35f81a3bc25f" exitCode=0 Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.340729 4946 generic.go:334] "Generic (PLEG): container finished" podID="31368578-5450-4e83-a5f4-f4075b195972" containerID="d250a9708f5498f5795040bb9e841f3fbd772ccf9d0d473b05f432aa4bdb7fde" exitCode=143 Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.340763 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"31368578-5450-4e83-a5f4-f4075b195972","Type":"ContainerDied","Data":"d94b8f37be39b22b692fe08643c204d440a890ca5a43ac70f1ed35f81a3bc25f"} Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.340792 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"31368578-5450-4e83-a5f4-f4075b195972","Type":"ContainerDied","Data":"d250a9708f5498f5795040bb9e841f3fbd772ccf9d0d473b05f432aa4bdb7fde"} Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.340804 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"31368578-5450-4e83-a5f4-f4075b195972","Type":"ContainerDied","Data":"829a1dea75791ce8e666c689b8963bfdbedec1047152529f63a222040ad7fb21"} Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.340823 4946 scope.go:117] "RemoveContainer" containerID="d94b8f37be39b22b692fe08643c204d440a890ca5a43ac70f1ed35f81a3bc25f" Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.340929 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.347727 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"db7322c8-b99d-4970-85c0-218d683f1ca3","Type":"ContainerStarted","Data":"aff8824b6f9748bbf0caf47e4c6c01486fb6b04768113350feeb4bb43a17bff5"} Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.347782 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"db7322c8-b99d-4970-85c0-218d683f1ca3","Type":"ContainerStarted","Data":"8bdb5f4fee871ff9da23960598cc8a268908f38bdfb5abdf2c002b509d1ec125"} Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.348693 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.350627 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"376693c8-e03f-4085-9be2-0ef9a0e27c5c","Type":"ContainerStarted","Data":"fcaa1cb1eda846a1f782d645fb204b00da3dd8440a9ad15f6a22767eb3dd67de"} Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.377915 4946 scope.go:117] "RemoveContainer" containerID="d250a9708f5498f5795040bb9e841f3fbd772ccf9d0d473b05f432aa4bdb7fde" Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.381913 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.381885097 podStartE2EDuration="2.381885097s" podCreationTimestamp="2025-11-28 07:18:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:18:09.363756822 +0000 UTC m=+1543.741821933" watchObservedRunningTime="2025-11-28 07:18:09.381885097 +0000 UTC m=+1543.759950208" Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.453838 4946 scope.go:117] "RemoveContainer" containerID="d94b8f37be39b22b692fe08643c204d440a890ca5a43ac70f1ed35f81a3bc25f" Nov 28 07:18:09 crc kubenswrapper[4946]: E1128 07:18:09.456898 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d94b8f37be39b22b692fe08643c204d440a890ca5a43ac70f1ed35f81a3bc25f\": container with ID starting with d94b8f37be39b22b692fe08643c204d440a890ca5a43ac70f1ed35f81a3bc25f not found: ID does not exist" 
containerID="d94b8f37be39b22b692fe08643c204d440a890ca5a43ac70f1ed35f81a3bc25f" Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.457218 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d94b8f37be39b22b692fe08643c204d440a890ca5a43ac70f1ed35f81a3bc25f"} err="failed to get container status \"d94b8f37be39b22b692fe08643c204d440a890ca5a43ac70f1ed35f81a3bc25f\": rpc error: code = NotFound desc = could not find container \"d94b8f37be39b22b692fe08643c204d440a890ca5a43ac70f1ed35f81a3bc25f\": container with ID starting with d94b8f37be39b22b692fe08643c204d440a890ca5a43ac70f1ed35f81a3bc25f not found: ID does not exist" Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.457361 4946 scope.go:117] "RemoveContainer" containerID="d250a9708f5498f5795040bb9e841f3fbd772ccf9d0d473b05f432aa4bdb7fde" Nov 28 07:18:09 crc kubenswrapper[4946]: E1128 07:18:09.458563 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d250a9708f5498f5795040bb9e841f3fbd772ccf9d0d473b05f432aa4bdb7fde\": container with ID starting with d250a9708f5498f5795040bb9e841f3fbd772ccf9d0d473b05f432aa4bdb7fde not found: ID does not exist" containerID="d250a9708f5498f5795040bb9e841f3fbd772ccf9d0d473b05f432aa4bdb7fde" Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.458611 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d250a9708f5498f5795040bb9e841f3fbd772ccf9d0d473b05f432aa4bdb7fde"} err="failed to get container status \"d250a9708f5498f5795040bb9e841f3fbd772ccf9d0d473b05f432aa4bdb7fde\": rpc error: code = NotFound desc = could not find container \"d250a9708f5498f5795040bb9e841f3fbd772ccf9d0d473b05f432aa4bdb7fde\": container with ID starting with d250a9708f5498f5795040bb9e841f3fbd772ccf9d0d473b05f432aa4bdb7fde not found: ID does not exist" Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.458641 4946 scope.go:117] "RemoveContainer" containerID="d94b8f37be39b22b692fe08643c204d440a890ca5a43ac70f1ed35f81a3bc25f" Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.462793 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d94b8f37be39b22b692fe08643c204d440a890ca5a43ac70f1ed35f81a3bc25f"} err="failed to get container status \"d94b8f37be39b22b692fe08643c204d440a890ca5a43ac70f1ed35f81a3bc25f\": rpc error: code = NotFound desc = could not find container \"d94b8f37be39b22b692fe08643c204d440a890ca5a43ac70f1ed35f81a3bc25f\": container with ID starting with d94b8f37be39b22b692fe08643c204d440a890ca5a43ac70f1ed35f81a3bc25f not found: ID does not exist" Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.464758 4946 scope.go:117] "RemoveContainer" containerID="d250a9708f5498f5795040bb9e841f3fbd772ccf9d0d473b05f432aa4bdb7fde" Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.466819 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d250a9708f5498f5795040bb9e841f3fbd772ccf9d0d473b05f432aa4bdb7fde"} err="failed to get container status \"d250a9708f5498f5795040bb9e841f3fbd772ccf9d0d473b05f432aa4bdb7fde\": rpc error: code = NotFound desc = could not find container \"d250a9708f5498f5795040bb9e841f3fbd772ccf9d0d473b05f432aa4bdb7fde\": container with ID starting with d250a9708f5498f5795040bb9e841f3fbd772ccf9d0d473b05f432aa4bdb7fde not found: ID does not exist" Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.467936 4946 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.482594 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.494205 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 28 07:18:09 crc kubenswrapper[4946]: E1128 07:18:09.495070 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31368578-5450-4e83-a5f4-f4075b195972" containerName="nova-metadata-log" Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.495094 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="31368578-5450-4e83-a5f4-f4075b195972" containerName="nova-metadata-log" Nov 28 07:18:09 crc kubenswrapper[4946]: E1128 07:18:09.495344 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31368578-5450-4e83-a5f4-f4075b195972" containerName="nova-metadata-metadata" Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.495359 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="31368578-5450-4e83-a5f4-f4075b195972" containerName="nova-metadata-metadata" Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.495583 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="31368578-5450-4e83-a5f4-f4075b195972" containerName="nova-metadata-log" Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.495619 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="31368578-5450-4e83-a5f4-f4075b195972" containerName="nova-metadata-metadata" Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.499171 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.501199 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.502131 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.503074 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.611379 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2a58ce5-5b99-461d-b7b9-ebeb95624d70-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e2a58ce5-5b99-461d-b7b9-ebeb95624d70\") " pod="openstack/nova-metadata-0" Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.611499 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7prx\" (UniqueName: \"kubernetes.io/projected/e2a58ce5-5b99-461d-b7b9-ebeb95624d70-kube-api-access-v7prx\") pod \"nova-metadata-0\" (UID: \"e2a58ce5-5b99-461d-b7b9-ebeb95624d70\") " pod="openstack/nova-metadata-0" Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.611549 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2a58ce5-5b99-461d-b7b9-ebeb95624d70-config-data\") pod \"nova-metadata-0\" (UID: \"e2a58ce5-5b99-461d-b7b9-ebeb95624d70\") " pod="openstack/nova-metadata-0" Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.612036 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2a58ce5-5b99-461d-b7b9-ebeb95624d70-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e2a58ce5-5b99-461d-b7b9-ebeb95624d70\") " pod="openstack/nova-metadata-0" Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.612760 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2a58ce5-5b99-461d-b7b9-ebeb95624d70-logs\") pod \"nova-metadata-0\" (UID: \"e2a58ce5-5b99-461d-b7b9-ebeb95624d70\") " pod="openstack/nova-metadata-0" Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.716138 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2a58ce5-5b99-461d-b7b9-ebeb95624d70-logs\") pod \"nova-metadata-0\" (UID: \"e2a58ce5-5b99-461d-b7b9-ebeb95624d70\") " pod="openstack/nova-metadata-0" Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.716877 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2a58ce5-5b99-461d-b7b9-ebeb95624d70-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e2a58ce5-5b99-461d-b7b9-ebeb95624d70\") " pod="openstack/nova-metadata-0" Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.716956 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7prx\" (UniqueName: \"kubernetes.io/projected/e2a58ce5-5b99-461d-b7b9-ebeb95624d70-kube-api-access-v7prx\") pod \"nova-metadata-0\" (UID: \"e2a58ce5-5b99-461d-b7b9-ebeb95624d70\") " pod="openstack/nova-metadata-0" Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.717020 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2a58ce5-5b99-461d-b7b9-ebeb95624d70-config-data\") pod \"nova-metadata-0\" (UID: \"e2a58ce5-5b99-461d-b7b9-ebeb95624d70\") " pod="openstack/nova-metadata-0" Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.717096 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2a58ce5-5b99-461d-b7b9-ebeb95624d70-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e2a58ce5-5b99-461d-b7b9-ebeb95624d70\") " pod="openstack/nova-metadata-0" Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.716726 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2a58ce5-5b99-461d-b7b9-ebeb95624d70-logs\") pod \"nova-metadata-0\" (UID: \"e2a58ce5-5b99-461d-b7b9-ebeb95624d70\") " pod="openstack/nova-metadata-0" Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.721326 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2a58ce5-5b99-461d-b7b9-ebeb95624d70-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e2a58ce5-5b99-461d-b7b9-ebeb95624d70\") " pod="openstack/nova-metadata-0" Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.721886 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2a58ce5-5b99-461d-b7b9-ebeb95624d70-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e2a58ce5-5b99-461d-b7b9-ebeb95624d70\") " pod="openstack/nova-metadata-0" Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 
07:18:09.722525 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2a58ce5-5b99-461d-b7b9-ebeb95624d70-config-data\") pod \"nova-metadata-0\" (UID: \"e2a58ce5-5b99-461d-b7b9-ebeb95624d70\") " pod="openstack/nova-metadata-0" Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.736291 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7prx\" (UniqueName: \"kubernetes.io/projected/e2a58ce5-5b99-461d-b7b9-ebeb95624d70-kube-api-access-v7prx\") pod \"nova-metadata-0\" (UID: \"e2a58ce5-5b99-461d-b7b9-ebeb95624d70\") " pod="openstack/nova-metadata-0" Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.818326 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.950819 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.951484 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9f63766d-f8ca-45f4-a9b0-12a48917af68" containerName="ceilometer-central-agent" containerID="cri-o://6c7d264059d252430d87f0b0e94282ba9f85fc631b483117759d6db21b5d7867" gracePeriod=30 Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.951817 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9f63766d-f8ca-45f4-a9b0-12a48917af68" containerName="proxy-httpd" containerID="cri-o://d88b97a54777bafa1a703658388dacffc37ec6f4337066f1021a59a5567b8ad5" gracePeriod=30 Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.952109 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9f63766d-f8ca-45f4-a9b0-12a48917af68" containerName="ceilometer-notification-agent" containerID="cri-o://9d8270d9349b37ee7bca9009e6a1267b7660fdda5e573f31309b147bfbf1f0dd" gracePeriod=30 Nov 28 07:18:09 crc kubenswrapper[4946]: I1128 07:18:09.952189 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9f63766d-f8ca-45f4-a9b0-12a48917af68" containerName="sg-core" containerID="cri-o://83cec6a77a4e88418903dfaf539eba1629b9e34b8cde6e47872b3a75d523845f" gracePeriod=30 Nov 28 07:18:10 crc kubenswrapper[4946]: I1128 07:18:10.022986 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2307ed21-fd67-403c-ae9d-acc822502e42" path="/var/lib/kubelet/pods/2307ed21-fd67-403c-ae9d-acc822502e42/volumes" Nov 28 07:18:10 crc kubenswrapper[4946]: I1128 07:18:10.023671 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31368578-5450-4e83-a5f4-f4075b195972" path="/var/lib/kubelet/pods/31368578-5450-4e83-a5f4-f4075b195972/volumes" Nov 28 07:18:10 crc kubenswrapper[4946]: I1128 07:18:10.316842 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 07:18:10 crc kubenswrapper[4946]: W1128 07:18:10.320710 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2a58ce5_5b99_461d_b7b9_ebeb95624d70.slice/crio-d39a0b98b7980c95c6a17611f89440f0e1128c288d138babd30c82dc2c8bcecc WatchSource:0}: Error finding container d39a0b98b7980c95c6a17611f89440f0e1128c288d138babd30c82dc2c8bcecc: Status 404 returned error can't find the container with id 
d39a0b98b7980c95c6a17611f89440f0e1128c288d138babd30c82dc2c8bcecc Nov 28 07:18:10 crc kubenswrapper[4946]: I1128 07:18:10.365092 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"376693c8-e03f-4085-9be2-0ef9a0e27c5c","Type":"ContainerStarted","Data":"0c9ce058f230f9783a496b0b6cde4e3a924eca33a5a495750e924592c8150850"} Nov 28 07:18:10 crc kubenswrapper[4946]: I1128 07:18:10.366437 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e2a58ce5-5b99-461d-b7b9-ebeb95624d70","Type":"ContainerStarted","Data":"d39a0b98b7980c95c6a17611f89440f0e1128c288d138babd30c82dc2c8bcecc"} Nov 28 07:18:10 crc kubenswrapper[4946]: I1128 07:18:10.366797 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 28 07:18:10 crc kubenswrapper[4946]: I1128 07:18:10.369231 4946 generic.go:334] "Generic (PLEG): container finished" podID="9f63766d-f8ca-45f4-a9b0-12a48917af68" containerID="d88b97a54777bafa1a703658388dacffc37ec6f4337066f1021a59a5567b8ad5" exitCode=0 Nov 28 07:18:10 crc kubenswrapper[4946]: I1128 07:18:10.369256 4946 generic.go:334] "Generic (PLEG): container finished" podID="9f63766d-f8ca-45f4-a9b0-12a48917af68" containerID="83cec6a77a4e88418903dfaf539eba1629b9e34b8cde6e47872b3a75d523845f" exitCode=2 Nov 28 07:18:10 crc kubenswrapper[4946]: I1128 07:18:10.370277 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f63766d-f8ca-45f4-a9b0-12a48917af68","Type":"ContainerDied","Data":"d88b97a54777bafa1a703658388dacffc37ec6f4337066f1021a59a5567b8ad5"} Nov 28 07:18:10 crc kubenswrapper[4946]: I1128 07:18:10.370298 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f63766d-f8ca-45f4-a9b0-12a48917af68","Type":"ContainerDied","Data":"83cec6a77a4e88418903dfaf539eba1629b9e34b8cde6e47872b3a75d523845f"} Nov 28 07:18:10 crc kubenswrapper[4946]: I1128 07:18:10.390703 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.963889209 podStartE2EDuration="2.390677765s" podCreationTimestamp="2025-11-28 07:18:08 +0000 UTC" firstStartedPulling="2025-11-28 07:18:09.197922136 +0000 UTC m=+1543.575987247" lastFinishedPulling="2025-11-28 07:18:09.624710692 +0000 UTC m=+1544.002775803" observedRunningTime="2025-11-28 07:18:10.385254452 +0000 UTC m=+1544.763319573" watchObservedRunningTime="2025-11-28 07:18:10.390677765 +0000 UTC m=+1544.768742876" Nov 28 07:18:10 crc kubenswrapper[4946]: E1128 07:18:10.882906 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="01f3959a3fddadfeb293023b6dbdd4f890722a5e2dfba684d15de2a137b654ab" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 28 07:18:10 crc kubenswrapper[4946]: E1128 07:18:10.894870 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="01f3959a3fddadfeb293023b6dbdd4f890722a5e2dfba684d15de2a137b654ab" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 28 07:18:10 crc kubenswrapper[4946]: E1128 07:18:10.900575 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: 
container is stopping, stdout: , stderr: , exit code -1" containerID="01f3959a3fddadfeb293023b6dbdd4f890722a5e2dfba684d15de2a137b654ab" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 28 07:18:10 crc kubenswrapper[4946]: E1128 07:18:10.900814 4946 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="db67bc15-ec43-4391-8aa9-b0c214813024" containerName="nova-scheduler-scheduler" Nov 28 07:18:11 crc kubenswrapper[4946]: I1128 07:18:11.405841 4946 generic.go:334] "Generic (PLEG): container finished" podID="9f63766d-f8ca-45f4-a9b0-12a48917af68" containerID="9d8270d9349b37ee7bca9009e6a1267b7660fdda5e573f31309b147bfbf1f0dd" exitCode=0 Nov 28 07:18:11 crc kubenswrapper[4946]: I1128 07:18:11.406142 4946 generic.go:334] "Generic (PLEG): container finished" podID="9f63766d-f8ca-45f4-a9b0-12a48917af68" containerID="6c7d264059d252430d87f0b0e94282ba9f85fc631b483117759d6db21b5d7867" exitCode=0 Nov 28 07:18:11 crc kubenswrapper[4946]: I1128 07:18:11.406207 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f63766d-f8ca-45f4-a9b0-12a48917af68","Type":"ContainerDied","Data":"9d8270d9349b37ee7bca9009e6a1267b7660fdda5e573f31309b147bfbf1f0dd"} Nov 28 07:18:11 crc kubenswrapper[4946]: I1128 07:18:11.406245 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f63766d-f8ca-45f4-a9b0-12a48917af68","Type":"ContainerDied","Data":"6c7d264059d252430d87f0b0e94282ba9f85fc631b483117759d6db21b5d7867"} Nov 28 07:18:11 crc kubenswrapper[4946]: I1128 07:18:11.411052 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e2a58ce5-5b99-461d-b7b9-ebeb95624d70","Type":"ContainerStarted","Data":"59f6230e209f2d726ba347a01276004806be61ec0ebb63fde762dcc3c87045f1"} Nov 28 07:18:11 crc kubenswrapper[4946]: I1128 07:18:11.411102 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e2a58ce5-5b99-461d-b7b9-ebeb95624d70","Type":"ContainerStarted","Data":"34e825dd736734b342689a93be686bc2c8e15162e54c5db9c531e3183a7da476"} Nov 28 07:18:11 crc kubenswrapper[4946]: I1128 07:18:11.434800 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.434775707 podStartE2EDuration="2.434775707s" podCreationTimestamp="2025-11-28 07:18:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:18:11.431352263 +0000 UTC m=+1545.809417384" watchObservedRunningTime="2025-11-28 07:18:11.434775707 +0000 UTC m=+1545.812840818" Nov 28 07:18:11 crc kubenswrapper[4946]: I1128 07:18:11.929111 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.072238 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f63766d-f8ca-45f4-a9b0-12a48917af68-run-httpd\") pod \"9f63766d-f8ca-45f4-a9b0-12a48917af68\" (UID: \"9f63766d-f8ca-45f4-a9b0-12a48917af68\") " Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.072299 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f63766d-f8ca-45f4-a9b0-12a48917af68-config-data\") pod \"9f63766d-f8ca-45f4-a9b0-12a48917af68\" (UID: \"9f63766d-f8ca-45f4-a9b0-12a48917af68\") " Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.072330 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f63766d-f8ca-45f4-a9b0-12a48917af68-combined-ca-bundle\") pod \"9f63766d-f8ca-45f4-a9b0-12a48917af68\" (UID: \"9f63766d-f8ca-45f4-a9b0-12a48917af68\") " Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.072848 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsd9n\" (UniqueName: \"kubernetes.io/projected/9f63766d-f8ca-45f4-a9b0-12a48917af68-kube-api-access-jsd9n\") pod \"9f63766d-f8ca-45f4-a9b0-12a48917af68\" (UID: \"9f63766d-f8ca-45f4-a9b0-12a48917af68\") " Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.073150 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9f63766d-f8ca-45f4-a9b0-12a48917af68-sg-core-conf-yaml\") pod \"9f63766d-f8ca-45f4-a9b0-12a48917af68\" (UID: \"9f63766d-f8ca-45f4-a9b0-12a48917af68\") " Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.073213 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f63766d-f8ca-45f4-a9b0-12a48917af68-scripts\") pod \"9f63766d-f8ca-45f4-a9b0-12a48917af68\" (UID: \"9f63766d-f8ca-45f4-a9b0-12a48917af68\") " Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.073301 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f63766d-f8ca-45f4-a9b0-12a48917af68-log-httpd\") pod \"9f63766d-f8ca-45f4-a9b0-12a48917af68\" (UID: \"9f63766d-f8ca-45f4-a9b0-12a48917af68\") " Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.075086 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f63766d-f8ca-45f4-a9b0-12a48917af68-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9f63766d-f8ca-45f4-a9b0-12a48917af68" (UID: "9f63766d-f8ca-45f4-a9b0-12a48917af68"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.077696 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f63766d-f8ca-45f4-a9b0-12a48917af68-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9f63766d-f8ca-45f4-a9b0-12a48917af68" (UID: "9f63766d-f8ca-45f4-a9b0-12a48917af68"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.080968 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f63766d-f8ca-45f4-a9b0-12a48917af68-kube-api-access-jsd9n" (OuterVolumeSpecName: "kube-api-access-jsd9n") pod "9f63766d-f8ca-45f4-a9b0-12a48917af68" (UID: "9f63766d-f8ca-45f4-a9b0-12a48917af68"). InnerVolumeSpecName "kube-api-access-jsd9n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.081416 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f63766d-f8ca-45f4-a9b0-12a48917af68-scripts" (OuterVolumeSpecName: "scripts") pod "9f63766d-f8ca-45f4-a9b0-12a48917af68" (UID: "9f63766d-f8ca-45f4-a9b0-12a48917af68"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.109828 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f63766d-f8ca-45f4-a9b0-12a48917af68-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9f63766d-f8ca-45f4-a9b0-12a48917af68" (UID: "9f63766d-f8ca-45f4-a9b0-12a48917af68"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.150684 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f63766d-f8ca-45f4-a9b0-12a48917af68-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9f63766d-f8ca-45f4-a9b0-12a48917af68" (UID: "9f63766d-f8ca-45f4-a9b0-12a48917af68"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.172901 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f63766d-f8ca-45f4-a9b0-12a48917af68-config-data" (OuterVolumeSpecName: "config-data") pod "9f63766d-f8ca-45f4-a9b0-12a48917af68" (UID: "9f63766d-f8ca-45f4-a9b0-12a48917af68"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.176362 4946 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f63766d-f8ca-45f4-a9b0-12a48917af68-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.176398 4946 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f63766d-f8ca-45f4-a9b0-12a48917af68-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.176408 4946 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f63766d-f8ca-45f4-a9b0-12a48917af68-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.176418 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f63766d-f8ca-45f4-a9b0-12a48917af68-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.176427 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f63766d-f8ca-45f4-a9b0-12a48917af68-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.176436 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jsd9n\" (UniqueName: \"kubernetes.io/projected/9f63766d-f8ca-45f4-a9b0-12a48917af68-kube-api-access-jsd9n\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.176445 4946 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9f63766d-f8ca-45f4-a9b0-12a48917af68-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.253583 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.380874 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ws54\" (UniqueName: \"kubernetes.io/projected/db67bc15-ec43-4391-8aa9-b0c214813024-kube-api-access-4ws54\") pod \"db67bc15-ec43-4391-8aa9-b0c214813024\" (UID: \"db67bc15-ec43-4391-8aa9-b0c214813024\") " Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.381256 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db67bc15-ec43-4391-8aa9-b0c214813024-config-data\") pod \"db67bc15-ec43-4391-8aa9-b0c214813024\" (UID: \"db67bc15-ec43-4391-8aa9-b0c214813024\") " Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.381285 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db67bc15-ec43-4391-8aa9-b0c214813024-combined-ca-bundle\") pod \"db67bc15-ec43-4391-8aa9-b0c214813024\" (UID: \"db67bc15-ec43-4391-8aa9-b0c214813024\") " Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.384649 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db67bc15-ec43-4391-8aa9-b0c214813024-kube-api-access-4ws54" (OuterVolumeSpecName: "kube-api-access-4ws54") pod "db67bc15-ec43-4391-8aa9-b0c214813024" (UID: "db67bc15-ec43-4391-8aa9-b0c214813024"). 
InnerVolumeSpecName "kube-api-access-4ws54". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.412075 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db67bc15-ec43-4391-8aa9-b0c214813024-config-data" (OuterVolumeSpecName: "config-data") pod "db67bc15-ec43-4391-8aa9-b0c214813024" (UID: "db67bc15-ec43-4391-8aa9-b0c214813024"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.422063 4946 generic.go:334] "Generic (PLEG): container finished" podID="db67bc15-ec43-4391-8aa9-b0c214813024" containerID="01f3959a3fddadfeb293023b6dbdd4f890722a5e2dfba684d15de2a137b654ab" exitCode=0 Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.422151 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"db67bc15-ec43-4391-8aa9-b0c214813024","Type":"ContainerDied","Data":"01f3959a3fddadfeb293023b6dbdd4f890722a5e2dfba684d15de2a137b654ab"} Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.422153 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.422186 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"db67bc15-ec43-4391-8aa9-b0c214813024","Type":"ContainerDied","Data":"a31056c2513c9f8e756ef66d5720920aa1e6fc9de230ef9050cb4ad030009c48"} Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.422210 4946 scope.go:117] "RemoveContainer" containerID="01f3959a3fddadfeb293023b6dbdd4f890722a5e2dfba684d15de2a137b654ab" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.426131 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.427491 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f63766d-f8ca-45f4-a9b0-12a48917af68","Type":"ContainerDied","Data":"3f5d9148f2efecd3e1062e2ef7ba17ea95c195e089ae596aac3d6e7dd0b6c3fd"} Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.472748 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db67bc15-ec43-4391-8aa9-b0c214813024-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db67bc15-ec43-4391-8aa9-b0c214813024" (UID: "db67bc15-ec43-4391-8aa9-b0c214813024"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.483579 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ws54\" (UniqueName: \"kubernetes.io/projected/db67bc15-ec43-4391-8aa9-b0c214813024-kube-api-access-4ws54\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.483616 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db67bc15-ec43-4391-8aa9-b0c214813024-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.483630 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db67bc15-ec43-4391-8aa9-b0c214813024-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.533651 4946 scope.go:117] "RemoveContainer" containerID="01f3959a3fddadfeb293023b6dbdd4f890722a5e2dfba684d15de2a137b654ab" Nov 28 07:18:12 crc kubenswrapper[4946]: E1128 07:18:12.534207 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01f3959a3fddadfeb293023b6dbdd4f890722a5e2dfba684d15de2a137b654ab\": container with ID starting with 01f3959a3fddadfeb293023b6dbdd4f890722a5e2dfba684d15de2a137b654ab not found: ID does not exist" containerID="01f3959a3fddadfeb293023b6dbdd4f890722a5e2dfba684d15de2a137b654ab" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.534241 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01f3959a3fddadfeb293023b6dbdd4f890722a5e2dfba684d15de2a137b654ab"} err="failed to get container status \"01f3959a3fddadfeb293023b6dbdd4f890722a5e2dfba684d15de2a137b654ab\": rpc error: code = NotFound desc = could not find container \"01f3959a3fddadfeb293023b6dbdd4f890722a5e2dfba684d15de2a137b654ab\": container with ID starting with 01f3959a3fddadfeb293023b6dbdd4f890722a5e2dfba684d15de2a137b654ab not found: ID does not exist" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.534265 4946 scope.go:117] "RemoveContainer" containerID="d88b97a54777bafa1a703658388dacffc37ec6f4337066f1021a59a5567b8ad5" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.537440 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.563667 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.584951 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.585396 4946 scope.go:117] "RemoveContainer" containerID="83cec6a77a4e88418903dfaf539eba1629b9e34b8cde6e47872b3a75d523845f" Nov 28 07:18:12 crc kubenswrapper[4946]: E1128 07:18:12.585678 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db67bc15-ec43-4391-8aa9-b0c214813024" containerName="nova-scheduler-scheduler" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.585711 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="db67bc15-ec43-4391-8aa9-b0c214813024" containerName="nova-scheduler-scheduler" Nov 28 07:18:12 crc kubenswrapper[4946]: E1128 07:18:12.585736 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f63766d-f8ca-45f4-a9b0-12a48917af68" containerName="ceilometer-central-agent" Nov 28 07:18:12 crc 
kubenswrapper[4946]: I1128 07:18:12.585745 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f63766d-f8ca-45f4-a9b0-12a48917af68" containerName="ceilometer-central-agent" Nov 28 07:18:12 crc kubenswrapper[4946]: E1128 07:18:12.585772 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f63766d-f8ca-45f4-a9b0-12a48917af68" containerName="sg-core" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.585783 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f63766d-f8ca-45f4-a9b0-12a48917af68" containerName="sg-core" Nov 28 07:18:12 crc kubenswrapper[4946]: E1128 07:18:12.585834 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f63766d-f8ca-45f4-a9b0-12a48917af68" containerName="ceilometer-notification-agent" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.585846 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f63766d-f8ca-45f4-a9b0-12a48917af68" containerName="ceilometer-notification-agent" Nov 28 07:18:12 crc kubenswrapper[4946]: E1128 07:18:12.585867 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f63766d-f8ca-45f4-a9b0-12a48917af68" containerName="proxy-httpd" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.585875 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f63766d-f8ca-45f4-a9b0-12a48917af68" containerName="proxy-httpd" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.586372 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f63766d-f8ca-45f4-a9b0-12a48917af68" containerName="ceilometer-central-agent" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.586402 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f63766d-f8ca-45f4-a9b0-12a48917af68" containerName="ceilometer-notification-agent" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.586429 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="db67bc15-ec43-4391-8aa9-b0c214813024" containerName="nova-scheduler-scheduler" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.586445 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f63766d-f8ca-45f4-a9b0-12a48917af68" containerName="proxy-httpd" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.586475 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f63766d-f8ca-45f4-a9b0-12a48917af68" containerName="sg-core" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.588975 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.593187 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.593583 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.593648 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.594146 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.623349 4946 scope.go:117] "RemoveContainer" containerID="9d8270d9349b37ee7bca9009e6a1267b7660fdda5e573f31309b147bfbf1f0dd" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.645061 4946 scope.go:117] "RemoveContainer" containerID="6c7d264059d252430d87f0b0e94282ba9f85fc631b483117759d6db21b5d7867" Nov 28 07:18:12 crc kubenswrapper[4946]: E1128 07:18:12.684745 4946 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f63766d_f8ca_45f4_a9b0_12a48917af68.slice/crio-3f5d9148f2efecd3e1062e2ef7ba17ea95c195e089ae596aac3d6e7dd0b6c3fd\": RecentStats: unable to find data in memory cache]" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.690537 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78b36fc3-eefb-4e39-854a-cb47fc071ce8-scripts\") pod \"ceilometer-0\" (UID: \"78b36fc3-eefb-4e39-854a-cb47fc071ce8\") " pod="openstack/ceilometer-0" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.690624 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78b36fc3-eefb-4e39-854a-cb47fc071ce8-config-data\") pod \"ceilometer-0\" (UID: \"78b36fc3-eefb-4e39-854a-cb47fc071ce8\") " pod="openstack/ceilometer-0" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.690692 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78b36fc3-eefb-4e39-854a-cb47fc071ce8-run-httpd\") pod \"ceilometer-0\" (UID: \"78b36fc3-eefb-4e39-854a-cb47fc071ce8\") " pod="openstack/ceilometer-0" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.690749 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/78b36fc3-eefb-4e39-854a-cb47fc071ce8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"78b36fc3-eefb-4e39-854a-cb47fc071ce8\") " pod="openstack/ceilometer-0" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.690926 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78b36fc3-eefb-4e39-854a-cb47fc071ce8-log-httpd\") pod \"ceilometer-0\" (UID: \"78b36fc3-eefb-4e39-854a-cb47fc071ce8\") " pod="openstack/ceilometer-0" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.691075 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/78b36fc3-eefb-4e39-854a-cb47fc071ce8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"78b36fc3-eefb-4e39-854a-cb47fc071ce8\") " pod="openstack/ceilometer-0" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.691197 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9skr\" (UniqueName: \"kubernetes.io/projected/78b36fc3-eefb-4e39-854a-cb47fc071ce8-kube-api-access-c9skr\") pod \"ceilometer-0\" (UID: \"78b36fc3-eefb-4e39-854a-cb47fc071ce8\") " pod="openstack/ceilometer-0" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.691256 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78b36fc3-eefb-4e39-854a-cb47fc071ce8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"78b36fc3-eefb-4e39-854a-cb47fc071ce8\") " pod="openstack/ceilometer-0" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.756338 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.770535 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.779229 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.780691 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.782828 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.789595 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.793438 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78b36fc3-eefb-4e39-854a-cb47fc071ce8-config-data\") pod \"ceilometer-0\" (UID: \"78b36fc3-eefb-4e39-854a-cb47fc071ce8\") " pod="openstack/ceilometer-0" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.794195 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78b36fc3-eefb-4e39-854a-cb47fc071ce8-run-httpd\") pod \"ceilometer-0\" (UID: \"78b36fc3-eefb-4e39-854a-cb47fc071ce8\") " pod="openstack/ceilometer-0" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.794279 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/78b36fc3-eefb-4e39-854a-cb47fc071ce8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"78b36fc3-eefb-4e39-854a-cb47fc071ce8\") " pod="openstack/ceilometer-0" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.794314 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78b36fc3-eefb-4e39-854a-cb47fc071ce8-log-httpd\") pod \"ceilometer-0\" (UID: \"78b36fc3-eefb-4e39-854a-cb47fc071ce8\") " pod="openstack/ceilometer-0" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.794400 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/78b36fc3-eefb-4e39-854a-cb47fc071ce8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"78b36fc3-eefb-4e39-854a-cb47fc071ce8\") " pod="openstack/ceilometer-0" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.794451 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9skr\" (UniqueName: \"kubernetes.io/projected/78b36fc3-eefb-4e39-854a-cb47fc071ce8-kube-api-access-c9skr\") pod \"ceilometer-0\" (UID: \"78b36fc3-eefb-4e39-854a-cb47fc071ce8\") " pod="openstack/ceilometer-0" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.794493 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78b36fc3-eefb-4e39-854a-cb47fc071ce8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"78b36fc3-eefb-4e39-854a-cb47fc071ce8\") " pod="openstack/ceilometer-0" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.794561 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78b36fc3-eefb-4e39-854a-cb47fc071ce8-scripts\") pod \"ceilometer-0\" (UID: \"78b36fc3-eefb-4e39-854a-cb47fc071ce8\") " pod="openstack/ceilometer-0" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.794645 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78b36fc3-eefb-4e39-854a-cb47fc071ce8-run-httpd\") pod \"ceilometer-0\" (UID: \"78b36fc3-eefb-4e39-854a-cb47fc071ce8\") " pod="openstack/ceilometer-0" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.795259 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78b36fc3-eefb-4e39-854a-cb47fc071ce8-log-httpd\") pod \"ceilometer-0\" (UID: \"78b36fc3-eefb-4e39-854a-cb47fc071ce8\") " pod="openstack/ceilometer-0" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.797554 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78b36fc3-eefb-4e39-854a-cb47fc071ce8-scripts\") pod \"ceilometer-0\" (UID: \"78b36fc3-eefb-4e39-854a-cb47fc071ce8\") " pod="openstack/ceilometer-0" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.801682 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78b36fc3-eefb-4e39-854a-cb47fc071ce8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"78b36fc3-eefb-4e39-854a-cb47fc071ce8\") " pod="openstack/ceilometer-0" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.801726 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/78b36fc3-eefb-4e39-854a-cb47fc071ce8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"78b36fc3-eefb-4e39-854a-cb47fc071ce8\") " pod="openstack/ceilometer-0" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.802008 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78b36fc3-eefb-4e39-854a-cb47fc071ce8-config-data\") pod \"ceilometer-0\" (UID: \"78b36fc3-eefb-4e39-854a-cb47fc071ce8\") " pod="openstack/ceilometer-0" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.805025 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/78b36fc3-eefb-4e39-854a-cb47fc071ce8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"78b36fc3-eefb-4e39-854a-cb47fc071ce8\") " pod="openstack/ceilometer-0" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.827244 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9skr\" (UniqueName: \"kubernetes.io/projected/78b36fc3-eefb-4e39-854a-cb47fc071ce8-kube-api-access-c9skr\") pod \"ceilometer-0\" (UID: \"78b36fc3-eefb-4e39-854a-cb47fc071ce8\") " pod="openstack/ceilometer-0" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.896662 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqdsj\" (UniqueName: \"kubernetes.io/projected/8eb7b962-332f-4f57-a438-5a4c33a3859a-kube-api-access-xqdsj\") pod \"nova-scheduler-0\" (UID: \"8eb7b962-332f-4f57-a438-5a4c33a3859a\") " pod="openstack/nova-scheduler-0" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.896775 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eb7b962-332f-4f57-a438-5a4c33a3859a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8eb7b962-332f-4f57-a438-5a4c33a3859a\") " pod="openstack/nova-scheduler-0" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.896827 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eb7b962-332f-4f57-a438-5a4c33a3859a-config-data\") pod \"nova-scheduler-0\" (UID: \"8eb7b962-332f-4f57-a438-5a4c33a3859a\") " pod="openstack/nova-scheduler-0" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.913806 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.998438 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eb7b962-332f-4f57-a438-5a4c33a3859a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8eb7b962-332f-4f57-a438-5a4c33a3859a\") " pod="openstack/nova-scheduler-0" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.998670 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eb7b962-332f-4f57-a438-5a4c33a3859a-config-data\") pod \"nova-scheduler-0\" (UID: \"8eb7b962-332f-4f57-a438-5a4c33a3859a\") " pod="openstack/nova-scheduler-0" Nov 28 07:18:12 crc kubenswrapper[4946]: I1128 07:18:12.998789 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqdsj\" (UniqueName: \"kubernetes.io/projected/8eb7b962-332f-4f57-a438-5a4c33a3859a-kube-api-access-xqdsj\") pod \"nova-scheduler-0\" (UID: \"8eb7b962-332f-4f57-a438-5a4c33a3859a\") " pod="openstack/nova-scheduler-0" Nov 28 07:18:13 crc kubenswrapper[4946]: I1128 07:18:13.002975 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eb7b962-332f-4f57-a438-5a4c33a3859a-config-data\") pod \"nova-scheduler-0\" (UID: \"8eb7b962-332f-4f57-a438-5a4c33a3859a\") " pod="openstack/nova-scheduler-0" Nov 28 07:18:13 crc kubenswrapper[4946]: I1128 07:18:13.016200 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eb7b962-332f-4f57-a438-5a4c33a3859a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8eb7b962-332f-4f57-a438-5a4c33a3859a\") " pod="openstack/nova-scheduler-0" Nov 28 07:18:13 crc kubenswrapper[4946]: I1128 07:18:13.024168 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqdsj\" (UniqueName: \"kubernetes.io/projected/8eb7b962-332f-4f57-a438-5a4c33a3859a-kube-api-access-xqdsj\") pod \"nova-scheduler-0\" (UID: \"8eb7b962-332f-4f57-a438-5a4c33a3859a\") " pod="openstack/nova-scheduler-0" Nov 28 07:18:13 crc kubenswrapper[4946]: I1128 07:18:13.106176 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 07:18:13 crc kubenswrapper[4946]: I1128 07:18:13.439792 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:18:13 crc kubenswrapper[4946]: I1128 07:18:13.444827 4946 generic.go:334] "Generic (PLEG): container finished" podID="a3c04d77-b056-4082-a3eb-64f9f4679296" containerID="eb0a5ac8d697a88614fd21d06b02ba39f498724f65d07996731adc13cd44ad4b" exitCode=0 Nov 28 07:18:13 crc kubenswrapper[4946]: I1128 07:18:13.444923 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a3c04d77-b056-4082-a3eb-64f9f4679296","Type":"ContainerDied","Data":"eb0a5ac8d697a88614fd21d06b02ba39f498724f65d07996731adc13cd44ad4b"} Nov 28 07:18:13 crc kubenswrapper[4946]: I1128 07:18:13.588008 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 28 07:18:13 crc kubenswrapper[4946]: I1128 07:18:13.689848 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 07:18:13 crc kubenswrapper[4946]: W1128 07:18:13.690188 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8eb7b962_332f_4f57_a438_5a4c33a3859a.slice/crio-58c6ada20c0b2240f0c445d4dd594e428a6ab2b203fc10125f8cc134b99310e7 WatchSource:0}: Error finding container 58c6ada20c0b2240f0c445d4dd594e428a6ab2b203fc10125f8cc134b99310e7: Status 404 returned error can't find the container with id 58c6ada20c0b2240f0c445d4dd594e428a6ab2b203fc10125f8cc134b99310e7 Nov 28 07:18:13 crc kubenswrapper[4946]: I1128 07:18:13.729642 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3c04d77-b056-4082-a3eb-64f9f4679296-logs\") pod \"a3c04d77-b056-4082-a3eb-64f9f4679296\" (UID: \"a3c04d77-b056-4082-a3eb-64f9f4679296\") " Nov 28 07:18:13 crc kubenswrapper[4946]: I1128 07:18:13.729760 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3c04d77-b056-4082-a3eb-64f9f4679296-config-data\") pod \"a3c04d77-b056-4082-a3eb-64f9f4679296\" (UID: \"a3c04d77-b056-4082-a3eb-64f9f4679296\") " Nov 28 07:18:13 crc kubenswrapper[4946]: I1128 07:18:13.729826 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kd7ww\" (UniqueName: \"kubernetes.io/projected/a3c04d77-b056-4082-a3eb-64f9f4679296-kube-api-access-kd7ww\") pod \"a3c04d77-b056-4082-a3eb-64f9f4679296\" (UID: \"a3c04d77-b056-4082-a3eb-64f9f4679296\") " Nov 28 07:18:13 crc kubenswrapper[4946]: I1128 07:18:13.729982 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3c04d77-b056-4082-a3eb-64f9f4679296-combined-ca-bundle\") pod \"a3c04d77-b056-4082-a3eb-64f9f4679296\" (UID: \"a3c04d77-b056-4082-a3eb-64f9f4679296\") " Nov 28 07:18:13 crc kubenswrapper[4946]: I1128 07:18:13.730161 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3c04d77-b056-4082-a3eb-64f9f4679296-logs" (OuterVolumeSpecName: "logs") pod "a3c04d77-b056-4082-a3eb-64f9f4679296" (UID: "a3c04d77-b056-4082-a3eb-64f9f4679296"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:18:13 crc kubenswrapper[4946]: I1128 07:18:13.730575 4946 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3c04d77-b056-4082-a3eb-64f9f4679296-logs\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:13 crc kubenswrapper[4946]: I1128 07:18:13.735672 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3c04d77-b056-4082-a3eb-64f9f4679296-kube-api-access-kd7ww" (OuterVolumeSpecName: "kube-api-access-kd7ww") pod "a3c04d77-b056-4082-a3eb-64f9f4679296" (UID: "a3c04d77-b056-4082-a3eb-64f9f4679296"). InnerVolumeSpecName "kube-api-access-kd7ww". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:18:13 crc kubenswrapper[4946]: I1128 07:18:13.759100 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3c04d77-b056-4082-a3eb-64f9f4679296-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a3c04d77-b056-4082-a3eb-64f9f4679296" (UID: "a3c04d77-b056-4082-a3eb-64f9f4679296"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:18:13 crc kubenswrapper[4946]: I1128 07:18:13.764940 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3c04d77-b056-4082-a3eb-64f9f4679296-config-data" (OuterVolumeSpecName: "config-data") pod "a3c04d77-b056-4082-a3eb-64f9f4679296" (UID: "a3c04d77-b056-4082-a3eb-64f9f4679296"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:18:13 crc kubenswrapper[4946]: I1128 07:18:13.832996 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3c04d77-b056-4082-a3eb-64f9f4679296-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:13 crc kubenswrapper[4946]: I1128 07:18:13.833036 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3c04d77-b056-4082-a3eb-64f9f4679296-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:13 crc kubenswrapper[4946]: I1128 07:18:13.833054 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kd7ww\" (UniqueName: \"kubernetes.io/projected/a3c04d77-b056-4082-a3eb-64f9f4679296-kube-api-access-kd7ww\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:14 crc kubenswrapper[4946]: I1128 07:18:14.002487 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f63766d-f8ca-45f4-a9b0-12a48917af68" path="/var/lib/kubelet/pods/9f63766d-f8ca-45f4-a9b0-12a48917af68/volumes" Nov 28 07:18:14 crc kubenswrapper[4946]: I1128 07:18:14.003995 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db67bc15-ec43-4391-8aa9-b0c214813024" path="/var/lib/kubelet/pods/db67bc15-ec43-4391-8aa9-b0c214813024/volumes" Nov 28 07:18:14 crc kubenswrapper[4946]: I1128 07:18:14.476072 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8eb7b962-332f-4f57-a438-5a4c33a3859a","Type":"ContainerStarted","Data":"f872345529c102cbf095d9b27a69a827b21f57789777c251b623d90ef5501ba9"} Nov 28 07:18:14 crc kubenswrapper[4946]: I1128 07:18:14.476149 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8eb7b962-332f-4f57-a438-5a4c33a3859a","Type":"ContainerStarted","Data":"58c6ada20c0b2240f0c445d4dd594e428a6ab2b203fc10125f8cc134b99310e7"} Nov 28 07:18:14 crc kubenswrapper[4946]: I1128 07:18:14.478893 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78b36fc3-eefb-4e39-854a-cb47fc071ce8","Type":"ContainerStarted","Data":"ae2dffb836d7163122f7f8bb040fc5ff6afa12c5d3cf22e091d2c225a591de63"} Nov 28 07:18:14 crc kubenswrapper[4946]: I1128 07:18:14.478921 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78b36fc3-eefb-4e39-854a-cb47fc071ce8","Type":"ContainerStarted","Data":"55bc7fb360eff3c9e008b38437840b5cc92cf8b45c42c24f48edfd93929c2888"} Nov 28 07:18:14 crc kubenswrapper[4946]: I1128 07:18:14.481008 4946 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/nova-api-0" event={"ID":"a3c04d77-b056-4082-a3eb-64f9f4679296","Type":"ContainerDied","Data":"616ec6c456a2c6a547d1ac40346cc50e3a45227e3cbff97e070f431e8fb2b4bf"} Nov 28 07:18:14 crc kubenswrapper[4946]: I1128 07:18:14.481095 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 07:18:14 crc kubenswrapper[4946]: I1128 07:18:14.481141 4946 scope.go:117] "RemoveContainer" containerID="eb0a5ac8d697a88614fd21d06b02ba39f498724f65d07996731adc13cd44ad4b" Nov 28 07:18:14 crc kubenswrapper[4946]: I1128 07:18:14.508488 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.508445911 podStartE2EDuration="2.508445911s" podCreationTimestamp="2025-11-28 07:18:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:18:14.49697757 +0000 UTC m=+1548.875042681" watchObservedRunningTime="2025-11-28 07:18:14.508445911 +0000 UTC m=+1548.886511042" Nov 28 07:18:14 crc kubenswrapper[4946]: I1128 07:18:14.517016 4946 scope.go:117] "RemoveContainer" containerID="5e04565953fdb740a56d881500149e6fb75f2cb1d1d6ee266392ee03ee31197f" Nov 28 07:18:14 crc kubenswrapper[4946]: I1128 07:18:14.525357 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 28 07:18:14 crc kubenswrapper[4946]: I1128 07:18:14.549658 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 28 07:18:14 crc kubenswrapper[4946]: I1128 07:18:14.578396 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 28 07:18:14 crc kubenswrapper[4946]: E1128 07:18:14.579378 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3c04d77-b056-4082-a3eb-64f9f4679296" containerName="nova-api-log" Nov 28 07:18:14 crc kubenswrapper[4946]: I1128 07:18:14.579536 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3c04d77-b056-4082-a3eb-64f9f4679296" containerName="nova-api-log" Nov 28 07:18:14 crc kubenswrapper[4946]: E1128 07:18:14.579659 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3c04d77-b056-4082-a3eb-64f9f4679296" containerName="nova-api-api" Nov 28 07:18:14 crc kubenswrapper[4946]: I1128 07:18:14.579746 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3c04d77-b056-4082-a3eb-64f9f4679296" containerName="nova-api-api" Nov 28 07:18:14 crc kubenswrapper[4946]: I1128 07:18:14.580144 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3c04d77-b056-4082-a3eb-64f9f4679296" containerName="nova-api-log" Nov 28 07:18:14 crc kubenswrapper[4946]: I1128 07:18:14.580259 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3c04d77-b056-4082-a3eb-64f9f4679296" containerName="nova-api-api" Nov 28 07:18:14 crc kubenswrapper[4946]: I1128 07:18:14.581845 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 28 07:18:14 crc kubenswrapper[4946]: I1128 07:18:14.584244 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 28 07:18:14 crc kubenswrapper[4946]: I1128 07:18:14.593307 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 07:18:14 crc kubenswrapper[4946]: I1128 07:18:14.649714 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5z28\" (UniqueName: \"kubernetes.io/projected/7b94decf-8fc0-40e9-aec6-5bc974c8bd31-kube-api-access-f5z28\") pod \"nova-api-0\" (UID: \"7b94decf-8fc0-40e9-aec6-5bc974c8bd31\") " pod="openstack/nova-api-0" Nov 28 07:18:14 crc kubenswrapper[4946]: I1128 07:18:14.649988 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b94decf-8fc0-40e9-aec6-5bc974c8bd31-config-data\") pod \"nova-api-0\" (UID: \"7b94decf-8fc0-40e9-aec6-5bc974c8bd31\") " pod="openstack/nova-api-0" Nov 28 07:18:14 crc kubenswrapper[4946]: I1128 07:18:14.650077 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b94decf-8fc0-40e9-aec6-5bc974c8bd31-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7b94decf-8fc0-40e9-aec6-5bc974c8bd31\") " pod="openstack/nova-api-0" Nov 28 07:18:14 crc kubenswrapper[4946]: I1128 07:18:14.650173 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b94decf-8fc0-40e9-aec6-5bc974c8bd31-logs\") pod \"nova-api-0\" (UID: \"7b94decf-8fc0-40e9-aec6-5bc974c8bd31\") " pod="openstack/nova-api-0" Nov 28 07:18:14 crc kubenswrapper[4946]: I1128 07:18:14.752630 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b94decf-8fc0-40e9-aec6-5bc974c8bd31-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7b94decf-8fc0-40e9-aec6-5bc974c8bd31\") " pod="openstack/nova-api-0" Nov 28 07:18:14 crc kubenswrapper[4946]: I1128 07:18:14.752827 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b94decf-8fc0-40e9-aec6-5bc974c8bd31-logs\") pod \"nova-api-0\" (UID: \"7b94decf-8fc0-40e9-aec6-5bc974c8bd31\") " pod="openstack/nova-api-0" Nov 28 07:18:14 crc kubenswrapper[4946]: I1128 07:18:14.753046 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5z28\" (UniqueName: \"kubernetes.io/projected/7b94decf-8fc0-40e9-aec6-5bc974c8bd31-kube-api-access-f5z28\") pod \"nova-api-0\" (UID: \"7b94decf-8fc0-40e9-aec6-5bc974c8bd31\") " pod="openstack/nova-api-0" Nov 28 07:18:14 crc kubenswrapper[4946]: I1128 07:18:14.753163 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b94decf-8fc0-40e9-aec6-5bc974c8bd31-config-data\") pod \"nova-api-0\" (UID: \"7b94decf-8fc0-40e9-aec6-5bc974c8bd31\") " pod="openstack/nova-api-0" Nov 28 07:18:14 crc kubenswrapper[4946]: I1128 07:18:14.753647 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b94decf-8fc0-40e9-aec6-5bc974c8bd31-logs\") pod \"nova-api-0\" (UID: \"7b94decf-8fc0-40e9-aec6-5bc974c8bd31\") " 
pod="openstack/nova-api-0" Nov 28 07:18:14 crc kubenswrapper[4946]: I1128 07:18:14.758229 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b94decf-8fc0-40e9-aec6-5bc974c8bd31-config-data\") pod \"nova-api-0\" (UID: \"7b94decf-8fc0-40e9-aec6-5bc974c8bd31\") " pod="openstack/nova-api-0" Nov 28 07:18:14 crc kubenswrapper[4946]: I1128 07:18:14.758738 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b94decf-8fc0-40e9-aec6-5bc974c8bd31-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7b94decf-8fc0-40e9-aec6-5bc974c8bd31\") " pod="openstack/nova-api-0" Nov 28 07:18:14 crc kubenswrapper[4946]: I1128 07:18:14.779151 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5z28\" (UniqueName: \"kubernetes.io/projected/7b94decf-8fc0-40e9-aec6-5bc974c8bd31-kube-api-access-f5z28\") pod \"nova-api-0\" (UID: \"7b94decf-8fc0-40e9-aec6-5bc974c8bd31\") " pod="openstack/nova-api-0" Nov 28 07:18:14 crc kubenswrapper[4946]: I1128 07:18:14.819040 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 28 07:18:14 crc kubenswrapper[4946]: I1128 07:18:14.819102 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 28 07:18:14 crc kubenswrapper[4946]: I1128 07:18:14.908760 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 07:18:15 crc kubenswrapper[4946]: I1128 07:18:15.404875 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 07:18:15 crc kubenswrapper[4946]: I1128 07:18:15.504154 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7b94decf-8fc0-40e9-aec6-5bc974c8bd31","Type":"ContainerStarted","Data":"4b84a6f6f89005089448b245dec63aff975c254d701a03076ee3f62f1b72f34b"} Nov 28 07:18:15 crc kubenswrapper[4946]: I1128 07:18:15.508813 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78b36fc3-eefb-4e39-854a-cb47fc071ce8","Type":"ContainerStarted","Data":"8738789435a9ffcc57707ab5a88cb5f0bce1e0a03a7c86c06c46ee7fc5db79e6"} Nov 28 07:18:16 crc kubenswrapper[4946]: I1128 07:18:16.010607 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3c04d77-b056-4082-a3eb-64f9f4679296" path="/var/lib/kubelet/pods/a3c04d77-b056-4082-a3eb-64f9f4679296/volumes" Nov 28 07:18:16 crc kubenswrapper[4946]: I1128 07:18:16.519100 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7b94decf-8fc0-40e9-aec6-5bc974c8bd31","Type":"ContainerStarted","Data":"77d5525729ead9c316a58288e3afe4056575da23c5db111e75a8ea618e7ea28b"} Nov 28 07:18:16 crc kubenswrapper[4946]: I1128 07:18:16.519164 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7b94decf-8fc0-40e9-aec6-5bc974c8bd31","Type":"ContainerStarted","Data":"d3be03d7e21fcb7493467c17b0c51e4a36f997e53fc38b279a638e3ebeecce7c"} Nov 28 07:18:16 crc kubenswrapper[4946]: I1128 07:18:16.522729 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78b36fc3-eefb-4e39-854a-cb47fc071ce8","Type":"ContainerStarted","Data":"506b76300850ba41a05a13b06ea9754cb81f3f0a3827b318c3cb59582c3f7eec"} Nov 28 07:18:16 crc kubenswrapper[4946]: I1128 07:18:16.540055 4946 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.540029938 podStartE2EDuration="2.540029938s" podCreationTimestamp="2025-11-28 07:18:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:18:16.537874026 +0000 UTC m=+1550.915939147" watchObservedRunningTime="2025-11-28 07:18:16.540029938 +0000 UTC m=+1550.918095059" Nov 28 07:18:17 crc kubenswrapper[4946]: I1128 07:18:17.537835 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78b36fc3-eefb-4e39-854a-cb47fc071ce8","Type":"ContainerStarted","Data":"ec10cd3cf50926151c61de1b32a5a2264b32de0a1f2bb9e57b70cbe152954656"} Nov 28 07:18:17 crc kubenswrapper[4946]: I1128 07:18:17.538968 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 28 07:18:17 crc kubenswrapper[4946]: I1128 07:18:17.573754 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.9627290689999999 podStartE2EDuration="5.573724927s" podCreationTimestamp="2025-11-28 07:18:12 +0000 UTC" firstStartedPulling="2025-11-28 07:18:13.443988408 +0000 UTC m=+1547.822053519" lastFinishedPulling="2025-11-28 07:18:17.054984266 +0000 UTC m=+1551.433049377" observedRunningTime="2025-11-28 07:18:17.567434543 +0000 UTC m=+1551.945499664" watchObservedRunningTime="2025-11-28 07:18:17.573724927 +0000 UTC m=+1551.951790038" Nov 28 07:18:17 crc kubenswrapper[4946]: I1128 07:18:17.860999 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 28 07:18:18 crc kubenswrapper[4946]: I1128 07:18:18.113622 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 28 07:18:18 crc kubenswrapper[4946]: I1128 07:18:18.695633 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 28 07:18:19 crc kubenswrapper[4946]: I1128 07:18:19.818962 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 28 07:18:19 crc kubenswrapper[4946]: I1128 07:18:19.819345 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 28 07:18:20 crc kubenswrapper[4946]: I1128 07:18:20.837691 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e2a58ce5-5b99-461d-b7b9-ebeb95624d70" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.187:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 28 07:18:20 crc kubenswrapper[4946]: I1128 07:18:20.837703 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e2a58ce5-5b99-461d-b7b9-ebeb95624d70" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.187:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 28 07:18:23 crc kubenswrapper[4946]: I1128 07:18:23.113231 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 28 07:18:23 crc kubenswrapper[4946]: I1128 07:18:23.148544 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 28 07:18:23 crc kubenswrapper[4946]: I1128 07:18:23.684712 4946 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 28 07:18:24 crc kubenswrapper[4946]: I1128 07:18:24.909436 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 28 07:18:24 crc kubenswrapper[4946]: I1128 07:18:24.910587 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 28 07:18:25 crc kubenswrapper[4946]: I1128 07:18:25.991744 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7b94decf-8fc0-40e9-aec6-5bc974c8bd31" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.190:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 28 07:18:25 crc kubenswrapper[4946]: I1128 07:18:25.991765 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7b94decf-8fc0-40e9-aec6-5bc974c8bd31" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.190:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 28 07:18:29 crc kubenswrapper[4946]: I1128 07:18:29.830216 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 28 07:18:29 crc kubenswrapper[4946]: I1128 07:18:29.832309 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 28 07:18:29 crc kubenswrapper[4946]: I1128 07:18:29.877178 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 28 07:18:30 crc kubenswrapper[4946]: I1128 07:18:30.584440 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:18:30 crc kubenswrapper[4946]: I1128 07:18:30.673934 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5919ecc3-7d40-4914-9e48-bde9efff3853-combined-ca-bundle\") pod \"5919ecc3-7d40-4914-9e48-bde9efff3853\" (UID: \"5919ecc3-7d40-4914-9e48-bde9efff3853\") " Nov 28 07:18:30 crc kubenswrapper[4946]: I1128 07:18:30.674235 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5919ecc3-7d40-4914-9e48-bde9efff3853-config-data\") pod \"5919ecc3-7d40-4914-9e48-bde9efff3853\" (UID: \"5919ecc3-7d40-4914-9e48-bde9efff3853\") " Nov 28 07:18:30 crc kubenswrapper[4946]: I1128 07:18:30.674290 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6j2ss\" (UniqueName: \"kubernetes.io/projected/5919ecc3-7d40-4914-9e48-bde9efff3853-kube-api-access-6j2ss\") pod \"5919ecc3-7d40-4914-9e48-bde9efff3853\" (UID: \"5919ecc3-7d40-4914-9e48-bde9efff3853\") " Nov 28 07:18:30 crc kubenswrapper[4946]: I1128 07:18:30.684708 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5919ecc3-7d40-4914-9e48-bde9efff3853-kube-api-access-6j2ss" (OuterVolumeSpecName: "kube-api-access-6j2ss") pod "5919ecc3-7d40-4914-9e48-bde9efff3853" (UID: "5919ecc3-7d40-4914-9e48-bde9efff3853"). InnerVolumeSpecName "kube-api-access-6j2ss". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:18:30 crc kubenswrapper[4946]: I1128 07:18:30.710568 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5919ecc3-7d40-4914-9e48-bde9efff3853-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5919ecc3-7d40-4914-9e48-bde9efff3853" (UID: "5919ecc3-7d40-4914-9e48-bde9efff3853"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:18:30 crc kubenswrapper[4946]: I1128 07:18:30.716023 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5919ecc3-7d40-4914-9e48-bde9efff3853-config-data" (OuterVolumeSpecName: "config-data") pod "5919ecc3-7d40-4914-9e48-bde9efff3853" (UID: "5919ecc3-7d40-4914-9e48-bde9efff3853"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:18:30 crc kubenswrapper[4946]: I1128 07:18:30.716123 4946 generic.go:334] "Generic (PLEG): container finished" podID="5919ecc3-7d40-4914-9e48-bde9efff3853" containerID="1dc38591f3fe43d534ca3a32b55c4cc20f1ed9c36d9e889d2540e36f214004ac" exitCode=137 Nov 28 07:18:30 crc kubenswrapper[4946]: I1128 07:18:30.716237 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:18:30 crc kubenswrapper[4946]: I1128 07:18:30.716302 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"5919ecc3-7d40-4914-9e48-bde9efff3853","Type":"ContainerDied","Data":"1dc38591f3fe43d534ca3a32b55c4cc20f1ed9c36d9e889d2540e36f214004ac"} Nov 28 07:18:30 crc kubenswrapper[4946]: I1128 07:18:30.716350 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"5919ecc3-7d40-4914-9e48-bde9efff3853","Type":"ContainerDied","Data":"9c068f43c2649bc5200afbe71381373c5ba019a06fa119274747b0e2c8856986"} Nov 28 07:18:30 crc kubenswrapper[4946]: I1128 07:18:30.716387 4946 scope.go:117] "RemoveContainer" containerID="1dc38591f3fe43d534ca3a32b55c4cc20f1ed9c36d9e889d2540e36f214004ac" Nov 28 07:18:30 crc kubenswrapper[4946]: I1128 07:18:30.726924 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 28 07:18:30 crc kubenswrapper[4946]: I1128 07:18:30.777940 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5919ecc3-7d40-4914-9e48-bde9efff3853-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:30 crc kubenswrapper[4946]: I1128 07:18:30.777983 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6j2ss\" (UniqueName: \"kubernetes.io/projected/5919ecc3-7d40-4914-9e48-bde9efff3853-kube-api-access-6j2ss\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:30 crc kubenswrapper[4946]: I1128 07:18:30.778000 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5919ecc3-7d40-4914-9e48-bde9efff3853-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:30 crc kubenswrapper[4946]: I1128 07:18:30.791007 4946 scope.go:117] "RemoveContainer" containerID="1dc38591f3fe43d534ca3a32b55c4cc20f1ed9c36d9e889d2540e36f214004ac" Nov 28 07:18:30 crc kubenswrapper[4946]: E1128 07:18:30.791594 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"1dc38591f3fe43d534ca3a32b55c4cc20f1ed9c36d9e889d2540e36f214004ac\": container with ID starting with 1dc38591f3fe43d534ca3a32b55c4cc20f1ed9c36d9e889d2540e36f214004ac not found: ID does not exist" containerID="1dc38591f3fe43d534ca3a32b55c4cc20f1ed9c36d9e889d2540e36f214004ac" Nov 28 07:18:30 crc kubenswrapper[4946]: I1128 07:18:30.791675 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1dc38591f3fe43d534ca3a32b55c4cc20f1ed9c36d9e889d2540e36f214004ac"} err="failed to get container status \"1dc38591f3fe43d534ca3a32b55c4cc20f1ed9c36d9e889d2540e36f214004ac\": rpc error: code = NotFound desc = could not find container \"1dc38591f3fe43d534ca3a32b55c4cc20f1ed9c36d9e889d2540e36f214004ac\": container with ID starting with 1dc38591f3fe43d534ca3a32b55c4cc20f1ed9c36d9e889d2540e36f214004ac not found: ID does not exist" Nov 28 07:18:30 crc kubenswrapper[4946]: I1128 07:18:30.806565 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 07:18:30 crc kubenswrapper[4946]: I1128 07:18:30.820462 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 07:18:30 crc kubenswrapper[4946]: I1128 07:18:30.839866 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 07:18:30 crc kubenswrapper[4946]: E1128 07:18:30.840328 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5919ecc3-7d40-4914-9e48-bde9efff3853" containerName="nova-cell1-novncproxy-novncproxy" Nov 28 07:18:30 crc kubenswrapper[4946]: I1128 07:18:30.840343 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="5919ecc3-7d40-4914-9e48-bde9efff3853" containerName="nova-cell1-novncproxy-novncproxy" Nov 28 07:18:30 crc kubenswrapper[4946]: I1128 07:18:30.840571 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="5919ecc3-7d40-4914-9e48-bde9efff3853" containerName="nova-cell1-novncproxy-novncproxy" Nov 28 07:18:30 crc kubenswrapper[4946]: I1128 07:18:30.841260 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:18:30 crc kubenswrapper[4946]: I1128 07:18:30.844745 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 28 07:18:30 crc kubenswrapper[4946]: I1128 07:18:30.845484 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Nov 28 07:18:30 crc kubenswrapper[4946]: I1128 07:18:30.845616 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Nov 28 07:18:30 crc kubenswrapper[4946]: I1128 07:18:30.859711 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 07:18:30 crc kubenswrapper[4946]: I1128 07:18:30.880044 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/1279af9a-0d83-4c31-94b1-6c732b89a785-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1279af9a-0d83-4c31-94b1-6c732b89a785\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:18:30 crc kubenswrapper[4946]: I1128 07:18:30.880100 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1279af9a-0d83-4c31-94b1-6c732b89a785-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1279af9a-0d83-4c31-94b1-6c732b89a785\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:18:30 crc kubenswrapper[4946]: I1128 07:18:30.880145 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v9wl\" (UniqueName: \"kubernetes.io/projected/1279af9a-0d83-4c31-94b1-6c732b89a785-kube-api-access-4v9wl\") pod \"nova-cell1-novncproxy-0\" (UID: \"1279af9a-0d83-4c31-94b1-6c732b89a785\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:18:30 crc kubenswrapper[4946]: I1128 07:18:30.880208 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1279af9a-0d83-4c31-94b1-6c732b89a785-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1279af9a-0d83-4c31-94b1-6c732b89a785\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:18:30 crc kubenswrapper[4946]: I1128 07:18:30.880236 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/1279af9a-0d83-4c31-94b1-6c732b89a785-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1279af9a-0d83-4c31-94b1-6c732b89a785\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:18:30 crc kubenswrapper[4946]: I1128 07:18:30.981754 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1279af9a-0d83-4c31-94b1-6c732b89a785-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1279af9a-0d83-4c31-94b1-6c732b89a785\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:18:30 crc kubenswrapper[4946]: I1128 07:18:30.981807 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/1279af9a-0d83-4c31-94b1-6c732b89a785-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1279af9a-0d83-4c31-94b1-6c732b89a785\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 
07:18:30 crc kubenswrapper[4946]: I1128 07:18:30.981858 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/1279af9a-0d83-4c31-94b1-6c732b89a785-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1279af9a-0d83-4c31-94b1-6c732b89a785\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:18:30 crc kubenswrapper[4946]: I1128 07:18:30.981885 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1279af9a-0d83-4c31-94b1-6c732b89a785-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1279af9a-0d83-4c31-94b1-6c732b89a785\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:18:30 crc kubenswrapper[4946]: I1128 07:18:30.981925 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v9wl\" (UniqueName: \"kubernetes.io/projected/1279af9a-0d83-4c31-94b1-6c732b89a785-kube-api-access-4v9wl\") pod \"nova-cell1-novncproxy-0\" (UID: \"1279af9a-0d83-4c31-94b1-6c732b89a785\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:18:30 crc kubenswrapper[4946]: I1128 07:18:30.986082 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/1279af9a-0d83-4c31-94b1-6c732b89a785-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1279af9a-0d83-4c31-94b1-6c732b89a785\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:18:30 crc kubenswrapper[4946]: I1128 07:18:30.986288 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1279af9a-0d83-4c31-94b1-6c732b89a785-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1279af9a-0d83-4c31-94b1-6c732b89a785\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:18:30 crc kubenswrapper[4946]: I1128 07:18:30.986672 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1279af9a-0d83-4c31-94b1-6c732b89a785-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1279af9a-0d83-4c31-94b1-6c732b89a785\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:18:30 crc kubenswrapper[4946]: I1128 07:18:30.986891 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/1279af9a-0d83-4c31-94b1-6c732b89a785-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1279af9a-0d83-4c31-94b1-6c732b89a785\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:18:31 crc kubenswrapper[4946]: I1128 07:18:31.004501 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4v9wl\" (UniqueName: \"kubernetes.io/projected/1279af9a-0d83-4c31-94b1-6c732b89a785-kube-api-access-4v9wl\") pod \"nova-cell1-novncproxy-0\" (UID: \"1279af9a-0d83-4c31-94b1-6c732b89a785\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:18:31 crc kubenswrapper[4946]: I1128 07:18:31.166001 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:18:31 crc kubenswrapper[4946]: I1128 07:18:31.664256 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 07:18:31 crc kubenswrapper[4946]: I1128 07:18:31.733567 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1279af9a-0d83-4c31-94b1-6c732b89a785","Type":"ContainerStarted","Data":"de486bff19c7f1c94a92bc5ce30f2c68b13a5ab44e0e1e6e8d05527f56060f4a"} Nov 28 07:18:32 crc kubenswrapper[4946]: I1128 07:18:32.010573 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5919ecc3-7d40-4914-9e48-bde9efff3853" path="/var/lib/kubelet/pods/5919ecc3-7d40-4914-9e48-bde9efff3853/volumes" Nov 28 07:18:32 crc kubenswrapper[4946]: I1128 07:18:32.749776 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1279af9a-0d83-4c31-94b1-6c732b89a785","Type":"ContainerStarted","Data":"a98ed7264301f60a91e8b70f16d9b85a1db93f02fb19bfcfa856fa5650ad98e4"} Nov 28 07:18:32 crc kubenswrapper[4946]: I1128 07:18:32.780365 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.780335712 podStartE2EDuration="2.780335712s" podCreationTimestamp="2025-11-28 07:18:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:18:32.775935884 +0000 UTC m=+1567.154001055" watchObservedRunningTime="2025-11-28 07:18:32.780335712 +0000 UTC m=+1567.158400853" Nov 28 07:18:34 crc kubenswrapper[4946]: I1128 07:18:34.914237 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 28 07:18:34 crc kubenswrapper[4946]: I1128 07:18:34.915622 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 28 07:18:34 crc kubenswrapper[4946]: I1128 07:18:34.917751 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 28 07:18:34 crc kubenswrapper[4946]: I1128 07:18:34.919617 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 28 07:18:35 crc kubenswrapper[4946]: I1128 07:18:35.787489 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 28 07:18:35 crc kubenswrapper[4946]: I1128 07:18:35.792733 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 28 07:18:36 crc kubenswrapper[4946]: I1128 07:18:36.025759 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bc646c8f9-smgp4"] Nov 28 07:18:36 crc kubenswrapper[4946]: I1128 07:18:36.028049 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bc646c8f9-smgp4" Nov 28 07:18:36 crc kubenswrapper[4946]: I1128 07:18:36.041326 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc646c8f9-smgp4"] Nov 28 07:18:36 crc kubenswrapper[4946]: I1128 07:18:36.111641 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23813153-a582-4ea1-bdf5-f81b2994aed6-dns-swift-storage-0\") pod \"dnsmasq-dns-6bc646c8f9-smgp4\" (UID: \"23813153-a582-4ea1-bdf5-f81b2994aed6\") " pod="openstack/dnsmasq-dns-6bc646c8f9-smgp4" Nov 28 07:18:36 crc kubenswrapper[4946]: I1128 07:18:36.111700 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23813153-a582-4ea1-bdf5-f81b2994aed6-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc646c8f9-smgp4\" (UID: \"23813153-a582-4ea1-bdf5-f81b2994aed6\") " pod="openstack/dnsmasq-dns-6bc646c8f9-smgp4" Nov 28 07:18:36 crc kubenswrapper[4946]: I1128 07:18:36.111800 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23813153-a582-4ea1-bdf5-f81b2994aed6-config\") pod \"dnsmasq-dns-6bc646c8f9-smgp4\" (UID: \"23813153-a582-4ea1-bdf5-f81b2994aed6\") " pod="openstack/dnsmasq-dns-6bc646c8f9-smgp4" Nov 28 07:18:36 crc kubenswrapper[4946]: I1128 07:18:36.111835 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23813153-a582-4ea1-bdf5-f81b2994aed6-ovsdbserver-nb\") pod \"dnsmasq-dns-6bc646c8f9-smgp4\" (UID: \"23813153-a582-4ea1-bdf5-f81b2994aed6\") " pod="openstack/dnsmasq-dns-6bc646c8f9-smgp4" Nov 28 07:18:36 crc kubenswrapper[4946]: I1128 07:18:36.112137 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddtqp\" (UniqueName: \"kubernetes.io/projected/23813153-a582-4ea1-bdf5-f81b2994aed6-kube-api-access-ddtqp\") pod \"dnsmasq-dns-6bc646c8f9-smgp4\" (UID: \"23813153-a582-4ea1-bdf5-f81b2994aed6\") " pod="openstack/dnsmasq-dns-6bc646c8f9-smgp4" Nov 28 07:18:36 crc kubenswrapper[4946]: I1128 07:18:36.112373 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23813153-a582-4ea1-bdf5-f81b2994aed6-dns-svc\") pod \"dnsmasq-dns-6bc646c8f9-smgp4\" (UID: \"23813153-a582-4ea1-bdf5-f81b2994aed6\") " pod="openstack/dnsmasq-dns-6bc646c8f9-smgp4" Nov 28 07:18:36 crc kubenswrapper[4946]: I1128 07:18:36.167111 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:18:36 crc kubenswrapper[4946]: I1128 07:18:36.214523 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddtqp\" (UniqueName: \"kubernetes.io/projected/23813153-a582-4ea1-bdf5-f81b2994aed6-kube-api-access-ddtqp\") pod \"dnsmasq-dns-6bc646c8f9-smgp4\" (UID: \"23813153-a582-4ea1-bdf5-f81b2994aed6\") " pod="openstack/dnsmasq-dns-6bc646c8f9-smgp4" Nov 28 07:18:36 crc kubenswrapper[4946]: I1128 07:18:36.214573 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23813153-a582-4ea1-bdf5-f81b2994aed6-dns-svc\") pod \"dnsmasq-dns-6bc646c8f9-smgp4\" (UID: 
\"23813153-a582-4ea1-bdf5-f81b2994aed6\") " pod="openstack/dnsmasq-dns-6bc646c8f9-smgp4" Nov 28 07:18:36 crc kubenswrapper[4946]: I1128 07:18:36.214612 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23813153-a582-4ea1-bdf5-f81b2994aed6-dns-swift-storage-0\") pod \"dnsmasq-dns-6bc646c8f9-smgp4\" (UID: \"23813153-a582-4ea1-bdf5-f81b2994aed6\") " pod="openstack/dnsmasq-dns-6bc646c8f9-smgp4" Nov 28 07:18:36 crc kubenswrapper[4946]: I1128 07:18:36.214651 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23813153-a582-4ea1-bdf5-f81b2994aed6-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc646c8f9-smgp4\" (UID: \"23813153-a582-4ea1-bdf5-f81b2994aed6\") " pod="openstack/dnsmasq-dns-6bc646c8f9-smgp4" Nov 28 07:18:36 crc kubenswrapper[4946]: I1128 07:18:36.214725 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23813153-a582-4ea1-bdf5-f81b2994aed6-config\") pod \"dnsmasq-dns-6bc646c8f9-smgp4\" (UID: \"23813153-a582-4ea1-bdf5-f81b2994aed6\") " pod="openstack/dnsmasq-dns-6bc646c8f9-smgp4" Nov 28 07:18:36 crc kubenswrapper[4946]: I1128 07:18:36.214755 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23813153-a582-4ea1-bdf5-f81b2994aed6-ovsdbserver-nb\") pod \"dnsmasq-dns-6bc646c8f9-smgp4\" (UID: \"23813153-a582-4ea1-bdf5-f81b2994aed6\") " pod="openstack/dnsmasq-dns-6bc646c8f9-smgp4" Nov 28 07:18:36 crc kubenswrapper[4946]: I1128 07:18:36.215675 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23813153-a582-4ea1-bdf5-f81b2994aed6-ovsdbserver-nb\") pod \"dnsmasq-dns-6bc646c8f9-smgp4\" (UID: \"23813153-a582-4ea1-bdf5-f81b2994aed6\") " pod="openstack/dnsmasq-dns-6bc646c8f9-smgp4" Nov 28 07:18:36 crc kubenswrapper[4946]: I1128 07:18:36.215683 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23813153-a582-4ea1-bdf5-f81b2994aed6-dns-svc\") pod \"dnsmasq-dns-6bc646c8f9-smgp4\" (UID: \"23813153-a582-4ea1-bdf5-f81b2994aed6\") " pod="openstack/dnsmasq-dns-6bc646c8f9-smgp4" Nov 28 07:18:36 crc kubenswrapper[4946]: I1128 07:18:36.216270 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23813153-a582-4ea1-bdf5-f81b2994aed6-config\") pod \"dnsmasq-dns-6bc646c8f9-smgp4\" (UID: \"23813153-a582-4ea1-bdf5-f81b2994aed6\") " pod="openstack/dnsmasq-dns-6bc646c8f9-smgp4" Nov 28 07:18:36 crc kubenswrapper[4946]: I1128 07:18:36.216389 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23813153-a582-4ea1-bdf5-f81b2994aed6-dns-swift-storage-0\") pod \"dnsmasq-dns-6bc646c8f9-smgp4\" (UID: \"23813153-a582-4ea1-bdf5-f81b2994aed6\") " pod="openstack/dnsmasq-dns-6bc646c8f9-smgp4" Nov 28 07:18:36 crc kubenswrapper[4946]: I1128 07:18:36.216522 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23813153-a582-4ea1-bdf5-f81b2994aed6-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc646c8f9-smgp4\" (UID: \"23813153-a582-4ea1-bdf5-f81b2994aed6\") " pod="openstack/dnsmasq-dns-6bc646c8f9-smgp4" Nov 28 07:18:36 crc 
kubenswrapper[4946]: I1128 07:18:36.234231 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddtqp\" (UniqueName: \"kubernetes.io/projected/23813153-a582-4ea1-bdf5-f81b2994aed6-kube-api-access-ddtqp\") pod \"dnsmasq-dns-6bc646c8f9-smgp4\" (UID: \"23813153-a582-4ea1-bdf5-f81b2994aed6\") " pod="openstack/dnsmasq-dns-6bc646c8f9-smgp4" Nov 28 07:18:36 crc kubenswrapper[4946]: I1128 07:18:36.366643 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc646c8f9-smgp4" Nov 28 07:18:36 crc kubenswrapper[4946]: I1128 07:18:36.969249 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc646c8f9-smgp4"] Nov 28 07:18:37 crc kubenswrapper[4946]: I1128 07:18:37.806376 4946 generic.go:334] "Generic (PLEG): container finished" podID="23813153-a582-4ea1-bdf5-f81b2994aed6" containerID="03897902ee4aec557adaba7cabf089950c612511ec09140569b554e27e9ebd74" exitCode=0 Nov 28 07:18:37 crc kubenswrapper[4946]: I1128 07:18:37.806486 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc646c8f9-smgp4" event={"ID":"23813153-a582-4ea1-bdf5-f81b2994aed6","Type":"ContainerDied","Data":"03897902ee4aec557adaba7cabf089950c612511ec09140569b554e27e9ebd74"} Nov 28 07:18:37 crc kubenswrapper[4946]: I1128 07:18:37.806810 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc646c8f9-smgp4" event={"ID":"23813153-a582-4ea1-bdf5-f81b2994aed6","Type":"ContainerStarted","Data":"9d2e3a71051d1be78327bc9f4e290b1533e28074078e0c1bca7158416982e102"} Nov 28 07:18:38 crc kubenswrapper[4946]: I1128 07:18:38.321744 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:18:38 crc kubenswrapper[4946]: I1128 07:18:38.322813 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="78b36fc3-eefb-4e39-854a-cb47fc071ce8" containerName="ceilometer-central-agent" containerID="cri-o://ae2dffb836d7163122f7f8bb040fc5ff6afa12c5d3cf22e091d2c225a591de63" gracePeriod=30 Nov 28 07:18:38 crc kubenswrapper[4946]: I1128 07:18:38.323002 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="78b36fc3-eefb-4e39-854a-cb47fc071ce8" containerName="proxy-httpd" containerID="cri-o://ec10cd3cf50926151c61de1b32a5a2264b32de0a1f2bb9e57b70cbe152954656" gracePeriod=30 Nov 28 07:18:38 crc kubenswrapper[4946]: I1128 07:18:38.323058 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="78b36fc3-eefb-4e39-854a-cb47fc071ce8" containerName="sg-core" containerID="cri-o://506b76300850ba41a05a13b06ea9754cb81f3f0a3827b318c3cb59582c3f7eec" gracePeriod=30 Nov 28 07:18:38 crc kubenswrapper[4946]: I1128 07:18:38.323107 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="78b36fc3-eefb-4e39-854a-cb47fc071ce8" containerName="ceilometer-notification-agent" containerID="cri-o://8738789435a9ffcc57707ab5a88cb5f0bce1e0a03a7c86c06c46ee7fc5db79e6" gracePeriod=30 Nov 28 07:18:38 crc kubenswrapper[4946]: I1128 07:18:38.329049 4946 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="78b36fc3-eefb-4e39-854a-cb47fc071ce8" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.188:3000/\": EOF" Nov 28 07:18:38 crc kubenswrapper[4946]: I1128 07:18:38.630626 4946 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/nova-api-0"] Nov 28 07:18:38 crc kubenswrapper[4946]: I1128 07:18:38.820347 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc646c8f9-smgp4" event={"ID":"23813153-a582-4ea1-bdf5-f81b2994aed6","Type":"ContainerStarted","Data":"c572a089b466148ca1bc267403c169adc75a8306ad42f222db62e4a135735727"} Nov 28 07:18:38 crc kubenswrapper[4946]: I1128 07:18:38.820476 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bc646c8f9-smgp4" Nov 28 07:18:38 crc kubenswrapper[4946]: I1128 07:18:38.838390 4946 generic.go:334] "Generic (PLEG): container finished" podID="78b36fc3-eefb-4e39-854a-cb47fc071ce8" containerID="ec10cd3cf50926151c61de1b32a5a2264b32de0a1f2bb9e57b70cbe152954656" exitCode=0 Nov 28 07:18:38 crc kubenswrapper[4946]: I1128 07:18:38.838447 4946 generic.go:334] "Generic (PLEG): container finished" podID="78b36fc3-eefb-4e39-854a-cb47fc071ce8" containerID="506b76300850ba41a05a13b06ea9754cb81f3f0a3827b318c3cb59582c3f7eec" exitCode=2 Nov 28 07:18:38 crc kubenswrapper[4946]: I1128 07:18:38.838486 4946 generic.go:334] "Generic (PLEG): container finished" podID="78b36fc3-eefb-4e39-854a-cb47fc071ce8" containerID="ae2dffb836d7163122f7f8bb040fc5ff6afa12c5d3cf22e091d2c225a591de63" exitCode=0 Nov 28 07:18:38 crc kubenswrapper[4946]: I1128 07:18:38.838849 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7b94decf-8fc0-40e9-aec6-5bc974c8bd31" containerName="nova-api-log" containerID="cri-o://d3be03d7e21fcb7493467c17b0c51e4a36f997e53fc38b279a638e3ebeecce7c" gracePeriod=30 Nov 28 07:18:38 crc kubenswrapper[4946]: I1128 07:18:38.839629 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78b36fc3-eefb-4e39-854a-cb47fc071ce8","Type":"ContainerDied","Data":"ec10cd3cf50926151c61de1b32a5a2264b32de0a1f2bb9e57b70cbe152954656"} Nov 28 07:18:38 crc kubenswrapper[4946]: I1128 07:18:38.839687 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78b36fc3-eefb-4e39-854a-cb47fc071ce8","Type":"ContainerDied","Data":"506b76300850ba41a05a13b06ea9754cb81f3f0a3827b318c3cb59582c3f7eec"} Nov 28 07:18:38 crc kubenswrapper[4946]: I1128 07:18:38.839713 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78b36fc3-eefb-4e39-854a-cb47fc071ce8","Type":"ContainerDied","Data":"ae2dffb836d7163122f7f8bb040fc5ff6afa12c5d3cf22e091d2c225a591de63"} Nov 28 07:18:38 crc kubenswrapper[4946]: I1128 07:18:38.839775 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7b94decf-8fc0-40e9-aec6-5bc974c8bd31" containerName="nova-api-api" containerID="cri-o://77d5525729ead9c316a58288e3afe4056575da23c5db111e75a8ea618e7ea28b" gracePeriod=30 Nov 28 07:18:38 crc kubenswrapper[4946]: I1128 07:18:38.856397 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bc646c8f9-smgp4" podStartSLOduration=3.856372559 podStartE2EDuration="3.856372559s" podCreationTimestamp="2025-11-28 07:18:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:18:38.845590224 +0000 UTC m=+1573.223655345" watchObservedRunningTime="2025-11-28 07:18:38.856372559 +0000 UTC m=+1573.234437690" Nov 28 07:18:39 crc kubenswrapper[4946]: I1128 07:18:39.854141 4946 generic.go:334] "Generic (PLEG): 
container finished" podID="7b94decf-8fc0-40e9-aec6-5bc974c8bd31" containerID="d3be03d7e21fcb7493467c17b0c51e4a36f997e53fc38b279a638e3ebeecce7c" exitCode=143 Nov 28 07:18:39 crc kubenswrapper[4946]: I1128 07:18:39.854516 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7b94decf-8fc0-40e9-aec6-5bc974c8bd31","Type":"ContainerDied","Data":"d3be03d7e21fcb7493467c17b0c51e4a36f997e53fc38b279a638e3ebeecce7c"} Nov 28 07:18:40 crc kubenswrapper[4946]: I1128 07:18:40.880924 4946 generic.go:334] "Generic (PLEG): container finished" podID="78b36fc3-eefb-4e39-854a-cb47fc071ce8" containerID="8738789435a9ffcc57707ab5a88cb5f0bce1e0a03a7c86c06c46ee7fc5db79e6" exitCode=0 Nov 28 07:18:40 crc kubenswrapper[4946]: I1128 07:18:40.881040 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78b36fc3-eefb-4e39-854a-cb47fc071ce8","Type":"ContainerDied","Data":"8738789435a9ffcc57707ab5a88cb5f0bce1e0a03a7c86c06c46ee7fc5db79e6"} Nov 28 07:18:41 crc kubenswrapper[4946]: I1128 07:18:41.158605 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 07:18:41 crc kubenswrapper[4946]: I1128 07:18:41.166782 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:18:41 crc kubenswrapper[4946]: I1128 07:18:41.195536 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:18:41 crc kubenswrapper[4946]: I1128 07:18:41.250779 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/78b36fc3-eefb-4e39-854a-cb47fc071ce8-sg-core-conf-yaml\") pod \"78b36fc3-eefb-4e39-854a-cb47fc071ce8\" (UID: \"78b36fc3-eefb-4e39-854a-cb47fc071ce8\") " Nov 28 07:18:41 crc kubenswrapper[4946]: I1128 07:18:41.250933 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/78b36fc3-eefb-4e39-854a-cb47fc071ce8-ceilometer-tls-certs\") pod \"78b36fc3-eefb-4e39-854a-cb47fc071ce8\" (UID: \"78b36fc3-eefb-4e39-854a-cb47fc071ce8\") " Nov 28 07:18:41 crc kubenswrapper[4946]: I1128 07:18:41.250977 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78b36fc3-eefb-4e39-854a-cb47fc071ce8-config-data\") pod \"78b36fc3-eefb-4e39-854a-cb47fc071ce8\" (UID: \"78b36fc3-eefb-4e39-854a-cb47fc071ce8\") " Nov 28 07:18:41 crc kubenswrapper[4946]: I1128 07:18:41.250994 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9skr\" (UniqueName: \"kubernetes.io/projected/78b36fc3-eefb-4e39-854a-cb47fc071ce8-kube-api-access-c9skr\") pod \"78b36fc3-eefb-4e39-854a-cb47fc071ce8\" (UID: \"78b36fc3-eefb-4e39-854a-cb47fc071ce8\") " Nov 28 07:18:41 crc kubenswrapper[4946]: I1128 07:18:41.251107 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78b36fc3-eefb-4e39-854a-cb47fc071ce8-log-httpd\") pod \"78b36fc3-eefb-4e39-854a-cb47fc071ce8\" (UID: \"78b36fc3-eefb-4e39-854a-cb47fc071ce8\") " Nov 28 07:18:41 crc kubenswrapper[4946]: I1128 07:18:41.251177 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/78b36fc3-eefb-4e39-854a-cb47fc071ce8-combined-ca-bundle\") pod \"78b36fc3-eefb-4e39-854a-cb47fc071ce8\" (UID: \"78b36fc3-eefb-4e39-854a-cb47fc071ce8\") " Nov 28 07:18:41 crc kubenswrapper[4946]: I1128 07:18:41.251240 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78b36fc3-eefb-4e39-854a-cb47fc071ce8-scripts\") pod \"78b36fc3-eefb-4e39-854a-cb47fc071ce8\" (UID: \"78b36fc3-eefb-4e39-854a-cb47fc071ce8\") " Nov 28 07:18:41 crc kubenswrapper[4946]: I1128 07:18:41.251295 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78b36fc3-eefb-4e39-854a-cb47fc071ce8-run-httpd\") pod \"78b36fc3-eefb-4e39-854a-cb47fc071ce8\" (UID: \"78b36fc3-eefb-4e39-854a-cb47fc071ce8\") " Nov 28 07:18:41 crc kubenswrapper[4946]: I1128 07:18:41.251826 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78b36fc3-eefb-4e39-854a-cb47fc071ce8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "78b36fc3-eefb-4e39-854a-cb47fc071ce8" (UID: "78b36fc3-eefb-4e39-854a-cb47fc071ce8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:18:41 crc kubenswrapper[4946]: I1128 07:18:41.251928 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78b36fc3-eefb-4e39-854a-cb47fc071ce8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "78b36fc3-eefb-4e39-854a-cb47fc071ce8" (UID: "78b36fc3-eefb-4e39-854a-cb47fc071ce8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:18:41 crc kubenswrapper[4946]: I1128 07:18:41.252951 4946 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78b36fc3-eefb-4e39-854a-cb47fc071ce8-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:41 crc kubenswrapper[4946]: I1128 07:18:41.252973 4946 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78b36fc3-eefb-4e39-854a-cb47fc071ce8-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:41 crc kubenswrapper[4946]: I1128 07:18:41.256863 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78b36fc3-eefb-4e39-854a-cb47fc071ce8-scripts" (OuterVolumeSpecName: "scripts") pod "78b36fc3-eefb-4e39-854a-cb47fc071ce8" (UID: "78b36fc3-eefb-4e39-854a-cb47fc071ce8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:18:41 crc kubenswrapper[4946]: I1128 07:18:41.258421 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78b36fc3-eefb-4e39-854a-cb47fc071ce8-kube-api-access-c9skr" (OuterVolumeSpecName: "kube-api-access-c9skr") pod "78b36fc3-eefb-4e39-854a-cb47fc071ce8" (UID: "78b36fc3-eefb-4e39-854a-cb47fc071ce8"). InnerVolumeSpecName "kube-api-access-c9skr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:18:41 crc kubenswrapper[4946]: I1128 07:18:41.284810 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78b36fc3-eefb-4e39-854a-cb47fc071ce8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "78b36fc3-eefb-4e39-854a-cb47fc071ce8" (UID: "78b36fc3-eefb-4e39-854a-cb47fc071ce8"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:18:41 crc kubenswrapper[4946]: I1128 07:18:41.324482 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78b36fc3-eefb-4e39-854a-cb47fc071ce8-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "78b36fc3-eefb-4e39-854a-cb47fc071ce8" (UID: "78b36fc3-eefb-4e39-854a-cb47fc071ce8"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:18:41 crc kubenswrapper[4946]: I1128 07:18:41.346309 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78b36fc3-eefb-4e39-854a-cb47fc071ce8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "78b36fc3-eefb-4e39-854a-cb47fc071ce8" (UID: "78b36fc3-eefb-4e39-854a-cb47fc071ce8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:18:41 crc kubenswrapper[4946]: I1128 07:18:41.355292 4946 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/78b36fc3-eefb-4e39-854a-cb47fc071ce8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:41 crc kubenswrapper[4946]: I1128 07:18:41.355325 4946 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/78b36fc3-eefb-4e39-854a-cb47fc071ce8-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:41 crc kubenswrapper[4946]: I1128 07:18:41.355336 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9skr\" (UniqueName: \"kubernetes.io/projected/78b36fc3-eefb-4e39-854a-cb47fc071ce8-kube-api-access-c9skr\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:41 crc kubenswrapper[4946]: I1128 07:18:41.355346 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78b36fc3-eefb-4e39-854a-cb47fc071ce8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:41 crc kubenswrapper[4946]: I1128 07:18:41.355359 4946 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78b36fc3-eefb-4e39-854a-cb47fc071ce8-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:41 crc kubenswrapper[4946]: I1128 07:18:41.372055 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78b36fc3-eefb-4e39-854a-cb47fc071ce8-config-data" (OuterVolumeSpecName: "config-data") pod "78b36fc3-eefb-4e39-854a-cb47fc071ce8" (UID: "78b36fc3-eefb-4e39-854a-cb47fc071ce8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:18:41 crc kubenswrapper[4946]: I1128 07:18:41.457432 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78b36fc3-eefb-4e39-854a-cb47fc071ce8-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:41 crc kubenswrapper[4946]: I1128 07:18:41.904276 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 07:18:41 crc kubenswrapper[4946]: I1128 07:18:41.905847 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78b36fc3-eefb-4e39-854a-cb47fc071ce8","Type":"ContainerDied","Data":"55bc7fb360eff3c9e008b38437840b5cc92cf8b45c42c24f48edfd93929c2888"} Nov 28 07:18:41 crc kubenswrapper[4946]: I1128 07:18:41.905972 4946 scope.go:117] "RemoveContainer" containerID="ec10cd3cf50926151c61de1b32a5a2264b32de0a1f2bb9e57b70cbe152954656" Nov 28 07:18:41 crc kubenswrapper[4946]: I1128 07:18:41.930414 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:18:41 crc kubenswrapper[4946]: I1128 07:18:41.957654 4946 scope.go:117] "RemoveContainer" containerID="506b76300850ba41a05a13b06ea9754cb81f3f0a3827b318c3cb59582c3f7eec" Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.010212 4946 scope.go:117] "RemoveContainer" containerID="8738789435a9ffcc57707ab5a88cb5f0bce1e0a03a7c86c06c46ee7fc5db79e6" Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.039340 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.058985 4946 scope.go:117] "RemoveContainer" containerID="ae2dffb836d7163122f7f8bb040fc5ff6afa12c5d3cf22e091d2c225a591de63" Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.060909 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.073918 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:18:42 crc kubenswrapper[4946]: E1128 07:18:42.074663 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78b36fc3-eefb-4e39-854a-cb47fc071ce8" containerName="sg-core" Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.074686 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="78b36fc3-eefb-4e39-854a-cb47fc071ce8" containerName="sg-core" Nov 28 07:18:42 crc kubenswrapper[4946]: E1128 07:18:42.074738 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78b36fc3-eefb-4e39-854a-cb47fc071ce8" containerName="ceilometer-central-agent" Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.074751 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="78b36fc3-eefb-4e39-854a-cb47fc071ce8" containerName="ceilometer-central-agent" Nov 28 07:18:42 crc kubenswrapper[4946]: E1128 07:18:42.074791 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78b36fc3-eefb-4e39-854a-cb47fc071ce8" containerName="proxy-httpd" Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.074804 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="78b36fc3-eefb-4e39-854a-cb47fc071ce8" containerName="proxy-httpd" Nov 28 07:18:42 crc kubenswrapper[4946]: E1128 07:18:42.074833 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78b36fc3-eefb-4e39-854a-cb47fc071ce8" containerName="ceilometer-notification-agent" Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.074845 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="78b36fc3-eefb-4e39-854a-cb47fc071ce8" containerName="ceilometer-notification-agent" Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.075110 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="78b36fc3-eefb-4e39-854a-cb47fc071ce8" containerName="proxy-httpd" Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 
07:18:42.075138 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="78b36fc3-eefb-4e39-854a-cb47fc071ce8" containerName="sg-core" Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.075155 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="78b36fc3-eefb-4e39-854a-cb47fc071ce8" containerName="ceilometer-notification-agent" Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.075165 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="78b36fc3-eefb-4e39-854a-cb47fc071ce8" containerName="ceilometer-central-agent" Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.097711 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.102058 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.106938 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.107011 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.111308 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.171266 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-2frfb"] Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.172856 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-2frfb" Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.178023 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.178314 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.179990 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/177d3d9f-5e48-4b4e-9329-9d46daa35557-config-data\") pod \"ceilometer-0\" (UID: \"177d3d9f-5e48-4b4e-9329-9d46daa35557\") " pod="openstack/ceilometer-0" Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.180036 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/177d3d9f-5e48-4b4e-9329-9d46daa35557-scripts\") pod \"ceilometer-0\" (UID: \"177d3d9f-5e48-4b4e-9329-9d46daa35557\") " pod="openstack/ceilometer-0" Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.180066 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/177d3d9f-5e48-4b4e-9329-9d46daa35557-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"177d3d9f-5e48-4b4e-9329-9d46daa35557\") " pod="openstack/ceilometer-0" Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.180123 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/177d3d9f-5e48-4b4e-9329-9d46daa35557-log-httpd\") pod \"ceilometer-0\" (UID: \"177d3d9f-5e48-4b4e-9329-9d46daa35557\") " 
pod="openstack/ceilometer-0" Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.180181 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/177d3d9f-5e48-4b4e-9329-9d46daa35557-run-httpd\") pod \"ceilometer-0\" (UID: \"177d3d9f-5e48-4b4e-9329-9d46daa35557\") " pod="openstack/ceilometer-0" Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.180206 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/177d3d9f-5e48-4b4e-9329-9d46daa35557-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"177d3d9f-5e48-4b4e-9329-9d46daa35557\") " pod="openstack/ceilometer-0" Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.180230 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pztt2\" (UniqueName: \"kubernetes.io/projected/177d3d9f-5e48-4b4e-9329-9d46daa35557-kube-api-access-pztt2\") pod \"ceilometer-0\" (UID: \"177d3d9f-5e48-4b4e-9329-9d46daa35557\") " pod="openstack/ceilometer-0" Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.180251 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/177d3d9f-5e48-4b4e-9329-9d46daa35557-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"177d3d9f-5e48-4b4e-9329-9d46daa35557\") " pod="openstack/ceilometer-0" Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.195293 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-2frfb"] Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.282939 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/177d3d9f-5e48-4b4e-9329-9d46daa35557-log-httpd\") pod \"ceilometer-0\" (UID: \"177d3d9f-5e48-4b4e-9329-9d46daa35557\") " pod="openstack/ceilometer-0" Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.283113 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1b34bc2-b103-4751-b3c0-bb31e89e5854-scripts\") pod \"nova-cell1-cell-mapping-2frfb\" (UID: \"e1b34bc2-b103-4751-b3c0-bb31e89e5854\") " pod="openstack/nova-cell1-cell-mapping-2frfb" Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.283199 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/177d3d9f-5e48-4b4e-9329-9d46daa35557-run-httpd\") pod \"ceilometer-0\" (UID: \"177d3d9f-5e48-4b4e-9329-9d46daa35557\") " pod="openstack/ceilometer-0" Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.283285 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/177d3d9f-5e48-4b4e-9329-9d46daa35557-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"177d3d9f-5e48-4b4e-9329-9d46daa35557\") " pod="openstack/ceilometer-0" Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.283363 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pztt2\" (UniqueName: \"kubernetes.io/projected/177d3d9f-5e48-4b4e-9329-9d46daa35557-kube-api-access-pztt2\") pod \"ceilometer-0\" (UID: \"177d3d9f-5e48-4b4e-9329-9d46daa35557\") " pod="openstack/ceilometer-0" Nov 28 07:18:42 
crc kubenswrapper[4946]: I1128 07:18:42.283439 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/177d3d9f-5e48-4b4e-9329-9d46daa35557-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"177d3d9f-5e48-4b4e-9329-9d46daa35557\") " pod="openstack/ceilometer-0"
Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.283509 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1b34bc2-b103-4751-b3c0-bb31e89e5854-config-data\") pod \"nova-cell1-cell-mapping-2frfb\" (UID: \"e1b34bc2-b103-4751-b3c0-bb31e89e5854\") " pod="openstack/nova-cell1-cell-mapping-2frfb"
Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.283577 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/177d3d9f-5e48-4b4e-9329-9d46daa35557-log-httpd\") pod \"ceilometer-0\" (UID: \"177d3d9f-5e48-4b4e-9329-9d46daa35557\") " pod="openstack/ceilometer-0"
Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.283602 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/177d3d9f-5e48-4b4e-9329-9d46daa35557-run-httpd\") pod \"ceilometer-0\" (UID: \"177d3d9f-5e48-4b4e-9329-9d46daa35557\") " pod="openstack/ceilometer-0"
Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.283678 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/177d3d9f-5e48-4b4e-9329-9d46daa35557-config-data\") pod \"ceilometer-0\" (UID: \"177d3d9f-5e48-4b4e-9329-9d46daa35557\") " pod="openstack/ceilometer-0"
Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.283773 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/177d3d9f-5e48-4b4e-9329-9d46daa35557-scripts\") pod \"ceilometer-0\" (UID: \"177d3d9f-5e48-4b4e-9329-9d46daa35557\") " pod="openstack/ceilometer-0"
Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.283857 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/177d3d9f-5e48-4b4e-9329-9d46daa35557-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"177d3d9f-5e48-4b4e-9329-9d46daa35557\") " pod="openstack/ceilometer-0"
Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.283957 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1b34bc2-b103-4751-b3c0-bb31e89e5854-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-2frfb\" (UID: \"e1b34bc2-b103-4751-b3c0-bb31e89e5854\") " pod="openstack/nova-cell1-cell-mapping-2frfb"
Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.284035 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fctq2\" (UniqueName: \"kubernetes.io/projected/e1b34bc2-b103-4751-b3c0-bb31e89e5854-kube-api-access-fctq2\") pod \"nova-cell1-cell-mapping-2frfb\" (UID: \"e1b34bc2-b103-4751-b3c0-bb31e89e5854\") " pod="openstack/nova-cell1-cell-mapping-2frfb"
Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.288873 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/177d3d9f-5e48-4b4e-9329-9d46daa35557-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"177d3d9f-5e48-4b4e-9329-9d46daa35557\") " pod="openstack/ceilometer-0"
Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.292619 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/177d3d9f-5e48-4b4e-9329-9d46daa35557-config-data\") pod \"ceilometer-0\" (UID: \"177d3d9f-5e48-4b4e-9329-9d46daa35557\") " pod="openstack/ceilometer-0"
Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.293823 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/177d3d9f-5e48-4b4e-9329-9d46daa35557-scripts\") pod \"ceilometer-0\" (UID: \"177d3d9f-5e48-4b4e-9329-9d46daa35557\") " pod="openstack/ceilometer-0"
Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.295095 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/177d3d9f-5e48-4b4e-9329-9d46daa35557-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"177d3d9f-5e48-4b4e-9329-9d46daa35557\") " pod="openstack/ceilometer-0"
Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.297679 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/177d3d9f-5e48-4b4e-9329-9d46daa35557-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"177d3d9f-5e48-4b4e-9329-9d46daa35557\") " pod="openstack/ceilometer-0"
Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.317164 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pztt2\" (UniqueName: \"kubernetes.io/projected/177d3d9f-5e48-4b4e-9329-9d46daa35557-kube-api-access-pztt2\") pod \"ceilometer-0\" (UID: \"177d3d9f-5e48-4b4e-9329-9d46daa35557\") " pod="openstack/ceilometer-0"
Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.386652 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1b34bc2-b103-4751-b3c0-bb31e89e5854-scripts\") pod \"nova-cell1-cell-mapping-2frfb\" (UID: \"e1b34bc2-b103-4751-b3c0-bb31e89e5854\") " pod="openstack/nova-cell1-cell-mapping-2frfb"
Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.386762 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1b34bc2-b103-4751-b3c0-bb31e89e5854-config-data\") pod \"nova-cell1-cell-mapping-2frfb\" (UID: \"e1b34bc2-b103-4751-b3c0-bb31e89e5854\") " pod="openstack/nova-cell1-cell-mapping-2frfb"
Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.386960 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1b34bc2-b103-4751-b3c0-bb31e89e5854-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-2frfb\" (UID: \"e1b34bc2-b103-4751-b3c0-bb31e89e5854\") " pod="openstack/nova-cell1-cell-mapping-2frfb"
Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.386996 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fctq2\" (UniqueName: \"kubernetes.io/projected/e1b34bc2-b103-4751-b3c0-bb31e89e5854-kube-api-access-fctq2\") pod \"nova-cell1-cell-mapping-2frfb\" (UID: \"e1b34bc2-b103-4751-b3c0-bb31e89e5854\") " pod="openstack/nova-cell1-cell-mapping-2frfb"
Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.391172 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1b34bc2-b103-4751-b3c0-bb31e89e5854-config-data\") pod \"nova-cell1-cell-mapping-2frfb\" (UID: \"e1b34bc2-b103-4751-b3c0-bb31e89e5854\") " pod="openstack/nova-cell1-cell-mapping-2frfb"
Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.391690 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1b34bc2-b103-4751-b3c0-bb31e89e5854-scripts\") pod \"nova-cell1-cell-mapping-2frfb\" (UID: \"e1b34bc2-b103-4751-b3c0-bb31e89e5854\") " pod="openstack/nova-cell1-cell-mapping-2frfb"
Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.392103 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1b34bc2-b103-4751-b3c0-bb31e89e5854-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-2frfb\" (UID: \"e1b34bc2-b103-4751-b3c0-bb31e89e5854\") " pod="openstack/nova-cell1-cell-mapping-2frfb"
Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.409756 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fctq2\" (UniqueName: \"kubernetes.io/projected/e1b34bc2-b103-4751-b3c0-bb31e89e5854-kube-api-access-fctq2\") pod \"nova-cell1-cell-mapping-2frfb\" (UID: \"e1b34bc2-b103-4751-b3c0-bb31e89e5854\") " pod="openstack/nova-cell1-cell-mapping-2frfb"
Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.442865 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.505958 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.513526 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-2frfb"
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-2frfb" Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.592074 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b94decf-8fc0-40e9-aec6-5bc974c8bd31-combined-ca-bundle\") pod \"7b94decf-8fc0-40e9-aec6-5bc974c8bd31\" (UID: \"7b94decf-8fc0-40e9-aec6-5bc974c8bd31\") " Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.592203 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5z28\" (UniqueName: \"kubernetes.io/projected/7b94decf-8fc0-40e9-aec6-5bc974c8bd31-kube-api-access-f5z28\") pod \"7b94decf-8fc0-40e9-aec6-5bc974c8bd31\" (UID: \"7b94decf-8fc0-40e9-aec6-5bc974c8bd31\") " Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.592239 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b94decf-8fc0-40e9-aec6-5bc974c8bd31-config-data\") pod \"7b94decf-8fc0-40e9-aec6-5bc974c8bd31\" (UID: \"7b94decf-8fc0-40e9-aec6-5bc974c8bd31\") " Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.592453 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b94decf-8fc0-40e9-aec6-5bc974c8bd31-logs\") pod \"7b94decf-8fc0-40e9-aec6-5bc974c8bd31\" (UID: \"7b94decf-8fc0-40e9-aec6-5bc974c8bd31\") " Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.593812 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b94decf-8fc0-40e9-aec6-5bc974c8bd31-logs" (OuterVolumeSpecName: "logs") pod "7b94decf-8fc0-40e9-aec6-5bc974c8bd31" (UID: "7b94decf-8fc0-40e9-aec6-5bc974c8bd31"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.596580 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b94decf-8fc0-40e9-aec6-5bc974c8bd31-kube-api-access-f5z28" (OuterVolumeSpecName: "kube-api-access-f5z28") pod "7b94decf-8fc0-40e9-aec6-5bc974c8bd31" (UID: "7b94decf-8fc0-40e9-aec6-5bc974c8bd31"). InnerVolumeSpecName "kube-api-access-f5z28". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.633297 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b94decf-8fc0-40e9-aec6-5bc974c8bd31-config-data" (OuterVolumeSpecName: "config-data") pod "7b94decf-8fc0-40e9-aec6-5bc974c8bd31" (UID: "7b94decf-8fc0-40e9-aec6-5bc974c8bd31"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.636488 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b94decf-8fc0-40e9-aec6-5bc974c8bd31-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7b94decf-8fc0-40e9-aec6-5bc974c8bd31" (UID: "7b94decf-8fc0-40e9-aec6-5bc974c8bd31"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.694980 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b94decf-8fc0-40e9-aec6-5bc974c8bd31-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.695047 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5z28\" (UniqueName: \"kubernetes.io/projected/7b94decf-8fc0-40e9-aec6-5bc974c8bd31-kube-api-access-f5z28\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.695066 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b94decf-8fc0-40e9-aec6-5bc974c8bd31-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.695100 4946 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b94decf-8fc0-40e9-aec6-5bc974c8bd31-logs\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.839296 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:18:42 crc kubenswrapper[4946]: W1128 07:18:42.841743 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod177d3d9f_5e48_4b4e_9329_9d46daa35557.slice/crio-9e164633707d4e421bae4d9bb28f87286e7ceec32f8c3f9934039032283c4284 WatchSource:0}: Error finding container 9e164633707d4e421bae4d9bb28f87286e7ceec32f8c3f9934039032283c4284: Status 404 returned error can't find the container with id 9e164633707d4e421bae4d9bb28f87286e7ceec32f8c3f9934039032283c4284 Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.911240 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"177d3d9f-5e48-4b4e-9329-9d46daa35557","Type":"ContainerStarted","Data":"9e164633707d4e421bae4d9bb28f87286e7ceec32f8c3f9934039032283c4284"} Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.913596 4946 generic.go:334] "Generic (PLEG): container finished" podID="7b94decf-8fc0-40e9-aec6-5bc974c8bd31" containerID="77d5525729ead9c316a58288e3afe4056575da23c5db111e75a8ea618e7ea28b" exitCode=0 Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.913651 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7b94decf-8fc0-40e9-aec6-5bc974c8bd31","Type":"ContainerDied","Data":"77d5525729ead9c316a58288e3afe4056575da23c5db111e75a8ea618e7ea28b"} Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.913670 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7b94decf-8fc0-40e9-aec6-5bc974c8bd31","Type":"ContainerDied","Data":"4b84a6f6f89005089448b245dec63aff975c254d701a03076ee3f62f1b72f34b"} Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.913687 4946 scope.go:117] "RemoveContainer" containerID="77d5525729ead9c316a58288e3afe4056575da23c5db111e75a8ea618e7ea28b" Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.913821 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.941438 4946 scope.go:117] "RemoveContainer" containerID="d3be03d7e21fcb7493467c17b0c51e4a36f997e53fc38b279a638e3ebeecce7c" Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.957085 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.973163 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.988850 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 28 07:18:42 crc kubenswrapper[4946]: E1128 07:18:42.989488 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b94decf-8fc0-40e9-aec6-5bc974c8bd31" containerName="nova-api-api" Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.989512 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b94decf-8fc0-40e9-aec6-5bc974c8bd31" containerName="nova-api-api" Nov 28 07:18:42 crc kubenswrapper[4946]: E1128 07:18:42.989530 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b94decf-8fc0-40e9-aec6-5bc974c8bd31" containerName="nova-api-log" Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.989542 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b94decf-8fc0-40e9-aec6-5bc974c8bd31" containerName="nova-api-log" Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.989829 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b94decf-8fc0-40e9-aec6-5bc974c8bd31" containerName="nova-api-log" Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.989859 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b94decf-8fc0-40e9-aec6-5bc974c8bd31" containerName="nova-api-api" Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.991358 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.992847 4946 scope.go:117] "RemoveContainer" containerID="77d5525729ead9c316a58288e3afe4056575da23c5db111e75a8ea618e7ea28b" Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.993955 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.994965 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.995305 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 28 07:18:42 crc kubenswrapper[4946]: E1128 07:18:42.995588 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77d5525729ead9c316a58288e3afe4056575da23c5db111e75a8ea618e7ea28b\": container with ID starting with 77d5525729ead9c316a58288e3afe4056575da23c5db111e75a8ea618e7ea28b not found: ID does not exist" containerID="77d5525729ead9c316a58288e3afe4056575da23c5db111e75a8ea618e7ea28b" Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.995634 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77d5525729ead9c316a58288e3afe4056575da23c5db111e75a8ea618e7ea28b"} err="failed to get container status \"77d5525729ead9c316a58288e3afe4056575da23c5db111e75a8ea618e7ea28b\": rpc error: code = NotFound desc = could not find container \"77d5525729ead9c316a58288e3afe4056575da23c5db111e75a8ea618e7ea28b\": container with ID starting with 77d5525729ead9c316a58288e3afe4056575da23c5db111e75a8ea618e7ea28b not found: ID does not exist" Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.995673 4946 scope.go:117] "RemoveContainer" containerID="d3be03d7e21fcb7493467c17b0c51e4a36f997e53fc38b279a638e3ebeecce7c" Nov 28 07:18:42 crc kubenswrapper[4946]: E1128 07:18:42.997067 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3be03d7e21fcb7493467c17b0c51e4a36f997e53fc38b279a638e3ebeecce7c\": container with ID starting with d3be03d7e21fcb7493467c17b0c51e4a36f997e53fc38b279a638e3ebeecce7c not found: ID does not exist" containerID="d3be03d7e21fcb7493467c17b0c51e4a36f997e53fc38b279a638e3ebeecce7c" Nov 28 07:18:42 crc kubenswrapper[4946]: I1128 07:18:42.997116 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3be03d7e21fcb7493467c17b0c51e4a36f997e53fc38b279a638e3ebeecce7c"} err="failed to get container status \"d3be03d7e21fcb7493467c17b0c51e4a36f997e53fc38b279a638e3ebeecce7c\": rpc error: code = NotFound desc = could not find container \"d3be03d7e21fcb7493467c17b0c51e4a36f997e53fc38b279a638e3ebeecce7c\": container with ID starting with d3be03d7e21fcb7493467c17b0c51e4a36f997e53fc38b279a638e3ebeecce7c not found: ID does not exist" Nov 28 07:18:43 crc kubenswrapper[4946]: I1128 07:18:43.016912 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 07:18:43 crc kubenswrapper[4946]: I1128 07:18:43.105631 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/530d5752-8c2f-4ea7-9729-db7ad592b073-public-tls-certs\") pod \"nova-api-0\" (UID: \"530d5752-8c2f-4ea7-9729-db7ad592b073\") " pod="openstack/nova-api-0" Nov 28 
07:18:43 crc kubenswrapper[4946]: I1128 07:18:43.105743 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/530d5752-8c2f-4ea7-9729-db7ad592b073-internal-tls-certs\") pod \"nova-api-0\" (UID: \"530d5752-8c2f-4ea7-9729-db7ad592b073\") " pod="openstack/nova-api-0" Nov 28 07:18:43 crc kubenswrapper[4946]: I1128 07:18:43.105779 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/530d5752-8c2f-4ea7-9729-db7ad592b073-logs\") pod \"nova-api-0\" (UID: \"530d5752-8c2f-4ea7-9729-db7ad592b073\") " pod="openstack/nova-api-0" Nov 28 07:18:43 crc kubenswrapper[4946]: I1128 07:18:43.105938 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/530d5752-8c2f-4ea7-9729-db7ad592b073-config-data\") pod \"nova-api-0\" (UID: \"530d5752-8c2f-4ea7-9729-db7ad592b073\") " pod="openstack/nova-api-0" Nov 28 07:18:43 crc kubenswrapper[4946]: I1128 07:18:43.105977 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/530d5752-8c2f-4ea7-9729-db7ad592b073-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"530d5752-8c2f-4ea7-9729-db7ad592b073\") " pod="openstack/nova-api-0" Nov 28 07:18:43 crc kubenswrapper[4946]: I1128 07:18:43.106024 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g55qp\" (UniqueName: \"kubernetes.io/projected/530d5752-8c2f-4ea7-9729-db7ad592b073-kube-api-access-g55qp\") pod \"nova-api-0\" (UID: \"530d5752-8c2f-4ea7-9729-db7ad592b073\") " pod="openstack/nova-api-0" Nov 28 07:18:43 crc kubenswrapper[4946]: I1128 07:18:43.106695 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-2frfb"] Nov 28 07:18:43 crc kubenswrapper[4946]: I1128 07:18:43.208772 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/530d5752-8c2f-4ea7-9729-db7ad592b073-public-tls-certs\") pod \"nova-api-0\" (UID: \"530d5752-8c2f-4ea7-9729-db7ad592b073\") " pod="openstack/nova-api-0" Nov 28 07:18:43 crc kubenswrapper[4946]: I1128 07:18:43.209422 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/530d5752-8c2f-4ea7-9729-db7ad592b073-internal-tls-certs\") pod \"nova-api-0\" (UID: \"530d5752-8c2f-4ea7-9729-db7ad592b073\") " pod="openstack/nova-api-0" Nov 28 07:18:43 crc kubenswrapper[4946]: I1128 07:18:43.209606 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/530d5752-8c2f-4ea7-9729-db7ad592b073-logs\") pod \"nova-api-0\" (UID: \"530d5752-8c2f-4ea7-9729-db7ad592b073\") " pod="openstack/nova-api-0" Nov 28 07:18:43 crc kubenswrapper[4946]: I1128 07:18:43.209780 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/530d5752-8c2f-4ea7-9729-db7ad592b073-config-data\") pod \"nova-api-0\" (UID: \"530d5752-8c2f-4ea7-9729-db7ad592b073\") " pod="openstack/nova-api-0" Nov 28 07:18:43 crc kubenswrapper[4946]: I1128 07:18:43.209910 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/530d5752-8c2f-4ea7-9729-db7ad592b073-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"530d5752-8c2f-4ea7-9729-db7ad592b073\") " pod="openstack/nova-api-0" Nov 28 07:18:43 crc kubenswrapper[4946]: I1128 07:18:43.210075 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g55qp\" (UniqueName: \"kubernetes.io/projected/530d5752-8c2f-4ea7-9729-db7ad592b073-kube-api-access-g55qp\") pod \"nova-api-0\" (UID: \"530d5752-8c2f-4ea7-9729-db7ad592b073\") " pod="openstack/nova-api-0" Nov 28 07:18:43 crc kubenswrapper[4946]: I1128 07:18:43.210224 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/530d5752-8c2f-4ea7-9729-db7ad592b073-logs\") pod \"nova-api-0\" (UID: \"530d5752-8c2f-4ea7-9729-db7ad592b073\") " pod="openstack/nova-api-0" Nov 28 07:18:43 crc kubenswrapper[4946]: I1128 07:18:43.213415 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/530d5752-8c2f-4ea7-9729-db7ad592b073-internal-tls-certs\") pod \"nova-api-0\" (UID: \"530d5752-8c2f-4ea7-9729-db7ad592b073\") " pod="openstack/nova-api-0" Nov 28 07:18:43 crc kubenswrapper[4946]: I1128 07:18:43.214228 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/530d5752-8c2f-4ea7-9729-db7ad592b073-public-tls-certs\") pod \"nova-api-0\" (UID: \"530d5752-8c2f-4ea7-9729-db7ad592b073\") " pod="openstack/nova-api-0" Nov 28 07:18:43 crc kubenswrapper[4946]: I1128 07:18:43.215043 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/530d5752-8c2f-4ea7-9729-db7ad592b073-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"530d5752-8c2f-4ea7-9729-db7ad592b073\") " pod="openstack/nova-api-0" Nov 28 07:18:43 crc kubenswrapper[4946]: I1128 07:18:43.215099 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/530d5752-8c2f-4ea7-9729-db7ad592b073-config-data\") pod \"nova-api-0\" (UID: \"530d5752-8c2f-4ea7-9729-db7ad592b073\") " pod="openstack/nova-api-0" Nov 28 07:18:43 crc kubenswrapper[4946]: I1128 07:18:43.227774 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g55qp\" (UniqueName: \"kubernetes.io/projected/530d5752-8c2f-4ea7-9729-db7ad592b073-kube-api-access-g55qp\") pod \"nova-api-0\" (UID: \"530d5752-8c2f-4ea7-9729-db7ad592b073\") " pod="openstack/nova-api-0" Nov 28 07:18:43 crc kubenswrapper[4946]: I1128 07:18:43.310453 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 28 07:18:43 crc kubenswrapper[4946]: I1128 07:18:43.804760 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 07:18:43 crc kubenswrapper[4946]: W1128 07:18:43.804848 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod530d5752_8c2f_4ea7_9729_db7ad592b073.slice/crio-f1804690135259868e0336b9c0e146f5409dd9e68ba358d45919d6b57d0e87aa WatchSource:0}: Error finding container f1804690135259868e0336b9c0e146f5409dd9e68ba358d45919d6b57d0e87aa: Status 404 returned error can't find the container with id f1804690135259868e0336b9c0e146f5409dd9e68ba358d45919d6b57d0e87aa Nov 28 07:18:43 crc kubenswrapper[4946]: I1128 07:18:43.946328 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"177d3d9f-5e48-4b4e-9329-9d46daa35557","Type":"ContainerStarted","Data":"c42c1219c7b69cbd56741a6ed7605e384c6a1b1aa06e921f85ae5e9b886c56e0"} Nov 28 07:18:43 crc kubenswrapper[4946]: I1128 07:18:43.949724 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"530d5752-8c2f-4ea7-9729-db7ad592b073","Type":"ContainerStarted","Data":"f1804690135259868e0336b9c0e146f5409dd9e68ba358d45919d6b57d0e87aa"} Nov 28 07:18:43 crc kubenswrapper[4946]: I1128 07:18:43.955983 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-2frfb" event={"ID":"e1b34bc2-b103-4751-b3c0-bb31e89e5854","Type":"ContainerStarted","Data":"bdf5d4a6d7c993569bedcc568b4c82d9c837e7bbc8a9e5ecdc19c96b5ce2738b"} Nov 28 07:18:43 crc kubenswrapper[4946]: I1128 07:18:43.956035 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-2frfb" event={"ID":"e1b34bc2-b103-4751-b3c0-bb31e89e5854","Type":"ContainerStarted","Data":"ae5f63c7fb41800c85f0079dd9fd5f41b6b9b9ca2f497560f43435e7e89807db"} Nov 28 07:18:43 crc kubenswrapper[4946]: I1128 07:18:43.977441 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-2frfb" podStartSLOduration=1.977416716 podStartE2EDuration="1.977416716s" podCreationTimestamp="2025-11-28 07:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:18:43.975871438 +0000 UTC m=+1578.353936549" watchObservedRunningTime="2025-11-28 07:18:43.977416716 +0000 UTC m=+1578.355481827" Nov 28 07:18:44 crc kubenswrapper[4946]: I1128 07:18:44.021132 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78b36fc3-eefb-4e39-854a-cb47fc071ce8" path="/var/lib/kubelet/pods/78b36fc3-eefb-4e39-854a-cb47fc071ce8/volumes" Nov 28 07:18:44 crc kubenswrapper[4946]: I1128 07:18:44.022637 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b94decf-8fc0-40e9-aec6-5bc974c8bd31" path="/var/lib/kubelet/pods/7b94decf-8fc0-40e9-aec6-5bc974c8bd31/volumes" Nov 28 07:18:44 crc kubenswrapper[4946]: I1128 07:18:44.973065 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"177d3d9f-5e48-4b4e-9329-9d46daa35557","Type":"ContainerStarted","Data":"6652f9f13937850c60218d658275da3e9395b434290cd817f8561dffb81b033e"} Nov 28 07:18:44 crc kubenswrapper[4946]: I1128 07:18:44.979575 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"530d5752-8c2f-4ea7-9729-db7ad592b073","Type":"ContainerStarted","Data":"194be73950de2daf5a5a4de2fb269ebed8cdaa575fb9170aa9fc9a53b72a41d0"} Nov 28 07:18:44 crc kubenswrapper[4946]: I1128 07:18:44.979612 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"530d5752-8c2f-4ea7-9729-db7ad592b073","Type":"ContainerStarted","Data":"967b5eb5ae78d63e4a6d6f66cd2f85c0619c9f9e6987bdad4b307d561577468f"} Nov 28 07:18:45 crc kubenswrapper[4946]: I1128 07:18:45.018203 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.018176048 podStartE2EDuration="3.018176048s" podCreationTimestamp="2025-11-28 07:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:18:45.002709149 +0000 UTC m=+1579.380774290" watchObservedRunningTime="2025-11-28 07:18:45.018176048 +0000 UTC m=+1579.396241159" Nov 28 07:18:46 crc kubenswrapper[4946]: I1128 07:18:45.999559 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"177d3d9f-5e48-4b4e-9329-9d46daa35557","Type":"ContainerStarted","Data":"e44eb1721b1c349eb901f9e287cf8f82e2cd4a6445ee30ac6af7e82f4ca77c2a"} Nov 28 07:18:46 crc kubenswrapper[4946]: I1128 07:18:46.368722 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bc646c8f9-smgp4" Nov 28 07:18:46 crc kubenswrapper[4946]: I1128 07:18:46.463160 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75df6cf455-7ctpw"] Nov 28 07:18:46 crc kubenswrapper[4946]: I1128 07:18:46.463503 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75df6cf455-7ctpw" podUID="01e161c0-19be-45e2-9e1c-939bf287bd3e" containerName="dnsmasq-dns" containerID="cri-o://013a463e5332a4ac23726892ef20f93b135d9eefa71f004d0bc295ed866d6e69" gracePeriod=10 Nov 28 07:18:47 crc kubenswrapper[4946]: I1128 07:18:47.029541 4946 generic.go:334] "Generic (PLEG): container finished" podID="01e161c0-19be-45e2-9e1c-939bf287bd3e" containerID="013a463e5332a4ac23726892ef20f93b135d9eefa71f004d0bc295ed866d6e69" exitCode=0 Nov 28 07:18:47 crc kubenswrapper[4946]: I1128 07:18:47.029672 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75df6cf455-7ctpw" event={"ID":"01e161c0-19be-45e2-9e1c-939bf287bd3e","Type":"ContainerDied","Data":"013a463e5332a4ac23726892ef20f93b135d9eefa71f004d0bc295ed866d6e69"} Nov 28 07:18:47 crc kubenswrapper[4946]: I1128 07:18:47.029926 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75df6cf455-7ctpw" event={"ID":"01e161c0-19be-45e2-9e1c-939bf287bd3e","Type":"ContainerDied","Data":"a334b93cb025df01de10265472779d3a75245878cd83853812ffecfd9e06d4e3"} Nov 28 07:18:47 crc kubenswrapper[4946]: I1128 07:18:47.029953 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a334b93cb025df01de10265472779d3a75245878cd83853812ffecfd9e06d4e3" Nov 28 07:18:47 crc kubenswrapper[4946]: I1128 07:18:47.064750 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75df6cf455-7ctpw" Nov 28 07:18:47 crc kubenswrapper[4946]: I1128 07:18:47.104227 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/01e161c0-19be-45e2-9e1c-939bf287bd3e-dns-swift-storage-0\") pod \"01e161c0-19be-45e2-9e1c-939bf287bd3e\" (UID: \"01e161c0-19be-45e2-9e1c-939bf287bd3e\") " Nov 28 07:18:47 crc kubenswrapper[4946]: I1128 07:18:47.104516 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01e161c0-19be-45e2-9e1c-939bf287bd3e-config\") pod \"01e161c0-19be-45e2-9e1c-939bf287bd3e\" (UID: \"01e161c0-19be-45e2-9e1c-939bf287bd3e\") " Nov 28 07:18:47 crc kubenswrapper[4946]: I1128 07:18:47.104548 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sp5bz\" (UniqueName: \"kubernetes.io/projected/01e161c0-19be-45e2-9e1c-939bf287bd3e-kube-api-access-sp5bz\") pod \"01e161c0-19be-45e2-9e1c-939bf287bd3e\" (UID: \"01e161c0-19be-45e2-9e1c-939bf287bd3e\") " Nov 28 07:18:47 crc kubenswrapper[4946]: I1128 07:18:47.104637 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/01e161c0-19be-45e2-9e1c-939bf287bd3e-dns-svc\") pod \"01e161c0-19be-45e2-9e1c-939bf287bd3e\" (UID: \"01e161c0-19be-45e2-9e1c-939bf287bd3e\") " Nov 28 07:18:47 crc kubenswrapper[4946]: I1128 07:18:47.104677 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/01e161c0-19be-45e2-9e1c-939bf287bd3e-ovsdbserver-sb\") pod \"01e161c0-19be-45e2-9e1c-939bf287bd3e\" (UID: \"01e161c0-19be-45e2-9e1c-939bf287bd3e\") " Nov 28 07:18:47 crc kubenswrapper[4946]: I1128 07:18:47.104728 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/01e161c0-19be-45e2-9e1c-939bf287bd3e-ovsdbserver-nb\") pod \"01e161c0-19be-45e2-9e1c-939bf287bd3e\" (UID: \"01e161c0-19be-45e2-9e1c-939bf287bd3e\") " Nov 28 07:18:47 crc kubenswrapper[4946]: I1128 07:18:47.119399 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01e161c0-19be-45e2-9e1c-939bf287bd3e-kube-api-access-sp5bz" (OuterVolumeSpecName: "kube-api-access-sp5bz") pod "01e161c0-19be-45e2-9e1c-939bf287bd3e" (UID: "01e161c0-19be-45e2-9e1c-939bf287bd3e"). InnerVolumeSpecName "kube-api-access-sp5bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:18:47 crc kubenswrapper[4946]: I1128 07:18:47.208829 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01e161c0-19be-45e2-9e1c-939bf287bd3e-config" (OuterVolumeSpecName: "config") pod "01e161c0-19be-45e2-9e1c-939bf287bd3e" (UID: "01e161c0-19be-45e2-9e1c-939bf287bd3e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:18:47 crc kubenswrapper[4946]: I1128 07:18:47.209579 4946 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01e161c0-19be-45e2-9e1c-939bf287bd3e-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:47 crc kubenswrapper[4946]: I1128 07:18:47.209623 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sp5bz\" (UniqueName: \"kubernetes.io/projected/01e161c0-19be-45e2-9e1c-939bf287bd3e-kube-api-access-sp5bz\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:47 crc kubenswrapper[4946]: I1128 07:18:47.213628 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01e161c0-19be-45e2-9e1c-939bf287bd3e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "01e161c0-19be-45e2-9e1c-939bf287bd3e" (UID: "01e161c0-19be-45e2-9e1c-939bf287bd3e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:18:47 crc kubenswrapper[4946]: I1128 07:18:47.217955 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01e161c0-19be-45e2-9e1c-939bf287bd3e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "01e161c0-19be-45e2-9e1c-939bf287bd3e" (UID: "01e161c0-19be-45e2-9e1c-939bf287bd3e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:18:47 crc kubenswrapper[4946]: I1128 07:18:47.223318 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01e161c0-19be-45e2-9e1c-939bf287bd3e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "01e161c0-19be-45e2-9e1c-939bf287bd3e" (UID: "01e161c0-19be-45e2-9e1c-939bf287bd3e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:18:47 crc kubenswrapper[4946]: I1128 07:18:47.266551 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01e161c0-19be-45e2-9e1c-939bf287bd3e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "01e161c0-19be-45e2-9e1c-939bf287bd3e" (UID: "01e161c0-19be-45e2-9e1c-939bf287bd3e"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:18:47 crc kubenswrapper[4946]: I1128 07:18:47.311296 4946 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/01e161c0-19be-45e2-9e1c-939bf287bd3e-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:47 crc kubenswrapper[4946]: I1128 07:18:47.311568 4946 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/01e161c0-19be-45e2-9e1c-939bf287bd3e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:47 crc kubenswrapper[4946]: I1128 07:18:47.311631 4946 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/01e161c0-19be-45e2-9e1c-939bf287bd3e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:47 crc kubenswrapper[4946]: I1128 07:18:47.311715 4946 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/01e161c0-19be-45e2-9e1c-939bf287bd3e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:48 crc kubenswrapper[4946]: I1128 07:18:48.043155 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"177d3d9f-5e48-4b4e-9329-9d46daa35557","Type":"ContainerStarted","Data":"ffcb008ee579acbdfa8d874703f551f470e1a6f511bed4e136ae6381f70bdf76"} Nov 28 07:18:48 crc kubenswrapper[4946]: I1128 07:18:48.043561 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 28 07:18:48 crc kubenswrapper[4946]: I1128 07:18:48.043210 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75df6cf455-7ctpw" Nov 28 07:18:48 crc kubenswrapper[4946]: I1128 07:18:48.070228 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.9886873729999999 podStartE2EDuration="6.070206449s" podCreationTimestamp="2025-11-28 07:18:42 +0000 UTC" firstStartedPulling="2025-11-28 07:18:42.844887835 +0000 UTC m=+1577.222952946" lastFinishedPulling="2025-11-28 07:18:46.926406911 +0000 UTC m=+1581.304472022" observedRunningTime="2025-11-28 07:18:48.064208382 +0000 UTC m=+1582.442273513" watchObservedRunningTime="2025-11-28 07:18:48.070206449 +0000 UTC m=+1582.448271560" Nov 28 07:18:48 crc kubenswrapper[4946]: I1128 07:18:48.088754 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75df6cf455-7ctpw"] Nov 28 07:18:48 crc kubenswrapper[4946]: I1128 07:18:48.098190 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75df6cf455-7ctpw"] Nov 28 07:18:49 crc kubenswrapper[4946]: I1128 07:18:49.056005 4946 generic.go:334] "Generic (PLEG): container finished" podID="e1b34bc2-b103-4751-b3c0-bb31e89e5854" containerID="bdf5d4a6d7c993569bedcc568b4c82d9c837e7bbc8a9e5ecdc19c96b5ce2738b" exitCode=0 Nov 28 07:18:49 crc kubenswrapper[4946]: I1128 07:18:49.056093 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-2frfb" event={"ID":"e1b34bc2-b103-4751-b3c0-bb31e89e5854","Type":"ContainerDied","Data":"bdf5d4a6d7c993569bedcc568b4c82d9c837e7bbc8a9e5ecdc19c96b5ce2738b"} Nov 28 07:18:49 crc kubenswrapper[4946]: I1128 07:18:49.146933 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xdwz8"] Nov 28 07:18:49 crc kubenswrapper[4946]: E1128 07:18:49.148430 4946 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="01e161c0-19be-45e2-9e1c-939bf287bd3e" containerName="init" Nov 28 07:18:49 crc kubenswrapper[4946]: I1128 07:18:49.148497 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="01e161c0-19be-45e2-9e1c-939bf287bd3e" containerName="init" Nov 28 07:18:49 crc kubenswrapper[4946]: E1128 07:18:49.148548 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01e161c0-19be-45e2-9e1c-939bf287bd3e" containerName="dnsmasq-dns" Nov 28 07:18:49 crc kubenswrapper[4946]: I1128 07:18:49.148561 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="01e161c0-19be-45e2-9e1c-939bf287bd3e" containerName="dnsmasq-dns" Nov 28 07:18:49 crc kubenswrapper[4946]: I1128 07:18:49.148865 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="01e161c0-19be-45e2-9e1c-939bf287bd3e" containerName="dnsmasq-dns" Nov 28 07:18:49 crc kubenswrapper[4946]: I1128 07:18:49.151128 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xdwz8" Nov 28 07:18:49 crc kubenswrapper[4946]: I1128 07:18:49.160101 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xdwz8"] Nov 28 07:18:49 crc kubenswrapper[4946]: I1128 07:18:49.248998 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzf6t\" (UniqueName: \"kubernetes.io/projected/200ece9e-7940-45c6-86bc-fdf0da574bc2-kube-api-access-dzf6t\") pod \"redhat-marketplace-xdwz8\" (UID: \"200ece9e-7940-45c6-86bc-fdf0da574bc2\") " pod="openshift-marketplace/redhat-marketplace-xdwz8" Nov 28 07:18:49 crc kubenswrapper[4946]: I1128 07:18:49.249091 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/200ece9e-7940-45c6-86bc-fdf0da574bc2-catalog-content\") pod \"redhat-marketplace-xdwz8\" (UID: \"200ece9e-7940-45c6-86bc-fdf0da574bc2\") " pod="openshift-marketplace/redhat-marketplace-xdwz8" Nov 28 07:18:49 crc kubenswrapper[4946]: I1128 07:18:49.249230 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/200ece9e-7940-45c6-86bc-fdf0da574bc2-utilities\") pod \"redhat-marketplace-xdwz8\" (UID: \"200ece9e-7940-45c6-86bc-fdf0da574bc2\") " pod="openshift-marketplace/redhat-marketplace-xdwz8" Nov 28 07:18:49 crc kubenswrapper[4946]: I1128 07:18:49.351965 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/200ece9e-7940-45c6-86bc-fdf0da574bc2-utilities\") pod \"redhat-marketplace-xdwz8\" (UID: \"200ece9e-7940-45c6-86bc-fdf0da574bc2\") " pod="openshift-marketplace/redhat-marketplace-xdwz8" Nov 28 07:18:49 crc kubenswrapper[4946]: I1128 07:18:49.352202 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzf6t\" (UniqueName: \"kubernetes.io/projected/200ece9e-7940-45c6-86bc-fdf0da574bc2-kube-api-access-dzf6t\") pod \"redhat-marketplace-xdwz8\" (UID: \"200ece9e-7940-45c6-86bc-fdf0da574bc2\") " pod="openshift-marketplace/redhat-marketplace-xdwz8" Nov 28 07:18:49 crc kubenswrapper[4946]: I1128 07:18:49.352281 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/200ece9e-7940-45c6-86bc-fdf0da574bc2-catalog-content\") pod 
\"redhat-marketplace-xdwz8\" (UID: \"200ece9e-7940-45c6-86bc-fdf0da574bc2\") " pod="openshift-marketplace/redhat-marketplace-xdwz8" Nov 28 07:18:49 crc kubenswrapper[4946]: I1128 07:18:49.352728 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/200ece9e-7940-45c6-86bc-fdf0da574bc2-utilities\") pod \"redhat-marketplace-xdwz8\" (UID: \"200ece9e-7940-45c6-86bc-fdf0da574bc2\") " pod="openshift-marketplace/redhat-marketplace-xdwz8" Nov 28 07:18:49 crc kubenswrapper[4946]: I1128 07:18:49.353091 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/200ece9e-7940-45c6-86bc-fdf0da574bc2-catalog-content\") pod \"redhat-marketplace-xdwz8\" (UID: \"200ece9e-7940-45c6-86bc-fdf0da574bc2\") " pod="openshift-marketplace/redhat-marketplace-xdwz8" Nov 28 07:18:49 crc kubenswrapper[4946]: I1128 07:18:49.384390 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzf6t\" (UniqueName: \"kubernetes.io/projected/200ece9e-7940-45c6-86bc-fdf0da574bc2-kube-api-access-dzf6t\") pod \"redhat-marketplace-xdwz8\" (UID: \"200ece9e-7940-45c6-86bc-fdf0da574bc2\") " pod="openshift-marketplace/redhat-marketplace-xdwz8" Nov 28 07:18:49 crc kubenswrapper[4946]: I1128 07:18:49.485668 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xdwz8" Nov 28 07:18:50 crc kubenswrapper[4946]: I1128 07:18:50.001263 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01e161c0-19be-45e2-9e1c-939bf287bd3e" path="/var/lib/kubelet/pods/01e161c0-19be-45e2-9e1c-939bf287bd3e/volumes" Nov 28 07:18:50 crc kubenswrapper[4946]: W1128 07:18:50.010638 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod200ece9e_7940_45c6_86bc_fdf0da574bc2.slice/crio-ed893722dad3b7e5edf8c0937fbf035e74de79df74460f6d6b7ab8eb107bdbcd WatchSource:0}: Error finding container ed893722dad3b7e5edf8c0937fbf035e74de79df74460f6d6b7ab8eb107bdbcd: Status 404 returned error can't find the container with id ed893722dad3b7e5edf8c0937fbf035e74de79df74460f6d6b7ab8eb107bdbcd Nov 28 07:18:50 crc kubenswrapper[4946]: I1128 07:18:50.018343 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xdwz8"] Nov 28 07:18:50 crc kubenswrapper[4946]: I1128 07:18:50.071973 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xdwz8" event={"ID":"200ece9e-7940-45c6-86bc-fdf0da574bc2","Type":"ContainerStarted","Data":"ed893722dad3b7e5edf8c0937fbf035e74de79df74460f6d6b7ab8eb107bdbcd"} Nov 28 07:18:50 crc kubenswrapper[4946]: I1128 07:18:50.347239 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-2frfb" Nov 28 07:18:50 crc kubenswrapper[4946]: I1128 07:18:50.484250 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1b34bc2-b103-4751-b3c0-bb31e89e5854-scripts\") pod \"e1b34bc2-b103-4751-b3c0-bb31e89e5854\" (UID: \"e1b34bc2-b103-4751-b3c0-bb31e89e5854\") " Nov 28 07:18:50 crc kubenswrapper[4946]: I1128 07:18:50.484309 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1b34bc2-b103-4751-b3c0-bb31e89e5854-config-data\") pod \"e1b34bc2-b103-4751-b3c0-bb31e89e5854\" (UID: \"e1b34bc2-b103-4751-b3c0-bb31e89e5854\") " Nov 28 07:18:50 crc kubenswrapper[4946]: I1128 07:18:50.484401 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1b34bc2-b103-4751-b3c0-bb31e89e5854-combined-ca-bundle\") pod \"e1b34bc2-b103-4751-b3c0-bb31e89e5854\" (UID: \"e1b34bc2-b103-4751-b3c0-bb31e89e5854\") " Nov 28 07:18:50 crc kubenswrapper[4946]: I1128 07:18:50.485202 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fctq2\" (UniqueName: \"kubernetes.io/projected/e1b34bc2-b103-4751-b3c0-bb31e89e5854-kube-api-access-fctq2\") pod \"e1b34bc2-b103-4751-b3c0-bb31e89e5854\" (UID: \"e1b34bc2-b103-4751-b3c0-bb31e89e5854\") " Nov 28 07:18:50 crc kubenswrapper[4946]: I1128 07:18:50.489809 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1b34bc2-b103-4751-b3c0-bb31e89e5854-scripts" (OuterVolumeSpecName: "scripts") pod "e1b34bc2-b103-4751-b3c0-bb31e89e5854" (UID: "e1b34bc2-b103-4751-b3c0-bb31e89e5854"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:18:50 crc kubenswrapper[4946]: I1128 07:18:50.490295 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1b34bc2-b103-4751-b3c0-bb31e89e5854-kube-api-access-fctq2" (OuterVolumeSpecName: "kube-api-access-fctq2") pod "e1b34bc2-b103-4751-b3c0-bb31e89e5854" (UID: "e1b34bc2-b103-4751-b3c0-bb31e89e5854"). InnerVolumeSpecName "kube-api-access-fctq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:18:50 crc kubenswrapper[4946]: I1128 07:18:50.515734 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1b34bc2-b103-4751-b3c0-bb31e89e5854-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e1b34bc2-b103-4751-b3c0-bb31e89e5854" (UID: "e1b34bc2-b103-4751-b3c0-bb31e89e5854"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:18:50 crc kubenswrapper[4946]: I1128 07:18:50.523998 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1b34bc2-b103-4751-b3c0-bb31e89e5854-config-data" (OuterVolumeSpecName: "config-data") pod "e1b34bc2-b103-4751-b3c0-bb31e89e5854" (UID: "e1b34bc2-b103-4751-b3c0-bb31e89e5854"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:18:50 crc kubenswrapper[4946]: I1128 07:18:50.588284 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fctq2\" (UniqueName: \"kubernetes.io/projected/e1b34bc2-b103-4751-b3c0-bb31e89e5854-kube-api-access-fctq2\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:50 crc kubenswrapper[4946]: I1128 07:18:50.588351 4946 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1b34bc2-b103-4751-b3c0-bb31e89e5854-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:50 crc kubenswrapper[4946]: I1128 07:18:50.588367 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1b34bc2-b103-4751-b3c0-bb31e89e5854-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:50 crc kubenswrapper[4946]: I1128 07:18:50.588382 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1b34bc2-b103-4751-b3c0-bb31e89e5854-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:51 crc kubenswrapper[4946]: I1128 07:18:51.087297 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-2frfb" event={"ID":"e1b34bc2-b103-4751-b3c0-bb31e89e5854","Type":"ContainerDied","Data":"ae5f63c7fb41800c85f0079dd9fd5f41b6b9b9ca2f497560f43435e7e89807db"} Nov 28 07:18:51 crc kubenswrapper[4946]: I1128 07:18:51.088261 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae5f63c7fb41800c85f0079dd9fd5f41b6b9b9ca2f497560f43435e7e89807db" Nov 28 07:18:51 crc kubenswrapper[4946]: I1128 07:18:51.087643 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-2frfb" Nov 28 07:18:51 crc kubenswrapper[4946]: I1128 07:18:51.091923 4946 generic.go:334] "Generic (PLEG): container finished" podID="200ece9e-7940-45c6-86bc-fdf0da574bc2" containerID="bb07bcee2916a907f3d38fe3c0d82e43daf857ac01107fa5e32768cb5ddcfc5d" exitCode=0 Nov 28 07:18:51 crc kubenswrapper[4946]: I1128 07:18:51.092004 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xdwz8" event={"ID":"200ece9e-7940-45c6-86bc-fdf0da574bc2","Type":"ContainerDied","Data":"bb07bcee2916a907f3d38fe3c0d82e43daf857ac01107fa5e32768cb5ddcfc5d"} Nov 28 07:18:51 crc kubenswrapper[4946]: I1128 07:18:51.268953 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 28 07:18:51 crc kubenswrapper[4946]: I1128 07:18:51.269480 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="530d5752-8c2f-4ea7-9729-db7ad592b073" containerName="nova-api-log" containerID="cri-o://967b5eb5ae78d63e4a6d6f66cd2f85c0619c9f9e6987bdad4b307d561577468f" gracePeriod=30 Nov 28 07:18:51 crc kubenswrapper[4946]: I1128 07:18:51.269612 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="530d5752-8c2f-4ea7-9729-db7ad592b073" containerName="nova-api-api" containerID="cri-o://194be73950de2daf5a5a4de2fb269ebed8cdaa575fb9170aa9fc9a53b72a41d0" gracePeriod=30 Nov 28 07:18:51 crc kubenswrapper[4946]: I1128 07:18:51.282377 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 07:18:51 crc kubenswrapper[4946]: I1128 07:18:51.282641 4946 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/nova-scheduler-0" podUID="8eb7b962-332f-4f57-a438-5a4c33a3859a" containerName="nova-scheduler-scheduler" containerID="cri-o://f872345529c102cbf095d9b27a69a827b21f57789777c251b623d90ef5501ba9" gracePeriod=30 Nov 28 07:18:51 crc kubenswrapper[4946]: I1128 07:18:51.352329 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 07:18:51 crc kubenswrapper[4946]: I1128 07:18:51.353332 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e2a58ce5-5b99-461d-b7b9-ebeb95624d70" containerName="nova-metadata-log" containerID="cri-o://34e825dd736734b342689a93be686bc2c8e15162e54c5db9c531e3183a7da476" gracePeriod=30 Nov 28 07:18:51 crc kubenswrapper[4946]: I1128 07:18:51.353453 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e2a58ce5-5b99-461d-b7b9-ebeb95624d70" containerName="nova-metadata-metadata" containerID="cri-o://59f6230e209f2d726ba347a01276004806be61ec0ebb63fde762dcc3c87045f1" gracePeriod=30 Nov 28 07:18:51 crc kubenswrapper[4946]: I1128 07:18:51.884294 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.026096 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/530d5752-8c2f-4ea7-9729-db7ad592b073-internal-tls-certs\") pod \"530d5752-8c2f-4ea7-9729-db7ad592b073\" (UID: \"530d5752-8c2f-4ea7-9729-db7ad592b073\") " Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.026573 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/530d5752-8c2f-4ea7-9729-db7ad592b073-config-data\") pod \"530d5752-8c2f-4ea7-9729-db7ad592b073\" (UID: \"530d5752-8c2f-4ea7-9729-db7ad592b073\") " Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.026652 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/530d5752-8c2f-4ea7-9729-db7ad592b073-logs\") pod \"530d5752-8c2f-4ea7-9729-db7ad592b073\" (UID: \"530d5752-8c2f-4ea7-9729-db7ad592b073\") " Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.026720 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/530d5752-8c2f-4ea7-9729-db7ad592b073-public-tls-certs\") pod \"530d5752-8c2f-4ea7-9729-db7ad592b073\" (UID: \"530d5752-8c2f-4ea7-9729-db7ad592b073\") " Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.026740 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/530d5752-8c2f-4ea7-9729-db7ad592b073-combined-ca-bundle\") pod \"530d5752-8c2f-4ea7-9729-db7ad592b073\" (UID: \"530d5752-8c2f-4ea7-9729-db7ad592b073\") " Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.027160 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/530d5752-8c2f-4ea7-9729-db7ad592b073-logs" (OuterVolumeSpecName: "logs") pod "530d5752-8c2f-4ea7-9729-db7ad592b073" (UID: "530d5752-8c2f-4ea7-9729-db7ad592b073"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.027380 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g55qp\" (UniqueName: \"kubernetes.io/projected/530d5752-8c2f-4ea7-9729-db7ad592b073-kube-api-access-g55qp\") pod \"530d5752-8c2f-4ea7-9729-db7ad592b073\" (UID: \"530d5752-8c2f-4ea7-9729-db7ad592b073\") " Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.027894 4946 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/530d5752-8c2f-4ea7-9729-db7ad592b073-logs\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.042193 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/530d5752-8c2f-4ea7-9729-db7ad592b073-kube-api-access-g55qp" (OuterVolumeSpecName: "kube-api-access-g55qp") pod "530d5752-8c2f-4ea7-9729-db7ad592b073" (UID: "530d5752-8c2f-4ea7-9729-db7ad592b073"). InnerVolumeSpecName "kube-api-access-g55qp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.056604 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/530d5752-8c2f-4ea7-9729-db7ad592b073-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "530d5752-8c2f-4ea7-9729-db7ad592b073" (UID: "530d5752-8c2f-4ea7-9729-db7ad592b073"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.066623 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/530d5752-8c2f-4ea7-9729-db7ad592b073-config-data" (OuterVolumeSpecName: "config-data") pod "530d5752-8c2f-4ea7-9729-db7ad592b073" (UID: "530d5752-8c2f-4ea7-9729-db7ad592b073"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.084875 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/530d5752-8c2f-4ea7-9729-db7ad592b073-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "530d5752-8c2f-4ea7-9729-db7ad592b073" (UID: "530d5752-8c2f-4ea7-9729-db7ad592b073"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.094830 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/530d5752-8c2f-4ea7-9729-db7ad592b073-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "530d5752-8c2f-4ea7-9729-db7ad592b073" (UID: "530d5752-8c2f-4ea7-9729-db7ad592b073"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.105520 4946 generic.go:334] "Generic (PLEG): container finished" podID="e2a58ce5-5b99-461d-b7b9-ebeb95624d70" containerID="34e825dd736734b342689a93be686bc2c8e15162e54c5db9c531e3183a7da476" exitCode=143 Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.105670 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e2a58ce5-5b99-461d-b7b9-ebeb95624d70","Type":"ContainerDied","Data":"34e825dd736734b342689a93be686bc2c8e15162e54c5db9c531e3183a7da476"} Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.110432 4946 generic.go:334] "Generic (PLEG): container finished" podID="530d5752-8c2f-4ea7-9729-db7ad592b073" containerID="194be73950de2daf5a5a4de2fb269ebed8cdaa575fb9170aa9fc9a53b72a41d0" exitCode=0 Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.110498 4946 generic.go:334] "Generic (PLEG): container finished" podID="530d5752-8c2f-4ea7-9729-db7ad592b073" containerID="967b5eb5ae78d63e4a6d6f66cd2f85c0619c9f9e6987bdad4b307d561577468f" exitCode=143 Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.110510 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.110531 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"530d5752-8c2f-4ea7-9729-db7ad592b073","Type":"ContainerDied","Data":"194be73950de2daf5a5a4de2fb269ebed8cdaa575fb9170aa9fc9a53b72a41d0"} Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.110573 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"530d5752-8c2f-4ea7-9729-db7ad592b073","Type":"ContainerDied","Data":"967b5eb5ae78d63e4a6d6f66cd2f85c0619c9f9e6987bdad4b307d561577468f"} Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.110593 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"530d5752-8c2f-4ea7-9729-db7ad592b073","Type":"ContainerDied","Data":"f1804690135259868e0336b9c0e146f5409dd9e68ba358d45919d6b57d0e87aa"} Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.110613 4946 scope.go:117] "RemoveContainer" containerID="194be73950de2daf5a5a4de2fb269ebed8cdaa575fb9170aa9fc9a53b72a41d0" Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.130597 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g55qp\" (UniqueName: \"kubernetes.io/projected/530d5752-8c2f-4ea7-9729-db7ad592b073-kube-api-access-g55qp\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.130630 4946 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/530d5752-8c2f-4ea7-9729-db7ad592b073-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.130644 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/530d5752-8c2f-4ea7-9729-db7ad592b073-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.130659 4946 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/530d5752-8c2f-4ea7-9729-db7ad592b073-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.130731 4946 reconciler_common.go:293] "Volume detached for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/530d5752-8c2f-4ea7-9729-db7ad592b073-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.154821 4946 scope.go:117] "RemoveContainer" containerID="967b5eb5ae78d63e4a6d6f66cd2f85c0619c9f9e6987bdad4b307d561577468f" Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.167138 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.197113 4946 scope.go:117] "RemoveContainer" containerID="194be73950de2daf5a5a4de2fb269ebed8cdaa575fb9170aa9fc9a53b72a41d0" Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.197339 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 28 07:18:52 crc kubenswrapper[4946]: E1128 07:18:52.198803 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"194be73950de2daf5a5a4de2fb269ebed8cdaa575fb9170aa9fc9a53b72a41d0\": container with ID starting with 194be73950de2daf5a5a4de2fb269ebed8cdaa575fb9170aa9fc9a53b72a41d0 not found: ID does not exist" containerID="194be73950de2daf5a5a4de2fb269ebed8cdaa575fb9170aa9fc9a53b72a41d0" Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.198872 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"194be73950de2daf5a5a4de2fb269ebed8cdaa575fb9170aa9fc9a53b72a41d0"} err="failed to get container status \"194be73950de2daf5a5a4de2fb269ebed8cdaa575fb9170aa9fc9a53b72a41d0\": rpc error: code = NotFound desc = could not find container \"194be73950de2daf5a5a4de2fb269ebed8cdaa575fb9170aa9fc9a53b72a41d0\": container with ID starting with 194be73950de2daf5a5a4de2fb269ebed8cdaa575fb9170aa9fc9a53b72a41d0 not found: ID does not exist" Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.198919 4946 scope.go:117] "RemoveContainer" containerID="967b5eb5ae78d63e4a6d6f66cd2f85c0619c9f9e6987bdad4b307d561577468f" Nov 28 07:18:52 crc kubenswrapper[4946]: E1128 07:18:52.199529 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"967b5eb5ae78d63e4a6d6f66cd2f85c0619c9f9e6987bdad4b307d561577468f\": container with ID starting with 967b5eb5ae78d63e4a6d6f66cd2f85c0619c9f9e6987bdad4b307d561577468f not found: ID does not exist" containerID="967b5eb5ae78d63e4a6d6f66cd2f85c0619c9f9e6987bdad4b307d561577468f" Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.199577 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"967b5eb5ae78d63e4a6d6f66cd2f85c0619c9f9e6987bdad4b307d561577468f"} err="failed to get container status \"967b5eb5ae78d63e4a6d6f66cd2f85c0619c9f9e6987bdad4b307d561577468f\": rpc error: code = NotFound desc = could not find container \"967b5eb5ae78d63e4a6d6f66cd2f85c0619c9f9e6987bdad4b307d561577468f\": container with ID starting with 967b5eb5ae78d63e4a6d6f66cd2f85c0619c9f9e6987bdad4b307d561577468f not found: ID does not exist" Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.199609 4946 scope.go:117] "RemoveContainer" containerID="194be73950de2daf5a5a4de2fb269ebed8cdaa575fb9170aa9fc9a53b72a41d0" Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.199987 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"194be73950de2daf5a5a4de2fb269ebed8cdaa575fb9170aa9fc9a53b72a41d0"} err="failed to get container status 
\"194be73950de2daf5a5a4de2fb269ebed8cdaa575fb9170aa9fc9a53b72a41d0\": rpc error: code = NotFound desc = could not find container \"194be73950de2daf5a5a4de2fb269ebed8cdaa575fb9170aa9fc9a53b72a41d0\": container with ID starting with 194be73950de2daf5a5a4de2fb269ebed8cdaa575fb9170aa9fc9a53b72a41d0 not found: ID does not exist" Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.200027 4946 scope.go:117] "RemoveContainer" containerID="967b5eb5ae78d63e4a6d6f66cd2f85c0619c9f9e6987bdad4b307d561577468f" Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.200413 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"967b5eb5ae78d63e4a6d6f66cd2f85c0619c9f9e6987bdad4b307d561577468f"} err="failed to get container status \"967b5eb5ae78d63e4a6d6f66cd2f85c0619c9f9e6987bdad4b307d561577468f\": rpc error: code = NotFound desc = could not find container \"967b5eb5ae78d63e4a6d6f66cd2f85c0619c9f9e6987bdad4b307d561577468f\": container with ID starting with 967b5eb5ae78d63e4a6d6f66cd2f85c0619c9f9e6987bdad4b307d561577468f not found: ID does not exist" Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.233436 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 28 07:18:52 crc kubenswrapper[4946]: E1128 07:18:52.234378 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="530d5752-8c2f-4ea7-9729-db7ad592b073" containerName="nova-api-log" Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.234416 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="530d5752-8c2f-4ea7-9729-db7ad592b073" containerName="nova-api-log" Nov 28 07:18:52 crc kubenswrapper[4946]: E1128 07:18:52.234588 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="530d5752-8c2f-4ea7-9729-db7ad592b073" containerName="nova-api-api" Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.234604 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="530d5752-8c2f-4ea7-9729-db7ad592b073" containerName="nova-api-api" Nov 28 07:18:52 crc kubenswrapper[4946]: E1128 07:18:52.234621 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1b34bc2-b103-4751-b3c0-bb31e89e5854" containerName="nova-manage" Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.234633 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1b34bc2-b103-4751-b3c0-bb31e89e5854" containerName="nova-manage" Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.235012 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1b34bc2-b103-4751-b3c0-bb31e89e5854" containerName="nova-manage" Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.235055 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="530d5752-8c2f-4ea7-9729-db7ad592b073" containerName="nova-api-log" Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.235084 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="530d5752-8c2f-4ea7-9729-db7ad592b073" containerName="nova-api-api" Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.237241 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.240162 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.240492 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.240538 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.264629 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.336090 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/587a9b3d-1634-4af6-96d2-e60c03a7d75f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"587a9b3d-1634-4af6-96d2-e60c03a7d75f\") " pod="openstack/nova-api-0" Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.336152 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/587a9b3d-1634-4af6-96d2-e60c03a7d75f-public-tls-certs\") pod \"nova-api-0\" (UID: \"587a9b3d-1634-4af6-96d2-e60c03a7d75f\") " pod="openstack/nova-api-0" Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.336321 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/587a9b3d-1634-4af6-96d2-e60c03a7d75f-logs\") pod \"nova-api-0\" (UID: \"587a9b3d-1634-4af6-96d2-e60c03a7d75f\") " pod="openstack/nova-api-0" Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.336379 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/587a9b3d-1634-4af6-96d2-e60c03a7d75f-internal-tls-certs\") pod \"nova-api-0\" (UID: \"587a9b3d-1634-4af6-96d2-e60c03a7d75f\") " pod="openstack/nova-api-0" Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.336447 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lxxd\" (UniqueName: \"kubernetes.io/projected/587a9b3d-1634-4af6-96d2-e60c03a7d75f-kube-api-access-5lxxd\") pod \"nova-api-0\" (UID: \"587a9b3d-1634-4af6-96d2-e60c03a7d75f\") " pod="openstack/nova-api-0" Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.336500 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/587a9b3d-1634-4af6-96d2-e60c03a7d75f-config-data\") pod \"nova-api-0\" (UID: \"587a9b3d-1634-4af6-96d2-e60c03a7d75f\") " pod="openstack/nova-api-0" Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.438713 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lxxd\" (UniqueName: \"kubernetes.io/projected/587a9b3d-1634-4af6-96d2-e60c03a7d75f-kube-api-access-5lxxd\") pod \"nova-api-0\" (UID: \"587a9b3d-1634-4af6-96d2-e60c03a7d75f\") " pod="openstack/nova-api-0" Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.438831 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/587a9b3d-1634-4af6-96d2-e60c03a7d75f-config-data\") pod 
\"nova-api-0\" (UID: \"587a9b3d-1634-4af6-96d2-e60c03a7d75f\") " pod="openstack/nova-api-0" Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.438920 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/587a9b3d-1634-4af6-96d2-e60c03a7d75f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"587a9b3d-1634-4af6-96d2-e60c03a7d75f\") " pod="openstack/nova-api-0" Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.438956 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/587a9b3d-1634-4af6-96d2-e60c03a7d75f-public-tls-certs\") pod \"nova-api-0\" (UID: \"587a9b3d-1634-4af6-96d2-e60c03a7d75f\") " pod="openstack/nova-api-0" Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.439129 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/587a9b3d-1634-4af6-96d2-e60c03a7d75f-logs\") pod \"nova-api-0\" (UID: \"587a9b3d-1634-4af6-96d2-e60c03a7d75f\") " pod="openstack/nova-api-0" Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.439211 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/587a9b3d-1634-4af6-96d2-e60c03a7d75f-internal-tls-certs\") pod \"nova-api-0\" (UID: \"587a9b3d-1634-4af6-96d2-e60c03a7d75f\") " pod="openstack/nova-api-0" Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.440441 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/587a9b3d-1634-4af6-96d2-e60c03a7d75f-logs\") pod \"nova-api-0\" (UID: \"587a9b3d-1634-4af6-96d2-e60c03a7d75f\") " pod="openstack/nova-api-0" Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.447831 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/587a9b3d-1634-4af6-96d2-e60c03a7d75f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"587a9b3d-1634-4af6-96d2-e60c03a7d75f\") " pod="openstack/nova-api-0" Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.447988 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/587a9b3d-1634-4af6-96d2-e60c03a7d75f-config-data\") pod \"nova-api-0\" (UID: \"587a9b3d-1634-4af6-96d2-e60c03a7d75f\") " pod="openstack/nova-api-0" Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.449096 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/587a9b3d-1634-4af6-96d2-e60c03a7d75f-public-tls-certs\") pod \"nova-api-0\" (UID: \"587a9b3d-1634-4af6-96d2-e60c03a7d75f\") " pod="openstack/nova-api-0" Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.449680 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/587a9b3d-1634-4af6-96d2-e60c03a7d75f-internal-tls-certs\") pod \"nova-api-0\" (UID: \"587a9b3d-1634-4af6-96d2-e60c03a7d75f\") " pod="openstack/nova-api-0" Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.457594 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lxxd\" (UniqueName: \"kubernetes.io/projected/587a9b3d-1634-4af6-96d2-e60c03a7d75f-kube-api-access-5lxxd\") pod \"nova-api-0\" (UID: \"587a9b3d-1634-4af6-96d2-e60c03a7d75f\") " 
pod="openstack/nova-api-0" Nov 28 07:18:52 crc kubenswrapper[4946]: I1128 07:18:52.570192 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 07:18:53 crc kubenswrapper[4946]: I1128 07:18:53.069144 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 07:18:53 crc kubenswrapper[4946]: I1128 07:18:53.127513 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xdwz8" event={"ID":"200ece9e-7940-45c6-86bc-fdf0da574bc2","Type":"ContainerDied","Data":"9c7b6cf3bcd660b5ec497db0c1f5eee1577f7402bb9f16ac584d0314880d3d77"} Nov 28 07:18:53 crc kubenswrapper[4946]: I1128 07:18:53.128288 4946 generic.go:334] "Generic (PLEG): container finished" podID="200ece9e-7940-45c6-86bc-fdf0da574bc2" containerID="9c7b6cf3bcd660b5ec497db0c1f5eee1577f7402bb9f16ac584d0314880d3d77" exitCode=0 Nov 28 07:18:53 crc kubenswrapper[4946]: I1128 07:18:53.130357 4946 generic.go:334] "Generic (PLEG): container finished" podID="8eb7b962-332f-4f57-a438-5a4c33a3859a" containerID="f872345529c102cbf095d9b27a69a827b21f57789777c251b623d90ef5501ba9" exitCode=0 Nov 28 07:18:53 crc kubenswrapper[4946]: I1128 07:18:53.130409 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8eb7b962-332f-4f57-a438-5a4c33a3859a","Type":"ContainerDied","Data":"f872345529c102cbf095d9b27a69a827b21f57789777c251b623d90ef5501ba9"} Nov 28 07:18:53 crc kubenswrapper[4946]: I1128 07:18:53.130422 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 07:18:53 crc kubenswrapper[4946]: I1128 07:18:53.130434 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8eb7b962-332f-4f57-a438-5a4c33a3859a","Type":"ContainerDied","Data":"58c6ada20c0b2240f0c445d4dd594e428a6ab2b203fc10125f8cc134b99310e7"} Nov 28 07:18:53 crc kubenswrapper[4946]: I1128 07:18:53.130486 4946 scope.go:117] "RemoveContainer" containerID="f872345529c102cbf095d9b27a69a827b21f57789777c251b623d90ef5501ba9" Nov 28 07:18:53 crc kubenswrapper[4946]: I1128 07:18:53.159036 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eb7b962-332f-4f57-a438-5a4c33a3859a-combined-ca-bundle\") pod \"8eb7b962-332f-4f57-a438-5a4c33a3859a\" (UID: \"8eb7b962-332f-4f57-a438-5a4c33a3859a\") " Nov 28 07:18:53 crc kubenswrapper[4946]: I1128 07:18:53.159115 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eb7b962-332f-4f57-a438-5a4c33a3859a-config-data\") pod \"8eb7b962-332f-4f57-a438-5a4c33a3859a\" (UID: \"8eb7b962-332f-4f57-a438-5a4c33a3859a\") " Nov 28 07:18:53 crc kubenswrapper[4946]: I1128 07:18:53.159322 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqdsj\" (UniqueName: \"kubernetes.io/projected/8eb7b962-332f-4f57-a438-5a4c33a3859a-kube-api-access-xqdsj\") pod \"8eb7b962-332f-4f57-a438-5a4c33a3859a\" (UID: \"8eb7b962-332f-4f57-a438-5a4c33a3859a\") " Nov 28 07:18:53 crc kubenswrapper[4946]: I1128 07:18:53.163761 4946 scope.go:117] "RemoveContainer" containerID="f872345529c102cbf095d9b27a69a827b21f57789777c251b623d90ef5501ba9" Nov 28 07:18:53 crc kubenswrapper[4946]: E1128 07:18:53.165804 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = could not find container \"f872345529c102cbf095d9b27a69a827b21f57789777c251b623d90ef5501ba9\": container with ID starting with f872345529c102cbf095d9b27a69a827b21f57789777c251b623d90ef5501ba9 not found: ID does not exist" containerID="f872345529c102cbf095d9b27a69a827b21f57789777c251b623d90ef5501ba9" Nov 28 07:18:53 crc kubenswrapper[4946]: I1128 07:18:53.165870 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f872345529c102cbf095d9b27a69a827b21f57789777c251b623d90ef5501ba9"} err="failed to get container status \"f872345529c102cbf095d9b27a69a827b21f57789777c251b623d90ef5501ba9\": rpc error: code = NotFound desc = could not find container \"f872345529c102cbf095d9b27a69a827b21f57789777c251b623d90ef5501ba9\": container with ID starting with f872345529c102cbf095d9b27a69a827b21f57789777c251b623d90ef5501ba9 not found: ID does not exist" Nov 28 07:18:53 crc kubenswrapper[4946]: I1128 07:18:53.166758 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8eb7b962-332f-4f57-a438-5a4c33a3859a-kube-api-access-xqdsj" (OuterVolumeSpecName: "kube-api-access-xqdsj") pod "8eb7b962-332f-4f57-a438-5a4c33a3859a" (UID: "8eb7b962-332f-4f57-a438-5a4c33a3859a"). InnerVolumeSpecName "kube-api-access-xqdsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:18:53 crc kubenswrapper[4946]: W1128 07:18:53.204193 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod587a9b3d_1634_4af6_96d2_e60c03a7d75f.slice/crio-d6d9bebf6aea4f0a994edca594a609e4a3c7e1938d3194319fd9e9f014301848 WatchSource:0}: Error finding container d6d9bebf6aea4f0a994edca594a609e4a3c7e1938d3194319fd9e9f014301848: Status 404 returned error can't find the container with id d6d9bebf6aea4f0a994edca594a609e4a3c7e1938d3194319fd9e9f014301848 Nov 28 07:18:53 crc kubenswrapper[4946]: I1128 07:18:53.204716 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 07:18:53 crc kubenswrapper[4946]: I1128 07:18:53.204803 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8eb7b962-332f-4f57-a438-5a4c33a3859a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8eb7b962-332f-4f57-a438-5a4c33a3859a" (UID: "8eb7b962-332f-4f57-a438-5a4c33a3859a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:18:53 crc kubenswrapper[4946]: I1128 07:18:53.208206 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8eb7b962-332f-4f57-a438-5a4c33a3859a-config-data" (OuterVolumeSpecName: "config-data") pod "8eb7b962-332f-4f57-a438-5a4c33a3859a" (UID: "8eb7b962-332f-4f57-a438-5a4c33a3859a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:18:53 crc kubenswrapper[4946]: I1128 07:18:53.263356 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eb7b962-332f-4f57-a438-5a4c33a3859a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:53 crc kubenswrapper[4946]: I1128 07:18:53.263390 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eb7b962-332f-4f57-a438-5a4c33a3859a-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:53 crc kubenswrapper[4946]: I1128 07:18:53.263400 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqdsj\" (UniqueName: \"kubernetes.io/projected/8eb7b962-332f-4f57-a438-5a4c33a3859a-kube-api-access-xqdsj\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:53 crc kubenswrapper[4946]: I1128 07:18:53.542004 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 07:18:53 crc kubenswrapper[4946]: I1128 07:18:53.553582 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 07:18:53 crc kubenswrapper[4946]: I1128 07:18:53.565397 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 07:18:53 crc kubenswrapper[4946]: E1128 07:18:53.565874 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eb7b962-332f-4f57-a438-5a4c33a3859a" containerName="nova-scheduler-scheduler" Nov 28 07:18:53 crc kubenswrapper[4946]: I1128 07:18:53.565893 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eb7b962-332f-4f57-a438-5a4c33a3859a" containerName="nova-scheduler-scheduler" Nov 28 07:18:53 crc kubenswrapper[4946]: I1128 07:18:53.566124 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="8eb7b962-332f-4f57-a438-5a4c33a3859a" containerName="nova-scheduler-scheduler" Nov 28 07:18:53 crc kubenswrapper[4946]: I1128 07:18:53.566860 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 07:18:53 crc kubenswrapper[4946]: I1128 07:18:53.568839 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 28 07:18:53 crc kubenswrapper[4946]: I1128 07:18:53.608682 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 07:18:53 crc kubenswrapper[4946]: I1128 07:18:53.671591 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t26h2\" (UniqueName: \"kubernetes.io/projected/c3ba1954-566f-4e25-8312-855a58935547-kube-api-access-t26h2\") pod \"nova-scheduler-0\" (UID: \"c3ba1954-566f-4e25-8312-855a58935547\") " pod="openstack/nova-scheduler-0" Nov 28 07:18:53 crc kubenswrapper[4946]: I1128 07:18:53.671954 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3ba1954-566f-4e25-8312-855a58935547-config-data\") pod \"nova-scheduler-0\" (UID: \"c3ba1954-566f-4e25-8312-855a58935547\") " pod="openstack/nova-scheduler-0" Nov 28 07:18:53 crc kubenswrapper[4946]: I1128 07:18:53.672001 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3ba1954-566f-4e25-8312-855a58935547-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c3ba1954-566f-4e25-8312-855a58935547\") " pod="openstack/nova-scheduler-0" Nov 28 07:18:53 crc kubenswrapper[4946]: I1128 07:18:53.774405 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t26h2\" (UniqueName: \"kubernetes.io/projected/c3ba1954-566f-4e25-8312-855a58935547-kube-api-access-t26h2\") pod \"nova-scheduler-0\" (UID: \"c3ba1954-566f-4e25-8312-855a58935547\") " pod="openstack/nova-scheduler-0" Nov 28 07:18:53 crc kubenswrapper[4946]: I1128 07:18:53.774577 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3ba1954-566f-4e25-8312-855a58935547-config-data\") pod \"nova-scheduler-0\" (UID: \"c3ba1954-566f-4e25-8312-855a58935547\") " pod="openstack/nova-scheduler-0" Nov 28 07:18:53 crc kubenswrapper[4946]: I1128 07:18:53.774613 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3ba1954-566f-4e25-8312-855a58935547-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c3ba1954-566f-4e25-8312-855a58935547\") " pod="openstack/nova-scheduler-0" Nov 28 07:18:53 crc kubenswrapper[4946]: I1128 07:18:53.785769 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3ba1954-566f-4e25-8312-855a58935547-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c3ba1954-566f-4e25-8312-855a58935547\") " pod="openstack/nova-scheduler-0" Nov 28 07:18:53 crc kubenswrapper[4946]: I1128 07:18:53.803106 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3ba1954-566f-4e25-8312-855a58935547-config-data\") pod \"nova-scheduler-0\" (UID: \"c3ba1954-566f-4e25-8312-855a58935547\") " pod="openstack/nova-scheduler-0" Nov 28 07:18:53 crc kubenswrapper[4946]: I1128 07:18:53.828015 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t26h2\" (UniqueName: 
\"kubernetes.io/projected/c3ba1954-566f-4e25-8312-855a58935547-kube-api-access-t26h2\") pod \"nova-scheduler-0\" (UID: \"c3ba1954-566f-4e25-8312-855a58935547\") " pod="openstack/nova-scheduler-0" Nov 28 07:18:53 crc kubenswrapper[4946]: I1128 07:18:53.898259 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 07:18:54 crc kubenswrapper[4946]: I1128 07:18:54.001782 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="530d5752-8c2f-4ea7-9729-db7ad592b073" path="/var/lib/kubelet/pods/530d5752-8c2f-4ea7-9729-db7ad592b073/volumes" Nov 28 07:18:54 crc kubenswrapper[4946]: I1128 07:18:54.002589 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8eb7b962-332f-4f57-a438-5a4c33a3859a" path="/var/lib/kubelet/pods/8eb7b962-332f-4f57-a438-5a4c33a3859a/volumes" Nov 28 07:18:54 crc kubenswrapper[4946]: I1128 07:18:54.142755 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xdwz8" event={"ID":"200ece9e-7940-45c6-86bc-fdf0da574bc2","Type":"ContainerStarted","Data":"aa073a1e9f192b8b457ccbc053bfb678c58da7b60bbb18fc3b37df15c00a96da"} Nov 28 07:18:54 crc kubenswrapper[4946]: I1128 07:18:54.148350 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"587a9b3d-1634-4af6-96d2-e60c03a7d75f","Type":"ContainerStarted","Data":"ff6fb5d2f5f3038d8cdb313bf501d56a6b33a90f0593856e2e8097e215b377eb"} Nov 28 07:18:54 crc kubenswrapper[4946]: I1128 07:18:54.148388 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"587a9b3d-1634-4af6-96d2-e60c03a7d75f","Type":"ContainerStarted","Data":"cdc13d7826d930cfa67ccc8a5fb14e61b1d536afb075f29af7e2cdabf489a9d0"} Nov 28 07:18:54 crc kubenswrapper[4946]: I1128 07:18:54.148397 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"587a9b3d-1634-4af6-96d2-e60c03a7d75f","Type":"ContainerStarted","Data":"d6d9bebf6aea4f0a994edca594a609e4a3c7e1938d3194319fd9e9f014301848"} Nov 28 07:18:54 crc kubenswrapper[4946]: I1128 07:18:54.171029 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xdwz8" podStartSLOduration=2.6817436199999998 podStartE2EDuration="5.171003772s" podCreationTimestamp="2025-11-28 07:18:49 +0000 UTC" firstStartedPulling="2025-11-28 07:18:51.104340282 +0000 UTC m=+1585.482405413" lastFinishedPulling="2025-11-28 07:18:53.593600444 +0000 UTC m=+1587.971665565" observedRunningTime="2025-11-28 07:18:54.161142581 +0000 UTC m=+1588.539207692" watchObservedRunningTime="2025-11-28 07:18:54.171003772 +0000 UTC m=+1588.549068883" Nov 28 07:18:54 crc kubenswrapper[4946]: I1128 07:18:54.179524 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.179505121 podStartE2EDuration="2.179505121s" podCreationTimestamp="2025-11-28 07:18:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:18:54.177266976 +0000 UTC m=+1588.555332087" watchObservedRunningTime="2025-11-28 07:18:54.179505121 +0000 UTC m=+1588.557570232" Nov 28 07:18:54 crc kubenswrapper[4946]: I1128 07:18:54.455942 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 07:18:54 crc kubenswrapper[4946]: W1128 07:18:54.457666 4946 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3ba1954_566f_4e25_8312_855a58935547.slice/crio-2735b0564fb3827b237fcba8745bfe5e24c027d064789dd8b7e46fba685b7394 WatchSource:0}: Error finding container 2735b0564fb3827b237fcba8745bfe5e24c027d064789dd8b7e46fba685b7394: Status 404 returned error can't find the container with id 2735b0564fb3827b237fcba8745bfe5e24c027d064789dd8b7e46fba685b7394 Nov 28 07:18:54 crc kubenswrapper[4946]: I1128 07:18:54.730384 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 07:18:54 crc kubenswrapper[4946]: I1128 07:18:54.730440 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 07:18:54 crc kubenswrapper[4946]: I1128 07:18:54.880672 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 07:18:55 crc kubenswrapper[4946]: I1128 07:18:55.023571 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7prx\" (UniqueName: \"kubernetes.io/projected/e2a58ce5-5b99-461d-b7b9-ebeb95624d70-kube-api-access-v7prx\") pod \"e2a58ce5-5b99-461d-b7b9-ebeb95624d70\" (UID: \"e2a58ce5-5b99-461d-b7b9-ebeb95624d70\") " Nov 28 07:18:55 crc kubenswrapper[4946]: I1128 07:18:55.023683 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2a58ce5-5b99-461d-b7b9-ebeb95624d70-config-data\") pod \"e2a58ce5-5b99-461d-b7b9-ebeb95624d70\" (UID: \"e2a58ce5-5b99-461d-b7b9-ebeb95624d70\") " Nov 28 07:18:55 crc kubenswrapper[4946]: I1128 07:18:55.023721 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2a58ce5-5b99-461d-b7b9-ebeb95624d70-nova-metadata-tls-certs\") pod \"e2a58ce5-5b99-461d-b7b9-ebeb95624d70\" (UID: \"e2a58ce5-5b99-461d-b7b9-ebeb95624d70\") " Nov 28 07:18:55 crc kubenswrapper[4946]: I1128 07:18:55.023802 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2a58ce5-5b99-461d-b7b9-ebeb95624d70-logs\") pod \"e2a58ce5-5b99-461d-b7b9-ebeb95624d70\" (UID: \"e2a58ce5-5b99-461d-b7b9-ebeb95624d70\") " Nov 28 07:18:55 crc kubenswrapper[4946]: I1128 07:18:55.023831 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2a58ce5-5b99-461d-b7b9-ebeb95624d70-combined-ca-bundle\") pod \"e2a58ce5-5b99-461d-b7b9-ebeb95624d70\" (UID: \"e2a58ce5-5b99-461d-b7b9-ebeb95624d70\") " Nov 28 07:18:55 crc kubenswrapper[4946]: I1128 07:18:55.024629 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2a58ce5-5b99-461d-b7b9-ebeb95624d70-logs" (OuterVolumeSpecName: "logs") pod "e2a58ce5-5b99-461d-b7b9-ebeb95624d70" (UID: "e2a58ce5-5b99-461d-b7b9-ebeb95624d70"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:18:55 crc kubenswrapper[4946]: I1128 07:18:55.030182 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2a58ce5-5b99-461d-b7b9-ebeb95624d70-kube-api-access-v7prx" (OuterVolumeSpecName: "kube-api-access-v7prx") pod "e2a58ce5-5b99-461d-b7b9-ebeb95624d70" (UID: "e2a58ce5-5b99-461d-b7b9-ebeb95624d70"). InnerVolumeSpecName "kube-api-access-v7prx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:18:55 crc kubenswrapper[4946]: I1128 07:18:55.052310 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2a58ce5-5b99-461d-b7b9-ebeb95624d70-config-data" (OuterVolumeSpecName: "config-data") pod "e2a58ce5-5b99-461d-b7b9-ebeb95624d70" (UID: "e2a58ce5-5b99-461d-b7b9-ebeb95624d70"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:18:55 crc kubenswrapper[4946]: I1128 07:18:55.060015 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2a58ce5-5b99-461d-b7b9-ebeb95624d70-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e2a58ce5-5b99-461d-b7b9-ebeb95624d70" (UID: "e2a58ce5-5b99-461d-b7b9-ebeb95624d70"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:18:55 crc kubenswrapper[4946]: I1128 07:18:55.092697 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2a58ce5-5b99-461d-b7b9-ebeb95624d70-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "e2a58ce5-5b99-461d-b7b9-ebeb95624d70" (UID: "e2a58ce5-5b99-461d-b7b9-ebeb95624d70"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:18:55 crc kubenswrapper[4946]: I1128 07:18:55.127560 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7prx\" (UniqueName: \"kubernetes.io/projected/e2a58ce5-5b99-461d-b7b9-ebeb95624d70-kube-api-access-v7prx\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:55 crc kubenswrapper[4946]: I1128 07:18:55.127611 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2a58ce5-5b99-461d-b7b9-ebeb95624d70-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:55 crc kubenswrapper[4946]: I1128 07:18:55.128396 4946 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2a58ce5-5b99-461d-b7b9-ebeb95624d70-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:55 crc kubenswrapper[4946]: I1128 07:18:55.128432 4946 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2a58ce5-5b99-461d-b7b9-ebeb95624d70-logs\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:55 crc kubenswrapper[4946]: I1128 07:18:55.128451 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2a58ce5-5b99-461d-b7b9-ebeb95624d70-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:55 crc kubenswrapper[4946]: I1128 07:18:55.164869 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c3ba1954-566f-4e25-8312-855a58935547","Type":"ContainerStarted","Data":"47d5db84ad9670de902990ff6af710f405452597c9ef2064765b609a18f5147a"} Nov 28 07:18:55 crc kubenswrapper[4946]: I1128 07:18:55.164947 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c3ba1954-566f-4e25-8312-855a58935547","Type":"ContainerStarted","Data":"2735b0564fb3827b237fcba8745bfe5e24c027d064789dd8b7e46fba685b7394"} Nov 28 07:18:55 crc kubenswrapper[4946]: I1128 07:18:55.170031 4946 generic.go:334] "Generic (PLEG): container finished" podID="e2a58ce5-5b99-461d-b7b9-ebeb95624d70" containerID="59f6230e209f2d726ba347a01276004806be61ec0ebb63fde762dcc3c87045f1" exitCode=0 Nov 28 07:18:55 crc kubenswrapper[4946]: I1128 07:18:55.170509 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e2a58ce5-5b99-461d-b7b9-ebeb95624d70","Type":"ContainerDied","Data":"59f6230e209f2d726ba347a01276004806be61ec0ebb63fde762dcc3c87045f1"} Nov 28 07:18:55 crc kubenswrapper[4946]: I1128 07:18:55.170591 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e2a58ce5-5b99-461d-b7b9-ebeb95624d70","Type":"ContainerDied","Data":"d39a0b98b7980c95c6a17611f89440f0e1128c288d138babd30c82dc2c8bcecc"} Nov 28 07:18:55 crc kubenswrapper[4946]: I1128 07:18:55.170653 4946 scope.go:117] "RemoveContainer" containerID="59f6230e209f2d726ba347a01276004806be61ec0ebb63fde762dcc3c87045f1" Nov 28 07:18:55 crc kubenswrapper[4946]: I1128 07:18:55.170963 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 07:18:55 crc kubenswrapper[4946]: I1128 07:18:55.189998 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.189970509 podStartE2EDuration="2.189970509s" podCreationTimestamp="2025-11-28 07:18:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:18:55.188318619 +0000 UTC m=+1589.566383770" watchObservedRunningTime="2025-11-28 07:18:55.189970509 +0000 UTC m=+1589.568035640" Nov 28 07:18:55 crc kubenswrapper[4946]: I1128 07:18:55.212327 4946 scope.go:117] "RemoveContainer" containerID="34e825dd736734b342689a93be686bc2c8e15162e54c5db9c531e3183a7da476" Nov 28 07:18:55 crc kubenswrapper[4946]: I1128 07:18:55.245946 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 07:18:55 crc kubenswrapper[4946]: I1128 07:18:55.248191 4946 scope.go:117] "RemoveContainer" containerID="59f6230e209f2d726ba347a01276004806be61ec0ebb63fde762dcc3c87045f1" Nov 28 07:18:55 crc kubenswrapper[4946]: E1128 07:18:55.248835 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59f6230e209f2d726ba347a01276004806be61ec0ebb63fde762dcc3c87045f1\": container with ID starting with 59f6230e209f2d726ba347a01276004806be61ec0ebb63fde762dcc3c87045f1 not found: ID does not exist" containerID="59f6230e209f2d726ba347a01276004806be61ec0ebb63fde762dcc3c87045f1" Nov 28 07:18:55 crc kubenswrapper[4946]: I1128 07:18:55.248876 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59f6230e209f2d726ba347a01276004806be61ec0ebb63fde762dcc3c87045f1"} err="failed to get container status \"59f6230e209f2d726ba347a01276004806be61ec0ebb63fde762dcc3c87045f1\": rpc error: code = NotFound desc = could not find container \"59f6230e209f2d726ba347a01276004806be61ec0ebb63fde762dcc3c87045f1\": container with ID starting with 59f6230e209f2d726ba347a01276004806be61ec0ebb63fde762dcc3c87045f1 not found: ID does not exist" Nov 28 07:18:55 crc kubenswrapper[4946]: I1128 07:18:55.248908 4946 scope.go:117] "RemoveContainer" containerID="34e825dd736734b342689a93be686bc2c8e15162e54c5db9c531e3183a7da476" Nov 28 07:18:55 crc kubenswrapper[4946]: E1128 07:18:55.249446 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34e825dd736734b342689a93be686bc2c8e15162e54c5db9c531e3183a7da476\": container with ID starting with 34e825dd736734b342689a93be686bc2c8e15162e54c5db9c531e3183a7da476 not found: ID does not exist" containerID="34e825dd736734b342689a93be686bc2c8e15162e54c5db9c531e3183a7da476" Nov 28 07:18:55 crc kubenswrapper[4946]: I1128 07:18:55.249492 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34e825dd736734b342689a93be686bc2c8e15162e54c5db9c531e3183a7da476"} err="failed to get container status \"34e825dd736734b342689a93be686bc2c8e15162e54c5db9c531e3183a7da476\": rpc error: code = NotFound desc = could not find container \"34e825dd736734b342689a93be686bc2c8e15162e54c5db9c531e3183a7da476\": container with ID starting with 34e825dd736734b342689a93be686bc2c8e15162e54c5db9c531e3183a7da476 not found: ID does not exist" Nov 28 07:18:55 crc kubenswrapper[4946]: I1128 07:18:55.262195 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-metadata-0"] Nov 28 07:18:55 crc kubenswrapper[4946]: I1128 07:18:55.275845 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 28 07:18:55 crc kubenswrapper[4946]: E1128 07:18:55.276286 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2a58ce5-5b99-461d-b7b9-ebeb95624d70" containerName="nova-metadata-metadata" Nov 28 07:18:55 crc kubenswrapper[4946]: I1128 07:18:55.276306 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2a58ce5-5b99-461d-b7b9-ebeb95624d70" containerName="nova-metadata-metadata" Nov 28 07:18:55 crc kubenswrapper[4946]: E1128 07:18:55.276326 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2a58ce5-5b99-461d-b7b9-ebeb95624d70" containerName="nova-metadata-log" Nov 28 07:18:55 crc kubenswrapper[4946]: I1128 07:18:55.276333 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2a58ce5-5b99-461d-b7b9-ebeb95624d70" containerName="nova-metadata-log" Nov 28 07:18:55 crc kubenswrapper[4946]: I1128 07:18:55.276560 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2a58ce5-5b99-461d-b7b9-ebeb95624d70" containerName="nova-metadata-log" Nov 28 07:18:55 crc kubenswrapper[4946]: I1128 07:18:55.276578 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2a58ce5-5b99-461d-b7b9-ebeb95624d70" containerName="nova-metadata-metadata" Nov 28 07:18:55 crc kubenswrapper[4946]: I1128 07:18:55.277673 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 07:18:55 crc kubenswrapper[4946]: I1128 07:18:55.283965 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 07:18:55 crc kubenswrapper[4946]: I1128 07:18:55.286076 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 28 07:18:55 crc kubenswrapper[4946]: I1128 07:18:55.287644 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 28 07:18:55 crc kubenswrapper[4946]: I1128 07:18:55.437906 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/206851cb-4673-4ce1-b038-c2e425d306b7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"206851cb-4673-4ce1-b038-c2e425d306b7\") " pod="openstack/nova-metadata-0" Nov 28 07:18:55 crc kubenswrapper[4946]: I1128 07:18:55.438353 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfzh8\" (UniqueName: \"kubernetes.io/projected/206851cb-4673-4ce1-b038-c2e425d306b7-kube-api-access-pfzh8\") pod \"nova-metadata-0\" (UID: \"206851cb-4673-4ce1-b038-c2e425d306b7\") " pod="openstack/nova-metadata-0" Nov 28 07:18:55 crc kubenswrapper[4946]: I1128 07:18:55.438757 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/206851cb-4673-4ce1-b038-c2e425d306b7-logs\") pod \"nova-metadata-0\" (UID: \"206851cb-4673-4ce1-b038-c2e425d306b7\") " pod="openstack/nova-metadata-0" Nov 28 07:18:55 crc kubenswrapper[4946]: I1128 07:18:55.438843 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/206851cb-4673-4ce1-b038-c2e425d306b7-config-data\") pod \"nova-metadata-0\" (UID: 
\"206851cb-4673-4ce1-b038-c2e425d306b7\") " pod="openstack/nova-metadata-0" Nov 28 07:18:55 crc kubenswrapper[4946]: I1128 07:18:55.438895 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/206851cb-4673-4ce1-b038-c2e425d306b7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"206851cb-4673-4ce1-b038-c2e425d306b7\") " pod="openstack/nova-metadata-0" Nov 28 07:18:55 crc kubenswrapper[4946]: I1128 07:18:55.544367 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/206851cb-4673-4ce1-b038-c2e425d306b7-logs\") pod \"nova-metadata-0\" (UID: \"206851cb-4673-4ce1-b038-c2e425d306b7\") " pod="openstack/nova-metadata-0" Nov 28 07:18:55 crc kubenswrapper[4946]: I1128 07:18:55.544639 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/206851cb-4673-4ce1-b038-c2e425d306b7-config-data\") pod \"nova-metadata-0\" (UID: \"206851cb-4673-4ce1-b038-c2e425d306b7\") " pod="openstack/nova-metadata-0" Nov 28 07:18:55 crc kubenswrapper[4946]: I1128 07:18:55.544712 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/206851cb-4673-4ce1-b038-c2e425d306b7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"206851cb-4673-4ce1-b038-c2e425d306b7\") " pod="openstack/nova-metadata-0" Nov 28 07:18:55 crc kubenswrapper[4946]: I1128 07:18:55.544950 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/206851cb-4673-4ce1-b038-c2e425d306b7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"206851cb-4673-4ce1-b038-c2e425d306b7\") " pod="openstack/nova-metadata-0" Nov 28 07:18:55 crc kubenswrapper[4946]: I1128 07:18:55.545120 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfzh8\" (UniqueName: \"kubernetes.io/projected/206851cb-4673-4ce1-b038-c2e425d306b7-kube-api-access-pfzh8\") pod \"nova-metadata-0\" (UID: \"206851cb-4673-4ce1-b038-c2e425d306b7\") " pod="openstack/nova-metadata-0" Nov 28 07:18:55 crc kubenswrapper[4946]: I1128 07:18:55.545444 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/206851cb-4673-4ce1-b038-c2e425d306b7-logs\") pod \"nova-metadata-0\" (UID: \"206851cb-4673-4ce1-b038-c2e425d306b7\") " pod="openstack/nova-metadata-0" Nov 28 07:18:55 crc kubenswrapper[4946]: I1128 07:18:55.549984 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/206851cb-4673-4ce1-b038-c2e425d306b7-config-data\") pod \"nova-metadata-0\" (UID: \"206851cb-4673-4ce1-b038-c2e425d306b7\") " pod="openstack/nova-metadata-0" Nov 28 07:18:55 crc kubenswrapper[4946]: I1128 07:18:55.550655 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/206851cb-4673-4ce1-b038-c2e425d306b7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"206851cb-4673-4ce1-b038-c2e425d306b7\") " pod="openstack/nova-metadata-0" Nov 28 07:18:55 crc kubenswrapper[4946]: I1128 07:18:55.550894 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/206851cb-4673-4ce1-b038-c2e425d306b7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"206851cb-4673-4ce1-b038-c2e425d306b7\") " pod="openstack/nova-metadata-0" Nov 28 07:18:55 crc kubenswrapper[4946]: I1128 07:18:55.571833 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfzh8\" (UniqueName: \"kubernetes.io/projected/206851cb-4673-4ce1-b038-c2e425d306b7-kube-api-access-pfzh8\") pod \"nova-metadata-0\" (UID: \"206851cb-4673-4ce1-b038-c2e425d306b7\") " pod="openstack/nova-metadata-0" Nov 28 07:18:55 crc kubenswrapper[4946]: I1128 07:18:55.599738 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 07:18:56 crc kubenswrapper[4946]: I1128 07:18:56.003305 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2a58ce5-5b99-461d-b7b9-ebeb95624d70" path="/var/lib/kubelet/pods/e2a58ce5-5b99-461d-b7b9-ebeb95624d70/volumes" Nov 28 07:18:56 crc kubenswrapper[4946]: I1128 07:18:56.097451 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 07:18:56 crc kubenswrapper[4946]: W1128 07:18:56.102007 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod206851cb_4673_4ce1_b038_c2e425d306b7.slice/crio-d211035670e1cae21262677232429628b3c8c8f69a73b74a5dd382d675b6587d WatchSource:0}: Error finding container d211035670e1cae21262677232429628b3c8c8f69a73b74a5dd382d675b6587d: Status 404 returned error can't find the container with id d211035670e1cae21262677232429628b3c8c8f69a73b74a5dd382d675b6587d Nov 28 07:18:56 crc kubenswrapper[4946]: I1128 07:18:56.186420 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"206851cb-4673-4ce1-b038-c2e425d306b7","Type":"ContainerStarted","Data":"d211035670e1cae21262677232429628b3c8c8f69a73b74a5dd382d675b6587d"} Nov 28 07:18:57 crc kubenswrapper[4946]: I1128 07:18:57.207095 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"206851cb-4673-4ce1-b038-c2e425d306b7","Type":"ContainerStarted","Data":"15a189a2591a0270f4f6fdcfc4b5cb8f68b93b0132f30d27ff66c62f4a2d9ffd"} Nov 28 07:18:57 crc kubenswrapper[4946]: I1128 07:18:57.207504 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"206851cb-4673-4ce1-b038-c2e425d306b7","Type":"ContainerStarted","Data":"24ddca1eb18e17e4df35510352b9a1bdc29c68afcf2f6e975fb5d6693d132c62"} Nov 28 07:18:57 crc kubenswrapper[4946]: I1128 07:18:57.235660 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.2356203629999998 podStartE2EDuration="2.235620363s" podCreationTimestamp="2025-11-28 07:18:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:18:57.229149994 +0000 UTC m=+1591.607215115" watchObservedRunningTime="2025-11-28 07:18:57.235620363 +0000 UTC m=+1591.613685514" Nov 28 07:18:58 crc kubenswrapper[4946]: I1128 07:18:58.899718 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 28 07:18:59 crc kubenswrapper[4946]: I1128 07:18:59.486325 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xdwz8" Nov 28 07:18:59 crc kubenswrapper[4946]: I1128 
07:18:59.486902 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xdwz8" Nov 28 07:18:59 crc kubenswrapper[4946]: I1128 07:18:59.571072 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xdwz8" Nov 28 07:18:59 crc kubenswrapper[4946]: I1128 07:18:59.819903 4946 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="e2a58ce5-5b99-461d-b7b9-ebeb95624d70" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.187:8775/\": dial tcp 10.217.0.187:8775: i/o timeout" Nov 28 07:18:59 crc kubenswrapper[4946]: I1128 07:18:59.819907 4946 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="e2a58ce5-5b99-461d-b7b9-ebeb95624d70" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.187:8775/\": dial tcp 10.217.0.187:8775: i/o timeout" Nov 28 07:19:00 crc kubenswrapper[4946]: I1128 07:19:00.330747 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xdwz8" Nov 28 07:19:00 crc kubenswrapper[4946]: I1128 07:19:00.412108 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xdwz8"] Nov 28 07:19:00 crc kubenswrapper[4946]: I1128 07:19:00.601544 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 28 07:19:00 crc kubenswrapper[4946]: I1128 07:19:00.601614 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 28 07:19:02 crc kubenswrapper[4946]: I1128 07:19:02.273262 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xdwz8" podUID="200ece9e-7940-45c6-86bc-fdf0da574bc2" containerName="registry-server" containerID="cri-o://aa073a1e9f192b8b457ccbc053bfb678c58da7b60bbb18fc3b37df15c00a96da" gracePeriod=2 Nov 28 07:19:03 crc kubenswrapper[4946]: I1128 07:19:02.571280 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 28 07:19:03 crc kubenswrapper[4946]: I1128 07:19:02.571637 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 28 07:19:03 crc kubenswrapper[4946]: I1128 07:19:02.883269 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xdwz8" Nov 28 07:19:03 crc kubenswrapper[4946]: I1128 07:19:03.026126 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzf6t\" (UniqueName: \"kubernetes.io/projected/200ece9e-7940-45c6-86bc-fdf0da574bc2-kube-api-access-dzf6t\") pod \"200ece9e-7940-45c6-86bc-fdf0da574bc2\" (UID: \"200ece9e-7940-45c6-86bc-fdf0da574bc2\") " Nov 28 07:19:03 crc kubenswrapper[4946]: I1128 07:19:03.026549 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/200ece9e-7940-45c6-86bc-fdf0da574bc2-catalog-content\") pod \"200ece9e-7940-45c6-86bc-fdf0da574bc2\" (UID: \"200ece9e-7940-45c6-86bc-fdf0da574bc2\") " Nov 28 07:19:03 crc kubenswrapper[4946]: I1128 07:19:03.026805 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/200ece9e-7940-45c6-86bc-fdf0da574bc2-utilities\") pod \"200ece9e-7940-45c6-86bc-fdf0da574bc2\" (UID: \"200ece9e-7940-45c6-86bc-fdf0da574bc2\") " Nov 28 07:19:03 crc kubenswrapper[4946]: I1128 07:19:03.027803 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/200ece9e-7940-45c6-86bc-fdf0da574bc2-utilities" (OuterVolumeSpecName: "utilities") pod "200ece9e-7940-45c6-86bc-fdf0da574bc2" (UID: "200ece9e-7940-45c6-86bc-fdf0da574bc2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:19:03 crc kubenswrapper[4946]: I1128 07:19:03.048015 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/200ece9e-7940-45c6-86bc-fdf0da574bc2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "200ece9e-7940-45c6-86bc-fdf0da574bc2" (UID: "200ece9e-7940-45c6-86bc-fdf0da574bc2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:19:03 crc kubenswrapper[4946]: I1128 07:19:03.052687 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/200ece9e-7940-45c6-86bc-fdf0da574bc2-kube-api-access-dzf6t" (OuterVolumeSpecName: "kube-api-access-dzf6t") pod "200ece9e-7940-45c6-86bc-fdf0da574bc2" (UID: "200ece9e-7940-45c6-86bc-fdf0da574bc2"). InnerVolumeSpecName "kube-api-access-dzf6t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:19:03 crc kubenswrapper[4946]: I1128 07:19:03.130045 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/200ece9e-7940-45c6-86bc-fdf0da574bc2-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:03 crc kubenswrapper[4946]: I1128 07:19:03.130082 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzf6t\" (UniqueName: \"kubernetes.io/projected/200ece9e-7940-45c6-86bc-fdf0da574bc2-kube-api-access-dzf6t\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:03 crc kubenswrapper[4946]: I1128 07:19:03.130093 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/200ece9e-7940-45c6-86bc-fdf0da574bc2-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:03 crc kubenswrapper[4946]: I1128 07:19:03.284356 4946 generic.go:334] "Generic (PLEG): container finished" podID="200ece9e-7940-45c6-86bc-fdf0da574bc2" containerID="aa073a1e9f192b8b457ccbc053bfb678c58da7b60bbb18fc3b37df15c00a96da" exitCode=0 Nov 28 07:19:03 crc kubenswrapper[4946]: I1128 07:19:03.284417 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xdwz8" event={"ID":"200ece9e-7940-45c6-86bc-fdf0da574bc2","Type":"ContainerDied","Data":"aa073a1e9f192b8b457ccbc053bfb678c58da7b60bbb18fc3b37df15c00a96da"} Nov 28 07:19:03 crc kubenswrapper[4946]: I1128 07:19:03.284530 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xdwz8" event={"ID":"200ece9e-7940-45c6-86bc-fdf0da574bc2","Type":"ContainerDied","Data":"ed893722dad3b7e5edf8c0937fbf035e74de79df74460f6d6b7ab8eb107bdbcd"} Nov 28 07:19:03 crc kubenswrapper[4946]: I1128 07:19:03.284528 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xdwz8" Nov 28 07:19:03 crc kubenswrapper[4946]: I1128 07:19:03.284569 4946 scope.go:117] "RemoveContainer" containerID="aa073a1e9f192b8b457ccbc053bfb678c58da7b60bbb18fc3b37df15c00a96da" Nov 28 07:19:03 crc kubenswrapper[4946]: I1128 07:19:03.316035 4946 scope.go:117] "RemoveContainer" containerID="9c7b6cf3bcd660b5ec497db0c1f5eee1577f7402bb9f16ac584d0314880d3d77" Nov 28 07:19:03 crc kubenswrapper[4946]: I1128 07:19:03.325417 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xdwz8"] Nov 28 07:19:03 crc kubenswrapper[4946]: I1128 07:19:03.337107 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xdwz8"] Nov 28 07:19:03 crc kubenswrapper[4946]: I1128 07:19:03.342537 4946 scope.go:117] "RemoveContainer" containerID="bb07bcee2916a907f3d38fe3c0d82e43daf857ac01107fa5e32768cb5ddcfc5d" Nov 28 07:19:03 crc kubenswrapper[4946]: I1128 07:19:03.419530 4946 scope.go:117] "RemoveContainer" containerID="aa073a1e9f192b8b457ccbc053bfb678c58da7b60bbb18fc3b37df15c00a96da" Nov 28 07:19:03 crc kubenswrapper[4946]: E1128 07:19:03.420020 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa073a1e9f192b8b457ccbc053bfb678c58da7b60bbb18fc3b37df15c00a96da\": container with ID starting with aa073a1e9f192b8b457ccbc053bfb678c58da7b60bbb18fc3b37df15c00a96da not found: ID does not exist" containerID="aa073a1e9f192b8b457ccbc053bfb678c58da7b60bbb18fc3b37df15c00a96da" Nov 28 07:19:03 crc kubenswrapper[4946]: I1128 07:19:03.420049 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa073a1e9f192b8b457ccbc053bfb678c58da7b60bbb18fc3b37df15c00a96da"} err="failed to get container status \"aa073a1e9f192b8b457ccbc053bfb678c58da7b60bbb18fc3b37df15c00a96da\": rpc error: code = NotFound desc = could not find container \"aa073a1e9f192b8b457ccbc053bfb678c58da7b60bbb18fc3b37df15c00a96da\": container with ID starting with aa073a1e9f192b8b457ccbc053bfb678c58da7b60bbb18fc3b37df15c00a96da not found: ID does not exist" Nov 28 07:19:03 crc kubenswrapper[4946]: I1128 07:19:03.420071 4946 scope.go:117] "RemoveContainer" containerID="9c7b6cf3bcd660b5ec497db0c1f5eee1577f7402bb9f16ac584d0314880d3d77" Nov 28 07:19:03 crc kubenswrapper[4946]: E1128 07:19:03.420564 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c7b6cf3bcd660b5ec497db0c1f5eee1577f7402bb9f16ac584d0314880d3d77\": container with ID starting with 9c7b6cf3bcd660b5ec497db0c1f5eee1577f7402bb9f16ac584d0314880d3d77 not found: ID does not exist" containerID="9c7b6cf3bcd660b5ec497db0c1f5eee1577f7402bb9f16ac584d0314880d3d77" Nov 28 07:19:03 crc kubenswrapper[4946]: I1128 07:19:03.420615 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c7b6cf3bcd660b5ec497db0c1f5eee1577f7402bb9f16ac584d0314880d3d77"} err="failed to get container status \"9c7b6cf3bcd660b5ec497db0c1f5eee1577f7402bb9f16ac584d0314880d3d77\": rpc error: code = NotFound desc = could not find container \"9c7b6cf3bcd660b5ec497db0c1f5eee1577f7402bb9f16ac584d0314880d3d77\": container with ID starting with 9c7b6cf3bcd660b5ec497db0c1f5eee1577f7402bb9f16ac584d0314880d3d77 not found: ID does not exist" Nov 28 07:19:03 crc kubenswrapper[4946]: I1128 07:19:03.420654 4946 scope.go:117] "RemoveContainer" 
containerID="bb07bcee2916a907f3d38fe3c0d82e43daf857ac01107fa5e32768cb5ddcfc5d" Nov 28 07:19:03 crc kubenswrapper[4946]: E1128 07:19:03.420970 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb07bcee2916a907f3d38fe3c0d82e43daf857ac01107fa5e32768cb5ddcfc5d\": container with ID starting with bb07bcee2916a907f3d38fe3c0d82e43daf857ac01107fa5e32768cb5ddcfc5d not found: ID does not exist" containerID="bb07bcee2916a907f3d38fe3c0d82e43daf857ac01107fa5e32768cb5ddcfc5d" Nov 28 07:19:03 crc kubenswrapper[4946]: I1128 07:19:03.420992 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb07bcee2916a907f3d38fe3c0d82e43daf857ac01107fa5e32768cb5ddcfc5d"} err="failed to get container status \"bb07bcee2916a907f3d38fe3c0d82e43daf857ac01107fa5e32768cb5ddcfc5d\": rpc error: code = NotFound desc = could not find container \"bb07bcee2916a907f3d38fe3c0d82e43daf857ac01107fa5e32768cb5ddcfc5d\": container with ID starting with bb07bcee2916a907f3d38fe3c0d82e43daf857ac01107fa5e32768cb5ddcfc5d not found: ID does not exist" Nov 28 07:19:03 crc kubenswrapper[4946]: I1128 07:19:03.590636 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="587a9b3d-1634-4af6-96d2-e60c03a7d75f" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.197:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 28 07:19:03 crc kubenswrapper[4946]: I1128 07:19:03.590636 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="587a9b3d-1634-4af6-96d2-e60c03a7d75f" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.197:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 28 07:19:03 crc kubenswrapper[4946]: I1128 07:19:03.899970 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 28 07:19:03 crc kubenswrapper[4946]: I1128 07:19:03.937875 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 28 07:19:04 crc kubenswrapper[4946]: I1128 07:19:04.005998 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="200ece9e-7940-45c6-86bc-fdf0da574bc2" path="/var/lib/kubelet/pods/200ece9e-7940-45c6-86bc-fdf0da574bc2/volumes" Nov 28 07:19:04 crc kubenswrapper[4946]: I1128 07:19:04.347441 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 28 07:19:05 crc kubenswrapper[4946]: I1128 07:19:05.601452 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 28 07:19:05 crc kubenswrapper[4946]: I1128 07:19:05.601554 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 28 07:19:06 crc kubenswrapper[4946]: I1128 07:19:06.625828 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="206851cb-4673-4ce1-b038-c2e425d306b7" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.199:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 28 07:19:06 crc kubenswrapper[4946]: I1128 07:19:06.625845 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="206851cb-4673-4ce1-b038-c2e425d306b7" 
containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.199:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 28 07:19:12 crc kubenswrapper[4946]: I1128 07:19:12.518755 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 28 07:19:12 crc kubenswrapper[4946]: I1128 07:19:12.587092 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 28 07:19:12 crc kubenswrapper[4946]: I1128 07:19:12.588185 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 28 07:19:12 crc kubenswrapper[4946]: I1128 07:19:12.588412 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 28 07:19:12 crc kubenswrapper[4946]: I1128 07:19:12.605247 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 28 07:19:13 crc kubenswrapper[4946]: I1128 07:19:13.406292 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 28 07:19:13 crc kubenswrapper[4946]: I1128 07:19:13.416235 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 28 07:19:15 crc kubenswrapper[4946]: I1128 07:19:15.607685 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 28 07:19:15 crc kubenswrapper[4946]: I1128 07:19:15.617440 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 28 07:19:15 crc kubenswrapper[4946]: I1128 07:19:15.623536 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 28 07:19:16 crc kubenswrapper[4946]: I1128 07:19:16.454212 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 28 07:19:24 crc kubenswrapper[4946]: I1128 07:19:24.730372 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 07:19:24 crc kubenswrapper[4946]: I1128 07:19:24.731325 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 07:19:37 crc kubenswrapper[4946]: I1128 07:19:37.798952 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Nov 28 07:19:37 crc kubenswrapper[4946]: I1128 07:19:37.799648 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="1a71b82d-c922-4d23-b816-f662cc5539ec" containerName="openstackclient" containerID="cri-o://2456162c80891db11f4df38895a38626b9a5d79c2067609e9aa578de23df284f" gracePeriod=2 Nov 28 07:19:37 crc kubenswrapper[4946]: I1128 07:19:37.807360 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Nov 28 07:19:38 crc kubenswrapper[4946]: I1128 07:19:38.241109 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 
28 07:19:38 crc kubenswrapper[4946]: I1128 07:19:38.287556 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glanceb763-account-delete-mmnjs"] Nov 28 07:19:38 crc kubenswrapper[4946]: E1128 07:19:38.288051 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="200ece9e-7940-45c6-86bc-fdf0da574bc2" containerName="extract-utilities" Nov 28 07:19:38 crc kubenswrapper[4946]: I1128 07:19:38.288065 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="200ece9e-7940-45c6-86bc-fdf0da574bc2" containerName="extract-utilities" Nov 28 07:19:38 crc kubenswrapper[4946]: E1128 07:19:38.288087 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a71b82d-c922-4d23-b816-f662cc5539ec" containerName="openstackclient" Nov 28 07:19:38 crc kubenswrapper[4946]: I1128 07:19:38.288094 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a71b82d-c922-4d23-b816-f662cc5539ec" containerName="openstackclient" Nov 28 07:19:38 crc kubenswrapper[4946]: E1128 07:19:38.288106 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="200ece9e-7940-45c6-86bc-fdf0da574bc2" containerName="registry-server" Nov 28 07:19:38 crc kubenswrapper[4946]: I1128 07:19:38.288113 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="200ece9e-7940-45c6-86bc-fdf0da574bc2" containerName="registry-server" Nov 28 07:19:38 crc kubenswrapper[4946]: E1128 07:19:38.288141 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="200ece9e-7940-45c6-86bc-fdf0da574bc2" containerName="extract-content" Nov 28 07:19:38 crc kubenswrapper[4946]: I1128 07:19:38.288147 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="200ece9e-7940-45c6-86bc-fdf0da574bc2" containerName="extract-content" Nov 28 07:19:38 crc kubenswrapper[4946]: I1128 07:19:38.288338 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="200ece9e-7940-45c6-86bc-fdf0da574bc2" containerName="registry-server" Nov 28 07:19:38 crc kubenswrapper[4946]: I1128 07:19:38.288356 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a71b82d-c922-4d23-b816-f662cc5539ec" containerName="openstackclient" Nov 28 07:19:38 crc kubenswrapper[4946]: I1128 07:19:38.289071 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glanceb763-account-delete-mmnjs" Nov 28 07:19:38 crc kubenswrapper[4946]: I1128 07:19:38.333571 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glanceb763-account-delete-mmnjs"] Nov 28 07:19:38 crc kubenswrapper[4946]: I1128 07:19:38.402068 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-xzzn6"] Nov 28 07:19:38 crc kubenswrapper[4946]: I1128 07:19:38.420245 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8bs8\" (UniqueName: \"kubernetes.io/projected/eb07712a-805e-4e0f-9a81-dd8ce42bfb88-kube-api-access-g8bs8\") pod \"glanceb763-account-delete-mmnjs\" (UID: \"eb07712a-805e-4e0f-9a81-dd8ce42bfb88\") " pod="openstack/glanceb763-account-delete-mmnjs" Nov 28 07:19:38 crc kubenswrapper[4946]: I1128 07:19:38.420357 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb07712a-805e-4e0f-9a81-dd8ce42bfb88-operator-scripts\") pod \"glanceb763-account-delete-mmnjs\" (UID: \"eb07712a-805e-4e0f-9a81-dd8ce42bfb88\") " pod="openstack/glanceb763-account-delete-mmnjs" Nov 28 07:19:38 crc kubenswrapper[4946]: E1128 07:19:38.421531 4946 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Nov 28 07:19:38 crc kubenswrapper[4946]: E1128 07:19:38.421581 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/59fdca77-b333-44be-ab8c-96a2f4bcc340-config-data podName:59fdca77-b333-44be-ab8c-96a2f4bcc340 nodeName:}" failed. No retries permitted until 2025-11-28 07:19:38.921562412 +0000 UTC m=+1633.299627523 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/59fdca77-b333-44be-ab8c-96a2f4bcc340-config-data") pod "rabbitmq-server-0" (UID: "59fdca77-b333-44be-ab8c-96a2f4bcc340") : configmap "rabbitmq-config-data" not found Nov 28 07:19:38 crc kubenswrapper[4946]: I1128 07:19:38.504022 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder8ebd-account-delete-trxbg"] Nov 28 07:19:38 crc kubenswrapper[4946]: I1128 07:19:38.505667 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder8ebd-account-delete-trxbg" Nov 28 07:19:38 crc kubenswrapper[4946]: I1128 07:19:38.522979 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8bs8\" (UniqueName: \"kubernetes.io/projected/eb07712a-805e-4e0f-9a81-dd8ce42bfb88-kube-api-access-g8bs8\") pod \"glanceb763-account-delete-mmnjs\" (UID: \"eb07712a-805e-4e0f-9a81-dd8ce42bfb88\") " pod="openstack/glanceb763-account-delete-mmnjs" Nov 28 07:19:38 crc kubenswrapper[4946]: I1128 07:19:38.523066 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb07712a-805e-4e0f-9a81-dd8ce42bfb88-operator-scripts\") pod \"glanceb763-account-delete-mmnjs\" (UID: \"eb07712a-805e-4e0f-9a81-dd8ce42bfb88\") " pod="openstack/glanceb763-account-delete-mmnjs" Nov 28 07:19:38 crc kubenswrapper[4946]: I1128 07:19:38.523741 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb07712a-805e-4e0f-9a81-dd8ce42bfb88-operator-scripts\") pod \"glanceb763-account-delete-mmnjs\" (UID: \"eb07712a-805e-4e0f-9a81-dd8ce42bfb88\") " pod="openstack/glanceb763-account-delete-mmnjs" Nov 28 07:19:38 crc kubenswrapper[4946]: I1128 07:19:38.536120 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-wm9fv"] Nov 28 07:19:38 crc kubenswrapper[4946]: I1128 07:19:38.536590 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-wm9fv" podUID="7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08" containerName="openstack-network-exporter" containerID="cri-o://7daeb0c76b280f50561eb7b7c2864caf3e13cb79b3ecceb81295252f5c19b84a" gracePeriod=30 Nov 28 07:19:38 crc kubenswrapper[4946]: I1128 07:19:38.572532 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder8ebd-account-delete-trxbg"] Nov 28 07:19:38 crc kubenswrapper[4946]: I1128 07:19:38.598590 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8bs8\" (UniqueName: \"kubernetes.io/projected/eb07712a-805e-4e0f-9a81-dd8ce42bfb88-kube-api-access-g8bs8\") pod \"glanceb763-account-delete-mmnjs\" (UID: \"eb07712a-805e-4e0f-9a81-dd8ce42bfb88\") " pod="openstack/glanceb763-account-delete-mmnjs" Nov 28 07:19:38 crc kubenswrapper[4946]: I1128 07:19:38.631208 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32ed9247-5959-4a5b-a879-52fac366f999-operator-scripts\") pod \"cinder8ebd-account-delete-trxbg\" (UID: \"32ed9247-5959-4a5b-a879-52fac366f999\") " pod="openstack/cinder8ebd-account-delete-trxbg" Nov 28 07:19:38 crc kubenswrapper[4946]: I1128 07:19:38.631264 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msrtg\" (UniqueName: \"kubernetes.io/projected/32ed9247-5959-4a5b-a879-52fac366f999-kube-api-access-msrtg\") pod \"cinder8ebd-account-delete-trxbg\" (UID: \"32ed9247-5959-4a5b-a879-52fac366f999\") " pod="openstack/cinder8ebd-account-delete-trxbg" Nov 28 07:19:38 crc kubenswrapper[4946]: I1128 07:19:38.644924 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-gpnz5"] Nov 28 07:19:38 crc kubenswrapper[4946]: I1128 07:19:38.651758 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glanceb763-account-delete-mmnjs" Nov 28 07:19:38 crc kubenswrapper[4946]: I1128 07:19:38.666887 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutronc822-account-delete-z6qcr"] Nov 28 07:19:38 crc kubenswrapper[4946]: I1128 07:19:38.668325 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutronc822-account-delete-z6qcr" Nov 28 07:19:38 crc kubenswrapper[4946]: I1128 07:19:38.704396 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutronc822-account-delete-z6qcr"] Nov 28 07:19:38 crc kubenswrapper[4946]: I1128 07:19:38.740094 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32ed9247-5959-4a5b-a879-52fac366f999-operator-scripts\") pod \"cinder8ebd-account-delete-trxbg\" (UID: \"32ed9247-5959-4a5b-a879-52fac366f999\") " pod="openstack/cinder8ebd-account-delete-trxbg" Nov 28 07:19:38 crc kubenswrapper[4946]: I1128 07:19:38.740353 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msrtg\" (UniqueName: \"kubernetes.io/projected/32ed9247-5959-4a5b-a879-52fac366f999-kube-api-access-msrtg\") pod \"cinder8ebd-account-delete-trxbg\" (UID: \"32ed9247-5959-4a5b-a879-52fac366f999\") " pod="openstack/cinder8ebd-account-delete-trxbg" Nov 28 07:19:38 crc kubenswrapper[4946]: I1128 07:19:38.741621 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32ed9247-5959-4a5b-a879-52fac366f999-operator-scripts\") pod \"cinder8ebd-account-delete-trxbg\" (UID: \"32ed9247-5959-4a5b-a879-52fac366f999\") " pod="openstack/cinder8ebd-account-delete-trxbg" Nov 28 07:19:38 crc kubenswrapper[4946]: I1128 07:19:38.777280 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-5xxm2"] Nov 28 07:19:38 crc kubenswrapper[4946]: I1128 07:19:38.797213 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-5xxm2"] Nov 28 07:19:38 crc kubenswrapper[4946]: I1128 07:19:38.811519 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msrtg\" (UniqueName: \"kubernetes.io/projected/32ed9247-5959-4a5b-a879-52fac366f999-kube-api-access-msrtg\") pod \"cinder8ebd-account-delete-trxbg\" (UID: \"32ed9247-5959-4a5b-a879-52fac366f999\") " pod="openstack/cinder8ebd-account-delete-trxbg" Nov 28 07:19:38 crc kubenswrapper[4946]: I1128 07:19:38.825260 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican1925-account-delete-gjf5l"] Nov 28 07:19:38 crc kubenswrapper[4946]: I1128 07:19:38.826898 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican1925-account-delete-gjf5l" Nov 28 07:19:38 crc kubenswrapper[4946]: I1128 07:19:38.843671 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hb62\" (UniqueName: \"kubernetes.io/projected/f80c95e6-2981-4755-ada4-26bbf1372693-kube-api-access-4hb62\") pod \"neutronc822-account-delete-z6qcr\" (UID: \"f80c95e6-2981-4755-ada4-26bbf1372693\") " pod="openstack/neutronc822-account-delete-z6qcr" Nov 28 07:19:38 crc kubenswrapper[4946]: I1128 07:19:38.843751 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f80c95e6-2981-4755-ada4-26bbf1372693-operator-scripts\") pod \"neutronc822-account-delete-z6qcr\" (UID: \"f80c95e6-2981-4755-ada4-26bbf1372693\") " pod="openstack/neutronc822-account-delete-z6qcr" Nov 28 07:19:38 crc kubenswrapper[4946]: I1128 07:19:38.858610 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Nov 28 07:19:38 crc kubenswrapper[4946]: I1128 07:19:38.858905 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="c274eefa-3598-470b-9b07-25928903d425" containerName="ovn-northd" containerID="cri-o://7e494d19861cee1bf5cc7147950495eba143f3dbb2bec8c94fbf0c3c3c16f2dd" gracePeriod=30 Nov 28 07:19:38 crc kubenswrapper[4946]: I1128 07:19:38.859498 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="c274eefa-3598-470b-9b07-25928903d425" containerName="openstack-network-exporter" containerID="cri-o://f8b2e10c106db832955230f63c48caf7a4ee259e84e9b344f03d06060ead1493" gracePeriod=30 Nov 28 07:19:38 crc kubenswrapper[4946]: I1128 07:19:38.875404 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder8ebd-account-delete-trxbg" Nov 28 07:19:38 crc kubenswrapper[4946]: I1128 07:19:38.910664 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-8wk8q"] Nov 28 07:19:38 crc kubenswrapper[4946]: I1128 07:19:38.939068 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-8wk8q"] Nov 28 07:19:38 crc kubenswrapper[4946]: I1128 07:19:38.946257 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f80c95e6-2981-4755-ada4-26bbf1372693-operator-scripts\") pod \"neutronc822-account-delete-z6qcr\" (UID: \"f80c95e6-2981-4755-ada4-26bbf1372693\") " pod="openstack/neutronc822-account-delete-z6qcr" Nov 28 07:19:38 crc kubenswrapper[4946]: I1128 07:19:38.946490 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ck5w\" (UniqueName: \"kubernetes.io/projected/9f7aa965-bfc0-4db1-a2d2-07fe02be9f18-kube-api-access-9ck5w\") pod \"barbican1925-account-delete-gjf5l\" (UID: \"9f7aa965-bfc0-4db1-a2d2-07fe02be9f18\") " pod="openstack/barbican1925-account-delete-gjf5l" Nov 28 07:19:38 crc kubenswrapper[4946]: I1128 07:19:38.946540 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hb62\" (UniqueName: \"kubernetes.io/projected/f80c95e6-2981-4755-ada4-26bbf1372693-kube-api-access-4hb62\") pod \"neutronc822-account-delete-z6qcr\" (UID: \"f80c95e6-2981-4755-ada4-26bbf1372693\") " pod="openstack/neutronc822-account-delete-z6qcr" Nov 28 07:19:38 crc kubenswrapper[4946]: I1128 07:19:38.946577 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f7aa965-bfc0-4db1-a2d2-07fe02be9f18-operator-scripts\") pod \"barbican1925-account-delete-gjf5l\" (UID: \"9f7aa965-bfc0-4db1-a2d2-07fe02be9f18\") " pod="openstack/barbican1925-account-delete-gjf5l" Nov 28 07:19:38 crc kubenswrapper[4946]: E1128 07:19:38.946758 4946 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Nov 28 07:19:38 crc kubenswrapper[4946]: E1128 07:19:38.946825 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/59fdca77-b333-44be-ab8c-96a2f4bcc340-config-data podName:59fdca77-b333-44be-ab8c-96a2f4bcc340 nodeName:}" failed. No retries permitted until 2025-11-28 07:19:39.946804486 +0000 UTC m=+1634.324869597 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/59fdca77-b333-44be-ab8c-96a2f4bcc340-config-data") pod "rabbitmq-server-0" (UID: "59fdca77-b333-44be-ab8c-96a2f4bcc340") : configmap "rabbitmq-config-data" not found Nov 28 07:19:38 crc kubenswrapper[4946]: I1128 07:19:38.947223 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f80c95e6-2981-4755-ada4-26bbf1372693-operator-scripts\") pod \"neutronc822-account-delete-z6qcr\" (UID: \"f80c95e6-2981-4755-ada4-26bbf1372693\") " pod="openstack/neutronc822-account-delete-z6qcr" Nov 28 07:19:38 crc kubenswrapper[4946]: I1128 07:19:38.970350 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placementd9cc-account-delete-9rfvn"] Nov 28 07:19:38 crc kubenswrapper[4946]: I1128 07:19:38.971805 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placementd9cc-account-delete-9rfvn" Nov 28 07:19:38 crc kubenswrapper[4946]: I1128 07:19:38.982805 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hb62\" (UniqueName: \"kubernetes.io/projected/f80c95e6-2981-4755-ada4-26bbf1372693-kube-api-access-4hb62\") pod \"neutronc822-account-delete-z6qcr\" (UID: \"f80c95e6-2981-4755-ada4-26bbf1372693\") " pod="openstack/neutronc822-account-delete-z6qcr" Nov 28 07:19:38 crc kubenswrapper[4946]: I1128 07:19:38.995552 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican1925-account-delete-gjf5l"] Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.028391 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutronc822-account-delete-z6qcr" Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.051940 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ck5w\" (UniqueName: \"kubernetes.io/projected/9f7aa965-bfc0-4db1-a2d2-07fe02be9f18-kube-api-access-9ck5w\") pod \"barbican1925-account-delete-gjf5l\" (UID: \"9f7aa965-bfc0-4db1-a2d2-07fe02be9f18\") " pod="openstack/barbican1925-account-delete-gjf5l" Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.052028 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f7aa965-bfc0-4db1-a2d2-07fe02be9f18-operator-scripts\") pod \"barbican1925-account-delete-gjf5l\" (UID: \"9f7aa965-bfc0-4db1-a2d2-07fe02be9f18\") " pod="openstack/barbican1925-account-delete-gjf5l" Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.053121 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f7aa965-bfc0-4db1-a2d2-07fe02be9f18-operator-scripts\") pod \"barbican1925-account-delete-gjf5l\" (UID: \"9f7aa965-bfc0-4db1-a2d2-07fe02be9f18\") " pod="openstack/barbican1925-account-delete-gjf5l" Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.077821 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placementd9cc-account-delete-9rfvn"] Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.107938 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-9p5hs"] Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.124768 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ck5w\" (UniqueName: \"kubernetes.io/projected/9f7aa965-bfc0-4db1-a2d2-07fe02be9f18-kube-api-access-9ck5w\") pod \"barbican1925-account-delete-gjf5l\" (UID: \"9f7aa965-bfc0-4db1-a2d2-07fe02be9f18\") " pod="openstack/barbican1925-account-delete-gjf5l" Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.156654 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1486d432-b3a6-4470-b145-076dafbfca67-operator-scripts\") pod \"placementd9cc-account-delete-9rfvn\" (UID: \"1486d432-b3a6-4470-b145-076dafbfca67\") " pod="openstack/placementd9cc-account-delete-9rfvn" Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.156726 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsf2k\" (UniqueName: \"kubernetes.io/projected/1486d432-b3a6-4470-b145-076dafbfca67-kube-api-access-nsf2k\") pod \"placementd9cc-account-delete-9rfvn\" (UID: 
\"1486d432-b3a6-4470-b145-076dafbfca67\") " pod="openstack/placementd9cc-account-delete-9rfvn" Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.160522 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-9p5hs"] Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.208300 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-l7pvv"] Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.253673 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-l7pvv"] Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.260359 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1486d432-b3a6-4470-b145-076dafbfca67-operator-scripts\") pod \"placementd9cc-account-delete-9rfvn\" (UID: \"1486d432-b3a6-4470-b145-076dafbfca67\") " pod="openstack/placementd9cc-account-delete-9rfvn" Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.260428 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsf2k\" (UniqueName: \"kubernetes.io/projected/1486d432-b3a6-4470-b145-076dafbfca67-kube-api-access-nsf2k\") pod \"placementd9cc-account-delete-9rfvn\" (UID: \"1486d432-b3a6-4470-b145-076dafbfca67\") " pod="openstack/placementd9cc-account-delete-9rfvn" Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.263201 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1486d432-b3a6-4470-b145-076dafbfca67-operator-scripts\") pod \"placementd9cc-account-delete-9rfvn\" (UID: \"1486d432-b3a6-4470-b145-076dafbfca67\") " pod="openstack/placementd9cc-account-delete-9rfvn" Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.282278 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsf2k\" (UniqueName: \"kubernetes.io/projected/1486d432-b3a6-4470-b145-076dafbfca67-kube-api-access-nsf2k\") pod \"placementd9cc-account-delete-9rfvn\" (UID: \"1486d432-b3a6-4470-b145-076dafbfca67\") " pod="openstack/placementd9cc-account-delete-9rfvn" Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.316595 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.317369 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="a2e2a06b-b43f-4a3b-985d-964e177c3c06" containerName="openstack-network-exporter" containerID="cri-o://46e527156036f2db6b68d08e685dc075879a5f56a11d96918d1c98628885e7e8" gracePeriod=300 Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.337195 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc646c8f9-smgp4"] Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.337517 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bc646c8f9-smgp4" podUID="23813153-a582-4ea1-bdf5-f81b2994aed6" containerName="dnsmasq-dns" containerID="cri-o://c572a089b466148ca1bc267403c169adc75a8306ad42f222db62e4a135735727" gracePeriod=10 Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.372702 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican1925-account-delete-gjf5l" Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.412134 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placementd9cc-account-delete-9rfvn" Nov 28 07:19:39 crc kubenswrapper[4946]: E1128 07:19:39.419476 4946 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err="command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: " execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-xzzn6" message=< Nov 28 07:19:39 crc kubenswrapper[4946]: Exiting ovn-controller (1) [ OK ] Nov 28 07:19:39 crc kubenswrapper[4946]: > Nov 28 07:19:39 crc kubenswrapper[4946]: E1128 07:19:39.419515 4946 kuberuntime_container.go:691] "PreStop hook failed" err="command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: " pod="openstack/ovn-controller-xzzn6" podUID="5feb905d-9c23-4603-b118-fdc05a237848" containerName="ovn-controller" containerID="cri-o://304cacbd4eab8b01e9777876a67dcba73de4883771064616952125d6fef1d4cb" Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.419553 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-xzzn6" podUID="5feb905d-9c23-4603-b118-fdc05a237848" containerName="ovn-controller" containerID="cri-o://304cacbd4eab8b01e9777876a67dcba73de4883771064616952125d6fef1d4cb" gracePeriod=29 Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.477007 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/novaapid713-account-delete-lj6h9"] Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.478979 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novaapid713-account-delete-lj6h9" Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.556843 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-cbhjz"] Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.577410 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="a2e2a06b-b43f-4a3b-985d-964e177c3c06" containerName="ovsdbserver-sb" containerID="cri-o://89293c54aab4564bb3c1f382ee28896a803e4970c73eac5d5bec925d87394c71" gracePeriod=300 Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.610585 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-cbhjz"] Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.668621 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30b2d9aa-848b-4f33-9bd8-921f5de5ab36-operator-scripts\") pod \"novaapid713-account-delete-lj6h9\" (UID: \"30b2d9aa-848b-4f33-9bd8-921f5de5ab36\") " pod="openstack/novaapid713-account-delete-lj6h9" Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.668761 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp5hj\" (UniqueName: \"kubernetes.io/projected/30b2d9aa-848b-4f33-9bd8-921f5de5ab36-kube-api-access-dp5hj\") pod \"novaapid713-account-delete-lj6h9\" (UID: \"30b2d9aa-848b-4f33-9bd8-921f5de5ab36\") " pod="openstack/novaapid713-account-delete-lj6h9" Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.691431 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.692875 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="926ec930-a8f3-4c87-9963-39779e7309cc" 
containerName="openstack-network-exporter" containerID="cri-o://ca0d99baba41d46f00aebe709dcc220db85de34c057c9dcc89055c2491a77711" gracePeriod=300 Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.771336 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30b2d9aa-848b-4f33-9bd8-921f5de5ab36-operator-scripts\") pod \"novaapid713-account-delete-lj6h9\" (UID: \"30b2d9aa-848b-4f33-9bd8-921f5de5ab36\") " pod="openstack/novaapid713-account-delete-lj6h9" Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.771505 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dp5hj\" (UniqueName: \"kubernetes.io/projected/30b2d9aa-848b-4f33-9bd8-921f5de5ab36-kube-api-access-dp5hj\") pod \"novaapid713-account-delete-lj6h9\" (UID: \"30b2d9aa-848b-4f33-9bd8-921f5de5ab36\") " pod="openstack/novaapid713-account-delete-lj6h9" Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.772817 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30b2d9aa-848b-4f33-9bd8-921f5de5ab36-operator-scripts\") pod \"novaapid713-account-delete-lj6h9\" (UID: \"30b2d9aa-848b-4f33-9bd8-921f5de5ab36\") " pod="openstack/novaapid713-account-delete-lj6h9" Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.783775 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-wm9fv_7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08/openstack-network-exporter/0.log" Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.783857 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-wm9fv" Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.784875 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novaapid713-account-delete-lj6h9"] Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.815116 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dp5hj\" (UniqueName: \"kubernetes.io/projected/30b2d9aa-848b-4f33-9bd8-921f5de5ab36-kube-api-access-dp5hj\") pod \"novaapid713-account-delete-lj6h9\" (UID: \"30b2d9aa-848b-4f33-9bd8-921f5de5ab36\") " pod="openstack/novaapid713-account-delete-lj6h9" Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.853593 4946 generic.go:334] "Generic (PLEG): container finished" podID="5feb905d-9c23-4603-b118-fdc05a237848" containerID="304cacbd4eab8b01e9777876a67dcba73de4883771064616952125d6fef1d4cb" exitCode=0 Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.854608 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xzzn6" event={"ID":"5feb905d-9c23-4603-b118-fdc05a237848","Type":"ContainerDied","Data":"304cacbd4eab8b01e9777876a67dcba73de4883771064616952125d6fef1d4cb"} Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.865076 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/novaapid713-account-delete-lj6h9" Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.866211 4946 generic.go:334] "Generic (PLEG): container finished" podID="c274eefa-3598-470b-9b07-25928903d425" containerID="f8b2e10c106db832955230f63c48caf7a4ee259e84e9b344f03d06060ead1493" exitCode=2 Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.866267 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"c274eefa-3598-470b-9b07-25928903d425","Type":"ContainerDied","Data":"f8b2e10c106db832955230f63c48caf7a4ee259e84e9b344f03d06060ead1493"} Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.870060 4946 generic.go:334] "Generic (PLEG): container finished" podID="23813153-a582-4ea1-bdf5-f81b2994aed6" containerID="c572a089b466148ca1bc267403c169adc75a8306ad42f222db62e4a135735727" exitCode=0 Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.870105 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc646c8f9-smgp4" event={"ID":"23813153-a582-4ea1-bdf5-f81b2994aed6","Type":"ContainerDied","Data":"c572a089b466148ca1bc267403c169adc75a8306ad42f222db62e4a135735727"} Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.873106 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08-config\") pod \"7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08\" (UID: \"7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08\") " Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.873202 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82g9r\" (UniqueName: \"kubernetes.io/projected/7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08-kube-api-access-82g9r\") pod \"7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08\" (UID: \"7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08\") " Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.873244 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08-metrics-certs-tls-certs\") pod \"7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08\" (UID: \"7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08\") " Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.873312 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08-combined-ca-bundle\") pod \"7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08\" (UID: \"7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08\") " Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.873347 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08-ovn-rundir\") pod \"7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08\" (UID: \"7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08\") " Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.873368 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08-ovs-rundir\") pod \"7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08\" (UID: \"7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08\") " Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.873869 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08-ovs-rundir" 
(OuterVolumeSpecName: "ovs-rundir") pod "7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08" (UID: "7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08"). InnerVolumeSpecName "ovs-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.874487 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08" (UID: "7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.874557 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08-config" (OuterVolumeSpecName: "config") pod "7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08" (UID: "7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.878320 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08-kube-api-access-82g9r" (OuterVolumeSpecName: "kube-api-access-82g9r") pod "7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08" (UID: "7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08"). InnerVolumeSpecName "kube-api-access-82g9r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.878641 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="926ec930-a8f3-4c87-9963-39779e7309cc" containerName="ovsdbserver-nb" containerID="cri-o://87bce839fcc780921b732faa53136928a6e9541af2325b83cd8ed770c4841758" gracePeriod=300 Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.882493 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a2e2a06b-b43f-4a3b-985d-964e177c3c06/ovsdbserver-sb/0.log" Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.882547 4946 generic.go:334] "Generic (PLEG): container finished" podID="a2e2a06b-b43f-4a3b-985d-964e177c3c06" containerID="46e527156036f2db6b68d08e685dc075879a5f56a11d96918d1c98628885e7e8" exitCode=2 Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.882645 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a2e2a06b-b43f-4a3b-985d-964e177c3c06","Type":"ContainerDied","Data":"46e527156036f2db6b68d08e685dc075879a5f56a11d96918d1c98628885e7e8"} Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.884635 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.884876 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="acadbe07-94b0-4a5d-ac42-6524f0e4ce61" containerName="glance-log" containerID="cri-o://364195b07b396d49046a6d8594cb12442c0047908fd530817352c055ef2e1319" gracePeriod=30 Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.885437 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="acadbe07-94b0-4a5d-ac42-6524f0e4ce61" containerName="glance-httpd" containerID="cri-o://b1f1ded874bdaaff5d6be92e1099409af1ca775953c8f30295a4c606005bdf30" gracePeriod=30 Nov 28 07:19:39 crc 
kubenswrapper[4946]: I1128 07:19:39.911857 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-wm9fv_7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08/openstack-network-exporter/0.log" Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.911916 4946 generic.go:334] "Generic (PLEG): container finished" podID="7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08" containerID="7daeb0c76b280f50561eb7b7c2864caf3e13cb79b3ecceb81295252f5c19b84a" exitCode=2 Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.911955 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-wm9fv" event={"ID":"7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08","Type":"ContainerDied","Data":"7daeb0c76b280f50561eb7b7c2864caf3e13cb79b3ecceb81295252f5c19b84a"} Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.911988 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-wm9fv" event={"ID":"7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08","Type":"ContainerDied","Data":"b564078b348c2bcf44f711ce76b205e521ced0d4dacf8dd440c610f0a366acbb"} Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.912004 4946 scope.go:117] "RemoveContainer" containerID="7daeb0c76b280f50561eb7b7c2864caf3e13cb79b3ecceb81295252f5c19b84a" Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.912174 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-wm9fv" Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.973550 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/novacell0d35d-account-delete-c848z"] Nov 28 07:19:39 crc kubenswrapper[4946]: E1128 07:19:39.974103 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08" containerName="openstack-network-exporter" Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.974117 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08" containerName="openstack-network-exporter" Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.974367 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08" containerName="openstack-network-exporter" Nov 28 07:19:39 crc kubenswrapper[4946]: I1128 07:19:39.975121 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/novacell0d35d-account-delete-c848z" Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:39.996415 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b138df41-1f0c-4edb-9546-e0f5ec16cf06-operator-scripts\") pod \"novacell0d35d-account-delete-c848z\" (UID: \"b138df41-1f0c-4edb-9546-e0f5ec16cf06\") " pod="openstack/novacell0d35d-account-delete-c848z" Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:39.996673 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhb8l\" (UniqueName: \"kubernetes.io/projected/b138df41-1f0c-4edb-9546-e0f5ec16cf06-kube-api-access-fhb8l\") pod \"novacell0d35d-account-delete-c848z\" (UID: \"b138df41-1f0c-4edb-9546-e0f5ec16cf06\") " pod="openstack/novacell0d35d-account-delete-c848z" Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.001164 4946 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08-ovn-rundir\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.001190 4946 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08-ovs-rundir\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.001202 4946 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.001215 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82g9r\" (UniqueName: \"kubernetes.io/projected/7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08-kube-api-access-82g9r\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:40 crc kubenswrapper[4946]: E1128 07:19:40.001313 4946 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Nov 28 07:19:40 crc kubenswrapper[4946]: E1128 07:19:40.001372 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/59fdca77-b333-44be-ab8c-96a2f4bcc340-config-data podName:59fdca77-b333-44be-ab8c-96a2f4bcc340 nodeName:}" failed. No retries permitted until 2025-11-28 07:19:42.001352483 +0000 UTC m=+1636.379417594 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/59fdca77-b333-44be-ab8c-96a2f4bcc340-config-data") pod "rabbitmq-server-0" (UID: "59fdca77-b333-44be-ab8c-96a2f4bcc340") : configmap "rabbitmq-config-data" not found Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.032273 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-gpnz5" podUID="944abeae-3f1d-4391-a375-b64ed9c17b14" containerName="ovs-vswitchd" containerID="cri-o://b12ef267dfaa1906a3e883237d5228860410b06e258081a1cabe06d004313ce6" gracePeriod=29 Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.037247 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08" (UID: "7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.036054 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cbfc817-e82c-436d-bc3a-6a3a94ee82e8" path="/var/lib/kubelet/pods/0cbfc817-e82c-436d-bc3a-6a3a94ee82e8/volumes" Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.039148 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57401bfe-4d01-4983-8703-d78c50a9886e" path="/var/lib/kubelet/pods/57401bfe-4d01-4983-8703-d78c50a9886e/volumes" Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.040044 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d0d856a-c2e4-4ccf-adfb-70391210f8d9" path="/var/lib/kubelet/pods/8d0d856a-c2e4-4ccf-adfb-70391210f8d9/volumes" Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.040611 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97fa13e8-c7b4-4612-912c-2976861bce81" path="/var/lib/kubelet/pods/97fa13e8-c7b4-4612-912c-2976861bce81/volumes" Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.041690 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4229e67-36fc-40ba-8d90-af5b0d95743f" path="/var/lib/kubelet/pods/a4229e67-36fc-40ba-8d90-af5b0d95743f/volumes" Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.048612 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08" (UID: "7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.057849 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novacell0d35d-account-delete-c848z"] Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.057916 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.057929 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.057943 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-nmgj8"] Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.058166 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="813134a8-463b-4f7d-8160-ceb1c5a96853" containerName="glance-log" containerID="cri-o://53c5b6d4bf9c96653c84e3fcb9a4ea1b7994f3a98935936a45c752c693dcee48" gracePeriod=30 Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.058328 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="813134a8-463b-4f7d-8160-ceb1c5a96853" containerName="glance-httpd" containerID="cri-o://a5bc002da109da3ebb39a470b77cd1c3a84898e95c562ea010269e7e486161bb" gracePeriod=30 Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.070174 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-nmgj8"] Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.083419 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-mr4lq"] Nov 28 07:19:40 crc kubenswrapper[4946]: 
I1128 07:19:40.093712 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.094092 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-gpnz5" podUID="944abeae-3f1d-4391-a375-b64ed9c17b14" containerName="ovsdb-server" containerID="cri-o://c5e052846b04c3f56ae570cc17f4288914d22c94a98566c8edd7f06e4a7f7443" gracePeriod=29 Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.094596 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerName="account-server" containerID="cri-o://849cb349e2d8d74c8a5c873c020253aa6650e18316daa4e4cda33facfa08c64a" gracePeriod=30 Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.094800 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerName="container-updater" containerID="cri-o://61e1d296d26ca679c99de8f8b9f1a8d780c7ec16fe7bd59626b4da870f354696" gracePeriod=30 Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.094863 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerName="swift-recon-cron" containerID="cri-o://5bd491e35ece677c5f005652efd7b988e650a41c30c0108a8430aaaf6dcac5e9" gracePeriod=30 Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.094921 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerName="rsync" containerID="cri-o://1c43b5f235fc838d55b3e3138910a90b5396838761a0454dd53a66990464a1e9" gracePeriod=30 Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.094966 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerName="object-expirer" containerID="cri-o://179205cba904e1173a81ebd776bc1a0a5b564f12e0185b57e69ccb4e5be0d40a" gracePeriod=30 Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.095013 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerName="object-updater" containerID="cri-o://882d75cc76bc05932cfdf4ca19f83e0f38f9b3f3c27568b1f8c72bd6dd29c2f0" gracePeriod=30 Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.095067 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerName="object-auditor" containerID="cri-o://08e99b903087a98fbf737f58badd6a71a2d6baa168caa8e67046f8c35f351fbf" gracePeriod=30 Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.095118 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerName="object-replicator" containerID="cri-o://3bb83e39c083e832f8a225589956ac89495d0a3270463d35cd5097cffffacde0" gracePeriod=30 Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.095169 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerName="object-server" 
containerID="cri-o://7be129de7a59c6e25c3ad9da58ad52ccaea20aa7256f4fab80d19e1a02a9f707" gracePeriod=30 Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.095242 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerName="account-reaper" containerID="cri-o://1d5e25dca53ee666bbf126081c00a7502572f05a1dfc6f018a9664b36c521ea0" gracePeriod=30 Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.095298 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerName="container-auditor" containerID="cri-o://d6be15027db8aca6a49663ad2705ef701b99e1ed3611ca1bdf517ec488caf40d" gracePeriod=30 Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.095341 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerName="container-replicator" containerID="cri-o://67bb3d551d212ca961ad8ea7d743990298fb96bccbac103b400c071fec12a04a" gracePeriod=30 Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.095412 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerName="container-server" containerID="cri-o://0b64d0bffa35322fe586ccacf74d1f4ab7472d93d29af359db4d89db97ef491e" gracePeriod=30 Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.095506 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerName="account-replicator" containerID="cri-o://847a01abfc1b8567574b77c168b044fe726b365c808d4ec665d2ff16afd92625" gracePeriod=30 Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.095558 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerName="account-auditor" containerID="cri-o://2ac3d6bf342fed17048fda402f024c8913172e1208e753608700d74e53cfe4f3" gracePeriod=30 Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.103164 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhb8l\" (UniqueName: \"kubernetes.io/projected/b138df41-1f0c-4edb-9546-e0f5ec16cf06-kube-api-access-fhb8l\") pod \"novacell0d35d-account-delete-c848z\" (UID: \"b138df41-1f0c-4edb-9546-e0f5ec16cf06\") " pod="openstack/novacell0d35d-account-delete-c848z" Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.103368 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b138df41-1f0c-4edb-9546-e0f5ec16cf06-operator-scripts\") pod \"novacell0d35d-account-delete-c848z\" (UID: \"b138df41-1f0c-4edb-9546-e0f5ec16cf06\") " pod="openstack/novacell0d35d-account-delete-c848z" Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.103429 4946 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.103451 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08-combined-ca-bundle\") 
on node \"crc\" DevicePath \"\"" Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.104132 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b138df41-1f0c-4edb-9546-e0f5ec16cf06-operator-scripts\") pod \"novacell0d35d-account-delete-c848z\" (UID: \"b138df41-1f0c-4edb-9546-e0f5ec16cf06\") " pod="openstack/novacell0d35d-account-delete-c848z" Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.105910 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-mr4lq"] Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.118739 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.119058 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="05997c14-3116-4439-8e63-230bf0e5c411" containerName="cinder-api-log" containerID="cri-o://e5f294dc1328491a34656832803a089d67378b093e4cd726077d224138730910" gracePeriod=30 Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.119619 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="05997c14-3116-4439-8e63-230bf0e5c411" containerName="cinder-api" containerID="cri-o://bf9f1ffb702a1544bb3f60c1bb7628caa63240849c8027e55a2a252bcc1fc987" gracePeriod=30 Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.122950 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhb8l\" (UniqueName: \"kubernetes.io/projected/b138df41-1f0c-4edb-9546-e0f5ec16cf06-kube-api-access-fhb8l\") pod \"novacell0d35d-account-delete-c848z\" (UID: \"b138df41-1f0c-4edb-9546-e0f5ec16cf06\") " pod="openstack/novacell0d35d-account-delete-c848z" Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.130941 4946 scope.go:117] "RemoveContainer" containerID="7daeb0c76b280f50561eb7b7c2864caf3e13cb79b3ecceb81295252f5c19b84a" Nov 28 07:19:40 crc kubenswrapper[4946]: E1128 07:19:40.131334 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7daeb0c76b280f50561eb7b7c2864caf3e13cb79b3ecceb81295252f5c19b84a\": container with ID starting with 7daeb0c76b280f50561eb7b7c2864caf3e13cb79b3ecceb81295252f5c19b84a not found: ID does not exist" containerID="7daeb0c76b280f50561eb7b7c2864caf3e13cb79b3ecceb81295252f5c19b84a" Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.132316 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7daeb0c76b280f50561eb7b7c2864caf3e13cb79b3ecceb81295252f5c19b84a"} err="failed to get container status \"7daeb0c76b280f50561eb7b7c2864caf3e13cb79b3ecceb81295252f5c19b84a\": rpc error: code = NotFound desc = could not find container \"7daeb0c76b280f50561eb7b7c2864caf3e13cb79b3ecceb81295252f5c19b84a\": container with ID starting with 7daeb0c76b280f50561eb7b7c2864caf3e13cb79b3ecceb81295252f5c19b84a not found: ID does not exist" Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.137609 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-2frfb"] Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.156917 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-xzzn6" Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.187556 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-2frfb"] Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.197586 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.198096 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="c485c360-55fc-49da-851d-ab74f7c7fc98" containerName="cinder-scheduler" containerID="cri-o://a144d6e25c9728943275e991de8aeab0546d7b79b5798dc6dd8598ae23f2baf4" gracePeriod=30 Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.198267 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="c485c360-55fc-49da-851d-ab74f7c7fc98" containerName="probe" containerID="cri-o://5edec769179e5fbde067252513cb9a91acd14bb765488eebcacb651d47671cea" gracePeriod=30 Nov 28 07:19:40 crc kubenswrapper[4946]: E1128 07:19:40.207096 4946 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Nov 28 07:19:40 crc kubenswrapper[4946]: E1128 07:19:40.207148 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3521840d-60d0-450c-8c05-7e2ad0fc4e97-config-data podName:3521840d-60d0-450c-8c05-7e2ad0fc4e97 nodeName:}" failed. No retries permitted until 2025-11-28 07:19:40.707132584 +0000 UTC m=+1635.085197695 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/3521840d-60d0-450c-8c05-7e2ad0fc4e97-config-data") pod "rabbitmq-cell1-server-0" (UID: "3521840d-60d0-450c-8c05-7e2ad0fc4e97") : configmap "rabbitmq-cell1-config-data" not found Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.211010 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-777c4856b5-mgnhk"] Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.211220 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-777c4856b5-mgnhk" podUID="d1578c84-1d87-41b2-bfa7-637c3b53366f" containerName="neutron-api" containerID="cri-o://a520f59984f4e2b3696712434dc811010686939bd64077205d50d0ba4e29000f" gracePeriod=30 Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.211637 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-777c4856b5-mgnhk" podUID="d1578c84-1d87-41b2-bfa7-637c3b53366f" containerName="neutron-httpd" containerID="cri-o://1b4c77565cb683565551995a4fa5e6f14515e5125068ec275e329eaccc6d274a" gracePeriod=30 Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.223803 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-96fb6f878-56tfz"] Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.224115 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-96fb6f878-56tfz" podUID="52101de8-a25c-4372-9df3-3f090167ff5f" containerName="placement-log" containerID="cri-o://a2f882b052a314819dd4340b85645a14b6914096d138c6a1d81c6036bd6013fb" gracePeriod=30 Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.224661 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-96fb6f878-56tfz" podUID="52101de8-a25c-4372-9df3-3f090167ff5f" 
containerName="placement-api" containerID="cri-o://6b8d481b71ffab080960ca39f72a71bb4c2e8f178d73020eb2dfb24175734884" gracePeriod=30 Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.263496 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-97bfd767f-7zg9s"] Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.263779 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-97bfd767f-7zg9s" podUID="4619b857-5e70-4ab3-807d-d233c9d9223c" containerName="barbican-worker-log" containerID="cri-o://9517940817e1a7d770b2a8b25720d839e772ec8d3fd3f428c2debea57ae43b63" gracePeriod=30 Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.264336 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-97bfd767f-7zg9s" podUID="4619b857-5e70-4ab3-807d-d233c9d9223c" containerName="barbican-worker" containerID="cri-o://1bd51decff33c36bbc5dd5c23b04d649356ea9e1d0ccc8a4e364d1512e184087" gracePeriod=30 Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.278693 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.279021 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="206851cb-4673-4ce1-b038-c2e425d306b7" containerName="nova-metadata-log" containerID="cri-o://24ddca1eb18e17e4df35510352b9a1bdc29c68afcf2f6e975fb5d6693d132c62" gracePeriod=30 Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.279808 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="206851cb-4673-4ce1-b038-c2e425d306b7" containerName="nova-metadata-metadata" containerID="cri-o://15a189a2591a0270f4f6fdcfc4b5cb8f68b93b0132f30d27ff66c62f4a2d9ffd" gracePeriod=30 Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.289727 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-bc95b876b-t8r9q"] Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.289983 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-bc95b876b-t8r9q" podUID="c41f96d0-0ffc-424b-afa7-d7b87f2bf11d" containerName="barbican-keystone-listener-log" containerID="cri-o://0eb88c99eb48005972cbcbed71e1bfbdae3511e830c076af9bb07fc162c1abd1" gracePeriod=30 Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.290178 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-bc95b876b-t8r9q" podUID="c41f96d0-0ffc-424b-afa7-d7b87f2bf11d" containerName="barbican-keystone-listener" containerID="cri-o://0f786e996c300ebfa42ac27a855c0fd4efcf8b18e7989e6e4ee76f3ce6556d28" gracePeriod=30 Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.301295 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.306962 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5feb905d-9c23-4603-b118-fdc05a237848-var-log-ovn\") pod \"5feb905d-9c23-4603-b118-fdc05a237848\" (UID: \"5feb905d-9c23-4603-b118-fdc05a237848\") " Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.307377 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/5feb905d-9c23-4603-b118-fdc05a237848-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "5feb905d-9c23-4603-b118-fdc05a237848" (UID: "5feb905d-9c23-4603-b118-fdc05a237848"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.308750 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5feb905d-9c23-4603-b118-fdc05a237848-var-run-ovn\") pod \"5feb905d-9c23-4603-b118-fdc05a237848\" (UID: \"5feb905d-9c23-4603-b118-fdc05a237848\") " Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.308868 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5feb905d-9c23-4603-b118-fdc05a237848-var-run\") pod \"5feb905d-9c23-4603-b118-fdc05a237848\" (UID: \"5feb905d-9c23-4603-b118-fdc05a237848\") " Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.308904 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/5feb905d-9c23-4603-b118-fdc05a237848-ovn-controller-tls-certs\") pod \"5feb905d-9c23-4603-b118-fdc05a237848\" (UID: \"5feb905d-9c23-4603-b118-fdc05a237848\") " Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.308935 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5feb905d-9c23-4603-b118-fdc05a237848-combined-ca-bundle\") pod \"5feb905d-9c23-4603-b118-fdc05a237848\" (UID: \"5feb905d-9c23-4603-b118-fdc05a237848\") " Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.309017 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5feb905d-9c23-4603-b118-fdc05a237848-scripts\") pod \"5feb905d-9c23-4603-b118-fdc05a237848\" (UID: \"5feb905d-9c23-4603-b118-fdc05a237848\") " Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.309045 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4zb9\" (UniqueName: \"kubernetes.io/projected/5feb905d-9c23-4603-b118-fdc05a237848-kube-api-access-s4zb9\") pod \"5feb905d-9c23-4603-b118-fdc05a237848\" (UID: \"5feb905d-9c23-4603-b118-fdc05a237848\") " Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.309418 4946 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5feb905d-9c23-4603-b118-fdc05a237848-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.309857 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5feb905d-9c23-4603-b118-fdc05a237848-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "5feb905d-9c23-4603-b118-fdc05a237848" (UID: "5feb905d-9c23-4603-b118-fdc05a237848"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.309887 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5feb905d-9c23-4603-b118-fdc05a237848-var-run" (OuterVolumeSpecName: "var-run") pod "5feb905d-9c23-4603-b118-fdc05a237848" (UID: "5feb905d-9c23-4603-b118-fdc05a237848"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.310799 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5feb905d-9c23-4603-b118-fdc05a237848-scripts" (OuterVolumeSpecName: "scripts") pod "5feb905d-9c23-4603-b118-fdc05a237848" (UID: "5feb905d-9c23-4603-b118-fdc05a237848"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.315678 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5feb905d-9c23-4603-b118-fdc05a237848-kube-api-access-s4zb9" (OuterVolumeSpecName: "kube-api-access-s4zb9") pod "5feb905d-9c23-4603-b118-fdc05a237848" (UID: "5feb905d-9c23-4603-b118-fdc05a237848"). InnerVolumeSpecName "kube-api-access-s4zb9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.320989 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-576769594d-lbv64"] Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.321324 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-576769594d-lbv64" podUID="313c5837-e776-49ef-8689-14f6f70d31a1" containerName="barbican-api-log" containerID="cri-o://ed90e8dccea49daff0a79a0d11bad0590ddbd5f45981e375a7dee23f1a208e56" gracePeriod=30 Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.321579 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-576769594d-lbv64" podUID="313c5837-e776-49ef-8689-14f6f70d31a1" containerName="barbican-api" containerID="cri-o://7154bbd8ae6506bb7231f4ac45fc4ff0e5022062ff798b35ec9d298de493ab9f" gracePeriod=30 Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.339586 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.340151 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="587a9b3d-1634-4af6-96d2-e60c03a7d75f" containerName="nova-api-log" containerID="cri-o://cdc13d7826d930cfa67ccc8a5fb14e61b1d536afb075f29af7e2cdabf489a9d0" gracePeriod=30 Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.340709 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="587a9b3d-1634-4af6-96d2-e60c03a7d75f" containerName="nova-api-api" containerID="cri-o://ff6fb5d2f5f3038d8cdb313bf501d56a6b33a90f0593856e2e8097e215b377eb" gracePeriod=30 Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.350016 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.381203 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-xbl8m"] Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.387024 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/novacell0d35d-account-delete-c848z" Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.415838 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-0c3a-account-create-update-vmw8c"] Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.416501 4946 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5feb905d-9c23-4603-b118-fdc05a237848-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.416518 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4zb9\" (UniqueName: \"kubernetes.io/projected/5feb905d-9c23-4603-b118-fdc05a237848-kube-api-access-s4zb9\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.416528 4946 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5feb905d-9c23-4603-b118-fdc05a237848-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.416537 4946 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5feb905d-9c23-4603-b118-fdc05a237848-var-run\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.433256 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5feb905d-9c23-4603-b118-fdc05a237848-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5feb905d-9c23-4603-b118-fdc05a237848" (UID: "5feb905d-9c23-4603-b118-fdc05a237848"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.449980 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-xbl8m"] Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.466998 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-0c3a-account-create-update-vmw8c"] Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.493419 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="59fdca77-b333-44be-ab8c-96a2f4bcc340" containerName="rabbitmq" containerID="cri-o://ff38770809a612b7b65b4599d06925432273b10bf7ef83575ef3bffae3781506" gracePeriod=604800 Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.495356 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.495777 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="1279af9a-0d83-4c31-94b1-6c732b89a785" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://a98ed7264301f60a91e8b70f16d9b85a1db93f02fb19bfcfa856fa5650ad98e4" gracePeriod=30 Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.531066 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5feb905d-9c23-4603-b118-fdc05a237848-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.622304 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bc646c8f9-smgp4" Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.636622 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23813153-a582-4ea1-bdf5-f81b2994aed6-dns-swift-storage-0\") pod \"23813153-a582-4ea1-bdf5-f81b2994aed6\" (UID: \"23813153-a582-4ea1-bdf5-f81b2994aed6\") " Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.636815 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23813153-a582-4ea1-bdf5-f81b2994aed6-config\") pod \"23813153-a582-4ea1-bdf5-f81b2994aed6\" (UID: \"23813153-a582-4ea1-bdf5-f81b2994aed6\") " Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.636921 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddtqp\" (UniqueName: \"kubernetes.io/projected/23813153-a582-4ea1-bdf5-f81b2994aed6-kube-api-access-ddtqp\") pod \"23813153-a582-4ea1-bdf5-f81b2994aed6\" (UID: \"23813153-a582-4ea1-bdf5-f81b2994aed6\") " Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.637026 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23813153-a582-4ea1-bdf5-f81b2994aed6-ovsdbserver-sb\") pod \"23813153-a582-4ea1-bdf5-f81b2994aed6\" (UID: \"23813153-a582-4ea1-bdf5-f81b2994aed6\") " Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.637083 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23813153-a582-4ea1-bdf5-f81b2994aed6-ovsdbserver-nb\") pod \"23813153-a582-4ea1-bdf5-f81b2994aed6\" (UID: \"23813153-a582-4ea1-bdf5-f81b2994aed6\") " Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.637119 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23813153-a582-4ea1-bdf5-f81b2994aed6-dns-svc\") pod \"23813153-a582-4ea1-bdf5-f81b2994aed6\" (UID: \"23813153-a582-4ea1-bdf5-f81b2994aed6\") " Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.644023 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5feb905d-9c23-4603-b118-fdc05a237848-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "5feb905d-9c23-4603-b118-fdc05a237848" (UID: "5feb905d-9c23-4603-b118-fdc05a237848"). InnerVolumeSpecName "ovn-controller-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.739947 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23813153-a582-4ea1-bdf5-f81b2994aed6-kube-api-access-ddtqp" (OuterVolumeSpecName: "kube-api-access-ddtqp") pod "23813153-a582-4ea1-bdf5-f81b2994aed6" (UID: "23813153-a582-4ea1-bdf5-f81b2994aed6"). InnerVolumeSpecName "kube-api-access-ddtqp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.744246 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.744580 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="db7322c8-b99d-4970-85c0-218d683f1ca3" containerName="nova-cell1-conductor-conductor" containerID="cri-o://aff8824b6f9748bbf0caf47e4c6c01486fb6b04768113350feeb4bb43a17bff5" gracePeriod=30 Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.750989 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="e87edf72-a3a2-4df0-8249-5902f158998d" containerName="galera" containerID="cri-o://3f80615bde2b6e4140684945a44377c8842d66297f035126f4b7271272f03968" gracePeriod=30 Nov 28 07:19:40 crc kubenswrapper[4946]: E1128 07:19:40.751215 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7e494d19861cee1bf5cc7147950495eba143f3dbb2bec8c94fbf0c3c3c16f2dd" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.758320 4946 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/5feb905d-9c23-4603-b118-fdc05a237848-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.758346 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddtqp\" (UniqueName: \"kubernetes.io/projected/23813153-a582-4ea1-bdf5-f81b2994aed6-kube-api-access-ddtqp\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:40 crc kubenswrapper[4946]: E1128 07:19:40.758412 4946 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Nov 28 07:19:40 crc kubenswrapper[4946]: E1128 07:19:40.758479 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3521840d-60d0-450c-8c05-7e2ad0fc4e97-config-data podName:3521840d-60d0-450c-8c05-7e2ad0fc4e97 nodeName:}" failed. No retries permitted until 2025-11-28 07:19:41.758447881 +0000 UTC m=+1636.136512992 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/3521840d-60d0-450c-8c05-7e2ad0fc4e97-config-data") pod "rabbitmq-cell1-server-0" (UID: "3521840d-60d0-450c-8c05-7e2ad0fc4e97") : configmap "rabbitmq-cell1-config-data" not found Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.765092 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wxxkg"] Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.801485 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23813153-a582-4ea1-bdf5-f81b2994aed6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "23813153-a582-4ea1-bdf5-f81b2994aed6" (UID: "23813153-a582-4ea1-bdf5-f81b2994aed6"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:19:40 crc kubenswrapper[4946]: E1128 07:19:40.802334 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7e494d19861cee1bf5cc7147950495eba143f3dbb2bec8c94fbf0c3c3c16f2dd" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.806268 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23813153-a582-4ea1-bdf5-f81b2994aed6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "23813153-a582-4ea1-bdf5-f81b2994aed6" (UID: "23813153-a582-4ea1-bdf5-f81b2994aed6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.810884 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23813153-a582-4ea1-bdf5-f81b2994aed6-config" (OuterVolumeSpecName: "config") pod "23813153-a582-4ea1-bdf5-f81b2994aed6" (UID: "23813153-a582-4ea1-bdf5-f81b2994aed6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:19:40 crc kubenswrapper[4946]: E1128 07:19:40.816989 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7e494d19861cee1bf5cc7147950495eba143f3dbb2bec8c94fbf0c3c3c16f2dd" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 28 07:19:40 crc kubenswrapper[4946]: E1128 07:19:40.817063 4946 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="c274eefa-3598-470b-9b07-25928903d425" containerName="ovn-northd" Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.819155 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wxxkg"] Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.830249 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23813153-a582-4ea1-bdf5-f81b2994aed6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "23813153-a582-4ea1-bdf5-f81b2994aed6" (UID: "23813153-a582-4ea1-bdf5-f81b2994aed6"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.854487 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23813153-a582-4ea1-bdf5-f81b2994aed6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "23813153-a582-4ea1-bdf5-f81b2994aed6" (UID: "23813153-a582-4ea1-bdf5-f81b2994aed6"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.862640 4946 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23813153-a582-4ea1-bdf5-f81b2994aed6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.862675 4946 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23813153-a582-4ea1-bdf5-f81b2994aed6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.862684 4946 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23813153-a582-4ea1-bdf5-f81b2994aed6-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.862693 4946 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23813153-a582-4ea1-bdf5-f81b2994aed6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.862702 4946 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23813153-a582-4ea1-bdf5-f81b2994aed6-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.920716 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-5k9zq"] Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.961850 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.970311 4946 generic.go:334] "Generic (PLEG): container finished" podID="313c5837-e776-49ef-8689-14f6f70d31a1" containerID="ed90e8dccea49daff0a79a0d11bad0590ddbd5f45981e375a7dee23f1a208e56" exitCode=143 Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.970697 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-576769594d-lbv64" event={"ID":"313c5837-e776-49ef-8689-14f6f70d31a1","Type":"ContainerDied","Data":"ed90e8dccea49daff0a79a0d11bad0590ddbd5f45981e375a7dee23f1a208e56"} Nov 28 07:19:40 crc kubenswrapper[4946]: I1128 07:19:40.981795 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glanceb763-account-delete-mmnjs" event={"ID":"eb07712a-805e-4e0f-9a81-dd8ce42bfb88","Type":"ContainerStarted","Data":"0133e20bc1b79a9a37f8565ec8bda91fcc10535cbd3cf9e8a684a504b49f82f8"} Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:40.998709 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_926ec930-a8f3-4c87-9963-39779e7309cc/ovsdbserver-nb/0.log" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:40.998798 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.004641 4946 generic.go:334] "Generic (PLEG): container finished" podID="d1578c84-1d87-41b2-bfa7-637c3b53366f" containerID="1b4c77565cb683565551995a4fa5e6f14515e5125068ec275e329eaccc6d274a" exitCode=0 Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.007735 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-777c4856b5-mgnhk" event={"ID":"d1578c84-1d87-41b2-bfa7-637c3b53366f","Type":"ContainerDied","Data":"1b4c77565cb683565551995a4fa5e6f14515e5125068ec275e329eaccc6d274a"} Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.013332 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_926ec930-a8f3-4c87-9963-39779e7309cc/ovsdbserver-nb/0.log" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.013382 4946 generic.go:334] "Generic (PLEG): container finished" podID="926ec930-a8f3-4c87-9963-39779e7309cc" containerID="ca0d99baba41d46f00aebe709dcc220db85de34c057c9dcc89055c2491a77711" exitCode=2 Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.013401 4946 generic.go:334] "Generic (PLEG): container finished" podID="926ec930-a8f3-4c87-9963-39779e7309cc" containerID="87bce839fcc780921b732faa53136928a6e9541af2325b83cd8ed770c4841758" exitCode=143 Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.013443 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"926ec930-a8f3-4c87-9963-39779e7309cc","Type":"ContainerDied","Data":"ca0d99baba41d46f00aebe709dcc220db85de34c057c9dcc89055c2491a77711"} Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.013498 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"926ec930-a8f3-4c87-9963-39779e7309cc","Type":"ContainerDied","Data":"87bce839fcc780921b732faa53136928a6e9541af2325b83cd8ed770c4841758"} Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.013510 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"926ec930-a8f3-4c87-9963-39779e7309cc","Type":"ContainerDied","Data":"de1fb84e8a63cbf2aac0c218842162ab70e03c113fdf0099c21d8f1ecb56d669"} Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.013526 4946 scope.go:117] "RemoveContainer" containerID="ca0d99baba41d46f00aebe709dcc220db85de34c057c9dcc89055c2491a77711" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.013665 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.024371 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-xzzn6" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.026267 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xzzn6" event={"ID":"5feb905d-9c23-4603-b118-fdc05a237848","Type":"ContainerDied","Data":"de1f85f32262dd8a4c62e3a5822bb8e7386c76210efceeb02306ca683145848a"} Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.029591 4946 generic.go:334] "Generic (PLEG): container finished" podID="206851cb-4673-4ce1-b038-c2e425d306b7" containerID="24ddca1eb18e17e4df35510352b9a1bdc29c68afcf2f6e975fb5d6693d132c62" exitCode=143 Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.029645 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"206851cb-4673-4ce1-b038-c2e425d306b7","Type":"ContainerDied","Data":"24ddca1eb18e17e4df35510352b9a1bdc29c68afcf2f6e975fb5d6693d132c62"} Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.055810 4946 generic.go:334] "Generic (PLEG): container finished" podID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerID="1c43b5f235fc838d55b3e3138910a90b5396838761a0454dd53a66990464a1e9" exitCode=0 Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.055840 4946 generic.go:334] "Generic (PLEG): container finished" podID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerID="179205cba904e1173a81ebd776bc1a0a5b564f12e0185b57e69ccb4e5be0d40a" exitCode=0 Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.055847 4946 generic.go:334] "Generic (PLEG): container finished" podID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerID="882d75cc76bc05932cfdf4ca19f83e0f38f9b3f3c27568b1f8c72bd6dd29c2f0" exitCode=0 Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.055854 4946 generic.go:334] "Generic (PLEG): container finished" podID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerID="08e99b903087a98fbf737f58badd6a71a2d6baa168caa8e67046f8c35f351fbf" exitCode=0 Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.055860 4946 generic.go:334] "Generic (PLEG): container finished" podID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerID="3bb83e39c083e832f8a225589956ac89495d0a3270463d35cd5097cffffacde0" exitCode=0 Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.055867 4946 generic.go:334] "Generic (PLEG): container finished" podID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerID="7be129de7a59c6e25c3ad9da58ad52ccaea20aa7256f4fab80d19e1a02a9f707" exitCode=0 Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.055873 4946 generic.go:334] "Generic (PLEG): container finished" podID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerID="61e1d296d26ca679c99de8f8b9f1a8d780c7ec16fe7bd59626b4da870f354696" exitCode=0 Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.055879 4946 generic.go:334] "Generic (PLEG): container finished" podID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerID="d6be15027db8aca6a49663ad2705ef701b99e1ed3611ca1bdf517ec488caf40d" exitCode=0 Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.055886 4946 generic.go:334] "Generic (PLEG): container finished" podID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerID="67bb3d551d212ca961ad8ea7d743990298fb96bccbac103b400c071fec12a04a" exitCode=0 Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.055894 4946 generic.go:334] "Generic (PLEG): container finished" podID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerID="0b64d0bffa35322fe586ccacf74d1f4ab7472d93d29af359db4d89db97ef491e" exitCode=0 Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.055900 4946 
generic.go:334] "Generic (PLEG): container finished" podID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerID="1d5e25dca53ee666bbf126081c00a7502572f05a1dfc6f018a9664b36c521ea0" exitCode=0 Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.055907 4946 generic.go:334] "Generic (PLEG): container finished" podID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerID="2ac3d6bf342fed17048fda402f024c8913172e1208e753608700d74e53cfe4f3" exitCode=0 Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.055912 4946 generic.go:334] "Generic (PLEG): container finished" podID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerID="847a01abfc1b8567574b77c168b044fe726b365c808d4ec665d2ff16afd92625" exitCode=0 Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.055920 4946 generic.go:334] "Generic (PLEG): container finished" podID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerID="849cb349e2d8d74c8a5c873c020253aa6650e18316daa4e4cda33facfa08c64a" exitCode=0 Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.055962 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b7d310ee-b686-4e3d-b554-393fc09a770d","Type":"ContainerDied","Data":"1c43b5f235fc838d55b3e3138910a90b5396838761a0454dd53a66990464a1e9"} Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.055990 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b7d310ee-b686-4e3d-b554-393fc09a770d","Type":"ContainerDied","Data":"179205cba904e1173a81ebd776bc1a0a5b564f12e0185b57e69ccb4e5be0d40a"} Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.056000 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b7d310ee-b686-4e3d-b554-393fc09a770d","Type":"ContainerDied","Data":"882d75cc76bc05932cfdf4ca19f83e0f38f9b3f3c27568b1f8c72bd6dd29c2f0"} Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.056008 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b7d310ee-b686-4e3d-b554-393fc09a770d","Type":"ContainerDied","Data":"08e99b903087a98fbf737f58badd6a71a2d6baa168caa8e67046f8c35f351fbf"} Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.056018 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b7d310ee-b686-4e3d-b554-393fc09a770d","Type":"ContainerDied","Data":"3bb83e39c083e832f8a225589956ac89495d0a3270463d35cd5097cffffacde0"} Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.056028 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b7d310ee-b686-4e3d-b554-393fc09a770d","Type":"ContainerDied","Data":"7be129de7a59c6e25c3ad9da58ad52ccaea20aa7256f4fab80d19e1a02a9f707"} Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.056039 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b7d310ee-b686-4e3d-b554-393fc09a770d","Type":"ContainerDied","Data":"61e1d296d26ca679c99de8f8b9f1a8d780c7ec16fe7bd59626b4da870f354696"} Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.056049 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b7d310ee-b686-4e3d-b554-393fc09a770d","Type":"ContainerDied","Data":"d6be15027db8aca6a49663ad2705ef701b99e1ed3611ca1bdf517ec488caf40d"} Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.056061 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"b7d310ee-b686-4e3d-b554-393fc09a770d","Type":"ContainerDied","Data":"67bb3d551d212ca961ad8ea7d743990298fb96bccbac103b400c071fec12a04a"} Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.056071 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b7d310ee-b686-4e3d-b554-393fc09a770d","Type":"ContainerDied","Data":"0b64d0bffa35322fe586ccacf74d1f4ab7472d93d29af359db4d89db97ef491e"} Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.056080 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b7d310ee-b686-4e3d-b554-393fc09a770d","Type":"ContainerDied","Data":"1d5e25dca53ee666bbf126081c00a7502572f05a1dfc6f018a9664b36c521ea0"} Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.056090 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b7d310ee-b686-4e3d-b554-393fc09a770d","Type":"ContainerDied","Data":"2ac3d6bf342fed17048fda402f024c8913172e1208e753608700d74e53cfe4f3"} Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.056102 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b7d310ee-b686-4e3d-b554-393fc09a770d","Type":"ContainerDied","Data":"847a01abfc1b8567574b77c168b044fe726b365c808d4ec665d2ff16afd92625"} Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.056111 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b7d310ee-b686-4e3d-b554-393fc09a770d","Type":"ContainerDied","Data":"849cb349e2d8d74c8a5c873c020253aa6650e18316daa4e4cda33facfa08c64a"} Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.059845 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-5k9zq"] Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.067251 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/926ec930-a8f3-4c87-9963-39779e7309cc-metrics-certs-tls-certs\") pod \"926ec930-a8f3-4c87-9963-39779e7309cc\" (UID: \"926ec930-a8f3-4c87-9963-39779e7309cc\") " Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.067298 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1a71b82d-c922-4d23-b816-f662cc5539ec-openstack-config-secret\") pod \"1a71b82d-c922-4d23-b816-f662cc5539ec\" (UID: \"1a71b82d-c922-4d23-b816-f662cc5539ec\") " Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.067333 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/926ec930-a8f3-4c87-9963-39779e7309cc-config\") pod \"926ec930-a8f3-4c87-9963-39779e7309cc\" (UID: \"926ec930-a8f3-4c87-9963-39779e7309cc\") " Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.067353 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tq926\" (UniqueName: \"kubernetes.io/projected/1a71b82d-c922-4d23-b816-f662cc5539ec-kube-api-access-tq926\") pod \"1a71b82d-c922-4d23-b816-f662cc5539ec\" (UID: \"1a71b82d-c922-4d23-b816-f662cc5539ec\") " Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.067382 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a71b82d-c922-4d23-b816-f662cc5539ec-combined-ca-bundle\") pod 
\"1a71b82d-c922-4d23-b816-f662cc5539ec\" (UID: \"1a71b82d-c922-4d23-b816-f662cc5539ec\") " Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.067404 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qtt6\" (UniqueName: \"kubernetes.io/projected/926ec930-a8f3-4c87-9963-39779e7309cc-kube-api-access-4qtt6\") pod \"926ec930-a8f3-4c87-9963-39779e7309cc\" (UID: \"926ec930-a8f3-4c87-9963-39779e7309cc\") " Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.067456 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"926ec930-a8f3-4c87-9963-39779e7309cc\" (UID: \"926ec930-a8f3-4c87-9963-39779e7309cc\") " Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.073999 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1a71b82d-c922-4d23-b816-f662cc5539ec-openstack-config\") pod \"1a71b82d-c922-4d23-b816-f662cc5539ec\" (UID: \"1a71b82d-c922-4d23-b816-f662cc5539ec\") " Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.074036 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/926ec930-a8f3-4c87-9963-39779e7309cc-combined-ca-bundle\") pod \"926ec930-a8f3-4c87-9963-39779e7309cc\" (UID: \"926ec930-a8f3-4c87-9963-39779e7309cc\") " Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.074092 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/926ec930-a8f3-4c87-9963-39779e7309cc-ovsdb-rundir\") pod \"926ec930-a8f3-4c87-9963-39779e7309cc\" (UID: \"926ec930-a8f3-4c87-9963-39779e7309cc\") " Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.074160 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/926ec930-a8f3-4c87-9963-39779e7309cc-scripts\") pod \"926ec930-a8f3-4c87-9963-39779e7309cc\" (UID: \"926ec930-a8f3-4c87-9963-39779e7309cc\") " Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.074283 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/926ec930-a8f3-4c87-9963-39779e7309cc-ovsdbserver-nb-tls-certs\") pod \"926ec930-a8f3-4c87-9963-39779e7309cc\" (UID: \"926ec930-a8f3-4c87-9963-39779e7309cc\") " Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.071431 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/926ec930-a8f3-4c87-9963-39779e7309cc-config" (OuterVolumeSpecName: "config") pod "926ec930-a8f3-4c87-9963-39779e7309cc" (UID: "926ec930-a8f3-4c87-9963-39779e7309cc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.080079 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/926ec930-a8f3-4c87-9963-39779e7309cc-scripts" (OuterVolumeSpecName: "scripts") pod "926ec930-a8f3-4c87-9963-39779e7309cc" (UID: "926ec930-a8f3-4c87-9963-39779e7309cc"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.081072 4946 generic.go:334] "Generic (PLEG): container finished" podID="587a9b3d-1634-4af6-96d2-e60c03a7d75f" containerID="cdc13d7826d930cfa67ccc8a5fb14e61b1d536afb075f29af7e2cdabf489a9d0" exitCode=143 Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.081199 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"587a9b3d-1634-4af6-96d2-e60c03a7d75f","Type":"ContainerDied","Data":"cdc13d7826d930cfa67ccc8a5fb14e61b1d536afb075f29af7e2cdabf489a9d0"} Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.081808 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/926ec930-a8f3-4c87-9963-39779e7309cc-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "926ec930-a8f3-4c87-9963-39779e7309cc" (UID: "926ec930-a8f3-4c87-9963-39779e7309cc"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.093980 4946 scope.go:117] "RemoveContainer" containerID="87bce839fcc780921b732faa53136928a6e9541af2325b83cd8ed770c4841758" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.094563 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.094803 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="b9847c34-a2be-405c-8bd8-34ba251d218d" containerName="nova-cell0-conductor-conductor" containerID="cri-o://c93f03ceec40d0cc831cb034e30236277dd7917f19967a2a82daec8f7e2ea5a1" gracePeriod=30 Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.113257 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/926ec930-a8f3-4c87-9963-39779e7309cc-kube-api-access-4qtt6" (OuterVolumeSpecName: "kube-api-access-4qtt6") pod "926ec930-a8f3-4c87-9963-39779e7309cc" (UID: "926ec930-a8f3-4c87-9963-39779e7309cc"). InnerVolumeSpecName "kube-api-access-4qtt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.113632 4946 generic.go:334] "Generic (PLEG): container finished" podID="1a71b82d-c922-4d23-b816-f662cc5539ec" containerID="2456162c80891db11f4df38895a38626b9a5d79c2067609e9aa578de23df284f" exitCode=137 Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.113721 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.115100 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "926ec930-a8f3-4c87-9963-39779e7309cc" (UID: "926ec930-a8f3-4c87-9963-39779e7309cc"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.117081 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a71b82d-c922-4d23-b816-f662cc5539ec-kube-api-access-tq926" (OuterVolumeSpecName: "kube-api-access-tq926") pod "1a71b82d-c922-4d23-b816-f662cc5539ec" (UID: "1a71b82d-c922-4d23-b816-f662cc5539ec"). InnerVolumeSpecName "kube-api-access-tq926". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.132090 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc646c8f9-smgp4" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.132097 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc646c8f9-smgp4" event={"ID":"23813153-a582-4ea1-bdf5-f81b2994aed6","Type":"ContainerDied","Data":"9d2e3a71051d1be78327bc9f4e290b1533e28074078e0c1bca7158416982e102"} Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.157821 4946 generic.go:334] "Generic (PLEG): container finished" podID="acadbe07-94b0-4a5d-ac42-6524f0e4ce61" containerID="364195b07b396d49046a6d8594cb12442c0047908fd530817352c055ef2e1319" exitCode=143 Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.157900 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"acadbe07-94b0-4a5d-ac42-6524f0e4ce61","Type":"ContainerDied","Data":"364195b07b396d49046a6d8594cb12442c0047908fd530817352c055ef2e1319"} Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.169256 4946 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-cell1-novncproxy-0" podUID="1279af9a-0d83-4c31-94b1-6c732b89a785" containerName="nova-cell1-novncproxy-novncproxy" probeResult="failure" output="Get \"https://10.217.0.191:6080/vnc_lite.html\": dial tcp 10.217.0.191:6080: connect: connection refused" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.177310 4946 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.177339 4946 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/926ec930-a8f3-4c87-9963-39779e7309cc-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.177351 4946 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/926ec930-a8f3-4c87-9963-39779e7309cc-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.177361 4946 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/926ec930-a8f3-4c87-9963-39779e7309cc-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.177369 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tq926\" (UniqueName: \"kubernetes.io/projected/1a71b82d-c922-4d23-b816-f662cc5539ec-kube-api-access-tq926\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.177378 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qtt6\" (UniqueName: \"kubernetes.io/projected/926ec930-a8f3-4c87-9963-39779e7309cc-kube-api-access-4qtt6\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.202358 4946 generic.go:334] "Generic (PLEG): container finished" podID="c41f96d0-0ffc-424b-afa7-d7b87f2bf11d" containerID="0eb88c99eb48005972cbcbed71e1bfbdae3511e830c076af9bb07fc162c1abd1" exitCode=143 Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.202514 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-wm9fv"] Nov 28 07:19:41 crc 
kubenswrapper[4946]: I1128 07:19:41.202559 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-bc95b876b-t8r9q" event={"ID":"c41f96d0-0ffc-424b-afa7-d7b87f2bf11d","Type":"ContainerDied","Data":"0eb88c99eb48005972cbcbed71e1bfbdae3511e830c076af9bb07fc162c1abd1"} Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.210851 4946 scope.go:117] "RemoveContainer" containerID="ca0d99baba41d46f00aebe709dcc220db85de34c057c9dcc89055c2491a77711" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.229582 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-wm9fv"] Nov 28 07:19:41 crc kubenswrapper[4946]: E1128 07:19:41.237102 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca0d99baba41d46f00aebe709dcc220db85de34c057c9dcc89055c2491a77711\": container with ID starting with ca0d99baba41d46f00aebe709dcc220db85de34c057c9dcc89055c2491a77711 not found: ID does not exist" containerID="ca0d99baba41d46f00aebe709dcc220db85de34c057c9dcc89055c2491a77711" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.237144 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca0d99baba41d46f00aebe709dcc220db85de34c057c9dcc89055c2491a77711"} err="failed to get container status \"ca0d99baba41d46f00aebe709dcc220db85de34c057c9dcc89055c2491a77711\": rpc error: code = NotFound desc = could not find container \"ca0d99baba41d46f00aebe709dcc220db85de34c057c9dcc89055c2491a77711\": container with ID starting with ca0d99baba41d46f00aebe709dcc220db85de34c057c9dcc89055c2491a77711 not found: ID does not exist" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.237169 4946 scope.go:117] "RemoveContainer" containerID="87bce839fcc780921b732faa53136928a6e9541af2325b83cd8ed770c4841758" Nov 28 07:19:41 crc kubenswrapper[4946]: E1128 07:19:41.239884 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87bce839fcc780921b732faa53136928a6e9541af2325b83cd8ed770c4841758\": container with ID starting with 87bce839fcc780921b732faa53136928a6e9541af2325b83cd8ed770c4841758 not found: ID does not exist" containerID="87bce839fcc780921b732faa53136928a6e9541af2325b83cd8ed770c4841758" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.239955 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87bce839fcc780921b732faa53136928a6e9541af2325b83cd8ed770c4841758"} err="failed to get container status \"87bce839fcc780921b732faa53136928a6e9541af2325b83cd8ed770c4841758\": rpc error: code = NotFound desc = could not find container \"87bce839fcc780921b732faa53136928a6e9541af2325b83cd8ed770c4841758\": container with ID starting with 87bce839fcc780921b732faa53136928a6e9541af2325b83cd8ed770c4841758 not found: ID does not exist" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.239986 4946 scope.go:117] "RemoveContainer" containerID="ca0d99baba41d46f00aebe709dcc220db85de34c057c9dcc89055c2491a77711" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.240884 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca0d99baba41d46f00aebe709dcc220db85de34c057c9dcc89055c2491a77711"} err="failed to get container status \"ca0d99baba41d46f00aebe709dcc220db85de34c057c9dcc89055c2491a77711\": rpc error: code = NotFound desc = could not find container 
\"ca0d99baba41d46f00aebe709dcc220db85de34c057c9dcc89055c2491a77711\": container with ID starting with ca0d99baba41d46f00aebe709dcc220db85de34c057c9dcc89055c2491a77711 not found: ID does not exist" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.240901 4946 scope.go:117] "RemoveContainer" containerID="87bce839fcc780921b732faa53136928a6e9541af2325b83cd8ed770c4841758" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.242928 4946 generic.go:334] "Generic (PLEG): container finished" podID="4619b857-5e70-4ab3-807d-d233c9d9223c" containerID="9517940817e1a7d770b2a8b25720d839e772ec8d3fd3f428c2debea57ae43b63" exitCode=143 Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.242992 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-97bfd767f-7zg9s" event={"ID":"4619b857-5e70-4ab3-807d-d233c9d9223c","Type":"ContainerDied","Data":"9517940817e1a7d770b2a8b25720d839e772ec8d3fd3f428c2debea57ae43b63"} Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.243041 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87bce839fcc780921b732faa53136928a6e9541af2325b83cd8ed770c4841758"} err="failed to get container status \"87bce839fcc780921b732faa53136928a6e9541af2325b83cd8ed770c4841758\": rpc error: code = NotFound desc = could not find container \"87bce839fcc780921b732faa53136928a6e9541af2325b83cd8ed770c4841758\": container with ID starting with 87bce839fcc780921b732faa53136928a6e9541af2325b83cd8ed770c4841758 not found: ID does not exist" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.243056 4946 scope.go:117] "RemoveContainer" containerID="304cacbd4eab8b01e9777876a67dcba73de4883771064616952125d6fef1d4cb" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.256923 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder8ebd-account-delete-trxbg" event={"ID":"32ed9247-5959-4a5b-a879-52fac366f999","Type":"ContainerStarted","Data":"fbd9ee63819666086f4ce4c4da4555aa44a3bb92afa7680ff1b6e2c037dbdd9b"} Nov 28 07:19:41 crc kubenswrapper[4946]: W1128 07:19:41.267230 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf80c95e6_2981_4755_ada4_26bbf1372693.slice/crio-97fb181ee6bd1c34e7d62d2be6dd9bbd910c691e00f6686394d09ffed72b82a8 WatchSource:0}: Error finding container 97fb181ee6bd1c34e7d62d2be6dd9bbd910c691e00f6686394d09ffed72b82a8: Status 404 returned error can't find the container with id 97fb181ee6bd1c34e7d62d2be6dd9bbd910c691e00f6686394d09ffed72b82a8 Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.270076 4946 generic.go:334] "Generic (PLEG): container finished" podID="05997c14-3116-4439-8e63-230bf0e5c411" containerID="e5f294dc1328491a34656832803a089d67378b093e4cd726077d224138730910" exitCode=143 Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.270162 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"05997c14-3116-4439-8e63-230bf0e5c411","Type":"ContainerDied","Data":"e5f294dc1328491a34656832803a089d67378b093e4cd726077d224138730910"} Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.270263 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glanceb763-account-delete-mmnjs"] Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.275889 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a2e2a06b-b43f-4a3b-985d-964e177c3c06/ovsdbserver-sb/0.log" Nov 28 07:19:41 crc kubenswrapper[4946]: 
I1128 07:19:41.275938 4946 generic.go:334] "Generic (PLEG): container finished" podID="a2e2a06b-b43f-4a3b-985d-964e177c3c06" containerID="89293c54aab4564bb3c1f382ee28896a803e4970c73eac5d5bec925d87394c71" exitCode=143 Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.275998 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a2e2a06b-b43f-4a3b-985d-964e177c3c06","Type":"ContainerDied","Data":"89293c54aab4564bb3c1f382ee28896a803e4970c73eac5d5bec925d87394c71"} Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.280267 4946 generic.go:334] "Generic (PLEG): container finished" podID="52101de8-a25c-4372-9df3-3f090167ff5f" containerID="a2f882b052a314819dd4340b85645a14b6914096d138c6a1d81c6036bd6013fb" exitCode=143 Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.280369 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-96fb6f878-56tfz" event={"ID":"52101de8-a25c-4372-9df3-3f090167ff5f","Type":"ContainerDied","Data":"a2f882b052a314819dd4340b85645a14b6914096d138c6a1d81c6036bd6013fb"} Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.291140 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder8ebd-account-delete-trxbg"] Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.324038 4946 scope.go:117] "RemoveContainer" containerID="2456162c80891db11f4df38895a38626b9a5d79c2067609e9aa578de23df284f" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.325658 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.331794 4946 generic.go:334] "Generic (PLEG): container finished" podID="813134a8-463b-4f7d-8160-ceb1c5a96853" containerID="53c5b6d4bf9c96653c84e3fcb9a4ea1b7994f3a98935936a45c752c693dcee48" exitCode=143 Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.331999 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"813134a8-463b-4f7d-8160-ceb1c5a96853","Type":"ContainerDied","Data":"53c5b6d4bf9c96653c84e3fcb9a4ea1b7994f3a98935936a45c752c693dcee48"} Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.345069 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.345329 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="c3ba1954-566f-4e25-8312-855a58935547" containerName="nova-scheduler-scheduler" containerID="cri-o://47d5db84ad9670de902990ff6af710f405452597c9ef2064765b609a18f5147a" gracePeriod=30 Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.352893 4946 generic.go:334] "Generic (PLEG): container finished" podID="944abeae-3f1d-4391-a375-b64ed9c17b14" containerID="c5e052846b04c3f56ae570cc17f4288914d22c94a98566c8edd7f06e4a7f7443" exitCode=0 Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.353120 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-gpnz5" event={"ID":"944abeae-3f1d-4391-a375-b64ed9c17b14","Type":"ContainerDied","Data":"c5e052846b04c3f56ae570cc17f4288914d22c94a98566c8edd7f06e4a7f7443"} Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.359883 4946 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.364617 4946 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-xzzn6"] Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.380996 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-xzzn6"] Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.383049 4946 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.404345 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutronc822-account-delete-z6qcr"] Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.409541 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a71b82d-c922-4d23-b816-f662cc5539ec-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "1a71b82d-c922-4d23-b816-f662cc5539ec" (UID: "1a71b82d-c922-4d23-b816-f662cc5539ec"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.416081 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc646c8f9-smgp4"] Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.436960 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bc646c8f9-smgp4"] Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.440001 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a2e2a06b-b43f-4a3b-985d-964e177c3c06/ovsdbserver-sb/0.log" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.440174 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.448544 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a71b82d-c922-4d23-b816-f662cc5539ec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1a71b82d-c922-4d23-b816-f662cc5539ec" (UID: "1a71b82d-c922-4d23-b816-f662cc5539ec"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.449554 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-65f88f985c-d964v"] Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.449775 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-65f88f985c-d964v" podUID="ea608140-3ad6-4c56-9754-ec74fc292781" containerName="proxy-httpd" containerID="cri-o://0c66dae97a36e8c460ce04d5fac2b27cca2e32b76e908a748791b556732f4db4" gracePeriod=30 Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.449911 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-65f88f985c-d964v" podUID="ea608140-3ad6-4c56-9754-ec74fc292781" containerName="proxy-server" containerID="cri-o://d78e71107b7394d2525bc8feb2c6598f678c038511392ededa5a41827a995dcb" gracePeriod=30 Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.452357 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/926ec930-a8f3-4c87-9963-39779e7309cc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "926ec930-a8f3-4c87-9963-39779e7309cc" (UID: "926ec930-a8f3-4c87-9963-39779e7309cc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.466349 4946 scope.go:117] "RemoveContainer" containerID="2456162c80891db11f4df38895a38626b9a5d79c2067609e9aa578de23df284f" Nov 28 07:19:41 crc kubenswrapper[4946]: E1128 07:19:41.466897 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2456162c80891db11f4df38895a38626b9a5d79c2067609e9aa578de23df284f\": container with ID starting with 2456162c80891db11f4df38895a38626b9a5d79c2067609e9aa578de23df284f not found: ID does not exist" containerID="2456162c80891db11f4df38895a38626b9a5d79c2067609e9aa578de23df284f" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.466928 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2456162c80891db11f4df38895a38626b9a5d79c2067609e9aa578de23df284f"} err="failed to get container status \"2456162c80891db11f4df38895a38626b9a5d79c2067609e9aa578de23df284f\": rpc error: code = NotFound desc = could not find container \"2456162c80891db11f4df38895a38626b9a5d79c2067609e9aa578de23df284f\": container with ID starting with 2456162c80891db11f4df38895a38626b9a5d79c2067609e9aa578de23df284f not found: ID does not exist" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.466950 4946 scope.go:117] "RemoveContainer" containerID="c572a089b466148ca1bc267403c169adc75a8306ad42f222db62e4a135735727" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.469435 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican1925-account-delete-gjf5l"] Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.480017 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placementd9cc-account-delete-9rfvn"] Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.486029 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2e2a06b-b43f-4a3b-985d-964e177c3c06-metrics-certs-tls-certs\") pod \"a2e2a06b-b43f-4a3b-985d-964e177c3c06\" (UID: \"a2e2a06b-b43f-4a3b-985d-964e177c3c06\") " Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.486179 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2e2a06b-b43f-4a3b-985d-964e177c3c06-config\") pod \"a2e2a06b-b43f-4a3b-985d-964e177c3c06\" (UID: \"a2e2a06b-b43f-4a3b-985d-964e177c3c06\") " Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.486263 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a2e2a06b-b43f-4a3b-985d-964e177c3c06-scripts\") pod \"a2e2a06b-b43f-4a3b-985d-964e177c3c06\" (UID: \"a2e2a06b-b43f-4a3b-985d-964e177c3c06\") " Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.486392 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2e2a06b-b43f-4a3b-985d-964e177c3c06-combined-ca-bundle\") pod \"a2e2a06b-b43f-4a3b-985d-964e177c3c06\" (UID: \"a2e2a06b-b43f-4a3b-985d-964e177c3c06\") " Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.486862 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2e2a06b-b43f-4a3b-985d-964e177c3c06-config" (OuterVolumeSpecName: "config") pod "a2e2a06b-b43f-4a3b-985d-964e177c3c06" (UID: "a2e2a06b-b43f-4a3b-985d-964e177c3c06"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.486991 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"a2e2a06b-b43f-4a3b-985d-964e177c3c06\" (UID: \"a2e2a06b-b43f-4a3b-985d-964e177c3c06\") " Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.487444 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2e2a06b-b43f-4a3b-985d-964e177c3c06-scripts" (OuterVolumeSpecName: "scripts") pod "a2e2a06b-b43f-4a3b-985d-964e177c3c06" (UID: "a2e2a06b-b43f-4a3b-985d-964e177c3c06"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.489107 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a2e2a06b-b43f-4a3b-985d-964e177c3c06-ovsdb-rundir\") pod \"a2e2a06b-b43f-4a3b-985d-964e177c3c06\" (UID: \"a2e2a06b-b43f-4a3b-985d-964e177c3c06\") " Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.489218 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2e2a06b-b43f-4a3b-985d-964e177c3c06-ovsdbserver-sb-tls-certs\") pod \"a2e2a06b-b43f-4a3b-985d-964e177c3c06\" (UID: \"a2e2a06b-b43f-4a3b-985d-964e177c3c06\") " Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.489300 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wt9kr\" (UniqueName: \"kubernetes.io/projected/a2e2a06b-b43f-4a3b-985d-964e177c3c06-kube-api-access-wt9kr\") pod \"a2e2a06b-b43f-4a3b-985d-964e177c3c06\" (UID: \"a2e2a06b-b43f-4a3b-985d-964e177c3c06\") " Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.491778 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a71b82d-c922-4d23-b816-f662cc5539ec-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.491933 4946 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1a71b82d-c922-4d23-b816-f662cc5539ec-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.491992 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/926ec930-a8f3-4c87-9963-39779e7309cc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.492047 4946 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2e2a06b-b43f-4a3b-985d-964e177c3c06-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.492102 4946 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a2e2a06b-b43f-4a3b-985d-964e177c3c06-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.494121 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2e2a06b-b43f-4a3b-985d-964e177c3c06-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "a2e2a06b-b43f-4a3b-985d-964e177c3c06" 
(UID: "a2e2a06b-b43f-4a3b-985d-964e177c3c06"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.495484 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2e2a06b-b43f-4a3b-985d-964e177c3c06-kube-api-access-wt9kr" (OuterVolumeSpecName: "kube-api-access-wt9kr") pod "a2e2a06b-b43f-4a3b-985d-964e177c3c06" (UID: "a2e2a06b-b43f-4a3b-985d-964e177c3c06"). InnerVolumeSpecName "kube-api-access-wt9kr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.499640 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "a2e2a06b-b43f-4a3b-985d-964e177c3c06" (UID: "a2e2a06b-b43f-4a3b-985d-964e177c3c06"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.507976 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/926ec930-a8f3-4c87-9963-39779e7309cc-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "926ec930-a8f3-4c87-9963-39779e7309cc" (UID: "926ec930-a8f3-4c87-9963-39779e7309cc"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.547168 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a71b82d-c922-4d23-b816-f662cc5539ec-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "1a71b82d-c922-4d23-b816-f662cc5539ec" (UID: "1a71b82d-c922-4d23-b816-f662cc5539ec"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.557870 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/926ec930-a8f3-4c87-9963-39779e7309cc-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "926ec930-a8f3-4c87-9963-39779e7309cc" (UID: "926ec930-a8f3-4c87-9963-39779e7309cc"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.581795 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2e2a06b-b43f-4a3b-985d-964e177c3c06-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a2e2a06b-b43f-4a3b-985d-964e177c3c06" (UID: "a2e2a06b-b43f-4a3b-985d-964e177c3c06"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.585642 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novaapid713-account-delete-lj6h9"] Nov 28 07:19:41 crc kubenswrapper[4946]: W1128 07:19:41.587982 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1486d432_b3a6_4470_b145_076dafbfca67.slice/crio-cda7a05104ff5fb2283c5e7e234c600e4029a235771098b4405c1e0918b0ecaf WatchSource:0}: Error finding container cda7a05104ff5fb2283c5e7e234c600e4029a235771098b4405c1e0918b0ecaf: Status 404 returned error can't find the container with id cda7a05104ff5fb2283c5e7e234c600e4029a235771098b4405c1e0918b0ecaf Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.592069 4946 scope.go:117] "RemoveContainer" containerID="03897902ee4aec557adaba7cabf089950c612511ec09140569b554e27e9ebd74" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.594405 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="3521840d-60d0-450c-8c05-7e2ad0fc4e97" containerName="rabbitmq" containerID="cri-o://5a518b7e7d038229a500bd8709ec0d601f6bd6d8f0d81ac3077b20e90a835629" gracePeriod=604800 Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.601328 4946 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.601768 4946 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/926ec930-a8f3-4c87-9963-39779e7309cc-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.601786 4946 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1a71b82d-c922-4d23-b816-f662cc5539ec-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.601796 4946 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a2e2a06b-b43f-4a3b-985d-964e177c3c06-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.601806 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wt9kr\" (UniqueName: \"kubernetes.io/projected/a2e2a06b-b43f-4a3b-985d-964e177c3c06-kube-api-access-wt9kr\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.601818 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2e2a06b-b43f-4a3b-985d-964e177c3c06-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.602196 4946 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/926ec930-a8f3-4c87-9963-39779e7309cc-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.700269 4946 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.705508 4946 reconciler_common.go:293] "Volume 
detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.760203 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2e2a06b-b43f-4a3b-985d-964e177c3c06-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "a2e2a06b-b43f-4a3b-985d-964e177c3c06" (UID: "a2e2a06b-b43f-4a3b-985d-964e177c3c06"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.797639 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2e2a06b-b43f-4a3b-985d-964e177c3c06-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "a2e2a06b-b43f-4a3b-985d-964e177c3c06" (UID: "a2e2a06b-b43f-4a3b-985d-964e177c3c06"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.827619 4946 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2e2a06b-b43f-4a3b-985d-964e177c3c06-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:41 crc kubenswrapper[4946]: I1128 07:19:41.827657 4946 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2e2a06b-b43f-4a3b-985d-964e177c3c06-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:41 crc kubenswrapper[4946]: E1128 07:19:41.827727 4946 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Nov 28 07:19:41 crc kubenswrapper[4946]: E1128 07:19:41.827782 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3521840d-60d0-450c-8c05-7e2ad0fc4e97-config-data podName:3521840d-60d0-450c-8c05-7e2ad0fc4e97 nodeName:}" failed. No retries permitted until 2025-11-28 07:19:43.827764954 +0000 UTC m=+1638.205830065 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/3521840d-60d0-450c-8c05-7e2ad0fc4e97-config-data") pod "rabbitmq-cell1-server-0" (UID: "3521840d-60d0-450c-8c05-7e2ad0fc4e97") : configmap "rabbitmq-cell1-config-data" not found Nov 28 07:19:42 crc kubenswrapper[4946]: E1128 07:19:42.035204 4946 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Nov 28 07:19:42 crc kubenswrapper[4946]: E1128 07:19:42.035317 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/59fdca77-b333-44be-ab8c-96a2f4bcc340-config-data podName:59fdca77-b333-44be-ab8c-96a2f4bcc340 nodeName:}" failed. No retries permitted until 2025-11-28 07:19:46.035294108 +0000 UTC m=+1640.413359219 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/59fdca77-b333-44be-ab8c-96a2f4bcc340-config-data") pod "rabbitmq-server-0" (UID: "59fdca77-b333-44be-ab8c-96a2f4bcc340") : configmap "rabbitmq-config-data" not found Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.117966 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a71b82d-c922-4d23-b816-f662cc5539ec" path="/var/lib/kubelet/pods/1a71b82d-c922-4d23-b816-f662cc5539ec/volumes" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.118745 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23813153-a582-4ea1-bdf5-f81b2994aed6" path="/var/lib/kubelet/pods/23813153-a582-4ea1-bdf5-f81b2994aed6/volumes" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.119268 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cb45337-5b52-46b3-b7b3-bbe5123d34e5" path="/var/lib/kubelet/pods/2cb45337-5b52-46b3-b7b3-bbe5123d34e5/volumes" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.120418 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52295fb5-c0ec-4910-ad42-413e574375bd" path="/var/lib/kubelet/pods/52295fb5-c0ec-4910-ad42-413e574375bd/volumes" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.121221 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5feb905d-9c23-4603-b118-fdc05a237848" path="/var/lib/kubelet/pods/5feb905d-9c23-4603-b118-fdc05a237848/volumes" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.121828 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d80e1b8-ad91-4449-bd5a-07c7c15ce996" path="/var/lib/kubelet/pods/6d80e1b8-ad91-4449-bd5a-07c7c15ce996/volumes" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.122804 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08" path="/var/lib/kubelet/pods/7c1972c6-dc6e-46b8-a2a8-dcbc4a873a08/volumes" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.123340 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a209c4a0-a388-463f-a49c-cd24fc3b3ca8" path="/var/lib/kubelet/pods/a209c4a0-a388-463f-a49c-cd24fc3b3ca8/volumes" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.123951 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1b70933-a95b-4637-8c02-d77b9ef7a980" path="/var/lib/kubelet/pods/d1b70933-a95b-4637-8c02-d77b9ef7a980/volumes" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.124443 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1b34bc2-b103-4751-b3c0-bb31e89e5854" path="/var/lib/kubelet/pods/e1b34bc2-b103-4751-b3c0-bb31e89e5854/volumes" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.125440 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc547561-a892-4d64-83cd-35f61e46ffbc" path="/var/lib/kubelet/pods/fc547561-a892-4d64-83cd-35f61e46ffbc/volumes" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.126379 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novacell0d35d-account-delete-c848z"] Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.180421 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 28 07:19:42 crc kubenswrapper[4946]: W1128 07:19:42.182954 4946 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb138df41_1f0c_4edb_9546_e0f5ec16cf06.slice/crio-a1274a39d0273ac5daf35b3538734606c8afb8baaf853738f70068c3dfe1dc86 WatchSource:0}: Error finding container a1274a39d0273ac5daf35b3538734606c8afb8baaf853738f70068c3dfe1dc86: Status 404 returned error can't find the container with id a1274a39d0273ac5daf35b3538734606c8afb8baaf853738f70068c3dfe1dc86 Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.188715 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.279728 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.332962 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-bc95b876b-t8r9q" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.346331 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/1279af9a-0d83-4c31-94b1-6c732b89a785-nova-novncproxy-tls-certs\") pod \"1279af9a-0d83-4c31-94b1-6c732b89a785\" (UID: \"1279af9a-0d83-4c31-94b1-6c732b89a785\") " Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.346545 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1279af9a-0d83-4c31-94b1-6c732b89a785-combined-ca-bundle\") pod \"1279af9a-0d83-4c31-94b1-6c732b89a785\" (UID: \"1279af9a-0d83-4c31-94b1-6c732b89a785\") " Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.346688 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/1279af9a-0d83-4c31-94b1-6c732b89a785-vencrypt-tls-certs\") pod \"1279af9a-0d83-4c31-94b1-6c732b89a785\" (UID: \"1279af9a-0d83-4c31-94b1-6c732b89a785\") " Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.346721 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4v9wl\" (UniqueName: \"kubernetes.io/projected/1279af9a-0d83-4c31-94b1-6c732b89a785-kube-api-access-4v9wl\") pod \"1279af9a-0d83-4c31-94b1-6c732b89a785\" (UID: \"1279af9a-0d83-4c31-94b1-6c732b89a785\") " Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.346783 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1279af9a-0d83-4c31-94b1-6c732b89a785-config-data\") pod \"1279af9a-0d83-4c31-94b1-6c732b89a785\" (UID: \"1279af9a-0d83-4c31-94b1-6c732b89a785\") " Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.358505 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.367279 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1279af9a-0d83-4c31-94b1-6c732b89a785-kube-api-access-4v9wl" (OuterVolumeSpecName: "kube-api-access-4v9wl") pod "1279af9a-0d83-4c31-94b1-6c732b89a785" (UID: "1279af9a-0d83-4c31-94b1-6c732b89a785"). InnerVolumeSpecName "kube-api-access-4v9wl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.372408 4946 generic.go:334] "Generic (PLEG): container finished" podID="32ed9247-5959-4a5b-a879-52fac366f999" containerID="a833a51c90092a7c66ad1c8d79f182f5d4ed7bcf164f9e13778a6a030a938bf7" exitCode=0 Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.372514 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder8ebd-account-delete-trxbg" event={"ID":"32ed9247-5959-4a5b-a879-52fac366f999","Type":"ContainerDied","Data":"a833a51c90092a7c66ad1c8d79f182f5d4ed7bcf164f9e13778a6a030a938bf7"} Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.376642 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican1925-account-delete-gjf5l" event={"ID":"9f7aa965-bfc0-4db1-a2d2-07fe02be9f18","Type":"ContainerStarted","Data":"734c2f38a6c27cff45594e1786fb0a29642647112e55aaa972412f1017ebae67"} Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.377819 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutronc822-account-delete-z6qcr" event={"ID":"f80c95e6-2981-4755-ada4-26bbf1372693","Type":"ContainerStarted","Data":"97fb181ee6bd1c34e7d62d2be6dd9bbd910c691e00f6686394d09ffed72b82a8"} Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.379495 4946 generic.go:334] "Generic (PLEG): container finished" podID="c485c360-55fc-49da-851d-ab74f7c7fc98" containerID="5edec769179e5fbde067252513cb9a91acd14bb765488eebcacb651d47671cea" exitCode=0 Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.379547 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c485c360-55fc-49da-851d-ab74f7c7fc98","Type":"ContainerDied","Data":"5edec769179e5fbde067252513cb9a91acd14bb765488eebcacb651d47671cea"} Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.384428 4946 generic.go:334] "Generic (PLEG): container finished" podID="1279af9a-0d83-4c31-94b1-6c732b89a785" containerID="a98ed7264301f60a91e8b70f16d9b85a1db93f02fb19bfcfa856fa5650ad98e4" exitCode=0 Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.384517 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1279af9a-0d83-4c31-94b1-6c732b89a785","Type":"ContainerDied","Data":"a98ed7264301f60a91e8b70f16d9b85a1db93f02fb19bfcfa856fa5650ad98e4"} Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.384543 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1279af9a-0d83-4c31-94b1-6c732b89a785","Type":"ContainerDied","Data":"de486bff19c7f1c94a92bc5ce30f2c68b13a5ab44e0e1e6e8d05527f56060f4a"} Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.384559 4946 scope.go:117] "RemoveContainer" containerID="a98ed7264301f60a91e8b70f16d9b85a1db93f02fb19bfcfa856fa5650ad98e4" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.384684 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.405181 4946 generic.go:334] "Generic (PLEG): container finished" podID="eb07712a-805e-4e0f-9a81-dd8ce42bfb88" containerID="b85b412d05146dbed3aa695cab3bb9464a18b9f516b6b0592e0bcd619a892c43" exitCode=0 Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.405235 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glanceb763-account-delete-mmnjs" event={"ID":"eb07712a-805e-4e0f-9a81-dd8ce42bfb88","Type":"ContainerDied","Data":"b85b412d05146dbed3aa695cab3bb9464a18b9f516b6b0592e0bcd619a892c43"} Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.411298 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapid713-account-delete-lj6h9" event={"ID":"30b2d9aa-848b-4f33-9bd8-921f5de5ab36","Type":"ContainerStarted","Data":"51e719a433208488b0d81729535e9ef02ad774f5529721339bd692ccf59ffadd"} Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.413421 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a2e2a06b-b43f-4a3b-985d-964e177c3c06/ovsdbserver-sb/0.log" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.413525 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a2e2a06b-b43f-4a3b-985d-964e177c3c06","Type":"ContainerDied","Data":"3480e8458f642a16249d203a2be0807dba908b4e9d3c38c8c944e916316103b5"} Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.413639 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.417835 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell0d35d-account-delete-c848z" event={"ID":"b138df41-1f0c-4edb-9546-e0f5ec16cf06","Type":"ContainerStarted","Data":"a1274a39d0273ac5daf35b3538734606c8afb8baaf853738f70068c3dfe1dc86"} Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.424320 4946 scope.go:117] "RemoveContainer" containerID="a98ed7264301f60a91e8b70f16d9b85a1db93f02fb19bfcfa856fa5650ad98e4" Nov 28 07:19:42 crc kubenswrapper[4946]: E1128 07:19:42.424893 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a98ed7264301f60a91e8b70f16d9b85a1db93f02fb19bfcfa856fa5650ad98e4\": container with ID starting with a98ed7264301f60a91e8b70f16d9b85a1db93f02fb19bfcfa856fa5650ad98e4 not found: ID does not exist" containerID="a98ed7264301f60a91e8b70f16d9b85a1db93f02fb19bfcfa856fa5650ad98e4" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.424968 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a98ed7264301f60a91e8b70f16d9b85a1db93f02fb19bfcfa856fa5650ad98e4"} err="failed to get container status \"a98ed7264301f60a91e8b70f16d9b85a1db93f02fb19bfcfa856fa5650ad98e4\": rpc error: code = NotFound desc = could not find container \"a98ed7264301f60a91e8b70f16d9b85a1db93f02fb19bfcfa856fa5650ad98e4\": container with ID starting with a98ed7264301f60a91e8b70f16d9b85a1db93f02fb19bfcfa856fa5650ad98e4 not found: ID does not exist" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.424997 4946 scope.go:117] "RemoveContainer" containerID="46e527156036f2db6b68d08e685dc075879a5f56a11d96918d1c98628885e7e8" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.434727 4946 generic.go:334] "Generic (PLEG): container finished" podID="ea608140-3ad6-4c56-9754-ec74fc292781" 
containerID="d78e71107b7394d2525bc8feb2c6598f678c038511392ededa5a41827a995dcb" exitCode=0 Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.434990 4946 generic.go:334] "Generic (PLEG): container finished" podID="ea608140-3ad6-4c56-9754-ec74fc292781" containerID="0c66dae97a36e8c460ce04d5fac2b27cca2e32b76e908a748791b556732f4db4" exitCode=0 Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.435085 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-65f88f985c-d964v" event={"ID":"ea608140-3ad6-4c56-9754-ec74fc292781","Type":"ContainerDied","Data":"d78e71107b7394d2525bc8feb2c6598f678c038511392ededa5a41827a995dcb"} Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.435116 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-65f88f985c-d964v" event={"ID":"ea608140-3ad6-4c56-9754-ec74fc292781","Type":"ContainerDied","Data":"0c66dae97a36e8c460ce04d5fac2b27cca2e32b76e908a748791b556732f4db4"} Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.438765 4946 generic.go:334] "Generic (PLEG): container finished" podID="c41f96d0-0ffc-424b-afa7-d7b87f2bf11d" containerID="0f786e996c300ebfa42ac27a855c0fd4efcf8b18e7989e6e4ee76f3ce6556d28" exitCode=0 Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.438845 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-bc95b876b-t8r9q" event={"ID":"c41f96d0-0ffc-424b-afa7-d7b87f2bf11d","Type":"ContainerDied","Data":"0f786e996c300ebfa42ac27a855c0fd4efcf8b18e7989e6e4ee76f3ce6556d28"} Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.438883 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-bc95b876b-t8r9q" event={"ID":"c41f96d0-0ffc-424b-afa7-d7b87f2bf11d","Type":"ContainerDied","Data":"ed0857116183a45137ef9772b65b50928baca9419eea15636a75437e54f008b5"} Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.438946 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-bc95b876b-t8r9q" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.442314 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placementd9cc-account-delete-9rfvn" event={"ID":"1486d432-b3a6-4470-b145-076dafbfca67","Type":"ContainerStarted","Data":"cda7a05104ff5fb2283c5e7e234c600e4029a235771098b4405c1e0918b0ecaf"} Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.450003 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e87edf72-a3a2-4df0-8249-5902f158998d-galera-tls-certs\") pod \"e87edf72-a3a2-4df0-8249-5902f158998d\" (UID: \"e87edf72-a3a2-4df0-8249-5902f158998d\") " Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.450130 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e87edf72-a3a2-4df0-8249-5902f158998d-config-data-generated\") pod \"e87edf72-a3a2-4df0-8249-5902f158998d\" (UID: \"e87edf72-a3a2-4df0-8249-5902f158998d\") " Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.450151 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c41f96d0-0ffc-424b-afa7-d7b87f2bf11d-logs\") pod \"c41f96d0-0ffc-424b-afa7-d7b87f2bf11d\" (UID: \"c41f96d0-0ffc-424b-afa7-d7b87f2bf11d\") " Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.450205 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9pfn\" (UniqueName: \"kubernetes.io/projected/e87edf72-a3a2-4df0-8249-5902f158998d-kube-api-access-l9pfn\") pod \"e87edf72-a3a2-4df0-8249-5902f158998d\" (UID: \"e87edf72-a3a2-4df0-8249-5902f158998d\") " Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.450315 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e87edf72-a3a2-4df0-8249-5902f158998d-combined-ca-bundle\") pod \"e87edf72-a3a2-4df0-8249-5902f158998d\" (UID: \"e87edf72-a3a2-4df0-8249-5902f158998d\") " Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.450386 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c41f96d0-0ffc-424b-afa7-d7b87f2bf11d-combined-ca-bundle\") pod \"c41f96d0-0ffc-424b-afa7-d7b87f2bf11d\" (UID: \"c41f96d0-0ffc-424b-afa7-d7b87f2bf11d\") " Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.450487 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c41f96d0-0ffc-424b-afa7-d7b87f2bf11d-config-data-custom\") pod \"c41f96d0-0ffc-424b-afa7-d7b87f2bf11d\" (UID: \"c41f96d0-0ffc-424b-afa7-d7b87f2bf11d\") " Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.450508 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e87edf72-a3a2-4df0-8249-5902f158998d-config-data-default\") pod \"e87edf72-a3a2-4df0-8249-5902f158998d\" (UID: \"e87edf72-a3a2-4df0-8249-5902f158998d\") " Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.450553 4946 generic.go:334] "Generic (PLEG): container finished" podID="e87edf72-a3a2-4df0-8249-5902f158998d" containerID="3f80615bde2b6e4140684945a44377c8842d66297f035126f4b7271272f03968" exitCode=0 Nov 28 07:19:42 crc 
kubenswrapper[4946]: I1128 07:19:42.450575 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6x5vh\" (UniqueName: \"kubernetes.io/projected/c41f96d0-0ffc-424b-afa7-d7b87f2bf11d-kube-api-access-6x5vh\") pod \"c41f96d0-0ffc-424b-afa7-d7b87f2bf11d\" (UID: \"c41f96d0-0ffc-424b-afa7-d7b87f2bf11d\") " Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.450614 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e87edf72-a3a2-4df0-8249-5902f158998d","Type":"ContainerDied","Data":"3f80615bde2b6e4140684945a44377c8842d66297f035126f4b7271272f03968"} Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.450647 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e87edf72-a3a2-4df0-8249-5902f158998d","Type":"ContainerDied","Data":"e45402f668a49491304a9bc6126942f95b60d84665e172d1983b256b50b631e6"} Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.450621 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e87edf72-a3a2-4df0-8249-5902f158998d-operator-scripts\") pod \"e87edf72-a3a2-4df0-8249-5902f158998d\" (UID: \"e87edf72-a3a2-4df0-8249-5902f158998d\") " Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.450736 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"e87edf72-a3a2-4df0-8249-5902f158998d\" (UID: \"e87edf72-a3a2-4df0-8249-5902f158998d\") " Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.450778 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e87edf72-a3a2-4df0-8249-5902f158998d-kolla-config\") pod \"e87edf72-a3a2-4df0-8249-5902f158998d\" (UID: \"e87edf72-a3a2-4df0-8249-5902f158998d\") " Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.450843 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c41f96d0-0ffc-424b-afa7-d7b87f2bf11d-config-data\") pod \"c41f96d0-0ffc-424b-afa7-d7b87f2bf11d\" (UID: \"c41f96d0-0ffc-424b-afa7-d7b87f2bf11d\") " Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.451513 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e87edf72-a3a2-4df0-8249-5902f158998d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e87edf72-a3a2-4df0-8249-5902f158998d" (UID: "e87edf72-a3a2-4df0-8249-5902f158998d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.451766 4946 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e87edf72-a3a2-4df0-8249-5902f158998d-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.451788 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4v9wl\" (UniqueName: \"kubernetes.io/projected/1279af9a-0d83-4c31-94b1-6c732b89a785-kube-api-access-4v9wl\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.452055 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.458716 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e87edf72-a3a2-4df0-8249-5902f158998d-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "e87edf72-a3a2-4df0-8249-5902f158998d" (UID: "e87edf72-a3a2-4df0-8249-5902f158998d"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.458741 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e87edf72-a3a2-4df0-8249-5902f158998d-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "e87edf72-a3a2-4df0-8249-5902f158998d" (UID: "e87edf72-a3a2-4df0-8249-5902f158998d"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.459556 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c41f96d0-0ffc-424b-afa7-d7b87f2bf11d-logs" (OuterVolumeSpecName: "logs") pod "c41f96d0-0ffc-424b-afa7-d7b87f2bf11d" (UID: "c41f96d0-0ffc-424b-afa7-d7b87f2bf11d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.462557 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.463310 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e87edf72-a3a2-4df0-8249-5902f158998d-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "e87edf72-a3a2-4df0-8249-5902f158998d" (UID: "e87edf72-a3a2-4df0-8249-5902f158998d"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.467596 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c41f96d0-0ffc-424b-afa7-d7b87f2bf11d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c41f96d0-0ffc-424b-afa7-d7b87f2bf11d" (UID: "c41f96d0-0ffc-424b-afa7-d7b87f2bf11d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.473385 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.479055 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e87edf72-a3a2-4df0-8249-5902f158998d-kube-api-access-l9pfn" (OuterVolumeSpecName: "kube-api-access-l9pfn") pod "e87edf72-a3a2-4df0-8249-5902f158998d" (UID: "e87edf72-a3a2-4df0-8249-5902f158998d"). InnerVolumeSpecName "kube-api-access-l9pfn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.481574 4946 scope.go:117] "RemoveContainer" containerID="89293c54aab4564bb3c1f382ee28896a803e4970c73eac5d5bec925d87394c71" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.501104 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c41f96d0-0ffc-424b-afa7-d7b87f2bf11d-kube-api-access-6x5vh" (OuterVolumeSpecName: "kube-api-access-6x5vh") pod "c41f96d0-0ffc-424b-afa7-d7b87f2bf11d" (UID: "c41f96d0-0ffc-424b-afa7-d7b87f2bf11d"). InnerVolumeSpecName "kube-api-access-6x5vh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.509756 4946 scope.go:117] "RemoveContainer" containerID="0f786e996c300ebfa42ac27a855c0fd4efcf8b18e7989e6e4ee76f3ce6556d28" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.539481 4946 scope.go:117] "RemoveContainer" containerID="0eb88c99eb48005972cbcbed71e1bfbdae3511e830c076af9bb07fc162c1abd1" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.546097 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "mysql-db") pod "e87edf72-a3a2-4df0-8249-5902f158998d" (UID: "e87edf72-a3a2-4df0-8249-5902f158998d"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.554520 4946 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c41f96d0-0ffc-424b-afa7-d7b87f2bf11d-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.554547 4946 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e87edf72-a3a2-4df0-8249-5902f158998d-config-data-default\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.554573 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6x5vh\" (UniqueName: \"kubernetes.io/projected/c41f96d0-0ffc-424b-afa7-d7b87f2bf11d-kube-api-access-6x5vh\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.554602 4946 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.554611 4946 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e87edf72-a3a2-4df0-8249-5902f158998d-kolla-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.554620 4946 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e87edf72-a3a2-4df0-8249-5902f158998d-config-data-generated\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.554642 4946 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c41f96d0-0ffc-424b-afa7-d7b87f2bf11d-logs\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.554652 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9pfn\" (UniqueName: 
\"kubernetes.io/projected/e87edf72-a3a2-4df0-8249-5902f158998d-kube-api-access-l9pfn\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.592817 4946 scope.go:117] "RemoveContainer" containerID="0f786e996c300ebfa42ac27a855c0fd4efcf8b18e7989e6e4ee76f3ce6556d28" Nov 28 07:19:42 crc kubenswrapper[4946]: E1128 07:19:42.593346 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f786e996c300ebfa42ac27a855c0fd4efcf8b18e7989e6e4ee76f3ce6556d28\": container with ID starting with 0f786e996c300ebfa42ac27a855c0fd4efcf8b18e7989e6e4ee76f3ce6556d28 not found: ID does not exist" containerID="0f786e996c300ebfa42ac27a855c0fd4efcf8b18e7989e6e4ee76f3ce6556d28" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.593392 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f786e996c300ebfa42ac27a855c0fd4efcf8b18e7989e6e4ee76f3ce6556d28"} err="failed to get container status \"0f786e996c300ebfa42ac27a855c0fd4efcf8b18e7989e6e4ee76f3ce6556d28\": rpc error: code = NotFound desc = could not find container \"0f786e996c300ebfa42ac27a855c0fd4efcf8b18e7989e6e4ee76f3ce6556d28\": container with ID starting with 0f786e996c300ebfa42ac27a855c0fd4efcf8b18e7989e6e4ee76f3ce6556d28 not found: ID does not exist" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.593429 4946 scope.go:117] "RemoveContainer" containerID="0eb88c99eb48005972cbcbed71e1bfbdae3511e830c076af9bb07fc162c1abd1" Nov 28 07:19:42 crc kubenswrapper[4946]: E1128 07:19:42.593799 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0eb88c99eb48005972cbcbed71e1bfbdae3511e830c076af9bb07fc162c1abd1\": container with ID starting with 0eb88c99eb48005972cbcbed71e1bfbdae3511e830c076af9bb07fc162c1abd1 not found: ID does not exist" containerID="0eb88c99eb48005972cbcbed71e1bfbdae3511e830c076af9bb07fc162c1abd1" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.593852 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0eb88c99eb48005972cbcbed71e1bfbdae3511e830c076af9bb07fc162c1abd1"} err="failed to get container status \"0eb88c99eb48005972cbcbed71e1bfbdae3511e830c076af9bb07fc162c1abd1\": rpc error: code = NotFound desc = could not find container \"0eb88c99eb48005972cbcbed71e1bfbdae3511e830c076af9bb07fc162c1abd1\": container with ID starting with 0eb88c99eb48005972cbcbed71e1bfbdae3511e830c076af9bb07fc162c1abd1 not found: ID does not exist" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.593891 4946 scope.go:117] "RemoveContainer" containerID="3f80615bde2b6e4140684945a44377c8842d66297f035126f4b7271272f03968" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.660840 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e87edf72-a3a2-4df0-8249-5902f158998d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e87edf72-a3a2-4df0-8249-5902f158998d" (UID: "e87edf72-a3a2-4df0-8249-5902f158998d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.707306 4946 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.768221 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e87edf72-a3a2-4df0-8249-5902f158998d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.768251 4946 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.805617 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c41f96d0-0ffc-424b-afa7-d7b87f2bf11d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c41f96d0-0ffc-424b-afa7-d7b87f2bf11d" (UID: "c41f96d0-0ffc-424b-afa7-d7b87f2bf11d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:42 crc kubenswrapper[4946]: E1128 07:19:42.835668 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="aff8824b6f9748bbf0caf47e4c6c01486fb6b04768113350feeb4bb43a17bff5" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.836202 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1279af9a-0d83-4c31-94b1-6c732b89a785-config-data" (OuterVolumeSpecName: "config-data") pod "1279af9a-0d83-4c31-94b1-6c732b89a785" (UID: "1279af9a-0d83-4c31-94b1-6c732b89a785"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:42 crc kubenswrapper[4946]: E1128 07:19:42.840737 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="aff8824b6f9748bbf0caf47e4c6c01486fb6b04768113350feeb4bb43a17bff5" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 28 07:19:42 crc kubenswrapper[4946]: E1128 07:19:42.842380 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="aff8824b6f9748bbf0caf47e4c6c01486fb6b04768113350feeb4bb43a17bff5" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 28 07:19:42 crc kubenswrapper[4946]: E1128 07:19:42.842486 4946 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="db7322c8-b99d-4970-85c0-218d683f1ca3" containerName="nova-cell1-conductor-conductor" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.875128 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c41f96d0-0ffc-424b-afa7-d7b87f2bf11d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.875183 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1279af9a-0d83-4c31-94b1-6c732b89a785-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.907073 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1279af9a-0d83-4c31-94b1-6c732b89a785-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1279af9a-0d83-4c31-94b1-6c732b89a785" (UID: "1279af9a-0d83-4c31-94b1-6c732b89a785"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.909663 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c41f96d0-0ffc-424b-afa7-d7b87f2bf11d-config-data" (OuterVolumeSpecName: "config-data") pod "c41f96d0-0ffc-424b-afa7-d7b87f2bf11d" (UID: "c41f96d0-0ffc-424b-afa7-d7b87f2bf11d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.934656 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1279af9a-0d83-4c31-94b1-6c732b89a785-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "1279af9a-0d83-4c31-94b1-6c732b89a785" (UID: "1279af9a-0d83-4c31-94b1-6c732b89a785"). InnerVolumeSpecName "nova-novncproxy-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:42 crc kubenswrapper[4946]: E1128 07:19:42.972731 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c5e052846b04c3f56ae570cc17f4288914d22c94a98566c8edd7f06e4a7f7443 is running failed: container process not found" containerID="c5e052846b04c3f56ae570cc17f4288914d22c94a98566c8edd7f06e4a7f7443" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 28 07:19:42 crc kubenswrapper[4946]: E1128 07:19:42.973395 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c5e052846b04c3f56ae570cc17f4288914d22c94a98566c8edd7f06e4a7f7443 is running failed: container process not found" containerID="c5e052846b04c3f56ae570cc17f4288914d22c94a98566c8edd7f06e4a7f7443" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 28 07:19:42 crc kubenswrapper[4946]: E1128 07:19:42.976174 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c5e052846b04c3f56ae570cc17f4288914d22c94a98566c8edd7f06e4a7f7443 is running failed: container process not found" containerID="c5e052846b04c3f56ae570cc17f4288914d22c94a98566c8edd7f06e4a7f7443" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 28 07:19:42 crc kubenswrapper[4946]: E1128 07:19:42.976293 4946 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c5e052846b04c3f56ae570cc17f4288914d22c94a98566c8edd7f06e4a7f7443 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-gpnz5" podUID="944abeae-3f1d-4391-a375-b64ed9c17b14" containerName="ovsdb-server" Nov 28 07:19:42 crc kubenswrapper[4946]: E1128 07:19:42.976329 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b12ef267dfaa1906a3e883237d5228860410b06e258081a1cabe06d004313ce6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.977312 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1279af9a-0d83-4c31-94b1-6c732b89a785-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.977348 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c41f96d0-0ffc-424b-afa7-d7b87f2bf11d-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.977364 4946 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/1279af9a-0d83-4c31-94b1-6c732b89a785-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.978129 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e87edf72-a3a2-4df0-8249-5902f158998d-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "e87edf72-a3a2-4df0-8249-5902f158998d" (UID: "e87edf72-a3a2-4df0-8249-5902f158998d"). InnerVolumeSpecName "galera-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:42 crc kubenswrapper[4946]: I1128 07:19:42.982294 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1279af9a-0d83-4c31-94b1-6c732b89a785-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "1279af9a-0d83-4c31-94b1-6c732b89a785" (UID: "1279af9a-0d83-4c31-94b1-6c732b89a785"). InnerVolumeSpecName "vencrypt-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:42 crc kubenswrapper[4946]: E1128 07:19:42.986261 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b12ef267dfaa1906a3e883237d5228860410b06e258081a1cabe06d004313ce6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 28 07:19:42 crc kubenswrapper[4946]: E1128 07:19:42.988117 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b12ef267dfaa1906a3e883237d5228860410b06e258081a1cabe06d004313ce6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 28 07:19:42 crc kubenswrapper[4946]: E1128 07:19:42.988224 4946 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-gpnz5" podUID="944abeae-3f1d-4391-a375-b64ed9c17b14" containerName="ovs-vswitchd" Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.078567 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.078861 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="177d3d9f-5e48-4b4e-9329-9d46daa35557" containerName="ceilometer-central-agent" containerID="cri-o://c42c1219c7b69cbd56741a6ed7605e384c6a1b1aa06e921f85ae5e9b886c56e0" gracePeriod=30 Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.079337 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="177d3d9f-5e48-4b4e-9329-9d46daa35557" containerName="proxy-httpd" containerID="cri-o://ffcb008ee579acbdfa8d874703f551f470e1a6f511bed4e136ae6381f70bdf76" gracePeriod=30 Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.079390 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="177d3d9f-5e48-4b4e-9329-9d46daa35557" containerName="sg-core" containerID="cri-o://e44eb1721b1c349eb901f9e287cf8f82e2cd4a6445ee30ac6af7e82f4ca77c2a" gracePeriod=30 Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.079421 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="177d3d9f-5e48-4b4e-9329-9d46daa35557" containerName="ceilometer-notification-agent" containerID="cri-o://6652f9f13937850c60218d658275da3e9395b434290cd817f8561dffb81b033e" gracePeriod=30 Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.080936 4946 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/1279af9a-0d83-4c31-94b1-6c732b89a785-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 
07:19:43.080972 4946 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e87edf72-a3a2-4df0-8249-5902f158998d-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.129008 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.129243 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="376693c8-e03f-4085-9be2-0ef9a0e27c5c" containerName="kube-state-metrics" containerID="cri-o://0c9ce058f230f9783a496b0b6cde4e3a924eca33a5a495750e924592c8150850" gracePeriod=30 Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.209984 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.210238 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="62fdad5e-59c5-4d8f-87da-79b384fb82be" containerName="memcached" containerID="cri-o://9030866a771edcec9dbaa59d8cf162cc3924cacd011d8fb010810714f827e7e0" gracePeriod=30 Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.293755 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-zrmg4"] Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.314804 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-djhf5"] Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.332877 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-zrmg4"] Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.340564 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-djhf5"] Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.352593 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-77dfbb6d46-cf289"] Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.352820 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-77dfbb6d46-cf289" podUID="2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37" containerName="keystone-api" containerID="cri-o://d7ff90ba4ae10a7961b71df89adf06b1201b956ef3dcb71ad66f05d75ef803ef" gracePeriod=30 Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.376871 4946 scope.go:117] "RemoveContainer" containerID="fa977f3dc63e5d9bc6c7308001c1dcfb76ff1420cbaefe8d12d5d89989d89e5c" Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.423764 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-65f88f985c-d964v" Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.424140 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone7786-account-delete-n526k"] Nov 28 07:19:43 crc kubenswrapper[4946]: E1128 07:19:43.426420 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2e2a06b-b43f-4a3b-985d-964e177c3c06" containerName="ovsdbserver-sb" Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.426435 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2e2a06b-b43f-4a3b-985d-964e177c3c06" containerName="ovsdbserver-sb" Nov 28 07:19:43 crc kubenswrapper[4946]: E1128 07:19:43.426482 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1279af9a-0d83-4c31-94b1-6c732b89a785" containerName="nova-cell1-novncproxy-novncproxy" Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.426490 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="1279af9a-0d83-4c31-94b1-6c732b89a785" containerName="nova-cell1-novncproxy-novncproxy" Nov 28 07:19:43 crc kubenswrapper[4946]: E1128 07:19:43.426517 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23813153-a582-4ea1-bdf5-f81b2994aed6" containerName="dnsmasq-dns" Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.426522 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="23813153-a582-4ea1-bdf5-f81b2994aed6" containerName="dnsmasq-dns" Nov 28 07:19:43 crc kubenswrapper[4946]: E1128 07:19:43.426547 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23813153-a582-4ea1-bdf5-f81b2994aed6" containerName="init" Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.426570 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="23813153-a582-4ea1-bdf5-f81b2994aed6" containerName="init" Nov 28 07:19:43 crc kubenswrapper[4946]: E1128 07:19:43.426591 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="926ec930-a8f3-4c87-9963-39779e7309cc" containerName="openstack-network-exporter" Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.426597 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="926ec930-a8f3-4c87-9963-39779e7309cc" containerName="openstack-network-exporter" Nov 28 07:19:43 crc kubenswrapper[4946]: E1128 07:19:43.426609 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e87edf72-a3a2-4df0-8249-5902f158998d" containerName="galera" Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.426615 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="e87edf72-a3a2-4df0-8249-5902f158998d" containerName="galera" Nov 28 07:19:43 crc kubenswrapper[4946]: E1128 07:19:43.426629 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="926ec930-a8f3-4c87-9963-39779e7309cc" containerName="ovsdbserver-nb" Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.426636 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="926ec930-a8f3-4c87-9963-39779e7309cc" containerName="ovsdbserver-nb" Nov 28 07:19:43 crc kubenswrapper[4946]: E1128 07:19:43.426749 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea608140-3ad6-4c56-9754-ec74fc292781" containerName="proxy-server" Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.426757 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea608140-3ad6-4c56-9754-ec74fc292781" containerName="proxy-server" Nov 28 07:19:43 crc kubenswrapper[4946]: E1128 07:19:43.426768 4946 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ea608140-3ad6-4c56-9754-ec74fc292781" containerName="proxy-httpd" Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.426774 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea608140-3ad6-4c56-9754-ec74fc292781" containerName="proxy-httpd" Nov 28 07:19:43 crc kubenswrapper[4946]: E1128 07:19:43.426805 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e87edf72-a3a2-4df0-8249-5902f158998d" containerName="mysql-bootstrap" Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.426812 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="e87edf72-a3a2-4df0-8249-5902f158998d" containerName="mysql-bootstrap" Nov 28 07:19:43 crc kubenswrapper[4946]: E1128 07:19:43.430165 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c41f96d0-0ffc-424b-afa7-d7b87f2bf11d" containerName="barbican-keystone-listener" Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.430224 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="c41f96d0-0ffc-424b-afa7-d7b87f2bf11d" containerName="barbican-keystone-listener" Nov 28 07:19:43 crc kubenswrapper[4946]: E1128 07:19:43.430291 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5feb905d-9c23-4603-b118-fdc05a237848" containerName="ovn-controller" Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.430338 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="5feb905d-9c23-4603-b118-fdc05a237848" containerName="ovn-controller" Nov 28 07:19:43 crc kubenswrapper[4946]: E1128 07:19:43.430425 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2e2a06b-b43f-4a3b-985d-964e177c3c06" containerName="openstack-network-exporter" Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.430762 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2e2a06b-b43f-4a3b-985d-964e177c3c06" containerName="openstack-network-exporter" Nov 28 07:19:43 crc kubenswrapper[4946]: E1128 07:19:43.430825 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c41f96d0-0ffc-424b-afa7-d7b87f2bf11d" containerName="barbican-keystone-listener-log" Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.430873 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="c41f96d0-0ffc-424b-afa7-d7b87f2bf11d" containerName="barbican-keystone-listener-log" Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.433856 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="c41f96d0-0ffc-424b-afa7-d7b87f2bf11d" containerName="barbican-keystone-listener" Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.434100 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="23813153-a582-4ea1-bdf5-f81b2994aed6" containerName="dnsmasq-dns" Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.434178 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="c41f96d0-0ffc-424b-afa7-d7b87f2bf11d" containerName="barbican-keystone-listener-log" Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.434239 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea608140-3ad6-4c56-9754-ec74fc292781" containerName="proxy-server" Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.434301 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="5feb905d-9c23-4603-b118-fdc05a237848" containerName="ovn-controller" Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.434346 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="926ec930-a8f3-4c87-9963-39779e7309cc" containerName="ovsdbserver-nb" 
Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.434562 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="e87edf72-a3a2-4df0-8249-5902f158998d" containerName="galera" Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.434680 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="926ec930-a8f3-4c87-9963-39779e7309cc" containerName="openstack-network-exporter" Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.434742 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea608140-3ad6-4c56-9754-ec74fc292781" containerName="proxy-httpd" Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.434935 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2e2a06b-b43f-4a3b-985d-964e177c3c06" containerName="ovsdbserver-sb" Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.435042 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="1279af9a-0d83-4c31-94b1-6c732b89a785" containerName="nova-cell1-novncproxy-novncproxy" Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.435738 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2e2a06b-b43f-4a3b-985d-964e177c3c06" containerName="openstack-network-exporter" Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.438553 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone7786-account-delete-n526k" Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.490566 4946 generic.go:334] "Generic (PLEG): container finished" podID="376693c8-e03f-4085-9be2-0ef9a0e27c5c" containerID="0c9ce058f230f9783a496b0b6cde4e3a924eca33a5a495750e924592c8150850" exitCode=2 Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.490789 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"376693c8-e03f-4085-9be2-0ef9a0e27c5c","Type":"ContainerDied","Data":"0c9ce058f230f9783a496b0b6cde4e3a924eca33a5a495750e924592c8150850"} Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.505326 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-65f88f985c-d964v" event={"ID":"ea608140-3ad6-4c56-9754-ec74fc292781","Type":"ContainerDied","Data":"b59860fc9a7096098acaccd50b27b487d6bc78e0ad06a88ec3e27543fabe4326"} Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.505533 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-65f88f985c-d964v" Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.524046 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone7786-account-delete-n526k"] Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.525410 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea608140-3ad6-4c56-9754-ec74fc292781-internal-tls-certs\") pod \"ea608140-3ad6-4c56-9754-ec74fc292781\" (UID: \"ea608140-3ad6-4c56-9754-ec74fc292781\") " Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.525523 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea608140-3ad6-4c56-9754-ec74fc292781-combined-ca-bundle\") pod \"ea608140-3ad6-4c56-9754-ec74fc292781\" (UID: \"ea608140-3ad6-4c56-9754-ec74fc292781\") " Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.525615 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea608140-3ad6-4c56-9754-ec74fc292781-run-httpd\") pod \"ea608140-3ad6-4c56-9754-ec74fc292781\" (UID: \"ea608140-3ad6-4c56-9754-ec74fc292781\") " Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.525683 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea608140-3ad6-4c56-9754-ec74fc292781-log-httpd\") pod \"ea608140-3ad6-4c56-9754-ec74fc292781\" (UID: \"ea608140-3ad6-4c56-9754-ec74fc292781\") " Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.525709 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ea608140-3ad6-4c56-9754-ec74fc292781-etc-swift\") pod \"ea608140-3ad6-4c56-9754-ec74fc292781\" (UID: \"ea608140-3ad6-4c56-9754-ec74fc292781\") " Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.525744 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea608140-3ad6-4c56-9754-ec74fc292781-public-tls-certs\") pod \"ea608140-3ad6-4c56-9754-ec74fc292781\" (UID: \"ea608140-3ad6-4c56-9754-ec74fc292781\") " Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.525767 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpdv2\" (UniqueName: \"kubernetes.io/projected/ea608140-3ad6-4c56-9754-ec74fc292781-kube-api-access-vpdv2\") pod \"ea608140-3ad6-4c56-9754-ec74fc292781\" (UID: \"ea608140-3ad6-4c56-9754-ec74fc292781\") " Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.525858 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea608140-3ad6-4c56-9754-ec74fc292781-config-data\") pod \"ea608140-3ad6-4c56-9754-ec74fc292781\" (UID: \"ea608140-3ad6-4c56-9754-ec74fc292781\") " Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.526405 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87bbb\" (UniqueName: \"kubernetes.io/projected/7b145d4b-42d7-4d88-b847-d5b470797f4c-kube-api-access-87bbb\") pod \"keystone7786-account-delete-n526k\" (UID: \"7b145d4b-42d7-4d88-b847-d5b470797f4c\") " pod="openstack/keystone7786-account-delete-n526k" Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.526567 
4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b145d4b-42d7-4d88-b847-d5b470797f4c-operator-scripts\") pod \"keystone7786-account-delete-n526k\" (UID: \"7b145d4b-42d7-4d88-b847-d5b470797f4c\") " pod="openstack/keystone7786-account-delete-n526k" Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.531059 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea608140-3ad6-4c56-9754-ec74fc292781-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ea608140-3ad6-4c56-9754-ec74fc292781" (UID: "ea608140-3ad6-4c56-9754-ec74fc292781"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.531362 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea608140-3ad6-4c56-9754-ec74fc292781-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ea608140-3ad6-4c56-9754-ec74fc292781" (UID: "ea608140-3ad6-4c56-9754-ec74fc292781"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.541680 4946 generic.go:334] "Generic (PLEG): container finished" podID="30b2d9aa-848b-4f33-9bd8-921f5de5ab36" containerID="aae00d1916fbbab97dcf9cfdea6a8d9c1fa2821f3aa4344ef7c74ebbc52befe1" exitCode=0 Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.541807 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapid713-account-delete-lj6h9" event={"ID":"30b2d9aa-848b-4f33-9bd8-921f5de5ab36","Type":"ContainerDied","Data":"aae00d1916fbbab97dcf9cfdea6a8d9c1fa2821f3aa4344ef7c74ebbc52befe1"} Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.581924 4946 generic.go:334] "Generic (PLEG): container finished" podID="177d3d9f-5e48-4b4e-9329-9d46daa35557" containerID="e44eb1721b1c349eb901f9e287cf8f82e2cd4a6445ee30ac6af7e82f4ca77c2a" exitCode=2 Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.582047 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"177d3d9f-5e48-4b4e-9329-9d46daa35557","Type":"ContainerDied","Data":"e44eb1721b1c349eb901f9e287cf8f82e2cd4a6445ee30ac6af7e82f4ca77c2a"} Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.582943 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea608140-3ad6-4c56-9754-ec74fc292781-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "ea608140-3ad6-4c56-9754-ec74fc292781" (UID: "ea608140-3ad6-4c56-9754-ec74fc292781"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.584864 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea608140-3ad6-4c56-9754-ec74fc292781-kube-api-access-vpdv2" (OuterVolumeSpecName: "kube-api-access-vpdv2") pod "ea608140-3ad6-4c56-9754-ec74fc292781" (UID: "ea608140-3ad6-4c56-9754-ec74fc292781"). InnerVolumeSpecName "kube-api-access-vpdv2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.633750 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87bbb\" (UniqueName: \"kubernetes.io/projected/7b145d4b-42d7-4d88-b847-d5b470797f4c-kube-api-access-87bbb\") pod \"keystone7786-account-delete-n526k\" (UID: \"7b145d4b-42d7-4d88-b847-d5b470797f4c\") " pod="openstack/keystone7786-account-delete-n526k" Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.633889 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b145d4b-42d7-4d88-b847-d5b470797f4c-operator-scripts\") pod \"keystone7786-account-delete-n526k\" (UID: \"7b145d4b-42d7-4d88-b847-d5b470797f4c\") " pod="openstack/keystone7786-account-delete-n526k" Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.633965 4946 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea608140-3ad6-4c56-9754-ec74fc292781-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.633980 4946 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea608140-3ad6-4c56-9754-ec74fc292781-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.633989 4946 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ea608140-3ad6-4c56-9754-ec74fc292781-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.633998 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpdv2\" (UniqueName: \"kubernetes.io/projected/ea608140-3ad6-4c56-9754-ec74fc292781-kube-api-access-vpdv2\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:43 crc kubenswrapper[4946]: E1128 07:19:43.634080 4946 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 28 07:19:43 crc kubenswrapper[4946]: E1128 07:19:43.634145 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7b145d4b-42d7-4d88-b847-d5b470797f4c-operator-scripts podName:7b145d4b-42d7-4d88-b847-d5b470797f4c nodeName:}" failed. No retries permitted until 2025-11-28 07:19:44.134124 +0000 UTC m=+1638.512189111 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/7b145d4b-42d7-4d88-b847-d5b470797f4c-operator-scripts") pod "keystone7786-account-delete-n526k" (UID: "7b145d4b-42d7-4d88-b847-d5b470797f4c") : configmap "openstack-scripts" not found Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.638357 4946 scope.go:117] "RemoveContainer" containerID="3f80615bde2b6e4140684945a44377c8842d66297f035126f4b7271272f03968" Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.642765 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Nov 28 07:19:43 crc kubenswrapper[4946]: E1128 07:19:43.691339 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f80615bde2b6e4140684945a44377c8842d66297f035126f4b7271272f03968\": container with ID starting with 3f80615bde2b6e4140684945a44377c8842d66297f035126f4b7271272f03968 not found: ID does not exist" containerID="3f80615bde2b6e4140684945a44377c8842d66297f035126f4b7271272f03968" Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.692947 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f80615bde2b6e4140684945a44377c8842d66297f035126f4b7271272f03968"} err="failed to get container status \"3f80615bde2b6e4140684945a44377c8842d66297f035126f4b7271272f03968\": rpc error: code = NotFound desc = could not find container \"3f80615bde2b6e4140684945a44377c8842d66297f035126f4b7271272f03968\": container with ID starting with 3f80615bde2b6e4140684945a44377c8842d66297f035126f4b7271272f03968 not found: ID does not exist" Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.693085 4946 scope.go:117] "RemoveContainer" containerID="fa977f3dc63e5d9bc6c7308001c1dcfb76ff1420cbaefe8d12d5d89989d89e5c" Nov 28 07:19:43 crc kubenswrapper[4946]: E1128 07:19:43.700966 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa977f3dc63e5d9bc6c7308001c1dcfb76ff1420cbaefe8d12d5d89989d89e5c\": container with ID starting with fa977f3dc63e5d9bc6c7308001c1dcfb76ff1420cbaefe8d12d5d89989d89e5c not found: ID does not exist" containerID="fa977f3dc63e5d9bc6c7308001c1dcfb76ff1420cbaefe8d12d5d89989d89e5c" Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.701031 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa977f3dc63e5d9bc6c7308001c1dcfb76ff1420cbaefe8d12d5d89989d89e5c"} err="failed to get container status \"fa977f3dc63e5d9bc6c7308001c1dcfb76ff1420cbaefe8d12d5d89989d89e5c\": rpc error: code = NotFound desc = could not find container \"fa977f3dc63e5d9bc6c7308001c1dcfb76ff1420cbaefe8d12d5d89989d89e5c\": container with ID starting with fa977f3dc63e5d9bc6c7308001c1dcfb76ff1420cbaefe8d12d5d89989d89e5c not found: ID does not exist" Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.701064 4946 scope.go:117] "RemoveContainer" containerID="d78e71107b7394d2525bc8feb2c6598f678c038511392ededa5a41827a995dcb" Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.704884 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-bc95b876b-t8r9q"] Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.721548 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell0d35d-account-delete-c848z" 
event={"ID":"b138df41-1f0c-4edb-9546-e0f5ec16cf06","Type":"ContainerStarted","Data":"baba1715ed3450c8abd6adfc3431c885e888fd38f6e6c21194828e801a28628d"} Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.722441 4946 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/novacell0d35d-account-delete-c848z" secret="" err="secret \"galera-openstack-dockercfg-g9947\" not found" Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.728074 4946 generic.go:334] "Generic (PLEG): container finished" podID="acadbe07-94b0-4a5d-ac42-6524f0e4ce61" containerID="b1f1ded874bdaaff5d6be92e1099409af1ca775953c8f30295a4c606005bdf30" exitCode=0 Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.728146 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"acadbe07-94b0-4a5d-ac42-6524f0e4ce61","Type":"ContainerDied","Data":"b1f1ded874bdaaff5d6be92e1099409af1ca775953c8f30295a4c606005bdf30"} Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.730002 4946 generic.go:334] "Generic (PLEG): container finished" podID="1486d432-b3a6-4470-b145-076dafbfca67" containerID="f54d7b1d7eeb5228437fef5261e64be91c65fc15631465cf2866a020ad146692" exitCode=0 Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.730043 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placementd9cc-account-delete-9rfvn" event={"ID":"1486d432-b3a6-4470-b145-076dafbfca67","Type":"ContainerDied","Data":"f54d7b1d7eeb5228437fef5261e64be91c65fc15631465cf2866a020ad146692"} Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.732428 4946 generic.go:334] "Generic (PLEG): container finished" podID="9f7aa965-bfc0-4db1-a2d2-07fe02be9f18" containerID="da9335986b308565d71e74cb7101aedf1862f35b9597f2c00d9b2911df63b62a" exitCode=0 Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.732534 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican1925-account-delete-gjf5l" event={"ID":"9f7aa965-bfc0-4db1-a2d2-07fe02be9f18","Type":"ContainerDied","Data":"da9335986b308565d71e74cb7101aedf1862f35b9597f2c00d9b2911df63b62a"} Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.735216 4946 generic.go:334] "Generic (PLEG): container finished" podID="f80c95e6-2981-4755-ada4-26bbf1372693" containerID="5680ae2c5d4972fb1ab727edfb43325e6ccc9add15115250e4af13fb02631c0a" exitCode=0 Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.735551 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutronc822-account-delete-z6qcr" event={"ID":"f80c95e6-2981-4755-ada4-26bbf1372693","Type":"ContainerDied","Data":"5680ae2c5d4972fb1ab727edfb43325e6ccc9add15115250e4af13fb02631c0a"} Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.737776 4946 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="05997c14-3116-4439-8e63-230bf0e5c411" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.161:8776/healthcheck\": dial tcp 10.217.0.161:8776: connect: connection refused" Nov 28 07:19:43 crc kubenswrapper[4946]: E1128 07:19:43.747611 4946 projected.go:194] Error preparing data for projected volume kube-api-access-87bbb for pod openstack/keystone7786-account-delete-n526k: failed to fetch token: serviceaccounts "galera-openstack" not found Nov 28 07:19:43 crc kubenswrapper[4946]: E1128 07:19:43.747687 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7b145d4b-42d7-4d88-b847-d5b470797f4c-kube-api-access-87bbb 
podName:7b145d4b-42d7-4d88-b847-d5b470797f4c nodeName:}" failed. No retries permitted until 2025-11-28 07:19:44.247665609 +0000 UTC m=+1638.625730720 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-87bbb" (UniqueName: "kubernetes.io/projected/7b145d4b-42d7-4d88-b847-d5b470797f4c-kube-api-access-87bbb") pod "keystone7786-account-delete-n526k" (UID: "7b145d4b-42d7-4d88-b847-d5b470797f4c") : failed to fetch token: serviceaccounts "galera-openstack" not found Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.776082 4946 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="206851cb-4673-4ce1-b038-c2e425d306b7" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.199:8775/\": read tcp 10.217.0.2:55134->10.217.0.199:8775: read: connection reset by peer" Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.776248 4946 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="206851cb-4673-4ce1-b038-c2e425d306b7" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.199:8775/\": read tcp 10.217.0.2:55122->10.217.0.199:8775: read: connection reset by peer" Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.776490 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-bc95b876b-t8r9q"] Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.803579 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-cwsts"] Nov 28 07:19:43 crc kubenswrapper[4946]: E1128 07:19:43.840564 4946 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 28 07:19:43 crc kubenswrapper[4946]: E1128 07:19:43.840651 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b138df41-1f0c-4edb-9546-e0f5ec16cf06-operator-scripts podName:b138df41-1f0c-4edb-9546-e0f5ec16cf06 nodeName:}" failed. No retries permitted until 2025-11-28 07:19:44.340630608 +0000 UTC m=+1638.718695719 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/b138df41-1f0c-4edb-9546-e0f5ec16cf06-operator-scripts") pod "novacell0d35d-account-delete-c848z" (UID: "b138df41-1f0c-4edb-9546-e0f5ec16cf06") : configmap "openstack-scripts" not found Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.848530 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea608140-3ad6-4c56-9754-ec74fc292781-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ea608140-3ad6-4c56-9754-ec74fc292781" (UID: "ea608140-3ad6-4c56-9754-ec74fc292781"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:43 crc kubenswrapper[4946]: E1128 07:19:43.848600 4946 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Nov 28 07:19:43 crc kubenswrapper[4946]: E1128 07:19:43.848686 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3521840d-60d0-450c-8c05-7e2ad0fc4e97-config-data podName:3521840d-60d0-450c-8c05-7e2ad0fc4e97 nodeName:}" failed. No retries permitted until 2025-11-28 07:19:47.848663137 +0000 UTC m=+1642.226728248 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/3521840d-60d0-450c-8c05-7e2ad0fc4e97-config-data") pod "rabbitmq-cell1-server-0" (UID: "3521840d-60d0-450c-8c05-7e2ad0fc4e97") : configmap "rabbitmq-cell1-config-data" not found Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.858875 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-cwsts"] Nov 28 07:19:43 crc kubenswrapper[4946]: E1128 07:19:43.908735 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="47d5db84ad9670de902990ff6af710f405452597c9ef2064765b609a18f5147a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.914655 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea608140-3ad6-4c56-9754-ec74fc292781-config-data" (OuterVolumeSpecName: "config-data") pod "ea608140-3ad6-4c56-9754-ec74fc292781" (UID: "ea608140-3ad6-4c56-9754-ec74fc292781"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:43 crc kubenswrapper[4946]: E1128 07:19:43.942523 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="47d5db84ad9670de902990ff6af710f405452597c9ef2064765b609a18f5147a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.954850 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea608140-3ad6-4c56-9754-ec74fc292781-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.954881 4946 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea608140-3ad6-4c56-9754-ec74fc292781-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.981727 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea608140-3ad6-4c56-9754-ec74fc292781-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ea608140-3ad6-4c56-9754-ec74fc292781" (UID: "ea608140-3ad6-4c56-9754-ec74fc292781"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:43 crc kubenswrapper[4946]: E1128 07:19:43.991370 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="47d5db84ad9670de902990ff6af710f405452597c9ef2064765b609a18f5147a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 28 07:19:43 crc kubenswrapper[4946]: E1128 07:19:43.991454 4946 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="c3ba1954-566f-4e25-8312-855a58935547" containerName="nova-scheduler-scheduler" Nov 28 07:19:43 crc kubenswrapper[4946]: I1128 07:19:43.995403 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea608140-3ad6-4c56-9754-ec74fc292781-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ea608140-3ad6-4c56-9754-ec74fc292781" (UID: "ea608140-3ad6-4c56-9754-ec74fc292781"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.032898 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="000349d5-5671-46a1-b1d2-1954a9facc3e" path="/var/lib/kubelet/pods/000349d5-5671-46a1-b1d2-1954a9facc3e/volumes" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.033453 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c6b3d2d-881d-4a19-8551-140a9c02fe4f" path="/var/lib/kubelet/pods/0c6b3d2d-881d-4a19-8551-140a9c02fe4f/volumes" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.033989 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91d18445-e4a1-4a57-abd5-222cade7df9f" path="/var/lib/kubelet/pods/91d18445-e4a1-4a57-abd5-222cade7df9f/volumes" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.035120 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="926ec930-a8f3-4c87-9963-39779e7309cc" path="/var/lib/kubelet/pods/926ec930-a8f3-4c87-9963-39779e7309cc/volumes" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.035785 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2e2a06b-b43f-4a3b-985d-964e177c3c06" path="/var/lib/kubelet/pods/a2e2a06b-b43f-4a3b-985d-964e177c3c06/volumes" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.036345 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c41f96d0-0ffc-424b-afa7-d7b87f2bf11d" path="/var/lib/kubelet/pods/c41f96d0-0ffc-424b-afa7-d7b87f2bf11d/volumes" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.037990 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-8ebd-account-create-update-mn7h9"] Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.058276 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea608140-3ad6-4c56-9754-ec74fc292781-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.058311 4946 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea608140-3ad6-4c56-9754-ec74fc292781-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.081970 4946 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.089904 4946 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-576769594d-lbv64" podUID="313c5837-e776-49ef-8689-14f6f70d31a1" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.155:9311/healthcheck\": read tcp 10.217.0.2:52646->10.217.0.155:9311: read: connection reset by peer" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.089885 4946 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-576769594d-lbv64" podUID="313c5837-e776-49ef-8689-14f6f70d31a1" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.155:9311/healthcheck\": read tcp 10.217.0.2:52658->10.217.0.155:9311: read: connection reset by peer" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.107778 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder8ebd-account-delete-trxbg"] Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.141111 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-8ebd-account-create-update-mn7h9"] Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.156057 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.160313 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b145d4b-42d7-4d88-b847-d5b470797f4c-operator-scripts\") pod \"keystone7786-account-delete-n526k\" (UID: \"7b145d4b-42d7-4d88-b847-d5b470797f4c\") " pod="openstack/keystone7786-account-delete-n526k" Nov 28 07:19:44 crc kubenswrapper[4946]: E1128 07:19:44.160481 4946 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 28 07:19:44 crc kubenswrapper[4946]: E1128 07:19:44.160530 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7b145d4b-42d7-4d88-b847-d5b470797f4c-operator-scripts podName:7b145d4b-42d7-4d88-b847-d5b470797f4c nodeName:}" failed. No retries permitted until 2025-11-28 07:19:45.160513192 +0000 UTC m=+1639.538578303 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/7b145d4b-42d7-4d88-b847-d5b470797f4c-operator-scripts") pod "keystone7786-account-delete-n526k" (UID: "7b145d4b-42d7-4d88-b847-d5b470797f4c") : configmap "openstack-scripts" not found Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.163554 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.166648 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-7d4w7"] Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.182916 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-7d4w7"] Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.191177 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.192038 4946 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="59fdca77-b333-44be-ab8c-96a2f4bcc340" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.97:5671: connect: connection refused" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.208889 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.236884 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone7786-account-delete-n526k"] Nov 28 07:19:44 crc kubenswrapper[4946]: E1128 07:19:44.237686 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-87bbb operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/keystone7786-account-delete-n526k" podUID="7b145d4b-42d7-4d88-b847-d5b470797f4c" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.247806 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-7786-account-create-update-jrfgb"] Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.252707 4946 scope.go:117] "RemoveContainer" containerID="0c66dae97a36e8c460ce04d5fac2b27cca2e32b76e908a748791b556732f4db4" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.261785 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxpd9\" (UniqueName: \"kubernetes.io/projected/376693c8-e03f-4085-9be2-0ef9a0e27c5c-kube-api-access-dxpd9\") pod \"376693c8-e03f-4085-9be2-0ef9a0e27c5c\" (UID: \"376693c8-e03f-4085-9be2-0ef9a0e27c5c\") " Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.261903 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/376693c8-e03f-4085-9be2-0ef9a0e27c5c-kube-state-metrics-tls-config\") pod \"376693c8-e03f-4085-9be2-0ef9a0e27c5c\" (UID: \"376693c8-e03f-4085-9be2-0ef9a0e27c5c\") " Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.261959 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/376693c8-e03f-4085-9be2-0ef9a0e27c5c-kube-state-metrics-tls-certs\") pod \"376693c8-e03f-4085-9be2-0ef9a0e27c5c\" (UID: \"376693c8-e03f-4085-9be2-0ef9a0e27c5c\") " Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.262011 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/376693c8-e03f-4085-9be2-0ef9a0e27c5c-combined-ca-bundle\") pod \"376693c8-e03f-4085-9be2-0ef9a0e27c5c\" (UID: \"376693c8-e03f-4085-9be2-0ef9a0e27c5c\") " Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.262489 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87bbb\" 
(UniqueName: \"kubernetes.io/projected/7b145d4b-42d7-4d88-b847-d5b470797f4c-kube-api-access-87bbb\") pod \"keystone7786-account-delete-n526k\" (UID: \"7b145d4b-42d7-4d88-b847-d5b470797f4c\") " pod="openstack/keystone7786-account-delete-n526k" Nov 28 07:19:44 crc kubenswrapper[4946]: E1128 07:19:44.268029 4946 projected.go:194] Error preparing data for projected volume kube-api-access-87bbb for pod openstack/keystone7786-account-delete-n526k: failed to fetch token: serviceaccounts "galera-openstack" not found Nov 28 07:19:44 crc kubenswrapper[4946]: E1128 07:19:44.268112 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7b145d4b-42d7-4d88-b847-d5b470797f4c-kube-api-access-87bbb podName:7b145d4b-42d7-4d88-b847-d5b470797f4c nodeName:}" failed. No retries permitted until 2025-11-28 07:19:45.268088832 +0000 UTC m=+1639.646153943 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-87bbb" (UniqueName: "kubernetes.io/projected/7b145d4b-42d7-4d88-b847-d5b470797f4c-kube-api-access-87bbb") pod "keystone7786-account-delete-n526k" (UID: "7b145d4b-42d7-4d88-b847-d5b470797f4c") : failed to fetch token: serviceaccounts "galera-openstack" not found Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.278268 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-7786-account-create-update-jrfgb"] Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.282362 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/376693c8-e03f-4085-9be2-0ef9a0e27c5c-kube-api-access-dxpd9" (OuterVolumeSpecName: "kube-api-access-dxpd9") pod "376693c8-e03f-4085-9be2-0ef9a0e27c5c" (UID: "376693c8-e03f-4085-9be2-0ef9a0e27c5c"). InnerVolumeSpecName "kube-api-access-dxpd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.294239 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-dghw2"] Nov 28 07:19:44 crc kubenswrapper[4946]: E1128 07:19:44.332394 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c93f03ceec40d0cc831cb034e30236277dd7917f19967a2a82daec8f7e2ea5a1" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.332548 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/376693c8-e03f-4085-9be2-0ef9a0e27c5c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "376693c8-e03f-4085-9be2-0ef9a0e27c5c" (UID: "376693c8-e03f-4085-9be2-0ef9a0e27c5c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.342934 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="13d10c33-0ca9-47d5-ac49-19391cebfb39" containerName="galera" containerID="cri-o://63032ee424994bedde2af09f7484105017c4f440ba2229de43942a3d82430207" gracePeriod=30 Nov 28 07:19:44 crc kubenswrapper[4946]: E1128 07:19:44.353778 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c93f03ceec40d0cc831cb034e30236277dd7917f19967a2a82daec8f7e2ea5a1" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.356503 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-dghw2"] Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.359889 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/376693c8-e03f-4085-9be2-0ef9a0e27c5c-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "376693c8-e03f-4085-9be2-0ef9a0e27c5c" (UID: "376693c8-e03f-4085-9be2-0ef9a0e27c5c"). InnerVolumeSpecName "kube-state-metrics-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.364636 4946 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/376693c8-e03f-4085-9be2-0ef9a0e27c5c-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.364766 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/376693c8-e03f-4085-9be2-0ef9a0e27c5c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.364827 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxpd9\" (UniqueName: \"kubernetes.io/projected/376693c8-e03f-4085-9be2-0ef9a0e27c5c-kube-api-access-dxpd9\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:44 crc kubenswrapper[4946]: E1128 07:19:44.364933 4946 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 28 07:19:44 crc kubenswrapper[4946]: E1128 07:19:44.365151 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b138df41-1f0c-4edb-9546-e0f5ec16cf06-operator-scripts podName:b138df41-1f0c-4edb-9546-e0f5ec16cf06 nodeName:}" failed. No retries permitted until 2025-11-28 07:19:45.365135443 +0000 UTC m=+1639.743200554 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/b138df41-1f0c-4edb-9546-e0f5ec16cf06-operator-scripts") pod "novacell0d35d-account-delete-c848z" (UID: "b138df41-1f0c-4edb-9546-e0f5ec16cf06") : configmap "openstack-scripts" not found Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.368224 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/376693c8-e03f-4085-9be2-0ef9a0e27c5c-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "376693c8-e03f-4085-9be2-0ef9a0e27c5c" (UID: "376693c8-e03f-4085-9be2-0ef9a0e27c5c"). InnerVolumeSpecName "kube-state-metrics-tls-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:44 crc kubenswrapper[4946]: E1128 07:19:44.368225 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c93f03ceec40d0cc831cb034e30236277dd7917f19967a2a82daec8f7e2ea5a1" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 28 07:19:44 crc kubenswrapper[4946]: E1128 07:19:44.368376 4946 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="b9847c34-a2be-405c-8bd8-34ba251d218d" containerName="nova-cell0-conductor-conductor" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.392358 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-c822-account-create-update-lr7q8"] Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.397890 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-c822-account-create-update-lr7q8"] Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.403070 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutronc822-account-delete-z6qcr"] Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.414600 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-zbgwb"] Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.432703 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-zbgwb"] Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.441393 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican1925-account-delete-gjf5l"] Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.448001 4946 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="3521840d-60d0-450c-8c05-7e2ad0fc4e97" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.456938 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder8ebd-account-delete-trxbg" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.464671 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-1925-account-create-update-877c2"] Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.467544 4946 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/376693c8-e03f-4085-9be2-0ef9a0e27c5c-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.468318 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.476039 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-1925-account-create-update-877c2"] Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.480514 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.484389 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-5ncqm"] Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.489501 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-5ncqm"] Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.513800 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placementd9cc-account-delete-9rfvn"] Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.530207 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-d9cc-account-create-update-vqrs4"] Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.538584 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-d9cc-account-create-update-vqrs4"] Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.540194 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/novacell0d35d-account-delete-c848z" podStartSLOduration=5.540176243 podStartE2EDuration="5.540176243s" podCreationTimestamp="2025-11-28 07:19:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:19:43.787918524 +0000 UTC m=+1638.165983635" watchObservedRunningTime="2025-11-28 07:19:44.540176243 +0000 UTC m=+1638.918241354" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.573753 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32ed9247-5959-4a5b-a879-52fac366f999-operator-scripts\") pod \"32ed9247-5959-4a5b-a879-52fac366f999\" (UID: \"32ed9247-5959-4a5b-a879-52fac366f999\") " Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.573809 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acadbe07-94b0-4a5d-ac42-6524f0e4ce61-config-data\") pod \"acadbe07-94b0-4a5d-ac42-6524f0e4ce61\" (UID: \"acadbe07-94b0-4a5d-ac42-6524f0e4ce61\") " Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.573835 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zb4q5\" (UniqueName: \"kubernetes.io/projected/05997c14-3116-4439-8e63-230bf0e5c411-kube-api-access-zb4q5\") pod \"05997c14-3116-4439-8e63-230bf0e5c411\" (UID: \"05997c14-3116-4439-8e63-230bf0e5c411\") " Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.573859 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acadbe07-94b0-4a5d-ac42-6524f0e4ce61-combined-ca-bundle\") pod \"acadbe07-94b0-4a5d-ac42-6524f0e4ce61\" (UID: \"acadbe07-94b0-4a5d-ac42-6524f0e4ce61\") " Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.573885 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05997c14-3116-4439-8e63-230bf0e5c411-internal-tls-certs\") pod \"05997c14-3116-4439-8e63-230bf0e5c411\" (UID: \"05997c14-3116-4439-8e63-230bf0e5c411\") " Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.573924 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ztn2\" (UniqueName: 
\"kubernetes.io/projected/acadbe07-94b0-4a5d-ac42-6524f0e4ce61-kube-api-access-5ztn2\") pod \"acadbe07-94b0-4a5d-ac42-6524f0e4ce61\" (UID: \"acadbe07-94b0-4a5d-ac42-6524f0e4ce61\") " Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.573950 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/acadbe07-94b0-4a5d-ac42-6524f0e4ce61-internal-tls-certs\") pod \"acadbe07-94b0-4a5d-ac42-6524f0e4ce61\" (UID: \"acadbe07-94b0-4a5d-ac42-6524f0e4ce61\") " Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.574447 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"acadbe07-94b0-4a5d-ac42-6524f0e4ce61\" (UID: \"acadbe07-94b0-4a5d-ac42-6524f0e4ce61\") " Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.574513 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05997c14-3116-4439-8e63-230bf0e5c411-config-data\") pod \"05997c14-3116-4439-8e63-230bf0e5c411\" (UID: \"05997c14-3116-4439-8e63-230bf0e5c411\") " Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.574592 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05997c14-3116-4439-8e63-230bf0e5c411-scripts\") pod \"05997c14-3116-4439-8e63-230bf0e5c411\" (UID: \"05997c14-3116-4439-8e63-230bf0e5c411\") " Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.574633 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/05997c14-3116-4439-8e63-230bf0e5c411-etc-machine-id\") pod \"05997c14-3116-4439-8e63-230bf0e5c411\" (UID: \"05997c14-3116-4439-8e63-230bf0e5c411\") " Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.574656 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05997c14-3116-4439-8e63-230bf0e5c411-public-tls-certs\") pod \"05997c14-3116-4439-8e63-230bf0e5c411\" (UID: \"05997c14-3116-4439-8e63-230bf0e5c411\") " Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.574686 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05997c14-3116-4439-8e63-230bf0e5c411-logs\") pod \"05997c14-3116-4439-8e63-230bf0e5c411\" (UID: \"05997c14-3116-4439-8e63-230bf0e5c411\") " Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.574746 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/acadbe07-94b0-4a5d-ac42-6524f0e4ce61-httpd-run\") pod \"acadbe07-94b0-4a5d-ac42-6524f0e4ce61\" (UID: \"acadbe07-94b0-4a5d-ac42-6524f0e4ce61\") " Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.574767 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/acadbe07-94b0-4a5d-ac42-6524f0e4ce61-logs\") pod \"acadbe07-94b0-4a5d-ac42-6524f0e4ce61\" (UID: \"acadbe07-94b0-4a5d-ac42-6524f0e4ce61\") " Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.574855 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acadbe07-94b0-4a5d-ac42-6524f0e4ce61-scripts\") pod \"acadbe07-94b0-4a5d-ac42-6524f0e4ce61\" (UID: 
\"acadbe07-94b0-4a5d-ac42-6524f0e4ce61\") " Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.574885 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05997c14-3116-4439-8e63-230bf0e5c411-combined-ca-bundle\") pod \"05997c14-3116-4439-8e63-230bf0e5c411\" (UID: \"05997c14-3116-4439-8e63-230bf0e5c411\") " Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.574902 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msrtg\" (UniqueName: \"kubernetes.io/projected/32ed9247-5959-4a5b-a879-52fac366f999-kube-api-access-msrtg\") pod \"32ed9247-5959-4a5b-a879-52fac366f999\" (UID: \"32ed9247-5959-4a5b-a879-52fac366f999\") " Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.574921 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/05997c14-3116-4439-8e63-230bf0e5c411-config-data-custom\") pod \"05997c14-3116-4439-8e63-230bf0e5c411\" (UID: \"05997c14-3116-4439-8e63-230bf0e5c411\") " Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.574979 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32ed9247-5959-4a5b-a879-52fac366f999-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "32ed9247-5959-4a5b-a879-52fac366f999" (UID: "32ed9247-5959-4a5b-a879-52fac366f999"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.588118 4946 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32ed9247-5959-4a5b-a879-52fac366f999-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.592544 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05997c14-3116-4439-8e63-230bf0e5c411-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "05997c14-3116-4439-8e63-230bf0e5c411" (UID: "05997c14-3116-4439-8e63-230bf0e5c411"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.598411 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05997c14-3116-4439-8e63-230bf0e5c411-kube-api-access-zb4q5" (OuterVolumeSpecName: "kube-api-access-zb4q5") pod "05997c14-3116-4439-8e63-230bf0e5c411" (UID: "05997c14-3116-4439-8e63-230bf0e5c411"). InnerVolumeSpecName "kube-api-access-zb4q5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.598735 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acadbe07-94b0-4a5d-ac42-6524f0e4ce61-kube-api-access-5ztn2" (OuterVolumeSpecName: "kube-api-access-5ztn2") pod "acadbe07-94b0-4a5d-ac42-6524f0e4ce61" (UID: "acadbe07-94b0-4a5d-ac42-6524f0e4ce61"). InnerVolumeSpecName "kube-api-access-5ztn2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.599242 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/acadbe07-94b0-4a5d-ac42-6524f0e4ce61-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "acadbe07-94b0-4a5d-ac42-6524f0e4ce61" (UID: "acadbe07-94b0-4a5d-ac42-6524f0e4ce61"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.599300 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05997c14-3116-4439-8e63-230bf0e5c411-logs" (OuterVolumeSpecName: "logs") pod "05997c14-3116-4439-8e63-230bf0e5c411" (UID: "05997c14-3116-4439-8e63-230bf0e5c411"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.604323 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/acadbe07-94b0-4a5d-ac42-6524f0e4ce61-logs" (OuterVolumeSpecName: "logs") pod "acadbe07-94b0-4a5d-ac42-6524f0e4ce61" (UID: "acadbe07-94b0-4a5d-ac42-6524f0e4ce61"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.614291 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-wkths"] Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.625846 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acadbe07-94b0-4a5d-ac42-6524f0e4ce61-scripts" (OuterVolumeSpecName: "scripts") pod "acadbe07-94b0-4a5d-ac42-6524f0e4ce61" (UID: "acadbe07-94b0-4a5d-ac42-6524f0e4ce61"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.628668 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acadbe07-94b0-4a5d-ac42-6524f0e4ce61-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "acadbe07-94b0-4a5d-ac42-6524f0e4ce61" (UID: "acadbe07-94b0-4a5d-ac42-6524f0e4ce61"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.628717 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05997c14-3116-4439-8e63-230bf0e5c411-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "05997c14-3116-4439-8e63-230bf0e5c411" (UID: "05997c14-3116-4439-8e63-230bf0e5c411"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.631705 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05997c14-3116-4439-8e63-230bf0e5c411-scripts" (OuterVolumeSpecName: "scripts") pod "05997c14-3116-4439-8e63-230bf0e5c411" (UID: "05997c14-3116-4439-8e63-230bf0e5c411"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.632811 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "acadbe07-94b0-4a5d-ac42-6524f0e4ce61" (UID: "acadbe07-94b0-4a5d-ac42-6524f0e4ce61"). InnerVolumeSpecName "local-storage12-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.639239 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32ed9247-5959-4a5b-a879-52fac366f999-kube-api-access-msrtg" (OuterVolumeSpecName: "kube-api-access-msrtg") pod "32ed9247-5959-4a5b-a879-52fac366f999" (UID: "32ed9247-5959-4a5b-a879-52fac366f999"). InnerVolumeSpecName "kube-api-access-msrtg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.674114 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05997c14-3116-4439-8e63-230bf0e5c411-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "05997c14-3116-4439-8e63-230bf0e5c411" (UID: "05997c14-3116-4439-8e63-230bf0e5c411"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.691008 4946 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acadbe07-94b0-4a5d-ac42-6524f0e4ce61-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.691034 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05997c14-3116-4439-8e63-230bf0e5c411-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.691047 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msrtg\" (UniqueName: \"kubernetes.io/projected/32ed9247-5959-4a5b-a879-52fac366f999-kube-api-access-msrtg\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.691059 4946 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/05997c14-3116-4439-8e63-230bf0e5c411-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.691069 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zb4q5\" (UniqueName: \"kubernetes.io/projected/05997c14-3116-4439-8e63-230bf0e5c411-kube-api-access-zb4q5\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.691077 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acadbe07-94b0-4a5d-ac42-6524f0e4ce61-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.691088 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ztn2\" (UniqueName: \"kubernetes.io/projected/acadbe07-94b0-4a5d-ac42-6524f0e4ce61-kube-api-access-5ztn2\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.691115 4946 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.691124 4946 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05997c14-3116-4439-8e63-230bf0e5c411-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.691134 4946 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/05997c14-3116-4439-8e63-230bf0e5c411-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.691563 4946 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05997c14-3116-4439-8e63-230bf0e5c411-logs\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.691578 4946 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/acadbe07-94b0-4a5d-ac42-6524f0e4ce61-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.691590 4946 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/acadbe07-94b0-4a5d-ac42-6524f0e4ce61-logs\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.698072 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05997c14-3116-4439-8e63-230bf0e5c411-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "05997c14-3116-4439-8e63-230bf0e5c411" (UID: "05997c14-3116-4439-8e63-230bf0e5c411"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.709890 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-wkths"] Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.761972 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-d713-account-create-update-5fhps"] Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.767321 4946 generic.go:334] "Generic (PLEG): container finished" podID="62fdad5e-59c5-4d8f-87da-79b384fb82be" containerID="9030866a771edcec9dbaa59d8cf162cc3924cacd011d8fb010810714f827e7e0" exitCode=0 Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.767380 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"62fdad5e-59c5-4d8f-87da-79b384fb82be","Type":"ContainerDied","Data":"9030866a771edcec9dbaa59d8cf162cc3924cacd011d8fb010810714f827e7e0"} Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.769148 4946 generic.go:334] "Generic (PLEG): container finished" podID="05997c14-3116-4439-8e63-230bf0e5c411" containerID="bf9f1ffb702a1544bb3f60c1bb7628caa63240849c8027e55a2a252bcc1fc987" exitCode=0 Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.769184 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"05997c14-3116-4439-8e63-230bf0e5c411","Type":"ContainerDied","Data":"bf9f1ffb702a1544bb3f60c1bb7628caa63240849c8027e55a2a252bcc1fc987"} Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.769199 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"05997c14-3116-4439-8e63-230bf0e5c411","Type":"ContainerDied","Data":"05c10eca8555675d751aaa6aff59f9834ee91a9a4feea01b254bf563ec7cf1cb"} Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.769216 4946 scope.go:117] "RemoveContainer" containerID="bf9f1ffb702a1544bb3f60c1bb7628caa63240849c8027e55a2a252bcc1fc987" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.769391 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.769774 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novaapid713-account-delete-lj6h9"] Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.779177 4946 generic.go:334] "Generic (PLEG): container finished" podID="177d3d9f-5e48-4b4e-9329-9d46daa35557" containerID="ffcb008ee579acbdfa8d874703f551f470e1a6f511bed4e136ae6381f70bdf76" exitCode=0 Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.779205 4946 generic.go:334] "Generic (PLEG): container finished" podID="177d3d9f-5e48-4b4e-9329-9d46daa35557" containerID="6652f9f13937850c60218d658275da3e9395b434290cd817f8561dffb81b033e" exitCode=0 Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.779213 4946 generic.go:334] "Generic (PLEG): container finished" podID="177d3d9f-5e48-4b4e-9329-9d46daa35557" containerID="c42c1219c7b69cbd56741a6ed7605e384c6a1b1aa06e921f85ae5e9b886c56e0" exitCode=0 Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.779257 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"177d3d9f-5e48-4b4e-9329-9d46daa35557","Type":"ContainerDied","Data":"ffcb008ee579acbdfa8d874703f551f470e1a6f511bed4e136ae6381f70bdf76"} Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.779279 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"177d3d9f-5e48-4b4e-9329-9d46daa35557","Type":"ContainerDied","Data":"6652f9f13937850c60218d658275da3e9395b434290cd817f8561dffb81b033e"} Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.779290 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"177d3d9f-5e48-4b4e-9329-9d46daa35557","Type":"ContainerDied","Data":"c42c1219c7b69cbd56741a6ed7605e384c6a1b1aa06e921f85ae5e9b886c56e0"} Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.780529 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-d713-account-create-update-5fhps"] Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.786454 4946 generic.go:334] "Generic (PLEG): container finished" podID="db7322c8-b99d-4970-85c0-218d683f1ca3" containerID="aff8824b6f9748bbf0caf47e4c6c01486fb6b04768113350feeb4bb43a17bff5" exitCode=0 Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.786551 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"db7322c8-b99d-4970-85c0-218d683f1ca3","Type":"ContainerDied","Data":"aff8824b6f9748bbf0caf47e4c6c01486fb6b04768113350feeb4bb43a17bff5"} Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.790817 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-6n69v"] Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.791618 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder8ebd-account-delete-trxbg" event={"ID":"32ed9247-5959-4a5b-a879-52fac366f999","Type":"ContainerDied","Data":"fbd9ee63819666086f4ce4c4da4555aa44a3bb92afa7680ff1b6e2c037dbdd9b"} Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.791646 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbd9ee63819666086f4ce4c4da4555aa44a3bb92afa7680ff1b6e2c037dbdd9b" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.791718 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder8ebd-account-delete-trxbg" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.794482 4946 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05997c14-3116-4439-8e63-230bf0e5c411-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.798908 4946 generic.go:334] "Generic (PLEG): container finished" podID="313c5837-e776-49ef-8689-14f6f70d31a1" containerID="7154bbd8ae6506bb7231f4ac45fc4ff0e5022062ff798b35ec9d298de493ab9f" exitCode=0 Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.798988 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-576769594d-lbv64" event={"ID":"313c5837-e776-49ef-8689-14f6f70d31a1","Type":"ContainerDied","Data":"7154bbd8ae6506bb7231f4ac45fc4ff0e5022062ff798b35ec9d298de493ab9f"} Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.801582 4946 generic.go:334] "Generic (PLEG): container finished" podID="c3ba1954-566f-4e25-8312-855a58935547" containerID="47d5db84ad9670de902990ff6af710f405452597c9ef2064765b609a18f5147a" exitCode=0 Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.801688 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c3ba1954-566f-4e25-8312-855a58935547","Type":"ContainerDied","Data":"47d5db84ad9670de902990ff6af710f405452597c9ef2064765b609a18f5147a"} Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.805768 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-6n69v"] Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.809635 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-d35d-account-create-update-8j294"] Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.812167 4946 generic.go:334] "Generic (PLEG): container finished" podID="206851cb-4673-4ce1-b038-c2e425d306b7" containerID="15a189a2591a0270f4f6fdcfc4b5cb8f68b93b0132f30d27ff66c62f4a2d9ffd" exitCode=0 Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.812252 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"206851cb-4673-4ce1-b038-c2e425d306b7","Type":"ContainerDied","Data":"15a189a2591a0270f4f6fdcfc4b5cb8f68b93b0132f30d27ff66c62f4a2d9ffd"} Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.813992 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"376693c8-e03f-4085-9be2-0ef9a0e27c5c","Type":"ContainerDied","Data":"fcaa1cb1eda846a1f782d645fb204b00da3dd8440a9ad15f6a22767eb3dd67de"} Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.814098 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.822897 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-d35d-account-create-update-8j294"] Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.825968 4946 generic.go:334] "Generic (PLEG): container finished" podID="52101de8-a25c-4372-9df3-3f090167ff5f" containerID="6b8d481b71ffab080960ca39f72a71bb4c2e8f178d73020eb2dfb24175734884" exitCode=0 Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.826042 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-96fb6f878-56tfz" event={"ID":"52101de8-a25c-4372-9df3-3f090167ff5f","Type":"ContainerDied","Data":"6b8d481b71ffab080960ca39f72a71bb4c2e8f178d73020eb2dfb24175734884"} Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.831073 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novacell0d35d-account-delete-c848z"] Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.832577 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"813134a8-463b-4f7d-8160-ceb1c5a96853","Type":"ContainerDied","Data":"a5bc002da109da3ebb39a470b77cd1c3a84898e95c562ea010269e7e486161bb"} Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.832561 4946 generic.go:334] "Generic (PLEG): container finished" podID="813134a8-463b-4f7d-8160-ceb1c5a96853" containerID="a5bc002da109da3ebb39a470b77cd1c3a84898e95c562ea010269e7e486161bb" exitCode=0 Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.832841 4946 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.836589 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.837118 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-65f88f985c-d964v"] Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.837162 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"acadbe07-94b0-4a5d-ac42-6524f0e4ce61","Type":"ContainerDied","Data":"d98d8dd8dcc0fc29e123ccc773426df78e1b72e165951ad2e1b448184d5ad6d8"} Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.843199 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-65f88f985c-d964v"] Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.846606 4946 generic.go:334] "Generic (PLEG): container finished" podID="587a9b3d-1634-4af6-96d2-e60c03a7d75f" containerID="ff6fb5d2f5f3038d8cdb313bf501d56a6b33a90f0593856e2e8097e215b377eb" exitCode=0 Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.846807 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"587a9b3d-1634-4af6-96d2-e60c03a7d75f","Type":"ContainerDied","Data":"ff6fb5d2f5f3038d8cdb313bf501d56a6b33a90f0593856e2e8097e215b377eb"} Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.847173 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone7786-account-delete-n526k" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.847710 4946 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="openstack/novacell0d35d-account-delete-c848z" secret="" err="secret \"galera-openstack-dockercfg-g9947\" not found" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.864970 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05997c14-3116-4439-8e63-230bf0e5c411-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "05997c14-3116-4439-8e63-230bf0e5c411" (UID: "05997c14-3116-4439-8e63-230bf0e5c411"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.887708 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acadbe07-94b0-4a5d-ac42-6524f0e4ce61-config-data" (OuterVolumeSpecName: "config-data") pod "acadbe07-94b0-4a5d-ac42-6524f0e4ce61" (UID: "acadbe07-94b0-4a5d-ac42-6524f0e4ce61"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.897002 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acadbe07-94b0-4a5d-ac42-6524f0e4ce61-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.897030 4946 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05997c14-3116-4439-8e63-230bf0e5c411-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.897042 4946 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.899567 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05997c14-3116-4439-8e63-230bf0e5c411-config-data" (OuterVolumeSpecName: "config-data") pod "05997c14-3116-4439-8e63-230bf0e5c411" (UID: "05997c14-3116-4439-8e63-230bf0e5c411"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:44 crc kubenswrapper[4946]: I1128 07:19:44.928163 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acadbe07-94b0-4a5d-ac42-6524f0e4ce61-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "acadbe07-94b0-4a5d-ac42-6524f0e4ce61" (UID: "acadbe07-94b0-4a5d-ac42-6524f0e4ce61"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.000331 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05997c14-3116-4439-8e63-230bf0e5c411-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.000662 4946 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/acadbe07-94b0-4a5d-ac42-6524f0e4ce61-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.168399 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.177066 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.204868 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b145d4b-42d7-4d88-b847-d5b470797f4c-operator-scripts\") pod \"keystone7786-account-delete-n526k\" (UID: \"7b145d4b-42d7-4d88-b847-d5b470797f4c\") " pod="openstack/keystone7786-account-delete-n526k" Nov 28 07:19:45 crc kubenswrapper[4946]: E1128 07:19:45.205072 4946 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 28 07:19:45 crc kubenswrapper[4946]: E1128 07:19:45.205121 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7b145d4b-42d7-4d88-b847-d5b470797f4c-operator-scripts podName:7b145d4b-42d7-4d88-b847-d5b470797f4c nodeName:}" failed. No retries permitted until 2025-11-28 07:19:47.205106392 +0000 UTC m=+1641.583171503 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/7b145d4b-42d7-4d88-b847-d5b470797f4c-operator-scripts") pod "keystone7786-account-delete-n526k" (UID: "7b145d4b-42d7-4d88-b847-d5b470797f4c") : configmap "openstack-scripts" not found Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.220915 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder8ebd-account-delete-trxbg"] Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.235657 4946 scope.go:117] "RemoveContainer" containerID="e5f294dc1328491a34656832803a089d67378b093e4cd726077d224138730910" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.259835 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder8ebd-account-delete-trxbg"] Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.263890 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-96fb6f878-56tfz" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.264086 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone7786-account-delete-n526k" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.272863 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glanceb763-account-delete-mmnjs" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.273933 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.275105 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.278947 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.308451 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"813134a8-463b-4f7d-8160-ceb1c5a96853\" (UID: \"813134a8-463b-4f7d-8160-ceb1c5a96853\") " Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.308531 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52101de8-a25c-4372-9df3-3f090167ff5f-scripts\") pod \"52101de8-a25c-4372-9df3-3f090167ff5f\" (UID: \"52101de8-a25c-4372-9df3-3f090167ff5f\") " Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.308550 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52101de8-a25c-4372-9df3-3f090167ff5f-logs\") pod \"52101de8-a25c-4372-9df3-3f090167ff5f\" (UID: \"52101de8-a25c-4372-9df3-3f090167ff5f\") " Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.308596 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/813134a8-463b-4f7d-8160-ceb1c5a96853-scripts\") pod \"813134a8-463b-4f7d-8160-ceb1c5a96853\" (UID: \"813134a8-463b-4f7d-8160-ceb1c5a96853\") " Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.308618 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/813134a8-463b-4f7d-8160-ceb1c5a96853-combined-ca-bundle\") pod \"813134a8-463b-4f7d-8160-ceb1c5a96853\" (UID: \"813134a8-463b-4f7d-8160-ceb1c5a96853\") " Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.308665 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/52101de8-a25c-4372-9df3-3f090167ff5f-internal-tls-certs\") pod \"52101de8-a25c-4372-9df3-3f090167ff5f\" (UID: \"52101de8-a25c-4372-9df3-3f090167ff5f\") " Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.308694 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52101de8-a25c-4372-9df3-3f090167ff5f-combined-ca-bundle\") pod \"52101de8-a25c-4372-9df3-3f090167ff5f\" (UID: \"52101de8-a25c-4372-9df3-3f090167ff5f\") " Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.308726 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/813134a8-463b-4f7d-8160-ceb1c5a96853-config-data\") pod \"813134a8-463b-4f7d-8160-ceb1c5a96853\" (UID: \"813134a8-463b-4f7d-8160-ceb1c5a96853\") " Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.308752 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db7322c8-b99d-4970-85c0-218d683f1ca3-config-data\") pod \"db7322c8-b99d-4970-85c0-218d683f1ca3\" (UID: \"db7322c8-b99d-4970-85c0-218d683f1ca3\") " Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.308794 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52101de8-a25c-4372-9df3-3f090167ff5f-config-data\") pod \"52101de8-a25c-4372-9df3-3f090167ff5f\" (UID: \"52101de8-a25c-4372-9df3-3f090167ff5f\") " Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.308810 
4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqk9h\" (UniqueName: \"kubernetes.io/projected/813134a8-463b-4f7d-8160-ceb1c5a96853-kube-api-access-lqk9h\") pod \"813134a8-463b-4f7d-8160-ceb1c5a96853\" (UID: \"813134a8-463b-4f7d-8160-ceb1c5a96853\") " Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.308834 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/52101de8-a25c-4372-9df3-3f090167ff5f-public-tls-certs\") pod \"52101de8-a25c-4372-9df3-3f090167ff5f\" (UID: \"52101de8-a25c-4372-9df3-3f090167ff5f\") " Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.308863 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnggb\" (UniqueName: \"kubernetes.io/projected/52101de8-a25c-4372-9df3-3f090167ff5f-kube-api-access-bnggb\") pod \"52101de8-a25c-4372-9df3-3f090167ff5f\" (UID: \"52101de8-a25c-4372-9df3-3f090167ff5f\") " Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.308883 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db7322c8-b99d-4970-85c0-218d683f1ca3-combined-ca-bundle\") pod \"db7322c8-b99d-4970-85c0-218d683f1ca3\" (UID: \"db7322c8-b99d-4970-85c0-218d683f1ca3\") " Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.308922 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/813134a8-463b-4f7d-8160-ceb1c5a96853-public-tls-certs\") pod \"813134a8-463b-4f7d-8160-ceb1c5a96853\" (UID: \"813134a8-463b-4f7d-8160-ceb1c5a96853\") " Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.308942 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tcv2\" (UniqueName: \"kubernetes.io/projected/db7322c8-b99d-4970-85c0-218d683f1ca3-kube-api-access-6tcv2\") pod \"db7322c8-b99d-4970-85c0-218d683f1ca3\" (UID: \"db7322c8-b99d-4970-85c0-218d683f1ca3\") " Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.308976 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/813134a8-463b-4f7d-8160-ceb1c5a96853-httpd-run\") pod \"813134a8-463b-4f7d-8160-ceb1c5a96853\" (UID: \"813134a8-463b-4f7d-8160-ceb1c5a96853\") " Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.309006 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/813134a8-463b-4f7d-8160-ceb1c5a96853-logs\") pod \"813134a8-463b-4f7d-8160-ceb1c5a96853\" (UID: \"813134a8-463b-4f7d-8160-ceb1c5a96853\") " Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.309375 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87bbb\" (UniqueName: \"kubernetes.io/projected/7b145d4b-42d7-4d88-b847-d5b470797f4c-kube-api-access-87bbb\") pod \"keystone7786-account-delete-n526k\" (UID: \"7b145d4b-42d7-4d88-b847-d5b470797f4c\") " pod="openstack/keystone7786-account-delete-n526k" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.310858 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-576769594d-lbv64" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.311981 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placementd9cc-account-delete-9rfvn" Nov 28 07:19:45 crc kubenswrapper[4946]: E1128 07:19:45.315876 4946 projected.go:194] Error preparing data for projected volume kube-api-access-87bbb for pod openstack/keystone7786-account-delete-n526k: failed to fetch token: serviceaccounts "galera-openstack" not found Nov 28 07:19:45 crc kubenswrapper[4946]: E1128 07:19:45.315948 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7b145d4b-42d7-4d88-b847-d5b470797f4c-kube-api-access-87bbb podName:7b145d4b-42d7-4d88-b847-d5b470797f4c nodeName:}" failed. No retries permitted until 2025-11-28 07:19:47.315927334 +0000 UTC m=+1641.693992445 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-87bbb" (UniqueName: "kubernetes.io/projected/7b145d4b-42d7-4d88-b847-d5b470797f4c-kube-api-access-87bbb") pod "keystone7786-account-delete-n526k" (UID: "7b145d4b-42d7-4d88-b847-d5b470797f4c") : failed to fetch token: serviceaccounts "galera-openstack" not found Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.326485 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/813134a8-463b-4f7d-8160-ceb1c5a96853-logs" (OuterVolumeSpecName: "logs") pod "813134a8-463b-4f7d-8160-ceb1c5a96853" (UID: "813134a8-463b-4f7d-8160-ceb1c5a96853"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.327015 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52101de8-a25c-4372-9df3-3f090167ff5f-logs" (OuterVolumeSpecName: "logs") pod "52101de8-a25c-4372-9df3-3f090167ff5f" (UID: "52101de8-a25c-4372-9df3-3f090167ff5f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.327107 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/813134a8-463b-4f7d-8160-ceb1c5a96853-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "813134a8-463b-4f7d-8160-ceb1c5a96853" (UID: "813134a8-463b-4f7d-8160-ceb1c5a96853"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.331767 4946 scope.go:117] "RemoveContainer" containerID="bf9f1ffb702a1544bb3f60c1bb7628caa63240849c8027e55a2a252bcc1fc987" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.332974 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52101de8-a25c-4372-9df3-3f090167ff5f-scripts" (OuterVolumeSpecName: "scripts") pod "52101de8-a25c-4372-9df3-3f090167ff5f" (UID: "52101de8-a25c-4372-9df3-3f090167ff5f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:45 crc kubenswrapper[4946]: E1128 07:19:45.334612 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf9f1ffb702a1544bb3f60c1bb7628caa63240849c8027e55a2a252bcc1fc987\": container with ID starting with bf9f1ffb702a1544bb3f60c1bb7628caa63240849c8027e55a2a252bcc1fc987 not found: ID does not exist" containerID="bf9f1ffb702a1544bb3f60c1bb7628caa63240849c8027e55a2a252bcc1fc987" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.334670 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf9f1ffb702a1544bb3f60c1bb7628caa63240849c8027e55a2a252bcc1fc987"} err="failed to get container status \"bf9f1ffb702a1544bb3f60c1bb7628caa63240849c8027e55a2a252bcc1fc987\": rpc error: code = NotFound desc = could not find container \"bf9f1ffb702a1544bb3f60c1bb7628caa63240849c8027e55a2a252bcc1fc987\": container with ID starting with bf9f1ffb702a1544bb3f60c1bb7628caa63240849c8027e55a2a252bcc1fc987 not found: ID does not exist" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.334701 4946 scope.go:117] "RemoveContainer" containerID="e5f294dc1328491a34656832803a089d67378b093e4cd726077d224138730910" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.334640 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 07:19:45 crc kubenswrapper[4946]: E1128 07:19:45.335060 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5f294dc1328491a34656832803a089d67378b093e4cd726077d224138730910\": container with ID starting with e5f294dc1328491a34656832803a089d67378b093e4cd726077d224138730910 not found: ID does not exist" containerID="e5f294dc1328491a34656832803a089d67378b093e4cd726077d224138730910" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.335108 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5f294dc1328491a34656832803a089d67378b093e4cd726077d224138730910"} err="failed to get container status \"e5f294dc1328491a34656832803a089d67378b093e4cd726077d224138730910\": rpc error: code = NotFound desc = could not find container \"e5f294dc1328491a34656832803a089d67378b093e4cd726077d224138730910\": container with ID starting with e5f294dc1328491a34656832803a089d67378b093e4cd726077d224138730910 not found: ID does not exist" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.335141 4946 scope.go:117] "RemoveContainer" containerID="0c9ce058f230f9783a496b0b6cde4e3a924eca33a5a495750e924592c8150850" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.337047 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/813134a8-463b-4f7d-8160-ceb1c5a96853-scripts" (OuterVolumeSpecName: "scripts") pod "813134a8-463b-4f7d-8160-ceb1c5a96853" (UID: "813134a8-463b-4f7d-8160-ceb1c5a96853"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.337209 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "813134a8-463b-4f7d-8160-ceb1c5a96853" (UID: "813134a8-463b-4f7d-8160-ceb1c5a96853"). InnerVolumeSpecName "local-storage09-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.338502 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.359456 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.361003 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52101de8-a25c-4372-9df3-3f090167ff5f-kube-api-access-bnggb" (OuterVolumeSpecName: "kube-api-access-bnggb") pod "52101de8-a25c-4372-9df3-3f090167ff5f" (UID: "52101de8-a25c-4372-9df3-3f090167ff5f"). InnerVolumeSpecName "kube-api-access-bnggb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.361619 4946 scope.go:117] "RemoveContainer" containerID="b1f1ded874bdaaff5d6be92e1099409af1ca775953c8f30295a4c606005bdf30" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.361828 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db7322c8-b99d-4970-85c0-218d683f1ca3-kube-api-access-6tcv2" (OuterVolumeSpecName: "kube-api-access-6tcv2") pod "db7322c8-b99d-4970-85c0-218d683f1ca3" (UID: "db7322c8-b99d-4970-85c0-218d683f1ca3"). InnerVolumeSpecName "kube-api-access-6tcv2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.363913 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.371304 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.372828 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/813134a8-463b-4f7d-8160-ceb1c5a96853-kube-api-access-lqk9h" (OuterVolumeSpecName: "kube-api-access-lqk9h") pod "813134a8-463b-4f7d-8160-ceb1c5a96853" (UID: "813134a8-463b-4f7d-8160-ceb1c5a96853"). InnerVolumeSpecName "kube-api-access-lqk9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.376663 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.378898 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/813134a8-463b-4f7d-8160-ceb1c5a96853-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "813134a8-463b-4f7d-8160-ceb1c5a96853" (UID: "813134a8-463b-4f7d-8160-ceb1c5a96853"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.381311 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.410829 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gw8z\" (UniqueName: \"kubernetes.io/projected/313c5837-e776-49ef-8689-14f6f70d31a1-kube-api-access-6gw8z\") pod \"313c5837-e776-49ef-8689-14f6f70d31a1\" (UID: \"313c5837-e776-49ef-8689-14f6f70d31a1\") " Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.410901 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/206851cb-4673-4ce1-b038-c2e425d306b7-logs\") pod \"206851cb-4673-4ce1-b038-c2e425d306b7\" (UID: \"206851cb-4673-4ce1-b038-c2e425d306b7\") " Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.410934 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pztt2\" (UniqueName: \"kubernetes.io/projected/177d3d9f-5e48-4b4e-9329-9d46daa35557-kube-api-access-pztt2\") pod \"177d3d9f-5e48-4b4e-9329-9d46daa35557\" (UID: \"177d3d9f-5e48-4b4e-9329-9d46daa35557\") " Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.411179 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/206851cb-4673-4ce1-b038-c2e425d306b7-combined-ca-bundle\") pod \"206851cb-4673-4ce1-b038-c2e425d306b7\" (UID: \"206851cb-4673-4ce1-b038-c2e425d306b7\") " Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.411234 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/206851cb-4673-4ce1-b038-c2e425d306b7-nova-metadata-tls-certs\") pod \"206851cb-4673-4ce1-b038-c2e425d306b7\" (UID: \"206851cb-4673-4ce1-b038-c2e425d306b7\") " Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.411321 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/313c5837-e776-49ef-8689-14f6f70d31a1-public-tls-certs\") pod \"313c5837-e776-49ef-8689-14f6f70d31a1\" (UID: \"313c5837-e776-49ef-8689-14f6f70d31a1\") " Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.411353 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8bs8\" (UniqueName: \"kubernetes.io/projected/eb07712a-805e-4e0f-9a81-dd8ce42bfb88-kube-api-access-g8bs8\") pod \"eb07712a-805e-4e0f-9a81-dd8ce42bfb88\" (UID: \"eb07712a-805e-4e0f-9a81-dd8ce42bfb88\") " Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.411386 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsf2k\" (UniqueName: \"kubernetes.io/projected/1486d432-b3a6-4470-b145-076dafbfca67-kube-api-access-nsf2k\") pod \"1486d432-b3a6-4470-b145-076dafbfca67\" (UID: \"1486d432-b3a6-4470-b145-076dafbfca67\") " Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.411407 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/177d3d9f-5e48-4b4e-9329-9d46daa35557-ceilometer-tls-certs\") pod \"177d3d9f-5e48-4b4e-9329-9d46daa35557\" (UID: \"177d3d9f-5e48-4b4e-9329-9d46daa35557\") " Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.411440 4946 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/206851cb-4673-4ce1-b038-c2e425d306b7-config-data\") pod \"206851cb-4673-4ce1-b038-c2e425d306b7\" (UID: \"206851cb-4673-4ce1-b038-c2e425d306b7\") " Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.411501 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/177d3d9f-5e48-4b4e-9329-9d46daa35557-scripts\") pod \"177d3d9f-5e48-4b4e-9329-9d46daa35557\" (UID: \"177d3d9f-5e48-4b4e-9329-9d46daa35557\") " Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.411554 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/587a9b3d-1634-4af6-96d2-e60c03a7d75f-internal-tls-certs\") pod \"587a9b3d-1634-4af6-96d2-e60c03a7d75f\" (UID: \"587a9b3d-1634-4af6-96d2-e60c03a7d75f\") " Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.411607 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lxxd\" (UniqueName: \"kubernetes.io/projected/587a9b3d-1634-4af6-96d2-e60c03a7d75f-kube-api-access-5lxxd\") pod \"587a9b3d-1634-4af6-96d2-e60c03a7d75f\" (UID: \"587a9b3d-1634-4af6-96d2-e60c03a7d75f\") " Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.411649 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t26h2\" (UniqueName: \"kubernetes.io/projected/c3ba1954-566f-4e25-8312-855a58935547-kube-api-access-t26h2\") pod \"c3ba1954-566f-4e25-8312-855a58935547\" (UID: \"c3ba1954-566f-4e25-8312-855a58935547\") " Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.411685 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/587a9b3d-1634-4af6-96d2-e60c03a7d75f-public-tls-certs\") pod \"587a9b3d-1634-4af6-96d2-e60c03a7d75f\" (UID: \"587a9b3d-1634-4af6-96d2-e60c03a7d75f\") " Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.411710 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/177d3d9f-5e48-4b4e-9329-9d46daa35557-log-httpd\") pod \"177d3d9f-5e48-4b4e-9329-9d46daa35557\" (UID: \"177d3d9f-5e48-4b4e-9329-9d46daa35557\") " Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.411738 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/313c5837-e776-49ef-8689-14f6f70d31a1-config-data-custom\") pod \"313c5837-e776-49ef-8689-14f6f70d31a1\" (UID: \"313c5837-e776-49ef-8689-14f6f70d31a1\") " Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.411777 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/177d3d9f-5e48-4b4e-9329-9d46daa35557-config-data\") pod \"177d3d9f-5e48-4b4e-9329-9d46daa35557\" (UID: \"177d3d9f-5e48-4b4e-9329-9d46daa35557\") " Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.411814 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/313c5837-e776-49ef-8689-14f6f70d31a1-combined-ca-bundle\") pod \"313c5837-e776-49ef-8689-14f6f70d31a1\" (UID: \"313c5837-e776-49ef-8689-14f6f70d31a1\") " Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.411842 4946 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb07712a-805e-4e0f-9a81-dd8ce42bfb88-operator-scripts\") pod \"eb07712a-805e-4e0f-9a81-dd8ce42bfb88\" (UID: \"eb07712a-805e-4e0f-9a81-dd8ce42bfb88\") " Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.411877 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/177d3d9f-5e48-4b4e-9329-9d46daa35557-run-httpd\") pod \"177d3d9f-5e48-4b4e-9329-9d46daa35557\" (UID: \"177d3d9f-5e48-4b4e-9329-9d46daa35557\") " Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.411903 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1486d432-b3a6-4470-b145-076dafbfca67-operator-scripts\") pod \"1486d432-b3a6-4470-b145-076dafbfca67\" (UID: \"1486d432-b3a6-4470-b145-076dafbfca67\") " Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.411932 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/313c5837-e776-49ef-8689-14f6f70d31a1-config-data\") pod \"313c5837-e776-49ef-8689-14f6f70d31a1\" (UID: \"313c5837-e776-49ef-8689-14f6f70d31a1\") " Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.411972 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3ba1954-566f-4e25-8312-855a58935547-config-data\") pod \"c3ba1954-566f-4e25-8312-855a58935547\" (UID: \"c3ba1954-566f-4e25-8312-855a58935547\") " Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.412026 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfzh8\" (UniqueName: \"kubernetes.io/projected/206851cb-4673-4ce1-b038-c2e425d306b7-kube-api-access-pfzh8\") pod \"206851cb-4673-4ce1-b038-c2e425d306b7\" (UID: \"206851cb-4673-4ce1-b038-c2e425d306b7\") " Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.412060 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/587a9b3d-1634-4af6-96d2-e60c03a7d75f-combined-ca-bundle\") pod \"587a9b3d-1634-4af6-96d2-e60c03a7d75f\" (UID: \"587a9b3d-1634-4af6-96d2-e60c03a7d75f\") " Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.412104 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/587a9b3d-1634-4af6-96d2-e60c03a7d75f-config-data\") pod \"587a9b3d-1634-4af6-96d2-e60c03a7d75f\" (UID: \"587a9b3d-1634-4af6-96d2-e60c03a7d75f\") " Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.412129 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/177d3d9f-5e48-4b4e-9329-9d46daa35557-sg-core-conf-yaml\") pod \"177d3d9f-5e48-4b4e-9329-9d46daa35557\" (UID: \"177d3d9f-5e48-4b4e-9329-9d46daa35557\") " Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.412166 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/313c5837-e776-49ef-8689-14f6f70d31a1-logs\") pod \"313c5837-e776-49ef-8689-14f6f70d31a1\" (UID: \"313c5837-e776-49ef-8689-14f6f70d31a1\") " Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.412200 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/177d3d9f-5e48-4b4e-9329-9d46daa35557-combined-ca-bundle\") pod \"177d3d9f-5e48-4b4e-9329-9d46daa35557\" (UID: \"177d3d9f-5e48-4b4e-9329-9d46daa35557\") " Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.412223 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/587a9b3d-1634-4af6-96d2-e60c03a7d75f-logs\") pod \"587a9b3d-1634-4af6-96d2-e60c03a7d75f\" (UID: \"587a9b3d-1634-4af6-96d2-e60c03a7d75f\") " Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.412244 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/313c5837-e776-49ef-8689-14f6f70d31a1-internal-tls-certs\") pod \"313c5837-e776-49ef-8689-14f6f70d31a1\" (UID: \"313c5837-e776-49ef-8689-14f6f70d31a1\") " Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.412269 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3ba1954-566f-4e25-8312-855a58935547-combined-ca-bundle\") pod \"c3ba1954-566f-4e25-8312-855a58935547\" (UID: \"c3ba1954-566f-4e25-8312-855a58935547\") " Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.412900 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/206851cb-4673-4ce1-b038-c2e425d306b7-logs" (OuterVolumeSpecName: "logs") pod "206851cb-4673-4ce1-b038-c2e425d306b7" (UID: "206851cb-4673-4ce1-b038-c2e425d306b7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.413824 4946 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/813134a8-463b-4f7d-8160-ceb1c5a96853-logs\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.455608 4946 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.456314 4946 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/206851cb-4673-4ce1-b038-c2e425d306b7-logs\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.456403 4946 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52101de8-a25c-4372-9df3-3f090167ff5f-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.456496 4946 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52101de8-a25c-4372-9df3-3f090167ff5f-logs\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.456585 4946 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/813134a8-463b-4f7d-8160-ceb1c5a96853-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.456648 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/813134a8-463b-4f7d-8160-ceb1c5a96853-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.456706 4946 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-lqk9h\" (UniqueName: \"kubernetes.io/projected/813134a8-463b-4f7d-8160-ceb1c5a96853-kube-api-access-lqk9h\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.456764 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnggb\" (UniqueName: \"kubernetes.io/projected/52101de8-a25c-4372-9df3-3f090167ff5f-kube-api-access-bnggb\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.456829 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tcv2\" (UniqueName: \"kubernetes.io/projected/db7322c8-b99d-4970-85c0-218d683f1ca3-kube-api-access-6tcv2\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.456882 4946 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/813134a8-463b-4f7d-8160-ceb1c5a96853-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.423101 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/177d3d9f-5e48-4b4e-9329-9d46daa35557-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "177d3d9f-5e48-4b4e-9329-9d46daa35557" (UID: "177d3d9f-5e48-4b4e-9329-9d46daa35557"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:19:45 crc kubenswrapper[4946]: E1128 07:19:45.425350 4946 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 28 07:19:45 crc kubenswrapper[4946]: E1128 07:19:45.457102 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b138df41-1f0c-4edb-9546-e0f5ec16cf06-operator-scripts podName:b138df41-1f0c-4edb-9546-e0f5ec16cf06 nodeName:}" failed. No retries permitted until 2025-11-28 07:19:47.457079755 +0000 UTC m=+1641.835144856 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/b138df41-1f0c-4edb-9546-e0f5ec16cf06-operator-scripts") pod "novacell0d35d-account-delete-c848z" (UID: "b138df41-1f0c-4edb-9546-e0f5ec16cf06") : configmap "openstack-scripts" not found Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.426333 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/587a9b3d-1634-4af6-96d2-e60c03a7d75f-logs" (OuterVolumeSpecName: "logs") pod "587a9b3d-1634-4af6-96d2-e60c03a7d75f" (UID: "587a9b3d-1634-4af6-96d2-e60c03a7d75f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.434528 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1486d432-b3a6-4470-b145-076dafbfca67-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1486d432-b3a6-4470-b145-076dafbfca67" (UID: "1486d432-b3a6-4470-b145-076dafbfca67"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.436956 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/313c5837-e776-49ef-8689-14f6f70d31a1-logs" (OuterVolumeSpecName: "logs") pod "313c5837-e776-49ef-8689-14f6f70d31a1" (UID: "313c5837-e776-49ef-8689-14f6f70d31a1"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.439651 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb07712a-805e-4e0f-9a81-dd8ce42bfb88-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eb07712a-805e-4e0f-9a81-dd8ce42bfb88" (UID: "eb07712a-805e-4e0f-9a81-dd8ce42bfb88"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.450730 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/313c5837-e776-49ef-8689-14f6f70d31a1-kube-api-access-6gw8z" (OuterVolumeSpecName: "kube-api-access-6gw8z") pod "313c5837-e776-49ef-8689-14f6f70d31a1" (UID: "313c5837-e776-49ef-8689-14f6f70d31a1"). InnerVolumeSpecName "kube-api-access-6gw8z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.454614 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/177d3d9f-5e48-4b4e-9329-9d46daa35557-kube-api-access-pztt2" (OuterVolumeSpecName: "kube-api-access-pztt2") pod "177d3d9f-5e48-4b4e-9329-9d46daa35557" (UID: "177d3d9f-5e48-4b4e-9329-9d46daa35557"). InnerVolumeSpecName "kube-api-access-pztt2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.455385 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/206851cb-4673-4ce1-b038-c2e425d306b7-kube-api-access-pfzh8" (OuterVolumeSpecName: "kube-api-access-pfzh8") pod "206851cb-4673-4ce1-b038-c2e425d306b7" (UID: "206851cb-4673-4ce1-b038-c2e425d306b7"). InnerVolumeSpecName "kube-api-access-pfzh8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.455439 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/177d3d9f-5e48-4b4e-9329-9d46daa35557-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "177d3d9f-5e48-4b4e-9329-9d46daa35557" (UID: "177d3d9f-5e48-4b4e-9329-9d46daa35557"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.507991 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/313c5837-e776-49ef-8689-14f6f70d31a1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "313c5837-e776-49ef-8689-14f6f70d31a1" (UID: "313c5837-e776-49ef-8689-14f6f70d31a1"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.508093 4946 scope.go:117] "RemoveContainer" containerID="364195b07b396d49046a6d8594cb12442c0047908fd530817352c055ef2e1319" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.529802 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/587a9b3d-1634-4af6-96d2-e60c03a7d75f-kube-api-access-5lxxd" (OuterVolumeSpecName: "kube-api-access-5lxxd") pod "587a9b3d-1634-4af6-96d2-e60c03a7d75f" (UID: "587a9b3d-1634-4af6-96d2-e60c03a7d75f"). InnerVolumeSpecName "kube-api-access-5lxxd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.542269 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb07712a-805e-4e0f-9a81-dd8ce42bfb88-kube-api-access-g8bs8" (OuterVolumeSpecName: "kube-api-access-g8bs8") pod "eb07712a-805e-4e0f-9a81-dd8ce42bfb88" (UID: "eb07712a-805e-4e0f-9a81-dd8ce42bfb88"). InnerVolumeSpecName "kube-api-access-g8bs8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.550835 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1486d432-b3a6-4470-b145-076dafbfca67-kube-api-access-nsf2k" (OuterVolumeSpecName: "kube-api-access-nsf2k") pod "1486d432-b3a6-4470-b145-076dafbfca67" (UID: "1486d432-b3a6-4470-b145-076dafbfca67"). InnerVolumeSpecName "kube-api-access-nsf2k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.550977 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/177d3d9f-5e48-4b4e-9329-9d46daa35557-scripts" (OuterVolumeSpecName: "scripts") pod "177d3d9f-5e48-4b4e-9329-9d46daa35557" (UID: "177d3d9f-5e48-4b4e-9329-9d46daa35557"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.551036 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3ba1954-566f-4e25-8312-855a58935547-kube-api-access-t26h2" (OuterVolumeSpecName: "kube-api-access-t26h2") pod "c3ba1954-566f-4e25-8312-855a58935547" (UID: "c3ba1954-566f-4e25-8312-855a58935547"). InnerVolumeSpecName "kube-api-access-t26h2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.558295 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.562067 4946 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/177d3d9f-5e48-4b4e-9329-9d46daa35557-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.562090 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5lxxd\" (UniqueName: \"kubernetes.io/projected/587a9b3d-1634-4af6-96d2-e60c03a7d75f-kube-api-access-5lxxd\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.562108 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t26h2\" (UniqueName: \"kubernetes.io/projected/c3ba1954-566f-4e25-8312-855a58935547-kube-api-access-t26h2\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.562149 4946 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/177d3d9f-5e48-4b4e-9329-9d46daa35557-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.562162 4946 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/313c5837-e776-49ef-8689-14f6f70d31a1-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.562171 4946 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb07712a-805e-4e0f-9a81-dd8ce42bfb88-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.562183 4946 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/177d3d9f-5e48-4b4e-9329-9d46daa35557-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.562193 4946 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1486d432-b3a6-4470-b145-076dafbfca67-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.562202 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfzh8\" (UniqueName: \"kubernetes.io/projected/206851cb-4673-4ce1-b038-c2e425d306b7-kube-api-access-pfzh8\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.562211 4946 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/313c5837-e776-49ef-8689-14f6f70d31a1-logs\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.562221 4946 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/587a9b3d-1634-4af6-96d2-e60c03a7d75f-logs\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.562230 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gw8z\" (UniqueName: \"kubernetes.io/projected/313c5837-e776-49ef-8689-14f6f70d31a1-kube-api-access-6gw8z\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.562239 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pztt2\" (UniqueName: \"kubernetes.io/projected/177d3d9f-5e48-4b4e-9329-9d46daa35557-kube-api-access-pztt2\") on node \"crc\" DevicePath \"\"" Nov 28 
07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.562331 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8bs8\" (UniqueName: \"kubernetes.io/projected/eb07712a-805e-4e0f-9a81-dd8ce42bfb88-kube-api-access-g8bs8\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.562345 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsf2k\" (UniqueName: \"kubernetes.io/projected/1486d432-b3a6-4470-b145-076dafbfca67-kube-api-access-nsf2k\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.620707 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db7322c8-b99d-4970-85c0-218d683f1ca3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db7322c8-b99d-4970-85c0-218d683f1ca3" (UID: "db7322c8-b99d-4970-85c0-218d683f1ca3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:45 crc kubenswrapper[4946]: E1128 07:19:45.648340 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="63032ee424994bedde2af09f7484105017c4f440ba2229de43942a3d82430207" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Nov 28 07:19:45 crc kubenswrapper[4946]: E1128 07:19:45.655770 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="63032ee424994bedde2af09f7484105017c4f440ba2229de43942a3d82430207" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Nov 28 07:19:45 crc kubenswrapper[4946]: E1128 07:19:45.668307 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="63032ee424994bedde2af09f7484105017c4f440ba2229de43942a3d82430207" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Nov 28 07:19:45 crc kubenswrapper[4946]: E1128 07:19:45.668368 4946 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="13d10c33-0ca9-47d5-ac49-19391cebfb39" containerName="galera" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.670914 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62fdad5e-59c5-4d8f-87da-79b384fb82be-combined-ca-bundle\") pod \"62fdad5e-59c5-4d8f-87da-79b384fb82be\" (UID: \"62fdad5e-59c5-4d8f-87da-79b384fb82be\") " Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.671480 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2blsv\" (UniqueName: \"kubernetes.io/projected/62fdad5e-59c5-4d8f-87da-79b384fb82be-kube-api-access-2blsv\") pod \"62fdad5e-59c5-4d8f-87da-79b384fb82be\" (UID: \"62fdad5e-59c5-4d8f-87da-79b384fb82be\") " Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.671620 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/62fdad5e-59c5-4d8f-87da-79b384fb82be-config-data\") pod \"62fdad5e-59c5-4d8f-87da-79b384fb82be\" (UID: \"62fdad5e-59c5-4d8f-87da-79b384fb82be\") " Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.671722 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/62fdad5e-59c5-4d8f-87da-79b384fb82be-kolla-config\") pod \"62fdad5e-59c5-4d8f-87da-79b384fb82be\" (UID: \"62fdad5e-59c5-4d8f-87da-79b384fb82be\") " Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.671837 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/62fdad5e-59c5-4d8f-87da-79b384fb82be-memcached-tls-certs\") pod \"62fdad5e-59c5-4d8f-87da-79b384fb82be\" (UID: \"62fdad5e-59c5-4d8f-87da-79b384fb82be\") " Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.672283 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62fdad5e-59c5-4d8f-87da-79b384fb82be-config-data" (OuterVolumeSpecName: "config-data") pod "62fdad5e-59c5-4d8f-87da-79b384fb82be" (UID: "62fdad5e-59c5-4d8f-87da-79b384fb82be"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.672298 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db7322c8-b99d-4970-85c0-218d683f1ca3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.682258 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62fdad5e-59c5-4d8f-87da-79b384fb82be-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "62fdad5e-59c5-4d8f-87da-79b384fb82be" (UID: "62fdad5e-59c5-4d8f-87da-79b384fb82be"). InnerVolumeSpecName "kolla-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:19:45 crc kubenswrapper[4946]: E1128 07:19:45.684966 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7e494d19861cee1bf5cc7147950495eba143f3dbb2bec8c94fbf0c3c3c16f2dd" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 28 07:19:45 crc kubenswrapper[4946]: E1128 07:19:45.686828 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7e494d19861cee1bf5cc7147950495eba143f3dbb2bec8c94fbf0c3c3c16f2dd" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 28 07:19:45 crc kubenswrapper[4946]: E1128 07:19:45.689377 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7e494d19861cee1bf5cc7147950495eba143f3dbb2bec8c94fbf0c3c3c16f2dd" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 28 07:19:45 crc kubenswrapper[4946]: E1128 07:19:45.689496 4946 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="c274eefa-3598-470b-9b07-25928903d425" containerName="ovn-northd" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.692102 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3ba1954-566f-4e25-8312-855a58935547-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c3ba1954-566f-4e25-8312-855a58935547" (UID: "c3ba1954-566f-4e25-8312-855a58935547"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.698673 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62fdad5e-59c5-4d8f-87da-79b384fb82be-kube-api-access-2blsv" (OuterVolumeSpecName: "kube-api-access-2blsv") pod "62fdad5e-59c5-4d8f-87da-79b384fb82be" (UID: "62fdad5e-59c5-4d8f-87da-79b384fb82be"). InnerVolumeSpecName "kube-api-access-2blsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.703684 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/587a9b3d-1634-4af6-96d2-e60c03a7d75f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "587a9b3d-1634-4af6-96d2-e60c03a7d75f" (UID: "587a9b3d-1634-4af6-96d2-e60c03a7d75f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.704617 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db7322c8-b99d-4970-85c0-218d683f1ca3-config-data" (OuterVolumeSpecName: "config-data") pod "db7322c8-b99d-4970-85c0-218d683f1ca3" (UID: "db7322c8-b99d-4970-85c0-218d683f1ca3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.722086 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/206851cb-4673-4ce1-b038-c2e425d306b7-config-data" (OuterVolumeSpecName: "config-data") pod "206851cb-4673-4ce1-b038-c2e425d306b7" (UID: "206851cb-4673-4ce1-b038-c2e425d306b7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.723543 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52101de8-a25c-4372-9df3-3f090167ff5f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "52101de8-a25c-4372-9df3-3f090167ff5f" (UID: "52101de8-a25c-4372-9df3-3f090167ff5f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.742382 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52101de8-a25c-4372-9df3-3f090167ff5f-config-data" (OuterVolumeSpecName: "config-data") pod "52101de8-a25c-4372-9df3-3f090167ff5f" (UID: "52101de8-a25c-4372-9df3-3f090167ff5f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.751661 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/177d3d9f-5e48-4b4e-9329-9d46daa35557-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "177d3d9f-5e48-4b4e-9329-9d46daa35557" (UID: "177d3d9f-5e48-4b4e-9329-9d46daa35557"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.755004 4946 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.768231 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/313c5837-e776-49ef-8689-14f6f70d31a1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "313c5837-e776-49ef-8689-14f6f70d31a1" (UID: "313c5837-e776-49ef-8689-14f6f70d31a1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.774145 4946 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/177d3d9f-5e48-4b4e-9329-9d46daa35557-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.774177 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3ba1954-566f-4e25-8312-855a58935547-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.774187 4946 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.774197 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2blsv\" (UniqueName: \"kubernetes.io/projected/62fdad5e-59c5-4d8f-87da-79b384fb82be-kube-api-access-2blsv\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.774208 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/206851cb-4673-4ce1-b038-c2e425d306b7-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.774217 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/62fdad5e-59c5-4d8f-87da-79b384fb82be-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.774226 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52101de8-a25c-4372-9df3-3f090167ff5f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.774236 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db7322c8-b99d-4970-85c0-218d683f1ca3-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.774245 4946 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/62fdad5e-59c5-4d8f-87da-79b384fb82be-kolla-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.774310 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52101de8-a25c-4372-9df3-3f090167ff5f-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.774321 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/313c5837-e776-49ef-8689-14f6f70d31a1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.774330 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/587a9b3d-1634-4af6-96d2-e60c03a7d75f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.775648 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/206851cb-4673-4ce1-b038-c2e425d306b7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "206851cb-4673-4ce1-b038-c2e425d306b7" 
(UID: "206851cb-4673-4ce1-b038-c2e425d306b7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.793586 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/813134a8-463b-4f7d-8160-ceb1c5a96853-config-data" (OuterVolumeSpecName: "config-data") pod "813134a8-463b-4f7d-8160-ceb1c5a96853" (UID: "813134a8-463b-4f7d-8160-ceb1c5a96853"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.796504 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/313c5837-e776-49ef-8689-14f6f70d31a1-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "313c5837-e776-49ef-8689-14f6f70d31a1" (UID: "313c5837-e776-49ef-8689-14f6f70d31a1"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.800705 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3ba1954-566f-4e25-8312-855a58935547-config-data" (OuterVolumeSpecName: "config-data") pod "c3ba1954-566f-4e25-8312-855a58935547" (UID: "c3ba1954-566f-4e25-8312-855a58935547"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.807483 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/587a9b3d-1634-4af6-96d2-e60c03a7d75f-config-data" (OuterVolumeSpecName: "config-data") pod "587a9b3d-1634-4af6-96d2-e60c03a7d75f" (UID: "587a9b3d-1634-4af6-96d2-e60c03a7d75f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.836837 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62fdad5e-59c5-4d8f-87da-79b384fb82be-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "62fdad5e-59c5-4d8f-87da-79b384fb82be" (UID: "62fdad5e-59c5-4d8f-87da-79b384fb82be"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.852696 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52101de8-a25c-4372-9df3-3f090167ff5f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "52101de8-a25c-4372-9df3-3f090167ff5f" (UID: "52101de8-a25c-4372-9df3-3f090167ff5f"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.859476 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican1925-account-delete-gjf5l" event={"ID":"9f7aa965-bfc0-4db1-a2d2-07fe02be9f18","Type":"ContainerDied","Data":"734c2f38a6c27cff45594e1786fb0a29642647112e55aaa972412f1017ebae67"} Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.859522 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="734c2f38a6c27cff45594e1786fb0a29642647112e55aaa972412f1017ebae67" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.859730 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/206851cb-4673-4ce1-b038-c2e425d306b7-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "206851cb-4673-4ce1-b038-c2e425d306b7" (UID: "206851cb-4673-4ce1-b038-c2e425d306b7"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.862026 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/313c5837-e776-49ef-8689-14f6f70d31a1-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "313c5837-e776-49ef-8689-14f6f70d31a1" (UID: "313c5837-e776-49ef-8689-14f6f70d31a1"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.862100 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"206851cb-4673-4ce1-b038-c2e425d306b7","Type":"ContainerDied","Data":"d211035670e1cae21262677232429628b3c8c8f69a73b74a5dd382d675b6587d"} Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.862152 4946 scope.go:117] "RemoveContainer" containerID="15a189a2591a0270f4f6fdcfc4b5cb8f68b93b0132f30d27ff66c62f4a2d9ffd" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.862177 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.866328 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-96fb6f878-56tfz" event={"ID":"52101de8-a25c-4372-9df3-3f090167ff5f","Type":"ContainerDied","Data":"44d720134c5b40834abb0dc5b55058a116e10f41445bf90bdff8941986aa4dd5"} Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.866434 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-96fb6f878-56tfz" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.867304 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/177d3d9f-5e48-4b4e-9329-9d46daa35557-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "177d3d9f-5e48-4b4e-9329-9d46daa35557" (UID: "177d3d9f-5e48-4b4e-9329-9d46daa35557"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.869058 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"62fdad5e-59c5-4d8f-87da-79b384fb82be","Type":"ContainerDied","Data":"f388c24b8d434f7346a8e73ac37b7bd99ad101a75b092c600bac08281ec01d0e"} Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.869125 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.873629 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placementd9cc-account-delete-9rfvn" event={"ID":"1486d432-b3a6-4470-b145-076dafbfca67","Type":"ContainerDied","Data":"cda7a05104ff5fb2283c5e7e234c600e4029a235771098b4405c1e0918b0ecaf"} Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.873663 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cda7a05104ff5fb2283c5e7e234c600e4029a235771098b4405c1e0918b0ecaf" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.873734 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placementd9cc-account-delete-9rfvn" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.876275 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3ba1954-566f-4e25-8312-855a58935547-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.876311 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/587a9b3d-1634-4af6-96d2-e60c03a7d75f-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.876322 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62fdad5e-59c5-4d8f-87da-79b384fb82be-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.876338 4946 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/313c5837-e776-49ef-8689-14f6f70d31a1-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.876349 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/206851cb-4673-4ce1-b038-c2e425d306b7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.876360 4946 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/206851cb-4673-4ce1-b038-c2e425d306b7-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.876373 4946 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/313c5837-e776-49ef-8689-14f6f70d31a1-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.876384 4946 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/177d3d9f-5e48-4b4e-9329-9d46daa35557-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.876395 4946 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/52101de8-a25c-4372-9df3-3f090167ff5f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.876408 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/813134a8-463b-4f7d-8160-ceb1c5a96853-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.878515 4946 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glanceb763-account-delete-mmnjs" event={"ID":"eb07712a-805e-4e0f-9a81-dd8ce42bfb88","Type":"ContainerDied","Data":"0133e20bc1b79a9a37f8565ec8bda91fcc10535cbd3cf9e8a684a504b49f82f8"} Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.878547 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0133e20bc1b79a9a37f8565ec8bda91fcc10535cbd3cf9e8a684a504b49f82f8" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.878619 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glanceb763-account-delete-mmnjs" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.883362 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapid713-account-delete-lj6h9" event={"ID":"30b2d9aa-848b-4f33-9bd8-921f5de5ab36","Type":"ContainerDied","Data":"51e719a433208488b0d81729535e9ef02ad774f5529721339bd692ccf59ffadd"} Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.883402 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51e719a433208488b0d81729535e9ef02ad774f5529721339bd692ccf59ffadd" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.885502 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"177d3d9f-5e48-4b4e-9329-9d46daa35557","Type":"ContainerDied","Data":"9e164633707d4e421bae4d9bb28f87286e7ceec32f8c3f9934039032283c4284"} Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.885600 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.885804 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/177d3d9f-5e48-4b4e-9329-9d46daa35557-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "177d3d9f-5e48-4b4e-9329-9d46daa35557" (UID: "177d3d9f-5e48-4b4e-9329-9d46daa35557"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.889662 4946 generic.go:334] "Generic (PLEG): container finished" podID="b9847c34-a2be-405c-8bd8-34ba251d218d" containerID="c93f03ceec40d0cc831cb034e30236277dd7917f19967a2a82daec8f7e2ea5a1" exitCode=0 Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.889758 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"b9847c34-a2be-405c-8bd8-34ba251d218d","Type":"ContainerDied","Data":"c93f03ceec40d0cc831cb034e30236277dd7917f19967a2a82daec8f7e2ea5a1"} Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.889786 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"b9847c34-a2be-405c-8bd8-34ba251d218d","Type":"ContainerDied","Data":"34edb443746b43a19279746540d0fd55b7126f2ec8f1aa208214ccb503a41cac"} Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.889798 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34edb443746b43a19279746540d0fd55b7126f2ec8f1aa208214ccb503a41cac" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.891135 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutronc822-account-delete-z6qcr" event={"ID":"f80c95e6-2981-4755-ada4-26bbf1372693","Type":"ContainerDied","Data":"97fb181ee6bd1c34e7d62d2be6dd9bbd910c691e00f6686394d09ffed72b82a8"} Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.891173 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97fb181ee6bd1c34e7d62d2be6dd9bbd910c691e00f6686394d09ffed72b82a8" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.892517 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c3ba1954-566f-4e25-8312-855a58935547","Type":"ContainerDied","Data":"2735b0564fb3827b237fcba8745bfe5e24c027d064789dd8b7e46fba685b7394"} Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.892603 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.896413 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/587a9b3d-1634-4af6-96d2-e60c03a7d75f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "587a9b3d-1634-4af6-96d2-e60c03a7d75f" (UID: "587a9b3d-1634-4af6-96d2-e60c03a7d75f"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.898055 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"813134a8-463b-4f7d-8160-ceb1c5a96853","Type":"ContainerDied","Data":"8378ae4ecbb1d3ada07991bd1f3c82f2762b1a98faa009c8fab02ce25130ddde"} Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.898150 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.910745 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/813134a8-463b-4f7d-8160-ceb1c5a96853-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "813134a8-463b-4f7d-8160-ceb1c5a96853" (UID: "813134a8-463b-4f7d-8160-ceb1c5a96853"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.913673 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.913682 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"db7322c8-b99d-4970-85c0-218d683f1ca3","Type":"ContainerDied","Data":"8bdb5f4fee871ff9da23960598cc8a268908f38bdfb5abdf2c002b509d1ec125"} Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.915653 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/587a9b3d-1634-4af6-96d2-e60c03a7d75f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "587a9b3d-1634-4af6-96d2-e60c03a7d75f" (UID: "587a9b3d-1634-4af6-96d2-e60c03a7d75f"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.919330 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-576769594d-lbv64" event={"ID":"313c5837-e776-49ef-8689-14f6f70d31a1","Type":"ContainerDied","Data":"6f40f8c684db37068a0770b44fa872fcdd30e6d8c9cadc9dfd30e9eaf286c527"} Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.919409 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-576769594d-lbv64" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.923157 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone7786-account-delete-n526k" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.924252 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.925643 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"587a9b3d-1634-4af6-96d2-e60c03a7d75f","Type":"ContainerDied","Data":"d6d9bebf6aea4f0a994edca594a609e4a3c7e1938d3194319fd9e9f014301848"} Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.925827 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/novacell0d35d-account-delete-c848z" podUID="b138df41-1f0c-4edb-9546-e0f5ec16cf06" containerName="mariadb-account-delete" containerID="cri-o://baba1715ed3450c8abd6adfc3431c885e888fd38f6e6c21194828e801a28628d" gracePeriod=30 Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.939894 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62fdad5e-59c5-4d8f-87da-79b384fb82be-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "62fdad5e-59c5-4d8f-87da-79b384fb82be" (UID: "62fdad5e-59c5-4d8f-87da-79b384fb82be"). InnerVolumeSpecName "memcached-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.957720 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/313c5837-e776-49ef-8689-14f6f70d31a1-config-data" (OuterVolumeSpecName: "config-data") pod "313c5837-e776-49ef-8689-14f6f70d31a1" (UID: "313c5837-e776-49ef-8689-14f6f70d31a1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.976052 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/177d3d9f-5e48-4b4e-9329-9d46daa35557-config-data" (OuterVolumeSpecName: "config-data") pod "177d3d9f-5e48-4b4e-9329-9d46daa35557" (UID: "177d3d9f-5e48-4b4e-9329-9d46daa35557"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.978183 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/177d3d9f-5e48-4b4e-9329-9d46daa35557-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.978224 4946 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/587a9b3d-1634-4af6-96d2-e60c03a7d75f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.978240 4946 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/587a9b3d-1634-4af6-96d2-e60c03a7d75f-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.978255 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/177d3d9f-5e48-4b4e-9329-9d46daa35557-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.978267 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/313c5837-e776-49ef-8689-14f6f70d31a1-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.978279 4946 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/813134a8-463b-4f7d-8160-ceb1c5a96853-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.978290 4946 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/62fdad5e-59c5-4d8f-87da-79b384fb82be-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.985839 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutronc822-account-delete-z6qcr" Nov 28 07:19:45 crc kubenswrapper[4946]: I1128 07:19:45.991284 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52101de8-a25c-4372-9df3-3f090167ff5f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "52101de8-a25c-4372-9df3-3f090167ff5f" (UID: "52101de8-a25c-4372-9df3-3f090167ff5f"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.003454 4946 scope.go:117] "RemoveContainer" containerID="24ddca1eb18e17e4df35510352b9a1bdc29c68afcf2f6e975fb5d6693d132c62" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.018143 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05997c14-3116-4439-8e63-230bf0e5c411" path="/var/lib/kubelet/pods/05997c14-3116-4439-8e63-230bf0e5c411/volumes" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.019052 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1279af9a-0d83-4c31-94b1-6c732b89a785" path="/var/lib/kubelet/pods/1279af9a-0d83-4c31-94b1-6c732b89a785/volumes" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.019739 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c480cce-4090-47dd-8337-99546f661b9d" path="/var/lib/kubelet/pods/1c480cce-4090-47dd-8337-99546f661b9d/volumes" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.021270 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32ed9247-5959-4a5b-a879-52fac366f999" path="/var/lib/kubelet/pods/32ed9247-5959-4a5b-a879-52fac366f999/volumes" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.021804 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="376693c8-e03f-4085-9be2-0ef9a0e27c5c" path="/var/lib/kubelet/pods/376693c8-e03f-4085-9be2-0ef9a0e27c5c/volumes" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.022317 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38655c7a-a82f-485f-ae38-75634417e780" path="/var/lib/kubelet/pods/38655c7a-a82f-485f-ae38-75634417e780/volumes" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.022934 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bef3c13-6018-44b9-b9ac-7620eee2ddb0" path="/var/lib/kubelet/pods/3bef3c13-6018-44b9-b9ac-7620eee2ddb0/volumes" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.024274 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3dcd796a-a7cc-4e4a-866b-c26819be5e92" path="/var/lib/kubelet/pods/3dcd796a-a7cc-4e4a-866b-c26819be5e92/volumes" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.025057 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b2a9fdb-501b-4dad-a136-ca4a932e64d0" path="/var/lib/kubelet/pods/5b2a9fdb-501b-4dad-a136-ca4a932e64d0/volumes" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.027965 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="759da62f-4114-48b9-9eb0-1fad429f3044" path="/var/lib/kubelet/pods/759da62f-4114-48b9-9eb0-1fad429f3044/volumes" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.028990 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="802d0ec6-164b-4033-91d5-514bbd50bc23" path="/var/lib/kubelet/pods/802d0ec6-164b-4033-91d5-514bbd50bc23/volumes" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.029053 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican1925-account-delete-gjf5l" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.029720 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b8aece4-9268-47a9-a9cb-b312c604a23a" path="/var/lib/kubelet/pods/8b8aece4-9268-47a9-a9cb-b312c604a23a/volumes" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.031194 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acadbe07-94b0-4a5d-ac42-6524f0e4ce61" path="/var/lib/kubelet/pods/acadbe07-94b0-4a5d-ac42-6524f0e4ce61/volumes" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.032244 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba455d60-e55b-40f4-9e67-a171de4223f0" path="/var/lib/kubelet/pods/ba455d60-e55b-40f4-9e67-a171de4223f0/volumes" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.032857 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4e29c9b-9e25-4304-a314-54943d88f953" path="/var/lib/kubelet/pods/d4e29c9b-9e25-4304-a314-54943d88f953/volumes" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.033448 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e85173bc-690d-4513-924c-0ba5c540d432" path="/var/lib/kubelet/pods/e85173bc-690d-4513-924c-0ba5c540d432/volumes" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.034759 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e87edf72-a3a2-4df0-8249-5902f158998d" path="/var/lib/kubelet/pods/e87edf72-a3a2-4df0-8249-5902f158998d/volumes" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.035498 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea608140-3ad6-4c56-9754-ec74fc292781" path="/var/lib/kubelet/pods/ea608140-3ad6-4c56-9754-ec74fc292781/volumes" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.035944 4946 scope.go:117] "RemoveContainer" containerID="6b8d481b71ffab080960ca39f72a71bb4c2e8f178d73020eb2dfb24175734884" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.036835 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f619657b-649a-4152-886e-9357b34fced2" path="/var/lib/kubelet/pods/f619657b-649a-4152-886e-9357b34fced2/volumes" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.037771 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="feb2d0e9-f8e7-4c39-be09-05728a90c0e9" path="/var/lib/kubelet/pods/feb2d0e9-f8e7-4c39-be09-05728a90c0e9/volumes" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.079593 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hb62\" (UniqueName: \"kubernetes.io/projected/f80c95e6-2981-4755-ada4-26bbf1372693-kube-api-access-4hb62\") pod \"f80c95e6-2981-4755-ada4-26bbf1372693\" (UID: \"f80c95e6-2981-4755-ada4-26bbf1372693\") " Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.082124 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f80c95e6-2981-4755-ada4-26bbf1372693-operator-scripts\") pod \"f80c95e6-2981-4755-ada4-26bbf1372693\" (UID: \"f80c95e6-2981-4755-ada4-26bbf1372693\") " Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.082822 4946 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/52101de8-a25c-4372-9df3-3f090167ff5f-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:46 crc kubenswrapper[4946]: E1128 
07:19:46.085955 4946 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Nov 28 07:19:46 crc kubenswrapper[4946]: E1128 07:19:46.086035 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/59fdca77-b333-44be-ab8c-96a2f4bcc340-config-data podName:59fdca77-b333-44be-ab8c-96a2f4bcc340 nodeName:}" failed. No retries permitted until 2025-11-28 07:19:54.085994034 +0000 UTC m=+1648.464059145 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/59fdca77-b333-44be-ab8c-96a2f4bcc340-config-data") pod "rabbitmq-server-0" (UID: "59fdca77-b333-44be-ab8c-96a2f4bcc340") : configmap "rabbitmq-config-data" not found Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.086026 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f80c95e6-2981-4755-ada4-26bbf1372693-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f80c95e6-2981-4755-ada4-26bbf1372693" (UID: "f80c95e6-2981-4755-ada4-26bbf1372693"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.088070 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f80c95e6-2981-4755-ada4-26bbf1372693-kube-api-access-4hb62" (OuterVolumeSpecName: "kube-api-access-4hb62") pod "f80c95e6-2981-4755-ada4-26bbf1372693" (UID: "f80c95e6-2981-4755-ada4-26bbf1372693"). InnerVolumeSpecName "kube-api-access-4hb62". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.113045 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone7786-account-delete-n526k"] Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.113083 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone7786-account-delete-n526k"] Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.117422 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.128205 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.129225 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.141289 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/novaapid713-account-delete-lj6h9" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.152229 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placementd9cc-account-delete-9rfvn"] Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.168714 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placementd9cc-account-delete-9rfvn"] Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.170750 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.188571 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ck5w\" (UniqueName: \"kubernetes.io/projected/9f7aa965-bfc0-4db1-a2d2-07fe02be9f18-kube-api-access-9ck5w\") pod \"9f7aa965-bfc0-4db1-a2d2-07fe02be9f18\" (UID: \"9f7aa965-bfc0-4db1-a2d2-07fe02be9f18\") " Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.188736 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f7aa965-bfc0-4db1-a2d2-07fe02be9f18-operator-scripts\") pod \"9f7aa965-bfc0-4db1-a2d2-07fe02be9f18\" (UID: \"9f7aa965-bfc0-4db1-a2d2-07fe02be9f18\") " Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.189358 4946 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f80c95e6-2981-4755-ada4-26bbf1372693-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.189389 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hb62\" (UniqueName: \"kubernetes.io/projected/f80c95e6-2981-4755-ada4-26bbf1372693-kube-api-access-4hb62\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.191053 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f7aa965-bfc0-4db1-a2d2-07fe02be9f18-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9f7aa965-bfc0-4db1-a2d2-07fe02be9f18" (UID: "9f7aa965-bfc0-4db1-a2d2-07fe02be9f18"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.192172 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.197626 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f7aa965-bfc0-4db1-a2d2-07fe02be9f18-kube-api-access-9ck5w" (OuterVolumeSpecName: "kube-api-access-9ck5w") pod "9f7aa965-bfc0-4db1-a2d2-07fe02be9f18" (UID: "9f7aa965-bfc0-4db1-a2d2-07fe02be9f18"). InnerVolumeSpecName "kube-api-access-9ck5w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.219474 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.220250 4946 scope.go:117] "RemoveContainer" containerID="a2f882b052a314819dd4340b85645a14b6914096d138c6a1d81c6036bd6013fb" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.231867 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.249574 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.258692 4946 scope.go:117] "RemoveContainer" containerID="9030866a771edcec9dbaa59d8cf162cc3924cacd011d8fb010810714f827e7e0" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.262906 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.283976 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.290597 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dp5hj\" (UniqueName: \"kubernetes.io/projected/30b2d9aa-848b-4f33-9bd8-921f5de5ab36-kube-api-access-dp5hj\") pod \"30b2d9aa-848b-4f33-9bd8-921f5de5ab36\" (UID: \"30b2d9aa-848b-4f33-9bd8-921f5de5ab36\") " Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.290754 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9p6lc\" (UniqueName: \"kubernetes.io/projected/b9847c34-a2be-405c-8bd8-34ba251d218d-kube-api-access-9p6lc\") pod \"b9847c34-a2be-405c-8bd8-34ba251d218d\" (UID: \"b9847c34-a2be-405c-8bd8-34ba251d218d\") " Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.290839 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9847c34-a2be-405c-8bd8-34ba251d218d-combined-ca-bundle\") pod \"b9847c34-a2be-405c-8bd8-34ba251d218d\" (UID: \"b9847c34-a2be-405c-8bd8-34ba251d218d\") " Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.290991 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30b2d9aa-848b-4f33-9bd8-921f5de5ab36-operator-scripts\") pod \"30b2d9aa-848b-4f33-9bd8-921f5de5ab36\" (UID: \"30b2d9aa-848b-4f33-9bd8-921f5de5ab36\") " Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.291104 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9847c34-a2be-405c-8bd8-34ba251d218d-config-data\") pod \"b9847c34-a2be-405c-8bd8-34ba251d218d\" (UID: \"b9847c34-a2be-405c-8bd8-34ba251d218d\") " Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.291607 4946 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b145d4b-42d7-4d88-b847-d5b470797f4c-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.291657 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87bbb\" (UniqueName: \"kubernetes.io/projected/7b145d4b-42d7-4d88-b847-d5b470797f4c-kube-api-access-87bbb\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:46 crc 
kubenswrapper[4946]: I1128 07:19:46.291671 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ck5w\" (UniqueName: \"kubernetes.io/projected/9f7aa965-bfc0-4db1-a2d2-07fe02be9f18-kube-api-access-9ck5w\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.291680 4946 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f7aa965-bfc0-4db1-a2d2-07fe02be9f18-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.291893 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30b2d9aa-848b-4f33-9bd8-921f5de5ab36-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "30b2d9aa-848b-4f33-9bd8-921f5de5ab36" (UID: "30b2d9aa-848b-4f33-9bd8-921f5de5ab36"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.298761 4946 scope.go:117] "RemoveContainer" containerID="ffcb008ee579acbdfa8d874703f551f470e1a6f511bed4e136ae6381f70bdf76" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.299963 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.299952 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30b2d9aa-848b-4f33-9bd8-921f5de5ab36-kube-api-access-dp5hj" (OuterVolumeSpecName: "kube-api-access-dp5hj") pod "30b2d9aa-848b-4f33-9bd8-921f5de5ab36" (UID: "30b2d9aa-848b-4f33-9bd8-921f5de5ab36"). InnerVolumeSpecName "kube-api-access-dp5hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.300350 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9847c34-a2be-405c-8bd8-34ba251d218d-kube-api-access-9p6lc" (OuterVolumeSpecName: "kube-api-access-9p6lc") pod "b9847c34-a2be-405c-8bd8-34ba251d218d" (UID: "b9847c34-a2be-405c-8bd8-34ba251d218d"). InnerVolumeSpecName "kube-api-access-9p6lc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.321836 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9847c34-a2be-405c-8bd8-34ba251d218d-config-data" (OuterVolumeSpecName: "config-data") pod "b9847c34-a2be-405c-8bd8-34ba251d218d" (UID: "b9847c34-a2be-405c-8bd8-34ba251d218d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.328370 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.330136 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9847c34-a2be-405c-8bd8-34ba251d218d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b9847c34-a2be-405c-8bd8-34ba251d218d" (UID: "b9847c34-a2be-405c-8bd8-34ba251d218d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.333564 4946 scope.go:117] "RemoveContainer" containerID="e44eb1721b1c349eb901f9e287cf8f82e2cd4a6445ee30ac6af7e82f4ca77c2a" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.335131 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.360332 4946 scope.go:117] "RemoveContainer" containerID="6652f9f13937850c60218d658275da3e9395b434290cd817f8561dffb81b033e" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.360396 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-96fb6f878-56tfz"] Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.366526 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-96fb6f878-56tfz"] Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.373340 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-576769594d-lbv64"] Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.378205 4946 scope.go:117] "RemoveContainer" containerID="c42c1219c7b69cbd56741a6ed7605e384c6a1b1aa06e921f85ae5e9b886c56e0" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.388834 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-576769594d-lbv64"] Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.402235 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dp5hj\" (UniqueName: \"kubernetes.io/projected/30b2d9aa-848b-4f33-9bd8-921f5de5ab36-kube-api-access-dp5hj\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.402273 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9p6lc\" (UniqueName: \"kubernetes.io/projected/b9847c34-a2be-405c-8bd8-34ba251d218d-kube-api-access-9p6lc\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.402284 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9847c34-a2be-405c-8bd8-34ba251d218d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.402294 4946 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30b2d9aa-848b-4f33-9bd8-921f5de5ab36-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.402302 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9847c34-a2be-405c-8bd8-34ba251d218d-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.403651 4946 scope.go:117] "RemoveContainer" containerID="47d5db84ad9670de902990ff6af710f405452597c9ef2064765b609a18f5147a" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.405158 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.428746 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.466040 4946 scope.go:117] "RemoveContainer" containerID="a5bc002da109da3ebb39a470b77cd1c3a84898e95c562ea010269e7e486161bb" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.491302 4946 scope.go:117] "RemoveContainer" 
containerID="53c5b6d4bf9c96653c84e3fcb9a4ea1b7994f3a98935936a45c752c693dcee48" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.513481 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.520241 4946 scope.go:117] "RemoveContainer" containerID="aff8824b6f9748bbf0caf47e4c6c01486fb6b04768113350feeb4bb43a17bff5" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.555736 4946 scope.go:117] "RemoveContainer" containerID="7154bbd8ae6506bb7231f4ac45fc4ff0e5022062ff798b35ec9d298de493ab9f" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.581848 4946 scope.go:117] "RemoveContainer" containerID="ed90e8dccea49daff0a79a0d11bad0590ddbd5f45981e375a7dee23f1a208e56" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.605991 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/13d10c33-0ca9-47d5-ac49-19391cebfb39-kolla-config\") pod \"13d10c33-0ca9-47d5-ac49-19391cebfb39\" (UID: \"13d10c33-0ca9-47d5-ac49-19391cebfb39\") " Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.606107 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/13d10c33-0ca9-47d5-ac49-19391cebfb39-config-data-generated\") pod \"13d10c33-0ca9-47d5-ac49-19391cebfb39\" (UID: \"13d10c33-0ca9-47d5-ac49-19391cebfb39\") " Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.606134 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13d10c33-0ca9-47d5-ac49-19391cebfb39-combined-ca-bundle\") pod \"13d10c33-0ca9-47d5-ac49-19391cebfb39\" (UID: \"13d10c33-0ca9-47d5-ac49-19391cebfb39\") " Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.606200 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13d10c33-0ca9-47d5-ac49-19391cebfb39-operator-scripts\") pod \"13d10c33-0ca9-47d5-ac49-19391cebfb39\" (UID: \"13d10c33-0ca9-47d5-ac49-19391cebfb39\") " Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.606226 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cf9c5\" (UniqueName: \"kubernetes.io/projected/13d10c33-0ca9-47d5-ac49-19391cebfb39-kube-api-access-cf9c5\") pod \"13d10c33-0ca9-47d5-ac49-19391cebfb39\" (UID: \"13d10c33-0ca9-47d5-ac49-19391cebfb39\") " Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.606241 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"13d10c33-0ca9-47d5-ac49-19391cebfb39\" (UID: \"13d10c33-0ca9-47d5-ac49-19391cebfb39\") " Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.606368 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/13d10c33-0ca9-47d5-ac49-19391cebfb39-config-data-default\") pod \"13d10c33-0ca9-47d5-ac49-19391cebfb39\" (UID: \"13d10c33-0ca9-47d5-ac49-19391cebfb39\") " Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.606391 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/13d10c33-0ca9-47d5-ac49-19391cebfb39-galera-tls-certs\") pod 
\"13d10c33-0ca9-47d5-ac49-19391cebfb39\" (UID: \"13d10c33-0ca9-47d5-ac49-19391cebfb39\") " Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.607124 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13d10c33-0ca9-47d5-ac49-19391cebfb39-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "13d10c33-0ca9-47d5-ac49-19391cebfb39" (UID: "13d10c33-0ca9-47d5-ac49-19391cebfb39"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.607283 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13d10c33-0ca9-47d5-ac49-19391cebfb39-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "13d10c33-0ca9-47d5-ac49-19391cebfb39" (UID: "13d10c33-0ca9-47d5-ac49-19391cebfb39"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.608097 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13d10c33-0ca9-47d5-ac49-19391cebfb39-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "13d10c33-0ca9-47d5-ac49-19391cebfb39" (UID: "13d10c33-0ca9-47d5-ac49-19391cebfb39"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.608137 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13d10c33-0ca9-47d5-ac49-19391cebfb39-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "13d10c33-0ca9-47d5-ac49-19391cebfb39" (UID: "13d10c33-0ca9-47d5-ac49-19391cebfb39"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.618048 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "mysql-db") pod "13d10c33-0ca9-47d5-ac49-19391cebfb39" (UID: "13d10c33-0ca9-47d5-ac49-19391cebfb39"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.629441 4946 scope.go:117] "RemoveContainer" containerID="ff6fb5d2f5f3038d8cdb313bf501d56a6b33a90f0593856e2e8097e215b377eb" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.632609 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13d10c33-0ca9-47d5-ac49-19391cebfb39-kube-api-access-cf9c5" (OuterVolumeSpecName: "kube-api-access-cf9c5") pod "13d10c33-0ca9-47d5-ac49-19391cebfb39" (UID: "13d10c33-0ca9-47d5-ac49-19391cebfb39"). InnerVolumeSpecName "kube-api-access-cf9c5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.644124 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13d10c33-0ca9-47d5-ac49-19391cebfb39-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "13d10c33-0ca9-47d5-ac49-19391cebfb39" (UID: "13d10c33-0ca9-47d5-ac49-19391cebfb39"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.653654 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13d10c33-0ca9-47d5-ac49-19391cebfb39-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "13d10c33-0ca9-47d5-ac49-19391cebfb39" (UID: "13d10c33-0ca9-47d5-ac49-19391cebfb39"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.707530 4946 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13d10c33-0ca9-47d5-ac49-19391cebfb39-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.707676 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cf9c5\" (UniqueName: \"kubernetes.io/projected/13d10c33-0ca9-47d5-ac49-19391cebfb39-kube-api-access-cf9c5\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.707757 4946 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.707839 4946 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/13d10c33-0ca9-47d5-ac49-19391cebfb39-config-data-default\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.707895 4946 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/13d10c33-0ca9-47d5-ac49-19391cebfb39-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.707953 4946 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/13d10c33-0ca9-47d5-ac49-19391cebfb39-kolla-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.708010 4946 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/13d10c33-0ca9-47d5-ac49-19391cebfb39-config-data-generated\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.708061 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13d10c33-0ca9-47d5-ac49-19391cebfb39-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.725921 4946 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.809519 4946 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.905178 4946 scope.go:117] "RemoveContainer" containerID="cdc13d7826d930cfa67ccc8a5fb14e61b1d536afb075f29af7e2cdabf489a9d0" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.936584 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_c274eefa-3598-470b-9b07-25928903d425/ovn-northd/0.log" Nov 28 07:19:46 crc 
kubenswrapper[4946]: I1128 07:19:46.936641 4946 generic.go:334] "Generic (PLEG): container finished" podID="c274eefa-3598-470b-9b07-25928903d425" containerID="7e494d19861cee1bf5cc7147950495eba143f3dbb2bec8c94fbf0c3c3c16f2dd" exitCode=139 Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.936713 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"c274eefa-3598-470b-9b07-25928903d425","Type":"ContainerDied","Data":"7e494d19861cee1bf5cc7147950495eba143f3dbb2bec8c94fbf0c3c3c16f2dd"} Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.942751 4946 generic.go:334] "Generic (PLEG): container finished" podID="59fdca77-b333-44be-ab8c-96a2f4bcc340" containerID="ff38770809a612b7b65b4599d06925432273b10bf7ef83575ef3bffae3781506" exitCode=0 Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.942846 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"59fdca77-b333-44be-ab8c-96a2f4bcc340","Type":"ContainerDied","Data":"ff38770809a612b7b65b4599d06925432273b10bf7ef83575ef3bffae3781506"} Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.953311 4946 generic.go:334] "Generic (PLEG): container finished" podID="13d10c33-0ca9-47d5-ac49-19391cebfb39" containerID="63032ee424994bedde2af09f7484105017c4f440ba2229de43942a3d82430207" exitCode=0 Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.953370 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"13d10c33-0ca9-47d5-ac49-19391cebfb39","Type":"ContainerDied","Data":"63032ee424994bedde2af09f7484105017c4f440ba2229de43942a3d82430207"} Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.953396 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"13d10c33-0ca9-47d5-ac49-19391cebfb39","Type":"ContainerDied","Data":"3f22782a39a881891b144df25fbafd85c9abad50dd1c8d021984523d44369e27"} Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.953477 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.961813 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.961843 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutronc822-account-delete-z6qcr" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.961881 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican1925-account-delete-gjf5l" Nov 28 07:19:46 crc kubenswrapper[4946]: I1128 07:19:46.961901 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/novaapid713-account-delete-lj6h9" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.058296 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novaapid713-account-delete-lj6h9"] Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.061792 4946 scope.go:117] "RemoveContainer" containerID="63032ee424994bedde2af09f7484105017c4f440ba2229de43942a3d82430207" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.080162 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/novaapid713-account-delete-lj6h9"] Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.102547 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.106007 4946 scope.go:117] "RemoveContainer" containerID="f4f9909c167a1c79bb8b42fbdbe2ecd7c4b12b8dfe0b0cc573d4848c3dfb98d4" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.113742 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.128070 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican1925-account-delete-gjf5l"] Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.135808 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican1925-account-delete-gjf5l"] Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.142419 4946 scope.go:117] "RemoveContainer" containerID="63032ee424994bedde2af09f7484105017c4f440ba2229de43942a3d82430207" Nov 28 07:19:47 crc kubenswrapper[4946]: E1128 07:19:47.143086 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63032ee424994bedde2af09f7484105017c4f440ba2229de43942a3d82430207\": container with ID starting with 63032ee424994bedde2af09f7484105017c4f440ba2229de43942a3d82430207 not found: ID does not exist" containerID="63032ee424994bedde2af09f7484105017c4f440ba2229de43942a3d82430207" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.143163 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63032ee424994bedde2af09f7484105017c4f440ba2229de43942a3d82430207"} err="failed to get container status \"63032ee424994bedde2af09f7484105017c4f440ba2229de43942a3d82430207\": rpc error: code = NotFound desc = could not find container \"63032ee424994bedde2af09f7484105017c4f440ba2229de43942a3d82430207\": container with ID starting with 63032ee424994bedde2af09f7484105017c4f440ba2229de43942a3d82430207 not found: ID does not exist" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.143195 4946 scope.go:117] "RemoveContainer" containerID="f4f9909c167a1c79bb8b42fbdbe2ecd7c4b12b8dfe0b0cc573d4848c3dfb98d4" Nov 28 07:19:47 crc kubenswrapper[4946]: E1128 07:19:47.144390 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4f9909c167a1c79bb8b42fbdbe2ecd7c4b12b8dfe0b0cc573d4848c3dfb98d4\": container with ID starting with f4f9909c167a1c79bb8b42fbdbe2ecd7c4b12b8dfe0b0cc573d4848c3dfb98d4 not found: ID does not exist" containerID="f4f9909c167a1c79bb8b42fbdbe2ecd7c4b12b8dfe0b0cc573d4848c3dfb98d4" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.144422 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4f9909c167a1c79bb8b42fbdbe2ecd7c4b12b8dfe0b0cc573d4848c3dfb98d4"} err="failed to get 
container status \"f4f9909c167a1c79bb8b42fbdbe2ecd7c4b12b8dfe0b0cc573d4848c3dfb98d4\": rpc error: code = NotFound desc = could not find container \"f4f9909c167a1c79bb8b42fbdbe2ecd7c4b12b8dfe0b0cc573d4848c3dfb98d4\": container with ID starting with f4f9909c167a1c79bb8b42fbdbe2ecd7c4b12b8dfe0b0cc573d4848c3dfb98d4 not found: ID does not exist" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.153012 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.154649 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.169098 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"] Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.184727 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutronc822-account-delete-z6qcr"] Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.193885 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutronc822-account-delete-z6qcr"] Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.213349 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_c274eefa-3598-470b-9b07-25928903d425/ovn-northd/0.log" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.213482 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.215423 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ggpl\" (UniqueName: \"kubernetes.io/projected/c274eefa-3598-470b-9b07-25928903d425-kube-api-access-5ggpl\") pod \"c274eefa-3598-470b-9b07-25928903d425\" (UID: \"c274eefa-3598-470b-9b07-25928903d425\") " Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.215526 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/59fdca77-b333-44be-ab8c-96a2f4bcc340-erlang-cookie-secret\") pod \"59fdca77-b333-44be-ab8c-96a2f4bcc340\" (UID: \"59fdca77-b333-44be-ab8c-96a2f4bcc340\") " Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.215549 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/c274eefa-3598-470b-9b07-25928903d425-ovn-northd-tls-certs\") pod \"c274eefa-3598-470b-9b07-25928903d425\" (UID: \"c274eefa-3598-470b-9b07-25928903d425\") " Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.215590 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/59fdca77-b333-44be-ab8c-96a2f4bcc340-rabbitmq-tls\") pod \"59fdca77-b333-44be-ab8c-96a2f4bcc340\" (UID: \"59fdca77-b333-44be-ab8c-96a2f4bcc340\") " Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.215625 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/59fdca77-b333-44be-ab8c-96a2f4bcc340-pod-info\") pod \"59fdca77-b333-44be-ab8c-96a2f4bcc340\" (UID: \"59fdca77-b333-44be-ab8c-96a2f4bcc340\") " Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.216648 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") pod \"59fdca77-b333-44be-ab8c-96a2f4bcc340\" (UID: \"59fdca77-b333-44be-ab8c-96a2f4bcc340\") " Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.216955 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/59fdca77-b333-44be-ab8c-96a2f4bcc340-config-data\") pod \"59fdca77-b333-44be-ab8c-96a2f4bcc340\" (UID: \"59fdca77-b333-44be-ab8c-96a2f4bcc340\") " Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.216990 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/59fdca77-b333-44be-ab8c-96a2f4bcc340-rabbitmq-confd\") pod \"59fdca77-b333-44be-ab8c-96a2f4bcc340\" (UID: \"59fdca77-b333-44be-ab8c-96a2f4bcc340\") " Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.217064 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c274eefa-3598-470b-9b07-25928903d425-scripts\") pod \"c274eefa-3598-470b-9b07-25928903d425\" (UID: \"c274eefa-3598-470b-9b07-25928903d425\") " Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.217105 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/59fdca77-b333-44be-ab8c-96a2f4bcc340-plugins-conf\") pod \"59fdca77-b333-44be-ab8c-96a2f4bcc340\" (UID: \"59fdca77-b333-44be-ab8c-96a2f4bcc340\") " Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.217163 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c274eefa-3598-470b-9b07-25928903d425-ovn-rundir\") pod \"c274eefa-3598-470b-9b07-25928903d425\" (UID: \"c274eefa-3598-470b-9b07-25928903d425\") " Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.217197 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgx22\" (UniqueName: \"kubernetes.io/projected/59fdca77-b333-44be-ab8c-96a2f4bcc340-kube-api-access-tgx22\") pod \"59fdca77-b333-44be-ab8c-96a2f4bcc340\" (UID: \"59fdca77-b333-44be-ab8c-96a2f4bcc340\") " Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.217222 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/59fdca77-b333-44be-ab8c-96a2f4bcc340-rabbitmq-plugins\") pod \"59fdca77-b333-44be-ab8c-96a2f4bcc340\" (UID: \"59fdca77-b333-44be-ab8c-96a2f4bcc340\") " Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.217252 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c274eefa-3598-470b-9b07-25928903d425-combined-ca-bundle\") pod \"c274eefa-3598-470b-9b07-25928903d425\" (UID: \"c274eefa-3598-470b-9b07-25928903d425\") " Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.217276 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c274eefa-3598-470b-9b07-25928903d425-config\") pod \"c274eefa-3598-470b-9b07-25928903d425\" (UID: \"c274eefa-3598-470b-9b07-25928903d425\") " Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.217309 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c274eefa-3598-470b-9b07-25928903d425-metrics-certs-tls-certs\") pod \"c274eefa-3598-470b-9b07-25928903d425\" (UID: \"c274eefa-3598-470b-9b07-25928903d425\") " Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.217378 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/59fdca77-b333-44be-ab8c-96a2f4bcc340-rabbitmq-erlang-cookie\") pod \"59fdca77-b333-44be-ab8c-96a2f4bcc340\" (UID: \"59fdca77-b333-44be-ab8c-96a2f4bcc340\") " Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.217408 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/59fdca77-b333-44be-ab8c-96a2f4bcc340-server-conf\") pod \"59fdca77-b333-44be-ab8c-96a2f4bcc340\" (UID: \"59fdca77-b333-44be-ab8c-96a2f4bcc340\") " Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.219043 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c274eefa-3598-470b-9b07-25928903d425-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "c274eefa-3598-470b-9b07-25928903d425" (UID: "c274eefa-3598-470b-9b07-25928903d425"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.223046 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c274eefa-3598-470b-9b07-25928903d425-scripts" (OuterVolumeSpecName: "scripts") pod "c274eefa-3598-470b-9b07-25928903d425" (UID: "c274eefa-3598-470b-9b07-25928903d425"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.227176 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59fdca77-b333-44be-ab8c-96a2f4bcc340-kube-api-access-tgx22" (OuterVolumeSpecName: "kube-api-access-tgx22") pod "59fdca77-b333-44be-ab8c-96a2f4bcc340" (UID: "59fdca77-b333-44be-ab8c-96a2f4bcc340"). InnerVolumeSpecName "kube-api-access-tgx22". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.227252 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59fdca77-b333-44be-ab8c-96a2f4bcc340-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "59fdca77-b333-44be-ab8c-96a2f4bcc340" (UID: "59fdca77-b333-44be-ab8c-96a2f4bcc340"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.227567 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/59fdca77-b333-44be-ab8c-96a2f4bcc340-pod-info" (OuterVolumeSpecName: "pod-info") pod "59fdca77-b333-44be-ab8c-96a2f4bcc340" (UID: "59fdca77-b333-44be-ab8c-96a2f4bcc340"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.227652 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "persistence") pod "59fdca77-b333-44be-ab8c-96a2f4bcc340" (UID: "59fdca77-b333-44be-ab8c-96a2f4bcc340"). InnerVolumeSpecName "local-storage11-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.227682 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59fdca77-b333-44be-ab8c-96a2f4bcc340-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "59fdca77-b333-44be-ab8c-96a2f4bcc340" (UID: "59fdca77-b333-44be-ab8c-96a2f4bcc340"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.227916 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59fdca77-b333-44be-ab8c-96a2f4bcc340-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "59fdca77-b333-44be-ab8c-96a2f4bcc340" (UID: "59fdca77-b333-44be-ab8c-96a2f4bcc340"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.227951 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59fdca77-b333-44be-ab8c-96a2f4bcc340-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "59fdca77-b333-44be-ab8c-96a2f4bcc340" (UID: "59fdca77-b333-44be-ab8c-96a2f4bcc340"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.228337 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c274eefa-3598-470b-9b07-25928903d425-config" (OuterVolumeSpecName: "config") pod "c274eefa-3598-470b-9b07-25928903d425" (UID: "c274eefa-3598-470b-9b07-25928903d425"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.228678 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c274eefa-3598-470b-9b07-25928903d425-kube-api-access-5ggpl" (OuterVolumeSpecName: "kube-api-access-5ggpl") pod "c274eefa-3598-470b-9b07-25928903d425" (UID: "c274eefa-3598-470b-9b07-25928903d425"). InnerVolumeSpecName "kube-api-access-5ggpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.236378 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59fdca77-b333-44be-ab8c-96a2f4bcc340-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "59fdca77-b333-44be-ab8c-96a2f4bcc340" (UID: "59fdca77-b333-44be-ab8c-96a2f4bcc340"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.269133 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59fdca77-b333-44be-ab8c-96a2f4bcc340-config-data" (OuterVolumeSpecName: "config-data") pod "59fdca77-b333-44be-ab8c-96a2f4bcc340" (UID: "59fdca77-b333-44be-ab8c-96a2f4bcc340"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.276352 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59fdca77-b333-44be-ab8c-96a2f4bcc340-server-conf" (OuterVolumeSpecName: "server-conf") pod "59fdca77-b333-44be-ab8c-96a2f4bcc340" (UID: "59fdca77-b333-44be-ab8c-96a2f4bcc340"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.287640 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c274eefa-3598-470b-9b07-25928903d425-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c274eefa-3598-470b-9b07-25928903d425" (UID: "c274eefa-3598-470b-9b07-25928903d425"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.297180 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c274eefa-3598-470b-9b07-25928903d425-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "c274eefa-3598-470b-9b07-25928903d425" (UID: "c274eefa-3598-470b-9b07-25928903d425"). InnerVolumeSpecName "ovn-northd-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.309678 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59fdca77-b333-44be-ab8c-96a2f4bcc340-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "59fdca77-b333-44be-ab8c-96a2f4bcc340" (UID: "59fdca77-b333-44be-ab8c-96a2f4bcc340"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.319181 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c274eefa-3598-470b-9b07-25928903d425-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "c274eefa-3598-470b-9b07-25928903d425" (UID: "c274eefa-3598-470b-9b07-25928903d425"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.322674 4946 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/59fdca77-b333-44be-ab8c-96a2f4bcc340-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.322708 4946 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/59fdca77-b333-44be-ab8c-96a2f4bcc340-server-conf\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.322718 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ggpl\" (UniqueName: \"kubernetes.io/projected/c274eefa-3598-470b-9b07-25928903d425-kube-api-access-5ggpl\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.322727 4946 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/59fdca77-b333-44be-ab8c-96a2f4bcc340-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.322737 4946 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/c274eefa-3598-470b-9b07-25928903d425-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.322745 4946 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/59fdca77-b333-44be-ab8c-96a2f4bcc340-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.322753 4946 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/59fdca77-b333-44be-ab8c-96a2f4bcc340-pod-info\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.322776 4946 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.322786 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/59fdca77-b333-44be-ab8c-96a2f4bcc340-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.322794 4946 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/59fdca77-b333-44be-ab8c-96a2f4bcc340-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.322803 4946 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c274eefa-3598-470b-9b07-25928903d425-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.322812 4946 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/59fdca77-b333-44be-ab8c-96a2f4bcc340-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.322820 4946 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c274eefa-3598-470b-9b07-25928903d425-ovn-rundir\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:47 crc 
kubenswrapper[4946]: I1128 07:19:47.322828 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgx22\" (UniqueName: \"kubernetes.io/projected/59fdca77-b333-44be-ab8c-96a2f4bcc340-kube-api-access-tgx22\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.322836 4946 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/59fdca77-b333-44be-ab8c-96a2f4bcc340-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.322845 4946 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c274eefa-3598-470b-9b07-25928903d425-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.322853 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c274eefa-3598-470b-9b07-25928903d425-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.322862 4946 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c274eefa-3598-470b-9b07-25928903d425-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.337893 4946 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.374847 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-77dfbb6d46-cf289" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.424448 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37-combined-ca-bundle\") pod \"2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37\" (UID: \"2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37\") " Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.424523 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37-fernet-keys\") pod \"2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37\" (UID: \"2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37\") " Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.424642 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37-config-data\") pod \"2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37\" (UID: \"2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37\") " Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.424686 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92kmg\" (UniqueName: \"kubernetes.io/projected/2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37-kube-api-access-92kmg\") pod \"2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37\" (UID: \"2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37\") " Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.424888 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37-internal-tls-certs\") pod \"2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37\" (UID: \"2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37\") " Nov 28 07:19:47 crc 
kubenswrapper[4946]: I1128 07:19:47.425521 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37-scripts\") pod \"2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37\" (UID: \"2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37\") " Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.425555 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37-public-tls-certs\") pod \"2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37\" (UID: \"2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37\") " Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.425594 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37-credential-keys\") pod \"2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37\" (UID: \"2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37\") " Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.426037 4946 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.428161 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37" (UID: "2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.428202 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37-kube-api-access-92kmg" (OuterVolumeSpecName: "kube-api-access-92kmg") pod "2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37" (UID: "2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37"). InnerVolumeSpecName "kube-api-access-92kmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.430887 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37-scripts" (OuterVolumeSpecName: "scripts") pod "2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37" (UID: "2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.431414 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37" (UID: "2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.455192 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37" (UID: "2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.455220 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37-config-data" (OuterVolumeSpecName: "config-data") pod "2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37" (UID: "2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.465025 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37" (UID: "2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.473505 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37" (UID: "2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.528142 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.528174 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92kmg\" (UniqueName: \"kubernetes.io/projected/2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37-kube-api-access-92kmg\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.528185 4946 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.528194 4946 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.528204 4946 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.528212 4946 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.528221 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.528229 4946 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:47 crc kubenswrapper[4946]: E1128 
07:19:47.528282 4946 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 28 07:19:47 crc kubenswrapper[4946]: E1128 07:19:47.528376 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b138df41-1f0c-4edb-9546-e0f5ec16cf06-operator-scripts podName:b138df41-1f0c-4edb-9546-e0f5ec16cf06 nodeName:}" failed. No retries permitted until 2025-11-28 07:19:51.528356115 +0000 UTC m=+1645.906421226 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/b138df41-1f0c-4edb-9546-e0f5ec16cf06-operator-scripts") pod "novacell0d35d-account-delete-c848z" (UID: "b138df41-1f0c-4edb-9546-e0f5ec16cf06") : configmap "openstack-scripts" not found Nov 28 07:19:47 crc kubenswrapper[4946]: E1128 07:19:47.948828 4946 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Nov 28 07:19:47 crc kubenswrapper[4946]: E1128 07:19:47.949158 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3521840d-60d0-450c-8c05-7e2ad0fc4e97-config-data podName:3521840d-60d0-450c-8c05-7e2ad0fc4e97 nodeName:}" failed. No retries permitted until 2025-11-28 07:19:55.949131693 +0000 UTC m=+1650.327196804 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/3521840d-60d0-450c-8c05-7e2ad0fc4e97-config-data") pod "rabbitmq-cell1-server-0" (UID: "3521840d-60d0-450c-8c05-7e2ad0fc4e97") : configmap "rabbitmq-cell1-config-data" not found Nov 28 07:19:47 crc kubenswrapper[4946]: E1128 07:19:47.972711 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c5e052846b04c3f56ae570cc17f4288914d22c94a98566c8edd7f06e4a7f7443 is running failed: container process not found" containerID="c5e052846b04c3f56ae570cc17f4288914d22c94a98566c8edd7f06e4a7f7443" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 28 07:19:47 crc kubenswrapper[4946]: E1128 07:19:47.973133 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c5e052846b04c3f56ae570cc17f4288914d22c94a98566c8edd7f06e4a7f7443 is running failed: container process not found" containerID="c5e052846b04c3f56ae570cc17f4288914d22c94a98566c8edd7f06e4a7f7443" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.973352 4946 generic.go:334] "Generic (PLEG): container finished" podID="3521840d-60d0-450c-8c05-7e2ad0fc4e97" containerID="5a518b7e7d038229a500bd8709ec0d601f6bd6d8f0d81ac3077b20e90a835629" exitCode=0 Nov 28 07:19:47 crc kubenswrapper[4946]: E1128 07:19:47.973384 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c5e052846b04c3f56ae570cc17f4288914d22c94a98566c8edd7f06e4a7f7443 is running failed: container process not found" containerID="c5e052846b04c3f56ae570cc17f4288914d22c94a98566c8edd7f06e4a7f7443" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.973434 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"3521840d-60d0-450c-8c05-7e2ad0fc4e97","Type":"ContainerDied","Data":"5a518b7e7d038229a500bd8709ec0d601f6bd6d8f0d81ac3077b20e90a835629"} Nov 28 07:19:47 crc kubenswrapper[4946]: E1128 07:19:47.973433 4946 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c5e052846b04c3f56ae570cc17f4288914d22c94a98566c8edd7f06e4a7f7443 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-gpnz5" podUID="944abeae-3f1d-4391-a375-b64ed9c17b14" containerName="ovsdb-server" Nov 28 07:19:47 crc kubenswrapper[4946]: E1128 07:19:47.974920 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b12ef267dfaa1906a3e883237d5228860410b06e258081a1cabe06d004313ce6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 28 07:19:47 crc kubenswrapper[4946]: E1128 07:19:47.976932 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b12ef267dfaa1906a3e883237d5228860410b06e258081a1cabe06d004313ce6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.978325 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"59fdca77-b333-44be-ab8c-96a2f4bcc340","Type":"ContainerDied","Data":"301e2cda2647a234e7bedf243c381acf80fcde3f36946e5a978cd12c8fc3475a"} Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.978368 4946 scope.go:117] "RemoveContainer" containerID="ff38770809a612b7b65b4599d06925432273b10bf7ef83575ef3bffae3781506" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.978415 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 28 07:19:47 crc kubenswrapper[4946]: E1128 07:19:47.978801 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b12ef267dfaa1906a3e883237d5228860410b06e258081a1cabe06d004313ce6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 28 07:19:47 crc kubenswrapper[4946]: E1128 07:19:47.978844 4946 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-gpnz5" podUID="944abeae-3f1d-4391-a375-b64ed9c17b14" containerName="ovs-vswitchd" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.988009 4946 generic.go:334] "Generic (PLEG): container finished" podID="2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37" containerID="d7ff90ba4ae10a7961b71df89adf06b1201b956ef3dcb71ad66f05d75ef803ef" exitCode=0 Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.988157 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-77dfbb6d46-cf289" Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.988584 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-77dfbb6d46-cf289" event={"ID":"2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37","Type":"ContainerDied","Data":"d7ff90ba4ae10a7961b71df89adf06b1201b956ef3dcb71ad66f05d75ef803ef"} Nov 28 07:19:47 crc kubenswrapper[4946]: I1128 07:19:47.988648 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-77dfbb6d46-cf289" event={"ID":"2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37","Type":"ContainerDied","Data":"361c58d9d9eda81e7b6f10f7ce55798b58eb61a8af2c6578a63a57f1664a299b"} Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.028217 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13d10c33-0ca9-47d5-ac49-19391cebfb39" path="/var/lib/kubelet/pods/13d10c33-0ca9-47d5-ac49-19391cebfb39/volumes" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.029107 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1486d432-b3a6-4470-b145-076dafbfca67" path="/var/lib/kubelet/pods/1486d432-b3a6-4470-b145-076dafbfca67/volumes" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.029657 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_c274eefa-3598-470b-9b07-25928903d425/ovn-northd/0.log" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.029886 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.029930 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="177d3d9f-5e48-4b4e-9329-9d46daa35557" path="/var/lib/kubelet/pods/177d3d9f-5e48-4b4e-9329-9d46daa35557/volumes" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.031623 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="206851cb-4673-4ce1-b038-c2e425d306b7" path="/var/lib/kubelet/pods/206851cb-4673-4ce1-b038-c2e425d306b7/volumes" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.032564 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30b2d9aa-848b-4f33-9bd8-921f5de5ab36" path="/var/lib/kubelet/pods/30b2d9aa-848b-4f33-9bd8-921f5de5ab36/volumes" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.033890 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="313c5837-e776-49ef-8689-14f6f70d31a1" path="/var/lib/kubelet/pods/313c5837-e776-49ef-8689-14f6f70d31a1/volumes" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.034707 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52101de8-a25c-4372-9df3-3f090167ff5f" path="/var/lib/kubelet/pods/52101de8-a25c-4372-9df3-3f090167ff5f/volumes" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.035526 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="587a9b3d-1634-4af6-96d2-e60c03a7d75f" path="/var/lib/kubelet/pods/587a9b3d-1634-4af6-96d2-e60c03a7d75f/volumes" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.037968 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62fdad5e-59c5-4d8f-87da-79b384fb82be" path="/var/lib/kubelet/pods/62fdad5e-59c5-4d8f-87da-79b384fb82be/volumes" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.038664 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b145d4b-42d7-4d88-b847-d5b470797f4c" 
path="/var/lib/kubelet/pods/7b145d4b-42d7-4d88-b847-d5b470797f4c/volumes" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.039382 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="813134a8-463b-4f7d-8160-ceb1c5a96853" path="/var/lib/kubelet/pods/813134a8-463b-4f7d-8160-ceb1c5a96853/volumes" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.040706 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f7aa965-bfc0-4db1-a2d2-07fe02be9f18" path="/var/lib/kubelet/pods/9f7aa965-bfc0-4db1-a2d2-07fe02be9f18/volumes" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.042191 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9847c34-a2be-405c-8bd8-34ba251d218d" path="/var/lib/kubelet/pods/b9847c34-a2be-405c-8bd8-34ba251d218d/volumes" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.043013 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3ba1954-566f-4e25-8312-855a58935547" path="/var/lib/kubelet/pods/c3ba1954-566f-4e25-8312-855a58935547/volumes" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.044142 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db7322c8-b99d-4970-85c0-218d683f1ca3" path="/var/lib/kubelet/pods/db7322c8-b99d-4970-85c0-218d683f1ca3/volumes" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.046220 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f80c95e6-2981-4755-ada4-26bbf1372693" path="/var/lib/kubelet/pods/f80c95e6-2981-4755-ada4-26bbf1372693/volumes" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.049131 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"c274eefa-3598-470b-9b07-25928903d425","Type":"ContainerDied","Data":"987f48672497b27e918fb98693d89ce4a72d2e67cde202c100496eda8ae7515c"} Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.072705 4946 scope.go:117] "RemoveContainer" containerID="a6bb9670947ddc29c15771ad78d06a158426d4a0b2e7d5a9827785ca30e28082" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.096034 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-77dfbb6d46-cf289"] Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.116692 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-77dfbb6d46-cf289"] Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.130984 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.136448 4946 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-65f88f985c-d964v" podUID="ea608140-3ad6-4c56-9754-ec74fc292781" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.164:8080/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.136884 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.137045 4946 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-65f88f985c-d964v" podUID="ea608140-3ad6-4c56-9754-ec74fc292781" containerName="proxy-server" probeResult="failure" output="Get \"https://10.217.0.164:8080/healthcheck\": dial tcp 10.217.0.164:8080: i/o timeout" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.143237 4946 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.154957 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.159799 4946 scope.go:117] "RemoveContainer" containerID="d7ff90ba4ae10a7961b71df89adf06b1201b956ef3dcb71ad66f05d75ef803ef" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.186993 4946 scope.go:117] "RemoveContainer" containerID="d7ff90ba4ae10a7961b71df89adf06b1201b956ef3dcb71ad66f05d75ef803ef" Nov 28 07:19:48 crc kubenswrapper[4946]: E1128 07:19:48.194289 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7ff90ba4ae10a7961b71df89adf06b1201b956ef3dcb71ad66f05d75ef803ef\": container with ID starting with d7ff90ba4ae10a7961b71df89adf06b1201b956ef3dcb71ad66f05d75ef803ef not found: ID does not exist" containerID="d7ff90ba4ae10a7961b71df89adf06b1201b956ef3dcb71ad66f05d75ef803ef" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.194346 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7ff90ba4ae10a7961b71df89adf06b1201b956ef3dcb71ad66f05d75ef803ef"} err="failed to get container status \"d7ff90ba4ae10a7961b71df89adf06b1201b956ef3dcb71ad66f05d75ef803ef\": rpc error: code = NotFound desc = could not find container \"d7ff90ba4ae10a7961b71df89adf06b1201b956ef3dcb71ad66f05d75ef803ef\": container with ID starting with d7ff90ba4ae10a7961b71df89adf06b1201b956ef3dcb71ad66f05d75ef803ef not found: ID does not exist" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.194377 4946 scope.go:117] "RemoveContainer" containerID="f8b2e10c106db832955230f63c48caf7a4ee259e84e9b344f03d06060ead1493" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.222394 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-kd7m8"] Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.233317 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-kd7m8"] Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.235736 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glanceb763-account-delete-mmnjs"] Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.241200 4946 scope.go:117] "RemoveContainer" containerID="7e494d19861cee1bf5cc7147950495eba143f3dbb2bec8c94fbf0c3c3c16f2dd" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.241350 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-b763-account-create-update-vz6db"] Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.245389 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glanceb763-account-delete-mmnjs"] Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.250500 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-b763-account-create-update-vz6db"] Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.305823 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.370180 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3521840d-60d0-450c-8c05-7e2ad0fc4e97-rabbitmq-erlang-cookie\") pod \"3521840d-60d0-450c-8c05-7e2ad0fc4e97\" (UID: \"3521840d-60d0-450c-8c05-7e2ad0fc4e97\") " Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.370256 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8r5q4\" (UniqueName: \"kubernetes.io/projected/3521840d-60d0-450c-8c05-7e2ad0fc4e97-kube-api-access-8r5q4\") pod \"3521840d-60d0-450c-8c05-7e2ad0fc4e97\" (UID: \"3521840d-60d0-450c-8c05-7e2ad0fc4e97\") " Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.370336 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3521840d-60d0-450c-8c05-7e2ad0fc4e97-rabbitmq-tls\") pod \"3521840d-60d0-450c-8c05-7e2ad0fc4e97\" (UID: \"3521840d-60d0-450c-8c05-7e2ad0fc4e97\") " Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.370359 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3521840d-60d0-450c-8c05-7e2ad0fc4e97-rabbitmq-confd\") pod \"3521840d-60d0-450c-8c05-7e2ad0fc4e97\" (UID: \"3521840d-60d0-450c-8c05-7e2ad0fc4e97\") " Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.370381 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3521840d-60d0-450c-8c05-7e2ad0fc4e97-pod-info\") pod \"3521840d-60d0-450c-8c05-7e2ad0fc4e97\" (UID: \"3521840d-60d0-450c-8c05-7e2ad0fc4e97\") " Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.370400 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"3521840d-60d0-450c-8c05-7e2ad0fc4e97\" (UID: \"3521840d-60d0-450c-8c05-7e2ad0fc4e97\") " Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.370420 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3521840d-60d0-450c-8c05-7e2ad0fc4e97-server-conf\") pod \"3521840d-60d0-450c-8c05-7e2ad0fc4e97\" (UID: \"3521840d-60d0-450c-8c05-7e2ad0fc4e97\") " Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.370441 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3521840d-60d0-450c-8c05-7e2ad0fc4e97-config-data\") pod \"3521840d-60d0-450c-8c05-7e2ad0fc4e97\" (UID: \"3521840d-60d0-450c-8c05-7e2ad0fc4e97\") " Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.370500 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3521840d-60d0-450c-8c05-7e2ad0fc4e97-rabbitmq-plugins\") pod \"3521840d-60d0-450c-8c05-7e2ad0fc4e97\" (UID: \"3521840d-60d0-450c-8c05-7e2ad0fc4e97\") " Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.370517 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3521840d-60d0-450c-8c05-7e2ad0fc4e97-plugins-conf\") pod \"3521840d-60d0-450c-8c05-7e2ad0fc4e97\" (UID: 
\"3521840d-60d0-450c-8c05-7e2ad0fc4e97\") " Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.370550 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3521840d-60d0-450c-8c05-7e2ad0fc4e97-erlang-cookie-secret\") pod \"3521840d-60d0-450c-8c05-7e2ad0fc4e97\" (UID: \"3521840d-60d0-450c-8c05-7e2ad0fc4e97\") " Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.371682 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3521840d-60d0-450c-8c05-7e2ad0fc4e97-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "3521840d-60d0-450c-8c05-7e2ad0fc4e97" (UID: "3521840d-60d0-450c-8c05-7e2ad0fc4e97"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.373307 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3521840d-60d0-450c-8c05-7e2ad0fc4e97-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "3521840d-60d0-450c-8c05-7e2ad0fc4e97" (UID: "3521840d-60d0-450c-8c05-7e2ad0fc4e97"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.373619 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3521840d-60d0-450c-8c05-7e2ad0fc4e97-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "3521840d-60d0-450c-8c05-7e2ad0fc4e97" (UID: "3521840d-60d0-450c-8c05-7e2ad0fc4e97"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.377084 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3521840d-60d0-450c-8c05-7e2ad0fc4e97-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "3521840d-60d0-450c-8c05-7e2ad0fc4e97" (UID: "3521840d-60d0-450c-8c05-7e2ad0fc4e97"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.377667 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3521840d-60d0-450c-8c05-7e2ad0fc4e97-kube-api-access-8r5q4" (OuterVolumeSpecName: "kube-api-access-8r5q4") pod "3521840d-60d0-450c-8c05-7e2ad0fc4e97" (UID: "3521840d-60d0-450c-8c05-7e2ad0fc4e97"). InnerVolumeSpecName "kube-api-access-8r5q4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.377754 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3521840d-60d0-450c-8c05-7e2ad0fc4e97-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "3521840d-60d0-450c-8c05-7e2ad0fc4e97" (UID: "3521840d-60d0-450c-8c05-7e2ad0fc4e97"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.377901 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/3521840d-60d0-450c-8c05-7e2ad0fc4e97-pod-info" (OuterVolumeSpecName: "pod-info") pod "3521840d-60d0-450c-8c05-7e2ad0fc4e97" (UID: "3521840d-60d0-450c-8c05-7e2ad0fc4e97"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.377926 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "persistence") pod "3521840d-60d0-450c-8c05-7e2ad0fc4e97" (UID: "3521840d-60d0-450c-8c05-7e2ad0fc4e97"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.392391 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3521840d-60d0-450c-8c05-7e2ad0fc4e97-config-data" (OuterVolumeSpecName: "config-data") pod "3521840d-60d0-450c-8c05-7e2ad0fc4e97" (UID: "3521840d-60d0-450c-8c05-7e2ad0fc4e97"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.408940 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3521840d-60d0-450c-8c05-7e2ad0fc4e97-server-conf" (OuterVolumeSpecName: "server-conf") pod "3521840d-60d0-450c-8c05-7e2ad0fc4e97" (UID: "3521840d-60d0-450c-8c05-7e2ad0fc4e97"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.412510 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-97bfd767f-7zg9s" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.452731 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3521840d-60d0-450c-8c05-7e2ad0fc4e97-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "3521840d-60d0-450c-8c05-7e2ad0fc4e97" (UID: "3521840d-60d0-450c-8c05-7e2ad0fc4e97"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.473312 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxjpz\" (UniqueName: \"kubernetes.io/projected/4619b857-5e70-4ab3-807d-d233c9d9223c-kube-api-access-xxjpz\") pod \"4619b857-5e70-4ab3-807d-d233c9d9223c\" (UID: \"4619b857-5e70-4ab3-807d-d233c9d9223c\") " Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.473385 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4619b857-5e70-4ab3-807d-d233c9d9223c-config-data-custom\") pod \"4619b857-5e70-4ab3-807d-d233c9d9223c\" (UID: \"4619b857-5e70-4ab3-807d-d233c9d9223c\") " Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.473582 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4619b857-5e70-4ab3-807d-d233c9d9223c-logs\") pod \"4619b857-5e70-4ab3-807d-d233c9d9223c\" (UID: \"4619b857-5e70-4ab3-807d-d233c9d9223c\") " Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.473627 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4619b857-5e70-4ab3-807d-d233c9d9223c-config-data\") pod \"4619b857-5e70-4ab3-807d-d233c9d9223c\" (UID: \"4619b857-5e70-4ab3-807d-d233c9d9223c\") " Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.473682 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4619b857-5e70-4ab3-807d-d233c9d9223c-combined-ca-bundle\") pod \"4619b857-5e70-4ab3-807d-d233c9d9223c\" (UID: \"4619b857-5e70-4ab3-807d-d233c9d9223c\") " Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.474034 4946 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3521840d-60d0-450c-8c05-7e2ad0fc4e97-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.474053 4946 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3521840d-60d0-450c-8c05-7e2ad0fc4e97-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.474068 4946 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3521840d-60d0-450c-8c05-7e2ad0fc4e97-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.474079 4946 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3521840d-60d0-450c-8c05-7e2ad0fc4e97-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.474092 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8r5q4\" (UniqueName: \"kubernetes.io/projected/3521840d-60d0-450c-8c05-7e2ad0fc4e97-kube-api-access-8r5q4\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.474101 4946 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3521840d-60d0-450c-8c05-7e2ad0fc4e97-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.474109 4946 
reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3521840d-60d0-450c-8c05-7e2ad0fc4e97-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.474117 4946 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3521840d-60d0-450c-8c05-7e2ad0fc4e97-pod-info\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.474137 4946 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.474145 4946 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3521840d-60d0-450c-8c05-7e2ad0fc4e97-server-conf\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.474154 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3521840d-60d0-450c-8c05-7e2ad0fc4e97-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.474017 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4619b857-5e70-4ab3-807d-d233c9d9223c-logs" (OuterVolumeSpecName: "logs") pod "4619b857-5e70-4ab3-807d-d233c9d9223c" (UID: "4619b857-5e70-4ab3-807d-d233c9d9223c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.477934 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4619b857-5e70-4ab3-807d-d233c9d9223c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4619b857-5e70-4ab3-807d-d233c9d9223c" (UID: "4619b857-5e70-4ab3-807d-d233c9d9223c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.478952 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4619b857-5e70-4ab3-807d-d233c9d9223c-kube-api-access-xxjpz" (OuterVolumeSpecName: "kube-api-access-xxjpz") pod "4619b857-5e70-4ab3-807d-d233c9d9223c" (UID: "4619b857-5e70-4ab3-807d-d233c9d9223c"). InnerVolumeSpecName "kube-api-access-xxjpz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.492806 4946 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.497612 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4619b857-5e70-4ab3-807d-d233c9d9223c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4619b857-5e70-4ab3-807d-d233c9d9223c" (UID: "4619b857-5e70-4ab3-807d-d233c9d9223c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.519753 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4619b857-5e70-4ab3-807d-d233c9d9223c-config-data" (OuterVolumeSpecName: "config-data") pod "4619b857-5e70-4ab3-807d-d233c9d9223c" (UID: "4619b857-5e70-4ab3-807d-d233c9d9223c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.575796 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4619b857-5e70-4ab3-807d-d233c9d9223c-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.575829 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4619b857-5e70-4ab3-807d-d233c9d9223c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.575840 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxjpz\" (UniqueName: \"kubernetes.io/projected/4619b857-5e70-4ab3-807d-d233c9d9223c-kube-api-access-xxjpz\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.575849 4946 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4619b857-5e70-4ab3-807d-d233c9d9223c-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.575858 4946 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4619b857-5e70-4ab3-807d-d233c9d9223c-logs\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:48 crc kubenswrapper[4946]: I1128 07:19:48.575867 4946 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:49 crc kubenswrapper[4946]: I1128 07:19:49.054165 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3521840d-60d0-450c-8c05-7e2ad0fc4e97","Type":"ContainerDied","Data":"43803043a516ccf50c5a19a1e0ce796d958a757afcf62c3b93dce03f0af09f7e"} Nov 28 07:19:49 crc kubenswrapper[4946]: I1128 07:19:49.054219 4946 scope.go:117] "RemoveContainer" containerID="5a518b7e7d038229a500bd8709ec0d601f6bd6d8f0d81ac3077b20e90a835629" Nov 28 07:19:49 crc kubenswrapper[4946]: I1128 07:19:49.054335 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 28 07:19:49 crc kubenswrapper[4946]: I1128 07:19:49.064887 4946 generic.go:334] "Generic (PLEG): container finished" podID="4619b857-5e70-4ab3-807d-d233c9d9223c" containerID="1bd51decff33c36bbc5dd5c23b04d649356ea9e1d0ccc8a4e364d1512e184087" exitCode=0 Nov 28 07:19:49 crc kubenswrapper[4946]: I1128 07:19:49.064944 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-97bfd767f-7zg9s" event={"ID":"4619b857-5e70-4ab3-807d-d233c9d9223c","Type":"ContainerDied","Data":"1bd51decff33c36bbc5dd5c23b04d649356ea9e1d0ccc8a4e364d1512e184087"} Nov 28 07:19:49 crc kubenswrapper[4946]: I1128 07:19:49.064976 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-97bfd767f-7zg9s" event={"ID":"4619b857-5e70-4ab3-807d-d233c9d9223c","Type":"ContainerDied","Data":"af404d73e71434bdd034c7fc0628178eb1c85dfa9f673efa7e2c157a4887187f"} Nov 28 07:19:49 crc kubenswrapper[4946]: I1128 07:19:49.065048 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-97bfd767f-7zg9s" Nov 28 07:19:49 crc kubenswrapper[4946]: I1128 07:19:49.109859 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 07:19:49 crc kubenswrapper[4946]: I1128 07:19:49.118186 4946 scope.go:117] "RemoveContainer" containerID="88c96e9b6c9b6c9a01377b5eb6cd235cde6a0cea15f68f81f0dba3c64839e047" Nov 28 07:19:49 crc kubenswrapper[4946]: I1128 07:19:49.124013 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 07:19:49 crc kubenswrapper[4946]: I1128 07:19:49.132623 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-97bfd767f-7zg9s"] Nov 28 07:19:49 crc kubenswrapper[4946]: I1128 07:19:49.144087 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-97bfd767f-7zg9s"] Nov 28 07:19:49 crc kubenswrapper[4946]: I1128 07:19:49.156578 4946 scope.go:117] "RemoveContainer" containerID="1bd51decff33c36bbc5dd5c23b04d649356ea9e1d0ccc8a4e364d1512e184087" Nov 28 07:19:49 crc kubenswrapper[4946]: I1128 07:19:49.192776 4946 scope.go:117] "RemoveContainer" containerID="9517940817e1a7d770b2a8b25720d839e772ec8d3fd3f428c2debea57ae43b63" Nov 28 07:19:49 crc kubenswrapper[4946]: I1128 07:19:49.209600 4946 scope.go:117] "RemoveContainer" containerID="1bd51decff33c36bbc5dd5c23b04d649356ea9e1d0ccc8a4e364d1512e184087" Nov 28 07:19:49 crc kubenswrapper[4946]: E1128 07:19:49.210014 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bd51decff33c36bbc5dd5c23b04d649356ea9e1d0ccc8a4e364d1512e184087\": container with ID starting with 1bd51decff33c36bbc5dd5c23b04d649356ea9e1d0ccc8a4e364d1512e184087 not found: ID does not exist" containerID="1bd51decff33c36bbc5dd5c23b04d649356ea9e1d0ccc8a4e364d1512e184087" Nov 28 07:19:49 crc kubenswrapper[4946]: I1128 07:19:49.210069 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bd51decff33c36bbc5dd5c23b04d649356ea9e1d0ccc8a4e364d1512e184087"} err="failed to get container status \"1bd51decff33c36bbc5dd5c23b04d649356ea9e1d0ccc8a4e364d1512e184087\": rpc error: code = NotFound desc = could not find container \"1bd51decff33c36bbc5dd5c23b04d649356ea9e1d0ccc8a4e364d1512e184087\": container with ID starting with 1bd51decff33c36bbc5dd5c23b04d649356ea9e1d0ccc8a4e364d1512e184087 not found: ID 
does not exist" Nov 28 07:19:49 crc kubenswrapper[4946]: I1128 07:19:49.210108 4946 scope.go:117] "RemoveContainer" containerID="9517940817e1a7d770b2a8b25720d839e772ec8d3fd3f428c2debea57ae43b63" Nov 28 07:19:49 crc kubenswrapper[4946]: E1128 07:19:49.210418 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9517940817e1a7d770b2a8b25720d839e772ec8d3fd3f428c2debea57ae43b63\": container with ID starting with 9517940817e1a7d770b2a8b25720d839e772ec8d3fd3f428c2debea57ae43b63 not found: ID does not exist" containerID="9517940817e1a7d770b2a8b25720d839e772ec8d3fd3f428c2debea57ae43b63" Nov 28 07:19:49 crc kubenswrapper[4946]: I1128 07:19:49.210486 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9517940817e1a7d770b2a8b25720d839e772ec8d3fd3f428c2debea57ae43b63"} err="failed to get container status \"9517940817e1a7d770b2a8b25720d839e772ec8d3fd3f428c2debea57ae43b63\": rpc error: code = NotFound desc = could not find container \"9517940817e1a7d770b2a8b25720d839e772ec8d3fd3f428c2debea57ae43b63\": container with ID starting with 9517940817e1a7d770b2a8b25720d839e772ec8d3fd3f428c2debea57ae43b63 not found: ID does not exist" Nov 28 07:19:50 crc kubenswrapper[4946]: I1128 07:19:50.010221 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37" path="/var/lib/kubelet/pods/2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37/volumes" Nov 28 07:19:50 crc kubenswrapper[4946]: I1128 07:19:50.011211 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3521840d-60d0-450c-8c05-7e2ad0fc4e97" path="/var/lib/kubelet/pods/3521840d-60d0-450c-8c05-7e2ad0fc4e97/volumes" Nov 28 07:19:50 crc kubenswrapper[4946]: I1128 07:19:50.012133 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4619b857-5e70-4ab3-807d-d233c9d9223c" path="/var/lib/kubelet/pods/4619b857-5e70-4ab3-807d-d233c9d9223c/volumes" Nov 28 07:19:50 crc kubenswrapper[4946]: I1128 07:19:50.020633 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59fdca77-b333-44be-ab8c-96a2f4bcc340" path="/var/lib/kubelet/pods/59fdca77-b333-44be-ab8c-96a2f4bcc340/volumes" Nov 28 07:19:50 crc kubenswrapper[4946]: I1128 07:19:50.021603 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bab5772e-0e4e-4425-a851-10e59b8b602c" path="/var/lib/kubelet/pods/bab5772e-0e4e-4425-a851-10e59b8b602c/volumes" Nov 28 07:19:50 crc kubenswrapper[4946]: I1128 07:19:50.024324 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c274eefa-3598-470b-9b07-25928903d425" path="/var/lib/kubelet/pods/c274eefa-3598-470b-9b07-25928903d425/volumes" Nov 28 07:19:50 crc kubenswrapper[4946]: I1128 07:19:50.025284 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e44e658f-9e9e-4ee3-98d3-134ba6c8a6ee" path="/var/lib/kubelet/pods/e44e658f-9e9e-4ee3-98d3-134ba6c8a6ee/volumes" Nov 28 07:19:50 crc kubenswrapper[4946]: I1128 07:19:50.026236 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb07712a-805e-4e0f-9a81-dd8ce42bfb88" path="/var/lib/kubelet/pods/eb07712a-805e-4e0f-9a81-dd8ce42bfb88/volumes" Nov 28 07:19:51 crc kubenswrapper[4946]: E1128 07:19:51.628365 4946 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 28 07:19:51 crc kubenswrapper[4946]: E1128 07:19:51.628727 4946 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/b138df41-1f0c-4edb-9546-e0f5ec16cf06-operator-scripts podName:b138df41-1f0c-4edb-9546-e0f5ec16cf06 nodeName:}" failed. No retries permitted until 2025-11-28 07:19:59.628701308 +0000 UTC m=+1654.006766459 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/b138df41-1f0c-4edb-9546-e0f5ec16cf06-operator-scripts") pod "novacell0d35d-account-delete-c848z" (UID: "b138df41-1f0c-4edb-9546-e0f5ec16cf06") : configmap "openstack-scripts" not found Nov 28 07:19:52 crc kubenswrapper[4946]: E1128 07:19:52.974452 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c5e052846b04c3f56ae570cc17f4288914d22c94a98566c8edd7f06e4a7f7443 is running failed: container process not found" containerID="c5e052846b04c3f56ae570cc17f4288914d22c94a98566c8edd7f06e4a7f7443" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 28 07:19:52 crc kubenswrapper[4946]: E1128 07:19:52.975162 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c5e052846b04c3f56ae570cc17f4288914d22c94a98566c8edd7f06e4a7f7443 is running failed: container process not found" containerID="c5e052846b04c3f56ae570cc17f4288914d22c94a98566c8edd7f06e4a7f7443" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 28 07:19:52 crc kubenswrapper[4946]: E1128 07:19:52.975586 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c5e052846b04c3f56ae570cc17f4288914d22c94a98566c8edd7f06e4a7f7443 is running failed: container process not found" containerID="c5e052846b04c3f56ae570cc17f4288914d22c94a98566c8edd7f06e4a7f7443" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 28 07:19:52 crc kubenswrapper[4946]: E1128 07:19:52.975633 4946 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c5e052846b04c3f56ae570cc17f4288914d22c94a98566c8edd7f06e4a7f7443 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-gpnz5" podUID="944abeae-3f1d-4391-a375-b64ed9c17b14" containerName="ovsdb-server" Nov 28 07:19:52 crc kubenswrapper[4946]: E1128 07:19:52.976487 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b12ef267dfaa1906a3e883237d5228860410b06e258081a1cabe06d004313ce6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 28 07:19:52 crc kubenswrapper[4946]: E1128 07:19:52.978338 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b12ef267dfaa1906a3e883237d5228860410b06e258081a1cabe06d004313ce6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 28 07:19:52 crc kubenswrapper[4946]: E1128 07:19:52.980390 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="b12ef267dfaa1906a3e883237d5228860410b06e258081a1cabe06d004313ce6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 28 07:19:52 crc kubenswrapper[4946]: E1128 07:19:52.980439 4946 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-gpnz5" podUID="944abeae-3f1d-4391-a375-b64ed9c17b14" containerName="ovs-vswitchd" Nov 28 07:19:54 crc kubenswrapper[4946]: I1128 07:19:54.732104 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 07:19:54 crc kubenswrapper[4946]: I1128 07:19:54.732211 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 07:19:54 crc kubenswrapper[4946]: I1128 07:19:54.732280 4946 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" Nov 28 07:19:54 crc kubenswrapper[4946]: I1128 07:19:54.733352 4946 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"589c494ccdc76b2885f05d8ee6cc9d6370d1839c40f46f9651cf6d69eb740365"} pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 07:19:54 crc kubenswrapper[4946]: I1128 07:19:54.733424 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" containerID="cri-o://589c494ccdc76b2885f05d8ee6cc9d6370d1839c40f46f9651cf6d69eb740365" gracePeriod=600 Nov 28 07:19:54 crc kubenswrapper[4946]: E1128 07:19:54.876055 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:19:55 crc kubenswrapper[4946]: I1128 07:19:55.135449 4946 generic.go:334] "Generic (PLEG): container finished" podID="d1578c84-1d87-41b2-bfa7-637c3b53366f" containerID="a520f59984f4e2b3696712434dc811010686939bd64077205d50d0ba4e29000f" exitCode=0 Nov 28 07:19:55 crc kubenswrapper[4946]: I1128 07:19:55.135525 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-777c4856b5-mgnhk" event={"ID":"d1578c84-1d87-41b2-bfa7-637c3b53366f","Type":"ContainerDied","Data":"a520f59984f4e2b3696712434dc811010686939bd64077205d50d0ba4e29000f"} Nov 28 07:19:55 crc kubenswrapper[4946]: I1128 07:19:55.139595 4946 generic.go:334] "Generic (PLEG): container finished" podID="7450befc-262f-45d1-a5f4-f445e540185b" 
containerID="589c494ccdc76b2885f05d8ee6cc9d6370d1839c40f46f9651cf6d69eb740365" exitCode=0 Nov 28 07:19:55 crc kubenswrapper[4946]: I1128 07:19:55.139658 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerDied","Data":"589c494ccdc76b2885f05d8ee6cc9d6370d1839c40f46f9651cf6d69eb740365"} Nov 28 07:19:55 crc kubenswrapper[4946]: I1128 07:19:55.139744 4946 scope.go:117] "RemoveContainer" containerID="a1b65860bba4b7422a1bd44c20f73ab6d26e45cd22f0c4eba1bdbae4c38acc18" Nov 28 07:19:55 crc kubenswrapper[4946]: I1128 07:19:55.141735 4946 scope.go:117] "RemoveContainer" containerID="589c494ccdc76b2885f05d8ee6cc9d6370d1839c40f46f9651cf6d69eb740365" Nov 28 07:19:55 crc kubenswrapper[4946]: E1128 07:19:55.142436 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:19:55 crc kubenswrapper[4946]: I1128 07:19:55.235611 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-777c4856b5-mgnhk" Nov 28 07:19:55 crc kubenswrapper[4946]: I1128 07:19:55.398490 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d1578c84-1d87-41b2-bfa7-637c3b53366f-httpd-config\") pod \"d1578c84-1d87-41b2-bfa7-637c3b53366f\" (UID: \"d1578c84-1d87-41b2-bfa7-637c3b53366f\") " Nov 28 07:19:55 crc kubenswrapper[4946]: I1128 07:19:55.398628 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1578c84-1d87-41b2-bfa7-637c3b53366f-internal-tls-certs\") pod \"d1578c84-1d87-41b2-bfa7-637c3b53366f\" (UID: \"d1578c84-1d87-41b2-bfa7-637c3b53366f\") " Nov 28 07:19:55 crc kubenswrapper[4946]: I1128 07:19:55.398799 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1578c84-1d87-41b2-bfa7-637c3b53366f-ovndb-tls-certs\") pod \"d1578c84-1d87-41b2-bfa7-637c3b53366f\" (UID: \"d1578c84-1d87-41b2-bfa7-637c3b53366f\") " Nov 28 07:19:55 crc kubenswrapper[4946]: I1128 07:19:55.398826 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1578c84-1d87-41b2-bfa7-637c3b53366f-combined-ca-bundle\") pod \"d1578c84-1d87-41b2-bfa7-637c3b53366f\" (UID: \"d1578c84-1d87-41b2-bfa7-637c3b53366f\") " Nov 28 07:19:55 crc kubenswrapper[4946]: I1128 07:19:55.398871 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d1578c84-1d87-41b2-bfa7-637c3b53366f-config\") pod \"d1578c84-1d87-41b2-bfa7-637c3b53366f\" (UID: \"d1578c84-1d87-41b2-bfa7-637c3b53366f\") " Nov 28 07:19:55 crc kubenswrapper[4946]: I1128 07:19:55.398974 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4rxb\" (UniqueName: \"kubernetes.io/projected/d1578c84-1d87-41b2-bfa7-637c3b53366f-kube-api-access-d4rxb\") pod \"d1578c84-1d87-41b2-bfa7-637c3b53366f\" (UID: 
\"d1578c84-1d87-41b2-bfa7-637c3b53366f\") " Nov 28 07:19:55 crc kubenswrapper[4946]: I1128 07:19:55.399104 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1578c84-1d87-41b2-bfa7-637c3b53366f-public-tls-certs\") pod \"d1578c84-1d87-41b2-bfa7-637c3b53366f\" (UID: \"d1578c84-1d87-41b2-bfa7-637c3b53366f\") " Nov 28 07:19:55 crc kubenswrapper[4946]: I1128 07:19:55.404457 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1578c84-1d87-41b2-bfa7-637c3b53366f-kube-api-access-d4rxb" (OuterVolumeSpecName: "kube-api-access-d4rxb") pod "d1578c84-1d87-41b2-bfa7-637c3b53366f" (UID: "d1578c84-1d87-41b2-bfa7-637c3b53366f"). InnerVolumeSpecName "kube-api-access-d4rxb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:19:55 crc kubenswrapper[4946]: I1128 07:19:55.406182 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1578c84-1d87-41b2-bfa7-637c3b53366f-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "d1578c84-1d87-41b2-bfa7-637c3b53366f" (UID: "d1578c84-1d87-41b2-bfa7-637c3b53366f"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:55 crc kubenswrapper[4946]: I1128 07:19:55.464387 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1578c84-1d87-41b2-bfa7-637c3b53366f-config" (OuterVolumeSpecName: "config") pod "d1578c84-1d87-41b2-bfa7-637c3b53366f" (UID: "d1578c84-1d87-41b2-bfa7-637c3b53366f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:55 crc kubenswrapper[4946]: I1128 07:19:55.466812 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1578c84-1d87-41b2-bfa7-637c3b53366f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d1578c84-1d87-41b2-bfa7-637c3b53366f" (UID: "d1578c84-1d87-41b2-bfa7-637c3b53366f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:55 crc kubenswrapper[4946]: I1128 07:19:55.471687 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1578c84-1d87-41b2-bfa7-637c3b53366f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d1578c84-1d87-41b2-bfa7-637c3b53366f" (UID: "d1578c84-1d87-41b2-bfa7-637c3b53366f"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:55 crc kubenswrapper[4946]: I1128 07:19:55.475093 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1578c84-1d87-41b2-bfa7-637c3b53366f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d1578c84-1d87-41b2-bfa7-637c3b53366f" (UID: "d1578c84-1d87-41b2-bfa7-637c3b53366f"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:55 crc kubenswrapper[4946]: I1128 07:19:55.501682 4946 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1578c84-1d87-41b2-bfa7-637c3b53366f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:55 crc kubenswrapper[4946]: I1128 07:19:55.502943 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1578c84-1d87-41b2-bfa7-637c3b53366f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:55 crc kubenswrapper[4946]: I1128 07:19:55.503224 4946 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/d1578c84-1d87-41b2-bfa7-637c3b53366f-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:55 crc kubenswrapper[4946]: I1128 07:19:55.503406 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4rxb\" (UniqueName: \"kubernetes.io/projected/d1578c84-1d87-41b2-bfa7-637c3b53366f-kube-api-access-d4rxb\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:55 crc kubenswrapper[4946]: I1128 07:19:55.503587 4946 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1578c84-1d87-41b2-bfa7-637c3b53366f-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:55 crc kubenswrapper[4946]: I1128 07:19:55.503813 4946 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d1578c84-1d87-41b2-bfa7-637c3b53366f-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:55 crc kubenswrapper[4946]: I1128 07:19:55.505322 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1578c84-1d87-41b2-bfa7-637c3b53366f-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "d1578c84-1d87-41b2-bfa7-637c3b53366f" (UID: "d1578c84-1d87-41b2-bfa7-637c3b53366f"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:19:55 crc kubenswrapper[4946]: I1128 07:19:55.605741 4946 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1578c84-1d87-41b2-bfa7-637c3b53366f-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:56 crc kubenswrapper[4946]: I1128 07:19:56.160046 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-777c4856b5-mgnhk" event={"ID":"d1578c84-1d87-41b2-bfa7-637c3b53366f","Type":"ContainerDied","Data":"f93b507d96c3a2691c211c317621d608fd981f14cc9e4fa676c3d903d95671ae"} Nov 28 07:19:56 crc kubenswrapper[4946]: I1128 07:19:56.160150 4946 scope.go:117] "RemoveContainer" containerID="1b4c77565cb683565551995a4fa5e6f14515e5125068ec275e329eaccc6d274a" Nov 28 07:19:56 crc kubenswrapper[4946]: I1128 07:19:56.160256 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-777c4856b5-mgnhk" Nov 28 07:19:56 crc kubenswrapper[4946]: I1128 07:19:56.213379 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-777c4856b5-mgnhk"] Nov 28 07:19:56 crc kubenswrapper[4946]: I1128 07:19:56.217545 4946 scope.go:117] "RemoveContainer" containerID="a520f59984f4e2b3696712434dc811010686939bd64077205d50d0ba4e29000f" Nov 28 07:19:56 crc kubenswrapper[4946]: I1128 07:19:56.222626 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-777c4856b5-mgnhk"] Nov 28 07:19:57 crc kubenswrapper[4946]: E1128 07:19:57.975698 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b12ef267dfaa1906a3e883237d5228860410b06e258081a1cabe06d004313ce6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 28 07:19:57 crc kubenswrapper[4946]: E1128 07:19:57.977154 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c5e052846b04c3f56ae570cc17f4288914d22c94a98566c8edd7f06e4a7f7443 is running failed: container process not found" containerID="c5e052846b04c3f56ae570cc17f4288914d22c94a98566c8edd7f06e4a7f7443" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 28 07:19:57 crc kubenswrapper[4946]: E1128 07:19:57.977628 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c5e052846b04c3f56ae570cc17f4288914d22c94a98566c8edd7f06e4a7f7443 is running failed: container process not found" containerID="c5e052846b04c3f56ae570cc17f4288914d22c94a98566c8edd7f06e4a7f7443" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 28 07:19:57 crc kubenswrapper[4946]: E1128 07:19:57.978240 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c5e052846b04c3f56ae570cc17f4288914d22c94a98566c8edd7f06e4a7f7443 is running failed: container process not found" containerID="c5e052846b04c3f56ae570cc17f4288914d22c94a98566c8edd7f06e4a7f7443" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 28 07:19:57 crc kubenswrapper[4946]: E1128 07:19:57.978289 4946 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c5e052846b04c3f56ae570cc17f4288914d22c94a98566c8edd7f06e4a7f7443 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-gpnz5" podUID="944abeae-3f1d-4391-a375-b64ed9c17b14" containerName="ovsdb-server" Nov 28 07:19:57 crc kubenswrapper[4946]: E1128 07:19:57.978362 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b12ef267dfaa1906a3e883237d5228860410b06e258081a1cabe06d004313ce6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 28 07:19:57 crc kubenswrapper[4946]: E1128 07:19:57.981127 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="b12ef267dfaa1906a3e883237d5228860410b06e258081a1cabe06d004313ce6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 28 07:19:57 crc kubenswrapper[4946]: E1128 07:19:57.981183 4946 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-gpnz5" podUID="944abeae-3f1d-4391-a375-b64ed9c17b14" containerName="ovs-vswitchd" Nov 28 07:19:58 crc kubenswrapper[4946]: I1128 07:19:58.008634 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1578c84-1d87-41b2-bfa7-637c3b53366f" path="/var/lib/kubelet/pods/d1578c84-1d87-41b2-bfa7-637c3b53366f/volumes" Nov 28 07:19:59 crc kubenswrapper[4946]: E1128 07:19:59.698745 4946 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 28 07:19:59 crc kubenswrapper[4946]: E1128 07:19:59.699946 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b138df41-1f0c-4edb-9546-e0f5ec16cf06-operator-scripts podName:b138df41-1f0c-4edb-9546-e0f5ec16cf06 nodeName:}" failed. No retries permitted until 2025-11-28 07:20:15.699895543 +0000 UTC m=+1670.077960694 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/b138df41-1f0c-4edb-9546-e0f5ec16cf06-operator-scripts") pod "novacell0d35d-account-delete-c848z" (UID: "b138df41-1f0c-4edb-9546-e0f5ec16cf06") : configmap "openstack-scripts" not found Nov 28 07:20:02 crc kubenswrapper[4946]: E1128 07:20:02.974249 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c5e052846b04c3f56ae570cc17f4288914d22c94a98566c8edd7f06e4a7f7443 is running failed: container process not found" containerID="c5e052846b04c3f56ae570cc17f4288914d22c94a98566c8edd7f06e4a7f7443" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 28 07:20:02 crc kubenswrapper[4946]: E1128 07:20:02.976030 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c5e052846b04c3f56ae570cc17f4288914d22c94a98566c8edd7f06e4a7f7443 is running failed: container process not found" containerID="c5e052846b04c3f56ae570cc17f4288914d22c94a98566c8edd7f06e4a7f7443" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 28 07:20:02 crc kubenswrapper[4946]: E1128 07:20:02.976191 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b12ef267dfaa1906a3e883237d5228860410b06e258081a1cabe06d004313ce6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 28 07:20:02 crc kubenswrapper[4946]: E1128 07:20:02.977100 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c5e052846b04c3f56ae570cc17f4288914d22c94a98566c8edd7f06e4a7f7443 is running failed: container process not found" containerID="c5e052846b04c3f56ae570cc17f4288914d22c94a98566c8edd7f06e4a7f7443" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 28 07:20:02 crc kubenswrapper[4946]: E1128 07:20:02.977274 4946 prober.go:104] "Probe errored" err="rpc 
error: code = NotFound desc = container is not created or running: checking if PID of c5e052846b04c3f56ae570cc17f4288914d22c94a98566c8edd7f06e4a7f7443 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-gpnz5" podUID="944abeae-3f1d-4391-a375-b64ed9c17b14" containerName="ovsdb-server" Nov 28 07:20:02 crc kubenswrapper[4946]: E1128 07:20:02.978177 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b12ef267dfaa1906a3e883237d5228860410b06e258081a1cabe06d004313ce6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 28 07:20:02 crc kubenswrapper[4946]: E1128 07:20:02.981839 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b12ef267dfaa1906a3e883237d5228860410b06e258081a1cabe06d004313ce6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 28 07:20:02 crc kubenswrapper[4946]: E1128 07:20:02.981897 4946 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-gpnz5" podUID="944abeae-3f1d-4391-a375-b64ed9c17b14" containerName="ovs-vswitchd" Nov 28 07:20:07 crc kubenswrapper[4946]: E1128 07:20:07.974325 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c5e052846b04c3f56ae570cc17f4288914d22c94a98566c8edd7f06e4a7f7443 is running failed: container process not found" containerID="c5e052846b04c3f56ae570cc17f4288914d22c94a98566c8edd7f06e4a7f7443" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 28 07:20:07 crc kubenswrapper[4946]: E1128 07:20:07.975959 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c5e052846b04c3f56ae570cc17f4288914d22c94a98566c8edd7f06e4a7f7443 is running failed: container process not found" containerID="c5e052846b04c3f56ae570cc17f4288914d22c94a98566c8edd7f06e4a7f7443" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 28 07:20:07 crc kubenswrapper[4946]: E1128 07:20:07.976537 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b12ef267dfaa1906a3e883237d5228860410b06e258081a1cabe06d004313ce6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 28 07:20:07 crc kubenswrapper[4946]: E1128 07:20:07.976861 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c5e052846b04c3f56ae570cc17f4288914d22c94a98566c8edd7f06e4a7f7443 is running failed: container process not found" containerID="c5e052846b04c3f56ae570cc17f4288914d22c94a98566c8edd7f06e4a7f7443" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 28 07:20:07 crc kubenswrapper[4946]: E1128 07:20:07.976914 4946 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
Nov 28 07:20:07 crc kubenswrapper[4946]: E1128 07:20:07.979914 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b12ef267dfaa1906a3e883237d5228860410b06e258081a1cabe06d004313ce6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Nov 28 07:20:07 crc kubenswrapper[4946]: E1128 07:20:07.981870 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b12ef267dfaa1906a3e883237d5228860410b06e258081a1cabe06d004313ce6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Nov 28 07:20:07 crc kubenswrapper[4946]: E1128 07:20:07.981979 4946 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-gpnz5" podUID="944abeae-3f1d-4391-a375-b64ed9c17b14" containerName="ovs-vswitchd"
Nov 28 07:20:07 crc kubenswrapper[4946]: I1128 07:20:07.991043 4946 scope.go:117] "RemoveContainer" containerID="589c494ccdc76b2885f05d8ee6cc9d6370d1839c40f46f9651cf6d69eb740365"
Nov 28 07:20:07 crc kubenswrapper[4946]: E1128 07:20:07.991629 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
Nov 28 07:20:09 crc kubenswrapper[4946]: I1128 07:20:09.339091 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gpnz5_944abeae-3f1d-4391-a375-b64ed9c17b14/ovs-vswitchd/0.log"
Nov 28 07:20:09 crc kubenswrapper[4946]: I1128 07:20:09.341677 4946 generic.go:334] "Generic (PLEG): container finished" podID="944abeae-3f1d-4391-a375-b64ed9c17b14" containerID="b12ef267dfaa1906a3e883237d5228860410b06e258081a1cabe06d004313ce6" exitCode=137
Nov 28 07:20:09 crc kubenswrapper[4946]: I1128 07:20:09.341724 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-gpnz5" event={"ID":"944abeae-3f1d-4391-a375-b64ed9c17b14","Type":"ContainerDied","Data":"b12ef267dfaa1906a3e883237d5228860410b06e258081a1cabe06d004313ce6"}
Nov 28 07:20:09 crc kubenswrapper[4946]: I1128 07:20:09.607050 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gpnz5_944abeae-3f1d-4391-a375-b64ed9c17b14/ovs-vswitchd/0.log"
Nov 28 07:20:09 crc kubenswrapper[4946]: I1128 07:20:09.608517 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-gpnz5"
Nov 28 07:20:09 crc kubenswrapper[4946]: I1128 07:20:09.705631 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/944abeae-3f1d-4391-a375-b64ed9c17b14-var-lib\") pod \"944abeae-3f1d-4391-a375-b64ed9c17b14\" (UID: \"944abeae-3f1d-4391-a375-b64ed9c17b14\") "
Nov 28 07:20:09 crc kubenswrapper[4946]: I1128 07:20:09.705724 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/944abeae-3f1d-4391-a375-b64ed9c17b14-scripts\") pod \"944abeae-3f1d-4391-a375-b64ed9c17b14\" (UID: \"944abeae-3f1d-4391-a375-b64ed9c17b14\") "
Nov 28 07:20:09 crc kubenswrapper[4946]: I1128 07:20:09.705719 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/944abeae-3f1d-4391-a375-b64ed9c17b14-var-lib" (OuterVolumeSpecName: "var-lib") pod "944abeae-3f1d-4391-a375-b64ed9c17b14" (UID: "944abeae-3f1d-4391-a375-b64ed9c17b14"). InnerVolumeSpecName "var-lib". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 07:20:09 crc kubenswrapper[4946]: I1128 07:20:09.705841 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcq7l\" (UniqueName: \"kubernetes.io/projected/944abeae-3f1d-4391-a375-b64ed9c17b14-kube-api-access-tcq7l\") pod \"944abeae-3f1d-4391-a375-b64ed9c17b14\" (UID: \"944abeae-3f1d-4391-a375-b64ed9c17b14\") "
Nov 28 07:20:09 crc kubenswrapper[4946]: I1128 07:20:09.705874 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/944abeae-3f1d-4391-a375-b64ed9c17b14-var-run\") pod \"944abeae-3f1d-4391-a375-b64ed9c17b14\" (UID: \"944abeae-3f1d-4391-a375-b64ed9c17b14\") "
Nov 28 07:20:09 crc kubenswrapper[4946]: I1128 07:20:09.705922 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/944abeae-3f1d-4391-a375-b64ed9c17b14-etc-ovs\") pod \"944abeae-3f1d-4391-a375-b64ed9c17b14\" (UID: \"944abeae-3f1d-4391-a375-b64ed9c17b14\") "
Nov 28 07:20:09 crc kubenswrapper[4946]: I1128 07:20:09.705965 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/944abeae-3f1d-4391-a375-b64ed9c17b14-var-log\") pod \"944abeae-3f1d-4391-a375-b64ed9c17b14\" (UID: \"944abeae-3f1d-4391-a375-b64ed9c17b14\") "
Nov 28 07:20:09 crc kubenswrapper[4946]: I1128 07:20:09.706104 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/944abeae-3f1d-4391-a375-b64ed9c17b14-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "944abeae-3f1d-4391-a375-b64ed9c17b14" (UID: "944abeae-3f1d-4391-a375-b64ed9c17b14"). InnerVolumeSpecName "etc-ovs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 07:20:09 crc kubenswrapper[4946]: I1128 07:20:09.706069 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/944abeae-3f1d-4391-a375-b64ed9c17b14-var-run" (OuterVolumeSpecName: "var-run") pod "944abeae-3f1d-4391-a375-b64ed9c17b14" (UID: "944abeae-3f1d-4391-a375-b64ed9c17b14"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 07:20:09 crc kubenswrapper[4946]: I1128 07:20:09.706205 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/944abeae-3f1d-4391-a375-b64ed9c17b14-var-log" (OuterVolumeSpecName: "var-log") pod "944abeae-3f1d-4391-a375-b64ed9c17b14" (UID: "944abeae-3f1d-4391-a375-b64ed9c17b14"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 07:20:09 crc kubenswrapper[4946]: I1128 07:20:09.706844 4946 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/944abeae-3f1d-4391-a375-b64ed9c17b14-etc-ovs\") on node \"crc\" DevicePath \"\""
Nov 28 07:20:09 crc kubenswrapper[4946]: I1128 07:20:09.706904 4946 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/944abeae-3f1d-4391-a375-b64ed9c17b14-var-log\") on node \"crc\" DevicePath \"\""
Nov 28 07:20:09 crc kubenswrapper[4946]: I1128 07:20:09.706931 4946 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/944abeae-3f1d-4391-a375-b64ed9c17b14-var-lib\") on node \"crc\" DevicePath \"\""
Nov 28 07:20:09 crc kubenswrapper[4946]: I1128 07:20:09.706956 4946 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/944abeae-3f1d-4391-a375-b64ed9c17b14-var-run\") on node \"crc\" DevicePath \"\""
Nov 28 07:20:09 crc kubenswrapper[4946]: I1128 07:20:09.708274 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/944abeae-3f1d-4391-a375-b64ed9c17b14-scripts" (OuterVolumeSpecName: "scripts") pod "944abeae-3f1d-4391-a375-b64ed9c17b14" (UID: "944abeae-3f1d-4391-a375-b64ed9c17b14"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 07:20:09 crc kubenswrapper[4946]: I1128 07:20:09.714655 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/944abeae-3f1d-4391-a375-b64ed9c17b14-kube-api-access-tcq7l" (OuterVolumeSpecName: "kube-api-access-tcq7l") pod "944abeae-3f1d-4391-a375-b64ed9c17b14" (UID: "944abeae-3f1d-4391-a375-b64ed9c17b14"). InnerVolumeSpecName "kube-api-access-tcq7l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 07:20:09 crc kubenswrapper[4946]: I1128 07:20:09.808538 4946 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/944abeae-3f1d-4391-a375-b64ed9c17b14-scripts\") on node \"crc\" DevicePath \"\""
Nov 28 07:20:09 crc kubenswrapper[4946]: I1128 07:20:09.808582 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcq7l\" (UniqueName: \"kubernetes.io/projected/944abeae-3f1d-4391-a375-b64ed9c17b14-kube-api-access-tcq7l\") on node \"crc\" DevicePath \"\""
Nov 28 07:20:10 crc kubenswrapper[4946]: I1128 07:20:10.362748 4946 generic.go:334] "Generic (PLEG): container finished" podID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerID="5bd491e35ece677c5f005652efd7b988e650a41c30c0108a8430aaaf6dcac5e9" exitCode=137
Nov 28 07:20:10 crc kubenswrapper[4946]: I1128 07:20:10.362994 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b7d310ee-b686-4e3d-b554-393fc09a770d","Type":"ContainerDied","Data":"5bd491e35ece677c5f005652efd7b988e650a41c30c0108a8430aaaf6dcac5e9"}
Nov 28 07:20:10 crc kubenswrapper[4946]: I1128 07:20:10.369246 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gpnz5_944abeae-3f1d-4391-a375-b64ed9c17b14/ovs-vswitchd/0.log"
Nov 28 07:20:10 crc kubenswrapper[4946]: I1128 07:20:10.372125 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-gpnz5" event={"ID":"944abeae-3f1d-4391-a375-b64ed9c17b14","Type":"ContainerDied","Data":"2b165cc1856e910a507071e4463eeac3507f736c920a23b80f36be1c7358cf8b"}
Nov 28 07:20:10 crc kubenswrapper[4946]: I1128 07:20:10.372251 4946 scope.go:117] "RemoveContainer" containerID="b12ef267dfaa1906a3e883237d5228860410b06e258081a1cabe06d004313ce6"
Nov 28 07:20:10 crc kubenswrapper[4946]: I1128 07:20:10.372403 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-gpnz5"
Nov 28 07:20:10 crc kubenswrapper[4946]: I1128 07:20:10.444316 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-gpnz5"]
Nov 28 07:20:10 crc kubenswrapper[4946]: I1128 07:20:10.451870 4946 scope.go:117] "RemoveContainer" containerID="c5e052846b04c3f56ae570cc17f4288914d22c94a98566c8edd7f06e4a7f7443"
Nov 28 07:20:10 crc kubenswrapper[4946]: I1128 07:20:10.451995 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-gpnz5"]
Nov 28 07:20:10 crc kubenswrapper[4946]: I1128 07:20:10.489092 4946 scope.go:117] "RemoveContainer" containerID="c4ddf22e9c5893825d45cbc30aa68a2bbb399585a0d6a7695769865ef74f1816"
Nov 28 07:20:10 crc kubenswrapper[4946]: I1128 07:20:10.627707 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Nov 28 07:20:10 crc kubenswrapper[4946]: I1128 07:20:10.694435 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Nov 28 07:20:10 crc kubenswrapper[4946]: I1128 07:20:10.735212 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b7d310ee-b686-4e3d-b554-393fc09a770d-lock\") pod \"b7d310ee-b686-4e3d-b554-393fc09a770d\" (UID: \"b7d310ee-b686-4e3d-b554-393fc09a770d\") "
Nov 28 07:20:10 crc kubenswrapper[4946]: I1128 07:20:10.735630 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zq5d5\" (UniqueName: \"kubernetes.io/projected/b7d310ee-b686-4e3d-b554-393fc09a770d-kube-api-access-zq5d5\") pod \"b7d310ee-b686-4e3d-b554-393fc09a770d\" (UID: \"b7d310ee-b686-4e3d-b554-393fc09a770d\") "
Nov 28 07:20:10 crc kubenswrapper[4946]: I1128 07:20:10.735807 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"b7d310ee-b686-4e3d-b554-393fc09a770d\" (UID: \"b7d310ee-b686-4e3d-b554-393fc09a770d\") "
Nov 28 07:20:10 crc kubenswrapper[4946]: I1128 07:20:10.735977 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7d310ee-b686-4e3d-b554-393fc09a770d-lock" (OuterVolumeSpecName: "lock") pod "b7d310ee-b686-4e3d-b554-393fc09a770d" (UID: "b7d310ee-b686-4e3d-b554-393fc09a770d"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 07:20:10 crc kubenswrapper[4946]: I1128 07:20:10.736557 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7d310ee-b686-4e3d-b554-393fc09a770d-cache" (OuterVolumeSpecName: "cache") pod "b7d310ee-b686-4e3d-b554-393fc09a770d" (UID: "b7d310ee-b686-4e3d-b554-393fc09a770d"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 07:20:10 crc kubenswrapper[4946]: I1128 07:20:10.736690 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b7d310ee-b686-4e3d-b554-393fc09a770d-cache\") pod \"b7d310ee-b686-4e3d-b554-393fc09a770d\" (UID: \"b7d310ee-b686-4e3d-b554-393fc09a770d\") "
Nov 28 07:20:10 crc kubenswrapper[4946]: I1128 07:20:10.736889 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b7d310ee-b686-4e3d-b554-393fc09a770d-etc-swift\") pod \"b7d310ee-b686-4e3d-b554-393fc09a770d\" (UID: \"b7d310ee-b686-4e3d-b554-393fc09a770d\") "
Nov 28 07:20:10 crc kubenswrapper[4946]: I1128 07:20:10.737346 4946 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b7d310ee-b686-4e3d-b554-393fc09a770d-cache\") on node \"crc\" DevicePath \"\""
Nov 28 07:20:10 crc kubenswrapper[4946]: I1128 07:20:10.737510 4946 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b7d310ee-b686-4e3d-b554-393fc09a770d-lock\") on node \"crc\" DevicePath \"\""
Nov 28 07:20:10 crc kubenswrapper[4946]: I1128 07:20:10.739504 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "swift") pod "b7d310ee-b686-4e3d-b554-393fc09a770d" (UID: "b7d310ee-b686-4e3d-b554-393fc09a770d"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Nov 28 07:20:10 crc kubenswrapper[4946]: I1128 07:20:10.740445 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7d310ee-b686-4e3d-b554-393fc09a770d-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "b7d310ee-b686-4e3d-b554-393fc09a770d" (UID: "b7d310ee-b686-4e3d-b554-393fc09a770d"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 07:20:10 crc kubenswrapper[4946]: I1128 07:20:10.741002 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7d310ee-b686-4e3d-b554-393fc09a770d-kube-api-access-zq5d5" (OuterVolumeSpecName: "kube-api-access-zq5d5") pod "b7d310ee-b686-4e3d-b554-393fc09a770d" (UID: "b7d310ee-b686-4e3d-b554-393fc09a770d"). InnerVolumeSpecName "kube-api-access-zq5d5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 07:20:10 crc kubenswrapper[4946]: I1128 07:20:10.838747 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c485c360-55fc-49da-851d-ab74f7c7fc98-config-data-custom\") pod \"c485c360-55fc-49da-851d-ab74f7c7fc98\" (UID: \"c485c360-55fc-49da-851d-ab74f7c7fc98\") "
Nov 28 07:20:10 crc kubenswrapper[4946]: I1128 07:20:10.838824 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l79zk\" (UniqueName: \"kubernetes.io/projected/c485c360-55fc-49da-851d-ab74f7c7fc98-kube-api-access-l79zk\") pod \"c485c360-55fc-49da-851d-ab74f7c7fc98\" (UID: \"c485c360-55fc-49da-851d-ab74f7c7fc98\") "
Nov 28 07:20:10 crc kubenswrapper[4946]: I1128 07:20:10.838884 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c485c360-55fc-49da-851d-ab74f7c7fc98-scripts\") pod \"c485c360-55fc-49da-851d-ab74f7c7fc98\" (UID: \"c485c360-55fc-49da-851d-ab74f7c7fc98\") "
Nov 28 07:20:10 crc kubenswrapper[4946]: I1128 07:20:10.838915 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c485c360-55fc-49da-851d-ab74f7c7fc98-etc-machine-id\") pod \"c485c360-55fc-49da-851d-ab74f7c7fc98\" (UID: \"c485c360-55fc-49da-851d-ab74f7c7fc98\") "
Nov 28 07:20:10 crc kubenswrapper[4946]: I1128 07:20:10.838981 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c485c360-55fc-49da-851d-ab74f7c7fc98-config-data\") pod \"c485c360-55fc-49da-851d-ab74f7c7fc98\" (UID: \"c485c360-55fc-49da-851d-ab74f7c7fc98\") "
Nov 28 07:20:10 crc kubenswrapper[4946]: I1128 07:20:10.839051 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c485c360-55fc-49da-851d-ab74f7c7fc98-combined-ca-bundle\") pod \"c485c360-55fc-49da-851d-ab74f7c7fc98\" (UID: \"c485c360-55fc-49da-851d-ab74f7c7fc98\") "
Nov 28 07:20:10 crc kubenswrapper[4946]: I1128 07:20:10.839447 4946 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b7d310ee-b686-4e3d-b554-393fc09a770d-etc-swift\") on node \"crc\" DevicePath \"\""
Nov 28 07:20:10 crc kubenswrapper[4946]: I1128 07:20:10.839485 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zq5d5\" (UniqueName: \"kubernetes.io/projected/b7d310ee-b686-4e3d-b554-393fc09a770d-kube-api-access-zq5d5\") on node \"crc\" DevicePath \"\""
Nov 28 07:20:10 crc kubenswrapper[4946]: I1128 07:20:10.839519 4946 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" "
Nov 28 07:20:10 crc kubenswrapper[4946]: I1128 07:20:10.839882 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c485c360-55fc-49da-851d-ab74f7c7fc98-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c485c360-55fc-49da-851d-ab74f7c7fc98" (UID: "c485c360-55fc-49da-851d-ab74f7c7fc98"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 07:20:10 crc kubenswrapper[4946]: I1128 07:20:10.842524 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c485c360-55fc-49da-851d-ab74f7c7fc98-scripts" (OuterVolumeSpecName: "scripts") pod "c485c360-55fc-49da-851d-ab74f7c7fc98" (UID: "c485c360-55fc-49da-851d-ab74f7c7fc98"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 07:20:10 crc kubenswrapper[4946]: I1128 07:20:10.844820 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c485c360-55fc-49da-851d-ab74f7c7fc98-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c485c360-55fc-49da-851d-ab74f7c7fc98" (UID: "c485c360-55fc-49da-851d-ab74f7c7fc98"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 07:20:10 crc kubenswrapper[4946]: I1128 07:20:10.845931 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c485c360-55fc-49da-851d-ab74f7c7fc98-kube-api-access-l79zk" (OuterVolumeSpecName: "kube-api-access-l79zk") pod "c485c360-55fc-49da-851d-ab74f7c7fc98" (UID: "c485c360-55fc-49da-851d-ab74f7c7fc98"). InnerVolumeSpecName "kube-api-access-l79zk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 07:20:10 crc kubenswrapper[4946]: I1128 07:20:10.855189 4946 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc"
Nov 28 07:20:10 crc kubenswrapper[4946]: I1128 07:20:10.885879 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c485c360-55fc-49da-851d-ab74f7c7fc98-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c485c360-55fc-49da-851d-ab74f7c7fc98" (UID: "c485c360-55fc-49da-851d-ab74f7c7fc98"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:20:10 crc kubenswrapper[4946]: I1128 07:20:10.941160 4946 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c485c360-55fc-49da-851d-ab74f7c7fc98-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 28 07:20:10 crc kubenswrapper[4946]: I1128 07:20:10.941200 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l79zk\" (UniqueName: \"kubernetes.io/projected/c485c360-55fc-49da-851d-ab74f7c7fc98-kube-api-access-l79zk\") on node \"crc\" DevicePath \"\"" Nov 28 07:20:10 crc kubenswrapper[4946]: I1128 07:20:10.941220 4946 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Nov 28 07:20:10 crc kubenswrapper[4946]: I1128 07:20:10.941232 4946 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c485c360-55fc-49da-851d-ab74f7c7fc98-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:20:10 crc kubenswrapper[4946]: I1128 07:20:10.941242 4946 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c485c360-55fc-49da-851d-ab74f7c7fc98-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 28 07:20:10 crc kubenswrapper[4946]: I1128 07:20:10.941255 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c485c360-55fc-49da-851d-ab74f7c7fc98-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:20:10 crc kubenswrapper[4946]: I1128 07:20:10.941889 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c485c360-55fc-49da-851d-ab74f7c7fc98-config-data" (OuterVolumeSpecName: "config-data") pod "c485c360-55fc-49da-851d-ab74f7c7fc98" (UID: "c485c360-55fc-49da-851d-ab74f7c7fc98"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:20:11 crc kubenswrapper[4946]: I1128 07:20:11.043521 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c485c360-55fc-49da-851d-ab74f7c7fc98-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:20:11 crc kubenswrapper[4946]: I1128 07:20:11.398511 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b7d310ee-b686-4e3d-b554-393fc09a770d","Type":"ContainerDied","Data":"0dc3aa7184828a9c2dc73fcaaeda830932197a16333d625c40726297d605a9b2"} Nov 28 07:20:11 crc kubenswrapper[4946]: I1128 07:20:11.398654 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Nov 28 07:20:11 crc kubenswrapper[4946]: I1128 07:20:11.399598 4946 scope.go:117] "RemoveContainer" containerID="5bd491e35ece677c5f005652efd7b988e650a41c30c0108a8430aaaf6dcac5e9" Nov 28 07:20:11 crc kubenswrapper[4946]: I1128 07:20:11.405538 4946 generic.go:334] "Generic (PLEG): container finished" podID="c485c360-55fc-49da-851d-ab74f7c7fc98" containerID="a144d6e25c9728943275e991de8aeab0546d7b79b5798dc6dd8598ae23f2baf4" exitCode=137 Nov 28 07:20:11 crc kubenswrapper[4946]: I1128 07:20:11.405565 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 28 07:20:11 crc kubenswrapper[4946]: I1128 07:20:11.405597 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c485c360-55fc-49da-851d-ab74f7c7fc98","Type":"ContainerDied","Data":"a144d6e25c9728943275e991de8aeab0546d7b79b5798dc6dd8598ae23f2baf4"} Nov 28 07:20:11 crc kubenswrapper[4946]: I1128 07:20:11.407080 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c485c360-55fc-49da-851d-ab74f7c7fc98","Type":"ContainerDied","Data":"55a75649ed94bef4b1e825d44fc73179cbcd34309cb1d3de52b730787c61102a"} Nov 28 07:20:11 crc kubenswrapper[4946]: I1128 07:20:11.442650 4946 scope.go:117] "RemoveContainer" containerID="1c43b5f235fc838d55b3e3138910a90b5396838761a0454dd53a66990464a1e9" Nov 28 07:20:11 crc kubenswrapper[4946]: I1128 07:20:11.494777 4946 scope.go:117] "RemoveContainer" containerID="179205cba904e1173a81ebd776bc1a0a5b564f12e0185b57e69ccb4e5be0d40a" Nov 28 07:20:11 crc kubenswrapper[4946]: I1128 07:20:11.497899 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Nov 28 07:20:11 crc kubenswrapper[4946]: I1128 07:20:11.509273 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"] Nov 28 07:20:11 crc kubenswrapper[4946]: I1128 07:20:11.515793 4946 scope.go:117] "RemoveContainer" containerID="882d75cc76bc05932cfdf4ca19f83e0f38f9b3f3c27568b1f8c72bd6dd29c2f0" Nov 28 07:20:11 crc kubenswrapper[4946]: I1128 07:20:11.519307 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 07:20:11 crc kubenswrapper[4946]: I1128 07:20:11.529768 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 07:20:11 crc kubenswrapper[4946]: I1128 07:20:11.535824 4946 scope.go:117] "RemoveContainer" containerID="08e99b903087a98fbf737f58badd6a71a2d6baa168caa8e67046f8c35f351fbf" Nov 28 07:20:11 crc kubenswrapper[4946]: I1128 07:20:11.558530 4946 scope.go:117] "RemoveContainer" containerID="3bb83e39c083e832f8a225589956ac89495d0a3270463d35cd5097cffffacde0" Nov 28 07:20:11 crc kubenswrapper[4946]: I1128 07:20:11.577343 4946 scope.go:117] "RemoveContainer" containerID="7be129de7a59c6e25c3ad9da58ad52ccaea20aa7256f4fab80d19e1a02a9f707" Nov 28 07:20:11 crc kubenswrapper[4946]: I1128 07:20:11.598232 4946 scope.go:117] "RemoveContainer" containerID="61e1d296d26ca679c99de8f8b9f1a8d780c7ec16fe7bd59626b4da870f354696" Nov 28 07:20:11 crc kubenswrapper[4946]: I1128 07:20:11.635955 4946 scope.go:117] "RemoveContainer" containerID="d6be15027db8aca6a49663ad2705ef701b99e1ed3611ca1bdf517ec488caf40d" Nov 28 07:20:11 crc kubenswrapper[4946]: I1128 07:20:11.655298 4946 scope.go:117] "RemoveContainer" containerID="67bb3d551d212ca961ad8ea7d743990298fb96bccbac103b400c071fec12a04a" Nov 28 07:20:11 crc kubenswrapper[4946]: I1128 07:20:11.673601 4946 scope.go:117] "RemoveContainer" containerID="0b64d0bffa35322fe586ccacf74d1f4ab7472d93d29af359db4d89db97ef491e" Nov 28 07:20:11 crc kubenswrapper[4946]: I1128 07:20:11.690230 4946 scope.go:117] "RemoveContainer" containerID="1d5e25dca53ee666bbf126081c00a7502572f05a1dfc6f018a9664b36c521ea0" Nov 28 07:20:11 crc kubenswrapper[4946]: I1128 07:20:11.710128 4946 scope.go:117] "RemoveContainer" containerID="2ac3d6bf342fed17048fda402f024c8913172e1208e753608700d74e53cfe4f3" Nov 28 07:20:11 crc kubenswrapper[4946]: I1128 07:20:11.728077 4946 scope.go:117] "RemoveContainer" 
containerID="847a01abfc1b8567574b77c168b044fe726b365c808d4ec665d2ff16afd92625" Nov 28 07:20:11 crc kubenswrapper[4946]: I1128 07:20:11.749490 4946 scope.go:117] "RemoveContainer" containerID="849cb349e2d8d74c8a5c873c020253aa6650e18316daa4e4cda33facfa08c64a" Nov 28 07:20:11 crc kubenswrapper[4946]: I1128 07:20:11.775769 4946 scope.go:117] "RemoveContainer" containerID="5edec769179e5fbde067252513cb9a91acd14bb765488eebcacb651d47671cea" Nov 28 07:20:11 crc kubenswrapper[4946]: I1128 07:20:11.802398 4946 scope.go:117] "RemoveContainer" containerID="a144d6e25c9728943275e991de8aeab0546d7b79b5798dc6dd8598ae23f2baf4" Nov 28 07:20:11 crc kubenswrapper[4946]: I1128 07:20:11.830599 4946 scope.go:117] "RemoveContainer" containerID="5edec769179e5fbde067252513cb9a91acd14bb765488eebcacb651d47671cea" Nov 28 07:20:11 crc kubenswrapper[4946]: E1128 07:20:11.831141 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5edec769179e5fbde067252513cb9a91acd14bb765488eebcacb651d47671cea\": container with ID starting with 5edec769179e5fbde067252513cb9a91acd14bb765488eebcacb651d47671cea not found: ID does not exist" containerID="5edec769179e5fbde067252513cb9a91acd14bb765488eebcacb651d47671cea" Nov 28 07:20:11 crc kubenswrapper[4946]: I1128 07:20:11.831200 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5edec769179e5fbde067252513cb9a91acd14bb765488eebcacb651d47671cea"} err="failed to get container status \"5edec769179e5fbde067252513cb9a91acd14bb765488eebcacb651d47671cea\": rpc error: code = NotFound desc = could not find container \"5edec769179e5fbde067252513cb9a91acd14bb765488eebcacb651d47671cea\": container with ID starting with 5edec769179e5fbde067252513cb9a91acd14bb765488eebcacb651d47671cea not found: ID does not exist" Nov 28 07:20:11 crc kubenswrapper[4946]: I1128 07:20:11.831242 4946 scope.go:117] "RemoveContainer" containerID="a144d6e25c9728943275e991de8aeab0546d7b79b5798dc6dd8598ae23f2baf4" Nov 28 07:20:11 crc kubenswrapper[4946]: E1128 07:20:11.831956 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a144d6e25c9728943275e991de8aeab0546d7b79b5798dc6dd8598ae23f2baf4\": container with ID starting with a144d6e25c9728943275e991de8aeab0546d7b79b5798dc6dd8598ae23f2baf4 not found: ID does not exist" containerID="a144d6e25c9728943275e991de8aeab0546d7b79b5798dc6dd8598ae23f2baf4" Nov 28 07:20:11 crc kubenswrapper[4946]: I1128 07:20:11.832014 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a144d6e25c9728943275e991de8aeab0546d7b79b5798dc6dd8598ae23f2baf4"} err="failed to get container status \"a144d6e25c9728943275e991de8aeab0546d7b79b5798dc6dd8598ae23f2baf4\": rpc error: code = NotFound desc = could not find container \"a144d6e25c9728943275e991de8aeab0546d7b79b5798dc6dd8598ae23f2baf4\": container with ID starting with a144d6e25c9728943275e991de8aeab0546d7b79b5798dc6dd8598ae23f2baf4 not found: ID does not exist" Nov 28 07:20:12 crc kubenswrapper[4946]: I1128 07:20:12.008801 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="944abeae-3f1d-4391-a375-b64ed9c17b14" path="/var/lib/kubelet/pods/944abeae-3f1d-4391-a375-b64ed9c17b14/volumes" Nov 28 07:20:12 crc kubenswrapper[4946]: I1128 07:20:12.009498 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" 
path="/var/lib/kubelet/pods/b7d310ee-b686-4e3d-b554-393fc09a770d/volumes" Nov 28 07:20:12 crc kubenswrapper[4946]: I1128 07:20:12.011408 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c485c360-55fc-49da-851d-ab74f7c7fc98" path="/var/lib/kubelet/pods/c485c360-55fc-49da-851d-ab74f7c7fc98/volumes" Nov 28 07:20:14 crc kubenswrapper[4946]: I1128 07:20:14.153215 4946 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podea608140-3ad6-4c56-9754-ec74fc292781"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podea608140-3ad6-4c56-9754-ec74fc292781] : Timed out while waiting for systemd to remove kubepods-besteffort-podea608140_3ad6_4c56_9754_ec74fc292781.slice" Nov 28 07:20:15 crc kubenswrapper[4946]: E1128 07:20:15.752427 4946 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 28 07:20:15 crc kubenswrapper[4946]: E1128 07:20:15.752612 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b138df41-1f0c-4edb-9546-e0f5ec16cf06-operator-scripts podName:b138df41-1f0c-4edb-9546-e0f5ec16cf06 nodeName:}" failed. No retries permitted until 2025-11-28 07:20:47.752580674 +0000 UTC m=+1702.130645825 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/b138df41-1f0c-4edb-9546-e0f5ec16cf06-operator-scripts") pod "novacell0d35d-account-delete-c848z" (UID: "b138df41-1f0c-4edb-9546-e0f5ec16cf06") : configmap "openstack-scripts" not found Nov 28 07:20:16 crc kubenswrapper[4946]: E1128 07:20:16.234903 4946 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb138df41_1f0c_4edb_9546_e0f5ec16cf06.slice/crio-conmon-baba1715ed3450c8abd6adfc3431c885e888fd38f6e6c21194828e801a28628d.scope\": RecentStats: unable to find data in memory cache]" Nov 28 07:20:16 crc kubenswrapper[4946]: I1128 07:20:16.479978 4946 generic.go:334] "Generic (PLEG): container finished" podID="b138df41-1f0c-4edb-9546-e0f5ec16cf06" containerID="baba1715ed3450c8abd6adfc3431c885e888fd38f6e6c21194828e801a28628d" exitCode=137 Nov 28 07:20:16 crc kubenswrapper[4946]: I1128 07:20:16.480165 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell0d35d-account-delete-c848z" event={"ID":"b138df41-1f0c-4edb-9546-e0f5ec16cf06","Type":"ContainerDied","Data":"baba1715ed3450c8abd6adfc3431c885e888fd38f6e6c21194828e801a28628d"} Nov 28 07:20:16 crc kubenswrapper[4946]: I1128 07:20:16.873796 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/novacell0d35d-account-delete-c848z" Nov 28 07:20:17 crc kubenswrapper[4946]: I1128 07:20:17.072535 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhb8l\" (UniqueName: \"kubernetes.io/projected/b138df41-1f0c-4edb-9546-e0f5ec16cf06-kube-api-access-fhb8l\") pod \"b138df41-1f0c-4edb-9546-e0f5ec16cf06\" (UID: \"b138df41-1f0c-4edb-9546-e0f5ec16cf06\") " Nov 28 07:20:17 crc kubenswrapper[4946]: I1128 07:20:17.073061 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b138df41-1f0c-4edb-9546-e0f5ec16cf06-operator-scripts\") pod \"b138df41-1f0c-4edb-9546-e0f5ec16cf06\" (UID: \"b138df41-1f0c-4edb-9546-e0f5ec16cf06\") " Nov 28 07:20:17 crc kubenswrapper[4946]: I1128 07:20:17.074753 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b138df41-1f0c-4edb-9546-e0f5ec16cf06-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b138df41-1f0c-4edb-9546-e0f5ec16cf06" (UID: "b138df41-1f0c-4edb-9546-e0f5ec16cf06"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:20:17 crc kubenswrapper[4946]: I1128 07:20:17.079919 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b138df41-1f0c-4edb-9546-e0f5ec16cf06-kube-api-access-fhb8l" (OuterVolumeSpecName: "kube-api-access-fhb8l") pod "b138df41-1f0c-4edb-9546-e0f5ec16cf06" (UID: "b138df41-1f0c-4edb-9546-e0f5ec16cf06"). InnerVolumeSpecName "kube-api-access-fhb8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:20:17 crc kubenswrapper[4946]: I1128 07:20:17.176643 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhb8l\" (UniqueName: \"kubernetes.io/projected/b138df41-1f0c-4edb-9546-e0f5ec16cf06-kube-api-access-fhb8l\") on node \"crc\" DevicePath \"\"" Nov 28 07:20:17 crc kubenswrapper[4946]: I1128 07:20:17.176723 4946 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b138df41-1f0c-4edb-9546-e0f5ec16cf06-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 07:20:17 crc kubenswrapper[4946]: I1128 07:20:17.496097 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell0d35d-account-delete-c848z" event={"ID":"b138df41-1f0c-4edb-9546-e0f5ec16cf06","Type":"ContainerDied","Data":"a1274a39d0273ac5daf35b3538734606c8afb8baaf853738f70068c3dfe1dc86"} Nov 28 07:20:17 crc kubenswrapper[4946]: I1128 07:20:17.496169 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/novacell0d35d-account-delete-c848z" Nov 28 07:20:17 crc kubenswrapper[4946]: I1128 07:20:17.496448 4946 scope.go:117] "RemoveContainer" containerID="baba1715ed3450c8abd6adfc3431c885e888fd38f6e6c21194828e801a28628d" Nov 28 07:20:17 crc kubenswrapper[4946]: I1128 07:20:17.551731 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novacell0d35d-account-delete-c848z"] Nov 28 07:20:17 crc kubenswrapper[4946]: I1128 07:20:17.562226 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/novacell0d35d-account-delete-c848z"] Nov 28 07:20:18 crc kubenswrapper[4946]: I1128 07:20:18.005672 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b138df41-1f0c-4edb-9546-e0f5ec16cf06" path="/var/lib/kubelet/pods/b138df41-1f0c-4edb-9546-e0f5ec16cf06/volumes" Nov 28 07:20:20 crc kubenswrapper[4946]: I1128 07:20:20.990595 4946 scope.go:117] "RemoveContainer" containerID="589c494ccdc76b2885f05d8ee6cc9d6370d1839c40f46f9651cf6d69eb740365" Nov 28 07:20:20 crc kubenswrapper[4946]: E1128 07:20:20.991955 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:20:29 crc kubenswrapper[4946]: I1128 07:20:29.878999 4946 scope.go:117] "RemoveContainer" containerID="4537c2e5ab62fa6caf7b8e50f18eb201daa55d3f5e4e0b453aef1b2b5a9efd81" Nov 28 07:20:29 crc kubenswrapper[4946]: I1128 07:20:29.933388 4946 scope.go:117] "RemoveContainer" containerID="aa1deedf4ec4d42dcf2f044cbe783e61d0b64e879b3d079f45f29b27749d5e49" Nov 28 07:20:29 crc kubenswrapper[4946]: I1128 07:20:29.978926 4946 scope.go:117] "RemoveContainer" containerID="9d86a0bc4007c7ecc8d7c768fbb98e17ac09a5daa0a1ecf769ccf31eead5582f" Nov 28 07:20:30 crc kubenswrapper[4946]: I1128 07:20:30.022276 4946 scope.go:117] "RemoveContainer" containerID="c1578a212bfd245eec3e5a729841471bea41a9a76672a4ed4f1a0685f80cbb7f" Nov 28 07:20:31 crc kubenswrapper[4946]: I1128 07:20:31.990597 4946 scope.go:117] "RemoveContainer" containerID="589c494ccdc76b2885f05d8ee6cc9d6370d1839c40f46f9651cf6d69eb740365" Nov 28 07:20:31 crc kubenswrapper[4946]: E1128 07:20:31.991677 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:20:42 crc kubenswrapper[4946]: I1128 07:20:42.989629 4946 scope.go:117] "RemoveContainer" containerID="589c494ccdc76b2885f05d8ee6cc9d6370d1839c40f46f9651cf6d69eb740365" Nov 28 07:20:42 crc kubenswrapper[4946]: E1128 07:20:42.990306 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" 
podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:20:56 crc kubenswrapper[4946]: I1128 07:20:56.990356 4946 scope.go:117] "RemoveContainer" containerID="589c494ccdc76b2885f05d8ee6cc9d6370d1839c40f46f9651cf6d69eb740365" Nov 28 07:20:56 crc kubenswrapper[4946]: E1128 07:20:56.991414 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:21:11 crc kubenswrapper[4946]: I1128 07:21:11.990380 4946 scope.go:117] "RemoveContainer" containerID="589c494ccdc76b2885f05d8ee6cc9d6370d1839c40f46f9651cf6d69eb740365" Nov 28 07:21:11 crc kubenswrapper[4946]: E1128 07:21:11.991224 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:21:23 crc kubenswrapper[4946]: I1128 07:21:23.990514 4946 scope.go:117] "RemoveContainer" containerID="589c494ccdc76b2885f05d8ee6cc9d6370d1839c40f46f9651cf6d69eb740365" Nov 28 07:21:23 crc kubenswrapper[4946]: E1128 07:21:23.991537 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:21:30 crc kubenswrapper[4946]: I1128 07:21:30.630207 4946 scope.go:117] "RemoveContainer" containerID="b3ab86ce5a317099367dac8c17c5175422e65311efedf83f30335372fd066f0e" Nov 28 07:21:30 crc kubenswrapper[4946]: I1128 07:21:30.685678 4946 scope.go:117] "RemoveContainer" containerID="275e074cabc40bc932fd3ba12868723a484346ce4531f93073acde75bcb63621" Nov 28 07:21:30 crc kubenswrapper[4946]: I1128 07:21:30.728741 4946 scope.go:117] "RemoveContainer" containerID="a70612598df74c451158c1e4c0bf961c5ebacf1837e0399d558530ad6df9f0d1" Nov 28 07:21:30 crc kubenswrapper[4946]: I1128 07:21:30.753689 4946 scope.go:117] "RemoveContainer" containerID="033021f7f40b9435bea47524398b2573a4b1eaaa8d23bc44625948305eabd648" Nov 28 07:21:30 crc kubenswrapper[4946]: I1128 07:21:30.784012 4946 scope.go:117] "RemoveContainer" containerID="17f6467df1ccf59fb978ae9c2f548d312c3f97527025965f75f479954b2b0652" Nov 28 07:21:30 crc kubenswrapper[4946]: I1128 07:21:30.826239 4946 scope.go:117] "RemoveContainer" containerID="fc88598dd6761393b295b22d6e88c16126d2718d69bd736288f3b924ac60358d" Nov 28 07:21:30 crc kubenswrapper[4946]: I1128 07:21:30.876695 4946 scope.go:117] "RemoveContainer" containerID="d18f3a1fbfca471cddca95a6a9f466632a71cd94221c0b60e67c5785129513a6" Nov 28 07:21:30 crc kubenswrapper[4946]: I1128 07:21:30.922892 4946 scope.go:117] "RemoveContainer" containerID="441402af77e9f4e15b43afd73dd8a933edb223fe92fcbbad3e9635df9b92743c" Nov 28 07:21:30 crc kubenswrapper[4946]: I1128 
07:21:30.945135 4946 scope.go:117] "RemoveContainer" containerID="708604ae7d5da79105e4d57c3239ec79662e44de189acba45fef8858a84a3729" Nov 28 07:21:30 crc kubenswrapper[4946]: I1128 07:21:30.963512 4946 scope.go:117] "RemoveContainer" containerID="9f56c9f71e0ce25a35f7cf818a1d4df2217737ffb3cd13d7f1a49bcebcfe6150" Nov 28 07:21:31 crc kubenswrapper[4946]: I1128 07:21:31.009486 4946 scope.go:117] "RemoveContainer" containerID="8651895f1595b9ee0a8ba4d22caea8c416ed8eea4ef61868ade64414a8501ff6" Nov 28 07:21:31 crc kubenswrapper[4946]: I1128 07:21:31.033835 4946 scope.go:117] "RemoveContainer" containerID="a4520b1e79a227ffd054996a0b1d87713c95eb613cdf51e4e33231939731796d" Nov 28 07:21:31 crc kubenswrapper[4946]: I1128 07:21:31.052651 4946 scope.go:117] "RemoveContainer" containerID="dc3c127513486f5382ffc7404818074fd53d094defda55359502651b1c8d18ef" Nov 28 07:21:31 crc kubenswrapper[4946]: I1128 07:21:31.073594 4946 scope.go:117] "RemoveContainer" containerID="7c1543b88dc71b87f31608f10e009dc744e9ea56762eb51a9a779b1da61ce75b" Nov 28 07:21:38 crc kubenswrapper[4946]: I1128 07:21:38.990858 4946 scope.go:117] "RemoveContainer" containerID="589c494ccdc76b2885f05d8ee6cc9d6370d1839c40f46f9651cf6d69eb740365" Nov 28 07:21:38 crc kubenswrapper[4946]: E1128 07:21:38.992196 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:21:50 crc kubenswrapper[4946]: I1128 07:21:50.990292 4946 scope.go:117] "RemoveContainer" containerID="589c494ccdc76b2885f05d8ee6cc9d6370d1839c40f46f9651cf6d69eb740365" Nov 28 07:21:50 crc kubenswrapper[4946]: E1128 07:21:50.991344 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:22:03 crc kubenswrapper[4946]: I1128 07:22:03.990369 4946 scope.go:117] "RemoveContainer" containerID="589c494ccdc76b2885f05d8ee6cc9d6370d1839c40f46f9651cf6d69eb740365" Nov 28 07:22:03 crc kubenswrapper[4946]: E1128 07:22:03.991715 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:22:18 crc kubenswrapper[4946]: I1128 07:22:18.990658 4946 scope.go:117] "RemoveContainer" containerID="589c494ccdc76b2885f05d8ee6cc9d6370d1839c40f46f9651cf6d69eb740365" Nov 28 07:22:18 crc kubenswrapper[4946]: E1128 07:22:18.991548 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:22:29 crc kubenswrapper[4946]: I1128 07:22:29.990216 4946 scope.go:117] "RemoveContainer" containerID="589c494ccdc76b2885f05d8ee6cc9d6370d1839c40f46f9651cf6d69eb740365" Nov 28 07:22:29 crc kubenswrapper[4946]: E1128 07:22:29.991408 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:22:31 crc kubenswrapper[4946]: I1128 07:22:31.370793 4946 scope.go:117] "RemoveContainer" containerID="3d5eeaae441aa25568927d194f96b2d53b36fd0d2f9a8f7955ca367e5464cb34" Nov 28 07:22:31 crc kubenswrapper[4946]: I1128 07:22:31.437725 4946 scope.go:117] "RemoveContainer" containerID="0f5395605e2863d1e20f2cb9839024fbafbfd2d53603e3ecacdf196ff0806098" Nov 28 07:22:31 crc kubenswrapper[4946]: I1128 07:22:31.481163 4946 scope.go:117] "RemoveContainer" containerID="ac71999f7195041c7350010409e92caba1128a1523d424f3e5e2d979a06edf34" Nov 28 07:22:31 crc kubenswrapper[4946]: I1128 07:22:31.525539 4946 scope.go:117] "RemoveContainer" containerID="8e951a45b4b9ae985f7ede9827a6d2b21c5d9745ebabcd97342388504f3849c8" Nov 28 07:22:40 crc kubenswrapper[4946]: I1128 07:22:40.990225 4946 scope.go:117] "RemoveContainer" containerID="589c494ccdc76b2885f05d8ee6cc9d6370d1839c40f46f9651cf6d69eb740365" Nov 28 07:22:40 crc kubenswrapper[4946]: E1128 07:22:40.991243 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:22:52 crc kubenswrapper[4946]: I1128 07:22:52.990348 4946 scope.go:117] "RemoveContainer" containerID="589c494ccdc76b2885f05d8ee6cc9d6370d1839c40f46f9651cf6d69eb740365" Nov 28 07:22:52 crc kubenswrapper[4946]: E1128 07:22:52.991345 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:23:06 crc kubenswrapper[4946]: I1128 07:23:06.990394 4946 scope.go:117] "RemoveContainer" containerID="589c494ccdc76b2885f05d8ee6cc9d6370d1839c40f46f9651cf6d69eb740365" Nov 28 07:23:06 crc kubenswrapper[4946]: E1128 07:23:06.991531 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:23:21 crc kubenswrapper[4946]: I1128 07:23:21.001091 4946 scope.go:117] "RemoveContainer" containerID="589c494ccdc76b2885f05d8ee6cc9d6370d1839c40f46f9651cf6d69eb740365" Nov 28 07:23:21 crc kubenswrapper[4946]: E1128 07:23:21.002741 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:23:31 crc kubenswrapper[4946]: I1128 07:23:31.629318 4946 scope.go:117] "RemoveContainer" containerID="84092b1962d9db4523e0c74821365501324a13449cfcf66d36f378c07b1fecea" Nov 28 07:23:31 crc kubenswrapper[4946]: I1128 07:23:31.663897 4946 scope.go:117] "RemoveContainer" containerID="9df069bb1b0f7f202b6a0360d34734cc278c73b6fc9f9ccab947c2326a62097b" Nov 28 07:23:31 crc kubenswrapper[4946]: I1128 07:23:31.712210 4946 scope.go:117] "RemoveContainer" containerID="d67738d9866152125af3a1266102ca4656a4f5012f313fb1146ddac533f56549" Nov 28 07:23:31 crc kubenswrapper[4946]: I1128 07:23:31.733132 4946 scope.go:117] "RemoveContainer" containerID="c6e6d37ec18105c33db4cdfae75f73255b47ec55fc3ee1889fa5c3ca22ae96de" Nov 28 07:23:31 crc kubenswrapper[4946]: I1128 07:23:31.754661 4946 scope.go:117] "RemoveContainer" containerID="34c03cab919d7afdbbafc7c5ec8251dff41d8908c91a4de2fb931f02e29401a2" Nov 28 07:23:31 crc kubenswrapper[4946]: I1128 07:23:31.790417 4946 scope.go:117] "RemoveContainer" containerID="d9ce4709d1cb40bce3dab99a4b8e4867e80357a705bfcf4276ab80331db85044" Nov 28 07:23:31 crc kubenswrapper[4946]: I1128 07:23:31.817476 4946 scope.go:117] "RemoveContainer" containerID="0dd7ceecbb40811425d322c6721c5aa9fee8214c0c3c536f96eb6a228e8a1dfb" Nov 28 07:23:31 crc kubenswrapper[4946]: I1128 07:23:31.838201 4946 scope.go:117] "RemoveContainer" containerID="703b3ac1dfd5613d3a24ce5b3cd50e11a43939a141bc6b9d66c051b90300a96b" Nov 28 07:23:31 crc kubenswrapper[4946]: I1128 07:23:31.865453 4946 scope.go:117] "RemoveContainer" containerID="94714a0d2515a34c244c85744cb38ff42cd4784182d2bc23af41f47d94d54c00" Nov 28 07:23:31 crc kubenswrapper[4946]: I1128 07:23:31.929294 4946 scope.go:117] "RemoveContainer" containerID="33cf1521825b65d2a855bb33ed486535726448c3aa1cde0589bb5e6245126d61" Nov 28 07:23:31 crc kubenswrapper[4946]: I1128 07:23:31.962162 4946 scope.go:117] "RemoveContainer" containerID="787af9da38e7f9ea7c2d782a0a03c764e1ffa2af9e7e27e0eb8c6049ba138058" Nov 28 07:23:36 crc kubenswrapper[4946]: I1128 07:23:36.000906 4946 scope.go:117] "RemoveContainer" containerID="589c494ccdc76b2885f05d8ee6cc9d6370d1839c40f46f9651cf6d69eb740365" Nov 28 07:23:36 crc kubenswrapper[4946]: E1128 07:23:36.002173 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:23:48 crc kubenswrapper[4946]: I1128 07:23:48.990935 4946 scope.go:117] "RemoveContainer" 
containerID="589c494ccdc76b2885f05d8ee6cc9d6370d1839c40f46f9651cf6d69eb740365" Nov 28 07:23:48 crc kubenswrapper[4946]: E1128 07:23:48.991959 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:24:01 crc kubenswrapper[4946]: I1128 07:24:01.989997 4946 scope.go:117] "RemoveContainer" containerID="589c494ccdc76b2885f05d8ee6cc9d6370d1839c40f46f9651cf6d69eb740365" Nov 28 07:24:01 crc kubenswrapper[4946]: E1128 07:24:01.991299 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:24:15 crc kubenswrapper[4946]: I1128 07:24:15.998706 4946 scope.go:117] "RemoveContainer" containerID="589c494ccdc76b2885f05d8ee6cc9d6370d1839c40f46f9651cf6d69eb740365" Nov 28 07:24:16 crc kubenswrapper[4946]: E1128 07:24:15.999816 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.371756 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6grsv"] Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.373207 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3521840d-60d0-450c-8c05-7e2ad0fc4e97" containerName="rabbitmq" Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.373231 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="3521840d-60d0-450c-8c05-7e2ad0fc4e97" containerName="rabbitmq" Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.373252 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1578c84-1d87-41b2-bfa7-637c3b53366f" containerName="neutron-httpd" Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.373265 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1578c84-1d87-41b2-bfa7-637c3b53366f" containerName="neutron-httpd" Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.373282 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3521840d-60d0-450c-8c05-7e2ad0fc4e97" containerName="setup-container" Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.373295 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="3521840d-60d0-450c-8c05-7e2ad0fc4e97" containerName="setup-container" Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.373312 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="313c5837-e776-49ef-8689-14f6f70d31a1" containerName="barbican-api" Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.373324 4946 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="313c5837-e776-49ef-8689-14f6f70d31a1" containerName="barbican-api" Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.373351 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="206851cb-4673-4ce1-b038-c2e425d306b7" containerName="nova-metadata-log" Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.373363 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="206851cb-4673-4ce1-b038-c2e425d306b7" containerName="nova-metadata-log" Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.373378 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4619b857-5e70-4ab3-807d-d233c9d9223c" containerName="barbican-worker-log" Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.373390 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="4619b857-5e70-4ab3-807d-d233c9d9223c" containerName="barbican-worker-log" Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.373406 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="944abeae-3f1d-4391-a375-b64ed9c17b14" containerName="ovsdb-server" Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.373418 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="944abeae-3f1d-4391-a375-b64ed9c17b14" containerName="ovsdb-server" Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.373431 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05997c14-3116-4439-8e63-230bf0e5c411" containerName="cinder-api" Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.373443 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="05997c14-3116-4439-8e63-230bf0e5c411" containerName="cinder-api" Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.373459 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9847c34-a2be-405c-8bd8-34ba251d218d" containerName="nova-cell0-conductor-conductor" Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.373510 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9847c34-a2be-405c-8bd8-34ba251d218d" containerName="nova-cell0-conductor-conductor" Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.373538 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f80c95e6-2981-4755-ada4-26bbf1372693" containerName="mariadb-account-delete" Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.373551 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="f80c95e6-2981-4755-ada4-26bbf1372693" containerName="mariadb-account-delete" Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.373570 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerName="account-server" Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.373583 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerName="account-server" Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.373606 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="177d3d9f-5e48-4b4e-9329-9d46daa35557" containerName="sg-core" Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.373618 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="177d3d9f-5e48-4b4e-9329-9d46daa35557" containerName="sg-core" Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.373638 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="313c5837-e776-49ef-8689-14f6f70d31a1" containerName="barbican-api-log" Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 
07:24:27.373650 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="313c5837-e776-49ef-8689-14f6f70d31a1" containerName="barbican-api-log" Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.373672 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="177d3d9f-5e48-4b4e-9329-9d46daa35557" containerName="ceilometer-notification-agent" Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.373684 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="177d3d9f-5e48-4b4e-9329-9d46daa35557" containerName="ceilometer-notification-agent" Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.373704 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb07712a-805e-4e0f-9a81-dd8ce42bfb88" containerName="mariadb-account-delete" Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.373717 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb07712a-805e-4e0f-9a81-dd8ce42bfb88" containerName="mariadb-account-delete" Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.373737 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="177d3d9f-5e48-4b4e-9329-9d46daa35557" containerName="proxy-httpd" Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.373749 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="177d3d9f-5e48-4b4e-9329-9d46daa35557" containerName="proxy-httpd" Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.373772 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerName="object-auditor" Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.373784 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerName="object-auditor" Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.373807 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db7322c8-b99d-4970-85c0-218d683f1ca3" containerName="nova-cell1-conductor-conductor" Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.373819 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="db7322c8-b99d-4970-85c0-218d683f1ca3" containerName="nova-cell1-conductor-conductor" Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.373843 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13d10c33-0ca9-47d5-ac49-19391cebfb39" containerName="mysql-bootstrap" Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.373855 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="13d10c33-0ca9-47d5-ac49-19391cebfb39" containerName="mysql-bootstrap" Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.373874 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerName="container-auditor" Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.373885 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerName="container-auditor" Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.374085 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerName="rsync" Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.374098 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerName="rsync" Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.374116 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f7aa965-bfc0-4db1-a2d2-07fe02be9f18" 
containerName="mariadb-account-delete" Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.374128 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f7aa965-bfc0-4db1-a2d2-07fe02be9f18" containerName="mariadb-account-delete" Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.374145 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c485c360-55fc-49da-851d-ab74f7c7fc98" containerName="probe" Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.374157 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="c485c360-55fc-49da-851d-ab74f7c7fc98" containerName="probe" Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.374171 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerName="account-reaper" Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.374186 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerName="account-reaper" Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.374211 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52101de8-a25c-4372-9df3-3f090167ff5f" containerName="placement-api" Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.374226 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="52101de8-a25c-4372-9df3-3f090167ff5f" containerName="placement-api" Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.374245 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05997c14-3116-4439-8e63-230bf0e5c411" containerName="cinder-api-log" Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.374262 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="05997c14-3116-4439-8e63-230bf0e5c411" containerName="cinder-api-log" Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.374288 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="813134a8-463b-4f7d-8160-ceb1c5a96853" containerName="glance-httpd" Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.374301 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="813134a8-463b-4f7d-8160-ceb1c5a96853" containerName="glance-httpd" Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.374323 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="944abeae-3f1d-4391-a375-b64ed9c17b14" containerName="ovsdb-server-init" Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.374336 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="944abeae-3f1d-4391-a375-b64ed9c17b14" containerName="ovsdb-server-init" Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.374349 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37" containerName="keystone-api" Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.374361 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37" containerName="keystone-api" Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.374375 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerName="object-server" Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.374387 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerName="object-server" Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.374402 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" 
containerName="object-replicator" Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.374414 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerName="object-replicator" Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.374437 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerName="account-replicator" Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.374449 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerName="account-replicator" Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.374500 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="177d3d9f-5e48-4b4e-9329-9d46daa35557" containerName="ceilometer-central-agent" Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.374513 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="177d3d9f-5e48-4b4e-9329-9d46daa35557" containerName="ceilometer-central-agent" Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.374534 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13d10c33-0ca9-47d5-ac49-19391cebfb39" containerName="galera" Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.374546 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="13d10c33-0ca9-47d5-ac49-19391cebfb39" containerName="galera" Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.374561 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1486d432-b3a6-4470-b145-076dafbfca67" containerName="mariadb-account-delete" Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.374573 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="1486d432-b3a6-4470-b145-076dafbfca67" containerName="mariadb-account-delete" Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.374590 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b138df41-1f0c-4edb-9546-e0f5ec16cf06" containerName="mariadb-account-delete" Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.374607 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="b138df41-1f0c-4edb-9546-e0f5ec16cf06" containerName="mariadb-account-delete" Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.374622 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30b2d9aa-848b-4f33-9bd8-921f5de5ab36" containerName="mariadb-account-delete" Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.374635 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="30b2d9aa-848b-4f33-9bd8-921f5de5ab36" containerName="mariadb-account-delete" Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.374657 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c485c360-55fc-49da-851d-ab74f7c7fc98" containerName="cinder-scheduler" Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.374669 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="c485c360-55fc-49da-851d-ab74f7c7fc98" containerName="cinder-scheduler" Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.374684 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="206851cb-4673-4ce1-b038-c2e425d306b7" containerName="nova-metadata-metadata" Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.374696 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="206851cb-4673-4ce1-b038-c2e425d306b7" containerName="nova-metadata-metadata" Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.374718 4946 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59fdca77-b333-44be-ab8c-96a2f4bcc340" containerName="rabbitmq" Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.374730 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="59fdca77-b333-44be-ab8c-96a2f4bcc340" containerName="rabbitmq" Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.374754 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerName="container-server" Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.374769 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerName="container-server" Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.374783 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="376693c8-e03f-4085-9be2-0ef9a0e27c5c" containerName="kube-state-metrics" Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.374795 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="376693c8-e03f-4085-9be2-0ef9a0e27c5c" containerName="kube-state-metrics" Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.374815 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerName="container-updater" Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.374827 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerName="container-updater" Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.374842 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c274eefa-3598-470b-9b07-25928903d425" containerName="ovn-northd" Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.374853 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="c274eefa-3598-470b-9b07-25928903d425" containerName="ovn-northd" Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.374867 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerName="swift-recon-cron" Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.374878 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerName="swift-recon-cron" Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.374901 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32ed9247-5959-4a5b-a879-52fac366f999" containerName="mariadb-account-delete" Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.374913 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="32ed9247-5959-4a5b-a879-52fac366f999" containerName="mariadb-account-delete" Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.374935 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59fdca77-b333-44be-ab8c-96a2f4bcc340" containerName="setup-container" Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.374946 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="59fdca77-b333-44be-ab8c-96a2f4bcc340" containerName="setup-container" Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.374967 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="813134a8-463b-4f7d-8160-ceb1c5a96853" containerName="glance-log" Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.374979 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="813134a8-463b-4f7d-8160-ceb1c5a96853" containerName="glance-log" Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 
07:24:27.374992 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3ba1954-566f-4e25-8312-855a58935547" containerName="nova-scheduler-scheduler"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.375004 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3ba1954-566f-4e25-8312-855a58935547" containerName="nova-scheduler-scheduler"
Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.375021 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c274eefa-3598-470b-9b07-25928903d425" containerName="openstack-network-exporter"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.375034 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="c274eefa-3598-470b-9b07-25928903d425" containerName="openstack-network-exporter"
Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.375051 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="587a9b3d-1634-4af6-96d2-e60c03a7d75f" containerName="nova-api-api"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.375062 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="587a9b3d-1634-4af6-96d2-e60c03a7d75f" containerName="nova-api-api"
Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.375078 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acadbe07-94b0-4a5d-ac42-6524f0e4ce61" containerName="glance-httpd"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.375091 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="acadbe07-94b0-4a5d-ac42-6524f0e4ce61" containerName="glance-httpd"
Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.375112 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerName="container-replicator"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.375123 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerName="container-replicator"
Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.375143 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4619b857-5e70-4ab3-807d-d233c9d9223c" containerName="barbican-worker"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.375155 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="4619b857-5e70-4ab3-807d-d233c9d9223c" containerName="barbican-worker"
Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.375171 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="587a9b3d-1634-4af6-96d2-e60c03a7d75f" containerName="nova-api-log"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.375184 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="587a9b3d-1634-4af6-96d2-e60c03a7d75f" containerName="nova-api-log"
Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.375205 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52101de8-a25c-4372-9df3-3f090167ff5f" containerName="placement-log"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.375217 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="52101de8-a25c-4372-9df3-3f090167ff5f" containerName="placement-log"
Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.375237 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acadbe07-94b0-4a5d-ac42-6524f0e4ce61" containerName="glance-log"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.375248 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="acadbe07-94b0-4a5d-ac42-6524f0e4ce61" containerName="glance-log"
Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.375264 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62fdad5e-59c5-4d8f-87da-79b384fb82be" containerName="memcached"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.375276 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="62fdad5e-59c5-4d8f-87da-79b384fb82be" containerName="memcached"
Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.375298 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerName="account-auditor"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.375311 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerName="account-auditor"
Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.375327 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerName="object-expirer"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.375339 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerName="object-expirer"
Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.375357 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="944abeae-3f1d-4391-a375-b64ed9c17b14" containerName="ovs-vswitchd"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.375368 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="944abeae-3f1d-4391-a375-b64ed9c17b14" containerName="ovs-vswitchd"
Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.375389 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerName="object-updater"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.375400 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerName="object-updater"
Nov 28 07:24:27 crc kubenswrapper[4946]: E1128 07:24:27.375413 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1578c84-1d87-41b2-bfa7-637c3b53366f" containerName="neutron-api"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.375425 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1578c84-1d87-41b2-bfa7-637c3b53366f" containerName="neutron-api"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.375704 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="62fdad5e-59c5-4d8f-87da-79b384fb82be" containerName="memcached"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.375727 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3ba1954-566f-4e25-8312-855a58935547" containerName="nova-scheduler-scheduler"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.375745 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="acadbe07-94b0-4a5d-ac42-6524f0e4ce61" containerName="glance-log"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.375768 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="813134a8-463b-4f7d-8160-ceb1c5a96853" containerName="glance-log"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.375783 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="db7322c8-b99d-4970-85c0-218d683f1ca3" containerName="nova-cell1-conductor-conductor"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.375799 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="206851cb-4673-4ce1-b038-c2e425d306b7" containerName="nova-metadata-metadata"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.375819 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerName="container-auditor"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.375842 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="32ed9247-5959-4a5b-a879-52fac366f999" containerName="mariadb-account-delete"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.375858 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="944abeae-3f1d-4391-a375-b64ed9c17b14" containerName="ovs-vswitchd"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.375870 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="587a9b3d-1634-4af6-96d2-e60c03a7d75f" containerName="nova-api-log"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.375882 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerName="account-server"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.375894 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerName="rsync"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.375913 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="4619b857-5e70-4ab3-807d-d233c9d9223c" containerName="barbican-worker"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.375933 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="206851cb-4673-4ce1-b038-c2e425d306b7" containerName="nova-metadata-log"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.375952 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="59fdca77-b333-44be-ab8c-96a2f4bcc340" containerName="rabbitmq"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.375968 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerName="account-auditor"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.375982 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1578c84-1d87-41b2-bfa7-637c3b53366f" containerName="neutron-httpd"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.375998 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="52101de8-a25c-4372-9df3-3f090167ff5f" containerName="placement-api"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.376012 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9847c34-a2be-405c-8bd8-34ba251d218d" containerName="nova-cell0-conductor-conductor"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.376028 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="c485c360-55fc-49da-851d-ab74f7c7fc98" containerName="probe"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.376049 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerName="account-replicator"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.376065 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="52101de8-a25c-4372-9df3-3f090167ff5f" containerName="placement-log"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.376078 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="4619b857-5e70-4ab3-807d-d233c9d9223c" containerName="barbican-worker-log"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.376097 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="acadbe07-94b0-4a5d-ac42-6524f0e4ce61" containerName="glance-httpd"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.376115 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerName="swift-recon-cron"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.376128 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerName="object-auditor"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.376149 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="944abeae-3f1d-4391-a375-b64ed9c17b14" containerName="ovsdb-server"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.376161 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="05997c14-3116-4439-8e63-230bf0e5c411" containerName="cinder-api"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.376186 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="177d3d9f-5e48-4b4e-9329-9d46daa35557" containerName="sg-core"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.376206 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="30b2d9aa-848b-4f33-9bd8-921f5de5ab36" containerName="mariadb-account-delete"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.376220 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="177d3d9f-5e48-4b4e-9329-9d46daa35557" containerName="ceilometer-central-agent"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.376238 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="c485c360-55fc-49da-851d-ab74f7c7fc98" containerName="cinder-scheduler"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.376256 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="c274eefa-3598-470b-9b07-25928903d425" containerName="ovn-northd"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.376278 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="376693c8-e03f-4085-9be2-0ef9a0e27c5c" containerName="kube-state-metrics"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.376291 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="587a9b3d-1634-4af6-96d2-e60c03a7d75f" containerName="nova-api-api"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.376306 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerName="object-server"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.376320 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c7d9b55-63f4-4b00-97ab-0b1d23f4dc37" containerName="keystone-api"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.376336 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="813134a8-463b-4f7d-8160-ceb1c5a96853" containerName="glance-httpd"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.376355 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerName="account-reaper"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.376371 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerName="container-updater"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.376384 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerName="object-replicator"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.376398 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerName="object-expirer"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.376411 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="f80c95e6-2981-4755-ada4-26bbf1372693" containerName="mariadb-account-delete"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.376431 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="13d10c33-0ca9-47d5-ac49-19391cebfb39" containerName="galera"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.376448 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f7aa965-bfc0-4db1-a2d2-07fe02be9f18" containerName="mariadb-account-delete"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.376487 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="313c5837-e776-49ef-8689-14f6f70d31a1" containerName="barbican-api-log"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.376501 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="177d3d9f-5e48-4b4e-9329-9d46daa35557" containerName="ceilometer-notification-agent"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.376519 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="05997c14-3116-4439-8e63-230bf0e5c411" containerName="cinder-api-log"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.376535 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="c274eefa-3598-470b-9b07-25928903d425" containerName="openstack-network-exporter"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.376552 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="b138df41-1f0c-4edb-9546-e0f5ec16cf06" containerName="mariadb-account-delete"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.376565 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="3521840d-60d0-450c-8c05-7e2ad0fc4e97" containerName="rabbitmq"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.376582 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="313c5837-e776-49ef-8689-14f6f70d31a1" containerName="barbican-api"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.376602 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="1486d432-b3a6-4470-b145-076dafbfca67" containerName="mariadb-account-delete"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.376619 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="177d3d9f-5e48-4b4e-9329-9d46daa35557" containerName="proxy-httpd"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.376634 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerName="container-server"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.376651 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb07712a-805e-4e0f-9a81-dd8ce42bfb88" containerName="mariadb-account-delete"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.376671 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerName="container-replicator"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.376694 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7d310ee-b686-4e3d-b554-393fc09a770d" containerName="object-updater"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.376710 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1578c84-1d87-41b2-bfa7-637c3b53366f" containerName="neutron-api"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.379540 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6grsv"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.381417 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6grsv"]
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.487410 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/add26c7a-239e-4bbb-b966-cf1fda1b12f5-utilities\") pod \"community-operators-6grsv\" (UID: \"add26c7a-239e-4bbb-b966-cf1fda1b12f5\") " pod="openshift-marketplace/community-operators-6grsv"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.487744 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/add26c7a-239e-4bbb-b966-cf1fda1b12f5-catalog-content\") pod \"community-operators-6grsv\" (UID: \"add26c7a-239e-4bbb-b966-cf1fda1b12f5\") " pod="openshift-marketplace/community-operators-6grsv"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.487807 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s7mz\" (UniqueName: \"kubernetes.io/projected/add26c7a-239e-4bbb-b966-cf1fda1b12f5-kube-api-access-7s7mz\") pod \"community-operators-6grsv\" (UID: \"add26c7a-239e-4bbb-b966-cf1fda1b12f5\") " pod="openshift-marketplace/community-operators-6grsv"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.589588 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/add26c7a-239e-4bbb-b966-cf1fda1b12f5-utilities\") pod \"community-operators-6grsv\" (UID: \"add26c7a-239e-4bbb-b966-cf1fda1b12f5\") " pod="openshift-marketplace/community-operators-6grsv"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.589664 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/add26c7a-239e-4bbb-b966-cf1fda1b12f5-catalog-content\") pod \"community-operators-6grsv\" (UID: \"add26c7a-239e-4bbb-b966-cf1fda1b12f5\") " pod="openshift-marketplace/community-operators-6grsv"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.589692 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7s7mz\" (UniqueName: \"kubernetes.io/projected/add26c7a-239e-4bbb-b966-cf1fda1b12f5-kube-api-access-7s7mz\") pod \"community-operators-6grsv\" (UID: \"add26c7a-239e-4bbb-b966-cf1fda1b12f5\") " pod="openshift-marketplace/community-operators-6grsv"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.590113 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/add26c7a-239e-4bbb-b966-cf1fda1b12f5-utilities\") pod \"community-operators-6grsv\" (UID: \"add26c7a-239e-4bbb-b966-cf1fda1b12f5\") " pod="openshift-marketplace/community-operators-6grsv"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.590151 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/add26c7a-239e-4bbb-b966-cf1fda1b12f5-catalog-content\") pod \"community-operators-6grsv\" (UID: \"add26c7a-239e-4bbb-b966-cf1fda1b12f5\") " pod="openshift-marketplace/community-operators-6grsv"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.609374 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s7mz\" (UniqueName: \"kubernetes.io/projected/add26c7a-239e-4bbb-b966-cf1fda1b12f5-kube-api-access-7s7mz\") pod \"community-operators-6grsv\" (UID: \"add26c7a-239e-4bbb-b966-cf1fda1b12f5\") " pod="openshift-marketplace/community-operators-6grsv"
Nov 28 07:24:27 crc kubenswrapper[4946]: I1128 07:24:27.713186 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6grsv"
Nov 28 07:24:28 crc kubenswrapper[4946]: I1128 07:24:28.228382 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6grsv"]
Nov 28 07:24:29 crc kubenswrapper[4946]: I1128 07:24:29.242551 4946 generic.go:334] "Generic (PLEG): container finished" podID="add26c7a-239e-4bbb-b966-cf1fda1b12f5" containerID="2b57f8b8592a0a3b5a58ab5f7d91d2a5cf9d18a5105a52e0fc444cd827c87145" exitCode=0
Nov 28 07:24:29 crc kubenswrapper[4946]: I1128 07:24:29.242622 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6grsv" event={"ID":"add26c7a-239e-4bbb-b966-cf1fda1b12f5","Type":"ContainerDied","Data":"2b57f8b8592a0a3b5a58ab5f7d91d2a5cf9d18a5105a52e0fc444cd827c87145"}
Nov 28 07:24:29 crc kubenswrapper[4946]: I1128 07:24:29.242661 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6grsv" event={"ID":"add26c7a-239e-4bbb-b966-cf1fda1b12f5","Type":"ContainerStarted","Data":"19fc436a1ea48131ca87b50c5b106eea37f50f9a60ff99da758213e4ceaefdc6"}
Nov 28 07:24:29 crc kubenswrapper[4946]: I1128 07:24:29.246761 4946 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 28 07:24:30 crc kubenswrapper[4946]: I1128 07:24:30.990600 4946 scope.go:117] "RemoveContainer" containerID="589c494ccdc76b2885f05d8ee6cc9d6370d1839c40f46f9651cf6d69eb740365"
Nov 28 07:24:30 crc kubenswrapper[4946]: E1128 07:24:30.992719 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
Nov 28 07:24:31 crc kubenswrapper[4946]: I1128 07:24:31.265033 4946 generic.go:334] "Generic (PLEG): container finished" podID="add26c7a-239e-4bbb-b966-cf1fda1b12f5" containerID="f02247464e2e2edcc4135f39223474aff1ef7b5de465f742409116d9325fe424" exitCode=0
Nov 28 07:24:31 crc kubenswrapper[4946]: I1128 07:24:31.265114 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6grsv" event={"ID":"add26c7a-239e-4bbb-b966-cf1fda1b12f5","Type":"ContainerDied","Data":"f02247464e2e2edcc4135f39223474aff1ef7b5de465f742409116d9325fe424"}
Nov 28 07:24:32 crc kubenswrapper[4946]: I1128 07:24:32.141879 4946 scope.go:117] "RemoveContainer" containerID="c93f03ceec40d0cc831cb034e30236277dd7917f19967a2a82daec8f7e2ea5a1"
Nov 28 07:24:32 crc kubenswrapper[4946]: I1128 07:24:32.173063 4946 scope.go:117] "RemoveContainer" containerID="013a463e5332a4ac23726892ef20f93b135d9eefa71f004d0bc295ed866d6e69"
Nov 28 07:24:32 crc kubenswrapper[4946]: I1128 07:24:32.206662 4946 scope.go:117] "RemoveContainer" containerID="a348da2efed5cb9f971db2057ab00687cdd610c28bf4bcbcd484d5b3b192867c"
Nov 28 07:24:32 crc kubenswrapper[4946]: I1128 07:24:32.241610 4946 scope.go:117] "RemoveContainer" containerID="d1a874f7dc46f5856603f1816207ac7f5219e5bf8113380c75e03dd02f1318d2"
Nov 28 07:24:32 crc kubenswrapper[4946]: I1128 07:24:32.277723 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6grsv" event={"ID":"add26c7a-239e-4bbb-b966-cf1fda1b12f5","Type":"ContainerStarted","Data":"956d2ab6f2e7a95a782f718c753bad2f72a8f5d8c12a5f818108dd219e57d992"}
Nov 28 07:24:32 crc kubenswrapper[4946]: I1128 07:24:32.303264 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6grsv" podStartSLOduration=2.678744363 podStartE2EDuration="5.303246544s" podCreationTimestamp="2025-11-28 07:24:27 +0000 UTC" firstStartedPulling="2025-11-28 07:24:29.246391589 +0000 UTC m=+1923.624456720" lastFinishedPulling="2025-11-28 07:24:31.87089379 +0000 UTC m=+1926.248958901" observedRunningTime="2025-11-28 07:24:32.299388679 +0000 UTC m=+1926.677453790" watchObservedRunningTime="2025-11-28 07:24:32.303246544 +0000 UTC m=+1926.681311655"
Nov 28 07:24:32 crc kubenswrapper[4946]: I1128 07:24:32.316684 4946 scope.go:117] "RemoveContainer" containerID="9a6b1a4ad119bfed32568355cec70a94a7fcf22a0b1f5a06ffdb17c6c3ad1bf9"
Nov 28 07:24:37 crc kubenswrapper[4946]: I1128 07:24:37.713846 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6grsv"
Nov 28 07:24:37 crc kubenswrapper[4946]: I1128 07:24:37.714520 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6grsv"
Nov 28 07:24:37 crc kubenswrapper[4946]: I1128 07:24:37.784029 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6grsv"
Nov 28 07:24:38 crc kubenswrapper[4946]: I1128 07:24:38.420560 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6grsv"
Nov 28 07:24:38 crc kubenswrapper[4946]: I1128 07:24:38.488747 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6grsv"]
Nov 28 07:24:40 crc kubenswrapper[4946]: I1128 07:24:40.361208 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6grsv" podUID="add26c7a-239e-4bbb-b966-cf1fda1b12f5" containerName="registry-server" containerID="cri-o://956d2ab6f2e7a95a782f718c753bad2f72a8f5d8c12a5f818108dd219e57d992" gracePeriod=2
Nov 28 07:24:41 crc kubenswrapper[4946]: I1128 07:24:41.390408 4946 generic.go:334] "Generic (PLEG): container finished" podID="add26c7a-239e-4bbb-b966-cf1fda1b12f5" containerID="956d2ab6f2e7a95a782f718c753bad2f72a8f5d8c12a5f818108dd219e57d992" exitCode=0
Nov 28 07:24:41 crc kubenswrapper[4946]: I1128 07:24:41.390603 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6grsv" event={"ID":"add26c7a-239e-4bbb-b966-cf1fda1b12f5","Type":"ContainerDied","Data":"956d2ab6f2e7a95a782f718c753bad2f72a8f5d8c12a5f818108dd219e57d992"}
Nov 28 07:24:41 crc kubenswrapper[4946]: I1128 07:24:41.516397 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6grsv"
Nov 28 07:24:41 crc kubenswrapper[4946]: I1128 07:24:41.631274 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/add26c7a-239e-4bbb-b966-cf1fda1b12f5-utilities\") pod \"add26c7a-239e-4bbb-b966-cf1fda1b12f5\" (UID: \"add26c7a-239e-4bbb-b966-cf1fda1b12f5\") "
Nov 28 07:24:41 crc kubenswrapper[4946]: I1128 07:24:41.631388 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7s7mz\" (UniqueName: \"kubernetes.io/projected/add26c7a-239e-4bbb-b966-cf1fda1b12f5-kube-api-access-7s7mz\") pod \"add26c7a-239e-4bbb-b966-cf1fda1b12f5\" (UID: \"add26c7a-239e-4bbb-b966-cf1fda1b12f5\") "
Nov 28 07:24:41 crc kubenswrapper[4946]: I1128 07:24:41.632578 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/add26c7a-239e-4bbb-b966-cf1fda1b12f5-catalog-content\") pod \"add26c7a-239e-4bbb-b966-cf1fda1b12f5\" (UID: \"add26c7a-239e-4bbb-b966-cf1fda1b12f5\") "
Nov 28 07:24:41 crc kubenswrapper[4946]: I1128 07:24:41.632966 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/add26c7a-239e-4bbb-b966-cf1fda1b12f5-utilities" (OuterVolumeSpecName: "utilities") pod "add26c7a-239e-4bbb-b966-cf1fda1b12f5" (UID: "add26c7a-239e-4bbb-b966-cf1fda1b12f5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 07:24:41 crc kubenswrapper[4946]: I1128 07:24:41.638798 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/add26c7a-239e-4bbb-b966-cf1fda1b12f5-kube-api-access-7s7mz" (OuterVolumeSpecName: "kube-api-access-7s7mz") pod "add26c7a-239e-4bbb-b966-cf1fda1b12f5" (UID: "add26c7a-239e-4bbb-b966-cf1fda1b12f5"). InnerVolumeSpecName "kube-api-access-7s7mz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 07:24:41 crc kubenswrapper[4946]: I1128 07:24:41.719005 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/add26c7a-239e-4bbb-b966-cf1fda1b12f5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "add26c7a-239e-4bbb-b966-cf1fda1b12f5" (UID: "add26c7a-239e-4bbb-b966-cf1fda1b12f5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 07:24:41 crc kubenswrapper[4946]: I1128 07:24:41.734065 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/add26c7a-239e-4bbb-b966-cf1fda1b12f5-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 07:24:41 crc kubenswrapper[4946]: I1128 07:24:41.734340 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7s7mz\" (UniqueName: \"kubernetes.io/projected/add26c7a-239e-4bbb-b966-cf1fda1b12f5-kube-api-access-7s7mz\") on node \"crc\" DevicePath \"\""
Nov 28 07:24:41 crc kubenswrapper[4946]: I1128 07:24:41.734358 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/add26c7a-239e-4bbb-b966-cf1fda1b12f5-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 07:24:42 crc kubenswrapper[4946]: I1128 07:24:42.401160 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6grsv" event={"ID":"add26c7a-239e-4bbb-b966-cf1fda1b12f5","Type":"ContainerDied","Data":"19fc436a1ea48131ca87b50c5b106eea37f50f9a60ff99da758213e4ceaefdc6"}
Nov 28 07:24:42 crc kubenswrapper[4946]: I1128 07:24:42.401217 4946 scope.go:117] "RemoveContainer" containerID="956d2ab6f2e7a95a782f718c753bad2f72a8f5d8c12a5f818108dd219e57d992"
Nov 28 07:24:42 crc kubenswrapper[4946]: I1128 07:24:42.401339 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6grsv"
Nov 28 07:24:42 crc kubenswrapper[4946]: I1128 07:24:42.430872 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6grsv"]
Nov 28 07:24:42 crc kubenswrapper[4946]: I1128 07:24:42.433405 4946 scope.go:117] "RemoveContainer" containerID="f02247464e2e2edcc4135f39223474aff1ef7b5de465f742409116d9325fe424"
Nov 28 07:24:42 crc kubenswrapper[4946]: I1128 07:24:42.439992 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6grsv"]
Nov 28 07:24:42 crc kubenswrapper[4946]: I1128 07:24:42.468869 4946 scope.go:117] "RemoveContainer" containerID="2b57f8b8592a0a3b5a58ab5f7d91d2a5cf9d18a5105a52e0fc444cd827c87145"
Nov 28 07:24:44 crc kubenswrapper[4946]: I1128 07:24:44.014397 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="add26c7a-239e-4bbb-b966-cf1fda1b12f5" path="/var/lib/kubelet/pods/add26c7a-239e-4bbb-b966-cf1fda1b12f5/volumes"
Nov 28 07:24:44 crc kubenswrapper[4946]: I1128 07:24:44.990360 4946 scope.go:117] "RemoveContainer" containerID="589c494ccdc76b2885f05d8ee6cc9d6370d1839c40f46f9651cf6d69eb740365"
Nov 28 07:24:44 crc kubenswrapper[4946]: E1128 07:24:44.990857 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
Nov 28 07:24:57 crc kubenswrapper[4946]: I1128 07:24:57.990360 4946 scope.go:117] "RemoveContainer" containerID="589c494ccdc76b2885f05d8ee6cc9d6370d1839c40f46f9651cf6d69eb740365"
Nov 28 07:24:58 crc kubenswrapper[4946]: I1128 07:24:58.557495 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerStarted","Data":"5c6bc18cf8e1fc0e610966a3fd2dcb94036d907b5319dc2c88288f5b8ad7bb25"}
Nov 28 07:25:32 crc kubenswrapper[4946]: I1128 07:25:32.420914 4946 scope.go:117] "RemoveContainer" containerID="bdf5d4a6d7c993569bedcc568b4c82d9c837e7bbc8a9e5ecdc19c96b5ce2738b"
Nov 28 07:26:27 crc kubenswrapper[4946]: I1128 07:26:27.349925 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-x8qql"]
Nov 28 07:26:27 crc kubenswrapper[4946]: E1128 07:26:27.351176 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="add26c7a-239e-4bbb-b966-cf1fda1b12f5" containerName="extract-content"
Nov 28 07:26:27 crc kubenswrapper[4946]: I1128 07:26:27.351199 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="add26c7a-239e-4bbb-b966-cf1fda1b12f5" containerName="extract-content"
Nov 28 07:26:27 crc kubenswrapper[4946]: E1128 07:26:27.351245 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="add26c7a-239e-4bbb-b966-cf1fda1b12f5" containerName="extract-utilities"
Nov 28 07:26:27 crc kubenswrapper[4946]: I1128 07:26:27.351259 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="add26c7a-239e-4bbb-b966-cf1fda1b12f5" containerName="extract-utilities"
Nov 28 07:26:27 crc kubenswrapper[4946]: E1128 07:26:27.351284 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="add26c7a-239e-4bbb-b966-cf1fda1b12f5" containerName="registry-server"
Nov 28 07:26:27 crc kubenswrapper[4946]: I1128 07:26:27.351297 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="add26c7a-239e-4bbb-b966-cf1fda1b12f5" containerName="registry-server"
Nov 28 07:26:27 crc kubenswrapper[4946]: I1128 07:26:27.351578 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="add26c7a-239e-4bbb-b966-cf1fda1b12f5" containerName="registry-server"
Nov 28 07:26:27 crc kubenswrapper[4946]: I1128 07:26:27.353400 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x8qql"
Nov 28 07:26:27 crc kubenswrapper[4946]: I1128 07:26:27.364147 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x8qql"]
Nov 28 07:26:27 crc kubenswrapper[4946]: I1128 07:26:27.370906 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27486823-a92b-46f6-952c-30598e02bad4-catalog-content\") pod \"certified-operators-x8qql\" (UID: \"27486823-a92b-46f6-952c-30598e02bad4\") " pod="openshift-marketplace/certified-operators-x8qql"
Nov 28 07:26:27 crc kubenswrapper[4946]: I1128 07:26:27.371030 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27486823-a92b-46f6-952c-30598e02bad4-utilities\") pod \"certified-operators-x8qql\" (UID: \"27486823-a92b-46f6-952c-30598e02bad4\") " pod="openshift-marketplace/certified-operators-x8qql"
Nov 28 07:26:27 crc kubenswrapper[4946]: I1128 07:26:27.371103 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc2tb\" (UniqueName: \"kubernetes.io/projected/27486823-a92b-46f6-952c-30598e02bad4-kube-api-access-jc2tb\") pod \"certified-operators-x8qql\" (UID: \"27486823-a92b-46f6-952c-30598e02bad4\") " pod="openshift-marketplace/certified-operators-x8qql"
Nov 28 07:26:27 crc kubenswrapper[4946]: I1128 07:26:27.472698 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27486823-a92b-46f6-952c-30598e02bad4-utilities\") pod \"certified-operators-x8qql\" (UID: \"27486823-a92b-46f6-952c-30598e02bad4\") " pod="openshift-marketplace/certified-operators-x8qql"
Nov 28 07:26:27 crc kubenswrapper[4946]: I1128 07:26:27.472814 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jc2tb\" (UniqueName: \"kubernetes.io/projected/27486823-a92b-46f6-952c-30598e02bad4-kube-api-access-jc2tb\") pod \"certified-operators-x8qql\" (UID: \"27486823-a92b-46f6-952c-30598e02bad4\") " pod="openshift-marketplace/certified-operators-x8qql"
Nov 28 07:26:27 crc kubenswrapper[4946]: I1128 07:26:27.472955 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27486823-a92b-46f6-952c-30598e02bad4-catalog-content\") pod \"certified-operators-x8qql\" (UID: \"27486823-a92b-46f6-952c-30598e02bad4\") " pod="openshift-marketplace/certified-operators-x8qql"
Nov 28 07:26:27 crc kubenswrapper[4946]: I1128 07:26:27.473921 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27486823-a92b-46f6-952c-30598e02bad4-catalog-content\") pod \"certified-operators-x8qql\" (UID: \"27486823-a92b-46f6-952c-30598e02bad4\") " pod="openshift-marketplace/certified-operators-x8qql"
Nov 28 07:26:27 crc kubenswrapper[4946]: I1128 07:26:27.473978 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27486823-a92b-46f6-952c-30598e02bad4-utilities\") pod \"certified-operators-x8qql\" (UID: \"27486823-a92b-46f6-952c-30598e02bad4\") " pod="openshift-marketplace/certified-operators-x8qql"
Nov 28 07:26:27 crc kubenswrapper[4946]: I1128 07:26:27.507810 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jc2tb\" (UniqueName: \"kubernetes.io/projected/27486823-a92b-46f6-952c-30598e02bad4-kube-api-access-jc2tb\") pod \"certified-operators-x8qql\" (UID: \"27486823-a92b-46f6-952c-30598e02bad4\") " pod="openshift-marketplace/certified-operators-x8qql"
Nov 28 07:26:27 crc kubenswrapper[4946]: I1128 07:26:27.711232 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x8qql"
Nov 28 07:26:28 crc kubenswrapper[4946]: I1128 07:26:28.155384 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x8qql"]
Nov 28 07:26:28 crc kubenswrapper[4946]: I1128 07:26:28.445183 4946 generic.go:334] "Generic (PLEG): container finished" podID="27486823-a92b-46f6-952c-30598e02bad4" containerID="bae223cb865530193a7824364abb3d2845e035547539bc9fb8ca3c9b96b01bf7" exitCode=0
Nov 28 07:26:28 crc kubenswrapper[4946]: I1128 07:26:28.445248 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x8qql" event={"ID":"27486823-a92b-46f6-952c-30598e02bad4","Type":"ContainerDied","Data":"bae223cb865530193a7824364abb3d2845e035547539bc9fb8ca3c9b96b01bf7"}
Nov 28 07:26:28 crc kubenswrapper[4946]: I1128 07:26:28.445629 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x8qql" event={"ID":"27486823-a92b-46f6-952c-30598e02bad4","Type":"ContainerStarted","Data":"e169e0d579d709e08cfaa3abd51c5ee1592ec3c79817740c04add1036c6eb1df"}
Nov 28 07:26:29 crc kubenswrapper[4946]: I1128 07:26:29.456540 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x8qql" event={"ID":"27486823-a92b-46f6-952c-30598e02bad4","Type":"ContainerStarted","Data":"f3707b7ad8a85ec37d200e43d0db952a71c625a1fcbda327b9d37d78b7a9404e"}
Nov 28 07:26:30 crc kubenswrapper[4946]: I1128 07:26:30.469369 4946 generic.go:334] "Generic (PLEG): container finished" podID="27486823-a92b-46f6-952c-30598e02bad4" containerID="f3707b7ad8a85ec37d200e43d0db952a71c625a1fcbda327b9d37d78b7a9404e" exitCode=0
Nov 28 07:26:30 crc kubenswrapper[4946]: I1128 07:26:30.469507 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x8qql" event={"ID":"27486823-a92b-46f6-952c-30598e02bad4","Type":"ContainerDied","Data":"f3707b7ad8a85ec37d200e43d0db952a71c625a1fcbda327b9d37d78b7a9404e"}
Nov 28 07:26:30 crc kubenswrapper[4946]: I1128 07:26:30.540571 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-r29k2"]
Nov 28 07:26:30 crc kubenswrapper[4946]: I1128 07:26:30.542951 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r29k2"
Nov 28 07:26:30 crc kubenswrapper[4946]: I1128 07:26:30.576414 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r29k2"]
Nov 28 07:26:30 crc kubenswrapper[4946]: I1128 07:26:30.622608 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70ea4caf-bf96-4c2c-9f55-7cba786de89e-catalog-content\") pod \"redhat-operators-r29k2\" (UID: \"70ea4caf-bf96-4c2c-9f55-7cba786de89e\") " pod="openshift-marketplace/redhat-operators-r29k2"
Nov 28 07:26:30 crc kubenswrapper[4946]: I1128 07:26:30.622811 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70ea4caf-bf96-4c2c-9f55-7cba786de89e-utilities\") pod \"redhat-operators-r29k2\" (UID: \"70ea4caf-bf96-4c2c-9f55-7cba786de89e\") " pod="openshift-marketplace/redhat-operators-r29k2"
Nov 28 07:26:30 crc kubenswrapper[4946]: I1128 07:26:30.622959 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njpfs\" (UniqueName: \"kubernetes.io/projected/70ea4caf-bf96-4c2c-9f55-7cba786de89e-kube-api-access-njpfs\") pod \"redhat-operators-r29k2\" (UID: \"70ea4caf-bf96-4c2c-9f55-7cba786de89e\") " pod="openshift-marketplace/redhat-operators-r29k2"
Nov 28 07:26:30 crc kubenswrapper[4946]: I1128 07:26:30.724634 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njpfs\" (UniqueName: \"kubernetes.io/projected/70ea4caf-bf96-4c2c-9f55-7cba786de89e-kube-api-access-njpfs\") pod \"redhat-operators-r29k2\" (UID: \"70ea4caf-bf96-4c2c-9f55-7cba786de89e\") " pod="openshift-marketplace/redhat-operators-r29k2"
Nov 28 07:26:30 crc kubenswrapper[4946]: I1128 07:26:30.724718 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70ea4caf-bf96-4c2c-9f55-7cba786de89e-catalog-content\") pod \"redhat-operators-r29k2\" (UID: \"70ea4caf-bf96-4c2c-9f55-7cba786de89e\") " pod="openshift-marketplace/redhat-operators-r29k2"
Nov 28 07:26:30 crc kubenswrapper[4946]: I1128 07:26:30.724789 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70ea4caf-bf96-4c2c-9f55-7cba786de89e-utilities\") pod \"redhat-operators-r29k2\" (UID: \"70ea4caf-bf96-4c2c-9f55-7cba786de89e\") " pod="openshift-marketplace/redhat-operators-r29k2"
Nov 28 07:26:30 crc kubenswrapper[4946]: I1128 07:26:30.725303 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70ea4caf-bf96-4c2c-9f55-7cba786de89e-utilities\") pod \"redhat-operators-r29k2\" (UID: \"70ea4caf-bf96-4c2c-9f55-7cba786de89e\") " pod="openshift-marketplace/redhat-operators-r29k2"
Nov 28 07:26:30 crc kubenswrapper[4946]: I1128 07:26:30.725392 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70ea4caf-bf96-4c2c-9f55-7cba786de89e-catalog-content\") pod \"redhat-operators-r29k2\" (UID: \"70ea4caf-bf96-4c2c-9f55-7cba786de89e\") " pod="openshift-marketplace/redhat-operators-r29k2"
Nov 28 07:26:30 crc kubenswrapper[4946]: I1128 07:26:30.748487 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njpfs\" (UniqueName: \"kubernetes.io/projected/70ea4caf-bf96-4c2c-9f55-7cba786de89e-kube-api-access-njpfs\") pod \"redhat-operators-r29k2\" (UID: \"70ea4caf-bf96-4c2c-9f55-7cba786de89e\") " pod="openshift-marketplace/redhat-operators-r29k2"
Nov 28 07:26:30 crc kubenswrapper[4946]: I1128 07:26:30.875090 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r29k2"
Nov 28 07:26:31 crc kubenswrapper[4946]: I1128 07:26:31.378096 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r29k2"]
Nov 28 07:26:31 crc kubenswrapper[4946]: I1128 07:26:31.479044 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x8qql" event={"ID":"27486823-a92b-46f6-952c-30598e02bad4","Type":"ContainerStarted","Data":"1563b880717209c5ca36d66eeb1eab35ab93d1ff192c11001542cacb28d6449a"}
Nov 28 07:26:31 crc kubenswrapper[4946]: I1128 07:26:31.481540 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r29k2" event={"ID":"70ea4caf-bf96-4c2c-9f55-7cba786de89e","Type":"ContainerStarted","Data":"09b08b79bf0ae09300b9694d51c65f74ad87e045e4afc8443ed302360360e2e6"}
Nov 28 07:26:31 crc kubenswrapper[4946]: I1128 07:26:31.498857 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-x8qql" podStartSLOduration=2.046999786 podStartE2EDuration="4.498839801s" podCreationTimestamp="2025-11-28 07:26:27 +0000 UTC" firstStartedPulling="2025-11-28 07:26:28.448527568 +0000 UTC m=+2042.826592689" lastFinishedPulling="2025-11-28 07:26:30.900367573 +0000 UTC m=+2045.278432704" observedRunningTime="2025-11-28 07:26:31.495227052 +0000 UTC m=+2045.873292173" watchObservedRunningTime="2025-11-28 07:26:31.498839801 +0000 UTC m=+2045.876904922"
Nov 28 07:26:32 crc kubenswrapper[4946]: I1128 07:26:32.489226 4946 generic.go:334] "Generic (PLEG): container finished" podID="70ea4caf-bf96-4c2c-9f55-7cba786de89e" containerID="f53dcdc37cf6cad9e07bd3bc81ee4be8c87a951e69eafa10a7da015d5f7e9b31" exitCode=0
Nov 28 07:26:32 crc kubenswrapper[4946]: I1128 07:26:32.489340 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r29k2" event={"ID":"70ea4caf-bf96-4c2c-9f55-7cba786de89e","Type":"ContainerDied","Data":"f53dcdc37cf6cad9e07bd3bc81ee4be8c87a951e69eafa10a7da015d5f7e9b31"}
Nov 28 07:26:32 crc kubenswrapper[4946]: I1128 07:26:32.531963 4946 scope.go:117] "RemoveContainer" containerID="b85b412d05146dbed3aa695cab3bb9464a18b9f516b6b0592e0bcd619a892c43"
Nov 28 07:26:32 crc kubenswrapper[4946]: I1128 07:26:32.563168 4946 scope.go:117] "RemoveContainer" containerID="f54d7b1d7eeb5228437fef5261e64be91c65fc15631465cf2866a020ad146692"
Nov 28 07:26:32 crc kubenswrapper[4946]: I1128 07:26:32.599689 4946 scope.go:117] "RemoveContainer" containerID="a833a51c90092a7c66ad1c8d79f182f5d4ed7bcf164f9e13778a6a030a938bf7"
Nov 28 07:26:32 crc kubenswrapper[4946]: I1128 07:26:32.637171 4946 scope.go:117] "RemoveContainer" containerID="aae00d1916fbbab97dcf9cfdea6a8d9c1fa2821f3aa4344ef7c74ebbc52befe1"
Nov 28 07:26:32 crc kubenswrapper[4946]: I1128 07:26:32.665195 4946 scope.go:117] "RemoveContainer" containerID="da9335986b308565d71e74cb7101aedf1862f35b9597f2c00d9b2911df63b62a"
Nov 28 07:26:32 crc kubenswrapper[4946]: I1128 07:26:32.687418 4946 scope.go:117] "RemoveContainer" containerID="5680ae2c5d4972fb1ab727edfb43325e6ccc9add15115250e4af13fb02631c0a"
Nov 28 07:26:33 crc kubenswrapper[4946]: I1128 07:26:33.500155 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r29k2" event={"ID":"70ea4caf-bf96-4c2c-9f55-7cba786de89e","Type":"ContainerStarted","Data":"505a05465693a8d09105bb04e5daa6163c85f7718c1b80492947bdc7d7928fb7"}
Nov 28 07:26:34 crc kubenswrapper[4946]: I1128 07:26:34.513050 4946 generic.go:334] "Generic (PLEG): container finished" podID="70ea4caf-bf96-4c2c-9f55-7cba786de89e" containerID="505a05465693a8d09105bb04e5daa6163c85f7718c1b80492947bdc7d7928fb7" exitCode=0
Nov 28 07:26:34 crc kubenswrapper[4946]: I1128 07:26:34.513171 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r29k2" event={"ID":"70ea4caf-bf96-4c2c-9f55-7cba786de89e","Type":"ContainerDied","Data":"505a05465693a8d09105bb04e5daa6163c85f7718c1b80492947bdc7d7928fb7"}
Nov 28 07:26:36 crc kubenswrapper[4946]: I1128 07:26:36.537250 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r29k2" event={"ID":"70ea4caf-bf96-4c2c-9f55-7cba786de89e","Type":"ContainerStarted","Data":"86cfbe892c7b4ca117fddc411f0134831e42e1de9bba50882a5e1ca7fc2d1db6"}
Nov 28 07:26:36 crc kubenswrapper[4946]: I1128 07:26:36.563129 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-r29k2" podStartSLOduration=3.728425483 podStartE2EDuration="6.563104528s" podCreationTimestamp="2025-11-28 07:26:30 +0000 UTC" firstStartedPulling="2025-11-28 07:26:32.491642152 +0000 UTC m=+2046.869707293" lastFinishedPulling="2025-11-28 07:26:35.326321187 +0000 UTC m=+2049.704386338" observedRunningTime="2025-11-28 07:26:36.555980412 +0000 UTC m=+2050.934045533" watchObservedRunningTime="2025-11-28 07:26:36.563104528 +0000 UTC m=+2050.941169649"
Nov 28 07:26:37 crc kubenswrapper[4946]: I1128 07:26:37.712194 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-x8qql"
Nov 28 07:26:37 crc kubenswrapper[4946]: I1128 07:26:37.712526 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-x8qql"
Nov 28 07:26:37 crc kubenswrapper[4946]: I1128 07:26:37.763993 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-x8qql"
Nov 28 07:26:38 crc kubenswrapper[4946]: I1128 07:26:38.619258 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-x8qql"
Nov 28 07:26:40 crc kubenswrapper[4946]: I1128 07:26:40.875750 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-r29k2"
Nov 28 07:26:40 crc kubenswrapper[4946]: I1128 07:26:40.876748 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-r29k2"
Nov 28 07:26:41 crc kubenswrapper[4946]: I1128 07:26:41.936170 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-r29k2" podUID="70ea4caf-bf96-4c2c-9f55-7cba786de89e" containerName="registry-server" probeResult="failure" output=<
Nov 28 07:26:41 crc kubenswrapper[4946]: timeout: failed to connect service ":50051" within 1s
Nov 28 07:26:41 crc kubenswrapper[4946]: >
Nov 28 07:26:42 crc kubenswrapper[4946]: I1128 07:26:42.118361 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x8qql"]
Nov 28 07:26:42 crc kubenswrapper[4946]: I1128 07:26:42.119141 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-x8qql" podUID="27486823-a92b-46f6-952c-30598e02bad4" containerName="registry-server" containerID="cri-o://1563b880717209c5ca36d66eeb1eab35ab93d1ff192c11001542cacb28d6449a" gracePeriod=2
Nov 28 07:26:43 crc kubenswrapper[4946]: I1128 07:26:43.602840 4946 generic.go:334] "Generic (PLEG): container finished" podID="27486823-a92b-46f6-952c-30598e02bad4" containerID="1563b880717209c5ca36d66eeb1eab35ab93d1ff192c11001542cacb28d6449a" exitCode=0
Nov 28 07:26:43 crc kubenswrapper[4946]: I1128 07:26:43.603281 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x8qql" event={"ID":"27486823-a92b-46f6-952c-30598e02bad4","Type":"ContainerDied","Data":"1563b880717209c5ca36d66eeb1eab35ab93d1ff192c11001542cacb28d6449a"}
Nov 28 07:26:43 crc kubenswrapper[4946]: I1128 07:26:43.707137 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x8qql"
Nov 28 07:26:43 crc kubenswrapper[4946]: I1128 07:26:43.724013 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27486823-a92b-46f6-952c-30598e02bad4-catalog-content\") pod \"27486823-a92b-46f6-952c-30598e02bad4\" (UID: \"27486823-a92b-46f6-952c-30598e02bad4\") "
Nov 28 07:26:43 crc kubenswrapper[4946]: I1128 07:26:43.724083 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27486823-a92b-46f6-952c-30598e02bad4-utilities\") pod \"27486823-a92b-46f6-952c-30598e02bad4\" (UID: \"27486823-a92b-46f6-952c-30598e02bad4\") "
Nov 28 07:26:43 crc kubenswrapper[4946]: I1128 07:26:43.724283 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jc2tb\" (UniqueName: \"kubernetes.io/projected/27486823-a92b-46f6-952c-30598e02bad4-kube-api-access-jc2tb\") pod \"27486823-a92b-46f6-952c-30598e02bad4\" (UID: \"27486823-a92b-46f6-952c-30598e02bad4\") "
Nov 28 07:26:43 crc kubenswrapper[4946]: I1128 07:26:43.724924 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27486823-a92b-46f6-952c-30598e02bad4-utilities" (OuterVolumeSpecName: "utilities") pod "27486823-a92b-46f6-952c-30598e02bad4" (UID: "27486823-a92b-46f6-952c-30598e02bad4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 07:26:43 crc kubenswrapper[4946]: I1128 07:26:43.741376 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27486823-a92b-46f6-952c-30598e02bad4-kube-api-access-jc2tb" (OuterVolumeSpecName: "kube-api-access-jc2tb") pod "27486823-a92b-46f6-952c-30598e02bad4" (UID: "27486823-a92b-46f6-952c-30598e02bad4"). InnerVolumeSpecName "kube-api-access-jc2tb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 07:26:43 crc kubenswrapper[4946]: I1128 07:26:43.777253 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27486823-a92b-46f6-952c-30598e02bad4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "27486823-a92b-46f6-952c-30598e02bad4" (UID: "27486823-a92b-46f6-952c-30598e02bad4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 07:26:43 crc kubenswrapper[4946]: I1128 07:26:43.826296 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jc2tb\" (UniqueName: \"kubernetes.io/projected/27486823-a92b-46f6-952c-30598e02bad4-kube-api-access-jc2tb\") on node \"crc\" DevicePath \"\""
Nov 28 07:26:43 crc kubenswrapper[4946]: I1128 07:26:43.826342 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27486823-a92b-46f6-952c-30598e02bad4-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 07:26:43 crc kubenswrapper[4946]: I1128 07:26:43.826351 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27486823-a92b-46f6-952c-30598e02bad4-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 07:26:44 crc kubenswrapper[4946]: I1128 07:26:44.613856 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x8qql" event={"ID":"27486823-a92b-46f6-952c-30598e02bad4","Type":"ContainerDied","Data":"e169e0d579d709e08cfaa3abd51c5ee1592ec3c79817740c04add1036c6eb1df"}
Nov 28 07:26:44 crc kubenswrapper[4946]: I1128 07:26:44.614114 4946 scope.go:117] "RemoveContainer" containerID="1563b880717209c5ca36d66eeb1eab35ab93d1ff192c11001542cacb28d6449a"
Nov 28 07:26:44 crc kubenswrapper[4946]: I1128 07:26:44.613925 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x8qql"
Nov 28 07:26:44 crc kubenswrapper[4946]: I1128 07:26:44.638559 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x8qql"]
Nov 28 07:26:44 crc kubenswrapper[4946]: I1128 07:26:44.640160 4946 scope.go:117] "RemoveContainer" containerID="f3707b7ad8a85ec37d200e43d0db952a71c625a1fcbda327b9d37d78b7a9404e"
Nov 28 07:26:44 crc kubenswrapper[4946]: I1128 07:26:44.646072 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-x8qql"]
Nov 28 07:26:44 crc kubenswrapper[4946]: I1128 07:26:44.658781 4946 scope.go:117] "RemoveContainer" containerID="bae223cb865530193a7824364abb3d2845e035547539bc9fb8ca3c9b96b01bf7"
Nov 28 07:26:46 crc kubenswrapper[4946]: I1128 07:26:46.009900 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27486823-a92b-46f6-952c-30598e02bad4" path="/var/lib/kubelet/pods/27486823-a92b-46f6-952c-30598e02bad4/volumes"
Nov 28 07:26:50 crc kubenswrapper[4946]: I1128 07:26:50.937125 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-r29k2"
Nov 28 07:26:51 crc kubenswrapper[4946]: I1128 07:26:51.005947 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-r29k2"
Nov 28 07:26:51 crc kubenswrapper[4946]: I1128 07:26:51.185797 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r29k2"]
Nov 28 07:26:52 crc kubenswrapper[4946]: I1128 07:26:52.684742 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-r29k2" podUID="70ea4caf-bf96-4c2c-9f55-7cba786de89e" containerName="registry-server" containerID="cri-o://86cfbe892c7b4ca117fddc411f0134831e42e1de9bba50882a5e1ca7fc2d1db6" gracePeriod=2
Nov 28 07:26:53 crc kubenswrapper[4946]: I1128 07:26:53.139891 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r29k2"
Nov 28 07:26:53 crc kubenswrapper[4946]: I1128 07:26:53.276627 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njpfs\" (UniqueName: \"kubernetes.io/projected/70ea4caf-bf96-4c2c-9f55-7cba786de89e-kube-api-access-njpfs\") pod \"70ea4caf-bf96-4c2c-9f55-7cba786de89e\" (UID: \"70ea4caf-bf96-4c2c-9f55-7cba786de89e\") "
Nov 28 07:26:53 crc kubenswrapper[4946]: I1128 07:26:53.276776 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70ea4caf-bf96-4c2c-9f55-7cba786de89e-utilities\") pod \"70ea4caf-bf96-4c2c-9f55-7cba786de89e\" (UID: \"70ea4caf-bf96-4c2c-9f55-7cba786de89e\") "
Nov 28 07:26:53 crc kubenswrapper[4946]: I1128 07:26:53.276819 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70ea4caf-bf96-4c2c-9f55-7cba786de89e-catalog-content\") pod \"70ea4caf-bf96-4c2c-9f55-7cba786de89e\" (UID: \"70ea4caf-bf96-4c2c-9f55-7cba786de89e\") "
Nov 28 07:26:53 crc kubenswrapper[4946]: I1128 07:26:53.277779 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70ea4caf-bf96-4c2c-9f55-7cba786de89e-utilities" (OuterVolumeSpecName: "utilities") pod "70ea4caf-bf96-4c2c-9f55-7cba786de89e" (UID: "70ea4caf-bf96-4c2c-9f55-7cba786de89e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 07:26:53 crc kubenswrapper[4946]: I1128 07:26:53.284706 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70ea4caf-bf96-4c2c-9f55-7cba786de89e-kube-api-access-njpfs" (OuterVolumeSpecName: "kube-api-access-njpfs") pod "70ea4caf-bf96-4c2c-9f55-7cba786de89e" (UID: "70ea4caf-bf96-4c2c-9f55-7cba786de89e"). InnerVolumeSpecName "kube-api-access-njpfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 07:26:53 crc kubenswrapper[4946]: I1128 07:26:53.380348 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njpfs\" (UniqueName: \"kubernetes.io/projected/70ea4caf-bf96-4c2c-9f55-7cba786de89e-kube-api-access-njpfs\") on node \"crc\" DevicePath \"\""
Nov 28 07:26:53 crc kubenswrapper[4946]: I1128 07:26:53.380394 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70ea4caf-bf96-4c2c-9f55-7cba786de89e-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 07:26:53 crc kubenswrapper[4946]: I1128 07:26:53.442917 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70ea4caf-bf96-4c2c-9f55-7cba786de89e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "70ea4caf-bf96-4c2c-9f55-7cba786de89e" (UID: "70ea4caf-bf96-4c2c-9f55-7cba786de89e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 07:26:53 crc kubenswrapper[4946]: I1128 07:26:53.482359 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70ea4caf-bf96-4c2c-9f55-7cba786de89e-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 07:26:53 crc kubenswrapper[4946]: I1128 07:26:53.702551 4946 generic.go:334] "Generic (PLEG): container finished" podID="70ea4caf-bf96-4c2c-9f55-7cba786de89e" containerID="86cfbe892c7b4ca117fddc411f0134831e42e1de9bba50882a5e1ca7fc2d1db6" exitCode=0
Nov 28 07:26:53 crc kubenswrapper[4946]: I1128 07:26:53.702829 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r29k2" event={"ID":"70ea4caf-bf96-4c2c-9f55-7cba786de89e","Type":"ContainerDied","Data":"86cfbe892c7b4ca117fddc411f0134831e42e1de9bba50882a5e1ca7fc2d1db6"}
Nov 28 07:26:53 crc kubenswrapper[4946]: I1128 07:26:53.703758 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r29k2" event={"ID":"70ea4caf-bf96-4c2c-9f55-7cba786de89e","Type":"ContainerDied","Data":"09b08b79bf0ae09300b9694d51c65f74ad87e045e4afc8443ed302360360e2e6"}
Nov 28 07:26:53 crc kubenswrapper[4946]: I1128 07:26:53.703817 4946 scope.go:117] "RemoveContainer" containerID="86cfbe892c7b4ca117fddc411f0134831e42e1de9bba50882a5e1ca7fc2d1db6"
Nov 28 07:26:53 crc kubenswrapper[4946]: I1128 07:26:53.702969 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r29k2"
Nov 28 07:26:53 crc kubenswrapper[4946]: I1128 07:26:53.727706 4946 scope.go:117] "RemoveContainer" containerID="505a05465693a8d09105bb04e5daa6163c85f7718c1b80492947bdc7d7928fb7"
Nov 28 07:26:53 crc kubenswrapper[4946]: I1128 07:26:53.740770 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r29k2"]
Nov 28 07:26:53 crc kubenswrapper[4946]: I1128 07:26:53.747945 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-r29k2"]
Nov 28 07:26:53 crc kubenswrapper[4946]: I1128 07:26:53.774984 4946 scope.go:117] "RemoveContainer" containerID="f53dcdc37cf6cad9e07bd3bc81ee4be8c87a951e69eafa10a7da015d5f7e9b31"
Nov 28 07:26:53 crc kubenswrapper[4946]: I1128 07:26:53.797364 4946 scope.go:117] "RemoveContainer" containerID="86cfbe892c7b4ca117fddc411f0134831e42e1de9bba50882a5e1ca7fc2d1db6"
Nov 28 07:26:53 crc kubenswrapper[4946]: E1128 07:26:53.798097 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86cfbe892c7b4ca117fddc411f0134831e42e1de9bba50882a5e1ca7fc2d1db6\": container with ID starting with 86cfbe892c7b4ca117fddc411f0134831e42e1de9bba50882a5e1ca7fc2d1db6 not found: ID does not exist" containerID="86cfbe892c7b4ca117fddc411f0134831e42e1de9bba50882a5e1ca7fc2d1db6"
Nov 28 07:26:53 crc kubenswrapper[4946]: I1128 07:26:53.798156 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86cfbe892c7b4ca117fddc411f0134831e42e1de9bba50882a5e1ca7fc2d1db6"} err="failed to get container status \"86cfbe892c7b4ca117fddc411f0134831e42e1de9bba50882a5e1ca7fc2d1db6\": rpc error: code = NotFound desc = could not find container \"86cfbe892c7b4ca117fddc411f0134831e42e1de9bba50882a5e1ca7fc2d1db6\": container with ID starting with 86cfbe892c7b4ca117fddc411f0134831e42e1de9bba50882a5e1ca7fc2d1db6 not found: ID does not exist"
Nov 28 07:26:53 crc kubenswrapper[4946]: I1128 07:26:53.798193 4946 scope.go:117] "RemoveContainer" containerID="505a05465693a8d09105bb04e5daa6163c85f7718c1b80492947bdc7d7928fb7"
Nov 28 07:26:53 crc kubenswrapper[4946]: E1128 07:26:53.799139 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"505a05465693a8d09105bb04e5daa6163c85f7718c1b80492947bdc7d7928fb7\": container with ID starting with 505a05465693a8d09105bb04e5daa6163c85f7718c1b80492947bdc7d7928fb7 not found: ID does not exist" containerID="505a05465693a8d09105bb04e5daa6163c85f7718c1b80492947bdc7d7928fb7"
Nov 28 07:26:53 crc kubenswrapper[4946]: I1128 07:26:53.799200 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"505a05465693a8d09105bb04e5daa6163c85f7718c1b80492947bdc7d7928fb7"} err="failed to get container status \"505a05465693a8d09105bb04e5daa6163c85f7718c1b80492947bdc7d7928fb7\": rpc error: code = NotFound desc = could not find container \"505a05465693a8d09105bb04e5daa6163c85f7718c1b80492947bdc7d7928fb7\": container with ID starting with 505a05465693a8d09105bb04e5daa6163c85f7718c1b80492947bdc7d7928fb7 not found: ID does not exist"
Nov 28 07:26:53 crc kubenswrapper[4946]: I1128 07:26:53.799247 4946 scope.go:117] "RemoveContainer" containerID="f53dcdc37cf6cad9e07bd3bc81ee4be8c87a951e69eafa10a7da015d5f7e9b31"
Nov 28 07:26:53 crc kubenswrapper[4946]: E1128 07:26:53.799945 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f53dcdc37cf6cad9e07bd3bc81ee4be8c87a951e69eafa10a7da015d5f7e9b31\": container with ID starting with f53dcdc37cf6cad9e07bd3bc81ee4be8c87a951e69eafa10a7da015d5f7e9b31 not found: ID does not exist" containerID="f53dcdc37cf6cad9e07bd3bc81ee4be8c87a951e69eafa10a7da015d5f7e9b31"
Nov 28 07:26:53 crc kubenswrapper[4946]: I1128 07:26:53.799979 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f53dcdc37cf6cad9e07bd3bc81ee4be8c87a951e69eafa10a7da015d5f7e9b31"} err="failed to get container status \"f53dcdc37cf6cad9e07bd3bc81ee4be8c87a951e69eafa10a7da015d5f7e9b31\": rpc error: code = NotFound desc = could not find container \"f53dcdc37cf6cad9e07bd3bc81ee4be8c87a951e69eafa10a7da015d5f7e9b31\": container with ID starting with f53dcdc37cf6cad9e07bd3bc81ee4be8c87a951e69eafa10a7da015d5f7e9b31 not found: ID does not exist"
Nov 28 07:26:54 crc kubenswrapper[4946]: I1128 07:26:54.009394 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70ea4caf-bf96-4c2c-9f55-7cba786de89e" path="/var/lib/kubelet/pods/70ea4caf-bf96-4c2c-9f55-7cba786de89e/volumes"
Nov 28 07:27:24 crc kubenswrapper[4946]: I1128 07:27:24.731423 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 07:27:24 crc kubenswrapper[4946]: I1128 07:27:24.732176 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 07:27:54 crc kubenswrapper[4946]: I1128 07:27:54.731079 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 07:27:54 crc kubenswrapper[4946]: I1128 07:27:54.731782 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 07:28:24 crc kubenswrapper[4946]: I1128 07:28:24.730787 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 07:28:24 crc kubenswrapper[4946]: I1128 07:28:24.731239 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 07:28:24 crc kubenswrapper[4946]: I1128 07:28:24.731290 4946 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr"
Nov 28 07:28:24 crc kubenswrapper[4946]: I1128 07:28:24.732070 4946 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5c6bc18cf8e1fc0e610966a3fd2dcb94036d907b5319dc2c88288f5b8ad7bb25"} pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 28 07:28:24 crc kubenswrapper[4946]: I1128 07:28:24.732135 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" containerID="cri-o://5c6bc18cf8e1fc0e610966a3fd2dcb94036d907b5319dc2c88288f5b8ad7bb25" gracePeriod=600
Nov 28 07:28:25 crc kubenswrapper[4946]: I1128 07:28:25.605039 4946 generic.go:334] "Generic (PLEG): container finished" podID="7450befc-262f-45d1-a5f4-f445e540185b" containerID="5c6bc18cf8e1fc0e610966a3fd2dcb94036d907b5319dc2c88288f5b8ad7bb25" exitCode=0
Nov 28 07:28:25 crc kubenswrapper[4946]: I1128 07:28:25.605283 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerDied","Data":"5c6bc18cf8e1fc0e610966a3fd2dcb94036d907b5319dc2c88288f5b8ad7bb25"}
Nov 28 07:28:25 crc kubenswrapper[4946]: I1128 07:28:25.605824 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerStarted","Data":"b07cedbddf6e03e4563f2465ebd8f57c47f2f3822962ec91b284bcc0b7624a92"}
Nov 28 07:28:25 crc kubenswrapper[4946]: I1128 07:28:25.605858 4946 scope.go:117] "RemoveContainer" containerID="589c494ccdc76b2885f05d8ee6cc9d6370d1839c40f46f9651cf6d69eb740365"
Nov 28 07:29:34
crc kubenswrapper[4946]: I1128 07:29:34.807906 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vmq75"] Nov 28 07:29:34 crc kubenswrapper[4946]: E1128 07:29:34.811958 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27486823-a92b-46f6-952c-30598e02bad4" containerName="registry-server" Nov 28 07:29:34 crc kubenswrapper[4946]: I1128 07:29:34.811985 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="27486823-a92b-46f6-952c-30598e02bad4" containerName="registry-server" Nov 28 07:29:34 crc kubenswrapper[4946]: E1128 07:29:34.812005 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70ea4caf-bf96-4c2c-9f55-7cba786de89e" containerName="extract-content" Nov 28 07:29:34 crc kubenswrapper[4946]: I1128 07:29:34.812016 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="70ea4caf-bf96-4c2c-9f55-7cba786de89e" containerName="extract-content" Nov 28 07:29:34 crc kubenswrapper[4946]: E1128 07:29:34.812043 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70ea4caf-bf96-4c2c-9f55-7cba786de89e" containerName="registry-server" Nov 28 07:29:34 crc kubenswrapper[4946]: I1128 07:29:34.812057 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="70ea4caf-bf96-4c2c-9f55-7cba786de89e" containerName="registry-server" Nov 28 07:29:34 crc kubenswrapper[4946]: E1128 07:29:34.812084 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27486823-a92b-46f6-952c-30598e02bad4" containerName="extract-content" Nov 28 07:29:34 crc kubenswrapper[4946]: I1128 07:29:34.812094 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="27486823-a92b-46f6-952c-30598e02bad4" containerName="extract-content" Nov 28 07:29:34 crc kubenswrapper[4946]: E1128 07:29:34.812110 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27486823-a92b-46f6-952c-30598e02bad4" containerName="extract-utilities" Nov 28 07:29:34 crc kubenswrapper[4946]: I1128 07:29:34.812118 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="27486823-a92b-46f6-952c-30598e02bad4" containerName="extract-utilities" Nov 28 07:29:34 crc kubenswrapper[4946]: E1128 07:29:34.812131 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70ea4caf-bf96-4c2c-9f55-7cba786de89e" containerName="extract-utilities" Nov 28 07:29:34 crc kubenswrapper[4946]: I1128 07:29:34.812139 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="70ea4caf-bf96-4c2c-9f55-7cba786de89e" containerName="extract-utilities" Nov 28 07:29:34 crc kubenswrapper[4946]: I1128 07:29:34.812308 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="70ea4caf-bf96-4c2c-9f55-7cba786de89e" containerName="registry-server" Nov 28 07:29:34 crc kubenswrapper[4946]: I1128 07:29:34.812331 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="27486823-a92b-46f6-952c-30598e02bad4" containerName="registry-server" Nov 28 07:29:34 crc kubenswrapper[4946]: I1128 07:29:34.813674 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vmq75" Nov 28 07:29:34 crc kubenswrapper[4946]: I1128 07:29:34.829087 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vmq75"] Nov 28 07:29:34 crc kubenswrapper[4946]: I1128 07:29:34.886956 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bdgf\" (UniqueName: \"kubernetes.io/projected/b60323d9-7fd2-4044-91c3-bd898a71f2af-kube-api-access-7bdgf\") pod \"redhat-marketplace-vmq75\" (UID: \"b60323d9-7fd2-4044-91c3-bd898a71f2af\") " pod="openshift-marketplace/redhat-marketplace-vmq75" Nov 28 07:29:34 crc kubenswrapper[4946]: I1128 07:29:34.887014 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b60323d9-7fd2-4044-91c3-bd898a71f2af-utilities\") pod \"redhat-marketplace-vmq75\" (UID: \"b60323d9-7fd2-4044-91c3-bd898a71f2af\") " pod="openshift-marketplace/redhat-marketplace-vmq75" Nov 28 07:29:34 crc kubenswrapper[4946]: I1128 07:29:34.887049 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b60323d9-7fd2-4044-91c3-bd898a71f2af-catalog-content\") pod \"redhat-marketplace-vmq75\" (UID: \"b60323d9-7fd2-4044-91c3-bd898a71f2af\") " pod="openshift-marketplace/redhat-marketplace-vmq75" Nov 28 07:29:34 crc kubenswrapper[4946]: I1128 07:29:34.988491 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bdgf\" (UniqueName: \"kubernetes.io/projected/b60323d9-7fd2-4044-91c3-bd898a71f2af-kube-api-access-7bdgf\") pod \"redhat-marketplace-vmq75\" (UID: \"b60323d9-7fd2-4044-91c3-bd898a71f2af\") " pod="openshift-marketplace/redhat-marketplace-vmq75" Nov 28 07:29:34 crc kubenswrapper[4946]: I1128 07:29:34.988776 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b60323d9-7fd2-4044-91c3-bd898a71f2af-utilities\") pod \"redhat-marketplace-vmq75\" (UID: \"b60323d9-7fd2-4044-91c3-bd898a71f2af\") " pod="openshift-marketplace/redhat-marketplace-vmq75" Nov 28 07:29:34 crc kubenswrapper[4946]: I1128 07:29:34.988860 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b60323d9-7fd2-4044-91c3-bd898a71f2af-catalog-content\") pod \"redhat-marketplace-vmq75\" (UID: \"b60323d9-7fd2-4044-91c3-bd898a71f2af\") " pod="openshift-marketplace/redhat-marketplace-vmq75" Nov 28 07:29:34 crc kubenswrapper[4946]: I1128 07:29:34.989591 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b60323d9-7fd2-4044-91c3-bd898a71f2af-utilities\") pod \"redhat-marketplace-vmq75\" (UID: \"b60323d9-7fd2-4044-91c3-bd898a71f2af\") " pod="openshift-marketplace/redhat-marketplace-vmq75" Nov 28 07:29:34 crc kubenswrapper[4946]: I1128 07:29:34.989638 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b60323d9-7fd2-4044-91c3-bd898a71f2af-catalog-content\") pod \"redhat-marketplace-vmq75\" (UID: \"b60323d9-7fd2-4044-91c3-bd898a71f2af\") " pod="openshift-marketplace/redhat-marketplace-vmq75" Nov 28 07:29:35 crc kubenswrapper[4946]: I1128 07:29:35.013410 4946 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-7bdgf\" (UniqueName: \"kubernetes.io/projected/b60323d9-7fd2-4044-91c3-bd898a71f2af-kube-api-access-7bdgf\") pod \"redhat-marketplace-vmq75\" (UID: \"b60323d9-7fd2-4044-91c3-bd898a71f2af\") " pod="openshift-marketplace/redhat-marketplace-vmq75" Nov 28 07:29:35 crc kubenswrapper[4946]: I1128 07:29:35.143560 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vmq75" Nov 28 07:29:35 crc kubenswrapper[4946]: I1128 07:29:35.632046 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vmq75"] Nov 28 07:29:36 crc kubenswrapper[4946]: I1128 07:29:36.483700 4946 generic.go:334] "Generic (PLEG): container finished" podID="b60323d9-7fd2-4044-91c3-bd898a71f2af" containerID="fc1353be27c80f8fcd0ee30be3631cd1e8eb49a54cd3a589a1357b99bd20293e" exitCode=0 Nov 28 07:29:36 crc kubenswrapper[4946]: I1128 07:29:36.484741 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vmq75" event={"ID":"b60323d9-7fd2-4044-91c3-bd898a71f2af","Type":"ContainerDied","Data":"fc1353be27c80f8fcd0ee30be3631cd1e8eb49a54cd3a589a1357b99bd20293e"} Nov 28 07:29:36 crc kubenswrapper[4946]: I1128 07:29:36.484847 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vmq75" event={"ID":"b60323d9-7fd2-4044-91c3-bd898a71f2af","Type":"ContainerStarted","Data":"48c9ac8c308460fe4821b85c947661e82dcbeecba53cc257307dc000deff5ae8"} Nov 28 07:29:36 crc kubenswrapper[4946]: I1128 07:29:36.486841 4946 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 07:29:37 crc kubenswrapper[4946]: I1128 07:29:37.501361 4946 generic.go:334] "Generic (PLEG): container finished" podID="b60323d9-7fd2-4044-91c3-bd898a71f2af" containerID="2f1ac0775a92c46e375d0881ec9cd3e1cee595c3c1e21a09368104ed76e776d2" exitCode=0 Nov 28 07:29:37 crc kubenswrapper[4946]: I1128 07:29:37.501690 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vmq75" event={"ID":"b60323d9-7fd2-4044-91c3-bd898a71f2af","Type":"ContainerDied","Data":"2f1ac0775a92c46e375d0881ec9cd3e1cee595c3c1e21a09368104ed76e776d2"} Nov 28 07:29:38 crc kubenswrapper[4946]: I1128 07:29:38.515301 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vmq75" event={"ID":"b60323d9-7fd2-4044-91c3-bd898a71f2af","Type":"ContainerStarted","Data":"806560e2609865e914b7f98d1ca15766c941ccfaa6f8be921d664b7579c68c6d"} Nov 28 07:29:38 crc kubenswrapper[4946]: I1128 07:29:38.549908 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vmq75" podStartSLOduration=3.131299953 podStartE2EDuration="4.549858271s" podCreationTimestamp="2025-11-28 07:29:34 +0000 UTC" firstStartedPulling="2025-11-28 07:29:36.486233897 +0000 UTC m=+2230.864299018" lastFinishedPulling="2025-11-28 07:29:37.904792175 +0000 UTC m=+2232.282857336" observedRunningTime="2025-11-28 07:29:38.542119909 +0000 UTC m=+2232.920185090" watchObservedRunningTime="2025-11-28 07:29:38.549858271 +0000 UTC m=+2232.927923392" Nov 28 07:29:45 crc kubenswrapper[4946]: I1128 07:29:45.144177 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vmq75" Nov 28 07:29:45 crc kubenswrapper[4946]: I1128 07:29:45.145007 4946 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vmq75" Nov 28 07:29:45 crc kubenswrapper[4946]: I1128 07:29:45.225644 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vmq75" Nov 28 07:29:45 crc kubenswrapper[4946]: I1128 07:29:45.636822 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vmq75" Nov 28 07:29:45 crc kubenswrapper[4946]: I1128 07:29:45.709505 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vmq75"] Nov 28 07:29:47 crc kubenswrapper[4946]: I1128 07:29:47.598828 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vmq75" podUID="b60323d9-7fd2-4044-91c3-bd898a71f2af" containerName="registry-server" containerID="cri-o://806560e2609865e914b7f98d1ca15766c941ccfaa6f8be921d664b7579c68c6d" gracePeriod=2 Nov 28 07:29:48 crc kubenswrapper[4946]: I1128 07:29:48.064503 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vmq75" Nov 28 07:29:48 crc kubenswrapper[4946]: I1128 07:29:48.100436 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b60323d9-7fd2-4044-91c3-bd898a71f2af-catalog-content\") pod \"b60323d9-7fd2-4044-91c3-bd898a71f2af\" (UID: \"b60323d9-7fd2-4044-91c3-bd898a71f2af\") " Nov 28 07:29:48 crc kubenswrapper[4946]: I1128 07:29:48.100576 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bdgf\" (UniqueName: \"kubernetes.io/projected/b60323d9-7fd2-4044-91c3-bd898a71f2af-kube-api-access-7bdgf\") pod \"b60323d9-7fd2-4044-91c3-bd898a71f2af\" (UID: \"b60323d9-7fd2-4044-91c3-bd898a71f2af\") " Nov 28 07:29:48 crc kubenswrapper[4946]: I1128 07:29:48.100619 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b60323d9-7fd2-4044-91c3-bd898a71f2af-utilities\") pod \"b60323d9-7fd2-4044-91c3-bd898a71f2af\" (UID: \"b60323d9-7fd2-4044-91c3-bd898a71f2af\") " Nov 28 07:29:48 crc kubenswrapper[4946]: I1128 07:29:48.102579 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b60323d9-7fd2-4044-91c3-bd898a71f2af-utilities" (OuterVolumeSpecName: "utilities") pod "b60323d9-7fd2-4044-91c3-bd898a71f2af" (UID: "b60323d9-7fd2-4044-91c3-bd898a71f2af"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:29:48 crc kubenswrapper[4946]: I1128 07:29:48.113012 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b60323d9-7fd2-4044-91c3-bd898a71f2af-kube-api-access-7bdgf" (OuterVolumeSpecName: "kube-api-access-7bdgf") pod "b60323d9-7fd2-4044-91c3-bd898a71f2af" (UID: "b60323d9-7fd2-4044-91c3-bd898a71f2af"). InnerVolumeSpecName "kube-api-access-7bdgf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:29:48 crc kubenswrapper[4946]: I1128 07:29:48.128282 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b60323d9-7fd2-4044-91c3-bd898a71f2af-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b60323d9-7fd2-4044-91c3-bd898a71f2af" (UID: "b60323d9-7fd2-4044-91c3-bd898a71f2af"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:29:48 crc kubenswrapper[4946]: I1128 07:29:48.202903 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bdgf\" (UniqueName: \"kubernetes.io/projected/b60323d9-7fd2-4044-91c3-bd898a71f2af-kube-api-access-7bdgf\") on node \"crc\" DevicePath \"\"" Nov 28 07:29:48 crc kubenswrapper[4946]: I1128 07:29:48.202944 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b60323d9-7fd2-4044-91c3-bd898a71f2af-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 07:29:48 crc kubenswrapper[4946]: I1128 07:29:48.202958 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b60323d9-7fd2-4044-91c3-bd898a71f2af-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 07:29:48 crc kubenswrapper[4946]: I1128 07:29:48.615590 4946 generic.go:334] "Generic (PLEG): container finished" podID="b60323d9-7fd2-4044-91c3-bd898a71f2af" containerID="806560e2609865e914b7f98d1ca15766c941ccfaa6f8be921d664b7579c68c6d" exitCode=0 Nov 28 07:29:48 crc kubenswrapper[4946]: I1128 07:29:48.615649 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vmq75" Nov 28 07:29:48 crc kubenswrapper[4946]: I1128 07:29:48.618219 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vmq75" event={"ID":"b60323d9-7fd2-4044-91c3-bd898a71f2af","Type":"ContainerDied","Data":"806560e2609865e914b7f98d1ca15766c941ccfaa6f8be921d664b7579c68c6d"} Nov 28 07:29:48 crc kubenswrapper[4946]: I1128 07:29:48.618311 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vmq75" event={"ID":"b60323d9-7fd2-4044-91c3-bd898a71f2af","Type":"ContainerDied","Data":"48c9ac8c308460fe4821b85c947661e82dcbeecba53cc257307dc000deff5ae8"} Nov 28 07:29:48 crc kubenswrapper[4946]: I1128 07:29:48.618354 4946 scope.go:117] "RemoveContainer" containerID="806560e2609865e914b7f98d1ca15766c941ccfaa6f8be921d664b7579c68c6d" Nov 28 07:29:48 crc kubenswrapper[4946]: I1128 07:29:48.669637 4946 scope.go:117] "RemoveContainer" containerID="2f1ac0775a92c46e375d0881ec9cd3e1cee595c3c1e21a09368104ed76e776d2" Nov 28 07:29:48 crc kubenswrapper[4946]: I1128 07:29:48.670622 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vmq75"] Nov 28 07:29:48 crc kubenswrapper[4946]: I1128 07:29:48.679802 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vmq75"] Nov 28 07:29:48 crc kubenswrapper[4946]: I1128 07:29:48.697838 4946 scope.go:117] "RemoveContainer" containerID="fc1353be27c80f8fcd0ee30be3631cd1e8eb49a54cd3a589a1357b99bd20293e" Nov 28 07:29:48 crc kubenswrapper[4946]: I1128 07:29:48.716022 4946 scope.go:117] "RemoveContainer" containerID="806560e2609865e914b7f98d1ca15766c941ccfaa6f8be921d664b7579c68c6d" Nov 28 07:29:48 crc kubenswrapper[4946]: E1128 07:29:48.716612 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"806560e2609865e914b7f98d1ca15766c941ccfaa6f8be921d664b7579c68c6d\": container with ID starting with 806560e2609865e914b7f98d1ca15766c941ccfaa6f8be921d664b7579c68c6d not found: ID does not exist" containerID="806560e2609865e914b7f98d1ca15766c941ccfaa6f8be921d664b7579c68c6d" Nov 28 07:29:48 crc kubenswrapper[4946]: I1128 07:29:48.716668 4946 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"806560e2609865e914b7f98d1ca15766c941ccfaa6f8be921d664b7579c68c6d"} err="failed to get container status \"806560e2609865e914b7f98d1ca15766c941ccfaa6f8be921d664b7579c68c6d\": rpc error: code = NotFound desc = could not find container \"806560e2609865e914b7f98d1ca15766c941ccfaa6f8be921d664b7579c68c6d\": container with ID starting with 806560e2609865e914b7f98d1ca15766c941ccfaa6f8be921d664b7579c68c6d not found: ID does not exist" Nov 28 07:29:48 crc kubenswrapper[4946]: I1128 07:29:48.716702 4946 scope.go:117] "RemoveContainer" containerID="2f1ac0775a92c46e375d0881ec9cd3e1cee595c3c1e21a09368104ed76e776d2" Nov 28 07:29:48 crc kubenswrapper[4946]: E1128 07:29:48.717039 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f1ac0775a92c46e375d0881ec9cd3e1cee595c3c1e21a09368104ed76e776d2\": container with ID starting with 2f1ac0775a92c46e375d0881ec9cd3e1cee595c3c1e21a09368104ed76e776d2 not found: ID does not exist" containerID="2f1ac0775a92c46e375d0881ec9cd3e1cee595c3c1e21a09368104ed76e776d2" Nov 28 07:29:48 crc kubenswrapper[4946]: I1128 07:29:48.717067 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f1ac0775a92c46e375d0881ec9cd3e1cee595c3c1e21a09368104ed76e776d2"} err="failed to get container status \"2f1ac0775a92c46e375d0881ec9cd3e1cee595c3c1e21a09368104ed76e776d2\": rpc error: code = NotFound desc = could not find container \"2f1ac0775a92c46e375d0881ec9cd3e1cee595c3c1e21a09368104ed76e776d2\": container with ID starting with 2f1ac0775a92c46e375d0881ec9cd3e1cee595c3c1e21a09368104ed76e776d2 not found: ID does not exist" Nov 28 07:29:48 crc kubenswrapper[4946]: I1128 07:29:48.717087 4946 scope.go:117] "RemoveContainer" containerID="fc1353be27c80f8fcd0ee30be3631cd1e8eb49a54cd3a589a1357b99bd20293e" Nov 28 07:29:48 crc kubenswrapper[4946]: E1128 07:29:48.717391 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc1353be27c80f8fcd0ee30be3631cd1e8eb49a54cd3a589a1357b99bd20293e\": container with ID starting with fc1353be27c80f8fcd0ee30be3631cd1e8eb49a54cd3a589a1357b99bd20293e not found: ID does not exist" containerID="fc1353be27c80f8fcd0ee30be3631cd1e8eb49a54cd3a589a1357b99bd20293e" Nov 28 07:29:48 crc kubenswrapper[4946]: I1128 07:29:48.717435 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc1353be27c80f8fcd0ee30be3631cd1e8eb49a54cd3a589a1357b99bd20293e"} err="failed to get container status \"fc1353be27c80f8fcd0ee30be3631cd1e8eb49a54cd3a589a1357b99bd20293e\": rpc error: code = NotFound desc = could not find container \"fc1353be27c80f8fcd0ee30be3631cd1e8eb49a54cd3a589a1357b99bd20293e\": container with ID starting with fc1353be27c80f8fcd0ee30be3631cd1e8eb49a54cd3a589a1357b99bd20293e not found: ID does not exist" Nov 28 07:29:50 crc kubenswrapper[4946]: I1128 07:29:50.002497 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b60323d9-7fd2-4044-91c3-bd898a71f2af" path="/var/lib/kubelet/pods/b60323d9-7fd2-4044-91c3-bd898a71f2af/volumes" Nov 28 07:30:00 crc kubenswrapper[4946]: I1128 07:30:00.174886 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405250-n9zfg"] Nov 28 07:30:00 crc kubenswrapper[4946]: E1128 07:30:00.176250 4946 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="b60323d9-7fd2-4044-91c3-bd898a71f2af" containerName="extract-utilities" Nov 28 07:30:00 crc kubenswrapper[4946]: I1128 07:30:00.176280 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="b60323d9-7fd2-4044-91c3-bd898a71f2af" containerName="extract-utilities" Nov 28 07:30:00 crc kubenswrapper[4946]: E1128 07:30:00.176305 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b60323d9-7fd2-4044-91c3-bd898a71f2af" containerName="extract-content" Nov 28 07:30:00 crc kubenswrapper[4946]: I1128 07:30:00.176318 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="b60323d9-7fd2-4044-91c3-bd898a71f2af" containerName="extract-content" Nov 28 07:30:00 crc kubenswrapper[4946]: E1128 07:30:00.176374 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b60323d9-7fd2-4044-91c3-bd898a71f2af" containerName="registry-server" Nov 28 07:30:00 crc kubenswrapper[4946]: I1128 07:30:00.176388 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="b60323d9-7fd2-4044-91c3-bd898a71f2af" containerName="registry-server" Nov 28 07:30:00 crc kubenswrapper[4946]: I1128 07:30:00.176694 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="b60323d9-7fd2-4044-91c3-bd898a71f2af" containerName="registry-server" Nov 28 07:30:00 crc kubenswrapper[4946]: I1128 07:30:00.177613 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405250-n9zfg" Nov 28 07:30:00 crc kubenswrapper[4946]: I1128 07:30:00.184003 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 28 07:30:00 crc kubenswrapper[4946]: I1128 07:30:00.184055 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 28 07:30:00 crc kubenswrapper[4946]: I1128 07:30:00.187348 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405250-n9zfg"] Nov 28 07:30:00 crc kubenswrapper[4946]: I1128 07:30:00.335597 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9831846a-e0ea-4a91-9650-2e1de09bcf32-secret-volume\") pod \"collect-profiles-29405250-n9zfg\" (UID: \"9831846a-e0ea-4a91-9650-2e1de09bcf32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405250-n9zfg" Nov 28 07:30:00 crc kubenswrapper[4946]: I1128 07:30:00.335705 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9831846a-e0ea-4a91-9650-2e1de09bcf32-config-volume\") pod \"collect-profiles-29405250-n9zfg\" (UID: \"9831846a-e0ea-4a91-9650-2e1de09bcf32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405250-n9zfg" Nov 28 07:30:00 crc kubenswrapper[4946]: I1128 07:30:00.335742 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrzv7\" (UniqueName: \"kubernetes.io/projected/9831846a-e0ea-4a91-9650-2e1de09bcf32-kube-api-access-nrzv7\") pod \"collect-profiles-29405250-n9zfg\" (UID: \"9831846a-e0ea-4a91-9650-2e1de09bcf32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405250-n9zfg" Nov 28 07:30:00 crc kubenswrapper[4946]: I1128 07:30:00.437313 4946 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9831846a-e0ea-4a91-9650-2e1de09bcf32-secret-volume\") pod \"collect-profiles-29405250-n9zfg\" (UID: \"9831846a-e0ea-4a91-9650-2e1de09bcf32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405250-n9zfg" Nov 28 07:30:00 crc kubenswrapper[4946]: I1128 07:30:00.437408 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9831846a-e0ea-4a91-9650-2e1de09bcf32-config-volume\") pod \"collect-profiles-29405250-n9zfg\" (UID: \"9831846a-e0ea-4a91-9650-2e1de09bcf32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405250-n9zfg" Nov 28 07:30:00 crc kubenswrapper[4946]: I1128 07:30:00.437444 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrzv7\" (UniqueName: \"kubernetes.io/projected/9831846a-e0ea-4a91-9650-2e1de09bcf32-kube-api-access-nrzv7\") pod \"collect-profiles-29405250-n9zfg\" (UID: \"9831846a-e0ea-4a91-9650-2e1de09bcf32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405250-n9zfg" Nov 28 07:30:00 crc kubenswrapper[4946]: I1128 07:30:00.438950 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9831846a-e0ea-4a91-9650-2e1de09bcf32-config-volume\") pod \"collect-profiles-29405250-n9zfg\" (UID: \"9831846a-e0ea-4a91-9650-2e1de09bcf32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405250-n9zfg" Nov 28 07:30:00 crc kubenswrapper[4946]: I1128 07:30:00.445294 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9831846a-e0ea-4a91-9650-2e1de09bcf32-secret-volume\") pod \"collect-profiles-29405250-n9zfg\" (UID: \"9831846a-e0ea-4a91-9650-2e1de09bcf32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405250-n9zfg" Nov 28 07:30:00 crc kubenswrapper[4946]: I1128 07:30:00.469672 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrzv7\" (UniqueName: \"kubernetes.io/projected/9831846a-e0ea-4a91-9650-2e1de09bcf32-kube-api-access-nrzv7\") pod \"collect-profiles-29405250-n9zfg\" (UID: \"9831846a-e0ea-4a91-9650-2e1de09bcf32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405250-n9zfg" Nov 28 07:30:00 crc kubenswrapper[4946]: I1128 07:30:00.500750 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405250-n9zfg" Nov 28 07:30:00 crc kubenswrapper[4946]: I1128 07:30:00.775094 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405250-n9zfg"] Nov 28 07:30:00 crc kubenswrapper[4946]: I1128 07:30:00.957774 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405250-n9zfg" event={"ID":"9831846a-e0ea-4a91-9650-2e1de09bcf32","Type":"ContainerStarted","Data":"002bb95dbcdd18328998fc0884683f7beeeb8f2e7e44f960a9fa93ae5d27001c"} Nov 28 07:30:01 crc kubenswrapper[4946]: I1128 07:30:01.965715 4946 generic.go:334] "Generic (PLEG): container finished" podID="9831846a-e0ea-4a91-9650-2e1de09bcf32" containerID="d4afeb8a0611f08069ded1dbf1f8a5e920708063b3c3c403d5283406490a7c8a" exitCode=0 Nov 28 07:30:01 crc kubenswrapper[4946]: I1128 07:30:01.965764 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405250-n9zfg" event={"ID":"9831846a-e0ea-4a91-9650-2e1de09bcf32","Type":"ContainerDied","Data":"d4afeb8a0611f08069ded1dbf1f8a5e920708063b3c3c403d5283406490a7c8a"} Nov 28 07:30:03 crc kubenswrapper[4946]: I1128 07:30:03.305784 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405250-n9zfg" Nov 28 07:30:03 crc kubenswrapper[4946]: I1128 07:30:03.481812 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrzv7\" (UniqueName: \"kubernetes.io/projected/9831846a-e0ea-4a91-9650-2e1de09bcf32-kube-api-access-nrzv7\") pod \"9831846a-e0ea-4a91-9650-2e1de09bcf32\" (UID: \"9831846a-e0ea-4a91-9650-2e1de09bcf32\") " Nov 28 07:30:03 crc kubenswrapper[4946]: I1128 07:30:03.481919 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9831846a-e0ea-4a91-9650-2e1de09bcf32-config-volume\") pod \"9831846a-e0ea-4a91-9650-2e1de09bcf32\" (UID: \"9831846a-e0ea-4a91-9650-2e1de09bcf32\") " Nov 28 07:30:03 crc kubenswrapper[4946]: I1128 07:30:03.482039 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9831846a-e0ea-4a91-9650-2e1de09bcf32-secret-volume\") pod \"9831846a-e0ea-4a91-9650-2e1de09bcf32\" (UID: \"9831846a-e0ea-4a91-9650-2e1de09bcf32\") " Nov 28 07:30:03 crc kubenswrapper[4946]: I1128 07:30:03.482739 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9831846a-e0ea-4a91-9650-2e1de09bcf32-config-volume" (OuterVolumeSpecName: "config-volume") pod "9831846a-e0ea-4a91-9650-2e1de09bcf32" (UID: "9831846a-e0ea-4a91-9650-2e1de09bcf32"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:30:03 crc kubenswrapper[4946]: I1128 07:30:03.487826 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9831846a-e0ea-4a91-9650-2e1de09bcf32-kube-api-access-nrzv7" (OuterVolumeSpecName: "kube-api-access-nrzv7") pod "9831846a-e0ea-4a91-9650-2e1de09bcf32" (UID: "9831846a-e0ea-4a91-9650-2e1de09bcf32"). InnerVolumeSpecName "kube-api-access-nrzv7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:30:03 crc kubenswrapper[4946]: I1128 07:30:03.489704 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9831846a-e0ea-4a91-9650-2e1de09bcf32-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9831846a-e0ea-4a91-9650-2e1de09bcf32" (UID: "9831846a-e0ea-4a91-9650-2e1de09bcf32"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:30:03 crc kubenswrapper[4946]: I1128 07:30:03.583552 4946 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9831846a-e0ea-4a91-9650-2e1de09bcf32-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 28 07:30:03 crc kubenswrapper[4946]: I1128 07:30:03.583599 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrzv7\" (UniqueName: \"kubernetes.io/projected/9831846a-e0ea-4a91-9650-2e1de09bcf32-kube-api-access-nrzv7\") on node \"crc\" DevicePath \"\"" Nov 28 07:30:03 crc kubenswrapper[4946]: I1128 07:30:03.583608 4946 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9831846a-e0ea-4a91-9650-2e1de09bcf32-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 07:30:03 crc kubenswrapper[4946]: I1128 07:30:03.984943 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405250-n9zfg" event={"ID":"9831846a-e0ea-4a91-9650-2e1de09bcf32","Type":"ContainerDied","Data":"002bb95dbcdd18328998fc0884683f7beeeb8f2e7e44f960a9fa93ae5d27001c"} Nov 28 07:30:03 crc kubenswrapper[4946]: I1128 07:30:03.985321 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="002bb95dbcdd18328998fc0884683f7beeeb8f2e7e44f960a9fa93ae5d27001c" Nov 28 07:30:03 crc kubenswrapper[4946]: I1128 07:30:03.985022 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405250-n9zfg" Nov 28 07:30:04 crc kubenswrapper[4946]: I1128 07:30:04.407324 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405205-hbxpf"] Nov 28 07:30:04 crc kubenswrapper[4946]: I1128 07:30:04.417588 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405205-hbxpf"] Nov 28 07:30:06 crc kubenswrapper[4946]: I1128 07:30:06.024608 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="175d3bf2-c969-401c-9b35-d91a066d0305" path="/var/lib/kubelet/pods/175d3bf2-c969-401c-9b35-d91a066d0305/volumes" Nov 28 07:30:32 crc kubenswrapper[4946]: I1128 07:30:32.870630 4946 scope.go:117] "RemoveContainer" containerID="4ffa1b14e943c4fc1748dd0166c9ec7e3837202da277f99b0b4f6853ba87b4d9" Nov 28 07:30:54 crc kubenswrapper[4946]: I1128 07:30:54.730937 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 07:30:54 crc kubenswrapper[4946]: I1128 07:30:54.732626 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 07:31:24 crc kubenswrapper[4946]: I1128 07:31:24.731228 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 07:31:24 crc kubenswrapper[4946]: I1128 07:31:24.732439 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 07:31:54 crc kubenswrapper[4946]: I1128 07:31:54.730406 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 07:31:54 crc kubenswrapper[4946]: I1128 07:31:54.730918 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 07:31:54 crc kubenswrapper[4946]: I1128 07:31:54.730964 4946 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" Nov 28 07:31:54 crc kubenswrapper[4946]: I1128 07:31:54.731505 4946 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"b07cedbddf6e03e4563f2465ebd8f57c47f2f3822962ec91b284bcc0b7624a92"} pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 07:31:54 crc kubenswrapper[4946]: I1128 07:31:54.731545 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" containerID="cri-o://b07cedbddf6e03e4563f2465ebd8f57c47f2f3822962ec91b284bcc0b7624a92" gracePeriod=600 Nov 28 07:31:54 crc kubenswrapper[4946]: E1128 07:31:54.858926 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:31:55 crc kubenswrapper[4946]: I1128 07:31:55.069842 4946 generic.go:334] "Generic (PLEG): container finished" podID="7450befc-262f-45d1-a5f4-f445e540185b" containerID="b07cedbddf6e03e4563f2465ebd8f57c47f2f3822962ec91b284bcc0b7624a92" exitCode=0 Nov 28 07:31:55 crc kubenswrapper[4946]: I1128 07:31:55.069929 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerDied","Data":"b07cedbddf6e03e4563f2465ebd8f57c47f2f3822962ec91b284bcc0b7624a92"} Nov 28 07:31:55 crc kubenswrapper[4946]: I1128 07:31:55.069987 4946 scope.go:117] "RemoveContainer" containerID="5c6bc18cf8e1fc0e610966a3fd2dcb94036d907b5319dc2c88288f5b8ad7bb25" Nov 28 07:31:55 crc kubenswrapper[4946]: I1128 07:31:55.070975 4946 scope.go:117] "RemoveContainer" containerID="b07cedbddf6e03e4563f2465ebd8f57c47f2f3822962ec91b284bcc0b7624a92" Nov 28 07:31:55 crc kubenswrapper[4946]: E1128 07:31:55.071388 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:32:08 crc kubenswrapper[4946]: I1128 07:32:08.990058 4946 scope.go:117] "RemoveContainer" containerID="b07cedbddf6e03e4563f2465ebd8f57c47f2f3822962ec91b284bcc0b7624a92" Nov 28 07:32:08 crc kubenswrapper[4946]: E1128 07:32:08.991172 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:32:23 crc kubenswrapper[4946]: I1128 07:32:23.990515 4946 scope.go:117] "RemoveContainer" containerID="b07cedbddf6e03e4563f2465ebd8f57c47f2f3822962ec91b284bcc0b7624a92" Nov 28 07:32:23 crc kubenswrapper[4946]: E1128 07:32:23.991620 4946 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:32:36 crc kubenswrapper[4946]: I1128 07:32:36.989761 4946 scope.go:117] "RemoveContainer" containerID="b07cedbddf6e03e4563f2465ebd8f57c47f2f3822962ec91b284bcc0b7624a92" Nov 28 07:32:36 crc kubenswrapper[4946]: E1128 07:32:36.990799 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:32:49 crc kubenswrapper[4946]: I1128 07:32:49.990547 4946 scope.go:117] "RemoveContainer" containerID="b07cedbddf6e03e4563f2465ebd8f57c47f2f3822962ec91b284bcc0b7624a92" Nov 28 07:32:49 crc kubenswrapper[4946]: E1128 07:32:49.991651 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:33:00 crc kubenswrapper[4946]: I1128 07:33:00.989855 4946 scope.go:117] "RemoveContainer" containerID="b07cedbddf6e03e4563f2465ebd8f57c47f2f3822962ec91b284bcc0b7624a92" Nov 28 07:33:00 crc kubenswrapper[4946]: E1128 07:33:00.990707 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:33:15 crc kubenswrapper[4946]: I1128 07:33:15.996728 4946 scope.go:117] "RemoveContainer" containerID="b07cedbddf6e03e4563f2465ebd8f57c47f2f3822962ec91b284bcc0b7624a92" Nov 28 07:33:15 crc kubenswrapper[4946]: E1128 07:33:15.997516 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:33:30 crc kubenswrapper[4946]: I1128 07:33:30.989855 4946 scope.go:117] "RemoveContainer" containerID="b07cedbddf6e03e4563f2465ebd8f57c47f2f3822962ec91b284bcc0b7624a92" Nov 28 07:33:30 crc kubenswrapper[4946]: E1128 07:33:30.990576 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:33:45 crc kubenswrapper[4946]: I1128 07:33:45.994885 4946 scope.go:117] "RemoveContainer" containerID="b07cedbddf6e03e4563f2465ebd8f57c47f2f3822962ec91b284bcc0b7624a92" Nov 28 07:33:45 crc kubenswrapper[4946]: E1128 07:33:45.996441 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:33:59 crc kubenswrapper[4946]: I1128 07:33:59.990882 4946 scope.go:117] "RemoveContainer" containerID="b07cedbddf6e03e4563f2465ebd8f57c47f2f3822962ec91b284bcc0b7624a92" Nov 28 07:33:59 crc kubenswrapper[4946]: E1128 07:33:59.991791 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:34:10 crc kubenswrapper[4946]: I1128 07:34:10.990255 4946 scope.go:117] "RemoveContainer" containerID="b07cedbddf6e03e4563f2465ebd8f57c47f2f3822962ec91b284bcc0b7624a92" Nov 28 07:34:10 crc kubenswrapper[4946]: E1128 07:34:10.991244 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:34:23 crc kubenswrapper[4946]: I1128 07:34:23.991032 4946 scope.go:117] "RemoveContainer" containerID="b07cedbddf6e03e4563f2465ebd8f57c47f2f3822962ec91b284bcc0b7624a92" Nov 28 07:34:23 crc kubenswrapper[4946]: E1128 07:34:23.992313 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:34:34 crc kubenswrapper[4946]: I1128 07:34:34.989986 4946 scope.go:117] "RemoveContainer" containerID="b07cedbddf6e03e4563f2465ebd8f57c47f2f3822962ec91b284bcc0b7624a92" Nov 28 07:34:34 crc kubenswrapper[4946]: E1128 07:34:34.992266 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" 
podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:34:46 crc kubenswrapper[4946]: I1128 07:34:46.990768 4946 scope.go:117] "RemoveContainer" containerID="b07cedbddf6e03e4563f2465ebd8f57c47f2f3822962ec91b284bcc0b7624a92" Nov 28 07:34:46 crc kubenswrapper[4946]: E1128 07:34:46.992075 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:34:51 crc kubenswrapper[4946]: I1128 07:34:51.428229 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-j8h7q"] Nov 28 07:34:51 crc kubenswrapper[4946]: E1128 07:34:51.430974 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9831846a-e0ea-4a91-9650-2e1de09bcf32" containerName="collect-profiles" Nov 28 07:34:51 crc kubenswrapper[4946]: I1128 07:34:51.431010 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="9831846a-e0ea-4a91-9650-2e1de09bcf32" containerName="collect-profiles" Nov 28 07:34:51 crc kubenswrapper[4946]: I1128 07:34:51.431353 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="9831846a-e0ea-4a91-9650-2e1de09bcf32" containerName="collect-profiles" Nov 28 07:34:51 crc kubenswrapper[4946]: I1128 07:34:51.433832 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j8h7q" Nov 28 07:34:51 crc kubenswrapper[4946]: I1128 07:34:51.444334 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j8h7q"] Nov 28 07:34:51 crc kubenswrapper[4946]: I1128 07:34:51.578210 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv2m8\" (UniqueName: \"kubernetes.io/projected/2b474b33-7ab1-46aa-ba17-3f3d3b0546b4-kube-api-access-gv2m8\") pod \"community-operators-j8h7q\" (UID: \"2b474b33-7ab1-46aa-ba17-3f3d3b0546b4\") " pod="openshift-marketplace/community-operators-j8h7q" Nov 28 07:34:51 crc kubenswrapper[4946]: I1128 07:34:51.578277 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b474b33-7ab1-46aa-ba17-3f3d3b0546b4-utilities\") pod \"community-operators-j8h7q\" (UID: \"2b474b33-7ab1-46aa-ba17-3f3d3b0546b4\") " pod="openshift-marketplace/community-operators-j8h7q" Nov 28 07:34:51 crc kubenswrapper[4946]: I1128 07:34:51.578485 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b474b33-7ab1-46aa-ba17-3f3d3b0546b4-catalog-content\") pod \"community-operators-j8h7q\" (UID: \"2b474b33-7ab1-46aa-ba17-3f3d3b0546b4\") " pod="openshift-marketplace/community-operators-j8h7q" Nov 28 07:34:51 crc kubenswrapper[4946]: I1128 07:34:51.679711 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv2m8\" (UniqueName: \"kubernetes.io/projected/2b474b33-7ab1-46aa-ba17-3f3d3b0546b4-kube-api-access-gv2m8\") pod \"community-operators-j8h7q\" (UID: \"2b474b33-7ab1-46aa-ba17-3f3d3b0546b4\") " pod="openshift-marketplace/community-operators-j8h7q" Nov 28 07:34:51 crc kubenswrapper[4946]: 
I1128 07:34:51.679775 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b474b33-7ab1-46aa-ba17-3f3d3b0546b4-utilities\") pod \"community-operators-j8h7q\" (UID: \"2b474b33-7ab1-46aa-ba17-3f3d3b0546b4\") " pod="openshift-marketplace/community-operators-j8h7q" Nov 28 07:34:51 crc kubenswrapper[4946]: I1128 07:34:51.679814 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b474b33-7ab1-46aa-ba17-3f3d3b0546b4-catalog-content\") pod \"community-operators-j8h7q\" (UID: \"2b474b33-7ab1-46aa-ba17-3f3d3b0546b4\") " pod="openshift-marketplace/community-operators-j8h7q" Nov 28 07:34:51 crc kubenswrapper[4946]: I1128 07:34:51.680337 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b474b33-7ab1-46aa-ba17-3f3d3b0546b4-utilities\") pod \"community-operators-j8h7q\" (UID: \"2b474b33-7ab1-46aa-ba17-3f3d3b0546b4\") " pod="openshift-marketplace/community-operators-j8h7q" Nov 28 07:34:51 crc kubenswrapper[4946]: I1128 07:34:51.680487 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b474b33-7ab1-46aa-ba17-3f3d3b0546b4-catalog-content\") pod \"community-operators-j8h7q\" (UID: \"2b474b33-7ab1-46aa-ba17-3f3d3b0546b4\") " pod="openshift-marketplace/community-operators-j8h7q" Nov 28 07:34:51 crc kubenswrapper[4946]: I1128 07:34:51.701190 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv2m8\" (UniqueName: \"kubernetes.io/projected/2b474b33-7ab1-46aa-ba17-3f3d3b0546b4-kube-api-access-gv2m8\") pod \"community-operators-j8h7q\" (UID: \"2b474b33-7ab1-46aa-ba17-3f3d3b0546b4\") " pod="openshift-marketplace/community-operators-j8h7q" Nov 28 07:34:51 crc kubenswrapper[4946]: I1128 07:34:51.755845 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-j8h7q" Nov 28 07:34:52 crc kubenswrapper[4946]: I1128 07:34:52.053137 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j8h7q"] Nov 28 07:34:52 crc kubenswrapper[4946]: I1128 07:34:52.785534 4946 generic.go:334] "Generic (PLEG): container finished" podID="2b474b33-7ab1-46aa-ba17-3f3d3b0546b4" containerID="22145b0cf282cd76e470f4d2f32e5af160d5ba37a83883b32fea09aec9f868ec" exitCode=0 Nov 28 07:34:52 crc kubenswrapper[4946]: I1128 07:34:52.785678 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j8h7q" event={"ID":"2b474b33-7ab1-46aa-ba17-3f3d3b0546b4","Type":"ContainerDied","Data":"22145b0cf282cd76e470f4d2f32e5af160d5ba37a83883b32fea09aec9f868ec"} Nov 28 07:34:52 crc kubenswrapper[4946]: I1128 07:34:52.786726 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j8h7q" event={"ID":"2b474b33-7ab1-46aa-ba17-3f3d3b0546b4","Type":"ContainerStarted","Data":"4671bb1ae154527fd20033e7062377777a28febdd716e948f667f82e29c2d2f0"} Nov 28 07:34:52 crc kubenswrapper[4946]: I1128 07:34:52.789281 4946 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 07:34:54 crc kubenswrapper[4946]: I1128 07:34:54.803650 4946 generic.go:334] "Generic (PLEG): container finished" podID="2b474b33-7ab1-46aa-ba17-3f3d3b0546b4" containerID="7a87aa9eab8f102d3bd7a54feb9e7ccc96e26551b0fe8387f5dd2ef5db7cc7e4" exitCode=0 Nov 28 07:34:54 crc kubenswrapper[4946]: I1128 07:34:54.803780 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j8h7q" event={"ID":"2b474b33-7ab1-46aa-ba17-3f3d3b0546b4","Type":"ContainerDied","Data":"7a87aa9eab8f102d3bd7a54feb9e7ccc96e26551b0fe8387f5dd2ef5db7cc7e4"} Nov 28 07:34:55 crc kubenswrapper[4946]: I1128 07:34:55.815921 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j8h7q" event={"ID":"2b474b33-7ab1-46aa-ba17-3f3d3b0546b4","Type":"ContainerStarted","Data":"0a402f4a2d0e995797489d0859fb2c2ac82fe10adbc6dac37d5a3d72b172500e"} Nov 28 07:34:55 crc kubenswrapper[4946]: I1128 07:34:55.849959 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-j8h7q" podStartSLOduration=2.289023669 podStartE2EDuration="4.849922879s" podCreationTimestamp="2025-11-28 07:34:51 +0000 UTC" firstStartedPulling="2025-11-28 07:34:52.788891137 +0000 UTC m=+2547.166956288" lastFinishedPulling="2025-11-28 07:34:55.349790357 +0000 UTC m=+2549.727855498" observedRunningTime="2025-11-28 07:34:55.846256477 +0000 UTC m=+2550.224321618" watchObservedRunningTime="2025-11-28 07:34:55.849922879 +0000 UTC m=+2550.227988040" Nov 28 07:35:00 crc kubenswrapper[4946]: I1128 07:35:00.989756 4946 scope.go:117] "RemoveContainer" containerID="b07cedbddf6e03e4563f2465ebd8f57c47f2f3822962ec91b284bcc0b7624a92" Nov 28 07:35:00 crc kubenswrapper[4946]: E1128 07:35:00.990888 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:35:01 crc 
kubenswrapper[4946]: I1128 07:35:01.756412 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-j8h7q" Nov 28 07:35:01 crc kubenswrapper[4946]: I1128 07:35:01.757121 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-j8h7q" Nov 28 07:35:01 crc kubenswrapper[4946]: I1128 07:35:01.840365 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-j8h7q" Nov 28 07:35:01 crc kubenswrapper[4946]: I1128 07:35:01.943435 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-j8h7q" Nov 28 07:35:02 crc kubenswrapper[4946]: I1128 07:35:02.090413 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j8h7q"] Nov 28 07:35:03 crc kubenswrapper[4946]: I1128 07:35:03.888164 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-j8h7q" podUID="2b474b33-7ab1-46aa-ba17-3f3d3b0546b4" containerName="registry-server" containerID="cri-o://0a402f4a2d0e995797489d0859fb2c2ac82fe10adbc6dac37d5a3d72b172500e" gracePeriod=2 Nov 28 07:35:04 crc kubenswrapper[4946]: I1128 07:35:04.899777 4946 generic.go:334] "Generic (PLEG): container finished" podID="2b474b33-7ab1-46aa-ba17-3f3d3b0546b4" containerID="0a402f4a2d0e995797489d0859fb2c2ac82fe10adbc6dac37d5a3d72b172500e" exitCode=0 Nov 28 07:35:04 crc kubenswrapper[4946]: I1128 07:35:04.899971 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j8h7q" event={"ID":"2b474b33-7ab1-46aa-ba17-3f3d3b0546b4","Type":"ContainerDied","Data":"0a402f4a2d0e995797489d0859fb2c2ac82fe10adbc6dac37d5a3d72b172500e"} Nov 28 07:35:05 crc kubenswrapper[4946]: I1128 07:35:05.266745 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j8h7q" Nov 28 07:35:05 crc kubenswrapper[4946]: I1128 07:35:05.393309 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b474b33-7ab1-46aa-ba17-3f3d3b0546b4-utilities\") pod \"2b474b33-7ab1-46aa-ba17-3f3d3b0546b4\" (UID: \"2b474b33-7ab1-46aa-ba17-3f3d3b0546b4\") " Nov 28 07:35:05 crc kubenswrapper[4946]: I1128 07:35:05.393358 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gv2m8\" (UniqueName: \"kubernetes.io/projected/2b474b33-7ab1-46aa-ba17-3f3d3b0546b4-kube-api-access-gv2m8\") pod \"2b474b33-7ab1-46aa-ba17-3f3d3b0546b4\" (UID: \"2b474b33-7ab1-46aa-ba17-3f3d3b0546b4\") " Nov 28 07:35:05 crc kubenswrapper[4946]: I1128 07:35:05.393382 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b474b33-7ab1-46aa-ba17-3f3d3b0546b4-catalog-content\") pod \"2b474b33-7ab1-46aa-ba17-3f3d3b0546b4\" (UID: \"2b474b33-7ab1-46aa-ba17-3f3d3b0546b4\") " Nov 28 07:35:05 crc kubenswrapper[4946]: I1128 07:35:05.394836 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b474b33-7ab1-46aa-ba17-3f3d3b0546b4-utilities" (OuterVolumeSpecName: "utilities") pod "2b474b33-7ab1-46aa-ba17-3f3d3b0546b4" (UID: "2b474b33-7ab1-46aa-ba17-3f3d3b0546b4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:35:05 crc kubenswrapper[4946]: I1128 07:35:05.400070 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b474b33-7ab1-46aa-ba17-3f3d3b0546b4-kube-api-access-gv2m8" (OuterVolumeSpecName: "kube-api-access-gv2m8") pod "2b474b33-7ab1-46aa-ba17-3f3d3b0546b4" (UID: "2b474b33-7ab1-46aa-ba17-3f3d3b0546b4"). InnerVolumeSpecName "kube-api-access-gv2m8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:35:05 crc kubenswrapper[4946]: I1128 07:35:05.442538 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b474b33-7ab1-46aa-ba17-3f3d3b0546b4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2b474b33-7ab1-46aa-ba17-3f3d3b0546b4" (UID: "2b474b33-7ab1-46aa-ba17-3f3d3b0546b4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:35:05 crc kubenswrapper[4946]: I1128 07:35:05.495328 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b474b33-7ab1-46aa-ba17-3f3d3b0546b4-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 07:35:05 crc kubenswrapper[4946]: I1128 07:35:05.495350 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gv2m8\" (UniqueName: \"kubernetes.io/projected/2b474b33-7ab1-46aa-ba17-3f3d3b0546b4-kube-api-access-gv2m8\") on node \"crc\" DevicePath \"\"" Nov 28 07:35:05 crc kubenswrapper[4946]: I1128 07:35:05.495360 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b474b33-7ab1-46aa-ba17-3f3d3b0546b4-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 07:35:05 crc kubenswrapper[4946]: I1128 07:35:05.911845 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j8h7q" event={"ID":"2b474b33-7ab1-46aa-ba17-3f3d3b0546b4","Type":"ContainerDied","Data":"4671bb1ae154527fd20033e7062377777a28febdd716e948f667f82e29c2d2f0"} Nov 28 07:35:05 crc kubenswrapper[4946]: I1128 07:35:05.911890 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-j8h7q" Nov 28 07:35:05 crc kubenswrapper[4946]: I1128 07:35:05.911901 4946 scope.go:117] "RemoveContainer" containerID="0a402f4a2d0e995797489d0859fb2c2ac82fe10adbc6dac37d5a3d72b172500e" Nov 28 07:35:05 crc kubenswrapper[4946]: I1128 07:35:05.957762 4946 scope.go:117] "RemoveContainer" containerID="7a87aa9eab8f102d3bd7a54feb9e7ccc96e26551b0fe8387f5dd2ef5db7cc7e4" Nov 28 07:35:05 crc kubenswrapper[4946]: I1128 07:35:05.958530 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j8h7q"] Nov 28 07:35:05 crc kubenswrapper[4946]: I1128 07:35:05.976836 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-j8h7q"] Nov 28 07:35:05 crc kubenswrapper[4946]: I1128 07:35:05.982558 4946 scope.go:117] "RemoveContainer" containerID="22145b0cf282cd76e470f4d2f32e5af160d5ba37a83883b32fea09aec9f868ec" Nov 28 07:35:06 crc kubenswrapper[4946]: I1128 07:35:06.000842 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b474b33-7ab1-46aa-ba17-3f3d3b0546b4" path="/var/lib/kubelet/pods/2b474b33-7ab1-46aa-ba17-3f3d3b0546b4/volumes" Nov 28 07:35:14 crc kubenswrapper[4946]: I1128 07:35:14.990457 4946 scope.go:117] "RemoveContainer" containerID="b07cedbddf6e03e4563f2465ebd8f57c47f2f3822962ec91b284bcc0b7624a92" Nov 28 07:35:14 crc kubenswrapper[4946]: E1128 07:35:14.991683 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:35:26 crc kubenswrapper[4946]: I1128 07:35:26.001510 4946 scope.go:117] "RemoveContainer" containerID="b07cedbddf6e03e4563f2465ebd8f57c47f2f3822962ec91b284bcc0b7624a92" Nov 28 07:35:26 crc kubenswrapper[4946]: E1128 07:35:26.002391 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:35:37 crc kubenswrapper[4946]: I1128 07:35:37.990859 4946 scope.go:117] "RemoveContainer" containerID="b07cedbddf6e03e4563f2465ebd8f57c47f2f3822962ec91b284bcc0b7624a92" Nov 28 07:35:37 crc kubenswrapper[4946]: E1128 07:35:37.991890 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:35:49 crc kubenswrapper[4946]: I1128 07:35:49.990162 4946 scope.go:117] "RemoveContainer" containerID="b07cedbddf6e03e4563f2465ebd8f57c47f2f3822962ec91b284bcc0b7624a92" Nov 28 07:35:49 crc kubenswrapper[4946]: E1128 07:35:49.991275 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:36:04 crc kubenswrapper[4946]: I1128 07:36:04.990221 4946 scope.go:117] "RemoveContainer" containerID="b07cedbddf6e03e4563f2465ebd8f57c47f2f3822962ec91b284bcc0b7624a92" Nov 28 07:36:04 crc kubenswrapper[4946]: E1128 07:36:04.990912 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:36:19 crc kubenswrapper[4946]: I1128 07:36:19.990007 4946 scope.go:117] "RemoveContainer" containerID="b07cedbddf6e03e4563f2465ebd8f57c47f2f3822962ec91b284bcc0b7624a92" Nov 28 07:36:19 crc kubenswrapper[4946]: E1128 07:36:19.991767 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:36:32 crc kubenswrapper[4946]: I1128 07:36:32.989926 4946 scope.go:117] "RemoveContainer" containerID="b07cedbddf6e03e4563f2465ebd8f57c47f2f3822962ec91b284bcc0b7624a92" Nov 28 07:36:32 crc kubenswrapper[4946]: E1128 07:36:32.991181 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:36:45 crc kubenswrapper[4946]: I1128 07:36:45.997533 4946 scope.go:117] "RemoveContainer" containerID="b07cedbddf6e03e4563f2465ebd8f57c47f2f3822962ec91b284bcc0b7624a92" Nov 28 07:36:45 crc kubenswrapper[4946]: E1128 07:36:45.998512 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:36:48 crc kubenswrapper[4946]: I1128 07:36:48.528690 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2tjsf"] Nov 28 07:36:48 crc kubenswrapper[4946]: E1128 07:36:48.529837 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b474b33-7ab1-46aa-ba17-3f3d3b0546b4" containerName="extract-utilities" Nov 28 07:36:48 crc kubenswrapper[4946]: I1128 07:36:48.529857 4946 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2b474b33-7ab1-46aa-ba17-3f3d3b0546b4" containerName="extract-utilities" Nov 28 07:36:48 crc kubenswrapper[4946]: E1128 07:36:48.529879 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b474b33-7ab1-46aa-ba17-3f3d3b0546b4" containerName="registry-server" Nov 28 07:36:48 crc kubenswrapper[4946]: I1128 07:36:48.529887 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b474b33-7ab1-46aa-ba17-3f3d3b0546b4" containerName="registry-server" Nov 28 07:36:48 crc kubenswrapper[4946]: E1128 07:36:48.529936 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b474b33-7ab1-46aa-ba17-3f3d3b0546b4" containerName="extract-content" Nov 28 07:36:48 crc kubenswrapper[4946]: I1128 07:36:48.529946 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b474b33-7ab1-46aa-ba17-3f3d3b0546b4" containerName="extract-content" Nov 28 07:36:48 crc kubenswrapper[4946]: I1128 07:36:48.533315 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b474b33-7ab1-46aa-ba17-3f3d3b0546b4" containerName="registry-server" Nov 28 07:36:48 crc kubenswrapper[4946]: I1128 07:36:48.544366 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2tjsf" Nov 28 07:36:48 crc kubenswrapper[4946]: I1128 07:36:48.558457 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2tjsf"] Nov 28 07:36:48 crc kubenswrapper[4946]: I1128 07:36:48.645243 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91512b2d-25ad-4ecf-a2aa-054a88c0a293-catalog-content\") pod \"certified-operators-2tjsf\" (UID: \"91512b2d-25ad-4ecf-a2aa-054a88c0a293\") " pod="openshift-marketplace/certified-operators-2tjsf" Nov 28 07:36:48 crc kubenswrapper[4946]: I1128 07:36:48.645604 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91512b2d-25ad-4ecf-a2aa-054a88c0a293-utilities\") pod \"certified-operators-2tjsf\" (UID: \"91512b2d-25ad-4ecf-a2aa-054a88c0a293\") " pod="openshift-marketplace/certified-operators-2tjsf" Nov 28 07:36:48 crc kubenswrapper[4946]: I1128 07:36:48.645679 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc2r2\" (UniqueName: \"kubernetes.io/projected/91512b2d-25ad-4ecf-a2aa-054a88c0a293-kube-api-access-cc2r2\") pod \"certified-operators-2tjsf\" (UID: \"91512b2d-25ad-4ecf-a2aa-054a88c0a293\") " pod="openshift-marketplace/certified-operators-2tjsf" Nov 28 07:36:48 crc kubenswrapper[4946]: I1128 07:36:48.746882 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91512b2d-25ad-4ecf-a2aa-054a88c0a293-catalog-content\") pod \"certified-operators-2tjsf\" (UID: \"91512b2d-25ad-4ecf-a2aa-054a88c0a293\") " pod="openshift-marketplace/certified-operators-2tjsf" Nov 28 07:36:48 crc kubenswrapper[4946]: I1128 07:36:48.746981 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91512b2d-25ad-4ecf-a2aa-054a88c0a293-utilities\") pod \"certified-operators-2tjsf\" (UID: \"91512b2d-25ad-4ecf-a2aa-054a88c0a293\") " pod="openshift-marketplace/certified-operators-2tjsf" Nov 28 07:36:48 crc kubenswrapper[4946]: I1128 07:36:48.747070 4946 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cc2r2\" (UniqueName: \"kubernetes.io/projected/91512b2d-25ad-4ecf-a2aa-054a88c0a293-kube-api-access-cc2r2\") pod \"certified-operators-2tjsf\" (UID: \"91512b2d-25ad-4ecf-a2aa-054a88c0a293\") " pod="openshift-marketplace/certified-operators-2tjsf" Nov 28 07:36:48 crc kubenswrapper[4946]: I1128 07:36:48.747898 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91512b2d-25ad-4ecf-a2aa-054a88c0a293-catalog-content\") pod \"certified-operators-2tjsf\" (UID: \"91512b2d-25ad-4ecf-a2aa-054a88c0a293\") " pod="openshift-marketplace/certified-operators-2tjsf" Nov 28 07:36:48 crc kubenswrapper[4946]: I1128 07:36:48.747992 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91512b2d-25ad-4ecf-a2aa-054a88c0a293-utilities\") pod \"certified-operators-2tjsf\" (UID: \"91512b2d-25ad-4ecf-a2aa-054a88c0a293\") " pod="openshift-marketplace/certified-operators-2tjsf" Nov 28 07:36:48 crc kubenswrapper[4946]: I1128 07:36:48.777173 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cc2r2\" (UniqueName: \"kubernetes.io/projected/91512b2d-25ad-4ecf-a2aa-054a88c0a293-kube-api-access-cc2r2\") pod \"certified-operators-2tjsf\" (UID: \"91512b2d-25ad-4ecf-a2aa-054a88c0a293\") " pod="openshift-marketplace/certified-operators-2tjsf" Nov 28 07:36:48 crc kubenswrapper[4946]: I1128 07:36:48.875198 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2tjsf" Nov 28 07:36:49 crc kubenswrapper[4946]: I1128 07:36:49.317957 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2tjsf"] Nov 28 07:36:49 crc kubenswrapper[4946]: I1128 07:36:49.967093 4946 generic.go:334] "Generic (PLEG): container finished" podID="91512b2d-25ad-4ecf-a2aa-054a88c0a293" containerID="b8eb6348490af681c7f14ac9d27d308eb5ca57d59120766ba39f5cff248d47ad" exitCode=0 Nov 28 07:36:49 crc kubenswrapper[4946]: I1128 07:36:49.967204 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2tjsf" event={"ID":"91512b2d-25ad-4ecf-a2aa-054a88c0a293","Type":"ContainerDied","Data":"b8eb6348490af681c7f14ac9d27d308eb5ca57d59120766ba39f5cff248d47ad"} Nov 28 07:36:49 crc kubenswrapper[4946]: I1128 07:36:49.967436 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2tjsf" event={"ID":"91512b2d-25ad-4ecf-a2aa-054a88c0a293","Type":"ContainerStarted","Data":"d55d9692b6af7be82d62c1a36eefde6d2c59a78f537de506093c0f3b9c3dd08e"} Nov 28 07:36:51 crc kubenswrapper[4946]: I1128 07:36:51.991825 4946 generic.go:334] "Generic (PLEG): container finished" podID="91512b2d-25ad-4ecf-a2aa-054a88c0a293" containerID="27887c2d2aaaa9ea042497b41326ec76fac91d26982103dad6e47fd85d1f34d7" exitCode=0 Nov 28 07:36:52 crc kubenswrapper[4946]: I1128 07:36:52.014484 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2tjsf" event={"ID":"91512b2d-25ad-4ecf-a2aa-054a88c0a293","Type":"ContainerDied","Data":"27887c2d2aaaa9ea042497b41326ec76fac91d26982103dad6e47fd85d1f34d7"} Nov 28 07:36:53 crc kubenswrapper[4946]: I1128 07:36:53.005215 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2tjsf" 
event={"ID":"91512b2d-25ad-4ecf-a2aa-054a88c0a293","Type":"ContainerStarted","Data":"b84873bf8645514007612f5ec5c4bbefd7b9989b3ed6571cab9cc72586f42ad5"} Nov 28 07:36:53 crc kubenswrapper[4946]: I1128 07:36:53.040209 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2tjsf" podStartSLOduration=2.580628515 podStartE2EDuration="5.04018979s" podCreationTimestamp="2025-11-28 07:36:48 +0000 UTC" firstStartedPulling="2025-11-28 07:36:49.970195884 +0000 UTC m=+2664.348261035" lastFinishedPulling="2025-11-28 07:36:52.429757159 +0000 UTC m=+2666.807822310" observedRunningTime="2025-11-28 07:36:53.039017341 +0000 UTC m=+2667.417082482" watchObservedRunningTime="2025-11-28 07:36:53.04018979 +0000 UTC m=+2667.418254911" Nov 28 07:36:56 crc kubenswrapper[4946]: I1128 07:36:56.989723 4946 scope.go:117] "RemoveContainer" containerID="b07cedbddf6e03e4563f2465ebd8f57c47f2f3822962ec91b284bcc0b7624a92" Nov 28 07:36:58 crc kubenswrapper[4946]: I1128 07:36:58.057973 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerStarted","Data":"bbb706433b03c56e1cbbc8b4eeefaa97c14b4a7ae35b752d7341d83d3d461c66"} Nov 28 07:36:58 crc kubenswrapper[4946]: I1128 07:36:58.875973 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2tjsf" Nov 28 07:36:58 crc kubenswrapper[4946]: I1128 07:36:58.876345 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2tjsf" Nov 28 07:36:58 crc kubenswrapper[4946]: I1128 07:36:58.938815 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2tjsf" Nov 28 07:36:59 crc kubenswrapper[4946]: I1128 07:36:59.148258 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2tjsf" Nov 28 07:36:59 crc kubenswrapper[4946]: I1128 07:36:59.201672 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2tjsf"] Nov 28 07:37:01 crc kubenswrapper[4946]: I1128 07:37:01.092757 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2tjsf" podUID="91512b2d-25ad-4ecf-a2aa-054a88c0a293" containerName="registry-server" containerID="cri-o://b84873bf8645514007612f5ec5c4bbefd7b9989b3ed6571cab9cc72586f42ad5" gracePeriod=2 Nov 28 07:37:01 crc kubenswrapper[4946]: I1128 07:37:01.553282 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2tjsf" Nov 28 07:37:01 crc kubenswrapper[4946]: I1128 07:37:01.699382 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91512b2d-25ad-4ecf-a2aa-054a88c0a293-catalog-content\") pod \"91512b2d-25ad-4ecf-a2aa-054a88c0a293\" (UID: \"91512b2d-25ad-4ecf-a2aa-054a88c0a293\") " Nov 28 07:37:01 crc kubenswrapper[4946]: I1128 07:37:01.699552 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cc2r2\" (UniqueName: \"kubernetes.io/projected/91512b2d-25ad-4ecf-a2aa-054a88c0a293-kube-api-access-cc2r2\") pod \"91512b2d-25ad-4ecf-a2aa-054a88c0a293\" (UID: \"91512b2d-25ad-4ecf-a2aa-054a88c0a293\") " Nov 28 07:37:01 crc kubenswrapper[4946]: I1128 07:37:01.699581 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91512b2d-25ad-4ecf-a2aa-054a88c0a293-utilities\") pod \"91512b2d-25ad-4ecf-a2aa-054a88c0a293\" (UID: \"91512b2d-25ad-4ecf-a2aa-054a88c0a293\") " Nov 28 07:37:01 crc kubenswrapper[4946]: I1128 07:37:01.700786 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91512b2d-25ad-4ecf-a2aa-054a88c0a293-utilities" (OuterVolumeSpecName: "utilities") pod "91512b2d-25ad-4ecf-a2aa-054a88c0a293" (UID: "91512b2d-25ad-4ecf-a2aa-054a88c0a293"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:37:01 crc kubenswrapper[4946]: I1128 07:37:01.705144 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91512b2d-25ad-4ecf-a2aa-054a88c0a293-kube-api-access-cc2r2" (OuterVolumeSpecName: "kube-api-access-cc2r2") pod "91512b2d-25ad-4ecf-a2aa-054a88c0a293" (UID: "91512b2d-25ad-4ecf-a2aa-054a88c0a293"). InnerVolumeSpecName "kube-api-access-cc2r2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:37:01 crc kubenswrapper[4946]: I1128 07:37:01.767431 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91512b2d-25ad-4ecf-a2aa-054a88c0a293-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "91512b2d-25ad-4ecf-a2aa-054a88c0a293" (UID: "91512b2d-25ad-4ecf-a2aa-054a88c0a293"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:37:01 crc kubenswrapper[4946]: I1128 07:37:01.801031 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cc2r2\" (UniqueName: \"kubernetes.io/projected/91512b2d-25ad-4ecf-a2aa-054a88c0a293-kube-api-access-cc2r2\") on node \"crc\" DevicePath \"\"" Nov 28 07:37:01 crc kubenswrapper[4946]: I1128 07:37:01.801062 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91512b2d-25ad-4ecf-a2aa-054a88c0a293-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 07:37:01 crc kubenswrapper[4946]: I1128 07:37:01.801075 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91512b2d-25ad-4ecf-a2aa-054a88c0a293-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 07:37:02 crc kubenswrapper[4946]: I1128 07:37:02.108266 4946 generic.go:334] "Generic (PLEG): container finished" podID="91512b2d-25ad-4ecf-a2aa-054a88c0a293" containerID="b84873bf8645514007612f5ec5c4bbefd7b9989b3ed6571cab9cc72586f42ad5" exitCode=0 Nov 28 07:37:02 crc kubenswrapper[4946]: I1128 07:37:02.108432 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2tjsf" event={"ID":"91512b2d-25ad-4ecf-a2aa-054a88c0a293","Type":"ContainerDied","Data":"b84873bf8645514007612f5ec5c4bbefd7b9989b3ed6571cab9cc72586f42ad5"} Nov 28 07:37:02 crc kubenswrapper[4946]: I1128 07:37:02.108518 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2tjsf" Nov 28 07:37:02 crc kubenswrapper[4946]: I1128 07:37:02.108612 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2tjsf" event={"ID":"91512b2d-25ad-4ecf-a2aa-054a88c0a293","Type":"ContainerDied","Data":"d55d9692b6af7be82d62c1a36eefde6d2c59a78f537de506093c0f3b9c3dd08e"} Nov 28 07:37:02 crc kubenswrapper[4946]: I1128 07:37:02.108723 4946 scope.go:117] "RemoveContainer" containerID="b84873bf8645514007612f5ec5c4bbefd7b9989b3ed6571cab9cc72586f42ad5" Nov 28 07:37:02 crc kubenswrapper[4946]: I1128 07:37:02.153852 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2tjsf"] Nov 28 07:37:02 crc kubenswrapper[4946]: I1128 07:37:02.159325 4946 scope.go:117] "RemoveContainer" containerID="27887c2d2aaaa9ea042497b41326ec76fac91d26982103dad6e47fd85d1f34d7" Nov 28 07:37:02 crc kubenswrapper[4946]: I1128 07:37:02.163927 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2tjsf"] Nov 28 07:37:02 crc kubenswrapper[4946]: I1128 07:37:02.192227 4946 scope.go:117] "RemoveContainer" containerID="b8eb6348490af681c7f14ac9d27d308eb5ca57d59120766ba39f5cff248d47ad" Nov 28 07:37:02 crc kubenswrapper[4946]: I1128 07:37:02.233160 4946 scope.go:117] "RemoveContainer" containerID="b84873bf8645514007612f5ec5c4bbefd7b9989b3ed6571cab9cc72586f42ad5" Nov 28 07:37:02 crc kubenswrapper[4946]: E1128 07:37:02.235305 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b84873bf8645514007612f5ec5c4bbefd7b9989b3ed6571cab9cc72586f42ad5\": container with ID starting with b84873bf8645514007612f5ec5c4bbefd7b9989b3ed6571cab9cc72586f42ad5 not found: ID does not exist" containerID="b84873bf8645514007612f5ec5c4bbefd7b9989b3ed6571cab9cc72586f42ad5" Nov 28 07:37:02 crc kubenswrapper[4946]: I1128 07:37:02.235355 
4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b84873bf8645514007612f5ec5c4bbefd7b9989b3ed6571cab9cc72586f42ad5"} err="failed to get container status \"b84873bf8645514007612f5ec5c4bbefd7b9989b3ed6571cab9cc72586f42ad5\": rpc error: code = NotFound desc = could not find container \"b84873bf8645514007612f5ec5c4bbefd7b9989b3ed6571cab9cc72586f42ad5\": container with ID starting with b84873bf8645514007612f5ec5c4bbefd7b9989b3ed6571cab9cc72586f42ad5 not found: ID does not exist" Nov 28 07:37:02 crc kubenswrapper[4946]: I1128 07:37:02.235387 4946 scope.go:117] "RemoveContainer" containerID="27887c2d2aaaa9ea042497b41326ec76fac91d26982103dad6e47fd85d1f34d7" Nov 28 07:37:02 crc kubenswrapper[4946]: E1128 07:37:02.235829 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27887c2d2aaaa9ea042497b41326ec76fac91d26982103dad6e47fd85d1f34d7\": container with ID starting with 27887c2d2aaaa9ea042497b41326ec76fac91d26982103dad6e47fd85d1f34d7 not found: ID does not exist" containerID="27887c2d2aaaa9ea042497b41326ec76fac91d26982103dad6e47fd85d1f34d7" Nov 28 07:37:02 crc kubenswrapper[4946]: I1128 07:37:02.235921 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27887c2d2aaaa9ea042497b41326ec76fac91d26982103dad6e47fd85d1f34d7"} err="failed to get container status \"27887c2d2aaaa9ea042497b41326ec76fac91d26982103dad6e47fd85d1f34d7\": rpc error: code = NotFound desc = could not find container \"27887c2d2aaaa9ea042497b41326ec76fac91d26982103dad6e47fd85d1f34d7\": container with ID starting with 27887c2d2aaaa9ea042497b41326ec76fac91d26982103dad6e47fd85d1f34d7 not found: ID does not exist" Nov 28 07:37:02 crc kubenswrapper[4946]: I1128 07:37:02.235983 4946 scope.go:117] "RemoveContainer" containerID="b8eb6348490af681c7f14ac9d27d308eb5ca57d59120766ba39f5cff248d47ad" Nov 28 07:37:02 crc kubenswrapper[4946]: E1128 07:37:02.236318 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8eb6348490af681c7f14ac9d27d308eb5ca57d59120766ba39f5cff248d47ad\": container with ID starting with b8eb6348490af681c7f14ac9d27d308eb5ca57d59120766ba39f5cff248d47ad not found: ID does not exist" containerID="b8eb6348490af681c7f14ac9d27d308eb5ca57d59120766ba39f5cff248d47ad" Nov 28 07:37:02 crc kubenswrapper[4946]: I1128 07:37:02.236350 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8eb6348490af681c7f14ac9d27d308eb5ca57d59120766ba39f5cff248d47ad"} err="failed to get container status \"b8eb6348490af681c7f14ac9d27d308eb5ca57d59120766ba39f5cff248d47ad\": rpc error: code = NotFound desc = could not find container \"b8eb6348490af681c7f14ac9d27d308eb5ca57d59120766ba39f5cff248d47ad\": container with ID starting with b8eb6348490af681c7f14ac9d27d308eb5ca57d59120766ba39f5cff248d47ad not found: ID does not exist" Nov 28 07:37:04 crc kubenswrapper[4946]: I1128 07:37:04.007744 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91512b2d-25ad-4ecf-a2aa-054a88c0a293" path="/var/lib/kubelet/pods/91512b2d-25ad-4ecf-a2aa-054a88c0a293/volumes" Nov 28 07:37:49 crc kubenswrapper[4946]: I1128 07:37:49.877802 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-snqdk"] Nov 28 07:37:49 crc kubenswrapper[4946]: E1128 07:37:49.882079 4946 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="91512b2d-25ad-4ecf-a2aa-054a88c0a293" containerName="extract-content" Nov 28 07:37:49 crc kubenswrapper[4946]: I1128 07:37:49.882364 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="91512b2d-25ad-4ecf-a2aa-054a88c0a293" containerName="extract-content" Nov 28 07:37:49 crc kubenswrapper[4946]: E1128 07:37:49.882530 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91512b2d-25ad-4ecf-a2aa-054a88c0a293" containerName="registry-server" Nov 28 07:37:49 crc kubenswrapper[4946]: I1128 07:37:49.882668 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="91512b2d-25ad-4ecf-a2aa-054a88c0a293" containerName="registry-server" Nov 28 07:37:49 crc kubenswrapper[4946]: E1128 07:37:49.882793 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91512b2d-25ad-4ecf-a2aa-054a88c0a293" containerName="extract-utilities" Nov 28 07:37:49 crc kubenswrapper[4946]: I1128 07:37:49.882903 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="91512b2d-25ad-4ecf-a2aa-054a88c0a293" containerName="extract-utilities" Nov 28 07:37:49 crc kubenswrapper[4946]: I1128 07:37:49.883295 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="91512b2d-25ad-4ecf-a2aa-054a88c0a293" containerName="registry-server" Nov 28 07:37:49 crc kubenswrapper[4946]: I1128 07:37:49.885101 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-snqdk" Nov 28 07:37:49 crc kubenswrapper[4946]: I1128 07:37:49.905814 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-snqdk"] Nov 28 07:37:49 crc kubenswrapper[4946]: I1128 07:37:49.980056 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67dc05d7-fa75-4a26-b666-4447ed142959-catalog-content\") pod \"redhat-operators-snqdk\" (UID: \"67dc05d7-fa75-4a26-b666-4447ed142959\") " pod="openshift-marketplace/redhat-operators-snqdk" Nov 28 07:37:49 crc kubenswrapper[4946]: I1128 07:37:49.980212 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67dc05d7-fa75-4a26-b666-4447ed142959-utilities\") pod \"redhat-operators-snqdk\" (UID: \"67dc05d7-fa75-4a26-b666-4447ed142959\") " pod="openshift-marketplace/redhat-operators-snqdk" Nov 28 07:37:49 crc kubenswrapper[4946]: I1128 07:37:49.980282 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94hpd\" (UniqueName: \"kubernetes.io/projected/67dc05d7-fa75-4a26-b666-4447ed142959-kube-api-access-94hpd\") pod \"redhat-operators-snqdk\" (UID: \"67dc05d7-fa75-4a26-b666-4447ed142959\") " pod="openshift-marketplace/redhat-operators-snqdk" Nov 28 07:37:50 crc kubenswrapper[4946]: I1128 07:37:50.081430 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67dc05d7-fa75-4a26-b666-4447ed142959-utilities\") pod \"redhat-operators-snqdk\" (UID: \"67dc05d7-fa75-4a26-b666-4447ed142959\") " pod="openshift-marketplace/redhat-operators-snqdk" Nov 28 07:37:50 crc kubenswrapper[4946]: I1128 07:37:50.081541 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94hpd\" (UniqueName: \"kubernetes.io/projected/67dc05d7-fa75-4a26-b666-4447ed142959-kube-api-access-94hpd\") pod \"redhat-operators-snqdk\" (UID: 
\"67dc05d7-fa75-4a26-b666-4447ed142959\") " pod="openshift-marketplace/redhat-operators-snqdk" Nov 28 07:37:50 crc kubenswrapper[4946]: I1128 07:37:50.081673 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67dc05d7-fa75-4a26-b666-4447ed142959-catalog-content\") pod \"redhat-operators-snqdk\" (UID: \"67dc05d7-fa75-4a26-b666-4447ed142959\") " pod="openshift-marketplace/redhat-operators-snqdk" Nov 28 07:37:50 crc kubenswrapper[4946]: I1128 07:37:50.082111 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67dc05d7-fa75-4a26-b666-4447ed142959-utilities\") pod \"redhat-operators-snqdk\" (UID: \"67dc05d7-fa75-4a26-b666-4447ed142959\") " pod="openshift-marketplace/redhat-operators-snqdk" Nov 28 07:37:50 crc kubenswrapper[4946]: I1128 07:37:50.082165 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67dc05d7-fa75-4a26-b666-4447ed142959-catalog-content\") pod \"redhat-operators-snqdk\" (UID: \"67dc05d7-fa75-4a26-b666-4447ed142959\") " pod="openshift-marketplace/redhat-operators-snqdk" Nov 28 07:37:50 crc kubenswrapper[4946]: I1128 07:37:50.129830 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94hpd\" (UniqueName: \"kubernetes.io/projected/67dc05d7-fa75-4a26-b666-4447ed142959-kube-api-access-94hpd\") pod \"redhat-operators-snqdk\" (UID: \"67dc05d7-fa75-4a26-b666-4447ed142959\") " pod="openshift-marketplace/redhat-operators-snqdk" Nov 28 07:37:50 crc kubenswrapper[4946]: I1128 07:37:50.228257 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-snqdk" Nov 28 07:37:50 crc kubenswrapper[4946]: I1128 07:37:50.636370 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-snqdk"] Nov 28 07:37:50 crc kubenswrapper[4946]: W1128 07:37:50.643129 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67dc05d7_fa75_4a26_b666_4447ed142959.slice/crio-aad81a72e7ab446c3bf82683d31d771d974b5ab3d21c7cdd43ecb0032dd11b00 WatchSource:0}: Error finding container aad81a72e7ab446c3bf82683d31d771d974b5ab3d21c7cdd43ecb0032dd11b00: Status 404 returned error can't find the container with id aad81a72e7ab446c3bf82683d31d771d974b5ab3d21c7cdd43ecb0032dd11b00 Nov 28 07:37:51 crc kubenswrapper[4946]: I1128 07:37:51.606580 4946 generic.go:334] "Generic (PLEG): container finished" podID="67dc05d7-fa75-4a26-b666-4447ed142959" containerID="4bc9d404d2c03da44cc3ec0da790ef8dbac8b81e6a21db815689cf9be037222e" exitCode=0 Nov 28 07:37:51 crc kubenswrapper[4946]: I1128 07:37:51.606861 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-snqdk" event={"ID":"67dc05d7-fa75-4a26-b666-4447ed142959","Type":"ContainerDied","Data":"4bc9d404d2c03da44cc3ec0da790ef8dbac8b81e6a21db815689cf9be037222e"} Nov 28 07:37:51 crc kubenswrapper[4946]: I1128 07:37:51.606978 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-snqdk" event={"ID":"67dc05d7-fa75-4a26-b666-4447ed142959","Type":"ContainerStarted","Data":"aad81a72e7ab446c3bf82683d31d771d974b5ab3d21c7cdd43ecb0032dd11b00"} Nov 28 07:37:52 crc kubenswrapper[4946]: I1128 07:37:52.615901 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-snqdk" event={"ID":"67dc05d7-fa75-4a26-b666-4447ed142959","Type":"ContainerStarted","Data":"3a171b5d9b23724f37638ea2d2732e4bbde19c4fa1a1491f261b163e09167b29"} Nov 28 07:37:53 crc kubenswrapper[4946]: I1128 07:37:53.628778 4946 generic.go:334] "Generic (PLEG): container finished" podID="67dc05d7-fa75-4a26-b666-4447ed142959" containerID="3a171b5d9b23724f37638ea2d2732e4bbde19c4fa1a1491f261b163e09167b29" exitCode=0 Nov 28 07:37:53 crc kubenswrapper[4946]: I1128 07:37:53.628859 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-snqdk" event={"ID":"67dc05d7-fa75-4a26-b666-4447ed142959","Type":"ContainerDied","Data":"3a171b5d9b23724f37638ea2d2732e4bbde19c4fa1a1491f261b163e09167b29"} Nov 28 07:37:54 crc kubenswrapper[4946]: I1128 07:37:54.640436 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-snqdk" event={"ID":"67dc05d7-fa75-4a26-b666-4447ed142959","Type":"ContainerStarted","Data":"6e6487f3dfefce84e75c971c98353dd676937e2bee68e60565654a73e471df83"} Nov 28 07:37:54 crc kubenswrapper[4946]: I1128 07:37:54.672818 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-snqdk" podStartSLOduration=3.224619856 podStartE2EDuration="5.672792967s" podCreationTimestamp="2025-11-28 07:37:49 +0000 UTC" firstStartedPulling="2025-11-28 07:37:51.609319994 +0000 UTC m=+2725.987385155" lastFinishedPulling="2025-11-28 07:37:54.057493115 +0000 UTC m=+2728.435558266" observedRunningTime="2025-11-28 07:37:54.666516151 +0000 UTC m=+2729.044581302" watchObservedRunningTime="2025-11-28 07:37:54.672792967 +0000 UTC m=+2729.050858118" Nov 28 07:38:00 crc kubenswrapper[4946]: I1128 07:38:00.228585 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-snqdk" Nov 28 07:38:00 crc kubenswrapper[4946]: I1128 07:38:00.229414 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-snqdk" Nov 28 07:38:01 crc kubenswrapper[4946]: I1128 07:38:01.299149 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-snqdk" podUID="67dc05d7-fa75-4a26-b666-4447ed142959" containerName="registry-server" probeResult="failure" output=< Nov 28 07:38:01 crc kubenswrapper[4946]: timeout: failed to connect service ":50051" within 1s Nov 28 07:38:01 crc kubenswrapper[4946]: > Nov 28 07:38:10 crc kubenswrapper[4946]: I1128 07:38:10.313534 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-snqdk" Nov 28 07:38:10 crc kubenswrapper[4946]: I1128 07:38:10.392435 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-snqdk" Nov 28 07:38:10 crc kubenswrapper[4946]: I1128 07:38:10.561020 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-snqdk"] Nov 28 07:38:11 crc kubenswrapper[4946]: I1128 07:38:11.821840 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-snqdk" podUID="67dc05d7-fa75-4a26-b666-4447ed142959" containerName="registry-server" containerID="cri-o://6e6487f3dfefce84e75c971c98353dd676937e2bee68e60565654a73e471df83" gracePeriod=2 Nov 28 07:38:12 crc kubenswrapper[4946]: I1128 07:38:12.414269 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-snqdk" Nov 28 07:38:12 crc kubenswrapper[4946]: I1128 07:38:12.488700 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94hpd\" (UniqueName: \"kubernetes.io/projected/67dc05d7-fa75-4a26-b666-4447ed142959-kube-api-access-94hpd\") pod \"67dc05d7-fa75-4a26-b666-4447ed142959\" (UID: \"67dc05d7-fa75-4a26-b666-4447ed142959\") " Nov 28 07:38:12 crc kubenswrapper[4946]: I1128 07:38:12.488782 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67dc05d7-fa75-4a26-b666-4447ed142959-utilities\") pod \"67dc05d7-fa75-4a26-b666-4447ed142959\" (UID: \"67dc05d7-fa75-4a26-b666-4447ed142959\") " Nov 28 07:38:12 crc kubenswrapper[4946]: I1128 07:38:12.488833 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67dc05d7-fa75-4a26-b666-4447ed142959-catalog-content\") pod \"67dc05d7-fa75-4a26-b666-4447ed142959\" (UID: \"67dc05d7-fa75-4a26-b666-4447ed142959\") " Nov 28 07:38:12 crc kubenswrapper[4946]: I1128 07:38:12.490334 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67dc05d7-fa75-4a26-b666-4447ed142959-utilities" (OuterVolumeSpecName: "utilities") pod "67dc05d7-fa75-4a26-b666-4447ed142959" (UID: "67dc05d7-fa75-4a26-b666-4447ed142959"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:38:12 crc kubenswrapper[4946]: I1128 07:38:12.501018 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67dc05d7-fa75-4a26-b666-4447ed142959-kube-api-access-94hpd" (OuterVolumeSpecName: "kube-api-access-94hpd") pod "67dc05d7-fa75-4a26-b666-4447ed142959" (UID: "67dc05d7-fa75-4a26-b666-4447ed142959"). InnerVolumeSpecName "kube-api-access-94hpd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:38:12 crc kubenswrapper[4946]: I1128 07:38:12.591013 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94hpd\" (UniqueName: \"kubernetes.io/projected/67dc05d7-fa75-4a26-b666-4447ed142959-kube-api-access-94hpd\") on node \"crc\" DevicePath \"\"" Nov 28 07:38:12 crc kubenswrapper[4946]: I1128 07:38:12.591078 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67dc05d7-fa75-4a26-b666-4447ed142959-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 07:38:12 crc kubenswrapper[4946]: I1128 07:38:12.618088 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67dc05d7-fa75-4a26-b666-4447ed142959-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "67dc05d7-fa75-4a26-b666-4447ed142959" (UID: "67dc05d7-fa75-4a26-b666-4447ed142959"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:38:12 crc kubenswrapper[4946]: I1128 07:38:12.693008 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67dc05d7-fa75-4a26-b666-4447ed142959-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 07:38:12 crc kubenswrapper[4946]: I1128 07:38:12.834035 4946 generic.go:334] "Generic (PLEG): container finished" podID="67dc05d7-fa75-4a26-b666-4447ed142959" containerID="6e6487f3dfefce84e75c971c98353dd676937e2bee68e60565654a73e471df83" exitCode=0 Nov 28 07:38:12 crc kubenswrapper[4946]: I1128 07:38:12.834135 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-snqdk" event={"ID":"67dc05d7-fa75-4a26-b666-4447ed142959","Type":"ContainerDied","Data":"6e6487f3dfefce84e75c971c98353dd676937e2bee68e60565654a73e471df83"} Nov 28 07:38:12 crc kubenswrapper[4946]: I1128 07:38:12.835554 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-snqdk" event={"ID":"67dc05d7-fa75-4a26-b666-4447ed142959","Type":"ContainerDied","Data":"aad81a72e7ab446c3bf82683d31d771d974b5ab3d21c7cdd43ecb0032dd11b00"} Nov 28 07:38:12 crc kubenswrapper[4946]: I1128 07:38:12.834194 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-snqdk" Nov 28 07:38:12 crc kubenswrapper[4946]: I1128 07:38:12.835609 4946 scope.go:117] "RemoveContainer" containerID="6e6487f3dfefce84e75c971c98353dd676937e2bee68e60565654a73e471df83" Nov 28 07:38:12 crc kubenswrapper[4946]: I1128 07:38:12.866388 4946 scope.go:117] "RemoveContainer" containerID="3a171b5d9b23724f37638ea2d2732e4bbde19c4fa1a1491f261b163e09167b29" Nov 28 07:38:12 crc kubenswrapper[4946]: I1128 07:38:12.897670 4946 scope.go:117] "RemoveContainer" containerID="4bc9d404d2c03da44cc3ec0da790ef8dbac8b81e6a21db815689cf9be037222e" Nov 28 07:38:12 crc kubenswrapper[4946]: I1128 07:38:12.897690 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-snqdk"] Nov 28 07:38:12 crc kubenswrapper[4946]: I1128 07:38:12.901139 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-snqdk"] Nov 28 07:38:12 crc kubenswrapper[4946]: I1128 07:38:12.914338 4946 scope.go:117] "RemoveContainer" containerID="6e6487f3dfefce84e75c971c98353dd676937e2bee68e60565654a73e471df83" Nov 28 07:38:12 crc kubenswrapper[4946]: E1128 07:38:12.915214 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e6487f3dfefce84e75c971c98353dd676937e2bee68e60565654a73e471df83\": container with ID starting with 6e6487f3dfefce84e75c971c98353dd676937e2bee68e60565654a73e471df83 not found: ID does not exist" containerID="6e6487f3dfefce84e75c971c98353dd676937e2bee68e60565654a73e471df83" Nov 28 07:38:12 crc kubenswrapper[4946]: I1128 07:38:12.915274 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e6487f3dfefce84e75c971c98353dd676937e2bee68e60565654a73e471df83"} err="failed to get container status \"6e6487f3dfefce84e75c971c98353dd676937e2bee68e60565654a73e471df83\": rpc error: code = NotFound desc = could not find container \"6e6487f3dfefce84e75c971c98353dd676937e2bee68e60565654a73e471df83\": container with ID starting with 6e6487f3dfefce84e75c971c98353dd676937e2bee68e60565654a73e471df83 not found: ID does not exist" Nov 28 07:38:12 crc 
kubenswrapper[4946]: I1128 07:38:12.915307 4946 scope.go:117] "RemoveContainer" containerID="3a171b5d9b23724f37638ea2d2732e4bbde19c4fa1a1491f261b163e09167b29" Nov 28 07:38:12 crc kubenswrapper[4946]: E1128 07:38:12.915827 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a171b5d9b23724f37638ea2d2732e4bbde19c4fa1a1491f261b163e09167b29\": container with ID starting with 3a171b5d9b23724f37638ea2d2732e4bbde19c4fa1a1491f261b163e09167b29 not found: ID does not exist" containerID="3a171b5d9b23724f37638ea2d2732e4bbde19c4fa1a1491f261b163e09167b29" Nov 28 07:38:12 crc kubenswrapper[4946]: I1128 07:38:12.915876 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a171b5d9b23724f37638ea2d2732e4bbde19c4fa1a1491f261b163e09167b29"} err="failed to get container status \"3a171b5d9b23724f37638ea2d2732e4bbde19c4fa1a1491f261b163e09167b29\": rpc error: code = NotFound desc = could not find container \"3a171b5d9b23724f37638ea2d2732e4bbde19c4fa1a1491f261b163e09167b29\": container with ID starting with 3a171b5d9b23724f37638ea2d2732e4bbde19c4fa1a1491f261b163e09167b29 not found: ID does not exist" Nov 28 07:38:12 crc kubenswrapper[4946]: I1128 07:38:12.915902 4946 scope.go:117] "RemoveContainer" containerID="4bc9d404d2c03da44cc3ec0da790ef8dbac8b81e6a21db815689cf9be037222e" Nov 28 07:38:12 crc kubenswrapper[4946]: E1128 07:38:12.916214 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bc9d404d2c03da44cc3ec0da790ef8dbac8b81e6a21db815689cf9be037222e\": container with ID starting with 4bc9d404d2c03da44cc3ec0da790ef8dbac8b81e6a21db815689cf9be037222e not found: ID does not exist" containerID="4bc9d404d2c03da44cc3ec0da790ef8dbac8b81e6a21db815689cf9be037222e" Nov 28 07:38:12 crc kubenswrapper[4946]: I1128 07:38:12.916253 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bc9d404d2c03da44cc3ec0da790ef8dbac8b81e6a21db815689cf9be037222e"} err="failed to get container status \"4bc9d404d2c03da44cc3ec0da790ef8dbac8b81e6a21db815689cf9be037222e\": rpc error: code = NotFound desc = could not find container \"4bc9d404d2c03da44cc3ec0da790ef8dbac8b81e6a21db815689cf9be037222e\": container with ID starting with 4bc9d404d2c03da44cc3ec0da790ef8dbac8b81e6a21db815689cf9be037222e not found: ID does not exist" Nov 28 07:38:14 crc kubenswrapper[4946]: I1128 07:38:14.007603 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67dc05d7-fa75-4a26-b666-4447ed142959" path="/var/lib/kubelet/pods/67dc05d7-fa75-4a26-b666-4447ed142959/volumes" Nov 28 07:39:24 crc kubenswrapper[4946]: I1128 07:39:24.730568 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 07:39:24 crc kubenswrapper[4946]: I1128 07:39:24.731043 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 07:39:54 crc kubenswrapper[4946]: I1128 07:39:54.730549 4946 patch_prober.go:28] interesting 
pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 07:39:54 crc kubenswrapper[4946]: I1128 07:39:54.731223 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 07:40:24 crc kubenswrapper[4946]: I1128 07:40:24.731561 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 07:40:24 crc kubenswrapper[4946]: I1128 07:40:24.732407 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 07:40:24 crc kubenswrapper[4946]: I1128 07:40:24.732530 4946 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" Nov 28 07:40:24 crc kubenswrapper[4946]: I1128 07:40:24.733407 4946 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bbb706433b03c56e1cbbc8b4eeefaa97c14b4a7ae35b752d7341d83d3d461c66"} pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 07:40:24 crc kubenswrapper[4946]: I1128 07:40:24.733527 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" containerID="cri-o://bbb706433b03c56e1cbbc8b4eeefaa97c14b4a7ae35b752d7341d83d3d461c66" gracePeriod=600 Nov 28 07:40:25 crc kubenswrapper[4946]: I1128 07:40:25.246071 4946 generic.go:334] "Generic (PLEG): container finished" podID="7450befc-262f-45d1-a5f4-f445e540185b" containerID="bbb706433b03c56e1cbbc8b4eeefaa97c14b4a7ae35b752d7341d83d3d461c66" exitCode=0 Nov 28 07:40:25 crc kubenswrapper[4946]: I1128 07:40:25.246259 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerDied","Data":"bbb706433b03c56e1cbbc8b4eeefaa97c14b4a7ae35b752d7341d83d3d461c66"} Nov 28 07:40:25 crc kubenswrapper[4946]: I1128 07:40:25.247328 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerStarted","Data":"eb3f36a4aa48df3549c531105a6f28faf4368bb81d61354d7bcd3512cf642312"} Nov 28 07:40:25 crc kubenswrapper[4946]: I1128 07:40:25.247555 4946 scope.go:117] "RemoveContainer" containerID="b07cedbddf6e03e4563f2465ebd8f57c47f2f3822962ec91b284bcc0b7624a92" Nov 28 07:40:56 
crc kubenswrapper[4946]: I1128 07:40:56.099229 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kch4f"] Nov 28 07:40:56 crc kubenswrapper[4946]: E1128 07:40:56.100295 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67dc05d7-fa75-4a26-b666-4447ed142959" containerName="extract-content" Nov 28 07:40:56 crc kubenswrapper[4946]: I1128 07:40:56.100316 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="67dc05d7-fa75-4a26-b666-4447ed142959" containerName="extract-content" Nov 28 07:40:56 crc kubenswrapper[4946]: E1128 07:40:56.100349 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67dc05d7-fa75-4a26-b666-4447ed142959" containerName="extract-utilities" Nov 28 07:40:56 crc kubenswrapper[4946]: I1128 07:40:56.100361 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="67dc05d7-fa75-4a26-b666-4447ed142959" containerName="extract-utilities" Nov 28 07:40:56 crc kubenswrapper[4946]: E1128 07:40:56.100378 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67dc05d7-fa75-4a26-b666-4447ed142959" containerName="registry-server" Nov 28 07:40:56 crc kubenswrapper[4946]: I1128 07:40:56.100394 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="67dc05d7-fa75-4a26-b666-4447ed142959" containerName="registry-server" Nov 28 07:40:56 crc kubenswrapper[4946]: I1128 07:40:56.100765 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="67dc05d7-fa75-4a26-b666-4447ed142959" containerName="registry-server" Nov 28 07:40:56 crc kubenswrapper[4946]: I1128 07:40:56.108373 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kch4f" Nov 28 07:40:56 crc kubenswrapper[4946]: I1128 07:40:56.123134 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kch4f"] Nov 28 07:40:56 crc kubenswrapper[4946]: I1128 07:40:56.191169 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/804845d6-0d98-4c95-a3ee-07b170bfcb87-catalog-content\") pod \"redhat-marketplace-kch4f\" (UID: \"804845d6-0d98-4c95-a3ee-07b170bfcb87\") " pod="openshift-marketplace/redhat-marketplace-kch4f" Nov 28 07:40:56 crc kubenswrapper[4946]: I1128 07:40:56.191503 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v6t7\" (UniqueName: \"kubernetes.io/projected/804845d6-0d98-4c95-a3ee-07b170bfcb87-kube-api-access-9v6t7\") pod \"redhat-marketplace-kch4f\" (UID: \"804845d6-0d98-4c95-a3ee-07b170bfcb87\") " pod="openshift-marketplace/redhat-marketplace-kch4f" Nov 28 07:40:56 crc kubenswrapper[4946]: I1128 07:40:56.191693 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/804845d6-0d98-4c95-a3ee-07b170bfcb87-utilities\") pod \"redhat-marketplace-kch4f\" (UID: \"804845d6-0d98-4c95-a3ee-07b170bfcb87\") " pod="openshift-marketplace/redhat-marketplace-kch4f" Nov 28 07:40:56 crc kubenswrapper[4946]: I1128 07:40:56.293537 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9v6t7\" (UniqueName: \"kubernetes.io/projected/804845d6-0d98-4c95-a3ee-07b170bfcb87-kube-api-access-9v6t7\") pod \"redhat-marketplace-kch4f\" (UID: \"804845d6-0d98-4c95-a3ee-07b170bfcb87\") " 
pod="openshift-marketplace/redhat-marketplace-kch4f" Nov 28 07:40:56 crc kubenswrapper[4946]: I1128 07:40:56.293606 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/804845d6-0d98-4c95-a3ee-07b170bfcb87-utilities\") pod \"redhat-marketplace-kch4f\" (UID: \"804845d6-0d98-4c95-a3ee-07b170bfcb87\") " pod="openshift-marketplace/redhat-marketplace-kch4f" Nov 28 07:40:56 crc kubenswrapper[4946]: I1128 07:40:56.294261 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/804845d6-0d98-4c95-a3ee-07b170bfcb87-utilities\") pod \"redhat-marketplace-kch4f\" (UID: \"804845d6-0d98-4c95-a3ee-07b170bfcb87\") " pod="openshift-marketplace/redhat-marketplace-kch4f" Nov 28 07:40:56 crc kubenswrapper[4946]: I1128 07:40:56.294772 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/804845d6-0d98-4c95-a3ee-07b170bfcb87-catalog-content\") pod \"redhat-marketplace-kch4f\" (UID: \"804845d6-0d98-4c95-a3ee-07b170bfcb87\") " pod="openshift-marketplace/redhat-marketplace-kch4f" Nov 28 07:40:56 crc kubenswrapper[4946]: I1128 07:40:56.294838 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/804845d6-0d98-4c95-a3ee-07b170bfcb87-catalog-content\") pod \"redhat-marketplace-kch4f\" (UID: \"804845d6-0d98-4c95-a3ee-07b170bfcb87\") " pod="openshift-marketplace/redhat-marketplace-kch4f" Nov 28 07:40:56 crc kubenswrapper[4946]: I1128 07:40:56.314582 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9v6t7\" (UniqueName: \"kubernetes.io/projected/804845d6-0d98-4c95-a3ee-07b170bfcb87-kube-api-access-9v6t7\") pod \"redhat-marketplace-kch4f\" (UID: \"804845d6-0d98-4c95-a3ee-07b170bfcb87\") " pod="openshift-marketplace/redhat-marketplace-kch4f" Nov 28 07:40:56 crc kubenswrapper[4946]: I1128 07:40:56.448286 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kch4f" Nov 28 07:40:56 crc kubenswrapper[4946]: I1128 07:40:56.943843 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kch4f"] Nov 28 07:40:57 crc kubenswrapper[4946]: I1128 07:40:57.597253 4946 generic.go:334] "Generic (PLEG): container finished" podID="804845d6-0d98-4c95-a3ee-07b170bfcb87" containerID="d42de96cfa0757cdfa4da0fddcdcd6173b6aba294aec72d153663d5b4cb7f9fc" exitCode=0 Nov 28 07:40:57 crc kubenswrapper[4946]: I1128 07:40:57.597332 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kch4f" event={"ID":"804845d6-0d98-4c95-a3ee-07b170bfcb87","Type":"ContainerDied","Data":"d42de96cfa0757cdfa4da0fddcdcd6173b6aba294aec72d153663d5b4cb7f9fc"} Nov 28 07:40:57 crc kubenswrapper[4946]: I1128 07:40:57.597708 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kch4f" event={"ID":"804845d6-0d98-4c95-a3ee-07b170bfcb87","Type":"ContainerStarted","Data":"1ab74a899e4f1673e6421747ac8334cfe04912f6d927698ccb302b3c5dbcdcf3"} Nov 28 07:40:57 crc kubenswrapper[4946]: I1128 07:40:57.601007 4946 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 07:40:58 crc kubenswrapper[4946]: I1128 07:40:58.609622 4946 generic.go:334] "Generic (PLEG): container finished" podID="804845d6-0d98-4c95-a3ee-07b170bfcb87" containerID="2d46c421348380839f28bedc9d7147198aea40c1aed56f9eb7e83648c76473ce" exitCode=0 Nov 28 07:40:58 crc kubenswrapper[4946]: I1128 07:40:58.609721 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kch4f" event={"ID":"804845d6-0d98-4c95-a3ee-07b170bfcb87","Type":"ContainerDied","Data":"2d46c421348380839f28bedc9d7147198aea40c1aed56f9eb7e83648c76473ce"} Nov 28 07:40:59 crc kubenswrapper[4946]: I1128 07:40:59.622432 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kch4f" event={"ID":"804845d6-0d98-4c95-a3ee-07b170bfcb87","Type":"ContainerStarted","Data":"047729fdbcfaf86c905f9245e30a3d893df9e8c90cf183fe329c3e510afa01cc"} Nov 28 07:40:59 crc kubenswrapper[4946]: I1128 07:40:59.661554 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kch4f" podStartSLOduration=2.2130731790000002 podStartE2EDuration="3.66152592s" podCreationTimestamp="2025-11-28 07:40:56 +0000 UTC" firstStartedPulling="2025-11-28 07:40:57.600613268 +0000 UTC m=+2911.978678419" lastFinishedPulling="2025-11-28 07:40:59.049066049 +0000 UTC m=+2913.427131160" observedRunningTime="2025-11-28 07:40:59.652846994 +0000 UTC m=+2914.030912145" watchObservedRunningTime="2025-11-28 07:40:59.66152592 +0000 UTC m=+2914.039591081" Nov 28 07:41:06 crc kubenswrapper[4946]: I1128 07:41:06.449439 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kch4f" Nov 28 07:41:06 crc kubenswrapper[4946]: I1128 07:41:06.450201 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kch4f" Nov 28 07:41:06 crc kubenswrapper[4946]: I1128 07:41:06.502369 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kch4f" Nov 28 07:41:06 crc kubenswrapper[4946]: I1128 07:41:06.753598 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-kch4f" Nov 28 07:41:09 crc kubenswrapper[4946]: I1128 07:41:09.479279 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kch4f"] Nov 28 07:41:09 crc kubenswrapper[4946]: I1128 07:41:09.479696 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kch4f" podUID="804845d6-0d98-4c95-a3ee-07b170bfcb87" containerName="registry-server" containerID="cri-o://047729fdbcfaf86c905f9245e30a3d893df9e8c90cf183fe329c3e510afa01cc" gracePeriod=2 Nov 28 07:41:09 crc kubenswrapper[4946]: I1128 07:41:09.724331 4946 generic.go:334] "Generic (PLEG): container finished" podID="804845d6-0d98-4c95-a3ee-07b170bfcb87" containerID="047729fdbcfaf86c905f9245e30a3d893df9e8c90cf183fe329c3e510afa01cc" exitCode=0 Nov 28 07:41:09 crc kubenswrapper[4946]: I1128 07:41:09.724431 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kch4f" event={"ID":"804845d6-0d98-4c95-a3ee-07b170bfcb87","Type":"ContainerDied","Data":"047729fdbcfaf86c905f9245e30a3d893df9e8c90cf183fe329c3e510afa01cc"} Nov 28 07:41:09 crc kubenswrapper[4946]: I1128 07:41:09.983652 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kch4f" Nov 28 07:41:10 crc kubenswrapper[4946]: I1128 07:41:10.103927 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/804845d6-0d98-4c95-a3ee-07b170bfcb87-utilities\") pod \"804845d6-0d98-4c95-a3ee-07b170bfcb87\" (UID: \"804845d6-0d98-4c95-a3ee-07b170bfcb87\") " Nov 28 07:41:10 crc kubenswrapper[4946]: I1128 07:41:10.104270 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/804845d6-0d98-4c95-a3ee-07b170bfcb87-catalog-content\") pod \"804845d6-0d98-4c95-a3ee-07b170bfcb87\" (UID: \"804845d6-0d98-4c95-a3ee-07b170bfcb87\") " Nov 28 07:41:10 crc kubenswrapper[4946]: I1128 07:41:10.104372 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9v6t7\" (UniqueName: \"kubernetes.io/projected/804845d6-0d98-4c95-a3ee-07b170bfcb87-kube-api-access-9v6t7\") pod \"804845d6-0d98-4c95-a3ee-07b170bfcb87\" (UID: \"804845d6-0d98-4c95-a3ee-07b170bfcb87\") " Nov 28 07:41:10 crc kubenswrapper[4946]: I1128 07:41:10.105979 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/804845d6-0d98-4c95-a3ee-07b170bfcb87-utilities" (OuterVolumeSpecName: "utilities") pod "804845d6-0d98-4c95-a3ee-07b170bfcb87" (UID: "804845d6-0d98-4c95-a3ee-07b170bfcb87"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:41:10 crc kubenswrapper[4946]: I1128 07:41:10.107380 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/804845d6-0d98-4c95-a3ee-07b170bfcb87-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 07:41:10 crc kubenswrapper[4946]: I1128 07:41:10.120417 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/804845d6-0d98-4c95-a3ee-07b170bfcb87-kube-api-access-9v6t7" (OuterVolumeSpecName: "kube-api-access-9v6t7") pod "804845d6-0d98-4c95-a3ee-07b170bfcb87" (UID: "804845d6-0d98-4c95-a3ee-07b170bfcb87"). InnerVolumeSpecName "kube-api-access-9v6t7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:41:10 crc kubenswrapper[4946]: I1128 07:41:10.144227 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/804845d6-0d98-4c95-a3ee-07b170bfcb87-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "804845d6-0d98-4c95-a3ee-07b170bfcb87" (UID: "804845d6-0d98-4c95-a3ee-07b170bfcb87"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:41:10 crc kubenswrapper[4946]: I1128 07:41:10.208854 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/804845d6-0d98-4c95-a3ee-07b170bfcb87-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 07:41:10 crc kubenswrapper[4946]: I1128 07:41:10.208903 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9v6t7\" (UniqueName: \"kubernetes.io/projected/804845d6-0d98-4c95-a3ee-07b170bfcb87-kube-api-access-9v6t7\") on node \"crc\" DevicePath \"\"" Nov 28 07:41:10 crc kubenswrapper[4946]: I1128 07:41:10.735528 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kch4f" event={"ID":"804845d6-0d98-4c95-a3ee-07b170bfcb87","Type":"ContainerDied","Data":"1ab74a899e4f1673e6421747ac8334cfe04912f6d927698ccb302b3c5dbcdcf3"} Nov 28 07:41:10 crc kubenswrapper[4946]: I1128 07:41:10.735588 4946 scope.go:117] "RemoveContainer" containerID="047729fdbcfaf86c905f9245e30a3d893df9e8c90cf183fe329c3e510afa01cc" Nov 28 07:41:10 crc kubenswrapper[4946]: I1128 07:41:10.735637 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kch4f" Nov 28 07:41:10 crc kubenswrapper[4946]: I1128 07:41:10.765696 4946 scope.go:117] "RemoveContainer" containerID="2d46c421348380839f28bedc9d7147198aea40c1aed56f9eb7e83648c76473ce" Nov 28 07:41:10 crc kubenswrapper[4946]: I1128 07:41:10.797384 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kch4f"] Nov 28 07:41:10 crc kubenswrapper[4946]: I1128 07:41:10.807499 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kch4f"] Nov 28 07:41:10 crc kubenswrapper[4946]: I1128 07:41:10.808813 4946 scope.go:117] "RemoveContainer" containerID="d42de96cfa0757cdfa4da0fddcdcd6173b6aba294aec72d153663d5b4cb7f9fc" Nov 28 07:41:12 crc kubenswrapper[4946]: I1128 07:41:12.009398 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="804845d6-0d98-4c95-a3ee-07b170bfcb87" path="/var/lib/kubelet/pods/804845d6-0d98-4c95-a3ee-07b170bfcb87/volumes" Nov 28 07:42:54 crc kubenswrapper[4946]: I1128 07:42:54.731436 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 07:42:54 crc kubenswrapper[4946]: I1128 07:42:54.732077 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 07:43:24 crc kubenswrapper[4946]: I1128 07:43:24.730596 4946 patch_prober.go:28] interesting 
pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 07:43:24 crc kubenswrapper[4946]: I1128 07:43:24.731245 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 07:43:54 crc kubenswrapper[4946]: I1128 07:43:54.731103 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 07:43:54 crc kubenswrapper[4946]: I1128 07:43:54.734018 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 07:43:54 crc kubenswrapper[4946]: I1128 07:43:54.734615 4946 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" Nov 28 07:43:54 crc kubenswrapper[4946]: I1128 07:43:54.735834 4946 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"eb3f36a4aa48df3549c531105a6f28faf4368bb81d61354d7bcd3512cf642312"} pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 07:43:54 crc kubenswrapper[4946]: I1128 07:43:54.736307 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" containerID="cri-o://eb3f36a4aa48df3549c531105a6f28faf4368bb81d61354d7bcd3512cf642312" gracePeriod=600 Nov 28 07:43:54 crc kubenswrapper[4946]: E1128 07:43:54.868152 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:43:55 crc kubenswrapper[4946]: I1128 07:43:55.353759 4946 generic.go:334] "Generic (PLEG): container finished" podID="7450befc-262f-45d1-a5f4-f445e540185b" containerID="eb3f36a4aa48df3549c531105a6f28faf4368bb81d61354d7bcd3512cf642312" exitCode=0 Nov 28 07:43:55 crc kubenswrapper[4946]: I1128 07:43:55.353807 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerDied","Data":"eb3f36a4aa48df3549c531105a6f28faf4368bb81d61354d7bcd3512cf642312"} Nov 28 07:43:55 crc kubenswrapper[4946]: 
I1128 07:43:55.353843 4946 scope.go:117] "RemoveContainer" containerID="bbb706433b03c56e1cbbc8b4eeefaa97c14b4a7ae35b752d7341d83d3d461c66" Nov 28 07:43:55 crc kubenswrapper[4946]: I1128 07:43:55.355982 4946 scope.go:117] "RemoveContainer" containerID="eb3f36a4aa48df3549c531105a6f28faf4368bb81d61354d7bcd3512cf642312" Nov 28 07:43:55 crc kubenswrapper[4946]: E1128 07:43:55.356702 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:44:05 crc kubenswrapper[4946]: I1128 07:44:05.997579 4946 scope.go:117] "RemoveContainer" containerID="eb3f36a4aa48df3549c531105a6f28faf4368bb81d61354d7bcd3512cf642312" Nov 28 07:44:06 crc kubenswrapper[4946]: E1128 07:44:05.998991 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:44:19 crc kubenswrapper[4946]: I1128 07:44:19.989942 4946 scope.go:117] "RemoveContainer" containerID="eb3f36a4aa48df3549c531105a6f28faf4368bb81d61354d7bcd3512cf642312" Nov 28 07:44:19 crc kubenswrapper[4946]: E1128 07:44:19.991059 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:44:34 crc kubenswrapper[4946]: I1128 07:44:34.991309 4946 scope.go:117] "RemoveContainer" containerID="eb3f36a4aa48df3549c531105a6f28faf4368bb81d61354d7bcd3512cf642312" Nov 28 07:44:34 crc kubenswrapper[4946]: E1128 07:44:34.992992 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:44:46 crc kubenswrapper[4946]: I1128 07:44:46.990101 4946 scope.go:117] "RemoveContainer" containerID="eb3f36a4aa48df3549c531105a6f28faf4368bb81d61354d7bcd3512cf642312" Nov 28 07:44:46 crc kubenswrapper[4946]: E1128 07:44:46.990871 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:44:58 crc kubenswrapper[4946]: I1128 
07:44:58.990764 4946 scope.go:117] "RemoveContainer" containerID="eb3f36a4aa48df3549c531105a6f28faf4368bb81d61354d7bcd3512cf642312" Nov 28 07:44:58 crc kubenswrapper[4946]: E1128 07:44:58.991527 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:45:00 crc kubenswrapper[4946]: I1128 07:45:00.162859 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405265-rmvt2"] Nov 28 07:45:00 crc kubenswrapper[4946]: E1128 07:45:00.163836 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="804845d6-0d98-4c95-a3ee-07b170bfcb87" containerName="extract-content" Nov 28 07:45:00 crc kubenswrapper[4946]: I1128 07:45:00.163868 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="804845d6-0d98-4c95-a3ee-07b170bfcb87" containerName="extract-content" Nov 28 07:45:00 crc kubenswrapper[4946]: E1128 07:45:00.163917 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="804845d6-0d98-4c95-a3ee-07b170bfcb87" containerName="extract-utilities" Nov 28 07:45:00 crc kubenswrapper[4946]: I1128 07:45:00.163935 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="804845d6-0d98-4c95-a3ee-07b170bfcb87" containerName="extract-utilities" Nov 28 07:45:00 crc kubenswrapper[4946]: E1128 07:45:00.163971 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="804845d6-0d98-4c95-a3ee-07b170bfcb87" containerName="registry-server" Nov 28 07:45:00 crc kubenswrapper[4946]: I1128 07:45:00.163989 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="804845d6-0d98-4c95-a3ee-07b170bfcb87" containerName="registry-server" Nov 28 07:45:00 crc kubenswrapper[4946]: I1128 07:45:00.164338 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="804845d6-0d98-4c95-a3ee-07b170bfcb87" containerName="registry-server" Nov 28 07:45:00 crc kubenswrapper[4946]: I1128 07:45:00.165377 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405265-rmvt2" Nov 28 07:45:00 crc kubenswrapper[4946]: I1128 07:45:00.168132 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 28 07:45:00 crc kubenswrapper[4946]: I1128 07:45:00.178862 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405265-rmvt2"] Nov 28 07:45:00 crc kubenswrapper[4946]: I1128 07:45:00.179503 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 28 07:45:00 crc kubenswrapper[4946]: I1128 07:45:00.194365 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdslr\" (UniqueName: \"kubernetes.io/projected/cc6460c4-8209-4350-90b0-31a14dc20858-kube-api-access-xdslr\") pod \"collect-profiles-29405265-rmvt2\" (UID: \"cc6460c4-8209-4350-90b0-31a14dc20858\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405265-rmvt2" Nov 28 07:45:00 crc kubenswrapper[4946]: I1128 07:45:00.194535 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cc6460c4-8209-4350-90b0-31a14dc20858-secret-volume\") pod \"collect-profiles-29405265-rmvt2\" (UID: \"cc6460c4-8209-4350-90b0-31a14dc20858\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405265-rmvt2" Nov 28 07:45:00 crc kubenswrapper[4946]: I1128 07:45:00.194632 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc6460c4-8209-4350-90b0-31a14dc20858-config-volume\") pod \"collect-profiles-29405265-rmvt2\" (UID: \"cc6460c4-8209-4350-90b0-31a14dc20858\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405265-rmvt2" Nov 28 07:45:00 crc kubenswrapper[4946]: I1128 07:45:00.296399 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdslr\" (UniqueName: \"kubernetes.io/projected/cc6460c4-8209-4350-90b0-31a14dc20858-kube-api-access-xdslr\") pod \"collect-profiles-29405265-rmvt2\" (UID: \"cc6460c4-8209-4350-90b0-31a14dc20858\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405265-rmvt2" Nov 28 07:45:00 crc kubenswrapper[4946]: I1128 07:45:00.296489 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cc6460c4-8209-4350-90b0-31a14dc20858-secret-volume\") pod \"collect-profiles-29405265-rmvt2\" (UID: \"cc6460c4-8209-4350-90b0-31a14dc20858\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405265-rmvt2" Nov 28 07:45:00 crc kubenswrapper[4946]: I1128 07:45:00.296571 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc6460c4-8209-4350-90b0-31a14dc20858-config-volume\") pod \"collect-profiles-29405265-rmvt2\" (UID: \"cc6460c4-8209-4350-90b0-31a14dc20858\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405265-rmvt2" Nov 28 07:45:00 crc kubenswrapper[4946]: I1128 07:45:00.298121 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc6460c4-8209-4350-90b0-31a14dc20858-config-volume\") pod 
\"collect-profiles-29405265-rmvt2\" (UID: \"cc6460c4-8209-4350-90b0-31a14dc20858\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405265-rmvt2" Nov 28 07:45:00 crc kubenswrapper[4946]: I1128 07:45:00.303382 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cc6460c4-8209-4350-90b0-31a14dc20858-secret-volume\") pod \"collect-profiles-29405265-rmvt2\" (UID: \"cc6460c4-8209-4350-90b0-31a14dc20858\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405265-rmvt2" Nov 28 07:45:00 crc kubenswrapper[4946]: I1128 07:45:00.327326 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdslr\" (UniqueName: \"kubernetes.io/projected/cc6460c4-8209-4350-90b0-31a14dc20858-kube-api-access-xdslr\") pod \"collect-profiles-29405265-rmvt2\" (UID: \"cc6460c4-8209-4350-90b0-31a14dc20858\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405265-rmvt2" Nov 28 07:45:00 crc kubenswrapper[4946]: I1128 07:45:00.493109 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405265-rmvt2" Nov 28 07:45:00 crc kubenswrapper[4946]: I1128 07:45:00.755134 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405265-rmvt2"] Nov 28 07:45:00 crc kubenswrapper[4946]: I1128 07:45:00.961420 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405265-rmvt2" event={"ID":"cc6460c4-8209-4350-90b0-31a14dc20858","Type":"ContainerStarted","Data":"e9d585782ee48ae7ee7532dc54d5453e3c6db7118c2862aa60320be63fa5ae38"} Nov 28 07:45:00 crc kubenswrapper[4946]: I1128 07:45:00.961489 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405265-rmvt2" event={"ID":"cc6460c4-8209-4350-90b0-31a14dc20858","Type":"ContainerStarted","Data":"c0fe8f19e8568666d433d4149c9285ed75f66e18a304b5124e62e43d8f349cec"} Nov 28 07:45:00 crc kubenswrapper[4946]: I1128 07:45:00.996256 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29405265-rmvt2" podStartSLOduration=0.995712343 podStartE2EDuration="995.712343ms" podCreationTimestamp="2025-11-28 07:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:45:00.981035031 +0000 UTC m=+3155.359100152" watchObservedRunningTime="2025-11-28 07:45:00.995712343 +0000 UTC m=+3155.373777454" Nov 28 07:45:01 crc kubenswrapper[4946]: I1128 07:45:01.969047 4946 generic.go:334] "Generic (PLEG): container finished" podID="cc6460c4-8209-4350-90b0-31a14dc20858" containerID="e9d585782ee48ae7ee7532dc54d5453e3c6db7118c2862aa60320be63fa5ae38" exitCode=0 Nov 28 07:45:01 crc kubenswrapper[4946]: I1128 07:45:01.969148 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405265-rmvt2" event={"ID":"cc6460c4-8209-4350-90b0-31a14dc20858","Type":"ContainerDied","Data":"e9d585782ee48ae7ee7532dc54d5453e3c6db7118c2862aa60320be63fa5ae38"} Nov 28 07:45:03 crc kubenswrapper[4946]: I1128 07:45:03.346106 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405265-rmvt2" Nov 28 07:45:03 crc kubenswrapper[4946]: I1128 07:45:03.442951 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cc6460c4-8209-4350-90b0-31a14dc20858-secret-volume\") pod \"cc6460c4-8209-4350-90b0-31a14dc20858\" (UID: \"cc6460c4-8209-4350-90b0-31a14dc20858\") " Nov 28 07:45:03 crc kubenswrapper[4946]: I1128 07:45:03.443741 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdslr\" (UniqueName: \"kubernetes.io/projected/cc6460c4-8209-4350-90b0-31a14dc20858-kube-api-access-xdslr\") pod \"cc6460c4-8209-4350-90b0-31a14dc20858\" (UID: \"cc6460c4-8209-4350-90b0-31a14dc20858\") " Nov 28 07:45:03 crc kubenswrapper[4946]: I1128 07:45:03.443812 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc6460c4-8209-4350-90b0-31a14dc20858-config-volume\") pod \"cc6460c4-8209-4350-90b0-31a14dc20858\" (UID: \"cc6460c4-8209-4350-90b0-31a14dc20858\") " Nov 28 07:45:03 crc kubenswrapper[4946]: I1128 07:45:03.444609 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc6460c4-8209-4350-90b0-31a14dc20858-config-volume" (OuterVolumeSpecName: "config-volume") pod "cc6460c4-8209-4350-90b0-31a14dc20858" (UID: "cc6460c4-8209-4350-90b0-31a14dc20858"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:45:03 crc kubenswrapper[4946]: I1128 07:45:03.449852 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc6460c4-8209-4350-90b0-31a14dc20858-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "cc6460c4-8209-4350-90b0-31a14dc20858" (UID: "cc6460c4-8209-4350-90b0-31a14dc20858"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:45:03 crc kubenswrapper[4946]: I1128 07:45:03.450564 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc6460c4-8209-4350-90b0-31a14dc20858-kube-api-access-xdslr" (OuterVolumeSpecName: "kube-api-access-xdslr") pod "cc6460c4-8209-4350-90b0-31a14dc20858" (UID: "cc6460c4-8209-4350-90b0-31a14dc20858"). InnerVolumeSpecName "kube-api-access-xdslr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:45:03 crc kubenswrapper[4946]: I1128 07:45:03.546195 4946 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cc6460c4-8209-4350-90b0-31a14dc20858-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 28 07:45:03 crc kubenswrapper[4946]: I1128 07:45:03.546251 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdslr\" (UniqueName: \"kubernetes.io/projected/cc6460c4-8209-4350-90b0-31a14dc20858-kube-api-access-xdslr\") on node \"crc\" DevicePath \"\"" Nov 28 07:45:03 crc kubenswrapper[4946]: I1128 07:45:03.546278 4946 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc6460c4-8209-4350-90b0-31a14dc20858-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 07:45:03 crc kubenswrapper[4946]: I1128 07:45:03.988424 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405265-rmvt2" event={"ID":"cc6460c4-8209-4350-90b0-31a14dc20858","Type":"ContainerDied","Data":"c0fe8f19e8568666d433d4149c9285ed75f66e18a304b5124e62e43d8f349cec"} Nov 28 07:45:03 crc kubenswrapper[4946]: I1128 07:45:03.988490 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0fe8f19e8568666d433d4149c9285ed75f66e18a304b5124e62e43d8f349cec" Nov 28 07:45:03 crc kubenswrapper[4946]: I1128 07:45:03.988565 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405265-rmvt2" Nov 28 07:45:04 crc kubenswrapper[4946]: I1128 07:45:04.429110 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405220-67wkg"] Nov 28 07:45:04 crc kubenswrapper[4946]: I1128 07:45:04.435117 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405220-67wkg"] Nov 28 07:45:06 crc kubenswrapper[4946]: I1128 07:45:06.008401 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90ff5457-2e1e-4e21-8fba-a7f0c99dea9f" path="/var/lib/kubelet/pods/90ff5457-2e1e-4e21-8fba-a7f0c99dea9f/volumes" Nov 28 07:45:13 crc kubenswrapper[4946]: I1128 07:45:13.990965 4946 scope.go:117] "RemoveContainer" containerID="eb3f36a4aa48df3549c531105a6f28faf4368bb81d61354d7bcd3512cf642312" Nov 28 07:45:13 crc kubenswrapper[4946]: E1128 07:45:13.992536 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:45:25 crc kubenswrapper[4946]: I1128 07:45:25.998377 4946 scope.go:117] "RemoveContainer" containerID="eb3f36a4aa48df3549c531105a6f28faf4368bb81d61354d7bcd3512cf642312" Nov 28 07:45:26 crc kubenswrapper[4946]: E1128 07:45:25.999170 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:45:33 crc kubenswrapper[4946]: I1128 07:45:33.341729 4946 scope.go:117] "RemoveContainer" containerID="993815f62811cc23fc8537561e35245840757164bd7c177a8e48f8b06f9fa147" Nov 28 07:45:39 crc kubenswrapper[4946]: I1128 07:45:39.990212 4946 scope.go:117] "RemoveContainer" containerID="eb3f36a4aa48df3549c531105a6f28faf4368bb81d61354d7bcd3512cf642312" Nov 28 07:45:39 crc kubenswrapper[4946]: E1128 07:45:39.991261 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:45:54 crc kubenswrapper[4946]: I1128 07:45:54.990510 4946 scope.go:117] "RemoveContainer" containerID="eb3f36a4aa48df3549c531105a6f28faf4368bb81d61354d7bcd3512cf642312" Nov 28 07:45:54 crc kubenswrapper[4946]: E1128 07:45:54.992904 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:46:06 crc kubenswrapper[4946]: I1128 07:46:05.999405 4946 scope.go:117] "RemoveContainer" containerID="eb3f36a4aa48df3549c531105a6f28faf4368bb81d61354d7bcd3512cf642312" Nov 28 07:46:06 crc kubenswrapper[4946]: E1128 07:46:06.000420 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:46:12 crc kubenswrapper[4946]: I1128 07:46:12.069434 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rtp52"] Nov 28 07:46:12 crc kubenswrapper[4946]: E1128 07:46:12.071258 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc6460c4-8209-4350-90b0-31a14dc20858" containerName="collect-profiles" Nov 28 07:46:12 crc kubenswrapper[4946]: I1128 07:46:12.071287 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc6460c4-8209-4350-90b0-31a14dc20858" containerName="collect-profiles" Nov 28 07:46:12 crc kubenswrapper[4946]: I1128 07:46:12.071625 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc6460c4-8209-4350-90b0-31a14dc20858" containerName="collect-profiles" Nov 28 07:46:12 crc kubenswrapper[4946]: I1128 07:46:12.074452 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rtp52" Nov 28 07:46:12 crc kubenswrapper[4946]: I1128 07:46:12.099674 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rtp52"] Nov 28 07:46:12 crc kubenswrapper[4946]: I1128 07:46:12.263610 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f673326b-f3ae-48cc-9199-91c1960b534a-utilities\") pod \"community-operators-rtp52\" (UID: \"f673326b-f3ae-48cc-9199-91c1960b534a\") " pod="openshift-marketplace/community-operators-rtp52" Nov 28 07:46:12 crc kubenswrapper[4946]: I1128 07:46:12.263949 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f673326b-f3ae-48cc-9199-91c1960b534a-catalog-content\") pod \"community-operators-rtp52\" (UID: \"f673326b-f3ae-48cc-9199-91c1960b534a\") " pod="openshift-marketplace/community-operators-rtp52" Nov 28 07:46:12 crc kubenswrapper[4946]: I1128 07:46:12.264041 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gbqb\" (UniqueName: \"kubernetes.io/projected/f673326b-f3ae-48cc-9199-91c1960b534a-kube-api-access-9gbqb\") pod \"community-operators-rtp52\" (UID: \"f673326b-f3ae-48cc-9199-91c1960b534a\") " pod="openshift-marketplace/community-operators-rtp52" Nov 28 07:46:12 crc kubenswrapper[4946]: I1128 07:46:12.365116 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f673326b-f3ae-48cc-9199-91c1960b534a-catalog-content\") pod \"community-operators-rtp52\" (UID: \"f673326b-f3ae-48cc-9199-91c1960b534a\") " pod="openshift-marketplace/community-operators-rtp52" Nov 28 07:46:12 crc kubenswrapper[4946]: I1128 07:46:12.365189 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gbqb\" (UniqueName: \"kubernetes.io/projected/f673326b-f3ae-48cc-9199-91c1960b534a-kube-api-access-9gbqb\") pod \"community-operators-rtp52\" (UID: \"f673326b-f3ae-48cc-9199-91c1960b534a\") " pod="openshift-marketplace/community-operators-rtp52" Nov 28 07:46:12 crc kubenswrapper[4946]: I1128 07:46:12.365230 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f673326b-f3ae-48cc-9199-91c1960b534a-utilities\") pod \"community-operators-rtp52\" (UID: \"f673326b-f3ae-48cc-9199-91c1960b534a\") " pod="openshift-marketplace/community-operators-rtp52" Nov 28 07:46:12 crc kubenswrapper[4946]: I1128 07:46:12.365807 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f673326b-f3ae-48cc-9199-91c1960b534a-catalog-content\") pod \"community-operators-rtp52\" (UID: \"f673326b-f3ae-48cc-9199-91c1960b534a\") " pod="openshift-marketplace/community-operators-rtp52" Nov 28 07:46:12 crc kubenswrapper[4946]: I1128 07:46:12.365877 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f673326b-f3ae-48cc-9199-91c1960b534a-utilities\") pod \"community-operators-rtp52\" (UID: \"f673326b-f3ae-48cc-9199-91c1960b534a\") " pod="openshift-marketplace/community-operators-rtp52" Nov 28 07:46:12 crc kubenswrapper[4946]: I1128 07:46:12.392099 4946 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-9gbqb\" (UniqueName: \"kubernetes.io/projected/f673326b-f3ae-48cc-9199-91c1960b534a-kube-api-access-9gbqb\") pod \"community-operators-rtp52\" (UID: \"f673326b-f3ae-48cc-9199-91c1960b534a\") " pod="openshift-marketplace/community-operators-rtp52" Nov 28 07:46:12 crc kubenswrapper[4946]: I1128 07:46:12.407750 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rtp52" Nov 28 07:46:12 crc kubenswrapper[4946]: I1128 07:46:12.917441 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rtp52"] Nov 28 07:46:13 crc kubenswrapper[4946]: I1128 07:46:13.697075 4946 generic.go:334] "Generic (PLEG): container finished" podID="f673326b-f3ae-48cc-9199-91c1960b534a" containerID="ec0602c0952ed1d133494985107a00b1ecd17f7d5c58b4cb3b5e25e094e19263" exitCode=0 Nov 28 07:46:13 crc kubenswrapper[4946]: I1128 07:46:13.697137 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rtp52" event={"ID":"f673326b-f3ae-48cc-9199-91c1960b534a","Type":"ContainerDied","Data":"ec0602c0952ed1d133494985107a00b1ecd17f7d5c58b4cb3b5e25e094e19263"} Nov 28 07:46:13 crc kubenswrapper[4946]: I1128 07:46:13.697517 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rtp52" event={"ID":"f673326b-f3ae-48cc-9199-91c1960b534a","Type":"ContainerStarted","Data":"57af2686218bb4fbda3cb4ea6e1e5abcecf48a8ec01f037bbbcc0865635b46f3"} Nov 28 07:46:13 crc kubenswrapper[4946]: I1128 07:46:13.699660 4946 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 07:46:15 crc kubenswrapper[4946]: I1128 07:46:15.720723 4946 generic.go:334] "Generic (PLEG): container finished" podID="f673326b-f3ae-48cc-9199-91c1960b534a" containerID="7218b8efc27eca6d140a93c658ba61170011e74e175721b44278c89a9c40d189" exitCode=0 Nov 28 07:46:15 crc kubenswrapper[4946]: I1128 07:46:15.720776 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rtp52" event={"ID":"f673326b-f3ae-48cc-9199-91c1960b534a","Type":"ContainerDied","Data":"7218b8efc27eca6d140a93c658ba61170011e74e175721b44278c89a9c40d189"} Nov 28 07:46:16 crc kubenswrapper[4946]: I1128 07:46:16.732999 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rtp52" event={"ID":"f673326b-f3ae-48cc-9199-91c1960b534a","Type":"ContainerStarted","Data":"20e7d5372c47096865d6533b596f69fac50a432392197042727b3162013c3432"} Nov 28 07:46:16 crc kubenswrapper[4946]: I1128 07:46:16.768545 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rtp52" podStartSLOduration=2.208234591 podStartE2EDuration="4.768523932s" podCreationTimestamp="2025-11-28 07:46:12 +0000 UTC" firstStartedPulling="2025-11-28 07:46:13.699168991 +0000 UTC m=+3228.077234132" lastFinishedPulling="2025-11-28 07:46:16.259458372 +0000 UTC m=+3230.637523473" observedRunningTime="2025-11-28 07:46:16.759491413 +0000 UTC m=+3231.137556534" watchObservedRunningTime="2025-11-28 07:46:16.768523932 +0000 UTC m=+3231.146589053" Nov 28 07:46:18 crc kubenswrapper[4946]: I1128 07:46:18.990418 4946 scope.go:117] "RemoveContainer" containerID="eb3f36a4aa48df3549c531105a6f28faf4368bb81d61354d7bcd3512cf642312" Nov 28 07:46:18 crc kubenswrapper[4946]: E1128 07:46:18.991050 4946 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:46:22 crc kubenswrapper[4946]: I1128 07:46:22.410988 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rtp52" Nov 28 07:46:22 crc kubenswrapper[4946]: I1128 07:46:22.411660 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rtp52" Nov 28 07:46:22 crc kubenswrapper[4946]: I1128 07:46:22.461316 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rtp52" Nov 28 07:46:22 crc kubenswrapper[4946]: I1128 07:46:22.862450 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rtp52" Nov 28 07:46:22 crc kubenswrapper[4946]: I1128 07:46:22.932204 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rtp52"] Nov 28 07:46:24 crc kubenswrapper[4946]: I1128 07:46:24.804627 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rtp52" podUID="f673326b-f3ae-48cc-9199-91c1960b534a" containerName="registry-server" containerID="cri-o://20e7d5372c47096865d6533b596f69fac50a432392197042727b3162013c3432" gracePeriod=2 Nov 28 07:46:25 crc kubenswrapper[4946]: I1128 07:46:25.817442 4946 generic.go:334] "Generic (PLEG): container finished" podID="f673326b-f3ae-48cc-9199-91c1960b534a" containerID="20e7d5372c47096865d6533b596f69fac50a432392197042727b3162013c3432" exitCode=0 Nov 28 07:46:25 crc kubenswrapper[4946]: I1128 07:46:25.817554 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rtp52" event={"ID":"f673326b-f3ae-48cc-9199-91c1960b534a","Type":"ContainerDied","Data":"20e7d5372c47096865d6533b596f69fac50a432392197042727b3162013c3432"} Nov 28 07:46:25 crc kubenswrapper[4946]: I1128 07:46:25.935770 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rtp52" Nov 28 07:46:26 crc kubenswrapper[4946]: I1128 07:46:26.092962 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f673326b-f3ae-48cc-9199-91c1960b534a-utilities\") pod \"f673326b-f3ae-48cc-9199-91c1960b534a\" (UID: \"f673326b-f3ae-48cc-9199-91c1960b534a\") " Nov 28 07:46:26 crc kubenswrapper[4946]: I1128 07:46:26.093413 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gbqb\" (UniqueName: \"kubernetes.io/projected/f673326b-f3ae-48cc-9199-91c1960b534a-kube-api-access-9gbqb\") pod \"f673326b-f3ae-48cc-9199-91c1960b534a\" (UID: \"f673326b-f3ae-48cc-9199-91c1960b534a\") " Nov 28 07:46:26 crc kubenswrapper[4946]: I1128 07:46:26.093664 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f673326b-f3ae-48cc-9199-91c1960b534a-catalog-content\") pod \"f673326b-f3ae-48cc-9199-91c1960b534a\" (UID: \"f673326b-f3ae-48cc-9199-91c1960b534a\") " Nov 28 07:46:26 crc kubenswrapper[4946]: I1128 07:46:26.094164 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f673326b-f3ae-48cc-9199-91c1960b534a-utilities" (OuterVolumeSpecName: "utilities") pod "f673326b-f3ae-48cc-9199-91c1960b534a" (UID: "f673326b-f3ae-48cc-9199-91c1960b534a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:46:26 crc kubenswrapper[4946]: I1128 07:46:26.102281 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f673326b-f3ae-48cc-9199-91c1960b534a-kube-api-access-9gbqb" (OuterVolumeSpecName: "kube-api-access-9gbqb") pod "f673326b-f3ae-48cc-9199-91c1960b534a" (UID: "f673326b-f3ae-48cc-9199-91c1960b534a"). InnerVolumeSpecName "kube-api-access-9gbqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:46:26 crc kubenswrapper[4946]: I1128 07:46:26.174831 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f673326b-f3ae-48cc-9199-91c1960b534a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f673326b-f3ae-48cc-9199-91c1960b534a" (UID: "f673326b-f3ae-48cc-9199-91c1960b534a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:46:26 crc kubenswrapper[4946]: I1128 07:46:26.195159 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f673326b-f3ae-48cc-9199-91c1960b534a-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 07:46:26 crc kubenswrapper[4946]: I1128 07:46:26.195196 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gbqb\" (UniqueName: \"kubernetes.io/projected/f673326b-f3ae-48cc-9199-91c1960b534a-kube-api-access-9gbqb\") on node \"crc\" DevicePath \"\"" Nov 28 07:46:26 crc kubenswrapper[4946]: I1128 07:46:26.195212 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f673326b-f3ae-48cc-9199-91c1960b534a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 07:46:26 crc kubenswrapper[4946]: I1128 07:46:26.831741 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rtp52" event={"ID":"f673326b-f3ae-48cc-9199-91c1960b534a","Type":"ContainerDied","Data":"57af2686218bb4fbda3cb4ea6e1e5abcecf48a8ec01f037bbbcc0865635b46f3"} Nov 28 07:46:26 crc kubenswrapper[4946]: I1128 07:46:26.831817 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rtp52" Nov 28 07:46:26 crc kubenswrapper[4946]: I1128 07:46:26.833352 4946 scope.go:117] "RemoveContainer" containerID="20e7d5372c47096865d6533b596f69fac50a432392197042727b3162013c3432" Nov 28 07:46:26 crc kubenswrapper[4946]: I1128 07:46:26.866100 4946 scope.go:117] "RemoveContainer" containerID="7218b8efc27eca6d140a93c658ba61170011e74e175721b44278c89a9c40d189" Nov 28 07:46:26 crc kubenswrapper[4946]: I1128 07:46:26.895152 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rtp52"] Nov 28 07:46:26 crc kubenswrapper[4946]: I1128 07:46:26.906362 4946 scope.go:117] "RemoveContainer" containerID="ec0602c0952ed1d133494985107a00b1ecd17f7d5c58b4cb3b5e25e094e19263" Nov 28 07:46:26 crc kubenswrapper[4946]: I1128 07:46:26.906572 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rtp52"] Nov 28 07:46:27 crc kubenswrapper[4946]: I1128 07:46:27.999366 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f673326b-f3ae-48cc-9199-91c1960b534a" path="/var/lib/kubelet/pods/f673326b-f3ae-48cc-9199-91c1960b534a/volumes" Nov 28 07:46:29 crc kubenswrapper[4946]: I1128 07:46:29.990934 4946 scope.go:117] "RemoveContainer" containerID="eb3f36a4aa48df3549c531105a6f28faf4368bb81d61354d7bcd3512cf642312" Nov 28 07:46:29 crc kubenswrapper[4946]: E1128 07:46:29.991388 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:46:40 crc kubenswrapper[4946]: I1128 07:46:40.990420 4946 scope.go:117] "RemoveContainer" containerID="eb3f36a4aa48df3549c531105a6f28faf4368bb81d61354d7bcd3512cf642312" Nov 28 07:46:40 crc kubenswrapper[4946]: E1128 07:46:40.991395 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:46:55 crc kubenswrapper[4946]: I1128 07:46:55.997359 4946 scope.go:117] "RemoveContainer" containerID="eb3f36a4aa48df3549c531105a6f28faf4368bb81d61354d7bcd3512cf642312" Nov 28 07:46:55 crc kubenswrapper[4946]: E1128 07:46:55.998065 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:47:03 crc kubenswrapper[4946]: I1128 07:47:03.033583 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cjtsc"] Nov 28 07:47:03 crc kubenswrapper[4946]: E1128 07:47:03.035673 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f673326b-f3ae-48cc-9199-91c1960b534a" containerName="registry-server" Nov 28 07:47:03 crc kubenswrapper[4946]: I1128 07:47:03.035699 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="f673326b-f3ae-48cc-9199-91c1960b534a" containerName="registry-server" Nov 28 07:47:03 crc kubenswrapper[4946]: E1128 07:47:03.035719 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f673326b-f3ae-48cc-9199-91c1960b534a" containerName="extract-utilities" Nov 28 07:47:03 crc kubenswrapper[4946]: I1128 07:47:03.035732 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="f673326b-f3ae-48cc-9199-91c1960b534a" containerName="extract-utilities" Nov 28 07:47:03 crc kubenswrapper[4946]: E1128 07:47:03.035766 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f673326b-f3ae-48cc-9199-91c1960b534a" containerName="extract-content" Nov 28 07:47:03 crc kubenswrapper[4946]: I1128 07:47:03.035780 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="f673326b-f3ae-48cc-9199-91c1960b534a" containerName="extract-content" Nov 28 07:47:03 crc kubenswrapper[4946]: I1128 07:47:03.036064 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="f673326b-f3ae-48cc-9199-91c1960b534a" containerName="registry-server" Nov 28 07:47:03 crc kubenswrapper[4946]: I1128 07:47:03.037905 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cjtsc" Nov 28 07:47:03 crc kubenswrapper[4946]: I1128 07:47:03.060451 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cjtsc"] Nov 28 07:47:03 crc kubenswrapper[4946]: I1128 07:47:03.217517 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c94gb\" (UniqueName: \"kubernetes.io/projected/92cd023b-7489-4208-8431-c3cc6d77b7b3-kube-api-access-c94gb\") pod \"certified-operators-cjtsc\" (UID: \"92cd023b-7489-4208-8431-c3cc6d77b7b3\") " pod="openshift-marketplace/certified-operators-cjtsc" Nov 28 07:47:03 crc kubenswrapper[4946]: I1128 07:47:03.217581 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92cd023b-7489-4208-8431-c3cc6d77b7b3-catalog-content\") pod \"certified-operators-cjtsc\" (UID: \"92cd023b-7489-4208-8431-c3cc6d77b7b3\") " pod="openshift-marketplace/certified-operators-cjtsc" Nov 28 07:47:03 crc kubenswrapper[4946]: I1128 07:47:03.217678 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92cd023b-7489-4208-8431-c3cc6d77b7b3-utilities\") pod \"certified-operators-cjtsc\" (UID: \"92cd023b-7489-4208-8431-c3cc6d77b7b3\") " pod="openshift-marketplace/certified-operators-cjtsc" Nov 28 07:47:03 crc kubenswrapper[4946]: I1128 07:47:03.318823 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92cd023b-7489-4208-8431-c3cc6d77b7b3-catalog-content\") pod \"certified-operators-cjtsc\" (UID: \"92cd023b-7489-4208-8431-c3cc6d77b7b3\") " pod="openshift-marketplace/certified-operators-cjtsc" Nov 28 07:47:03 crc kubenswrapper[4946]: I1128 07:47:03.318954 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92cd023b-7489-4208-8431-c3cc6d77b7b3-utilities\") pod \"certified-operators-cjtsc\" (UID: \"92cd023b-7489-4208-8431-c3cc6d77b7b3\") " pod="openshift-marketplace/certified-operators-cjtsc" Nov 28 07:47:03 crc kubenswrapper[4946]: I1128 07:47:03.319010 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c94gb\" (UniqueName: \"kubernetes.io/projected/92cd023b-7489-4208-8431-c3cc6d77b7b3-kube-api-access-c94gb\") pod \"certified-operators-cjtsc\" (UID: \"92cd023b-7489-4208-8431-c3cc6d77b7b3\") " pod="openshift-marketplace/certified-operators-cjtsc" Nov 28 07:47:03 crc kubenswrapper[4946]: I1128 07:47:03.319245 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92cd023b-7489-4208-8431-c3cc6d77b7b3-catalog-content\") pod \"certified-operators-cjtsc\" (UID: \"92cd023b-7489-4208-8431-c3cc6d77b7b3\") " pod="openshift-marketplace/certified-operators-cjtsc" Nov 28 07:47:03 crc kubenswrapper[4946]: I1128 07:47:03.319619 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92cd023b-7489-4208-8431-c3cc6d77b7b3-utilities\") pod \"certified-operators-cjtsc\" (UID: \"92cd023b-7489-4208-8431-c3cc6d77b7b3\") " pod="openshift-marketplace/certified-operators-cjtsc" Nov 28 07:47:03 crc kubenswrapper[4946]: I1128 07:47:03.338079 4946 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-c94gb\" (UniqueName: \"kubernetes.io/projected/92cd023b-7489-4208-8431-c3cc6d77b7b3-kube-api-access-c94gb\") pod \"certified-operators-cjtsc\" (UID: \"92cd023b-7489-4208-8431-c3cc6d77b7b3\") " pod="openshift-marketplace/certified-operators-cjtsc" Nov 28 07:47:03 crc kubenswrapper[4946]: I1128 07:47:03.388627 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cjtsc" Nov 28 07:47:03 crc kubenswrapper[4946]: I1128 07:47:03.689939 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cjtsc"] Nov 28 07:47:04 crc kubenswrapper[4946]: I1128 07:47:04.177584 4946 generic.go:334] "Generic (PLEG): container finished" podID="92cd023b-7489-4208-8431-c3cc6d77b7b3" containerID="6b739eab42c01a2978df4b820f50251f2e628d9c048f18ef13431c3c40300de5" exitCode=0 Nov 28 07:47:04 crc kubenswrapper[4946]: I1128 07:47:04.177622 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cjtsc" event={"ID":"92cd023b-7489-4208-8431-c3cc6d77b7b3","Type":"ContainerDied","Data":"6b739eab42c01a2978df4b820f50251f2e628d9c048f18ef13431c3c40300de5"} Nov 28 07:47:04 crc kubenswrapper[4946]: I1128 07:47:04.177647 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cjtsc" event={"ID":"92cd023b-7489-4208-8431-c3cc6d77b7b3","Type":"ContainerStarted","Data":"d5db45e595250143e5d14e86a5781a912b5ca3df43ed1188da1ab5f653ade976"} Nov 28 07:47:06 crc kubenswrapper[4946]: I1128 07:47:06.200260 4946 generic.go:334] "Generic (PLEG): container finished" podID="92cd023b-7489-4208-8431-c3cc6d77b7b3" containerID="ca7739a053c59e4807a960d6175ec4eb1f1d126573b327708077524481f598fe" exitCode=0 Nov 28 07:47:06 crc kubenswrapper[4946]: I1128 07:47:06.200420 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cjtsc" event={"ID":"92cd023b-7489-4208-8431-c3cc6d77b7b3","Type":"ContainerDied","Data":"ca7739a053c59e4807a960d6175ec4eb1f1d126573b327708077524481f598fe"} Nov 28 07:47:07 crc kubenswrapper[4946]: I1128 07:47:07.215081 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cjtsc" event={"ID":"92cd023b-7489-4208-8431-c3cc6d77b7b3","Type":"ContainerStarted","Data":"3af874f5a32c53a2b7d87bb2617f26d33bd915b2c7374435c4fdccace88c76de"} Nov 28 07:47:07 crc kubenswrapper[4946]: I1128 07:47:07.242799 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cjtsc" podStartSLOduration=2.769775252 podStartE2EDuration="5.242775519s" podCreationTimestamp="2025-11-28 07:47:02 +0000 UTC" firstStartedPulling="2025-11-28 07:47:04.179753709 +0000 UTC m=+3278.557818820" lastFinishedPulling="2025-11-28 07:47:06.652753946 +0000 UTC m=+3281.030819087" observedRunningTime="2025-11-28 07:47:07.231983545 +0000 UTC m=+3281.610048686" watchObservedRunningTime="2025-11-28 07:47:07.242775519 +0000 UTC m=+3281.620840640" Nov 28 07:47:07 crc kubenswrapper[4946]: I1128 07:47:07.990375 4946 scope.go:117] "RemoveContainer" containerID="eb3f36a4aa48df3549c531105a6f28faf4368bb81d61354d7bcd3512cf642312" Nov 28 07:47:07 crc kubenswrapper[4946]: E1128 07:47:07.991187 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:47:13 crc kubenswrapper[4946]: I1128 07:47:13.388909 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cjtsc" Nov 28 07:47:13 crc kubenswrapper[4946]: I1128 07:47:13.389539 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cjtsc" Nov 28 07:47:13 crc kubenswrapper[4946]: I1128 07:47:13.468329 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cjtsc" Nov 28 07:47:14 crc kubenswrapper[4946]: I1128 07:47:14.383147 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cjtsc" Nov 28 07:47:14 crc kubenswrapper[4946]: I1128 07:47:14.459152 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cjtsc"] Nov 28 07:47:16 crc kubenswrapper[4946]: I1128 07:47:16.313965 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cjtsc" podUID="92cd023b-7489-4208-8431-c3cc6d77b7b3" containerName="registry-server" containerID="cri-o://3af874f5a32c53a2b7d87bb2617f26d33bd915b2c7374435c4fdccace88c76de" gracePeriod=2 Nov 28 07:47:17 crc kubenswrapper[4946]: I1128 07:47:17.273132 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cjtsc" Nov 28 07:47:17 crc kubenswrapper[4946]: I1128 07:47:17.329851 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cjtsc" event={"ID":"92cd023b-7489-4208-8431-c3cc6d77b7b3","Type":"ContainerDied","Data":"3af874f5a32c53a2b7d87bb2617f26d33bd915b2c7374435c4fdccace88c76de"} Nov 28 07:47:17 crc kubenswrapper[4946]: I1128 07:47:17.329911 4946 scope.go:117] "RemoveContainer" containerID="3af874f5a32c53a2b7d87bb2617f26d33bd915b2c7374435c4fdccace88c76de" Nov 28 07:47:17 crc kubenswrapper[4946]: I1128 07:47:17.329862 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cjtsc" Nov 28 07:47:17 crc kubenswrapper[4946]: I1128 07:47:17.329833 4946 generic.go:334] "Generic (PLEG): container finished" podID="92cd023b-7489-4208-8431-c3cc6d77b7b3" containerID="3af874f5a32c53a2b7d87bb2617f26d33bd915b2c7374435c4fdccace88c76de" exitCode=0 Nov 28 07:47:17 crc kubenswrapper[4946]: I1128 07:47:17.330046 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cjtsc" event={"ID":"92cd023b-7489-4208-8431-c3cc6d77b7b3","Type":"ContainerDied","Data":"d5db45e595250143e5d14e86a5781a912b5ca3df43ed1188da1ab5f653ade976"} Nov 28 07:47:17 crc kubenswrapper[4946]: I1128 07:47:17.352017 4946 scope.go:117] "RemoveContainer" containerID="ca7739a053c59e4807a960d6175ec4eb1f1d126573b327708077524481f598fe" Nov 28 07:47:17 crc kubenswrapper[4946]: I1128 07:47:17.372634 4946 scope.go:117] "RemoveContainer" containerID="6b739eab42c01a2978df4b820f50251f2e628d9c048f18ef13431c3c40300de5" Nov 28 07:47:17 crc kubenswrapper[4946]: I1128 07:47:17.413598 4946 scope.go:117] "RemoveContainer" containerID="3af874f5a32c53a2b7d87bb2617f26d33bd915b2c7374435c4fdccace88c76de" Nov 28 07:47:17 crc kubenswrapper[4946]: E1128 07:47:17.413988 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3af874f5a32c53a2b7d87bb2617f26d33bd915b2c7374435c4fdccace88c76de\": container with ID starting with 3af874f5a32c53a2b7d87bb2617f26d33bd915b2c7374435c4fdccace88c76de not found: ID does not exist" containerID="3af874f5a32c53a2b7d87bb2617f26d33bd915b2c7374435c4fdccace88c76de" Nov 28 07:47:17 crc kubenswrapper[4946]: I1128 07:47:17.414033 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3af874f5a32c53a2b7d87bb2617f26d33bd915b2c7374435c4fdccace88c76de"} err="failed to get container status \"3af874f5a32c53a2b7d87bb2617f26d33bd915b2c7374435c4fdccace88c76de\": rpc error: code = NotFound desc = could not find container \"3af874f5a32c53a2b7d87bb2617f26d33bd915b2c7374435c4fdccace88c76de\": container with ID starting with 3af874f5a32c53a2b7d87bb2617f26d33bd915b2c7374435c4fdccace88c76de not found: ID does not exist" Nov 28 07:47:17 crc kubenswrapper[4946]: I1128 07:47:17.414057 4946 scope.go:117] "RemoveContainer" containerID="ca7739a053c59e4807a960d6175ec4eb1f1d126573b327708077524481f598fe" Nov 28 07:47:17 crc kubenswrapper[4946]: E1128 07:47:17.414356 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca7739a053c59e4807a960d6175ec4eb1f1d126573b327708077524481f598fe\": container with ID starting with ca7739a053c59e4807a960d6175ec4eb1f1d126573b327708077524481f598fe not found: ID does not exist" containerID="ca7739a053c59e4807a960d6175ec4eb1f1d126573b327708077524481f598fe" Nov 28 07:47:17 crc kubenswrapper[4946]: I1128 07:47:17.414393 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca7739a053c59e4807a960d6175ec4eb1f1d126573b327708077524481f598fe"} err="failed to get container status \"ca7739a053c59e4807a960d6175ec4eb1f1d126573b327708077524481f598fe\": rpc error: code = NotFound desc = could not find container \"ca7739a053c59e4807a960d6175ec4eb1f1d126573b327708077524481f598fe\": container with ID starting with ca7739a053c59e4807a960d6175ec4eb1f1d126573b327708077524481f598fe not found: ID does not exist" Nov 28 07:47:17 crc kubenswrapper[4946]: I1128 07:47:17.414615 
4946 scope.go:117] "RemoveContainer" containerID="6b739eab42c01a2978df4b820f50251f2e628d9c048f18ef13431c3c40300de5" Nov 28 07:47:17 crc kubenswrapper[4946]: E1128 07:47:17.414868 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b739eab42c01a2978df4b820f50251f2e628d9c048f18ef13431c3c40300de5\": container with ID starting with 6b739eab42c01a2978df4b820f50251f2e628d9c048f18ef13431c3c40300de5 not found: ID does not exist" containerID="6b739eab42c01a2978df4b820f50251f2e628d9c048f18ef13431c3c40300de5" Nov 28 07:47:17 crc kubenswrapper[4946]: I1128 07:47:17.414893 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b739eab42c01a2978df4b820f50251f2e628d9c048f18ef13431c3c40300de5"} err="failed to get container status \"6b739eab42c01a2978df4b820f50251f2e628d9c048f18ef13431c3c40300de5\": rpc error: code = NotFound desc = could not find container \"6b739eab42c01a2978df4b820f50251f2e628d9c048f18ef13431c3c40300de5\": container with ID starting with 6b739eab42c01a2978df4b820f50251f2e628d9c048f18ef13431c3c40300de5 not found: ID does not exist" Nov 28 07:47:17 crc kubenswrapper[4946]: I1128 07:47:17.452038 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92cd023b-7489-4208-8431-c3cc6d77b7b3-catalog-content\") pod \"92cd023b-7489-4208-8431-c3cc6d77b7b3\" (UID: \"92cd023b-7489-4208-8431-c3cc6d77b7b3\") " Nov 28 07:47:17 crc kubenswrapper[4946]: I1128 07:47:17.452105 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c94gb\" (UniqueName: \"kubernetes.io/projected/92cd023b-7489-4208-8431-c3cc6d77b7b3-kube-api-access-c94gb\") pod \"92cd023b-7489-4208-8431-c3cc6d77b7b3\" (UID: \"92cd023b-7489-4208-8431-c3cc6d77b7b3\") " Nov 28 07:47:17 crc kubenswrapper[4946]: I1128 07:47:17.452284 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92cd023b-7489-4208-8431-c3cc6d77b7b3-utilities\") pod \"92cd023b-7489-4208-8431-c3cc6d77b7b3\" (UID: \"92cd023b-7489-4208-8431-c3cc6d77b7b3\") " Nov 28 07:47:17 crc kubenswrapper[4946]: I1128 07:47:17.453411 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92cd023b-7489-4208-8431-c3cc6d77b7b3-utilities" (OuterVolumeSpecName: "utilities") pod "92cd023b-7489-4208-8431-c3cc6d77b7b3" (UID: "92cd023b-7489-4208-8431-c3cc6d77b7b3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:47:17 crc kubenswrapper[4946]: I1128 07:47:17.461379 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92cd023b-7489-4208-8431-c3cc6d77b7b3-kube-api-access-c94gb" (OuterVolumeSpecName: "kube-api-access-c94gb") pod "92cd023b-7489-4208-8431-c3cc6d77b7b3" (UID: "92cd023b-7489-4208-8431-c3cc6d77b7b3"). InnerVolumeSpecName "kube-api-access-c94gb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:47:17 crc kubenswrapper[4946]: I1128 07:47:17.506399 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92cd023b-7489-4208-8431-c3cc6d77b7b3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "92cd023b-7489-4208-8431-c3cc6d77b7b3" (UID: "92cd023b-7489-4208-8431-c3cc6d77b7b3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:47:17 crc kubenswrapper[4946]: I1128 07:47:17.553803 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92cd023b-7489-4208-8431-c3cc6d77b7b3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 07:47:17 crc kubenswrapper[4946]: I1128 07:47:17.553836 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c94gb\" (UniqueName: \"kubernetes.io/projected/92cd023b-7489-4208-8431-c3cc6d77b7b3-kube-api-access-c94gb\") on node \"crc\" DevicePath \"\"" Nov 28 07:47:17 crc kubenswrapper[4946]: I1128 07:47:17.553850 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92cd023b-7489-4208-8431-c3cc6d77b7b3-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 07:47:17 crc kubenswrapper[4946]: I1128 07:47:17.677197 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cjtsc"] Nov 28 07:47:17 crc kubenswrapper[4946]: I1128 07:47:17.688010 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cjtsc"] Nov 28 07:47:18 crc kubenswrapper[4946]: I1128 07:47:18.004085 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92cd023b-7489-4208-8431-c3cc6d77b7b3" path="/var/lib/kubelet/pods/92cd023b-7489-4208-8431-c3cc6d77b7b3/volumes" Nov 28 07:47:20 crc kubenswrapper[4946]: I1128 07:47:20.990318 4946 scope.go:117] "RemoveContainer" containerID="eb3f36a4aa48df3549c531105a6f28faf4368bb81d61354d7bcd3512cf642312" Nov 28 07:47:20 crc kubenswrapper[4946]: E1128 07:47:20.991232 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:47:34 crc kubenswrapper[4946]: I1128 07:47:34.989849 4946 scope.go:117] "RemoveContainer" containerID="eb3f36a4aa48df3549c531105a6f28faf4368bb81d61354d7bcd3512cf642312" Nov 28 07:47:34 crc kubenswrapper[4946]: E1128 07:47:34.990831 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:47:46 crc kubenswrapper[4946]: I1128 07:47:46.990169 4946 scope.go:117] "RemoveContainer" containerID="eb3f36a4aa48df3549c531105a6f28faf4368bb81d61354d7bcd3512cf642312" Nov 28 07:47:46 crc kubenswrapper[4946]: E1128 07:47:46.991152 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:48:01 crc kubenswrapper[4946]: I1128 07:48:01.990560 4946 
scope.go:117] "RemoveContainer" containerID="eb3f36a4aa48df3549c531105a6f28faf4368bb81d61354d7bcd3512cf642312" Nov 28 07:48:01 crc kubenswrapper[4946]: E1128 07:48:01.991790 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:48:05 crc kubenswrapper[4946]: I1128 07:48:05.650961 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5nqdc"] Nov 28 07:48:05 crc kubenswrapper[4946]: E1128 07:48:05.651992 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92cd023b-7489-4208-8431-c3cc6d77b7b3" containerName="extract-content" Nov 28 07:48:05 crc kubenswrapper[4946]: I1128 07:48:05.652012 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="92cd023b-7489-4208-8431-c3cc6d77b7b3" containerName="extract-content" Nov 28 07:48:05 crc kubenswrapper[4946]: E1128 07:48:05.652036 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92cd023b-7489-4208-8431-c3cc6d77b7b3" containerName="registry-server" Nov 28 07:48:05 crc kubenswrapper[4946]: I1128 07:48:05.652051 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="92cd023b-7489-4208-8431-c3cc6d77b7b3" containerName="registry-server" Nov 28 07:48:05 crc kubenswrapper[4946]: E1128 07:48:05.652097 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92cd023b-7489-4208-8431-c3cc6d77b7b3" containerName="extract-utilities" Nov 28 07:48:05 crc kubenswrapper[4946]: I1128 07:48:05.652110 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="92cd023b-7489-4208-8431-c3cc6d77b7b3" containerName="extract-utilities" Nov 28 07:48:05 crc kubenswrapper[4946]: I1128 07:48:05.652349 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="92cd023b-7489-4208-8431-c3cc6d77b7b3" containerName="registry-server" Nov 28 07:48:05 crc kubenswrapper[4946]: I1128 07:48:05.655644 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5nqdc" Nov 28 07:48:05 crc kubenswrapper[4946]: I1128 07:48:05.664783 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5nqdc"] Nov 28 07:48:05 crc kubenswrapper[4946]: I1128 07:48:05.796834 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xdjp\" (UniqueName: \"kubernetes.io/projected/daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3-kube-api-access-9xdjp\") pod \"redhat-operators-5nqdc\" (UID: \"daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3\") " pod="openshift-marketplace/redhat-operators-5nqdc" Nov 28 07:48:05 crc kubenswrapper[4946]: I1128 07:48:05.796884 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3-catalog-content\") pod \"redhat-operators-5nqdc\" (UID: \"daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3\") " pod="openshift-marketplace/redhat-operators-5nqdc" Nov 28 07:48:05 crc kubenswrapper[4946]: I1128 07:48:05.796910 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3-utilities\") pod \"redhat-operators-5nqdc\" (UID: \"daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3\") " pod="openshift-marketplace/redhat-operators-5nqdc" Nov 28 07:48:05 crc kubenswrapper[4946]: I1128 07:48:05.897873 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xdjp\" (UniqueName: \"kubernetes.io/projected/daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3-kube-api-access-9xdjp\") pod \"redhat-operators-5nqdc\" (UID: \"daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3\") " pod="openshift-marketplace/redhat-operators-5nqdc" Nov 28 07:48:05 crc kubenswrapper[4946]: I1128 07:48:05.897928 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3-catalog-content\") pod \"redhat-operators-5nqdc\" (UID: \"daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3\") " pod="openshift-marketplace/redhat-operators-5nqdc" Nov 28 07:48:05 crc kubenswrapper[4946]: I1128 07:48:05.897963 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3-utilities\") pod \"redhat-operators-5nqdc\" (UID: \"daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3\") " pod="openshift-marketplace/redhat-operators-5nqdc" Nov 28 07:48:05 crc kubenswrapper[4946]: I1128 07:48:05.898522 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3-utilities\") pod \"redhat-operators-5nqdc\" (UID: \"daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3\") " pod="openshift-marketplace/redhat-operators-5nqdc" Nov 28 07:48:05 crc kubenswrapper[4946]: I1128 07:48:05.898814 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3-catalog-content\") pod \"redhat-operators-5nqdc\" (UID: \"daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3\") " pod="openshift-marketplace/redhat-operators-5nqdc" Nov 28 07:48:05 crc kubenswrapper[4946]: I1128 07:48:05.926980 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-9xdjp\" (UniqueName: \"kubernetes.io/projected/daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3-kube-api-access-9xdjp\") pod \"redhat-operators-5nqdc\" (UID: \"daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3\") " pod="openshift-marketplace/redhat-operators-5nqdc" Nov 28 07:48:05 crc kubenswrapper[4946]: I1128 07:48:05.986450 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5nqdc" Nov 28 07:48:06 crc kubenswrapper[4946]: I1128 07:48:06.438362 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5nqdc"] Nov 28 07:48:06 crc kubenswrapper[4946]: I1128 07:48:06.813780 4946 generic.go:334] "Generic (PLEG): container finished" podID="daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3" containerID="b8b1dbeb12c4500991009ee6f6b421600f299e4d73d8cd1a4a837ca08ab6659a" exitCode=0 Nov 28 07:48:06 crc kubenswrapper[4946]: I1128 07:48:06.813886 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5nqdc" event={"ID":"daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3","Type":"ContainerDied","Data":"b8b1dbeb12c4500991009ee6f6b421600f299e4d73d8cd1a4a837ca08ab6659a"} Nov 28 07:48:06 crc kubenswrapper[4946]: I1128 07:48:06.813994 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5nqdc" event={"ID":"daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3","Type":"ContainerStarted","Data":"469317902f148dccc47c717df515e8303a82c4f550fc3571484e75d9dd6a2a4d"} Nov 28 07:48:07 crc kubenswrapper[4946]: I1128 07:48:07.825909 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5nqdc" event={"ID":"daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3","Type":"ContainerStarted","Data":"6f6151bb0f8a89c0ea7f4374dc627d921a3cae5f8c92266d52e954cf28993a88"} Nov 28 07:48:08 crc kubenswrapper[4946]: I1128 07:48:08.838668 4946 generic.go:334] "Generic (PLEG): container finished" podID="daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3" containerID="6f6151bb0f8a89c0ea7f4374dc627d921a3cae5f8c92266d52e954cf28993a88" exitCode=0 Nov 28 07:48:08 crc kubenswrapper[4946]: I1128 07:48:08.838789 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5nqdc" event={"ID":"daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3","Type":"ContainerDied","Data":"6f6151bb0f8a89c0ea7f4374dc627d921a3cae5f8c92266d52e954cf28993a88"} Nov 28 07:48:09 crc kubenswrapper[4946]: I1128 07:48:09.852199 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5nqdc" event={"ID":"daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3","Type":"ContainerStarted","Data":"04541d9e355429605e7d8b553aa0ee678ba705926a13bd4eb9d848ae2c32a18f"} Nov 28 07:48:09 crc kubenswrapper[4946]: I1128 07:48:09.892308 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5nqdc" podStartSLOduration=2.389462371 podStartE2EDuration="4.892281833s" podCreationTimestamp="2025-11-28 07:48:05 +0000 UTC" firstStartedPulling="2025-11-28 07:48:06.815451543 +0000 UTC m=+3341.193516664" lastFinishedPulling="2025-11-28 07:48:09.318270975 +0000 UTC m=+3343.696336126" observedRunningTime="2025-11-28 07:48:09.880671688 +0000 UTC m=+3344.258736839" watchObservedRunningTime="2025-11-28 07:48:09.892281833 +0000 UTC m=+3344.270346974" Nov 28 07:48:14 crc kubenswrapper[4946]: I1128 07:48:14.989931 4946 scope.go:117] "RemoveContainer" containerID="eb3f36a4aa48df3549c531105a6f28faf4368bb81d61354d7bcd3512cf642312" Nov 28 
07:48:14 crc kubenswrapper[4946]: E1128 07:48:14.990904 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:48:15 crc kubenswrapper[4946]: I1128 07:48:15.986767 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5nqdc" Nov 28 07:48:15 crc kubenswrapper[4946]: I1128 07:48:15.986871 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5nqdc" Nov 28 07:48:17 crc kubenswrapper[4946]: I1128 07:48:17.064844 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5nqdc" podUID="daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3" containerName="registry-server" probeResult="failure" output=< Nov 28 07:48:17 crc kubenswrapper[4946]: timeout: failed to connect service ":50051" within 1s Nov 28 07:48:17 crc kubenswrapper[4946]: > Nov 28 07:48:26 crc kubenswrapper[4946]: I1128 07:48:26.062799 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5nqdc" Nov 28 07:48:26 crc kubenswrapper[4946]: I1128 07:48:26.149345 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5nqdc" Nov 28 07:48:26 crc kubenswrapper[4946]: I1128 07:48:26.307574 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5nqdc"] Nov 28 07:48:27 crc kubenswrapper[4946]: I1128 07:48:27.989879 4946 scope.go:117] "RemoveContainer" containerID="eb3f36a4aa48df3549c531105a6f28faf4368bb81d61354d7bcd3512cf642312" Nov 28 07:48:27 crc kubenswrapper[4946]: E1128 07:48:27.990616 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:48:28 crc kubenswrapper[4946]: I1128 07:48:28.015108 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5nqdc" podUID="daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3" containerName="registry-server" containerID="cri-o://04541d9e355429605e7d8b553aa0ee678ba705926a13bd4eb9d848ae2c32a18f" gracePeriod=2 Nov 28 07:48:29 crc kubenswrapper[4946]: I1128 07:48:29.025368 4946 generic.go:334] "Generic (PLEG): container finished" podID="daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3" containerID="04541d9e355429605e7d8b553aa0ee678ba705926a13bd4eb9d848ae2c32a18f" exitCode=0 Nov 28 07:48:29 crc kubenswrapper[4946]: I1128 07:48:29.025563 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5nqdc" event={"ID":"daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3","Type":"ContainerDied","Data":"04541d9e355429605e7d8b553aa0ee678ba705926a13bd4eb9d848ae2c32a18f"} Nov 28 07:48:29 crc kubenswrapper[4946]: I1128 07:48:29.581117 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5nqdc" Nov 28 07:48:29 crc kubenswrapper[4946]: I1128 07:48:29.639658 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3-catalog-content\") pod \"daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3\" (UID: \"daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3\") " Nov 28 07:48:29 crc kubenswrapper[4946]: I1128 07:48:29.639712 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xdjp\" (UniqueName: \"kubernetes.io/projected/daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3-kube-api-access-9xdjp\") pod \"daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3\" (UID: \"daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3\") " Nov 28 07:48:29 crc kubenswrapper[4946]: I1128 07:48:29.639831 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3-utilities\") pod \"daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3\" (UID: \"daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3\") " Nov 28 07:48:29 crc kubenswrapper[4946]: I1128 07:48:29.640997 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3-utilities" (OuterVolumeSpecName: "utilities") pod "daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3" (UID: "daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:48:29 crc kubenswrapper[4946]: I1128 07:48:29.646806 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3-kube-api-access-9xdjp" (OuterVolumeSpecName: "kube-api-access-9xdjp") pod "daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3" (UID: "daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3"). InnerVolumeSpecName "kube-api-access-9xdjp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:48:29 crc kubenswrapper[4946]: I1128 07:48:29.741656 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 07:48:29 crc kubenswrapper[4946]: I1128 07:48:29.741698 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xdjp\" (UniqueName: \"kubernetes.io/projected/daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3-kube-api-access-9xdjp\") on node \"crc\" DevicePath \"\"" Nov 28 07:48:29 crc kubenswrapper[4946]: I1128 07:48:29.757816 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3" (UID: "daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:48:29 crc kubenswrapper[4946]: I1128 07:48:29.842507 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 07:48:30 crc kubenswrapper[4946]: I1128 07:48:30.034994 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5nqdc" event={"ID":"daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3","Type":"ContainerDied","Data":"469317902f148dccc47c717df515e8303a82c4f550fc3571484e75d9dd6a2a4d"} Nov 28 07:48:30 crc kubenswrapper[4946]: I1128 07:48:30.035042 4946 scope.go:117] "RemoveContainer" containerID="04541d9e355429605e7d8b553aa0ee678ba705926a13bd4eb9d848ae2c32a18f" Nov 28 07:48:30 crc kubenswrapper[4946]: I1128 07:48:30.035144 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5nqdc" Nov 28 07:48:30 crc kubenswrapper[4946]: I1128 07:48:30.064256 4946 scope.go:117] "RemoveContainer" containerID="6f6151bb0f8a89c0ea7f4374dc627d921a3cae5f8c92266d52e954cf28993a88" Nov 28 07:48:30 crc kubenswrapper[4946]: I1128 07:48:30.073968 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5nqdc"] Nov 28 07:48:30 crc kubenswrapper[4946]: I1128 07:48:30.094356 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5nqdc"] Nov 28 07:48:30 crc kubenswrapper[4946]: I1128 07:48:30.094891 4946 scope.go:117] "RemoveContainer" containerID="b8b1dbeb12c4500991009ee6f6b421600f299e4d73d8cd1a4a837ca08ab6659a" Nov 28 07:48:32 crc kubenswrapper[4946]: I1128 07:48:32.000822 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3" path="/var/lib/kubelet/pods/daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3/volumes" Nov 28 07:48:42 crc kubenswrapper[4946]: I1128 07:48:42.990156 4946 scope.go:117] "RemoveContainer" containerID="eb3f36a4aa48df3549c531105a6f28faf4368bb81d61354d7bcd3512cf642312" Nov 28 07:48:42 crc kubenswrapper[4946]: E1128 07:48:42.990909 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:48:57 crc kubenswrapper[4946]: I1128 07:48:57.997742 4946 scope.go:117] "RemoveContainer" containerID="eb3f36a4aa48df3549c531105a6f28faf4368bb81d61354d7bcd3512cf642312" Nov 28 07:48:58 crc kubenswrapper[4946]: I1128 07:48:58.337489 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerStarted","Data":"c59441afced84a025675e7ef8f0b84c6fe8726457da630c80c7f9423301549ad"} Nov 28 07:51:20 crc kubenswrapper[4946]: I1128 07:51:20.036881 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lxcpt"] Nov 28 07:51:20 crc kubenswrapper[4946]: E1128 07:51:20.037812 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3" containerName="extract-content" Nov 28 07:51:20 crc 
kubenswrapper[4946]: I1128 07:51:20.037828 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3" containerName="extract-content" Nov 28 07:51:20 crc kubenswrapper[4946]: E1128 07:51:20.037842 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3" containerName="extract-utilities" Nov 28 07:51:20 crc kubenswrapper[4946]: I1128 07:51:20.037852 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3" containerName="extract-utilities" Nov 28 07:51:20 crc kubenswrapper[4946]: E1128 07:51:20.037875 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3" containerName="registry-server" Nov 28 07:51:20 crc kubenswrapper[4946]: I1128 07:51:20.037883 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3" containerName="registry-server" Nov 28 07:51:20 crc kubenswrapper[4946]: I1128 07:51:20.038055 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="daf97745-3d4a-48fe-b9e1-4bffb4c7d1a3" containerName="registry-server" Nov 28 07:51:20 crc kubenswrapper[4946]: I1128 07:51:20.039511 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lxcpt" Nov 28 07:51:20 crc kubenswrapper[4946]: I1128 07:51:20.058708 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lxcpt"] Nov 28 07:51:20 crc kubenswrapper[4946]: I1128 07:51:20.229740 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fe94b53-9a8d-4f92-b29c-b5126659c31c-catalog-content\") pod \"redhat-marketplace-lxcpt\" (UID: \"4fe94b53-9a8d-4f92-b29c-b5126659c31c\") " pod="openshift-marketplace/redhat-marketplace-lxcpt" Nov 28 07:51:20 crc kubenswrapper[4946]: I1128 07:51:20.229880 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpw8c\" (UniqueName: \"kubernetes.io/projected/4fe94b53-9a8d-4f92-b29c-b5126659c31c-kube-api-access-bpw8c\") pod \"redhat-marketplace-lxcpt\" (UID: \"4fe94b53-9a8d-4f92-b29c-b5126659c31c\") " pod="openshift-marketplace/redhat-marketplace-lxcpt" Nov 28 07:51:20 crc kubenswrapper[4946]: I1128 07:51:20.230191 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fe94b53-9a8d-4f92-b29c-b5126659c31c-utilities\") pod \"redhat-marketplace-lxcpt\" (UID: \"4fe94b53-9a8d-4f92-b29c-b5126659c31c\") " pod="openshift-marketplace/redhat-marketplace-lxcpt" Nov 28 07:51:20 crc kubenswrapper[4946]: I1128 07:51:20.331312 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fe94b53-9a8d-4f92-b29c-b5126659c31c-utilities\") pod \"redhat-marketplace-lxcpt\" (UID: \"4fe94b53-9a8d-4f92-b29c-b5126659c31c\") " pod="openshift-marketplace/redhat-marketplace-lxcpt" Nov 28 07:51:20 crc kubenswrapper[4946]: I1128 07:51:20.332287 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fe94b53-9a8d-4f92-b29c-b5126659c31c-utilities\") pod \"redhat-marketplace-lxcpt\" (UID: \"4fe94b53-9a8d-4f92-b29c-b5126659c31c\") " pod="openshift-marketplace/redhat-marketplace-lxcpt" Nov 28 07:51:20 
crc kubenswrapper[4946]: I1128 07:51:20.332571 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fe94b53-9a8d-4f92-b29c-b5126659c31c-catalog-content\") pod \"redhat-marketplace-lxcpt\" (UID: \"4fe94b53-9a8d-4f92-b29c-b5126659c31c\") " pod="openshift-marketplace/redhat-marketplace-lxcpt" Nov 28 07:51:20 crc kubenswrapper[4946]: I1128 07:51:20.333116 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fe94b53-9a8d-4f92-b29c-b5126659c31c-catalog-content\") pod \"redhat-marketplace-lxcpt\" (UID: \"4fe94b53-9a8d-4f92-b29c-b5126659c31c\") " pod="openshift-marketplace/redhat-marketplace-lxcpt" Nov 28 07:51:20 crc kubenswrapper[4946]: I1128 07:51:20.333312 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpw8c\" (UniqueName: \"kubernetes.io/projected/4fe94b53-9a8d-4f92-b29c-b5126659c31c-kube-api-access-bpw8c\") pod \"redhat-marketplace-lxcpt\" (UID: \"4fe94b53-9a8d-4f92-b29c-b5126659c31c\") " pod="openshift-marketplace/redhat-marketplace-lxcpt" Nov 28 07:51:20 crc kubenswrapper[4946]: I1128 07:51:20.359960 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpw8c\" (UniqueName: \"kubernetes.io/projected/4fe94b53-9a8d-4f92-b29c-b5126659c31c-kube-api-access-bpw8c\") pod \"redhat-marketplace-lxcpt\" (UID: \"4fe94b53-9a8d-4f92-b29c-b5126659c31c\") " pod="openshift-marketplace/redhat-marketplace-lxcpt" Nov 28 07:51:20 crc kubenswrapper[4946]: I1128 07:51:20.369732 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lxcpt" Nov 28 07:51:20 crc kubenswrapper[4946]: I1128 07:51:20.928141 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lxcpt"] Nov 28 07:51:21 crc kubenswrapper[4946]: I1128 07:51:21.769215 4946 generic.go:334] "Generic (PLEG): container finished" podID="4fe94b53-9a8d-4f92-b29c-b5126659c31c" containerID="28a28b822174f0abc5bdfd210b1342bcdaeb5baed85bc0fcfbfeebe3b4ba8c82" exitCode=0 Nov 28 07:51:21 crc kubenswrapper[4946]: I1128 07:51:21.769526 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxcpt" event={"ID":"4fe94b53-9a8d-4f92-b29c-b5126659c31c","Type":"ContainerDied","Data":"28a28b822174f0abc5bdfd210b1342bcdaeb5baed85bc0fcfbfeebe3b4ba8c82"} Nov 28 07:51:21 crc kubenswrapper[4946]: I1128 07:51:21.769614 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxcpt" event={"ID":"4fe94b53-9a8d-4f92-b29c-b5126659c31c","Type":"ContainerStarted","Data":"c8972bc67a26567fdd08d813c85a01f7b4dbc3d4cbcead9846ab03ffe965c239"} Nov 28 07:51:21 crc kubenswrapper[4946]: I1128 07:51:21.772968 4946 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 07:51:22 crc kubenswrapper[4946]: I1128 07:51:22.778566 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxcpt" event={"ID":"4fe94b53-9a8d-4f92-b29c-b5126659c31c","Type":"ContainerStarted","Data":"238528774eb73ccfc5f3579642b3c172b0665d552895170e1a95665d73f9e395"} Nov 28 07:51:23 crc kubenswrapper[4946]: I1128 07:51:23.792192 4946 generic.go:334] "Generic (PLEG): container finished" podID="4fe94b53-9a8d-4f92-b29c-b5126659c31c" 
containerID="238528774eb73ccfc5f3579642b3c172b0665d552895170e1a95665d73f9e395" exitCode=0 Nov 28 07:51:23 crc kubenswrapper[4946]: I1128 07:51:23.792807 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxcpt" event={"ID":"4fe94b53-9a8d-4f92-b29c-b5126659c31c","Type":"ContainerDied","Data":"238528774eb73ccfc5f3579642b3c172b0665d552895170e1a95665d73f9e395"} Nov 28 07:51:24 crc kubenswrapper[4946]: I1128 07:51:24.731484 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 07:51:24 crc kubenswrapper[4946]: I1128 07:51:24.731923 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 07:51:24 crc kubenswrapper[4946]: I1128 07:51:24.805305 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxcpt" event={"ID":"4fe94b53-9a8d-4f92-b29c-b5126659c31c","Type":"ContainerStarted","Data":"d297af66cecc25f1e5dd33eaef1aaa4c5378a59bb469f6e3df08ebe7a2111585"} Nov 28 07:51:24 crc kubenswrapper[4946]: I1128 07:51:24.832366 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lxcpt" podStartSLOduration=2.31442932 podStartE2EDuration="4.832333297s" podCreationTimestamp="2025-11-28 07:51:20 +0000 UTC" firstStartedPulling="2025-11-28 07:51:21.772683026 +0000 UTC m=+3536.150748147" lastFinishedPulling="2025-11-28 07:51:24.290586983 +0000 UTC m=+3538.668652124" observedRunningTime="2025-11-28 07:51:24.826567144 +0000 UTC m=+3539.204632315" watchObservedRunningTime="2025-11-28 07:51:24.832333297 +0000 UTC m=+3539.210398448" Nov 28 07:51:30 crc kubenswrapper[4946]: I1128 07:51:30.370411 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lxcpt" Nov 28 07:51:30 crc kubenswrapper[4946]: I1128 07:51:30.371619 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lxcpt" Nov 28 07:51:30 crc kubenswrapper[4946]: I1128 07:51:30.451298 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lxcpt" Nov 28 07:51:30 crc kubenswrapper[4946]: I1128 07:51:30.959530 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lxcpt" Nov 28 07:51:31 crc kubenswrapper[4946]: I1128 07:51:31.020049 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lxcpt"] Nov 28 07:51:32 crc kubenswrapper[4946]: I1128 07:51:32.903625 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lxcpt" podUID="4fe94b53-9a8d-4f92-b29c-b5126659c31c" containerName="registry-server" containerID="cri-o://d297af66cecc25f1e5dd33eaef1aaa4c5378a59bb469f6e3df08ebe7a2111585" gracePeriod=2 Nov 28 07:51:33 crc kubenswrapper[4946]: I1128 07:51:33.925342 4946 generic.go:334] "Generic (PLEG): container finished" 
podID="4fe94b53-9a8d-4f92-b29c-b5126659c31c" containerID="d297af66cecc25f1e5dd33eaef1aaa4c5378a59bb469f6e3df08ebe7a2111585" exitCode=0 Nov 28 07:51:33 crc kubenswrapper[4946]: I1128 07:51:33.925417 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxcpt" event={"ID":"4fe94b53-9a8d-4f92-b29c-b5126659c31c","Type":"ContainerDied","Data":"d297af66cecc25f1e5dd33eaef1aaa4c5378a59bb469f6e3df08ebe7a2111585"} Nov 28 07:51:33 crc kubenswrapper[4946]: I1128 07:51:33.925756 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxcpt" event={"ID":"4fe94b53-9a8d-4f92-b29c-b5126659c31c","Type":"ContainerDied","Data":"c8972bc67a26567fdd08d813c85a01f7b4dbc3d4cbcead9846ab03ffe965c239"} Nov 28 07:51:33 crc kubenswrapper[4946]: I1128 07:51:33.925772 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8972bc67a26567fdd08d813c85a01f7b4dbc3d4cbcead9846ab03ffe965c239" Nov 28 07:51:33 crc kubenswrapper[4946]: I1128 07:51:33.961380 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lxcpt" Nov 28 07:51:34 crc kubenswrapper[4946]: I1128 07:51:34.062544 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fe94b53-9a8d-4f92-b29c-b5126659c31c-catalog-content\") pod \"4fe94b53-9a8d-4f92-b29c-b5126659c31c\" (UID: \"4fe94b53-9a8d-4f92-b29c-b5126659c31c\") " Nov 28 07:51:34 crc kubenswrapper[4946]: I1128 07:51:34.062647 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fe94b53-9a8d-4f92-b29c-b5126659c31c-utilities\") pod \"4fe94b53-9a8d-4f92-b29c-b5126659c31c\" (UID: \"4fe94b53-9a8d-4f92-b29c-b5126659c31c\") " Nov 28 07:51:34 crc kubenswrapper[4946]: I1128 07:51:34.062690 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpw8c\" (UniqueName: \"kubernetes.io/projected/4fe94b53-9a8d-4f92-b29c-b5126659c31c-kube-api-access-bpw8c\") pod \"4fe94b53-9a8d-4f92-b29c-b5126659c31c\" (UID: \"4fe94b53-9a8d-4f92-b29c-b5126659c31c\") " Nov 28 07:51:34 crc kubenswrapper[4946]: I1128 07:51:34.063865 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fe94b53-9a8d-4f92-b29c-b5126659c31c-utilities" (OuterVolumeSpecName: "utilities") pod "4fe94b53-9a8d-4f92-b29c-b5126659c31c" (UID: "4fe94b53-9a8d-4f92-b29c-b5126659c31c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:51:34 crc kubenswrapper[4946]: I1128 07:51:34.071178 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fe94b53-9a8d-4f92-b29c-b5126659c31c-kube-api-access-bpw8c" (OuterVolumeSpecName: "kube-api-access-bpw8c") pod "4fe94b53-9a8d-4f92-b29c-b5126659c31c" (UID: "4fe94b53-9a8d-4f92-b29c-b5126659c31c"). InnerVolumeSpecName "kube-api-access-bpw8c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:51:34 crc kubenswrapper[4946]: I1128 07:51:34.101335 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fe94b53-9a8d-4f92-b29c-b5126659c31c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4fe94b53-9a8d-4f92-b29c-b5126659c31c" (UID: "4fe94b53-9a8d-4f92-b29c-b5126659c31c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:51:34 crc kubenswrapper[4946]: I1128 07:51:34.164154 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fe94b53-9a8d-4f92-b29c-b5126659c31c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 07:51:34 crc kubenswrapper[4946]: I1128 07:51:34.164203 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fe94b53-9a8d-4f92-b29c-b5126659c31c-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 07:51:34 crc kubenswrapper[4946]: I1128 07:51:34.164217 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpw8c\" (UniqueName: \"kubernetes.io/projected/4fe94b53-9a8d-4f92-b29c-b5126659c31c-kube-api-access-bpw8c\") on node \"crc\" DevicePath \"\"" Nov 28 07:51:34 crc kubenswrapper[4946]: I1128 07:51:34.934456 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lxcpt" Nov 28 07:51:34 crc kubenswrapper[4946]: I1128 07:51:34.974771 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lxcpt"] Nov 28 07:51:34 crc kubenswrapper[4946]: I1128 07:51:34.981851 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lxcpt"] Nov 28 07:51:36 crc kubenswrapper[4946]: I1128 07:51:36.007555 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fe94b53-9a8d-4f92-b29c-b5126659c31c" path="/var/lib/kubelet/pods/4fe94b53-9a8d-4f92-b29c-b5126659c31c/volumes" Nov 28 07:51:54 crc kubenswrapper[4946]: I1128 07:51:54.731386 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 07:51:54 crc kubenswrapper[4946]: I1128 07:51:54.732275 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 07:52:24 crc kubenswrapper[4946]: I1128 07:52:24.732830 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 07:52:24 crc kubenswrapper[4946]: I1128 07:52:24.734238 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 07:52:24 crc kubenswrapper[4946]: I1128 07:52:24.734356 4946 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" Nov 28 07:52:24 crc kubenswrapper[4946]: I1128 07:52:24.736015 4946 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"c59441afced84a025675e7ef8f0b84c6fe8726457da630c80c7f9423301549ad"} pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 07:52:24 crc kubenswrapper[4946]: I1128 07:52:24.736172 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" containerID="cri-o://c59441afced84a025675e7ef8f0b84c6fe8726457da630c80c7f9423301549ad" gracePeriod=600 Nov 28 07:52:25 crc kubenswrapper[4946]: I1128 07:52:25.506086 4946 generic.go:334] "Generic (PLEG): container finished" podID="7450befc-262f-45d1-a5f4-f445e540185b" containerID="c59441afced84a025675e7ef8f0b84c6fe8726457da630c80c7f9423301549ad" exitCode=0 Nov 28 07:52:25 crc kubenswrapper[4946]: I1128 07:52:25.506186 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerDied","Data":"c59441afced84a025675e7ef8f0b84c6fe8726457da630c80c7f9423301549ad"} Nov 28 07:52:25 crc kubenswrapper[4946]: I1128 07:52:25.507040 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerStarted","Data":"7876c9fece19df20df891f59364927c85a8d97822e71d9732a4599cf4df3c3ba"} Nov 28 07:52:25 crc kubenswrapper[4946]: I1128 07:52:25.507078 4946 scope.go:117] "RemoveContainer" containerID="eb3f36a4aa48df3549c531105a6f28faf4368bb81d61354d7bcd3512cf642312" Nov 28 07:54:54 crc kubenswrapper[4946]: I1128 07:54:54.731255 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 07:54:54 crc kubenswrapper[4946]: I1128 07:54:54.734274 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 07:55:24 crc kubenswrapper[4946]: I1128 07:55:24.731103 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 07:55:24 crc kubenswrapper[4946]: I1128 07:55:24.732151 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 07:55:54 crc kubenswrapper[4946]: I1128 07:55:54.730626 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial 
tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 07:55:54 crc kubenswrapper[4946]: I1128 07:55:54.731551 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 07:55:54 crc kubenswrapper[4946]: I1128 07:55:54.731694 4946 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" Nov 28 07:55:54 crc kubenswrapper[4946]: I1128 07:55:54.732810 4946 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7876c9fece19df20df891f59364927c85a8d97822e71d9732a4599cf4df3c3ba"} pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 07:55:54 crc kubenswrapper[4946]: I1128 07:55:54.732919 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" containerID="cri-o://7876c9fece19df20df891f59364927c85a8d97822e71d9732a4599cf4df3c3ba" gracePeriod=600 Nov 28 07:55:54 crc kubenswrapper[4946]: E1128 07:55:54.874285 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:55:55 crc kubenswrapper[4946]: I1128 07:55:55.686731 4946 generic.go:334] "Generic (PLEG): container finished" podID="7450befc-262f-45d1-a5f4-f445e540185b" containerID="7876c9fece19df20df891f59364927c85a8d97822e71d9732a4599cf4df3c3ba" exitCode=0 Nov 28 07:55:55 crc kubenswrapper[4946]: I1128 07:55:55.686781 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerDied","Data":"7876c9fece19df20df891f59364927c85a8d97822e71d9732a4599cf4df3c3ba"} Nov 28 07:55:55 crc kubenswrapper[4946]: I1128 07:55:55.686828 4946 scope.go:117] "RemoveContainer" containerID="c59441afced84a025675e7ef8f0b84c6fe8726457da630c80c7f9423301549ad" Nov 28 07:55:55 crc kubenswrapper[4946]: I1128 07:55:55.687287 4946 scope.go:117] "RemoveContainer" containerID="7876c9fece19df20df891f59364927c85a8d97822e71d9732a4599cf4df3c3ba" Nov 28 07:55:55 crc kubenswrapper[4946]: E1128 07:55:55.687629 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:56:10 crc kubenswrapper[4946]: I1128 07:56:10.990329 4946 scope.go:117] "RemoveContainer" 
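
From here the pod is in CrashLoopBackOff: each failed restart doubles the delay before the next attempt, up to the 5m0s cap quoted in the "back-off 5m0s restarting failed container" errors, while the RemoveContainer/"Error syncing pod" pairs repeating every 10-15s below appear to be the sync loop re-evaluating the pod as that backoff clock runs. A sketch of the doubling-with-cap schedule, assuming the kubelet's documented 10s base delay (the 5m cap is from the log itself):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const base = 10 * time.Second    // kubelet's default crash-loop base delay (assumed)
        const maxDelay = 5 * time.Minute // the "back-off 5m0s" cap seen in the log
        d := base
        for i := 1; i <= 7; i++ {
            fmt.Printf("restart %d: wait %v\n", i, d) // 10s, 20s, 40s, 1m20s, 2m40s, 5m0s, 5m0s
            d *= 2
            if d > maxDelay {
                d = maxDelay
            }
        }
    }
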
containerID="7876c9fece19df20df891f59364927c85a8d97822e71d9732a4599cf4df3c3ba" Nov 28 07:56:10 crc kubenswrapper[4946]: E1128 07:56:10.990940 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:56:22 crc kubenswrapper[4946]: I1128 07:56:22.989748 4946 scope.go:117] "RemoveContainer" containerID="7876c9fece19df20df891f59364927c85a8d97822e71d9732a4599cf4df3c3ba" Nov 28 07:56:22 crc kubenswrapper[4946]: E1128 07:56:22.990246 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:56:36 crc kubenswrapper[4946]: I1128 07:56:36.989454 4946 scope.go:117] "RemoveContainer" containerID="7876c9fece19df20df891f59364927c85a8d97822e71d9732a4599cf4df3c3ba" Nov 28 07:56:36 crc kubenswrapper[4946]: E1128 07:56:36.990179 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:56:48 crc kubenswrapper[4946]: I1128 07:56:48.989710 4946 scope.go:117] "RemoveContainer" containerID="7876c9fece19df20df891f59364927c85a8d97822e71d9732a4599cf4df3c3ba" Nov 28 07:56:48 crc kubenswrapper[4946]: E1128 07:56:48.991332 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:57:02 crc kubenswrapper[4946]: I1128 07:57:02.990026 4946 scope.go:117] "RemoveContainer" containerID="7876c9fece19df20df891f59364927c85a8d97822e71d9732a4599cf4df3c3ba" Nov 28 07:57:02 crc kubenswrapper[4946]: E1128 07:57:02.991051 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:57:13 crc kubenswrapper[4946]: I1128 07:57:13.990872 4946 scope.go:117] "RemoveContainer" containerID="7876c9fece19df20df891f59364927c85a8d97822e71d9732a4599cf4df3c3ba" Nov 28 07:57:13 crc kubenswrapper[4946]: E1128 07:57:13.992121 4946 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:57:24 crc kubenswrapper[4946]: I1128 07:57:24.990404 4946 scope.go:117] "RemoveContainer" containerID="7876c9fece19df20df891f59364927c85a8d97822e71d9732a4599cf4df3c3ba" Nov 28 07:57:24 crc kubenswrapper[4946]: E1128 07:57:24.992186 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:57:33 crc kubenswrapper[4946]: I1128 07:57:33.714882 4946 scope.go:117] "RemoveContainer" containerID="d297af66cecc25f1e5dd33eaef1aaa4c5378a59bb469f6e3df08ebe7a2111585" Nov 28 07:57:33 crc kubenswrapper[4946]: I1128 07:57:33.747297 4946 scope.go:117] "RemoveContainer" containerID="238528774eb73ccfc5f3579642b3c172b0665d552895170e1a95665d73f9e395" Nov 28 07:57:33 crc kubenswrapper[4946]: I1128 07:57:33.776907 4946 scope.go:117] "RemoveContainer" containerID="28a28b822174f0abc5bdfd210b1342bcdaeb5baed85bc0fcfbfeebe3b4ba8c82" Nov 28 07:57:38 crc kubenswrapper[4946]: I1128 07:57:38.990094 4946 scope.go:117] "RemoveContainer" containerID="7876c9fece19df20df891f59364927c85a8d97822e71d9732a4599cf4df3c3ba" Nov 28 07:57:38 crc kubenswrapper[4946]: E1128 07:57:38.991233 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:57:50 crc kubenswrapper[4946]: I1128 07:57:50.991003 4946 scope.go:117] "RemoveContainer" containerID="7876c9fece19df20df891f59364927c85a8d97822e71d9732a4599cf4df3c3ba" Nov 28 07:57:50 crc kubenswrapper[4946]: E1128 07:57:50.992156 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:58:01 crc kubenswrapper[4946]: I1128 07:58:01.411215 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8zxwx"] Nov 28 07:58:01 crc kubenswrapper[4946]: E1128 07:58:01.412412 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fe94b53-9a8d-4f92-b29c-b5126659c31c" containerName="extract-content" Nov 28 07:58:01 crc kubenswrapper[4946]: I1128 07:58:01.412439 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fe94b53-9a8d-4f92-b29c-b5126659c31c" containerName="extract-content" Nov 28 07:58:01 crc kubenswrapper[4946]: E1128 
07:58:01.412454 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fe94b53-9a8d-4f92-b29c-b5126659c31c" containerName="extract-utilities" Nov 28 07:58:01 crc kubenswrapper[4946]: I1128 07:58:01.412495 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fe94b53-9a8d-4f92-b29c-b5126659c31c" containerName="extract-utilities" Nov 28 07:58:01 crc kubenswrapper[4946]: E1128 07:58:01.412584 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fe94b53-9a8d-4f92-b29c-b5126659c31c" containerName="registry-server" Nov 28 07:58:01 crc kubenswrapper[4946]: I1128 07:58:01.412598 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fe94b53-9a8d-4f92-b29c-b5126659c31c" containerName="registry-server" Nov 28 07:58:01 crc kubenswrapper[4946]: I1128 07:58:01.412847 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fe94b53-9a8d-4f92-b29c-b5126659c31c" containerName="registry-server" Nov 28 07:58:01 crc kubenswrapper[4946]: I1128 07:58:01.414766 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8zxwx" Nov 28 07:58:01 crc kubenswrapper[4946]: I1128 07:58:01.442095 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8zxwx"] Nov 28 07:58:01 crc kubenswrapper[4946]: I1128 07:58:01.490248 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45f4c60b-0278-4b39-b437-6c32a1b02761-catalog-content\") pod \"certified-operators-8zxwx\" (UID: \"45f4c60b-0278-4b39-b437-6c32a1b02761\") " pod="openshift-marketplace/certified-operators-8zxwx" Nov 28 07:58:01 crc kubenswrapper[4946]: I1128 07:58:01.490362 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf74z\" (UniqueName: \"kubernetes.io/projected/45f4c60b-0278-4b39-b437-6c32a1b02761-kube-api-access-cf74z\") pod \"certified-operators-8zxwx\" (UID: \"45f4c60b-0278-4b39-b437-6c32a1b02761\") " pod="openshift-marketplace/certified-operators-8zxwx" Nov 28 07:58:01 crc kubenswrapper[4946]: I1128 07:58:01.490428 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45f4c60b-0278-4b39-b437-6c32a1b02761-utilities\") pod \"certified-operators-8zxwx\" (UID: \"45f4c60b-0278-4b39-b437-6c32a1b02761\") " pod="openshift-marketplace/certified-operators-8zxwx" Nov 28 07:58:01 crc kubenswrapper[4946]: I1128 07:58:01.591608 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45f4c60b-0278-4b39-b437-6c32a1b02761-catalog-content\") pod \"certified-operators-8zxwx\" (UID: \"45f4c60b-0278-4b39-b437-6c32a1b02761\") " pod="openshift-marketplace/certified-operators-8zxwx" Nov 28 07:58:01 crc kubenswrapper[4946]: I1128 07:58:01.591677 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf74z\" (UniqueName: \"kubernetes.io/projected/45f4c60b-0278-4b39-b437-6c32a1b02761-kube-api-access-cf74z\") pod \"certified-operators-8zxwx\" (UID: \"45f4c60b-0278-4b39-b437-6c32a1b02761\") " pod="openshift-marketplace/certified-operators-8zxwx" Nov 28 07:58:01 crc kubenswrapper[4946]: I1128 07:58:01.591715 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/45f4c60b-0278-4b39-b437-6c32a1b02761-utilities\") pod \"certified-operators-8zxwx\" (UID: \"45f4c60b-0278-4b39-b437-6c32a1b02761\") " pod="openshift-marketplace/certified-operators-8zxwx" Nov 28 07:58:01 crc kubenswrapper[4946]: I1128 07:58:01.592251 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45f4c60b-0278-4b39-b437-6c32a1b02761-utilities\") pod \"certified-operators-8zxwx\" (UID: \"45f4c60b-0278-4b39-b437-6c32a1b02761\") " pod="openshift-marketplace/certified-operators-8zxwx" Nov 28 07:58:01 crc kubenswrapper[4946]: I1128 07:58:01.592352 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45f4c60b-0278-4b39-b437-6c32a1b02761-catalog-content\") pod \"certified-operators-8zxwx\" (UID: \"45f4c60b-0278-4b39-b437-6c32a1b02761\") " pod="openshift-marketplace/certified-operators-8zxwx" Nov 28 07:58:01 crc kubenswrapper[4946]: I1128 07:58:01.616709 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf74z\" (UniqueName: \"kubernetes.io/projected/45f4c60b-0278-4b39-b437-6c32a1b02761-kube-api-access-cf74z\") pod \"certified-operators-8zxwx\" (UID: \"45f4c60b-0278-4b39-b437-6c32a1b02761\") " pod="openshift-marketplace/certified-operators-8zxwx" Nov 28 07:58:01 crc kubenswrapper[4946]: I1128 07:58:01.776185 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8zxwx" Nov 28 07:58:02 crc kubenswrapper[4946]: I1128 07:58:02.281070 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8zxwx"] Nov 28 07:58:02 crc kubenswrapper[4946]: I1128 07:58:02.870807 4946 generic.go:334] "Generic (PLEG): container finished" podID="45f4c60b-0278-4b39-b437-6c32a1b02761" containerID="c727d4db5d17934aaa9218f66e9f9648cca9ac4ead73bc116af90fdd9fdd2c11" exitCode=0 Nov 28 07:58:02 crc kubenswrapper[4946]: I1128 07:58:02.870907 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8zxwx" event={"ID":"45f4c60b-0278-4b39-b437-6c32a1b02761","Type":"ContainerDied","Data":"c727d4db5d17934aaa9218f66e9f9648cca9ac4ead73bc116af90fdd9fdd2c11"} Nov 28 07:58:02 crc kubenswrapper[4946]: I1128 07:58:02.873055 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8zxwx" event={"ID":"45f4c60b-0278-4b39-b437-6c32a1b02761","Type":"ContainerStarted","Data":"3fea6104762f98ad80fa93430ec0ecb8f876054ae029b0f2e5f15623d09ddab7"} Nov 28 07:58:02 crc kubenswrapper[4946]: I1128 07:58:02.873541 4946 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 07:58:03 crc kubenswrapper[4946]: I1128 07:58:03.879829 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8zxwx" event={"ID":"45f4c60b-0278-4b39-b437-6c32a1b02761","Type":"ContainerStarted","Data":"826921970cfdec813348adfe371aea7fe901c3379be48bbe13974cb69546e54c"} Nov 28 07:58:04 crc kubenswrapper[4946]: I1128 07:58:04.894776 4946 generic.go:334] "Generic (PLEG): container finished" podID="45f4c60b-0278-4b39-b437-6c32a1b02761" containerID="826921970cfdec813348adfe371aea7fe901c3379be48bbe13974cb69546e54c" exitCode=0 Nov 28 07:58:04 crc kubenswrapper[4946]: I1128 07:58:04.894857 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-8zxwx" event={"ID":"45f4c60b-0278-4b39-b437-6c32a1b02761","Type":"ContainerDied","Data":"826921970cfdec813348adfe371aea7fe901c3379be48bbe13974cb69546e54c"} Nov 28 07:58:04 crc kubenswrapper[4946]: I1128 07:58:04.989806 4946 scope.go:117] "RemoveContainer" containerID="7876c9fece19df20df891f59364927c85a8d97822e71d9732a4599cf4df3c3ba" Nov 28 07:58:04 crc kubenswrapper[4946]: E1128 07:58:04.990204 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:58:05 crc kubenswrapper[4946]: I1128 07:58:05.904251 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8zxwx" event={"ID":"45f4c60b-0278-4b39-b437-6c32a1b02761","Type":"ContainerStarted","Data":"4ab54da3e80b56004d103c6964e7ab71536517b2b99a3fe0ad4abb6af5651b64"} Nov 28 07:58:05 crc kubenswrapper[4946]: I1128 07:58:05.929309 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8zxwx" podStartSLOduration=2.417245825 podStartE2EDuration="4.929290516s" podCreationTimestamp="2025-11-28 07:58:01 +0000 UTC" firstStartedPulling="2025-11-28 07:58:02.872995859 +0000 UTC m=+3937.251061010" lastFinishedPulling="2025-11-28 07:58:05.38504058 +0000 UTC m=+3939.763105701" observedRunningTime="2025-11-28 07:58:05.928761733 +0000 UTC m=+3940.306826844" watchObservedRunningTime="2025-11-28 07:58:05.929290516 +0000 UTC m=+3940.307355637" Nov 28 07:58:11 crc kubenswrapper[4946]: I1128 07:58:11.777252 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8zxwx" Nov 28 07:58:11 crc kubenswrapper[4946]: I1128 07:58:11.778950 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8zxwx" Nov 28 07:58:11 crc kubenswrapper[4946]: I1128 07:58:11.844826 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8zxwx" Nov 28 07:58:12 crc kubenswrapper[4946]: I1128 07:58:12.021143 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8zxwx" Nov 28 07:58:12 crc kubenswrapper[4946]: I1128 07:58:12.097624 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8zxwx"] Nov 28 07:58:13 crc kubenswrapper[4946]: I1128 07:58:13.982453 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8zxwx" podUID="45f4c60b-0278-4b39-b437-6c32a1b02761" containerName="registry-server" containerID="cri-o://4ab54da3e80b56004d103c6964e7ab71536517b2b99a3fe0ad4abb6af5651b64" gracePeriod=2 Nov 28 07:58:14 crc kubenswrapper[4946]: I1128 07:58:14.994576 4946 generic.go:334] "Generic (PLEG): container finished" podID="45f4c60b-0278-4b39-b437-6c32a1b02761" containerID="4ab54da3e80b56004d103c6964e7ab71536517b2b99a3fe0ad4abb6af5651b64" exitCode=0 Nov 28 07:58:14 crc kubenswrapper[4946]: I1128 07:58:14.994622 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-8zxwx" event={"ID":"45f4c60b-0278-4b39-b437-6c32a1b02761","Type":"ContainerDied","Data":"4ab54da3e80b56004d103c6964e7ab71536517b2b99a3fe0ad4abb6af5651b64"} Nov 28 07:58:15 crc kubenswrapper[4946]: I1128 07:58:15.690564 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8zxwx" Nov 28 07:58:15 crc kubenswrapper[4946]: I1128 07:58:15.840521 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cf74z\" (UniqueName: \"kubernetes.io/projected/45f4c60b-0278-4b39-b437-6c32a1b02761-kube-api-access-cf74z\") pod \"45f4c60b-0278-4b39-b437-6c32a1b02761\" (UID: \"45f4c60b-0278-4b39-b437-6c32a1b02761\") " Nov 28 07:58:15 crc kubenswrapper[4946]: I1128 07:58:15.840604 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45f4c60b-0278-4b39-b437-6c32a1b02761-utilities\") pod \"45f4c60b-0278-4b39-b437-6c32a1b02761\" (UID: \"45f4c60b-0278-4b39-b437-6c32a1b02761\") " Nov 28 07:58:15 crc kubenswrapper[4946]: I1128 07:58:15.840649 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45f4c60b-0278-4b39-b437-6c32a1b02761-catalog-content\") pod \"45f4c60b-0278-4b39-b437-6c32a1b02761\" (UID: \"45f4c60b-0278-4b39-b437-6c32a1b02761\") " Nov 28 07:58:15 crc kubenswrapper[4946]: I1128 07:58:15.841970 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45f4c60b-0278-4b39-b437-6c32a1b02761-utilities" (OuterVolumeSpecName: "utilities") pod "45f4c60b-0278-4b39-b437-6c32a1b02761" (UID: "45f4c60b-0278-4b39-b437-6c32a1b02761"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:58:15 crc kubenswrapper[4946]: I1128 07:58:15.846988 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45f4c60b-0278-4b39-b437-6c32a1b02761-kube-api-access-cf74z" (OuterVolumeSpecName: "kube-api-access-cf74z") pod "45f4c60b-0278-4b39-b437-6c32a1b02761" (UID: "45f4c60b-0278-4b39-b437-6c32a1b02761"). InnerVolumeSpecName "kube-api-access-cf74z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:58:15 crc kubenswrapper[4946]: I1128 07:58:15.917723 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45f4c60b-0278-4b39-b437-6c32a1b02761-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "45f4c60b-0278-4b39-b437-6c32a1b02761" (UID: "45f4c60b-0278-4b39-b437-6c32a1b02761"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:58:15 crc kubenswrapper[4946]: I1128 07:58:15.942698 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cf74z\" (UniqueName: \"kubernetes.io/projected/45f4c60b-0278-4b39-b437-6c32a1b02761-kube-api-access-cf74z\") on node \"crc\" DevicePath \"\"" Nov 28 07:58:15 crc kubenswrapper[4946]: I1128 07:58:15.942733 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45f4c60b-0278-4b39-b437-6c32a1b02761-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 07:58:15 crc kubenswrapper[4946]: I1128 07:58:15.942749 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45f4c60b-0278-4b39-b437-6c32a1b02761-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 07:58:15 crc kubenswrapper[4946]: I1128 07:58:15.994275 4946 scope.go:117] "RemoveContainer" containerID="7876c9fece19df20df891f59364927c85a8d97822e71d9732a4599cf4df3c3ba" Nov 28 07:58:15 crc kubenswrapper[4946]: E1128 07:58:15.994936 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:58:16 crc kubenswrapper[4946]: I1128 07:58:16.021518 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8zxwx" event={"ID":"45f4c60b-0278-4b39-b437-6c32a1b02761","Type":"ContainerDied","Data":"3fea6104762f98ad80fa93430ec0ecb8f876054ae029b0f2e5f15623d09ddab7"} Nov 28 07:58:16 crc kubenswrapper[4946]: I1128 07:58:16.021777 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8zxwx" Nov 28 07:58:16 crc kubenswrapper[4946]: I1128 07:58:16.022586 4946 scope.go:117] "RemoveContainer" containerID="4ab54da3e80b56004d103c6964e7ab71536517b2b99a3fe0ad4abb6af5651b64" Nov 28 07:58:16 crc kubenswrapper[4946]: I1128 07:58:16.052818 4946 scope.go:117] "RemoveContainer" containerID="826921970cfdec813348adfe371aea7fe901c3379be48bbe13974cb69546e54c" Nov 28 07:58:16 crc kubenswrapper[4946]: I1128 07:58:16.069256 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8zxwx"] Nov 28 07:58:16 crc kubenswrapper[4946]: I1128 07:58:16.082595 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8zxwx"] Nov 28 07:58:16 crc kubenswrapper[4946]: I1128 07:58:16.093127 4946 scope.go:117] "RemoveContainer" containerID="c727d4db5d17934aaa9218f66e9f9648cca9ac4ead73bc116af90fdd9fdd2c11" Nov 28 07:58:18 crc kubenswrapper[4946]: I1128 07:58:18.000941 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45f4c60b-0278-4b39-b437-6c32a1b02761" path="/var/lib/kubelet/pods/45f4c60b-0278-4b39-b437-6c32a1b02761/volumes" Nov 28 07:58:29 crc kubenswrapper[4946]: I1128 07:58:29.990528 4946 scope.go:117] "RemoveContainer" containerID="7876c9fece19df20df891f59364927c85a8d97822e71d9732a4599cf4df3c3ba" Nov 28 07:58:29 crc kubenswrapper[4946]: E1128 07:58:29.991779 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:58:40 crc kubenswrapper[4946]: I1128 07:58:40.990422 4946 scope.go:117] "RemoveContainer" containerID="7876c9fece19df20df891f59364927c85a8d97822e71d9732a4599cf4df3c3ba" Nov 28 07:58:40 crc kubenswrapper[4946]: E1128 07:58:40.991560 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:58:52 crc kubenswrapper[4946]: I1128 07:58:52.957156 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-whrkp"] Nov 28 07:58:52 crc kubenswrapper[4946]: E1128 07:58:52.961407 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45f4c60b-0278-4b39-b437-6c32a1b02761" containerName="extract-content" Nov 28 07:58:52 crc kubenswrapper[4946]: I1128 07:58:52.961458 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="45f4c60b-0278-4b39-b437-6c32a1b02761" containerName="extract-content" Nov 28 07:58:52 crc kubenswrapper[4946]: E1128 07:58:52.961546 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45f4c60b-0278-4b39-b437-6c32a1b02761" containerName="registry-server" Nov 28 07:58:52 crc kubenswrapper[4946]: I1128 07:58:52.961564 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="45f4c60b-0278-4b39-b437-6c32a1b02761" containerName="registry-server" Nov 28 07:58:52 crc 
kubenswrapper[4946]: E1128 07:58:52.961597 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45f4c60b-0278-4b39-b437-6c32a1b02761" containerName="extract-utilities" Nov 28 07:58:52 crc kubenswrapper[4946]: I1128 07:58:52.961612 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="45f4c60b-0278-4b39-b437-6c32a1b02761" containerName="extract-utilities" Nov 28 07:58:52 crc kubenswrapper[4946]: I1128 07:58:52.961859 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="45f4c60b-0278-4b39-b437-6c32a1b02761" containerName="registry-server" Nov 28 07:58:52 crc kubenswrapper[4946]: I1128 07:58:52.963648 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-whrkp" Nov 28 07:58:52 crc kubenswrapper[4946]: I1128 07:58:52.967058 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73987f23-97f0-4a80-99a1-03af7f5344bb-catalog-content\") pod \"redhat-operators-whrkp\" (UID: \"73987f23-97f0-4a80-99a1-03af7f5344bb\") " pod="openshift-marketplace/redhat-operators-whrkp" Nov 28 07:58:52 crc kubenswrapper[4946]: I1128 07:58:52.967203 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73987f23-97f0-4a80-99a1-03af7f5344bb-utilities\") pod \"redhat-operators-whrkp\" (UID: \"73987f23-97f0-4a80-99a1-03af7f5344bb\") " pod="openshift-marketplace/redhat-operators-whrkp" Nov 28 07:58:52 crc kubenswrapper[4946]: I1128 07:58:52.967368 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74swh\" (UniqueName: \"kubernetes.io/projected/73987f23-97f0-4a80-99a1-03af7f5344bb-kube-api-access-74swh\") pod \"redhat-operators-whrkp\" (UID: \"73987f23-97f0-4a80-99a1-03af7f5344bb\") " pod="openshift-marketplace/redhat-operators-whrkp" Nov 28 07:58:52 crc kubenswrapper[4946]: I1128 07:58:52.976511 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-whrkp"] Nov 28 07:58:52 crc kubenswrapper[4946]: I1128 07:58:52.990285 4946 scope.go:117] "RemoveContainer" containerID="7876c9fece19df20df891f59364927c85a8d97822e71d9732a4599cf4df3c3ba" Nov 28 07:58:52 crc kubenswrapper[4946]: E1128 07:58:52.991156 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:58:53 crc kubenswrapper[4946]: I1128 07:58:53.068176 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74swh\" (UniqueName: \"kubernetes.io/projected/73987f23-97f0-4a80-99a1-03af7f5344bb-kube-api-access-74swh\") pod \"redhat-operators-whrkp\" (UID: \"73987f23-97f0-4a80-99a1-03af7f5344bb\") " pod="openshift-marketplace/redhat-operators-whrkp" Nov 28 07:58:53 crc kubenswrapper[4946]: I1128 07:58:53.068415 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73987f23-97f0-4a80-99a1-03af7f5344bb-catalog-content\") pod \"redhat-operators-whrkp\" (UID: 
\"73987f23-97f0-4a80-99a1-03af7f5344bb\") " pod="openshift-marketplace/redhat-operators-whrkp" Nov 28 07:58:53 crc kubenswrapper[4946]: I1128 07:58:53.068552 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73987f23-97f0-4a80-99a1-03af7f5344bb-utilities\") pod \"redhat-operators-whrkp\" (UID: \"73987f23-97f0-4a80-99a1-03af7f5344bb\") " pod="openshift-marketplace/redhat-operators-whrkp" Nov 28 07:58:53 crc kubenswrapper[4946]: I1128 07:58:53.069211 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73987f23-97f0-4a80-99a1-03af7f5344bb-utilities\") pod \"redhat-operators-whrkp\" (UID: \"73987f23-97f0-4a80-99a1-03af7f5344bb\") " pod="openshift-marketplace/redhat-operators-whrkp" Nov 28 07:58:53 crc kubenswrapper[4946]: I1128 07:58:53.069650 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73987f23-97f0-4a80-99a1-03af7f5344bb-catalog-content\") pod \"redhat-operators-whrkp\" (UID: \"73987f23-97f0-4a80-99a1-03af7f5344bb\") " pod="openshift-marketplace/redhat-operators-whrkp" Nov 28 07:58:53 crc kubenswrapper[4946]: I1128 07:58:53.091926 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74swh\" (UniqueName: \"kubernetes.io/projected/73987f23-97f0-4a80-99a1-03af7f5344bb-kube-api-access-74swh\") pod \"redhat-operators-whrkp\" (UID: \"73987f23-97f0-4a80-99a1-03af7f5344bb\") " pod="openshift-marketplace/redhat-operators-whrkp" Nov 28 07:58:53 crc kubenswrapper[4946]: I1128 07:58:53.297509 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-whrkp" Nov 28 07:58:53 crc kubenswrapper[4946]: I1128 07:58:53.776614 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-whrkp"] Nov 28 07:58:54 crc kubenswrapper[4946]: I1128 07:58:54.371084 4946 generic.go:334] "Generic (PLEG): container finished" podID="73987f23-97f0-4a80-99a1-03af7f5344bb" containerID="645de69ddfc7c17281560671e49193b53ae44e1203a8dcec8e85eafa1b79d54b" exitCode=0 Nov 28 07:58:54 crc kubenswrapper[4946]: I1128 07:58:54.371207 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-whrkp" event={"ID":"73987f23-97f0-4a80-99a1-03af7f5344bb","Type":"ContainerDied","Data":"645de69ddfc7c17281560671e49193b53ae44e1203a8dcec8e85eafa1b79d54b"} Nov 28 07:58:54 crc kubenswrapper[4946]: I1128 07:58:54.371586 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-whrkp" event={"ID":"73987f23-97f0-4a80-99a1-03af7f5344bb","Type":"ContainerStarted","Data":"083c845016af1139e021ae26b0c9e53c988a723641c46a4419c790b0ccce1c68"} Nov 28 07:58:55 crc kubenswrapper[4946]: I1128 07:58:55.385786 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-whrkp" event={"ID":"73987f23-97f0-4a80-99a1-03af7f5344bb","Type":"ContainerStarted","Data":"e9a93d1311d7a237935d1e45e9c74b28a4df4f5b89ab45d88cfef8a6cbd56663"} Nov 28 07:58:56 crc kubenswrapper[4946]: I1128 07:58:56.397612 4946 generic.go:334] "Generic (PLEG): container finished" podID="73987f23-97f0-4a80-99a1-03af7f5344bb" containerID="e9a93d1311d7a237935d1e45e9c74b28a4df4f5b89ab45d88cfef8a6cbd56663" exitCode=0 Nov 28 07:58:56 crc kubenswrapper[4946]: I1128 07:58:56.397741 4946 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-operators-whrkp" event={"ID":"73987f23-97f0-4a80-99a1-03af7f5344bb","Type":"ContainerDied","Data":"e9a93d1311d7a237935d1e45e9c74b28a4df4f5b89ab45d88cfef8a6cbd56663"} Nov 28 07:58:57 crc kubenswrapper[4946]: I1128 07:58:57.410996 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-whrkp" event={"ID":"73987f23-97f0-4a80-99a1-03af7f5344bb","Type":"ContainerStarted","Data":"f9143c693c0af6506a3a7695bcee1e5dff7f7809f9b4a942aff42bc974bd6a71"} Nov 28 07:58:57 crc kubenswrapper[4946]: I1128 07:58:57.440903 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-whrkp" podStartSLOduration=2.9769854540000003 podStartE2EDuration="5.440885221s" podCreationTimestamp="2025-11-28 07:58:52 +0000 UTC" firstStartedPulling="2025-11-28 07:58:54.372686622 +0000 UTC m=+3988.750751743" lastFinishedPulling="2025-11-28 07:58:56.836586359 +0000 UTC m=+3991.214651510" observedRunningTime="2025-11-28 07:58:57.439179569 +0000 UTC m=+3991.817244710" watchObservedRunningTime="2025-11-28 07:58:57.440885221 +0000 UTC m=+3991.818950342" Nov 28 07:59:03 crc kubenswrapper[4946]: I1128 07:59:03.297884 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-whrkp" Nov 28 07:59:03 crc kubenswrapper[4946]: I1128 07:59:03.299889 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-whrkp" Nov 28 07:59:04 crc kubenswrapper[4946]: I1128 07:59:04.366751 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-whrkp" podUID="73987f23-97f0-4a80-99a1-03af7f5344bb" containerName="registry-server" probeResult="failure" output=< Nov 28 07:59:04 crc kubenswrapper[4946]: timeout: failed to connect service ":50051" within 1s Nov 28 07:59:04 crc kubenswrapper[4946]: > Nov 28 07:59:05 crc kubenswrapper[4946]: I1128 07:59:05.997887 4946 scope.go:117] "RemoveContainer" containerID="7876c9fece19df20df891f59364927c85a8d97822e71d9732a4599cf4df3c3ba" Nov 28 07:59:06 crc kubenswrapper[4946]: E1128 07:59:05.998903 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:59:13 crc kubenswrapper[4946]: I1128 07:59:13.371081 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-whrkp" Nov 28 07:59:13 crc kubenswrapper[4946]: I1128 07:59:13.435361 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-whrkp" Nov 28 07:59:13 crc kubenswrapper[4946]: I1128 07:59:13.627150 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-whrkp"] Nov 28 07:59:14 crc kubenswrapper[4946]: I1128 07:59:14.578619 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-whrkp" podUID="73987f23-97f0-4a80-99a1-03af7f5344bb" containerName="registry-server" containerID="cri-o://f9143c693c0af6506a3a7695bcee1e5dff7f7809f9b4a942aff42bc974bd6a71" 
gracePeriod=2 Nov 28 07:59:15 crc kubenswrapper[4946]: I1128 07:59:15.035964 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-whrkp" Nov 28 07:59:15 crc kubenswrapper[4946]: I1128 07:59:15.147304 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73987f23-97f0-4a80-99a1-03af7f5344bb-catalog-content\") pod \"73987f23-97f0-4a80-99a1-03af7f5344bb\" (UID: \"73987f23-97f0-4a80-99a1-03af7f5344bb\") " Nov 28 07:59:15 crc kubenswrapper[4946]: I1128 07:59:15.147430 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74swh\" (UniqueName: \"kubernetes.io/projected/73987f23-97f0-4a80-99a1-03af7f5344bb-kube-api-access-74swh\") pod \"73987f23-97f0-4a80-99a1-03af7f5344bb\" (UID: \"73987f23-97f0-4a80-99a1-03af7f5344bb\") " Nov 28 07:59:15 crc kubenswrapper[4946]: I1128 07:59:15.147587 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73987f23-97f0-4a80-99a1-03af7f5344bb-utilities\") pod \"73987f23-97f0-4a80-99a1-03af7f5344bb\" (UID: \"73987f23-97f0-4a80-99a1-03af7f5344bb\") " Nov 28 07:59:15 crc kubenswrapper[4946]: I1128 07:59:15.148848 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73987f23-97f0-4a80-99a1-03af7f5344bb-utilities" (OuterVolumeSpecName: "utilities") pod "73987f23-97f0-4a80-99a1-03af7f5344bb" (UID: "73987f23-97f0-4a80-99a1-03af7f5344bb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:59:15 crc kubenswrapper[4946]: I1128 07:59:15.159749 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73987f23-97f0-4a80-99a1-03af7f5344bb-kube-api-access-74swh" (OuterVolumeSpecName: "kube-api-access-74swh") pod "73987f23-97f0-4a80-99a1-03af7f5344bb" (UID: "73987f23-97f0-4a80-99a1-03af7f5344bb"). InnerVolumeSpecName "kube-api-access-74swh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:59:15 crc kubenswrapper[4946]: I1128 07:59:15.250115 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74swh\" (UniqueName: \"kubernetes.io/projected/73987f23-97f0-4a80-99a1-03af7f5344bb-kube-api-access-74swh\") on node \"crc\" DevicePath \"\"" Nov 28 07:59:15 crc kubenswrapper[4946]: I1128 07:59:15.250159 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73987f23-97f0-4a80-99a1-03af7f5344bb-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 07:59:15 crc kubenswrapper[4946]: I1128 07:59:15.277232 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73987f23-97f0-4a80-99a1-03af7f5344bb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "73987f23-97f0-4a80-99a1-03af7f5344bb" (UID: "73987f23-97f0-4a80-99a1-03af7f5344bb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:59:15 crc kubenswrapper[4946]: I1128 07:59:15.351109 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73987f23-97f0-4a80-99a1-03af7f5344bb-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 07:59:15 crc kubenswrapper[4946]: I1128 07:59:15.592779 4946 generic.go:334] "Generic (PLEG): container finished" podID="73987f23-97f0-4a80-99a1-03af7f5344bb" containerID="f9143c693c0af6506a3a7695bcee1e5dff7f7809f9b4a942aff42bc974bd6a71" exitCode=0 Nov 28 07:59:15 crc kubenswrapper[4946]: I1128 07:59:15.592843 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-whrkp" event={"ID":"73987f23-97f0-4a80-99a1-03af7f5344bb","Type":"ContainerDied","Data":"f9143c693c0af6506a3a7695bcee1e5dff7f7809f9b4a942aff42bc974bd6a71"} Nov 28 07:59:15 crc kubenswrapper[4946]: I1128 07:59:15.592921 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-whrkp" event={"ID":"73987f23-97f0-4a80-99a1-03af7f5344bb","Type":"ContainerDied","Data":"083c845016af1139e021ae26b0c9e53c988a723641c46a4419c790b0ccce1c68"} Nov 28 07:59:15 crc kubenswrapper[4946]: I1128 07:59:15.592930 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-whrkp" Nov 28 07:59:15 crc kubenswrapper[4946]: I1128 07:59:15.592947 4946 scope.go:117] "RemoveContainer" containerID="f9143c693c0af6506a3a7695bcee1e5dff7f7809f9b4a942aff42bc974bd6a71" Nov 28 07:59:15 crc kubenswrapper[4946]: I1128 07:59:15.633063 4946 scope.go:117] "RemoveContainer" containerID="e9a93d1311d7a237935d1e45e9c74b28a4df4f5b89ab45d88cfef8a6cbd56663" Nov 28 07:59:15 crc kubenswrapper[4946]: I1128 07:59:15.651399 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-whrkp"] Nov 28 07:59:15 crc kubenswrapper[4946]: I1128 07:59:15.658323 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-whrkp"] Nov 28 07:59:15 crc kubenswrapper[4946]: I1128 07:59:15.665593 4946 scope.go:117] "RemoveContainer" containerID="645de69ddfc7c17281560671e49193b53ae44e1203a8dcec8e85eafa1b79d54b" Nov 28 07:59:15 crc kubenswrapper[4946]: I1128 07:59:15.706252 4946 scope.go:117] "RemoveContainer" containerID="f9143c693c0af6506a3a7695bcee1e5dff7f7809f9b4a942aff42bc974bd6a71" Nov 28 07:59:15 crc kubenswrapper[4946]: E1128 07:59:15.706790 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9143c693c0af6506a3a7695bcee1e5dff7f7809f9b4a942aff42bc974bd6a71\": container with ID starting with f9143c693c0af6506a3a7695bcee1e5dff7f7809f9b4a942aff42bc974bd6a71 not found: ID does not exist" containerID="f9143c693c0af6506a3a7695bcee1e5dff7f7809f9b4a942aff42bc974bd6a71" Nov 28 07:59:15 crc kubenswrapper[4946]: I1128 07:59:15.706837 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9143c693c0af6506a3a7695bcee1e5dff7f7809f9b4a942aff42bc974bd6a71"} err="failed to get container status \"f9143c693c0af6506a3a7695bcee1e5dff7f7809f9b4a942aff42bc974bd6a71\": rpc error: code = NotFound desc = could not find container \"f9143c693c0af6506a3a7695bcee1e5dff7f7809f9b4a942aff42bc974bd6a71\": container with ID starting with f9143c693c0af6506a3a7695bcee1e5dff7f7809f9b4a942aff42bc974bd6a71 not found: ID does not exist" Nov 28 07:59:15 crc 
kubenswrapper[4946]: I1128 07:59:15.706864 4946 scope.go:117] "RemoveContainer" containerID="e9a93d1311d7a237935d1e45e9c74b28a4df4f5b89ab45d88cfef8a6cbd56663" Nov 28 07:59:15 crc kubenswrapper[4946]: E1128 07:59:15.707262 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9a93d1311d7a237935d1e45e9c74b28a4df4f5b89ab45d88cfef8a6cbd56663\": container with ID starting with e9a93d1311d7a237935d1e45e9c74b28a4df4f5b89ab45d88cfef8a6cbd56663 not found: ID does not exist" containerID="e9a93d1311d7a237935d1e45e9c74b28a4df4f5b89ab45d88cfef8a6cbd56663" Nov 28 07:59:15 crc kubenswrapper[4946]: I1128 07:59:15.707316 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9a93d1311d7a237935d1e45e9c74b28a4df4f5b89ab45d88cfef8a6cbd56663"} err="failed to get container status \"e9a93d1311d7a237935d1e45e9c74b28a4df4f5b89ab45d88cfef8a6cbd56663\": rpc error: code = NotFound desc = could not find container \"e9a93d1311d7a237935d1e45e9c74b28a4df4f5b89ab45d88cfef8a6cbd56663\": container with ID starting with e9a93d1311d7a237935d1e45e9c74b28a4df4f5b89ab45d88cfef8a6cbd56663 not found: ID does not exist" Nov 28 07:59:15 crc kubenswrapper[4946]: I1128 07:59:15.707335 4946 scope.go:117] "RemoveContainer" containerID="645de69ddfc7c17281560671e49193b53ae44e1203a8dcec8e85eafa1b79d54b" Nov 28 07:59:15 crc kubenswrapper[4946]: E1128 07:59:15.707703 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"645de69ddfc7c17281560671e49193b53ae44e1203a8dcec8e85eafa1b79d54b\": container with ID starting with 645de69ddfc7c17281560671e49193b53ae44e1203a8dcec8e85eafa1b79d54b not found: ID does not exist" containerID="645de69ddfc7c17281560671e49193b53ae44e1203a8dcec8e85eafa1b79d54b" Nov 28 07:59:15 crc kubenswrapper[4946]: I1128 07:59:15.707738 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"645de69ddfc7c17281560671e49193b53ae44e1203a8dcec8e85eafa1b79d54b"} err="failed to get container status \"645de69ddfc7c17281560671e49193b53ae44e1203a8dcec8e85eafa1b79d54b\": rpc error: code = NotFound desc = could not find container \"645de69ddfc7c17281560671e49193b53ae44e1203a8dcec8e85eafa1b79d54b\": container with ID starting with 645de69ddfc7c17281560671e49193b53ae44e1203a8dcec8e85eafa1b79d54b not found: ID does not exist" Nov 28 07:59:16 crc kubenswrapper[4946]: I1128 07:59:16.000671 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73987f23-97f0-4a80-99a1-03af7f5344bb" path="/var/lib/kubelet/pods/73987f23-97f0-4a80-99a1-03af7f5344bb/volumes" Nov 28 07:59:20 crc kubenswrapper[4946]: I1128 07:59:20.990375 4946 scope.go:117] "RemoveContainer" containerID="7876c9fece19df20df891f59364927c85a8d97822e71d9732a4599cf4df3c3ba" Nov 28 07:59:20 crc kubenswrapper[4946]: E1128 07:59:20.991826 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:59:36 crc kubenswrapper[4946]: I1128 07:59:36.001839 4946 scope.go:117] "RemoveContainer" containerID="7876c9fece19df20df891f59364927c85a8d97822e71d9732a4599cf4df3c3ba" 
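
The NotFound errors above are benign: by the time the kubelet asks the runtime for the status of each container it is deleting, CRI-O has already removed it, so the "DeleteContainer returned error" lines are logged and then ignored, because an already-gone container is the desired end state. A sketch of that NotFound-tolerant cleanup using gRPC status codes (illustrative, not kubelet's code):

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // removeIfPresent treats codes.NotFound from the runtime as success:
    // the container is already gone, which is what cleanup wanted anyway.
    func removeIfPresent(remove func() error) error {
        err := remove()
        if err == nil || status.Code(err) == codes.NotFound {
            return nil
        }
        return err
    }

    func main() {
        gone := status.Error(codes.NotFound, "could not find container")
        if err := removeIfPresent(func() error { return gone }); err != nil {
            fmt.Println("cleanup failed:", err)
        } else {
            fmt.Println("already removed; nothing to do")
        }
    }
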
Nov 28 07:59:36 crc kubenswrapper[4946]: E1128 07:59:36.002876 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 07:59:50 crc kubenswrapper[4946]: I1128 07:59:50.989913 4946 scope.go:117] "RemoveContainer" containerID="7876c9fece19df20df891f59364927c85a8d97822e71d9732a4599cf4df3c3ba" Nov 28 07:59:50 crc kubenswrapper[4946]: E1128 07:59:50.990970 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:00:00 crc kubenswrapper[4946]: I1128 08:00:00.197605 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405280-c8d4d"] Nov 28 08:00:00 crc kubenswrapper[4946]: E1128 08:00:00.199022 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73987f23-97f0-4a80-99a1-03af7f5344bb" containerName="extract-content" Nov 28 08:00:00 crc kubenswrapper[4946]: I1128 08:00:00.199046 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="73987f23-97f0-4a80-99a1-03af7f5344bb" containerName="extract-content" Nov 28 08:00:00 crc kubenswrapper[4946]: E1128 08:00:00.199071 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73987f23-97f0-4a80-99a1-03af7f5344bb" containerName="extract-utilities" Nov 28 08:00:00 crc kubenswrapper[4946]: I1128 08:00:00.199085 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="73987f23-97f0-4a80-99a1-03af7f5344bb" containerName="extract-utilities" Nov 28 08:00:00 crc kubenswrapper[4946]: E1128 08:00:00.199129 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73987f23-97f0-4a80-99a1-03af7f5344bb" containerName="registry-server" Nov 28 08:00:00 crc kubenswrapper[4946]: I1128 08:00:00.199140 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="73987f23-97f0-4a80-99a1-03af7f5344bb" containerName="registry-server" Nov 28 08:00:00 crc kubenswrapper[4946]: I1128 08:00:00.199359 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="73987f23-97f0-4a80-99a1-03af7f5344bb" containerName="registry-server" Nov 28 08:00:00 crc kubenswrapper[4946]: I1128 08:00:00.200088 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405280-c8d4d" Nov 28 08:00:00 crc kubenswrapper[4946]: I1128 08:00:00.202527 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 28 08:00:00 crc kubenswrapper[4946]: I1128 08:00:00.203042 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 28 08:00:00 crc kubenswrapper[4946]: I1128 08:00:00.205254 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405280-c8d4d"] Nov 28 08:00:00 crc kubenswrapper[4946]: I1128 08:00:00.333109 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18c21b02-adb9-480f-8577-18a5542d3950-config-volume\") pod \"collect-profiles-29405280-c8d4d\" (UID: \"18c21b02-adb9-480f-8577-18a5542d3950\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405280-c8d4d" Nov 28 08:00:00 crc kubenswrapper[4946]: I1128 08:00:00.333192 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76k4f\" (UniqueName: \"kubernetes.io/projected/18c21b02-adb9-480f-8577-18a5542d3950-kube-api-access-76k4f\") pod \"collect-profiles-29405280-c8d4d\" (UID: \"18c21b02-adb9-480f-8577-18a5542d3950\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405280-c8d4d" Nov 28 08:00:00 crc kubenswrapper[4946]: I1128 08:00:00.333265 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/18c21b02-adb9-480f-8577-18a5542d3950-secret-volume\") pod \"collect-profiles-29405280-c8d4d\" (UID: \"18c21b02-adb9-480f-8577-18a5542d3950\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405280-c8d4d" Nov 28 08:00:00 crc kubenswrapper[4946]: I1128 08:00:00.434744 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/18c21b02-adb9-480f-8577-18a5542d3950-secret-volume\") pod \"collect-profiles-29405280-c8d4d\" (UID: \"18c21b02-adb9-480f-8577-18a5542d3950\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405280-c8d4d" Nov 28 08:00:00 crc kubenswrapper[4946]: I1128 08:00:00.435316 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18c21b02-adb9-480f-8577-18a5542d3950-config-volume\") pod \"collect-profiles-29405280-c8d4d\" (UID: \"18c21b02-adb9-480f-8577-18a5542d3950\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405280-c8d4d" Nov 28 08:00:00 crc kubenswrapper[4946]: I1128 08:00:00.435664 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76k4f\" (UniqueName: \"kubernetes.io/projected/18c21b02-adb9-480f-8577-18a5542d3950-kube-api-access-76k4f\") pod \"collect-profiles-29405280-c8d4d\" (UID: \"18c21b02-adb9-480f-8577-18a5542d3950\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405280-c8d4d" Nov 28 08:00:00 crc kubenswrapper[4946]: I1128 08:00:00.436399 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18c21b02-adb9-480f-8577-18a5542d3950-config-volume\") pod 
\"collect-profiles-29405280-c8d4d\" (UID: \"18c21b02-adb9-480f-8577-18a5542d3950\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405280-c8d4d" Nov 28 08:00:00 crc kubenswrapper[4946]: I1128 08:00:00.445311 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/18c21b02-adb9-480f-8577-18a5542d3950-secret-volume\") pod \"collect-profiles-29405280-c8d4d\" (UID: \"18c21b02-adb9-480f-8577-18a5542d3950\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405280-c8d4d" Nov 28 08:00:00 crc kubenswrapper[4946]: I1128 08:00:00.456660 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76k4f\" (UniqueName: \"kubernetes.io/projected/18c21b02-adb9-480f-8577-18a5542d3950-kube-api-access-76k4f\") pod \"collect-profiles-29405280-c8d4d\" (UID: \"18c21b02-adb9-480f-8577-18a5542d3950\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405280-c8d4d" Nov 28 08:00:00 crc kubenswrapper[4946]: I1128 08:00:00.540389 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405280-c8d4d" Nov 28 08:00:00 crc kubenswrapper[4946]: I1128 08:00:00.991623 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405280-c8d4d"] Nov 28 08:00:01 crc kubenswrapper[4946]: I1128 08:00:01.018699 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405280-c8d4d" event={"ID":"18c21b02-adb9-480f-8577-18a5542d3950","Type":"ContainerStarted","Data":"57454ff63feb40116cb009dc665a5c731b27d34562823578a81d73192051003b"} Nov 28 08:00:01 crc kubenswrapper[4946]: I1128 08:00:01.990812 4946 scope.go:117] "RemoveContainer" containerID="7876c9fece19df20df891f59364927c85a8d97822e71d9732a4599cf4df3c3ba" Nov 28 08:00:01 crc kubenswrapper[4946]: E1128 08:00:01.991210 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:00:02 crc kubenswrapper[4946]: I1128 08:00:02.030220 4946 generic.go:334] "Generic (PLEG): container finished" podID="18c21b02-adb9-480f-8577-18a5542d3950" containerID="a49d87de1b2c16571e3b70f8e9bede63c7bfb51b59480392d7780fb6fb294966" exitCode=0 Nov 28 08:00:02 crc kubenswrapper[4946]: I1128 08:00:02.030264 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405280-c8d4d" event={"ID":"18c21b02-adb9-480f-8577-18a5542d3950","Type":"ContainerDied","Data":"a49d87de1b2c16571e3b70f8e9bede63c7bfb51b59480392d7780fb6fb294966"} Nov 28 08:00:03 crc kubenswrapper[4946]: I1128 08:00:03.400050 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405280-c8d4d" Nov 28 08:00:03 crc kubenswrapper[4946]: I1128 08:00:03.481586 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18c21b02-adb9-480f-8577-18a5542d3950-config-volume\") pod \"18c21b02-adb9-480f-8577-18a5542d3950\" (UID: \"18c21b02-adb9-480f-8577-18a5542d3950\") " Nov 28 08:00:03 crc kubenswrapper[4946]: I1128 08:00:03.481652 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76k4f\" (UniqueName: \"kubernetes.io/projected/18c21b02-adb9-480f-8577-18a5542d3950-kube-api-access-76k4f\") pod \"18c21b02-adb9-480f-8577-18a5542d3950\" (UID: \"18c21b02-adb9-480f-8577-18a5542d3950\") " Nov 28 08:00:03 crc kubenswrapper[4946]: I1128 08:00:03.482676 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18c21b02-adb9-480f-8577-18a5542d3950-config-volume" (OuterVolumeSpecName: "config-volume") pod "18c21b02-adb9-480f-8577-18a5542d3950" (UID: "18c21b02-adb9-480f-8577-18a5542d3950"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:00:03 crc kubenswrapper[4946]: I1128 08:00:03.488447 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18c21b02-adb9-480f-8577-18a5542d3950-kube-api-access-76k4f" (OuterVolumeSpecName: "kube-api-access-76k4f") pod "18c21b02-adb9-480f-8577-18a5542d3950" (UID: "18c21b02-adb9-480f-8577-18a5542d3950"). InnerVolumeSpecName "kube-api-access-76k4f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:00:03 crc kubenswrapper[4946]: I1128 08:00:03.583600 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/18c21b02-adb9-480f-8577-18a5542d3950-secret-volume\") pod \"18c21b02-adb9-480f-8577-18a5542d3950\" (UID: \"18c21b02-adb9-480f-8577-18a5542d3950\") " Nov 28 08:00:03 crc kubenswrapper[4946]: I1128 08:00:03.584400 4946 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18c21b02-adb9-480f-8577-18a5542d3950-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 08:00:03 crc kubenswrapper[4946]: I1128 08:00:03.584451 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76k4f\" (UniqueName: \"kubernetes.io/projected/18c21b02-adb9-480f-8577-18a5542d3950-kube-api-access-76k4f\") on node \"crc\" DevicePath \"\"" Nov 28 08:00:03 crc kubenswrapper[4946]: I1128 08:00:03.586821 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18c21b02-adb9-480f-8577-18a5542d3950-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "18c21b02-adb9-480f-8577-18a5542d3950" (UID: "18c21b02-adb9-480f-8577-18a5542d3950"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:00:03 crc kubenswrapper[4946]: I1128 08:00:03.685882 4946 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/18c21b02-adb9-480f-8577-18a5542d3950-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 28 08:00:04 crc kubenswrapper[4946]: I1128 08:00:04.085015 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405280-c8d4d" event={"ID":"18c21b02-adb9-480f-8577-18a5542d3950","Type":"ContainerDied","Data":"57454ff63feb40116cb009dc665a5c731b27d34562823578a81d73192051003b"} Nov 28 08:00:04 crc kubenswrapper[4946]: I1128 08:00:04.085326 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57454ff63feb40116cb009dc665a5c731b27d34562823578a81d73192051003b" Nov 28 08:00:04 crc kubenswrapper[4946]: I1128 08:00:04.085127 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405280-c8d4d" Nov 28 08:00:04 crc kubenswrapper[4946]: I1128 08:00:04.486047 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405235-7j8pc"] Nov 28 08:00:04 crc kubenswrapper[4946]: I1128 08:00:04.494161 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405235-7j8pc"] Nov 28 08:00:06 crc kubenswrapper[4946]: I1128 08:00:06.010818 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f58b01ab-52cf-4bd2-a799-8ca6b0de39c4" path="/var/lib/kubelet/pods/f58b01ab-52cf-4bd2-a799-8ca6b0de39c4/volumes" Nov 28 08:00:13 crc kubenswrapper[4946]: I1128 08:00:13.990666 4946 scope.go:117] "RemoveContainer" containerID="7876c9fece19df20df891f59364927c85a8d97822e71d9732a4599cf4df3c3ba" Nov 28 08:00:13 crc kubenswrapper[4946]: E1128 08:00:13.991915 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:00:24 crc kubenswrapper[4946]: I1128 08:00:24.991643 4946 scope.go:117] "RemoveContainer" containerID="7876c9fece19df20df891f59364927c85a8d97822e71d9732a4599cf4df3c3ba" Nov 28 08:00:24 crc kubenswrapper[4946]: E1128 08:00:24.992761 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:00:33 crc kubenswrapper[4946]: I1128 08:00:33.915441 4946 scope.go:117] "RemoveContainer" containerID="10cb33614b5154ddfb6e59d821c3c8ec7bd2185825823036d0b7bbf3af9f8614" Nov 28 08:00:39 crc kubenswrapper[4946]: I1128 08:00:39.991108 4946 scope.go:117] "RemoveContainer" containerID="7876c9fece19df20df891f59364927c85a8d97822e71d9732a4599cf4df3c3ba" Nov 28 08:00:39 crc kubenswrapper[4946]: E1128 08:00:39.992292 4946 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:00:51 crc kubenswrapper[4946]: I1128 08:00:51.990788 4946 scope.go:117] "RemoveContainer" containerID="7876c9fece19df20df891f59364927c85a8d97822e71d9732a4599cf4df3c3ba" Nov 28 08:00:51 crc kubenswrapper[4946]: E1128 08:00:51.991973 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:01:06 crc kubenswrapper[4946]: I1128 08:01:06.022081 4946 scope.go:117] "RemoveContainer" containerID="7876c9fece19df20df891f59364927c85a8d97822e71d9732a4599cf4df3c3ba" Nov 28 08:01:06 crc kubenswrapper[4946]: I1128 08:01:06.717385 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerStarted","Data":"57ced4d37902c4fc01c7f5bed2b1f2db54d548f22d2fd23d59206e5db4f9ddf1"} Nov 28 08:01:28 crc kubenswrapper[4946]: I1128 08:01:28.346183 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j6jqp"] Nov 28 08:01:28 crc kubenswrapper[4946]: E1128 08:01:28.349935 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18c21b02-adb9-480f-8577-18a5542d3950" containerName="collect-profiles" Nov 28 08:01:28 crc kubenswrapper[4946]: I1128 08:01:28.350098 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="18c21b02-adb9-480f-8577-18a5542d3950" containerName="collect-profiles" Nov 28 08:01:28 crc kubenswrapper[4946]: I1128 08:01:28.350474 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="18c21b02-adb9-480f-8577-18a5542d3950" containerName="collect-profiles" Nov 28 08:01:28 crc kubenswrapper[4946]: I1128 08:01:28.352277 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j6jqp" Nov 28 08:01:28 crc kubenswrapper[4946]: I1128 08:01:28.357864 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j6jqp"] Nov 28 08:01:28 crc kubenswrapper[4946]: I1128 08:01:28.546988 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62nnk\" (UniqueName: \"kubernetes.io/projected/ee142d94-e22c-4459-9bc7-02db777aabc6-kube-api-access-62nnk\") pod \"redhat-marketplace-j6jqp\" (UID: \"ee142d94-e22c-4459-9bc7-02db777aabc6\") " pod="openshift-marketplace/redhat-marketplace-j6jqp" Nov 28 08:01:28 crc kubenswrapper[4946]: I1128 08:01:28.547060 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee142d94-e22c-4459-9bc7-02db777aabc6-catalog-content\") pod \"redhat-marketplace-j6jqp\" (UID: \"ee142d94-e22c-4459-9bc7-02db777aabc6\") " pod="openshift-marketplace/redhat-marketplace-j6jqp" Nov 28 08:01:28 crc kubenswrapper[4946]: I1128 08:01:28.547089 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee142d94-e22c-4459-9bc7-02db777aabc6-utilities\") pod \"redhat-marketplace-j6jqp\" (UID: \"ee142d94-e22c-4459-9bc7-02db777aabc6\") " pod="openshift-marketplace/redhat-marketplace-j6jqp" Nov 28 08:01:28 crc kubenswrapper[4946]: I1128 08:01:28.647760 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62nnk\" (UniqueName: \"kubernetes.io/projected/ee142d94-e22c-4459-9bc7-02db777aabc6-kube-api-access-62nnk\") pod \"redhat-marketplace-j6jqp\" (UID: \"ee142d94-e22c-4459-9bc7-02db777aabc6\") " pod="openshift-marketplace/redhat-marketplace-j6jqp" Nov 28 08:01:28 crc kubenswrapper[4946]: I1128 08:01:28.647813 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee142d94-e22c-4459-9bc7-02db777aabc6-catalog-content\") pod \"redhat-marketplace-j6jqp\" (UID: \"ee142d94-e22c-4459-9bc7-02db777aabc6\") " pod="openshift-marketplace/redhat-marketplace-j6jqp" Nov 28 08:01:28 crc kubenswrapper[4946]: I1128 08:01:28.647835 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee142d94-e22c-4459-9bc7-02db777aabc6-utilities\") pod \"redhat-marketplace-j6jqp\" (UID: \"ee142d94-e22c-4459-9bc7-02db777aabc6\") " pod="openshift-marketplace/redhat-marketplace-j6jqp" Nov 28 08:01:28 crc kubenswrapper[4946]: I1128 08:01:28.648254 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee142d94-e22c-4459-9bc7-02db777aabc6-utilities\") pod \"redhat-marketplace-j6jqp\" (UID: \"ee142d94-e22c-4459-9bc7-02db777aabc6\") " pod="openshift-marketplace/redhat-marketplace-j6jqp" Nov 28 08:01:28 crc kubenswrapper[4946]: I1128 08:01:28.648330 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee142d94-e22c-4459-9bc7-02db777aabc6-catalog-content\") pod \"redhat-marketplace-j6jqp\" (UID: \"ee142d94-e22c-4459-9bc7-02db777aabc6\") " pod="openshift-marketplace/redhat-marketplace-j6jqp" Nov 28 08:01:29 crc kubenswrapper[4946]: I1128 08:01:29.111100 4946 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-62nnk\" (UniqueName: \"kubernetes.io/projected/ee142d94-e22c-4459-9bc7-02db777aabc6-kube-api-access-62nnk\") pod \"redhat-marketplace-j6jqp\" (UID: \"ee142d94-e22c-4459-9bc7-02db777aabc6\") " pod="openshift-marketplace/redhat-marketplace-j6jqp" Nov 28 08:01:29 crc kubenswrapper[4946]: I1128 08:01:29.275868 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j6jqp" Nov 28 08:01:29 crc kubenswrapper[4946]: I1128 08:01:29.519927 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j6jqp"] Nov 28 08:01:29 crc kubenswrapper[4946]: I1128 08:01:29.946814 4946 generic.go:334] "Generic (PLEG): container finished" podID="ee142d94-e22c-4459-9bc7-02db777aabc6" containerID="94fcc3050d542df51ab2bbcb1824f687fdb3a33a1850516120c6c58bc347d06d" exitCode=0 Nov 28 08:01:29 crc kubenswrapper[4946]: I1128 08:01:29.946875 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j6jqp" event={"ID":"ee142d94-e22c-4459-9bc7-02db777aabc6","Type":"ContainerDied","Data":"94fcc3050d542df51ab2bbcb1824f687fdb3a33a1850516120c6c58bc347d06d"} Nov 28 08:01:29 crc kubenswrapper[4946]: I1128 08:01:29.946914 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j6jqp" event={"ID":"ee142d94-e22c-4459-9bc7-02db777aabc6","Type":"ContainerStarted","Data":"e97c80e69a54f6235ce0fe47fe370ab8734005743def7ba9be5835fae2030685"} Nov 28 08:01:31 crc kubenswrapper[4946]: I1128 08:01:31.969941 4946 generic.go:334] "Generic (PLEG): container finished" podID="ee142d94-e22c-4459-9bc7-02db777aabc6" containerID="87e775c861fa594df042a77a65434b3b0889a22ec865851bedafdd9de9cbc65b" exitCode=0 Nov 28 08:01:31 crc kubenswrapper[4946]: I1128 08:01:31.970058 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j6jqp" event={"ID":"ee142d94-e22c-4459-9bc7-02db777aabc6","Type":"ContainerDied","Data":"87e775c861fa594df042a77a65434b3b0889a22ec865851bedafdd9de9cbc65b"} Nov 28 08:01:34 crc kubenswrapper[4946]: I1128 08:01:34.009095 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j6jqp" event={"ID":"ee142d94-e22c-4459-9bc7-02db777aabc6","Type":"ContainerStarted","Data":"111cfdbbd1da94ca177f5dfed444c02046b837379187b9c74331ce3b4a65ff6d"} Nov 28 08:01:34 crc kubenswrapper[4946]: I1128 08:01:34.038554 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j6jqp" podStartSLOduration=3.185322245 podStartE2EDuration="6.038511712s" podCreationTimestamp="2025-11-28 08:01:28 +0000 UTC" firstStartedPulling="2025-11-28 08:01:29.948492301 +0000 UTC m=+4144.326557452" lastFinishedPulling="2025-11-28 08:01:32.801681798 +0000 UTC m=+4147.179746919" observedRunningTime="2025-11-28 08:01:34.029305963 +0000 UTC m=+4148.407371114" watchObservedRunningTime="2025-11-28 08:01:34.038511712 +0000 UTC m=+4148.416576833" Nov 28 08:01:39 crc kubenswrapper[4946]: I1128 08:01:39.276498 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j6jqp" Nov 28 08:01:39 crc kubenswrapper[4946]: I1128 08:01:39.276991 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-j6jqp" Nov 28 08:01:39 crc kubenswrapper[4946]: I1128 08:01:39.331715 4946 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j6jqp" Nov 28 08:01:40 crc kubenswrapper[4946]: I1128 08:01:40.109982 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j6jqp" Nov 28 08:01:40 crc kubenswrapper[4946]: I1128 08:01:40.176414 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j6jqp"] Nov 28 08:01:42 crc kubenswrapper[4946]: I1128 08:01:42.070010 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j6jqp" podUID="ee142d94-e22c-4459-9bc7-02db777aabc6" containerName="registry-server" containerID="cri-o://111cfdbbd1da94ca177f5dfed444c02046b837379187b9c74331ce3b4a65ff6d" gracePeriod=2 Nov 28 08:01:43 crc kubenswrapper[4946]: I1128 08:01:43.080428 4946 generic.go:334] "Generic (PLEG): container finished" podID="ee142d94-e22c-4459-9bc7-02db777aabc6" containerID="111cfdbbd1da94ca177f5dfed444c02046b837379187b9c74331ce3b4a65ff6d" exitCode=0 Nov 28 08:01:43 crc kubenswrapper[4946]: I1128 08:01:43.080529 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j6jqp" event={"ID":"ee142d94-e22c-4459-9bc7-02db777aabc6","Type":"ContainerDied","Data":"111cfdbbd1da94ca177f5dfed444c02046b837379187b9c74331ce3b4a65ff6d"} Nov 28 08:01:43 crc kubenswrapper[4946]: I1128 08:01:43.443601 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j6jqp" Nov 28 08:01:43 crc kubenswrapper[4946]: I1128 08:01:43.521313 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee142d94-e22c-4459-9bc7-02db777aabc6-catalog-content\") pod \"ee142d94-e22c-4459-9bc7-02db777aabc6\" (UID: \"ee142d94-e22c-4459-9bc7-02db777aabc6\") " Nov 28 08:01:43 crc kubenswrapper[4946]: I1128 08:01:43.521515 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62nnk\" (UniqueName: \"kubernetes.io/projected/ee142d94-e22c-4459-9bc7-02db777aabc6-kube-api-access-62nnk\") pod \"ee142d94-e22c-4459-9bc7-02db777aabc6\" (UID: \"ee142d94-e22c-4459-9bc7-02db777aabc6\") " Nov 28 08:01:43 crc kubenswrapper[4946]: I1128 08:01:43.521590 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee142d94-e22c-4459-9bc7-02db777aabc6-utilities\") pod \"ee142d94-e22c-4459-9bc7-02db777aabc6\" (UID: \"ee142d94-e22c-4459-9bc7-02db777aabc6\") " Nov 28 08:01:43 crc kubenswrapper[4946]: I1128 08:01:43.522717 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee142d94-e22c-4459-9bc7-02db777aabc6-utilities" (OuterVolumeSpecName: "utilities") pod "ee142d94-e22c-4459-9bc7-02db777aabc6" (UID: "ee142d94-e22c-4459-9bc7-02db777aabc6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 08:01:43 crc kubenswrapper[4946]: I1128 08:01:43.528599 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee142d94-e22c-4459-9bc7-02db777aabc6-kube-api-access-62nnk" (OuterVolumeSpecName: "kube-api-access-62nnk") pod "ee142d94-e22c-4459-9bc7-02db777aabc6" (UID: "ee142d94-e22c-4459-9bc7-02db777aabc6"). InnerVolumeSpecName "kube-api-access-62nnk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:01:43 crc kubenswrapper[4946]: I1128 08:01:43.548718 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee142d94-e22c-4459-9bc7-02db777aabc6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ee142d94-e22c-4459-9bc7-02db777aabc6" (UID: "ee142d94-e22c-4459-9bc7-02db777aabc6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 08:01:43 crc kubenswrapper[4946]: I1128 08:01:43.623987 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee142d94-e22c-4459-9bc7-02db777aabc6-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 08:01:43 crc kubenswrapper[4946]: I1128 08:01:43.624038 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62nnk\" (UniqueName: \"kubernetes.io/projected/ee142d94-e22c-4459-9bc7-02db777aabc6-kube-api-access-62nnk\") on node \"crc\" DevicePath \"\"" Nov 28 08:01:43 crc kubenswrapper[4946]: I1128 08:01:43.624062 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee142d94-e22c-4459-9bc7-02db777aabc6-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 08:01:44 crc kubenswrapper[4946]: I1128 08:01:44.090403 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j6jqp" event={"ID":"ee142d94-e22c-4459-9bc7-02db777aabc6","Type":"ContainerDied","Data":"e97c80e69a54f6235ce0fe47fe370ab8734005743def7ba9be5835fae2030685"} Nov 28 08:01:44 crc kubenswrapper[4946]: I1128 08:01:44.090481 4946 scope.go:117] "RemoveContainer" containerID="111cfdbbd1da94ca177f5dfed444c02046b837379187b9c74331ce3b4a65ff6d" Nov 28 08:01:44 crc kubenswrapper[4946]: I1128 08:01:44.090639 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j6jqp" Nov 28 08:01:44 crc kubenswrapper[4946]: I1128 08:01:44.123571 4946 scope.go:117] "RemoveContainer" containerID="87e775c861fa594df042a77a65434b3b0889a22ec865851bedafdd9de9cbc65b" Nov 28 08:01:44 crc kubenswrapper[4946]: I1128 08:01:44.125338 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j6jqp"] Nov 28 08:01:44 crc kubenswrapper[4946]: I1128 08:01:44.135694 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j6jqp"] Nov 28 08:01:44 crc kubenswrapper[4946]: I1128 08:01:44.229152 4946 scope.go:117] "RemoveContainer" containerID="94fcc3050d542df51ab2bbcb1824f687fdb3a33a1850516120c6c58bc347d06d" Nov 28 08:01:46 crc kubenswrapper[4946]: I1128 08:01:46.004809 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee142d94-e22c-4459-9bc7-02db777aabc6" path="/var/lib/kubelet/pods/ee142d94-e22c-4459-9bc7-02db777aabc6/volumes" Nov 28 08:03:24 crc kubenswrapper[4946]: I1128 08:03:24.730602 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 08:03:24 crc kubenswrapper[4946]: I1128 08:03:24.731288 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 08:03:54 crc kubenswrapper[4946]: I1128 08:03:54.731056 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 08:03:54 crc kubenswrapper[4946]: I1128 08:03:54.731931 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 08:04:24 crc kubenswrapper[4946]: I1128 08:04:24.731081 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 08:04:24 crc kubenswrapper[4946]: I1128 08:04:24.732107 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 08:04:24 crc kubenswrapper[4946]: I1128 08:04:24.732189 4946 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" Nov 28 08:04:24 crc kubenswrapper[4946]: I1128 08:04:24.733633 4946 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"57ced4d37902c4fc01c7f5bed2b1f2db54d548f22d2fd23d59206e5db4f9ddf1"} pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 08:04:24 crc kubenswrapper[4946]: I1128 08:04:24.733744 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" containerID="cri-o://57ced4d37902c4fc01c7f5bed2b1f2db54d548f22d2fd23d59206e5db4f9ddf1" gracePeriod=600 Nov 28 08:04:25 crc kubenswrapper[4946]: I1128 08:04:25.705112 4946 generic.go:334] "Generic (PLEG): container finished" podID="7450befc-262f-45d1-a5f4-f445e540185b" containerID="57ced4d37902c4fc01c7f5bed2b1f2db54d548f22d2fd23d59206e5db4f9ddf1" exitCode=0 Nov 28 08:04:25 crc kubenswrapper[4946]: I1128 08:04:25.705171 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerDied","Data":"57ced4d37902c4fc01c7f5bed2b1f2db54d548f22d2fd23d59206e5db4f9ddf1"} Nov 28 08:04:25 crc kubenswrapper[4946]: I1128 08:04:25.705774 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerStarted","Data":"bd0bd75046e7d04fa31c11a8223f2791a2ec47bace8e4fbb294896fa7c119efb"} Nov 28 08:04:25 crc kubenswrapper[4946]: I1128 08:04:25.705792 4946 scope.go:117] "RemoveContainer" containerID="7876c9fece19df20df891f59364927c85a8d97822e71d9732a4599cf4df3c3ba" Nov 28 08:04:47 crc kubenswrapper[4946]: I1128 08:04:47.088875 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9czk7"] Nov 28 08:04:47 crc kubenswrapper[4946]: E1128 08:04:47.090109 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee142d94-e22c-4459-9bc7-02db777aabc6" containerName="extract-content" Nov 28 08:04:47 crc kubenswrapper[4946]: I1128 08:04:47.090145 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee142d94-e22c-4459-9bc7-02db777aabc6" containerName="extract-content" Nov 28 08:04:47 crc kubenswrapper[4946]: E1128 08:04:47.090175 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee142d94-e22c-4459-9bc7-02db777aabc6" containerName="extract-utilities" Nov 28 08:04:47 crc kubenswrapper[4946]: I1128 08:04:47.090190 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee142d94-e22c-4459-9bc7-02db777aabc6" containerName="extract-utilities" Nov 28 08:04:47 crc kubenswrapper[4946]: E1128 08:04:47.090226 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee142d94-e22c-4459-9bc7-02db777aabc6" containerName="registry-server" Nov 28 08:04:47 crc kubenswrapper[4946]: I1128 08:04:47.090244 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee142d94-e22c-4459-9bc7-02db777aabc6" containerName="registry-server" Nov 28 08:04:47 crc kubenswrapper[4946]: I1128 08:04:47.090664 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee142d94-e22c-4459-9bc7-02db777aabc6" containerName="registry-server" Nov 28 08:04:47 crc kubenswrapper[4946]: I1128 08:04:47.093585 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9czk7" Nov 28 08:04:47 crc kubenswrapper[4946]: I1128 08:04:47.112273 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9czk7"] Nov 28 08:04:47 crc kubenswrapper[4946]: I1128 08:04:47.204635 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/662282f4-9993-4c1b-b8c0-a505f998cd04-utilities\") pod \"community-operators-9czk7\" (UID: \"662282f4-9993-4c1b-b8c0-a505f998cd04\") " pod="openshift-marketplace/community-operators-9czk7" Nov 28 08:04:47 crc kubenswrapper[4946]: I1128 08:04:47.204726 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjd6p\" (UniqueName: \"kubernetes.io/projected/662282f4-9993-4c1b-b8c0-a505f998cd04-kube-api-access-pjd6p\") pod \"community-operators-9czk7\" (UID: \"662282f4-9993-4c1b-b8c0-a505f998cd04\") " pod="openshift-marketplace/community-operators-9czk7" Nov 28 08:04:47 crc kubenswrapper[4946]: I1128 08:04:47.204768 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/662282f4-9993-4c1b-b8c0-a505f998cd04-catalog-content\") pod \"community-operators-9czk7\" (UID: \"662282f4-9993-4c1b-b8c0-a505f998cd04\") " pod="openshift-marketplace/community-operators-9czk7" Nov 28 08:04:47 crc kubenswrapper[4946]: I1128 08:04:47.306512 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/662282f4-9993-4c1b-b8c0-a505f998cd04-utilities\") pod \"community-operators-9czk7\" (UID: \"662282f4-9993-4c1b-b8c0-a505f998cd04\") " pod="openshift-marketplace/community-operators-9czk7" Nov 28 08:04:47 crc kubenswrapper[4946]: I1128 08:04:47.306858 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjd6p\" (UniqueName: \"kubernetes.io/projected/662282f4-9993-4c1b-b8c0-a505f998cd04-kube-api-access-pjd6p\") pod \"community-operators-9czk7\" (UID: \"662282f4-9993-4c1b-b8c0-a505f998cd04\") " pod="openshift-marketplace/community-operators-9czk7" Nov 28 08:04:47 crc kubenswrapper[4946]: I1128 08:04:47.306888 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/662282f4-9993-4c1b-b8c0-a505f998cd04-catalog-content\") pod \"community-operators-9czk7\" (UID: \"662282f4-9993-4c1b-b8c0-a505f998cd04\") " pod="openshift-marketplace/community-operators-9czk7" Nov 28 08:04:47 crc kubenswrapper[4946]: I1128 08:04:47.307309 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/662282f4-9993-4c1b-b8c0-a505f998cd04-utilities\") pod \"community-operators-9czk7\" (UID: \"662282f4-9993-4c1b-b8c0-a505f998cd04\") " pod="openshift-marketplace/community-operators-9czk7" Nov 28 08:04:47 crc kubenswrapper[4946]: I1128 08:04:47.307327 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/662282f4-9993-4c1b-b8c0-a505f998cd04-catalog-content\") pod \"community-operators-9czk7\" (UID: \"662282f4-9993-4c1b-b8c0-a505f998cd04\") " pod="openshift-marketplace/community-operators-9czk7" Nov 28 08:04:47 crc kubenswrapper[4946]: I1128 08:04:47.327406 4946 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-pjd6p\" (UniqueName: \"kubernetes.io/projected/662282f4-9993-4c1b-b8c0-a505f998cd04-kube-api-access-pjd6p\") pod \"community-operators-9czk7\" (UID: \"662282f4-9993-4c1b-b8c0-a505f998cd04\") " pod="openshift-marketplace/community-operators-9czk7" Nov 28 08:04:47 crc kubenswrapper[4946]: I1128 08:04:47.432502 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9czk7" Nov 28 08:04:47 crc kubenswrapper[4946]: I1128 08:04:47.879278 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9czk7"] Nov 28 08:04:47 crc kubenswrapper[4946]: W1128 08:04:47.889234 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod662282f4_9993_4c1b_b8c0_a505f998cd04.slice/crio-8f03cbcb60b85d454c3aa3dc84bb59d34bc7f60dbd9354a4b27bf92a388e64b1 WatchSource:0}: Error finding container 8f03cbcb60b85d454c3aa3dc84bb59d34bc7f60dbd9354a4b27bf92a388e64b1: Status 404 returned error can't find the container with id 8f03cbcb60b85d454c3aa3dc84bb59d34bc7f60dbd9354a4b27bf92a388e64b1 Nov 28 08:04:47 crc kubenswrapper[4946]: I1128 08:04:47.956519 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9czk7" event={"ID":"662282f4-9993-4c1b-b8c0-a505f998cd04","Type":"ContainerStarted","Data":"8f03cbcb60b85d454c3aa3dc84bb59d34bc7f60dbd9354a4b27bf92a388e64b1"} Nov 28 08:04:48 crc kubenswrapper[4946]: I1128 08:04:48.971559 4946 generic.go:334] "Generic (PLEG): container finished" podID="662282f4-9993-4c1b-b8c0-a505f998cd04" containerID="461d5786583bafda170d7ab8b5cc62ba0933a03ee03a50978567a9d70f371af6" exitCode=0 Nov 28 08:04:48 crc kubenswrapper[4946]: I1128 08:04:48.971640 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9czk7" event={"ID":"662282f4-9993-4c1b-b8c0-a505f998cd04","Type":"ContainerDied","Data":"461d5786583bafda170d7ab8b5cc62ba0933a03ee03a50978567a9d70f371af6"} Nov 28 08:04:48 crc kubenswrapper[4946]: I1128 08:04:48.974432 4946 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 08:04:49 crc kubenswrapper[4946]: I1128 08:04:49.985016 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9czk7" event={"ID":"662282f4-9993-4c1b-b8c0-a505f998cd04","Type":"ContainerStarted","Data":"0666e8968c581fbc6146dcfe80ec31c2ead901a7fb5d5470a3d679a39a6e82e4"} Nov 28 08:04:50 crc kubenswrapper[4946]: I1128 08:04:50.996741 4946 generic.go:334] "Generic (PLEG): container finished" podID="662282f4-9993-4c1b-b8c0-a505f998cd04" containerID="0666e8968c581fbc6146dcfe80ec31c2ead901a7fb5d5470a3d679a39a6e82e4" exitCode=0 Nov 28 08:04:50 crc kubenswrapper[4946]: I1128 08:04:50.996884 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9czk7" event={"ID":"662282f4-9993-4c1b-b8c0-a505f998cd04","Type":"ContainerDied","Data":"0666e8968c581fbc6146dcfe80ec31c2ead901a7fb5d5470a3d679a39a6e82e4"} Nov 28 08:04:52 crc kubenswrapper[4946]: I1128 08:04:52.009349 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9czk7" event={"ID":"662282f4-9993-4c1b-b8c0-a505f998cd04","Type":"ContainerStarted","Data":"ff1957630138f5c58e34ac2ac97dedc291942b959c79dcc9c97c8b302867f997"} Nov 28 08:04:52 crc kubenswrapper[4946]: I1128 
08:04:52.050230 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9czk7" podStartSLOduration=2.561512106 podStartE2EDuration="5.05019377s" podCreationTimestamp="2025-11-28 08:04:47 +0000 UTC" firstStartedPulling="2025-11-28 08:04:48.974028752 +0000 UTC m=+4343.352093893" lastFinishedPulling="2025-11-28 08:04:51.462710446 +0000 UTC m=+4345.840775557" observedRunningTime="2025-11-28 08:04:52.042185741 +0000 UTC m=+4346.420250912" watchObservedRunningTime="2025-11-28 08:04:52.05019377 +0000 UTC m=+4346.428258921" Nov 28 08:04:57 crc kubenswrapper[4946]: I1128 08:04:57.433448 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9czk7" Nov 28 08:04:57 crc kubenswrapper[4946]: I1128 08:04:57.434273 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9czk7" Nov 28 08:04:57 crc kubenswrapper[4946]: I1128 08:04:57.505729 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9czk7" Nov 28 08:04:58 crc kubenswrapper[4946]: I1128 08:04:58.147978 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9czk7" Nov 28 08:04:58 crc kubenswrapper[4946]: I1128 08:04:58.226833 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9czk7"] Nov 28 08:05:00 crc kubenswrapper[4946]: I1128 08:05:00.090637 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9czk7" podUID="662282f4-9993-4c1b-b8c0-a505f998cd04" containerName="registry-server" containerID="cri-o://ff1957630138f5c58e34ac2ac97dedc291942b959c79dcc9c97c8b302867f997" gracePeriod=2 Nov 28 08:05:01 crc kubenswrapper[4946]: I1128 08:05:01.104221 4946 generic.go:334] "Generic (PLEG): container finished" podID="662282f4-9993-4c1b-b8c0-a505f998cd04" containerID="ff1957630138f5c58e34ac2ac97dedc291942b959c79dcc9c97c8b302867f997" exitCode=0 Nov 28 08:05:01 crc kubenswrapper[4946]: I1128 08:05:01.104292 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9czk7" event={"ID":"662282f4-9993-4c1b-b8c0-a505f998cd04","Type":"ContainerDied","Data":"ff1957630138f5c58e34ac2ac97dedc291942b959c79dcc9c97c8b302867f997"} Nov 28 08:05:01 crc kubenswrapper[4946]: I1128 08:05:01.104631 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9czk7" event={"ID":"662282f4-9993-4c1b-b8c0-a505f998cd04","Type":"ContainerDied","Data":"8f03cbcb60b85d454c3aa3dc84bb59d34bc7f60dbd9354a4b27bf92a388e64b1"} Nov 28 08:05:01 crc kubenswrapper[4946]: I1128 08:05:01.104649 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f03cbcb60b85d454c3aa3dc84bb59d34bc7f60dbd9354a4b27bf92a388e64b1" Nov 28 08:05:01 crc kubenswrapper[4946]: I1128 08:05:01.110066 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9czk7" Nov 28 08:05:01 crc kubenswrapper[4946]: I1128 08:05:01.265641 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjd6p\" (UniqueName: \"kubernetes.io/projected/662282f4-9993-4c1b-b8c0-a505f998cd04-kube-api-access-pjd6p\") pod \"662282f4-9993-4c1b-b8c0-a505f998cd04\" (UID: \"662282f4-9993-4c1b-b8c0-a505f998cd04\") " Nov 28 08:05:01 crc kubenswrapper[4946]: I1128 08:05:01.265755 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/662282f4-9993-4c1b-b8c0-a505f998cd04-utilities\") pod \"662282f4-9993-4c1b-b8c0-a505f998cd04\" (UID: \"662282f4-9993-4c1b-b8c0-a505f998cd04\") " Nov 28 08:05:01 crc kubenswrapper[4946]: I1128 08:05:01.265898 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/662282f4-9993-4c1b-b8c0-a505f998cd04-catalog-content\") pod \"662282f4-9993-4c1b-b8c0-a505f998cd04\" (UID: \"662282f4-9993-4c1b-b8c0-a505f998cd04\") " Nov 28 08:05:01 crc kubenswrapper[4946]: I1128 08:05:01.267983 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/662282f4-9993-4c1b-b8c0-a505f998cd04-utilities" (OuterVolumeSpecName: "utilities") pod "662282f4-9993-4c1b-b8c0-a505f998cd04" (UID: "662282f4-9993-4c1b-b8c0-a505f998cd04"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 08:05:01 crc kubenswrapper[4946]: I1128 08:05:01.271455 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/662282f4-9993-4c1b-b8c0-a505f998cd04-kube-api-access-pjd6p" (OuterVolumeSpecName: "kube-api-access-pjd6p") pod "662282f4-9993-4c1b-b8c0-a505f998cd04" (UID: "662282f4-9993-4c1b-b8c0-a505f998cd04"). InnerVolumeSpecName "kube-api-access-pjd6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:05:01 crc kubenswrapper[4946]: I1128 08:05:01.333915 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/662282f4-9993-4c1b-b8c0-a505f998cd04-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "662282f4-9993-4c1b-b8c0-a505f998cd04" (UID: "662282f4-9993-4c1b-b8c0-a505f998cd04"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 08:05:01 crc kubenswrapper[4946]: I1128 08:05:01.367651 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/662282f4-9993-4c1b-b8c0-a505f998cd04-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 08:05:01 crc kubenswrapper[4946]: I1128 08:05:01.367694 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjd6p\" (UniqueName: \"kubernetes.io/projected/662282f4-9993-4c1b-b8c0-a505f998cd04-kube-api-access-pjd6p\") on node \"crc\" DevicePath \"\"" Nov 28 08:05:01 crc kubenswrapper[4946]: I1128 08:05:01.367705 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/662282f4-9993-4c1b-b8c0-a505f998cd04-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 08:05:02 crc kubenswrapper[4946]: I1128 08:05:02.113371 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9czk7" Nov 28 08:05:02 crc kubenswrapper[4946]: I1128 08:05:02.150307 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9czk7"] Nov 28 08:05:02 crc kubenswrapper[4946]: I1128 08:05:02.160733 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9czk7"] Nov 28 08:05:04 crc kubenswrapper[4946]: I1128 08:05:04.003550 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="662282f4-9993-4c1b-b8c0-a505f998cd04" path="/var/lib/kubelet/pods/662282f4-9993-4c1b-b8c0-a505f998cd04/volumes" Nov 28 08:06:54 crc kubenswrapper[4946]: I1128 08:06:54.730964 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 08:06:54 crc kubenswrapper[4946]: I1128 08:06:54.731801 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 08:07:24 crc kubenswrapper[4946]: I1128 08:07:24.731244 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 08:07:24 crc kubenswrapper[4946]: I1128 08:07:24.731891 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 08:07:54 crc kubenswrapper[4946]: I1128 08:07:54.730653 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 08:07:54 crc kubenswrapper[4946]: I1128 08:07:54.731522 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 08:07:54 crc kubenswrapper[4946]: I1128 08:07:54.731609 4946 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" Nov 28 08:07:54 crc kubenswrapper[4946]: I1128 08:07:54.732717 4946 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bd0bd75046e7d04fa31c11a8223f2791a2ec47bace8e4fbb294896fa7c119efb"} pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" containerMessage="Container machine-config-daemon failed liveness probe, will be 
restarted" Nov 28 08:07:54 crc kubenswrapper[4946]: I1128 08:07:54.732898 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" containerID="cri-o://bd0bd75046e7d04fa31c11a8223f2791a2ec47bace8e4fbb294896fa7c119efb" gracePeriod=600 Nov 28 08:07:54 crc kubenswrapper[4946]: E1128 08:07:54.857194 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:07:55 crc kubenswrapper[4946]: I1128 08:07:55.802730 4946 generic.go:334] "Generic (PLEG): container finished" podID="7450befc-262f-45d1-a5f4-f445e540185b" containerID="bd0bd75046e7d04fa31c11a8223f2791a2ec47bace8e4fbb294896fa7c119efb" exitCode=0 Nov 28 08:07:55 crc kubenswrapper[4946]: I1128 08:07:55.802801 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerDied","Data":"bd0bd75046e7d04fa31c11a8223f2791a2ec47bace8e4fbb294896fa7c119efb"} Nov 28 08:07:55 crc kubenswrapper[4946]: I1128 08:07:55.803194 4946 scope.go:117] "RemoveContainer" containerID="57ced4d37902c4fc01c7f5bed2b1f2db54d548f22d2fd23d59206e5db4f9ddf1" Nov 28 08:07:55 crc kubenswrapper[4946]: I1128 08:07:55.803677 4946 scope.go:117] "RemoveContainer" containerID="bd0bd75046e7d04fa31c11a8223f2791a2ec47bace8e4fbb294896fa7c119efb" Nov 28 08:07:55 crc kubenswrapper[4946]: E1128 08:07:55.803947 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:08:07 crc kubenswrapper[4946]: I1128 08:08:07.990637 4946 scope.go:117] "RemoveContainer" containerID="bd0bd75046e7d04fa31c11a8223f2791a2ec47bace8e4fbb294896fa7c119efb" Nov 28 08:08:07 crc kubenswrapper[4946]: E1128 08:08:07.991776 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:08:22 crc kubenswrapper[4946]: I1128 08:08:22.989504 4946 scope.go:117] "RemoveContainer" containerID="bd0bd75046e7d04fa31c11a8223f2791a2ec47bace8e4fbb294896fa7c119efb" Nov 28 08:08:22 crc kubenswrapper[4946]: E1128 08:08:22.990397 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Nov 28 08:08:34 crc kubenswrapper[4946]: I1128 08:08:34.990735 4946 scope.go:117] "RemoveContainer" containerID="bd0bd75046e7d04fa31c11a8223f2791a2ec47bace8e4fbb294896fa7c119efb"
Nov 28 08:08:34 crc kubenswrapper[4946]: E1128 08:08:34.991761 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
Nov 28 08:08:47 crc kubenswrapper[4946]: I1128 08:08:47.991412 4946 scope.go:117] "RemoveContainer" containerID="bd0bd75046e7d04fa31c11a8223f2791a2ec47bace8e4fbb294896fa7c119efb"
Nov 28 08:08:47 crc kubenswrapper[4946]: E1128 08:08:47.992619 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
Nov 28 08:08:59 crc kubenswrapper[4946]: I1128 08:08:59.990622 4946 scope.go:117] "RemoveContainer" containerID="bd0bd75046e7d04fa31c11a8223f2791a2ec47bace8e4fbb294896fa7c119efb"
Nov 28 08:08:59 crc kubenswrapper[4946]: E1128 08:08:59.991554 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
Nov 28 08:09:09 crc kubenswrapper[4946]: I1128 08:09:09.431359 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-n6hk6"]
Nov 28 08:09:09 crc kubenswrapper[4946]: E1128 08:09:09.432178 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="662282f4-9993-4c1b-b8c0-a505f998cd04" containerName="extract-utilities"
Nov 28 08:09:09 crc kubenswrapper[4946]: I1128 08:09:09.432191 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="662282f4-9993-4c1b-b8c0-a505f998cd04" containerName="extract-utilities"
Nov 28 08:09:09 crc kubenswrapper[4946]: E1128 08:09:09.432216 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="662282f4-9993-4c1b-b8c0-a505f998cd04" containerName="registry-server"
Nov 28 08:09:09 crc kubenswrapper[4946]: I1128 08:09:09.432222 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="662282f4-9993-4c1b-b8c0-a505f998cd04" containerName="registry-server"
Nov 28 08:09:09 crc kubenswrapper[4946]: E1128 08:09:09.432234 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="662282f4-9993-4c1b-b8c0-a505f998cd04" containerName="extract-content"
Nov 28 08:09:09 crc kubenswrapper[4946]: I1128 08:09:09.432241 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="662282f4-9993-4c1b-b8c0-a505f998cd04" containerName="extract-content"
podUID="662282f4-9993-4c1b-b8c0-a505f998cd04" containerName="extract-content" Nov 28 08:09:09 crc kubenswrapper[4946]: I1128 08:09:09.432380 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="662282f4-9993-4c1b-b8c0-a505f998cd04" containerName="registry-server" Nov 28 08:09:09 crc kubenswrapper[4946]: I1128 08:09:09.433363 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n6hk6" Nov 28 08:09:09 crc kubenswrapper[4946]: I1128 08:09:09.463621 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n6hk6"] Nov 28 08:09:09 crc kubenswrapper[4946]: I1128 08:09:09.594060 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/115259a3-150f-427c-a731-deb8901def10-catalog-content\") pod \"redhat-operators-n6hk6\" (UID: \"115259a3-150f-427c-a731-deb8901def10\") " pod="openshift-marketplace/redhat-operators-n6hk6" Nov 28 08:09:09 crc kubenswrapper[4946]: I1128 08:09:09.594158 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/115259a3-150f-427c-a731-deb8901def10-utilities\") pod \"redhat-operators-n6hk6\" (UID: \"115259a3-150f-427c-a731-deb8901def10\") " pod="openshift-marketplace/redhat-operators-n6hk6" Nov 28 08:09:09 crc kubenswrapper[4946]: I1128 08:09:09.594344 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgrmq\" (UniqueName: \"kubernetes.io/projected/115259a3-150f-427c-a731-deb8901def10-kube-api-access-rgrmq\") pod \"redhat-operators-n6hk6\" (UID: \"115259a3-150f-427c-a731-deb8901def10\") " pod="openshift-marketplace/redhat-operators-n6hk6" Nov 28 08:09:09 crc kubenswrapper[4946]: I1128 08:09:09.695708 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/115259a3-150f-427c-a731-deb8901def10-catalog-content\") pod \"redhat-operators-n6hk6\" (UID: \"115259a3-150f-427c-a731-deb8901def10\") " pod="openshift-marketplace/redhat-operators-n6hk6" Nov 28 08:09:09 crc kubenswrapper[4946]: I1128 08:09:09.695774 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/115259a3-150f-427c-a731-deb8901def10-utilities\") pod \"redhat-operators-n6hk6\" (UID: \"115259a3-150f-427c-a731-deb8901def10\") " pod="openshift-marketplace/redhat-operators-n6hk6" Nov 28 08:09:09 crc kubenswrapper[4946]: I1128 08:09:09.696403 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/115259a3-150f-427c-a731-deb8901def10-utilities\") pod \"redhat-operators-n6hk6\" (UID: \"115259a3-150f-427c-a731-deb8901def10\") " pod="openshift-marketplace/redhat-operators-n6hk6" Nov 28 08:09:09 crc kubenswrapper[4946]: I1128 08:09:09.696592 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/115259a3-150f-427c-a731-deb8901def10-catalog-content\") pod \"redhat-operators-n6hk6\" (UID: \"115259a3-150f-427c-a731-deb8901def10\") " pod="openshift-marketplace/redhat-operators-n6hk6" Nov 28 08:09:09 crc kubenswrapper[4946]: I1128 08:09:09.697396 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-rgrmq\" (UniqueName: \"kubernetes.io/projected/115259a3-150f-427c-a731-deb8901def10-kube-api-access-rgrmq\") pod \"redhat-operators-n6hk6\" (UID: \"115259a3-150f-427c-a731-deb8901def10\") " pod="openshift-marketplace/redhat-operators-n6hk6" Nov 28 08:09:09 crc kubenswrapper[4946]: I1128 08:09:09.734363 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgrmq\" (UniqueName: \"kubernetes.io/projected/115259a3-150f-427c-a731-deb8901def10-kube-api-access-rgrmq\") pod \"redhat-operators-n6hk6\" (UID: \"115259a3-150f-427c-a731-deb8901def10\") " pod="openshift-marketplace/redhat-operators-n6hk6" Nov 28 08:09:09 crc kubenswrapper[4946]: I1128 08:09:09.758041 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n6hk6" Nov 28 08:09:10 crc kubenswrapper[4946]: I1128 08:09:10.251512 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n6hk6"] Nov 28 08:09:10 crc kubenswrapper[4946]: I1128 08:09:10.520947 4946 generic.go:334] "Generic (PLEG): container finished" podID="115259a3-150f-427c-a731-deb8901def10" containerID="33543c30bfd14144075ec2918a22073ee6d79b22e6828e79db54b51d68f45d18" exitCode=0 Nov 28 08:09:10 crc kubenswrapper[4946]: I1128 08:09:10.520983 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n6hk6" event={"ID":"115259a3-150f-427c-a731-deb8901def10","Type":"ContainerDied","Data":"33543c30bfd14144075ec2918a22073ee6d79b22e6828e79db54b51d68f45d18"} Nov 28 08:09:10 crc kubenswrapper[4946]: I1128 08:09:10.521260 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n6hk6" event={"ID":"115259a3-150f-427c-a731-deb8901def10","Type":"ContainerStarted","Data":"b598e8ffce625416928cab0b652ddc6fa12c7d52dc6d088840abde9080a88e9c"} Nov 28 08:09:11 crc kubenswrapper[4946]: I1128 08:09:11.544231 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n6hk6" event={"ID":"115259a3-150f-427c-a731-deb8901def10","Type":"ContainerStarted","Data":"2c3e9964bf9c87c5f42f012418977b57004255b21dd40d95103514601bd5294d"} Nov 28 08:09:12 crc kubenswrapper[4946]: I1128 08:09:12.554928 4946 generic.go:334] "Generic (PLEG): container finished" podID="115259a3-150f-427c-a731-deb8901def10" containerID="2c3e9964bf9c87c5f42f012418977b57004255b21dd40d95103514601bd5294d" exitCode=0 Nov 28 08:09:12 crc kubenswrapper[4946]: I1128 08:09:12.555059 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n6hk6" event={"ID":"115259a3-150f-427c-a731-deb8901def10","Type":"ContainerDied","Data":"2c3e9964bf9c87c5f42f012418977b57004255b21dd40d95103514601bd5294d"} Nov 28 08:09:12 crc kubenswrapper[4946]: I1128 08:09:12.990610 4946 scope.go:117] "RemoveContainer" containerID="bd0bd75046e7d04fa31c11a8223f2791a2ec47bace8e4fbb294896fa7c119efb" Nov 28 08:09:12 crc kubenswrapper[4946]: E1128 08:09:12.991092 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:09:13 crc kubenswrapper[4946]: I1128 08:09:13.566733 4946 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n6hk6" event={"ID":"115259a3-150f-427c-a731-deb8901def10","Type":"ContainerStarted","Data":"52beeb8baa1dfcc9a1f6ebc6703ae0003c6492664738c591a2f590d86db58c11"} Nov 28 08:09:19 crc kubenswrapper[4946]: I1128 08:09:19.758575 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-n6hk6" Nov 28 08:09:19 crc kubenswrapper[4946]: I1128 08:09:19.759027 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-n6hk6" Nov 28 08:09:20 crc kubenswrapper[4946]: I1128 08:09:20.804287 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n6hk6" podUID="115259a3-150f-427c-a731-deb8901def10" containerName="registry-server" probeResult="failure" output=< Nov 28 08:09:20 crc kubenswrapper[4946]: timeout: failed to connect service ":50051" within 1s Nov 28 08:09:20 crc kubenswrapper[4946]: > Nov 28 08:09:27 crc kubenswrapper[4946]: I1128 08:09:27.990611 4946 scope.go:117] "RemoveContainer" containerID="bd0bd75046e7d04fa31c11a8223f2791a2ec47bace8e4fbb294896fa7c119efb" Nov 28 08:09:27 crc kubenswrapper[4946]: E1128 08:09:27.992907 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:09:29 crc kubenswrapper[4946]: I1128 08:09:29.822044 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-n6hk6" Nov 28 08:09:29 crc kubenswrapper[4946]: I1128 08:09:29.852837 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-n6hk6" podStartSLOduration=18.32241901 podStartE2EDuration="20.852812261s" podCreationTimestamp="2025-11-28 08:09:09 +0000 UTC" firstStartedPulling="2025-11-28 08:09:10.534964965 +0000 UTC m=+4604.913030076" lastFinishedPulling="2025-11-28 08:09:13.065358186 +0000 UTC m=+4607.443423327" observedRunningTime="2025-11-28 08:09:13.5892994 +0000 UTC m=+4607.967364521" watchObservedRunningTime="2025-11-28 08:09:29.852812261 +0000 UTC m=+4624.230877382" Nov 28 08:09:29 crc kubenswrapper[4946]: I1128 08:09:29.880226 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-n6hk6" Nov 28 08:09:30 crc kubenswrapper[4946]: I1128 08:09:30.059880 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n6hk6"] Nov 28 08:09:31 crc kubenswrapper[4946]: I1128 08:09:31.730273 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-n6hk6" podUID="115259a3-150f-427c-a731-deb8901def10" containerName="registry-server" containerID="cri-o://52beeb8baa1dfcc9a1f6ebc6703ae0003c6492664738c591a2f590d86db58c11" gracePeriod=2 Nov 28 08:09:32 crc kubenswrapper[4946]: I1128 08:09:32.147851 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-n6hk6" Nov 28 08:09:32 crc kubenswrapper[4946]: I1128 08:09:32.172932 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgrmq\" (UniqueName: \"kubernetes.io/projected/115259a3-150f-427c-a731-deb8901def10-kube-api-access-rgrmq\") pod \"115259a3-150f-427c-a731-deb8901def10\" (UID: \"115259a3-150f-427c-a731-deb8901def10\") " Nov 28 08:09:32 crc kubenswrapper[4946]: I1128 08:09:32.187528 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/115259a3-150f-427c-a731-deb8901def10-kube-api-access-rgrmq" (OuterVolumeSpecName: "kube-api-access-rgrmq") pod "115259a3-150f-427c-a731-deb8901def10" (UID: "115259a3-150f-427c-a731-deb8901def10"). InnerVolumeSpecName "kube-api-access-rgrmq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:09:32 crc kubenswrapper[4946]: I1128 08:09:32.274412 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/115259a3-150f-427c-a731-deb8901def10-catalog-content\") pod \"115259a3-150f-427c-a731-deb8901def10\" (UID: \"115259a3-150f-427c-a731-deb8901def10\") " Nov 28 08:09:32 crc kubenswrapper[4946]: I1128 08:09:32.274486 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/115259a3-150f-427c-a731-deb8901def10-utilities\") pod \"115259a3-150f-427c-a731-deb8901def10\" (UID: \"115259a3-150f-427c-a731-deb8901def10\") " Nov 28 08:09:32 crc kubenswrapper[4946]: I1128 08:09:32.274957 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgrmq\" (UniqueName: \"kubernetes.io/projected/115259a3-150f-427c-a731-deb8901def10-kube-api-access-rgrmq\") on node \"crc\" DevicePath \"\"" Nov 28 08:09:32 crc kubenswrapper[4946]: I1128 08:09:32.275863 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/115259a3-150f-427c-a731-deb8901def10-utilities" (OuterVolumeSpecName: "utilities") pod "115259a3-150f-427c-a731-deb8901def10" (UID: "115259a3-150f-427c-a731-deb8901def10"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 08:09:32 crc kubenswrapper[4946]: I1128 08:09:32.375816 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/115259a3-150f-427c-a731-deb8901def10-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 08:09:32 crc kubenswrapper[4946]: I1128 08:09:32.410333 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/115259a3-150f-427c-a731-deb8901def10-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "115259a3-150f-427c-a731-deb8901def10" (UID: "115259a3-150f-427c-a731-deb8901def10"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 08:09:32 crc kubenswrapper[4946]: I1128 08:09:32.478073 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/115259a3-150f-427c-a731-deb8901def10-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 08:09:32 crc kubenswrapper[4946]: I1128 08:09:32.752407 4946 generic.go:334] "Generic (PLEG): container finished" podID="115259a3-150f-427c-a731-deb8901def10" containerID="52beeb8baa1dfcc9a1f6ebc6703ae0003c6492664738c591a2f590d86db58c11" exitCode=0 Nov 28 08:09:32 crc kubenswrapper[4946]: I1128 08:09:32.752517 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n6hk6" event={"ID":"115259a3-150f-427c-a731-deb8901def10","Type":"ContainerDied","Data":"52beeb8baa1dfcc9a1f6ebc6703ae0003c6492664738c591a2f590d86db58c11"} Nov 28 08:09:32 crc kubenswrapper[4946]: I1128 08:09:32.752577 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n6hk6" event={"ID":"115259a3-150f-427c-a731-deb8901def10","Type":"ContainerDied","Data":"b598e8ffce625416928cab0b652ddc6fa12c7d52dc6d088840abde9080a88e9c"} Nov 28 08:09:32 crc kubenswrapper[4946]: I1128 08:09:32.752603 4946 scope.go:117] "RemoveContainer" containerID="52beeb8baa1dfcc9a1f6ebc6703ae0003c6492664738c591a2f590d86db58c11" Nov 28 08:09:32 crc kubenswrapper[4946]: I1128 08:09:32.752603 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n6hk6" Nov 28 08:09:32 crc kubenswrapper[4946]: I1128 08:09:32.784411 4946 scope.go:117] "RemoveContainer" containerID="2c3e9964bf9c87c5f42f012418977b57004255b21dd40d95103514601bd5294d" Nov 28 08:09:32 crc kubenswrapper[4946]: I1128 08:09:32.805969 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n6hk6"] Nov 28 08:09:32 crc kubenswrapper[4946]: I1128 08:09:32.816498 4946 scope.go:117] "RemoveContainer" containerID="33543c30bfd14144075ec2918a22073ee6d79b22e6828e79db54b51d68f45d18" Nov 28 08:09:32 crc kubenswrapper[4946]: I1128 08:09:32.817227 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-n6hk6"] Nov 28 08:09:32 crc kubenswrapper[4946]: I1128 08:09:32.856355 4946 scope.go:117] "RemoveContainer" containerID="52beeb8baa1dfcc9a1f6ebc6703ae0003c6492664738c591a2f590d86db58c11" Nov 28 08:09:32 crc kubenswrapper[4946]: E1128 08:09:32.857347 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52beeb8baa1dfcc9a1f6ebc6703ae0003c6492664738c591a2f590d86db58c11\": container with ID starting with 52beeb8baa1dfcc9a1f6ebc6703ae0003c6492664738c591a2f590d86db58c11 not found: ID does not exist" containerID="52beeb8baa1dfcc9a1f6ebc6703ae0003c6492664738c591a2f590d86db58c11" Nov 28 08:09:32 crc kubenswrapper[4946]: I1128 08:09:32.857429 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52beeb8baa1dfcc9a1f6ebc6703ae0003c6492664738c591a2f590d86db58c11"} err="failed to get container status \"52beeb8baa1dfcc9a1f6ebc6703ae0003c6492664738c591a2f590d86db58c11\": rpc error: code = NotFound desc = could not find container \"52beeb8baa1dfcc9a1f6ebc6703ae0003c6492664738c591a2f590d86db58c11\": container with ID starting with 52beeb8baa1dfcc9a1f6ebc6703ae0003c6492664738c591a2f590d86db58c11 not found: ID does not exist" Nov 28 08:09:32 crc 
Nov 28 08:09:32 crc kubenswrapper[4946]: E1128 08:09:32.858147 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c3e9964bf9c87c5f42f012418977b57004255b21dd40d95103514601bd5294d\": container with ID starting with 2c3e9964bf9c87c5f42f012418977b57004255b21dd40d95103514601bd5294d not found: ID does not exist" containerID="2c3e9964bf9c87c5f42f012418977b57004255b21dd40d95103514601bd5294d"
Nov 28 08:09:32 crc kubenswrapper[4946]: I1128 08:09:32.858189 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c3e9964bf9c87c5f42f012418977b57004255b21dd40d95103514601bd5294d"} err="failed to get container status \"2c3e9964bf9c87c5f42f012418977b57004255b21dd40d95103514601bd5294d\": rpc error: code = NotFound desc = could not find container \"2c3e9964bf9c87c5f42f012418977b57004255b21dd40d95103514601bd5294d\": container with ID starting with 2c3e9964bf9c87c5f42f012418977b57004255b21dd40d95103514601bd5294d not found: ID does not exist"
Nov 28 08:09:32 crc kubenswrapper[4946]: I1128 08:09:32.858218 4946 scope.go:117] "RemoveContainer" containerID="33543c30bfd14144075ec2918a22073ee6d79b22e6828e79db54b51d68f45d18"
Nov 28 08:09:32 crc kubenswrapper[4946]: E1128 08:09:32.858896 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33543c30bfd14144075ec2918a22073ee6d79b22e6828e79db54b51d68f45d18\": container with ID starting with 33543c30bfd14144075ec2918a22073ee6d79b22e6828e79db54b51d68f45d18 not found: ID does not exist" containerID="33543c30bfd14144075ec2918a22073ee6d79b22e6828e79db54b51d68f45d18"
Nov 28 08:09:32 crc kubenswrapper[4946]: I1128 08:09:32.858940 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33543c30bfd14144075ec2918a22073ee6d79b22e6828e79db54b51d68f45d18"} err="failed to get container status \"33543c30bfd14144075ec2918a22073ee6d79b22e6828e79db54b51d68f45d18\": rpc error: code = NotFound desc = could not find container \"33543c30bfd14144075ec2918a22073ee6d79b22e6828e79db54b51d68f45d18\": container with ID starting with 33543c30bfd14144075ec2918a22073ee6d79b22e6828e79db54b51d68f45d18 not found: ID does not exist"
Nov 28 08:09:34 crc kubenswrapper[4946]: I1128 08:09:34.000315 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="115259a3-150f-427c-a731-deb8901def10" path="/var/lib/kubelet/pods/115259a3-150f-427c-a731-deb8901def10/volumes"
Nov 28 08:09:38 crc kubenswrapper[4946]: I1128 08:09:38.990264 4946 scope.go:117] "RemoveContainer" containerID="bd0bd75046e7d04fa31c11a8223f2791a2ec47bace8e4fbb294896fa7c119efb"
Nov 28 08:09:38 crc kubenswrapper[4946]: E1128 08:09:38.990992 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
Nov 28 08:09:49 crc kubenswrapper[4946]: I1128 08:09:49.810204 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r8mzk"]
Nov 28 08:09:49 crc kubenswrapper[4946]: E1128 08:09:49.811221 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="115259a3-150f-427c-a731-deb8901def10" containerName="extract-utilities"
Nov 28 08:09:49 crc kubenswrapper[4946]: I1128 08:09:49.811241 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="115259a3-150f-427c-a731-deb8901def10" containerName="extract-utilities"
Nov 28 08:09:49 crc kubenswrapper[4946]: E1128 08:09:49.811263 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="115259a3-150f-427c-a731-deb8901def10" containerName="registry-server"
Nov 28 08:09:49 crc kubenswrapper[4946]: I1128 08:09:49.811275 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="115259a3-150f-427c-a731-deb8901def10" containerName="registry-server"
Nov 28 08:09:49 crc kubenswrapper[4946]: E1128 08:09:49.811297 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="115259a3-150f-427c-a731-deb8901def10" containerName="extract-content"
Nov 28 08:09:49 crc kubenswrapper[4946]: I1128 08:09:49.811309 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="115259a3-150f-427c-a731-deb8901def10" containerName="extract-content"
Nov 28 08:09:49 crc kubenswrapper[4946]: I1128 08:09:49.811619 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="115259a3-150f-427c-a731-deb8901def10" containerName="registry-server"
Nov 28 08:09:49 crc kubenswrapper[4946]: I1128 08:09:49.813299 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r8mzk"
Nov 28 08:09:49 crc kubenswrapper[4946]: I1128 08:09:49.840166 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r8mzk"]
Nov 28 08:09:49 crc kubenswrapper[4946]: I1128 08:09:49.958387 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/947fdf9d-9190-4542-b1b2-e0dd9e76a5c5-catalog-content\") pod \"certified-operators-r8mzk\" (UID: \"947fdf9d-9190-4542-b1b2-e0dd9e76a5c5\") " pod="openshift-marketplace/certified-operators-r8mzk"
Nov 28 08:09:49 crc kubenswrapper[4946]: I1128 08:09:49.958673 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8htl2\" (UniqueName: \"kubernetes.io/projected/947fdf9d-9190-4542-b1b2-e0dd9e76a5c5-kube-api-access-8htl2\") pod \"certified-operators-r8mzk\" (UID: \"947fdf9d-9190-4542-b1b2-e0dd9e76a5c5\") " pod="openshift-marketplace/certified-operators-r8mzk"
Nov 28 08:09:49 crc kubenswrapper[4946]: I1128 08:09:49.958789 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/947fdf9d-9190-4542-b1b2-e0dd9e76a5c5-utilities\") pod \"certified-operators-r8mzk\" (UID: \"947fdf9d-9190-4542-b1b2-e0dd9e76a5c5\") " pod="openshift-marketplace/certified-operators-r8mzk"
Nov 28 08:09:49 crc kubenswrapper[4946]: I1128 08:09:49.990329 4946 scope.go:117] "RemoveContainer" containerID="bd0bd75046e7d04fa31c11a8223f2791a2ec47bace8e4fbb294896fa7c119efb"
Nov 28 08:09:49 crc kubenswrapper[4946]: E1128 08:09:49.990831 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
Nov 28 08:09:50 crc kubenswrapper[4946]: I1128 08:09:50.060858 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/947fdf9d-9190-4542-b1b2-e0dd9e76a5c5-utilities\") pod \"certified-operators-r8mzk\" (UID: \"947fdf9d-9190-4542-b1b2-e0dd9e76a5c5\") " pod="openshift-marketplace/certified-operators-r8mzk"
Nov 28 08:09:50 crc kubenswrapper[4946]: I1128 08:09:50.061015 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/947fdf9d-9190-4542-b1b2-e0dd9e76a5c5-catalog-content\") pod \"certified-operators-r8mzk\" (UID: \"947fdf9d-9190-4542-b1b2-e0dd9e76a5c5\") " pod="openshift-marketplace/certified-operators-r8mzk"
Nov 28 08:09:50 crc kubenswrapper[4946]: I1128 08:09:50.061070 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8htl2\" (UniqueName: \"kubernetes.io/projected/947fdf9d-9190-4542-b1b2-e0dd9e76a5c5-kube-api-access-8htl2\") pod \"certified-operators-r8mzk\" (UID: \"947fdf9d-9190-4542-b1b2-e0dd9e76a5c5\") " pod="openshift-marketplace/certified-operators-r8mzk"
Nov 28 08:09:50 crc kubenswrapper[4946]: I1128 08:09:50.061641 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/947fdf9d-9190-4542-b1b2-e0dd9e76a5c5-utilities\") pod \"certified-operators-r8mzk\" (UID: \"947fdf9d-9190-4542-b1b2-e0dd9e76a5c5\") " pod="openshift-marketplace/certified-operators-r8mzk"
Nov 28 08:09:50 crc kubenswrapper[4946]: I1128 08:09:50.061684 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/947fdf9d-9190-4542-b1b2-e0dd9e76a5c5-catalog-content\") pod \"certified-operators-r8mzk\" (UID: \"947fdf9d-9190-4542-b1b2-e0dd9e76a5c5\") " pod="openshift-marketplace/certified-operators-r8mzk"
Nov 28 08:09:50 crc kubenswrapper[4946]: I1128 08:09:50.082773 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8htl2\" (UniqueName: \"kubernetes.io/projected/947fdf9d-9190-4542-b1b2-e0dd9e76a5c5-kube-api-access-8htl2\") pod \"certified-operators-r8mzk\" (UID: \"947fdf9d-9190-4542-b1b2-e0dd9e76a5c5\") " pod="openshift-marketplace/certified-operators-r8mzk"
Nov 28 08:09:50 crc kubenswrapper[4946]: I1128 08:09:50.151520 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r8mzk"
Need to start a new one" pod="openshift-marketplace/certified-operators-r8mzk" Nov 28 08:09:50 crc kubenswrapper[4946]: I1128 08:09:50.423026 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r8mzk"] Nov 28 08:09:50 crc kubenswrapper[4946]: I1128 08:09:50.907718 4946 generic.go:334] "Generic (PLEG): container finished" podID="947fdf9d-9190-4542-b1b2-e0dd9e76a5c5" containerID="f84f8272962c06211ee5d78573d01c674813989e4188f27d550bf70f082e1910" exitCode=0 Nov 28 08:09:50 crc kubenswrapper[4946]: I1128 08:09:50.907765 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r8mzk" event={"ID":"947fdf9d-9190-4542-b1b2-e0dd9e76a5c5","Type":"ContainerDied","Data":"f84f8272962c06211ee5d78573d01c674813989e4188f27d550bf70f082e1910"} Nov 28 08:09:50 crc kubenswrapper[4946]: I1128 08:09:50.907810 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r8mzk" event={"ID":"947fdf9d-9190-4542-b1b2-e0dd9e76a5c5","Type":"ContainerStarted","Data":"ce5cc37744888e97c2d78381de7447e42c66733cd29cfeeddad2bde6ce35eca0"} Nov 28 08:09:50 crc kubenswrapper[4946]: I1128 08:09:50.910618 4946 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 08:09:51 crc kubenswrapper[4946]: I1128 08:09:51.920612 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r8mzk" event={"ID":"947fdf9d-9190-4542-b1b2-e0dd9e76a5c5","Type":"ContainerStarted","Data":"074f062e71b1109bde4606d46c398ed84cada327ff8bc319f1fb0ec050694e5b"} Nov 28 08:09:52 crc kubenswrapper[4946]: I1128 08:09:52.932650 4946 generic.go:334] "Generic (PLEG): container finished" podID="947fdf9d-9190-4542-b1b2-e0dd9e76a5c5" containerID="074f062e71b1109bde4606d46c398ed84cada327ff8bc319f1fb0ec050694e5b" exitCode=0 Nov 28 08:09:52 crc kubenswrapper[4946]: I1128 08:09:52.932781 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r8mzk" event={"ID":"947fdf9d-9190-4542-b1b2-e0dd9e76a5c5","Type":"ContainerDied","Data":"074f062e71b1109bde4606d46c398ed84cada327ff8bc319f1fb0ec050694e5b"} Nov 28 08:09:53 crc kubenswrapper[4946]: I1128 08:09:53.946016 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r8mzk" event={"ID":"947fdf9d-9190-4542-b1b2-e0dd9e76a5c5","Type":"ContainerStarted","Data":"42a4a74a7d6fcbd3219c92d5e71e72e9e69b248dd07782197c006a3cdcf1adf8"} Nov 28 08:09:53 crc kubenswrapper[4946]: I1128 08:09:53.972824 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r8mzk" podStartSLOduration=2.341303418 podStartE2EDuration="4.972798022s" podCreationTimestamp="2025-11-28 08:09:49 +0000 UTC" firstStartedPulling="2025-11-28 08:09:50.910377905 +0000 UTC m=+4645.288443016" lastFinishedPulling="2025-11-28 08:09:53.541872469 +0000 UTC m=+4647.919937620" observedRunningTime="2025-11-28 08:09:53.97112405 +0000 UTC m=+4648.349189201" watchObservedRunningTime="2025-11-28 08:09:53.972798022 +0000 UTC m=+4648.350863163" Nov 28 08:10:00 crc kubenswrapper[4946]: I1128 08:10:00.152511 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-r8mzk" Nov 28 08:10:00 crc kubenswrapper[4946]: I1128 08:10:00.153332 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r8mzk" 
Nov 28 08:10:00 crc kubenswrapper[4946]: I1128 08:10:00.228889 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r8mzk"
Nov 28 08:10:01 crc kubenswrapper[4946]: I1128 08:10:01.092200 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r8mzk"
Nov 28 08:10:01 crc kubenswrapper[4946]: I1128 08:10:01.990549 4946 scope.go:117] "RemoveContainer" containerID="bd0bd75046e7d04fa31c11a8223f2791a2ec47bace8e4fbb294896fa7c119efb"
Nov 28 08:10:01 crc kubenswrapper[4946]: E1128 08:10:01.991070 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
Nov 28 08:10:02 crc kubenswrapper[4946]: I1128 08:10:02.794130 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r8mzk"]
Nov 28 08:10:03 crc kubenswrapper[4946]: I1128 08:10:03.031422 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-r8mzk" podUID="947fdf9d-9190-4542-b1b2-e0dd9e76a5c5" containerName="registry-server" containerID="cri-o://42a4a74a7d6fcbd3219c92d5e71e72e9e69b248dd07782197c006a3cdcf1adf8" gracePeriod=2
Nov 28 08:10:04 crc kubenswrapper[4946]: I1128 08:10:03.997017 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r8mzk"
Nov 28 08:10:04 crc kubenswrapper[4946]: I1128 08:10:04.088861 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/947fdf9d-9190-4542-b1b2-e0dd9e76a5c5-catalog-content\") pod \"947fdf9d-9190-4542-b1b2-e0dd9e76a5c5\" (UID: \"947fdf9d-9190-4542-b1b2-e0dd9e76a5c5\") "
Nov 28 08:10:04 crc kubenswrapper[4946]: I1128 08:10:04.088952 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/947fdf9d-9190-4542-b1b2-e0dd9e76a5c5-utilities\") pod \"947fdf9d-9190-4542-b1b2-e0dd9e76a5c5\" (UID: \"947fdf9d-9190-4542-b1b2-e0dd9e76a5c5\") "
Nov 28 08:10:04 crc kubenswrapper[4946]: I1128 08:10:04.088998 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8htl2\" (UniqueName: \"kubernetes.io/projected/947fdf9d-9190-4542-b1b2-e0dd9e76a5c5-kube-api-access-8htl2\") pod \"947fdf9d-9190-4542-b1b2-e0dd9e76a5c5\" (UID: \"947fdf9d-9190-4542-b1b2-e0dd9e76a5c5\") "
Nov 28 08:10:04 crc kubenswrapper[4946]: I1128 08:10:04.091111 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/947fdf9d-9190-4542-b1b2-e0dd9e76a5c5-utilities" (OuterVolumeSpecName: "utilities") pod "947fdf9d-9190-4542-b1b2-e0dd9e76a5c5" (UID: "947fdf9d-9190-4542-b1b2-e0dd9e76a5c5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 08:10:04 crc kubenswrapper[4946]: I1128 08:10:04.096721 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/947fdf9d-9190-4542-b1b2-e0dd9e76a5c5-kube-api-access-8htl2" (OuterVolumeSpecName: "kube-api-access-8htl2") pod "947fdf9d-9190-4542-b1b2-e0dd9e76a5c5" (UID: "947fdf9d-9190-4542-b1b2-e0dd9e76a5c5"). InnerVolumeSpecName "kube-api-access-8htl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:10:04 crc kubenswrapper[4946]: I1128 08:10:04.097172 4946 generic.go:334] "Generic (PLEG): container finished" podID="947fdf9d-9190-4542-b1b2-e0dd9e76a5c5" containerID="42a4a74a7d6fcbd3219c92d5e71e72e9e69b248dd07782197c006a3cdcf1adf8" exitCode=0 Nov 28 08:10:04 crc kubenswrapper[4946]: I1128 08:10:04.097226 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r8mzk" event={"ID":"947fdf9d-9190-4542-b1b2-e0dd9e76a5c5","Type":"ContainerDied","Data":"42a4a74a7d6fcbd3219c92d5e71e72e9e69b248dd07782197c006a3cdcf1adf8"} Nov 28 08:10:04 crc kubenswrapper[4946]: I1128 08:10:04.097245 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r8mzk" Nov 28 08:10:04 crc kubenswrapper[4946]: I1128 08:10:04.097265 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r8mzk" event={"ID":"947fdf9d-9190-4542-b1b2-e0dd9e76a5c5","Type":"ContainerDied","Data":"ce5cc37744888e97c2d78381de7447e42c66733cd29cfeeddad2bde6ce35eca0"} Nov 28 08:10:04 crc kubenswrapper[4946]: I1128 08:10:04.097293 4946 scope.go:117] "RemoveContainer" containerID="42a4a74a7d6fcbd3219c92d5e71e72e9e69b248dd07782197c006a3cdcf1adf8" Nov 28 08:10:04 crc kubenswrapper[4946]: I1128 08:10:04.138521 4946 scope.go:117] "RemoveContainer" containerID="074f062e71b1109bde4606d46c398ed84cada327ff8bc319f1fb0ec050694e5b" Nov 28 08:10:04 crc kubenswrapper[4946]: I1128 08:10:04.154017 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/947fdf9d-9190-4542-b1b2-e0dd9e76a5c5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "947fdf9d-9190-4542-b1b2-e0dd9e76a5c5" (UID: "947fdf9d-9190-4542-b1b2-e0dd9e76a5c5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 08:10:04 crc kubenswrapper[4946]: I1128 08:10:04.165389 4946 scope.go:117] "RemoveContainer" containerID="f84f8272962c06211ee5d78573d01c674813989e4188f27d550bf70f082e1910" Nov 28 08:10:04 crc kubenswrapper[4946]: I1128 08:10:04.187059 4946 scope.go:117] "RemoveContainer" containerID="42a4a74a7d6fcbd3219c92d5e71e72e9e69b248dd07782197c006a3cdcf1adf8" Nov 28 08:10:04 crc kubenswrapper[4946]: E1128 08:10:04.187634 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42a4a74a7d6fcbd3219c92d5e71e72e9e69b248dd07782197c006a3cdcf1adf8\": container with ID starting with 42a4a74a7d6fcbd3219c92d5e71e72e9e69b248dd07782197c006a3cdcf1adf8 not found: ID does not exist" containerID="42a4a74a7d6fcbd3219c92d5e71e72e9e69b248dd07782197c006a3cdcf1adf8" Nov 28 08:10:04 crc kubenswrapper[4946]: I1128 08:10:04.187699 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42a4a74a7d6fcbd3219c92d5e71e72e9e69b248dd07782197c006a3cdcf1adf8"} err="failed to get container status \"42a4a74a7d6fcbd3219c92d5e71e72e9e69b248dd07782197c006a3cdcf1adf8\": rpc error: code = NotFound desc = could not find container \"42a4a74a7d6fcbd3219c92d5e71e72e9e69b248dd07782197c006a3cdcf1adf8\": container with ID starting with 42a4a74a7d6fcbd3219c92d5e71e72e9e69b248dd07782197c006a3cdcf1adf8 not found: ID does not exist" Nov 28 08:10:04 crc kubenswrapper[4946]: I1128 08:10:04.187737 4946 scope.go:117] "RemoveContainer" containerID="074f062e71b1109bde4606d46c398ed84cada327ff8bc319f1fb0ec050694e5b" Nov 28 08:10:04 crc kubenswrapper[4946]: E1128 08:10:04.188273 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"074f062e71b1109bde4606d46c398ed84cada327ff8bc319f1fb0ec050694e5b\": container with ID starting with 074f062e71b1109bde4606d46c398ed84cada327ff8bc319f1fb0ec050694e5b not found: ID does not exist" containerID="074f062e71b1109bde4606d46c398ed84cada327ff8bc319f1fb0ec050694e5b" Nov 28 08:10:04 crc kubenswrapper[4946]: I1128 08:10:04.188313 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"074f062e71b1109bde4606d46c398ed84cada327ff8bc319f1fb0ec050694e5b"} err="failed to get container status \"074f062e71b1109bde4606d46c398ed84cada327ff8bc319f1fb0ec050694e5b\": rpc error: code = NotFound desc = could not find container \"074f062e71b1109bde4606d46c398ed84cada327ff8bc319f1fb0ec050694e5b\": container with ID starting with 074f062e71b1109bde4606d46c398ed84cada327ff8bc319f1fb0ec050694e5b not found: ID does not exist" Nov 28 08:10:04 crc kubenswrapper[4946]: I1128 08:10:04.188340 4946 scope.go:117] "RemoveContainer" containerID="f84f8272962c06211ee5d78573d01c674813989e4188f27d550bf70f082e1910" Nov 28 08:10:04 crc kubenswrapper[4946]: E1128 08:10:04.188627 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f84f8272962c06211ee5d78573d01c674813989e4188f27d550bf70f082e1910\": container with ID starting with f84f8272962c06211ee5d78573d01c674813989e4188f27d550bf70f082e1910 not found: ID does not exist" containerID="f84f8272962c06211ee5d78573d01c674813989e4188f27d550bf70f082e1910" Nov 28 08:10:04 crc kubenswrapper[4946]: I1128 08:10:04.188670 4946 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f84f8272962c06211ee5d78573d01c674813989e4188f27d550bf70f082e1910"} err="failed to get container status \"f84f8272962c06211ee5d78573d01c674813989e4188f27d550bf70f082e1910\": rpc error: code = NotFound desc = could not find container \"f84f8272962c06211ee5d78573d01c674813989e4188f27d550bf70f082e1910\": container with ID starting with f84f8272962c06211ee5d78573d01c674813989e4188f27d550bf70f082e1910 not found: ID does not exist" Nov 28 08:10:04 crc kubenswrapper[4946]: I1128 08:10:04.190580 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/947fdf9d-9190-4542-b1b2-e0dd9e76a5c5-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 08:10:04 crc kubenswrapper[4946]: I1128 08:10:04.190610 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/947fdf9d-9190-4542-b1b2-e0dd9e76a5c5-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 08:10:04 crc kubenswrapper[4946]: I1128 08:10:04.190626 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8htl2\" (UniqueName: \"kubernetes.io/projected/947fdf9d-9190-4542-b1b2-e0dd9e76a5c5-kube-api-access-8htl2\") on node \"crc\" DevicePath \"\"" Nov 28 08:10:04 crc kubenswrapper[4946]: I1128 08:10:04.459123 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r8mzk"] Nov 28 08:10:04 crc kubenswrapper[4946]: I1128 08:10:04.470073 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-r8mzk"] Nov 28 08:10:06 crc kubenswrapper[4946]: I1128 08:10:06.005890 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="947fdf9d-9190-4542-b1b2-e0dd9e76a5c5" path="/var/lib/kubelet/pods/947fdf9d-9190-4542-b1b2-e0dd9e76a5c5/volumes" Nov 28 08:10:12 crc kubenswrapper[4946]: I1128 08:10:12.990294 4946 scope.go:117] "RemoveContainer" containerID="bd0bd75046e7d04fa31c11a8223f2791a2ec47bace8e4fbb294896fa7c119efb" Nov 28 08:10:12 crc kubenswrapper[4946]: E1128 08:10:12.991314 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:10:26 crc kubenswrapper[4946]: I1128 08:10:26.990837 4946 scope.go:117] "RemoveContainer" containerID="bd0bd75046e7d04fa31c11a8223f2791a2ec47bace8e4fbb294896fa7c119efb" Nov 28 08:10:26 crc kubenswrapper[4946]: E1128 08:10:26.992196 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:10:41 crc kubenswrapper[4946]: I1128 08:10:41.990793 4946 scope.go:117] "RemoveContainer" containerID="bd0bd75046e7d04fa31c11a8223f2791a2ec47bace8e4fbb294896fa7c119efb" Nov 28 08:10:41 crc kubenswrapper[4946]: E1128 08:10:41.993347 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:10:56 crc kubenswrapper[4946]: I1128 08:10:56.990451 4946 scope.go:117] "RemoveContainer" containerID="bd0bd75046e7d04fa31c11a8223f2791a2ec47bace8e4fbb294896fa7c119efb" Nov 28 08:10:56 crc kubenswrapper[4946]: E1128 08:10:56.991618 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:11:07 crc kubenswrapper[4946]: I1128 08:11:07.990343 4946 scope.go:117] "RemoveContainer" containerID="bd0bd75046e7d04fa31c11a8223f2791a2ec47bace8e4fbb294896fa7c119efb" Nov 28 08:11:07 crc kubenswrapper[4946]: E1128 08:11:07.991255 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:11:21 crc kubenswrapper[4946]: I1128 08:11:21.989694 4946 scope.go:117] "RemoveContainer" containerID="bd0bd75046e7d04fa31c11a8223f2791a2ec47bace8e4fbb294896fa7c119efb" Nov 28 08:11:21 crc kubenswrapper[4946]: E1128 08:11:21.990533 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:11:34 crc kubenswrapper[4946]: I1128 08:11:34.251353 4946 scope.go:117] "RemoveContainer" containerID="0666e8968c581fbc6146dcfe80ec31c2ead901a7fb5d5470a3d679a39a6e82e4" Nov 28 08:11:34 crc kubenswrapper[4946]: I1128 08:11:34.287059 4946 scope.go:117] "RemoveContainer" containerID="ff1957630138f5c58e34ac2ac97dedc291942b959c79dcc9c97c8b302867f997" Nov 28 08:11:34 crc kubenswrapper[4946]: I1128 08:11:34.319313 4946 scope.go:117] "RemoveContainer" containerID="461d5786583bafda170d7ab8b5cc62ba0933a03ee03a50978567a9d70f371af6" Nov 28 08:11:34 crc kubenswrapper[4946]: I1128 08:11:34.990448 4946 scope.go:117] "RemoveContainer" containerID="bd0bd75046e7d04fa31c11a8223f2791a2ec47bace8e4fbb294896fa7c119efb" Nov 28 08:11:34 crc kubenswrapper[4946]: E1128 08:11:34.990799 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" 
podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:11:42 crc kubenswrapper[4946]: I1128 08:11:42.329998 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j7prg"] Nov 28 08:11:42 crc kubenswrapper[4946]: E1128 08:11:42.330982 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="947fdf9d-9190-4542-b1b2-e0dd9e76a5c5" containerName="registry-server" Nov 28 08:11:42 crc kubenswrapper[4946]: I1128 08:11:42.331003 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="947fdf9d-9190-4542-b1b2-e0dd9e76a5c5" containerName="registry-server" Nov 28 08:11:42 crc kubenswrapper[4946]: E1128 08:11:42.331023 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="947fdf9d-9190-4542-b1b2-e0dd9e76a5c5" containerName="extract-content" Nov 28 08:11:42 crc kubenswrapper[4946]: I1128 08:11:42.331035 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="947fdf9d-9190-4542-b1b2-e0dd9e76a5c5" containerName="extract-content" Nov 28 08:11:42 crc kubenswrapper[4946]: E1128 08:11:42.331070 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="947fdf9d-9190-4542-b1b2-e0dd9e76a5c5" containerName="extract-utilities" Nov 28 08:11:42 crc kubenswrapper[4946]: I1128 08:11:42.331083 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="947fdf9d-9190-4542-b1b2-e0dd9e76a5c5" containerName="extract-utilities" Nov 28 08:11:42 crc kubenswrapper[4946]: I1128 08:11:42.331359 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="947fdf9d-9190-4542-b1b2-e0dd9e76a5c5" containerName="registry-server" Nov 28 08:11:42 crc kubenswrapper[4946]: I1128 08:11:42.333126 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j7prg" Nov 28 08:11:42 crc kubenswrapper[4946]: I1128 08:11:42.354403 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j7prg"] Nov 28 08:11:42 crc kubenswrapper[4946]: I1128 08:11:42.468388 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0795cac-9698-4f5e-a20f-97ee93f8d859-utilities\") pod \"redhat-marketplace-j7prg\" (UID: \"f0795cac-9698-4f5e-a20f-97ee93f8d859\") " pod="openshift-marketplace/redhat-marketplace-j7prg" Nov 28 08:11:42 crc kubenswrapper[4946]: I1128 08:11:42.468533 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0795cac-9698-4f5e-a20f-97ee93f8d859-catalog-content\") pod \"redhat-marketplace-j7prg\" (UID: \"f0795cac-9698-4f5e-a20f-97ee93f8d859\") " pod="openshift-marketplace/redhat-marketplace-j7prg" Nov 28 08:11:42 crc kubenswrapper[4946]: I1128 08:11:42.468612 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psmkk\" (UniqueName: \"kubernetes.io/projected/f0795cac-9698-4f5e-a20f-97ee93f8d859-kube-api-access-psmkk\") pod \"redhat-marketplace-j7prg\" (UID: \"f0795cac-9698-4f5e-a20f-97ee93f8d859\") " pod="openshift-marketplace/redhat-marketplace-j7prg" Nov 28 08:11:42 crc kubenswrapper[4946]: I1128 08:11:42.569487 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0795cac-9698-4f5e-a20f-97ee93f8d859-utilities\") pod \"redhat-marketplace-j7prg\" (UID: \"f0795cac-9698-4f5e-a20f-97ee93f8d859\") " 
pod="openshift-marketplace/redhat-marketplace-j7prg" Nov 28 08:11:42 crc kubenswrapper[4946]: I1128 08:11:42.569539 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0795cac-9698-4f5e-a20f-97ee93f8d859-catalog-content\") pod \"redhat-marketplace-j7prg\" (UID: \"f0795cac-9698-4f5e-a20f-97ee93f8d859\") " pod="openshift-marketplace/redhat-marketplace-j7prg" Nov 28 08:11:42 crc kubenswrapper[4946]: I1128 08:11:42.569582 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psmkk\" (UniqueName: \"kubernetes.io/projected/f0795cac-9698-4f5e-a20f-97ee93f8d859-kube-api-access-psmkk\") pod \"redhat-marketplace-j7prg\" (UID: \"f0795cac-9698-4f5e-a20f-97ee93f8d859\") " pod="openshift-marketplace/redhat-marketplace-j7prg" Nov 28 08:11:42 crc kubenswrapper[4946]: I1128 08:11:42.570103 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0795cac-9698-4f5e-a20f-97ee93f8d859-utilities\") pod \"redhat-marketplace-j7prg\" (UID: \"f0795cac-9698-4f5e-a20f-97ee93f8d859\") " pod="openshift-marketplace/redhat-marketplace-j7prg" Nov 28 08:11:42 crc kubenswrapper[4946]: I1128 08:11:42.570308 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0795cac-9698-4f5e-a20f-97ee93f8d859-catalog-content\") pod \"redhat-marketplace-j7prg\" (UID: \"f0795cac-9698-4f5e-a20f-97ee93f8d859\") " pod="openshift-marketplace/redhat-marketplace-j7prg" Nov 28 08:11:42 crc kubenswrapper[4946]: I1128 08:11:42.600239 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psmkk\" (UniqueName: \"kubernetes.io/projected/f0795cac-9698-4f5e-a20f-97ee93f8d859-kube-api-access-psmkk\") pod \"redhat-marketplace-j7prg\" (UID: \"f0795cac-9698-4f5e-a20f-97ee93f8d859\") " pod="openshift-marketplace/redhat-marketplace-j7prg" Nov 28 08:11:42 crc kubenswrapper[4946]: I1128 08:11:42.658101 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j7prg" Nov 28 08:11:43 crc kubenswrapper[4946]: I1128 08:11:43.114277 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j7prg"] Nov 28 08:11:43 crc kubenswrapper[4946]: W1128 08:11:43.123055 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0795cac_9698_4f5e_a20f_97ee93f8d859.slice/crio-a63aad5d7bb8ed1b02a0fe8984cf082fa41e610722c25654bf6818858591a2ab WatchSource:0}: Error finding container a63aad5d7bb8ed1b02a0fe8984cf082fa41e610722c25654bf6818858591a2ab: Status 404 returned error can't find the container with id a63aad5d7bb8ed1b02a0fe8984cf082fa41e610722c25654bf6818858591a2ab Nov 28 08:11:43 crc kubenswrapper[4946]: I1128 08:11:43.204539 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j7prg" event={"ID":"f0795cac-9698-4f5e-a20f-97ee93f8d859","Type":"ContainerStarted","Data":"a63aad5d7bb8ed1b02a0fe8984cf082fa41e610722c25654bf6818858591a2ab"} Nov 28 08:11:44 crc kubenswrapper[4946]: I1128 08:11:44.217182 4946 generic.go:334] "Generic (PLEG): container finished" podID="f0795cac-9698-4f5e-a20f-97ee93f8d859" containerID="e526d351a50af407fd1fcac729949d6e590a4d06d5f865f6ac28b9039775b374" exitCode=0 Nov 28 08:11:44 crc kubenswrapper[4946]: I1128 08:11:44.217248 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j7prg" event={"ID":"f0795cac-9698-4f5e-a20f-97ee93f8d859","Type":"ContainerDied","Data":"e526d351a50af407fd1fcac729949d6e590a4d06d5f865f6ac28b9039775b374"} Nov 28 08:11:45 crc kubenswrapper[4946]: I1128 08:11:45.236962 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j7prg" event={"ID":"f0795cac-9698-4f5e-a20f-97ee93f8d859","Type":"ContainerStarted","Data":"d7ebe2c57c22799614579f137cc0b7b4b5afe9085ffa92263d55f792dc68f508"} Nov 28 08:11:46 crc kubenswrapper[4946]: I1128 08:11:46.243842 4946 generic.go:334] "Generic (PLEG): container finished" podID="f0795cac-9698-4f5e-a20f-97ee93f8d859" containerID="d7ebe2c57c22799614579f137cc0b7b4b5afe9085ffa92263d55f792dc68f508" exitCode=0 Nov 28 08:11:46 crc kubenswrapper[4946]: I1128 08:11:46.244175 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j7prg" event={"ID":"f0795cac-9698-4f5e-a20f-97ee93f8d859","Type":"ContainerDied","Data":"d7ebe2c57c22799614579f137cc0b7b4b5afe9085ffa92263d55f792dc68f508"} Nov 28 08:11:47 crc kubenswrapper[4946]: I1128 08:11:47.258215 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j7prg" event={"ID":"f0795cac-9698-4f5e-a20f-97ee93f8d859","Type":"ContainerStarted","Data":"522db5734762a63187ff8acfdf5e4e5e736ff666f9eafa5c4ffd99849252679a"} Nov 28 08:11:47 crc kubenswrapper[4946]: I1128 08:11:47.303129 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j7prg" podStartSLOduration=2.820729574 podStartE2EDuration="5.303091739s" podCreationTimestamp="2025-11-28 08:11:42 +0000 UTC" firstStartedPulling="2025-11-28 08:11:44.219900988 +0000 UTC m=+4758.597966149" lastFinishedPulling="2025-11-28 08:11:46.702263203 +0000 UTC m=+4761.080328314" observedRunningTime="2025-11-28 08:11:47.290008214 +0000 UTC m=+4761.668073365" watchObservedRunningTime="2025-11-28 08:11:47.303091739 +0000 UTC m=+4761.681156890" Nov 28 
Nov 28 08:11:49 crc kubenswrapper[4946]: I1128 08:11:49.990188 4946 scope.go:117] "RemoveContainer" containerID="bd0bd75046e7d04fa31c11a8223f2791a2ec47bace8e4fbb294896fa7c119efb"
Nov 28 08:11:49 crc kubenswrapper[4946]: E1128 08:11:49.990936 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
Nov 28 08:11:52 crc kubenswrapper[4946]: I1128 08:11:52.658858 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-j7prg"
Nov 28 08:11:52 crc kubenswrapper[4946]: I1128 08:11:52.660735 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j7prg"
Nov 28 08:11:52 crc kubenswrapper[4946]: I1128 08:11:52.734571 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j7prg"
Nov 28 08:11:53 crc kubenswrapper[4946]: I1128 08:11:53.415023 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j7prg"
Nov 28 08:11:53 crc kubenswrapper[4946]: I1128 08:11:53.493322 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j7prg"]
Nov 28 08:11:55 crc kubenswrapper[4946]: I1128 08:11:55.340331 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j7prg" podUID="f0795cac-9698-4f5e-a20f-97ee93f8d859" containerName="registry-server" containerID="cri-o://522db5734762a63187ff8acfdf5e4e5e736ff666f9eafa5c4ffd99849252679a" gracePeriod=2
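The "Killing container with a grace period ... gracePeriod=2" entry above is the kubelet sending SIGTERM and giving the container two seconds before a SIGKILL follows. A minimal sketch of what a well-behaved server does with that window; this is the generic Go pattern, not the actual registry-server implementation:

```go
// Generic graceful-shutdown sketch: handle the SIGTERM the kubelet sends,
// then finish in-flight work before the grace period (2s here) expires and
// SIGKILL arrives. The HTTP server and port are placeholders.
package main

import (
	"context"
	"errors"
	"log"
	"net/http"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	srv := &http.Server{Addr: ":8080"} // placeholder listener

	// ctx is cancelled when SIGTERM arrives.
	ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM)
	defer stop()

	go func() {
		if err := srv.ListenAndServe(); err != nil && !errors.Is(err, http.ErrServerClosed) {
			log.Fatal(err)
		}
	}()

	<-ctx.Done()

	// Drain within the grace period; anything slower is killed anyway.
	shutdownCtx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	if err := srv.Shutdown(shutdownCtx); err != nil {
		log.Printf("shutdown: %v", err)
	}
}
```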
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j7prg" Nov 28 08:11:55 crc kubenswrapper[4946]: I1128 08:11:55.883258 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psmkk\" (UniqueName: \"kubernetes.io/projected/f0795cac-9698-4f5e-a20f-97ee93f8d859-kube-api-access-psmkk\") pod \"f0795cac-9698-4f5e-a20f-97ee93f8d859\" (UID: \"f0795cac-9698-4f5e-a20f-97ee93f8d859\") " Nov 28 08:11:55 crc kubenswrapper[4946]: I1128 08:11:55.883347 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0795cac-9698-4f5e-a20f-97ee93f8d859-catalog-content\") pod \"f0795cac-9698-4f5e-a20f-97ee93f8d859\" (UID: \"f0795cac-9698-4f5e-a20f-97ee93f8d859\") " Nov 28 08:11:55 crc kubenswrapper[4946]: I1128 08:11:55.883417 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0795cac-9698-4f5e-a20f-97ee93f8d859-utilities\") pod \"f0795cac-9698-4f5e-a20f-97ee93f8d859\" (UID: \"f0795cac-9698-4f5e-a20f-97ee93f8d859\") " Nov 28 08:11:55 crc kubenswrapper[4946]: I1128 08:11:55.884587 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0795cac-9698-4f5e-a20f-97ee93f8d859-utilities" (OuterVolumeSpecName: "utilities") pod "f0795cac-9698-4f5e-a20f-97ee93f8d859" (UID: "f0795cac-9698-4f5e-a20f-97ee93f8d859"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 08:11:55 crc kubenswrapper[4946]: I1128 08:11:55.890003 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0795cac-9698-4f5e-a20f-97ee93f8d859-kube-api-access-psmkk" (OuterVolumeSpecName: "kube-api-access-psmkk") pod "f0795cac-9698-4f5e-a20f-97ee93f8d859" (UID: "f0795cac-9698-4f5e-a20f-97ee93f8d859"). InnerVolumeSpecName "kube-api-access-psmkk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:11:55 crc kubenswrapper[4946]: I1128 08:11:55.903364 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0795cac-9698-4f5e-a20f-97ee93f8d859-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f0795cac-9698-4f5e-a20f-97ee93f8d859" (UID: "f0795cac-9698-4f5e-a20f-97ee93f8d859"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 08:11:55 crc kubenswrapper[4946]: I1128 08:11:55.985192 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-psmkk\" (UniqueName: \"kubernetes.io/projected/f0795cac-9698-4f5e-a20f-97ee93f8d859-kube-api-access-psmkk\") on node \"crc\" DevicePath \"\"" Nov 28 08:11:55 crc kubenswrapper[4946]: I1128 08:11:55.985241 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0795cac-9698-4f5e-a20f-97ee93f8d859-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 08:11:55 crc kubenswrapper[4946]: I1128 08:11:55.985254 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0795cac-9698-4f5e-a20f-97ee93f8d859-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 08:11:56 crc kubenswrapper[4946]: I1128 08:11:56.350438 4946 generic.go:334] "Generic (PLEG): container finished" podID="f0795cac-9698-4f5e-a20f-97ee93f8d859" containerID="522db5734762a63187ff8acfdf5e4e5e736ff666f9eafa5c4ffd99849252679a" exitCode=0 Nov 28 08:11:56 crc kubenswrapper[4946]: I1128 08:11:56.350530 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j7prg" event={"ID":"f0795cac-9698-4f5e-a20f-97ee93f8d859","Type":"ContainerDied","Data":"522db5734762a63187ff8acfdf5e4e5e736ff666f9eafa5c4ffd99849252679a"} Nov 28 08:11:56 crc kubenswrapper[4946]: I1128 08:11:56.350613 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j7prg" event={"ID":"f0795cac-9698-4f5e-a20f-97ee93f8d859","Type":"ContainerDied","Data":"a63aad5d7bb8ed1b02a0fe8984cf082fa41e610722c25654bf6818858591a2ab"} Nov 28 08:11:56 crc kubenswrapper[4946]: I1128 08:11:56.350645 4946 scope.go:117] "RemoveContainer" containerID="522db5734762a63187ff8acfdf5e4e5e736ff666f9eafa5c4ffd99849252679a" Nov 28 08:11:56 crc kubenswrapper[4946]: I1128 08:11:56.350556 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j7prg" Nov 28 08:11:56 crc kubenswrapper[4946]: I1128 08:11:56.382842 4946 scope.go:117] "RemoveContainer" containerID="d7ebe2c57c22799614579f137cc0b7b4b5afe9085ffa92263d55f792dc68f508" Nov 28 08:11:56 crc kubenswrapper[4946]: I1128 08:11:56.384126 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j7prg"] Nov 28 08:11:56 crc kubenswrapper[4946]: I1128 08:11:56.411028 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j7prg"] Nov 28 08:11:56 crc kubenswrapper[4946]: I1128 08:11:56.419142 4946 scope.go:117] "RemoveContainer" containerID="e526d351a50af407fd1fcac729949d6e590a4d06d5f865f6ac28b9039775b374" Nov 28 08:11:56 crc kubenswrapper[4946]: I1128 08:11:56.459048 4946 scope.go:117] "RemoveContainer" containerID="522db5734762a63187ff8acfdf5e4e5e736ff666f9eafa5c4ffd99849252679a" Nov 28 08:11:56 crc kubenswrapper[4946]: E1128 08:11:56.459825 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"522db5734762a63187ff8acfdf5e4e5e736ff666f9eafa5c4ffd99849252679a\": container with ID starting with 522db5734762a63187ff8acfdf5e4e5e736ff666f9eafa5c4ffd99849252679a not found: ID does not exist" containerID="522db5734762a63187ff8acfdf5e4e5e736ff666f9eafa5c4ffd99849252679a" Nov 28 08:11:56 crc kubenswrapper[4946]: I1128 08:11:56.459903 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"522db5734762a63187ff8acfdf5e4e5e736ff666f9eafa5c4ffd99849252679a"} err="failed to get container status \"522db5734762a63187ff8acfdf5e4e5e736ff666f9eafa5c4ffd99849252679a\": rpc error: code = NotFound desc = could not find container \"522db5734762a63187ff8acfdf5e4e5e736ff666f9eafa5c4ffd99849252679a\": container with ID starting with 522db5734762a63187ff8acfdf5e4e5e736ff666f9eafa5c4ffd99849252679a not found: ID does not exist" Nov 28 08:11:56 crc kubenswrapper[4946]: I1128 08:11:56.459950 4946 scope.go:117] "RemoveContainer" containerID="d7ebe2c57c22799614579f137cc0b7b4b5afe9085ffa92263d55f792dc68f508" Nov 28 08:11:56 crc kubenswrapper[4946]: E1128 08:11:56.460443 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7ebe2c57c22799614579f137cc0b7b4b5afe9085ffa92263d55f792dc68f508\": container with ID starting with d7ebe2c57c22799614579f137cc0b7b4b5afe9085ffa92263d55f792dc68f508 not found: ID does not exist" containerID="d7ebe2c57c22799614579f137cc0b7b4b5afe9085ffa92263d55f792dc68f508" Nov 28 08:11:56 crc kubenswrapper[4946]: I1128 08:11:56.460614 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7ebe2c57c22799614579f137cc0b7b4b5afe9085ffa92263d55f792dc68f508"} err="failed to get container status \"d7ebe2c57c22799614579f137cc0b7b4b5afe9085ffa92263d55f792dc68f508\": rpc error: code = NotFound desc = could not find container \"d7ebe2c57c22799614579f137cc0b7b4b5afe9085ffa92263d55f792dc68f508\": container with ID starting with d7ebe2c57c22799614579f137cc0b7b4b5afe9085ffa92263d55f792dc68f508 not found: ID does not exist" Nov 28 08:11:56 crc kubenswrapper[4946]: I1128 08:11:56.460641 4946 scope.go:117] "RemoveContainer" containerID="e526d351a50af407fd1fcac729949d6e590a4d06d5f865f6ac28b9039775b374" Nov 28 08:11:56 crc kubenswrapper[4946]: E1128 08:11:56.461168 4946 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e526d351a50af407fd1fcac729949d6e590a4d06d5f865f6ac28b9039775b374\": container with ID starting with e526d351a50af407fd1fcac729949d6e590a4d06d5f865f6ac28b9039775b374 not found: ID does not exist" containerID="e526d351a50af407fd1fcac729949d6e590a4d06d5f865f6ac28b9039775b374" Nov 28 08:11:56 crc kubenswrapper[4946]: I1128 08:11:56.461247 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e526d351a50af407fd1fcac729949d6e590a4d06d5f865f6ac28b9039775b374"} err="failed to get container status \"e526d351a50af407fd1fcac729949d6e590a4d06d5f865f6ac28b9039775b374\": rpc error: code = NotFound desc = could not find container \"e526d351a50af407fd1fcac729949d6e590a4d06d5f865f6ac28b9039775b374\": container with ID starting with e526d351a50af407fd1fcac729949d6e590a4d06d5f865f6ac28b9039775b374 not found: ID does not exist" Nov 28 08:11:58 crc kubenswrapper[4946]: I1128 08:11:58.010244 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0795cac-9698-4f5e-a20f-97ee93f8d859" path="/var/lib/kubelet/pods/f0795cac-9698-4f5e-a20f-97ee93f8d859/volumes" Nov 28 08:12:03 crc kubenswrapper[4946]: I1128 08:12:03.990061 4946 scope.go:117] "RemoveContainer" containerID="bd0bd75046e7d04fa31c11a8223f2791a2ec47bace8e4fbb294896fa7c119efb" Nov 28 08:12:03 crc kubenswrapper[4946]: E1128 08:12:03.991822 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:12:14 crc kubenswrapper[4946]: I1128 08:12:14.990064 4946 scope.go:117] "RemoveContainer" containerID="bd0bd75046e7d04fa31c11a8223f2791a2ec47bace8e4fbb294896fa7c119efb" Nov 28 08:12:14 crc kubenswrapper[4946]: E1128 08:12:14.991243 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:12:28 crc kubenswrapper[4946]: I1128 08:12:28.989934 4946 scope.go:117] "RemoveContainer" containerID="bd0bd75046e7d04fa31c11a8223f2791a2ec47bace8e4fbb294896fa7c119efb" Nov 28 08:12:28 crc kubenswrapper[4946]: E1128 08:12:28.991052 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:12:41 crc kubenswrapper[4946]: I1128 08:12:41.990689 4946 scope.go:117] "RemoveContainer" containerID="bd0bd75046e7d04fa31c11a8223f2791a2ec47bace8e4fbb294896fa7c119efb" Nov 28 08:12:41 crc kubenswrapper[4946]: E1128 08:12:41.991661 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:12:53 crc kubenswrapper[4946]: I1128 08:12:53.990359 4946 scope.go:117] "RemoveContainer" containerID="bd0bd75046e7d04fa31c11a8223f2791a2ec47bace8e4fbb294896fa7c119efb" Nov 28 08:12:53 crc kubenswrapper[4946]: E1128 08:12:53.991909 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:13:04 crc kubenswrapper[4946]: I1128 08:13:04.989986 4946 scope.go:117] "RemoveContainer" containerID="bd0bd75046e7d04fa31c11a8223f2791a2ec47bace8e4fbb294896fa7c119efb" Nov 28 08:13:05 crc kubenswrapper[4946]: I1128 08:13:05.974992 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerStarted","Data":"51b8d1f43edaf6417cc42018ea6373a46d169776a2ca3e50b4bb1cdc5ac238d7"} Nov 28 08:15:00 crc kubenswrapper[4946]: I1128 08:15:00.164350 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405295-zc2t8"] Nov 28 08:15:00 crc kubenswrapper[4946]: E1128 08:15:00.165328 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0795cac-9698-4f5e-a20f-97ee93f8d859" containerName="extract-content" Nov 28 08:15:00 crc kubenswrapper[4946]: I1128 08:15:00.165354 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0795cac-9698-4f5e-a20f-97ee93f8d859" containerName="extract-content" Nov 28 08:15:00 crc kubenswrapper[4946]: E1128 08:15:00.165394 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0795cac-9698-4f5e-a20f-97ee93f8d859" containerName="extract-utilities" Nov 28 08:15:00 crc kubenswrapper[4946]: I1128 08:15:00.165404 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0795cac-9698-4f5e-a20f-97ee93f8d859" containerName="extract-utilities" Nov 28 08:15:00 crc kubenswrapper[4946]: E1128 08:15:00.165423 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0795cac-9698-4f5e-a20f-97ee93f8d859" containerName="registry-server" Nov 28 08:15:00 crc kubenswrapper[4946]: I1128 08:15:00.165432 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0795cac-9698-4f5e-a20f-97ee93f8d859" containerName="registry-server" Nov 28 08:15:00 crc kubenswrapper[4946]: I1128 08:15:00.165673 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0795cac-9698-4f5e-a20f-97ee93f8d859" containerName="registry-server" Nov 28 08:15:00 crc kubenswrapper[4946]: I1128 08:15:00.166376 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405295-zc2t8" Nov 28 08:15:00 crc kubenswrapper[4946]: I1128 08:15:00.168711 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 28 08:15:00 crc kubenswrapper[4946]: I1128 08:15:00.169353 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 28 08:15:00 crc kubenswrapper[4946]: I1128 08:15:00.188315 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405295-zc2t8"] Nov 28 08:15:00 crc kubenswrapper[4946]: I1128 08:15:00.320073 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7c91401c-ef3e-4050-8373-929d7237425f-secret-volume\") pod \"collect-profiles-29405295-zc2t8\" (UID: \"7c91401c-ef3e-4050-8373-929d7237425f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405295-zc2t8" Nov 28 08:15:00 crc kubenswrapper[4946]: I1128 08:15:00.320136 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7wbs\" (UniqueName: \"kubernetes.io/projected/7c91401c-ef3e-4050-8373-929d7237425f-kube-api-access-g7wbs\") pod \"collect-profiles-29405295-zc2t8\" (UID: \"7c91401c-ef3e-4050-8373-929d7237425f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405295-zc2t8" Nov 28 08:15:00 crc kubenswrapper[4946]: I1128 08:15:00.320317 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7c91401c-ef3e-4050-8373-929d7237425f-config-volume\") pod \"collect-profiles-29405295-zc2t8\" (UID: \"7c91401c-ef3e-4050-8373-929d7237425f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405295-zc2t8" Nov 28 08:15:00 crc kubenswrapper[4946]: I1128 08:15:00.422014 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7c91401c-ef3e-4050-8373-929d7237425f-secret-volume\") pod \"collect-profiles-29405295-zc2t8\" (UID: \"7c91401c-ef3e-4050-8373-929d7237425f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405295-zc2t8" Nov 28 08:15:00 crc kubenswrapper[4946]: I1128 08:15:00.422077 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7wbs\" (UniqueName: \"kubernetes.io/projected/7c91401c-ef3e-4050-8373-929d7237425f-kube-api-access-g7wbs\") pod \"collect-profiles-29405295-zc2t8\" (UID: \"7c91401c-ef3e-4050-8373-929d7237425f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405295-zc2t8" Nov 28 08:15:00 crc kubenswrapper[4946]: I1128 08:15:00.422109 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7c91401c-ef3e-4050-8373-929d7237425f-config-volume\") pod \"collect-profiles-29405295-zc2t8\" (UID: \"7c91401c-ef3e-4050-8373-929d7237425f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405295-zc2t8" Nov 28 08:15:00 crc kubenswrapper[4946]: I1128 08:15:00.424089 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7c91401c-ef3e-4050-8373-929d7237425f-config-volume\") pod 
\"collect-profiles-29405295-zc2t8\" (UID: \"7c91401c-ef3e-4050-8373-929d7237425f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405295-zc2t8" Nov 28 08:15:00 crc kubenswrapper[4946]: I1128 08:15:00.431697 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7c91401c-ef3e-4050-8373-929d7237425f-secret-volume\") pod \"collect-profiles-29405295-zc2t8\" (UID: \"7c91401c-ef3e-4050-8373-929d7237425f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405295-zc2t8" Nov 28 08:15:00 crc kubenswrapper[4946]: I1128 08:15:00.460076 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7wbs\" (UniqueName: \"kubernetes.io/projected/7c91401c-ef3e-4050-8373-929d7237425f-kube-api-access-g7wbs\") pod \"collect-profiles-29405295-zc2t8\" (UID: \"7c91401c-ef3e-4050-8373-929d7237425f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405295-zc2t8" Nov 28 08:15:00 crc kubenswrapper[4946]: I1128 08:15:00.532301 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405295-zc2t8" Nov 28 08:15:01 crc kubenswrapper[4946]: I1128 08:15:01.022146 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405295-zc2t8"] Nov 28 08:15:01 crc kubenswrapper[4946]: W1128 08:15:01.031770 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c91401c_ef3e_4050_8373_929d7237425f.slice/crio-52188899cfec6400bf72fa60919f5cd0959ec424fd668a02606d334565ef2582 WatchSource:0}: Error finding container 52188899cfec6400bf72fa60919f5cd0959ec424fd668a02606d334565ef2582: Status 404 returned error can't find the container with id 52188899cfec6400bf72fa60919f5cd0959ec424fd668a02606d334565ef2582 Nov 28 08:15:01 crc kubenswrapper[4946]: I1128 08:15:01.055631 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405295-zc2t8" event={"ID":"7c91401c-ef3e-4050-8373-929d7237425f","Type":"ContainerStarted","Data":"52188899cfec6400bf72fa60919f5cd0959ec424fd668a02606d334565ef2582"} Nov 28 08:15:02 crc kubenswrapper[4946]: I1128 08:15:02.066160 4946 generic.go:334] "Generic (PLEG): container finished" podID="7c91401c-ef3e-4050-8373-929d7237425f" containerID="f7a85fb7551b1a7a5e7856cfa42c838c7895b03a2df14c820ebaa2a531213159" exitCode=0 Nov 28 08:15:02 crc kubenswrapper[4946]: I1128 08:15:02.066222 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405295-zc2t8" event={"ID":"7c91401c-ef3e-4050-8373-929d7237425f","Type":"ContainerDied","Data":"f7a85fb7551b1a7a5e7856cfa42c838c7895b03a2df14c820ebaa2a531213159"} Nov 28 08:15:03 crc kubenswrapper[4946]: I1128 08:15:03.355049 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405295-zc2t8" Nov 28 08:15:03 crc kubenswrapper[4946]: I1128 08:15:03.473765 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7c91401c-ef3e-4050-8373-929d7237425f-secret-volume\") pod \"7c91401c-ef3e-4050-8373-929d7237425f\" (UID: \"7c91401c-ef3e-4050-8373-929d7237425f\") " Nov 28 08:15:03 crc kubenswrapper[4946]: I1128 08:15:03.473850 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7wbs\" (UniqueName: \"kubernetes.io/projected/7c91401c-ef3e-4050-8373-929d7237425f-kube-api-access-g7wbs\") pod \"7c91401c-ef3e-4050-8373-929d7237425f\" (UID: \"7c91401c-ef3e-4050-8373-929d7237425f\") " Nov 28 08:15:03 crc kubenswrapper[4946]: I1128 08:15:03.473912 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7c91401c-ef3e-4050-8373-929d7237425f-config-volume\") pod \"7c91401c-ef3e-4050-8373-929d7237425f\" (UID: \"7c91401c-ef3e-4050-8373-929d7237425f\") " Nov 28 08:15:03 crc kubenswrapper[4946]: I1128 08:15:03.474803 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c91401c-ef3e-4050-8373-929d7237425f-config-volume" (OuterVolumeSpecName: "config-volume") pod "7c91401c-ef3e-4050-8373-929d7237425f" (UID: "7c91401c-ef3e-4050-8373-929d7237425f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:15:03 crc kubenswrapper[4946]: I1128 08:15:03.479820 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c91401c-ef3e-4050-8373-929d7237425f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7c91401c-ef3e-4050-8373-929d7237425f" (UID: "7c91401c-ef3e-4050-8373-929d7237425f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:15:03 crc kubenswrapper[4946]: I1128 08:15:03.480433 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c91401c-ef3e-4050-8373-929d7237425f-kube-api-access-g7wbs" (OuterVolumeSpecName: "kube-api-access-g7wbs") pod "7c91401c-ef3e-4050-8373-929d7237425f" (UID: "7c91401c-ef3e-4050-8373-929d7237425f"). InnerVolumeSpecName "kube-api-access-g7wbs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:15:03 crc kubenswrapper[4946]: I1128 08:15:03.575728 4946 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7c91401c-ef3e-4050-8373-929d7237425f-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 28 08:15:03 crc kubenswrapper[4946]: I1128 08:15:03.576011 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7wbs\" (UniqueName: \"kubernetes.io/projected/7c91401c-ef3e-4050-8373-929d7237425f-kube-api-access-g7wbs\") on node \"crc\" DevicePath \"\"" Nov 28 08:15:03 crc kubenswrapper[4946]: I1128 08:15:03.576142 4946 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7c91401c-ef3e-4050-8373-929d7237425f-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 08:15:04 crc kubenswrapper[4946]: I1128 08:15:04.086652 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405295-zc2t8" event={"ID":"7c91401c-ef3e-4050-8373-929d7237425f","Type":"ContainerDied","Data":"52188899cfec6400bf72fa60919f5cd0959ec424fd668a02606d334565ef2582"} Nov 28 08:15:04 crc kubenswrapper[4946]: I1128 08:15:04.086721 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52188899cfec6400bf72fa60919f5cd0959ec424fd668a02606d334565ef2582" Nov 28 08:15:04 crc kubenswrapper[4946]: I1128 08:15:04.086741 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405295-zc2t8" Nov 28 08:15:04 crc kubenswrapper[4946]: I1128 08:15:04.447937 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405250-n9zfg"] Nov 28 08:15:04 crc kubenswrapper[4946]: I1128 08:15:04.453511 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405250-n9zfg"] Nov 28 08:15:05 crc kubenswrapper[4946]: I1128 08:15:05.999089 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9831846a-e0ea-4a91-9650-2e1de09bcf32" path="/var/lib/kubelet/pods/9831846a-e0ea-4a91-9650-2e1de09bcf32/volumes" Nov 28 08:15:24 crc kubenswrapper[4946]: I1128 08:15:24.731221 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 08:15:24 crc kubenswrapper[4946]: I1128 08:15:24.732095 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 08:15:34 crc kubenswrapper[4946]: I1128 08:15:34.449986 4946 scope.go:117] "RemoveContainer" containerID="d4afeb8a0611f08069ded1dbf1f8a5e920708063b3c3c403d5283406490a7c8a" Nov 28 08:15:46 crc kubenswrapper[4946]: I1128 08:15:46.889252 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dw8vc"] Nov 28 08:15:46 crc kubenswrapper[4946]: E1128 08:15:46.890623 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c91401c-ef3e-4050-8373-929d7237425f" 
containerName="collect-profiles" Nov 28 08:15:46 crc kubenswrapper[4946]: I1128 08:15:46.890656 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c91401c-ef3e-4050-8373-929d7237425f" containerName="collect-profiles" Nov 28 08:15:46 crc kubenswrapper[4946]: I1128 08:15:46.891008 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c91401c-ef3e-4050-8373-929d7237425f" containerName="collect-profiles" Nov 28 08:15:46 crc kubenswrapper[4946]: I1128 08:15:46.893347 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dw8vc" Nov 28 08:15:46 crc kubenswrapper[4946]: I1128 08:15:46.907136 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dw8vc"] Nov 28 08:15:47 crc kubenswrapper[4946]: I1128 08:15:47.008537 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws5ml\" (UniqueName: \"kubernetes.io/projected/3b7fccea-0395-4b0b-b11a-5e2cd5307345-kube-api-access-ws5ml\") pod \"community-operators-dw8vc\" (UID: \"3b7fccea-0395-4b0b-b11a-5e2cd5307345\") " pod="openshift-marketplace/community-operators-dw8vc" Nov 28 08:15:47 crc kubenswrapper[4946]: I1128 08:15:47.008611 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b7fccea-0395-4b0b-b11a-5e2cd5307345-utilities\") pod \"community-operators-dw8vc\" (UID: \"3b7fccea-0395-4b0b-b11a-5e2cd5307345\") " pod="openshift-marketplace/community-operators-dw8vc" Nov 28 08:15:47 crc kubenswrapper[4946]: I1128 08:15:47.008651 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b7fccea-0395-4b0b-b11a-5e2cd5307345-catalog-content\") pod \"community-operators-dw8vc\" (UID: \"3b7fccea-0395-4b0b-b11a-5e2cd5307345\") " pod="openshift-marketplace/community-operators-dw8vc" Nov 28 08:15:47 crc kubenswrapper[4946]: I1128 08:15:47.110748 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ws5ml\" (UniqueName: \"kubernetes.io/projected/3b7fccea-0395-4b0b-b11a-5e2cd5307345-kube-api-access-ws5ml\") pod \"community-operators-dw8vc\" (UID: \"3b7fccea-0395-4b0b-b11a-5e2cd5307345\") " pod="openshift-marketplace/community-operators-dw8vc" Nov 28 08:15:47 crc kubenswrapper[4946]: I1128 08:15:47.110820 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b7fccea-0395-4b0b-b11a-5e2cd5307345-utilities\") pod \"community-operators-dw8vc\" (UID: \"3b7fccea-0395-4b0b-b11a-5e2cd5307345\") " pod="openshift-marketplace/community-operators-dw8vc" Nov 28 08:15:47 crc kubenswrapper[4946]: I1128 08:15:47.110855 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b7fccea-0395-4b0b-b11a-5e2cd5307345-catalog-content\") pod \"community-operators-dw8vc\" (UID: \"3b7fccea-0395-4b0b-b11a-5e2cd5307345\") " pod="openshift-marketplace/community-operators-dw8vc" Nov 28 08:15:47 crc kubenswrapper[4946]: I1128 08:15:47.111590 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b7fccea-0395-4b0b-b11a-5e2cd5307345-catalog-content\") pod \"community-operators-dw8vc\" (UID: 
\"3b7fccea-0395-4b0b-b11a-5e2cd5307345\") " pod="openshift-marketplace/community-operators-dw8vc" Nov 28 08:15:47 crc kubenswrapper[4946]: I1128 08:15:47.111777 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b7fccea-0395-4b0b-b11a-5e2cd5307345-utilities\") pod \"community-operators-dw8vc\" (UID: \"3b7fccea-0395-4b0b-b11a-5e2cd5307345\") " pod="openshift-marketplace/community-operators-dw8vc" Nov 28 08:15:47 crc kubenswrapper[4946]: I1128 08:15:47.138920 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ws5ml\" (UniqueName: \"kubernetes.io/projected/3b7fccea-0395-4b0b-b11a-5e2cd5307345-kube-api-access-ws5ml\") pod \"community-operators-dw8vc\" (UID: \"3b7fccea-0395-4b0b-b11a-5e2cd5307345\") " pod="openshift-marketplace/community-operators-dw8vc" Nov 28 08:15:47 crc kubenswrapper[4946]: I1128 08:15:47.222064 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dw8vc" Nov 28 08:15:47 crc kubenswrapper[4946]: I1128 08:15:47.535399 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dw8vc"] Nov 28 08:15:48 crc kubenswrapper[4946]: I1128 08:15:48.476786 4946 generic.go:334] "Generic (PLEG): container finished" podID="3b7fccea-0395-4b0b-b11a-5e2cd5307345" containerID="5b891276dce737802b528e15e99e80182b93a09ffeb0ae26a82694a15841a145" exitCode=0 Nov 28 08:15:48 crc kubenswrapper[4946]: I1128 08:15:48.476947 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dw8vc" event={"ID":"3b7fccea-0395-4b0b-b11a-5e2cd5307345","Type":"ContainerDied","Data":"5b891276dce737802b528e15e99e80182b93a09ffeb0ae26a82694a15841a145"} Nov 28 08:15:48 crc kubenswrapper[4946]: I1128 08:15:48.477048 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dw8vc" event={"ID":"3b7fccea-0395-4b0b-b11a-5e2cd5307345","Type":"ContainerStarted","Data":"14b45d25f178884886b13005c92518b594213afc1778a31ada9059f537526f88"} Nov 28 08:15:48 crc kubenswrapper[4946]: I1128 08:15:48.479830 4946 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 08:15:49 crc kubenswrapper[4946]: I1128 08:15:49.487119 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dw8vc" event={"ID":"3b7fccea-0395-4b0b-b11a-5e2cd5307345","Type":"ContainerStarted","Data":"b4a130c4ab02c0cacfc808333117c9696adc536e5b64d76ad2687749b8c0ec48"} Nov 28 08:15:50 crc kubenswrapper[4946]: I1128 08:15:50.501023 4946 generic.go:334] "Generic (PLEG): container finished" podID="3b7fccea-0395-4b0b-b11a-5e2cd5307345" containerID="b4a130c4ab02c0cacfc808333117c9696adc536e5b64d76ad2687749b8c0ec48" exitCode=0 Nov 28 08:15:50 crc kubenswrapper[4946]: I1128 08:15:50.501109 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dw8vc" event={"ID":"3b7fccea-0395-4b0b-b11a-5e2cd5307345","Type":"ContainerDied","Data":"b4a130c4ab02c0cacfc808333117c9696adc536e5b64d76ad2687749b8c0ec48"} Nov 28 08:15:51 crc kubenswrapper[4946]: I1128 08:15:51.515326 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dw8vc" event={"ID":"3b7fccea-0395-4b0b-b11a-5e2cd5307345","Type":"ContainerStarted","Data":"0521560d3f3b8fd872efccc5f18808698110b0703720ec1c6a7f5c7ada52a046"} Nov 28 08:15:51 crc 
kubenswrapper[4946]: I1128 08:15:51.540252 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dw8vc" podStartSLOduration=2.95144665 podStartE2EDuration="5.540214258s" podCreationTimestamp="2025-11-28 08:15:46 +0000 UTC" firstStartedPulling="2025-11-28 08:15:48.479584585 +0000 UTC m=+5002.857649696" lastFinishedPulling="2025-11-28 08:15:51.068352153 +0000 UTC m=+5005.446417304" observedRunningTime="2025-11-28 08:15:51.530252211 +0000 UTC m=+5005.908317362" watchObservedRunningTime="2025-11-28 08:15:51.540214258 +0000 UTC m=+5005.918279429" Nov 28 08:15:54 crc kubenswrapper[4946]: I1128 08:15:54.730933 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 08:15:54 crc kubenswrapper[4946]: I1128 08:15:54.731408 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 08:15:57 crc kubenswrapper[4946]: I1128 08:15:57.222567 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dw8vc" Nov 28 08:15:57 crc kubenswrapper[4946]: I1128 08:15:57.222976 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dw8vc" Nov 28 08:15:57 crc kubenswrapper[4946]: I1128 08:15:57.291996 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dw8vc" Nov 28 08:15:57 crc kubenswrapper[4946]: I1128 08:15:57.619093 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dw8vc" Nov 28 08:15:57 crc kubenswrapper[4946]: I1128 08:15:57.674544 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dw8vc"] Nov 28 08:15:59 crc kubenswrapper[4946]: I1128 08:15:59.577042 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dw8vc" podUID="3b7fccea-0395-4b0b-b11a-5e2cd5307345" containerName="registry-server" containerID="cri-o://0521560d3f3b8fd872efccc5f18808698110b0703720ec1c6a7f5c7ada52a046" gracePeriod=2 Nov 28 08:16:00 crc kubenswrapper[4946]: I1128 08:16:00.599606 4946 generic.go:334] "Generic (PLEG): container finished" podID="3b7fccea-0395-4b0b-b11a-5e2cd5307345" containerID="0521560d3f3b8fd872efccc5f18808698110b0703720ec1c6a7f5c7ada52a046" exitCode=0 Nov 28 08:16:00 crc kubenswrapper[4946]: I1128 08:16:00.600612 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dw8vc" event={"ID":"3b7fccea-0395-4b0b-b11a-5e2cd5307345","Type":"ContainerDied","Data":"0521560d3f3b8fd872efccc5f18808698110b0703720ec1c6a7f5c7ada52a046"} Nov 28 08:16:00 crc kubenswrapper[4946]: I1128 08:16:00.946332 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dw8vc" Nov 28 08:16:01 crc kubenswrapper[4946]: I1128 08:16:01.028305 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b7fccea-0395-4b0b-b11a-5e2cd5307345-utilities\") pod \"3b7fccea-0395-4b0b-b11a-5e2cd5307345\" (UID: \"3b7fccea-0395-4b0b-b11a-5e2cd5307345\") " Nov 28 08:16:01 crc kubenswrapper[4946]: I1128 08:16:01.028352 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws5ml\" (UniqueName: \"kubernetes.io/projected/3b7fccea-0395-4b0b-b11a-5e2cd5307345-kube-api-access-ws5ml\") pod \"3b7fccea-0395-4b0b-b11a-5e2cd5307345\" (UID: \"3b7fccea-0395-4b0b-b11a-5e2cd5307345\") " Nov 28 08:16:01 crc kubenswrapper[4946]: I1128 08:16:01.028412 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b7fccea-0395-4b0b-b11a-5e2cd5307345-catalog-content\") pod \"3b7fccea-0395-4b0b-b11a-5e2cd5307345\" (UID: \"3b7fccea-0395-4b0b-b11a-5e2cd5307345\") " Nov 28 08:16:01 crc kubenswrapper[4946]: I1128 08:16:01.030316 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b7fccea-0395-4b0b-b11a-5e2cd5307345-utilities" (OuterVolumeSpecName: "utilities") pod "3b7fccea-0395-4b0b-b11a-5e2cd5307345" (UID: "3b7fccea-0395-4b0b-b11a-5e2cd5307345"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 08:16:01 crc kubenswrapper[4946]: I1128 08:16:01.040317 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b7fccea-0395-4b0b-b11a-5e2cd5307345-kube-api-access-ws5ml" (OuterVolumeSpecName: "kube-api-access-ws5ml") pod "3b7fccea-0395-4b0b-b11a-5e2cd5307345" (UID: "3b7fccea-0395-4b0b-b11a-5e2cd5307345"). InnerVolumeSpecName "kube-api-access-ws5ml". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:16:01 crc kubenswrapper[4946]: I1128 08:16:01.104703 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b7fccea-0395-4b0b-b11a-5e2cd5307345-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3b7fccea-0395-4b0b-b11a-5e2cd5307345" (UID: "3b7fccea-0395-4b0b-b11a-5e2cd5307345"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 08:16:01 crc kubenswrapper[4946]: I1128 08:16:01.129621 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b7fccea-0395-4b0b-b11a-5e2cd5307345-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 08:16:01 crc kubenswrapper[4946]: I1128 08:16:01.129643 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ws5ml\" (UniqueName: \"kubernetes.io/projected/3b7fccea-0395-4b0b-b11a-5e2cd5307345-kube-api-access-ws5ml\") on node \"crc\" DevicePath \"\"" Nov 28 08:16:01 crc kubenswrapper[4946]: I1128 08:16:01.129654 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b7fccea-0395-4b0b-b11a-5e2cd5307345-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 08:16:01 crc kubenswrapper[4946]: I1128 08:16:01.618279 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dw8vc" event={"ID":"3b7fccea-0395-4b0b-b11a-5e2cd5307345","Type":"ContainerDied","Data":"14b45d25f178884886b13005c92518b594213afc1778a31ada9059f537526f88"} Nov 28 08:16:01 crc kubenswrapper[4946]: I1128 08:16:01.618353 4946 scope.go:117] "RemoveContainer" containerID="0521560d3f3b8fd872efccc5f18808698110b0703720ec1c6a7f5c7ada52a046" Nov 28 08:16:01 crc kubenswrapper[4946]: I1128 08:16:01.618406 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dw8vc" Nov 28 08:16:01 crc kubenswrapper[4946]: I1128 08:16:01.665456 4946 scope.go:117] "RemoveContainer" containerID="b4a130c4ab02c0cacfc808333117c9696adc536e5b64d76ad2687749b8c0ec48" Nov 28 08:16:01 crc kubenswrapper[4946]: I1128 08:16:01.681822 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dw8vc"] Nov 28 08:16:01 crc kubenswrapper[4946]: I1128 08:16:01.695218 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dw8vc"] Nov 28 08:16:01 crc kubenswrapper[4946]: I1128 08:16:01.930092 4946 scope.go:117] "RemoveContainer" containerID="5b891276dce737802b528e15e99e80182b93a09ffeb0ae26a82694a15841a145" Nov 28 08:16:02 crc kubenswrapper[4946]: I1128 08:16:02.001651 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b7fccea-0395-4b0b-b11a-5e2cd5307345" path="/var/lib/kubelet/pods/3b7fccea-0395-4b0b-b11a-5e2cd5307345/volumes" Nov 28 08:16:24 crc kubenswrapper[4946]: I1128 08:16:24.730986 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 08:16:24 crc kubenswrapper[4946]: I1128 08:16:24.731662 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 08:16:24 crc kubenswrapper[4946]: I1128 08:16:24.731719 4946 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" Nov 28 08:16:24 crc kubenswrapper[4946]: I1128 08:16:24.732391 4946 
Nov 28 08:16:24 crc kubenswrapper[4946]: I1128 08:16:24.732391 4946 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"51b8d1f43edaf6417cc42018ea6373a46d169776a2ca3e50b4bb1cdc5ac238d7"} pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 28 08:16:24 crc kubenswrapper[4946]: I1128 08:16:24.732455 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" containerID="cri-o://51b8d1f43edaf6417cc42018ea6373a46d169776a2ca3e50b4bb1cdc5ac238d7" gracePeriod=600
Nov 28 08:16:25 crc kubenswrapper[4946]: I1128 08:16:25.824989 4946 generic.go:334] "Generic (PLEG): container finished" podID="7450befc-262f-45d1-a5f4-f445e540185b" containerID="51b8d1f43edaf6417cc42018ea6373a46d169776a2ca3e50b4bb1cdc5ac238d7" exitCode=0
Nov 28 08:16:25 crc kubenswrapper[4946]: I1128 08:16:25.825048 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerDied","Data":"51b8d1f43edaf6417cc42018ea6373a46d169776a2ca3e50b4bb1cdc5ac238d7"}
Nov 28 08:16:25 crc kubenswrapper[4946]: I1128 08:16:25.825275 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerStarted","Data":"ad330417d26bcc1bd12f3c61a305d6470428abe94fcc981baf7c135dfd2031cb"}
Nov 28 08:16:25 crc kubenswrapper[4946]: I1128 08:16:25.825299 4946 scope.go:117] "RemoveContainer" containerID="bd0bd75046e7d04fa31c11a8223f2791a2ec47bace8e4fbb294896fa7c119efb"
Nov 28 08:18:54 crc kubenswrapper[4946]: I1128 08:18:54.730644 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 08:18:54 crc kubenswrapper[4946]: I1128 08:18:54.731457 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 08:19:24 crc kubenswrapper[4946]: I1128 08:19:24.731097 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 08:19:24 crc kubenswrapper[4946]: I1128 08:19:24.731978 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 08:19:40 crc kubenswrapper[4946]: I1128 08:19:40.259897 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xj7dl"]
Nov 28 08:19:40 crc kubenswrapper[4946]: E1128 08:19:40.261053 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b7fccea-0395-4b0b-b11a-5e2cd5307345" containerName="registry-server"
Nov 28 08:19:40 crc kubenswrapper[4946]: I1128 08:19:40.261077 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b7fccea-0395-4b0b-b11a-5e2cd5307345" containerName="registry-server"
Nov 28 08:19:40 crc kubenswrapper[4946]: E1128 08:19:40.261120 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b7fccea-0395-4b0b-b11a-5e2cd5307345" containerName="extract-content"
Nov 28 08:19:40 crc kubenswrapper[4946]: I1128 08:19:40.261133 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b7fccea-0395-4b0b-b11a-5e2cd5307345" containerName="extract-content"
Nov 28 08:19:40 crc kubenswrapper[4946]: E1128 08:19:40.261164 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b7fccea-0395-4b0b-b11a-5e2cd5307345" containerName="extract-utilities"
Nov 28 08:19:40 crc kubenswrapper[4946]: I1128 08:19:40.261178 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b7fccea-0395-4b0b-b11a-5e2cd5307345" containerName="extract-utilities"
Nov 28 08:19:40 crc kubenswrapper[4946]: I1128 08:19:40.261439 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b7fccea-0395-4b0b-b11a-5e2cd5307345" containerName="registry-server"
Nov 28 08:19:40 crc kubenswrapper[4946]: I1128 08:19:40.263289 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xj7dl"
Nov 28 08:19:40 crc kubenswrapper[4946]: I1128 08:19:40.282365 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xj7dl"]
Nov 28 08:19:40 crc kubenswrapper[4946]: I1128 08:19:40.454295 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f1547b6-b22c-4bc9-b032-4e60f715e98c-catalog-content\") pod \"redhat-operators-xj7dl\" (UID: \"3f1547b6-b22c-4bc9-b032-4e60f715e98c\") " pod="openshift-marketplace/redhat-operators-xj7dl"
Nov 28 08:19:40 crc kubenswrapper[4946]: I1128 08:19:40.454386 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f1547b6-b22c-4bc9-b032-4e60f715e98c-utilities\") pod \"redhat-operators-xj7dl\" (UID: \"3f1547b6-b22c-4bc9-b032-4e60f715e98c\") " pod="openshift-marketplace/redhat-operators-xj7dl"
Nov 28 08:19:40 crc kubenswrapper[4946]: I1128 08:19:40.454440 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn52d\" (UniqueName: \"kubernetes.io/projected/3f1547b6-b22c-4bc9-b032-4e60f715e98c-kube-api-access-dn52d\") pod \"redhat-operators-xj7dl\" (UID: \"3f1547b6-b22c-4bc9-b032-4e60f715e98c\") " pod="openshift-marketplace/redhat-operators-xj7dl"
Nov 28 08:19:40 crc kubenswrapper[4946]: I1128 08:19:40.555391 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f1547b6-b22c-4bc9-b032-4e60f715e98c-catalog-content\") pod \"redhat-operators-xj7dl\" (UID: \"3f1547b6-b22c-4bc9-b032-4e60f715e98c\") " pod="openshift-marketplace/redhat-operators-xj7dl"
Nov 28 08:19:40 crc kubenswrapper[4946]: I1128 08:19:40.555453 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f1547b6-b22c-4bc9-b032-4e60f715e98c-utilities\") pod \"redhat-operators-xj7dl\" (UID: \"3f1547b6-b22c-4bc9-b032-4e60f715e98c\") " pod="openshift-marketplace/redhat-operators-xj7dl"
Nov 28 08:19:40 crc kubenswrapper[4946]: I1128 08:19:40.555530 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dn52d\" (UniqueName: \"kubernetes.io/projected/3f1547b6-b22c-4bc9-b032-4e60f715e98c-kube-api-access-dn52d\") pod \"redhat-operators-xj7dl\" (UID: \"3f1547b6-b22c-4bc9-b032-4e60f715e98c\") " pod="openshift-marketplace/redhat-operators-xj7dl"
Nov 28 08:19:40 crc kubenswrapper[4946]: I1128 08:19:40.556084 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f1547b6-b22c-4bc9-b032-4e60f715e98c-catalog-content\") pod \"redhat-operators-xj7dl\" (UID: \"3f1547b6-b22c-4bc9-b032-4e60f715e98c\") " pod="openshift-marketplace/redhat-operators-xj7dl"
Nov 28 08:19:40 crc kubenswrapper[4946]: I1128 08:19:40.556288 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f1547b6-b22c-4bc9-b032-4e60f715e98c-utilities\") pod \"redhat-operators-xj7dl\" (UID: \"3f1547b6-b22c-4bc9-b032-4e60f715e98c\") " pod="openshift-marketplace/redhat-operators-xj7dl"
Nov 28 08:19:40 crc kubenswrapper[4946]: I1128 08:19:40.573816 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dn52d\" (UniqueName: \"kubernetes.io/projected/3f1547b6-b22c-4bc9-b032-4e60f715e98c-kube-api-access-dn52d\") pod \"redhat-operators-xj7dl\" (UID: \"3f1547b6-b22c-4bc9-b032-4e60f715e98c\") " pod="openshift-marketplace/redhat-operators-xj7dl"
Need to start a new one" pod="openshift-marketplace/redhat-operators-xj7dl" Nov 28 08:19:41 crc kubenswrapper[4946]: I1128 08:19:41.033614 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xj7dl"] Nov 28 08:19:41 crc kubenswrapper[4946]: I1128 08:19:41.694432 4946 generic.go:334] "Generic (PLEG): container finished" podID="3f1547b6-b22c-4bc9-b032-4e60f715e98c" containerID="3b5f00a4fc99a244ee4faa2309d3046a3277b9f9933b19f2d7e9e273f5da0b80" exitCode=0 Nov 28 08:19:41 crc kubenswrapper[4946]: I1128 08:19:41.694505 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xj7dl" event={"ID":"3f1547b6-b22c-4bc9-b032-4e60f715e98c","Type":"ContainerDied","Data":"3b5f00a4fc99a244ee4faa2309d3046a3277b9f9933b19f2d7e9e273f5da0b80"} Nov 28 08:19:41 crc kubenswrapper[4946]: I1128 08:19:41.694800 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xj7dl" event={"ID":"3f1547b6-b22c-4bc9-b032-4e60f715e98c","Type":"ContainerStarted","Data":"c578da52ea59d3552f66fd2b937d9f81bb73a1f279447c4ce87b3d684ae0c8d1"} Nov 28 08:19:42 crc kubenswrapper[4946]: I1128 08:19:42.709441 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xj7dl" event={"ID":"3f1547b6-b22c-4bc9-b032-4e60f715e98c","Type":"ContainerStarted","Data":"f14d9cac98fdb7817dfa32f9754717ab920bae9a185340538f82462953b3c84f"} Nov 28 08:19:43 crc kubenswrapper[4946]: I1128 08:19:43.720519 4946 generic.go:334] "Generic (PLEG): container finished" podID="3f1547b6-b22c-4bc9-b032-4e60f715e98c" containerID="f14d9cac98fdb7817dfa32f9754717ab920bae9a185340538f82462953b3c84f" exitCode=0 Nov 28 08:19:43 crc kubenswrapper[4946]: I1128 08:19:43.720583 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xj7dl" event={"ID":"3f1547b6-b22c-4bc9-b032-4e60f715e98c","Type":"ContainerDied","Data":"f14d9cac98fdb7817dfa32f9754717ab920bae9a185340538f82462953b3c84f"} Nov 28 08:19:44 crc kubenswrapper[4946]: I1128 08:19:44.733917 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xj7dl" event={"ID":"3f1547b6-b22c-4bc9-b032-4e60f715e98c","Type":"ContainerStarted","Data":"04f3caef47fc3ae73930a166abc3a998fbe7aefc60c59c61869b11ec0333e623"} Nov 28 08:19:44 crc kubenswrapper[4946]: I1128 08:19:44.754069 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xj7dl" podStartSLOduration=2.174094653 podStartE2EDuration="4.754040724s" podCreationTimestamp="2025-11-28 08:19:40 +0000 UTC" firstStartedPulling="2025-11-28 08:19:41.696578728 +0000 UTC m=+5236.074643879" lastFinishedPulling="2025-11-28 08:19:44.276524839 +0000 UTC m=+5238.654589950" observedRunningTime="2025-11-28 08:19:44.752875285 +0000 UTC m=+5239.130940386" watchObservedRunningTime="2025-11-28 08:19:44.754040724 +0000 UTC m=+5239.132105875" Nov 28 08:19:50 crc kubenswrapper[4946]: I1128 08:19:50.604171 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xj7dl" Nov 28 08:19:50 crc kubenswrapper[4946]: I1128 08:19:50.604843 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xj7dl" Nov 28 08:19:51 crc kubenswrapper[4946]: I1128 08:19:51.681981 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xj7dl" 
podUID="3f1547b6-b22c-4bc9-b032-4e60f715e98c" containerName="registry-server" probeResult="failure" output=< Nov 28 08:19:51 crc kubenswrapper[4946]: timeout: failed to connect service ":50051" within 1s Nov 28 08:19:51 crc kubenswrapper[4946]: > Nov 28 08:19:54 crc kubenswrapper[4946]: I1128 08:19:54.730540 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 08:19:54 crc kubenswrapper[4946]: I1128 08:19:54.730954 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 08:19:54 crc kubenswrapper[4946]: I1128 08:19:54.731041 4946 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" Nov 28 08:19:54 crc kubenswrapper[4946]: I1128 08:19:54.731889 4946 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ad330417d26bcc1bd12f3c61a305d6470428abe94fcc981baf7c135dfd2031cb"} pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 08:19:54 crc kubenswrapper[4946]: I1128 08:19:54.731985 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" containerID="cri-o://ad330417d26bcc1bd12f3c61a305d6470428abe94fcc981baf7c135dfd2031cb" gracePeriod=600 Nov 28 08:19:56 crc kubenswrapper[4946]: E1128 08:19:56.720995 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:19:56 crc kubenswrapper[4946]: I1128 08:19:56.999129 4946 generic.go:334] "Generic (PLEG): container finished" podID="7450befc-262f-45d1-a5f4-f445e540185b" containerID="ad330417d26bcc1bd12f3c61a305d6470428abe94fcc981baf7c135dfd2031cb" exitCode=0 Nov 28 08:19:56 crc kubenswrapper[4946]: I1128 08:19:56.999186 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerDied","Data":"ad330417d26bcc1bd12f3c61a305d6470428abe94fcc981baf7c135dfd2031cb"} Nov 28 08:19:56 crc kubenswrapper[4946]: I1128 08:19:56.999229 4946 scope.go:117] "RemoveContainer" containerID="51b8d1f43edaf6417cc42018ea6373a46d169776a2ca3e50b4bb1cdc5ac238d7" Nov 28 08:19:57 crc kubenswrapper[4946]: I1128 08:19:57.002238 4946 scope.go:117] "RemoveContainer" containerID="ad330417d26bcc1bd12f3c61a305d6470428abe94fcc981baf7c135dfd2031cb" Nov 28 08:19:57 crc kubenswrapper[4946]: E1128 08:19:57.003888 4946 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:20:00 crc kubenswrapper[4946]: I1128 08:20:00.673923 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xj7dl" Nov 28 08:20:00 crc kubenswrapper[4946]: I1128 08:20:00.757451 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xj7dl" Nov 28 08:20:00 crc kubenswrapper[4946]: I1128 08:20:00.929417 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xj7dl"] Nov 28 08:20:02 crc kubenswrapper[4946]: I1128 08:20:02.056845 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xj7dl" podUID="3f1547b6-b22c-4bc9-b032-4e60f715e98c" containerName="registry-server" containerID="cri-o://04f3caef47fc3ae73930a166abc3a998fbe7aefc60c59c61869b11ec0333e623" gracePeriod=2 Nov 28 08:20:02 crc kubenswrapper[4946]: I1128 08:20:02.535794 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xj7dl" Nov 28 08:20:02 crc kubenswrapper[4946]: I1128 08:20:02.637799 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f1547b6-b22c-4bc9-b032-4e60f715e98c-utilities\") pod \"3f1547b6-b22c-4bc9-b032-4e60f715e98c\" (UID: \"3f1547b6-b22c-4bc9-b032-4e60f715e98c\") " Nov 28 08:20:02 crc kubenswrapper[4946]: I1128 08:20:02.637918 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f1547b6-b22c-4bc9-b032-4e60f715e98c-catalog-content\") pod \"3f1547b6-b22c-4bc9-b032-4e60f715e98c\" (UID: \"3f1547b6-b22c-4bc9-b032-4e60f715e98c\") " Nov 28 08:20:02 crc kubenswrapper[4946]: I1128 08:20:02.638038 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dn52d\" (UniqueName: \"kubernetes.io/projected/3f1547b6-b22c-4bc9-b032-4e60f715e98c-kube-api-access-dn52d\") pod \"3f1547b6-b22c-4bc9-b032-4e60f715e98c\" (UID: \"3f1547b6-b22c-4bc9-b032-4e60f715e98c\") " Nov 28 08:20:02 crc kubenswrapper[4946]: I1128 08:20:02.639720 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f1547b6-b22c-4bc9-b032-4e60f715e98c-utilities" (OuterVolumeSpecName: "utilities") pod "3f1547b6-b22c-4bc9-b032-4e60f715e98c" (UID: "3f1547b6-b22c-4bc9-b032-4e60f715e98c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 08:20:02 crc kubenswrapper[4946]: I1128 08:20:02.647519 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f1547b6-b22c-4bc9-b032-4e60f715e98c-kube-api-access-dn52d" (OuterVolumeSpecName: "kube-api-access-dn52d") pod "3f1547b6-b22c-4bc9-b032-4e60f715e98c" (UID: "3f1547b6-b22c-4bc9-b032-4e60f715e98c"). InnerVolumeSpecName "kube-api-access-dn52d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:20:02 crc kubenswrapper[4946]: I1128 08:20:02.740048 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dn52d\" (UniqueName: \"kubernetes.io/projected/3f1547b6-b22c-4bc9-b032-4e60f715e98c-kube-api-access-dn52d\") on node \"crc\" DevicePath \"\"" Nov 28 08:20:02 crc kubenswrapper[4946]: I1128 08:20:02.740099 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f1547b6-b22c-4bc9-b032-4e60f715e98c-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 08:20:02 crc kubenswrapper[4946]: I1128 08:20:02.811588 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f1547b6-b22c-4bc9-b032-4e60f715e98c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f1547b6-b22c-4bc9-b032-4e60f715e98c" (UID: "3f1547b6-b22c-4bc9-b032-4e60f715e98c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 08:20:02 crc kubenswrapper[4946]: I1128 08:20:02.841604 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f1547b6-b22c-4bc9-b032-4e60f715e98c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 08:20:03 crc kubenswrapper[4946]: I1128 08:20:03.066447 4946 generic.go:334] "Generic (PLEG): container finished" podID="3f1547b6-b22c-4bc9-b032-4e60f715e98c" containerID="04f3caef47fc3ae73930a166abc3a998fbe7aefc60c59c61869b11ec0333e623" exitCode=0 Nov 28 08:20:03 crc kubenswrapper[4946]: I1128 08:20:03.066559 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xj7dl" Nov 28 08:20:03 crc kubenswrapper[4946]: I1128 08:20:03.066547 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xj7dl" event={"ID":"3f1547b6-b22c-4bc9-b032-4e60f715e98c","Type":"ContainerDied","Data":"04f3caef47fc3ae73930a166abc3a998fbe7aefc60c59c61869b11ec0333e623"} Nov 28 08:20:03 crc kubenswrapper[4946]: I1128 08:20:03.066996 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xj7dl" event={"ID":"3f1547b6-b22c-4bc9-b032-4e60f715e98c","Type":"ContainerDied","Data":"c578da52ea59d3552f66fd2b937d9f81bb73a1f279447c4ce87b3d684ae0c8d1"} Nov 28 08:20:03 crc kubenswrapper[4946]: I1128 08:20:03.067028 4946 scope.go:117] "RemoveContainer" containerID="04f3caef47fc3ae73930a166abc3a998fbe7aefc60c59c61869b11ec0333e623" Nov 28 08:20:03 crc kubenswrapper[4946]: I1128 08:20:03.102629 4946 scope.go:117] "RemoveContainer" containerID="f14d9cac98fdb7817dfa32f9754717ab920bae9a185340538f82462953b3c84f" Nov 28 08:20:03 crc kubenswrapper[4946]: I1128 08:20:03.108351 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xj7dl"] Nov 28 08:20:03 crc kubenswrapper[4946]: I1128 08:20:03.116627 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xj7dl"] Nov 28 08:20:03 crc kubenswrapper[4946]: I1128 08:20:03.126952 4946 scope.go:117] "RemoveContainer" containerID="3b5f00a4fc99a244ee4faa2309d3046a3277b9f9933b19f2d7e9e273f5da0b80" Nov 28 08:20:03 crc kubenswrapper[4946]: I1128 08:20:03.163111 4946 scope.go:117] "RemoveContainer" containerID="04f3caef47fc3ae73930a166abc3a998fbe7aefc60c59c61869b11ec0333e623" Nov 28 08:20:03 crc kubenswrapper[4946]: E1128 08:20:03.163552 4946 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04f3caef47fc3ae73930a166abc3a998fbe7aefc60c59c61869b11ec0333e623\": container with ID starting with 04f3caef47fc3ae73930a166abc3a998fbe7aefc60c59c61869b11ec0333e623 not found: ID does not exist" containerID="04f3caef47fc3ae73930a166abc3a998fbe7aefc60c59c61869b11ec0333e623" Nov 28 08:20:03 crc kubenswrapper[4946]: I1128 08:20:03.163617 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04f3caef47fc3ae73930a166abc3a998fbe7aefc60c59c61869b11ec0333e623"} err="failed to get container status \"04f3caef47fc3ae73930a166abc3a998fbe7aefc60c59c61869b11ec0333e623\": rpc error: code = NotFound desc = could not find container \"04f3caef47fc3ae73930a166abc3a998fbe7aefc60c59c61869b11ec0333e623\": container with ID starting with 04f3caef47fc3ae73930a166abc3a998fbe7aefc60c59c61869b11ec0333e623 not found: ID does not exist" Nov 28 08:20:03 crc kubenswrapper[4946]: I1128 08:20:03.163644 4946 scope.go:117] "RemoveContainer" containerID="f14d9cac98fdb7817dfa32f9754717ab920bae9a185340538f82462953b3c84f" Nov 28 08:20:03 crc kubenswrapper[4946]: E1128 08:20:03.164016 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f14d9cac98fdb7817dfa32f9754717ab920bae9a185340538f82462953b3c84f\": container with ID starting with f14d9cac98fdb7817dfa32f9754717ab920bae9a185340538f82462953b3c84f not found: ID does not exist" containerID="f14d9cac98fdb7817dfa32f9754717ab920bae9a185340538f82462953b3c84f" Nov 28 08:20:03 crc kubenswrapper[4946]: I1128 08:20:03.164042 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f14d9cac98fdb7817dfa32f9754717ab920bae9a185340538f82462953b3c84f"} err="failed to get container status \"f14d9cac98fdb7817dfa32f9754717ab920bae9a185340538f82462953b3c84f\": rpc error: code = NotFound desc = could not find container \"f14d9cac98fdb7817dfa32f9754717ab920bae9a185340538f82462953b3c84f\": container with ID starting with f14d9cac98fdb7817dfa32f9754717ab920bae9a185340538f82462953b3c84f not found: ID does not exist" Nov 28 08:20:03 crc kubenswrapper[4946]: I1128 08:20:03.164062 4946 scope.go:117] "RemoveContainer" containerID="3b5f00a4fc99a244ee4faa2309d3046a3277b9f9933b19f2d7e9e273f5da0b80" Nov 28 08:20:03 crc kubenswrapper[4946]: E1128 08:20:03.164351 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b5f00a4fc99a244ee4faa2309d3046a3277b9f9933b19f2d7e9e273f5da0b80\": container with ID starting with 3b5f00a4fc99a244ee4faa2309d3046a3277b9f9933b19f2d7e9e273f5da0b80 not found: ID does not exist" containerID="3b5f00a4fc99a244ee4faa2309d3046a3277b9f9933b19f2d7e9e273f5da0b80" Nov 28 08:20:03 crc kubenswrapper[4946]: I1128 08:20:03.164378 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b5f00a4fc99a244ee4faa2309d3046a3277b9f9933b19f2d7e9e273f5da0b80"} err="failed to get container status \"3b5f00a4fc99a244ee4faa2309d3046a3277b9f9933b19f2d7e9e273f5da0b80\": rpc error: code = NotFound desc = could not find container \"3b5f00a4fc99a244ee4faa2309d3046a3277b9f9933b19f2d7e9e273f5da0b80\": container with ID starting with 3b5f00a4fc99a244ee4faa2309d3046a3277b9f9933b19f2d7e9e273f5da0b80 not found: ID does not exist" Nov 28 08:20:04 crc kubenswrapper[4946]: I1128 08:20:04.007442 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="3f1547b6-b22c-4bc9-b032-4e60f715e98c" path="/var/lib/kubelet/pods/3f1547b6-b22c-4bc9-b032-4e60f715e98c/volumes" Nov 28 08:20:10 crc kubenswrapper[4946]: I1128 08:20:10.989954 4946 scope.go:117] "RemoveContainer" containerID="ad330417d26bcc1bd12f3c61a305d6470428abe94fcc981baf7c135dfd2031cb" Nov 28 08:20:10 crc kubenswrapper[4946]: E1128 08:20:10.990955 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:20:17 crc kubenswrapper[4946]: I1128 08:20:17.932738 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-h49l7"] Nov 28 08:20:17 crc kubenswrapper[4946]: E1128 08:20:17.933681 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f1547b6-b22c-4bc9-b032-4e60f715e98c" containerName="extract-utilities" Nov 28 08:20:17 crc kubenswrapper[4946]: I1128 08:20:17.933698 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f1547b6-b22c-4bc9-b032-4e60f715e98c" containerName="extract-utilities" Nov 28 08:20:17 crc kubenswrapper[4946]: E1128 08:20:17.933718 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f1547b6-b22c-4bc9-b032-4e60f715e98c" containerName="registry-server" Nov 28 08:20:17 crc kubenswrapper[4946]: I1128 08:20:17.933725 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f1547b6-b22c-4bc9-b032-4e60f715e98c" containerName="registry-server" Nov 28 08:20:17 crc kubenswrapper[4946]: E1128 08:20:17.933742 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f1547b6-b22c-4bc9-b032-4e60f715e98c" containerName="extract-content" Nov 28 08:20:17 crc kubenswrapper[4946]: I1128 08:20:17.933750 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f1547b6-b22c-4bc9-b032-4e60f715e98c" containerName="extract-content" Nov 28 08:20:17 crc kubenswrapper[4946]: I1128 08:20:17.933901 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f1547b6-b22c-4bc9-b032-4e60f715e98c" containerName="registry-server" Nov 28 08:20:17 crc kubenswrapper[4946]: I1128 08:20:17.935012 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h49l7" Nov 28 08:20:17 crc kubenswrapper[4946]: I1128 08:20:17.978318 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h49l7"] Nov 28 08:20:18 crc kubenswrapper[4946]: I1128 08:20:18.042304 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d09f4eb7-5889-4889-9901-6dbe8dae3f1e-catalog-content\") pod \"certified-operators-h49l7\" (UID: \"d09f4eb7-5889-4889-9901-6dbe8dae3f1e\") " pod="openshift-marketplace/certified-operators-h49l7" Nov 28 08:20:18 crc kubenswrapper[4946]: I1128 08:20:18.042393 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrk8r\" (UniqueName: \"kubernetes.io/projected/d09f4eb7-5889-4889-9901-6dbe8dae3f1e-kube-api-access-zrk8r\") pod \"certified-operators-h49l7\" (UID: \"d09f4eb7-5889-4889-9901-6dbe8dae3f1e\") " pod="openshift-marketplace/certified-operators-h49l7" Nov 28 08:20:18 crc kubenswrapper[4946]: I1128 08:20:18.042438 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d09f4eb7-5889-4889-9901-6dbe8dae3f1e-utilities\") pod \"certified-operators-h49l7\" (UID: \"d09f4eb7-5889-4889-9901-6dbe8dae3f1e\") " pod="openshift-marketplace/certified-operators-h49l7" Nov 28 08:20:18 crc kubenswrapper[4946]: I1128 08:20:18.143490 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrk8r\" (UniqueName: \"kubernetes.io/projected/d09f4eb7-5889-4889-9901-6dbe8dae3f1e-kube-api-access-zrk8r\") pod \"certified-operators-h49l7\" (UID: \"d09f4eb7-5889-4889-9901-6dbe8dae3f1e\") " pod="openshift-marketplace/certified-operators-h49l7" Nov 28 08:20:18 crc kubenswrapper[4946]: I1128 08:20:18.143955 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d09f4eb7-5889-4889-9901-6dbe8dae3f1e-utilities\") pod \"certified-operators-h49l7\" (UID: \"d09f4eb7-5889-4889-9901-6dbe8dae3f1e\") " pod="openshift-marketplace/certified-operators-h49l7" Nov 28 08:20:18 crc kubenswrapper[4946]: I1128 08:20:18.144187 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d09f4eb7-5889-4889-9901-6dbe8dae3f1e-catalog-content\") pod \"certified-operators-h49l7\" (UID: \"d09f4eb7-5889-4889-9901-6dbe8dae3f1e\") " pod="openshift-marketplace/certified-operators-h49l7" Nov 28 08:20:18 crc kubenswrapper[4946]: I1128 08:20:18.144521 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d09f4eb7-5889-4889-9901-6dbe8dae3f1e-utilities\") pod \"certified-operators-h49l7\" (UID: \"d09f4eb7-5889-4889-9901-6dbe8dae3f1e\") " pod="openshift-marketplace/certified-operators-h49l7" Nov 28 08:20:18 crc kubenswrapper[4946]: I1128 08:20:18.144885 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d09f4eb7-5889-4889-9901-6dbe8dae3f1e-catalog-content\") pod \"certified-operators-h49l7\" (UID: \"d09f4eb7-5889-4889-9901-6dbe8dae3f1e\") " pod="openshift-marketplace/certified-operators-h49l7" Nov 28 08:20:18 crc kubenswrapper[4946]: I1128 08:20:18.164759 4946 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zrk8r\" (UniqueName: \"kubernetes.io/projected/d09f4eb7-5889-4889-9901-6dbe8dae3f1e-kube-api-access-zrk8r\") pod \"certified-operators-h49l7\" (UID: \"d09f4eb7-5889-4889-9901-6dbe8dae3f1e\") " pod="openshift-marketplace/certified-operators-h49l7" Nov 28 08:20:18 crc kubenswrapper[4946]: I1128 08:20:18.308914 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h49l7" Nov 28 08:20:18 crc kubenswrapper[4946]: I1128 08:20:18.611351 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h49l7"] Nov 28 08:20:19 crc kubenswrapper[4946]: I1128 08:20:19.244349 4946 generic.go:334] "Generic (PLEG): container finished" podID="d09f4eb7-5889-4889-9901-6dbe8dae3f1e" containerID="acd565dfb0acdd3d037df7aab451b24ee35d25c2b4a8c1c1796a21e6e1122636" exitCode=0 Nov 28 08:20:19 crc kubenswrapper[4946]: I1128 08:20:19.244446 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h49l7" event={"ID":"d09f4eb7-5889-4889-9901-6dbe8dae3f1e","Type":"ContainerDied","Data":"acd565dfb0acdd3d037df7aab451b24ee35d25c2b4a8c1c1796a21e6e1122636"} Nov 28 08:20:19 crc kubenswrapper[4946]: I1128 08:20:19.244733 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h49l7" event={"ID":"d09f4eb7-5889-4889-9901-6dbe8dae3f1e","Type":"ContainerStarted","Data":"1b78d5c9497d25d791fee92cf73582bb9627e53f7801968e346616570c5bdc4a"} Nov 28 08:20:20 crc kubenswrapper[4946]: I1128 08:20:20.256085 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h49l7" event={"ID":"d09f4eb7-5889-4889-9901-6dbe8dae3f1e","Type":"ContainerStarted","Data":"a6dfc09c17df7d1bfe8d23b2a2eccaa21a7f6a31922fadda03ea453f7af6968e"} Nov 28 08:20:21 crc kubenswrapper[4946]: I1128 08:20:21.267107 4946 generic.go:334] "Generic (PLEG): container finished" podID="d09f4eb7-5889-4889-9901-6dbe8dae3f1e" containerID="a6dfc09c17df7d1bfe8d23b2a2eccaa21a7f6a31922fadda03ea453f7af6968e" exitCode=0 Nov 28 08:20:21 crc kubenswrapper[4946]: I1128 08:20:21.267166 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h49l7" event={"ID":"d09f4eb7-5889-4889-9901-6dbe8dae3f1e","Type":"ContainerDied","Data":"a6dfc09c17df7d1bfe8d23b2a2eccaa21a7f6a31922fadda03ea453f7af6968e"} Nov 28 08:20:21 crc kubenswrapper[4946]: I1128 08:20:21.990099 4946 scope.go:117] "RemoveContainer" containerID="ad330417d26bcc1bd12f3c61a305d6470428abe94fcc981baf7c135dfd2031cb" Nov 28 08:20:21 crc kubenswrapper[4946]: E1128 08:20:21.990528 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:20:22 crc kubenswrapper[4946]: I1128 08:20:22.275275 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h49l7" event={"ID":"d09f4eb7-5889-4889-9901-6dbe8dae3f1e","Type":"ContainerStarted","Data":"2b634075a09ca84652eb2cec0d542dad95d70d84a5eb181ee47b00b999b65102"} Nov 28 08:20:22 crc kubenswrapper[4946]: I1128 08:20:22.303035 4946 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-h49l7" podStartSLOduration=2.7924917000000002 podStartE2EDuration="5.303000829s" podCreationTimestamp="2025-11-28 08:20:17 +0000 UTC" firstStartedPulling="2025-11-28 08:20:19.247341819 +0000 UTC m=+5273.625406970" lastFinishedPulling="2025-11-28 08:20:21.757850968 +0000 UTC m=+5276.135916099" observedRunningTime="2025-11-28 08:20:22.293546905 +0000 UTC m=+5276.671612026" watchObservedRunningTime="2025-11-28 08:20:22.303000829 +0000 UTC m=+5276.681065980" Nov 28 08:20:28 crc kubenswrapper[4946]: I1128 08:20:28.314134 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-h49l7" Nov 28 08:20:28 crc kubenswrapper[4946]: I1128 08:20:28.316640 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-h49l7" Nov 28 08:20:28 crc kubenswrapper[4946]: I1128 08:20:28.396361 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-h49l7" Nov 28 08:20:29 crc kubenswrapper[4946]: I1128 08:20:29.414729 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-h49l7" Nov 28 08:20:29 crc kubenswrapper[4946]: I1128 08:20:29.485498 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h49l7"] Nov 28 08:20:31 crc kubenswrapper[4946]: I1128 08:20:31.360420 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-h49l7" podUID="d09f4eb7-5889-4889-9901-6dbe8dae3f1e" containerName="registry-server" containerID="cri-o://2b634075a09ca84652eb2cec0d542dad95d70d84a5eb181ee47b00b999b65102" gracePeriod=2 Nov 28 08:20:32 crc kubenswrapper[4946]: I1128 08:20:32.339768 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h49l7" Nov 28 08:20:32 crc kubenswrapper[4946]: I1128 08:20:32.379055 4946 generic.go:334] "Generic (PLEG): container finished" podID="d09f4eb7-5889-4889-9901-6dbe8dae3f1e" containerID="2b634075a09ca84652eb2cec0d542dad95d70d84a5eb181ee47b00b999b65102" exitCode=0 Nov 28 08:20:32 crc kubenswrapper[4946]: I1128 08:20:32.379111 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h49l7" Nov 28 08:20:32 crc kubenswrapper[4946]: I1128 08:20:32.379111 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h49l7" event={"ID":"d09f4eb7-5889-4889-9901-6dbe8dae3f1e","Type":"ContainerDied","Data":"2b634075a09ca84652eb2cec0d542dad95d70d84a5eb181ee47b00b999b65102"} Nov 28 08:20:32 crc kubenswrapper[4946]: I1128 08:20:32.379256 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h49l7" event={"ID":"d09f4eb7-5889-4889-9901-6dbe8dae3f1e","Type":"ContainerDied","Data":"1b78d5c9497d25d791fee92cf73582bb9627e53f7801968e346616570c5bdc4a"} Nov 28 08:20:32 crc kubenswrapper[4946]: I1128 08:20:32.379288 4946 scope.go:117] "RemoveContainer" containerID="2b634075a09ca84652eb2cec0d542dad95d70d84a5eb181ee47b00b999b65102" Nov 28 08:20:32 crc kubenswrapper[4946]: I1128 08:20:32.403174 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d09f4eb7-5889-4889-9901-6dbe8dae3f1e-catalog-content\") pod \"d09f4eb7-5889-4889-9901-6dbe8dae3f1e\" (UID: \"d09f4eb7-5889-4889-9901-6dbe8dae3f1e\") " Nov 28 08:20:32 crc kubenswrapper[4946]: I1128 08:20:32.403230 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d09f4eb7-5889-4889-9901-6dbe8dae3f1e-utilities\") pod \"d09f4eb7-5889-4889-9901-6dbe8dae3f1e\" (UID: \"d09f4eb7-5889-4889-9901-6dbe8dae3f1e\") " Nov 28 08:20:32 crc kubenswrapper[4946]: I1128 08:20:32.403282 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrk8r\" (UniqueName: \"kubernetes.io/projected/d09f4eb7-5889-4889-9901-6dbe8dae3f1e-kube-api-access-zrk8r\") pod \"d09f4eb7-5889-4889-9901-6dbe8dae3f1e\" (UID: \"d09f4eb7-5889-4889-9901-6dbe8dae3f1e\") " Nov 28 08:20:32 crc kubenswrapper[4946]: I1128 08:20:32.404985 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d09f4eb7-5889-4889-9901-6dbe8dae3f1e-utilities" (OuterVolumeSpecName: "utilities") pod "d09f4eb7-5889-4889-9901-6dbe8dae3f1e" (UID: "d09f4eb7-5889-4889-9901-6dbe8dae3f1e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 08:20:32 crc kubenswrapper[4946]: I1128 08:20:32.411730 4946 scope.go:117] "RemoveContainer" containerID="a6dfc09c17df7d1bfe8d23b2a2eccaa21a7f6a31922fadda03ea453f7af6968e" Nov 28 08:20:32 crc kubenswrapper[4946]: I1128 08:20:32.411748 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d09f4eb7-5889-4889-9901-6dbe8dae3f1e-kube-api-access-zrk8r" (OuterVolumeSpecName: "kube-api-access-zrk8r") pod "d09f4eb7-5889-4889-9901-6dbe8dae3f1e" (UID: "d09f4eb7-5889-4889-9901-6dbe8dae3f1e"). InnerVolumeSpecName "kube-api-access-zrk8r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:20:32 crc kubenswrapper[4946]: I1128 08:20:32.453693 4946 scope.go:117] "RemoveContainer" containerID="acd565dfb0acdd3d037df7aab451b24ee35d25c2b4a8c1c1796a21e6e1122636" Nov 28 08:20:32 crc kubenswrapper[4946]: I1128 08:20:32.472050 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d09f4eb7-5889-4889-9901-6dbe8dae3f1e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d09f4eb7-5889-4889-9901-6dbe8dae3f1e" (UID: "d09f4eb7-5889-4889-9901-6dbe8dae3f1e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 08:20:32 crc kubenswrapper[4946]: I1128 08:20:32.481999 4946 scope.go:117] "RemoveContainer" containerID="2b634075a09ca84652eb2cec0d542dad95d70d84a5eb181ee47b00b999b65102" Nov 28 08:20:32 crc kubenswrapper[4946]: E1128 08:20:32.487077 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b634075a09ca84652eb2cec0d542dad95d70d84a5eb181ee47b00b999b65102\": container with ID starting with 2b634075a09ca84652eb2cec0d542dad95d70d84a5eb181ee47b00b999b65102 not found: ID does not exist" containerID="2b634075a09ca84652eb2cec0d542dad95d70d84a5eb181ee47b00b999b65102" Nov 28 08:20:32 crc kubenswrapper[4946]: I1128 08:20:32.487117 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b634075a09ca84652eb2cec0d542dad95d70d84a5eb181ee47b00b999b65102"} err="failed to get container status \"2b634075a09ca84652eb2cec0d542dad95d70d84a5eb181ee47b00b999b65102\": rpc error: code = NotFound desc = could not find container \"2b634075a09ca84652eb2cec0d542dad95d70d84a5eb181ee47b00b999b65102\": container with ID starting with 2b634075a09ca84652eb2cec0d542dad95d70d84a5eb181ee47b00b999b65102 not found: ID does not exist" Nov 28 08:20:32 crc kubenswrapper[4946]: I1128 08:20:32.487141 4946 scope.go:117] "RemoveContainer" containerID="a6dfc09c17df7d1bfe8d23b2a2eccaa21a7f6a31922fadda03ea453f7af6968e" Nov 28 08:20:32 crc kubenswrapper[4946]: E1128 08:20:32.487915 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6dfc09c17df7d1bfe8d23b2a2eccaa21a7f6a31922fadda03ea453f7af6968e\": container with ID starting with a6dfc09c17df7d1bfe8d23b2a2eccaa21a7f6a31922fadda03ea453f7af6968e not found: ID does not exist" containerID="a6dfc09c17df7d1bfe8d23b2a2eccaa21a7f6a31922fadda03ea453f7af6968e" Nov 28 08:20:32 crc kubenswrapper[4946]: I1128 08:20:32.487976 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6dfc09c17df7d1bfe8d23b2a2eccaa21a7f6a31922fadda03ea453f7af6968e"} err="failed to get container status \"a6dfc09c17df7d1bfe8d23b2a2eccaa21a7f6a31922fadda03ea453f7af6968e\": rpc error: code = NotFound desc = could not find container \"a6dfc09c17df7d1bfe8d23b2a2eccaa21a7f6a31922fadda03ea453f7af6968e\": container with ID starting with a6dfc09c17df7d1bfe8d23b2a2eccaa21a7f6a31922fadda03ea453f7af6968e not found: ID does not exist" Nov 28 08:20:32 crc kubenswrapper[4946]: I1128 08:20:32.488013 4946 scope.go:117] "RemoveContainer" containerID="acd565dfb0acdd3d037df7aab451b24ee35d25c2b4a8c1c1796a21e6e1122636" Nov 28 08:20:32 crc kubenswrapper[4946]: E1128 08:20:32.488378 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"acd565dfb0acdd3d037df7aab451b24ee35d25c2b4a8c1c1796a21e6e1122636\": container with ID starting with acd565dfb0acdd3d037df7aab451b24ee35d25c2b4a8c1c1796a21e6e1122636 not found: ID does not exist" containerID="acd565dfb0acdd3d037df7aab451b24ee35d25c2b4a8c1c1796a21e6e1122636" Nov 28 08:20:32 crc kubenswrapper[4946]: I1128 08:20:32.488412 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acd565dfb0acdd3d037df7aab451b24ee35d25c2b4a8c1c1796a21e6e1122636"} err="failed to get container status \"acd565dfb0acdd3d037df7aab451b24ee35d25c2b4a8c1c1796a21e6e1122636\": rpc error: code = NotFound desc = could not find container \"acd565dfb0acdd3d037df7aab451b24ee35d25c2b4a8c1c1796a21e6e1122636\": container with ID starting with acd565dfb0acdd3d037df7aab451b24ee35d25c2b4a8c1c1796a21e6e1122636 not found: ID does not exist" Nov 28 08:20:32 crc kubenswrapper[4946]: I1128 08:20:32.506163 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d09f4eb7-5889-4889-9901-6dbe8dae3f1e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 08:20:32 crc kubenswrapper[4946]: I1128 08:20:32.506200 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d09f4eb7-5889-4889-9901-6dbe8dae3f1e-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 08:20:32 crc kubenswrapper[4946]: I1128 08:20:32.506215 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrk8r\" (UniqueName: \"kubernetes.io/projected/d09f4eb7-5889-4889-9901-6dbe8dae3f1e-kube-api-access-zrk8r\") on node \"crc\" DevicePath \"\"" Nov 28 08:20:32 crc kubenswrapper[4946]: I1128 08:20:32.733962 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h49l7"] Nov 28 08:20:32 crc kubenswrapper[4946]: I1128 08:20:32.746644 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-h49l7"] Nov 28 08:20:34 crc kubenswrapper[4946]: I1128 08:20:34.011004 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d09f4eb7-5889-4889-9901-6dbe8dae3f1e" path="/var/lib/kubelet/pods/d09f4eb7-5889-4889-9901-6dbe8dae3f1e/volumes" Nov 28 08:20:36 crc kubenswrapper[4946]: I1128 08:20:36.989845 4946 scope.go:117] "RemoveContainer" containerID="ad330417d26bcc1bd12f3c61a305d6470428abe94fcc981baf7c135dfd2031cb" Nov 28 08:20:36 crc kubenswrapper[4946]: E1128 08:20:36.990844 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:20:48 crc kubenswrapper[4946]: I1128 08:20:48.990599 4946 scope.go:117] "RemoveContainer" containerID="ad330417d26bcc1bd12f3c61a305d6470428abe94fcc981baf7c135dfd2031cb" Nov 28 08:20:48 crc kubenswrapper[4946]: E1128 08:20:48.991703 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:21:02 crc kubenswrapper[4946]: I1128 08:21:02.990548 4946 scope.go:117] "RemoveContainer" containerID="ad330417d26bcc1bd12f3c61a305d6470428abe94fcc981baf7c135dfd2031cb" Nov 28 08:21:02 crc kubenswrapper[4946]: E1128 08:21:02.991668 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:21:17 crc kubenswrapper[4946]: I1128 08:21:17.989890 4946 scope.go:117] "RemoveContainer" containerID="ad330417d26bcc1bd12f3c61a305d6470428abe94fcc981baf7c135dfd2031cb" Nov 28 08:21:17 crc kubenswrapper[4946]: E1128 08:21:17.990748 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:21:29 crc kubenswrapper[4946]: I1128 08:21:29.990439 4946 scope.go:117] "RemoveContainer" containerID="ad330417d26bcc1bd12f3c61a305d6470428abe94fcc981baf7c135dfd2031cb" Nov 28 08:21:29 crc kubenswrapper[4946]: E1128 08:21:29.991443 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:21:44 crc kubenswrapper[4946]: I1128 08:21:44.990075 4946 scope.go:117] "RemoveContainer" containerID="ad330417d26bcc1bd12f3c61a305d6470428abe94fcc981baf7c135dfd2031cb" Nov 28 08:21:44 crc kubenswrapper[4946]: E1128 08:21:44.991235 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:21:57 crc kubenswrapper[4946]: I1128 08:21:57.990024 4946 scope.go:117] "RemoveContainer" containerID="ad330417d26bcc1bd12f3c61a305d6470428abe94fcc981baf7c135dfd2031cb" Nov 28 08:21:57 crc kubenswrapper[4946]: E1128 08:21:57.991137 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:22:11 crc kubenswrapper[4946]: I1128 08:22:11.991080 4946 
scope.go:117] "RemoveContainer" containerID="ad330417d26bcc1bd12f3c61a305d6470428abe94fcc981baf7c135dfd2031cb" Nov 28 08:22:11 crc kubenswrapper[4946]: E1128 08:22:11.992047 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:22:25 crc kubenswrapper[4946]: I1128 08:22:25.995376 4946 scope.go:117] "RemoveContainer" containerID="ad330417d26bcc1bd12f3c61a305d6470428abe94fcc981baf7c135dfd2031cb" Nov 28 08:22:25 crc kubenswrapper[4946]: E1128 08:22:25.996728 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:22:36 crc kubenswrapper[4946]: I1128 08:22:36.990390 4946 scope.go:117] "RemoveContainer" containerID="ad330417d26bcc1bd12f3c61a305d6470428abe94fcc981baf7c135dfd2031cb" Nov 28 08:22:36 crc kubenswrapper[4946]: E1128 08:22:36.991034 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:22:47 crc kubenswrapper[4946]: I1128 08:22:47.996167 4946 scope.go:117] "RemoveContainer" containerID="ad330417d26bcc1bd12f3c61a305d6470428abe94fcc981baf7c135dfd2031cb" Nov 28 08:22:47 crc kubenswrapper[4946]: E1128 08:22:47.996982 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:23:02 crc kubenswrapper[4946]: I1128 08:23:02.989761 4946 scope.go:117] "RemoveContainer" containerID="ad330417d26bcc1bd12f3c61a305d6470428abe94fcc981baf7c135dfd2031cb" Nov 28 08:23:02 crc kubenswrapper[4946]: E1128 08:23:02.990623 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:23:05 crc kubenswrapper[4946]: I1128 08:23:05.537704 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9sg26"] Nov 28 08:23:05 crc kubenswrapper[4946]: E1128 08:23:05.538749 4946 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="d09f4eb7-5889-4889-9901-6dbe8dae3f1e" containerName="extract-content" Nov 28 08:23:05 crc kubenswrapper[4946]: I1128 08:23:05.538771 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="d09f4eb7-5889-4889-9901-6dbe8dae3f1e" containerName="extract-content" Nov 28 08:23:05 crc kubenswrapper[4946]: E1128 08:23:05.538802 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d09f4eb7-5889-4889-9901-6dbe8dae3f1e" containerName="extract-utilities" Nov 28 08:23:05 crc kubenswrapper[4946]: I1128 08:23:05.538816 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="d09f4eb7-5889-4889-9901-6dbe8dae3f1e" containerName="extract-utilities" Nov 28 08:23:05 crc kubenswrapper[4946]: E1128 08:23:05.538839 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d09f4eb7-5889-4889-9901-6dbe8dae3f1e" containerName="registry-server" Nov 28 08:23:05 crc kubenswrapper[4946]: I1128 08:23:05.538851 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="d09f4eb7-5889-4889-9901-6dbe8dae3f1e" containerName="registry-server" Nov 28 08:23:05 crc kubenswrapper[4946]: I1128 08:23:05.540247 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="d09f4eb7-5889-4889-9901-6dbe8dae3f1e" containerName="registry-server" Nov 28 08:23:05 crc kubenswrapper[4946]: I1128 08:23:05.541993 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9sg26" Nov 28 08:23:05 crc kubenswrapper[4946]: I1128 08:23:05.557447 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9sg26"] Nov 28 08:23:05 crc kubenswrapper[4946]: I1128 08:23:05.651851 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03785c92-af40-4691-aa17-593b59f44e24-catalog-content\") pod \"redhat-marketplace-9sg26\" (UID: \"03785c92-af40-4691-aa17-593b59f44e24\") " pod="openshift-marketplace/redhat-marketplace-9sg26" Nov 28 08:23:05 crc kubenswrapper[4946]: I1128 08:23:05.651916 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03785c92-af40-4691-aa17-593b59f44e24-utilities\") pod \"redhat-marketplace-9sg26\" (UID: \"03785c92-af40-4691-aa17-593b59f44e24\") " pod="openshift-marketplace/redhat-marketplace-9sg26" Nov 28 08:23:05 crc kubenswrapper[4946]: I1128 08:23:05.652010 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46g5v\" (UniqueName: \"kubernetes.io/projected/03785c92-af40-4691-aa17-593b59f44e24-kube-api-access-46g5v\") pod \"redhat-marketplace-9sg26\" (UID: \"03785c92-af40-4691-aa17-593b59f44e24\") " pod="openshift-marketplace/redhat-marketplace-9sg26" Nov 28 08:23:05 crc kubenswrapper[4946]: I1128 08:23:05.753644 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46g5v\" (UniqueName: \"kubernetes.io/projected/03785c92-af40-4691-aa17-593b59f44e24-kube-api-access-46g5v\") pod \"redhat-marketplace-9sg26\" (UID: \"03785c92-af40-4691-aa17-593b59f44e24\") " pod="openshift-marketplace/redhat-marketplace-9sg26" Nov 28 08:23:05 crc kubenswrapper[4946]: I1128 08:23:05.754059 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/03785c92-af40-4691-aa17-593b59f44e24-catalog-content\") pod \"redhat-marketplace-9sg26\" (UID: \"03785c92-af40-4691-aa17-593b59f44e24\") " pod="openshift-marketplace/redhat-marketplace-9sg26" Nov 28 08:23:05 crc kubenswrapper[4946]: I1128 08:23:05.754251 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03785c92-af40-4691-aa17-593b59f44e24-utilities\") pod \"redhat-marketplace-9sg26\" (UID: \"03785c92-af40-4691-aa17-593b59f44e24\") " pod="openshift-marketplace/redhat-marketplace-9sg26" Nov 28 08:23:05 crc kubenswrapper[4946]: I1128 08:23:05.754582 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03785c92-af40-4691-aa17-593b59f44e24-catalog-content\") pod \"redhat-marketplace-9sg26\" (UID: \"03785c92-af40-4691-aa17-593b59f44e24\") " pod="openshift-marketplace/redhat-marketplace-9sg26" Nov 28 08:23:05 crc kubenswrapper[4946]: I1128 08:23:05.754853 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03785c92-af40-4691-aa17-593b59f44e24-utilities\") pod \"redhat-marketplace-9sg26\" (UID: \"03785c92-af40-4691-aa17-593b59f44e24\") " pod="openshift-marketplace/redhat-marketplace-9sg26" Nov 28 08:23:05 crc kubenswrapper[4946]: I1128 08:23:05.775083 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46g5v\" (UniqueName: \"kubernetes.io/projected/03785c92-af40-4691-aa17-593b59f44e24-kube-api-access-46g5v\") pod \"redhat-marketplace-9sg26\" (UID: \"03785c92-af40-4691-aa17-593b59f44e24\") " pod="openshift-marketplace/redhat-marketplace-9sg26" Nov 28 08:23:05 crc kubenswrapper[4946]: I1128 08:23:05.868767 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9sg26" Nov 28 08:23:06 crc kubenswrapper[4946]: I1128 08:23:06.345867 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9sg26"] Nov 28 08:23:07 crc kubenswrapper[4946]: I1128 08:23:07.339845 4946 generic.go:334] "Generic (PLEG): container finished" podID="03785c92-af40-4691-aa17-593b59f44e24" containerID="cd724e1aff13da9e4e7cecc518cd6bbcd90c22ecb554d9f3ae11ba53433ab061" exitCode=0 Nov 28 08:23:07 crc kubenswrapper[4946]: I1128 08:23:07.339894 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9sg26" event={"ID":"03785c92-af40-4691-aa17-593b59f44e24","Type":"ContainerDied","Data":"cd724e1aff13da9e4e7cecc518cd6bbcd90c22ecb554d9f3ae11ba53433ab061"} Nov 28 08:23:07 crc kubenswrapper[4946]: I1128 08:23:07.339922 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9sg26" event={"ID":"03785c92-af40-4691-aa17-593b59f44e24","Type":"ContainerStarted","Data":"2c5ecac3cc47c966613c3ff0c310f27fcbcca5e381a966179ef6109baf44b818"} Nov 28 08:23:07 crc kubenswrapper[4946]: I1128 08:23:07.343184 4946 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 08:23:08 crc kubenswrapper[4946]: I1128 08:23:08.353864 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9sg26" event={"ID":"03785c92-af40-4691-aa17-593b59f44e24","Type":"ContainerStarted","Data":"24721297530cb45db1b427164270240bda8c47ac1f18ac2edae21013c248e8f6"} Nov 28 08:23:09 crc kubenswrapper[4946]: I1128 08:23:09.384893 4946 generic.go:334] "Generic (PLEG): container finished" podID="03785c92-af40-4691-aa17-593b59f44e24" containerID="24721297530cb45db1b427164270240bda8c47ac1f18ac2edae21013c248e8f6" exitCode=0 Nov 28 08:23:09 crc kubenswrapper[4946]: I1128 08:23:09.385085 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9sg26" event={"ID":"03785c92-af40-4691-aa17-593b59f44e24","Type":"ContainerDied","Data":"24721297530cb45db1b427164270240bda8c47ac1f18ac2edae21013c248e8f6"} Nov 28 08:23:10 crc kubenswrapper[4946]: I1128 08:23:10.396243 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9sg26" event={"ID":"03785c92-af40-4691-aa17-593b59f44e24","Type":"ContainerStarted","Data":"6ab498d57321ad0095895aaf73010a0dcc3c73d287b9bbe1cee0bbd73b0a1bdd"} Nov 28 08:23:10 crc kubenswrapper[4946]: I1128 08:23:10.419726 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9sg26" podStartSLOduration=2.906736251 podStartE2EDuration="5.419701711s" podCreationTimestamp="2025-11-28 08:23:05 +0000 UTC" firstStartedPulling="2025-11-28 08:23:07.342232851 +0000 UTC m=+5441.720297982" lastFinishedPulling="2025-11-28 08:23:09.855198291 +0000 UTC m=+5444.233263442" observedRunningTime="2025-11-28 08:23:10.415755943 +0000 UTC m=+5444.793821084" watchObservedRunningTime="2025-11-28 08:23:10.419701711 +0000 UTC m=+5444.797766832" Nov 28 08:23:15 crc kubenswrapper[4946]: I1128 08:23:15.869237 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9sg26" Nov 28 08:23:15 crc kubenswrapper[4946]: I1128 08:23:15.870104 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9sg26" Nov 28 
08:23:15 crc kubenswrapper[4946]: I1128 08:23:15.948943 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9sg26" Nov 28 08:23:16 crc kubenswrapper[4946]: I1128 08:23:16.517705 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9sg26" Nov 28 08:23:16 crc kubenswrapper[4946]: I1128 08:23:16.581611 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9sg26"] Nov 28 08:23:17 crc kubenswrapper[4946]: I1128 08:23:17.990550 4946 scope.go:117] "RemoveContainer" containerID="ad330417d26bcc1bd12f3c61a305d6470428abe94fcc981baf7c135dfd2031cb" Nov 28 08:23:17 crc kubenswrapper[4946]: E1128 08:23:17.990955 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:23:18 crc kubenswrapper[4946]: I1128 08:23:18.477659 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9sg26" podUID="03785c92-af40-4691-aa17-593b59f44e24" containerName="registry-server" containerID="cri-o://6ab498d57321ad0095895aaf73010a0dcc3c73d287b9bbe1cee0bbd73b0a1bdd" gracePeriod=2 Nov 28 08:23:19 crc kubenswrapper[4946]: I1128 08:23:19.487696 4946 generic.go:334] "Generic (PLEG): container finished" podID="03785c92-af40-4691-aa17-593b59f44e24" containerID="6ab498d57321ad0095895aaf73010a0dcc3c73d287b9bbe1cee0bbd73b0a1bdd" exitCode=0 Nov 28 08:23:19 crc kubenswrapper[4946]: I1128 08:23:19.487791 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9sg26" event={"ID":"03785c92-af40-4691-aa17-593b59f44e24","Type":"ContainerDied","Data":"6ab498d57321ad0095895aaf73010a0dcc3c73d287b9bbe1cee0bbd73b0a1bdd"} Nov 28 08:23:19 crc kubenswrapper[4946]: I1128 08:23:19.488025 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9sg26" event={"ID":"03785c92-af40-4691-aa17-593b59f44e24","Type":"ContainerDied","Data":"2c5ecac3cc47c966613c3ff0c310f27fcbcca5e381a966179ef6109baf44b818"} Nov 28 08:23:19 crc kubenswrapper[4946]: I1128 08:23:19.488045 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c5ecac3cc47c966613c3ff0c310f27fcbcca5e381a966179ef6109baf44b818" Nov 28 08:23:19 crc kubenswrapper[4946]: I1128 08:23:19.488715 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9sg26" Nov 28 08:23:19 crc kubenswrapper[4946]: I1128 08:23:19.676846 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46g5v\" (UniqueName: \"kubernetes.io/projected/03785c92-af40-4691-aa17-593b59f44e24-kube-api-access-46g5v\") pod \"03785c92-af40-4691-aa17-593b59f44e24\" (UID: \"03785c92-af40-4691-aa17-593b59f44e24\") " Nov 28 08:23:19 crc kubenswrapper[4946]: I1128 08:23:19.676897 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03785c92-af40-4691-aa17-593b59f44e24-catalog-content\") pod \"03785c92-af40-4691-aa17-593b59f44e24\" (UID: \"03785c92-af40-4691-aa17-593b59f44e24\") " Nov 28 08:23:19 crc kubenswrapper[4946]: I1128 08:23:19.676990 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03785c92-af40-4691-aa17-593b59f44e24-utilities\") pod \"03785c92-af40-4691-aa17-593b59f44e24\" (UID: \"03785c92-af40-4691-aa17-593b59f44e24\") " Nov 28 08:23:19 crc kubenswrapper[4946]: I1128 08:23:19.678863 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03785c92-af40-4691-aa17-593b59f44e24-utilities" (OuterVolumeSpecName: "utilities") pod "03785c92-af40-4691-aa17-593b59f44e24" (UID: "03785c92-af40-4691-aa17-593b59f44e24"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 08:23:19 crc kubenswrapper[4946]: I1128 08:23:19.685389 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03785c92-af40-4691-aa17-593b59f44e24-kube-api-access-46g5v" (OuterVolumeSpecName: "kube-api-access-46g5v") pod "03785c92-af40-4691-aa17-593b59f44e24" (UID: "03785c92-af40-4691-aa17-593b59f44e24"). InnerVolumeSpecName "kube-api-access-46g5v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:23:19 crc kubenswrapper[4946]: I1128 08:23:19.704064 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03785c92-af40-4691-aa17-593b59f44e24-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "03785c92-af40-4691-aa17-593b59f44e24" (UID: "03785c92-af40-4691-aa17-593b59f44e24"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 08:23:19 crc kubenswrapper[4946]: I1128 08:23:19.778800 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03785c92-af40-4691-aa17-593b59f44e24-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 08:23:19 crc kubenswrapper[4946]: I1128 08:23:19.778834 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46g5v\" (UniqueName: \"kubernetes.io/projected/03785c92-af40-4691-aa17-593b59f44e24-kube-api-access-46g5v\") on node \"crc\" DevicePath \"\"" Nov 28 08:23:19 crc kubenswrapper[4946]: I1128 08:23:19.778843 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03785c92-af40-4691-aa17-593b59f44e24-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 08:23:20 crc kubenswrapper[4946]: I1128 08:23:20.499170 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9sg26" Nov 28 08:23:20 crc kubenswrapper[4946]: I1128 08:23:20.537759 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9sg26"] Nov 28 08:23:20 crc kubenswrapper[4946]: I1128 08:23:20.549274 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9sg26"] Nov 28 08:23:22 crc kubenswrapper[4946]: I1128 08:23:22.001387 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03785c92-af40-4691-aa17-593b59f44e24" path="/var/lib/kubelet/pods/03785c92-af40-4691-aa17-593b59f44e24/volumes" Nov 28 08:23:30 crc kubenswrapper[4946]: I1128 08:23:30.989413 4946 scope.go:117] "RemoveContainer" containerID="ad330417d26bcc1bd12f3c61a305d6470428abe94fcc981baf7c135dfd2031cb" Nov 28 08:23:30 crc kubenswrapper[4946]: E1128 08:23:30.990297 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:23:41 crc kubenswrapper[4946]: I1128 08:23:41.989754 4946 scope.go:117] "RemoveContainer" containerID="ad330417d26bcc1bd12f3c61a305d6470428abe94fcc981baf7c135dfd2031cb" Nov 28 08:23:41 crc kubenswrapper[4946]: E1128 08:23:41.990667 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:23:56 crc kubenswrapper[4946]: I1128 08:23:56.990061 4946 scope.go:117] "RemoveContainer" containerID="ad330417d26bcc1bd12f3c61a305d6470428abe94fcc981baf7c135dfd2031cb" Nov 28 08:23:56 crc kubenswrapper[4946]: E1128 08:23:56.991096 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:24:09 crc kubenswrapper[4946]: I1128 08:24:09.990939 4946 scope.go:117] "RemoveContainer" containerID="ad330417d26bcc1bd12f3c61a305d6470428abe94fcc981baf7c135dfd2031cb" Nov 28 08:24:09 crc kubenswrapper[4946]: E1128 08:24:09.992007 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:24:23 crc kubenswrapper[4946]: I1128 08:24:23.991332 4946 scope.go:117] "RemoveContainer" containerID="ad330417d26bcc1bd12f3c61a305d6470428abe94fcc981baf7c135dfd2031cb" Nov 28 08:24:23 
crc kubenswrapper[4946]: E1128 08:24:23.992331 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:24:34 crc kubenswrapper[4946]: I1128 08:24:34.990019 4946 scope.go:117] "RemoveContainer" containerID="ad330417d26bcc1bd12f3c61a305d6470428abe94fcc981baf7c135dfd2031cb" Nov 28 08:24:34 crc kubenswrapper[4946]: E1128 08:24:34.990985 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:24:45 crc kubenswrapper[4946]: I1128 08:24:45.998725 4946 scope.go:117] "RemoveContainer" containerID="ad330417d26bcc1bd12f3c61a305d6470428abe94fcc981baf7c135dfd2031cb" Nov 28 08:24:46 crc kubenswrapper[4946]: E1128 08:24:46.001454 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:24:57 crc kubenswrapper[4946]: I1128 08:24:57.989767 4946 scope.go:117] "RemoveContainer" containerID="ad330417d26bcc1bd12f3c61a305d6470428abe94fcc981baf7c135dfd2031cb" Nov 28 08:24:58 crc kubenswrapper[4946]: I1128 08:24:58.425962 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerStarted","Data":"300f30166de36a631f76a3d13fbe5df5f318d95b2aa73f46b8ae00bb9efa796f"} Nov 28 08:26:51 crc kubenswrapper[4946]: I1128 08:26:51.524329 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p6xwv"] Nov 28 08:26:51 crc kubenswrapper[4946]: E1128 08:26:51.525785 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03785c92-af40-4691-aa17-593b59f44e24" containerName="extract-content" Nov 28 08:26:51 crc kubenswrapper[4946]: I1128 08:26:51.525822 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="03785c92-af40-4691-aa17-593b59f44e24" containerName="extract-content" Nov 28 08:26:51 crc kubenswrapper[4946]: E1128 08:26:51.525882 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03785c92-af40-4691-aa17-593b59f44e24" containerName="registry-server" Nov 28 08:26:51 crc kubenswrapper[4946]: I1128 08:26:51.525900 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="03785c92-af40-4691-aa17-593b59f44e24" containerName="registry-server" Nov 28 08:26:51 crc kubenswrapper[4946]: E1128 08:26:51.525955 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03785c92-af40-4691-aa17-593b59f44e24" containerName="extract-utilities" Nov 28 08:26:51 crc kubenswrapper[4946]: 
I1128 08:26:51.525977 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="03785c92-af40-4691-aa17-593b59f44e24" containerName="extract-utilities" Nov 28 08:26:51 crc kubenswrapper[4946]: I1128 08:26:51.526336 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="03785c92-af40-4691-aa17-593b59f44e24" containerName="registry-server" Nov 28 08:26:51 crc kubenswrapper[4946]: I1128 08:26:51.528996 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p6xwv" Nov 28 08:26:51 crc kubenswrapper[4946]: I1128 08:26:51.535535 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p6xwv"] Nov 28 08:26:51 crc kubenswrapper[4946]: I1128 08:26:51.571089 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d0d482e-7035-4b6e-8207-62deef8c3e19-utilities\") pod \"community-operators-p6xwv\" (UID: \"0d0d482e-7035-4b6e-8207-62deef8c3e19\") " pod="openshift-marketplace/community-operators-p6xwv" Nov 28 08:26:51 crc kubenswrapper[4946]: I1128 08:26:51.571423 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sd46\" (UniqueName: \"kubernetes.io/projected/0d0d482e-7035-4b6e-8207-62deef8c3e19-kube-api-access-8sd46\") pod \"community-operators-p6xwv\" (UID: \"0d0d482e-7035-4b6e-8207-62deef8c3e19\") " pod="openshift-marketplace/community-operators-p6xwv" Nov 28 08:26:51 crc kubenswrapper[4946]: I1128 08:26:51.571581 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d0d482e-7035-4b6e-8207-62deef8c3e19-catalog-content\") pod \"community-operators-p6xwv\" (UID: \"0d0d482e-7035-4b6e-8207-62deef8c3e19\") " pod="openshift-marketplace/community-operators-p6xwv" Nov 28 08:26:51 crc kubenswrapper[4946]: I1128 08:26:51.673136 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d0d482e-7035-4b6e-8207-62deef8c3e19-utilities\") pod \"community-operators-p6xwv\" (UID: \"0d0d482e-7035-4b6e-8207-62deef8c3e19\") " pod="openshift-marketplace/community-operators-p6xwv" Nov 28 08:26:51 crc kubenswrapper[4946]: I1128 08:26:51.673268 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sd46\" (UniqueName: \"kubernetes.io/projected/0d0d482e-7035-4b6e-8207-62deef8c3e19-kube-api-access-8sd46\") pod \"community-operators-p6xwv\" (UID: \"0d0d482e-7035-4b6e-8207-62deef8c3e19\") " pod="openshift-marketplace/community-operators-p6xwv" Nov 28 08:26:51 crc kubenswrapper[4946]: I1128 08:26:51.673311 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d0d482e-7035-4b6e-8207-62deef8c3e19-catalog-content\") pod \"community-operators-p6xwv\" (UID: \"0d0d482e-7035-4b6e-8207-62deef8c3e19\") " pod="openshift-marketplace/community-operators-p6xwv" Nov 28 08:26:51 crc kubenswrapper[4946]: I1128 08:26:51.673615 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d0d482e-7035-4b6e-8207-62deef8c3e19-utilities\") pod \"community-operators-p6xwv\" (UID: \"0d0d482e-7035-4b6e-8207-62deef8c3e19\") " pod="openshift-marketplace/community-operators-p6xwv" Nov 28 08:26:51 
crc kubenswrapper[4946]: I1128 08:26:51.674015 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d0d482e-7035-4b6e-8207-62deef8c3e19-catalog-content\") pod \"community-operators-p6xwv\" (UID: \"0d0d482e-7035-4b6e-8207-62deef8c3e19\") " pod="openshift-marketplace/community-operators-p6xwv" Nov 28 08:26:51 crc kubenswrapper[4946]: I1128 08:26:51.708796 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sd46\" (UniqueName: \"kubernetes.io/projected/0d0d482e-7035-4b6e-8207-62deef8c3e19-kube-api-access-8sd46\") pod \"community-operators-p6xwv\" (UID: \"0d0d482e-7035-4b6e-8207-62deef8c3e19\") " pod="openshift-marketplace/community-operators-p6xwv" Nov 28 08:26:51 crc kubenswrapper[4946]: I1128 08:26:51.848387 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p6xwv" Nov 28 08:26:52 crc kubenswrapper[4946]: I1128 08:26:52.397567 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p6xwv"] Nov 28 08:26:52 crc kubenswrapper[4946]: I1128 08:26:52.443221 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6xwv" event={"ID":"0d0d482e-7035-4b6e-8207-62deef8c3e19","Type":"ContainerStarted","Data":"b6405e94455de9a5249c64bae271c199624b4e9c004628163d9301b02ee6e95f"} Nov 28 08:26:53 crc kubenswrapper[4946]: I1128 08:26:53.455807 4946 generic.go:334] "Generic (PLEG): container finished" podID="0d0d482e-7035-4b6e-8207-62deef8c3e19" containerID="7658bd0dfe95e3e107953bbfffba11499eb1710771fd13150c30d9b784cef9c8" exitCode=0 Nov 28 08:26:53 crc kubenswrapper[4946]: I1128 08:26:53.455898 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6xwv" event={"ID":"0d0d482e-7035-4b6e-8207-62deef8c3e19","Type":"ContainerDied","Data":"7658bd0dfe95e3e107953bbfffba11499eb1710771fd13150c30d9b784cef9c8"} Nov 28 08:26:55 crc kubenswrapper[4946]: I1128 08:26:55.474544 4946 generic.go:334] "Generic (PLEG): container finished" podID="0d0d482e-7035-4b6e-8207-62deef8c3e19" containerID="fdf3b5ca31ef3895a9e9cf1e56a600521ce59cf235221a989358a6cb64ff1b4c" exitCode=0 Nov 28 08:26:55 crc kubenswrapper[4946]: I1128 08:26:55.474993 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6xwv" event={"ID":"0d0d482e-7035-4b6e-8207-62deef8c3e19","Type":"ContainerDied","Data":"fdf3b5ca31ef3895a9e9cf1e56a600521ce59cf235221a989358a6cb64ff1b4c"} Nov 28 08:26:56 crc kubenswrapper[4946]: I1128 08:26:56.486345 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6xwv" event={"ID":"0d0d482e-7035-4b6e-8207-62deef8c3e19","Type":"ContainerStarted","Data":"440a95d0a7ad397b9b80cd42c819287162b1ec2287039b1a40c7ea811953c9bf"} Nov 28 08:26:56 crc kubenswrapper[4946]: I1128 08:26:56.514383 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p6xwv" podStartSLOduration=2.859452886 podStartE2EDuration="5.514366234s" podCreationTimestamp="2025-11-28 08:26:51 +0000 UTC" firstStartedPulling="2025-11-28 08:26:53.459417802 +0000 UTC m=+5667.837482923" lastFinishedPulling="2025-11-28 08:26:56.11433112 +0000 UTC m=+5670.492396271" observedRunningTime="2025-11-28 08:26:56.508797256 +0000 UTC m=+5670.886862407" watchObservedRunningTime="2025-11-28 08:26:56.514366234 +0000 
UTC m=+5670.892431355" Nov 28 08:27:01 crc kubenswrapper[4946]: I1128 08:27:01.848653 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p6xwv" Nov 28 08:27:01 crc kubenswrapper[4946]: I1128 08:27:01.850888 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-p6xwv" Nov 28 08:27:01 crc kubenswrapper[4946]: I1128 08:27:01.926419 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p6xwv" Nov 28 08:27:02 crc kubenswrapper[4946]: I1128 08:27:02.623146 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p6xwv" Nov 28 08:27:02 crc kubenswrapper[4946]: I1128 08:27:02.660825 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p6xwv"] Nov 28 08:27:04 crc kubenswrapper[4946]: I1128 08:27:04.555123 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-p6xwv" podUID="0d0d482e-7035-4b6e-8207-62deef8c3e19" containerName="registry-server" containerID="cri-o://440a95d0a7ad397b9b80cd42c819287162b1ec2287039b1a40c7ea811953c9bf" gracePeriod=2 Nov 28 08:27:05 crc kubenswrapper[4946]: I1128 08:27:05.570696 4946 generic.go:334] "Generic (PLEG): container finished" podID="0d0d482e-7035-4b6e-8207-62deef8c3e19" containerID="440a95d0a7ad397b9b80cd42c819287162b1ec2287039b1a40c7ea811953c9bf" exitCode=0 Nov 28 08:27:05 crc kubenswrapper[4946]: I1128 08:27:05.570767 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6xwv" event={"ID":"0d0d482e-7035-4b6e-8207-62deef8c3e19","Type":"ContainerDied","Data":"440a95d0a7ad397b9b80cd42c819287162b1ec2287039b1a40c7ea811953c9bf"} Nov 28 08:27:05 crc kubenswrapper[4946]: I1128 08:27:05.676365 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p6xwv" Nov 28 08:27:05 crc kubenswrapper[4946]: I1128 08:27:05.721558 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d0d482e-7035-4b6e-8207-62deef8c3e19-catalog-content\") pod \"0d0d482e-7035-4b6e-8207-62deef8c3e19\" (UID: \"0d0d482e-7035-4b6e-8207-62deef8c3e19\") " Nov 28 08:27:05 crc kubenswrapper[4946]: I1128 08:27:05.721673 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d0d482e-7035-4b6e-8207-62deef8c3e19-utilities\") pod \"0d0d482e-7035-4b6e-8207-62deef8c3e19\" (UID: \"0d0d482e-7035-4b6e-8207-62deef8c3e19\") " Nov 28 08:27:05 crc kubenswrapper[4946]: I1128 08:27:05.721712 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8sd46\" (UniqueName: \"kubernetes.io/projected/0d0d482e-7035-4b6e-8207-62deef8c3e19-kube-api-access-8sd46\") pod \"0d0d482e-7035-4b6e-8207-62deef8c3e19\" (UID: \"0d0d482e-7035-4b6e-8207-62deef8c3e19\") " Nov 28 08:27:05 crc kubenswrapper[4946]: I1128 08:27:05.722937 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d0d482e-7035-4b6e-8207-62deef8c3e19-utilities" (OuterVolumeSpecName: "utilities") pod "0d0d482e-7035-4b6e-8207-62deef8c3e19" (UID: "0d0d482e-7035-4b6e-8207-62deef8c3e19"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 08:27:05 crc kubenswrapper[4946]: I1128 08:27:05.730051 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d0d482e-7035-4b6e-8207-62deef8c3e19-kube-api-access-8sd46" (OuterVolumeSpecName: "kube-api-access-8sd46") pod "0d0d482e-7035-4b6e-8207-62deef8c3e19" (UID: "0d0d482e-7035-4b6e-8207-62deef8c3e19"). InnerVolumeSpecName "kube-api-access-8sd46". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:27:05 crc kubenswrapper[4946]: I1128 08:27:05.811543 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d0d482e-7035-4b6e-8207-62deef8c3e19-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0d0d482e-7035-4b6e-8207-62deef8c3e19" (UID: "0d0d482e-7035-4b6e-8207-62deef8c3e19"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 08:27:05 crc kubenswrapper[4946]: I1128 08:27:05.823439 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d0d482e-7035-4b6e-8207-62deef8c3e19-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 08:27:05 crc kubenswrapper[4946]: I1128 08:27:05.823491 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d0d482e-7035-4b6e-8207-62deef8c3e19-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 08:27:05 crc kubenswrapper[4946]: I1128 08:27:05.823503 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8sd46\" (UniqueName: \"kubernetes.io/projected/0d0d482e-7035-4b6e-8207-62deef8c3e19-kube-api-access-8sd46\") on node \"crc\" DevicePath \"\"" Nov 28 08:27:06 crc kubenswrapper[4946]: I1128 08:27:06.583454 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6xwv" event={"ID":"0d0d482e-7035-4b6e-8207-62deef8c3e19","Type":"ContainerDied","Data":"b6405e94455de9a5249c64bae271c199624b4e9c004628163d9301b02ee6e95f"} Nov 28 08:27:06 crc kubenswrapper[4946]: I1128 08:27:06.583596 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p6xwv" Nov 28 08:27:06 crc kubenswrapper[4946]: I1128 08:27:06.583915 4946 scope.go:117] "RemoveContainer" containerID="440a95d0a7ad397b9b80cd42c819287162b1ec2287039b1a40c7ea811953c9bf" Nov 28 08:27:06 crc kubenswrapper[4946]: I1128 08:27:06.614987 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p6xwv"] Nov 28 08:27:06 crc kubenswrapper[4946]: I1128 08:27:06.623586 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p6xwv"] Nov 28 08:27:06 crc kubenswrapper[4946]: I1128 08:27:06.628953 4946 scope.go:117] "RemoveContainer" containerID="fdf3b5ca31ef3895a9e9cf1e56a600521ce59cf235221a989358a6cb64ff1b4c" Nov 28 08:27:06 crc kubenswrapper[4946]: I1128 08:27:06.658667 4946 scope.go:117] "RemoveContainer" containerID="7658bd0dfe95e3e107953bbfffba11499eb1710771fd13150c30d9b784cef9c8" Nov 28 08:27:08 crc kubenswrapper[4946]: I1128 08:27:08.007644 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d0d482e-7035-4b6e-8207-62deef8c3e19" path="/var/lib/kubelet/pods/0d0d482e-7035-4b6e-8207-62deef8c3e19/volumes" Nov 28 08:27:24 crc kubenswrapper[4946]: I1128 08:27:24.731683 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 08:27:24 crc kubenswrapper[4946]: I1128 08:27:24.732787 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 08:27:54 crc kubenswrapper[4946]: I1128 08:27:54.731195 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 08:27:54 crc kubenswrapper[4946]: I1128 08:27:54.732098 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 08:28:24 crc kubenswrapper[4946]: I1128 08:28:24.731208 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 08:28:24 crc kubenswrapper[4946]: I1128 08:28:24.731846 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 08:28:24 crc kubenswrapper[4946]: I1128 08:28:24.731907 4946 kubelet.go:2542] "SyncLoop 
(probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" Nov 28 08:28:24 crc kubenswrapper[4946]: I1128 08:28:24.732993 4946 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"300f30166de36a631f76a3d13fbe5df5f318d95b2aa73f46b8ae00bb9efa796f"} pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 08:28:24 crc kubenswrapper[4946]: I1128 08:28:24.733090 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" containerID="cri-o://300f30166de36a631f76a3d13fbe5df5f318d95b2aa73f46b8ae00bb9efa796f" gracePeriod=600 Nov 28 08:28:25 crc kubenswrapper[4946]: I1128 08:28:25.497840 4946 generic.go:334] "Generic (PLEG): container finished" podID="7450befc-262f-45d1-a5f4-f445e540185b" containerID="300f30166de36a631f76a3d13fbe5df5f318d95b2aa73f46b8ae00bb9efa796f" exitCode=0 Nov 28 08:28:25 crc kubenswrapper[4946]: I1128 08:28:25.497914 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerDied","Data":"300f30166de36a631f76a3d13fbe5df5f318d95b2aa73f46b8ae00bb9efa796f"} Nov 28 08:28:25 crc kubenswrapper[4946]: I1128 08:28:25.498304 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerStarted","Data":"e8119d549e123f1e930654e36405e56510a1c4898160d24fa79c41d4ac87c927"} Nov 28 08:28:25 crc kubenswrapper[4946]: I1128 08:28:25.498336 4946 scope.go:117] "RemoveContainer" containerID="ad330417d26bcc1bd12f3c61a305d6470428abe94fcc981baf7c135dfd2031cb" Nov 28 08:29:34 crc kubenswrapper[4946]: I1128 08:29:34.779999 4946 scope.go:117] "RemoveContainer" containerID="24721297530cb45db1b427164270240bda8c47ac1f18ac2edae21013c248e8f6" Nov 28 08:29:34 crc kubenswrapper[4946]: I1128 08:29:34.815411 4946 scope.go:117] "RemoveContainer" containerID="cd724e1aff13da9e4e7cecc518cd6bbcd90c22ecb554d9f3ae11ba53433ab061" Nov 28 08:29:34 crc kubenswrapper[4946]: I1128 08:29:34.863970 4946 scope.go:117] "RemoveContainer" containerID="6ab498d57321ad0095895aaf73010a0dcc3c73d287b9bbe1cee0bbd73b0a1bdd" Nov 28 08:30:00 crc kubenswrapper[4946]: I1128 08:30:00.175007 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405310-tcq6s"] Nov 28 08:30:00 crc kubenswrapper[4946]: E1128 08:30:00.176452 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d0d482e-7035-4b6e-8207-62deef8c3e19" containerName="registry-server" Nov 28 08:30:00 crc kubenswrapper[4946]: I1128 08:30:00.176501 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d0d482e-7035-4b6e-8207-62deef8c3e19" containerName="registry-server" Nov 28 08:30:00 crc kubenswrapper[4946]: E1128 08:30:00.176513 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d0d482e-7035-4b6e-8207-62deef8c3e19" containerName="extract-utilities" Nov 28 08:30:00 crc kubenswrapper[4946]: I1128 08:30:00.176519 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d0d482e-7035-4b6e-8207-62deef8c3e19" 
containerName="extract-utilities" Nov 28 08:30:00 crc kubenswrapper[4946]: E1128 08:30:00.176561 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d0d482e-7035-4b6e-8207-62deef8c3e19" containerName="extract-content" Nov 28 08:30:00 crc kubenswrapper[4946]: I1128 08:30:00.176568 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d0d482e-7035-4b6e-8207-62deef8c3e19" containerName="extract-content" Nov 28 08:30:00 crc kubenswrapper[4946]: I1128 08:30:00.176747 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d0d482e-7035-4b6e-8207-62deef8c3e19" containerName="registry-server" Nov 28 08:30:00 crc kubenswrapper[4946]: I1128 08:30:00.177620 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405310-tcq6s" Nov 28 08:30:00 crc kubenswrapper[4946]: I1128 08:30:00.183585 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405310-tcq6s"] Nov 28 08:30:00 crc kubenswrapper[4946]: I1128 08:30:00.223572 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 28 08:30:00 crc kubenswrapper[4946]: I1128 08:30:00.223706 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 28 08:30:00 crc kubenswrapper[4946]: I1128 08:30:00.225277 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a581cbb-503c-4af0-91ce-747373199185-config-volume\") pod \"collect-profiles-29405310-tcq6s\" (UID: \"1a581cbb-503c-4af0-91ce-747373199185\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405310-tcq6s" Nov 28 08:30:00 crc kubenswrapper[4946]: I1128 08:30:00.225755 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1a581cbb-503c-4af0-91ce-747373199185-secret-volume\") pod \"collect-profiles-29405310-tcq6s\" (UID: \"1a581cbb-503c-4af0-91ce-747373199185\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405310-tcq6s" Nov 28 08:30:00 crc kubenswrapper[4946]: I1128 08:30:00.225789 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2zkc\" (UniqueName: \"kubernetes.io/projected/1a581cbb-503c-4af0-91ce-747373199185-kube-api-access-s2zkc\") pod \"collect-profiles-29405310-tcq6s\" (UID: \"1a581cbb-503c-4af0-91ce-747373199185\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405310-tcq6s" Nov 28 08:30:00 crc kubenswrapper[4946]: I1128 08:30:00.327168 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1a581cbb-503c-4af0-91ce-747373199185-secret-volume\") pod \"collect-profiles-29405310-tcq6s\" (UID: \"1a581cbb-503c-4af0-91ce-747373199185\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405310-tcq6s" Nov 28 08:30:00 crc kubenswrapper[4946]: I1128 08:30:00.327224 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2zkc\" (UniqueName: \"kubernetes.io/projected/1a581cbb-503c-4af0-91ce-747373199185-kube-api-access-s2zkc\") pod \"collect-profiles-29405310-tcq6s\" (UID: \"1a581cbb-503c-4af0-91ce-747373199185\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29405310-tcq6s" Nov 28 08:30:00 crc kubenswrapper[4946]: I1128 08:30:00.327262 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a581cbb-503c-4af0-91ce-747373199185-config-volume\") pod \"collect-profiles-29405310-tcq6s\" (UID: \"1a581cbb-503c-4af0-91ce-747373199185\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405310-tcq6s" Nov 28 08:30:00 crc kubenswrapper[4946]: I1128 08:30:00.329699 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a581cbb-503c-4af0-91ce-747373199185-config-volume\") pod \"collect-profiles-29405310-tcq6s\" (UID: \"1a581cbb-503c-4af0-91ce-747373199185\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405310-tcq6s" Nov 28 08:30:00 crc kubenswrapper[4946]: I1128 08:30:00.334788 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1a581cbb-503c-4af0-91ce-747373199185-secret-volume\") pod \"collect-profiles-29405310-tcq6s\" (UID: \"1a581cbb-503c-4af0-91ce-747373199185\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405310-tcq6s" Nov 28 08:30:00 crc kubenswrapper[4946]: I1128 08:30:00.346305 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2zkc\" (UniqueName: \"kubernetes.io/projected/1a581cbb-503c-4af0-91ce-747373199185-kube-api-access-s2zkc\") pod \"collect-profiles-29405310-tcq6s\" (UID: \"1a581cbb-503c-4af0-91ce-747373199185\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405310-tcq6s" Nov 28 08:30:00 crc kubenswrapper[4946]: I1128 08:30:00.542844 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405310-tcq6s" Nov 28 08:30:01 crc kubenswrapper[4946]: I1128 08:30:01.069088 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405310-tcq6s"] Nov 28 08:30:01 crc kubenswrapper[4946]: I1128 08:30:01.444203 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405310-tcq6s" event={"ID":"1a581cbb-503c-4af0-91ce-747373199185","Type":"ContainerStarted","Data":"b1904cb93cef921a5741f9346108c53ea15e41a0cbdd179f760cb1a53f1c5fb3"} Nov 28 08:30:01 crc kubenswrapper[4946]: I1128 08:30:01.444687 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405310-tcq6s" event={"ID":"1a581cbb-503c-4af0-91ce-747373199185","Type":"ContainerStarted","Data":"003b2fbf2dca8272f560ae2a481dbee87824f8c5f5713c9c68ca1147c640f3cb"} Nov 28 08:30:01 crc kubenswrapper[4946]: I1128 08:30:01.469554 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29405310-tcq6s" podStartSLOduration=1.469535365 podStartE2EDuration="1.469535365s" podCreationTimestamp="2025-11-28 08:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 08:30:01.466959181 +0000 UTC m=+5855.845024302" watchObservedRunningTime="2025-11-28 08:30:01.469535365 +0000 UTC m=+5855.847600486" Nov 28 08:30:02 crc kubenswrapper[4946]: I1128 08:30:02.455922 4946 generic.go:334] "Generic (PLEG): container finished" podID="1a581cbb-503c-4af0-91ce-747373199185" containerID="b1904cb93cef921a5741f9346108c53ea15e41a0cbdd179f760cb1a53f1c5fb3" exitCode=0 Nov 28 08:30:02 crc kubenswrapper[4946]: I1128 08:30:02.455988 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405310-tcq6s" event={"ID":"1a581cbb-503c-4af0-91ce-747373199185","Type":"ContainerDied","Data":"b1904cb93cef921a5741f9346108c53ea15e41a0cbdd179f760cb1a53f1c5fb3"} Nov 28 08:30:03 crc kubenswrapper[4946]: I1128 08:30:03.897506 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405310-tcq6s" Nov 28 08:30:04 crc kubenswrapper[4946]: I1128 08:30:04.089071 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1a581cbb-503c-4af0-91ce-747373199185-secret-volume\") pod \"1a581cbb-503c-4af0-91ce-747373199185\" (UID: \"1a581cbb-503c-4af0-91ce-747373199185\") " Nov 28 08:30:04 crc kubenswrapper[4946]: I1128 08:30:04.089615 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2zkc\" (UniqueName: \"kubernetes.io/projected/1a581cbb-503c-4af0-91ce-747373199185-kube-api-access-s2zkc\") pod \"1a581cbb-503c-4af0-91ce-747373199185\" (UID: \"1a581cbb-503c-4af0-91ce-747373199185\") " Nov 28 08:30:04 crc kubenswrapper[4946]: I1128 08:30:04.090282 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a581cbb-503c-4af0-91ce-747373199185-config-volume\") pod \"1a581cbb-503c-4af0-91ce-747373199185\" (UID: \"1a581cbb-503c-4af0-91ce-747373199185\") " Nov 28 08:30:04 crc kubenswrapper[4946]: I1128 08:30:04.095227 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a581cbb-503c-4af0-91ce-747373199185-config-volume" (OuterVolumeSpecName: "config-volume") pod "1a581cbb-503c-4af0-91ce-747373199185" (UID: "1a581cbb-503c-4af0-91ce-747373199185"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:30:04 crc kubenswrapper[4946]: I1128 08:30:04.096213 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a581cbb-503c-4af0-91ce-747373199185-kube-api-access-s2zkc" (OuterVolumeSpecName: "kube-api-access-s2zkc") pod "1a581cbb-503c-4af0-91ce-747373199185" (UID: "1a581cbb-503c-4af0-91ce-747373199185"). InnerVolumeSpecName "kube-api-access-s2zkc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:30:04 crc kubenswrapper[4946]: I1128 08:30:04.097759 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a581cbb-503c-4af0-91ce-747373199185-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1a581cbb-503c-4af0-91ce-747373199185" (UID: "1a581cbb-503c-4af0-91ce-747373199185"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:30:04 crc kubenswrapper[4946]: I1128 08:30:04.196557 4946 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1a581cbb-503c-4af0-91ce-747373199185-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 28 08:30:04 crc kubenswrapper[4946]: I1128 08:30:04.196596 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2zkc\" (UniqueName: \"kubernetes.io/projected/1a581cbb-503c-4af0-91ce-747373199185-kube-api-access-s2zkc\") on node \"crc\" DevicePath \"\"" Nov 28 08:30:04 crc kubenswrapper[4946]: I1128 08:30:04.196895 4946 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a581cbb-503c-4af0-91ce-747373199185-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 08:30:04 crc kubenswrapper[4946]: I1128 08:30:04.481060 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405310-tcq6s" Nov 28 08:30:04 crc kubenswrapper[4946]: I1128 08:30:04.480980 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405310-tcq6s" event={"ID":"1a581cbb-503c-4af0-91ce-747373199185","Type":"ContainerDied","Data":"003b2fbf2dca8272f560ae2a481dbee87824f8c5f5713c9c68ca1147c640f3cb"} Nov 28 08:30:04 crc kubenswrapper[4946]: I1128 08:30:04.481279 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="003b2fbf2dca8272f560ae2a481dbee87824f8c5f5713c9c68ca1147c640f3cb" Nov 28 08:30:04 crc kubenswrapper[4946]: I1128 08:30:04.577267 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405265-rmvt2"] Nov 28 08:30:04 crc kubenswrapper[4946]: I1128 08:30:04.585998 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405265-rmvt2"] Nov 28 08:30:06 crc kubenswrapper[4946]: I1128 08:30:06.001522 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc6460c4-8209-4350-90b0-31a14dc20858" path="/var/lib/kubelet/pods/cc6460c4-8209-4350-90b0-31a14dc20858/volumes" Nov 28 08:30:13 crc kubenswrapper[4946]: I1128 08:30:13.066503 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7fhvz"] Nov 28 08:30:13 crc kubenswrapper[4946]: E1128 08:30:13.067549 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a581cbb-503c-4af0-91ce-747373199185" containerName="collect-profiles" Nov 28 08:30:13 crc kubenswrapper[4946]: I1128 08:30:13.067595 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a581cbb-503c-4af0-91ce-747373199185" containerName="collect-profiles" Nov 28 08:30:13 crc kubenswrapper[4946]: I1128 08:30:13.067780 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a581cbb-503c-4af0-91ce-747373199185" containerName="collect-profiles" Nov 28 08:30:13 crc kubenswrapper[4946]: I1128 08:30:13.068973 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7fhvz" Nov 28 08:30:13 crc kubenswrapper[4946]: I1128 08:30:13.096535 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7fhvz"] Nov 28 08:30:13 crc kubenswrapper[4946]: I1128 08:30:13.232653 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sft5s\" (UniqueName: \"kubernetes.io/projected/6c3f2a9c-bfcf-48f9-ae49-8ba485964b69-kube-api-access-sft5s\") pod \"redhat-operators-7fhvz\" (UID: \"6c3f2a9c-bfcf-48f9-ae49-8ba485964b69\") " pod="openshift-marketplace/redhat-operators-7fhvz" Nov 28 08:30:13 crc kubenswrapper[4946]: I1128 08:30:13.232702 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c3f2a9c-bfcf-48f9-ae49-8ba485964b69-utilities\") pod \"redhat-operators-7fhvz\" (UID: \"6c3f2a9c-bfcf-48f9-ae49-8ba485964b69\") " pod="openshift-marketplace/redhat-operators-7fhvz" Nov 28 08:30:13 crc kubenswrapper[4946]: I1128 08:30:13.232735 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c3f2a9c-bfcf-48f9-ae49-8ba485964b69-catalog-content\") pod \"redhat-operators-7fhvz\" (UID: \"6c3f2a9c-bfcf-48f9-ae49-8ba485964b69\") " pod="openshift-marketplace/redhat-operators-7fhvz" Nov 28 08:30:13 crc kubenswrapper[4946]: I1128 08:30:13.333678 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sft5s\" (UniqueName: \"kubernetes.io/projected/6c3f2a9c-bfcf-48f9-ae49-8ba485964b69-kube-api-access-sft5s\") pod \"redhat-operators-7fhvz\" (UID: \"6c3f2a9c-bfcf-48f9-ae49-8ba485964b69\") " pod="openshift-marketplace/redhat-operators-7fhvz" Nov 28 08:30:13 crc kubenswrapper[4946]: I1128 08:30:13.333730 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c3f2a9c-bfcf-48f9-ae49-8ba485964b69-utilities\") pod \"redhat-operators-7fhvz\" (UID: \"6c3f2a9c-bfcf-48f9-ae49-8ba485964b69\") " pod="openshift-marketplace/redhat-operators-7fhvz" Nov 28 08:30:13 crc kubenswrapper[4946]: I1128 08:30:13.333758 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c3f2a9c-bfcf-48f9-ae49-8ba485964b69-catalog-content\") pod \"redhat-operators-7fhvz\" (UID: \"6c3f2a9c-bfcf-48f9-ae49-8ba485964b69\") " pod="openshift-marketplace/redhat-operators-7fhvz" Nov 28 08:30:13 crc kubenswrapper[4946]: I1128 08:30:13.334202 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c3f2a9c-bfcf-48f9-ae49-8ba485964b69-utilities\") pod \"redhat-operators-7fhvz\" (UID: \"6c3f2a9c-bfcf-48f9-ae49-8ba485964b69\") " pod="openshift-marketplace/redhat-operators-7fhvz" Nov 28 08:30:13 crc kubenswrapper[4946]: I1128 08:30:13.334263 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c3f2a9c-bfcf-48f9-ae49-8ba485964b69-catalog-content\") pod \"redhat-operators-7fhvz\" (UID: \"6c3f2a9c-bfcf-48f9-ae49-8ba485964b69\") " pod="openshift-marketplace/redhat-operators-7fhvz" Nov 28 08:30:13 crc kubenswrapper[4946]: I1128 08:30:13.357909 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-sft5s\" (UniqueName: \"kubernetes.io/projected/6c3f2a9c-bfcf-48f9-ae49-8ba485964b69-kube-api-access-sft5s\") pod \"redhat-operators-7fhvz\" (UID: \"6c3f2a9c-bfcf-48f9-ae49-8ba485964b69\") " pod="openshift-marketplace/redhat-operators-7fhvz" Nov 28 08:30:13 crc kubenswrapper[4946]: I1128 08:30:13.393741 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7fhvz" Nov 28 08:30:13 crc kubenswrapper[4946]: I1128 08:30:13.819757 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7fhvz"] Nov 28 08:30:13 crc kubenswrapper[4946]: W1128 08:30:13.831342 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c3f2a9c_bfcf_48f9_ae49_8ba485964b69.slice/crio-e57c0c317111a8a3b680a17db6a61f72f7f51bc47ee005b2b2c6258e66415b89 WatchSource:0}: Error finding container e57c0c317111a8a3b680a17db6a61f72f7f51bc47ee005b2b2c6258e66415b89: Status 404 returned error can't find the container with id e57c0c317111a8a3b680a17db6a61f72f7f51bc47ee005b2b2c6258e66415b89 Nov 28 08:30:14 crc kubenswrapper[4946]: I1128 08:30:14.581377 4946 generic.go:334] "Generic (PLEG): container finished" podID="6c3f2a9c-bfcf-48f9-ae49-8ba485964b69" containerID="26feb989a83614fe05ea20bbe8ae7dd9419021a68c7ebe5bae45857e67c54e78" exitCode=0 Nov 28 08:30:14 crc kubenswrapper[4946]: I1128 08:30:14.581831 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7fhvz" event={"ID":"6c3f2a9c-bfcf-48f9-ae49-8ba485964b69","Type":"ContainerDied","Data":"26feb989a83614fe05ea20bbe8ae7dd9419021a68c7ebe5bae45857e67c54e78"} Nov 28 08:30:14 crc kubenswrapper[4946]: I1128 08:30:14.581875 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7fhvz" event={"ID":"6c3f2a9c-bfcf-48f9-ae49-8ba485964b69","Type":"ContainerStarted","Data":"e57c0c317111a8a3b680a17db6a61f72f7f51bc47ee005b2b2c6258e66415b89"} Nov 28 08:30:14 crc kubenswrapper[4946]: I1128 08:30:14.584880 4946 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 08:30:16 crc kubenswrapper[4946]: I1128 08:30:16.602709 4946 generic.go:334] "Generic (PLEG): container finished" podID="6c3f2a9c-bfcf-48f9-ae49-8ba485964b69" containerID="7da61f8da15497a7a0650280535f030689feb80a66dd6d18f43967af48f67d80" exitCode=0 Nov 28 08:30:16 crc kubenswrapper[4946]: I1128 08:30:16.602790 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7fhvz" event={"ID":"6c3f2a9c-bfcf-48f9-ae49-8ba485964b69","Type":"ContainerDied","Data":"7da61f8da15497a7a0650280535f030689feb80a66dd6d18f43967af48f67d80"} Nov 28 08:30:17 crc kubenswrapper[4946]: I1128 08:30:17.613546 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7fhvz" event={"ID":"6c3f2a9c-bfcf-48f9-ae49-8ba485964b69","Type":"ContainerStarted","Data":"6abd850f4c7ff59853e65ef981e0938bb043847bc45f77a576c7dfb376850d5b"} Nov 28 08:30:17 crc kubenswrapper[4946]: I1128 08:30:17.645718 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7fhvz" podStartSLOduration=1.896743029 podStartE2EDuration="4.645688767s" podCreationTimestamp="2025-11-28 08:30:13 +0000 UTC" firstStartedPulling="2025-11-28 08:30:14.583988468 +0000 UTC m=+5868.962053619" lastFinishedPulling="2025-11-28 08:30:17.332934216 
+0000 UTC m=+5871.710999357" observedRunningTime="2025-11-28 08:30:17.639557145 +0000 UTC m=+5872.017622256" watchObservedRunningTime="2025-11-28 08:30:17.645688767 +0000 UTC m=+5872.023753908" Nov 28 08:30:19 crc kubenswrapper[4946]: I1128 08:30:19.452110 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-stz98"] Nov 28 08:30:19 crc kubenswrapper[4946]: I1128 08:30:19.454702 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-stz98" Nov 28 08:30:19 crc kubenswrapper[4946]: I1128 08:30:19.471307 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-stz98"] Nov 28 08:30:19 crc kubenswrapper[4946]: I1128 08:30:19.628647 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b938e0e4-3b5b-46a9-a018-d3a4e85b1610-catalog-content\") pod \"certified-operators-stz98\" (UID: \"b938e0e4-3b5b-46a9-a018-d3a4e85b1610\") " pod="openshift-marketplace/certified-operators-stz98" Nov 28 08:30:19 crc kubenswrapper[4946]: I1128 08:30:19.628837 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrk2q\" (UniqueName: \"kubernetes.io/projected/b938e0e4-3b5b-46a9-a018-d3a4e85b1610-kube-api-access-qrk2q\") pod \"certified-operators-stz98\" (UID: \"b938e0e4-3b5b-46a9-a018-d3a4e85b1610\") " pod="openshift-marketplace/certified-operators-stz98" Nov 28 08:30:19 crc kubenswrapper[4946]: I1128 08:30:19.628923 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b938e0e4-3b5b-46a9-a018-d3a4e85b1610-utilities\") pod \"certified-operators-stz98\" (UID: \"b938e0e4-3b5b-46a9-a018-d3a4e85b1610\") " pod="openshift-marketplace/certified-operators-stz98" Nov 28 08:30:19 crc kubenswrapper[4946]: I1128 08:30:19.730518 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b938e0e4-3b5b-46a9-a018-d3a4e85b1610-catalog-content\") pod \"certified-operators-stz98\" (UID: \"b938e0e4-3b5b-46a9-a018-d3a4e85b1610\") " pod="openshift-marketplace/certified-operators-stz98" Nov 28 08:30:19 crc kubenswrapper[4946]: I1128 08:30:19.730600 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrk2q\" (UniqueName: \"kubernetes.io/projected/b938e0e4-3b5b-46a9-a018-d3a4e85b1610-kube-api-access-qrk2q\") pod \"certified-operators-stz98\" (UID: \"b938e0e4-3b5b-46a9-a018-d3a4e85b1610\") " pod="openshift-marketplace/certified-operators-stz98" Nov 28 08:30:19 crc kubenswrapper[4946]: I1128 08:30:19.730626 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b938e0e4-3b5b-46a9-a018-d3a4e85b1610-utilities\") pod \"certified-operators-stz98\" (UID: \"b938e0e4-3b5b-46a9-a018-d3a4e85b1610\") " pod="openshift-marketplace/certified-operators-stz98" Nov 28 08:30:19 crc kubenswrapper[4946]: I1128 08:30:19.731167 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b938e0e4-3b5b-46a9-a018-d3a4e85b1610-utilities\") pod \"certified-operators-stz98\" (UID: \"b938e0e4-3b5b-46a9-a018-d3a4e85b1610\") " pod="openshift-marketplace/certified-operators-stz98" Nov 28 
08:30:19 crc kubenswrapper[4946]: I1128 08:30:19.731334 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b938e0e4-3b5b-46a9-a018-d3a4e85b1610-catalog-content\") pod \"certified-operators-stz98\" (UID: \"b938e0e4-3b5b-46a9-a018-d3a4e85b1610\") " pod="openshift-marketplace/certified-operators-stz98" Nov 28 08:30:19 crc kubenswrapper[4946]: I1128 08:30:19.751966 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrk2q\" (UniqueName: \"kubernetes.io/projected/b938e0e4-3b5b-46a9-a018-d3a4e85b1610-kube-api-access-qrk2q\") pod \"certified-operators-stz98\" (UID: \"b938e0e4-3b5b-46a9-a018-d3a4e85b1610\") " pod="openshift-marketplace/certified-operators-stz98" Nov 28 08:30:19 crc kubenswrapper[4946]: I1128 08:30:19.807799 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-stz98" Nov 28 08:30:20 crc kubenswrapper[4946]: I1128 08:30:20.293524 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-stz98"] Nov 28 08:30:20 crc kubenswrapper[4946]: I1128 08:30:20.640580 4946 generic.go:334] "Generic (PLEG): container finished" podID="b938e0e4-3b5b-46a9-a018-d3a4e85b1610" containerID="f68e6faf5a7e5b61996980ac34f78f44fac63d5aac93bc93d3e8c145b6db997a" exitCode=0 Nov 28 08:30:20 crc kubenswrapper[4946]: I1128 08:30:20.640642 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-stz98" event={"ID":"b938e0e4-3b5b-46a9-a018-d3a4e85b1610","Type":"ContainerDied","Data":"f68e6faf5a7e5b61996980ac34f78f44fac63d5aac93bc93d3e8c145b6db997a"} Nov 28 08:30:20 crc kubenswrapper[4946]: I1128 08:30:20.640683 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-stz98" event={"ID":"b938e0e4-3b5b-46a9-a018-d3a4e85b1610","Type":"ContainerStarted","Data":"c970eb67c73c366898dff90639a07758059b60b32722d20ad071b52d67c6de82"} Nov 28 08:30:22 crc kubenswrapper[4946]: I1128 08:30:22.660006 4946 generic.go:334] "Generic (PLEG): container finished" podID="b938e0e4-3b5b-46a9-a018-d3a4e85b1610" containerID="7865977df5b52a2a54f0928b8cd2b49c37239e2e1de00b003642f1704bab1ee4" exitCode=0 Nov 28 08:30:22 crc kubenswrapper[4946]: I1128 08:30:22.660072 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-stz98" event={"ID":"b938e0e4-3b5b-46a9-a018-d3a4e85b1610","Type":"ContainerDied","Data":"7865977df5b52a2a54f0928b8cd2b49c37239e2e1de00b003642f1704bab1ee4"} Nov 28 08:30:23 crc kubenswrapper[4946]: I1128 08:30:23.394722 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7fhvz" Nov 28 08:30:23 crc kubenswrapper[4946]: I1128 08:30:23.394773 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7fhvz" Nov 28 08:30:23 crc kubenswrapper[4946]: I1128 08:30:23.670424 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-stz98" event={"ID":"b938e0e4-3b5b-46a9-a018-d3a4e85b1610","Type":"ContainerStarted","Data":"f266569d278ee62990f8fc0a07624fcf5b9ed266667cecf6b2bcafbaed5a2782"} Nov 28 08:30:23 crc kubenswrapper[4946]: I1128 08:30:23.692901 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-stz98" podStartSLOduration=2.037020377 
podStartE2EDuration="4.692880988s" podCreationTimestamp="2025-11-28 08:30:19 +0000 UTC" firstStartedPulling="2025-11-28 08:30:20.642586461 +0000 UTC m=+5875.020651612" lastFinishedPulling="2025-11-28 08:30:23.298447112 +0000 UTC m=+5877.676512223" observedRunningTime="2025-11-28 08:30:23.690366246 +0000 UTC m=+5878.068431367" watchObservedRunningTime="2025-11-28 08:30:23.692880988 +0000 UTC m=+5878.070946099" Nov 28 08:30:24 crc kubenswrapper[4946]: I1128 08:30:24.451290 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7fhvz" podUID="6c3f2a9c-bfcf-48f9-ae49-8ba485964b69" containerName="registry-server" probeResult="failure" output=< Nov 28 08:30:24 crc kubenswrapper[4946]: timeout: failed to connect service ":50051" within 1s Nov 28 08:30:24 crc kubenswrapper[4946]: > Nov 28 08:30:29 crc kubenswrapper[4946]: I1128 08:30:29.808883 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-stz98" Nov 28 08:30:29 crc kubenswrapper[4946]: I1128 08:30:29.809622 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-stz98" Nov 28 08:30:29 crc kubenswrapper[4946]: I1128 08:30:29.884028 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-stz98" Nov 28 08:30:30 crc kubenswrapper[4946]: I1128 08:30:30.821600 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-stz98" Nov 28 08:30:30 crc kubenswrapper[4946]: I1128 08:30:30.890947 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-stz98"] Nov 28 08:30:32 crc kubenswrapper[4946]: I1128 08:30:32.764267 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-stz98" podUID="b938e0e4-3b5b-46a9-a018-d3a4e85b1610" containerName="registry-server" containerID="cri-o://f266569d278ee62990f8fc0a07624fcf5b9ed266667cecf6b2bcafbaed5a2782" gracePeriod=2 Nov 28 08:30:33 crc kubenswrapper[4946]: I1128 08:30:33.458932 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7fhvz" Nov 28 08:30:33 crc kubenswrapper[4946]: I1128 08:30:33.532186 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7fhvz" Nov 28 08:30:33 crc kubenswrapper[4946]: I1128 08:30:33.678255 4946 util.go:48] "No ready sandbox for pod can be found. 
Nov 28 08:30:33 crc kubenswrapper[4946]: I1128 08:30:33.701022 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7fhvz"]
Nov 28 08:30:33 crc kubenswrapper[4946]: I1128 08:30:33.745527 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b938e0e4-3b5b-46a9-a018-d3a4e85b1610-utilities\") pod \"b938e0e4-3b5b-46a9-a018-d3a4e85b1610\" (UID: \"b938e0e4-3b5b-46a9-a018-d3a4e85b1610\") "
Nov 28 08:30:33 crc kubenswrapper[4946]: I1128 08:30:33.745609 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrk2q\" (UniqueName: \"kubernetes.io/projected/b938e0e4-3b5b-46a9-a018-d3a4e85b1610-kube-api-access-qrk2q\") pod \"b938e0e4-3b5b-46a9-a018-d3a4e85b1610\" (UID: \"b938e0e4-3b5b-46a9-a018-d3a4e85b1610\") "
Nov 28 08:30:33 crc kubenswrapper[4946]: I1128 08:30:33.745698 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b938e0e4-3b5b-46a9-a018-d3a4e85b1610-catalog-content\") pod \"b938e0e4-3b5b-46a9-a018-d3a4e85b1610\" (UID: \"b938e0e4-3b5b-46a9-a018-d3a4e85b1610\") "
Nov 28 08:30:33 crc kubenswrapper[4946]: I1128 08:30:33.746322 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b938e0e4-3b5b-46a9-a018-d3a4e85b1610-utilities" (OuterVolumeSpecName: "utilities") pod "b938e0e4-3b5b-46a9-a018-d3a4e85b1610" (UID: "b938e0e4-3b5b-46a9-a018-d3a4e85b1610"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 08:30:33 crc kubenswrapper[4946]: I1128 08:30:33.754778 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b938e0e4-3b5b-46a9-a018-d3a4e85b1610-kube-api-access-qrk2q" (OuterVolumeSpecName: "kube-api-access-qrk2q") pod "b938e0e4-3b5b-46a9-a018-d3a4e85b1610" (UID: "b938e0e4-3b5b-46a9-a018-d3a4e85b1610"). InnerVolumeSpecName "kube-api-access-qrk2q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 08:30:33 crc kubenswrapper[4946]: I1128 08:30:33.779582 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-stz98" event={"ID":"b938e0e4-3b5b-46a9-a018-d3a4e85b1610","Type":"ContainerDied","Data":"f266569d278ee62990f8fc0a07624fcf5b9ed266667cecf6b2bcafbaed5a2782"}
Nov 28 08:30:33 crc kubenswrapper[4946]: I1128 08:30:33.779630 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-stz98"
Nov 28 08:30:33 crc kubenswrapper[4946]: I1128 08:30:33.779527 4946 generic.go:334] "Generic (PLEG): container finished" podID="b938e0e4-3b5b-46a9-a018-d3a4e85b1610" containerID="f266569d278ee62990f8fc0a07624fcf5b9ed266667cecf6b2bcafbaed5a2782" exitCode=0
Nov 28 08:30:33 crc kubenswrapper[4946]: I1128 08:30:33.779687 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-stz98" event={"ID":"b938e0e4-3b5b-46a9-a018-d3a4e85b1610","Type":"ContainerDied","Data":"c970eb67c73c366898dff90639a07758059b60b32722d20ad071b52d67c6de82"}
Nov 28 08:30:33 crc kubenswrapper[4946]: I1128 08:30:33.779640 4946 scope.go:117] "RemoveContainer" containerID="f266569d278ee62990f8fc0a07624fcf5b9ed266667cecf6b2bcafbaed5a2782"
Nov 28 08:30:33 crc kubenswrapper[4946]: I1128 08:30:33.789840 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b938e0e4-3b5b-46a9-a018-d3a4e85b1610-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b938e0e4-3b5b-46a9-a018-d3a4e85b1610" (UID: "b938e0e4-3b5b-46a9-a018-d3a4e85b1610"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 08:30:33 crc kubenswrapper[4946]: I1128 08:30:33.802843 4946 scope.go:117] "RemoveContainer" containerID="7865977df5b52a2a54f0928b8cd2b49c37239e2e1de00b003642f1704bab1ee4"
Nov 28 08:30:33 crc kubenswrapper[4946]: I1128 08:30:33.827665 4946 scope.go:117] "RemoveContainer" containerID="f68e6faf5a7e5b61996980ac34f78f44fac63d5aac93bc93d3e8c145b6db997a"
Nov 28 08:30:33 crc kubenswrapper[4946]: I1128 08:30:33.847280 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b938e0e4-3b5b-46a9-a018-d3a4e85b1610-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 08:30:33 crc kubenswrapper[4946]: I1128 08:30:33.847323 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b938e0e4-3b5b-46a9-a018-d3a4e85b1610-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 08:30:33 crc kubenswrapper[4946]: I1128 08:30:33.847342 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrk2q\" (UniqueName: \"kubernetes.io/projected/b938e0e4-3b5b-46a9-a018-d3a4e85b1610-kube-api-access-qrk2q\") on node \"crc\" DevicePath \"\""
Nov 28 08:30:33 crc kubenswrapper[4946]: I1128 08:30:33.869778 4946 scope.go:117] "RemoveContainer" containerID="f266569d278ee62990f8fc0a07624fcf5b9ed266667cecf6b2bcafbaed5a2782"
Nov 28 08:30:33 crc kubenswrapper[4946]: E1128 08:30:33.870515 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f266569d278ee62990f8fc0a07624fcf5b9ed266667cecf6b2bcafbaed5a2782\": container with ID starting with f266569d278ee62990f8fc0a07624fcf5b9ed266667cecf6b2bcafbaed5a2782 not found: ID does not exist" containerID="f266569d278ee62990f8fc0a07624fcf5b9ed266667cecf6b2bcafbaed5a2782"
Nov 28 08:30:33 crc kubenswrapper[4946]: I1128 08:30:33.870562 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f266569d278ee62990f8fc0a07624fcf5b9ed266667cecf6b2bcafbaed5a2782"} err="failed to get container status \"f266569d278ee62990f8fc0a07624fcf5b9ed266667cecf6b2bcafbaed5a2782\": rpc error: code = NotFound desc = could not find container \"f266569d278ee62990f8fc0a07624fcf5b9ed266667cecf6b2bcafbaed5a2782\": container with ID starting with f266569d278ee62990f8fc0a07624fcf5b9ed266667cecf6b2bcafbaed5a2782 not found: ID does not exist"
\"f266569d278ee62990f8fc0a07624fcf5b9ed266667cecf6b2bcafbaed5a2782\": container with ID starting with f266569d278ee62990f8fc0a07624fcf5b9ed266667cecf6b2bcafbaed5a2782 not found: ID does not exist" Nov 28 08:30:33 crc kubenswrapper[4946]: I1128 08:30:33.870589 4946 scope.go:117] "RemoveContainer" containerID="7865977df5b52a2a54f0928b8cd2b49c37239e2e1de00b003642f1704bab1ee4" Nov 28 08:30:33 crc kubenswrapper[4946]: E1128 08:30:33.871140 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7865977df5b52a2a54f0928b8cd2b49c37239e2e1de00b003642f1704bab1ee4\": container with ID starting with 7865977df5b52a2a54f0928b8cd2b49c37239e2e1de00b003642f1704bab1ee4 not found: ID does not exist" containerID="7865977df5b52a2a54f0928b8cd2b49c37239e2e1de00b003642f1704bab1ee4" Nov 28 08:30:33 crc kubenswrapper[4946]: I1128 08:30:33.871179 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7865977df5b52a2a54f0928b8cd2b49c37239e2e1de00b003642f1704bab1ee4"} err="failed to get container status \"7865977df5b52a2a54f0928b8cd2b49c37239e2e1de00b003642f1704bab1ee4\": rpc error: code = NotFound desc = could not find container \"7865977df5b52a2a54f0928b8cd2b49c37239e2e1de00b003642f1704bab1ee4\": container with ID starting with 7865977df5b52a2a54f0928b8cd2b49c37239e2e1de00b003642f1704bab1ee4 not found: ID does not exist" Nov 28 08:30:33 crc kubenswrapper[4946]: I1128 08:30:33.871205 4946 scope.go:117] "RemoveContainer" containerID="f68e6faf5a7e5b61996980ac34f78f44fac63d5aac93bc93d3e8c145b6db997a" Nov 28 08:30:33 crc kubenswrapper[4946]: E1128 08:30:33.871802 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f68e6faf5a7e5b61996980ac34f78f44fac63d5aac93bc93d3e8c145b6db997a\": container with ID starting with f68e6faf5a7e5b61996980ac34f78f44fac63d5aac93bc93d3e8c145b6db997a not found: ID does not exist" containerID="f68e6faf5a7e5b61996980ac34f78f44fac63d5aac93bc93d3e8c145b6db997a" Nov 28 08:30:33 crc kubenswrapper[4946]: I1128 08:30:33.871843 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f68e6faf5a7e5b61996980ac34f78f44fac63d5aac93bc93d3e8c145b6db997a"} err="failed to get container status \"f68e6faf5a7e5b61996980ac34f78f44fac63d5aac93bc93d3e8c145b6db997a\": rpc error: code = NotFound desc = could not find container \"f68e6faf5a7e5b61996980ac34f78f44fac63d5aac93bc93d3e8c145b6db997a\": container with ID starting with f68e6faf5a7e5b61996980ac34f78f44fac63d5aac93bc93d3e8c145b6db997a not found: ID does not exist" Nov 28 08:30:34 crc kubenswrapper[4946]: I1128 08:30:34.099447 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-stz98"] Nov 28 08:30:34 crc kubenswrapper[4946]: I1128 08:30:34.106602 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-stz98"] Nov 28 08:30:34 crc kubenswrapper[4946]: I1128 08:30:34.790932 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7fhvz" podUID="6c3f2a9c-bfcf-48f9-ae49-8ba485964b69" containerName="registry-server" containerID="cri-o://6abd850f4c7ff59853e65ef981e0938bb043847bc45f77a576c7dfb376850d5b" gracePeriod=2 Nov 28 08:30:34 crc kubenswrapper[4946]: I1128 08:30:34.926156 4946 scope.go:117] "RemoveContainer" 
containerID="e9d585782ee48ae7ee7532dc54d5453e3c6db7118c2862aa60320be63fa5ae38" Nov 28 08:30:35 crc kubenswrapper[4946]: I1128 08:30:35.282350 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7fhvz" Nov 28 08:30:35 crc kubenswrapper[4946]: I1128 08:30:35.396330 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c3f2a9c-bfcf-48f9-ae49-8ba485964b69-catalog-content\") pod \"6c3f2a9c-bfcf-48f9-ae49-8ba485964b69\" (UID: \"6c3f2a9c-bfcf-48f9-ae49-8ba485964b69\") " Nov 28 08:30:35 crc kubenswrapper[4946]: I1128 08:30:35.396854 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c3f2a9c-bfcf-48f9-ae49-8ba485964b69-utilities\") pod \"6c3f2a9c-bfcf-48f9-ae49-8ba485964b69\" (UID: \"6c3f2a9c-bfcf-48f9-ae49-8ba485964b69\") " Nov 28 08:30:35 crc kubenswrapper[4946]: I1128 08:30:35.396947 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sft5s\" (UniqueName: \"kubernetes.io/projected/6c3f2a9c-bfcf-48f9-ae49-8ba485964b69-kube-api-access-sft5s\") pod \"6c3f2a9c-bfcf-48f9-ae49-8ba485964b69\" (UID: \"6c3f2a9c-bfcf-48f9-ae49-8ba485964b69\") " Nov 28 08:30:35 crc kubenswrapper[4946]: I1128 08:30:35.399516 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c3f2a9c-bfcf-48f9-ae49-8ba485964b69-utilities" (OuterVolumeSpecName: "utilities") pod "6c3f2a9c-bfcf-48f9-ae49-8ba485964b69" (UID: "6c3f2a9c-bfcf-48f9-ae49-8ba485964b69"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 08:30:35 crc kubenswrapper[4946]: I1128 08:30:35.402104 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c3f2a9c-bfcf-48f9-ae49-8ba485964b69-kube-api-access-sft5s" (OuterVolumeSpecName: "kube-api-access-sft5s") pod "6c3f2a9c-bfcf-48f9-ae49-8ba485964b69" (UID: "6c3f2a9c-bfcf-48f9-ae49-8ba485964b69"). InnerVolumeSpecName "kube-api-access-sft5s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:30:35 crc kubenswrapper[4946]: I1128 08:30:35.497994 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c3f2a9c-bfcf-48f9-ae49-8ba485964b69-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 08:30:35 crc kubenswrapper[4946]: I1128 08:30:35.498028 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sft5s\" (UniqueName: \"kubernetes.io/projected/6c3f2a9c-bfcf-48f9-ae49-8ba485964b69-kube-api-access-sft5s\") on node \"crc\" DevicePath \"\"" Nov 28 08:30:35 crc kubenswrapper[4946]: I1128 08:30:35.573907 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c3f2a9c-bfcf-48f9-ae49-8ba485964b69-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6c3f2a9c-bfcf-48f9-ae49-8ba485964b69" (UID: "6c3f2a9c-bfcf-48f9-ae49-8ba485964b69"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 08:30:35 crc kubenswrapper[4946]: I1128 08:30:35.599952 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c3f2a9c-bfcf-48f9-ae49-8ba485964b69-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 08:30:35 crc kubenswrapper[4946]: I1128 08:30:35.810039 4946 generic.go:334] "Generic (PLEG): container finished" podID="6c3f2a9c-bfcf-48f9-ae49-8ba485964b69" containerID="6abd850f4c7ff59853e65ef981e0938bb043847bc45f77a576c7dfb376850d5b" exitCode=0 Nov 28 08:30:35 crc kubenswrapper[4946]: I1128 08:30:35.810107 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7fhvz" event={"ID":"6c3f2a9c-bfcf-48f9-ae49-8ba485964b69","Type":"ContainerDied","Data":"6abd850f4c7ff59853e65ef981e0938bb043847bc45f77a576c7dfb376850d5b"} Nov 28 08:30:35 crc kubenswrapper[4946]: I1128 08:30:35.810157 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7fhvz" event={"ID":"6c3f2a9c-bfcf-48f9-ae49-8ba485964b69","Type":"ContainerDied","Data":"e57c0c317111a8a3b680a17db6a61f72f7f51bc47ee005b2b2c6258e66415b89"} Nov 28 08:30:35 crc kubenswrapper[4946]: I1128 08:30:35.810202 4946 scope.go:117] "RemoveContainer" containerID="6abd850f4c7ff59853e65ef981e0938bb043847bc45f77a576c7dfb376850d5b" Nov 28 08:30:35 crc kubenswrapper[4946]: I1128 08:30:35.810110 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7fhvz" Nov 28 08:30:35 crc kubenswrapper[4946]: I1128 08:30:35.843429 4946 scope.go:117] "RemoveContainer" containerID="7da61f8da15497a7a0650280535f030689feb80a66dd6d18f43967af48f67d80" Nov 28 08:30:35 crc kubenswrapper[4946]: I1128 08:30:35.879609 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7fhvz"] Nov 28 08:30:35 crc kubenswrapper[4946]: I1128 08:30:35.890304 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7fhvz"] Nov 28 08:30:35 crc kubenswrapper[4946]: I1128 08:30:35.893907 4946 scope.go:117] "RemoveContainer" containerID="26feb989a83614fe05ea20bbe8ae7dd9419021a68c7ebe5bae45857e67c54e78" Nov 28 08:30:35 crc kubenswrapper[4946]: I1128 08:30:35.918693 4946 scope.go:117] "RemoveContainer" containerID="6abd850f4c7ff59853e65ef981e0938bb043847bc45f77a576c7dfb376850d5b" Nov 28 08:30:35 crc kubenswrapper[4946]: E1128 08:30:35.919252 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6abd850f4c7ff59853e65ef981e0938bb043847bc45f77a576c7dfb376850d5b\": container with ID starting with 6abd850f4c7ff59853e65ef981e0938bb043847bc45f77a576c7dfb376850d5b not found: ID does not exist" containerID="6abd850f4c7ff59853e65ef981e0938bb043847bc45f77a576c7dfb376850d5b" Nov 28 08:30:35 crc kubenswrapper[4946]: I1128 08:30:35.919288 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6abd850f4c7ff59853e65ef981e0938bb043847bc45f77a576c7dfb376850d5b"} err="failed to get container status \"6abd850f4c7ff59853e65ef981e0938bb043847bc45f77a576c7dfb376850d5b\": rpc error: code = NotFound desc = could not find container \"6abd850f4c7ff59853e65ef981e0938bb043847bc45f77a576c7dfb376850d5b\": container with ID starting with 6abd850f4c7ff59853e65ef981e0938bb043847bc45f77a576c7dfb376850d5b not found: ID does not exist" Nov 28 08:30:35 crc 
Nov 28 08:30:35 crc kubenswrapper[4946]: I1128 08:30:35.919315 4946 scope.go:117] "RemoveContainer" containerID="7da61f8da15497a7a0650280535f030689feb80a66dd6d18f43967af48f67d80"
Nov 28 08:30:35 crc kubenswrapper[4946]: E1128 08:30:35.919821 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7da61f8da15497a7a0650280535f030689feb80a66dd6d18f43967af48f67d80\": container with ID starting with 7da61f8da15497a7a0650280535f030689feb80a66dd6d18f43967af48f67d80 not found: ID does not exist" containerID="7da61f8da15497a7a0650280535f030689feb80a66dd6d18f43967af48f67d80"
Nov 28 08:30:35 crc kubenswrapper[4946]: I1128 08:30:35.919895 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7da61f8da15497a7a0650280535f030689feb80a66dd6d18f43967af48f67d80"} err="failed to get container status \"7da61f8da15497a7a0650280535f030689feb80a66dd6d18f43967af48f67d80\": rpc error: code = NotFound desc = could not find container \"7da61f8da15497a7a0650280535f030689feb80a66dd6d18f43967af48f67d80\": container with ID starting with 7da61f8da15497a7a0650280535f030689feb80a66dd6d18f43967af48f67d80 not found: ID does not exist"
Nov 28 08:30:35 crc kubenswrapper[4946]: I1128 08:30:35.919920 4946 scope.go:117] "RemoveContainer" containerID="26feb989a83614fe05ea20bbe8ae7dd9419021a68c7ebe5bae45857e67c54e78"
Nov 28 08:30:35 crc kubenswrapper[4946]: E1128 08:30:35.920332 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26feb989a83614fe05ea20bbe8ae7dd9419021a68c7ebe5bae45857e67c54e78\": container with ID starting with 26feb989a83614fe05ea20bbe8ae7dd9419021a68c7ebe5bae45857e67c54e78 not found: ID does not exist" containerID="26feb989a83614fe05ea20bbe8ae7dd9419021a68c7ebe5bae45857e67c54e78"
Nov 28 08:30:35 crc kubenswrapper[4946]: I1128 08:30:35.920366 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26feb989a83614fe05ea20bbe8ae7dd9419021a68c7ebe5bae45857e67c54e78"} err="failed to get container status \"26feb989a83614fe05ea20bbe8ae7dd9419021a68c7ebe5bae45857e67c54e78\": rpc error: code = NotFound desc = could not find container \"26feb989a83614fe05ea20bbe8ae7dd9419021a68c7ebe5bae45857e67c54e78\": container with ID starting with 26feb989a83614fe05ea20bbe8ae7dd9419021a68c7ebe5bae45857e67c54e78 not found: ID does not exist"
Nov 28 08:30:36 crc kubenswrapper[4946]: I1128 08:30:36.010269 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c3f2a9c-bfcf-48f9-ae49-8ba485964b69" path="/var/lib/kubelet/pods/6c3f2a9c-bfcf-48f9-ae49-8ba485964b69/volumes"
Nov 28 08:30:36 crc kubenswrapper[4946]: I1128 08:30:36.011708 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b938e0e4-3b5b-46a9-a018-d3a4e85b1610" path="/var/lib/kubelet/pods/b938e0e4-3b5b-46a9-a018-d3a4e85b1610/volumes"
Nov 28 08:30:54 crc kubenswrapper[4946]: I1128 08:30:54.730586 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 08:30:54 crc kubenswrapper[4946]: I1128 08:30:54.731253 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
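The Liveness failures above, which repeat every 30 seconds from here on, are plain HTTP GETs against the machine-config-daemon health endpoint that die at TCP connect. A probe of the same shape in Go; the URL is copied from the log, while the 1s timeout is an assumption (kubelet's default probe timeout):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// httpHealthy mimics an HTTP liveness probe: a connect error (such as the
// "connection refused" above) or a status outside 200-399 is a failure.
func httpHealthy(url string, timeout time.Duration) error {
	client := &http.Client{Timeout: timeout}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. dial tcp 127.0.0.1:8798: connect: connection refused
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("unhealthy status: %s", resp.Status)
	}
	return nil
}

func main() {
	if err := httpHealthy("http://127.0.0.1:8798/health", time.Second); err != nil {
		fmt.Println("Probe failed:", err)
	}
}
```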
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 08:31:24 crc kubenswrapper[4946]: I1128 08:31:24.730926 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 08:31:24 crc kubenswrapper[4946]: I1128 08:31:24.731731 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 08:31:31 crc kubenswrapper[4946]: I1128 08:31:31.099034 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-whwl7"] Nov 28 08:31:31 crc kubenswrapper[4946]: I1128 08:31:31.108158 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-whwl7"] Nov 28 08:31:31 crc kubenswrapper[4946]: I1128 08:31:31.251133 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-lhhg5"] Nov 28 08:31:31 crc kubenswrapper[4946]: E1128 08:31:31.252004 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b938e0e4-3b5b-46a9-a018-d3a4e85b1610" containerName="extract-utilities" Nov 28 08:31:31 crc kubenswrapper[4946]: I1128 08:31:31.252034 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="b938e0e4-3b5b-46a9-a018-d3a4e85b1610" containerName="extract-utilities" Nov 28 08:31:31 crc kubenswrapper[4946]: E1128 08:31:31.252082 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b938e0e4-3b5b-46a9-a018-d3a4e85b1610" containerName="extract-content" Nov 28 08:31:31 crc kubenswrapper[4946]: I1128 08:31:31.252096 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="b938e0e4-3b5b-46a9-a018-d3a4e85b1610" containerName="extract-content" Nov 28 08:31:31 crc kubenswrapper[4946]: E1128 08:31:31.252128 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c3f2a9c-bfcf-48f9-ae49-8ba485964b69" containerName="registry-server" Nov 28 08:31:31 crc kubenswrapper[4946]: I1128 08:31:31.252140 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c3f2a9c-bfcf-48f9-ae49-8ba485964b69" containerName="registry-server" Nov 28 08:31:31 crc kubenswrapper[4946]: E1128 08:31:31.252160 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b938e0e4-3b5b-46a9-a018-d3a4e85b1610" containerName="registry-server" Nov 28 08:31:31 crc kubenswrapper[4946]: I1128 08:31:31.252171 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="b938e0e4-3b5b-46a9-a018-d3a4e85b1610" containerName="registry-server" Nov 28 08:31:31 crc kubenswrapper[4946]: E1128 08:31:31.252195 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c3f2a9c-bfcf-48f9-ae49-8ba485964b69" containerName="extract-utilities" Nov 28 08:31:31 crc kubenswrapper[4946]: I1128 08:31:31.252207 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c3f2a9c-bfcf-48f9-ae49-8ba485964b69" containerName="extract-utilities" Nov 28 08:31:31 crc kubenswrapper[4946]: E1128 08:31:31.252230 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c3f2a9c-bfcf-48f9-ae49-8ba485964b69" 
containerName="extract-content" Nov 28 08:31:31 crc kubenswrapper[4946]: I1128 08:31:31.252242 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c3f2a9c-bfcf-48f9-ae49-8ba485964b69" containerName="extract-content" Nov 28 08:31:31 crc kubenswrapper[4946]: I1128 08:31:31.252524 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c3f2a9c-bfcf-48f9-ae49-8ba485964b69" containerName="registry-server" Nov 28 08:31:31 crc kubenswrapper[4946]: I1128 08:31:31.252564 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="b938e0e4-3b5b-46a9-a018-d3a4e85b1610" containerName="registry-server" Nov 28 08:31:31 crc kubenswrapper[4946]: I1128 08:31:31.253388 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-lhhg5" Nov 28 08:31:31 crc kubenswrapper[4946]: I1128 08:31:31.256873 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Nov 28 08:31:31 crc kubenswrapper[4946]: I1128 08:31:31.257245 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Nov 28 08:31:31 crc kubenswrapper[4946]: I1128 08:31:31.258065 4946 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-s7ht6" Nov 28 08:31:31 crc kubenswrapper[4946]: I1128 08:31:31.258083 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Nov 28 08:31:31 crc kubenswrapper[4946]: I1128 08:31:31.263016 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-lhhg5"] Nov 28 08:31:31 crc kubenswrapper[4946]: I1128 08:31:31.438325 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/27ccd692-6586-4213-bab4-1ea924ea111f-crc-storage\") pod \"crc-storage-crc-lhhg5\" (UID: \"27ccd692-6586-4213-bab4-1ea924ea111f\") " pod="crc-storage/crc-storage-crc-lhhg5" Nov 28 08:31:31 crc kubenswrapper[4946]: I1128 08:31:31.438491 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/27ccd692-6586-4213-bab4-1ea924ea111f-node-mnt\") pod \"crc-storage-crc-lhhg5\" (UID: \"27ccd692-6586-4213-bab4-1ea924ea111f\") " pod="crc-storage/crc-storage-crc-lhhg5" Nov 28 08:31:31 crc kubenswrapper[4946]: I1128 08:31:31.438556 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bzpf\" (UniqueName: \"kubernetes.io/projected/27ccd692-6586-4213-bab4-1ea924ea111f-kube-api-access-4bzpf\") pod \"crc-storage-crc-lhhg5\" (UID: \"27ccd692-6586-4213-bab4-1ea924ea111f\") " pod="crc-storage/crc-storage-crc-lhhg5" Nov 28 08:31:31 crc kubenswrapper[4946]: I1128 08:31:31.539943 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/27ccd692-6586-4213-bab4-1ea924ea111f-crc-storage\") pod \"crc-storage-crc-lhhg5\" (UID: \"27ccd692-6586-4213-bab4-1ea924ea111f\") " pod="crc-storage/crc-storage-crc-lhhg5" Nov 28 08:31:31 crc kubenswrapper[4946]: I1128 08:31:31.540744 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/27ccd692-6586-4213-bab4-1ea924ea111f-node-mnt\") pod \"crc-storage-crc-lhhg5\" (UID: \"27ccd692-6586-4213-bab4-1ea924ea111f\") " 
pod="crc-storage/crc-storage-crc-lhhg5" Nov 28 08:31:31 crc kubenswrapper[4946]: I1128 08:31:31.540817 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bzpf\" (UniqueName: \"kubernetes.io/projected/27ccd692-6586-4213-bab4-1ea924ea111f-kube-api-access-4bzpf\") pod \"crc-storage-crc-lhhg5\" (UID: \"27ccd692-6586-4213-bab4-1ea924ea111f\") " pod="crc-storage/crc-storage-crc-lhhg5" Nov 28 08:31:31 crc kubenswrapper[4946]: I1128 08:31:31.541247 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/27ccd692-6586-4213-bab4-1ea924ea111f-crc-storage\") pod \"crc-storage-crc-lhhg5\" (UID: \"27ccd692-6586-4213-bab4-1ea924ea111f\") " pod="crc-storage/crc-storage-crc-lhhg5" Nov 28 08:31:31 crc kubenswrapper[4946]: I1128 08:31:31.541673 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/27ccd692-6586-4213-bab4-1ea924ea111f-node-mnt\") pod \"crc-storage-crc-lhhg5\" (UID: \"27ccd692-6586-4213-bab4-1ea924ea111f\") " pod="crc-storage/crc-storage-crc-lhhg5" Nov 28 08:31:31 crc kubenswrapper[4946]: I1128 08:31:31.560092 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bzpf\" (UniqueName: \"kubernetes.io/projected/27ccd692-6586-4213-bab4-1ea924ea111f-kube-api-access-4bzpf\") pod \"crc-storage-crc-lhhg5\" (UID: \"27ccd692-6586-4213-bab4-1ea924ea111f\") " pod="crc-storage/crc-storage-crc-lhhg5" Nov 28 08:31:31 crc kubenswrapper[4946]: I1128 08:31:31.588962 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-lhhg5" Nov 28 08:31:31 crc kubenswrapper[4946]: I1128 08:31:31.841056 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-lhhg5"] Nov 28 08:31:32 crc kubenswrapper[4946]: I1128 08:31:32.002048 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15199c22-6d54-45fd-b47f-e1046b4875be" path="/var/lib/kubelet/pods/15199c22-6d54-45fd-b47f-e1046b4875be/volumes" Nov 28 08:31:32 crc kubenswrapper[4946]: I1128 08:31:32.341783 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-lhhg5" event={"ID":"27ccd692-6586-4213-bab4-1ea924ea111f","Type":"ContainerStarted","Data":"0c15ea3609ce6846169ec25cd1bb4b05afb3b9b4aed7a64910a3f4f19d1519ff"} Nov 28 08:31:33 crc kubenswrapper[4946]: I1128 08:31:33.356502 4946 generic.go:334] "Generic (PLEG): container finished" podID="27ccd692-6586-4213-bab4-1ea924ea111f" containerID="e95af5789fb71f7aec788985e4fc9087bb14bd4ee0fb9f9dabbb781aa97948c2" exitCode=0 Nov 28 08:31:33 crc kubenswrapper[4946]: I1128 08:31:33.356905 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-lhhg5" event={"ID":"27ccd692-6586-4213-bab4-1ea924ea111f","Type":"ContainerDied","Data":"e95af5789fb71f7aec788985e4fc9087bb14bd4ee0fb9f9dabbb781aa97948c2"} Nov 28 08:31:34 crc kubenswrapper[4946]: I1128 08:31:34.743852 4946 util.go:48] "No ready sandbox for pod can be found. 
Nov 28 08:31:34 crc kubenswrapper[4946]: I1128 08:31:34.912217 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bzpf\" (UniqueName: \"kubernetes.io/projected/27ccd692-6586-4213-bab4-1ea924ea111f-kube-api-access-4bzpf\") pod \"27ccd692-6586-4213-bab4-1ea924ea111f\" (UID: \"27ccd692-6586-4213-bab4-1ea924ea111f\") "
Nov 28 08:31:34 crc kubenswrapper[4946]: I1128 08:31:34.912346 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/27ccd692-6586-4213-bab4-1ea924ea111f-crc-storage\") pod \"27ccd692-6586-4213-bab4-1ea924ea111f\" (UID: \"27ccd692-6586-4213-bab4-1ea924ea111f\") "
Nov 28 08:31:34 crc kubenswrapper[4946]: I1128 08:31:34.912377 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/27ccd692-6586-4213-bab4-1ea924ea111f-node-mnt\") pod \"27ccd692-6586-4213-bab4-1ea924ea111f\" (UID: \"27ccd692-6586-4213-bab4-1ea924ea111f\") "
Nov 28 08:31:34 crc kubenswrapper[4946]: I1128 08:31:34.912693 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27ccd692-6586-4213-bab4-1ea924ea111f-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "27ccd692-6586-4213-bab4-1ea924ea111f" (UID: "27ccd692-6586-4213-bab4-1ea924ea111f"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 08:31:34 crc kubenswrapper[4946]: I1128 08:31:34.919806 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27ccd692-6586-4213-bab4-1ea924ea111f-kube-api-access-4bzpf" (OuterVolumeSpecName: "kube-api-access-4bzpf") pod "27ccd692-6586-4213-bab4-1ea924ea111f" (UID: "27ccd692-6586-4213-bab4-1ea924ea111f"). InnerVolumeSpecName "kube-api-access-4bzpf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 08:31:34 crc kubenswrapper[4946]: I1128 08:31:34.939217 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27ccd692-6586-4213-bab4-1ea924ea111f-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "27ccd692-6586-4213-bab4-1ea924ea111f" (UID: "27ccd692-6586-4213-bab4-1ea924ea111f"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:31:35 crc kubenswrapper[4946]: I1128 08:31:35.014073 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bzpf\" (UniqueName: \"kubernetes.io/projected/27ccd692-6586-4213-bab4-1ea924ea111f-kube-api-access-4bzpf\") on node \"crc\" DevicePath \"\"" Nov 28 08:31:35 crc kubenswrapper[4946]: I1128 08:31:35.014387 4946 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/27ccd692-6586-4213-bab4-1ea924ea111f-crc-storage\") on node \"crc\" DevicePath \"\"" Nov 28 08:31:35 crc kubenswrapper[4946]: I1128 08:31:35.014580 4946 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/27ccd692-6586-4213-bab4-1ea924ea111f-node-mnt\") on node \"crc\" DevicePath \"\"" Nov 28 08:31:35 crc kubenswrapper[4946]: I1128 08:31:35.037549 4946 scope.go:117] "RemoveContainer" containerID="f7f05ce7c4a48ef690cec16b64e2189c4eb184e3722ad7a03a1c13bc025c0fc5" Nov 28 08:31:35 crc kubenswrapper[4946]: I1128 08:31:35.382039 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-lhhg5" event={"ID":"27ccd692-6586-4213-bab4-1ea924ea111f","Type":"ContainerDied","Data":"0c15ea3609ce6846169ec25cd1bb4b05afb3b9b4aed7a64910a3f4f19d1519ff"} Nov 28 08:31:35 crc kubenswrapper[4946]: I1128 08:31:35.382105 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c15ea3609ce6846169ec25cd1bb4b05afb3b9b4aed7a64910a3f4f19d1519ff" Nov 28 08:31:35 crc kubenswrapper[4946]: I1128 08:31:35.382154 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-lhhg5" Nov 28 08:31:37 crc kubenswrapper[4946]: I1128 08:31:37.269213 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-lhhg5"] Nov 28 08:31:37 crc kubenswrapper[4946]: I1128 08:31:37.284772 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-lhhg5"] Nov 28 08:31:37 crc kubenswrapper[4946]: I1128 08:31:37.444876 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-tmpmf"] Nov 28 08:31:37 crc kubenswrapper[4946]: E1128 08:31:37.445228 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27ccd692-6586-4213-bab4-1ea924ea111f" containerName="storage" Nov 28 08:31:37 crc kubenswrapper[4946]: I1128 08:31:37.445251 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="27ccd692-6586-4213-bab4-1ea924ea111f" containerName="storage" Nov 28 08:31:37 crc kubenswrapper[4946]: I1128 08:31:37.445484 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="27ccd692-6586-4213-bab4-1ea924ea111f" containerName="storage" Nov 28 08:31:37 crc kubenswrapper[4946]: I1128 08:31:37.446676 4946 util.go:30] "No sandbox for pod can be found. 
Nov 28 08:31:37 crc kubenswrapper[4946]: I1128 08:31:37.448751 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt"
Nov 28 08:31:37 crc kubenswrapper[4946]: I1128 08:31:37.449976 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt"
Nov 28 08:31:37 crc kubenswrapper[4946]: I1128 08:31:37.450097 4946 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-s7ht6"
Nov 28 08:31:37 crc kubenswrapper[4946]: I1128 08:31:37.460021 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-tmpmf"]
Nov 28 08:31:37 crc kubenswrapper[4946]: I1128 08:31:37.460520 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage"
Nov 28 08:31:37 crc kubenswrapper[4946]: I1128 08:31:37.561203 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrrzr\" (UniqueName: \"kubernetes.io/projected/d89e2778-303e-43f7-bbf1-6b8d70536573-kube-api-access-wrrzr\") pod \"crc-storage-crc-tmpmf\" (UID: \"d89e2778-303e-43f7-bbf1-6b8d70536573\") " pod="crc-storage/crc-storage-crc-tmpmf"
Nov 28 08:31:37 crc kubenswrapper[4946]: I1128 08:31:37.561287 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/d89e2778-303e-43f7-bbf1-6b8d70536573-crc-storage\") pod \"crc-storage-crc-tmpmf\" (UID: \"d89e2778-303e-43f7-bbf1-6b8d70536573\") " pod="crc-storage/crc-storage-crc-tmpmf"
Nov 28 08:31:37 crc kubenswrapper[4946]: I1128 08:31:37.561308 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/d89e2778-303e-43f7-bbf1-6b8d70536573-node-mnt\") pod \"crc-storage-crc-tmpmf\" (UID: \"d89e2778-303e-43f7-bbf1-6b8d70536573\") " pod="crc-storage/crc-storage-crc-tmpmf"
Nov 28 08:31:37 crc kubenswrapper[4946]: I1128 08:31:37.663086 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/d89e2778-303e-43f7-bbf1-6b8d70536573-crc-storage\") pod \"crc-storage-crc-tmpmf\" (UID: \"d89e2778-303e-43f7-bbf1-6b8d70536573\") " pod="crc-storage/crc-storage-crc-tmpmf"
Nov 28 08:31:37 crc kubenswrapper[4946]: I1128 08:31:37.663153 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/d89e2778-303e-43f7-bbf1-6b8d70536573-node-mnt\") pod \"crc-storage-crc-tmpmf\" (UID: \"d89e2778-303e-43f7-bbf1-6b8d70536573\") " pod="crc-storage/crc-storage-crc-tmpmf"
Nov 28 08:31:37 crc kubenswrapper[4946]: I1128 08:31:37.663279 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrrzr\" (UniqueName: \"kubernetes.io/projected/d89e2778-303e-43f7-bbf1-6b8d70536573-kube-api-access-wrrzr\") pod \"crc-storage-crc-tmpmf\" (UID: \"d89e2778-303e-43f7-bbf1-6b8d70536573\") " pod="crc-storage/crc-storage-crc-tmpmf"
Nov 28 08:31:37 crc kubenswrapper[4946]: I1128 08:31:37.663432 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/d89e2778-303e-43f7-bbf1-6b8d70536573-node-mnt\") pod \"crc-storage-crc-tmpmf\" (UID: \"d89e2778-303e-43f7-bbf1-6b8d70536573\") " pod="crc-storage/crc-storage-crc-tmpmf"
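Three volume types are being verified and mounted for crc-storage-crc-tmpmf: a ConfigMap (crc-storage), a hostPath (node-mnt), and a projected service-account token (kube-api-access-wrrzr), which the API server injects automatically rather than being declared in the pod spec. Roughly how the declared part looks as client-go structs; the hostPath path is an assumption, since the log only shows the volume names and plugin types:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Sketch of the volume section implied by the mount events above. Only the
	// volume names and plugin types ("kubernetes.io/configmap",
	// "kubernetes.io/host-path") come from the log; the hostPath path does not.
	hostPathType := corev1.HostPathDirectory
	volumes := []corev1.Volume{
		{
			Name: "crc-storage",
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "crc-storage"},
				},
			},
		},
		{
			Name: "node-mnt",
			VolumeSource: corev1.VolumeSource{
				HostPath: &corev1.HostPathVolumeSource{
					Path: "/mnt/pv-data", // assumption: not recoverable from the log
					Type: &hostPathType,
				},
			},
		},
	}
	for _, v := range volumes {
		fmt.Printf("volume %q: %+v\n", v.Name, v.VolumeSource)
	}
}
```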
pod="crc-storage/crc-storage-crc-tmpmf" Nov 28 08:31:37 crc kubenswrapper[4946]: I1128 08:31:37.663897 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/d89e2778-303e-43f7-bbf1-6b8d70536573-crc-storage\") pod \"crc-storage-crc-tmpmf\" (UID: \"d89e2778-303e-43f7-bbf1-6b8d70536573\") " pod="crc-storage/crc-storage-crc-tmpmf" Nov 28 08:31:37 crc kubenswrapper[4946]: I1128 08:31:37.679982 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrrzr\" (UniqueName: \"kubernetes.io/projected/d89e2778-303e-43f7-bbf1-6b8d70536573-kube-api-access-wrrzr\") pod \"crc-storage-crc-tmpmf\" (UID: \"d89e2778-303e-43f7-bbf1-6b8d70536573\") " pod="crc-storage/crc-storage-crc-tmpmf" Nov 28 08:31:37 crc kubenswrapper[4946]: I1128 08:31:37.769787 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-tmpmf" Nov 28 08:31:38 crc kubenswrapper[4946]: I1128 08:31:38.004862 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27ccd692-6586-4213-bab4-1ea924ea111f" path="/var/lib/kubelet/pods/27ccd692-6586-4213-bab4-1ea924ea111f/volumes" Nov 28 08:31:38 crc kubenswrapper[4946]: I1128 08:31:38.241346 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-tmpmf"] Nov 28 08:31:38 crc kubenswrapper[4946]: I1128 08:31:38.411767 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-tmpmf" event={"ID":"d89e2778-303e-43f7-bbf1-6b8d70536573","Type":"ContainerStarted","Data":"a6e5940c6a6f865a33b587d80bb6cd93e8434a2583f345baaa661355bca758b2"} Nov 28 08:31:39 crc kubenswrapper[4946]: I1128 08:31:39.425524 4946 generic.go:334] "Generic (PLEG): container finished" podID="d89e2778-303e-43f7-bbf1-6b8d70536573" containerID="2f1aca7b7da0df59903b6cb3a7d9b4b4963b410ca89d12a4b5acdc16ac2974db" exitCode=0 Nov 28 08:31:39 crc kubenswrapper[4946]: I1128 08:31:39.425620 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-tmpmf" event={"ID":"d89e2778-303e-43f7-bbf1-6b8d70536573","Type":"ContainerDied","Data":"2f1aca7b7da0df59903b6cb3a7d9b4b4963b410ca89d12a4b5acdc16ac2974db"} Nov 28 08:31:40 crc kubenswrapper[4946]: I1128 08:31:40.798490 4946 util.go:48] "No ready sandbox for pod can be found. 
Nov 28 08:31:40 crc kubenswrapper[4946]: I1128 08:31:40.914123 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrrzr\" (UniqueName: \"kubernetes.io/projected/d89e2778-303e-43f7-bbf1-6b8d70536573-kube-api-access-wrrzr\") pod \"d89e2778-303e-43f7-bbf1-6b8d70536573\" (UID: \"d89e2778-303e-43f7-bbf1-6b8d70536573\") "
Nov 28 08:31:40 crc kubenswrapper[4946]: I1128 08:31:40.914230 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/d89e2778-303e-43f7-bbf1-6b8d70536573-node-mnt\") pod \"d89e2778-303e-43f7-bbf1-6b8d70536573\" (UID: \"d89e2778-303e-43f7-bbf1-6b8d70536573\") "
Nov 28 08:31:40 crc kubenswrapper[4946]: I1128 08:31:40.914354 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/d89e2778-303e-43f7-bbf1-6b8d70536573-crc-storage\") pod \"d89e2778-303e-43f7-bbf1-6b8d70536573\" (UID: \"d89e2778-303e-43f7-bbf1-6b8d70536573\") "
Nov 28 08:31:40 crc kubenswrapper[4946]: I1128 08:31:40.914493 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d89e2778-303e-43f7-bbf1-6b8d70536573-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "d89e2778-303e-43f7-bbf1-6b8d70536573" (UID: "d89e2778-303e-43f7-bbf1-6b8d70536573"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 08:31:40 crc kubenswrapper[4946]: I1128 08:31:40.915548 4946 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/d89e2778-303e-43f7-bbf1-6b8d70536573-node-mnt\") on node \"crc\" DevicePath \"\""
Nov 28 08:31:40 crc kubenswrapper[4946]: I1128 08:31:40.923830 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d89e2778-303e-43f7-bbf1-6b8d70536573-kube-api-access-wrrzr" (OuterVolumeSpecName: "kube-api-access-wrrzr") pod "d89e2778-303e-43f7-bbf1-6b8d70536573" (UID: "d89e2778-303e-43f7-bbf1-6b8d70536573"). InnerVolumeSpecName "kube-api-access-wrrzr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 08:31:40 crc kubenswrapper[4946]: I1128 08:31:40.949767 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d89e2778-303e-43f7-bbf1-6b8d70536573-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "d89e2778-303e-43f7-bbf1-6b8d70536573" (UID: "d89e2778-303e-43f7-bbf1-6b8d70536573"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:31:41 crc kubenswrapper[4946]: I1128 08:31:41.017091 4946 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/d89e2778-303e-43f7-bbf1-6b8d70536573-crc-storage\") on node \"crc\" DevicePath \"\"" Nov 28 08:31:41 crc kubenswrapper[4946]: I1128 08:31:41.017149 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrrzr\" (UniqueName: \"kubernetes.io/projected/d89e2778-303e-43f7-bbf1-6b8d70536573-kube-api-access-wrrzr\") on node \"crc\" DevicePath \"\"" Nov 28 08:31:41 crc kubenswrapper[4946]: I1128 08:31:41.449937 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-tmpmf" event={"ID":"d89e2778-303e-43f7-bbf1-6b8d70536573","Type":"ContainerDied","Data":"a6e5940c6a6f865a33b587d80bb6cd93e8434a2583f345baaa661355bca758b2"} Nov 28 08:31:41 crc kubenswrapper[4946]: I1128 08:31:41.450006 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6e5940c6a6f865a33b587d80bb6cd93e8434a2583f345baaa661355bca758b2" Nov 28 08:31:41 crc kubenswrapper[4946]: I1128 08:31:41.450040 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-tmpmf" Nov 28 08:31:54 crc kubenswrapper[4946]: I1128 08:31:54.730589 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 08:31:54 crc kubenswrapper[4946]: I1128 08:31:54.731379 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 08:31:54 crc kubenswrapper[4946]: I1128 08:31:54.731487 4946 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" Nov 28 08:31:54 crc kubenswrapper[4946]: I1128 08:31:54.732412 4946 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e8119d549e123f1e930654e36405e56510a1c4898160d24fa79c41d4ac87c927"} pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 08:31:54 crc kubenswrapper[4946]: I1128 08:31:54.732509 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" containerID="cri-o://e8119d549e123f1e930654e36405e56510a1c4898160d24fa79c41d4ac87c927" gracePeriod=600 Nov 28 08:31:54 crc kubenswrapper[4946]: E1128 08:31:54.867106 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" 
podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:31:55 crc kubenswrapper[4946]: I1128 08:31:55.575748 4946 generic.go:334] "Generic (PLEG): container finished" podID="7450befc-262f-45d1-a5f4-f445e540185b" containerID="e8119d549e123f1e930654e36405e56510a1c4898160d24fa79c41d4ac87c927" exitCode=0 Nov 28 08:31:55 crc kubenswrapper[4946]: I1128 08:31:55.575835 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerDied","Data":"e8119d549e123f1e930654e36405e56510a1c4898160d24fa79c41d4ac87c927"} Nov 28 08:31:55 crc kubenswrapper[4946]: I1128 08:31:55.575878 4946 scope.go:117] "RemoveContainer" containerID="300f30166de36a631f76a3d13fbe5df5f318d95b2aa73f46b8ae00bb9efa796f" Nov 28 08:31:55 crc kubenswrapper[4946]: I1128 08:31:55.576669 4946 scope.go:117] "RemoveContainer" containerID="e8119d549e123f1e930654e36405e56510a1c4898160d24fa79c41d4ac87c927" Nov 28 08:31:55 crc kubenswrapper[4946]: E1128 08:31:55.577021 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:32:07 crc kubenswrapper[4946]: I1128 08:32:07.989959 4946 scope.go:117] "RemoveContainer" containerID="e8119d549e123f1e930654e36405e56510a1c4898160d24fa79c41d4ac87c927" Nov 28 08:32:07 crc kubenswrapper[4946]: E1128 08:32:07.991070 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:32:20 crc kubenswrapper[4946]: I1128 08:32:20.990556 4946 scope.go:117] "RemoveContainer" containerID="e8119d549e123f1e930654e36405e56510a1c4898160d24fa79c41d4ac87c927" Nov 28 08:32:20 crc kubenswrapper[4946]: E1128 08:32:20.991297 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:32:31 crc kubenswrapper[4946]: I1128 08:32:31.990574 4946 scope.go:117] "RemoveContainer" containerID="e8119d549e123f1e930654e36405e56510a1c4898160d24fa79c41d4ac87c927" Nov 28 08:32:31 crc kubenswrapper[4946]: E1128 08:32:31.991659 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:32:46 crc kubenswrapper[4946]: I1128 
Nov 28 08:32:46 crc kubenswrapper[4946]: I1128 08:32:46.990018 4946 scope.go:117] "RemoveContainer" containerID="e8119d549e123f1e930654e36405e56510a1c4898160d24fa79c41d4ac87c927"
Nov 28 08:32:46 crc kubenswrapper[4946]: E1128 08:32:46.990996 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
Nov 28 08:33:00 crc kubenswrapper[4946]: I1128 08:33:00.990655 4946 scope.go:117] "RemoveContainer" containerID="e8119d549e123f1e930654e36405e56510a1c4898160d24fa79c41d4ac87c927"
Nov 28 08:33:00 crc kubenswrapper[4946]: E1128 08:33:00.991823 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
Nov 28 08:33:15 crc kubenswrapper[4946]: I1128 08:33:15.997934 4946 scope.go:117] "RemoveContainer" containerID="e8119d549e123f1e930654e36405e56510a1c4898160d24fa79c41d4ac87c927"
Nov 28 08:33:16 crc kubenswrapper[4946]: E1128 08:33:15.998954 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
Nov 28 08:33:30 crc kubenswrapper[4946]: I1128 08:33:30.990498 4946 scope.go:117] "RemoveContainer" containerID="e8119d549e123f1e930654e36405e56510a1c4898160d24fa79c41d4ac87c927"
Nov 28 08:33:30 crc kubenswrapper[4946]: E1128 08:33:30.991784 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
Nov 28 08:33:44 crc kubenswrapper[4946]: I1128 08:33:44.990673 4946 scope.go:117] "RemoveContainer" containerID="e8119d549e123f1e930654e36405e56510a1c4898160d24fa79c41d4ac87c927"
Nov 28 08:33:44 crc kubenswrapper[4946]: E1128 08:33:44.991616 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
Nov 28 08:33:51 crc kubenswrapper[4946]: I1128 08:33:51.091857 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5fbc5b666c-wfs2h"]
Nov 28 08:33:51 crc kubenswrapper[4946]: E1128 08:33:51.092512 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d89e2778-303e-43f7-bbf1-6b8d70536573" containerName="storage"
Nov 28 08:33:51 crc kubenswrapper[4946]: I1128 08:33:51.092524 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="d89e2778-303e-43f7-bbf1-6b8d70536573" containerName="storage"
Nov 28 08:33:51 crc kubenswrapper[4946]: I1128 08:33:51.092671 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="d89e2778-303e-43f7-bbf1-6b8d70536573" containerName="storage"
Nov 28 08:33:51 crc kubenswrapper[4946]: I1128 08:33:51.093350 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fbc5b666c-wfs2h"
Nov 28 08:33:51 crc kubenswrapper[4946]: I1128 08:33:51.096007 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Nov 28 08:33:51 crc kubenswrapper[4946]: I1128 08:33:51.096189 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Nov 28 08:33:51 crc kubenswrapper[4946]: I1128 08:33:51.096296 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-8sbk2"
Nov 28 08:33:51 crc kubenswrapper[4946]: I1128 08:33:51.096445 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Nov 28 08:33:51 crc kubenswrapper[4946]: I1128 08:33:51.099294 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Nov 28 08:33:51 crc kubenswrapper[4946]: I1128 08:33:51.115756 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fbc5b666c-wfs2h"]
Nov 28 08:33:51 crc kubenswrapper[4946]: I1128 08:33:51.283996 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c8dc9f5-2d13-47c6-9768-7426a854b4b6-dns-svc\") pod \"dnsmasq-dns-5fbc5b666c-wfs2h\" (UID: \"2c8dc9f5-2d13-47c6-9768-7426a854b4b6\") " pod="openstack/dnsmasq-dns-5fbc5b666c-wfs2h"
Nov 28 08:33:51 crc kubenswrapper[4946]: I1128 08:33:51.284060 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5lgv\" (UniqueName: \"kubernetes.io/projected/2c8dc9f5-2d13-47c6-9768-7426a854b4b6-kube-api-access-j5lgv\") pod \"dnsmasq-dns-5fbc5b666c-wfs2h\" (UID: \"2c8dc9f5-2d13-47c6-9768-7426a854b4b6\") " pod="openstack/dnsmasq-dns-5fbc5b666c-wfs2h"
Nov 28 08:33:51 crc kubenswrapper[4946]: I1128 08:33:51.284303 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c8dc9f5-2d13-47c6-9768-7426a854b4b6-config\") pod \"dnsmasq-dns-5fbc5b666c-wfs2h\" (UID: \"2c8dc9f5-2d13-47c6-9768-7426a854b4b6\") " pod="openstack/dnsmasq-dns-5fbc5b666c-wfs2h"
Nov 28 08:33:51 crc kubenswrapper[4946]: I1128 08:33:51.374282 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-65fd8458c9-rptbj"]
Nov 28 08:33:51 crc kubenswrapper[4946]: I1128 08:33:51.375596 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65fd8458c9-rptbj"
Need to start a new one" pod="openstack/dnsmasq-dns-65fd8458c9-rptbj" Nov 28 08:33:51 crc kubenswrapper[4946]: I1128 08:33:51.385489 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c8dc9f5-2d13-47c6-9768-7426a854b4b6-dns-svc\") pod \"dnsmasq-dns-5fbc5b666c-wfs2h\" (UID: \"2c8dc9f5-2d13-47c6-9768-7426a854b4b6\") " pod="openstack/dnsmasq-dns-5fbc5b666c-wfs2h" Nov 28 08:33:51 crc kubenswrapper[4946]: I1128 08:33:51.385534 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5lgv\" (UniqueName: \"kubernetes.io/projected/2c8dc9f5-2d13-47c6-9768-7426a854b4b6-kube-api-access-j5lgv\") pod \"dnsmasq-dns-5fbc5b666c-wfs2h\" (UID: \"2c8dc9f5-2d13-47c6-9768-7426a854b4b6\") " pod="openstack/dnsmasq-dns-5fbc5b666c-wfs2h" Nov 28 08:33:51 crc kubenswrapper[4946]: I1128 08:33:51.385596 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c8dc9f5-2d13-47c6-9768-7426a854b4b6-config\") pod \"dnsmasq-dns-5fbc5b666c-wfs2h\" (UID: \"2c8dc9f5-2d13-47c6-9768-7426a854b4b6\") " pod="openstack/dnsmasq-dns-5fbc5b666c-wfs2h" Nov 28 08:33:51 crc kubenswrapper[4946]: I1128 08:33:51.386530 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c8dc9f5-2d13-47c6-9768-7426a854b4b6-config\") pod \"dnsmasq-dns-5fbc5b666c-wfs2h\" (UID: \"2c8dc9f5-2d13-47c6-9768-7426a854b4b6\") " pod="openstack/dnsmasq-dns-5fbc5b666c-wfs2h" Nov 28 08:33:51 crc kubenswrapper[4946]: I1128 08:33:51.386602 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c8dc9f5-2d13-47c6-9768-7426a854b4b6-dns-svc\") pod \"dnsmasq-dns-5fbc5b666c-wfs2h\" (UID: \"2c8dc9f5-2d13-47c6-9768-7426a854b4b6\") " pod="openstack/dnsmasq-dns-5fbc5b666c-wfs2h" Nov 28 08:33:51 crc kubenswrapper[4946]: I1128 08:33:51.388330 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65fd8458c9-rptbj"] Nov 28 08:33:51 crc kubenswrapper[4946]: I1128 08:33:51.412480 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5lgv\" (UniqueName: \"kubernetes.io/projected/2c8dc9f5-2d13-47c6-9768-7426a854b4b6-kube-api-access-j5lgv\") pod \"dnsmasq-dns-5fbc5b666c-wfs2h\" (UID: \"2c8dc9f5-2d13-47c6-9768-7426a854b4b6\") " pod="openstack/dnsmasq-dns-5fbc5b666c-wfs2h" Nov 28 08:33:51 crc kubenswrapper[4946]: I1128 08:33:51.487586 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/892175b4-661d-4457-88ca-2f7c9a14a283-dns-svc\") pod \"dnsmasq-dns-65fd8458c9-rptbj\" (UID: \"892175b4-661d-4457-88ca-2f7c9a14a283\") " pod="openstack/dnsmasq-dns-65fd8458c9-rptbj" Nov 28 08:33:51 crc kubenswrapper[4946]: I1128 08:33:51.487674 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/892175b4-661d-4457-88ca-2f7c9a14a283-config\") pod \"dnsmasq-dns-65fd8458c9-rptbj\" (UID: \"892175b4-661d-4457-88ca-2f7c9a14a283\") " pod="openstack/dnsmasq-dns-65fd8458c9-rptbj" Nov 28 08:33:51 crc kubenswrapper[4946]: I1128 08:33:51.487725 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g98k\" (UniqueName: 
\"kubernetes.io/projected/892175b4-661d-4457-88ca-2f7c9a14a283-kube-api-access-7g98k\") pod \"dnsmasq-dns-65fd8458c9-rptbj\" (UID: \"892175b4-661d-4457-88ca-2f7c9a14a283\") " pod="openstack/dnsmasq-dns-65fd8458c9-rptbj" Nov 28 08:33:51 crc kubenswrapper[4946]: I1128 08:33:51.588515 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g98k\" (UniqueName: \"kubernetes.io/projected/892175b4-661d-4457-88ca-2f7c9a14a283-kube-api-access-7g98k\") pod \"dnsmasq-dns-65fd8458c9-rptbj\" (UID: \"892175b4-661d-4457-88ca-2f7c9a14a283\") " pod="openstack/dnsmasq-dns-65fd8458c9-rptbj" Nov 28 08:33:51 crc kubenswrapper[4946]: I1128 08:33:51.588573 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/892175b4-661d-4457-88ca-2f7c9a14a283-dns-svc\") pod \"dnsmasq-dns-65fd8458c9-rptbj\" (UID: \"892175b4-661d-4457-88ca-2f7c9a14a283\") " pod="openstack/dnsmasq-dns-65fd8458c9-rptbj" Nov 28 08:33:51 crc kubenswrapper[4946]: I1128 08:33:51.588627 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/892175b4-661d-4457-88ca-2f7c9a14a283-config\") pod \"dnsmasq-dns-65fd8458c9-rptbj\" (UID: \"892175b4-661d-4457-88ca-2f7c9a14a283\") " pod="openstack/dnsmasq-dns-65fd8458c9-rptbj" Nov 28 08:33:51 crc kubenswrapper[4946]: I1128 08:33:51.589366 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/892175b4-661d-4457-88ca-2f7c9a14a283-config\") pod \"dnsmasq-dns-65fd8458c9-rptbj\" (UID: \"892175b4-661d-4457-88ca-2f7c9a14a283\") " pod="openstack/dnsmasq-dns-65fd8458c9-rptbj" Nov 28 08:33:51 crc kubenswrapper[4946]: I1128 08:33:51.589512 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/892175b4-661d-4457-88ca-2f7c9a14a283-dns-svc\") pod \"dnsmasq-dns-65fd8458c9-rptbj\" (UID: \"892175b4-661d-4457-88ca-2f7c9a14a283\") " pod="openstack/dnsmasq-dns-65fd8458c9-rptbj" Nov 28 08:33:51 crc kubenswrapper[4946]: I1128 08:33:51.610282 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g98k\" (UniqueName: \"kubernetes.io/projected/892175b4-661d-4457-88ca-2f7c9a14a283-kube-api-access-7g98k\") pod \"dnsmasq-dns-65fd8458c9-rptbj\" (UID: \"892175b4-661d-4457-88ca-2f7c9a14a283\") " pod="openstack/dnsmasq-dns-65fd8458c9-rptbj" Nov 28 08:33:51 crc kubenswrapper[4946]: I1128 08:33:51.689984 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65fd8458c9-rptbj" Nov 28 08:33:51 crc kubenswrapper[4946]: I1128 08:33:51.713804 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fbc5b666c-wfs2h" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.173733 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65fd8458c9-rptbj"] Nov 28 08:33:52 crc kubenswrapper[4946]: W1128 08:33:52.175865 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c8dc9f5_2d13_47c6_9768_7426a854b4b6.slice/crio-c8cdc5b60261567d3b59f15661553a6767b15fad6873e88951cfd2f99cf57dca WatchSource:0}: Error finding container c8cdc5b60261567d3b59f15661553a6767b15fad6873e88951cfd2f99cf57dca: Status 404 returned error can't find the container with id c8cdc5b60261567d3b59f15661553a6767b15fad6873e88951cfd2f99cf57dca Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.184419 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fbc5b666c-wfs2h"] Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.259944 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.261191 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.263848 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.264300 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.264398 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.264729 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-mc4q4" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.264798 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.277157 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.400166 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0b429b35-b62b-411d-91dc-eee42d2b359c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0b429b35-b62b-411d-91dc-eee42d2b359c\") " pod="openstack/rabbitmq-server-0" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.400237 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0b429b35-b62b-411d-91dc-eee42d2b359c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0b429b35-b62b-411d-91dc-eee42d2b359c\") " pod="openstack/rabbitmq-server-0" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.400561 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0b429b35-b62b-411d-91dc-eee42d2b359c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0b429b35-b62b-411d-91dc-eee42d2b359c\") " pod="openstack/rabbitmq-server-0" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.400589 4946 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0b429b35-b62b-411d-91dc-eee42d2b359c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0b429b35-b62b-411d-91dc-eee42d2b359c\") " pod="openstack/rabbitmq-server-0" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.400620 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-266e7d83-22dd-417b-bdf7-7c3e59d7a5ab\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-266e7d83-22dd-417b-bdf7-7c3e59d7a5ab\") pod \"rabbitmq-server-0\" (UID: \"0b429b35-b62b-411d-91dc-eee42d2b359c\") " pod="openstack/rabbitmq-server-0" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.400660 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klv8q\" (UniqueName: \"kubernetes.io/projected/0b429b35-b62b-411d-91dc-eee42d2b359c-kube-api-access-klv8q\") pod \"rabbitmq-server-0\" (UID: \"0b429b35-b62b-411d-91dc-eee42d2b359c\") " pod="openstack/rabbitmq-server-0" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.400705 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0b429b35-b62b-411d-91dc-eee42d2b359c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0b429b35-b62b-411d-91dc-eee42d2b359c\") " pod="openstack/rabbitmq-server-0" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.400741 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0b429b35-b62b-411d-91dc-eee42d2b359c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0b429b35-b62b-411d-91dc-eee42d2b359c\") " pod="openstack/rabbitmq-server-0" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.400767 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0b429b35-b62b-411d-91dc-eee42d2b359c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0b429b35-b62b-411d-91dc-eee42d2b359c\") " pod="openstack/rabbitmq-server-0" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.435741 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7ddvl"] Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.444761 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7ddvl" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.475679 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7ddvl"] Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.506698 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0b429b35-b62b-411d-91dc-eee42d2b359c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0b429b35-b62b-411d-91dc-eee42d2b359c\") " pod="openstack/rabbitmq-server-0" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.506754 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0b429b35-b62b-411d-91dc-eee42d2b359c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0b429b35-b62b-411d-91dc-eee42d2b359c\") " pod="openstack/rabbitmq-server-0" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.506786 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-266e7d83-22dd-417b-bdf7-7c3e59d7a5ab\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-266e7d83-22dd-417b-bdf7-7c3e59d7a5ab\") pod \"rabbitmq-server-0\" (UID: \"0b429b35-b62b-411d-91dc-eee42d2b359c\") " pod="openstack/rabbitmq-server-0" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.506851 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klv8q\" (UniqueName: \"kubernetes.io/projected/0b429b35-b62b-411d-91dc-eee42d2b359c-kube-api-access-klv8q\") pod \"rabbitmq-server-0\" (UID: \"0b429b35-b62b-411d-91dc-eee42d2b359c\") " pod="openstack/rabbitmq-server-0" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.506890 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0b429b35-b62b-411d-91dc-eee42d2b359c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0b429b35-b62b-411d-91dc-eee42d2b359c\") " pod="openstack/rabbitmq-server-0" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.506924 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0b429b35-b62b-411d-91dc-eee42d2b359c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0b429b35-b62b-411d-91dc-eee42d2b359c\") " pod="openstack/rabbitmq-server-0" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.506944 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0b429b35-b62b-411d-91dc-eee42d2b359c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0b429b35-b62b-411d-91dc-eee42d2b359c\") " pod="openstack/rabbitmq-server-0" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.507084 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0b429b35-b62b-411d-91dc-eee42d2b359c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0b429b35-b62b-411d-91dc-eee42d2b359c\") " pod="openstack/rabbitmq-server-0" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.507117 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0b429b35-b62b-411d-91dc-eee42d2b359c-server-conf\") pod \"rabbitmq-server-0\" (UID: 
\"0b429b35-b62b-411d-91dc-eee42d2b359c\") " pod="openstack/rabbitmq-server-0" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.508621 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0b429b35-b62b-411d-91dc-eee42d2b359c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0b429b35-b62b-411d-91dc-eee42d2b359c\") " pod="openstack/rabbitmq-server-0" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.508655 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0b429b35-b62b-411d-91dc-eee42d2b359c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0b429b35-b62b-411d-91dc-eee42d2b359c\") " pod="openstack/rabbitmq-server-0" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.508755 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0b429b35-b62b-411d-91dc-eee42d2b359c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0b429b35-b62b-411d-91dc-eee42d2b359c\") " pod="openstack/rabbitmq-server-0" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.508789 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0b429b35-b62b-411d-91dc-eee42d2b359c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0b429b35-b62b-411d-91dc-eee42d2b359c\") " pod="openstack/rabbitmq-server-0" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.511532 4946 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.511563 4946 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-266e7d83-22dd-417b-bdf7-7c3e59d7a5ab\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-266e7d83-22dd-417b-bdf7-7c3e59d7a5ab\") pod \"rabbitmq-server-0\" (UID: \"0b429b35-b62b-411d-91dc-eee42d2b359c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2b1576cedd3e93aac170e4e8d07d9795baa0d2ce1d0390e7bf5440f071778c5d/globalmount\"" pod="openstack/rabbitmq-server-0" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.525004 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0b429b35-b62b-411d-91dc-eee42d2b359c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0b429b35-b62b-411d-91dc-eee42d2b359c\") " pod="openstack/rabbitmq-server-0" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.529253 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0b429b35-b62b-411d-91dc-eee42d2b359c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0b429b35-b62b-411d-91dc-eee42d2b359c\") " pod="openstack/rabbitmq-server-0" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.536170 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0b429b35-b62b-411d-91dc-eee42d2b359c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0b429b35-b62b-411d-91dc-eee42d2b359c\") " pod="openstack/rabbitmq-server-0" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.536726 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klv8q\" (UniqueName: 
\"kubernetes.io/projected/0b429b35-b62b-411d-91dc-eee42d2b359c-kube-api-access-klv8q\") pod \"rabbitmq-server-0\" (UID: \"0b429b35-b62b-411d-91dc-eee42d2b359c\") " pod="openstack/rabbitmq-server-0" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.545901 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.547284 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.548956 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.549437 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-t4vqc" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.549707 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.549930 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.552058 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.556075 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.566290 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-266e7d83-22dd-417b-bdf7-7c3e59d7a5ab\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-266e7d83-22dd-417b-bdf7-7c3e59d7a5ab\") pod \"rabbitmq-server-0\" (UID: \"0b429b35-b62b-411d-91dc-eee42d2b359c\") " pod="openstack/rabbitmq-server-0" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.583532 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.608856 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a397e3ca-90da-41ce-be73-be07db15d66d-utilities\") pod \"redhat-marketplace-7ddvl\" (UID: \"a397e3ca-90da-41ce-be73-be07db15d66d\") " pod="openshift-marketplace/redhat-marketplace-7ddvl" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.609087 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqkdc\" (UniqueName: \"kubernetes.io/projected/a397e3ca-90da-41ce-be73-be07db15d66d-kube-api-access-fqkdc\") pod \"redhat-marketplace-7ddvl\" (UID: \"a397e3ca-90da-41ce-be73-be07db15d66d\") " pod="openshift-marketplace/redhat-marketplace-7ddvl" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.609150 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a397e3ca-90da-41ce-be73-be07db15d66d-catalog-content\") pod \"redhat-marketplace-7ddvl\" (UID: \"a397e3ca-90da-41ce-be73-be07db15d66d\") " pod="openshift-marketplace/redhat-marketplace-7ddvl" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.685903 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65fd8458c9-rptbj" event={"ID":"892175b4-661d-4457-88ca-2f7c9a14a283","Type":"ContainerStarted","Data":"7ae71678242ee68760cea06496c0c4a77791cff113c9c5c8de08e660a8487949"} Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.701536 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fbc5b666c-wfs2h" event={"ID":"2c8dc9f5-2d13-47c6-9768-7426a854b4b6","Type":"ContainerStarted","Data":"c8cdc5b60261567d3b59f15661553a6767b15fad6873e88951cfd2f99cf57dca"} Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.710216 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a397e3ca-90da-41ce-be73-be07db15d66d-utilities\") pod \"redhat-marketplace-7ddvl\" (UID: \"a397e3ca-90da-41ce-be73-be07db15d66d\") " pod="openshift-marketplace/redhat-marketplace-7ddvl" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.710264 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2ce2d653-a79e-4b24-b214-7f0d00141c71-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ce2d653-a79e-4b24-b214-7f0d00141c71\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.710297 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2ce2d653-a79e-4b24-b214-7f0d00141c71-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ce2d653-a79e-4b24-b214-7f0d00141c71\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.710312 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2ce2d653-a79e-4b24-b214-7f0d00141c71-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ce2d653-a79e-4b24-b214-7f0d00141c71\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 08:33:52 crc kubenswrapper[4946]: 
I1128 08:33:52.710345 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqkdc\" (UniqueName: \"kubernetes.io/projected/a397e3ca-90da-41ce-be73-be07db15d66d-kube-api-access-fqkdc\") pod \"redhat-marketplace-7ddvl\" (UID: \"a397e3ca-90da-41ce-be73-be07db15d66d\") " pod="openshift-marketplace/redhat-marketplace-7ddvl" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.710447 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a397e3ca-90da-41ce-be73-be07db15d66d-catalog-content\") pod \"redhat-marketplace-7ddvl\" (UID: \"a397e3ca-90da-41ce-be73-be07db15d66d\") " pod="openshift-marketplace/redhat-marketplace-7ddvl" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.710574 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2ce2d653-a79e-4b24-b214-7f0d00141c71-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ce2d653-a79e-4b24-b214-7f0d00141c71\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.710737 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-91a67a38-f0e2-4b11-a369-c477cada57f5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-91a67a38-f0e2-4b11-a369-c477cada57f5\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ce2d653-a79e-4b24-b214-7f0d00141c71\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.710784 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2ce2d653-a79e-4b24-b214-7f0d00141c71-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ce2d653-a79e-4b24-b214-7f0d00141c71\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.710839 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgcg4\" (UniqueName: \"kubernetes.io/projected/2ce2d653-a79e-4b24-b214-7f0d00141c71-kube-api-access-rgcg4\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ce2d653-a79e-4b24-b214-7f0d00141c71\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.710880 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2ce2d653-a79e-4b24-b214-7f0d00141c71-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ce2d653-a79e-4b24-b214-7f0d00141c71\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.710909 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2ce2d653-a79e-4b24-b214-7f0d00141c71-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ce2d653-a79e-4b24-b214-7f0d00141c71\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.711215 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a397e3ca-90da-41ce-be73-be07db15d66d-utilities\") pod \"redhat-marketplace-7ddvl\" (UID: \"a397e3ca-90da-41ce-be73-be07db15d66d\") " 
pod="openshift-marketplace/redhat-marketplace-7ddvl" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.711329 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a397e3ca-90da-41ce-be73-be07db15d66d-catalog-content\") pod \"redhat-marketplace-7ddvl\" (UID: \"a397e3ca-90da-41ce-be73-be07db15d66d\") " pod="openshift-marketplace/redhat-marketplace-7ddvl" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.749695 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqkdc\" (UniqueName: \"kubernetes.io/projected/a397e3ca-90da-41ce-be73-be07db15d66d-kube-api-access-fqkdc\") pod \"redhat-marketplace-7ddvl\" (UID: \"a397e3ca-90da-41ce-be73-be07db15d66d\") " pod="openshift-marketplace/redhat-marketplace-7ddvl" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.806994 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7ddvl" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.820291 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2ce2d653-a79e-4b24-b214-7f0d00141c71-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ce2d653-a79e-4b24-b214-7f0d00141c71\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.820336 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-91a67a38-f0e2-4b11-a369-c477cada57f5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-91a67a38-f0e2-4b11-a369-c477cada57f5\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ce2d653-a79e-4b24-b214-7f0d00141c71\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.820354 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2ce2d653-a79e-4b24-b214-7f0d00141c71-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ce2d653-a79e-4b24-b214-7f0d00141c71\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.820383 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgcg4\" (UniqueName: \"kubernetes.io/projected/2ce2d653-a79e-4b24-b214-7f0d00141c71-kube-api-access-rgcg4\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ce2d653-a79e-4b24-b214-7f0d00141c71\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.820405 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2ce2d653-a79e-4b24-b214-7f0d00141c71-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ce2d653-a79e-4b24-b214-7f0d00141c71\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.820425 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2ce2d653-a79e-4b24-b214-7f0d00141c71-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ce2d653-a79e-4b24-b214-7f0d00141c71\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.820475 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/2ce2d653-a79e-4b24-b214-7f0d00141c71-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ce2d653-a79e-4b24-b214-7f0d00141c71\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.820500 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2ce2d653-a79e-4b24-b214-7f0d00141c71-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ce2d653-a79e-4b24-b214-7f0d00141c71\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.820515 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2ce2d653-a79e-4b24-b214-7f0d00141c71-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ce2d653-a79e-4b24-b214-7f0d00141c71\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.820993 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2ce2d653-a79e-4b24-b214-7f0d00141c71-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ce2d653-a79e-4b24-b214-7f0d00141c71\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.821196 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2ce2d653-a79e-4b24-b214-7f0d00141c71-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ce2d653-a79e-4b24-b214-7f0d00141c71\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.822416 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2ce2d653-a79e-4b24-b214-7f0d00141c71-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ce2d653-a79e-4b24-b214-7f0d00141c71\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.827734 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2ce2d653-a79e-4b24-b214-7f0d00141c71-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ce2d653-a79e-4b24-b214-7f0d00141c71\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.828379 4946 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.828486 4946 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-91a67a38-f0e2-4b11-a369-c477cada57f5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-91a67a38-f0e2-4b11-a369-c477cada57f5\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ce2d653-a79e-4b24-b214-7f0d00141c71\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ad841bab452a44bc4483df56d6386dff5f134804806addb749e9d36b65b51d57/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.830767 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2ce2d653-a79e-4b24-b214-7f0d00141c71-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ce2d653-a79e-4b24-b214-7f0d00141c71\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.833244 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2ce2d653-a79e-4b24-b214-7f0d00141c71-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ce2d653-a79e-4b24-b214-7f0d00141c71\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.842098 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2ce2d653-a79e-4b24-b214-7f0d00141c71-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ce2d653-a79e-4b24-b214-7f0d00141c71\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.846898 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgcg4\" (UniqueName: \"kubernetes.io/projected/2ce2d653-a79e-4b24-b214-7f0d00141c71-kube-api-access-rgcg4\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ce2d653-a79e-4b24-b214-7f0d00141c71\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.863208 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-91a67a38-f0e2-4b11-a369-c477cada57f5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-91a67a38-f0e2-4b11-a369-c477cada57f5\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ce2d653-a79e-4b24-b214-7f0d00141c71\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 08:33:52 crc kubenswrapper[4946]: I1128 08:33:52.905126 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 28 08:33:53 crc kubenswrapper[4946]: I1128 08:33:53.111200 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 08:33:53 crc kubenswrapper[4946]: W1128 08:33:53.135194 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b429b35_b62b_411d_91dc_eee42d2b359c.slice/crio-e005eab32dcef35f0e38c1ba5d952a1784c3214db728a0ff06d7b1ac9af00566 WatchSource:0}: Error finding container e005eab32dcef35f0e38c1ba5d952a1784c3214db728a0ff06d7b1ac9af00566: Status 404 returned error can't find the container with id e005eab32dcef35f0e38c1ba5d952a1784c3214db728a0ff06d7b1ac9af00566 Nov 28 08:33:53 crc kubenswrapper[4946]: I1128 08:33:53.264480 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7ddvl"] Nov 28 08:33:53 crc kubenswrapper[4946]: W1128 08:33:53.276923 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda397e3ca_90da_41ce_be73_be07db15d66d.slice/crio-8c05c05017718cd027e4512fac38132806fdef4779e045f3ebaf315d323dcbbe WatchSource:0}: Error finding container 8c05c05017718cd027e4512fac38132806fdef4779e045f3ebaf315d323dcbbe: Status 404 returned error can't find the container with id 8c05c05017718cd027e4512fac38132806fdef4779e045f3ebaf315d323dcbbe Nov 28 08:33:53 crc kubenswrapper[4946]: I1128 08:33:53.392674 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 08:33:53 crc kubenswrapper[4946]: W1128 08:33:53.407971 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ce2d653_a79e_4b24_b214_7f0d00141c71.slice/crio-a3d792cc6cf5ee62fdb02fb209657d2a7ff00d62ee47d273e60b0c5a379265ee WatchSource:0}: Error finding container a3d792cc6cf5ee62fdb02fb209657d2a7ff00d62ee47d273e60b0c5a379265ee: Status 404 returned error can't find the container with id a3d792cc6cf5ee62fdb02fb209657d2a7ff00d62ee47d273e60b0c5a379265ee Nov 28 08:33:53 crc kubenswrapper[4946]: I1128 08:33:53.711373 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2ce2d653-a79e-4b24-b214-7f0d00141c71","Type":"ContainerStarted","Data":"a3d792cc6cf5ee62fdb02fb209657d2a7ff00d62ee47d273e60b0c5a379265ee"} Nov 28 08:33:53 crc kubenswrapper[4946]: I1128 08:33:53.714180 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0b429b35-b62b-411d-91dc-eee42d2b359c","Type":"ContainerStarted","Data":"e005eab32dcef35f0e38c1ba5d952a1784c3214db728a0ff06d7b1ac9af00566"} Nov 28 08:33:53 crc kubenswrapper[4946]: I1128 08:33:53.717039 4946 generic.go:334] "Generic (PLEG): container finished" podID="a397e3ca-90da-41ce-be73-be07db15d66d" containerID="f7e1bd28bcf405d3743bb87b92786b467398aaaf620af4e9befdc6d0f382aaa7" exitCode=0 Nov 28 08:33:53 crc kubenswrapper[4946]: I1128 08:33:53.717106 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7ddvl" event={"ID":"a397e3ca-90da-41ce-be73-be07db15d66d","Type":"ContainerDied","Data":"f7e1bd28bcf405d3743bb87b92786b467398aaaf620af4e9befdc6d0f382aaa7"} Nov 28 08:33:53 crc kubenswrapper[4946]: I1128 08:33:53.717138 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7ddvl" 
event={"ID":"a397e3ca-90da-41ce-be73-be07db15d66d","Type":"ContainerStarted","Data":"8c05c05017718cd027e4512fac38132806fdef4779e045f3ebaf315d323dcbbe"} Nov 28 08:33:53 crc kubenswrapper[4946]: I1128 08:33:53.732201 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Nov 28 08:33:53 crc kubenswrapper[4946]: I1128 08:33:53.733810 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 28 08:33:53 crc kubenswrapper[4946]: I1128 08:33:53.738075 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-jt4qx" Nov 28 08:33:53 crc kubenswrapper[4946]: I1128 08:33:53.738454 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 28 08:33:53 crc kubenswrapper[4946]: I1128 08:33:53.738636 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 28 08:33:53 crc kubenswrapper[4946]: I1128 08:33:53.742695 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 28 08:33:53 crc kubenswrapper[4946]: I1128 08:33:53.756644 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 28 08:33:53 crc kubenswrapper[4946]: I1128 08:33:53.763987 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 28 08:33:53 crc kubenswrapper[4946]: I1128 08:33:53.835098 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxwdn\" (UniqueName: \"kubernetes.io/projected/1a323bad-127f-44c5-8d32-9a7c53deceea-kube-api-access-fxwdn\") pod \"openstack-galera-0\" (UID: \"1a323bad-127f-44c5-8d32-9a7c53deceea\") " pod="openstack/openstack-galera-0" Nov 28 08:33:53 crc kubenswrapper[4946]: I1128 08:33:53.835429 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a323bad-127f-44c5-8d32-9a7c53deceea-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"1a323bad-127f-44c5-8d32-9a7c53deceea\") " pod="openstack/openstack-galera-0" Nov 28 08:33:53 crc kubenswrapper[4946]: I1128 08:33:53.835487 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1a323bad-127f-44c5-8d32-9a7c53deceea-kolla-config\") pod \"openstack-galera-0\" (UID: \"1a323bad-127f-44c5-8d32-9a7c53deceea\") " pod="openstack/openstack-galera-0" Nov 28 08:33:53 crc kubenswrapper[4946]: I1128 08:33:53.835543 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1a323bad-127f-44c5-8d32-9a7c53deceea-config-data-generated\") pod \"openstack-galera-0\" (UID: \"1a323bad-127f-44c5-8d32-9a7c53deceea\") " pod="openstack/openstack-galera-0" Nov 28 08:33:53 crc kubenswrapper[4946]: I1128 08:33:53.835574 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-53343410-a4da-4799-9f66-8c52247e6940\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-53343410-a4da-4799-9f66-8c52247e6940\") pod \"openstack-galera-0\" (UID: \"1a323bad-127f-44c5-8d32-9a7c53deceea\") " pod="openstack/openstack-galera-0" Nov 28 08:33:53 crc kubenswrapper[4946]: I1128 08:33:53.835618 4946 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a323bad-127f-44c5-8d32-9a7c53deceea-operator-scripts\") pod \"openstack-galera-0\" (UID: \"1a323bad-127f-44c5-8d32-9a7c53deceea\") " pod="openstack/openstack-galera-0" Nov 28 08:33:53 crc kubenswrapper[4946]: I1128 08:33:53.835646 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a323bad-127f-44c5-8d32-9a7c53deceea-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"1a323bad-127f-44c5-8d32-9a7c53deceea\") " pod="openstack/openstack-galera-0" Nov 28 08:33:53 crc kubenswrapper[4946]: I1128 08:33:53.835665 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1a323bad-127f-44c5-8d32-9a7c53deceea-config-data-default\") pod \"openstack-galera-0\" (UID: \"1a323bad-127f-44c5-8d32-9a7c53deceea\") " pod="openstack/openstack-galera-0" Nov 28 08:33:53 crc kubenswrapper[4946]: I1128 08:33:53.938514 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1a323bad-127f-44c5-8d32-9a7c53deceea-config-data-generated\") pod \"openstack-galera-0\" (UID: \"1a323bad-127f-44c5-8d32-9a7c53deceea\") " pod="openstack/openstack-galera-0" Nov 28 08:33:53 crc kubenswrapper[4946]: I1128 08:33:53.938573 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-53343410-a4da-4799-9f66-8c52247e6940\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-53343410-a4da-4799-9f66-8c52247e6940\") pod \"openstack-galera-0\" (UID: \"1a323bad-127f-44c5-8d32-9a7c53deceea\") " pod="openstack/openstack-galera-0" Nov 28 08:33:53 crc kubenswrapper[4946]: I1128 08:33:53.938625 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a323bad-127f-44c5-8d32-9a7c53deceea-operator-scripts\") pod \"openstack-galera-0\" (UID: \"1a323bad-127f-44c5-8d32-9a7c53deceea\") " pod="openstack/openstack-galera-0" Nov 28 08:33:53 crc kubenswrapper[4946]: I1128 08:33:53.938653 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a323bad-127f-44c5-8d32-9a7c53deceea-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"1a323bad-127f-44c5-8d32-9a7c53deceea\") " pod="openstack/openstack-galera-0" Nov 28 08:33:53 crc kubenswrapper[4946]: I1128 08:33:53.938674 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1a323bad-127f-44c5-8d32-9a7c53deceea-config-data-default\") pod \"openstack-galera-0\" (UID: \"1a323bad-127f-44c5-8d32-9a7c53deceea\") " pod="openstack/openstack-galera-0" Nov 28 08:33:53 crc kubenswrapper[4946]: I1128 08:33:53.938693 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxwdn\" (UniqueName: \"kubernetes.io/projected/1a323bad-127f-44c5-8d32-9a7c53deceea-kube-api-access-fxwdn\") pod \"openstack-galera-0\" (UID: \"1a323bad-127f-44c5-8d32-9a7c53deceea\") " pod="openstack/openstack-galera-0" Nov 28 08:33:53 crc kubenswrapper[4946]: I1128 08:33:53.938717 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a323bad-127f-44c5-8d32-9a7c53deceea-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"1a323bad-127f-44c5-8d32-9a7c53deceea\") " pod="openstack/openstack-galera-0" Nov 28 08:33:53 crc kubenswrapper[4946]: I1128 08:33:53.938738 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1a323bad-127f-44c5-8d32-9a7c53deceea-kolla-config\") pod \"openstack-galera-0\" (UID: \"1a323bad-127f-44c5-8d32-9a7c53deceea\") " pod="openstack/openstack-galera-0" Nov 28 08:33:53 crc kubenswrapper[4946]: I1128 08:33:53.938978 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1a323bad-127f-44c5-8d32-9a7c53deceea-config-data-generated\") pod \"openstack-galera-0\" (UID: \"1a323bad-127f-44c5-8d32-9a7c53deceea\") " pod="openstack/openstack-galera-0" Nov 28 08:33:53 crc kubenswrapper[4946]: I1128 08:33:53.939442 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1a323bad-127f-44c5-8d32-9a7c53deceea-kolla-config\") pod \"openstack-galera-0\" (UID: \"1a323bad-127f-44c5-8d32-9a7c53deceea\") " pod="openstack/openstack-galera-0" Nov 28 08:33:53 crc kubenswrapper[4946]: I1128 08:33:53.940080 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1a323bad-127f-44c5-8d32-9a7c53deceea-config-data-default\") pod \"openstack-galera-0\" (UID: \"1a323bad-127f-44c5-8d32-9a7c53deceea\") " pod="openstack/openstack-galera-0" Nov 28 08:33:53 crc kubenswrapper[4946]: I1128 08:33:53.940929 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a323bad-127f-44c5-8d32-9a7c53deceea-operator-scripts\") pod \"openstack-galera-0\" (UID: \"1a323bad-127f-44c5-8d32-9a7c53deceea\") " pod="openstack/openstack-galera-0" Nov 28 08:33:53 crc kubenswrapper[4946]: I1128 08:33:53.942886 4946 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 28 08:33:53 crc kubenswrapper[4946]: I1128 08:33:53.942984 4946 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-53343410-a4da-4799-9f66-8c52247e6940\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-53343410-a4da-4799-9f66-8c52247e6940\") pod \"openstack-galera-0\" (UID: \"1a323bad-127f-44c5-8d32-9a7c53deceea\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8e343e91d5e696885e31660e1243a0dfd01cc4e98f7259f78e36538e040f2755/globalmount\"" pod="openstack/openstack-galera-0" Nov 28 08:33:53 crc kubenswrapper[4946]: I1128 08:33:53.947690 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a323bad-127f-44c5-8d32-9a7c53deceea-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"1a323bad-127f-44c5-8d32-9a7c53deceea\") " pod="openstack/openstack-galera-0" Nov 28 08:33:53 crc kubenswrapper[4946]: I1128 08:33:53.964238 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a323bad-127f-44c5-8d32-9a7c53deceea-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"1a323bad-127f-44c5-8d32-9a7c53deceea\") " pod="openstack/openstack-galera-0" Nov 28 08:33:53 crc kubenswrapper[4946]: I1128 08:33:53.971225 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxwdn\" (UniqueName: \"kubernetes.io/projected/1a323bad-127f-44c5-8d32-9a7c53deceea-kube-api-access-fxwdn\") pod \"openstack-galera-0\" (UID: \"1a323bad-127f-44c5-8d32-9a7c53deceea\") " pod="openstack/openstack-galera-0" Nov 28 08:33:53 crc kubenswrapper[4946]: I1128 08:33:53.990933 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-53343410-a4da-4799-9f66-8c52247e6940\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-53343410-a4da-4799-9f66-8c52247e6940\") pod \"openstack-galera-0\" (UID: \"1a323bad-127f-44c5-8d32-9a7c53deceea\") " pod="openstack/openstack-galera-0" Nov 28 08:33:54 crc kubenswrapper[4946]: I1128 08:33:54.031543 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Nov 28 08:33:54 crc kubenswrapper[4946]: I1128 08:33:54.032503 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 28 08:33:54 crc kubenswrapper[4946]: I1128 08:33:54.039034 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 28 08:33:54 crc kubenswrapper[4946]: I1128 08:33:54.080914 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-54sfj" Nov 28 08:33:54 crc kubenswrapper[4946]: I1128 08:33:54.081559 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 28 08:33:54 crc kubenswrapper[4946]: I1128 08:33:54.083369 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Nov 28 08:33:54 crc kubenswrapper[4946]: I1128 08:33:54.147250 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/90119a9d-5f8c-4da1-8695-89da8ff356e9-config-data\") pod \"memcached-0\" (UID: \"90119a9d-5f8c-4da1-8695-89da8ff356e9\") " pod="openstack/memcached-0" Nov 28 08:33:54 crc kubenswrapper[4946]: I1128 08:33:54.147299 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72m8b\" (UniqueName: \"kubernetes.io/projected/90119a9d-5f8c-4da1-8695-89da8ff356e9-kube-api-access-72m8b\") pod \"memcached-0\" (UID: \"90119a9d-5f8c-4da1-8695-89da8ff356e9\") " pod="openstack/memcached-0" Nov 28 08:33:54 crc kubenswrapper[4946]: I1128 08:33:54.147346 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/90119a9d-5f8c-4da1-8695-89da8ff356e9-kolla-config\") pod \"memcached-0\" (UID: \"90119a9d-5f8c-4da1-8695-89da8ff356e9\") " pod="openstack/memcached-0" Nov 28 08:33:54 crc kubenswrapper[4946]: I1128 08:33:54.249308 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/90119a9d-5f8c-4da1-8695-89da8ff356e9-config-data\") pod \"memcached-0\" (UID: \"90119a9d-5f8c-4da1-8695-89da8ff356e9\") " pod="openstack/memcached-0" Nov 28 08:33:54 crc kubenswrapper[4946]: I1128 08:33:54.249631 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72m8b\" (UniqueName: \"kubernetes.io/projected/90119a9d-5f8c-4da1-8695-89da8ff356e9-kube-api-access-72m8b\") pod \"memcached-0\" (UID: \"90119a9d-5f8c-4da1-8695-89da8ff356e9\") " pod="openstack/memcached-0" Nov 28 08:33:54 crc kubenswrapper[4946]: I1128 08:33:54.249679 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/90119a9d-5f8c-4da1-8695-89da8ff356e9-kolla-config\") pod \"memcached-0\" (UID: \"90119a9d-5f8c-4da1-8695-89da8ff356e9\") " pod="openstack/memcached-0" Nov 28 08:33:54 crc kubenswrapper[4946]: I1128 08:33:54.250893 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/90119a9d-5f8c-4da1-8695-89da8ff356e9-config-data\") pod \"memcached-0\" (UID: \"90119a9d-5f8c-4da1-8695-89da8ff356e9\") " pod="openstack/memcached-0" Nov 28 08:33:54 crc kubenswrapper[4946]: I1128 08:33:54.252140 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/90119a9d-5f8c-4da1-8695-89da8ff356e9-kolla-config\") pod \"memcached-0\" (UID: \"90119a9d-5f8c-4da1-8695-89da8ff356e9\") " pod="openstack/memcached-0" Nov 28 08:33:54 crc kubenswrapper[4946]: I1128 08:33:54.266361 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72m8b\" (UniqueName: \"kubernetes.io/projected/90119a9d-5f8c-4da1-8695-89da8ff356e9-kube-api-access-72m8b\") pod \"memcached-0\" (UID: \"90119a9d-5f8c-4da1-8695-89da8ff356e9\") " pod="openstack/memcached-0" Nov 28 08:33:54 crc kubenswrapper[4946]: I1128 08:33:54.390953 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Nov 28 08:33:55 crc kubenswrapper[4946]: I1128 08:33:54.565649 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 28 08:33:55 crc kubenswrapper[4946]: I1128 08:33:54.729243 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1a323bad-127f-44c5-8d32-9a7c53deceea","Type":"ContainerStarted","Data":"0a79c06980924a07c026d8c5c47bb8b9c8b7d28ccf875b1abb796ea0a4b8c974"} Nov 28 08:33:55 crc kubenswrapper[4946]: I1128 08:33:55.224945 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 28 08:33:55 crc kubenswrapper[4946]: I1128 08:33:55.231643 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 28 08:33:55 crc kubenswrapper[4946]: I1128 08:33:55.231778 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 28 08:33:55 crc kubenswrapper[4946]: I1128 08:33:55.256231 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 28 08:33:55 crc kubenswrapper[4946]: I1128 08:33:55.256433 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-98tf9" Nov 28 08:33:55 crc kubenswrapper[4946]: I1128 08:33:55.256569 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 28 08:33:55 crc kubenswrapper[4946]: I1128 08:33:55.267926 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 28 08:33:55 crc kubenswrapper[4946]: I1128 08:33:55.371834 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a5bdfa1c-6408-40f5-a9db-4b991fd2b022-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a5bdfa1c-6408-40f5-a9db-4b991fd2b022\") " pod="openstack/openstack-cell1-galera-0" Nov 28 08:33:55 crc kubenswrapper[4946]: I1128 08:33:55.371909 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5bdfa1c-6408-40f5-a9db-4b991fd2b022-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a5bdfa1c-6408-40f5-a9db-4b991fd2b022\") " pod="openstack/openstack-cell1-galera-0" Nov 28 08:33:55 crc kubenswrapper[4946]: I1128 08:33:55.371967 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a5bdfa1c-6408-40f5-a9db-4b991fd2b022-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a5bdfa1c-6408-40f5-a9db-4b991fd2b022\") " pod="openstack/openstack-cell1-galera-0" Nov 28 08:33:55 crc kubenswrapper[4946]: I1128 08:33:55.372021 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a5bdfa1c-6408-40f5-a9db-4b991fd2b022-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a5bdfa1c-6408-40f5-a9db-4b991fd2b022\") " pod="openstack/openstack-cell1-galera-0" Nov 28 08:33:55 crc kubenswrapper[4946]: I1128 08:33:55.372047 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/a5bdfa1c-6408-40f5-a9db-4b991fd2b022-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a5bdfa1c-6408-40f5-a9db-4b991fd2b022\") " pod="openstack/openstack-cell1-galera-0" Nov 28 08:33:55 crc kubenswrapper[4946]: I1128 08:33:55.372152 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqqjs\" (UniqueName: \"kubernetes.io/projected/a5bdfa1c-6408-40f5-a9db-4b991fd2b022-kube-api-access-dqqjs\") pod \"openstack-cell1-galera-0\" (UID: \"a5bdfa1c-6408-40f5-a9db-4b991fd2b022\") " pod="openstack/openstack-cell1-galera-0" Nov 28 08:33:55 crc kubenswrapper[4946]: I1128 08:33:55.372258 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f9871203-96b3-40a9-8493-05df0a41f84b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f9871203-96b3-40a9-8493-05df0a41f84b\") pod \"openstack-cell1-galera-0\" (UID: \"a5bdfa1c-6408-40f5-a9db-4b991fd2b022\") " pod="openstack/openstack-cell1-galera-0" Nov 28 08:33:55 crc kubenswrapper[4946]: I1128 08:33:55.372395 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5bdfa1c-6408-40f5-a9db-4b991fd2b022-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a5bdfa1c-6408-40f5-a9db-4b991fd2b022\") " pod="openstack/openstack-cell1-galera-0" Nov 28 08:33:55 crc kubenswrapper[4946]: I1128 08:33:55.474136 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a5bdfa1c-6408-40f5-a9db-4b991fd2b022-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a5bdfa1c-6408-40f5-a9db-4b991fd2b022\") " pod="openstack/openstack-cell1-galera-0" Nov 28 08:33:55 crc kubenswrapper[4946]: I1128 08:33:55.474190 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a5bdfa1c-6408-40f5-a9db-4b991fd2b022-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a5bdfa1c-6408-40f5-a9db-4b991fd2b022\") " pod="openstack/openstack-cell1-galera-0" Nov 28 08:33:55 crc kubenswrapper[4946]: I1128 08:33:55.474215 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5bdfa1c-6408-40f5-a9db-4b991fd2b022-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a5bdfa1c-6408-40f5-a9db-4b991fd2b022\") " pod="openstack/openstack-cell1-galera-0" Nov 28 08:33:55 crc kubenswrapper[4946]: I1128 08:33:55.474233 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqqjs\" (UniqueName: \"kubernetes.io/projected/a5bdfa1c-6408-40f5-a9db-4b991fd2b022-kube-api-access-dqqjs\") pod \"openstack-cell1-galera-0\" (UID: \"a5bdfa1c-6408-40f5-a9db-4b991fd2b022\") " pod="openstack/openstack-cell1-galera-0" Nov 28 08:33:55 crc kubenswrapper[4946]: I1128 08:33:55.474275 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f9871203-96b3-40a9-8493-05df0a41f84b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f9871203-96b3-40a9-8493-05df0a41f84b\") pod \"openstack-cell1-galera-0\" (UID: \"a5bdfa1c-6408-40f5-a9db-4b991fd2b022\") " pod="openstack/openstack-cell1-galera-0" Nov 28 08:33:55 crc kubenswrapper[4946]: I1128 08:33:55.474326 4946 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5bdfa1c-6408-40f5-a9db-4b991fd2b022-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a5bdfa1c-6408-40f5-a9db-4b991fd2b022\") " pod="openstack/openstack-cell1-galera-0" Nov 28 08:33:55 crc kubenswrapper[4946]: I1128 08:33:55.474346 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a5bdfa1c-6408-40f5-a9db-4b991fd2b022-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a5bdfa1c-6408-40f5-a9db-4b991fd2b022\") " pod="openstack/openstack-cell1-galera-0" Nov 28 08:33:55 crc kubenswrapper[4946]: I1128 08:33:55.474377 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5bdfa1c-6408-40f5-a9db-4b991fd2b022-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a5bdfa1c-6408-40f5-a9db-4b991fd2b022\") " pod="openstack/openstack-cell1-galera-0" Nov 28 08:33:55 crc kubenswrapper[4946]: I1128 08:33:55.474838 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a5bdfa1c-6408-40f5-a9db-4b991fd2b022-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a5bdfa1c-6408-40f5-a9db-4b991fd2b022\") " pod="openstack/openstack-cell1-galera-0" Nov 28 08:33:55 crc kubenswrapper[4946]: I1128 08:33:55.475236 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a5bdfa1c-6408-40f5-a9db-4b991fd2b022-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a5bdfa1c-6408-40f5-a9db-4b991fd2b022\") " pod="openstack/openstack-cell1-galera-0" Nov 28 08:33:55 crc kubenswrapper[4946]: I1128 08:33:55.475246 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a5bdfa1c-6408-40f5-a9db-4b991fd2b022-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a5bdfa1c-6408-40f5-a9db-4b991fd2b022\") " pod="openstack/openstack-cell1-galera-0" Nov 28 08:33:55 crc kubenswrapper[4946]: I1128 08:33:55.475945 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5bdfa1c-6408-40f5-a9db-4b991fd2b022-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a5bdfa1c-6408-40f5-a9db-4b991fd2b022\") " pod="openstack/openstack-cell1-galera-0" Nov 28 08:33:55 crc kubenswrapper[4946]: I1128 08:33:55.476712 4946 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 28 08:33:55 crc kubenswrapper[4946]: I1128 08:33:55.476743 4946 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f9871203-96b3-40a9-8493-05df0a41f84b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f9871203-96b3-40a9-8493-05df0a41f84b\") pod \"openstack-cell1-galera-0\" (UID: \"a5bdfa1c-6408-40f5-a9db-4b991fd2b022\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/75579413553fcabd6fa4967009e36634c45480d218d645566bd77f3eab9aff73/globalmount\"" pod="openstack/openstack-cell1-galera-0"
Nov 28 08:33:55 crc kubenswrapper[4946]: I1128 08:33:55.480425 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5bdfa1c-6408-40f5-a9db-4b991fd2b022-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a5bdfa1c-6408-40f5-a9db-4b991fd2b022\") " pod="openstack/openstack-cell1-galera-0"
Nov 28 08:33:55 crc kubenswrapper[4946]: I1128 08:33:55.480473 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5bdfa1c-6408-40f5-a9db-4b991fd2b022-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a5bdfa1c-6408-40f5-a9db-4b991fd2b022\") " pod="openstack/openstack-cell1-galera-0"
Nov 28 08:33:55 crc kubenswrapper[4946]: I1128 08:33:55.504745 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f9871203-96b3-40a9-8493-05df0a41f84b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f9871203-96b3-40a9-8493-05df0a41f84b\") pod \"openstack-cell1-galera-0\" (UID: \"a5bdfa1c-6408-40f5-a9db-4b991fd2b022\") " pod="openstack/openstack-cell1-galera-0"
Nov 28 08:33:55 crc kubenswrapper[4946]: I1128 08:33:55.627831 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqqjs\" (UniqueName: \"kubernetes.io/projected/a5bdfa1c-6408-40f5-a9db-4b991fd2b022-kube-api-access-dqqjs\") pod \"openstack-cell1-galera-0\" (UID: \"a5bdfa1c-6408-40f5-a9db-4b991fd2b022\") " pod="openstack/openstack-cell1-galera-0"
Nov 28 08:33:55 crc kubenswrapper[4946]: I1128 08:33:55.650381 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Nov 28 08:33:55 crc kubenswrapper[4946]: W1128 08:33:55.678017 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90119a9d_5f8c_4da1_8695_89da8ff356e9.slice/crio-ea4edd4c7ecb8c393900fd85713001f7d7b2c5720f478b1c2b553658c4e188b3 WatchSource:0}: Error finding container ea4edd4c7ecb8c393900fd85713001f7d7b2c5720f478b1c2b553658c4e188b3: Status 404 returned error can't find the container with id ea4edd4c7ecb8c393900fd85713001f7d7b2c5720f478b1c2b553658c4e188b3
Nov 28 08:33:55 crc kubenswrapper[4946]: I1128 08:33:55.751779 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"90119a9d-5f8c-4da1-8695-89da8ff356e9","Type":"ContainerStarted","Data":"ea4edd4c7ecb8c393900fd85713001f7d7b2c5720f478b1c2b553658c4e188b3"}
Nov 28 08:33:55 crc kubenswrapper[4946]: I1128 08:33:55.755604 4946 generic.go:334] "Generic (PLEG): container finished" podID="a397e3ca-90da-41ce-be73-be07db15d66d" containerID="d52d3ea77b1e8a72ac2a8a3f725dfb17c54b8a8f8968aa2770353bd5ff7cc5c8" exitCode=0
Nov 28 08:33:55 crc kubenswrapper[4946]: I1128 08:33:55.755635 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7ddvl" event={"ID":"a397e3ca-90da-41ce-be73-be07db15d66d","Type":"ContainerDied","Data":"d52d3ea77b1e8a72ac2a8a3f725dfb17c54b8a8f8968aa2770353bd5ff7cc5c8"}
Nov 28 08:33:55 crc kubenswrapper[4946]: I1128 08:33:55.880774 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Nov 28 08:33:56 crc kubenswrapper[4946]: I1128 08:33:56.006489 4946 scope.go:117] "RemoveContainer" containerID="e8119d549e123f1e930654e36405e56510a1c4898160d24fa79c41d4ac87c927"
Nov 28 08:33:56 crc kubenswrapper[4946]: E1128 08:33:56.006680 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
Nov 28 08:33:56 crc kubenswrapper[4946]: I1128 08:33:56.322872 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Nov 28 08:33:56 crc kubenswrapper[4946]: W1128 08:33:56.327144 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda5bdfa1c_6408_40f5_a9db_4b991fd2b022.slice/crio-359992c35cac74a7659167442ee45b5667a9e5f611f74207ed160ccf54772ff8 WatchSource:0}: Error finding container 359992c35cac74a7659167442ee45b5667a9e5f611f74207ed160ccf54772ff8: Status 404 returned error can't find the container with id 359992c35cac74a7659167442ee45b5667a9e5f611f74207ed160ccf54772ff8
Nov 28 08:33:56 crc kubenswrapper[4946]: I1128 08:33:56.763468 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a5bdfa1c-6408-40f5-a9db-4b991fd2b022","Type":"ContainerStarted","Data":"359992c35cac74a7659167442ee45b5667a9e5f611f74207ed160ccf54772ff8"}
Nov 28 08:33:57 crc kubenswrapper[4946]: I1128 08:33:57.772879 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7ddvl" event={"ID":"a397e3ca-90da-41ce-be73-be07db15d66d","Type":"ContainerStarted","Data":"43d052f5ea7eea09240ccd27c57371e75de279879ac994deebc32870d20139ba"}
Nov 28 08:33:57 crc kubenswrapper[4946]: I1128 08:33:57.792310 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7ddvl" podStartSLOduration=3.190433879 podStartE2EDuration="5.792269668s" podCreationTimestamp="2025-11-28 08:33:52 +0000 UTC" firstStartedPulling="2025-11-28 08:33:53.718854714 +0000 UTC m=+6088.096919825" lastFinishedPulling="2025-11-28 08:33:56.320690513 +0000 UTC m=+6090.698755614" observedRunningTime="2025-11-28 08:33:57.789597962 +0000 UTC m=+6092.167663093" watchObservedRunningTime="2025-11-28 08:33:57.792269668 +0000 UTC m=+6092.170334779"
Nov 28 08:34:02 crc kubenswrapper[4946]: I1128 08:34:02.807909 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7ddvl"
Nov 28 08:34:02 crc kubenswrapper[4946]: I1128 08:34:02.808583 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7ddvl"
Nov 28 08:34:02 crc kubenswrapper[4946]: I1128 08:34:02.855288 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7ddvl"
Nov 28 08:34:03 crc kubenswrapper[4946]: I1128 08:34:03.899376 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7ddvl"
Nov 28 08:34:04 crc kubenswrapper[4946]: I1128 08:34:04.706983 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7ddvl"]
Nov 28 08:34:05 crc kubenswrapper[4946]: I1128 08:34:05.839848 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7ddvl" podUID="a397e3ca-90da-41ce-be73-be07db15d66d" containerName="registry-server" containerID="cri-o://43d052f5ea7eea09240ccd27c57371e75de279879ac994deebc32870d20139ba" gracePeriod=2
Nov 28 08:34:06 crc kubenswrapper[4946]: I1128 08:34:06.850363 4946 generic.go:334] "Generic (PLEG): container finished" podID="a397e3ca-90da-41ce-be73-be07db15d66d" containerID="43d052f5ea7eea09240ccd27c57371e75de279879ac994deebc32870d20139ba" exitCode=0
Nov 28 08:34:06 crc kubenswrapper[4946]: I1128 08:34:06.850403 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7ddvl" event={"ID":"a397e3ca-90da-41ce-be73-be07db15d66d","Type":"ContainerDied","Data":"43d052f5ea7eea09240ccd27c57371e75de279879ac994deebc32870d20139ba"}
Nov 28 08:34:07 crc kubenswrapper[4946]: I1128 08:34:07.362974 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7ddvl"
Nov 28 08:34:07 crc kubenswrapper[4946]: I1128 08:34:07.465269 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a397e3ca-90da-41ce-be73-be07db15d66d-catalog-content\") pod \"a397e3ca-90da-41ce-be73-be07db15d66d\" (UID: \"a397e3ca-90da-41ce-be73-be07db15d66d\") "
Nov 28 08:34:07 crc kubenswrapper[4946]: I1128 08:34:07.465319 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqkdc\" (UniqueName: \"kubernetes.io/projected/a397e3ca-90da-41ce-be73-be07db15d66d-kube-api-access-fqkdc\") pod \"a397e3ca-90da-41ce-be73-be07db15d66d\" (UID: \"a397e3ca-90da-41ce-be73-be07db15d66d\") "
Nov 28 08:34:07 crc kubenswrapper[4946]: I1128 08:34:07.465491 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a397e3ca-90da-41ce-be73-be07db15d66d-utilities\") pod \"a397e3ca-90da-41ce-be73-be07db15d66d\" (UID: \"a397e3ca-90da-41ce-be73-be07db15d66d\") "
Nov 28 08:34:07 crc kubenswrapper[4946]: I1128 08:34:07.466389 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a397e3ca-90da-41ce-be73-be07db15d66d-utilities" (OuterVolumeSpecName: "utilities") pod "a397e3ca-90da-41ce-be73-be07db15d66d" (UID: "a397e3ca-90da-41ce-be73-be07db15d66d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 08:34:07 crc kubenswrapper[4946]: I1128 08:34:07.470488 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a397e3ca-90da-41ce-be73-be07db15d66d-kube-api-access-fqkdc" (OuterVolumeSpecName: "kube-api-access-fqkdc") pod "a397e3ca-90da-41ce-be73-be07db15d66d" (UID: "a397e3ca-90da-41ce-be73-be07db15d66d"). InnerVolumeSpecName "kube-api-access-fqkdc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 08:34:07 crc kubenswrapper[4946]: I1128 08:34:07.483031 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a397e3ca-90da-41ce-be73-be07db15d66d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a397e3ca-90da-41ce-be73-be07db15d66d" (UID: "a397e3ca-90da-41ce-be73-be07db15d66d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 08:34:07 crc kubenswrapper[4946]: I1128 08:34:07.566954 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a397e3ca-90da-41ce-be73-be07db15d66d-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 08:34:07 crc kubenswrapper[4946]: I1128 08:34:07.566994 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a397e3ca-90da-41ce-be73-be07db15d66d-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 08:34:07 crc kubenswrapper[4946]: I1128 08:34:07.567006 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqkdc\" (UniqueName: \"kubernetes.io/projected/a397e3ca-90da-41ce-be73-be07db15d66d-kube-api-access-fqkdc\") on node \"crc\" DevicePath \"\""
Nov 28 08:34:07 crc kubenswrapper[4946]: I1128 08:34:07.861130 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7ddvl"
Nov 28 08:34:07 crc kubenswrapper[4946]: I1128 08:34:07.861130 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7ddvl" event={"ID":"a397e3ca-90da-41ce-be73-be07db15d66d","Type":"ContainerDied","Data":"8c05c05017718cd027e4512fac38132806fdef4779e045f3ebaf315d323dcbbe"}
Nov 28 08:34:07 crc kubenswrapper[4946]: I1128 08:34:07.861344 4946 scope.go:117] "RemoveContainer" containerID="43d052f5ea7eea09240ccd27c57371e75de279879ac994deebc32870d20139ba"
Nov 28 08:34:07 crc kubenswrapper[4946]: I1128 08:34:07.864327 4946 generic.go:334] "Generic (PLEG): container finished" podID="892175b4-661d-4457-88ca-2f7c9a14a283" containerID="664f034b71db2a0022ab4abb2428a8a4a888d23dbddd865707165b239953aecc" exitCode=0
Nov 28 08:34:07 crc kubenswrapper[4946]: I1128 08:34:07.864399 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65fd8458c9-rptbj" event={"ID":"892175b4-661d-4457-88ca-2f7c9a14a283","Type":"ContainerDied","Data":"664f034b71db2a0022ab4abb2428a8a4a888d23dbddd865707165b239953aecc"}
Nov 28 08:34:07 crc kubenswrapper[4946]: I1128 08:34:07.866358 4946 generic.go:334] "Generic (PLEG): container finished" podID="2c8dc9f5-2d13-47c6-9768-7426a854b4b6" containerID="b7f59c06dc3967da8aa93a6f4b2ac7c15a327785491e2e954caeb6f70254367e" exitCode=0
Nov 28 08:34:07 crc kubenswrapper[4946]: I1128 08:34:07.866431 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fbc5b666c-wfs2h" event={"ID":"2c8dc9f5-2d13-47c6-9768-7426a854b4b6","Type":"ContainerDied","Data":"b7f59c06dc3967da8aa93a6f4b2ac7c15a327785491e2e954caeb6f70254367e"}
Nov 28 08:34:07 crc kubenswrapper[4946]: I1128 08:34:07.868567 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1a323bad-127f-44c5-8d32-9a7c53deceea","Type":"ContainerStarted","Data":"25e96507dc46fa4151a4f3f400f3bb39533ceb93d37e45939288b39abd4db04e"}
Nov 28 08:34:07 crc kubenswrapper[4946]: I1128 08:34:07.875032 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"90119a9d-5f8c-4da1-8695-89da8ff356e9","Type":"ContainerStarted","Data":"a03fc5526b80345a203821c12111291f51b5438a93d7268252a4e87cdbe29501"}
Nov 28 08:34:07 crc kubenswrapper[4946]: I1128 08:34:07.876325 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0"
Nov 28 08:34:07 crc kubenswrapper[4946]: I1128 08:34:07.881730 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a5bdfa1c-6408-40f5-a9db-4b991fd2b022","Type":"ContainerStarted","Data":"9eb5703e8a02535dffae0d4814b4b8a74f5364cccd8264ec01f6b01fb5511e9a"}
Nov 28 08:34:07 crc kubenswrapper[4946]: I1128 08:34:07.897966 4946 scope.go:117] "RemoveContainer" containerID="d52d3ea77b1e8a72ac2a8a3f725dfb17c54b8a8f8968aa2770353bd5ff7cc5c8"
Nov 28 08:34:07 crc kubenswrapper[4946]: I1128 08:34:07.965234 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.507026002 podStartE2EDuration="13.965203841s" podCreationTimestamp="2025-11-28 08:33:54 +0000 UTC" firstStartedPulling="2025-11-28 08:33:55.682787272 +0000 UTC m=+6090.060852373" lastFinishedPulling="2025-11-28 08:34:07.140965101 +0000 UTC m=+6101.519030212" observedRunningTime="2025-11-28 08:34:07.94695958 +0000 UTC m=+6102.325024711" watchObservedRunningTime="2025-11-28 08:34:07.965203841 +0000 UTC m=+6102.343268952"
Nov 28 08:34:07 crc kubenswrapper[4946]: I1128 08:34:07.991583 4946 scope.go:117] "RemoveContainer" containerID="e8119d549e123f1e930654e36405e56510a1c4898160d24fa79c41d4ac87c927"
Nov 28 08:34:07 crc kubenswrapper[4946]: E1128 08:34:07.991889 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
Nov 28 08:34:08 crc kubenswrapper[4946]: I1128 08:34:08.017317 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7ddvl"]
Nov 28 08:34:08 crc kubenswrapper[4946]: I1128 08:34:08.025573 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7ddvl"]
Nov 28 08:34:08 crc kubenswrapper[4946]: I1128 08:34:08.030956 4946 scope.go:117] "RemoveContainer" containerID="f7e1bd28bcf405d3743bb87b92786b467398aaaf620af4e9befdc6d0f382aaa7"
Nov 28 08:34:08 crc kubenswrapper[4946]: I1128 08:34:08.895252 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0b429b35-b62b-411d-91dc-eee42d2b359c","Type":"ContainerStarted","Data":"4deca1c559f35601b2dd5f315508754a269e9a55c60877a302f7d180a94eede2"}
Nov 28 08:34:08 crc kubenswrapper[4946]: I1128 08:34:08.906718 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65fd8458c9-rptbj" event={"ID":"892175b4-661d-4457-88ca-2f7c9a14a283","Type":"ContainerStarted","Data":"ebd3bf22568928d6ea621f02b27f79712d2cd1900bf33d07982c0fb68ac0db18"}
Nov 28 08:34:08 crc kubenswrapper[4946]: I1128 08:34:08.907450 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-65fd8458c9-rptbj"
Nov 28 08:34:08 crc kubenswrapper[4946]: I1128 08:34:08.909948 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2ce2d653-a79e-4b24-b214-7f0d00141c71","Type":"ContainerStarted","Data":"cb6642bfcc272592bc9a6c98d09cab434e828778935c3197563d0800ebfd6504"}
Nov 28 08:34:08 crc kubenswrapper[4946]: I1128 08:34:08.914748 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fbc5b666c-wfs2h" event={"ID":"2c8dc9f5-2d13-47c6-9768-7426a854b4b6","Type":"ContainerStarted","Data":"b79b91ee6edaf23f9e98931fccb1691bd959829632d7be38480bedd93cfb0bf1"}
Nov 28 08:34:08 crc kubenswrapper[4946]: I1128 08:34:08.914811 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5fbc5b666c-wfs2h"
Nov 28 08:34:08 crc kubenswrapper[4946]: I1128 08:34:08.998784 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5fbc5b666c-wfs2h" podStartSLOduration=2.937494263 podStartE2EDuration="17.998761341s" podCreationTimestamp="2025-11-28 08:33:51 +0000 UTC" firstStartedPulling="2025-11-28 08:33:52.17943205 +0000 UTC m=+6086.557497201" lastFinishedPulling="2025-11-28 08:34:07.240699168 +0000 UTC m=+6101.618764279" observedRunningTime="2025-11-28 08:34:08.994938597 +0000 UTC m=+6103.373003748" watchObservedRunningTime="2025-11-28 08:34:08.998761341 +0000 UTC m=+6103.376826462"
Nov 28 08:34:09 crc kubenswrapper[4946]: I1128 08:34:09.025201 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-65fd8458c9-rptbj" podStartSLOduration=3.027355897 podStartE2EDuration="18.025174845s" podCreationTimestamp="2025-11-28 08:33:51 +0000 UTC" firstStartedPulling="2025-11-28 08:33:52.174750045 +0000 UTC m=+6086.552815196" lastFinishedPulling="2025-11-28 08:34:07.172568983 +0000 UTC m=+6101.550634144" observedRunningTime="2025-11-28 08:34:09.017085265 +0000 UTC m=+6103.395150396" watchObservedRunningTime="2025-11-28 08:34:09.025174845 +0000 UTC m=+6103.403239996"
Nov 28 08:34:10 crc kubenswrapper[4946]: I1128 08:34:10.008405 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a397e3ca-90da-41ce-be73-be07db15d66d" path="/var/lib/kubelet/pods/a397e3ca-90da-41ce-be73-be07db15d66d/volumes"
Nov 28 08:34:10 crc kubenswrapper[4946]: I1128 08:34:10.938379 4946 generic.go:334] "Generic (PLEG): container finished" podID="1a323bad-127f-44c5-8d32-9a7c53deceea" containerID="25e96507dc46fa4151a4f3f400f3bb39533ceb93d37e45939288b39abd4db04e" exitCode=0
Nov 28 08:34:10 crc kubenswrapper[4946]: I1128 08:34:10.938452 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1a323bad-127f-44c5-8d32-9a7c53deceea","Type":"ContainerDied","Data":"25e96507dc46fa4151a4f3f400f3bb39533ceb93d37e45939288b39abd4db04e"}
Nov 28 08:34:10 crc kubenswrapper[4946]: I1128 08:34:10.942610 4946 generic.go:334] "Generic (PLEG): container finished" podID="a5bdfa1c-6408-40f5-a9db-4b991fd2b022" containerID="9eb5703e8a02535dffae0d4814b4b8a74f5364cccd8264ec01f6b01fb5511e9a" exitCode=0
Nov 28 08:34:10 crc kubenswrapper[4946]: I1128 08:34:10.942722 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a5bdfa1c-6408-40f5-a9db-4b991fd2b022","Type":"ContainerDied","Data":"9eb5703e8a02535dffae0d4814b4b8a74f5364cccd8264ec01f6b01fb5511e9a"}
Nov 28 08:34:11 crc kubenswrapper[4946]: I1128 08:34:11.959673 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1a323bad-127f-44c5-8d32-9a7c53deceea","Type":"ContainerStarted","Data":"3334956e2a450b5ac32da1b7508cd10bec8030ecfad2f7c45c04bc413d6d93b4"}
Nov 28 08:34:11 crc kubenswrapper[4946]: I1128 08:34:11.965283 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a5bdfa1c-6408-40f5-a9db-4b991fd2b022","Type":"ContainerStarted","Data":"def6c83fc353fff19fe1bc975b2e4ee22bd5987cf1a0a5903386e12a4da142dd"}
Nov 28 08:34:12 crc kubenswrapper[4946]: I1128 08:34:12.010739 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=7.47711208 podStartE2EDuration="20.010707575s" podCreationTimestamp="2025-11-28 08:33:52 +0000 UTC" firstStartedPulling="2025-11-28 08:33:54.608659428 +0000 UTC m=+6088.986724539" lastFinishedPulling="2025-11-28 08:34:07.142254923 +0000 UTC m=+6101.520320034" observedRunningTime="2025-11-28 08:34:12.003644711 +0000 UTC m=+6106.381709892" watchObservedRunningTime="2025-11-28 08:34:12.010707575 +0000 UTC m=+6106.388772736"
Nov 28 08:34:12 crc kubenswrapper[4946]: I1128 08:34:12.030089 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=7.191478883 podStartE2EDuration="18.030068004s" podCreationTimestamp="2025-11-28 08:33:54 +0000 UTC" firstStartedPulling="2025-11-28 08:33:56.330387983 +0000 UTC m=+6090.708453104" lastFinishedPulling="2025-11-28 08:34:07.168977114 +0000 UTC m=+6101.547042225" observedRunningTime="2025-11-28 08:34:12.022435655 +0000 UTC m=+6106.400500806" watchObservedRunningTime="2025-11-28 08:34:12.030068004 +0000 UTC m=+6106.408133135"
Nov 28 08:34:14 crc kubenswrapper[4946]: I1128 08:34:14.083670 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Nov 28 08:34:14 crc kubenswrapper[4946]: I1128 08:34:14.084163 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Nov 28 08:34:14 crc kubenswrapper[4946]: E1128 08:34:14.101131 4946 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.2:43506->38.102.83.2:34561: write tcp 38.102.83.2:43506->38.102.83.2:34561: write: broken pipe
Nov 28 08:34:14 crc kubenswrapper[4946]: I1128 08:34:14.393413 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Nov 28 08:34:15 crc kubenswrapper[4946]: I1128 08:34:15.881705 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Nov 28 08:34:15 crc kubenswrapper[4946]: I1128 08:34:15.882037 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Nov 28 08:34:15 crc kubenswrapper[4946]: I1128 08:34:15.962066 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Nov 28 08:34:16 crc kubenswrapper[4946]: I1128 08:34:16.080598 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Nov 28 08:34:16 crc kubenswrapper[4946]: I1128 08:34:16.691771 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-65fd8458c9-rptbj"
Nov 28 08:34:16 crc kubenswrapper[4946]: I1128 08:34:16.715760 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5fbc5b666c-wfs2h"
Nov 28 08:34:16 crc kubenswrapper[4946]: I1128 08:34:16.780253 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fbc5b666c-wfs2h"]
Nov 28 08:34:17 crc kubenswrapper[4946]: I1128 08:34:17.011967 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5fbc5b666c-wfs2h" podUID="2c8dc9f5-2d13-47c6-9768-7426a854b4b6" containerName="dnsmasq-dns" containerID="cri-o://b79b91ee6edaf23f9e98931fccb1691bd959829632d7be38480bedd93cfb0bf1" gracePeriod=10
Nov 28 08:34:17 crc kubenswrapper[4946]: I1128 08:34:17.473105 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fbc5b666c-wfs2h"
Nov 28 08:34:17 crc kubenswrapper[4946]: I1128 08:34:17.538331 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c8dc9f5-2d13-47c6-9768-7426a854b4b6-dns-svc\") pod \"2c8dc9f5-2d13-47c6-9768-7426a854b4b6\" (UID: \"2c8dc9f5-2d13-47c6-9768-7426a854b4b6\") "
Nov 28 08:34:17 crc kubenswrapper[4946]: I1128 08:34:17.538378 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5lgv\" (UniqueName: \"kubernetes.io/projected/2c8dc9f5-2d13-47c6-9768-7426a854b4b6-kube-api-access-j5lgv\") pod \"2c8dc9f5-2d13-47c6-9768-7426a854b4b6\" (UID: \"2c8dc9f5-2d13-47c6-9768-7426a854b4b6\") "
Nov 28 08:34:17 crc kubenswrapper[4946]: I1128 08:34:17.538406 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c8dc9f5-2d13-47c6-9768-7426a854b4b6-config\") pod \"2c8dc9f5-2d13-47c6-9768-7426a854b4b6\" (UID: \"2c8dc9f5-2d13-47c6-9768-7426a854b4b6\") "
Nov 28 08:34:17 crc kubenswrapper[4946]: I1128 08:34:17.548783 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c8dc9f5-2d13-47c6-9768-7426a854b4b6-kube-api-access-j5lgv" (OuterVolumeSpecName: "kube-api-access-j5lgv") pod "2c8dc9f5-2d13-47c6-9768-7426a854b4b6" (UID: "2c8dc9f5-2d13-47c6-9768-7426a854b4b6"). InnerVolumeSpecName "kube-api-access-j5lgv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 08:34:17 crc kubenswrapper[4946]: I1128 08:34:17.586721 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c8dc9f5-2d13-47c6-9768-7426a854b4b6-config" (OuterVolumeSpecName: "config") pod "2c8dc9f5-2d13-47c6-9768-7426a854b4b6" (UID: "2c8dc9f5-2d13-47c6-9768-7426a854b4b6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 08:34:17 crc kubenswrapper[4946]: I1128 08:34:17.590138 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c8dc9f5-2d13-47c6-9768-7426a854b4b6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2c8dc9f5-2d13-47c6-9768-7426a854b4b6" (UID: "2c8dc9f5-2d13-47c6-9768-7426a854b4b6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 08:34:17 crc kubenswrapper[4946]: I1128 08:34:17.640066 4946 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c8dc9f5-2d13-47c6-9768-7426a854b4b6-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 28 08:34:17 crc kubenswrapper[4946]: I1128 08:34:17.640104 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5lgv\" (UniqueName: \"kubernetes.io/projected/2c8dc9f5-2d13-47c6-9768-7426a854b4b6-kube-api-access-j5lgv\") on node \"crc\" DevicePath \"\""
Nov 28 08:34:17 crc kubenswrapper[4946]: I1128 08:34:17.640117 4946 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c8dc9f5-2d13-47c6-9768-7426a854b4b6-config\") on node \"crc\" DevicePath \"\""
Nov 28 08:34:18 crc kubenswrapper[4946]: I1128 08:34:18.024845 4946 generic.go:334] "Generic (PLEG): container finished" podID="2c8dc9f5-2d13-47c6-9768-7426a854b4b6" containerID="b79b91ee6edaf23f9e98931fccb1691bd959829632d7be38480bedd93cfb0bf1" exitCode=0
Nov 28 08:34:18 crc kubenswrapper[4946]: I1128 08:34:18.025652 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fbc5b666c-wfs2h"
Nov 28 08:34:18 crc kubenswrapper[4946]: I1128 08:34:18.025714 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fbc5b666c-wfs2h" event={"ID":"2c8dc9f5-2d13-47c6-9768-7426a854b4b6","Type":"ContainerDied","Data":"b79b91ee6edaf23f9e98931fccb1691bd959829632d7be38480bedd93cfb0bf1"}
Nov 28 08:34:18 crc kubenswrapper[4946]: I1128 08:34:18.026502 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fbc5b666c-wfs2h" event={"ID":"2c8dc9f5-2d13-47c6-9768-7426a854b4b6","Type":"ContainerDied","Data":"c8cdc5b60261567d3b59f15661553a6767b15fad6873e88951cfd2f99cf57dca"}
Nov 28 08:34:18 crc kubenswrapper[4946]: I1128 08:34:18.026542 4946 scope.go:117] "RemoveContainer" containerID="b79b91ee6edaf23f9e98931fccb1691bd959829632d7be38480bedd93cfb0bf1"
Nov 28 08:34:18 crc kubenswrapper[4946]: I1128 08:34:18.065947 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fbc5b666c-wfs2h"]
Nov 28 08:34:18 crc kubenswrapper[4946]: I1128 08:34:18.072927 4946 scope.go:117] "RemoveContainer" containerID="b7f59c06dc3967da8aa93a6f4b2ac7c15a327785491e2e954caeb6f70254367e"
Nov 28 08:34:18 crc kubenswrapper[4946]: I1128 08:34:18.096738 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5fbc5b666c-wfs2h"]
Nov 28 08:34:18 crc kubenswrapper[4946]: I1128 08:34:18.107364 4946 scope.go:117] "RemoveContainer" containerID="b79b91ee6edaf23f9e98931fccb1691bd959829632d7be38480bedd93cfb0bf1"
Nov 28 08:34:18 crc kubenswrapper[4946]: E1128 08:34:18.108647 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b79b91ee6edaf23f9e98931fccb1691bd959829632d7be38480bedd93cfb0bf1\": container with ID starting with b79b91ee6edaf23f9e98931fccb1691bd959829632d7be38480bedd93cfb0bf1 not found: ID does not exist" containerID="b79b91ee6edaf23f9e98931fccb1691bd959829632d7be38480bedd93cfb0bf1"
Nov 28 08:34:18 crc kubenswrapper[4946]: I1128 08:34:18.108704 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b79b91ee6edaf23f9e98931fccb1691bd959829632d7be38480bedd93cfb0bf1"} err="failed to get container status \"b79b91ee6edaf23f9e98931fccb1691bd959829632d7be38480bedd93cfb0bf1\": rpc error: code = NotFound desc = could not find container \"b79b91ee6edaf23f9e98931fccb1691bd959829632d7be38480bedd93cfb0bf1\": container with ID starting with b79b91ee6edaf23f9e98931fccb1691bd959829632d7be38480bedd93cfb0bf1 not found: ID does not exist"
Nov 28 08:34:18 crc kubenswrapper[4946]: I1128 08:34:18.108736 4946 scope.go:117] "RemoveContainer" containerID="b7f59c06dc3967da8aa93a6f4b2ac7c15a327785491e2e954caeb6f70254367e"
Nov 28 08:34:18 crc kubenswrapper[4946]: E1128 08:34:18.109443 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7f59c06dc3967da8aa93a6f4b2ac7c15a327785491e2e954caeb6f70254367e\": container with ID starting with b7f59c06dc3967da8aa93a6f4b2ac7c15a327785491e2e954caeb6f70254367e not found: ID does not exist" containerID="b7f59c06dc3967da8aa93a6f4b2ac7c15a327785491e2e954caeb6f70254367e"
Nov 28 08:34:18 crc kubenswrapper[4946]: I1128 08:34:18.109493 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7f59c06dc3967da8aa93a6f4b2ac7c15a327785491e2e954caeb6f70254367e"} err="failed to get container status \"b7f59c06dc3967da8aa93a6f4b2ac7c15a327785491e2e954caeb6f70254367e\": rpc error: code = NotFound desc = could not find container \"b7f59c06dc3967da8aa93a6f4b2ac7c15a327785491e2e954caeb6f70254367e\": container with ID starting with b7f59c06dc3967da8aa93a6f4b2ac7c15a327785491e2e954caeb6f70254367e not found: ID does not exist"
Nov 28 08:34:18 crc kubenswrapper[4946]: I1128 08:34:18.196838 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Nov 28 08:34:18 crc kubenswrapper[4946]: I1128 08:34:18.310697 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Nov 28 08:34:20 crc kubenswrapper[4946]: I1128 08:34:20.002558 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c8dc9f5-2d13-47c6-9768-7426a854b4b6" path="/var/lib/kubelet/pods/2c8dc9f5-2d13-47c6-9768-7426a854b4b6/volumes"
Nov 28 08:34:22 crc kubenswrapper[4946]: I1128 08:34:22.990311 4946 scope.go:117] "RemoveContainer" containerID="e8119d549e123f1e930654e36405e56510a1c4898160d24fa79c41d4ac87c927"
Nov 28 08:34:22 crc kubenswrapper[4946]: E1128 08:34:22.990862 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
Nov 28 08:34:36 crc kubenswrapper[4946]: I1128 08:34:36.991211 4946 scope.go:117] "RemoveContainer" containerID="e8119d549e123f1e930654e36405e56510a1c4898160d24fa79c41d4ac87c927"
Nov 28 08:34:36 crc kubenswrapper[4946]: E1128 08:34:36.992276 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
Nov 28 08:34:42 crc kubenswrapper[4946]: I1128 08:34:42.280439 4946 generic.go:334] "Generic (PLEG): container finished" podID="0b429b35-b62b-411d-91dc-eee42d2b359c" containerID="4deca1c559f35601b2dd5f315508754a269e9a55c60877a302f7d180a94eede2" exitCode=0
Nov 28 08:34:42 crc kubenswrapper[4946]: I1128 08:34:42.280518 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0b429b35-b62b-411d-91dc-eee42d2b359c","Type":"ContainerDied","Data":"4deca1c559f35601b2dd5f315508754a269e9a55c60877a302f7d180a94eede2"}
Nov 28 08:34:42 crc kubenswrapper[4946]: I1128 08:34:42.281856 4946 generic.go:334] "Generic (PLEG): container finished" podID="2ce2d653-a79e-4b24-b214-7f0d00141c71" containerID="cb6642bfcc272592bc9a6c98d09cab434e828778935c3197563d0800ebfd6504" exitCode=0
Nov 28 08:34:42 crc kubenswrapper[4946]: I1128 08:34:42.281895 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2ce2d653-a79e-4b24-b214-7f0d00141c71","Type":"ContainerDied","Data":"cb6642bfcc272592bc9a6c98d09cab434e828778935c3197563d0800ebfd6504"}
Nov 28 08:34:43 crc kubenswrapper[4946]: I1128 08:34:43.290375 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0b429b35-b62b-411d-91dc-eee42d2b359c","Type":"ContainerStarted","Data":"d0e46705e6cb42b2d00e978c77e7f622717aa445c6521ff8bf70a66744f0a877"}
Nov 28 08:34:43 crc kubenswrapper[4946]: I1128 08:34:43.291287 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Nov 28 08:34:43 crc kubenswrapper[4946]: I1128 08:34:43.293048 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2ce2d653-a79e-4b24-b214-7f0d00141c71","Type":"ContainerStarted","Data":"57e6a7ead57a6afccbb6807b7a34f8902a65393a5e43318c66cb482ef8c62102"}
Nov 28 08:34:43 crc kubenswrapper[4946]: I1128 08:34:43.293297 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Nov 28 08:34:43 crc kubenswrapper[4946]: I1128 08:34:43.315187 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.363188387 podStartE2EDuration="52.315162692s" podCreationTimestamp="2025-11-28 08:33:51 +0000 UTC" firstStartedPulling="2025-11-28 08:33:53.15276293 +0000 UTC m=+6087.530828041" lastFinishedPulling="2025-11-28 08:34:07.104737195 +0000 UTC m=+6101.482802346" observedRunningTime="2025-11-28 08:34:43.313806209 +0000 UTC m=+6137.691871360" watchObservedRunningTime="2025-11-28 08:34:43.315162692 +0000 UTC m=+6137.693227843"
Nov 28 08:34:43 crc kubenswrapper[4946]: I1128 08:34:43.336759 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.620172545 podStartE2EDuration="52.336732826s" podCreationTimestamp="2025-11-28 08:33:51 +0000 UTC" firstStartedPulling="2025-11-28 08:33:53.423274712 +0000 UTC m=+6087.801339823" lastFinishedPulling="2025-11-28 08:34:07.139834943 +0000 UTC m=+6101.517900104" observedRunningTime="2025-11-28 08:34:43.336616013 +0000 UTC m=+6137.714681134" watchObservedRunningTime="2025-11-28 08:34:43.336732826 +0000 UTC m=+6137.714797977"
Nov 28 08:34:50 crc kubenswrapper[4946]: I1128 08:34:50.990769 4946 scope.go:117] "RemoveContainer" containerID="e8119d549e123f1e930654e36405e56510a1c4898160d24fa79c41d4ac87c927"
Nov 28 08:34:50 crc kubenswrapper[4946]: E1128 08:34:50.991871 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
Nov 28 08:34:52 crc kubenswrapper[4946]: I1128 08:34:52.586759 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Nov 28 08:34:52 crc kubenswrapper[4946]: I1128 08:34:52.908692 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Nov 28 08:35:00 crc kubenswrapper[4946]: I1128 08:35:00.392727 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5dfd7bdf7-rpvww"]
Nov 28 08:35:00 crc kubenswrapper[4946]: E1128 08:35:00.394670 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a397e3ca-90da-41ce-be73-be07db15d66d" containerName="registry-server"
Nov 28 08:35:00 crc kubenswrapper[4946]: I1128 08:35:00.394690 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="a397e3ca-90da-41ce-be73-be07db15d66d" containerName="registry-server"
Nov 28 08:35:00 crc kubenswrapper[4946]: E1128 08:35:00.394713 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c8dc9f5-2d13-47c6-9768-7426a854b4b6" containerName="init"
Nov 28 08:35:00 crc kubenswrapper[4946]: I1128 08:35:00.394720 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c8dc9f5-2d13-47c6-9768-7426a854b4b6" containerName="init"
Nov 28 08:35:00 crc kubenswrapper[4946]: E1128 08:35:00.394733 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a397e3ca-90da-41ce-be73-be07db15d66d" containerName="extract-utilities"
Nov 28 08:35:00 crc kubenswrapper[4946]: I1128 08:35:00.394740 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="a397e3ca-90da-41ce-be73-be07db15d66d" containerName="extract-utilities"
Nov 28 08:35:00 crc kubenswrapper[4946]: E1128 08:35:00.394757 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a397e3ca-90da-41ce-be73-be07db15d66d" containerName="extract-content"
Nov 28 08:35:00 crc kubenswrapper[4946]: I1128 08:35:00.394764 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="a397e3ca-90da-41ce-be73-be07db15d66d" containerName="extract-content"
Nov 28 08:35:00 crc kubenswrapper[4946]: E1128 08:35:00.394780 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c8dc9f5-2d13-47c6-9768-7426a854b4b6" containerName="dnsmasq-dns"
Nov 28 08:35:00 crc kubenswrapper[4946]: I1128 08:35:00.394787 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c8dc9f5-2d13-47c6-9768-7426a854b4b6" containerName="dnsmasq-dns"
Nov 28 08:35:00 crc kubenswrapper[4946]: I1128 08:35:00.394954 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="a397e3ca-90da-41ce-be73-be07db15d66d" containerName="registry-server"
Nov 28 08:35:00 crc kubenswrapper[4946]: I1128 08:35:00.394971 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c8dc9f5-2d13-47c6-9768-7426a854b4b6" containerName="dnsmasq-dns"
Nov 28 08:35:00 crc kubenswrapper[4946]: I1128 08:35:00.396624 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dfd7bdf7-rpvww"
Nov 28 08:35:00 crc kubenswrapper[4946]: I1128 08:35:00.449920 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5dfd7bdf7-rpvww"]
Nov 28 08:35:00 crc kubenswrapper[4946]: I1128 08:35:00.544505 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75b5d08b-c397-4583-9f57-ec8dbed322b5-config\") pod \"dnsmasq-dns-5dfd7bdf7-rpvww\" (UID: \"75b5d08b-c397-4583-9f57-ec8dbed322b5\") " pod="openstack/dnsmasq-dns-5dfd7bdf7-rpvww"
Nov 28 08:35:00 crc kubenswrapper[4946]: I1128 08:35:00.544603 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75b5d08b-c397-4583-9f57-ec8dbed322b5-dns-svc\") pod \"dnsmasq-dns-5dfd7bdf7-rpvww\" (UID: \"75b5d08b-c397-4583-9f57-ec8dbed322b5\") " pod="openstack/dnsmasq-dns-5dfd7bdf7-rpvww"
Nov 28 08:35:00 crc kubenswrapper[4946]: I1128 08:35:00.544662 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6l26\" (UniqueName: \"kubernetes.io/projected/75b5d08b-c397-4583-9f57-ec8dbed322b5-kube-api-access-t6l26\") pod \"dnsmasq-dns-5dfd7bdf7-rpvww\" (UID: \"75b5d08b-c397-4583-9f57-ec8dbed322b5\") " pod="openstack/dnsmasq-dns-5dfd7bdf7-rpvww"
Nov 28 08:35:00 crc kubenswrapper[4946]: I1128 08:35:00.646605 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75b5d08b-c397-4583-9f57-ec8dbed322b5-config\") pod \"dnsmasq-dns-5dfd7bdf7-rpvww\" (UID: \"75b5d08b-c397-4583-9f57-ec8dbed322b5\") " pod="openstack/dnsmasq-dns-5dfd7bdf7-rpvww"
Nov 28 08:35:00 crc kubenswrapper[4946]: I1128 08:35:00.646758 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75b5d08b-c397-4583-9f57-ec8dbed322b5-dns-svc\") pod \"dnsmasq-dns-5dfd7bdf7-rpvww\" (UID: \"75b5d08b-c397-4583-9f57-ec8dbed322b5\") " pod="openstack/dnsmasq-dns-5dfd7bdf7-rpvww"
Nov 28 08:35:00 crc kubenswrapper[4946]: I1128 08:35:00.646849 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6l26\" (UniqueName: \"kubernetes.io/projected/75b5d08b-c397-4583-9f57-ec8dbed322b5-kube-api-access-t6l26\") pod \"dnsmasq-dns-5dfd7bdf7-rpvww\" (UID: \"75b5d08b-c397-4583-9f57-ec8dbed322b5\") " pod="openstack/dnsmasq-dns-5dfd7bdf7-rpvww"
Nov 28 08:35:00 crc kubenswrapper[4946]: I1128 08:35:00.648365 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75b5d08b-c397-4583-9f57-ec8dbed322b5-dns-svc\") pod \"dnsmasq-dns-5dfd7bdf7-rpvww\" (UID: \"75b5d08b-c397-4583-9f57-ec8dbed322b5\") " pod="openstack/dnsmasq-dns-5dfd7bdf7-rpvww"
Nov 28 08:35:00 crc kubenswrapper[4946]: I1128 08:35:00.648665 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75b5d08b-c397-4583-9f57-ec8dbed322b5-config\") pod \"dnsmasq-dns-5dfd7bdf7-rpvww\" (UID: \"75b5d08b-c397-4583-9f57-ec8dbed322b5\") " pod="openstack/dnsmasq-dns-5dfd7bdf7-rpvww"
Nov 28 08:35:00 crc kubenswrapper[4946]: I1128 08:35:00.671933 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6l26\" (UniqueName: \"kubernetes.io/projected/75b5d08b-c397-4583-9f57-ec8dbed322b5-kube-api-access-t6l26\") pod \"dnsmasq-dns-5dfd7bdf7-rpvww\" (UID: \"75b5d08b-c397-4583-9f57-ec8dbed322b5\") " pod="openstack/dnsmasq-dns-5dfd7bdf7-rpvww"
Nov 28 08:35:00 crc kubenswrapper[4946]: I1128 08:35:00.758062 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dfd7bdf7-rpvww"
Nov 28 08:35:01 crc kubenswrapper[4946]: I1128 08:35:01.174562 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 28 08:35:01 crc kubenswrapper[4946]: I1128 08:35:01.222533 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5dfd7bdf7-rpvww"]
Nov 28 08:35:01 crc kubenswrapper[4946]: W1128 08:35:01.233385 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75b5d08b_c397_4583_9f57_ec8dbed322b5.slice/crio-beb816e0561bc0fbec56d5bd651f07a8fc5a963ee972adcf4e0837c5dcbc8a1e WatchSource:0}: Error finding container beb816e0561bc0fbec56d5bd651f07a8fc5a963ee972adcf4e0837c5dcbc8a1e: Status 404 returned error can't find the container with id beb816e0561bc0fbec56d5bd651f07a8fc5a963ee972adcf4e0837c5dcbc8a1e
Nov 28 08:35:01 crc kubenswrapper[4946]: I1128 08:35:01.459441 4946 generic.go:334] "Generic (PLEG): container finished" podID="75b5d08b-c397-4583-9f57-ec8dbed322b5" containerID="d6014fcb139895e27f64033039aa5975fdb993e7a8a6a157d02329870cb77e6f" exitCode=0
Nov 28 08:35:01 crc kubenswrapper[4946]: I1128 08:35:01.459524 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dfd7bdf7-rpvww" event={"ID":"75b5d08b-c397-4583-9f57-ec8dbed322b5","Type":"ContainerDied","Data":"d6014fcb139895e27f64033039aa5975fdb993e7a8a6a157d02329870cb77e6f"}
Nov 28 08:35:01 crc kubenswrapper[4946]: I1128 08:35:01.461432 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dfd7bdf7-rpvww" event={"ID":"75b5d08b-c397-4583-9f57-ec8dbed322b5","Type":"ContainerStarted","Data":"beb816e0561bc0fbec56d5bd651f07a8fc5a963ee972adcf4e0837c5dcbc8a1e"}
Nov 28 08:35:01 crc kubenswrapper[4946]: I1128 08:35:01.593942 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 28 08:35:01 crc kubenswrapper[4946]: I1128 08:35:01.989878 4946 scope.go:117] "RemoveContainer" containerID="e8119d549e123f1e930654e36405e56510a1c4898160d24fa79c41d4ac87c927"
Nov 28 08:35:01 crc kubenswrapper[4946]: E1128 08:35:01.990673 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
Nov 28 08:35:02 crc kubenswrapper[4946]: I1128 08:35:02.469014 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dfd7bdf7-rpvww" event={"ID":"75b5d08b-c397-4583-9f57-ec8dbed322b5","Type":"ContainerStarted","Data":"9eba2fca6051a5e6114d24268ba5bc6d5b0e2fc3091a75114ad0f1f126ef8281"}
Nov 28 08:35:02 crc kubenswrapper[4946]: I1128 08:35:02.469509 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5dfd7bdf7-rpvww"
Nov 28 08:35:02 crc kubenswrapper[4946]: I1128 08:35:02.486888 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5dfd7bdf7-rpvww" podStartSLOduration=2.486870112 podStartE2EDuration="2.486870112s" podCreationTimestamp="2025-11-28 08:35:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 08:35:02.486706978 +0000 UTC m=+6156.864772089" watchObservedRunningTime="2025-11-28 08:35:02.486870112 +0000 UTC m=+6156.864935223"
Nov 28 08:35:02 crc kubenswrapper[4946]: I1128 08:35:02.961940 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="0b429b35-b62b-411d-91dc-eee42d2b359c" containerName="rabbitmq" containerID="cri-o://d0e46705e6cb42b2d00e978c77e7f622717aa445c6521ff8bf70a66744f0a877" gracePeriod=604799
Nov 28 08:35:03 crc kubenswrapper[4946]: I1128 08:35:03.409862 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="2ce2d653-a79e-4b24-b214-7f0d00141c71" containerName="rabbitmq" containerID="cri-o://57e6a7ead57a6afccbb6807b7a34f8902a65393a5e43318c66cb482ef8c62102" gracePeriod=604799
Nov 28 08:35:09 crc kubenswrapper[4946]: I1128 08:35:09.534856 4946 generic.go:334] "Generic (PLEG): container finished" podID="0b429b35-b62b-411d-91dc-eee42d2b359c" containerID="d0e46705e6cb42b2d00e978c77e7f622717aa445c6521ff8bf70a66744f0a877" exitCode=0
Nov 28 08:35:09 crc kubenswrapper[4946]: I1128 08:35:09.535293 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0b429b35-b62b-411d-91dc-eee42d2b359c","Type":"ContainerDied","Data":"d0e46705e6cb42b2d00e978c77e7f622717aa445c6521ff8bf70a66744f0a877"}
Nov 28 08:35:09 crc kubenswrapper[4946]: I1128 08:35:09.696674 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Nov 28 08:35:09 crc kubenswrapper[4946]: I1128 08:35:09.900046 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0b429b35-b62b-411d-91dc-eee42d2b359c-rabbitmq-erlang-cookie\") pod \"0b429b35-b62b-411d-91dc-eee42d2b359c\" (UID: \"0b429b35-b62b-411d-91dc-eee42d2b359c\") "
Nov 28 08:35:09 crc kubenswrapper[4946]: I1128 08:35:09.900589 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-266e7d83-22dd-417b-bdf7-7c3e59d7a5ab\") pod \"0b429b35-b62b-411d-91dc-eee42d2b359c\" (UID: \"0b429b35-b62b-411d-91dc-eee42d2b359c\") "
Nov 28 08:35:09 crc kubenswrapper[4946]: I1128 08:35:09.900673 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b429b35-b62b-411d-91dc-eee42d2b359c-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "0b429b35-b62b-411d-91dc-eee42d2b359c" (UID: "0b429b35-b62b-411d-91dc-eee42d2b359c"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 08:35:09 crc kubenswrapper[4946]: I1128 08:35:09.900727 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0b429b35-b62b-411d-91dc-eee42d2b359c-server-conf\") pod \"0b429b35-b62b-411d-91dc-eee42d2b359c\" (UID: \"0b429b35-b62b-411d-91dc-eee42d2b359c\") "
Nov 28 08:35:09 crc kubenswrapper[4946]: I1128 08:35:09.900767 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klv8q\" (UniqueName: \"kubernetes.io/projected/0b429b35-b62b-411d-91dc-eee42d2b359c-kube-api-access-klv8q\") pod \"0b429b35-b62b-411d-91dc-eee42d2b359c\" (UID: \"0b429b35-b62b-411d-91dc-eee42d2b359c\") "
Nov 28 08:35:09 crc kubenswrapper[4946]: I1128 08:35:09.900806 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0b429b35-b62b-411d-91dc-eee42d2b359c-rabbitmq-confd\") pod \"0b429b35-b62b-411d-91dc-eee42d2b359c\" (UID: \"0b429b35-b62b-411d-91dc-eee42d2b359c\") "
Nov 28 08:35:09 crc kubenswrapper[4946]: I1128 08:35:09.900843 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0b429b35-b62b-411d-91dc-eee42d2b359c-plugins-conf\") pod \"0b429b35-b62b-411d-91dc-eee42d2b359c\" (UID: \"0b429b35-b62b-411d-91dc-eee42d2b359c\") "
Nov 28 08:35:09 crc kubenswrapper[4946]: I1128 08:35:09.900922 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0b429b35-b62b-411d-91dc-eee42d2b359c-pod-info\") pod \"0b429b35-b62b-411d-91dc-eee42d2b359c\" (UID: \"0b429b35-b62b-411d-91dc-eee42d2b359c\") "
Nov 28 08:35:09 crc kubenswrapper[4946]: I1128 08:35:09.901009 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0b429b35-b62b-411d-91dc-eee42d2b359c-rabbitmq-plugins\") pod \"0b429b35-b62b-411d-91dc-eee42d2b359c\" (UID: \"0b429b35-b62b-411d-91dc-eee42d2b359c\") "
Nov 28 08:35:09 crc kubenswrapper[4946]: I1128 08:35:09.901195 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0b429b35-b62b-411d-91dc-eee42d2b359c-erlang-cookie-secret\") pod \"0b429b35-b62b-411d-91dc-eee42d2b359c\" (UID: \"0b429b35-b62b-411d-91dc-eee42d2b359c\") "
Nov 28 08:35:09 crc kubenswrapper[4946]: I1128 08:35:09.901805 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b429b35-b62b-411d-91dc-eee42d2b359c-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "0b429b35-b62b-411d-91dc-eee42d2b359c" (UID: "0b429b35-b62b-411d-91dc-eee42d2b359c"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 08:35:09 crc kubenswrapper[4946]: I1128 08:35:09.901943 4946 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0b429b35-b62b-411d-91dc-eee42d2b359c-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Nov 28 08:35:09 crc kubenswrapper[4946]: I1128 08:35:09.901971 4946 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0b429b35-b62b-411d-91dc-eee42d2b359c-plugins-conf\") on node \"crc\" DevicePath \"\""
Nov 28 08:35:09 crc kubenswrapper[4946]: I1128 08:35:09.902391 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b429b35-b62b-411d-91dc-eee42d2b359c-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "0b429b35-b62b-411d-91dc-eee42d2b359c" (UID: "0b429b35-b62b-411d-91dc-eee42d2b359c"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 08:35:09 crc kubenswrapper[4946]: I1128 08:35:09.907709 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b429b35-b62b-411d-91dc-eee42d2b359c-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "0b429b35-b62b-411d-91dc-eee42d2b359c" (UID: "0b429b35-b62b-411d-91dc-eee42d2b359c"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 08:35:09 crc kubenswrapper[4946]: I1128 08:35:09.909744 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b429b35-b62b-411d-91dc-eee42d2b359c-kube-api-access-klv8q" (OuterVolumeSpecName: "kube-api-access-klv8q") pod "0b429b35-b62b-411d-91dc-eee42d2b359c" (UID: "0b429b35-b62b-411d-91dc-eee42d2b359c"). InnerVolumeSpecName "kube-api-access-klv8q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 08:35:09 crc kubenswrapper[4946]: I1128 08:35:09.914622 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/0b429b35-b62b-411d-91dc-eee42d2b359c-pod-info" (OuterVolumeSpecName: "pod-info") pod "0b429b35-b62b-411d-91dc-eee42d2b359c" (UID: "0b429b35-b62b-411d-91dc-eee42d2b359c"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Nov 28 08:35:09 crc kubenswrapper[4946]: I1128 08:35:09.924367 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b429b35-b62b-411d-91dc-eee42d2b359c-server-conf" (OuterVolumeSpecName: "server-conf") pod "0b429b35-b62b-411d-91dc-eee42d2b359c" (UID: "0b429b35-b62b-411d-91dc-eee42d2b359c"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 08:35:09 crc kubenswrapper[4946]: I1128 08:35:09.936649 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-266e7d83-22dd-417b-bdf7-7c3e59d7a5ab" (OuterVolumeSpecName: "persistence") pod "0b429b35-b62b-411d-91dc-eee42d2b359c" (UID: "0b429b35-b62b-411d-91dc-eee42d2b359c"). InnerVolumeSpecName "pvc-266e7d83-22dd-417b-bdf7-7c3e59d7a5ab".
PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 28 08:35:09 crc kubenswrapper[4946]: I1128 08:35:09.997752 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b429b35-b62b-411d-91dc-eee42d2b359c-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "0b429b35-b62b-411d-91dc-eee42d2b359c" (UID: "0b429b35-b62b-411d-91dc-eee42d2b359c"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.001956 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.003631 4946 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0b429b35-b62b-411d-91dc-eee42d2b359c-pod-info\") on node \"crc\" DevicePath \"\"" Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.004756 4946 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0b429b35-b62b-411d-91dc-eee42d2b359c-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.004782 4946 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0b429b35-b62b-411d-91dc-eee42d2b359c-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.005028 4946 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-266e7d83-22dd-417b-bdf7-7c3e59d7a5ab\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-266e7d83-22dd-417b-bdf7-7c3e59d7a5ab\") on node \"crc\" " Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.005045 4946 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0b429b35-b62b-411d-91dc-eee42d2b359c-server-conf\") on node \"crc\" DevicePath \"\"" Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.005059 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-klv8q\" (UniqueName: \"kubernetes.io/projected/0b429b35-b62b-411d-91dc-eee42d2b359c-kube-api-access-klv8q\") on node \"crc\" DevicePath \"\"" Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.005070 4946 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0b429b35-b62b-411d-91dc-eee42d2b359c-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.021577 4946 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.022111 4946 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-266e7d83-22dd-417b-bdf7-7c3e59d7a5ab" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-266e7d83-22dd-417b-bdf7-7c3e59d7a5ab") on node "crc"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.106126 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2ce2d653-a79e-4b24-b214-7f0d00141c71-plugins-conf\") pod \"2ce2d653-a79e-4b24-b214-7f0d00141c71\" (UID: \"2ce2d653-a79e-4b24-b214-7f0d00141c71\") "
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.106195 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2ce2d653-a79e-4b24-b214-7f0d00141c71-rabbitmq-plugins\") pod \"2ce2d653-a79e-4b24-b214-7f0d00141c71\" (UID: \"2ce2d653-a79e-4b24-b214-7f0d00141c71\") "
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.106243 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgcg4\" (UniqueName: \"kubernetes.io/projected/2ce2d653-a79e-4b24-b214-7f0d00141c71-kube-api-access-rgcg4\") pod \"2ce2d653-a79e-4b24-b214-7f0d00141c71\" (UID: \"2ce2d653-a79e-4b24-b214-7f0d00141c71\") "
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.106276 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2ce2d653-a79e-4b24-b214-7f0d00141c71-erlang-cookie-secret\") pod \"2ce2d653-a79e-4b24-b214-7f0d00141c71\" (UID: \"2ce2d653-a79e-4b24-b214-7f0d00141c71\") "
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.106367 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-91a67a38-f0e2-4b11-a369-c477cada57f5\") pod \"2ce2d653-a79e-4b24-b214-7f0d00141c71\" (UID: \"2ce2d653-a79e-4b24-b214-7f0d00141c71\") "
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.106390 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2ce2d653-a79e-4b24-b214-7f0d00141c71-rabbitmq-erlang-cookie\") pod \"2ce2d653-a79e-4b24-b214-7f0d00141c71\" (UID: \"2ce2d653-a79e-4b24-b214-7f0d00141c71\") "
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.106451 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2ce2d653-a79e-4b24-b214-7f0d00141c71-server-conf\") pod \"2ce2d653-a79e-4b24-b214-7f0d00141c71\" (UID: \"2ce2d653-a79e-4b24-b214-7f0d00141c71\") "
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.106502 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2ce2d653-a79e-4b24-b214-7f0d00141c71-pod-info\") pod \"2ce2d653-a79e-4b24-b214-7f0d00141c71\" (UID: \"2ce2d653-a79e-4b24-b214-7f0d00141c71\") "
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.106541 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2ce2d653-a79e-4b24-b214-7f0d00141c71-rabbitmq-confd\") pod \"2ce2d653-a79e-4b24-b214-7f0d00141c71\" (UID: \"2ce2d653-a79e-4b24-b214-7f0d00141c71\") "
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.106778 4946 reconciler_common.go:293] "Volume detached for volume \"pvc-266e7d83-22dd-417b-bdf7-7c3e59d7a5ab\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-266e7d83-22dd-417b-bdf7-7c3e59d7a5ab\") on node \"crc\" DevicePath \"\""
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.107575 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ce2d653-a79e-4b24-b214-7f0d00141c71-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "2ce2d653-a79e-4b24-b214-7f0d00141c71" (UID: "2ce2d653-a79e-4b24-b214-7f0d00141c71"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.107686 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ce2d653-a79e-4b24-b214-7f0d00141c71-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "2ce2d653-a79e-4b24-b214-7f0d00141c71" (UID: "2ce2d653-a79e-4b24-b214-7f0d00141c71"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.107991 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ce2d653-a79e-4b24-b214-7f0d00141c71-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "2ce2d653-a79e-4b24-b214-7f0d00141c71" (UID: "2ce2d653-a79e-4b24-b214-7f0d00141c71"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.110178 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ce2d653-a79e-4b24-b214-7f0d00141c71-kube-api-access-rgcg4" (OuterVolumeSpecName: "kube-api-access-rgcg4") pod "2ce2d653-a79e-4b24-b214-7f0d00141c71" (UID: "2ce2d653-a79e-4b24-b214-7f0d00141c71"). InnerVolumeSpecName "kube-api-access-rgcg4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.110683 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ce2d653-a79e-4b24-b214-7f0d00141c71-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "2ce2d653-a79e-4b24-b214-7f0d00141c71" (UID: "2ce2d653-a79e-4b24-b214-7f0d00141c71"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.111802 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/2ce2d653-a79e-4b24-b214-7f0d00141c71-pod-info" (OuterVolumeSpecName: "pod-info") pod "2ce2d653-a79e-4b24-b214-7f0d00141c71" (UID: "2ce2d653-a79e-4b24-b214-7f0d00141c71"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.116318 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-91a67a38-f0e2-4b11-a369-c477cada57f5" (OuterVolumeSpecName: "persistence") pod "2ce2d653-a79e-4b24-b214-7f0d00141c71" (UID: "2ce2d653-a79e-4b24-b214-7f0d00141c71"). InnerVolumeSpecName "pvc-91a67a38-f0e2-4b11-a369-c477cada57f5". PluginName "kubernetes.io/csi", VolumeGidValue ""
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.125119 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ce2d653-a79e-4b24-b214-7f0d00141c71-server-conf" (OuterVolumeSpecName: "server-conf") pod "2ce2d653-a79e-4b24-b214-7f0d00141c71" (UID: "2ce2d653-a79e-4b24-b214-7f0d00141c71"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.168894 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ce2d653-a79e-4b24-b214-7f0d00141c71-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "2ce2d653-a79e-4b24-b214-7f0d00141c71" (UID: "2ce2d653-a79e-4b24-b214-7f0d00141c71"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.208140 4946 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2ce2d653-a79e-4b24-b214-7f0d00141c71-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.208175 4946 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2ce2d653-a79e-4b24-b214-7f0d00141c71-plugins-conf\") on node \"crc\" DevicePath \"\""
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.208184 4946 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2ce2d653-a79e-4b24-b214-7f0d00141c71-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.208194 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgcg4\" (UniqueName: \"kubernetes.io/projected/2ce2d653-a79e-4b24-b214-7f0d00141c71-kube-api-access-rgcg4\") on node \"crc\" DevicePath \"\""
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.208206 4946 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2ce2d653-a79e-4b24-b214-7f0d00141c71-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.208239 4946 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-91a67a38-f0e2-4b11-a369-c477cada57f5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-91a67a38-f0e2-4b11-a369-c477cada57f5\") on node \"crc\" "
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.208249 4946 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2ce2d653-a79e-4b24-b214-7f0d00141c71-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.208258 4946 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2ce2d653-a79e-4b24-b214-7f0d00141c71-server-conf\") on node \"crc\" DevicePath \"\""
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.208266 4946 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2ce2d653-a79e-4b24-b214-7f0d00141c71-pod-info\") on node \"crc\" DevicePath \"\""
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.222180 4946 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.222308 4946 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-91a67a38-f0e2-4b11-a369-c477cada57f5" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-91a67a38-f0e2-4b11-a369-c477cada57f5") on node "crc"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.309775 4946 reconciler_common.go:293] "Volume detached for volume \"pvc-91a67a38-f0e2-4b11-a369-c477cada57f5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-91a67a38-f0e2-4b11-a369-c477cada57f5\") on node \"crc\" DevicePath \"\""
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.553311 4946 generic.go:334] "Generic (PLEG): container finished" podID="2ce2d653-a79e-4b24-b214-7f0d00141c71" containerID="57e6a7ead57a6afccbb6807b7a34f8902a65393a5e43318c66cb482ef8c62102" exitCode=0
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.553391 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2ce2d653-a79e-4b24-b214-7f0d00141c71","Type":"ContainerDied","Data":"57e6a7ead57a6afccbb6807b7a34f8902a65393a5e43318c66cb482ef8c62102"}
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.553420 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2ce2d653-a79e-4b24-b214-7f0d00141c71","Type":"ContainerDied","Data":"a3d792cc6cf5ee62fdb02fb209657d2a7ff00d62ee47d273e60b0c5a379265ee"}
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.553477 4946 scope.go:117] "RemoveContainer" containerID="57e6a7ead57a6afccbb6807b7a34f8902a65393a5e43318c66cb482ef8c62102"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.553594 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.559632 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0b429b35-b62b-411d-91dc-eee42d2b359c","Type":"ContainerDied","Data":"e005eab32dcef35f0e38c1ba5d952a1784c3214db728a0ff06d7b1ac9af00566"}
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.559699 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.579005 4946 scope.go:117] "RemoveContainer" containerID="cb6642bfcc272592bc9a6c98d09cab434e828778935c3197563d0800ebfd6504"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.590992 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.599379 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.622489 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.623637 4946 scope.go:117] "RemoveContainer" containerID="57e6a7ead57a6afccbb6807b7a34f8902a65393a5e43318c66cb482ef8c62102"
Nov 28 08:35:10 crc kubenswrapper[4946]: E1128 08:35:10.627581 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57e6a7ead57a6afccbb6807b7a34f8902a65393a5e43318c66cb482ef8c62102\": container with ID starting with 57e6a7ead57a6afccbb6807b7a34f8902a65393a5e43318c66cb482ef8c62102 not found: ID does not exist" containerID="57e6a7ead57a6afccbb6807b7a34f8902a65393a5e43318c66cb482ef8c62102"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.627774 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57e6a7ead57a6afccbb6807b7a34f8902a65393a5e43318c66cb482ef8c62102"} err="failed to get container status \"57e6a7ead57a6afccbb6807b7a34f8902a65393a5e43318c66cb482ef8c62102\": rpc error: code = NotFound desc = could not find container \"57e6a7ead57a6afccbb6807b7a34f8902a65393a5e43318c66cb482ef8c62102\": container with ID starting with 57e6a7ead57a6afccbb6807b7a34f8902a65393a5e43318c66cb482ef8c62102 not found: ID does not exist"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.627852 4946 scope.go:117] "RemoveContainer" containerID="cb6642bfcc272592bc9a6c98d09cab434e828778935c3197563d0800ebfd6504"
Nov 28 08:35:10 crc kubenswrapper[4946]: E1128 08:35:10.628290 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb6642bfcc272592bc9a6c98d09cab434e828778935c3197563d0800ebfd6504\": container with ID starting with cb6642bfcc272592bc9a6c98d09cab434e828778935c3197563d0800ebfd6504 not found: ID does not exist" containerID="cb6642bfcc272592bc9a6c98d09cab434e828778935c3197563d0800ebfd6504"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.628375 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb6642bfcc272592bc9a6c98d09cab434e828778935c3197563d0800ebfd6504"} err="failed to get container status \"cb6642bfcc272592bc9a6c98d09cab434e828778935c3197563d0800ebfd6504\": rpc error: code = NotFound desc = could not find container \"cb6642bfcc272592bc9a6c98d09cab434e828778935c3197563d0800ebfd6504\": container with ID starting with cb6642bfcc272592bc9a6c98d09cab434e828778935c3197563d0800ebfd6504 not found: ID does not exist"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.628435 4946 scope.go:117] "RemoveContainer" containerID="d0e46705e6cb42b2d00e978c77e7f622717aa445c6521ff8bf70a66744f0a877"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.630110 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.639845 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 28 08:35:10 crc kubenswrapper[4946]: E1128 08:35:10.640399 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ce2d653-a79e-4b24-b214-7f0d00141c71" containerName="setup-container"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.640498 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ce2d653-a79e-4b24-b214-7f0d00141c71" containerName="setup-container"
Nov 28 08:35:10 crc kubenswrapper[4946]: E1128 08:35:10.640572 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b429b35-b62b-411d-91dc-eee42d2b359c" containerName="setup-container"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.640621 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b429b35-b62b-411d-91dc-eee42d2b359c" containerName="setup-container"
Nov 28 08:35:10 crc kubenswrapper[4946]: E1128 08:35:10.640691 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ce2d653-a79e-4b24-b214-7f0d00141c71" containerName="rabbitmq"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.640739 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ce2d653-a79e-4b24-b214-7f0d00141c71" containerName="rabbitmq"
Nov 28 08:35:10 crc kubenswrapper[4946]: E1128 08:35:10.640792 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b429b35-b62b-411d-91dc-eee42d2b359c" containerName="rabbitmq"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.640859 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b429b35-b62b-411d-91dc-eee42d2b359c" containerName="rabbitmq"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.641078 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ce2d653-a79e-4b24-b214-7f0d00141c71" containerName="rabbitmq"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.641180 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b429b35-b62b-411d-91dc-eee42d2b359c" containerName="rabbitmq"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.642110 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.644495 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.644633 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.644644 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.644664 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-mc4q4"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.644715 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.647106 4946 scope.go:117] "RemoveContainer" containerID="4deca1c559f35601b2dd5f315508754a269e9a55c60877a302f7d180a94eede2"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.649714 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.702879 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.713028 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.716024 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.716233 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.716378 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.716534 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.716696 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-t4vqc"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.722030 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-266e7d83-22dd-417b-bdf7-7c3e59d7a5ab\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-266e7d83-22dd-417b-bdf7-7c3e59d7a5ab\") pod \"rabbitmq-server-0\" (UID: \"d90b1a28-1f52-437a-96b4-069f14a01e01\") " pod="openstack/rabbitmq-server-0"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.722088 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d90b1a28-1f52-437a-96b4-069f14a01e01-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d90b1a28-1f52-437a-96b4-069f14a01e01\") " pod="openstack/rabbitmq-server-0"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.722116 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d90b1a28-1f52-437a-96b4-069f14a01e01-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d90b1a28-1f52-437a-96b4-069f14a01e01\") " pod="openstack/rabbitmq-server-0"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.722238 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d90b1a28-1f52-437a-96b4-069f14a01e01-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d90b1a28-1f52-437a-96b4-069f14a01e01\") " pod="openstack/rabbitmq-server-0"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.722272 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d90b1a28-1f52-437a-96b4-069f14a01e01-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d90b1a28-1f52-437a-96b4-069f14a01e01\") " pod="openstack/rabbitmq-server-0"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.722308 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d90b1a28-1f52-437a-96b4-069f14a01e01-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d90b1a28-1f52-437a-96b4-069f14a01e01\") " pod="openstack/rabbitmq-server-0"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.722342 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv5wf\" (UniqueName: \"kubernetes.io/projected/d90b1a28-1f52-437a-96b4-069f14a01e01-kube-api-access-kv5wf\") pod \"rabbitmq-server-0\" (UID: \"d90b1a28-1f52-437a-96b4-069f14a01e01\") " pod="openstack/rabbitmq-server-0"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.722411 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d90b1a28-1f52-437a-96b4-069f14a01e01-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d90b1a28-1f52-437a-96b4-069f14a01e01\") " pod="openstack/rabbitmq-server-0"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.722476 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d90b1a28-1f52-437a-96b4-069f14a01e01-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d90b1a28-1f52-437a-96b4-069f14a01e01\") " pod="openstack/rabbitmq-server-0"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.723997 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.759434 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5dfd7bdf7-rpvww"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.804250 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65fd8458c9-rptbj"]
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.804552 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-65fd8458c9-rptbj" podUID="892175b4-661d-4457-88ca-2f7c9a14a283" containerName="dnsmasq-dns" containerID="cri-o://ebd3bf22568928d6ea621f02b27f79712d2cd1900bf33d07982c0fb68ac0db18" gracePeriod=10
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.823864 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.823944 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d90b1a28-1f52-437a-96b4-069f14a01e01-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d90b1a28-1f52-437a-96b4-069f14a01e01\") " pod="openstack/rabbitmq-server-0"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.823981 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d90b1a28-1f52-437a-96b4-069f14a01e01-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d90b1a28-1f52-437a-96b4-069f14a01e01\") " pod="openstack/rabbitmq-server-0"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.824015 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d90b1a28-1f52-437a-96b4-069f14a01e01-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d90b1a28-1f52-437a-96b4-069f14a01e01\") " pod="openstack/rabbitmq-server-0"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.824038 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.824063 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-91a67a38-f0e2-4b11-a369-c477cada57f5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-91a67a38-f0e2-4b11-a369-c477cada57f5\") pod \"rabbitmq-cell1-server-0\" (UID: \"148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.824086 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kv5wf\" (UniqueName: \"kubernetes.io/projected/d90b1a28-1f52-437a-96b4-069f14a01e01-kube-api-access-kv5wf\") pod \"rabbitmq-server-0\" (UID: \"d90b1a28-1f52-437a-96b4-069f14a01e01\") " pod="openstack/rabbitmq-server-0"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.824152 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.824177 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d90b1a28-1f52-437a-96b4-069f14a01e01-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d90b1a28-1f52-437a-96b4-069f14a01e01\") " pod="openstack/rabbitmq-server-0"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.824204 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.824242 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d90b1a28-1f52-437a-96b4-069f14a01e01-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d90b1a28-1f52-437a-96b4-069f14a01e01\") " pod="openstack/rabbitmq-server-0"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.824274 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.824310 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-266e7d83-22dd-417b-bdf7-7c3e59d7a5ab\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-266e7d83-22dd-417b-bdf7-7c3e59d7a5ab\") pod \"rabbitmq-server-0\" (UID: \"d90b1a28-1f52-437a-96b4-069f14a01e01\") " pod="openstack/rabbitmq-server-0"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.824333 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxnff\" (UniqueName: \"kubernetes.io/projected/148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d-kube-api-access-pxnff\") pod \"rabbitmq-cell1-server-0\" (UID: \"148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.824359 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d90b1a28-1f52-437a-96b4-069f14a01e01-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d90b1a28-1f52-437a-96b4-069f14a01e01\") " pod="openstack/rabbitmq-server-0"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.824379 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d90b1a28-1f52-437a-96b4-069f14a01e01-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d90b1a28-1f52-437a-96b4-069f14a01e01\") " pod="openstack/rabbitmq-server-0"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.824402 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.824445 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.825175 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d90b1a28-1f52-437a-96b4-069f14a01e01-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d90b1a28-1f52-437a-96b4-069f14a01e01\") " pod="openstack/rabbitmq-server-0"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.825621 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d90b1a28-1f52-437a-96b4-069f14a01e01-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d90b1a28-1f52-437a-96b4-069f14a01e01\") " pod="openstack/rabbitmq-server-0"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.826569 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d90b1a28-1f52-437a-96b4-069f14a01e01-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d90b1a28-1f52-437a-96b4-069f14a01e01\") " pod="openstack/rabbitmq-server-0"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.826880 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d90b1a28-1f52-437a-96b4-069f14a01e01-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d90b1a28-1f52-437a-96b4-069f14a01e01\") " pod="openstack/rabbitmq-server-0"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.829744 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d90b1a28-1f52-437a-96b4-069f14a01e01-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d90b1a28-1f52-437a-96b4-069f14a01e01\") " pod="openstack/rabbitmq-server-0"
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.832331 4946 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.832381 4946 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-266e7d83-22dd-417b-bdf7-7c3e59d7a5ab\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-266e7d83-22dd-417b-bdf7-7c3e59d7a5ab\") pod \"rabbitmq-server-0\" (UID: \"d90b1a28-1f52-437a-96b4-069f14a01e01\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2b1576cedd3e93aac170e4e8d07d9795baa0d2ce1d0390e7bf5440f071778c5d/globalmount\"" pod="openstack/rabbitmq-server-0" Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.841927 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d90b1a28-1f52-437a-96b4-069f14a01e01-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d90b1a28-1f52-437a-96b4-069f14a01e01\") " pod="openstack/rabbitmq-server-0" Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.844748 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d90b1a28-1f52-437a-96b4-069f14a01e01-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d90b1a28-1f52-437a-96b4-069f14a01e01\") " pod="openstack/rabbitmq-server-0" Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.852207 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv5wf\" (UniqueName: \"kubernetes.io/projected/d90b1a28-1f52-437a-96b4-069f14a01e01-kube-api-access-kv5wf\") pod \"rabbitmq-server-0\" (UID: \"d90b1a28-1f52-437a-96b4-069f14a01e01\") " pod="openstack/rabbitmq-server-0" Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.874776 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-266e7d83-22dd-417b-bdf7-7c3e59d7a5ab\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-266e7d83-22dd-417b-bdf7-7c3e59d7a5ab\") pod \"rabbitmq-server-0\" (UID: \"d90b1a28-1f52-437a-96b4-069f14a01e01\") " pod="openstack/rabbitmq-server-0" Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.925423 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.925489 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-91a67a38-f0e2-4b11-a369-c477cada57f5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-91a67a38-f0e2-4b11-a369-c477cada57f5\") pod \"rabbitmq-cell1-server-0\" (UID: \"148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.925551 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.925581 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" 
(UID: \"148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.925610 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.925634 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxnff\" (UniqueName: \"kubernetes.io/projected/148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d-kube-api-access-pxnff\") pod \"rabbitmq-cell1-server-0\" (UID: \"148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.925661 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.925695 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.925723 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.926494 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.926802 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.927072 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.927120 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 
08:35:10.929811 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.929853 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.931422 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.935029 4946 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.935069 4946 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-91a67a38-f0e2-4b11-a369-c477cada57f5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-91a67a38-f0e2-4b11-a369-c477cada57f5\") pod \"rabbitmq-cell1-server-0\" (UID: \"148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ad841bab452a44bc4483df56d6386dff5f134804806addb749e9d36b65b51d57/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.941744 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxnff\" (UniqueName: \"kubernetes.io/projected/148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d-kube-api-access-pxnff\") pod \"rabbitmq-cell1-server-0\" (UID: \"148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.963078 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-91a67a38-f0e2-4b11-a369-c477cada57f5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-91a67a38-f0e2-4b11-a369-c477cada57f5\") pod \"rabbitmq-cell1-server-0\" (UID: \"148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 08:35:10 crc kubenswrapper[4946]: I1128 08:35:10.963659 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 28 08:35:11 crc kubenswrapper[4946]: I1128 08:35:11.038960 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 28 08:35:11 crc kubenswrapper[4946]: I1128 08:35:11.121408 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65fd8458c9-rptbj" Nov 28 08:35:11 crc kubenswrapper[4946]: I1128 08:35:11.229171 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/892175b4-661d-4457-88ca-2f7c9a14a283-dns-svc\") pod \"892175b4-661d-4457-88ca-2f7c9a14a283\" (UID: \"892175b4-661d-4457-88ca-2f7c9a14a283\") " Nov 28 08:35:11 crc kubenswrapper[4946]: I1128 08:35:11.229275 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7g98k\" (UniqueName: \"kubernetes.io/projected/892175b4-661d-4457-88ca-2f7c9a14a283-kube-api-access-7g98k\") pod \"892175b4-661d-4457-88ca-2f7c9a14a283\" (UID: \"892175b4-661d-4457-88ca-2f7c9a14a283\") " Nov 28 08:35:11 crc kubenswrapper[4946]: I1128 08:35:11.229346 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/892175b4-661d-4457-88ca-2f7c9a14a283-config\") pod \"892175b4-661d-4457-88ca-2f7c9a14a283\" (UID: \"892175b4-661d-4457-88ca-2f7c9a14a283\") " Nov 28 08:35:11 crc kubenswrapper[4946]: I1128 08:35:11.233689 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/892175b4-661d-4457-88ca-2f7c9a14a283-kube-api-access-7g98k" (OuterVolumeSpecName: "kube-api-access-7g98k") pod "892175b4-661d-4457-88ca-2f7c9a14a283" (UID: "892175b4-661d-4457-88ca-2f7c9a14a283"). InnerVolumeSpecName "kube-api-access-7g98k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:35:11 crc kubenswrapper[4946]: I1128 08:35:11.274762 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/892175b4-661d-4457-88ca-2f7c9a14a283-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "892175b4-661d-4457-88ca-2f7c9a14a283" (UID: "892175b4-661d-4457-88ca-2f7c9a14a283"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:35:11 crc kubenswrapper[4946]: I1128 08:35:11.277751 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/892175b4-661d-4457-88ca-2f7c9a14a283-config" (OuterVolumeSpecName: "config") pod "892175b4-661d-4457-88ca-2f7c9a14a283" (UID: "892175b4-661d-4457-88ca-2f7c9a14a283"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:35:11 crc kubenswrapper[4946]: I1128 08:35:11.330268 4946 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/892175b4-661d-4457-88ca-2f7c9a14a283-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 08:35:11 crc kubenswrapper[4946]: I1128 08:35:11.330301 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7g98k\" (UniqueName: \"kubernetes.io/projected/892175b4-661d-4457-88ca-2f7c9a14a283-kube-api-access-7g98k\") on node \"crc\" DevicePath \"\"" Nov 28 08:35:11 crc kubenswrapper[4946]: I1128 08:35:11.330315 4946 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/892175b4-661d-4457-88ca-2f7c9a14a283-config\") on node \"crc\" DevicePath \"\"" Nov 28 08:35:11 crc kubenswrapper[4946]: I1128 08:35:11.431119 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 08:35:11 crc kubenswrapper[4946]: I1128 08:35:11.514220 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 08:35:11 crc kubenswrapper[4946]: I1128 08:35:11.576677 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d","Type":"ContainerStarted","Data":"9595dc53c80b655ea177de91d0f2f35355a2ca50e2d006d18cb855cf248832d8"} Nov 28 08:35:11 crc kubenswrapper[4946]: I1128 08:35:11.593625 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d90b1a28-1f52-437a-96b4-069f14a01e01","Type":"ContainerStarted","Data":"87cda95df1b770b8b584412ebcbde073903ce0a52875293992028aa45954c8bc"} Nov 28 08:35:11 crc kubenswrapper[4946]: I1128 08:35:11.596655 4946 generic.go:334] "Generic (PLEG): container finished" podID="892175b4-661d-4457-88ca-2f7c9a14a283" containerID="ebd3bf22568928d6ea621f02b27f79712d2cd1900bf33d07982c0fb68ac0db18" exitCode=0 Nov 28 08:35:11 crc kubenswrapper[4946]: I1128 08:35:11.596750 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65fd8458c9-rptbj" Nov 28 08:35:11 crc kubenswrapper[4946]: I1128 08:35:11.596768 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65fd8458c9-rptbj" event={"ID":"892175b4-661d-4457-88ca-2f7c9a14a283","Type":"ContainerDied","Data":"ebd3bf22568928d6ea621f02b27f79712d2cd1900bf33d07982c0fb68ac0db18"} Nov 28 08:35:11 crc kubenswrapper[4946]: I1128 08:35:11.596884 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65fd8458c9-rptbj" event={"ID":"892175b4-661d-4457-88ca-2f7c9a14a283","Type":"ContainerDied","Data":"7ae71678242ee68760cea06496c0c4a77791cff113c9c5c8de08e660a8487949"} Nov 28 08:35:11 crc kubenswrapper[4946]: I1128 08:35:11.596912 4946 scope.go:117] "RemoveContainer" containerID="ebd3bf22568928d6ea621f02b27f79712d2cd1900bf33d07982c0fb68ac0db18" Nov 28 08:35:11 crc kubenswrapper[4946]: I1128 08:35:11.615619 4946 scope.go:117] "RemoveContainer" containerID="664f034b71db2a0022ab4abb2428a8a4a888d23dbddd865707165b239953aecc" Nov 28 08:35:11 crc kubenswrapper[4946]: I1128 08:35:11.682641 4946 scope.go:117] "RemoveContainer" containerID="ebd3bf22568928d6ea621f02b27f79712d2cd1900bf33d07982c0fb68ac0db18" Nov 28 08:35:11 crc kubenswrapper[4946]: E1128 08:35:11.687624 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebd3bf22568928d6ea621f02b27f79712d2cd1900bf33d07982c0fb68ac0db18\": container with ID starting with ebd3bf22568928d6ea621f02b27f79712d2cd1900bf33d07982c0fb68ac0db18 not found: ID does not exist" containerID="ebd3bf22568928d6ea621f02b27f79712d2cd1900bf33d07982c0fb68ac0db18" Nov 28 08:35:11 crc kubenswrapper[4946]: I1128 08:35:11.687677 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebd3bf22568928d6ea621f02b27f79712d2cd1900bf33d07982c0fb68ac0db18"} err="failed to get container status \"ebd3bf22568928d6ea621f02b27f79712d2cd1900bf33d07982c0fb68ac0db18\": rpc error: code = NotFound desc = could not find container \"ebd3bf22568928d6ea621f02b27f79712d2cd1900bf33d07982c0fb68ac0db18\": container with ID starting with ebd3bf22568928d6ea621f02b27f79712d2cd1900bf33d07982c0fb68ac0db18 not found: ID does not exist" Nov 28 08:35:11 crc kubenswrapper[4946]: I1128 08:35:11.687704 4946 scope.go:117] "RemoveContainer" containerID="664f034b71db2a0022ab4abb2428a8a4a888d23dbddd865707165b239953aecc" Nov 28 08:35:11 crc kubenswrapper[4946]: E1128 08:35:11.688110 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"664f034b71db2a0022ab4abb2428a8a4a888d23dbddd865707165b239953aecc\": container with ID starting with 664f034b71db2a0022ab4abb2428a8a4a888d23dbddd865707165b239953aecc not found: ID does not exist" containerID="664f034b71db2a0022ab4abb2428a8a4a888d23dbddd865707165b239953aecc" Nov 28 08:35:11 crc kubenswrapper[4946]: I1128 08:35:11.688155 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"664f034b71db2a0022ab4abb2428a8a4a888d23dbddd865707165b239953aecc"} err="failed to get container status \"664f034b71db2a0022ab4abb2428a8a4a888d23dbddd865707165b239953aecc\": rpc error: code = NotFound desc = could not find container \"664f034b71db2a0022ab4abb2428a8a4a888d23dbddd865707165b239953aecc\": container with ID starting with 664f034b71db2a0022ab4abb2428a8a4a888d23dbddd865707165b239953aecc not found: ID does not exist" Nov 28 08:35:11 crc 
Nov 28 08:35:11 crc kubenswrapper[4946]: I1128 08:35:11.694790 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65fd8458c9-rptbj"] Nov 28 08:35:11 crc kubenswrapper[4946]: I1128 08:35:11.707312 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-65fd8458c9-rptbj"] Nov 28 08:35:12 crc kubenswrapper[4946]: I1128 08:35:12.003433 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b429b35-b62b-411d-91dc-eee42d2b359c" path="/var/lib/kubelet/pods/0b429b35-b62b-411d-91dc-eee42d2b359c/volumes" Nov 28 08:35:12 crc kubenswrapper[4946]: I1128 08:35:12.004394 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ce2d653-a79e-4b24-b214-7f0d00141c71" path="/var/lib/kubelet/pods/2ce2d653-a79e-4b24-b214-7f0d00141c71/volumes" Nov 28 08:35:12 crc kubenswrapper[4946]: I1128 08:35:12.005798 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="892175b4-661d-4457-88ca-2f7c9a14a283" path="/var/lib/kubelet/pods/892175b4-661d-4457-88ca-2f7c9a14a283/volumes" Nov 28 08:35:12 crc kubenswrapper[4946]: I1128 08:35:12.629667 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d90b1a28-1f52-437a-96b4-069f14a01e01","Type":"ContainerStarted","Data":"f739c198efd480baed170227fb86b99d8021dd976f6c3915c0a735f7c8518cd4"} Nov 28 08:35:12 crc kubenswrapper[4946]: I1128 08:35:12.990799 4946 scope.go:117] "RemoveContainer" containerID="e8119d549e123f1e930654e36405e56510a1c4898160d24fa79c41d4ac87c927" Nov 28 08:35:12 crc kubenswrapper[4946]: E1128 08:35:12.991380 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:35:13 crc kubenswrapper[4946]: I1128 08:35:13.641795 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d","Type":"ContainerStarted","Data":"818db46776f431a0ec52538df46a0d413e3c8c5f1041be64917a90d0783fd055"} Nov 28 08:35:27 crc kubenswrapper[4946]: I1128 08:35:27.990630 4946 scope.go:117] "RemoveContainer" containerID="e8119d549e123f1e930654e36405e56510a1c4898160d24fa79c41d4ac87c927" Nov 28 08:35:27 crc kubenswrapper[4946]: E1128 08:35:27.991579 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:35:41 crc kubenswrapper[4946]: I1128 08:35:41.990219 4946 scope.go:117] "RemoveContainer" containerID="e8119d549e123f1e930654e36405e56510a1c4898160d24fa79c41d4ac87c927" Nov 28 08:35:41 crc kubenswrapper[4946]: E1128 08:35:41.991514 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
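
[annotation] The repeating "back-off 5m0s restarting failed container" errors show machine-config-daemon pinned at the kubelet's maximum container restart backoff; each sync attempt is skipped until the window expires. A sketch of the capped doubling this implies, assuming the upstream kubelet defaults of a 10s initial delay and a 5m ceiling (this node's actual configuration is not shown in the log):

    package main

    import (
        "fmt"
        "time"
    )

    // backoffAfter returns the wait imposed after the nth consecutive
    // crash: double each time, capped at the ceiling. Constants are the
    // assumed defaults, not values read from this node.
    func backoffAfter(crashes int) time.Duration {
        const (
            initial = 10 * time.Second
            ceiling = 5 * time.Minute
        )
        d := initial
        for i := 1; i < crashes; i++ {
            d *= 2
            if d >= ceiling {
                return ceiling
            }
        }
        return d
    }

    func main() {
        for n := 1; n <= 7; n++ {
            fmt.Printf("crash %d -> wait %v\n", n, backoffAfter(n))
        }
        // From the 6th crash on this prints 5m0s, matching the
        // "back-off 5m0s" in the pod_workers.go entries above.
    }
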
Nov 28 08:35:45 crc kubenswrapper[4946]: I1128 08:35:45.954101 4946 generic.go:334] "Generic (PLEG): container finished" podID="148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d" containerID="818db46776f431a0ec52538df46a0d413e3c8c5f1041be64917a90d0783fd055" exitCode=0 Nov 28 08:35:45 crc kubenswrapper[4946]: I1128 08:35:45.954229 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d","Type":"ContainerDied","Data":"818db46776f431a0ec52538df46a0d413e3c8c5f1041be64917a90d0783fd055"} Nov 28 08:35:45 crc kubenswrapper[4946]: I1128 08:35:45.958154 4946 generic.go:334] "Generic (PLEG): container finished" podID="d90b1a28-1f52-437a-96b4-069f14a01e01" containerID="f739c198efd480baed170227fb86b99d8021dd976f6c3915c0a735f7c8518cd4" exitCode=0 Nov 28 08:35:45 crc kubenswrapper[4946]: I1128 08:35:45.958237 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d90b1a28-1f52-437a-96b4-069f14a01e01","Type":"ContainerDied","Data":"f739c198efd480baed170227fb86b99d8021dd976f6c3915c0a735f7c8518cd4"} Nov 28 08:35:46 crc kubenswrapper[4946]: I1128 08:35:46.972454 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d90b1a28-1f52-437a-96b4-069f14a01e01","Type":"ContainerStarted","Data":"bfedbde17e36141ece302d5b8750af5ec32690f16f87a28b486690a2b289f312"} Nov 28 08:35:46 crc kubenswrapper[4946]: I1128 08:35:46.973458 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 28 08:35:46 crc kubenswrapper[4946]: I1128 08:35:46.976083 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d","Type":"ContainerStarted","Data":"1fa5770573ebdd15416ed903818c80c3d9d64212b4d9194e86d35845ec05bb49"} Nov 28 08:35:46 crc kubenswrapper[4946]: I1128 08:35:46.976429 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 28 08:35:47 crc kubenswrapper[4946]: I1128 08:35:47.017270 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.017249803 podStartE2EDuration="37.017249803s" podCreationTimestamp="2025-11-28 08:35:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 08:35:47.006661601 +0000 UTC m=+6201.384726742" watchObservedRunningTime="2025-11-28 08:35:47.017249803 +0000 UTC m=+6201.395314914" Nov 28 08:35:47 crc kubenswrapper[4946]: I1128 08:35:47.056925 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.056892294 podStartE2EDuration="37.056892294s" podCreationTimestamp="2025-11-28 08:35:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 08:35:47.042431276 +0000 UTC m=+6201.420496427" watchObservedRunningTime="2025-11-28 08:35:47.056892294 +0000 UTC m=+6201.434957435"
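
[annotation] The pod_startup_latency_tracker entries above report two durations: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes image-pull time. With the zero firstStartedPulling/lastFinishedPulling timestamps here (images already present), the two values coincide at 37.017249803s for rabbitmq-server-0. A reproduction of that arithmetic in Go:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps taken from the log entries above.
        created, _ := time.Parse(time.RFC3339, "2025-11-28T08:35:10Z")
        running, _ := time.Parse(time.RFC3339Nano, "2025-11-28T08:35:47.017249803Z")

        e2e := running.Sub(created)
        pull := time.Duration(0) // pull timestamps are the zero value above
        slo := e2e - pull

        // Prints: podStartE2EDuration=37.017249803s podStartSLOduration=37.017249803s
        fmt.Printf("podStartE2EDuration=%v podStartSLOduration=%v\n", e2e, slo)
    }
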
containerID="e8119d549e123f1e930654e36405e56510a1c4898160d24fa79c41d4ac87c927" Nov 28 08:35:54 crc kubenswrapper[4946]: E1128 08:35:54.991204 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:36:00 crc kubenswrapper[4946]: I1128 08:36:00.967677 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 28 08:36:01 crc kubenswrapper[4946]: I1128 08:36:01.043796 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 28 08:36:06 crc kubenswrapper[4946]: I1128 08:36:06.049641 4946 scope.go:117] "RemoveContainer" containerID="e8119d549e123f1e930654e36405e56510a1c4898160d24fa79c41d4ac87c927" Nov 28 08:36:06 crc kubenswrapper[4946]: E1128 08:36:06.050307 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:36:08 crc kubenswrapper[4946]: I1128 08:36:08.101229 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-1-default"] Nov 28 08:36:08 crc kubenswrapper[4946]: E1128 08:36:08.102173 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="892175b4-661d-4457-88ca-2f7c9a14a283" containerName="dnsmasq-dns" Nov 28 08:36:08 crc kubenswrapper[4946]: I1128 08:36:08.102190 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="892175b4-661d-4457-88ca-2f7c9a14a283" containerName="dnsmasq-dns" Nov 28 08:36:08 crc kubenswrapper[4946]: E1128 08:36:08.102217 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="892175b4-661d-4457-88ca-2f7c9a14a283" containerName="init" Nov 28 08:36:08 crc kubenswrapper[4946]: I1128 08:36:08.102226 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="892175b4-661d-4457-88ca-2f7c9a14a283" containerName="init" Nov 28 08:36:08 crc kubenswrapper[4946]: I1128 08:36:08.102673 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="892175b4-661d-4457-88ca-2f7c9a14a283" containerName="dnsmasq-dns" Nov 28 08:36:08 crc kubenswrapper[4946]: I1128 08:36:08.104124 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1-default" Nov 28 08:36:08 crc kubenswrapper[4946]: I1128 08:36:08.113263 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-zsz6p" Nov 28 08:36:08 crc kubenswrapper[4946]: I1128 08:36:08.114168 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1-default"] Nov 28 08:36:08 crc kubenswrapper[4946]: I1128 08:36:08.191435 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw6b6\" (UniqueName: \"kubernetes.io/projected/a2bcaae1-4647-40dd-83c2-71b21b0f7be1-kube-api-access-bw6b6\") pod \"mariadb-client-1-default\" (UID: \"a2bcaae1-4647-40dd-83c2-71b21b0f7be1\") " pod="openstack/mariadb-client-1-default" Nov 28 08:36:08 crc kubenswrapper[4946]: I1128 08:36:08.293067 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bw6b6\" (UniqueName: \"kubernetes.io/projected/a2bcaae1-4647-40dd-83c2-71b21b0f7be1-kube-api-access-bw6b6\") pod \"mariadb-client-1-default\" (UID: \"a2bcaae1-4647-40dd-83c2-71b21b0f7be1\") " pod="openstack/mariadb-client-1-default" Nov 28 08:36:08 crc kubenswrapper[4946]: I1128 08:36:08.327653 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw6b6\" (UniqueName: \"kubernetes.io/projected/a2bcaae1-4647-40dd-83c2-71b21b0f7be1-kube-api-access-bw6b6\") pod \"mariadb-client-1-default\" (UID: \"a2bcaae1-4647-40dd-83c2-71b21b0f7be1\") " pod="openstack/mariadb-client-1-default" Nov 28 08:36:08 crc kubenswrapper[4946]: I1128 08:36:08.435980 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1-default" Nov 28 08:36:08 crc kubenswrapper[4946]: I1128 08:36:08.771819 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1-default"] Nov 28 08:36:08 crc kubenswrapper[4946]: I1128 08:36:08.787509 4946 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 08:36:09 crc kubenswrapper[4946]: I1128 08:36:09.200251 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1-default" event={"ID":"a2bcaae1-4647-40dd-83c2-71b21b0f7be1","Type":"ContainerStarted","Data":"cb4b10ce9c8e118e3ee86d2b2fb3256fd88f87761efb66451eec176861541687"} Nov 28 08:36:10 crc kubenswrapper[4946]: I1128 08:36:10.210723 4946 generic.go:334] "Generic (PLEG): container finished" podID="a2bcaae1-4647-40dd-83c2-71b21b0f7be1" containerID="ea6dcb2418e2c0ed01740ba43078f6ed2533c9223044de980a28c925a5ad56b2" exitCode=0 Nov 28 08:36:10 crc kubenswrapper[4946]: I1128 08:36:10.210791 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1-default" event={"ID":"a2bcaae1-4647-40dd-83c2-71b21b0f7be1","Type":"ContainerDied","Data":"ea6dcb2418e2c0ed01740ba43078f6ed2533c9223044de980a28c925a5ad56b2"} Nov 28 08:36:11 crc kubenswrapper[4946]: I1128 08:36:11.691234 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1-default" Nov 28 08:36:11 crc kubenswrapper[4946]: I1128 08:36:11.728408 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-1-default_a2bcaae1-4647-40dd-83c2-71b21b0f7be1/mariadb-client-1-default/0.log" Nov 28 08:36:11 crc kubenswrapper[4946]: I1128 08:36:11.747386 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bw6b6\" (UniqueName: \"kubernetes.io/projected/a2bcaae1-4647-40dd-83c2-71b21b0f7be1-kube-api-access-bw6b6\") pod \"a2bcaae1-4647-40dd-83c2-71b21b0f7be1\" (UID: \"a2bcaae1-4647-40dd-83c2-71b21b0f7be1\") " Nov 28 08:36:11 crc kubenswrapper[4946]: I1128 08:36:11.756715 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2bcaae1-4647-40dd-83c2-71b21b0f7be1-kube-api-access-bw6b6" (OuterVolumeSpecName: "kube-api-access-bw6b6") pod "a2bcaae1-4647-40dd-83c2-71b21b0f7be1" (UID: "a2bcaae1-4647-40dd-83c2-71b21b0f7be1"). InnerVolumeSpecName "kube-api-access-bw6b6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:36:11 crc kubenswrapper[4946]: I1128 08:36:11.757580 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-1-default"] Nov 28 08:36:11 crc kubenswrapper[4946]: I1128 08:36:11.765107 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-1-default"] Nov 28 08:36:11 crc kubenswrapper[4946]: I1128 08:36:11.852685 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bw6b6\" (UniqueName: \"kubernetes.io/projected/a2bcaae1-4647-40dd-83c2-71b21b0f7be1-kube-api-access-bw6b6\") on node \"crc\" DevicePath \"\"" Nov 28 08:36:12 crc kubenswrapper[4946]: I1128 08:36:12.007111 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2bcaae1-4647-40dd-83c2-71b21b0f7be1" path="/var/lib/kubelet/pods/a2bcaae1-4647-40dd-83c2-71b21b0f7be1/volumes" Nov 28 08:36:12 crc kubenswrapper[4946]: I1128 08:36:12.233923 4946 scope.go:117] "RemoveContainer" containerID="ea6dcb2418e2c0ed01740ba43078f6ed2533c9223044de980a28c925a5ad56b2" Nov 28 08:36:12 crc kubenswrapper[4946]: I1128 08:36:12.234026 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1-default" Nov 28 08:36:12 crc kubenswrapper[4946]: I1128 08:36:12.318681 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-2-default"] Nov 28 08:36:12 crc kubenswrapper[4946]: E1128 08:36:12.319313 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2bcaae1-4647-40dd-83c2-71b21b0f7be1" containerName="mariadb-client-1-default" Nov 28 08:36:12 crc kubenswrapper[4946]: I1128 08:36:12.319351 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2bcaae1-4647-40dd-83c2-71b21b0f7be1" containerName="mariadb-client-1-default" Nov 28 08:36:12 crc kubenswrapper[4946]: I1128 08:36:12.319697 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2bcaae1-4647-40dd-83c2-71b21b0f7be1" containerName="mariadb-client-1-default" Nov 28 08:36:12 crc kubenswrapper[4946]: I1128 08:36:12.320886 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-2-default" Nov 28 08:36:12 crc kubenswrapper[4946]: I1128 08:36:12.323492 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-zsz6p" Nov 28 08:36:12 crc kubenswrapper[4946]: I1128 08:36:12.327121 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2-default"] Nov 28 08:36:12 crc kubenswrapper[4946]: I1128 08:36:12.374265 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv9jm\" (UniqueName: \"kubernetes.io/projected/3f37a0ba-7df3-4d29-acb5-520c184326de-kube-api-access-hv9jm\") pod \"mariadb-client-2-default\" (UID: \"3f37a0ba-7df3-4d29-acb5-520c184326de\") " pod="openstack/mariadb-client-2-default" Nov 28 08:36:12 crc kubenswrapper[4946]: I1128 08:36:12.476020 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hv9jm\" (UniqueName: \"kubernetes.io/projected/3f37a0ba-7df3-4d29-acb5-520c184326de-kube-api-access-hv9jm\") pod \"mariadb-client-2-default\" (UID: \"3f37a0ba-7df3-4d29-acb5-520c184326de\") " pod="openstack/mariadb-client-2-default" Nov 28 08:36:12 crc kubenswrapper[4946]: I1128 08:36:12.507345 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hv9jm\" (UniqueName: \"kubernetes.io/projected/3f37a0ba-7df3-4d29-acb5-520c184326de-kube-api-access-hv9jm\") pod \"mariadb-client-2-default\" (UID: \"3f37a0ba-7df3-4d29-acb5-520c184326de\") " pod="openstack/mariadb-client-2-default" Nov 28 08:36:12 crc kubenswrapper[4946]: I1128 08:36:12.704805 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2-default" Nov 28 08:36:13 crc kubenswrapper[4946]: W1128 08:36:13.276290 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f37a0ba_7df3_4d29_acb5_520c184326de.slice/crio-9ee2a10186b3cf6e3b5195d813afdc5e7e8076d8a64a4f872510a21916d061d4 WatchSource:0}: Error finding container 9ee2a10186b3cf6e3b5195d813afdc5e7e8076d8a64a4f872510a21916d061d4: Status 404 returned error can't find the container with id 9ee2a10186b3cf6e3b5195d813afdc5e7e8076d8a64a4f872510a21916d061d4 Nov 28 08:36:13 crc kubenswrapper[4946]: I1128 08:36:13.280361 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2-default"] Nov 28 08:36:14 crc kubenswrapper[4946]: I1128 08:36:14.259967 4946 generic.go:334] "Generic (PLEG): container finished" podID="3f37a0ba-7df3-4d29-acb5-520c184326de" containerID="527b5d3f115cea98509b977534d5b2ac5e44b251c1fafbd28a84c9d39759713b" exitCode=1 Nov 28 08:36:14 crc kubenswrapper[4946]: I1128 08:36:14.260186 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2-default" event={"ID":"3f37a0ba-7df3-4d29-acb5-520c184326de","Type":"ContainerDied","Data":"527b5d3f115cea98509b977534d5b2ac5e44b251c1fafbd28a84c9d39759713b"} Nov 28 08:36:14 crc kubenswrapper[4946]: I1128 08:36:14.260386 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2-default" event={"ID":"3f37a0ba-7df3-4d29-acb5-520c184326de","Type":"ContainerStarted","Data":"9ee2a10186b3cf6e3b5195d813afdc5e7e8076d8a64a4f872510a21916d061d4"} Nov 28 08:36:15 crc kubenswrapper[4946]: I1128 08:36:15.796860 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-2-default" Nov 28 08:36:15 crc kubenswrapper[4946]: I1128 08:36:15.823855 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-2-default_3f37a0ba-7df3-4d29-acb5-520c184326de/mariadb-client-2-default/0.log" Nov 28 08:36:15 crc kubenswrapper[4946]: I1128 08:36:15.830538 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hv9jm\" (UniqueName: \"kubernetes.io/projected/3f37a0ba-7df3-4d29-acb5-520c184326de-kube-api-access-hv9jm\") pod \"3f37a0ba-7df3-4d29-acb5-520c184326de\" (UID: \"3f37a0ba-7df3-4d29-acb5-520c184326de\") " Nov 28 08:36:15 crc kubenswrapper[4946]: I1128 08:36:15.839953 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f37a0ba-7df3-4d29-acb5-520c184326de-kube-api-access-hv9jm" (OuterVolumeSpecName: "kube-api-access-hv9jm") pod "3f37a0ba-7df3-4d29-acb5-520c184326de" (UID: "3f37a0ba-7df3-4d29-acb5-520c184326de"). InnerVolumeSpecName "kube-api-access-hv9jm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:36:15 crc kubenswrapper[4946]: I1128 08:36:15.862451 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-2-default"] Nov 28 08:36:15 crc kubenswrapper[4946]: I1128 08:36:15.873165 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-2-default"] Nov 28 08:36:15 crc kubenswrapper[4946]: I1128 08:36:15.934637 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hv9jm\" (UniqueName: \"kubernetes.io/projected/3f37a0ba-7df3-4d29-acb5-520c184326de-kube-api-access-hv9jm\") on node \"crc\" DevicePath \"\"" Nov 28 08:36:16 crc kubenswrapper[4946]: I1128 08:36:16.011069 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f37a0ba-7df3-4d29-acb5-520c184326de" path="/var/lib/kubelet/pods/3f37a0ba-7df3-4d29-acb5-520c184326de/volumes" Nov 28 08:36:16 crc kubenswrapper[4946]: I1128 08:36:16.292662 4946 scope.go:117] "RemoveContainer" containerID="527b5d3f115cea98509b977534d5b2ac5e44b251c1fafbd28a84c9d39759713b" Nov 28 08:36:16 crc kubenswrapper[4946]: I1128 08:36:16.292726 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2-default" Nov 28 08:36:16 crc kubenswrapper[4946]: I1128 08:36:16.420455 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-1"] Nov 28 08:36:16 crc kubenswrapper[4946]: E1128 08:36:16.421455 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f37a0ba-7df3-4d29-acb5-520c184326de" containerName="mariadb-client-2-default" Nov 28 08:36:16 crc kubenswrapper[4946]: I1128 08:36:16.421529 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f37a0ba-7df3-4d29-acb5-520c184326de" containerName="mariadb-client-2-default" Nov 28 08:36:16 crc kubenswrapper[4946]: I1128 08:36:16.421801 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f37a0ba-7df3-4d29-acb5-520c184326de" containerName="mariadb-client-2-default" Nov 28 08:36:16 crc kubenswrapper[4946]: I1128 08:36:16.422624 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1" Nov 28 08:36:16 crc kubenswrapper[4946]: I1128 08:36:16.426286 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-zsz6p" Nov 28 08:36:16 crc kubenswrapper[4946]: I1128 08:36:16.439906 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1"] Nov 28 08:36:16 crc kubenswrapper[4946]: I1128 08:36:16.542622 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c76cc\" (UniqueName: \"kubernetes.io/projected/0ed8901f-67ce-4e49-bf6e-b1579594d44c-kube-api-access-c76cc\") pod \"mariadb-client-1\" (UID: \"0ed8901f-67ce-4e49-bf6e-b1579594d44c\") " pod="openstack/mariadb-client-1" Nov 28 08:36:16 crc kubenswrapper[4946]: I1128 08:36:16.645077 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c76cc\" (UniqueName: \"kubernetes.io/projected/0ed8901f-67ce-4e49-bf6e-b1579594d44c-kube-api-access-c76cc\") pod \"mariadb-client-1\" (UID: \"0ed8901f-67ce-4e49-bf6e-b1579594d44c\") " pod="openstack/mariadb-client-1" Nov 28 08:36:16 crc kubenswrapper[4946]: I1128 08:36:16.670747 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c76cc\" (UniqueName: \"kubernetes.io/projected/0ed8901f-67ce-4e49-bf6e-b1579594d44c-kube-api-access-c76cc\") pod \"mariadb-client-1\" (UID: \"0ed8901f-67ce-4e49-bf6e-b1579594d44c\") " pod="openstack/mariadb-client-1" Nov 28 08:36:16 crc kubenswrapper[4946]: I1128 08:36:16.748688 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1" Nov 28 08:36:17 crc kubenswrapper[4946]: I1128 08:36:17.382601 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1"] Nov 28 08:36:17 crc kubenswrapper[4946]: W1128 08:36:17.386783 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ed8901f_67ce_4e49_bf6e_b1579594d44c.slice/crio-8b3a940dea63a2b4a7e34b68e62f6735b078b716b73d535df268c496e7976b8f WatchSource:0}: Error finding container 8b3a940dea63a2b4a7e34b68e62f6735b078b716b73d535df268c496e7976b8f: Status 404 returned error can't find the container with id 8b3a940dea63a2b4a7e34b68e62f6735b078b716b73d535df268c496e7976b8f Nov 28 08:36:18 crc kubenswrapper[4946]: I1128 08:36:18.323538 4946 generic.go:334] "Generic (PLEG): container finished" podID="0ed8901f-67ce-4e49-bf6e-b1579594d44c" containerID="01ff3b96ae1bf69396b955f0256d826bba998ebec43e35f3a18ef6c136a79b89" exitCode=0 Nov 28 08:36:18 crc kubenswrapper[4946]: I1128 08:36:18.323695 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1" event={"ID":"0ed8901f-67ce-4e49-bf6e-b1579594d44c","Type":"ContainerDied","Data":"01ff3b96ae1bf69396b955f0256d826bba998ebec43e35f3a18ef6c136a79b89"} Nov 28 08:36:18 crc kubenswrapper[4946]: I1128 08:36:18.323974 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1" event={"ID":"0ed8901f-67ce-4e49-bf6e-b1579594d44c","Type":"ContainerStarted","Data":"8b3a940dea63a2b4a7e34b68e62f6735b078b716b73d535df268c496e7976b8f"} Nov 28 08:36:19 crc kubenswrapper[4946]: I1128 08:36:19.889310 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1" Nov 28 08:36:19 crc kubenswrapper[4946]: I1128 08:36:19.922539 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-1_0ed8901f-67ce-4e49-bf6e-b1579594d44c/mariadb-client-1/0.log" Nov 28 08:36:19 crc kubenswrapper[4946]: I1128 08:36:19.967388 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-1"] Nov 28 08:36:19 crc kubenswrapper[4946]: I1128 08:36:19.974189 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-1"] Nov 28 08:36:19 crc kubenswrapper[4946]: I1128 08:36:19.991897 4946 scope.go:117] "RemoveContainer" containerID="e8119d549e123f1e930654e36405e56510a1c4898160d24fa79c41d4ac87c927" Nov 28 08:36:19 crc kubenswrapper[4946]: E1128 08:36:19.992231 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:36:20 crc kubenswrapper[4946]: I1128 08:36:20.005074 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c76cc\" (UniqueName: \"kubernetes.io/projected/0ed8901f-67ce-4e49-bf6e-b1579594d44c-kube-api-access-c76cc\") pod \"0ed8901f-67ce-4e49-bf6e-b1579594d44c\" (UID: \"0ed8901f-67ce-4e49-bf6e-b1579594d44c\") " Nov 28 08:36:20 crc kubenswrapper[4946]: I1128 08:36:20.017725 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ed8901f-67ce-4e49-bf6e-b1579594d44c-kube-api-access-c76cc" (OuterVolumeSpecName: "kube-api-access-c76cc") pod "0ed8901f-67ce-4e49-bf6e-b1579594d44c" (UID: "0ed8901f-67ce-4e49-bf6e-b1579594d44c"). InnerVolumeSpecName "kube-api-access-c76cc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:36:20 crc kubenswrapper[4946]: I1128 08:36:20.106499 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c76cc\" (UniqueName: \"kubernetes.io/projected/0ed8901f-67ce-4e49-bf6e-b1579594d44c-kube-api-access-c76cc\") on node \"crc\" DevicePath \"\"" Nov 28 08:36:20 crc kubenswrapper[4946]: I1128 08:36:20.341995 4946 scope.go:117] "RemoveContainer" containerID="01ff3b96ae1bf69396b955f0256d826bba998ebec43e35f3a18ef6c136a79b89" Nov 28 08:36:20 crc kubenswrapper[4946]: I1128 08:36:20.342093 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1" Nov 28 08:36:20 crc kubenswrapper[4946]: I1128 08:36:20.521651 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-4-default"] Nov 28 08:36:20 crc kubenswrapper[4946]: E1128 08:36:20.522029 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ed8901f-67ce-4e49-bf6e-b1579594d44c" containerName="mariadb-client-1" Nov 28 08:36:20 crc kubenswrapper[4946]: I1128 08:36:20.522047 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ed8901f-67ce-4e49-bf6e-b1579594d44c" containerName="mariadb-client-1" Nov 28 08:36:20 crc kubenswrapper[4946]: I1128 08:36:20.522192 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ed8901f-67ce-4e49-bf6e-b1579594d44c" containerName="mariadb-client-1" Nov 28 08:36:20 crc kubenswrapper[4946]: I1128 08:36:20.527373 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-default" Nov 28 08:36:20 crc kubenswrapper[4946]: I1128 08:36:20.530525 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-zsz6p" Nov 28 08:36:20 crc kubenswrapper[4946]: I1128 08:36:20.540207 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-4-default"] Nov 28 08:36:20 crc kubenswrapper[4946]: I1128 08:36:20.613128 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdpbd\" (UniqueName: \"kubernetes.io/projected/ad9b528c-e9cd-409d-a9de-e3ce7d48d7ca-kube-api-access-hdpbd\") pod \"mariadb-client-4-default\" (UID: \"ad9b528c-e9cd-409d-a9de-e3ce7d48d7ca\") " pod="openstack/mariadb-client-4-default" Nov 28 08:36:20 crc kubenswrapper[4946]: I1128 08:36:20.714373 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdpbd\" (UniqueName: \"kubernetes.io/projected/ad9b528c-e9cd-409d-a9de-e3ce7d48d7ca-kube-api-access-hdpbd\") pod \"mariadb-client-4-default\" (UID: \"ad9b528c-e9cd-409d-a9de-e3ce7d48d7ca\") " pod="openstack/mariadb-client-4-default" Nov 28 08:36:20 crc kubenswrapper[4946]: I1128 08:36:20.740327 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdpbd\" (UniqueName: \"kubernetes.io/projected/ad9b528c-e9cd-409d-a9de-e3ce7d48d7ca-kube-api-access-hdpbd\") pod \"mariadb-client-4-default\" (UID: \"ad9b528c-e9cd-409d-a9de-e3ce7d48d7ca\") " pod="openstack/mariadb-client-4-default" Nov 28 08:36:20 crc kubenswrapper[4946]: I1128 08:36:20.855714 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-4-default" Nov 28 08:36:21 crc kubenswrapper[4946]: I1128 08:36:21.171225 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-4-default"] Nov 28 08:36:21 crc kubenswrapper[4946]: W1128 08:36:21.174447 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad9b528c_e9cd_409d_a9de_e3ce7d48d7ca.slice/crio-25a62e4ea19f048426d4999a34282166973ae4cdf3d2c15f718ed490ac84722c WatchSource:0}: Error finding container 25a62e4ea19f048426d4999a34282166973ae4cdf3d2c15f718ed490ac84722c: Status 404 returned error can't find the container with id 25a62e4ea19f048426d4999a34282166973ae4cdf3d2c15f718ed490ac84722c Nov 28 08:36:21 crc kubenswrapper[4946]: I1128 08:36:21.353119 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-4-default" event={"ID":"ad9b528c-e9cd-409d-a9de-e3ce7d48d7ca","Type":"ContainerStarted","Data":"25a62e4ea19f048426d4999a34282166973ae4cdf3d2c15f718ed490ac84722c"} Nov 28 08:36:22 crc kubenswrapper[4946]: I1128 08:36:22.009789 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ed8901f-67ce-4e49-bf6e-b1579594d44c" path="/var/lib/kubelet/pods/0ed8901f-67ce-4e49-bf6e-b1579594d44c/volumes" Nov 28 08:36:22 crc kubenswrapper[4946]: I1128 08:36:22.370811 4946 generic.go:334] "Generic (PLEG): container finished" podID="ad9b528c-e9cd-409d-a9de-e3ce7d48d7ca" containerID="ecf2edeaf43bfaf06ced1f733132351644570e2db61ce6c74cdff51f9eb9cee2" exitCode=0 Nov 28 08:36:22 crc kubenswrapper[4946]: I1128 08:36:22.370901 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-4-default" event={"ID":"ad9b528c-e9cd-409d-a9de-e3ce7d48d7ca","Type":"ContainerDied","Data":"ecf2edeaf43bfaf06ced1f733132351644570e2db61ce6c74cdff51f9eb9cee2"} Nov 28 08:36:23 crc kubenswrapper[4946]: I1128 08:36:23.951248 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-default" Nov 28 08:36:23 crc kubenswrapper[4946]: I1128 08:36:23.972289 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdpbd\" (UniqueName: \"kubernetes.io/projected/ad9b528c-e9cd-409d-a9de-e3ce7d48d7ca-kube-api-access-hdpbd\") pod \"ad9b528c-e9cd-409d-a9de-e3ce7d48d7ca\" (UID: \"ad9b528c-e9cd-409d-a9de-e3ce7d48d7ca\") " Nov 28 08:36:23 crc kubenswrapper[4946]: I1128 08:36:23.979542 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-4-default_ad9b528c-e9cd-409d-a9de-e3ce7d48d7ca/mariadb-client-4-default/0.log" Nov 28 08:36:23 crc kubenswrapper[4946]: I1128 08:36:23.984978 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad9b528c-e9cd-409d-a9de-e3ce7d48d7ca-kube-api-access-hdpbd" (OuterVolumeSpecName: "kube-api-access-hdpbd") pod "ad9b528c-e9cd-409d-a9de-e3ce7d48d7ca" (UID: "ad9b528c-e9cd-409d-a9de-e3ce7d48d7ca"). InnerVolumeSpecName "kube-api-access-hdpbd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:36:24 crc kubenswrapper[4946]: I1128 08:36:24.022781 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-4-default"] Nov 28 08:36:24 crc kubenswrapper[4946]: I1128 08:36:24.035628 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-4-default"] Nov 28 08:36:24 crc kubenswrapper[4946]: I1128 08:36:24.074875 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdpbd\" (UniqueName: \"kubernetes.io/projected/ad9b528c-e9cd-409d-a9de-e3ce7d48d7ca-kube-api-access-hdpbd\") on node \"crc\" DevicePath \"\"" Nov 28 08:36:24 crc kubenswrapper[4946]: I1128 08:36:24.399435 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25a62e4ea19f048426d4999a34282166973ae4cdf3d2c15f718ed490ac84722c" Nov 28 08:36:24 crc kubenswrapper[4946]: I1128 08:36:24.399571 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-default" Nov 28 08:36:26 crc kubenswrapper[4946]: I1128 08:36:26.004648 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad9b528c-e9cd-409d-a9de-e3ce7d48d7ca" path="/var/lib/kubelet/pods/ad9b528c-e9cd-409d-a9de-e3ce7d48d7ca/volumes" Nov 28 08:36:27 crc kubenswrapper[4946]: I1128 08:36:27.749814 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-5-default"] Nov 28 08:36:27 crc kubenswrapper[4946]: E1128 08:36:27.750529 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad9b528c-e9cd-409d-a9de-e3ce7d48d7ca" containerName="mariadb-client-4-default" Nov 28 08:36:27 crc kubenswrapper[4946]: I1128 08:36:27.750547 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad9b528c-e9cd-409d-a9de-e3ce7d48d7ca" containerName="mariadb-client-4-default" Nov 28 08:36:27 crc kubenswrapper[4946]: I1128 08:36:27.750755 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad9b528c-e9cd-409d-a9de-e3ce7d48d7ca" containerName="mariadb-client-4-default" Nov 28 08:36:27 crc kubenswrapper[4946]: I1128 08:36:27.751374 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-5-default" Nov 28 08:36:27 crc kubenswrapper[4946]: I1128 08:36:27.753262 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-zsz6p" Nov 28 08:36:27 crc kubenswrapper[4946]: I1128 08:36:27.762217 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-5-default"] Nov 28 08:36:27 crc kubenswrapper[4946]: I1128 08:36:27.841287 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8q8t\" (UniqueName: \"kubernetes.io/projected/8f667e10-4233-4c64-8187-2444672efee5-kube-api-access-p8q8t\") pod \"mariadb-client-5-default\" (UID: \"8f667e10-4233-4c64-8187-2444672efee5\") " pod="openstack/mariadb-client-5-default" Nov 28 08:36:27 crc kubenswrapper[4946]: I1128 08:36:27.942609 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8q8t\" (UniqueName: \"kubernetes.io/projected/8f667e10-4233-4c64-8187-2444672efee5-kube-api-access-p8q8t\") pod \"mariadb-client-5-default\" (UID: \"8f667e10-4233-4c64-8187-2444672efee5\") " pod="openstack/mariadb-client-5-default" Nov 28 08:36:27 crc kubenswrapper[4946]: I1128 08:36:27.963139 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8q8t\" (UniqueName: \"kubernetes.io/projected/8f667e10-4233-4c64-8187-2444672efee5-kube-api-access-p8q8t\") pod \"mariadb-client-5-default\" (UID: \"8f667e10-4233-4c64-8187-2444672efee5\") " pod="openstack/mariadb-client-5-default" Nov 28 08:36:28 crc kubenswrapper[4946]: I1128 08:36:28.068725 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-5-default" Nov 28 08:36:28 crc kubenswrapper[4946]: I1128 08:36:28.600337 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-5-default"] Nov 28 08:36:28 crc kubenswrapper[4946]: W1128 08:36:28.602980 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f667e10_4233_4c64_8187_2444672efee5.slice/crio-9ce565e3438fbfc5cc308fe56fdd8c27653772de732bc6ac6aa4cbef506d1023 WatchSource:0}: Error finding container 9ce565e3438fbfc5cc308fe56fdd8c27653772de732bc6ac6aa4cbef506d1023: Status 404 returned error can't find the container with id 9ce565e3438fbfc5cc308fe56fdd8c27653772de732bc6ac6aa4cbef506d1023 Nov 28 08:36:29 crc kubenswrapper[4946]: I1128 08:36:29.470617 4946 generic.go:334] "Generic (PLEG): container finished" podID="8f667e10-4233-4c64-8187-2444672efee5" containerID="1e645f4441a94a730eb44e4ea3a6e5922fd739984769a3ff875163f3684154e9" exitCode=0 Nov 28 08:36:29 crc kubenswrapper[4946]: I1128 08:36:29.470973 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-5-default" event={"ID":"8f667e10-4233-4c64-8187-2444672efee5","Type":"ContainerDied","Data":"1e645f4441a94a730eb44e4ea3a6e5922fd739984769a3ff875163f3684154e9"} Nov 28 08:36:29 crc kubenswrapper[4946]: I1128 08:36:29.471706 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-5-default" event={"ID":"8f667e10-4233-4c64-8187-2444672efee5","Type":"ContainerStarted","Data":"9ce565e3438fbfc5cc308fe56fdd8c27653772de732bc6ac6aa4cbef506d1023"} Nov 28 08:36:30 crc kubenswrapper[4946]: I1128 08:36:30.907879 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-5-default" Nov 28 08:36:30 crc kubenswrapper[4946]: I1128 08:36:30.934472 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-5-default_8f667e10-4233-4c64-8187-2444672efee5/mariadb-client-5-default/0.log" Nov 28 08:36:30 crc kubenswrapper[4946]: I1128 08:36:30.967840 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-5-default"] Nov 28 08:36:30 crc kubenswrapper[4946]: I1128 08:36:30.975185 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-5-default"] Nov 28 08:36:30 crc kubenswrapper[4946]: I1128 08:36:30.988382 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8q8t\" (UniqueName: \"kubernetes.io/projected/8f667e10-4233-4c64-8187-2444672efee5-kube-api-access-p8q8t\") pod \"8f667e10-4233-4c64-8187-2444672efee5\" (UID: \"8f667e10-4233-4c64-8187-2444672efee5\") " Nov 28 08:36:30 crc kubenswrapper[4946]: I1128 08:36:30.994297 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f667e10-4233-4c64-8187-2444672efee5-kube-api-access-p8q8t" (OuterVolumeSpecName: "kube-api-access-p8q8t") pod "8f667e10-4233-4c64-8187-2444672efee5" (UID: "8f667e10-4233-4c64-8187-2444672efee5"). InnerVolumeSpecName "kube-api-access-p8q8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:36:31 crc kubenswrapper[4946]: I1128 08:36:31.091142 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8q8t\" (UniqueName: \"kubernetes.io/projected/8f667e10-4233-4c64-8187-2444672efee5-kube-api-access-p8q8t\") on node \"crc\" DevicePath \"\"" Nov 28 08:36:31 crc kubenswrapper[4946]: I1128 08:36:31.154232 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-6-default"] Nov 28 08:36:31 crc kubenswrapper[4946]: E1128 08:36:31.154569 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f667e10-4233-4c64-8187-2444672efee5" containerName="mariadb-client-5-default" Nov 28 08:36:31 crc kubenswrapper[4946]: I1128 08:36:31.154582 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f667e10-4233-4c64-8187-2444672efee5" containerName="mariadb-client-5-default" Nov 28 08:36:31 crc kubenswrapper[4946]: I1128 08:36:31.154736 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f667e10-4233-4c64-8187-2444672efee5" containerName="mariadb-client-5-default" Nov 28 08:36:31 crc kubenswrapper[4946]: I1128 08:36:31.155230 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-6-default" Nov 28 08:36:31 crc kubenswrapper[4946]: I1128 08:36:31.184569 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-6-default"] Nov 28 08:36:31 crc kubenswrapper[4946]: I1128 08:36:31.192091 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j9d8\" (UniqueName: \"kubernetes.io/projected/b523ab31-abb4-4b27-ac49-b0a1656eb6ec-kube-api-access-2j9d8\") pod \"mariadb-client-6-default\" (UID: \"b523ab31-abb4-4b27-ac49-b0a1656eb6ec\") " pod="openstack/mariadb-client-6-default" Nov 28 08:36:31 crc kubenswrapper[4946]: I1128 08:36:31.293181 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2j9d8\" (UniqueName: \"kubernetes.io/projected/b523ab31-abb4-4b27-ac49-b0a1656eb6ec-kube-api-access-2j9d8\") pod \"mariadb-client-6-default\" (UID: \"b523ab31-abb4-4b27-ac49-b0a1656eb6ec\") " pod="openstack/mariadb-client-6-default" Nov 28 08:36:31 crc kubenswrapper[4946]: I1128 08:36:31.310191 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j9d8\" (UniqueName: \"kubernetes.io/projected/b523ab31-abb4-4b27-ac49-b0a1656eb6ec-kube-api-access-2j9d8\") pod \"mariadb-client-6-default\" (UID: \"b523ab31-abb4-4b27-ac49-b0a1656eb6ec\") " pod="openstack/mariadb-client-6-default" Nov 28 08:36:31 crc kubenswrapper[4946]: I1128 08:36:31.489967 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ce565e3438fbfc5cc308fe56fdd8c27653772de732bc6ac6aa4cbef506d1023" Nov 28 08:36:31 crc kubenswrapper[4946]: I1128 08:36:31.490103 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-5-default" Nov 28 08:36:31 crc kubenswrapper[4946]: I1128 08:36:31.502082 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-6-default" Nov 28 08:36:32 crc kubenswrapper[4946]: I1128 08:36:32.009427 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f667e10-4233-4c64-8187-2444672efee5" path="/var/lib/kubelet/pods/8f667e10-4233-4c64-8187-2444672efee5/volumes" Nov 28 08:36:32 crc kubenswrapper[4946]: I1128 08:36:32.093060 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-6-default"] Nov 28 08:36:32 crc kubenswrapper[4946]: W1128 08:36:32.104235 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb523ab31_abb4_4b27_ac49_b0a1656eb6ec.slice/crio-885cf64e541af7e17649954accda093bc8c9c689c65da74143289421eab762f5 WatchSource:0}: Error finding container 885cf64e541af7e17649954accda093bc8c9c689c65da74143289421eab762f5: Status 404 returned error can't find the container with id 885cf64e541af7e17649954accda093bc8c9c689c65da74143289421eab762f5 Nov 28 08:36:32 crc kubenswrapper[4946]: I1128 08:36:32.512896 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-default" event={"ID":"b523ab31-abb4-4b27-ac49-b0a1656eb6ec","Type":"ContainerStarted","Data":"5da6e7148e8daa4d413de96670ae6bd279431d8f5422e666c955b9cd2cb73599"} Nov 28 08:36:32 crc kubenswrapper[4946]: I1128 08:36:32.513380 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-default" event={"ID":"b523ab31-abb4-4b27-ac49-b0a1656eb6ec","Type":"ContainerStarted","Data":"885cf64e541af7e17649954accda093bc8c9c689c65da74143289421eab762f5"} Nov 28 08:36:32 crc kubenswrapper[4946]: I1128 08:36:32.539871 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client-6-default" podStartSLOduration=1.53984454 podStartE2EDuration="1.53984454s" podCreationTimestamp="2025-11-28 08:36:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 08:36:32.533121604 +0000 UTC m=+6246.911186745" watchObservedRunningTime="2025-11-28 08:36:32.53984454 +0000 UTC m=+6246.917909681" Nov 28 08:36:33 crc kubenswrapper[4946]: I1128 08:36:33.525006 4946 generic.go:334] "Generic (PLEG): container finished" podID="b523ab31-abb4-4b27-ac49-b0a1656eb6ec" containerID="5da6e7148e8daa4d413de96670ae6bd279431d8f5422e666c955b9cd2cb73599" exitCode=1 Nov 28 08:36:33 crc kubenswrapper[4946]: I1128 08:36:33.525088 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-default" event={"ID":"b523ab31-abb4-4b27-ac49-b0a1656eb6ec","Type":"ContainerDied","Data":"5da6e7148e8daa4d413de96670ae6bd279431d8f5422e666c955b9cd2cb73599"} Nov 28 08:36:33 crc kubenswrapper[4946]: I1128 08:36:33.992096 4946 scope.go:117] "RemoveContainer" containerID="e8119d549e123f1e930654e36405e56510a1c4898160d24fa79c41d4ac87c927" Nov 28 08:36:33 crc kubenswrapper[4946]: E1128 08:36:33.994554 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:36:34 crc kubenswrapper[4946]: I1128 08:36:34.986782 4946 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/mariadb-client-6-default" Nov 28 08:36:35 crc kubenswrapper[4946]: I1128 08:36:35.034667 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-6-default"] Nov 28 08:36:35 crc kubenswrapper[4946]: I1128 08:36:35.047329 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-6-default"] Nov 28 08:36:35 crc kubenswrapper[4946]: I1128 08:36:35.053839 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2j9d8\" (UniqueName: \"kubernetes.io/projected/b523ab31-abb4-4b27-ac49-b0a1656eb6ec-kube-api-access-2j9d8\") pod \"b523ab31-abb4-4b27-ac49-b0a1656eb6ec\" (UID: \"b523ab31-abb4-4b27-ac49-b0a1656eb6ec\") " Nov 28 08:36:35 crc kubenswrapper[4946]: I1128 08:36:35.059660 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b523ab31-abb4-4b27-ac49-b0a1656eb6ec-kube-api-access-2j9d8" (OuterVolumeSpecName: "kube-api-access-2j9d8") pod "b523ab31-abb4-4b27-ac49-b0a1656eb6ec" (UID: "b523ab31-abb4-4b27-ac49-b0a1656eb6ec"). InnerVolumeSpecName "kube-api-access-2j9d8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:36:35 crc kubenswrapper[4946]: I1128 08:36:35.155553 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2j9d8\" (UniqueName: \"kubernetes.io/projected/b523ab31-abb4-4b27-ac49-b0a1656eb6ec-kube-api-access-2j9d8\") on node \"crc\" DevicePath \"\"" Nov 28 08:36:35 crc kubenswrapper[4946]: I1128 08:36:35.227930 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-7-default"] Nov 28 08:36:35 crc kubenswrapper[4946]: E1128 08:36:35.228306 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b523ab31-abb4-4b27-ac49-b0a1656eb6ec" containerName="mariadb-client-6-default" Nov 28 08:36:35 crc kubenswrapper[4946]: I1128 08:36:35.228320 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="b523ab31-abb4-4b27-ac49-b0a1656eb6ec" containerName="mariadb-client-6-default" Nov 28 08:36:35 crc kubenswrapper[4946]: I1128 08:36:35.228477 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="b523ab31-abb4-4b27-ac49-b0a1656eb6ec" containerName="mariadb-client-6-default" Nov 28 08:36:35 crc kubenswrapper[4946]: I1128 08:36:35.228996 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-7-default" Nov 28 08:36:35 crc kubenswrapper[4946]: I1128 08:36:35.244588 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-7-default"] Nov 28 08:36:35 crc kubenswrapper[4946]: I1128 08:36:35.256274 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txxr8\" (UniqueName: \"kubernetes.io/projected/94e807f5-e7cb-4ea9-8285-fc7934f4521f-kube-api-access-txxr8\") pod \"mariadb-client-7-default\" (UID: \"94e807f5-e7cb-4ea9-8285-fc7934f4521f\") " pod="openstack/mariadb-client-7-default" Nov 28 08:36:35 crc kubenswrapper[4946]: I1128 08:36:35.357582 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txxr8\" (UniqueName: \"kubernetes.io/projected/94e807f5-e7cb-4ea9-8285-fc7934f4521f-kube-api-access-txxr8\") pod \"mariadb-client-7-default\" (UID: \"94e807f5-e7cb-4ea9-8285-fc7934f4521f\") " pod="openstack/mariadb-client-7-default" Nov 28 08:36:35 crc kubenswrapper[4946]: I1128 08:36:35.375884 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txxr8\" (UniqueName: \"kubernetes.io/projected/94e807f5-e7cb-4ea9-8285-fc7934f4521f-kube-api-access-txxr8\") pod \"mariadb-client-7-default\" (UID: \"94e807f5-e7cb-4ea9-8285-fc7934f4521f\") " pod="openstack/mariadb-client-7-default" Nov 28 08:36:35 crc kubenswrapper[4946]: I1128 08:36:35.548774 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="885cf64e541af7e17649954accda093bc8c9c689c65da74143289421eab762f5" Nov 28 08:36:35 crc kubenswrapper[4946]: I1128 08:36:35.548828 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-6-default" Nov 28 08:36:35 crc kubenswrapper[4946]: I1128 08:36:35.556521 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-7-default" Nov 28 08:36:35 crc kubenswrapper[4946]: I1128 08:36:35.999442 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b523ab31-abb4-4b27-ac49-b0a1656eb6ec" path="/var/lib/kubelet/pods/b523ab31-abb4-4b27-ac49-b0a1656eb6ec/volumes" Nov 28 08:36:36 crc kubenswrapper[4946]: I1128 08:36:36.000439 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-7-default"] Nov 28 08:36:36 crc kubenswrapper[4946]: I1128 08:36:36.557374 4946 generic.go:334] "Generic (PLEG): container finished" podID="94e807f5-e7cb-4ea9-8285-fc7934f4521f" containerID="12d9c7d74de0e3e67d2b8057678039576e371444c8149d23a80534a594b9b7a0" exitCode=0 Nov 28 08:36:36 crc kubenswrapper[4946]: I1128 08:36:36.557426 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-7-default" event={"ID":"94e807f5-e7cb-4ea9-8285-fc7934f4521f","Type":"ContainerDied","Data":"12d9c7d74de0e3e67d2b8057678039576e371444c8149d23a80534a594b9b7a0"} Nov 28 08:36:36 crc kubenswrapper[4946]: I1128 08:36:36.557497 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-7-default" event={"ID":"94e807f5-e7cb-4ea9-8285-fc7934f4521f","Type":"ContainerStarted","Data":"9e3d910432bf22c184ab745f81a51da57385759620b658eb48db1016217cc27e"} Nov 28 08:36:37 crc kubenswrapper[4946]: I1128 08:36:37.944350 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-7-default" Nov 28 08:36:37 crc kubenswrapper[4946]: I1128 08:36:37.966099 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-7-default_94e807f5-e7cb-4ea9-8285-fc7934f4521f/mariadb-client-7-default/0.log" Nov 28 08:36:37 crc kubenswrapper[4946]: I1128 08:36:37.995453 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txxr8\" (UniqueName: \"kubernetes.io/projected/94e807f5-e7cb-4ea9-8285-fc7934f4521f-kube-api-access-txxr8\") pod \"94e807f5-e7cb-4ea9-8285-fc7934f4521f\" (UID: \"94e807f5-e7cb-4ea9-8285-fc7934f4521f\") " Nov 28 08:36:38 crc kubenswrapper[4946]: I1128 08:36:38.003816 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94e807f5-e7cb-4ea9-8285-fc7934f4521f-kube-api-access-txxr8" (OuterVolumeSpecName: "kube-api-access-txxr8") pod "94e807f5-e7cb-4ea9-8285-fc7934f4521f" (UID: "94e807f5-e7cb-4ea9-8285-fc7934f4521f"). InnerVolumeSpecName "kube-api-access-txxr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:36:38 crc kubenswrapper[4946]: I1128 08:36:38.024270 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-7-default"] Nov 28 08:36:38 crc kubenswrapper[4946]: I1128 08:36:38.028095 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-7-default"] Nov 28 08:36:38 crc kubenswrapper[4946]: I1128 08:36:38.099372 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txxr8\" (UniqueName: \"kubernetes.io/projected/94e807f5-e7cb-4ea9-8285-fc7934f4521f-kube-api-access-txxr8\") on node \"crc\" DevicePath \"\"" Nov 28 08:36:38 crc kubenswrapper[4946]: I1128 08:36:38.202886 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-2"] Nov 28 08:36:38 crc kubenswrapper[4946]: E1128 08:36:38.203271 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94e807f5-e7cb-4ea9-8285-fc7934f4521f" containerName="mariadb-client-7-default" Nov 28 08:36:38 crc kubenswrapper[4946]: I1128 08:36:38.203285 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="94e807f5-e7cb-4ea9-8285-fc7934f4521f" containerName="mariadb-client-7-default" Nov 28 08:36:38 crc kubenswrapper[4946]: I1128 08:36:38.203495 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="94e807f5-e7cb-4ea9-8285-fc7934f4521f" containerName="mariadb-client-7-default" Nov 28 08:36:38 crc kubenswrapper[4946]: I1128 08:36:38.204119 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-2" Nov 28 08:36:38 crc kubenswrapper[4946]: I1128 08:36:38.210651 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2"] Nov 28 08:36:38 crc kubenswrapper[4946]: I1128 08:36:38.303249 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xw5p\" (UniqueName: \"kubernetes.io/projected/8460f551-1f77-46f0-aee3-62847b656edd-kube-api-access-6xw5p\") pod \"mariadb-client-2\" (UID: \"8460f551-1f77-46f0-aee3-62847b656edd\") " pod="openstack/mariadb-client-2" Nov 28 08:36:38 crc kubenswrapper[4946]: I1128 08:36:38.405435 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xw5p\" (UniqueName: \"kubernetes.io/projected/8460f551-1f77-46f0-aee3-62847b656edd-kube-api-access-6xw5p\") pod \"mariadb-client-2\" (UID: \"8460f551-1f77-46f0-aee3-62847b656edd\") " pod="openstack/mariadb-client-2" Nov 28 08:36:38 crc kubenswrapper[4946]: I1128 08:36:38.424173 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xw5p\" (UniqueName: \"kubernetes.io/projected/8460f551-1f77-46f0-aee3-62847b656edd-kube-api-access-6xw5p\") pod \"mariadb-client-2\" (UID: \"8460f551-1f77-46f0-aee3-62847b656edd\") " pod="openstack/mariadb-client-2" Nov 28 08:36:38 crc kubenswrapper[4946]: I1128 08:36:38.537311 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2" Nov 28 08:36:38 crc kubenswrapper[4946]: I1128 08:36:38.593266 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e3d910432bf22c184ab745f81a51da57385759620b658eb48db1016217cc27e" Nov 28 08:36:38 crc kubenswrapper[4946]: I1128 08:36:38.593332 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-7-default" Nov 28 08:36:39 crc kubenswrapper[4946]: I1128 08:36:39.117691 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2"] Nov 28 08:36:39 crc kubenswrapper[4946]: I1128 08:36:39.606908 4946 generic.go:334] "Generic (PLEG): container finished" podID="8460f551-1f77-46f0-aee3-62847b656edd" containerID="274b20d235eedc6dc028dd1597ada530887dc74509736d31746052c97825e531" exitCode=0 Nov 28 08:36:39 crc kubenswrapper[4946]: I1128 08:36:39.607036 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2" event={"ID":"8460f551-1f77-46f0-aee3-62847b656edd","Type":"ContainerDied","Data":"274b20d235eedc6dc028dd1597ada530887dc74509736d31746052c97825e531"} Nov 28 08:36:39 crc kubenswrapper[4946]: I1128 08:36:39.607420 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2" event={"ID":"8460f551-1f77-46f0-aee3-62847b656edd","Type":"ContainerStarted","Data":"ac1cc4ac68f299fa4ee43724e63260e1edcc96f67e4eb00ee9bf6a6ed48d081c"} Nov 28 08:36:40 crc kubenswrapper[4946]: I1128 08:36:40.005785 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94e807f5-e7cb-4ea9-8285-fc7934f4521f" path="/var/lib/kubelet/pods/94e807f5-e7cb-4ea9-8285-fc7934f4521f/volumes" Nov 28 08:36:41 crc kubenswrapper[4946]: I1128 08:36:41.023877 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-2" Nov 28 08:36:41 crc kubenswrapper[4946]: I1128 08:36:41.045989 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-2_8460f551-1f77-46f0-aee3-62847b656edd/mariadb-client-2/0.log" Nov 28 08:36:41 crc kubenswrapper[4946]: I1128 08:36:41.082919 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-2"] Nov 28 08:36:41 crc kubenswrapper[4946]: I1128 08:36:41.087313 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-2"] Nov 28 08:36:41 crc kubenswrapper[4946]: I1128 08:36:41.144655 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xw5p\" (UniqueName: \"kubernetes.io/projected/8460f551-1f77-46f0-aee3-62847b656edd-kube-api-access-6xw5p\") pod \"8460f551-1f77-46f0-aee3-62847b656edd\" (UID: \"8460f551-1f77-46f0-aee3-62847b656edd\") " Nov 28 08:36:41 crc kubenswrapper[4946]: I1128 08:36:41.152209 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8460f551-1f77-46f0-aee3-62847b656edd-kube-api-access-6xw5p" (OuterVolumeSpecName: "kube-api-access-6xw5p") pod "8460f551-1f77-46f0-aee3-62847b656edd" (UID: "8460f551-1f77-46f0-aee3-62847b656edd"). InnerVolumeSpecName "kube-api-access-6xw5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:36:41 crc kubenswrapper[4946]: I1128 08:36:41.275166 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xw5p\" (UniqueName: \"kubernetes.io/projected/8460f551-1f77-46f0-aee3-62847b656edd-kube-api-access-6xw5p\") on node \"crc\" DevicePath \"\"" Nov 28 08:36:41 crc kubenswrapper[4946]: I1128 08:36:41.629877 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac1cc4ac68f299fa4ee43724e63260e1edcc96f67e4eb00ee9bf6a6ed48d081c" Nov 28 08:36:41 crc kubenswrapper[4946]: I1128 08:36:41.629981 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-2" Nov 28 08:36:42 crc kubenswrapper[4946]: I1128 08:36:42.017456 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8460f551-1f77-46f0-aee3-62847b656edd" path="/var/lib/kubelet/pods/8460f551-1f77-46f0-aee3-62847b656edd/volumes" Nov 28 08:36:46 crc kubenswrapper[4946]: I1128 08:36:46.990166 4946 scope.go:117] "RemoveContainer" containerID="e8119d549e123f1e930654e36405e56510a1c4898160d24fa79c41d4ac87c927" Nov 28 08:36:46 crc kubenswrapper[4946]: E1128 08:36:46.991225 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:36:59 crc kubenswrapper[4946]: I1128 08:36:59.989981 4946 scope.go:117] "RemoveContainer" containerID="e8119d549e123f1e930654e36405e56510a1c4898160d24fa79c41d4ac87c927" Nov 28 08:37:00 crc kubenswrapper[4946]: I1128 08:37:00.856176 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerStarted","Data":"75dc0ef0b0319332dbc8d06310fbaa445e6b0fd2c2b7e9299aaecc4276d76da3"} Nov 28 08:37:35 crc kubenswrapper[4946]: I1128 08:37:35.332357 4946 scope.go:117] "RemoveContainer" containerID="e95af5789fb71f7aec788985e4fc9087bb14bd4ee0fb9f9dabbb781aa97948c2" Nov 28 08:38:20 crc kubenswrapper[4946]: I1128 08:38:20.515881 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2kqtl"] Nov 28 08:38:20 crc kubenswrapper[4946]: E1128 08:38:20.517161 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8460f551-1f77-46f0-aee3-62847b656edd" containerName="mariadb-client-2" Nov 28 08:38:20 crc kubenswrapper[4946]: I1128 08:38:20.517195 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="8460f551-1f77-46f0-aee3-62847b656edd" containerName="mariadb-client-2" Nov 28 08:38:20 crc kubenswrapper[4946]: I1128 08:38:20.522295 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="8460f551-1f77-46f0-aee3-62847b656edd" containerName="mariadb-client-2" Nov 28 08:38:20 crc kubenswrapper[4946]: I1128 08:38:20.524081 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2kqtl" Nov 28 08:38:20 crc kubenswrapper[4946]: I1128 08:38:20.529269 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2kqtl"] Nov 28 08:38:20 crc kubenswrapper[4946]: I1128 08:38:20.682732 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edc9c85f-c248-4156-b5f8-3379bd7824c5-catalog-content\") pod \"community-operators-2kqtl\" (UID: \"edc9c85f-c248-4156-b5f8-3379bd7824c5\") " pod="openshift-marketplace/community-operators-2kqtl" Nov 28 08:38:20 crc kubenswrapper[4946]: I1128 08:38:20.682826 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twxq8\" (UniqueName: \"kubernetes.io/projected/edc9c85f-c248-4156-b5f8-3379bd7824c5-kube-api-access-twxq8\") pod \"community-operators-2kqtl\" (UID: \"edc9c85f-c248-4156-b5f8-3379bd7824c5\") " pod="openshift-marketplace/community-operators-2kqtl" Nov 28 08:38:20 crc kubenswrapper[4946]: I1128 08:38:20.682878 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edc9c85f-c248-4156-b5f8-3379bd7824c5-utilities\") pod \"community-operators-2kqtl\" (UID: \"edc9c85f-c248-4156-b5f8-3379bd7824c5\") " pod="openshift-marketplace/community-operators-2kqtl" Nov 28 08:38:20 crc kubenswrapper[4946]: I1128 08:38:20.784775 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edc9c85f-c248-4156-b5f8-3379bd7824c5-catalog-content\") pod \"community-operators-2kqtl\" (UID: \"edc9c85f-c248-4156-b5f8-3379bd7824c5\") " pod="openshift-marketplace/community-operators-2kqtl" Nov 28 08:38:20 crc kubenswrapper[4946]: I1128 08:38:20.784844 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twxq8\" (UniqueName: \"kubernetes.io/projected/edc9c85f-c248-4156-b5f8-3379bd7824c5-kube-api-access-twxq8\") pod \"community-operators-2kqtl\" (UID: \"edc9c85f-c248-4156-b5f8-3379bd7824c5\") " pod="openshift-marketplace/community-operators-2kqtl" Nov 28 08:38:20 crc kubenswrapper[4946]: I1128 08:38:20.784871 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edc9c85f-c248-4156-b5f8-3379bd7824c5-utilities\") pod \"community-operators-2kqtl\" (UID: \"edc9c85f-c248-4156-b5f8-3379bd7824c5\") " pod="openshift-marketplace/community-operators-2kqtl" Nov 28 08:38:20 crc kubenswrapper[4946]: I1128 08:38:20.785358 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edc9c85f-c248-4156-b5f8-3379bd7824c5-catalog-content\") pod \"community-operators-2kqtl\" (UID: \"edc9c85f-c248-4156-b5f8-3379bd7824c5\") " pod="openshift-marketplace/community-operators-2kqtl" Nov 28 08:38:20 crc kubenswrapper[4946]: I1128 08:38:20.785392 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edc9c85f-c248-4156-b5f8-3379bd7824c5-utilities\") pod \"community-operators-2kqtl\" (UID: \"edc9c85f-c248-4156-b5f8-3379bd7824c5\") " pod="openshift-marketplace/community-operators-2kqtl" Nov 28 08:38:20 crc kubenswrapper[4946]: I1128 08:38:20.808815 4946 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-twxq8\" (UniqueName: \"kubernetes.io/projected/edc9c85f-c248-4156-b5f8-3379bd7824c5-kube-api-access-twxq8\") pod \"community-operators-2kqtl\" (UID: \"edc9c85f-c248-4156-b5f8-3379bd7824c5\") " pod="openshift-marketplace/community-operators-2kqtl" Nov 28 08:38:20 crc kubenswrapper[4946]: I1128 08:38:20.861535 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2kqtl" Nov 28 08:38:21 crc kubenswrapper[4946]: I1128 08:38:21.333744 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2kqtl"] Nov 28 08:38:21 crc kubenswrapper[4946]: I1128 08:38:21.645702 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2kqtl" event={"ID":"edc9c85f-c248-4156-b5f8-3379bd7824c5","Type":"ContainerStarted","Data":"ef79bcb04229f4a8b80ef161eea252800b11963591362a784ee01837a65435f3"} Nov 28 08:38:21 crc kubenswrapper[4946]: I1128 08:38:21.646000 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2kqtl" event={"ID":"edc9c85f-c248-4156-b5f8-3379bd7824c5","Type":"ContainerStarted","Data":"985e9ddc717c20e280dd489c64d350cf6fd9ab9562588155fae9665559c46ffc"} Nov 28 08:38:22 crc kubenswrapper[4946]: I1128 08:38:22.657322 4946 generic.go:334] "Generic (PLEG): container finished" podID="edc9c85f-c248-4156-b5f8-3379bd7824c5" containerID="ef79bcb04229f4a8b80ef161eea252800b11963591362a784ee01837a65435f3" exitCode=0 Nov 28 08:38:22 crc kubenswrapper[4946]: I1128 08:38:22.657367 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2kqtl" event={"ID":"edc9c85f-c248-4156-b5f8-3379bd7824c5","Type":"ContainerDied","Data":"ef79bcb04229f4a8b80ef161eea252800b11963591362a784ee01837a65435f3"} Nov 28 08:38:23 crc kubenswrapper[4946]: I1128 08:38:23.672412 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2kqtl" event={"ID":"edc9c85f-c248-4156-b5f8-3379bd7824c5","Type":"ContainerStarted","Data":"2847176aa888ec8d5203c2d0cae78da6a3ff6ebfa58fa2d6ee30f4554a0d53ac"} Nov 28 08:38:24 crc kubenswrapper[4946]: I1128 08:38:24.686373 4946 generic.go:334] "Generic (PLEG): container finished" podID="edc9c85f-c248-4156-b5f8-3379bd7824c5" containerID="2847176aa888ec8d5203c2d0cae78da6a3ff6ebfa58fa2d6ee30f4554a0d53ac" exitCode=0 Nov 28 08:38:24 crc kubenswrapper[4946]: I1128 08:38:24.686443 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2kqtl" event={"ID":"edc9c85f-c248-4156-b5f8-3379bd7824c5","Type":"ContainerDied","Data":"2847176aa888ec8d5203c2d0cae78da6a3ff6ebfa58fa2d6ee30f4554a0d53ac"} Nov 28 08:38:25 crc kubenswrapper[4946]: I1128 08:38:25.695237 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2kqtl" event={"ID":"edc9c85f-c248-4156-b5f8-3379bd7824c5","Type":"ContainerStarted","Data":"c45c182be5b77897a96e65b18e4823989cf8c84c746b8c08588901bb57f67ec9"} Nov 28 08:38:25 crc kubenswrapper[4946]: I1128 08:38:25.725351 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2kqtl" podStartSLOduration=3.293096579 podStartE2EDuration="5.725322211s" podCreationTimestamp="2025-11-28 08:38:20 +0000 UTC" firstStartedPulling="2025-11-28 08:38:22.659050113 +0000 UTC m=+6357.037115264" lastFinishedPulling="2025-11-28 
08:38:25.091275745 +0000 UTC m=+6359.469340896" observedRunningTime="2025-11-28 08:38:25.717861126 +0000 UTC m=+6360.095926237" watchObservedRunningTime="2025-11-28 08:38:25.725322211 +0000 UTC m=+6360.103387362" Nov 28 08:38:30 crc kubenswrapper[4946]: I1128 08:38:30.862917 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2kqtl" Nov 28 08:38:30 crc kubenswrapper[4946]: I1128 08:38:30.863557 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2kqtl" Nov 28 08:38:30 crc kubenswrapper[4946]: I1128 08:38:30.925329 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2kqtl" Nov 28 08:38:31 crc kubenswrapper[4946]: I1128 08:38:31.806494 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2kqtl" Nov 28 08:38:32 crc kubenswrapper[4946]: I1128 08:38:32.503561 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2kqtl"] Nov 28 08:38:33 crc kubenswrapper[4946]: I1128 08:38:33.779656 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2kqtl" podUID="edc9c85f-c248-4156-b5f8-3379bd7824c5" containerName="registry-server" containerID="cri-o://c45c182be5b77897a96e65b18e4823989cf8c84c746b8c08588901bb57f67ec9" gracePeriod=2 Nov 28 08:38:34 crc kubenswrapper[4946]: I1128 08:38:34.792394 4946 generic.go:334] "Generic (PLEG): container finished" podID="edc9c85f-c248-4156-b5f8-3379bd7824c5" containerID="c45c182be5b77897a96e65b18e4823989cf8c84c746b8c08588901bb57f67ec9" exitCode=0 Nov 28 08:38:34 crc kubenswrapper[4946]: I1128 08:38:34.792527 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2kqtl" event={"ID":"edc9c85f-c248-4156-b5f8-3379bd7824c5","Type":"ContainerDied","Data":"c45c182be5b77897a96e65b18e4823989cf8c84c746b8c08588901bb57f67ec9"} Nov 28 08:38:34 crc kubenswrapper[4946]: I1128 08:38:34.911981 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2kqtl" Nov 28 08:38:35 crc kubenswrapper[4946]: I1128 08:38:35.036874 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edc9c85f-c248-4156-b5f8-3379bd7824c5-catalog-content\") pod \"edc9c85f-c248-4156-b5f8-3379bd7824c5\" (UID: \"edc9c85f-c248-4156-b5f8-3379bd7824c5\") " Nov 28 08:38:35 crc kubenswrapper[4946]: I1128 08:38:35.037349 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edc9c85f-c248-4156-b5f8-3379bd7824c5-utilities\") pod \"edc9c85f-c248-4156-b5f8-3379bd7824c5\" (UID: \"edc9c85f-c248-4156-b5f8-3379bd7824c5\") " Nov 28 08:38:35 crc kubenswrapper[4946]: I1128 08:38:35.037564 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twxq8\" (UniqueName: \"kubernetes.io/projected/edc9c85f-c248-4156-b5f8-3379bd7824c5-kube-api-access-twxq8\") pod \"edc9c85f-c248-4156-b5f8-3379bd7824c5\" (UID: \"edc9c85f-c248-4156-b5f8-3379bd7824c5\") " Nov 28 08:38:35 crc kubenswrapper[4946]: I1128 08:38:35.038599 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edc9c85f-c248-4156-b5f8-3379bd7824c5-utilities" (OuterVolumeSpecName: "utilities") pod "edc9c85f-c248-4156-b5f8-3379bd7824c5" (UID: "edc9c85f-c248-4156-b5f8-3379bd7824c5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 08:38:35 crc kubenswrapper[4946]: I1128 08:38:35.049045 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edc9c85f-c248-4156-b5f8-3379bd7824c5-kube-api-access-twxq8" (OuterVolumeSpecName: "kube-api-access-twxq8") pod "edc9c85f-c248-4156-b5f8-3379bd7824c5" (UID: "edc9c85f-c248-4156-b5f8-3379bd7824c5"). InnerVolumeSpecName "kube-api-access-twxq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:38:35 crc kubenswrapper[4946]: I1128 08:38:35.124268 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edc9c85f-c248-4156-b5f8-3379bd7824c5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "edc9c85f-c248-4156-b5f8-3379bd7824c5" (UID: "edc9c85f-c248-4156-b5f8-3379bd7824c5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 08:38:35 crc kubenswrapper[4946]: I1128 08:38:35.139208 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edc9c85f-c248-4156-b5f8-3379bd7824c5-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 08:38:35 crc kubenswrapper[4946]: I1128 08:38:35.139249 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edc9c85f-c248-4156-b5f8-3379bd7824c5-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 08:38:35 crc kubenswrapper[4946]: I1128 08:38:35.139264 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twxq8\" (UniqueName: \"kubernetes.io/projected/edc9c85f-c248-4156-b5f8-3379bd7824c5-kube-api-access-twxq8\") on node \"crc\" DevicePath \"\"" Nov 28 08:38:35 crc kubenswrapper[4946]: I1128 08:38:35.816417 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2kqtl" event={"ID":"edc9c85f-c248-4156-b5f8-3379bd7824c5","Type":"ContainerDied","Data":"985e9ddc717c20e280dd489c64d350cf6fd9ab9562588155fae9665559c46ffc"} Nov 28 08:38:35 crc kubenswrapper[4946]: I1128 08:38:35.816523 4946 scope.go:117] "RemoveContainer" containerID="c45c182be5b77897a96e65b18e4823989cf8c84c746b8c08588901bb57f67ec9" Nov 28 08:38:35 crc kubenswrapper[4946]: I1128 08:38:35.816526 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2kqtl" Nov 28 08:38:35 crc kubenswrapper[4946]: I1128 08:38:35.850351 4946 scope.go:117] "RemoveContainer" containerID="2847176aa888ec8d5203c2d0cae78da6a3ff6ebfa58fa2d6ee30f4554a0d53ac" Nov 28 08:38:35 crc kubenswrapper[4946]: I1128 08:38:35.868674 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2kqtl"] Nov 28 08:38:35 crc kubenswrapper[4946]: I1128 08:38:35.891542 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2kqtl"] Nov 28 08:38:35 crc kubenswrapper[4946]: I1128 08:38:35.893695 4946 scope.go:117] "RemoveContainer" containerID="ef79bcb04229f4a8b80ef161eea252800b11963591362a784ee01837a65435f3" Nov 28 08:38:36 crc kubenswrapper[4946]: I1128 08:38:36.008514 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edc9c85f-c248-4156-b5f8-3379bd7824c5" path="/var/lib/kubelet/pods/edc9c85f-c248-4156-b5f8-3379bd7824c5/volumes" Nov 28 08:39:24 crc kubenswrapper[4946]: I1128 08:39:24.731238 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 08:39:24 crc kubenswrapper[4946]: I1128 08:39:24.732297 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 08:39:54 crc kubenswrapper[4946]: I1128 08:39:54.731180 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 08:39:54 crc kubenswrapper[4946]: I1128 08:39:54.732150 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 08:40:24 crc kubenswrapper[4946]: I1128 08:40:24.731116 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 08:40:24 crc kubenswrapper[4946]: I1128 08:40:24.732621 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 08:40:24 crc kubenswrapper[4946]: I1128 08:40:24.732716 4946 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" Nov 28 08:40:24 crc kubenswrapper[4946]: I1128 08:40:24.733743 4946 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"75dc0ef0b0319332dbc8d06310fbaa445e6b0fd2c2b7e9299aaecc4276d76da3"} pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 08:40:24 crc kubenswrapper[4946]: I1128 08:40:24.733841 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" containerID="cri-o://75dc0ef0b0319332dbc8d06310fbaa445e6b0fd2c2b7e9299aaecc4276d76da3" gracePeriod=600 Nov 28 08:40:24 crc kubenswrapper[4946]: I1128 08:40:24.971780 4946 generic.go:334] "Generic (PLEG): container finished" podID="7450befc-262f-45d1-a5f4-f445e540185b" containerID="75dc0ef0b0319332dbc8d06310fbaa445e6b0fd2c2b7e9299aaecc4276d76da3" exitCode=0 Nov 28 08:40:24 crc kubenswrapper[4946]: I1128 08:40:24.971831 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerDied","Data":"75dc0ef0b0319332dbc8d06310fbaa445e6b0fd2c2b7e9299aaecc4276d76da3"} Nov 28 08:40:24 crc kubenswrapper[4946]: I1128 08:40:24.971868 4946 scope.go:117] "RemoveContainer" containerID="e8119d549e123f1e930654e36405e56510a1c4898160d24fa79c41d4ac87c927" Nov 28 08:40:25 crc kubenswrapper[4946]: I1128 08:40:25.987429 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerStarted","Data":"350d49558ff5edf402a167963260dd78bc11ac007c0bdaee3d45f6814cfb23bd"} Nov 28 08:42:35 crc kubenswrapper[4946]: I1128 08:42:35.489856 4946 scope.go:117] "RemoveContainer" containerID="ecf2edeaf43bfaf06ced1f733132351644570e2db61ce6c74cdff51f9eb9cee2" Nov 28 08:42:35 crc 
kubenswrapper[4946]: I1128 08:42:35.525207 4946 scope.go:117] "RemoveContainer" containerID="1e645f4441a94a730eb44e4ea3a6e5922fd739984769a3ff875163f3684154e9" Nov 28 08:42:35 crc kubenswrapper[4946]: I1128 08:42:35.583178 4946 scope.go:117] "RemoveContainer" containerID="5da6e7148e8daa4d413de96670ae6bd279431d8f5422e666c955b9cd2cb73599" Nov 28 08:42:54 crc kubenswrapper[4946]: I1128 08:42:54.730931 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 08:42:54 crc kubenswrapper[4946]: I1128 08:42:54.731639 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 08:43:24 crc kubenswrapper[4946]: I1128 08:43:24.731230 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 08:43:24 crc kubenswrapper[4946]: I1128 08:43:24.733530 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 08:43:35 crc kubenswrapper[4946]: I1128 08:43:35.689349 4946 scope.go:117] "RemoveContainer" containerID="274b20d235eedc6dc028dd1597ada530887dc74509736d31746052c97825e531" Nov 28 08:43:35 crc kubenswrapper[4946]: I1128 08:43:35.721248 4946 scope.go:117] "RemoveContainer" containerID="12d9c7d74de0e3e67d2b8057678039576e371444c8149d23a80534a594b9b7a0" Nov 28 08:43:54 crc kubenswrapper[4946]: I1128 08:43:54.730889 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 08:43:54 crc kubenswrapper[4946]: I1128 08:43:54.731539 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 08:43:54 crc kubenswrapper[4946]: I1128 08:43:54.731588 4946 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" Nov 28 08:43:54 crc kubenswrapper[4946]: I1128 08:43:54.732099 4946 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"350d49558ff5edf402a167963260dd78bc11ac007c0bdaee3d45f6814cfb23bd"} pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" containerMessage="Container machine-config-daemon failed liveness 
probe, will be restarted" Nov 28 08:43:54 crc kubenswrapper[4946]: I1128 08:43:54.732151 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" containerID="cri-o://350d49558ff5edf402a167963260dd78bc11ac007c0bdaee3d45f6814cfb23bd" gracePeriod=600 Nov 28 08:43:54 crc kubenswrapper[4946]: E1128 08:43:54.863143 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:43:55 crc kubenswrapper[4946]: I1128 08:43:55.152141 4946 generic.go:334] "Generic (PLEG): container finished" podID="7450befc-262f-45d1-a5f4-f445e540185b" containerID="350d49558ff5edf402a167963260dd78bc11ac007c0bdaee3d45f6814cfb23bd" exitCode=0 Nov 28 08:43:55 crc kubenswrapper[4946]: I1128 08:43:55.152200 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerDied","Data":"350d49558ff5edf402a167963260dd78bc11ac007c0bdaee3d45f6814cfb23bd"} Nov 28 08:43:55 crc kubenswrapper[4946]: I1128 08:43:55.152243 4946 scope.go:117] "RemoveContainer" containerID="75dc0ef0b0319332dbc8d06310fbaa445e6b0fd2c2b7e9299aaecc4276d76da3" Nov 28 08:43:55 crc kubenswrapper[4946]: I1128 08:43:55.152803 4946 scope.go:117] "RemoveContainer" containerID="350d49558ff5edf402a167963260dd78bc11ac007c0bdaee3d45f6814cfb23bd" Nov 28 08:43:55 crc kubenswrapper[4946]: E1128 08:43:55.153016 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:44:09 crc kubenswrapper[4946]: I1128 08:44:09.990397 4946 scope.go:117] "RemoveContainer" containerID="350d49558ff5edf402a167963260dd78bc11ac007c0bdaee3d45f6814cfb23bd" Nov 28 08:44:09 crc kubenswrapper[4946]: E1128 08:44:09.991271 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:44:24 crc kubenswrapper[4946]: I1128 08:44:24.989683 4946 scope.go:117] "RemoveContainer" containerID="350d49558ff5edf402a167963260dd78bc11ac007c0bdaee3d45f6814cfb23bd" Nov 28 08:44:24 crc kubenswrapper[4946]: E1128 08:44:24.990646 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:44:38 crc kubenswrapper[4946]: I1128 08:44:38.989858 4946 scope.go:117] "RemoveContainer" containerID="350d49558ff5edf402a167963260dd78bc11ac007c0bdaee3d45f6814cfb23bd" Nov 28 08:44:38 crc kubenswrapper[4946]: E1128 08:44:38.991711 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:44:51 crc kubenswrapper[4946]: I1128 08:44:51.275059 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-copy-data"] Nov 28 08:44:51 crc kubenswrapper[4946]: E1128 08:44:51.276622 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edc9c85f-c248-4156-b5f8-3379bd7824c5" containerName="registry-server" Nov 28 08:44:51 crc kubenswrapper[4946]: I1128 08:44:51.276659 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="edc9c85f-c248-4156-b5f8-3379bd7824c5" containerName="registry-server" Nov 28 08:44:51 crc kubenswrapper[4946]: E1128 08:44:51.276706 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edc9c85f-c248-4156-b5f8-3379bd7824c5" containerName="extract-utilities" Nov 28 08:44:51 crc kubenswrapper[4946]: I1128 08:44:51.276724 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="edc9c85f-c248-4156-b5f8-3379bd7824c5" containerName="extract-utilities" Nov 28 08:44:51 crc kubenswrapper[4946]: E1128 08:44:51.276783 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edc9c85f-c248-4156-b5f8-3379bd7824c5" containerName="extract-content" Nov 28 08:44:51 crc kubenswrapper[4946]: I1128 08:44:51.276803 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="edc9c85f-c248-4156-b5f8-3379bd7824c5" containerName="extract-content" Nov 28 08:44:51 crc kubenswrapper[4946]: I1128 08:44:51.277195 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="edc9c85f-c248-4156-b5f8-3379bd7824c5" containerName="registry-server" Nov 28 08:44:51 crc kubenswrapper[4946]: I1128 08:44:51.278266 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-copy-data" Nov 28 08:44:51 crc kubenswrapper[4946]: I1128 08:44:51.281028 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-zsz6p" Nov 28 08:44:51 crc kubenswrapper[4946]: I1128 08:44:51.281859 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Nov 28 08:44:51 crc kubenswrapper[4946]: I1128 08:44:51.387318 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj5br\" (UniqueName: \"kubernetes.io/projected/f79fc6d4-617a-4ee4-95a3-9bfb249b7327-kube-api-access-nj5br\") pod \"mariadb-copy-data\" (UID: \"f79fc6d4-617a-4ee4-95a3-9bfb249b7327\") " pod="openstack/mariadb-copy-data" Nov 28 08:44:51 crc kubenswrapper[4946]: I1128 08:44:51.387801 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6c931a98-c5e9-4bb5-adef-ed47e9be370d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6c931a98-c5e9-4bb5-adef-ed47e9be370d\") pod \"mariadb-copy-data\" (UID: \"f79fc6d4-617a-4ee4-95a3-9bfb249b7327\") " pod="openstack/mariadb-copy-data" Nov 28 08:44:51 crc kubenswrapper[4946]: I1128 08:44:51.489676 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj5br\" (UniqueName: \"kubernetes.io/projected/f79fc6d4-617a-4ee4-95a3-9bfb249b7327-kube-api-access-nj5br\") pod \"mariadb-copy-data\" (UID: \"f79fc6d4-617a-4ee4-95a3-9bfb249b7327\") " pod="openstack/mariadb-copy-data" Nov 28 08:44:51 crc kubenswrapper[4946]: I1128 08:44:51.489783 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6c931a98-c5e9-4bb5-adef-ed47e9be370d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6c931a98-c5e9-4bb5-adef-ed47e9be370d\") pod \"mariadb-copy-data\" (UID: \"f79fc6d4-617a-4ee4-95a3-9bfb249b7327\") " pod="openstack/mariadb-copy-data" Nov 28 08:44:51 crc kubenswrapper[4946]: I1128 08:44:51.492884 4946 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 28 08:44:51 crc kubenswrapper[4946]: I1128 08:44:51.492918 4946 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6c931a98-c5e9-4bb5-adef-ed47e9be370d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6c931a98-c5e9-4bb5-adef-ed47e9be370d\") pod \"mariadb-copy-data\" (UID: \"f79fc6d4-617a-4ee4-95a3-9bfb249b7327\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/349f8c7007337c70b3eb3b6b9feb8e6e80b74e93aec41a40cfcaa9f0d1da9ac8/globalmount\"" pod="openstack/mariadb-copy-data" Nov 28 08:44:51 crc kubenswrapper[4946]: I1128 08:44:51.512561 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj5br\" (UniqueName: \"kubernetes.io/projected/f79fc6d4-617a-4ee4-95a3-9bfb249b7327-kube-api-access-nj5br\") pod \"mariadb-copy-data\" (UID: \"f79fc6d4-617a-4ee4-95a3-9bfb249b7327\") " pod="openstack/mariadb-copy-data" Nov 28 08:44:51 crc kubenswrapper[4946]: I1128 08:44:51.526826 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6c931a98-c5e9-4bb5-adef-ed47e9be370d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6c931a98-c5e9-4bb5-adef-ed47e9be370d\") pod \"mariadb-copy-data\" (UID: \"f79fc6d4-617a-4ee4-95a3-9bfb249b7327\") " pod="openstack/mariadb-copy-data" Nov 28 08:44:51 crc kubenswrapper[4946]: I1128 08:44:51.611990 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Nov 28 08:44:52 crc kubenswrapper[4946]: I1128 08:44:52.176269 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Nov 28 08:44:52 crc kubenswrapper[4946]: I1128 08:44:52.691730 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"f79fc6d4-617a-4ee4-95a3-9bfb249b7327","Type":"ContainerStarted","Data":"4169463718a6ee9ad980ddf0369d0b77d0bc7ebf0b0843e67b272706799061d9"} Nov 28 08:44:52 crc kubenswrapper[4946]: I1128 08:44:52.692125 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"f79fc6d4-617a-4ee4-95a3-9bfb249b7327","Type":"ContainerStarted","Data":"c56886783167515324d73c2d8fa9b3334f2cd10d3d163284f4d46704f7f11631"} Nov 28 08:44:52 crc kubenswrapper[4946]: I1128 08:44:52.717854 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-copy-data" podStartSLOduration=2.717827084 podStartE2EDuration="2.717827084s" podCreationTimestamp="2025-11-28 08:44:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 08:44:52.71240299 +0000 UTC m=+6747.090468131" watchObservedRunningTime="2025-11-28 08:44:52.717827084 +0000 UTC m=+6747.095892225" Nov 28 08:44:53 crc kubenswrapper[4946]: I1128 08:44:53.990509 4946 scope.go:117] "RemoveContainer" containerID="350d49558ff5edf402a167963260dd78bc11ac007c0bdaee3d45f6814cfb23bd" Nov 28 08:44:53 crc kubenswrapper[4946]: E1128 08:44:53.991018 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:44:56 crc 
kubenswrapper[4946]: I1128 08:44:56.626423 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Nov 28 08:44:56 crc kubenswrapper[4946]: I1128 08:44:56.627776 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Nov 28 08:44:56 crc kubenswrapper[4946]: I1128 08:44:56.645874 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Nov 28 08:44:56 crc kubenswrapper[4946]: I1128 08:44:56.677041 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmcf2\" (UniqueName: \"kubernetes.io/projected/0c1c5c1f-710a-4b26-b162-1745319ee72f-kube-api-access-jmcf2\") pod \"mariadb-client\" (UID: \"0c1c5c1f-710a-4b26-b162-1745319ee72f\") " pod="openstack/mariadb-client" Nov 28 08:44:56 crc kubenswrapper[4946]: I1128 08:44:56.777827 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmcf2\" (UniqueName: \"kubernetes.io/projected/0c1c5c1f-710a-4b26-b162-1745319ee72f-kube-api-access-jmcf2\") pod \"mariadb-client\" (UID: \"0c1c5c1f-710a-4b26-b162-1745319ee72f\") " pod="openstack/mariadb-client" Nov 28 08:44:56 crc kubenswrapper[4946]: I1128 08:44:56.803815 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmcf2\" (UniqueName: \"kubernetes.io/projected/0c1c5c1f-710a-4b26-b162-1745319ee72f-kube-api-access-jmcf2\") pod \"mariadb-client\" (UID: \"0c1c5c1f-710a-4b26-b162-1745319ee72f\") " pod="openstack/mariadb-client" Nov 28 08:44:56 crc kubenswrapper[4946]: I1128 08:44:56.948077 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Nov 28 08:44:57 crc kubenswrapper[4946]: I1128 08:44:57.199301 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Nov 28 08:44:57 crc kubenswrapper[4946]: W1128 08:44:57.207555 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c1c5c1f_710a_4b26_b162_1745319ee72f.slice/crio-0e6ec25ab776ef6d1f2e71c938c83472d9600becb9bf9e4fdd87ec2c44fc0ba6 WatchSource:0}: Error finding container 0e6ec25ab776ef6d1f2e71c938c83472d9600becb9bf9e4fdd87ec2c44fc0ba6: Status 404 returned error can't find the container with id 0e6ec25ab776ef6d1f2e71c938c83472d9600becb9bf9e4fdd87ec2c44fc0ba6 Nov 28 08:44:57 crc kubenswrapper[4946]: I1128 08:44:57.741439 4946 generic.go:334] "Generic (PLEG): container finished" podID="0c1c5c1f-710a-4b26-b162-1745319ee72f" containerID="ce3518c124a5df086af9397de0857ce5ca25d2d274726a1af2b73491ba41a4f7" exitCode=0 Nov 28 08:44:57 crc kubenswrapper[4946]: I1128 08:44:57.741536 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"0c1c5c1f-710a-4b26-b162-1745319ee72f","Type":"ContainerDied","Data":"ce3518c124a5df086af9397de0857ce5ca25d2d274726a1af2b73491ba41a4f7"} Nov 28 08:44:57 crc kubenswrapper[4946]: I1128 08:44:57.741565 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"0c1c5c1f-710a-4b26-b162-1745319ee72f","Type":"ContainerStarted","Data":"0e6ec25ab776ef6d1f2e71c938c83472d9600becb9bf9e4fdd87ec2c44fc0ba6"} Nov 28 08:44:59 crc kubenswrapper[4946]: I1128 08:44:59.116199 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Nov 28 08:44:59 crc kubenswrapper[4946]: I1128 08:44:59.143694 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_0c1c5c1f-710a-4b26-b162-1745319ee72f/mariadb-client/0.log" Nov 28 08:44:59 crc kubenswrapper[4946]: I1128 08:44:59.178777 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Nov 28 08:44:59 crc kubenswrapper[4946]: I1128 08:44:59.188870 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Nov 28 08:44:59 crc kubenswrapper[4946]: I1128 08:44:59.214900 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmcf2\" (UniqueName: \"kubernetes.io/projected/0c1c5c1f-710a-4b26-b162-1745319ee72f-kube-api-access-jmcf2\") pod \"0c1c5c1f-710a-4b26-b162-1745319ee72f\" (UID: \"0c1c5c1f-710a-4b26-b162-1745319ee72f\") " Nov 28 08:44:59 crc kubenswrapper[4946]: I1128 08:44:59.221730 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c1c5c1f-710a-4b26-b162-1745319ee72f-kube-api-access-jmcf2" (OuterVolumeSpecName: "kube-api-access-jmcf2") pod "0c1c5c1f-710a-4b26-b162-1745319ee72f" (UID: "0c1c5c1f-710a-4b26-b162-1745319ee72f"). InnerVolumeSpecName "kube-api-access-jmcf2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:44:59 crc kubenswrapper[4946]: I1128 08:44:59.316827 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmcf2\" (UniqueName: \"kubernetes.io/projected/0c1c5c1f-710a-4b26-b162-1745319ee72f-kube-api-access-jmcf2\") on node \"crc\" DevicePath \"\"" Nov 28 08:44:59 crc kubenswrapper[4946]: I1128 08:44:59.349592 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Nov 28 08:44:59 crc kubenswrapper[4946]: E1128 08:44:59.350488 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c1c5c1f-710a-4b26-b162-1745319ee72f" containerName="mariadb-client" Nov 28 08:44:59 crc kubenswrapper[4946]: I1128 08:44:59.350517 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c1c5c1f-710a-4b26-b162-1745319ee72f" containerName="mariadb-client" Nov 28 08:44:59 crc kubenswrapper[4946]: I1128 08:44:59.350812 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c1c5c1f-710a-4b26-b162-1745319ee72f" containerName="mariadb-client" Nov 28 08:44:59 crc kubenswrapper[4946]: I1128 08:44:59.351736 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Nov 28 08:44:59 crc kubenswrapper[4946]: I1128 08:44:59.355035 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Nov 28 08:44:59 crc kubenswrapper[4946]: I1128 08:44:59.418438 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blsjq\" (UniqueName: \"kubernetes.io/projected/20eec824-7e38-4341-906a-4b369ba4156c-kube-api-access-blsjq\") pod \"mariadb-client\" (UID: \"20eec824-7e38-4341-906a-4b369ba4156c\") " pod="openstack/mariadb-client" Nov 28 08:44:59 crc kubenswrapper[4946]: I1128 08:44:59.520050 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blsjq\" (UniqueName: \"kubernetes.io/projected/20eec824-7e38-4341-906a-4b369ba4156c-kube-api-access-blsjq\") pod \"mariadb-client\" (UID: \"20eec824-7e38-4341-906a-4b369ba4156c\") " pod="openstack/mariadb-client" Nov 28 08:44:59 crc kubenswrapper[4946]: I1128 08:44:59.552356 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blsjq\" (UniqueName: \"kubernetes.io/projected/20eec824-7e38-4341-906a-4b369ba4156c-kube-api-access-blsjq\") pod \"mariadb-client\" (UID: \"20eec824-7e38-4341-906a-4b369ba4156c\") " pod="openstack/mariadb-client" Nov 28 08:44:59 crc kubenswrapper[4946]: I1128 08:44:59.699740 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Nov 28 08:44:59 crc kubenswrapper[4946]: I1128 08:44:59.766155 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e6ec25ab776ef6d1f2e71c938c83472d9600becb9bf9e4fdd87ec2c44fc0ba6" Nov 28 08:44:59 crc kubenswrapper[4946]: I1128 08:44:59.766231 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Nov 28 08:44:59 crc kubenswrapper[4946]: I1128 08:44:59.788871 4946 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/mariadb-client" oldPodUID="0c1c5c1f-710a-4b26-b162-1745319ee72f" podUID="20eec824-7e38-4341-906a-4b369ba4156c" Nov 28 08:45:00 crc kubenswrapper[4946]: W1128 08:45:00.002224 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20eec824_7e38_4341_906a_4b369ba4156c.slice/crio-d57af38bc29b98a19a5e03cae53e7e8de4029857d242a1320194779e1c646cb3 WatchSource:0}: Error finding container d57af38bc29b98a19a5e03cae53e7e8de4029857d242a1320194779e1c646cb3: Status 404 returned error can't find the container with id d57af38bc29b98a19a5e03cae53e7e8de4029857d242a1320194779e1c646cb3 Nov 28 08:45:00 crc kubenswrapper[4946]: I1128 08:45:00.014168 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c1c5c1f-710a-4b26-b162-1745319ee72f" path="/var/lib/kubelet/pods/0c1c5c1f-710a-4b26-b162-1745319ee72f/volumes" Nov 28 08:45:00 crc kubenswrapper[4946]: I1128 08:45:00.015229 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Nov 28 08:45:00 crc kubenswrapper[4946]: I1128 08:45:00.158458 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405325-75gv6"] Nov 28 08:45:00 crc kubenswrapper[4946]: I1128 08:45:00.159775 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405325-75gv6" Nov 28 08:45:00 crc kubenswrapper[4946]: I1128 08:45:00.165866 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 28 08:45:00 crc kubenswrapper[4946]: I1128 08:45:00.166243 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 28 08:45:00 crc kubenswrapper[4946]: I1128 08:45:00.172446 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405325-75gv6"] Nov 28 08:45:00 crc kubenswrapper[4946]: I1128 08:45:00.228051 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ddzj\" (UniqueName: \"kubernetes.io/projected/58bd43b6-936f-4ce9-8f28-d90ddfef9086-kube-api-access-6ddzj\") pod \"collect-profiles-29405325-75gv6\" (UID: \"58bd43b6-936f-4ce9-8f28-d90ddfef9086\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405325-75gv6" Nov 28 08:45:00 crc kubenswrapper[4946]: I1128 08:45:00.228126 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/58bd43b6-936f-4ce9-8f28-d90ddfef9086-config-volume\") pod \"collect-profiles-29405325-75gv6\" (UID: \"58bd43b6-936f-4ce9-8f28-d90ddfef9086\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405325-75gv6" Nov 28 08:45:00 crc kubenswrapper[4946]: I1128 08:45:00.228168 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/58bd43b6-936f-4ce9-8f28-d90ddfef9086-secret-volume\") pod \"collect-profiles-29405325-75gv6\" (UID: \"58bd43b6-936f-4ce9-8f28-d90ddfef9086\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405325-75gv6" Nov 28 08:45:00 crc kubenswrapper[4946]: I1128 08:45:00.329420 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ddzj\" (UniqueName: \"kubernetes.io/projected/58bd43b6-936f-4ce9-8f28-d90ddfef9086-kube-api-access-6ddzj\") pod \"collect-profiles-29405325-75gv6\" (UID: \"58bd43b6-936f-4ce9-8f28-d90ddfef9086\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405325-75gv6" Nov 28 08:45:00 crc kubenswrapper[4946]: I1128 08:45:00.329584 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/58bd43b6-936f-4ce9-8f28-d90ddfef9086-config-volume\") pod \"collect-profiles-29405325-75gv6\" (UID: \"58bd43b6-936f-4ce9-8f28-d90ddfef9086\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405325-75gv6" Nov 28 08:45:00 crc kubenswrapper[4946]: I1128 08:45:00.330588 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/58bd43b6-936f-4ce9-8f28-d90ddfef9086-config-volume\") pod \"collect-profiles-29405325-75gv6\" (UID: \"58bd43b6-936f-4ce9-8f28-d90ddfef9086\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405325-75gv6" Nov 28 08:45:00 crc kubenswrapper[4946]: I1128 08:45:00.330637 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/58bd43b6-936f-4ce9-8f28-d90ddfef9086-secret-volume\") pod 
\"collect-profiles-29405325-75gv6\" (UID: \"58bd43b6-936f-4ce9-8f28-d90ddfef9086\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405325-75gv6" Nov 28 08:45:00 crc kubenswrapper[4946]: I1128 08:45:00.334868 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/58bd43b6-936f-4ce9-8f28-d90ddfef9086-secret-volume\") pod \"collect-profiles-29405325-75gv6\" (UID: \"58bd43b6-936f-4ce9-8f28-d90ddfef9086\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405325-75gv6" Nov 28 08:45:00 crc kubenswrapper[4946]: I1128 08:45:00.347740 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ddzj\" (UniqueName: \"kubernetes.io/projected/58bd43b6-936f-4ce9-8f28-d90ddfef9086-kube-api-access-6ddzj\") pod \"collect-profiles-29405325-75gv6\" (UID: \"58bd43b6-936f-4ce9-8f28-d90ddfef9086\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405325-75gv6" Nov 28 08:45:00 crc kubenswrapper[4946]: E1128 08:45:00.399180 4946 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20eec824_7e38_4341_906a_4b369ba4156c.slice/crio-conmon-afccba0e154a47ac23fa6a92a761538bf8108de89683898cd10d868362c44ffa.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20eec824_7e38_4341_906a_4b369ba4156c.slice/crio-afccba0e154a47ac23fa6a92a761538bf8108de89683898cd10d868362c44ffa.scope\": RecentStats: unable to find data in memory cache]" Nov 28 08:45:00 crc kubenswrapper[4946]: I1128 08:45:00.487159 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405325-75gv6" Nov 28 08:45:00 crc kubenswrapper[4946]: I1128 08:45:00.762112 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405325-75gv6"] Nov 28 08:45:00 crc kubenswrapper[4946]: W1128 08:45:00.770229 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58bd43b6_936f_4ce9_8f28_d90ddfef9086.slice/crio-0c13905423e014e163b103ef7ec163656ddf9ed558058a70ec93af3bf1bcf5a4 WatchSource:0}: Error finding container 0c13905423e014e163b103ef7ec163656ddf9ed558058a70ec93af3bf1bcf5a4: Status 404 returned error can't find the container with id 0c13905423e014e163b103ef7ec163656ddf9ed558058a70ec93af3bf1bcf5a4 Nov 28 08:45:00 crc kubenswrapper[4946]: I1128 08:45:00.776107 4946 generic.go:334] "Generic (PLEG): container finished" podID="20eec824-7e38-4341-906a-4b369ba4156c" containerID="afccba0e154a47ac23fa6a92a761538bf8108de89683898cd10d868362c44ffa" exitCode=0 Nov 28 08:45:00 crc kubenswrapper[4946]: I1128 08:45:00.776160 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"20eec824-7e38-4341-906a-4b369ba4156c","Type":"ContainerDied","Data":"afccba0e154a47ac23fa6a92a761538bf8108de89683898cd10d868362c44ffa"} Nov 28 08:45:00 crc kubenswrapper[4946]: I1128 08:45:00.776194 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"20eec824-7e38-4341-906a-4b369ba4156c","Type":"ContainerStarted","Data":"d57af38bc29b98a19a5e03cae53e7e8de4029857d242a1320194779e1c646cb3"} Nov 28 08:45:01 crc kubenswrapper[4946]: I1128 08:45:01.787994 4946 generic.go:334] 
"Generic (PLEG): container finished" podID="58bd43b6-936f-4ce9-8f28-d90ddfef9086" containerID="aba5b3e284429745e165b145ef557f3b47e946e4b5136b26231008f0e2619785" exitCode=0 Nov 28 08:45:01 crc kubenswrapper[4946]: I1128 08:45:01.788206 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405325-75gv6" event={"ID":"58bd43b6-936f-4ce9-8f28-d90ddfef9086","Type":"ContainerDied","Data":"aba5b3e284429745e165b145ef557f3b47e946e4b5136b26231008f0e2619785"} Nov 28 08:45:01 crc kubenswrapper[4946]: I1128 08:45:01.788757 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405325-75gv6" event={"ID":"58bd43b6-936f-4ce9-8f28-d90ddfef9086","Type":"ContainerStarted","Data":"0c13905423e014e163b103ef7ec163656ddf9ed558058a70ec93af3bf1bcf5a4"} Nov 28 08:45:02 crc kubenswrapper[4946]: I1128 08:45:02.193708 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Nov 28 08:45:02 crc kubenswrapper[4946]: I1128 08:45:02.218259 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_20eec824-7e38-4341-906a-4b369ba4156c/mariadb-client/0.log" Nov 28 08:45:02 crc kubenswrapper[4946]: I1128 08:45:02.251323 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Nov 28 08:45:02 crc kubenswrapper[4946]: I1128 08:45:02.260093 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Nov 28 08:45:02 crc kubenswrapper[4946]: I1128 08:45:02.267872 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blsjq\" (UniqueName: \"kubernetes.io/projected/20eec824-7e38-4341-906a-4b369ba4156c-kube-api-access-blsjq\") pod \"20eec824-7e38-4341-906a-4b369ba4156c\" (UID: \"20eec824-7e38-4341-906a-4b369ba4156c\") " Nov 28 08:45:02 crc kubenswrapper[4946]: I1128 08:45:02.273268 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20eec824-7e38-4341-906a-4b369ba4156c-kube-api-access-blsjq" (OuterVolumeSpecName: "kube-api-access-blsjq") pod "20eec824-7e38-4341-906a-4b369ba4156c" (UID: "20eec824-7e38-4341-906a-4b369ba4156c"). InnerVolumeSpecName "kube-api-access-blsjq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:45:02 crc kubenswrapper[4946]: I1128 08:45:02.369890 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blsjq\" (UniqueName: \"kubernetes.io/projected/20eec824-7e38-4341-906a-4b369ba4156c-kube-api-access-blsjq\") on node \"crc\" DevicePath \"\"" Nov 28 08:45:02 crc kubenswrapper[4946]: I1128 08:45:02.802333 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d57af38bc29b98a19a5e03cae53e7e8de4029857d242a1320194779e1c646cb3" Nov 28 08:45:02 crc kubenswrapper[4946]: I1128 08:45:02.802576 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Nov 28 08:45:03 crc kubenswrapper[4946]: I1128 08:45:03.183292 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405325-75gv6" Nov 28 08:45:03 crc kubenswrapper[4946]: I1128 08:45:03.285324 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/58bd43b6-936f-4ce9-8f28-d90ddfef9086-secret-volume\") pod \"58bd43b6-936f-4ce9-8f28-d90ddfef9086\" (UID: \"58bd43b6-936f-4ce9-8f28-d90ddfef9086\") " Nov 28 08:45:03 crc kubenswrapper[4946]: I1128 08:45:03.285523 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ddzj\" (UniqueName: \"kubernetes.io/projected/58bd43b6-936f-4ce9-8f28-d90ddfef9086-kube-api-access-6ddzj\") pod \"58bd43b6-936f-4ce9-8f28-d90ddfef9086\" (UID: \"58bd43b6-936f-4ce9-8f28-d90ddfef9086\") " Nov 28 08:45:03 crc kubenswrapper[4946]: I1128 08:45:03.285676 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/58bd43b6-936f-4ce9-8f28-d90ddfef9086-config-volume\") pod \"58bd43b6-936f-4ce9-8f28-d90ddfef9086\" (UID: \"58bd43b6-936f-4ce9-8f28-d90ddfef9086\") " Nov 28 08:45:03 crc kubenswrapper[4946]: I1128 08:45:03.286718 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58bd43b6-936f-4ce9-8f28-d90ddfef9086-config-volume" (OuterVolumeSpecName: "config-volume") pod "58bd43b6-936f-4ce9-8f28-d90ddfef9086" (UID: "58bd43b6-936f-4ce9-8f28-d90ddfef9086"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:45:03 crc kubenswrapper[4946]: I1128 08:45:03.291728 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58bd43b6-936f-4ce9-8f28-d90ddfef9086-kube-api-access-6ddzj" (OuterVolumeSpecName: "kube-api-access-6ddzj") pod "58bd43b6-936f-4ce9-8f28-d90ddfef9086" (UID: "58bd43b6-936f-4ce9-8f28-d90ddfef9086"). InnerVolumeSpecName "kube-api-access-6ddzj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:45:03 crc kubenswrapper[4946]: I1128 08:45:03.292534 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58bd43b6-936f-4ce9-8f28-d90ddfef9086-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "58bd43b6-936f-4ce9-8f28-d90ddfef9086" (UID: "58bd43b6-936f-4ce9-8f28-d90ddfef9086"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:45:03 crc kubenswrapper[4946]: I1128 08:45:03.387953 4946 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/58bd43b6-936f-4ce9-8f28-d90ddfef9086-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 28 08:45:03 crc kubenswrapper[4946]: I1128 08:45:03.387983 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ddzj\" (UniqueName: \"kubernetes.io/projected/58bd43b6-936f-4ce9-8f28-d90ddfef9086-kube-api-access-6ddzj\") on node \"crc\" DevicePath \"\"" Nov 28 08:45:03 crc kubenswrapper[4946]: I1128 08:45:03.387993 4946 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/58bd43b6-936f-4ce9-8f28-d90ddfef9086-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 08:45:03 crc kubenswrapper[4946]: I1128 08:45:03.487590 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rwhs7"] Nov 28 08:45:03 crc kubenswrapper[4946]: E1128 08:45:03.488257 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58bd43b6-936f-4ce9-8f28-d90ddfef9086" containerName="collect-profiles" Nov 28 08:45:03 crc kubenswrapper[4946]: I1128 08:45:03.488288 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="58bd43b6-936f-4ce9-8f28-d90ddfef9086" containerName="collect-profiles" Nov 28 08:45:03 crc kubenswrapper[4946]: E1128 08:45:03.488320 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20eec824-7e38-4341-906a-4b369ba4156c" containerName="mariadb-client" Nov 28 08:45:03 crc kubenswrapper[4946]: I1128 08:45:03.488331 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="20eec824-7e38-4341-906a-4b369ba4156c" containerName="mariadb-client" Nov 28 08:45:03 crc kubenswrapper[4946]: I1128 08:45:03.488667 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="20eec824-7e38-4341-906a-4b369ba4156c" containerName="mariadb-client" Nov 28 08:45:03 crc kubenswrapper[4946]: I1128 08:45:03.488716 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="58bd43b6-936f-4ce9-8f28-d90ddfef9086" containerName="collect-profiles" Nov 28 08:45:03 crc kubenswrapper[4946]: I1128 08:45:03.490782 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rwhs7" Nov 28 08:45:03 crc kubenswrapper[4946]: I1128 08:45:03.505343 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rwhs7"] Nov 28 08:45:03 crc kubenswrapper[4946]: I1128 08:45:03.591075 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c8858b6-8732-41df-b35c-d0014ea56d7f-utilities\") pod \"redhat-marketplace-rwhs7\" (UID: \"2c8858b6-8732-41df-b35c-d0014ea56d7f\") " pod="openshift-marketplace/redhat-marketplace-rwhs7" Nov 28 08:45:03 crc kubenswrapper[4946]: I1128 08:45:03.591128 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c8858b6-8732-41df-b35c-d0014ea56d7f-catalog-content\") pod \"redhat-marketplace-rwhs7\" (UID: \"2c8858b6-8732-41df-b35c-d0014ea56d7f\") " pod="openshift-marketplace/redhat-marketplace-rwhs7" Nov 28 08:45:03 crc kubenswrapper[4946]: I1128 08:45:03.591378 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6dzg\" (UniqueName: \"kubernetes.io/projected/2c8858b6-8732-41df-b35c-d0014ea56d7f-kube-api-access-b6dzg\") pod \"redhat-marketplace-rwhs7\" (UID: \"2c8858b6-8732-41df-b35c-d0014ea56d7f\") " pod="openshift-marketplace/redhat-marketplace-rwhs7" Nov 28 08:45:03 crc kubenswrapper[4946]: I1128 08:45:03.692991 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c8858b6-8732-41df-b35c-d0014ea56d7f-utilities\") pod \"redhat-marketplace-rwhs7\" (UID: \"2c8858b6-8732-41df-b35c-d0014ea56d7f\") " pod="openshift-marketplace/redhat-marketplace-rwhs7" Nov 28 08:45:03 crc kubenswrapper[4946]: I1128 08:45:03.693043 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c8858b6-8732-41df-b35c-d0014ea56d7f-catalog-content\") pod \"redhat-marketplace-rwhs7\" (UID: \"2c8858b6-8732-41df-b35c-d0014ea56d7f\") " pod="openshift-marketplace/redhat-marketplace-rwhs7" Nov 28 08:45:03 crc kubenswrapper[4946]: I1128 08:45:03.693125 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6dzg\" (UniqueName: \"kubernetes.io/projected/2c8858b6-8732-41df-b35c-d0014ea56d7f-kube-api-access-b6dzg\") pod \"redhat-marketplace-rwhs7\" (UID: \"2c8858b6-8732-41df-b35c-d0014ea56d7f\") " pod="openshift-marketplace/redhat-marketplace-rwhs7" Nov 28 08:45:03 crc kubenswrapper[4946]: I1128 08:45:03.693846 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c8858b6-8732-41df-b35c-d0014ea56d7f-utilities\") pod \"redhat-marketplace-rwhs7\" (UID: \"2c8858b6-8732-41df-b35c-d0014ea56d7f\") " pod="openshift-marketplace/redhat-marketplace-rwhs7" Nov 28 08:45:03 crc kubenswrapper[4946]: I1128 08:45:03.694128 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c8858b6-8732-41df-b35c-d0014ea56d7f-catalog-content\") pod \"redhat-marketplace-rwhs7\" (UID: \"2c8858b6-8732-41df-b35c-d0014ea56d7f\") " pod="openshift-marketplace/redhat-marketplace-rwhs7" Nov 28 08:45:03 crc kubenswrapper[4946]: I1128 08:45:03.714993 4946 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-b6dzg\" (UniqueName: \"kubernetes.io/projected/2c8858b6-8732-41df-b35c-d0014ea56d7f-kube-api-access-b6dzg\") pod \"redhat-marketplace-rwhs7\" (UID: \"2c8858b6-8732-41df-b35c-d0014ea56d7f\") " pod="openshift-marketplace/redhat-marketplace-rwhs7" Nov 28 08:45:03 crc kubenswrapper[4946]: I1128 08:45:03.811916 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405325-75gv6" event={"ID":"58bd43b6-936f-4ce9-8f28-d90ddfef9086","Type":"ContainerDied","Data":"0c13905423e014e163b103ef7ec163656ddf9ed558058a70ec93af3bf1bcf5a4"} Nov 28 08:45:03 crc kubenswrapper[4946]: I1128 08:45:03.811966 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c13905423e014e163b103ef7ec163656ddf9ed558058a70ec93af3bf1bcf5a4" Nov 28 08:45:03 crc kubenswrapper[4946]: I1128 08:45:03.812008 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405325-75gv6" Nov 28 08:45:03 crc kubenswrapper[4946]: I1128 08:45:03.816416 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rwhs7" Nov 28 08:45:04 crc kubenswrapper[4946]: I1128 08:45:04.006296 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20eec824-7e38-4341-906a-4b369ba4156c" path="/var/lib/kubelet/pods/20eec824-7e38-4341-906a-4b369ba4156c/volumes" Nov 28 08:45:04 crc kubenswrapper[4946]: I1128 08:45:04.255034 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405280-c8d4d"] Nov 28 08:45:04 crc kubenswrapper[4946]: I1128 08:45:04.264956 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405280-c8d4d"] Nov 28 08:45:04 crc kubenswrapper[4946]: I1128 08:45:04.287776 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rwhs7"] Nov 28 08:45:04 crc kubenswrapper[4946]: I1128 08:45:04.823152 4946 generic.go:334] "Generic (PLEG): container finished" podID="2c8858b6-8732-41df-b35c-d0014ea56d7f" containerID="ac8c6064617ff9f43e0359450e01f50af1b393c4eafa9dbd2d6d263cd981ed5f" exitCode=0 Nov 28 08:45:04 crc kubenswrapper[4946]: I1128 08:45:04.823251 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rwhs7" event={"ID":"2c8858b6-8732-41df-b35c-d0014ea56d7f","Type":"ContainerDied","Data":"ac8c6064617ff9f43e0359450e01f50af1b393c4eafa9dbd2d6d263cd981ed5f"} Nov 28 08:45:04 crc kubenswrapper[4946]: I1128 08:45:04.823660 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rwhs7" event={"ID":"2c8858b6-8732-41df-b35c-d0014ea56d7f","Type":"ContainerStarted","Data":"19f02da1f47a57e1dcfd410c5c688c07a197e084ee22947b9bf0f73cbbe741fb"} Nov 28 08:45:04 crc kubenswrapper[4946]: I1128 08:45:04.826005 4946 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 08:45:05 crc kubenswrapper[4946]: I1128 08:45:05.839767 4946 generic.go:334] "Generic (PLEG): container finished" podID="2c8858b6-8732-41df-b35c-d0014ea56d7f" containerID="e7097c44ade8ab79e3255676c35618ab8d10e7a06a83c9f72c551eda913c6898" exitCode=0 Nov 28 08:45:05 crc kubenswrapper[4946]: I1128 08:45:05.839830 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-rwhs7" event={"ID":"2c8858b6-8732-41df-b35c-d0014ea56d7f","Type":"ContainerDied","Data":"e7097c44ade8ab79e3255676c35618ab8d10e7a06a83c9f72c551eda913c6898"} Nov 28 08:45:06 crc kubenswrapper[4946]: I1128 08:45:06.002045 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18c21b02-adb9-480f-8577-18a5542d3950" path="/var/lib/kubelet/pods/18c21b02-adb9-480f-8577-18a5542d3950/volumes" Nov 28 08:45:06 crc kubenswrapper[4946]: I1128 08:45:06.852610 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rwhs7" event={"ID":"2c8858b6-8732-41df-b35c-d0014ea56d7f","Type":"ContainerStarted","Data":"d251abb6959b9600ad3d62092d8158ffe6f7aa0b241f23ef83db58316eaa55c0"} Nov 28 08:45:06 crc kubenswrapper[4946]: I1128 08:45:06.881778 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rwhs7" podStartSLOduration=2.358531259 podStartE2EDuration="3.881748342s" podCreationTimestamp="2025-11-28 08:45:03 +0000 UTC" firstStartedPulling="2025-11-28 08:45:04.825789639 +0000 UTC m=+6759.203854750" lastFinishedPulling="2025-11-28 08:45:06.349006692 +0000 UTC m=+6760.727071833" observedRunningTime="2025-11-28 08:45:06.872026911 +0000 UTC m=+6761.250092072" watchObservedRunningTime="2025-11-28 08:45:06.881748342 +0000 UTC m=+6761.259813483" Nov 28 08:45:07 crc kubenswrapper[4946]: I1128 08:45:07.991696 4946 scope.go:117] "RemoveContainer" containerID="350d49558ff5edf402a167963260dd78bc11ac007c0bdaee3d45f6814cfb23bd" Nov 28 08:45:07 crc kubenswrapper[4946]: E1128 08:45:07.993908 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:45:10 crc kubenswrapper[4946]: I1128 08:45:10.662448 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kc9cf"] Nov 28 08:45:10 crc kubenswrapper[4946]: I1128 08:45:10.666120 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kc9cf" Nov 28 08:45:10 crc kubenswrapper[4946]: I1128 08:45:10.672897 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kc9cf"] Nov 28 08:45:10 crc kubenswrapper[4946]: I1128 08:45:10.704608 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k26v2\" (UniqueName: \"kubernetes.io/projected/282d338d-1b91-4583-9499-c72b53475d62-kube-api-access-k26v2\") pod \"redhat-operators-kc9cf\" (UID: \"282d338d-1b91-4583-9499-c72b53475d62\") " pod="openshift-marketplace/redhat-operators-kc9cf" Nov 28 08:45:10 crc kubenswrapper[4946]: I1128 08:45:10.704663 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/282d338d-1b91-4583-9499-c72b53475d62-utilities\") pod \"redhat-operators-kc9cf\" (UID: \"282d338d-1b91-4583-9499-c72b53475d62\") " pod="openshift-marketplace/redhat-operators-kc9cf" Nov 28 08:45:10 crc kubenswrapper[4946]: I1128 08:45:10.704692 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/282d338d-1b91-4583-9499-c72b53475d62-catalog-content\") pod \"redhat-operators-kc9cf\" (UID: \"282d338d-1b91-4583-9499-c72b53475d62\") " pod="openshift-marketplace/redhat-operators-kc9cf" Nov 28 08:45:10 crc kubenswrapper[4946]: I1128 08:45:10.806370 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/282d338d-1b91-4583-9499-c72b53475d62-utilities\") pod \"redhat-operators-kc9cf\" (UID: \"282d338d-1b91-4583-9499-c72b53475d62\") " pod="openshift-marketplace/redhat-operators-kc9cf" Nov 28 08:45:10 crc kubenswrapper[4946]: I1128 08:45:10.806430 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/282d338d-1b91-4583-9499-c72b53475d62-catalog-content\") pod \"redhat-operators-kc9cf\" (UID: \"282d338d-1b91-4583-9499-c72b53475d62\") " pod="openshift-marketplace/redhat-operators-kc9cf" Nov 28 08:45:10 crc kubenswrapper[4946]: I1128 08:45:10.806533 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k26v2\" (UniqueName: \"kubernetes.io/projected/282d338d-1b91-4583-9499-c72b53475d62-kube-api-access-k26v2\") pod \"redhat-operators-kc9cf\" (UID: \"282d338d-1b91-4583-9499-c72b53475d62\") " pod="openshift-marketplace/redhat-operators-kc9cf" Nov 28 08:45:10 crc kubenswrapper[4946]: I1128 08:45:10.807014 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/282d338d-1b91-4583-9499-c72b53475d62-utilities\") pod \"redhat-operators-kc9cf\" (UID: \"282d338d-1b91-4583-9499-c72b53475d62\") " pod="openshift-marketplace/redhat-operators-kc9cf" Nov 28 08:45:10 crc kubenswrapper[4946]: I1128 08:45:10.807286 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/282d338d-1b91-4583-9499-c72b53475d62-catalog-content\") pod \"redhat-operators-kc9cf\" (UID: \"282d338d-1b91-4583-9499-c72b53475d62\") " pod="openshift-marketplace/redhat-operators-kc9cf" Nov 28 08:45:10 crc kubenswrapper[4946]: I1128 08:45:10.829302 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-k26v2\" (UniqueName: \"kubernetes.io/projected/282d338d-1b91-4583-9499-c72b53475d62-kube-api-access-k26v2\") pod \"redhat-operators-kc9cf\" (UID: \"282d338d-1b91-4583-9499-c72b53475d62\") " pod="openshift-marketplace/redhat-operators-kc9cf" Nov 28 08:45:11 crc kubenswrapper[4946]: I1128 08:45:11.002225 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kc9cf" Nov 28 08:45:11 crc kubenswrapper[4946]: I1128 08:45:11.471761 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kc9cf"] Nov 28 08:45:11 crc kubenswrapper[4946]: W1128 08:45:11.479981 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod282d338d_1b91_4583_9499_c72b53475d62.slice/crio-ab0424634b9a259bdc51b28a652713cb541f411a25d4fe121724811508ca6cfc WatchSource:0}: Error finding container ab0424634b9a259bdc51b28a652713cb541f411a25d4fe121724811508ca6cfc: Status 404 returned error can't find the container with id ab0424634b9a259bdc51b28a652713cb541f411a25d4fe121724811508ca6cfc Nov 28 08:45:11 crc kubenswrapper[4946]: I1128 08:45:11.898823 4946 generic.go:334] "Generic (PLEG): container finished" podID="282d338d-1b91-4583-9499-c72b53475d62" containerID="a9f17fe3cd62480224c65e62a03493e9150171ef2a744aaa3a2118fcfd3de4c9" exitCode=0 Nov 28 08:45:11 crc kubenswrapper[4946]: I1128 08:45:11.898918 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kc9cf" event={"ID":"282d338d-1b91-4583-9499-c72b53475d62","Type":"ContainerDied","Data":"a9f17fe3cd62480224c65e62a03493e9150171ef2a744aaa3a2118fcfd3de4c9"} Nov 28 08:45:11 crc kubenswrapper[4946]: I1128 08:45:11.899136 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kc9cf" event={"ID":"282d338d-1b91-4583-9499-c72b53475d62","Type":"ContainerStarted","Data":"ab0424634b9a259bdc51b28a652713cb541f411a25d4fe121724811508ca6cfc"} Nov 28 08:45:12 crc kubenswrapper[4946]: I1128 08:45:12.911039 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kc9cf" event={"ID":"282d338d-1b91-4583-9499-c72b53475d62","Type":"ContainerStarted","Data":"cea120be9e32b572b9c1ceb7b48d648a2381746d18d6344f1dae50bca7b8b4f0"} Nov 28 08:45:13 crc kubenswrapper[4946]: I1128 08:45:13.064580 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8rbst"] Nov 28 08:45:13 crc kubenswrapper[4946]: I1128 08:45:13.066309 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8rbst" Nov 28 08:45:13 crc kubenswrapper[4946]: I1128 08:45:13.082676 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8rbst"] Nov 28 08:45:13 crc kubenswrapper[4946]: I1128 08:45:13.141759 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afbd247d-2c6c-4e82-8643-4a0cf6fb7bda-utilities\") pod \"certified-operators-8rbst\" (UID: \"afbd247d-2c6c-4e82-8643-4a0cf6fb7bda\") " pod="openshift-marketplace/certified-operators-8rbst" Nov 28 08:45:13 crc kubenswrapper[4946]: I1128 08:45:13.141847 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dr27\" (UniqueName: \"kubernetes.io/projected/afbd247d-2c6c-4e82-8643-4a0cf6fb7bda-kube-api-access-8dr27\") pod \"certified-operators-8rbst\" (UID: \"afbd247d-2c6c-4e82-8643-4a0cf6fb7bda\") " pod="openshift-marketplace/certified-operators-8rbst" Nov 28 08:45:13 crc kubenswrapper[4946]: I1128 08:45:13.141964 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afbd247d-2c6c-4e82-8643-4a0cf6fb7bda-catalog-content\") pod \"certified-operators-8rbst\" (UID: \"afbd247d-2c6c-4e82-8643-4a0cf6fb7bda\") " pod="openshift-marketplace/certified-operators-8rbst" Nov 28 08:45:13 crc kubenswrapper[4946]: I1128 08:45:13.243563 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afbd247d-2c6c-4e82-8643-4a0cf6fb7bda-catalog-content\") pod \"certified-operators-8rbst\" (UID: \"afbd247d-2c6c-4e82-8643-4a0cf6fb7bda\") " pod="openshift-marketplace/certified-operators-8rbst" Nov 28 08:45:13 crc kubenswrapper[4946]: I1128 08:45:13.243675 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afbd247d-2c6c-4e82-8643-4a0cf6fb7bda-utilities\") pod \"certified-operators-8rbst\" (UID: \"afbd247d-2c6c-4e82-8643-4a0cf6fb7bda\") " pod="openshift-marketplace/certified-operators-8rbst" Nov 28 08:45:13 crc kubenswrapper[4946]: I1128 08:45:13.243710 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dr27\" (UniqueName: \"kubernetes.io/projected/afbd247d-2c6c-4e82-8643-4a0cf6fb7bda-kube-api-access-8dr27\") pod \"certified-operators-8rbst\" (UID: \"afbd247d-2c6c-4e82-8643-4a0cf6fb7bda\") " pod="openshift-marketplace/certified-operators-8rbst" Nov 28 08:45:13 crc kubenswrapper[4946]: I1128 08:45:13.244525 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afbd247d-2c6c-4e82-8643-4a0cf6fb7bda-catalog-content\") pod \"certified-operators-8rbst\" (UID: \"afbd247d-2c6c-4e82-8643-4a0cf6fb7bda\") " pod="openshift-marketplace/certified-operators-8rbst" Nov 28 08:45:13 crc kubenswrapper[4946]: I1128 08:45:13.244820 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afbd247d-2c6c-4e82-8643-4a0cf6fb7bda-utilities\") pod \"certified-operators-8rbst\" (UID: \"afbd247d-2c6c-4e82-8643-4a0cf6fb7bda\") " pod="openshift-marketplace/certified-operators-8rbst" Nov 28 08:45:13 crc kubenswrapper[4946]: I1128 08:45:13.265207 4946 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-8dr27\" (UniqueName: \"kubernetes.io/projected/afbd247d-2c6c-4e82-8643-4a0cf6fb7bda-kube-api-access-8dr27\") pod \"certified-operators-8rbst\" (UID: \"afbd247d-2c6c-4e82-8643-4a0cf6fb7bda\") " pod="openshift-marketplace/certified-operators-8rbst" Nov 28 08:45:13 crc kubenswrapper[4946]: I1128 08:45:13.385116 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8rbst" Nov 28 08:45:13 crc kubenswrapper[4946]: I1128 08:45:13.817179 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rwhs7" Nov 28 08:45:13 crc kubenswrapper[4946]: I1128 08:45:13.817613 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rwhs7" Nov 28 08:45:13 crc kubenswrapper[4946]: I1128 08:45:13.864967 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rwhs7" Nov 28 08:45:13 crc kubenswrapper[4946]: I1128 08:45:13.865638 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8rbst"] Nov 28 08:45:13 crc kubenswrapper[4946]: W1128 08:45:13.876849 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podafbd247d_2c6c_4e82_8643_4a0cf6fb7bda.slice/crio-6b7b5e5d248b373ad5db8dd7718afe2e6410406a40ccef7530cdace6b5893716 WatchSource:0}: Error finding container 6b7b5e5d248b373ad5db8dd7718afe2e6410406a40ccef7530cdace6b5893716: Status 404 returned error can't find the container with id 6b7b5e5d248b373ad5db8dd7718afe2e6410406a40ccef7530cdace6b5893716 Nov 28 08:45:13 crc kubenswrapper[4946]: I1128 08:45:13.922574 4946 generic.go:334] "Generic (PLEG): container finished" podID="282d338d-1b91-4583-9499-c72b53475d62" containerID="cea120be9e32b572b9c1ceb7b48d648a2381746d18d6344f1dae50bca7b8b4f0" exitCode=0 Nov 28 08:45:13 crc kubenswrapper[4946]: I1128 08:45:13.922659 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kc9cf" event={"ID":"282d338d-1b91-4583-9499-c72b53475d62","Type":"ContainerDied","Data":"cea120be9e32b572b9c1ceb7b48d648a2381746d18d6344f1dae50bca7b8b4f0"} Nov 28 08:45:13 crc kubenswrapper[4946]: I1128 08:45:13.923728 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8rbst" event={"ID":"afbd247d-2c6c-4e82-8643-4a0cf6fb7bda","Type":"ContainerStarted","Data":"6b7b5e5d248b373ad5db8dd7718afe2e6410406a40ccef7530cdace6b5893716"} Nov 28 08:45:13 crc kubenswrapper[4946]: I1128 08:45:13.961573 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rwhs7" Nov 28 08:45:14 crc kubenswrapper[4946]: I1128 08:45:14.937976 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kc9cf" event={"ID":"282d338d-1b91-4583-9499-c72b53475d62","Type":"ContainerStarted","Data":"d98c61506e64a98753faddd5fe6d73693166b89af3cbcb1d706778b07098996b"} Nov 28 08:45:14 crc kubenswrapper[4946]: I1128 08:45:14.942170 4946 generic.go:334] "Generic (PLEG): container finished" podID="afbd247d-2c6c-4e82-8643-4a0cf6fb7bda" containerID="00aa06e1bdaba9e9908a4a60afd23c739205a17ce4022b1a14e1f5154ad5d001" exitCode=0 Nov 28 08:45:14 crc kubenswrapper[4946]: I1128 08:45:14.942976 4946 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/certified-operators-8rbst" event={"ID":"afbd247d-2c6c-4e82-8643-4a0cf6fb7bda","Type":"ContainerDied","Data":"00aa06e1bdaba9e9908a4a60afd23c739205a17ce4022b1a14e1f5154ad5d001"} Nov 28 08:45:14 crc kubenswrapper[4946]: I1128 08:45:14.976787 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kc9cf" podStartSLOduration=2.498238852 podStartE2EDuration="4.975459157s" podCreationTimestamp="2025-11-28 08:45:10 +0000 UTC" firstStartedPulling="2025-11-28 08:45:11.900227827 +0000 UTC m=+6766.278292938" lastFinishedPulling="2025-11-28 08:45:14.377448112 +0000 UTC m=+6768.755513243" observedRunningTime="2025-11-28 08:45:14.966553907 +0000 UTC m=+6769.344619018" watchObservedRunningTime="2025-11-28 08:45:14.975459157 +0000 UTC m=+6769.353524308" Nov 28 08:45:15 crc kubenswrapper[4946]: I1128 08:45:15.953724 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8rbst" event={"ID":"afbd247d-2c6c-4e82-8643-4a0cf6fb7bda","Type":"ContainerStarted","Data":"3300b51e395c8a4bc4f716da9dcb446d8238e10e9106cb181be0cab669d23c94"} Nov 28 08:45:16 crc kubenswrapper[4946]: I1128 08:45:16.966722 4946 generic.go:334] "Generic (PLEG): container finished" podID="afbd247d-2c6c-4e82-8643-4a0cf6fb7bda" containerID="3300b51e395c8a4bc4f716da9dcb446d8238e10e9106cb181be0cab669d23c94" exitCode=0 Nov 28 08:45:16 crc kubenswrapper[4946]: I1128 08:45:16.966822 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8rbst" event={"ID":"afbd247d-2c6c-4e82-8643-4a0cf6fb7bda","Type":"ContainerDied","Data":"3300b51e395c8a4bc4f716da9dcb446d8238e10e9106cb181be0cab669d23c94"} Nov 28 08:45:17 crc kubenswrapper[4946]: I1128 08:45:17.444434 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rwhs7"] Nov 28 08:45:17 crc kubenswrapper[4946]: I1128 08:45:17.444711 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rwhs7" podUID="2c8858b6-8732-41df-b35c-d0014ea56d7f" containerName="registry-server" containerID="cri-o://d251abb6959b9600ad3d62092d8158ffe6f7aa0b241f23ef83db58316eaa55c0" gracePeriod=2 Nov 28 08:45:17 crc kubenswrapper[4946]: I1128 08:45:17.877296 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rwhs7" Nov 28 08:45:17 crc kubenswrapper[4946]: I1128 08:45:17.915181 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6dzg\" (UniqueName: \"kubernetes.io/projected/2c8858b6-8732-41df-b35c-d0014ea56d7f-kube-api-access-b6dzg\") pod \"2c8858b6-8732-41df-b35c-d0014ea56d7f\" (UID: \"2c8858b6-8732-41df-b35c-d0014ea56d7f\") " Nov 28 08:45:17 crc kubenswrapper[4946]: I1128 08:45:17.915300 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c8858b6-8732-41df-b35c-d0014ea56d7f-catalog-content\") pod \"2c8858b6-8732-41df-b35c-d0014ea56d7f\" (UID: \"2c8858b6-8732-41df-b35c-d0014ea56d7f\") " Nov 28 08:45:17 crc kubenswrapper[4946]: I1128 08:45:17.915344 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c8858b6-8732-41df-b35c-d0014ea56d7f-utilities\") pod \"2c8858b6-8732-41df-b35c-d0014ea56d7f\" (UID: \"2c8858b6-8732-41df-b35c-d0014ea56d7f\") " Nov 28 08:45:17 crc kubenswrapper[4946]: I1128 08:45:17.916641 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c8858b6-8732-41df-b35c-d0014ea56d7f-utilities" (OuterVolumeSpecName: "utilities") pod "2c8858b6-8732-41df-b35c-d0014ea56d7f" (UID: "2c8858b6-8732-41df-b35c-d0014ea56d7f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 08:45:17 crc kubenswrapper[4946]: I1128 08:45:17.926857 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c8858b6-8732-41df-b35c-d0014ea56d7f-kube-api-access-b6dzg" (OuterVolumeSpecName: "kube-api-access-b6dzg") pod "2c8858b6-8732-41df-b35c-d0014ea56d7f" (UID: "2c8858b6-8732-41df-b35c-d0014ea56d7f"). InnerVolumeSpecName "kube-api-access-b6dzg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:45:17 crc kubenswrapper[4946]: I1128 08:45:17.949048 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c8858b6-8732-41df-b35c-d0014ea56d7f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2c8858b6-8732-41df-b35c-d0014ea56d7f" (UID: "2c8858b6-8732-41df-b35c-d0014ea56d7f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 08:45:17 crc kubenswrapper[4946]: I1128 08:45:17.975754 4946 generic.go:334] "Generic (PLEG): container finished" podID="2c8858b6-8732-41df-b35c-d0014ea56d7f" containerID="d251abb6959b9600ad3d62092d8158ffe6f7aa0b241f23ef83db58316eaa55c0" exitCode=0 Nov 28 08:45:17 crc kubenswrapper[4946]: I1128 08:45:17.975818 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rwhs7" Nov 28 08:45:17 crc kubenswrapper[4946]: I1128 08:45:17.975832 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rwhs7" event={"ID":"2c8858b6-8732-41df-b35c-d0014ea56d7f","Type":"ContainerDied","Data":"d251abb6959b9600ad3d62092d8158ffe6f7aa0b241f23ef83db58316eaa55c0"} Nov 28 08:45:17 crc kubenswrapper[4946]: I1128 08:45:17.975885 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rwhs7" event={"ID":"2c8858b6-8732-41df-b35c-d0014ea56d7f","Type":"ContainerDied","Data":"19f02da1f47a57e1dcfd410c5c688c07a197e084ee22947b9bf0f73cbbe741fb"} Nov 28 08:45:17 crc kubenswrapper[4946]: I1128 08:45:17.975915 4946 scope.go:117] "RemoveContainer" containerID="d251abb6959b9600ad3d62092d8158ffe6f7aa0b241f23ef83db58316eaa55c0" Nov 28 08:45:17 crc kubenswrapper[4946]: I1128 08:45:17.978123 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8rbst" event={"ID":"afbd247d-2c6c-4e82-8643-4a0cf6fb7bda","Type":"ContainerStarted","Data":"99a57821571b956be921371ec56350aa98744c79f8690c11f0e93f48fae06597"} Nov 28 08:45:18 crc kubenswrapper[4946]: I1128 08:45:18.004664 4946 scope.go:117] "RemoveContainer" containerID="e7097c44ade8ab79e3255676c35618ab8d10e7a06a83c9f72c551eda913c6898" Nov 28 08:45:18 crc kubenswrapper[4946]: I1128 08:45:18.014294 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8rbst" podStartSLOduration=2.476364339 podStartE2EDuration="5.014272956s" podCreationTimestamp="2025-11-28 08:45:13 +0000 UTC" firstStartedPulling="2025-11-28 08:45:14.94486915 +0000 UTC m=+6769.322934261" lastFinishedPulling="2025-11-28 08:45:17.482777767 +0000 UTC m=+6771.860842878" observedRunningTime="2025-11-28 08:45:18.00635333 +0000 UTC m=+6772.384418461" watchObservedRunningTime="2025-11-28 08:45:18.014272956 +0000 UTC m=+6772.392338067" Nov 28 08:45:18 crc kubenswrapper[4946]: I1128 08:45:18.022948 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6dzg\" (UniqueName: \"kubernetes.io/projected/2c8858b6-8732-41df-b35c-d0014ea56d7f-kube-api-access-b6dzg\") on node \"crc\" DevicePath \"\"" Nov 28 08:45:18 crc kubenswrapper[4946]: I1128 08:45:18.022980 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c8858b6-8732-41df-b35c-d0014ea56d7f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 08:45:18 crc kubenswrapper[4946]: I1128 08:45:18.022994 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c8858b6-8732-41df-b35c-d0014ea56d7f-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 08:45:18 crc kubenswrapper[4946]: I1128 08:45:18.031146 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rwhs7"] Nov 28 08:45:18 crc kubenswrapper[4946]: I1128 08:45:18.032834 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rwhs7"] Nov 28 08:45:18 crc kubenswrapper[4946]: I1128 08:45:18.033847 4946 scope.go:117] "RemoveContainer" containerID="ac8c6064617ff9f43e0359450e01f50af1b393c4eafa9dbd2d6d263cd981ed5f" Nov 28 08:45:18 crc kubenswrapper[4946]: I1128 08:45:18.051200 4946 scope.go:117] "RemoveContainer" containerID="d251abb6959b9600ad3d62092d8158ffe6f7aa0b241f23ef83db58316eaa55c0" Nov 28 
08:45:18 crc kubenswrapper[4946]: E1128 08:45:18.051568 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d251abb6959b9600ad3d62092d8158ffe6f7aa0b241f23ef83db58316eaa55c0\": container with ID starting with d251abb6959b9600ad3d62092d8158ffe6f7aa0b241f23ef83db58316eaa55c0 not found: ID does not exist" containerID="d251abb6959b9600ad3d62092d8158ffe6f7aa0b241f23ef83db58316eaa55c0" Nov 28 08:45:18 crc kubenswrapper[4946]: I1128 08:45:18.051598 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d251abb6959b9600ad3d62092d8158ffe6f7aa0b241f23ef83db58316eaa55c0"} err="failed to get container status \"d251abb6959b9600ad3d62092d8158ffe6f7aa0b241f23ef83db58316eaa55c0\": rpc error: code = NotFound desc = could not find container \"d251abb6959b9600ad3d62092d8158ffe6f7aa0b241f23ef83db58316eaa55c0\": container with ID starting with d251abb6959b9600ad3d62092d8158ffe6f7aa0b241f23ef83db58316eaa55c0 not found: ID does not exist" Nov 28 08:45:18 crc kubenswrapper[4946]: I1128 08:45:18.051619 4946 scope.go:117] "RemoveContainer" containerID="e7097c44ade8ab79e3255676c35618ab8d10e7a06a83c9f72c551eda913c6898" Nov 28 08:45:18 crc kubenswrapper[4946]: E1128 08:45:18.051901 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7097c44ade8ab79e3255676c35618ab8d10e7a06a83c9f72c551eda913c6898\": container with ID starting with e7097c44ade8ab79e3255676c35618ab8d10e7a06a83c9f72c551eda913c6898 not found: ID does not exist" containerID="e7097c44ade8ab79e3255676c35618ab8d10e7a06a83c9f72c551eda913c6898" Nov 28 08:45:18 crc kubenswrapper[4946]: I1128 08:45:18.051942 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7097c44ade8ab79e3255676c35618ab8d10e7a06a83c9f72c551eda913c6898"} err="failed to get container status \"e7097c44ade8ab79e3255676c35618ab8d10e7a06a83c9f72c551eda913c6898\": rpc error: code = NotFound desc = could not find container \"e7097c44ade8ab79e3255676c35618ab8d10e7a06a83c9f72c551eda913c6898\": container with ID starting with e7097c44ade8ab79e3255676c35618ab8d10e7a06a83c9f72c551eda913c6898 not found: ID does not exist" Nov 28 08:45:18 crc kubenswrapper[4946]: I1128 08:45:18.051984 4946 scope.go:117] "RemoveContainer" containerID="ac8c6064617ff9f43e0359450e01f50af1b393c4eafa9dbd2d6d263cd981ed5f" Nov 28 08:45:18 crc kubenswrapper[4946]: E1128 08:45:18.052278 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac8c6064617ff9f43e0359450e01f50af1b393c4eafa9dbd2d6d263cd981ed5f\": container with ID starting with ac8c6064617ff9f43e0359450e01f50af1b393c4eafa9dbd2d6d263cd981ed5f not found: ID does not exist" containerID="ac8c6064617ff9f43e0359450e01f50af1b393c4eafa9dbd2d6d263cd981ed5f" Nov 28 08:45:18 crc kubenswrapper[4946]: I1128 08:45:18.052308 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac8c6064617ff9f43e0359450e01f50af1b393c4eafa9dbd2d6d263cd981ed5f"} err="failed to get container status \"ac8c6064617ff9f43e0359450e01f50af1b393c4eafa9dbd2d6d263cd981ed5f\": rpc error: code = NotFound desc = could not find container \"ac8c6064617ff9f43e0359450e01f50af1b393c4eafa9dbd2d6d263cd981ed5f\": container with ID starting with ac8c6064617ff9f43e0359450e01f50af1b393c4eafa9dbd2d6d263cd981ed5f not found: ID does not exist" Nov 28 08:45:18 crc 
kubenswrapper[4946]: I1128 08:45:18.990625 4946 scope.go:117] "RemoveContainer" containerID="350d49558ff5edf402a167963260dd78bc11ac007c0bdaee3d45f6814cfb23bd" Nov 28 08:45:18 crc kubenswrapper[4946]: E1128 08:45:18.991671 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:45:20 crc kubenswrapper[4946]: I1128 08:45:20.001047 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c8858b6-8732-41df-b35c-d0014ea56d7f" path="/var/lib/kubelet/pods/2c8858b6-8732-41df-b35c-d0014ea56d7f/volumes" Nov 28 08:45:21 crc kubenswrapper[4946]: I1128 08:45:21.003382 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kc9cf" Nov 28 08:45:21 crc kubenswrapper[4946]: I1128 08:45:21.003819 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kc9cf" Nov 28 08:45:21 crc kubenswrapper[4946]: I1128 08:45:21.068798 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kc9cf" Nov 28 08:45:22 crc kubenswrapper[4946]: I1128 08:45:22.075302 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kc9cf" Nov 28 08:45:22 crc kubenswrapper[4946]: I1128 08:45:22.847697 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kc9cf"] Nov 28 08:45:23 crc kubenswrapper[4946]: I1128 08:45:23.385484 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8rbst" Nov 28 08:45:23 crc kubenswrapper[4946]: I1128 08:45:23.385914 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8rbst" Nov 28 08:45:23 crc kubenswrapper[4946]: I1128 08:45:23.431818 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8rbst" Nov 28 08:45:24 crc kubenswrapper[4946]: I1128 08:45:24.031856 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kc9cf" podUID="282d338d-1b91-4583-9499-c72b53475d62" containerName="registry-server" containerID="cri-o://d98c61506e64a98753faddd5fe6d73693166b89af3cbcb1d706778b07098996b" gracePeriod=2 Nov 28 08:45:24 crc kubenswrapper[4946]: I1128 08:45:24.080044 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8rbst" Nov 28 08:45:25 crc kubenswrapper[4946]: I1128 08:45:25.044056 4946 generic.go:334] "Generic (PLEG): container finished" podID="282d338d-1b91-4583-9499-c72b53475d62" containerID="d98c61506e64a98753faddd5fe6d73693166b89af3cbcb1d706778b07098996b" exitCode=0 Nov 28 08:45:25 crc kubenswrapper[4946]: I1128 08:45:25.044139 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kc9cf" event={"ID":"282d338d-1b91-4583-9499-c72b53475d62","Type":"ContainerDied","Data":"d98c61506e64a98753faddd5fe6d73693166b89af3cbcb1d706778b07098996b"} Nov 28 08:45:25 crc 
kubenswrapper[4946]: I1128 08:45:25.573533 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kc9cf" Nov 28 08:45:25 crc kubenswrapper[4946]: I1128 08:45:25.744715 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/282d338d-1b91-4583-9499-c72b53475d62-utilities\") pod \"282d338d-1b91-4583-9499-c72b53475d62\" (UID: \"282d338d-1b91-4583-9499-c72b53475d62\") " Nov 28 08:45:25 crc kubenswrapper[4946]: I1128 08:45:25.744768 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/282d338d-1b91-4583-9499-c72b53475d62-catalog-content\") pod \"282d338d-1b91-4583-9499-c72b53475d62\" (UID: \"282d338d-1b91-4583-9499-c72b53475d62\") " Nov 28 08:45:25 crc kubenswrapper[4946]: I1128 08:45:25.744971 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k26v2\" (UniqueName: \"kubernetes.io/projected/282d338d-1b91-4583-9499-c72b53475d62-kube-api-access-k26v2\") pod \"282d338d-1b91-4583-9499-c72b53475d62\" (UID: \"282d338d-1b91-4583-9499-c72b53475d62\") " Nov 28 08:45:25 crc kubenswrapper[4946]: I1128 08:45:25.745827 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/282d338d-1b91-4583-9499-c72b53475d62-utilities" (OuterVolumeSpecName: "utilities") pod "282d338d-1b91-4583-9499-c72b53475d62" (UID: "282d338d-1b91-4583-9499-c72b53475d62"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 08:45:25 crc kubenswrapper[4946]: I1128 08:45:25.751637 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/282d338d-1b91-4583-9499-c72b53475d62-kube-api-access-k26v2" (OuterVolumeSpecName: "kube-api-access-k26v2") pod "282d338d-1b91-4583-9499-c72b53475d62" (UID: "282d338d-1b91-4583-9499-c72b53475d62"). InnerVolumeSpecName "kube-api-access-k26v2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:45:25 crc kubenswrapper[4946]: I1128 08:45:25.847199 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k26v2\" (UniqueName: \"kubernetes.io/projected/282d338d-1b91-4583-9499-c72b53475d62-kube-api-access-k26v2\") on node \"crc\" DevicePath \"\"" Nov 28 08:45:25 crc kubenswrapper[4946]: I1128 08:45:25.847250 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/282d338d-1b91-4583-9499-c72b53475d62-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 08:45:25 crc kubenswrapper[4946]: I1128 08:45:25.850988 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8rbst"] Nov 28 08:45:25 crc kubenswrapper[4946]: I1128 08:45:25.881611 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/282d338d-1b91-4583-9499-c72b53475d62-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "282d338d-1b91-4583-9499-c72b53475d62" (UID: "282d338d-1b91-4583-9499-c72b53475d62"). InnerVolumeSpecName "catalog-content". 
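The RemoveContainer failures earlier in this stream (rpc error: code = NotFound ... ID does not exist) are the CRI runtime reporting a container that is already gone; the kubelet logs "DeleteContainer returned error" and continues, treating deletion as idempotent. A sketch of that tolerance using gRPC status codes (assumes the google.golang.org/grpc module; the removeContainer helper is ours, not a kubelet function):

```go
package main

import (
	"errors"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer treats NotFound as success so retries stay idempotent,
// matching how the kubelet tolerates "could not find container" here.
func removeContainer(remove func(id string) error, id string) error {
	err := remove(id)
	if err == nil || status.Code(err) == codes.NotFound {
		return nil // already gone: nothing left to do
	}
	return err
}

func main() {
	gone := func(id string) error {
		return status.Errorf(codes.NotFound, "could not find container %q", id)
	}
	fmt.Println(removeContainer(gone, "d251abb6959b")) // <nil>

	boom := func(id string) error { return errors.New("runtime unavailable") }
	fmt.Println(removeContainer(boom, "d251abb6959b")) // runtime unavailable
}
```

status.Code returns codes.Unknown for non-gRPC errors, so only a genuine NotFound from the runtime is swallowed.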
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 08:45:25 crc kubenswrapper[4946]: I1128 08:45:25.948428 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/282d338d-1b91-4583-9499-c72b53475d62-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 08:45:26 crc kubenswrapper[4946]: I1128 08:45:26.055795 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kc9cf" Nov 28 08:45:26 crc kubenswrapper[4946]: I1128 08:45:26.056061 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kc9cf" event={"ID":"282d338d-1b91-4583-9499-c72b53475d62","Type":"ContainerDied","Data":"ab0424634b9a259bdc51b28a652713cb541f411a25d4fe121724811508ca6cfc"} Nov 28 08:45:26 crc kubenswrapper[4946]: I1128 08:45:26.056100 4946 scope.go:117] "RemoveContainer" containerID="d98c61506e64a98753faddd5fe6d73693166b89af3cbcb1d706778b07098996b" Nov 28 08:45:26 crc kubenswrapper[4946]: I1128 08:45:26.086772 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kc9cf"] Nov 28 08:45:26 crc kubenswrapper[4946]: I1128 08:45:26.096043 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kc9cf"] Nov 28 08:45:26 crc kubenswrapper[4946]: I1128 08:45:26.102304 4946 scope.go:117] "RemoveContainer" containerID="cea120be9e32b572b9c1ceb7b48d648a2381746d18d6344f1dae50bca7b8b4f0" Nov 28 08:45:26 crc kubenswrapper[4946]: I1128 08:45:26.125216 4946 scope.go:117] "RemoveContainer" containerID="a9f17fe3cd62480224c65e62a03493e9150171ef2a744aaa3a2118fcfd3de4c9" Nov 28 08:45:27 crc kubenswrapper[4946]: I1128 08:45:27.064660 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8rbst" podUID="afbd247d-2c6c-4e82-8643-4a0cf6fb7bda" containerName="registry-server" containerID="cri-o://99a57821571b956be921371ec56350aa98744c79f8690c11f0e93f48fae06597" gracePeriod=2 Nov 28 08:45:27 crc kubenswrapper[4946]: I1128 08:45:27.521587 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8rbst" Nov 28 08:45:27 crc kubenswrapper[4946]: I1128 08:45:27.684087 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afbd247d-2c6c-4e82-8643-4a0cf6fb7bda-catalog-content\") pod \"afbd247d-2c6c-4e82-8643-4a0cf6fb7bda\" (UID: \"afbd247d-2c6c-4e82-8643-4a0cf6fb7bda\") " Nov 28 08:45:27 crc kubenswrapper[4946]: I1128 08:45:27.684159 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afbd247d-2c6c-4e82-8643-4a0cf6fb7bda-utilities\") pod \"afbd247d-2c6c-4e82-8643-4a0cf6fb7bda\" (UID: \"afbd247d-2c6c-4e82-8643-4a0cf6fb7bda\") " Nov 28 08:45:27 crc kubenswrapper[4946]: I1128 08:45:27.685335 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afbd247d-2c6c-4e82-8643-4a0cf6fb7bda-utilities" (OuterVolumeSpecName: "utilities") pod "afbd247d-2c6c-4e82-8643-4a0cf6fb7bda" (UID: "afbd247d-2c6c-4e82-8643-4a0cf6fb7bda"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 08:45:27 crc kubenswrapper[4946]: I1128 08:45:27.685687 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dr27\" (UniqueName: \"kubernetes.io/projected/afbd247d-2c6c-4e82-8643-4a0cf6fb7bda-kube-api-access-8dr27\") pod \"afbd247d-2c6c-4e82-8643-4a0cf6fb7bda\" (UID: \"afbd247d-2c6c-4e82-8643-4a0cf6fb7bda\") " Nov 28 08:45:27 crc kubenswrapper[4946]: I1128 08:45:27.686247 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afbd247d-2c6c-4e82-8643-4a0cf6fb7bda-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 08:45:27 crc kubenswrapper[4946]: I1128 08:45:27.696651 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afbd247d-2c6c-4e82-8643-4a0cf6fb7bda-kube-api-access-8dr27" (OuterVolumeSpecName: "kube-api-access-8dr27") pod "afbd247d-2c6c-4e82-8643-4a0cf6fb7bda" (UID: "afbd247d-2c6c-4e82-8643-4a0cf6fb7bda"). InnerVolumeSpecName "kube-api-access-8dr27". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:45:27 crc kubenswrapper[4946]: I1128 08:45:27.729162 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afbd247d-2c6c-4e82-8643-4a0cf6fb7bda-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "afbd247d-2c6c-4e82-8643-4a0cf6fb7bda" (UID: "afbd247d-2c6c-4e82-8643-4a0cf6fb7bda"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 08:45:27 crc kubenswrapper[4946]: I1128 08:45:27.787547 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afbd247d-2c6c-4e82-8643-4a0cf6fb7bda-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 08:45:27 crc kubenswrapper[4946]: I1128 08:45:27.787589 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dr27\" (UniqueName: \"kubernetes.io/projected/afbd247d-2c6c-4e82-8643-4a0cf6fb7bda-kube-api-access-8dr27\") on node \"crc\" DevicePath \"\"" Nov 28 08:45:28 crc kubenswrapper[4946]: I1128 08:45:28.007783 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="282d338d-1b91-4583-9499-c72b53475d62" path="/var/lib/kubelet/pods/282d338d-1b91-4583-9499-c72b53475d62/volumes" Nov 28 08:45:28 crc kubenswrapper[4946]: I1128 08:45:28.075934 4946 generic.go:334] "Generic (PLEG): container finished" podID="afbd247d-2c6c-4e82-8643-4a0cf6fb7bda" containerID="99a57821571b956be921371ec56350aa98744c79f8690c11f0e93f48fae06597" exitCode=0 Nov 28 08:45:28 crc kubenswrapper[4946]: I1128 08:45:28.075987 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8rbst" event={"ID":"afbd247d-2c6c-4e82-8643-4a0cf6fb7bda","Type":"ContainerDied","Data":"99a57821571b956be921371ec56350aa98744c79f8690c11f0e93f48fae06597"} Nov 28 08:45:28 crc kubenswrapper[4946]: I1128 08:45:28.076019 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8rbst" event={"ID":"afbd247d-2c6c-4e82-8643-4a0cf6fb7bda","Type":"ContainerDied","Data":"6b7b5e5d248b373ad5db8dd7718afe2e6410406a40ccef7530cdace6b5893716"} Nov 28 08:45:28 crc kubenswrapper[4946]: I1128 08:45:28.076040 4946 scope.go:117] "RemoveContainer" containerID="99a57821571b956be921371ec56350aa98744c79f8690c11f0e93f48fae06597" Nov 28 08:45:28 crc kubenswrapper[4946]: I1128 08:45:28.076051 4946 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8rbst" Nov 28 08:45:28 crc kubenswrapper[4946]: I1128 08:45:28.108963 4946 scope.go:117] "RemoveContainer" containerID="3300b51e395c8a4bc4f716da9dcb446d8238e10e9106cb181be0cab669d23c94" Nov 28 08:45:28 crc kubenswrapper[4946]: I1128 08:45:28.111308 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8rbst"] Nov 28 08:45:28 crc kubenswrapper[4946]: I1128 08:45:28.123234 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8rbst"] Nov 28 08:45:28 crc kubenswrapper[4946]: I1128 08:45:28.131944 4946 scope.go:117] "RemoveContainer" containerID="00aa06e1bdaba9e9908a4a60afd23c739205a17ce4022b1a14e1f5154ad5d001" Nov 28 08:45:28 crc kubenswrapper[4946]: I1128 08:45:28.166769 4946 scope.go:117] "RemoveContainer" containerID="99a57821571b956be921371ec56350aa98744c79f8690c11f0e93f48fae06597" Nov 28 08:45:28 crc kubenswrapper[4946]: E1128 08:45:28.169308 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99a57821571b956be921371ec56350aa98744c79f8690c11f0e93f48fae06597\": container with ID starting with 99a57821571b956be921371ec56350aa98744c79f8690c11f0e93f48fae06597 not found: ID does not exist" containerID="99a57821571b956be921371ec56350aa98744c79f8690c11f0e93f48fae06597" Nov 28 08:45:28 crc kubenswrapper[4946]: I1128 08:45:28.169383 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99a57821571b956be921371ec56350aa98744c79f8690c11f0e93f48fae06597"} err="failed to get container status \"99a57821571b956be921371ec56350aa98744c79f8690c11f0e93f48fae06597\": rpc error: code = NotFound desc = could not find container \"99a57821571b956be921371ec56350aa98744c79f8690c11f0e93f48fae06597\": container with ID starting with 99a57821571b956be921371ec56350aa98744c79f8690c11f0e93f48fae06597 not found: ID does not exist" Nov 28 08:45:28 crc kubenswrapper[4946]: I1128 08:45:28.169430 4946 scope.go:117] "RemoveContainer" containerID="3300b51e395c8a4bc4f716da9dcb446d8238e10e9106cb181be0cab669d23c94" Nov 28 08:45:28 crc kubenswrapper[4946]: E1128 08:45:28.170107 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3300b51e395c8a4bc4f716da9dcb446d8238e10e9106cb181be0cab669d23c94\": container with ID starting with 3300b51e395c8a4bc4f716da9dcb446d8238e10e9106cb181be0cab669d23c94 not found: ID does not exist" containerID="3300b51e395c8a4bc4f716da9dcb446d8238e10e9106cb181be0cab669d23c94" Nov 28 08:45:28 crc kubenswrapper[4946]: I1128 08:45:28.170141 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3300b51e395c8a4bc4f716da9dcb446d8238e10e9106cb181be0cab669d23c94"} err="failed to get container status \"3300b51e395c8a4bc4f716da9dcb446d8238e10e9106cb181be0cab669d23c94\": rpc error: code = NotFound desc = could not find container \"3300b51e395c8a4bc4f716da9dcb446d8238e10e9106cb181be0cab669d23c94\": container with ID starting with 3300b51e395c8a4bc4f716da9dcb446d8238e10e9106cb181be0cab669d23c94 not found: ID does not exist" Nov 28 08:45:28 crc kubenswrapper[4946]: I1128 08:45:28.170163 4946 scope.go:117] "RemoveContainer" containerID="00aa06e1bdaba9e9908a4a60afd23c739205a17ce4022b1a14e1f5154ad5d001" Nov 28 08:45:28 crc kubenswrapper[4946]: E1128 08:45:28.170428 4946 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00aa06e1bdaba9e9908a4a60afd23c739205a17ce4022b1a14e1f5154ad5d001\": container with ID starting with 00aa06e1bdaba9e9908a4a60afd23c739205a17ce4022b1a14e1f5154ad5d001 not found: ID does not exist" containerID="00aa06e1bdaba9e9908a4a60afd23c739205a17ce4022b1a14e1f5154ad5d001" Nov 28 08:45:28 crc kubenswrapper[4946]: I1128 08:45:28.170499 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00aa06e1bdaba9e9908a4a60afd23c739205a17ce4022b1a14e1f5154ad5d001"} err="failed to get container status \"00aa06e1bdaba9e9908a4a60afd23c739205a17ce4022b1a14e1f5154ad5d001\": rpc error: code = NotFound desc = could not find container \"00aa06e1bdaba9e9908a4a60afd23c739205a17ce4022b1a14e1f5154ad5d001\": container with ID starting with 00aa06e1bdaba9e9908a4a60afd23c739205a17ce4022b1a14e1f5154ad5d001 not found: ID does not exist" Nov 28 08:45:30 crc kubenswrapper[4946]: I1128 08:45:30.009646 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afbd247d-2c6c-4e82-8643-4a0cf6fb7bda" path="/var/lib/kubelet/pods/afbd247d-2c6c-4e82-8643-4a0cf6fb7bda/volumes" Nov 28 08:45:30 crc kubenswrapper[4946]: I1128 08:45:30.989786 4946 scope.go:117] "RemoveContainer" containerID="350d49558ff5edf402a167963260dd78bc11ac007c0bdaee3d45f6814cfb23bd" Nov 28 08:45:30 crc kubenswrapper[4946]: E1128 08:45:30.990526 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:45:34 crc kubenswrapper[4946]: I1128 08:45:34.846775 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 28 08:45:34 crc kubenswrapper[4946]: E1128 08:45:34.847904 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afbd247d-2c6c-4e82-8643-4a0cf6fb7bda" containerName="extract-utilities" Nov 28 08:45:34 crc kubenswrapper[4946]: I1128 08:45:34.847928 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="afbd247d-2c6c-4e82-8643-4a0cf6fb7bda" containerName="extract-utilities" Nov 28 08:45:34 crc kubenswrapper[4946]: E1128 08:45:34.847955 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="282d338d-1b91-4583-9499-c72b53475d62" containerName="registry-server" Nov 28 08:45:34 crc kubenswrapper[4946]: I1128 08:45:34.847969 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="282d338d-1b91-4583-9499-c72b53475d62" containerName="registry-server" Nov 28 08:45:34 crc kubenswrapper[4946]: E1128 08:45:34.847995 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afbd247d-2c6c-4e82-8643-4a0cf6fb7bda" containerName="registry-server" Nov 28 08:45:34 crc kubenswrapper[4946]: I1128 08:45:34.848006 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="afbd247d-2c6c-4e82-8643-4a0cf6fb7bda" containerName="registry-server" Nov 28 08:45:34 crc kubenswrapper[4946]: E1128 08:45:34.848025 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="282d338d-1b91-4583-9499-c72b53475d62" containerName="extract-content" Nov 28 08:45:34 crc kubenswrapper[4946]: I1128 08:45:34.848035 4946 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="282d338d-1b91-4583-9499-c72b53475d62" containerName="extract-content" Nov 28 08:45:34 crc kubenswrapper[4946]: E1128 08:45:34.848057 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c8858b6-8732-41df-b35c-d0014ea56d7f" containerName="extract-content" Nov 28 08:45:34 crc kubenswrapper[4946]: I1128 08:45:34.848066 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c8858b6-8732-41df-b35c-d0014ea56d7f" containerName="extract-content" Nov 28 08:45:34 crc kubenswrapper[4946]: E1128 08:45:34.848095 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c8858b6-8732-41df-b35c-d0014ea56d7f" containerName="extract-utilities" Nov 28 08:45:34 crc kubenswrapper[4946]: I1128 08:45:34.848105 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c8858b6-8732-41df-b35c-d0014ea56d7f" containerName="extract-utilities" Nov 28 08:45:34 crc kubenswrapper[4946]: E1128 08:45:34.848129 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c8858b6-8732-41df-b35c-d0014ea56d7f" containerName="registry-server" Nov 28 08:45:34 crc kubenswrapper[4946]: I1128 08:45:34.848139 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c8858b6-8732-41df-b35c-d0014ea56d7f" containerName="registry-server" Nov 28 08:45:34 crc kubenswrapper[4946]: E1128 08:45:34.848159 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="282d338d-1b91-4583-9499-c72b53475d62" containerName="extract-utilities" Nov 28 08:45:34 crc kubenswrapper[4946]: I1128 08:45:34.848170 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="282d338d-1b91-4583-9499-c72b53475d62" containerName="extract-utilities" Nov 28 08:45:34 crc kubenswrapper[4946]: E1128 08:45:34.848188 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afbd247d-2c6c-4e82-8643-4a0cf6fb7bda" containerName="extract-content" Nov 28 08:45:34 crc kubenswrapper[4946]: I1128 08:45:34.848197 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="afbd247d-2c6c-4e82-8643-4a0cf6fb7bda" containerName="extract-content" Nov 28 08:45:34 crc kubenswrapper[4946]: I1128 08:45:34.848421 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c8858b6-8732-41df-b35c-d0014ea56d7f" containerName="registry-server" Nov 28 08:45:34 crc kubenswrapper[4946]: I1128 08:45:34.848487 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="afbd247d-2c6c-4e82-8643-4a0cf6fb7bda" containerName="registry-server" Nov 28 08:45:34 crc kubenswrapper[4946]: I1128 08:45:34.848509 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="282d338d-1b91-4583-9499-c72b53475d62" containerName="registry-server" Nov 28 08:45:34 crc kubenswrapper[4946]: I1128 08:45:34.849813 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 28 08:45:34 crc kubenswrapper[4946]: I1128 08:45:34.858651 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-1"] Nov 28 08:45:34 crc kubenswrapper[4946]: I1128 08:45:34.859875 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 28 08:45:34 crc kubenswrapper[4946]: I1128 08:45:34.860165 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-1" Nov 28 08:45:34 crc kubenswrapper[4946]: I1128 08:45:34.863802 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-p6prw" Nov 28 08:45:34 crc kubenswrapper[4946]: I1128 08:45:34.865802 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-2"] Nov 28 08:45:34 crc kubenswrapper[4946]: I1128 08:45:34.866465 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 28 08:45:34 crc kubenswrapper[4946]: I1128 08:45:34.867315 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Nov 28 08:45:34 crc kubenswrapper[4946]: I1128 08:45:34.876169 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 28 08:45:34 crc kubenswrapper[4946]: I1128 08:45:34.890016 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Nov 28 08:45:34 crc kubenswrapper[4946]: I1128 08:45:34.898150 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.006942 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfj9j\" (UniqueName: \"kubernetes.io/projected/ed69d76d-986f-4ef9-b0a2-920c71c2b72a-kube-api-access-wfj9j\") pod \"ovsdbserver-nb-0\" (UID: \"ed69d76d-986f-4ef9-b0a2-920c71c2b72a\") " pod="openstack/ovsdbserver-nb-0" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.006996 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ed69d76d-986f-4ef9-b0a2-920c71c2b72a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"ed69d76d-986f-4ef9-b0a2-920c71c2b72a\") " pod="openstack/ovsdbserver-nb-0" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.007027 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e2ad07a2-22e0-462f-be2e-8e06e6d507b1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e2ad07a2-22e0-462f-be2e-8e06e6d507b1\") pod \"ovsdbserver-nb-1\" (UID: \"76686c34-5f17-4139-aed5-05658e66a812\") " pod="openstack/ovsdbserver-nb-1" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.007078 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed69d76d-986f-4ef9-b0a2-920c71c2b72a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"ed69d76d-986f-4ef9-b0a2-920c71c2b72a\") " pod="openstack/ovsdbserver-nb-0" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.007098 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-013c7e0e-073d-4f3a-bfef-918816678d91\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-013c7e0e-073d-4f3a-bfef-918816678d91\") pod \"ovsdbserver-nb-0\" (UID: \"ed69d76d-986f-4ef9-b0a2-920c71c2b72a\") " pod="openstack/ovsdbserver-nb-0" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.007120 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ed69d76d-986f-4ef9-b0a2-920c71c2b72a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"ed69d76d-986f-4ef9-b0a2-920c71c2b72a\") " 
pod="openstack/ovsdbserver-nb-0" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.007139 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68kch\" (UniqueName: \"kubernetes.io/projected/76686c34-5f17-4139-aed5-05658e66a812-kube-api-access-68kch\") pod \"ovsdbserver-nb-1\" (UID: \"76686c34-5f17-4139-aed5-05658e66a812\") " pod="openstack/ovsdbserver-nb-1" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.007172 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/10a36adb-4aa2-49a7-a5b2-8903d8089a26-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"10a36adb-4aa2-49a7-a5b2-8903d8089a26\") " pod="openstack/ovsdbserver-nb-2" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.007211 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76686c34-5f17-4139-aed5-05658e66a812-config\") pod \"ovsdbserver-nb-1\" (UID: \"76686c34-5f17-4139-aed5-05658e66a812\") " pod="openstack/ovsdbserver-nb-1" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.007235 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed69d76d-986f-4ef9-b0a2-920c71c2b72a-config\") pod \"ovsdbserver-nb-0\" (UID: \"ed69d76d-986f-4ef9-b0a2-920c71c2b72a\") " pod="openstack/ovsdbserver-nb-0" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.007253 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76686c34-5f17-4139-aed5-05658e66a812-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"76686c34-5f17-4139-aed5-05658e66a812\") " pod="openstack/ovsdbserver-nb-1" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.007279 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/76686c34-5f17-4139-aed5-05658e66a812-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"76686c34-5f17-4139-aed5-05658e66a812\") " pod="openstack/ovsdbserver-nb-1" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.007300 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10a36adb-4aa2-49a7-a5b2-8903d8089a26-config\") pod \"ovsdbserver-nb-2\" (UID: \"10a36adb-4aa2-49a7-a5b2-8903d8089a26\") " pod="openstack/ovsdbserver-nb-2" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.007320 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwnl2\" (UniqueName: \"kubernetes.io/projected/10a36adb-4aa2-49a7-a5b2-8903d8089a26-kube-api-access-lwnl2\") pod \"ovsdbserver-nb-2\" (UID: \"10a36adb-4aa2-49a7-a5b2-8903d8089a26\") " pod="openstack/ovsdbserver-nb-2" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.007338 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10a36adb-4aa2-49a7-a5b2-8903d8089a26-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"10a36adb-4aa2-49a7-a5b2-8903d8089a26\") " pod="openstack/ovsdbserver-nb-2" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.007517 4946 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/10a36adb-4aa2-49a7-a5b2-8903d8089a26-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"10a36adb-4aa2-49a7-a5b2-8903d8089a26\") " pod="openstack/ovsdbserver-nb-2" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.007554 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-83bc95d0-6e37-468f-a95b-ec982f4e3b49\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-83bc95d0-6e37-468f-a95b-ec982f4e3b49\") pod \"ovsdbserver-nb-2\" (UID: \"10a36adb-4aa2-49a7-a5b2-8903d8089a26\") " pod="openstack/ovsdbserver-nb-2" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.007584 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/76686c34-5f17-4139-aed5-05658e66a812-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"76686c34-5f17-4139-aed5-05658e66a812\") " pod="openstack/ovsdbserver-nb-1" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.047866 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.049276 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.051598 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.051753 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.064272 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.071006 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-txt5d" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.085661 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-2"] Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.086985 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.094004 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-1"] Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.095670 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-1" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.110264 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.115189 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-83bc95d0-6e37-468f-a95b-ec982f4e3b49\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-83bc95d0-6e37-468f-a95b-ec982f4e3b49\") pod \"ovsdbserver-nb-2\" (UID: \"10a36adb-4aa2-49a7-a5b2-8903d8089a26\") " pod="openstack/ovsdbserver-nb-2" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.115273 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/76686c34-5f17-4139-aed5-05658e66a812-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"76686c34-5f17-4139-aed5-05658e66a812\") " pod="openstack/ovsdbserver-nb-1" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.115334 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfj9j\" (UniqueName: \"kubernetes.io/projected/ed69d76d-986f-4ef9-b0a2-920c71c2b72a-kube-api-access-wfj9j\") pod \"ovsdbserver-nb-0\" (UID: \"ed69d76d-986f-4ef9-b0a2-920c71c2b72a\") " pod="openstack/ovsdbserver-nb-0" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.115396 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ed69d76d-986f-4ef9-b0a2-920c71c2b72a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"ed69d76d-986f-4ef9-b0a2-920c71c2b72a\") " pod="openstack/ovsdbserver-nb-0" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.115449 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e2ad07a2-22e0-462f-be2e-8e06e6d507b1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e2ad07a2-22e0-462f-be2e-8e06e6d507b1\") pod \"ovsdbserver-nb-1\" (UID: \"76686c34-5f17-4139-aed5-05658e66a812\") " pod="openstack/ovsdbserver-nb-1" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.115511 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccf5d58f-8529-453c-b840-d5839d294d38-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"ccf5d58f-8529-453c-b840-d5839d294d38\") " pod="openstack/ovsdbserver-sb-0" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.115539 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdh52\" (UniqueName: \"kubernetes.io/projected/ccf5d58f-8529-453c-b840-d5839d294d38-kube-api-access-mdh52\") pod \"ovsdbserver-sb-0\" (UID: \"ccf5d58f-8529-453c-b840-d5839d294d38\") " pod="openstack/ovsdbserver-sb-0" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.115583 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed69d76d-986f-4ef9-b0a2-920c71c2b72a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"ed69d76d-986f-4ef9-b0a2-920c71c2b72a\") " pod="openstack/ovsdbserver-nb-0" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.115615 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-013c7e0e-073d-4f3a-bfef-918816678d91\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-013c7e0e-073d-4f3a-bfef-918816678d91\") pod \"ovsdbserver-nb-0\" (UID: \"ed69d76d-986f-4ef9-b0a2-920c71c2b72a\") " pod="openstack/ovsdbserver-nb-0" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.115648 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ccf5d58f-8529-453c-b840-d5839d294d38-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"ccf5d58f-8529-453c-b840-d5839d294d38\") " pod="openstack/ovsdbserver-sb-0" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.115674 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccf5d58f-8529-453c-b840-d5839d294d38-config\") pod \"ovsdbserver-sb-0\" (UID: \"ccf5d58f-8529-453c-b840-d5839d294d38\") " pod="openstack/ovsdbserver-sb-0" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.115708 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ed69d76d-986f-4ef9-b0a2-920c71c2b72a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"ed69d76d-986f-4ef9-b0a2-920c71c2b72a\") " pod="openstack/ovsdbserver-nb-0" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.115746 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68kch\" (UniqueName: \"kubernetes.io/projected/76686c34-5f17-4139-aed5-05658e66a812-kube-api-access-68kch\") pod \"ovsdbserver-nb-1\" (UID: \"76686c34-5f17-4139-aed5-05658e66a812\") " pod="openstack/ovsdbserver-nb-1" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.115795 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/10a36adb-4aa2-49a7-a5b2-8903d8089a26-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"10a36adb-4aa2-49a7-a5b2-8903d8089a26\") " pod="openstack/ovsdbserver-nb-2" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.115826 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ca93aaa6-37c4-4d39-a07d-36ef881ba85f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ca93aaa6-37c4-4d39-a07d-36ef881ba85f\") pod \"ovsdbserver-sb-0\" (UID: \"ccf5d58f-8529-453c-b840-d5839d294d38\") " pod="openstack/ovsdbserver-sb-0" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.115858 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76686c34-5f17-4139-aed5-05658e66a812-config\") pod \"ovsdbserver-nb-1\" (UID: \"76686c34-5f17-4139-aed5-05658e66a812\") " pod="openstack/ovsdbserver-nb-1" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.115912 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed69d76d-986f-4ef9-b0a2-920c71c2b72a-config\") pod \"ovsdbserver-nb-0\" (UID: \"ed69d76d-986f-4ef9-b0a2-920c71c2b72a\") " pod="openstack/ovsdbserver-nb-0" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.115943 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ccf5d58f-8529-453c-b840-d5839d294d38-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"ccf5d58f-8529-453c-b840-d5839d294d38\") " pod="openstack/ovsdbserver-sb-0" Nov 28 08:45:35 crc 
kubenswrapper[4946]: I1128 08:45:35.115971 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76686c34-5f17-4139-aed5-05658e66a812-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"76686c34-5f17-4139-aed5-05658e66a812\") " pod="openstack/ovsdbserver-nb-1" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.116012 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/76686c34-5f17-4139-aed5-05658e66a812-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"76686c34-5f17-4139-aed5-05658e66a812\") " pod="openstack/ovsdbserver-nb-1" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.116039 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10a36adb-4aa2-49a7-a5b2-8903d8089a26-config\") pod \"ovsdbserver-nb-2\" (UID: \"10a36adb-4aa2-49a7-a5b2-8903d8089a26\") " pod="openstack/ovsdbserver-nb-2" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.116066 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwnl2\" (UniqueName: \"kubernetes.io/projected/10a36adb-4aa2-49a7-a5b2-8903d8089a26-kube-api-access-lwnl2\") pod \"ovsdbserver-nb-2\" (UID: \"10a36adb-4aa2-49a7-a5b2-8903d8089a26\") " pod="openstack/ovsdbserver-nb-2" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.116097 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10a36adb-4aa2-49a7-a5b2-8903d8089a26-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"10a36adb-4aa2-49a7-a5b2-8903d8089a26\") " pod="openstack/ovsdbserver-nb-2" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.117855 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/10a36adb-4aa2-49a7-a5b2-8903d8089a26-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"10a36adb-4aa2-49a7-a5b2-8903d8089a26\") " pod="openstack/ovsdbserver-nb-2" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.118664 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/10a36adb-4aa2-49a7-a5b2-8903d8089a26-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"10a36adb-4aa2-49a7-a5b2-8903d8089a26\") " pod="openstack/ovsdbserver-nb-2" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.119630 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ed69d76d-986f-4ef9-b0a2-920c71c2b72a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"ed69d76d-986f-4ef9-b0a2-920c71c2b72a\") " pod="openstack/ovsdbserver-nb-0" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.120215 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/10a36adb-4aa2-49a7-a5b2-8903d8089a26-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"10a36adb-4aa2-49a7-a5b2-8903d8089a26\") " pod="openstack/ovsdbserver-nb-2" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.120865 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/76686c34-5f17-4139-aed5-05658e66a812-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"76686c34-5f17-4139-aed5-05658e66a812\") " pod="openstack/ovsdbserver-nb-1" 
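Stepping back to the teardown at 08:45:28 above: the repeated "RemoveContainer" attempts that end in "NotFound ... ID does not exist" are a benign race, not a failure. The kubelet retries deletion for containers that CRI-O has already garbage-collected after the first RemoveContainer, and it treats NotFound as success. A minimal Go sketch of that idempotent-delete pattern, using the same gRPC status codes the CRI returns; deleteFn here is a stand-in for the real CRI RemoveContainer RPC, not the kubelet's actual code path:

package main

import (
	"errors"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer tolerates the race visible above: by the time the
// second RemoveContainer fires, CRI-O has already deleted the container,
// so a gRPC NotFound is treated as success rather than an error.
// deleteFn is a stand-in for the real CRI RemoveContainer RPC.
func removeContainer(id string, deleteFn func(string) error) error {
	err := deleteFn(id)
	if err == nil || status.Code(err) == codes.NotFound {
		return nil // already gone; deletion is idempotent
	}
	return fmt.Errorf("remove container %s: %w", id, err)
}

func main() {
	gone := func(string) error {
		return status.Error(codes.NotFound, "ID does not exist")
	}
	fmt.Println(removeContainer("99a57821571b", gone)) // <nil>

	down := func(string) error { return errors.New("runtime unavailable") }
	fmt.Println(removeContainer("99a57821571b", down)) // wrapped error
}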
Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.121382 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ed69d76d-986f-4ef9-b0a2-920c71c2b72a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"ed69d76d-986f-4ef9-b0a2-920c71c2b72a\") " pod="openstack/ovsdbserver-nb-0" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.121434 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76686c34-5f17-4139-aed5-05658e66a812-config\") pod \"ovsdbserver-nb-1\" (UID: \"76686c34-5f17-4139-aed5-05658e66a812\") " pod="openstack/ovsdbserver-nb-1" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.122196 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/76686c34-5f17-4139-aed5-05658e66a812-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"76686c34-5f17-4139-aed5-05658e66a812\") " pod="openstack/ovsdbserver-nb-1" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.122257 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed69d76d-986f-4ef9-b0a2-920c71c2b72a-config\") pod \"ovsdbserver-nb-0\" (UID: \"ed69d76d-986f-4ef9-b0a2-920c71c2b72a\") " pod="openstack/ovsdbserver-nb-0" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.123594 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10a36adb-4aa2-49a7-a5b2-8903d8089a26-config\") pod \"ovsdbserver-nb-2\" (UID: \"10a36adb-4aa2-49a7-a5b2-8903d8089a26\") " pod="openstack/ovsdbserver-nb-2" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.130446 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76686c34-5f17-4139-aed5-05658e66a812-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"76686c34-5f17-4139-aed5-05658e66a812\") " pod="openstack/ovsdbserver-nb-1" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.136169 4946 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
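The csi_attacher message that closes the entry above explains why no staging call appears for these PVCs: kubevirt.io.hostpath-provisioner does not advertise the STAGE_UNSTAGE_VOLUME node capability, so the kubelet skips NodeStageVolume and MountDevice degenerates to recording the global mount path before MountVolume.SetUp publishes the volume. A sketch of that branch, with the capability flag as the only assumption:

package main

import "fmt"

type nodeCaps struct{ stageUnstage bool }

// mountDevice mirrors the csi_attacher decision above: NodeStageVolume
// is only invoked when the CSI node plugin reports STAGE_UNSTAGE_VOLUME.
// Drivers without it, like the hostpath provisioner here, go straight
// from attach-verify to NodePublishVolume (MountVolume.SetUp).
func mountDevice(caps nodeCaps, volume string) {
	if !caps.stageUnstage {
		fmt.Println("attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...")
		return
	}
	fmt.Printf("NodeStageVolume(%s)\n", volume)
}

func main() {
	mountDevice(nodeCaps{stageUnstage: false},
		"pvc-e2ad07a2-22e0-462f-be2e-8e06e6d507b1")
}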
Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.136296 4946 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e2ad07a2-22e0-462f-be2e-8e06e6d507b1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e2ad07a2-22e0-462f-be2e-8e06e6d507b1\") pod \"ovsdbserver-nb-1\" (UID: \"76686c34-5f17-4139-aed5-05658e66a812\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4ad63c5241191a42d14d5b9d764b2661993256cc348d9312e3eac5ce71956cf1/globalmount\"" pod="openstack/ovsdbserver-nb-1" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.137916 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10a36adb-4aa2-49a7-a5b2-8903d8089a26-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"10a36adb-4aa2-49a7-a5b2-8903d8089a26\") " pod="openstack/ovsdbserver-nb-2" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.146813 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed69d76d-986f-4ef9-b0a2-920c71c2b72a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"ed69d76d-986f-4ef9-b0a2-920c71c2b72a\") " pod="openstack/ovsdbserver-nb-0" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.148260 4946 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.148302 4946 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-83bc95d0-6e37-468f-a95b-ec982f4e3b49\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-83bc95d0-6e37-468f-a95b-ec982f4e3b49\") pod \"ovsdbserver-nb-2\" (UID: \"10a36adb-4aa2-49a7-a5b2-8903d8089a26\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d1c763ff04461eda0af1ecb8860c2f68a03ec8654437d0e46324bec973802268/globalmount\"" pod="openstack/ovsdbserver-nb-2" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.148320 4946 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
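The "device mount path" values in the MountDevice entries above all share one shape: /var/lib/kubelet/plugins/kubernetes.io/csi/<driver>/<64-hex>/globalmount, where the 64-hex component is a SHA-256 digest. A sketch of that derivation follows; the digest input is assumed to be the CSI volume handle from the PersistentVolume spec, which this log does not show, so the computed hash will not necessarily reproduce the exact paths above:

package main

import (
	"crypto/sha256"
	"fmt"
	"path/filepath"
)

// globalMountPath reconstructs the staging-path shape seen in the
// MountDevice entries above. The volumeHandle argument is an
// assumption: the real handle comes from the PersistentVolume's
// csi.volumeHandle field, which is not visible in this log.
func globalMountPath(kubeletDir, driver, volumeHandle string) string {
	sum := sha256.Sum256([]byte(volumeHandle))
	return filepath.Join(kubeletDir, "plugins", "kubernetes.io", "csi",
		driver, fmt.Sprintf("%x", sum), "globalmount")
}

func main() {
	fmt.Println(globalMountPath("/var/lib/kubelet",
		"kubevirt.io.hostpath-provisioner",
		"pvc-013c7e0e-073d-4f3a-bfef-918816678d91"))
}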
Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.148361 4946 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-013c7e0e-073d-4f3a-bfef-918816678d91\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-013c7e0e-073d-4f3a-bfef-918816678d91\") pod \"ovsdbserver-nb-0\" (UID: \"ed69d76d-986f-4ef9-b0a2-920c71c2b72a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/40c117a85d4da23206f33deaa6d0db90d23eaa4488451826950e23d752f633f6/globalmount\"" pod="openstack/ovsdbserver-nb-0" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.151964 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68kch\" (UniqueName: \"kubernetes.io/projected/76686c34-5f17-4139-aed5-05658e66a812-kube-api-access-68kch\") pod \"ovsdbserver-nb-1\" (UID: \"76686c34-5f17-4139-aed5-05658e66a812\") " pod="openstack/ovsdbserver-nb-1" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.152251 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfj9j\" (UniqueName: \"kubernetes.io/projected/ed69d76d-986f-4ef9-b0a2-920c71c2b72a-kube-api-access-wfj9j\") pod \"ovsdbserver-nb-0\" (UID: \"ed69d76d-986f-4ef9-b0a2-920c71c2b72a\") " pod="openstack/ovsdbserver-nb-0" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.153391 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwnl2\" (UniqueName: \"kubernetes.io/projected/10a36adb-4aa2-49a7-a5b2-8903d8089a26-kube-api-access-lwnl2\") pod \"ovsdbserver-nb-2\" (UID: \"10a36adb-4aa2-49a7-a5b2-8903d8089a26\") " pod="openstack/ovsdbserver-nb-2" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.157035 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.175558 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-83bc95d0-6e37-468f-a95b-ec982f4e3b49\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-83bc95d0-6e37-468f-a95b-ec982f4e3b49\") pod \"ovsdbserver-nb-2\" (UID: \"10a36adb-4aa2-49a7-a5b2-8903d8089a26\") " pod="openstack/ovsdbserver-nb-2" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.175715 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e2ad07a2-22e0-462f-be2e-8e06e6d507b1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e2ad07a2-22e0-462f-be2e-8e06e6d507b1\") pod \"ovsdbserver-nb-1\" (UID: \"76686c34-5f17-4139-aed5-05658e66a812\") " pod="openstack/ovsdbserver-nb-1" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.176386 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-013c7e0e-073d-4f3a-bfef-918816678d91\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-013c7e0e-073d-4f3a-bfef-918816678d91\") pod \"ovsdbserver-nb-0\" (UID: \"ed69d76d-986f-4ef9-b0a2-920c71c2b72a\") " pod="openstack/ovsdbserver-nb-0" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.184053 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.196017 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.215336 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-2" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.219014 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c283f69f-56de-48e3-a4f5-c1fc7f8497ec-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"c283f69f-56de-48e3-a4f5-c1fc7f8497ec\") " pod="openstack/ovsdbserver-sb-1" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.219056 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c283f69f-56de-48e3-a4f5-c1fc7f8497ec-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"c283f69f-56de-48e3-a4f5-c1fc7f8497ec\") " pod="openstack/ovsdbserver-sb-1" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.219092 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdh52\" (UniqueName: \"kubernetes.io/projected/ccf5d58f-8529-453c-b840-d5839d294d38-kube-api-access-mdh52\") pod \"ovsdbserver-sb-0\" (UID: \"ccf5d58f-8529-453c-b840-d5839d294d38\") " pod="openstack/ovsdbserver-sb-0" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.219111 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccf5d58f-8529-453c-b840-d5839d294d38-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"ccf5d58f-8529-453c-b840-d5839d294d38\") " pod="openstack/ovsdbserver-sb-0" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.219130 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8b3da77b-9e83-49fe-a2de-d7e88f24cc7e-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"8b3da77b-9e83-49fe-a2de-d7e88f24cc7e\") " pod="openstack/ovsdbserver-sb-2" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.219155 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c283f69f-56de-48e3-a4f5-c1fc7f8497ec-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"c283f69f-56de-48e3-a4f5-c1fc7f8497ec\") " pod="openstack/ovsdbserver-sb-1" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.219177 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-122f1a6e-c584-4485-94a0-662ca7bd60bf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-122f1a6e-c584-4485-94a0-662ca7bd60bf\") pod \"ovsdbserver-sb-1\" (UID: \"c283f69f-56de-48e3-a4f5-c1fc7f8497ec\") " pod="openstack/ovsdbserver-sb-1" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.219198 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ccf5d58f-8529-453c-b840-d5839d294d38-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"ccf5d58f-8529-453c-b840-d5839d294d38\") " pod="openstack/ovsdbserver-sb-0" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.219218 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccf5d58f-8529-453c-b840-d5839d294d38-config\") pod \"ovsdbserver-sb-0\" (UID: \"ccf5d58f-8529-453c-b840-d5839d294d38\") " pod="openstack/ovsdbserver-sb-0" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.219247 4946 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gzgp\" (UniqueName: \"kubernetes.io/projected/c283f69f-56de-48e3-a4f5-c1fc7f8497ec-kube-api-access-9gzgp\") pod \"ovsdbserver-sb-1\" (UID: \"c283f69f-56de-48e3-a4f5-c1fc7f8497ec\") " pod="openstack/ovsdbserver-sb-1" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.219274 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ca93aaa6-37c4-4d39-a07d-36ef881ba85f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ca93aaa6-37c4-4d39-a07d-36ef881ba85f\") pod \"ovsdbserver-sb-0\" (UID: \"ccf5d58f-8529-453c-b840-d5839d294d38\") " pod="openstack/ovsdbserver-sb-0" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.219300 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ccf5d58f-8529-453c-b840-d5839d294d38-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"ccf5d58f-8529-453c-b840-d5839d294d38\") " pod="openstack/ovsdbserver-sb-0" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.219335 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b3da77b-9e83-49fe-a2de-d7e88f24cc7e-config\") pod \"ovsdbserver-sb-2\" (UID: \"8b3da77b-9e83-49fe-a2de-d7e88f24cc7e\") " pod="openstack/ovsdbserver-sb-2" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.219353 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8b3da77b-9e83-49fe-a2de-d7e88f24cc7e-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"8b3da77b-9e83-49fe-a2de-d7e88f24cc7e\") " pod="openstack/ovsdbserver-sb-2" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.219373 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c283f69f-56de-48e3-a4f5-c1fc7f8497ec-config\") pod \"ovsdbserver-sb-1\" (UID: \"c283f69f-56de-48e3-a4f5-c1fc7f8497ec\") " pod="openstack/ovsdbserver-sb-1" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.219390 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqghk\" (UniqueName: \"kubernetes.io/projected/8b3da77b-9e83-49fe-a2de-d7e88f24cc7e-kube-api-access-nqghk\") pod \"ovsdbserver-sb-2\" (UID: \"8b3da77b-9e83-49fe-a2de-d7e88f24cc7e\") " pod="openstack/ovsdbserver-sb-2" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.219406 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b3da77b-9e83-49fe-a2de-d7e88f24cc7e-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"8b3da77b-9e83-49fe-a2de-d7e88f24cc7e\") " pod="openstack/ovsdbserver-sb-2" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.219429 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8fb5af98-4b4b-4277-93ed-6d9a52925187\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8fb5af98-4b4b-4277-93ed-6d9a52925187\") pod \"ovsdbserver-sb-2\" (UID: \"8b3da77b-9e83-49fe-a2de-d7e88f24cc7e\") " pod="openstack/ovsdbserver-sb-2" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.219955 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" 
(UniqueName: \"kubernetes.io/empty-dir/ccf5d58f-8529-453c-b840-d5839d294d38-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"ccf5d58f-8529-453c-b840-d5839d294d38\") " pod="openstack/ovsdbserver-sb-0" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.220519 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ccf5d58f-8529-453c-b840-d5839d294d38-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"ccf5d58f-8529-453c-b840-d5839d294d38\") " pod="openstack/ovsdbserver-sb-0" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.220532 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccf5d58f-8529-453c-b840-d5839d294d38-config\") pod \"ovsdbserver-sb-0\" (UID: \"ccf5d58f-8529-453c-b840-d5839d294d38\") " pod="openstack/ovsdbserver-sb-0" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.223835 4946 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.223876 4946 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ca93aaa6-37c4-4d39-a07d-36ef881ba85f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ca93aaa6-37c4-4d39-a07d-36ef881ba85f\") pod \"ovsdbserver-sb-0\" (UID: \"ccf5d58f-8529-453c-b840-d5839d294d38\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5e2e14391c61886e7c60f7faea313c3de63a355fe4cb1c431712273a420c48e9/globalmount\"" pod="openstack/ovsdbserver-sb-0" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.223831 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccf5d58f-8529-453c-b840-d5839d294d38-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"ccf5d58f-8529-453c-b840-d5839d294d38\") " pod="openstack/ovsdbserver-sb-0" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.236289 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdh52\" (UniqueName: \"kubernetes.io/projected/ccf5d58f-8529-453c-b840-d5839d294d38-kube-api-access-mdh52\") pod \"ovsdbserver-sb-0\" (UID: \"ccf5d58f-8529-453c-b840-d5839d294d38\") " pod="openstack/ovsdbserver-sb-0" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.256498 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ca93aaa6-37c4-4d39-a07d-36ef881ba85f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ca93aaa6-37c4-4d39-a07d-36ef881ba85f\") pod \"ovsdbserver-sb-0\" (UID: \"ccf5d58f-8529-453c-b840-d5839d294d38\") " pod="openstack/ovsdbserver-sb-0" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.320516 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c283f69f-56de-48e3-a4f5-c1fc7f8497ec-config\") pod \"ovsdbserver-sb-1\" (UID: \"c283f69f-56de-48e3-a4f5-c1fc7f8497ec\") " pod="openstack/ovsdbserver-sb-1" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.320971 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqghk\" (UniqueName: \"kubernetes.io/projected/8b3da77b-9e83-49fe-a2de-d7e88f24cc7e-kube-api-access-nqghk\") pod \"ovsdbserver-sb-2\" (UID: \"8b3da77b-9e83-49fe-a2de-d7e88f24cc7e\") " pod="openstack/ovsdbserver-sb-2" Nov 28 08:45:35 crc 
kubenswrapper[4946]: I1128 08:45:35.321010 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b3da77b-9e83-49fe-a2de-d7e88f24cc7e-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"8b3da77b-9e83-49fe-a2de-d7e88f24cc7e\") " pod="openstack/ovsdbserver-sb-2" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.321050 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8fb5af98-4b4b-4277-93ed-6d9a52925187\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8fb5af98-4b4b-4277-93ed-6d9a52925187\") pod \"ovsdbserver-sb-2\" (UID: \"8b3da77b-9e83-49fe-a2de-d7e88f24cc7e\") " pod="openstack/ovsdbserver-sb-2" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.321127 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c283f69f-56de-48e3-a4f5-c1fc7f8497ec-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"c283f69f-56de-48e3-a4f5-c1fc7f8497ec\") " pod="openstack/ovsdbserver-sb-1" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.321163 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c283f69f-56de-48e3-a4f5-c1fc7f8497ec-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"c283f69f-56de-48e3-a4f5-c1fc7f8497ec\") " pod="openstack/ovsdbserver-sb-1" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.321202 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8b3da77b-9e83-49fe-a2de-d7e88f24cc7e-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"8b3da77b-9e83-49fe-a2de-d7e88f24cc7e\") " pod="openstack/ovsdbserver-sb-2" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.321237 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c283f69f-56de-48e3-a4f5-c1fc7f8497ec-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"c283f69f-56de-48e3-a4f5-c1fc7f8497ec\") " pod="openstack/ovsdbserver-sb-1" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.321268 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-122f1a6e-c584-4485-94a0-662ca7bd60bf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-122f1a6e-c584-4485-94a0-662ca7bd60bf\") pod \"ovsdbserver-sb-1\" (UID: \"c283f69f-56de-48e3-a4f5-c1fc7f8497ec\") " pod="openstack/ovsdbserver-sb-1" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.321308 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gzgp\" (UniqueName: \"kubernetes.io/projected/c283f69f-56de-48e3-a4f5-c1fc7f8497ec-kube-api-access-9gzgp\") pod \"ovsdbserver-sb-1\" (UID: \"c283f69f-56de-48e3-a4f5-c1fc7f8497ec\") " pod="openstack/ovsdbserver-sb-1" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.321373 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b3da77b-9e83-49fe-a2de-d7e88f24cc7e-config\") pod \"ovsdbserver-sb-2\" (UID: \"8b3da77b-9e83-49fe-a2de-d7e88f24cc7e\") " pod="openstack/ovsdbserver-sb-2" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.321399 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/8b3da77b-9e83-49fe-a2de-d7e88f24cc7e-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"8b3da77b-9e83-49fe-a2de-d7e88f24cc7e\") " pod="openstack/ovsdbserver-sb-2" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.321543 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c283f69f-56de-48e3-a4f5-c1fc7f8497ec-config\") pod \"ovsdbserver-sb-1\" (UID: \"c283f69f-56de-48e3-a4f5-c1fc7f8497ec\") " pod="openstack/ovsdbserver-sb-1" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.323499 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c283f69f-56de-48e3-a4f5-c1fc7f8497ec-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"c283f69f-56de-48e3-a4f5-c1fc7f8497ec\") " pod="openstack/ovsdbserver-sb-1" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.324114 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b3da77b-9e83-49fe-a2de-d7e88f24cc7e-config\") pod \"ovsdbserver-sb-2\" (UID: \"8b3da77b-9e83-49fe-a2de-d7e88f24cc7e\") " pod="openstack/ovsdbserver-sb-2" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.324573 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c283f69f-56de-48e3-a4f5-c1fc7f8497ec-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"c283f69f-56de-48e3-a4f5-c1fc7f8497ec\") " pod="openstack/ovsdbserver-sb-1" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.324957 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8b3da77b-9e83-49fe-a2de-d7e88f24cc7e-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"8b3da77b-9e83-49fe-a2de-d7e88f24cc7e\") " pod="openstack/ovsdbserver-sb-2" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.325307 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8b3da77b-9e83-49fe-a2de-d7e88f24cc7e-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"8b3da77b-9e83-49fe-a2de-d7e88f24cc7e\") " pod="openstack/ovsdbserver-sb-2" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.326347 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c283f69f-56de-48e3-a4f5-c1fc7f8497ec-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"c283f69f-56de-48e3-a4f5-c1fc7f8497ec\") " pod="openstack/ovsdbserver-sb-1" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.327549 4946 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.327581 4946 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8fb5af98-4b4b-4277-93ed-6d9a52925187\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8fb5af98-4b4b-4277-93ed-6d9a52925187\") pod \"ovsdbserver-sb-2\" (UID: \"8b3da77b-9e83-49fe-a2de-d7e88f24cc7e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f750f57a717cffd535586b9c6f429553d53e5e629b3ad0242458851628b5658a/globalmount\"" pod="openstack/ovsdbserver-sb-2" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.327633 4946 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.327680 4946 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-122f1a6e-c584-4485-94a0-662ca7bd60bf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-122f1a6e-c584-4485-94a0-662ca7bd60bf\") pod \"ovsdbserver-sb-1\" (UID: \"c283f69f-56de-48e3-a4f5-c1fc7f8497ec\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/02389437b87644d6d32fb16090d25835e2f8d81852df801df8a093613841a571/globalmount\"" pod="openstack/ovsdbserver-sb-1" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.329657 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b3da77b-9e83-49fe-a2de-d7e88f24cc7e-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"8b3da77b-9e83-49fe-a2de-d7e88f24cc7e\") " pod="openstack/ovsdbserver-sb-2" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.340391 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqghk\" (UniqueName: \"kubernetes.io/projected/8b3da77b-9e83-49fe-a2de-d7e88f24cc7e-kube-api-access-nqghk\") pod \"ovsdbserver-sb-2\" (UID: \"8b3da77b-9e83-49fe-a2de-d7e88f24cc7e\") " pod="openstack/ovsdbserver-sb-2" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.348741 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gzgp\" (UniqueName: \"kubernetes.io/projected/c283f69f-56de-48e3-a4f5-c1fc7f8497ec-kube-api-access-9gzgp\") pod \"ovsdbserver-sb-1\" (UID: \"c283f69f-56de-48e3-a4f5-c1fc7f8497ec\") " pod="openstack/ovsdbserver-sb-1" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.371200 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.373057 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8fb5af98-4b4b-4277-93ed-6d9a52925187\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8fb5af98-4b4b-4277-93ed-6d9a52925187\") pod \"ovsdbserver-sb-2\" (UID: \"8b3da77b-9e83-49fe-a2de-d7e88f24cc7e\") " pod="openstack/ovsdbserver-sb-2" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.375687 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-122f1a6e-c584-4485-94a0-662ca7bd60bf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-122f1a6e-c584-4485-94a0-662ca7bd60bf\") pod \"ovsdbserver-sb-1\" (UID: \"c283f69f-56de-48e3-a4f5-c1fc7f8497ec\") " pod="openstack/ovsdbserver-sb-1" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.407941 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-2" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.432931 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.748256 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.853213 4946 scope.go:117] "RemoveContainer" containerID="a49d87de1b2c16571e3b70f8e9bede63c7bfb51b59480392d7780fb6fb294966" Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.859034 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Nov 28 08:45:35 crc kubenswrapper[4946]: W1128 08:45:35.862019 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10a36adb_4aa2_49a7_a5b2_8903d8089a26.slice/crio-74a02d8b276aef2f40955fa3166651c6099806618591bcb9bfd210b6c93c52a6 WatchSource:0}: Error finding container 74a02d8b276aef2f40955fa3166651c6099806618591bcb9bfd210b6c93c52a6: Status 404 returned error can't find the container with id 74a02d8b276aef2f40955fa3166651c6099806618591bcb9bfd210b6c93c52a6 Nov 28 08:45:35 crc kubenswrapper[4946]: I1128 08:45:35.976325 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 28 08:45:35 crc kubenswrapper[4946]: W1128 08:45:35.985209 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podccf5d58f_8529_453c_b840_d5839d294d38.slice/crio-71e3a2fa28a3c0329f6c09757458e952660241dcc55e403cdb25dc19b6b0f9c4 WatchSource:0}: Error finding container 71e3a2fa28a3c0329f6c09757458e952660241dcc55e403cdb25dc19b6b0f9c4: Status 404 returned error can't find the container with id 71e3a2fa28a3c0329f6c09757458e952660241dcc55e403cdb25dc19b6b0f9c4 Nov 28 08:45:36 crc kubenswrapper[4946]: W1128 08:45:36.074077 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc283f69f_56de_48e3_a4f5_c1fc7f8497ec.slice/crio-34f170f3af4a85e20af16fb061bcf63ca936882129b5dcee6516603ef4a29665 WatchSource:0}: Error finding container 34f170f3af4a85e20af16fb061bcf63ca936882129b5dcee6516603ef4a29665: Status 404 returned error can't find the container with id 34f170f3af4a85e20af16fb061bcf63ca936882129b5dcee6516603ef4a29665 Nov 28 08:45:36 crc kubenswrapper[4946]: I1128 08:45:36.076307 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Nov 28 08:45:36 crc kubenswrapper[4946]: I1128 08:45:36.162719 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"10a36adb-4aa2-49a7-a5b2-8903d8089a26","Type":"ContainerStarted","Data":"74a02d8b276aef2f40955fa3166651c6099806618591bcb9bfd210b6c93c52a6"} Nov 28 08:45:36 crc kubenswrapper[4946]: I1128 08:45:36.164316 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"ed69d76d-986f-4ef9-b0a2-920c71c2b72a","Type":"ContainerStarted","Data":"978e97c640736fb5a65bbe156b2435af7d2e99c324a357d7bd915ab239f1d2b2"} Nov 28 08:45:36 crc kubenswrapper[4946]: I1128 08:45:36.165412 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"ccf5d58f-8529-453c-b840-d5839d294d38","Type":"ContainerStarted","Data":"71e3a2fa28a3c0329f6c09757458e952660241dcc55e403cdb25dc19b6b0f9c4"} Nov 
28 08:45:36 crc kubenswrapper[4946]: I1128 08:45:36.167114 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"c283f69f-56de-48e3-a4f5-c1fc7f8497ec","Type":"ContainerStarted","Data":"34f170f3af4a85e20af16fb061bcf63ca936882129b5dcee6516603ef4a29665"} Nov 28 08:45:36 crc kubenswrapper[4946]: I1128 08:45:36.831098 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Nov 28 08:45:36 crc kubenswrapper[4946]: W1128 08:45:36.852657 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76686c34_5f17_4139_aed5_05658e66a812.slice/crio-87bba71b793e3c444b618340e71c9af7dfc6791ba859c16ee312939548b43cfa WatchSource:0}: Error finding container 87bba71b793e3c444b618340e71c9af7dfc6791ba859c16ee312939548b43cfa: Status 404 returned error can't find the container with id 87bba71b793e3c444b618340e71c9af7dfc6791ba859c16ee312939548b43cfa Nov 28 08:45:36 crc kubenswrapper[4946]: I1128 08:45:36.952228 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Nov 28 08:45:36 crc kubenswrapper[4946]: W1128 08:45:36.960723 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b3da77b_9e83_49fe_a2de_d7e88f24cc7e.slice/crio-c553fd21ef66f0c1c155c82844c45a067ed0c6c04ff05db4b3e49d421765085d WatchSource:0}: Error finding container c553fd21ef66f0c1c155c82844c45a067ed0c6c04ff05db4b3e49d421765085d: Status 404 returned error can't find the container with id c553fd21ef66f0c1c155c82844c45a067ed0c6c04ff05db4b3e49d421765085d Nov 28 08:45:37 crc kubenswrapper[4946]: I1128 08:45:37.177557 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"8b3da77b-9e83-49fe-a2de-d7e88f24cc7e","Type":"ContainerStarted","Data":"c553fd21ef66f0c1c155c82844c45a067ed0c6c04ff05db4b3e49d421765085d"} Nov 28 08:45:37 crc kubenswrapper[4946]: I1128 08:45:37.178587 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"76686c34-5f17-4139-aed5-05658e66a812","Type":"ContainerStarted","Data":"87bba71b793e3c444b618340e71c9af7dfc6791ba859c16ee312939548b43cfa"} Nov 28 08:45:41 crc kubenswrapper[4946]: I1128 08:45:41.237870 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"ed69d76d-986f-4ef9-b0a2-920c71c2b72a","Type":"ContainerStarted","Data":"130d3f7bac5645ebdd7a3181697f0d1be6e087d6a929454a987ae4f3712af734"} Nov 28 08:45:41 crc kubenswrapper[4946]: I1128 08:45:41.238539 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"ed69d76d-986f-4ef9-b0a2-920c71c2b72a","Type":"ContainerStarted","Data":"8edd849ef582516cab3f42eed23c05bd64a4f0de58692f54ea16748f48fcb45f"} Nov 28 08:45:41 crc kubenswrapper[4946]: I1128 08:45:41.240696 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"ccf5d58f-8529-453c-b840-d5839d294d38","Type":"ContainerStarted","Data":"bc4ab102d7a3f7d22f1b753dce14899c0c1f97983ddc35db2652d2139cf6cc36"} Nov 28 08:45:41 crc kubenswrapper[4946]: I1128 08:45:41.240724 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"ccf5d58f-8529-453c-b840-d5839d294d38","Type":"ContainerStarted","Data":"f369c5a02ada30f3b4e3d0c3da1e975b401def9fcefaeadfbd8ec12fe2686535"} Nov 28 08:45:41 crc kubenswrapper[4946]: I1128 
08:45:41.242213 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"8b3da77b-9e83-49fe-a2de-d7e88f24cc7e","Type":"ContainerStarted","Data":"d63c4dcea7edb20dcfcdd2d89daac640fb2f950c6bac1eac985a751d27eafdba"} Nov 28 08:45:41 crc kubenswrapper[4946]: I1128 08:45:41.242247 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"8b3da77b-9e83-49fe-a2de-d7e88f24cc7e","Type":"ContainerStarted","Data":"5d0f66f45f485ceba6bb7d81c408e804e5c514f53a739f79ed7751ab57913d56"} Nov 28 08:45:41 crc kubenswrapper[4946]: I1128 08:45:41.244488 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"76686c34-5f17-4139-aed5-05658e66a812","Type":"ContainerStarted","Data":"8c19c1e72c6d4efe7b0e0b066e49b76499078e55ff6d7c46973c1100708fb079"} Nov 28 08:45:41 crc kubenswrapper[4946]: I1128 08:45:41.245595 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"c283f69f-56de-48e3-a4f5-c1fc7f8497ec","Type":"ContainerStarted","Data":"82aba374eff5997840b7407235977f6becc05c4005b9cd8456be66bd8f3e64d9"} Nov 28 08:45:41 crc kubenswrapper[4946]: I1128 08:45:41.245620 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"c283f69f-56de-48e3-a4f5-c1fc7f8497ec","Type":"ContainerStarted","Data":"cde98af14c5917a91e805e9e5fc08823eae719ae900c2145501d4d0ea49f188b"} Nov 28 08:45:41 crc kubenswrapper[4946]: I1128 08:45:41.246898 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"10a36adb-4aa2-49a7-a5b2-8903d8089a26","Type":"ContainerStarted","Data":"479b34ed95f2e945a3081ae70334c0018442666f36cabae730609ade268e8c49"} Nov 28 08:45:41 crc kubenswrapper[4946]: I1128 08:45:41.246936 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"10a36adb-4aa2-49a7-a5b2-8903d8089a26","Type":"ContainerStarted","Data":"4be6266912ee2f2a76133db5cb918a824342fc0113503ff8a102eed15234eb92"} Nov 28 08:45:41 crc kubenswrapper[4946]: I1128 08:45:41.310454 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=4.060901811 podStartE2EDuration="8.31034916s" podCreationTimestamp="2025-11-28 08:45:33 +0000 UTC" firstStartedPulling="2025-11-28 08:45:35.76618006 +0000 UTC m=+6790.144245161" lastFinishedPulling="2025-11-28 08:45:40.015627369 +0000 UTC m=+6794.393692510" observedRunningTime="2025-11-28 08:45:41.30952613 +0000 UTC m=+6795.687591241" watchObservedRunningTime="2025-11-28 08:45:41.31034916 +0000 UTC m=+6795.688414271" Nov 28 08:45:41 crc kubenswrapper[4946]: I1128 08:45:41.349954 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-1" podStartSLOduration=4.321139429 podStartE2EDuration="8.34993817s" podCreationTimestamp="2025-11-28 08:45:33 +0000 UTC" firstStartedPulling="2025-11-28 08:45:36.85472642 +0000 UTC m=+6791.232791531" lastFinishedPulling="2025-11-28 08:45:40.883525171 +0000 UTC m=+6795.261590272" observedRunningTime="2025-11-28 08:45:41.349501789 +0000 UTC m=+6795.727566920" watchObservedRunningTime="2025-11-28 08:45:41.34993817 +0000 UTC m=+6795.728003281" Nov 28 08:45:41 crc kubenswrapper[4946]: I1128 08:45:41.350874 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-2" podStartSLOduration=4.283628051 podStartE2EDuration="7.350863563s" 
podCreationTimestamp="2025-11-28 08:45:34 +0000 UTC" firstStartedPulling="2025-11-28 08:45:36.963354967 +0000 UTC m=+6791.341420078" lastFinishedPulling="2025-11-28 08:45:40.030590449 +0000 UTC m=+6794.408655590" observedRunningTime="2025-11-28 08:45:41.33015284 +0000 UTC m=+6795.708217971" watchObservedRunningTime="2025-11-28 08:45:41.350863563 +0000 UTC m=+6795.728928674" Nov 28 08:45:41 crc kubenswrapper[4946]: I1128 08:45:41.367639 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-1" podStartSLOduration=3.195324676 podStartE2EDuration="7.367617567s" podCreationTimestamp="2025-11-28 08:45:34 +0000 UTC" firstStartedPulling="2025-11-28 08:45:36.076541638 +0000 UTC m=+6790.454606749" lastFinishedPulling="2025-11-28 08:45:40.248834529 +0000 UTC m=+6794.626899640" observedRunningTime="2025-11-28 08:45:41.365977576 +0000 UTC m=+6795.744042697" watchObservedRunningTime="2025-11-28 08:45:41.367617567 +0000 UTC m=+6795.745682688" Nov 28 08:45:41 crc kubenswrapper[4946]: I1128 08:45:41.371975 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Nov 28 08:45:41 crc kubenswrapper[4946]: I1128 08:45:41.391244 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-2" podStartSLOduration=4.24965236 podStartE2EDuration="8.391224941s" podCreationTimestamp="2025-11-28 08:45:33 +0000 UTC" firstStartedPulling="2025-11-28 08:45:35.874001717 +0000 UTC m=+6790.252066828" lastFinishedPulling="2025-11-28 08:45:40.015574298 +0000 UTC m=+6794.393639409" observedRunningTime="2025-11-28 08:45:41.382994677 +0000 UTC m=+6795.761059798" watchObservedRunningTime="2025-11-28 08:45:41.391224941 +0000 UTC m=+6795.769290052" Nov 28 08:45:41 crc kubenswrapper[4946]: I1128 08:45:41.408819 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-2" Nov 28 08:45:41 crc kubenswrapper[4946]: I1128 08:45:41.410379 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=3.3846137499999998 podStartE2EDuration="7.410362245s" podCreationTimestamp="2025-11-28 08:45:34 +0000 UTC" firstStartedPulling="2025-11-28 08:45:35.988376717 +0000 UTC m=+6790.366441828" lastFinishedPulling="2025-11-28 08:45:40.014125212 +0000 UTC m=+6794.392190323" observedRunningTime="2025-11-28 08:45:41.397350103 +0000 UTC m=+6795.775415224" watchObservedRunningTime="2025-11-28 08:45:41.410362245 +0000 UTC m=+6795.788427356" Nov 28 08:45:41 crc kubenswrapper[4946]: I1128 08:45:41.433967 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-1" Nov 28 08:45:42 crc kubenswrapper[4946]: I1128 08:45:42.261174 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"76686c34-5f17-4139-aed5-05658e66a812","Type":"ContainerStarted","Data":"0e8b1479beecb3a268ce191d7dcdeef4bfa0b0c5fe53c7c8c456bfb979cc81a6"} Nov 28 08:45:44 crc kubenswrapper[4946]: I1128 08:45:44.184721 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Nov 28 08:45:44 crc kubenswrapper[4946]: I1128 08:45:44.196545 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-1" Nov 28 08:45:44 crc kubenswrapper[4946]: I1128 08:45:44.215932 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/ovsdbserver-nb-2" Nov 28 08:45:44 crc kubenswrapper[4946]: I1128 08:45:44.232512 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Nov 28 08:45:44 crc kubenswrapper[4946]: I1128 08:45:44.240398 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-1" Nov 28 08:45:44 crc kubenswrapper[4946]: I1128 08:45:44.286447 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-1" Nov 28 08:45:44 crc kubenswrapper[4946]: I1128 08:45:44.286585 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Nov 28 08:45:44 crc kubenswrapper[4946]: I1128 08:45:44.288814 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-2" Nov 28 08:45:44 crc kubenswrapper[4946]: I1128 08:45:44.289881 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-2" Nov 28 08:45:44 crc kubenswrapper[4946]: I1128 08:45:44.415412 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Nov 28 08:45:44 crc kubenswrapper[4946]: I1128 08:45:44.416019 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Nov 28 08:45:44 crc kubenswrapper[4946]: I1128 08:45:44.456848 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-2" Nov 28 08:45:44 crc kubenswrapper[4946]: I1128 08:45:44.457407 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-2" Nov 28 08:45:44 crc kubenswrapper[4946]: I1128 08:45:44.481536 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-1" Nov 28 08:45:44 crc kubenswrapper[4946]: I1128 08:45:44.482096 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-1" Nov 28 08:45:44 crc kubenswrapper[4946]: I1128 08:45:44.990874 4946 scope.go:117] "RemoveContainer" containerID="350d49558ff5edf402a167963260dd78bc11ac007c0bdaee3d45f6814cfb23bd" Nov 28 08:45:44 crc kubenswrapper[4946]: E1128 08:45:44.991296 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:45:45 crc kubenswrapper[4946]: I1128 08:45:45.249297 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Nov 28 08:45:45 crc kubenswrapper[4946]: I1128 08:45:45.298007 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-2" Nov 28 08:45:45 crc kubenswrapper[4946]: I1128 08:45:45.362128 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Nov 28 08:45:45 crc kubenswrapper[4946]: I1128 08:45:45.364760 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-2" Nov 28 08:45:45 crc kubenswrapper[4946]: I1128 08:45:45.383329 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/ovsdbserver-sb-1" Nov 28 08:45:45 crc kubenswrapper[4946]: I1128 08:45:45.537211 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84c9dd9f6f-t6ccn"] Nov 28 08:45:45 crc kubenswrapper[4946]: I1128 08:45:45.538833 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84c9dd9f6f-t6ccn" Nov 28 08:45:45 crc kubenswrapper[4946]: I1128 08:45:45.540367 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Nov 28 08:45:45 crc kubenswrapper[4946]: I1128 08:45:45.565993 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84c9dd9f6f-t6ccn"] Nov 28 08:45:45 crc kubenswrapper[4946]: I1128 08:45:45.609081 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b2354c9-6059-4c73-8e60-7d843422235b-config\") pod \"dnsmasq-dns-84c9dd9f6f-t6ccn\" (UID: \"5b2354c9-6059-4c73-8e60-7d843422235b\") " pod="openstack/dnsmasq-dns-84c9dd9f6f-t6ccn" Nov 28 08:45:45 crc kubenswrapper[4946]: I1128 08:45:45.609115 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b2354c9-6059-4c73-8e60-7d843422235b-dns-svc\") pod \"dnsmasq-dns-84c9dd9f6f-t6ccn\" (UID: \"5b2354c9-6059-4c73-8e60-7d843422235b\") " pod="openstack/dnsmasq-dns-84c9dd9f6f-t6ccn" Nov 28 08:45:45 crc kubenswrapper[4946]: I1128 08:45:45.609140 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5b2354c9-6059-4c73-8e60-7d843422235b-ovsdbserver-nb\") pod \"dnsmasq-dns-84c9dd9f6f-t6ccn\" (UID: \"5b2354c9-6059-4c73-8e60-7d843422235b\") " pod="openstack/dnsmasq-dns-84c9dd9f6f-t6ccn" Nov 28 08:45:45 crc kubenswrapper[4946]: I1128 08:45:45.609162 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rggs\" (UniqueName: \"kubernetes.io/projected/5b2354c9-6059-4c73-8e60-7d843422235b-kube-api-access-2rggs\") pod \"dnsmasq-dns-84c9dd9f6f-t6ccn\" (UID: \"5b2354c9-6059-4c73-8e60-7d843422235b\") " pod="openstack/dnsmasq-dns-84c9dd9f6f-t6ccn" Nov 28 08:45:45 crc kubenswrapper[4946]: I1128 08:45:45.710889 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b2354c9-6059-4c73-8e60-7d843422235b-config\") pod \"dnsmasq-dns-84c9dd9f6f-t6ccn\" (UID: \"5b2354c9-6059-4c73-8e60-7d843422235b\") " pod="openstack/dnsmasq-dns-84c9dd9f6f-t6ccn" Nov 28 08:45:45 crc kubenswrapper[4946]: I1128 08:45:45.710920 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b2354c9-6059-4c73-8e60-7d843422235b-dns-svc\") pod \"dnsmasq-dns-84c9dd9f6f-t6ccn\" (UID: \"5b2354c9-6059-4c73-8e60-7d843422235b\") " pod="openstack/dnsmasq-dns-84c9dd9f6f-t6ccn" Nov 28 08:45:45 crc kubenswrapper[4946]: I1128 08:45:45.710942 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5b2354c9-6059-4c73-8e60-7d843422235b-ovsdbserver-nb\") pod \"dnsmasq-dns-84c9dd9f6f-t6ccn\" (UID: \"5b2354c9-6059-4c73-8e60-7d843422235b\") " pod="openstack/dnsmasq-dns-84c9dd9f6f-t6ccn" Nov 28 08:45:45 crc kubenswrapper[4946]: I1128 08:45:45.710962 4946 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-2rggs\" (UniqueName: \"kubernetes.io/projected/5b2354c9-6059-4c73-8e60-7d843422235b-kube-api-access-2rggs\") pod \"dnsmasq-dns-84c9dd9f6f-t6ccn\" (UID: \"5b2354c9-6059-4c73-8e60-7d843422235b\") " pod="openstack/dnsmasq-dns-84c9dd9f6f-t6ccn" Nov 28 08:45:45 crc kubenswrapper[4946]: I1128 08:45:45.712015 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b2354c9-6059-4c73-8e60-7d843422235b-config\") pod \"dnsmasq-dns-84c9dd9f6f-t6ccn\" (UID: \"5b2354c9-6059-4c73-8e60-7d843422235b\") " pod="openstack/dnsmasq-dns-84c9dd9f6f-t6ccn" Nov 28 08:45:45 crc kubenswrapper[4946]: I1128 08:45:45.712530 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b2354c9-6059-4c73-8e60-7d843422235b-dns-svc\") pod \"dnsmasq-dns-84c9dd9f6f-t6ccn\" (UID: \"5b2354c9-6059-4c73-8e60-7d843422235b\") " pod="openstack/dnsmasq-dns-84c9dd9f6f-t6ccn" Nov 28 08:45:45 crc kubenswrapper[4946]: I1128 08:45:45.713025 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5b2354c9-6059-4c73-8e60-7d843422235b-ovsdbserver-nb\") pod \"dnsmasq-dns-84c9dd9f6f-t6ccn\" (UID: \"5b2354c9-6059-4c73-8e60-7d843422235b\") " pod="openstack/dnsmasq-dns-84c9dd9f6f-t6ccn" Nov 28 08:45:45 crc kubenswrapper[4946]: I1128 08:45:45.739205 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rggs\" (UniqueName: \"kubernetes.io/projected/5b2354c9-6059-4c73-8e60-7d843422235b-kube-api-access-2rggs\") pod \"dnsmasq-dns-84c9dd9f6f-t6ccn\" (UID: \"5b2354c9-6059-4c73-8e60-7d843422235b\") " pod="openstack/dnsmasq-dns-84c9dd9f6f-t6ccn" Nov 28 08:45:45 crc kubenswrapper[4946]: I1128 08:45:45.797160 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84c9dd9f6f-t6ccn"] Nov 28 08:45:45 crc kubenswrapper[4946]: I1128 08:45:45.797708 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84c9dd9f6f-t6ccn" Nov 28 08:45:45 crc kubenswrapper[4946]: I1128 08:45:45.830075 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77bb748d8c-bj4w4"] Nov 28 08:45:45 crc kubenswrapper[4946]: I1128 08:45:45.832021 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77bb748d8c-bj4w4" Nov 28 08:45:45 crc kubenswrapper[4946]: I1128 08:45:45.834129 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 28 08:45:45 crc kubenswrapper[4946]: I1128 08:45:45.853602 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77bb748d8c-bj4w4"] Nov 28 08:45:45 crc kubenswrapper[4946]: I1128 08:45:45.914770 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72d2a0bf-5f6b-4074-b171-3b2b6500cf54-config\") pod \"dnsmasq-dns-77bb748d8c-bj4w4\" (UID: \"72d2a0bf-5f6b-4074-b171-3b2b6500cf54\") " pod="openstack/dnsmasq-dns-77bb748d8c-bj4w4" Nov 28 08:45:45 crc kubenswrapper[4946]: I1128 08:45:45.914826 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xm6h\" (UniqueName: \"kubernetes.io/projected/72d2a0bf-5f6b-4074-b171-3b2b6500cf54-kube-api-access-4xm6h\") pod \"dnsmasq-dns-77bb748d8c-bj4w4\" (UID: \"72d2a0bf-5f6b-4074-b171-3b2b6500cf54\") " pod="openstack/dnsmasq-dns-77bb748d8c-bj4w4" Nov 28 08:45:45 crc kubenswrapper[4946]: I1128 08:45:45.914860 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/72d2a0bf-5f6b-4074-b171-3b2b6500cf54-ovsdbserver-sb\") pod \"dnsmasq-dns-77bb748d8c-bj4w4\" (UID: \"72d2a0bf-5f6b-4074-b171-3b2b6500cf54\") " pod="openstack/dnsmasq-dns-77bb748d8c-bj4w4" Nov 28 08:45:45 crc kubenswrapper[4946]: I1128 08:45:45.914908 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/72d2a0bf-5f6b-4074-b171-3b2b6500cf54-dns-svc\") pod \"dnsmasq-dns-77bb748d8c-bj4w4\" (UID: \"72d2a0bf-5f6b-4074-b171-3b2b6500cf54\") " pod="openstack/dnsmasq-dns-77bb748d8c-bj4w4" Nov 28 08:45:45 crc kubenswrapper[4946]: I1128 08:45:45.914923 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/72d2a0bf-5f6b-4074-b171-3b2b6500cf54-ovsdbserver-nb\") pod \"dnsmasq-dns-77bb748d8c-bj4w4\" (UID: \"72d2a0bf-5f6b-4074-b171-3b2b6500cf54\") " pod="openstack/dnsmasq-dns-77bb748d8c-bj4w4" Nov 28 08:45:46 crc kubenswrapper[4946]: I1128 08:45:46.017403 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72d2a0bf-5f6b-4074-b171-3b2b6500cf54-config\") pod \"dnsmasq-dns-77bb748d8c-bj4w4\" (UID: \"72d2a0bf-5f6b-4074-b171-3b2b6500cf54\") " pod="openstack/dnsmasq-dns-77bb748d8c-bj4w4" Nov 28 08:45:46 crc kubenswrapper[4946]: I1128 08:45:46.017761 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xm6h\" (UniqueName: \"kubernetes.io/projected/72d2a0bf-5f6b-4074-b171-3b2b6500cf54-kube-api-access-4xm6h\") pod \"dnsmasq-dns-77bb748d8c-bj4w4\" (UID: \"72d2a0bf-5f6b-4074-b171-3b2b6500cf54\") " pod="openstack/dnsmasq-dns-77bb748d8c-bj4w4" Nov 28 08:45:46 crc kubenswrapper[4946]: I1128 08:45:46.017808 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/72d2a0bf-5f6b-4074-b171-3b2b6500cf54-ovsdbserver-sb\") pod \"dnsmasq-dns-77bb748d8c-bj4w4\" (UID: \"72d2a0bf-5f6b-4074-b171-3b2b6500cf54\") " 
pod="openstack/dnsmasq-dns-77bb748d8c-bj4w4" Nov 28 08:45:46 crc kubenswrapper[4946]: I1128 08:45:46.017870 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/72d2a0bf-5f6b-4074-b171-3b2b6500cf54-dns-svc\") pod \"dnsmasq-dns-77bb748d8c-bj4w4\" (UID: \"72d2a0bf-5f6b-4074-b171-3b2b6500cf54\") " pod="openstack/dnsmasq-dns-77bb748d8c-bj4w4" Nov 28 08:45:46 crc kubenswrapper[4946]: I1128 08:45:46.017903 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/72d2a0bf-5f6b-4074-b171-3b2b6500cf54-ovsdbserver-nb\") pod \"dnsmasq-dns-77bb748d8c-bj4w4\" (UID: \"72d2a0bf-5f6b-4074-b171-3b2b6500cf54\") " pod="openstack/dnsmasq-dns-77bb748d8c-bj4w4" Nov 28 08:45:46 crc kubenswrapper[4946]: I1128 08:45:46.019067 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/72d2a0bf-5f6b-4074-b171-3b2b6500cf54-ovsdbserver-nb\") pod \"dnsmasq-dns-77bb748d8c-bj4w4\" (UID: \"72d2a0bf-5f6b-4074-b171-3b2b6500cf54\") " pod="openstack/dnsmasq-dns-77bb748d8c-bj4w4" Nov 28 08:45:46 crc kubenswrapper[4946]: I1128 08:45:46.019797 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72d2a0bf-5f6b-4074-b171-3b2b6500cf54-config\") pod \"dnsmasq-dns-77bb748d8c-bj4w4\" (UID: \"72d2a0bf-5f6b-4074-b171-3b2b6500cf54\") " pod="openstack/dnsmasq-dns-77bb748d8c-bj4w4" Nov 28 08:45:46 crc kubenswrapper[4946]: I1128 08:45:46.021168 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/72d2a0bf-5f6b-4074-b171-3b2b6500cf54-ovsdbserver-sb\") pod \"dnsmasq-dns-77bb748d8c-bj4w4\" (UID: \"72d2a0bf-5f6b-4074-b171-3b2b6500cf54\") " pod="openstack/dnsmasq-dns-77bb748d8c-bj4w4" Nov 28 08:45:46 crc kubenswrapper[4946]: I1128 08:45:46.022148 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/72d2a0bf-5f6b-4074-b171-3b2b6500cf54-dns-svc\") pod \"dnsmasq-dns-77bb748d8c-bj4w4\" (UID: \"72d2a0bf-5f6b-4074-b171-3b2b6500cf54\") " pod="openstack/dnsmasq-dns-77bb748d8c-bj4w4" Nov 28 08:45:46 crc kubenswrapper[4946]: I1128 08:45:46.049209 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xm6h\" (UniqueName: \"kubernetes.io/projected/72d2a0bf-5f6b-4074-b171-3b2b6500cf54-kube-api-access-4xm6h\") pod \"dnsmasq-dns-77bb748d8c-bj4w4\" (UID: \"72d2a0bf-5f6b-4074-b171-3b2b6500cf54\") " pod="openstack/dnsmasq-dns-77bb748d8c-bj4w4" Nov 28 08:45:46 crc kubenswrapper[4946]: I1128 08:45:46.195831 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77bb748d8c-bj4w4" Nov 28 08:45:46 crc kubenswrapper[4946]: I1128 08:45:46.278242 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84c9dd9f6f-t6ccn"] Nov 28 08:45:46 crc kubenswrapper[4946]: I1128 08:45:46.315956 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84c9dd9f6f-t6ccn" event={"ID":"5b2354c9-6059-4c73-8e60-7d843422235b","Type":"ContainerStarted","Data":"ab6c9d72eb732ac9ca355c3f5a8cef5ac07303ea5ca7bd4ecd2ce71f25e63b29"} Nov 28 08:45:46 crc kubenswrapper[4946]: I1128 08:45:46.365120 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-1" Nov 28 08:45:46 crc kubenswrapper[4946]: I1128 08:45:46.612257 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77bb748d8c-bj4w4"] Nov 28 08:45:46 crc kubenswrapper[4946]: W1128 08:45:46.617555 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72d2a0bf_5f6b_4074_b171_3b2b6500cf54.slice/crio-fab6530122a07ac7b30d635b0214699eb4104974c52cfc004b873767f5bf999f WatchSource:0}: Error finding container fab6530122a07ac7b30d635b0214699eb4104974c52cfc004b873767f5bf999f: Status 404 returned error can't find the container with id fab6530122a07ac7b30d635b0214699eb4104974c52cfc004b873767f5bf999f Nov 28 08:45:47 crc kubenswrapper[4946]: I1128 08:45:47.332853 4946 generic.go:334] "Generic (PLEG): container finished" podID="72d2a0bf-5f6b-4074-b171-3b2b6500cf54" containerID="c7218b3750dbf59b3713bc47d1d4e0dc755a055dc807afcea2468472046a2e32" exitCode=0 Nov 28 08:45:47 crc kubenswrapper[4946]: I1128 08:45:47.332972 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77bb748d8c-bj4w4" event={"ID":"72d2a0bf-5f6b-4074-b171-3b2b6500cf54","Type":"ContainerDied","Data":"c7218b3750dbf59b3713bc47d1d4e0dc755a055dc807afcea2468472046a2e32"} Nov 28 08:45:47 crc kubenswrapper[4946]: I1128 08:45:47.333012 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77bb748d8c-bj4w4" event={"ID":"72d2a0bf-5f6b-4074-b171-3b2b6500cf54","Type":"ContainerStarted","Data":"fab6530122a07ac7b30d635b0214699eb4104974c52cfc004b873767f5bf999f"} Nov 28 08:45:47 crc kubenswrapper[4946]: I1128 08:45:47.339784 4946 generic.go:334] "Generic (PLEG): container finished" podID="5b2354c9-6059-4c73-8e60-7d843422235b" containerID="05256c44eac982606bed4b2debff270cccd2d2314ae163472ce0d7e03f7e7358" exitCode=0 Nov 28 08:45:47 crc kubenswrapper[4946]: I1128 08:45:47.340453 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84c9dd9f6f-t6ccn" event={"ID":"5b2354c9-6059-4c73-8e60-7d843422235b","Type":"ContainerDied","Data":"05256c44eac982606bed4b2debff270cccd2d2314ae163472ce0d7e03f7e7358"} Nov 28 08:45:47 crc kubenswrapper[4946]: I1128 08:45:47.681982 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84c9dd9f6f-t6ccn" Nov 28 08:45:47 crc kubenswrapper[4946]: I1128 08:45:47.877443 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b2354c9-6059-4c73-8e60-7d843422235b-config\") pod \"5b2354c9-6059-4c73-8e60-7d843422235b\" (UID: \"5b2354c9-6059-4c73-8e60-7d843422235b\") " Nov 28 08:45:47 crc kubenswrapper[4946]: I1128 08:45:47.877581 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5b2354c9-6059-4c73-8e60-7d843422235b-ovsdbserver-nb\") pod \"5b2354c9-6059-4c73-8e60-7d843422235b\" (UID: \"5b2354c9-6059-4c73-8e60-7d843422235b\") " Nov 28 08:45:47 crc kubenswrapper[4946]: I1128 08:45:47.877691 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rggs\" (UniqueName: \"kubernetes.io/projected/5b2354c9-6059-4c73-8e60-7d843422235b-kube-api-access-2rggs\") pod \"5b2354c9-6059-4c73-8e60-7d843422235b\" (UID: \"5b2354c9-6059-4c73-8e60-7d843422235b\") " Nov 28 08:45:47 crc kubenswrapper[4946]: I1128 08:45:47.877730 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b2354c9-6059-4c73-8e60-7d843422235b-dns-svc\") pod \"5b2354c9-6059-4c73-8e60-7d843422235b\" (UID: \"5b2354c9-6059-4c73-8e60-7d843422235b\") " Nov 28 08:45:47 crc kubenswrapper[4946]: I1128 08:45:47.883335 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b2354c9-6059-4c73-8e60-7d843422235b-kube-api-access-2rggs" (OuterVolumeSpecName: "kube-api-access-2rggs") pod "5b2354c9-6059-4c73-8e60-7d843422235b" (UID: "5b2354c9-6059-4c73-8e60-7d843422235b"). InnerVolumeSpecName "kube-api-access-2rggs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:45:47 crc kubenswrapper[4946]: I1128 08:45:47.899290 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b2354c9-6059-4c73-8e60-7d843422235b-config" (OuterVolumeSpecName: "config") pod "5b2354c9-6059-4c73-8e60-7d843422235b" (UID: "5b2354c9-6059-4c73-8e60-7d843422235b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:45:47 crc kubenswrapper[4946]: I1128 08:45:47.915396 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b2354c9-6059-4c73-8e60-7d843422235b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5b2354c9-6059-4c73-8e60-7d843422235b" (UID: "5b2354c9-6059-4c73-8e60-7d843422235b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:45:47 crc kubenswrapper[4946]: I1128 08:45:47.932141 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b2354c9-6059-4c73-8e60-7d843422235b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5b2354c9-6059-4c73-8e60-7d843422235b" (UID: "5b2354c9-6059-4c73-8e60-7d843422235b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:45:47 crc kubenswrapper[4946]: I1128 08:45:47.979576 4946 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b2354c9-6059-4c73-8e60-7d843422235b-config\") on node \"crc\" DevicePath \"\"" Nov 28 08:45:47 crc kubenswrapper[4946]: I1128 08:45:47.979617 4946 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5b2354c9-6059-4c73-8e60-7d843422235b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 08:45:47 crc kubenswrapper[4946]: I1128 08:45:47.979633 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rggs\" (UniqueName: \"kubernetes.io/projected/5b2354c9-6059-4c73-8e60-7d843422235b-kube-api-access-2rggs\") on node \"crc\" DevicePath \"\"" Nov 28 08:45:47 crc kubenswrapper[4946]: I1128 08:45:47.979644 4946 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b2354c9-6059-4c73-8e60-7d843422235b-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 08:45:48 crc kubenswrapper[4946]: I1128 08:45:48.351569 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77bb748d8c-bj4w4" event={"ID":"72d2a0bf-5f6b-4074-b171-3b2b6500cf54","Type":"ContainerStarted","Data":"5d520c379b358058b305fbe69b4a7400d5a9022724ffd538389312fc2f9cc58f"} Nov 28 08:45:48 crc kubenswrapper[4946]: I1128 08:45:48.352085 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77bb748d8c-bj4w4" Nov 28 08:45:48 crc kubenswrapper[4946]: I1128 08:45:48.353749 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84c9dd9f6f-t6ccn" event={"ID":"5b2354c9-6059-4c73-8e60-7d843422235b","Type":"ContainerDied","Data":"ab6c9d72eb732ac9ca355c3f5a8cef5ac07303ea5ca7bd4ecd2ce71f25e63b29"} Nov 28 08:45:48 crc kubenswrapper[4946]: I1128 08:45:48.353822 4946 scope.go:117] "RemoveContainer" containerID="05256c44eac982606bed4b2debff270cccd2d2314ae163472ce0d7e03f7e7358" Nov 28 08:45:48 crc kubenswrapper[4946]: I1128 08:45:48.353856 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84c9dd9f6f-t6ccn" Nov 28 08:45:48 crc kubenswrapper[4946]: I1128 08:45:48.400383 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77bb748d8c-bj4w4" podStartSLOduration=3.400356554 podStartE2EDuration="3.400356554s" podCreationTimestamp="2025-11-28 08:45:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 08:45:48.38122283 +0000 UTC m=+6802.759288011" watchObservedRunningTime="2025-11-28 08:45:48.400356554 +0000 UTC m=+6802.778421695" Nov 28 08:45:48 crc kubenswrapper[4946]: I1128 08:45:48.448014 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84c9dd9f6f-t6ccn"] Nov 28 08:45:48 crc kubenswrapper[4946]: I1128 08:45:48.457404 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84c9dd9f6f-t6ccn"] Nov 28 08:45:49 crc kubenswrapper[4946]: I1128 08:45:49.643836 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-copy-data"] Nov 28 08:45:49 crc kubenswrapper[4946]: E1128 08:45:49.644388 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b2354c9-6059-4c73-8e60-7d843422235b" containerName="init" Nov 28 08:45:49 crc kubenswrapper[4946]: I1128 08:45:49.644417 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b2354c9-6059-4c73-8e60-7d843422235b" containerName="init" Nov 28 08:45:49 crc kubenswrapper[4946]: I1128 08:45:49.644822 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b2354c9-6059-4c73-8e60-7d843422235b" containerName="init" Nov 28 08:45:49 crc kubenswrapper[4946]: I1128 08:45:49.645818 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Nov 28 08:45:49 crc kubenswrapper[4946]: I1128 08:45:49.650561 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovn-data-cert" Nov 28 08:45:49 crc kubenswrapper[4946]: I1128 08:45:49.664783 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Nov 28 08:45:49 crc kubenswrapper[4946]: I1128 08:45:49.810387 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/bb6ec39d-f695-4672-97ab-1abee5a16d04-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"bb6ec39d-f695-4672-97ab-1abee5a16d04\") " pod="openstack/ovn-copy-data" Nov 28 08:45:49 crc kubenswrapper[4946]: I1128 08:45:49.810519 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbf5q\" (UniqueName: \"kubernetes.io/projected/bb6ec39d-f695-4672-97ab-1abee5a16d04-kube-api-access-bbf5q\") pod \"ovn-copy-data\" (UID: \"bb6ec39d-f695-4672-97ab-1abee5a16d04\") " pod="openstack/ovn-copy-data" Nov 28 08:45:49 crc kubenswrapper[4946]: I1128 08:45:49.810564 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-56d395d5-8cb9-4311-ba02-1f17954cb811\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56d395d5-8cb9-4311-ba02-1f17954cb811\") pod \"ovn-copy-data\" (UID: \"bb6ec39d-f695-4672-97ab-1abee5a16d04\") " pod="openstack/ovn-copy-data" Nov 28 08:45:49 crc kubenswrapper[4946]: I1128 08:45:49.912551 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbf5q\" (UniqueName: \"kubernetes.io/projected/bb6ec39d-f695-4672-97ab-1abee5a16d04-kube-api-access-bbf5q\") pod \"ovn-copy-data\" (UID: \"bb6ec39d-f695-4672-97ab-1abee5a16d04\") " pod="openstack/ovn-copy-data" Nov 28 08:45:49 crc kubenswrapper[4946]: I1128 08:45:49.912637 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-56d395d5-8cb9-4311-ba02-1f17954cb811\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56d395d5-8cb9-4311-ba02-1f17954cb811\") pod \"ovn-copy-data\" (UID: \"bb6ec39d-f695-4672-97ab-1abee5a16d04\") " pod="openstack/ovn-copy-data" Nov 28 08:45:49 crc kubenswrapper[4946]: I1128 08:45:49.912803 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/bb6ec39d-f695-4672-97ab-1abee5a16d04-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"bb6ec39d-f695-4672-97ab-1abee5a16d04\") " pod="openstack/ovn-copy-data" Nov 28 08:45:49 crc kubenswrapper[4946]: I1128 08:45:49.918667 4946 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 28 08:45:49 crc kubenswrapper[4946]: I1128 08:45:49.918731 4946 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-56d395d5-8cb9-4311-ba02-1f17954cb811\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56d395d5-8cb9-4311-ba02-1f17954cb811\") pod \"ovn-copy-data\" (UID: \"bb6ec39d-f695-4672-97ab-1abee5a16d04\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e3cb2a3c3a49b633060b2f79e691e4a5a0a62ac371370ffb165a732af42a8ed0/globalmount\"" pod="openstack/ovn-copy-data" Nov 28 08:45:49 crc kubenswrapper[4946]: I1128 08:45:49.921787 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/bb6ec39d-f695-4672-97ab-1abee5a16d04-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"bb6ec39d-f695-4672-97ab-1abee5a16d04\") " pod="openstack/ovn-copy-data" Nov 28 08:45:49 crc kubenswrapper[4946]: I1128 08:45:49.932232 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbf5q\" (UniqueName: \"kubernetes.io/projected/bb6ec39d-f695-4672-97ab-1abee5a16d04-kube-api-access-bbf5q\") pod \"ovn-copy-data\" (UID: \"bb6ec39d-f695-4672-97ab-1abee5a16d04\") " pod="openstack/ovn-copy-data" Nov 28 08:45:49 crc kubenswrapper[4946]: I1128 08:45:49.966171 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-56d395d5-8cb9-4311-ba02-1f17954cb811\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56d395d5-8cb9-4311-ba02-1f17954cb811\") pod \"ovn-copy-data\" (UID: \"bb6ec39d-f695-4672-97ab-1abee5a16d04\") " pod="openstack/ovn-copy-data" Nov 28 08:45:49 crc kubenswrapper[4946]: I1128 08:45:49.978193 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Nov 28 08:45:50 crc kubenswrapper[4946]: I1128 08:45:50.008278 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b2354c9-6059-4c73-8e60-7d843422235b" path="/var/lib/kubelet/pods/5b2354c9-6059-4c73-8e60-7d843422235b/volumes" Nov 28 08:45:50 crc kubenswrapper[4946]: I1128 08:45:50.456205 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Nov 28 08:45:50 crc kubenswrapper[4946]: W1128 08:45:50.472731 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb6ec39d_f695_4672_97ab_1abee5a16d04.slice/crio-f6c57f2facb95861a040de435da0d11d52e0c33d572712af56f615c61474d54a WatchSource:0}: Error finding container f6c57f2facb95861a040de435da0d11d52e0c33d572712af56f615c61474d54a: Status 404 returned error can't find the container with id f6c57f2facb95861a040de435da0d11d52e0c33d572712af56f615c61474d54a Nov 28 08:45:51 crc kubenswrapper[4946]: I1128 08:45:51.400957 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"bb6ec39d-f695-4672-97ab-1abee5a16d04","Type":"ContainerStarted","Data":"f6c57f2facb95861a040de435da0d11d52e0c33d572712af56f615c61474d54a"} Nov 28 08:45:52 crc kubenswrapper[4946]: I1128 08:45:52.412199 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"bb6ec39d-f695-4672-97ab-1abee5a16d04","Type":"ContainerStarted","Data":"9fa555db011d896b07aaffd467b20fb27b12b1a8643760a3bc263d025a1c69cd"} Nov 28 08:45:52 crc kubenswrapper[4946]: I1128 08:45:52.433240 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-copy-data" podStartSLOduration=3.676380151 podStartE2EDuration="4.433206034s" podCreationTimestamp="2025-11-28 08:45:48 +0000 UTC" firstStartedPulling="2025-11-28 08:45:50.474011475 +0000 UTC m=+6804.852076586" lastFinishedPulling="2025-11-28 08:45:51.230837348 +0000 UTC m=+6805.608902469" observedRunningTime="2025-11-28 08:45:52.432060036 +0000 UTC m=+6806.810125187" watchObservedRunningTime="2025-11-28 08:45:52.433206034 +0000 UTC m=+6806.811271185" Nov 28 08:45:56 crc kubenswrapper[4946]: I1128 08:45:56.201672 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-77bb748d8c-bj4w4" Nov 28 08:45:56 crc kubenswrapper[4946]: I1128 08:45:56.538219 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5dfd7bdf7-rpvww"] Nov 28 08:45:56 crc kubenswrapper[4946]: I1128 08:45:56.538493 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5dfd7bdf7-rpvww" podUID="75b5d08b-c397-4583-9f57-ec8dbed322b5" containerName="dnsmasq-dns" containerID="cri-o://9eba2fca6051a5e6114d24268ba5bc6d5b0e2fc3091a75114ad0f1f126ef8281" gracePeriod=10 Nov 28 08:45:57 crc kubenswrapper[4946]: I1128 08:45:57.465188 4946 generic.go:334] "Generic (PLEG): container finished" podID="75b5d08b-c397-4583-9f57-ec8dbed322b5" containerID="9eba2fca6051a5e6114d24268ba5bc6d5b0e2fc3091a75114ad0f1f126ef8281" exitCode=0 Nov 28 08:45:57 crc kubenswrapper[4946]: I1128 08:45:57.465268 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dfd7bdf7-rpvww" event={"ID":"75b5d08b-c397-4583-9f57-ec8dbed322b5","Type":"ContainerDied","Data":"9eba2fca6051a5e6114d24268ba5bc6d5b0e2fc3091a75114ad0f1f126ef8281"} Nov 28 08:45:57 crc kubenswrapper[4946]: I1128 08:45:57.618371 4946 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dfd7bdf7-rpvww" Nov 28 08:45:57 crc kubenswrapper[4946]: I1128 08:45:57.652083 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6l26\" (UniqueName: \"kubernetes.io/projected/75b5d08b-c397-4583-9f57-ec8dbed322b5-kube-api-access-t6l26\") pod \"75b5d08b-c397-4583-9f57-ec8dbed322b5\" (UID: \"75b5d08b-c397-4583-9f57-ec8dbed322b5\") " Nov 28 08:45:57 crc kubenswrapper[4946]: I1128 08:45:57.653407 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75b5d08b-c397-4583-9f57-ec8dbed322b5-config\") pod \"75b5d08b-c397-4583-9f57-ec8dbed322b5\" (UID: \"75b5d08b-c397-4583-9f57-ec8dbed322b5\") " Nov 28 08:45:57 crc kubenswrapper[4946]: I1128 08:45:57.653647 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75b5d08b-c397-4583-9f57-ec8dbed322b5-dns-svc\") pod \"75b5d08b-c397-4583-9f57-ec8dbed322b5\" (UID: \"75b5d08b-c397-4583-9f57-ec8dbed322b5\") " Nov 28 08:45:57 crc kubenswrapper[4946]: I1128 08:45:57.663529 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75b5d08b-c397-4583-9f57-ec8dbed322b5-kube-api-access-t6l26" (OuterVolumeSpecName: "kube-api-access-t6l26") pod "75b5d08b-c397-4583-9f57-ec8dbed322b5" (UID: "75b5d08b-c397-4583-9f57-ec8dbed322b5"). InnerVolumeSpecName "kube-api-access-t6l26". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:45:57 crc kubenswrapper[4946]: I1128 08:45:57.711757 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75b5d08b-c397-4583-9f57-ec8dbed322b5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "75b5d08b-c397-4583-9f57-ec8dbed322b5" (UID: "75b5d08b-c397-4583-9f57-ec8dbed322b5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:45:57 crc kubenswrapper[4946]: I1128 08:45:57.717236 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75b5d08b-c397-4583-9f57-ec8dbed322b5-config" (OuterVolumeSpecName: "config") pod "75b5d08b-c397-4583-9f57-ec8dbed322b5" (UID: "75b5d08b-c397-4583-9f57-ec8dbed322b5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:45:57 crc kubenswrapper[4946]: I1128 08:45:57.757529 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6l26\" (UniqueName: \"kubernetes.io/projected/75b5d08b-c397-4583-9f57-ec8dbed322b5-kube-api-access-t6l26\") on node \"crc\" DevicePath \"\"" Nov 28 08:45:57 crc kubenswrapper[4946]: I1128 08:45:57.757618 4946 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75b5d08b-c397-4583-9f57-ec8dbed322b5-config\") on node \"crc\" DevicePath \"\"" Nov 28 08:45:57 crc kubenswrapper[4946]: I1128 08:45:57.757631 4946 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75b5d08b-c397-4583-9f57-ec8dbed322b5-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 08:45:58 crc kubenswrapper[4946]: I1128 08:45:58.482622 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dfd7bdf7-rpvww" event={"ID":"75b5d08b-c397-4583-9f57-ec8dbed322b5","Type":"ContainerDied","Data":"beb816e0561bc0fbec56d5bd651f07a8fc5a963ee972adcf4e0837c5dcbc8a1e"} Nov 28 08:45:58 crc kubenswrapper[4946]: I1128 08:45:58.482717 4946 scope.go:117] "RemoveContainer" containerID="9eba2fca6051a5e6114d24268ba5bc6d5b0e2fc3091a75114ad0f1f126ef8281" Nov 28 08:45:58 crc kubenswrapper[4946]: I1128 08:45:58.482735 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dfd7bdf7-rpvww" Nov 28 08:45:58 crc kubenswrapper[4946]: I1128 08:45:58.517733 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5dfd7bdf7-rpvww"] Nov 28 08:45:58 crc kubenswrapper[4946]: I1128 08:45:58.531660 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5dfd7bdf7-rpvww"] Nov 28 08:45:58 crc kubenswrapper[4946]: I1128 08:45:58.537413 4946 scope.go:117] "RemoveContainer" containerID="d6014fcb139895e27f64033039aa5975fdb993e7a8a6a157d02329870cb77e6f" Nov 28 08:45:58 crc kubenswrapper[4946]: I1128 08:45:58.990385 4946 scope.go:117] "RemoveContainer" containerID="350d49558ff5edf402a167963260dd78bc11ac007c0bdaee3d45f6814cfb23bd" Nov 28 08:45:58 crc kubenswrapper[4946]: E1128 08:45:58.991275 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:46:00 crc kubenswrapper[4946]: I1128 08:46:00.009903 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75b5d08b-c397-4583-9f57-ec8dbed322b5" path="/var/lib/kubelet/pods/75b5d08b-c397-4583-9f57-ec8dbed322b5/volumes" Nov 28 08:46:01 crc kubenswrapper[4946]: I1128 08:46:01.482966 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Nov 28 08:46:01 crc kubenswrapper[4946]: E1128 08:46:01.483492 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75b5d08b-c397-4583-9f57-ec8dbed322b5" containerName="dnsmasq-dns" Nov 28 08:46:01 crc kubenswrapper[4946]: I1128 08:46:01.483515 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="75b5d08b-c397-4583-9f57-ec8dbed322b5" containerName="dnsmasq-dns" Nov 28 08:46:01 crc kubenswrapper[4946]: E1128 08:46:01.483538 4946 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75b5d08b-c397-4583-9f57-ec8dbed322b5" containerName="init"
Nov 28 08:46:01 crc kubenswrapper[4946]: I1128 08:46:01.483548 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="75b5d08b-c397-4583-9f57-ec8dbed322b5" containerName="init"
Nov 28 08:46:01 crc kubenswrapper[4946]: I1128 08:46:01.484987 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="75b5d08b-c397-4583-9f57-ec8dbed322b5" containerName="dnsmasq-dns"
Nov 28 08:46:01 crc kubenswrapper[4946]: I1128 08:46:01.486755 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Nov 28 08:46:01 crc kubenswrapper[4946]: I1128 08:46:01.489962 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-pn68j"
Nov 28 08:46:01 crc kubenswrapper[4946]: I1128 08:46:01.491201 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Nov 28 08:46:01 crc kubenswrapper[4946]: I1128 08:46:01.496870 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Nov 28 08:46:01 crc kubenswrapper[4946]: I1128 08:46:01.521480 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Nov 28 08:46:01 crc kubenswrapper[4946]: I1128 08:46:01.526678 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gq2t\" (UniqueName: \"kubernetes.io/projected/602749e9-4345-4dcc-a573-f6fe0101fcba-kube-api-access-2gq2t\") pod \"ovn-northd-0\" (UID: \"602749e9-4345-4dcc-a573-f6fe0101fcba\") " pod="openstack/ovn-northd-0"
Nov 28 08:46:01 crc kubenswrapper[4946]: I1128 08:46:01.526731 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/602749e9-4345-4dcc-a573-f6fe0101fcba-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"602749e9-4345-4dcc-a573-f6fe0101fcba\") " pod="openstack/ovn-northd-0"
Nov 28 08:46:01 crc kubenswrapper[4946]: I1128 08:46:01.526755 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/602749e9-4345-4dcc-a573-f6fe0101fcba-config\") pod \"ovn-northd-0\" (UID: \"602749e9-4345-4dcc-a573-f6fe0101fcba\") " pod="openstack/ovn-northd-0"
Nov 28 08:46:01 crc kubenswrapper[4946]: I1128 08:46:01.527022 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/602749e9-4345-4dcc-a573-f6fe0101fcba-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"602749e9-4345-4dcc-a573-f6fe0101fcba\") " pod="openstack/ovn-northd-0"
Nov 28 08:46:01 crc kubenswrapper[4946]: I1128 08:46:01.527846 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/602749e9-4345-4dcc-a573-f6fe0101fcba-scripts\") pod \"ovn-northd-0\" (UID: \"602749e9-4345-4dcc-a573-f6fe0101fcba\") " pod="openstack/ovn-northd-0"
Nov 28 08:46:01 crc kubenswrapper[4946]: I1128 08:46:01.629682 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/602749e9-4345-4dcc-a573-f6fe0101fcba-scripts\") pod \"ovn-northd-0\" (UID: \"602749e9-4345-4dcc-a573-f6fe0101fcba\") " pod="openstack/ovn-northd-0"
Nov 28 08:46:01 crc kubenswrapper[4946]: I1128 08:46:01.629748 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gq2t\" (UniqueName: \"kubernetes.io/projected/602749e9-4345-4dcc-a573-f6fe0101fcba-kube-api-access-2gq2t\") pod \"ovn-northd-0\" (UID: \"602749e9-4345-4dcc-a573-f6fe0101fcba\") " pod="openstack/ovn-northd-0"
Nov 28 08:46:01 crc kubenswrapper[4946]: I1128 08:46:01.629806 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/602749e9-4345-4dcc-a573-f6fe0101fcba-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"602749e9-4345-4dcc-a573-f6fe0101fcba\") " pod="openstack/ovn-northd-0"
Nov 28 08:46:01 crc kubenswrapper[4946]: I1128 08:46:01.629831 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/602749e9-4345-4dcc-a573-f6fe0101fcba-config\") pod \"ovn-northd-0\" (UID: \"602749e9-4345-4dcc-a573-f6fe0101fcba\") " pod="openstack/ovn-northd-0"
Nov 28 08:46:01 crc kubenswrapper[4946]: I1128 08:46:01.629881 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/602749e9-4345-4dcc-a573-f6fe0101fcba-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"602749e9-4345-4dcc-a573-f6fe0101fcba\") " pod="openstack/ovn-northd-0"
Nov 28 08:46:01 crc kubenswrapper[4946]: I1128 08:46:01.630650 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/602749e9-4345-4dcc-a573-f6fe0101fcba-scripts\") pod \"ovn-northd-0\" (UID: \"602749e9-4345-4dcc-a573-f6fe0101fcba\") " pod="openstack/ovn-northd-0"
Nov 28 08:46:01 crc kubenswrapper[4946]: I1128 08:46:01.630823 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/602749e9-4345-4dcc-a573-f6fe0101fcba-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"602749e9-4345-4dcc-a573-f6fe0101fcba\") " pod="openstack/ovn-northd-0"
Nov 28 08:46:01 crc kubenswrapper[4946]: I1128 08:46:01.631351 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/602749e9-4345-4dcc-a573-f6fe0101fcba-config\") pod \"ovn-northd-0\" (UID: \"602749e9-4345-4dcc-a573-f6fe0101fcba\") " pod="openstack/ovn-northd-0"
Nov 28 08:46:01 crc kubenswrapper[4946]: I1128 08:46:01.639193 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/602749e9-4345-4dcc-a573-f6fe0101fcba-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"602749e9-4345-4dcc-a573-f6fe0101fcba\") " pod="openstack/ovn-northd-0"
Nov 28 08:46:01 crc kubenswrapper[4946]: I1128 08:46:01.648202 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gq2t\" (UniqueName: \"kubernetes.io/projected/602749e9-4345-4dcc-a573-f6fe0101fcba-kube-api-access-2gq2t\") pod \"ovn-northd-0\" (UID: \"602749e9-4345-4dcc-a573-f6fe0101fcba\") " pod="openstack/ovn-northd-0"
Nov 28 08:46:01 crc kubenswrapper[4946]: I1128 08:46:01.814228 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Nov 28 08:46:02 crc kubenswrapper[4946]: I1128 08:46:02.371223 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Nov 28 08:46:02 crc kubenswrapper[4946]: W1128 08:46:02.380221 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod602749e9_4345_4dcc_a573_f6fe0101fcba.slice/crio-e0eb28f857690cfbd302832d55fa0e1c262edd2a3dfcc913d4cb0cb644ac6b4c WatchSource:0}: Error finding container e0eb28f857690cfbd302832d55fa0e1c262edd2a3dfcc913d4cb0cb644ac6b4c: Status 404 returned error can't find the container with id e0eb28f857690cfbd302832d55fa0e1c262edd2a3dfcc913d4cb0cb644ac6b4c
Nov 28 08:46:02 crc kubenswrapper[4946]: I1128 08:46:02.532321 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"602749e9-4345-4dcc-a573-f6fe0101fcba","Type":"ContainerStarted","Data":"e0eb28f857690cfbd302832d55fa0e1c262edd2a3dfcc913d4cb0cb644ac6b4c"}
Nov 28 08:46:03 crc kubenswrapper[4946]: I1128 08:46:03.547206 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"602749e9-4345-4dcc-a573-f6fe0101fcba","Type":"ContainerStarted","Data":"50488e9919a30f54d56323fb22035fc3ef5c91f75d0b70c83f96635568059789"}
Nov 28 08:46:03 crc kubenswrapper[4946]: I1128 08:46:03.547774 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"602749e9-4345-4dcc-a573-f6fe0101fcba","Type":"ContainerStarted","Data":"c486f7607b48a100a5af5f42ccd134f88f2afe770eafd699f4919738cd3e433d"}
Nov 28 08:46:03 crc kubenswrapper[4946]: I1128 08:46:03.547820 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Nov 28 08:46:03 crc kubenswrapper[4946]: I1128 08:46:03.582946 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=1.910257182 podStartE2EDuration="2.582923393s" podCreationTimestamp="2025-11-28 08:46:01 +0000 UTC" firstStartedPulling="2025-11-28 08:46:02.390108634 +0000 UTC m=+6816.768173745" lastFinishedPulling="2025-11-28 08:46:03.062774845 +0000 UTC m=+6817.440839956" observedRunningTime="2025-11-28 08:46:03.572749691 +0000 UTC m=+6817.950814842" watchObservedRunningTime="2025-11-28 08:46:03.582923393 +0000 UTC m=+6817.960988514"
Nov 28 08:46:09 crc kubenswrapper[4946]: I1128 08:46:09.641928 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-6fb29"]
Nov 28 08:46:09 crc kubenswrapper[4946]: I1128 08:46:09.644285 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6fb29"
Nov 28 08:46:09 crc kubenswrapper[4946]: I1128 08:46:09.658817 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-b55e-account-create-update-lcv5z"]
Nov 28 08:46:09 crc kubenswrapper[4946]: I1128 08:46:09.660372 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b55e-account-create-update-lcv5z"
Nov 28 08:46:09 crc kubenswrapper[4946]: I1128 08:46:09.664367 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Nov 28 08:46:09 crc kubenswrapper[4946]: I1128 08:46:09.665685 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-6fb29"]
Nov 28 08:46:09 crc kubenswrapper[4946]: I1128 08:46:09.671326 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b55e-account-create-update-lcv5z"]
Nov 28 08:46:09 crc kubenswrapper[4946]: I1128 08:46:09.705408 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcp95\" (UniqueName: \"kubernetes.io/projected/9f1e52f2-1e13-44f6-b98e-293fc481aac4-kube-api-access-dcp95\") pod \"keystone-db-create-6fb29\" (UID: \"9f1e52f2-1e13-44f6-b98e-293fc481aac4\") " pod="openstack/keystone-db-create-6fb29"
Nov 28 08:46:09 crc kubenswrapper[4946]: I1128 08:46:09.705450 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f1e52f2-1e13-44f6-b98e-293fc481aac4-operator-scripts\") pod \"keystone-db-create-6fb29\" (UID: \"9f1e52f2-1e13-44f6-b98e-293fc481aac4\") " pod="openstack/keystone-db-create-6fb29"
Nov 28 08:46:09 crc kubenswrapper[4946]: I1128 08:46:09.705554 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v86xp\" (UniqueName: \"kubernetes.io/projected/1d0bf946-6b41-4e55-9840-9757ca834ad9-kube-api-access-v86xp\") pod \"keystone-b55e-account-create-update-lcv5z\" (UID: \"1d0bf946-6b41-4e55-9840-9757ca834ad9\") " pod="openstack/keystone-b55e-account-create-update-lcv5z"
Nov 28 08:46:09 crc kubenswrapper[4946]: I1128 08:46:09.705643 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d0bf946-6b41-4e55-9840-9757ca834ad9-operator-scripts\") pod \"keystone-b55e-account-create-update-lcv5z\" (UID: \"1d0bf946-6b41-4e55-9840-9757ca834ad9\") " pod="openstack/keystone-b55e-account-create-update-lcv5z"
Nov 28 08:46:09 crc kubenswrapper[4946]: I1128 08:46:09.807344 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcp95\" (UniqueName: \"kubernetes.io/projected/9f1e52f2-1e13-44f6-b98e-293fc481aac4-kube-api-access-dcp95\") pod \"keystone-db-create-6fb29\" (UID: \"9f1e52f2-1e13-44f6-b98e-293fc481aac4\") " pod="openstack/keystone-db-create-6fb29"
Nov 28 08:46:09 crc kubenswrapper[4946]: I1128 08:46:09.807393 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f1e52f2-1e13-44f6-b98e-293fc481aac4-operator-scripts\") pod \"keystone-db-create-6fb29\" (UID: \"9f1e52f2-1e13-44f6-b98e-293fc481aac4\") " pod="openstack/keystone-db-create-6fb29"
Nov 28 08:46:09 crc kubenswrapper[4946]: I1128 08:46:09.807438 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v86xp\" (UniqueName: \"kubernetes.io/projected/1d0bf946-6b41-4e55-9840-9757ca834ad9-kube-api-access-v86xp\") pod \"keystone-b55e-account-create-update-lcv5z\" (UID: \"1d0bf946-6b41-4e55-9840-9757ca834ad9\") " pod="openstack/keystone-b55e-account-create-update-lcv5z"
Nov 28 08:46:09 crc kubenswrapper[4946]: I1128 08:46:09.807537 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d0bf946-6b41-4e55-9840-9757ca834ad9-operator-scripts\") pod \"keystone-b55e-account-create-update-lcv5z\" (UID: \"1d0bf946-6b41-4e55-9840-9757ca834ad9\") " pod="openstack/keystone-b55e-account-create-update-lcv5z"
Nov 28 08:46:09 crc kubenswrapper[4946]: I1128 08:46:09.808333 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d0bf946-6b41-4e55-9840-9757ca834ad9-operator-scripts\") pod \"keystone-b55e-account-create-update-lcv5z\" (UID: \"1d0bf946-6b41-4e55-9840-9757ca834ad9\") " pod="openstack/keystone-b55e-account-create-update-lcv5z"
Nov 28 08:46:09 crc kubenswrapper[4946]: I1128 08:46:09.808439 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f1e52f2-1e13-44f6-b98e-293fc481aac4-operator-scripts\") pod \"keystone-db-create-6fb29\" (UID: \"9f1e52f2-1e13-44f6-b98e-293fc481aac4\") " pod="openstack/keystone-db-create-6fb29"
Nov 28 08:46:09 crc kubenswrapper[4946]: I1128 08:46:09.827437 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v86xp\" (UniqueName: \"kubernetes.io/projected/1d0bf946-6b41-4e55-9840-9757ca834ad9-kube-api-access-v86xp\") pod \"keystone-b55e-account-create-update-lcv5z\" (UID: \"1d0bf946-6b41-4e55-9840-9757ca834ad9\") " pod="openstack/keystone-b55e-account-create-update-lcv5z"
Nov 28 08:46:09 crc kubenswrapper[4946]: I1128 08:46:09.827450 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcp95\" (UniqueName: \"kubernetes.io/projected/9f1e52f2-1e13-44f6-b98e-293fc481aac4-kube-api-access-dcp95\") pod \"keystone-db-create-6fb29\" (UID: \"9f1e52f2-1e13-44f6-b98e-293fc481aac4\") " pod="openstack/keystone-db-create-6fb29"
Nov 28 08:46:10 crc kubenswrapper[4946]: I1128 08:46:10.020075 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6fb29"
Nov 28 08:46:10 crc kubenswrapper[4946]: I1128 08:46:10.035309 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b55e-account-create-update-lcv5z"
Nov 28 08:46:10 crc kubenswrapper[4946]: I1128 08:46:10.463589 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-6fb29"]
Nov 28 08:46:10 crc kubenswrapper[4946]: I1128 08:46:10.518638 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b55e-account-create-update-lcv5z"]
Nov 28 08:46:10 crc kubenswrapper[4946]: W1128 08:46:10.525760 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d0bf946_6b41_4e55_9840_9757ca834ad9.slice/crio-cddfcd5fdde503645700814133eb96c5f744812994437db7ba9acde3499573ae WatchSource:0}: Error finding container cddfcd5fdde503645700814133eb96c5f744812994437db7ba9acde3499573ae: Status 404 returned error can't find the container with id cddfcd5fdde503645700814133eb96c5f744812994437db7ba9acde3499573ae
Nov 28 08:46:10 crc kubenswrapper[4946]: I1128 08:46:10.610511 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6fb29" event={"ID":"9f1e52f2-1e13-44f6-b98e-293fc481aac4","Type":"ContainerStarted","Data":"c0007b15f187f2612d1748e0a48f79d3075eaa28bea34542d77049300dae073f"}
Nov 28 08:46:10 crc kubenswrapper[4946]: I1128 08:46:10.612296 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b55e-account-create-update-lcv5z" event={"ID":"1d0bf946-6b41-4e55-9840-9757ca834ad9","Type":"ContainerStarted","Data":"cddfcd5fdde503645700814133eb96c5f744812994437db7ba9acde3499573ae"}
Nov 28 08:46:11 crc kubenswrapper[4946]: I1128 08:46:11.008572 4946 scope.go:117] "RemoveContainer" containerID="350d49558ff5edf402a167963260dd78bc11ac007c0bdaee3d45f6814cfb23bd"
Nov 28 08:46:11 crc kubenswrapper[4946]: E1128 08:46:11.008963 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
Nov 28 08:46:11 crc kubenswrapper[4946]: I1128 08:46:11.625009 4946 generic.go:334] "Generic (PLEG): container finished" podID="1d0bf946-6b41-4e55-9840-9757ca834ad9" containerID="9fea45ecdbf06b48e01e977b867a6dae2c334519ba5b66a2a30971852c90c63c" exitCode=0
Nov 28 08:46:11 crc kubenswrapper[4946]: I1128 08:46:11.625317 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b55e-account-create-update-lcv5z" event={"ID":"1d0bf946-6b41-4e55-9840-9757ca834ad9","Type":"ContainerDied","Data":"9fea45ecdbf06b48e01e977b867a6dae2c334519ba5b66a2a30971852c90c63c"}
Nov 28 08:46:11 crc kubenswrapper[4946]: I1128 08:46:11.627959 4946 generic.go:334] "Generic (PLEG): container finished" podID="9f1e52f2-1e13-44f6-b98e-293fc481aac4" containerID="9c7f8bdd9a6dc0da3cf813d0c3fcaa42c57e69d3b550326c129e30628f15116e" exitCode=0
Nov 28 08:46:11 crc kubenswrapper[4946]: I1128 08:46:11.627994 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6fb29" event={"ID":"9f1e52f2-1e13-44f6-b98e-293fc481aac4","Type":"ContainerDied","Data":"9c7f8bdd9a6dc0da3cf813d0c3fcaa42c57e69d3b550326c129e30628f15116e"}
Nov 28 08:46:13 crc kubenswrapper[4946]: I1128 08:46:13.091600 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b55e-account-create-update-lcv5z"
Nov 28 08:46:13 crc kubenswrapper[4946]: I1128 08:46:13.102711 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6fb29"
Nov 28 08:46:13 crc kubenswrapper[4946]: I1128 08:46:13.175339 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v86xp\" (UniqueName: \"kubernetes.io/projected/1d0bf946-6b41-4e55-9840-9757ca834ad9-kube-api-access-v86xp\") pod \"1d0bf946-6b41-4e55-9840-9757ca834ad9\" (UID: \"1d0bf946-6b41-4e55-9840-9757ca834ad9\") "
Nov 28 08:46:13 crc kubenswrapper[4946]: I1128 08:46:13.175588 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcp95\" (UniqueName: \"kubernetes.io/projected/9f1e52f2-1e13-44f6-b98e-293fc481aac4-kube-api-access-dcp95\") pod \"9f1e52f2-1e13-44f6-b98e-293fc481aac4\" (UID: \"9f1e52f2-1e13-44f6-b98e-293fc481aac4\") "
Nov 28 08:46:13 crc kubenswrapper[4946]: I1128 08:46:13.175650 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f1e52f2-1e13-44f6-b98e-293fc481aac4-operator-scripts\") pod \"9f1e52f2-1e13-44f6-b98e-293fc481aac4\" (UID: \"9f1e52f2-1e13-44f6-b98e-293fc481aac4\") "
Nov 28 08:46:13 crc kubenswrapper[4946]: I1128 08:46:13.175745 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d0bf946-6b41-4e55-9840-9757ca834ad9-operator-scripts\") pod \"1d0bf946-6b41-4e55-9840-9757ca834ad9\" (UID: \"1d0bf946-6b41-4e55-9840-9757ca834ad9\") "
Nov 28 08:46:13 crc kubenswrapper[4946]: I1128 08:46:13.176490 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d0bf946-6b41-4e55-9840-9757ca834ad9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1d0bf946-6b41-4e55-9840-9757ca834ad9" (UID: "1d0bf946-6b41-4e55-9840-9757ca834ad9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 08:46:13 crc kubenswrapper[4946]: I1128 08:46:13.176522 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f1e52f2-1e13-44f6-b98e-293fc481aac4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9f1e52f2-1e13-44f6-b98e-293fc481aac4" (UID: "9f1e52f2-1e13-44f6-b98e-293fc481aac4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 08:46:13 crc kubenswrapper[4946]: I1128 08:46:13.182076 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d0bf946-6b41-4e55-9840-9757ca834ad9-kube-api-access-v86xp" (OuterVolumeSpecName: "kube-api-access-v86xp") pod "1d0bf946-6b41-4e55-9840-9757ca834ad9" (UID: "1d0bf946-6b41-4e55-9840-9757ca834ad9"). InnerVolumeSpecName "kube-api-access-v86xp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 08:46:13 crc kubenswrapper[4946]: I1128 08:46:13.184687 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f1e52f2-1e13-44f6-b98e-293fc481aac4-kube-api-access-dcp95" (OuterVolumeSpecName: "kube-api-access-dcp95") pod "9f1e52f2-1e13-44f6-b98e-293fc481aac4" (UID: "9f1e52f2-1e13-44f6-b98e-293fc481aac4"). InnerVolumeSpecName "kube-api-access-dcp95". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 08:46:13 crc kubenswrapper[4946]: I1128 08:46:13.278151 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v86xp\" (UniqueName: \"kubernetes.io/projected/1d0bf946-6b41-4e55-9840-9757ca834ad9-kube-api-access-v86xp\") on node \"crc\" DevicePath \"\""
Nov 28 08:46:13 crc kubenswrapper[4946]: I1128 08:46:13.278236 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcp95\" (UniqueName: \"kubernetes.io/projected/9f1e52f2-1e13-44f6-b98e-293fc481aac4-kube-api-access-dcp95\") on node \"crc\" DevicePath \"\""
Nov 28 08:46:13 crc kubenswrapper[4946]: I1128 08:46:13.278259 4946 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f1e52f2-1e13-44f6-b98e-293fc481aac4-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 28 08:46:13 crc kubenswrapper[4946]: I1128 08:46:13.278275 4946 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d0bf946-6b41-4e55-9840-9757ca834ad9-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 28 08:46:13 crc kubenswrapper[4946]: I1128 08:46:13.655417 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b55e-account-create-update-lcv5z" event={"ID":"1d0bf946-6b41-4e55-9840-9757ca834ad9","Type":"ContainerDied","Data":"cddfcd5fdde503645700814133eb96c5f744812994437db7ba9acde3499573ae"}
Nov 28 08:46:13 crc kubenswrapper[4946]: I1128 08:46:13.655514 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cddfcd5fdde503645700814133eb96c5f744812994437db7ba9acde3499573ae"
Nov 28 08:46:13 crc kubenswrapper[4946]: I1128 08:46:13.655505 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b55e-account-create-update-lcv5z"
Nov 28 08:46:13 crc kubenswrapper[4946]: I1128 08:46:13.658287 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6fb29" event={"ID":"9f1e52f2-1e13-44f6-b98e-293fc481aac4","Type":"ContainerDied","Data":"c0007b15f187f2612d1748e0a48f79d3075eaa28bea34542d77049300dae073f"}
Nov 28 08:46:13 crc kubenswrapper[4946]: I1128 08:46:13.658326 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0007b15f187f2612d1748e0a48f79d3075eaa28bea34542d77049300dae073f"
Nov 28 08:46:13 crc kubenswrapper[4946]: I1128 08:46:13.658400 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6fb29"
Nov 28 08:46:15 crc kubenswrapper[4946]: I1128 08:46:15.302075 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-fdw2z"]
Nov 28 08:46:15 crc kubenswrapper[4946]: E1128 08:46:15.302722 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f1e52f2-1e13-44f6-b98e-293fc481aac4" containerName="mariadb-database-create"
Nov 28 08:46:15 crc kubenswrapper[4946]: I1128 08:46:15.302738 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f1e52f2-1e13-44f6-b98e-293fc481aac4" containerName="mariadb-database-create"
Nov 28 08:46:15 crc kubenswrapper[4946]: E1128 08:46:15.302762 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d0bf946-6b41-4e55-9840-9757ca834ad9" containerName="mariadb-account-create-update"
Nov 28 08:46:15 crc kubenswrapper[4946]: I1128 08:46:15.302769 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d0bf946-6b41-4e55-9840-9757ca834ad9" containerName="mariadb-account-create-update"
Nov 28 08:46:15 crc kubenswrapper[4946]: I1128 08:46:15.302915 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f1e52f2-1e13-44f6-b98e-293fc481aac4" containerName="mariadb-database-create"
Nov 28 08:46:15 crc kubenswrapper[4946]: I1128 08:46:15.302933 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d0bf946-6b41-4e55-9840-9757ca834ad9" containerName="mariadb-account-create-update"
Nov 28 08:46:15 crc kubenswrapper[4946]: I1128 08:46:15.303499 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-fdw2z"
Nov 28 08:46:15 crc kubenswrapper[4946]: I1128 08:46:15.306360 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Nov 28 08:46:15 crc kubenswrapper[4946]: I1128 08:46:15.306562 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Nov 28 08:46:15 crc kubenswrapper[4946]: I1128 08:46:15.306768 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Nov 28 08:46:15 crc kubenswrapper[4946]: I1128 08:46:15.306953 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7df6x"
Nov 28 08:46:15 crc kubenswrapper[4946]: I1128 08:46:15.314111 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-fdw2z"]
Nov 28 08:46:15 crc kubenswrapper[4946]: I1128 08:46:15.416527 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29b9dcab-32d0-4854-b903-c81f133031ec-combined-ca-bundle\") pod \"keystone-db-sync-fdw2z\" (UID: \"29b9dcab-32d0-4854-b903-c81f133031ec\") " pod="openstack/keystone-db-sync-fdw2z"
Nov 28 08:46:15 crc kubenswrapper[4946]: I1128 08:46:15.416593 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29b9dcab-32d0-4854-b903-c81f133031ec-config-data\") pod \"keystone-db-sync-fdw2z\" (UID: \"29b9dcab-32d0-4854-b903-c81f133031ec\") " pod="openstack/keystone-db-sync-fdw2z"
Nov 28 08:46:15 crc kubenswrapper[4946]: I1128 08:46:15.416617 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46r7z\" (UniqueName: \"kubernetes.io/projected/29b9dcab-32d0-4854-b903-c81f133031ec-kube-api-access-46r7z\") pod \"keystone-db-sync-fdw2z\" (UID: \"29b9dcab-32d0-4854-b903-c81f133031ec\") " pod="openstack/keystone-db-sync-fdw2z"
Nov 28 08:46:15 crc kubenswrapper[4946]: I1128 08:46:15.517923 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29b9dcab-32d0-4854-b903-c81f133031ec-combined-ca-bundle\") pod \"keystone-db-sync-fdw2z\" (UID: \"29b9dcab-32d0-4854-b903-c81f133031ec\") " pod="openstack/keystone-db-sync-fdw2z"
Nov 28 08:46:15 crc kubenswrapper[4946]: I1128 08:46:15.518005 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29b9dcab-32d0-4854-b903-c81f133031ec-config-data\") pod \"keystone-db-sync-fdw2z\" (UID: \"29b9dcab-32d0-4854-b903-c81f133031ec\") " pod="openstack/keystone-db-sync-fdw2z"
Nov 28 08:46:15 crc kubenswrapper[4946]: I1128 08:46:15.518030 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46r7z\" (UniqueName: \"kubernetes.io/projected/29b9dcab-32d0-4854-b903-c81f133031ec-kube-api-access-46r7z\") pod \"keystone-db-sync-fdw2z\" (UID: \"29b9dcab-32d0-4854-b903-c81f133031ec\") " pod="openstack/keystone-db-sync-fdw2z"
Nov 28 08:46:15 crc kubenswrapper[4946]: I1128 08:46:15.523503 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29b9dcab-32d0-4854-b903-c81f133031ec-config-data\") pod \"keystone-db-sync-fdw2z\" (UID: \"29b9dcab-32d0-4854-b903-c81f133031ec\") " pod="openstack/keystone-db-sync-fdw2z"
Nov 28 08:46:15 crc kubenswrapper[4946]: I1128 08:46:15.527324 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29b9dcab-32d0-4854-b903-c81f133031ec-combined-ca-bundle\") pod \"keystone-db-sync-fdw2z\" (UID: \"29b9dcab-32d0-4854-b903-c81f133031ec\") " pod="openstack/keystone-db-sync-fdw2z"
Nov 28 08:46:15 crc kubenswrapper[4946]: I1128 08:46:15.539218 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46r7z\" (UniqueName: \"kubernetes.io/projected/29b9dcab-32d0-4854-b903-c81f133031ec-kube-api-access-46r7z\") pod \"keystone-db-sync-fdw2z\" (UID: \"29b9dcab-32d0-4854-b903-c81f133031ec\") " pod="openstack/keystone-db-sync-fdw2z"
Nov 28 08:46:15 crc kubenswrapper[4946]: I1128 08:46:15.631028 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-fdw2z"
Nov 28 08:46:16 crc kubenswrapper[4946]: I1128 08:46:16.085223 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-fdw2z"]
Nov 28 08:46:16 crc kubenswrapper[4946]: I1128 08:46:16.688057 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-fdw2z" event={"ID":"29b9dcab-32d0-4854-b903-c81f133031ec","Type":"ContainerStarted","Data":"5005214bcc091123b5b8204777920fba457e7b77884b67b178de0ce061268b23"}
Nov 28 08:46:16 crc kubenswrapper[4946]: I1128 08:46:16.884558 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Nov 28 08:46:21 crc kubenswrapper[4946]: I1128 08:46:21.730685 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-fdw2z" event={"ID":"29b9dcab-32d0-4854-b903-c81f133031ec","Type":"ContainerStarted","Data":"aabed6754523ef4c0afb6dd0f1afb470741353e55042d71601d2e5a6ae491ede"}
Nov 28 08:46:21 crc kubenswrapper[4946]: I1128 08:46:21.756104 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-fdw2z" podStartSLOduration=2.073836932 podStartE2EDuration="6.756080698s" podCreationTimestamp="2025-11-28 08:46:15 +0000 UTC" firstStartedPulling="2025-11-28 08:46:16.094450373 +0000 UTC m=+6830.472515514" lastFinishedPulling="2025-11-28 08:46:20.776694129 +0000 UTC m=+6835.154759280" observedRunningTime="2025-11-28 08:46:21.747825004 +0000 UTC m=+6836.125890135" watchObservedRunningTime="2025-11-28 08:46:21.756080698 +0000 UTC m=+6836.134145829"
Nov 28 08:46:21 crc kubenswrapper[4946]: I1128 08:46:21.990777 4946 scope.go:117] "RemoveContainer" containerID="350d49558ff5edf402a167963260dd78bc11ac007c0bdaee3d45f6814cfb23bd"
Nov 28 08:46:21 crc kubenswrapper[4946]: E1128 08:46:21.991223 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
Nov 28 08:46:22 crc kubenswrapper[4946]: I1128 08:46:22.744100 4946 generic.go:334] "Generic (PLEG): container finished" podID="29b9dcab-32d0-4854-b903-c81f133031ec" containerID="aabed6754523ef4c0afb6dd0f1afb470741353e55042d71601d2e5a6ae491ede" exitCode=0
Nov 28 08:46:22 crc kubenswrapper[4946]: I1128 08:46:22.744139 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-fdw2z" event={"ID":"29b9dcab-32d0-4854-b903-c81f133031ec","Type":"ContainerDied","Data":"aabed6754523ef4c0afb6dd0f1afb470741353e55042d71601d2e5a6ae491ede"}
Nov 28 08:46:24 crc kubenswrapper[4946]: I1128 08:46:24.170954 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-fdw2z"
Nov 28 08:46:24 crc kubenswrapper[4946]: I1128 08:46:24.271635 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29b9dcab-32d0-4854-b903-c81f133031ec-combined-ca-bundle\") pod \"29b9dcab-32d0-4854-b903-c81f133031ec\" (UID: \"29b9dcab-32d0-4854-b903-c81f133031ec\") "
Nov 28 08:46:24 crc kubenswrapper[4946]: I1128 08:46:24.271935 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29b9dcab-32d0-4854-b903-c81f133031ec-config-data\") pod \"29b9dcab-32d0-4854-b903-c81f133031ec\" (UID: \"29b9dcab-32d0-4854-b903-c81f133031ec\") "
Nov 28 08:46:24 crc kubenswrapper[4946]: I1128 08:46:24.272031 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46r7z\" (UniqueName: \"kubernetes.io/projected/29b9dcab-32d0-4854-b903-c81f133031ec-kube-api-access-46r7z\") pod \"29b9dcab-32d0-4854-b903-c81f133031ec\" (UID: \"29b9dcab-32d0-4854-b903-c81f133031ec\") "
Nov 28 08:46:24 crc kubenswrapper[4946]: I1128 08:46:24.285818 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29b9dcab-32d0-4854-b903-c81f133031ec-kube-api-access-46r7z" (OuterVolumeSpecName: "kube-api-access-46r7z") pod "29b9dcab-32d0-4854-b903-c81f133031ec" (UID: "29b9dcab-32d0-4854-b903-c81f133031ec"). InnerVolumeSpecName "kube-api-access-46r7z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 08:46:24 crc kubenswrapper[4946]: I1128 08:46:24.314109 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29b9dcab-32d0-4854-b903-c81f133031ec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "29b9dcab-32d0-4854-b903-c81f133031ec" (UID: "29b9dcab-32d0-4854-b903-c81f133031ec"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 08:46:24 crc kubenswrapper[4946]: I1128 08:46:24.327189 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29b9dcab-32d0-4854-b903-c81f133031ec-config-data" (OuterVolumeSpecName: "config-data") pod "29b9dcab-32d0-4854-b903-c81f133031ec" (UID: "29b9dcab-32d0-4854-b903-c81f133031ec"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 08:46:24 crc kubenswrapper[4946]: I1128 08:46:24.373667 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29b9dcab-32d0-4854-b903-c81f133031ec-config-data\") on node \"crc\" DevicePath \"\""
Nov 28 08:46:24 crc kubenswrapper[4946]: I1128 08:46:24.373695 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46r7z\" (UniqueName: \"kubernetes.io/projected/29b9dcab-32d0-4854-b903-c81f133031ec-kube-api-access-46r7z\") on node \"crc\" DevicePath \"\""
Nov 28 08:46:24 crc kubenswrapper[4946]: I1128 08:46:24.373705 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29b9dcab-32d0-4854-b903-c81f133031ec-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 08:46:24 crc kubenswrapper[4946]: I1128 08:46:24.771411 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-fdw2z" event={"ID":"29b9dcab-32d0-4854-b903-c81f133031ec","Type":"ContainerDied","Data":"5005214bcc091123b5b8204777920fba457e7b77884b67b178de0ce061268b23"}
Nov 28 08:46:24 crc kubenswrapper[4946]: I1128 08:46:24.771530 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5005214bcc091123b5b8204777920fba457e7b77884b67b178de0ce061268b23"
Nov 28 08:46:24 crc kubenswrapper[4946]: I1128 08:46:24.771538 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-fdw2z"
Nov 28 08:46:25 crc kubenswrapper[4946]: I1128 08:46:25.043770 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-65b98775c-hq9l6"]
Nov 28 08:46:25 crc kubenswrapper[4946]: E1128 08:46:25.044121 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29b9dcab-32d0-4854-b903-c81f133031ec" containerName="keystone-db-sync"
Nov 28 08:46:25 crc kubenswrapper[4946]: I1128 08:46:25.044139 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="29b9dcab-32d0-4854-b903-c81f133031ec" containerName="keystone-db-sync"
Nov 28 08:46:25 crc kubenswrapper[4946]: I1128 08:46:25.044343 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="29b9dcab-32d0-4854-b903-c81f133031ec" containerName="keystone-db-sync"
Nov 28 08:46:25 crc kubenswrapper[4946]: I1128 08:46:25.049522 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65b98775c-hq9l6"
Nov 28 08:46:25 crc kubenswrapper[4946]: I1128 08:46:25.057598 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-k7tng"]
Nov 28 08:46:25 crc kubenswrapper[4946]: I1128 08:46:25.059006 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-k7tng"
Nov 28 08:46:25 crc kubenswrapper[4946]: I1128 08:46:25.064846 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Nov 28 08:46:25 crc kubenswrapper[4946]: I1128 08:46:25.064924 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7df6x"
Nov 28 08:46:25 crc kubenswrapper[4946]: I1128 08:46:25.064994 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Nov 28 08:46:25 crc kubenswrapper[4946]: I1128 08:46:25.065119 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Nov 28 08:46:25 crc kubenswrapper[4946]: I1128 08:46:25.065916 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Nov 28 08:46:25 crc kubenswrapper[4946]: I1128 08:46:25.068525 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65b98775c-hq9l6"]
Nov 28 08:46:25 crc kubenswrapper[4946]: I1128 08:46:25.083272 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-k7tng"]
Nov 28 08:46:25 crc kubenswrapper[4946]: I1128 08:46:25.085706 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/106d41f1-b72c-4342-aeb6-e4cbca7f7d23-fernet-keys\") pod \"keystone-bootstrap-k7tng\" (UID: \"106d41f1-b72c-4342-aeb6-e4cbca7f7d23\") " pod="openstack/keystone-bootstrap-k7tng"
Nov 28 08:46:25 crc kubenswrapper[4946]: I1128 08:46:25.085772 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/106d41f1-b72c-4342-aeb6-e4cbca7f7d23-combined-ca-bundle\") pod \"keystone-bootstrap-k7tng\" (UID: \"106d41f1-b72c-4342-aeb6-e4cbca7f7d23\") " pod="openstack/keystone-bootstrap-k7tng"
Nov 28 08:46:25 crc kubenswrapper[4946]: I1128 08:46:25.085819 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhbk4\" (UniqueName: \"kubernetes.io/projected/106d41f1-b72c-4342-aeb6-e4cbca7f7d23-kube-api-access-lhbk4\") pod \"keystone-bootstrap-k7tng\" (UID: \"106d41f1-b72c-4342-aeb6-e4cbca7f7d23\") " pod="openstack/keystone-bootstrap-k7tng"
Nov 28 08:46:25 crc kubenswrapper[4946]: I1128 08:46:25.085834 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/106d41f1-b72c-4342-aeb6-e4cbca7f7d23-config-data\") pod \"keystone-bootstrap-k7tng\" (UID: \"106d41f1-b72c-4342-aeb6-e4cbca7f7d23\") " pod="openstack/keystone-bootstrap-k7tng"
Nov 28 08:46:25 crc kubenswrapper[4946]: I1128 08:46:25.085882 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/106d41f1-b72c-4342-aeb6-e4cbca7f7d23-scripts\") pod \"keystone-bootstrap-k7tng\" (UID: \"106d41f1-b72c-4342-aeb6-e4cbca7f7d23\") " pod="openstack/keystone-bootstrap-k7tng"
Nov 28 08:46:25 crc kubenswrapper[4946]: I1128 08:46:25.085910 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/43286ac2-75c9-470f-902a-e77da7b447fb-ovsdbserver-nb\") pod \"dnsmasq-dns-65b98775c-hq9l6\" (UID: \"43286ac2-75c9-470f-902a-e77da7b447fb\") " pod="openstack/dnsmasq-dns-65b98775c-hq9l6"
Nov 28 08:46:25 crc kubenswrapper[4946]: I1128 08:46:25.085945 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/43286ac2-75c9-470f-902a-e77da7b447fb-ovsdbserver-sb\") pod \"dnsmasq-dns-65b98775c-hq9l6\" (UID: \"43286ac2-75c9-470f-902a-e77da7b447fb\") " pod="openstack/dnsmasq-dns-65b98775c-hq9l6"
Nov 28 08:46:25 crc kubenswrapper[4946]: I1128 08:46:25.085965 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43286ac2-75c9-470f-902a-e77da7b447fb-dns-svc\") pod \"dnsmasq-dns-65b98775c-hq9l6\" (UID: \"43286ac2-75c9-470f-902a-e77da7b447fb\") " pod="openstack/dnsmasq-dns-65b98775c-hq9l6"
Nov 28 08:46:25 crc kubenswrapper[4946]: I1128 08:46:25.085980 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/106d41f1-b72c-4342-aeb6-e4cbca7f7d23-credential-keys\") pod \"keystone-bootstrap-k7tng\" (UID: \"106d41f1-b72c-4342-aeb6-e4cbca7f7d23\") " pod="openstack/keystone-bootstrap-k7tng"
Nov 28 08:46:25 crc kubenswrapper[4946]: I1128 08:46:25.085997 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fsnc\" (UniqueName: \"kubernetes.io/projected/43286ac2-75c9-470f-902a-e77da7b447fb-kube-api-access-9fsnc\") pod \"dnsmasq-dns-65b98775c-hq9l6\" (UID: \"43286ac2-75c9-470f-902a-e77da7b447fb\") " pod="openstack/dnsmasq-dns-65b98775c-hq9l6"
Nov 28 08:46:25 crc kubenswrapper[4946]: I1128 08:46:25.086020 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43286ac2-75c9-470f-902a-e77da7b447fb-config\") pod \"dnsmasq-dns-65b98775c-hq9l6\" (UID: \"43286ac2-75c9-470f-902a-e77da7b447fb\") " pod="openstack/dnsmasq-dns-65b98775c-hq9l6"
Nov 28 08:46:25 crc kubenswrapper[4946]: I1128 08:46:25.187696 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/106d41f1-b72c-4342-aeb6-e4cbca7f7d23-fernet-keys\") pod \"keystone-bootstrap-k7tng\" (UID: \"106d41f1-b72c-4342-aeb6-e4cbca7f7d23\") " pod="openstack/keystone-bootstrap-k7tng"
Nov 28 08:46:25 crc kubenswrapper[4946]: I1128 08:46:25.187784 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/106d41f1-b72c-4342-aeb6-e4cbca7f7d23-combined-ca-bundle\") pod \"keystone-bootstrap-k7tng\" (UID: \"106d41f1-b72c-4342-aeb6-e4cbca7f7d23\") " pod="openstack/keystone-bootstrap-k7tng"
Nov 28 08:46:25 crc kubenswrapper[4946]: I1128 08:46:25.187814 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhbk4\" (UniqueName: \"kubernetes.io/projected/106d41f1-b72c-4342-aeb6-e4cbca7f7d23-kube-api-access-lhbk4\") pod \"keystone-bootstrap-k7tng\" (UID: \"106d41f1-b72c-4342-aeb6-e4cbca7f7d23\") " pod="openstack/keystone-bootstrap-k7tng"
Nov 28 08:46:25 crc kubenswrapper[4946]: I1128 08:46:25.188326 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/106d41f1-b72c-4342-aeb6-e4cbca7f7d23-config-data\") pod \"keystone-bootstrap-k7tng\" (UID: \"106d41f1-b72c-4342-aeb6-e4cbca7f7d23\") " pod="openstack/keystone-bootstrap-k7tng"
Nov 28 08:46:25 crc kubenswrapper[4946]: I1128 08:46:25.188382 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/106d41f1-b72c-4342-aeb6-e4cbca7f7d23-scripts\") pod \"keystone-bootstrap-k7tng\" (UID: \"106d41f1-b72c-4342-aeb6-e4cbca7f7d23\") " pod="openstack/keystone-bootstrap-k7tng"
Nov 28 08:46:25 crc kubenswrapper[4946]: I1128 08:46:25.188418 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/43286ac2-75c9-470f-902a-e77da7b447fb-ovsdbserver-nb\") pod \"dnsmasq-dns-65b98775c-hq9l6\" (UID: \"43286ac2-75c9-470f-902a-e77da7b447fb\") " pod="openstack/dnsmasq-dns-65b98775c-hq9l6"
Nov 28 08:46:25 crc kubenswrapper[4946]: I1128 08:46:25.188443 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/43286ac2-75c9-470f-902a-e77da7b447fb-ovsdbserver-sb\") pod \"dnsmasq-dns-65b98775c-hq9l6\" (UID: \"43286ac2-75c9-470f-902a-e77da7b447fb\") " pod="openstack/dnsmasq-dns-65b98775c-hq9l6"
Nov 28 08:46:25 crc kubenswrapper[4946]: I1128 08:46:25.188504 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43286ac2-75c9-470f-902a-e77da7b447fb-dns-svc\") pod \"dnsmasq-dns-65b98775c-hq9l6\" (UID: \"43286ac2-75c9-470f-902a-e77da7b447fb\") " pod="openstack/dnsmasq-dns-65b98775c-hq9l6"
Nov 28 08:46:25 crc kubenswrapper[4946]: I1128 08:46:25.188527 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/106d41f1-b72c-4342-aeb6-e4cbca7f7d23-credential-keys\") pod \"keystone-bootstrap-k7tng\" (UID: \"106d41f1-b72c-4342-aeb6-e4cbca7f7d23\") " pod="openstack/keystone-bootstrap-k7tng"
Nov 28 08:46:25 crc kubenswrapper[4946]: I1128 08:46:25.188547 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fsnc\" (UniqueName: \"kubernetes.io/projected/43286ac2-75c9-470f-902a-e77da7b447fb-kube-api-access-9fsnc\") pod \"dnsmasq-dns-65b98775c-hq9l6\" (UID: \"43286ac2-75c9-470f-902a-e77da7b447fb\") " pod="openstack/dnsmasq-dns-65b98775c-hq9l6"
Nov 28 08:46:25 crc kubenswrapper[4946]: I1128 08:46:25.188570 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43286ac2-75c9-470f-902a-e77da7b447fb-config\") pod \"dnsmasq-dns-65b98775c-hq9l6\" (UID: \"43286ac2-75c9-470f-902a-e77da7b447fb\") " pod="openstack/dnsmasq-dns-65b98775c-hq9l6"
Nov 28 08:46:25 crc kubenswrapper[4946]: I1128 08:46:25.189348 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43286ac2-75c9-470f-902a-e77da7b447fb-config\") pod \"dnsmasq-dns-65b98775c-hq9l6\" (UID: \"43286ac2-75c9-470f-902a-e77da7b447fb\") " pod="openstack/dnsmasq-dns-65b98775c-hq9l6"
Nov 28 08:46:25 crc kubenswrapper[4946]: I1128 08:46:25.190369 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/43286ac2-75c9-470f-902a-e77da7b447fb-ovsdbserver-nb\") pod \"dnsmasq-dns-65b98775c-hq9l6\" (UID: \"43286ac2-75c9-470f-902a-e77da7b447fb\") " pod="openstack/dnsmasq-dns-65b98775c-hq9l6"
Nov 28 08:46:25 crc kubenswrapper[4946]: I1128 08:46:25.190972 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43286ac2-75c9-470f-902a-e77da7b447fb-dns-svc\") pod \"dnsmasq-dns-65b98775c-hq9l6\" (UID: \"43286ac2-75c9-470f-902a-e77da7b447fb\") " pod="openstack/dnsmasq-dns-65b98775c-hq9l6"
Nov 28 08:46:25 crc kubenswrapper[4946]: I1128 08:46:25.192060 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/43286ac2-75c9-470f-902a-e77da7b447fb-ovsdbserver-sb\") pod \"dnsmasq-dns-65b98775c-hq9l6\" (UID: \"43286ac2-75c9-470f-902a-e77da7b447fb\") " pod="openstack/dnsmasq-dns-65b98775c-hq9l6"
Nov 28 08:46:25 crc kubenswrapper[4946]: I1128 08:46:25.192088 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/106d41f1-b72c-4342-aeb6-e4cbca7f7d23-scripts\") pod \"keystone-bootstrap-k7tng\" (UID: \"106d41f1-b72c-4342-aeb6-e4cbca7f7d23\") " pod="openstack/keystone-bootstrap-k7tng"
Nov 28 08:46:25 crc kubenswrapper[4946]: I1128 08:46:25.192829 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/106d41f1-b72c-4342-aeb6-e4cbca7f7d23-fernet-keys\") pod \"keystone-bootstrap-k7tng\" (UID: \"106d41f1-b72c-4342-aeb6-e4cbca7f7d23\") " pod="openstack/keystone-bootstrap-k7tng"
Nov 28 08:46:25 crc kubenswrapper[4946]: I1128 08:46:25.193382 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/106d41f1-b72c-4342-aeb6-e4cbca7f7d23-combined-ca-bundle\") pod \"keystone-bootstrap-k7tng\" (UID: \"106d41f1-b72c-4342-aeb6-e4cbca7f7d23\") " pod="openstack/keystone-bootstrap-k7tng"
Nov 28 08:46:25 crc kubenswrapper[4946]: I1128 08:46:25.196779 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/106d41f1-b72c-4342-aeb6-e4cbca7f7d23-config-data\") pod \"keystone-bootstrap-k7tng\" (UID: \"106d41f1-b72c-4342-aeb6-e4cbca7f7d23\") " pod="openstack/keystone-bootstrap-k7tng"
Nov 28 08:46:25 crc kubenswrapper[4946]: I1128 08:46:25.208908 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhbk4\" (UniqueName: \"kubernetes.io/projected/106d41f1-b72c-4342-aeb6-e4cbca7f7d23-kube-api-access-lhbk4\") pod \"keystone-bootstrap-k7tng\" (UID: \"106d41f1-b72c-4342-aeb6-e4cbca7f7d23\") " pod="openstack/keystone-bootstrap-k7tng"
Nov 28 08:46:25 crc kubenswrapper[4946]: I1128 08:46:25.209798 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fsnc\" (UniqueName: \"kubernetes.io/projected/43286ac2-75c9-470f-902a-e77da7b447fb-kube-api-access-9fsnc\") pod \"dnsmasq-dns-65b98775c-hq9l6\" (UID: \"43286ac2-75c9-470f-902a-e77da7b447fb\") " pod="openstack/dnsmasq-dns-65b98775c-hq9l6"
Nov 28 08:46:25 crc kubenswrapper[4946]: I1128 08:46:25.223205 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/106d41f1-b72c-4342-aeb6-e4cbca7f7d23-credential-keys\") pod \"keystone-bootstrap-k7tng\" (UID: \"106d41f1-b72c-4342-aeb6-e4cbca7f7d23\") " pod="openstack/keystone-bootstrap-k7tng"
Nov 28 08:46:25 crc kubenswrapper[4946]: I1128 08:46:25.371984 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65b98775c-hq9l6"
Nov 28 08:46:25 crc kubenswrapper[4946]: I1128 08:46:25.383431 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-k7tng"
Nov 28 08:46:25 crc kubenswrapper[4946]: I1128 08:46:25.820219 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65b98775c-hq9l6"]
Nov 28 08:46:25 crc kubenswrapper[4946]: W1128 08:46:25.824481 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43286ac2_75c9_470f_902a_e77da7b447fb.slice/crio-2c549c2afd503aaf2f794b47425827a89e3e568f520f1f5503ef86e790817f97 WatchSource:0}: Error finding container 2c549c2afd503aaf2f794b47425827a89e3e568f520f1f5503ef86e790817f97: Status 404 returned error can't find the container with id 2c549c2afd503aaf2f794b47425827a89e3e568f520f1f5503ef86e790817f97
Nov 28 08:46:25 crc kubenswrapper[4946]: I1128 08:46:25.878905 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-k7tng"]
Nov 28 08:46:25 crc kubenswrapper[4946]: W1128 08:46:25.889219 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod106d41f1_b72c_4342_aeb6_e4cbca7f7d23.slice/crio-536468c55868d3c96967b8bf29c51b8ec7830cd94a164bdd1f7cbc9565e1df58 WatchSource:0}: Error finding container 536468c55868d3c96967b8bf29c51b8ec7830cd94a164bdd1f7cbc9565e1df58: Status 404 returned error can't find the container with id 536468c55868d3c96967b8bf29c51b8ec7830cd94a164bdd1f7cbc9565e1df58
Nov 28 08:46:26 crc kubenswrapper[4946]: I1128 08:46:26.793248 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-k7tng" event={"ID":"106d41f1-b72c-4342-aeb6-e4cbca7f7d23","Type":"ContainerStarted","Data":"538b5221263527cec4c2ef9f4ec8ab4e45a7f9afb89feed17ffc1295c9c13c28"}
Nov 28 08:46:26 crc kubenswrapper[4946]: I1128 08:46:26.794007 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-k7tng" event={"ID":"106d41f1-b72c-4342-aeb6-e4cbca7f7d23","Type":"ContainerStarted","Data":"536468c55868d3c96967b8bf29c51b8ec7830cd94a164bdd1f7cbc9565e1df58"}
Nov 28 08:46:26 crc kubenswrapper[4946]: I1128 08:46:26.796241 4946 generic.go:334] "Generic (PLEG): container finished" podID="43286ac2-75c9-470f-902a-e77da7b447fb" containerID="08b006942894dcd13ac42b15081e2980b4aa84b1d8a1177ff0eb24e1f1819ade" exitCode=0
Nov 28 08:46:26 crc kubenswrapper[4946]: I1128 08:46:26.796291 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65b98775c-hq9l6" event={"ID":"43286ac2-75c9-470f-902a-e77da7b447fb","Type":"ContainerDied","Data":"08b006942894dcd13ac42b15081e2980b4aa84b1d8a1177ff0eb24e1f1819ade"}
Nov 28 08:46:26 crc kubenswrapper[4946]: I1128 08:46:26.796316 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65b98775c-hq9l6" event={"ID":"43286ac2-75c9-470f-902a-e77da7b447fb","Type":"ContainerStarted","Data":"2c549c2afd503aaf2f794b47425827a89e3e568f520f1f5503ef86e790817f97"}
Nov 28 08:46:26 crc kubenswrapper[4946]: I1128 08:46:26.834324 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-k7tng" podStartSLOduration=1.834290711 podStartE2EDuration="1.834290711s" podCreationTimestamp="2025-11-28 08:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 08:46:26.830447996 +0000 UTC m=+6841.208513107" watchObservedRunningTime="2025-11-28 08:46:26.834290711 +0000 UTC m=+6841.212355892"
Nov 28 08:46:27 crc kubenswrapper[4946]: I1128 08:46:27.825773 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65b98775c-hq9l6" event={"ID":"43286ac2-75c9-470f-902a-e77da7b447fb","Type":"ContainerStarted","Data":"398d867ed212e31cb88fbda498bdf894b9addc66a9ec67f2dab7bb743886fb14"}
Nov 28 08:46:27 crc kubenswrapper[4946]: I1128 08:46:27.864148 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-65b98775c-hq9l6" podStartSLOduration=2.864122299 podStartE2EDuration="2.864122299s" podCreationTimestamp="2025-11-28 08:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 08:46:27.8524127 +0000 UTC m=+6842.230477851" watchObservedRunningTime="2025-11-28 08:46:27.864122299 +0000 UTC m=+6842.242187440"
Nov 28 08:46:28 crc kubenswrapper[4946]: I1128 08:46:28.834118 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-65b98775c-hq9l6"
Nov 28 08:46:29 crc kubenswrapper[4946]: I1128 08:46:29.843167 4946 generic.go:334] "Generic (PLEG): container finished" podID="106d41f1-b72c-4342-aeb6-e4cbca7f7d23" containerID="538b5221263527cec4c2ef9f4ec8ab4e45a7f9afb89feed17ffc1295c9c13c28" exitCode=0
Nov 28 08:46:29 crc kubenswrapper[4946]: I1128 08:46:29.843258 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-k7tng" event={"ID":"106d41f1-b72c-4342-aeb6-e4cbca7f7d23","Type":"ContainerDied","Data":"538b5221263527cec4c2ef9f4ec8ab4e45a7f9afb89feed17ffc1295c9c13c28"}
Nov 28 08:46:31 crc kubenswrapper[4946]: I1128 08:46:31.314353 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-k7tng"
Nov 28 08:46:31 crc kubenswrapper[4946]: I1128 08:46:31.411147 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/106d41f1-b72c-4342-aeb6-e4cbca7f7d23-combined-ca-bundle\") pod \"106d41f1-b72c-4342-aeb6-e4cbca7f7d23\" (UID: \"106d41f1-b72c-4342-aeb6-e4cbca7f7d23\") "
Nov 28 08:46:31 crc kubenswrapper[4946]: I1128 08:46:31.411598 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/106d41f1-b72c-4342-aeb6-e4cbca7f7d23-scripts\") pod \"106d41f1-b72c-4342-aeb6-e4cbca7f7d23\" (UID: \"106d41f1-b72c-4342-aeb6-e4cbca7f7d23\") "
Nov 28 08:46:31 crc kubenswrapper[4946]: I1128 08:46:31.411636 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/106d41f1-b72c-4342-aeb6-e4cbca7f7d23-config-data\") pod \"106d41f1-b72c-4342-aeb6-e4cbca7f7d23\" (UID: \"106d41f1-b72c-4342-aeb6-e4cbca7f7d23\") "
Nov 28 08:46:31 crc kubenswrapper[4946]: I1128 08:46:31.411748 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/106d41f1-b72c-4342-aeb6-e4cbca7f7d23-fernet-keys\") pod \"106d41f1-b72c-4342-aeb6-e4cbca7f7d23\" (UID: \"106d41f1-b72c-4342-aeb6-e4cbca7f7d23\") "
Nov 28 08:46:31 crc kubenswrapper[4946]: I1128 08:46:31.411789 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhbk4\" (UniqueName: \"kubernetes.io/projected/106d41f1-b72c-4342-aeb6-e4cbca7f7d23-kube-api-access-lhbk4\") pod \"106d41f1-b72c-4342-aeb6-e4cbca7f7d23\" (UID: \"106d41f1-b72c-4342-aeb6-e4cbca7f7d23\") "
Nov 28 08:46:31 crc kubenswrapper[4946]: I1128 08:46:31.411848 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/106d41f1-b72c-4342-aeb6-e4cbca7f7d23-credential-keys\") pod \"106d41f1-b72c-4342-aeb6-e4cbca7f7d23\" (UID: \"106d41f1-b72c-4342-aeb6-e4cbca7f7d23\") "
Nov 28 08:46:31 crc kubenswrapper[4946]: I1128 08:46:31.417084 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/106d41f1-b72c-4342-aeb6-e4cbca7f7d23-scripts" (OuterVolumeSpecName: "scripts") pod "106d41f1-b72c-4342-aeb6-e4cbca7f7d23" (UID: "106d41f1-b72c-4342-aeb6-e4cbca7f7d23"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 08:46:31 crc kubenswrapper[4946]: I1128 08:46:31.417117 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/106d41f1-b72c-4342-aeb6-e4cbca7f7d23-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "106d41f1-b72c-4342-aeb6-e4cbca7f7d23" (UID: "106d41f1-b72c-4342-aeb6-e4cbca7f7d23"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 08:46:31 crc kubenswrapper[4946]: I1128 08:46:31.418841 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/106d41f1-b72c-4342-aeb6-e4cbca7f7d23-kube-api-access-lhbk4" (OuterVolumeSpecName: "kube-api-access-lhbk4") pod "106d41f1-b72c-4342-aeb6-e4cbca7f7d23" (UID: "106d41f1-b72c-4342-aeb6-e4cbca7f7d23"). InnerVolumeSpecName "kube-api-access-lhbk4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 08:46:31 crc kubenswrapper[4946]: I1128 08:46:31.420710 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/106d41f1-b72c-4342-aeb6-e4cbca7f7d23-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "106d41f1-b72c-4342-aeb6-e4cbca7f7d23" (UID: "106d41f1-b72c-4342-aeb6-e4cbca7f7d23"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 08:46:31 crc kubenswrapper[4946]: I1128 08:46:31.436312 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/106d41f1-b72c-4342-aeb6-e4cbca7f7d23-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "106d41f1-b72c-4342-aeb6-e4cbca7f7d23" (UID: "106d41f1-b72c-4342-aeb6-e4cbca7f7d23"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 08:46:31 crc kubenswrapper[4946]: I1128 08:46:31.437752 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/106d41f1-b72c-4342-aeb6-e4cbca7f7d23-config-data" (OuterVolumeSpecName: "config-data") pod "106d41f1-b72c-4342-aeb6-e4cbca7f7d23" (UID: "106d41f1-b72c-4342-aeb6-e4cbca7f7d23"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 08:46:31 crc kubenswrapper[4946]: I1128 08:46:31.513703 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/106d41f1-b72c-4342-aeb6-e4cbca7f7d23-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 08:46:31 crc kubenswrapper[4946]: I1128 08:46:31.513734 4946 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/106d41f1-b72c-4342-aeb6-e4cbca7f7d23-scripts\") on node \"crc\" DevicePath \"\""
Nov 28 08:46:31 crc kubenswrapper[4946]: I1128 08:46:31.513743 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/106d41f1-b72c-4342-aeb6-e4cbca7f7d23-config-data\") on node \"crc\" DevicePath \"\""
Nov 28 08:46:31 crc kubenswrapper[4946]: I1128 08:46:31.513750 4946 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/106d41f1-b72c-4342-aeb6-e4cbca7f7d23-fernet-keys\") on node \"crc\" DevicePath \"\""
Nov 28 08:46:31 crc kubenswrapper[4946]: I1128 08:46:31.513759 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhbk4\" (UniqueName: \"kubernetes.io/projected/106d41f1-b72c-4342-aeb6-e4cbca7f7d23-kube-api-access-lhbk4\") on node \"crc\" DevicePath \"\""
Nov 28 08:46:31 crc kubenswrapper[4946]: I1128 08:46:31.513768 4946 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/106d41f1-b72c-4342-aeb6-e4cbca7f7d23-credential-keys\") on node \"crc\" DevicePath \"\""
Nov 28 08:46:31 crc kubenswrapper[4946]: I1128 08:46:31.864591 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-k7tng" event={"ID":"106d41f1-b72c-4342-aeb6-e4cbca7f7d23","Type":"ContainerDied","Data":"536468c55868d3c96967b8bf29c51b8ec7830cd94a164bdd1f7cbc9565e1df58"}
Nov 28 08:46:31 crc kubenswrapper[4946]: I1128 08:46:31.864630 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="536468c55868d3c96967b8bf29c51b8ec7830cd94a164bdd1f7cbc9565e1df58"
Nov 28 08:46:31 crc kubenswrapper[4946]: I1128 08:46:31.864665 4946
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-k7tng" Nov 28 08:46:31 crc kubenswrapper[4946]: I1128 08:46:31.930977 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-k7tng"] Nov 28 08:46:31 crc kubenswrapper[4946]: I1128 08:46:31.934922 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-k7tng"] Nov 28 08:46:32 crc kubenswrapper[4946]: I1128 08:46:32.012403 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="106d41f1-b72c-4342-aeb6-e4cbca7f7d23" path="/var/lib/kubelet/pods/106d41f1-b72c-4342-aeb6-e4cbca7f7d23/volumes" Nov 28 08:46:32 crc kubenswrapper[4946]: I1128 08:46:32.038978 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-dj9lp"] Nov 28 08:46:32 crc kubenswrapper[4946]: E1128 08:46:32.039513 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="106d41f1-b72c-4342-aeb6-e4cbca7f7d23" containerName="keystone-bootstrap" Nov 28 08:46:32 crc kubenswrapper[4946]: I1128 08:46:32.039542 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="106d41f1-b72c-4342-aeb6-e4cbca7f7d23" containerName="keystone-bootstrap" Nov 28 08:46:32 crc kubenswrapper[4946]: I1128 08:46:32.039847 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="106d41f1-b72c-4342-aeb6-e4cbca7f7d23" containerName="keystone-bootstrap" Nov 28 08:46:32 crc kubenswrapper[4946]: I1128 08:46:32.042101 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dj9lp" Nov 28 08:46:32 crc kubenswrapper[4946]: I1128 08:46:32.043148 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-dj9lp"] Nov 28 08:46:32 crc kubenswrapper[4946]: I1128 08:46:32.131964 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 28 08:46:32 crc kubenswrapper[4946]: I1128 08:46:32.132116 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 28 08:46:32 crc kubenswrapper[4946]: I1128 08:46:32.132256 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7df6x" Nov 28 08:46:32 crc kubenswrapper[4946]: I1128 08:46:32.132635 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 28 08:46:32 crc kubenswrapper[4946]: I1128 08:46:32.132768 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 28 08:46:32 crc kubenswrapper[4946]: I1128 08:46:32.133276 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/859d5115-6c6c-4452-b0f7-9a031bc502ee-scripts\") pod \"keystone-bootstrap-dj9lp\" (UID: \"859d5115-6c6c-4452-b0f7-9a031bc502ee\") " pod="openstack/keystone-bootstrap-dj9lp" Nov 28 08:46:32 crc kubenswrapper[4946]: I1128 08:46:32.133364 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/859d5115-6c6c-4452-b0f7-9a031bc502ee-combined-ca-bundle\") pod \"keystone-bootstrap-dj9lp\" (UID: \"859d5115-6c6c-4452-b0f7-9a031bc502ee\") " pod="openstack/keystone-bootstrap-dj9lp" Nov 28 08:46:32 crc kubenswrapper[4946]: I1128 08:46:32.133426 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/859d5115-6c6c-4452-b0f7-9a031bc502ee-credential-keys\") pod \"keystone-bootstrap-dj9lp\" (UID: \"859d5115-6c6c-4452-b0f7-9a031bc502ee\") " pod="openstack/keystone-bootstrap-dj9lp" Nov 28 08:46:32 crc kubenswrapper[4946]: I1128 08:46:32.133492 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/859d5115-6c6c-4452-b0f7-9a031bc502ee-config-data\") pod \"keystone-bootstrap-dj9lp\" (UID: \"859d5115-6c6c-4452-b0f7-9a031bc502ee\") " pod="openstack/keystone-bootstrap-dj9lp" Nov 28 08:46:32 crc kubenswrapper[4946]: I1128 08:46:32.133917 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/859d5115-6c6c-4452-b0f7-9a031bc502ee-fernet-keys\") pod \"keystone-bootstrap-dj9lp\" (UID: \"859d5115-6c6c-4452-b0f7-9a031bc502ee\") " pod="openstack/keystone-bootstrap-dj9lp" Nov 28 08:46:32 crc kubenswrapper[4946]: I1128 08:46:32.134011 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqfhc\" (UniqueName: \"kubernetes.io/projected/859d5115-6c6c-4452-b0f7-9a031bc502ee-kube-api-access-fqfhc\") pod \"keystone-bootstrap-dj9lp\" (UID: \"859d5115-6c6c-4452-b0f7-9a031bc502ee\") " pod="openstack/keystone-bootstrap-dj9lp" Nov 28 08:46:32 crc kubenswrapper[4946]: I1128 08:46:32.235292 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/859d5115-6c6c-4452-b0f7-9a031bc502ee-fernet-keys\") pod \"keystone-bootstrap-dj9lp\" (UID: \"859d5115-6c6c-4452-b0f7-9a031bc502ee\") " pod="openstack/keystone-bootstrap-dj9lp" Nov 28 08:46:32 crc kubenswrapper[4946]: I1128 08:46:32.235355 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqfhc\" (UniqueName: \"kubernetes.io/projected/859d5115-6c6c-4452-b0f7-9a031bc502ee-kube-api-access-fqfhc\") pod \"keystone-bootstrap-dj9lp\" (UID: \"859d5115-6c6c-4452-b0f7-9a031bc502ee\") " pod="openstack/keystone-bootstrap-dj9lp" Nov 28 08:46:32 crc kubenswrapper[4946]: I1128 08:46:32.235407 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/859d5115-6c6c-4452-b0f7-9a031bc502ee-scripts\") pod \"keystone-bootstrap-dj9lp\" (UID: \"859d5115-6c6c-4452-b0f7-9a031bc502ee\") " pod="openstack/keystone-bootstrap-dj9lp" Nov 28 08:46:32 crc kubenswrapper[4946]: I1128 08:46:32.235427 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/859d5115-6c6c-4452-b0f7-9a031bc502ee-combined-ca-bundle\") pod \"keystone-bootstrap-dj9lp\" (UID: \"859d5115-6c6c-4452-b0f7-9a031bc502ee\") " pod="openstack/keystone-bootstrap-dj9lp" Nov 28 08:46:32 crc kubenswrapper[4946]: I1128 08:46:32.235456 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/859d5115-6c6c-4452-b0f7-9a031bc502ee-credential-keys\") pod \"keystone-bootstrap-dj9lp\" (UID: \"859d5115-6c6c-4452-b0f7-9a031bc502ee\") " pod="openstack/keystone-bootstrap-dj9lp" Nov 28 08:46:32 crc kubenswrapper[4946]: I1128 08:46:32.235514 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/859d5115-6c6c-4452-b0f7-9a031bc502ee-config-data\") pod \"keystone-bootstrap-dj9lp\" (UID: \"859d5115-6c6c-4452-b0f7-9a031bc502ee\") " pod="openstack/keystone-bootstrap-dj9lp" Nov 28 08:46:32 crc kubenswrapper[4946]: I1128 08:46:32.241086 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/859d5115-6c6c-4452-b0f7-9a031bc502ee-config-data\") pod \"keystone-bootstrap-dj9lp\" (UID: \"859d5115-6c6c-4452-b0f7-9a031bc502ee\") " pod="openstack/keystone-bootstrap-dj9lp" Nov 28 08:46:32 crc kubenswrapper[4946]: I1128 08:46:32.241087 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/859d5115-6c6c-4452-b0f7-9a031bc502ee-scripts\") pod \"keystone-bootstrap-dj9lp\" (UID: \"859d5115-6c6c-4452-b0f7-9a031bc502ee\") " pod="openstack/keystone-bootstrap-dj9lp" Nov 28 08:46:32 crc kubenswrapper[4946]: I1128 08:46:32.241107 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/859d5115-6c6c-4452-b0f7-9a031bc502ee-fernet-keys\") pod \"keystone-bootstrap-dj9lp\" (UID: \"859d5115-6c6c-4452-b0f7-9a031bc502ee\") " pod="openstack/keystone-bootstrap-dj9lp" Nov 28 08:46:32 crc kubenswrapper[4946]: I1128 08:46:32.248369 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/859d5115-6c6c-4452-b0f7-9a031bc502ee-credential-keys\") pod \"keystone-bootstrap-dj9lp\" (UID: \"859d5115-6c6c-4452-b0f7-9a031bc502ee\") " pod="openstack/keystone-bootstrap-dj9lp" Nov 28 08:46:32 crc kubenswrapper[4946]: I1128 08:46:32.249486 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/859d5115-6c6c-4452-b0f7-9a031bc502ee-combined-ca-bundle\") pod \"keystone-bootstrap-dj9lp\" (UID: \"859d5115-6c6c-4452-b0f7-9a031bc502ee\") " pod="openstack/keystone-bootstrap-dj9lp" Nov 28 08:46:32 crc kubenswrapper[4946]: I1128 08:46:32.251873 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqfhc\" (UniqueName: \"kubernetes.io/projected/859d5115-6c6c-4452-b0f7-9a031bc502ee-kube-api-access-fqfhc\") pod \"keystone-bootstrap-dj9lp\" (UID: \"859d5115-6c6c-4452-b0f7-9a031bc502ee\") " pod="openstack/keystone-bootstrap-dj9lp" Nov 28 08:46:32 crc kubenswrapper[4946]: I1128 08:46:32.454209 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-dj9lp" Nov 28 08:46:32 crc kubenswrapper[4946]: I1128 08:46:32.933558 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-dj9lp"] Nov 28 08:46:32 crc kubenswrapper[4946]: W1128 08:46:32.946991 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod859d5115_6c6c_4452_b0f7_9a031bc502ee.slice/crio-16cddd235478331546cc108453b14f6757301fe666abb2ae76bed659b28523f8 WatchSource:0}: Error finding container 16cddd235478331546cc108453b14f6757301fe666abb2ae76bed659b28523f8: Status 404 returned error can't find the container with id 16cddd235478331546cc108453b14f6757301fe666abb2ae76bed659b28523f8 Nov 28 08:46:33 crc kubenswrapper[4946]: I1128 08:46:33.896207 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dj9lp" event={"ID":"859d5115-6c6c-4452-b0f7-9a031bc502ee","Type":"ContainerStarted","Data":"d0c8381e7025e9aad7e5c6c45bd710ae6e10dd7f4e242c0f196cb9ca4af925b7"} Nov 28 08:46:33 crc kubenswrapper[4946]: I1128 08:46:33.896566 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dj9lp" event={"ID":"859d5115-6c6c-4452-b0f7-9a031bc502ee","Type":"ContainerStarted","Data":"16cddd235478331546cc108453b14f6757301fe666abb2ae76bed659b28523f8"} Nov 28 08:46:33 crc kubenswrapper[4946]: I1128 08:46:33.934404 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-dj9lp" podStartSLOduration=1.934380355 podStartE2EDuration="1.934380355s" podCreationTimestamp="2025-11-28 08:46:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 08:46:33.930056578 +0000 UTC m=+6848.308121719" watchObservedRunningTime="2025-11-28 08:46:33.934380355 +0000 UTC m=+6848.312445476" Nov 28 08:46:33 crc kubenswrapper[4946]: I1128 08:46:33.989831 4946 scope.go:117] "RemoveContainer" containerID="350d49558ff5edf402a167963260dd78bc11ac007c0bdaee3d45f6814cfb23bd" Nov 28 08:46:33 crc kubenswrapper[4946]: E1128 08:46:33.990064 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:46:35 crc kubenswrapper[4946]: I1128 08:46:35.374217 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-65b98775c-hq9l6" Nov 28 08:46:35 crc kubenswrapper[4946]: I1128 08:46:35.462702 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77bb748d8c-bj4w4"] Nov 28 08:46:35 crc kubenswrapper[4946]: I1128 08:46:35.463236 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-77bb748d8c-bj4w4" podUID="72d2a0bf-5f6b-4074-b171-3b2b6500cf54" containerName="dnsmasq-dns" containerID="cri-o://5d520c379b358058b305fbe69b4a7400d5a9022724ffd538389312fc2f9cc58f" gracePeriod=10 Nov 28 08:46:35 crc kubenswrapper[4946]: I1128 08:46:35.904576 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77bb748d8c-bj4w4" Nov 28 08:46:35 crc kubenswrapper[4946]: I1128 08:46:35.966661 4946 generic.go:334] "Generic (PLEG): container finished" podID="72d2a0bf-5f6b-4074-b171-3b2b6500cf54" containerID="5d520c379b358058b305fbe69b4a7400d5a9022724ffd538389312fc2f9cc58f" exitCode=0 Nov 28 08:46:35 crc kubenswrapper[4946]: I1128 08:46:35.966706 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77bb748d8c-bj4w4" event={"ID":"72d2a0bf-5f6b-4074-b171-3b2b6500cf54","Type":"ContainerDied","Data":"5d520c379b358058b305fbe69b4a7400d5a9022724ffd538389312fc2f9cc58f"} Nov 28 08:46:35 crc kubenswrapper[4946]: I1128 08:46:35.966732 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77bb748d8c-bj4w4" event={"ID":"72d2a0bf-5f6b-4074-b171-3b2b6500cf54","Type":"ContainerDied","Data":"fab6530122a07ac7b30d635b0214699eb4104974c52cfc004b873767f5bf999f"} Nov 28 08:46:35 crc kubenswrapper[4946]: I1128 08:46:35.966749 4946 scope.go:117] "RemoveContainer" containerID="5d520c379b358058b305fbe69b4a7400d5a9022724ffd538389312fc2f9cc58f" Nov 28 08:46:35 crc kubenswrapper[4946]: I1128 08:46:35.967035 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77bb748d8c-bj4w4" Nov 28 08:46:35 crc kubenswrapper[4946]: I1128 08:46:35.997752 4946 scope.go:117] "RemoveContainer" containerID="c7218b3750dbf59b3713bc47d1d4e0dc755a055dc807afcea2468472046a2e32" Nov 28 08:46:36 crc kubenswrapper[4946]: I1128 08:46:36.019983 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xm6h\" (UniqueName: \"kubernetes.io/projected/72d2a0bf-5f6b-4074-b171-3b2b6500cf54-kube-api-access-4xm6h\") pod \"72d2a0bf-5f6b-4074-b171-3b2b6500cf54\" (UID: \"72d2a0bf-5f6b-4074-b171-3b2b6500cf54\") " Nov 28 08:46:36 crc kubenswrapper[4946]: I1128 08:46:36.020105 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/72d2a0bf-5f6b-4074-b171-3b2b6500cf54-ovsdbserver-sb\") pod \"72d2a0bf-5f6b-4074-b171-3b2b6500cf54\" (UID: \"72d2a0bf-5f6b-4074-b171-3b2b6500cf54\") " Nov 28 08:46:36 crc kubenswrapper[4946]: I1128 08:46:36.020181 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/72d2a0bf-5f6b-4074-b171-3b2b6500cf54-dns-svc\") pod \"72d2a0bf-5f6b-4074-b171-3b2b6500cf54\" (UID: \"72d2a0bf-5f6b-4074-b171-3b2b6500cf54\") " Nov 28 08:46:36 crc kubenswrapper[4946]: I1128 08:46:36.020246 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/72d2a0bf-5f6b-4074-b171-3b2b6500cf54-ovsdbserver-nb\") pod \"72d2a0bf-5f6b-4074-b171-3b2b6500cf54\" (UID: \"72d2a0bf-5f6b-4074-b171-3b2b6500cf54\") " Nov 28 08:46:36 crc kubenswrapper[4946]: I1128 08:46:36.020276 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72d2a0bf-5f6b-4074-b171-3b2b6500cf54-config\") pod \"72d2a0bf-5f6b-4074-b171-3b2b6500cf54\" (UID: \"72d2a0bf-5f6b-4074-b171-3b2b6500cf54\") " Nov 28 08:46:36 crc kubenswrapper[4946]: I1128 08:46:36.032655 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72d2a0bf-5f6b-4074-b171-3b2b6500cf54-kube-api-access-4xm6h" (OuterVolumeSpecName: "kube-api-access-4xm6h") pod 
"72d2a0bf-5f6b-4074-b171-3b2b6500cf54" (UID: "72d2a0bf-5f6b-4074-b171-3b2b6500cf54"). InnerVolumeSpecName "kube-api-access-4xm6h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:46:36 crc kubenswrapper[4946]: I1128 08:46:36.052622 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72d2a0bf-5f6b-4074-b171-3b2b6500cf54-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "72d2a0bf-5f6b-4074-b171-3b2b6500cf54" (UID: "72d2a0bf-5f6b-4074-b171-3b2b6500cf54"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:46:36 crc kubenswrapper[4946]: I1128 08:46:36.054288 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72d2a0bf-5f6b-4074-b171-3b2b6500cf54-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "72d2a0bf-5f6b-4074-b171-3b2b6500cf54" (UID: "72d2a0bf-5f6b-4074-b171-3b2b6500cf54"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:46:36 crc kubenswrapper[4946]: I1128 08:46:36.058191 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72d2a0bf-5f6b-4074-b171-3b2b6500cf54-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "72d2a0bf-5f6b-4074-b171-3b2b6500cf54" (UID: "72d2a0bf-5f6b-4074-b171-3b2b6500cf54"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:46:36 crc kubenswrapper[4946]: I1128 08:46:36.064073 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72d2a0bf-5f6b-4074-b171-3b2b6500cf54-config" (OuterVolumeSpecName: "config") pod "72d2a0bf-5f6b-4074-b171-3b2b6500cf54" (UID: "72d2a0bf-5f6b-4074-b171-3b2b6500cf54"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:46:36 crc kubenswrapper[4946]: I1128 08:46:36.122981 4946 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/72d2a0bf-5f6b-4074-b171-3b2b6500cf54-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 08:46:36 crc kubenswrapper[4946]: I1128 08:46:36.123241 4946 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/72d2a0bf-5f6b-4074-b171-3b2b6500cf54-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 08:46:36 crc kubenswrapper[4946]: I1128 08:46:36.123319 4946 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/72d2a0bf-5f6b-4074-b171-3b2b6500cf54-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 08:46:36 crc kubenswrapper[4946]: I1128 08:46:36.123395 4946 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72d2a0bf-5f6b-4074-b171-3b2b6500cf54-config\") on node \"crc\" DevicePath \"\"" Nov 28 08:46:36 crc kubenswrapper[4946]: I1128 08:46:36.123490 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xm6h\" (UniqueName: \"kubernetes.io/projected/72d2a0bf-5f6b-4074-b171-3b2b6500cf54-kube-api-access-4xm6h\") on node \"crc\" DevicePath \"\"" Nov 28 08:46:36 crc kubenswrapper[4946]: I1128 08:46:36.143306 4946 scope.go:117] "RemoveContainer" containerID="5d520c379b358058b305fbe69b4a7400d5a9022724ffd538389312fc2f9cc58f" Nov 28 08:46:36 crc kubenswrapper[4946]: E1128 08:46:36.143928 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d520c379b358058b305fbe69b4a7400d5a9022724ffd538389312fc2f9cc58f\": container with ID starting with 5d520c379b358058b305fbe69b4a7400d5a9022724ffd538389312fc2f9cc58f not found: ID does not exist" containerID="5d520c379b358058b305fbe69b4a7400d5a9022724ffd538389312fc2f9cc58f" Nov 28 08:46:36 crc kubenswrapper[4946]: I1128 08:46:36.143957 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d520c379b358058b305fbe69b4a7400d5a9022724ffd538389312fc2f9cc58f"} err="failed to get container status \"5d520c379b358058b305fbe69b4a7400d5a9022724ffd538389312fc2f9cc58f\": rpc error: code = NotFound desc = could not find container \"5d520c379b358058b305fbe69b4a7400d5a9022724ffd538389312fc2f9cc58f\": container with ID starting with 5d520c379b358058b305fbe69b4a7400d5a9022724ffd538389312fc2f9cc58f not found: ID does not exist" Nov 28 08:46:36 crc kubenswrapper[4946]: I1128 08:46:36.143976 4946 scope.go:117] "RemoveContainer" containerID="c7218b3750dbf59b3713bc47d1d4e0dc755a055dc807afcea2468472046a2e32" Nov 28 08:46:36 crc kubenswrapper[4946]: E1128 08:46:36.144484 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7218b3750dbf59b3713bc47d1d4e0dc755a055dc807afcea2468472046a2e32\": container with ID starting with c7218b3750dbf59b3713bc47d1d4e0dc755a055dc807afcea2468472046a2e32 not found: ID does not exist" containerID="c7218b3750dbf59b3713bc47d1d4e0dc755a055dc807afcea2468472046a2e32" Nov 28 08:46:36 crc kubenswrapper[4946]: I1128 08:46:36.144508 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7218b3750dbf59b3713bc47d1d4e0dc755a055dc807afcea2468472046a2e32"} err="failed to get container status 
\"c7218b3750dbf59b3713bc47d1d4e0dc755a055dc807afcea2468472046a2e32\": rpc error: code = NotFound desc = could not find container \"c7218b3750dbf59b3713bc47d1d4e0dc755a055dc807afcea2468472046a2e32\": container with ID starting with c7218b3750dbf59b3713bc47d1d4e0dc755a055dc807afcea2468472046a2e32 not found: ID does not exist" Nov 28 08:46:36 crc kubenswrapper[4946]: I1128 08:46:36.308570 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77bb748d8c-bj4w4"] Nov 28 08:46:36 crc kubenswrapper[4946]: I1128 08:46:36.315193 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77bb748d8c-bj4w4"] Nov 28 08:46:36 crc kubenswrapper[4946]: I1128 08:46:36.974842 4946 generic.go:334] "Generic (PLEG): container finished" podID="859d5115-6c6c-4452-b0f7-9a031bc502ee" containerID="d0c8381e7025e9aad7e5c6c45bd710ae6e10dd7f4e242c0f196cb9ca4af925b7" exitCode=0 Nov 28 08:46:36 crc kubenswrapper[4946]: I1128 08:46:36.974907 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dj9lp" event={"ID":"859d5115-6c6c-4452-b0f7-9a031bc502ee","Type":"ContainerDied","Data":"d0c8381e7025e9aad7e5c6c45bd710ae6e10dd7f4e242c0f196cb9ca4af925b7"} Nov 28 08:46:38 crc kubenswrapper[4946]: I1128 08:46:38.010310 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72d2a0bf-5f6b-4074-b171-3b2b6500cf54" path="/var/lib/kubelet/pods/72d2a0bf-5f6b-4074-b171-3b2b6500cf54/volumes" Nov 28 08:46:38 crc kubenswrapper[4946]: I1128 08:46:38.411805 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dj9lp" Nov 28 08:46:38 crc kubenswrapper[4946]: I1128 08:46:38.564964 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqfhc\" (UniqueName: \"kubernetes.io/projected/859d5115-6c6c-4452-b0f7-9a031bc502ee-kube-api-access-fqfhc\") pod \"859d5115-6c6c-4452-b0f7-9a031bc502ee\" (UID: \"859d5115-6c6c-4452-b0f7-9a031bc502ee\") " Nov 28 08:46:38 crc kubenswrapper[4946]: I1128 08:46:38.565064 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/859d5115-6c6c-4452-b0f7-9a031bc502ee-combined-ca-bundle\") pod \"859d5115-6c6c-4452-b0f7-9a031bc502ee\" (UID: \"859d5115-6c6c-4452-b0f7-9a031bc502ee\") " Nov 28 08:46:38 crc kubenswrapper[4946]: I1128 08:46:38.565156 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/859d5115-6c6c-4452-b0f7-9a031bc502ee-credential-keys\") pod \"859d5115-6c6c-4452-b0f7-9a031bc502ee\" (UID: \"859d5115-6c6c-4452-b0f7-9a031bc502ee\") " Nov 28 08:46:38 crc kubenswrapper[4946]: I1128 08:46:38.565289 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/859d5115-6c6c-4452-b0f7-9a031bc502ee-config-data\") pod \"859d5115-6c6c-4452-b0f7-9a031bc502ee\" (UID: \"859d5115-6c6c-4452-b0f7-9a031bc502ee\") " Nov 28 08:46:38 crc kubenswrapper[4946]: I1128 08:46:38.565323 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/859d5115-6c6c-4452-b0f7-9a031bc502ee-scripts\") pod \"859d5115-6c6c-4452-b0f7-9a031bc502ee\" (UID: \"859d5115-6c6c-4452-b0f7-9a031bc502ee\") " Nov 28 08:46:38 crc kubenswrapper[4946]: I1128 08:46:38.565377 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/859d5115-6c6c-4452-b0f7-9a031bc502ee-fernet-keys\") pod \"859d5115-6c6c-4452-b0f7-9a031bc502ee\" (UID: \"859d5115-6c6c-4452-b0f7-9a031bc502ee\") " Nov 28 08:46:38 crc kubenswrapper[4946]: I1128 08:46:38.577689 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/859d5115-6c6c-4452-b0f7-9a031bc502ee-scripts" (OuterVolumeSpecName: "scripts") pod "859d5115-6c6c-4452-b0f7-9a031bc502ee" (UID: "859d5115-6c6c-4452-b0f7-9a031bc502ee"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:46:38 crc kubenswrapper[4946]: I1128 08:46:38.581813 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/859d5115-6c6c-4452-b0f7-9a031bc502ee-kube-api-access-fqfhc" (OuterVolumeSpecName: "kube-api-access-fqfhc") pod "859d5115-6c6c-4452-b0f7-9a031bc502ee" (UID: "859d5115-6c6c-4452-b0f7-9a031bc502ee"). InnerVolumeSpecName "kube-api-access-fqfhc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:46:38 crc kubenswrapper[4946]: I1128 08:46:38.582146 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/859d5115-6c6c-4452-b0f7-9a031bc502ee-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "859d5115-6c6c-4452-b0f7-9a031bc502ee" (UID: "859d5115-6c6c-4452-b0f7-9a031bc502ee"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:46:38 crc kubenswrapper[4946]: I1128 08:46:38.582954 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/859d5115-6c6c-4452-b0f7-9a031bc502ee-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "859d5115-6c6c-4452-b0f7-9a031bc502ee" (UID: "859d5115-6c6c-4452-b0f7-9a031bc502ee"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:46:38 crc kubenswrapper[4946]: I1128 08:46:38.593870 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/859d5115-6c6c-4452-b0f7-9a031bc502ee-config-data" (OuterVolumeSpecName: "config-data") pod "859d5115-6c6c-4452-b0f7-9a031bc502ee" (UID: "859d5115-6c6c-4452-b0f7-9a031bc502ee"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:46:38 crc kubenswrapper[4946]: I1128 08:46:38.615391 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/859d5115-6c6c-4452-b0f7-9a031bc502ee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "859d5115-6c6c-4452-b0f7-9a031bc502ee" (UID: "859d5115-6c6c-4452-b0f7-9a031bc502ee"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:46:38 crc kubenswrapper[4946]: I1128 08:46:38.667448 4946 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/859d5115-6c6c-4452-b0f7-9a031bc502ee-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 28 08:46:38 crc kubenswrapper[4946]: I1128 08:46:38.667560 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/859d5115-6c6c-4452-b0f7-9a031bc502ee-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 08:46:38 crc kubenswrapper[4946]: I1128 08:46:38.667576 4946 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/859d5115-6c6c-4452-b0f7-9a031bc502ee-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 08:46:38 crc kubenswrapper[4946]: I1128 08:46:38.667589 4946 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/859d5115-6c6c-4452-b0f7-9a031bc502ee-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 28 08:46:38 crc kubenswrapper[4946]: I1128 08:46:38.667603 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqfhc\" (UniqueName: \"kubernetes.io/projected/859d5115-6c6c-4452-b0f7-9a031bc502ee-kube-api-access-fqfhc\") on node \"crc\" DevicePath \"\"" Nov 28 08:46:38 crc kubenswrapper[4946]: I1128 08:46:38.667615 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/859d5115-6c6c-4452-b0f7-9a031bc502ee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 08:46:38 crc kubenswrapper[4946]: I1128 08:46:38.998846 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dj9lp" event={"ID":"859d5115-6c6c-4452-b0f7-9a031bc502ee","Type":"ContainerDied","Data":"16cddd235478331546cc108453b14f6757301fe666abb2ae76bed659b28523f8"} Nov 28 08:46:38 crc kubenswrapper[4946]: I1128 08:46:38.998882 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16cddd235478331546cc108453b14f6757301fe666abb2ae76bed659b28523f8" Nov 28 08:46:38 crc kubenswrapper[4946]: I1128 08:46:38.998934 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-dj9lp" Nov 28 08:46:39 crc kubenswrapper[4946]: I1128 08:46:39.132277 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-c6c9f9658-b6wk7"] Nov 28 08:46:39 crc kubenswrapper[4946]: E1128 08:46:39.132771 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="859d5115-6c6c-4452-b0f7-9a031bc502ee" containerName="keystone-bootstrap" Nov 28 08:46:39 crc kubenswrapper[4946]: I1128 08:46:39.132789 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="859d5115-6c6c-4452-b0f7-9a031bc502ee" containerName="keystone-bootstrap" Nov 28 08:46:39 crc kubenswrapper[4946]: E1128 08:46:39.132812 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72d2a0bf-5f6b-4074-b171-3b2b6500cf54" containerName="init" Nov 28 08:46:39 crc kubenswrapper[4946]: I1128 08:46:39.132821 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="72d2a0bf-5f6b-4074-b171-3b2b6500cf54" containerName="init" Nov 28 08:46:39 crc kubenswrapper[4946]: E1128 08:46:39.132846 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72d2a0bf-5f6b-4074-b171-3b2b6500cf54" containerName="dnsmasq-dns" Nov 28 08:46:39 crc kubenswrapper[4946]: I1128 08:46:39.132857 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="72d2a0bf-5f6b-4074-b171-3b2b6500cf54" containerName="dnsmasq-dns" Nov 28 08:46:39 crc kubenswrapper[4946]: I1128 08:46:39.133057 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="859d5115-6c6c-4452-b0f7-9a031bc502ee" containerName="keystone-bootstrap" Nov 28 08:46:39 crc kubenswrapper[4946]: I1128 08:46:39.133074 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="72d2a0bf-5f6b-4074-b171-3b2b6500cf54" containerName="dnsmasq-dns" Nov 28 08:46:39 crc kubenswrapper[4946]: I1128 08:46:39.133807 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-c6c9f9658-b6wk7" Nov 28 08:46:39 crc kubenswrapper[4946]: I1128 08:46:39.136952 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 28 08:46:39 crc kubenswrapper[4946]: I1128 08:46:39.137416 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 28 08:46:39 crc kubenswrapper[4946]: I1128 08:46:39.139815 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 28 08:46:39 crc kubenswrapper[4946]: I1128 08:46:39.146278 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7df6x" Nov 28 08:46:39 crc kubenswrapper[4946]: I1128 08:46:39.155768 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-c6c9f9658-b6wk7"] Nov 28 08:46:39 crc kubenswrapper[4946]: I1128 08:46:39.284508 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a0ae4a4-be53-42c7-a400-7d49ea62d95d-combined-ca-bundle\") pod \"keystone-c6c9f9658-b6wk7\" (UID: \"5a0ae4a4-be53-42c7-a400-7d49ea62d95d\") " pod="openstack/keystone-c6c9f9658-b6wk7" Nov 28 08:46:39 crc kubenswrapper[4946]: I1128 08:46:39.285098 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dcxl\" (UniqueName: \"kubernetes.io/projected/5a0ae4a4-be53-42c7-a400-7d49ea62d95d-kube-api-access-4dcxl\") pod \"keystone-c6c9f9658-b6wk7\" (UID: \"5a0ae4a4-be53-42c7-a400-7d49ea62d95d\") " pod="openstack/keystone-c6c9f9658-b6wk7" Nov 28 08:46:39 crc kubenswrapper[4946]: I1128 08:46:39.285176 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a0ae4a4-be53-42c7-a400-7d49ea62d95d-config-data\") pod \"keystone-c6c9f9658-b6wk7\" (UID: \"5a0ae4a4-be53-42c7-a400-7d49ea62d95d\") " pod="openstack/keystone-c6c9f9658-b6wk7" Nov 28 08:46:39 crc kubenswrapper[4946]: I1128 08:46:39.285216 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a0ae4a4-be53-42c7-a400-7d49ea62d95d-scripts\") pod \"keystone-c6c9f9658-b6wk7\" (UID: \"5a0ae4a4-be53-42c7-a400-7d49ea62d95d\") " pod="openstack/keystone-c6c9f9658-b6wk7" Nov 28 08:46:39 crc kubenswrapper[4946]: I1128 08:46:39.285374 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5a0ae4a4-be53-42c7-a400-7d49ea62d95d-fernet-keys\") pod \"keystone-c6c9f9658-b6wk7\" (UID: \"5a0ae4a4-be53-42c7-a400-7d49ea62d95d\") " pod="openstack/keystone-c6c9f9658-b6wk7" Nov 28 08:46:39 crc kubenswrapper[4946]: I1128 08:46:39.285594 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5a0ae4a4-be53-42c7-a400-7d49ea62d95d-credential-keys\") pod \"keystone-c6c9f9658-b6wk7\" (UID: \"5a0ae4a4-be53-42c7-a400-7d49ea62d95d\") " pod="openstack/keystone-c6c9f9658-b6wk7" Nov 28 08:46:39 crc kubenswrapper[4946]: I1128 08:46:39.388212 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a0ae4a4-be53-42c7-a400-7d49ea62d95d-config-data\") pod \"keystone-c6c9f9658-b6wk7\" (UID: 
\"5a0ae4a4-be53-42c7-a400-7d49ea62d95d\") " pod="openstack/keystone-c6c9f9658-b6wk7" Nov 28 08:46:39 crc kubenswrapper[4946]: I1128 08:46:39.388732 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a0ae4a4-be53-42c7-a400-7d49ea62d95d-scripts\") pod \"keystone-c6c9f9658-b6wk7\" (UID: \"5a0ae4a4-be53-42c7-a400-7d49ea62d95d\") " pod="openstack/keystone-c6c9f9658-b6wk7" Nov 28 08:46:39 crc kubenswrapper[4946]: I1128 08:46:39.388966 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5a0ae4a4-be53-42c7-a400-7d49ea62d95d-fernet-keys\") pod \"keystone-c6c9f9658-b6wk7\" (UID: \"5a0ae4a4-be53-42c7-a400-7d49ea62d95d\") " pod="openstack/keystone-c6c9f9658-b6wk7" Nov 28 08:46:39 crc kubenswrapper[4946]: I1128 08:46:39.389154 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5a0ae4a4-be53-42c7-a400-7d49ea62d95d-credential-keys\") pod \"keystone-c6c9f9658-b6wk7\" (UID: \"5a0ae4a4-be53-42c7-a400-7d49ea62d95d\") " pod="openstack/keystone-c6c9f9658-b6wk7" Nov 28 08:46:39 crc kubenswrapper[4946]: I1128 08:46:39.389543 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a0ae4a4-be53-42c7-a400-7d49ea62d95d-combined-ca-bundle\") pod \"keystone-c6c9f9658-b6wk7\" (UID: \"5a0ae4a4-be53-42c7-a400-7d49ea62d95d\") " pod="openstack/keystone-c6c9f9658-b6wk7" Nov 28 08:46:39 crc kubenswrapper[4946]: I1128 08:46:39.389774 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dcxl\" (UniqueName: \"kubernetes.io/projected/5a0ae4a4-be53-42c7-a400-7d49ea62d95d-kube-api-access-4dcxl\") pod \"keystone-c6c9f9658-b6wk7\" (UID: \"5a0ae4a4-be53-42c7-a400-7d49ea62d95d\") " pod="openstack/keystone-c6c9f9658-b6wk7" Nov 28 08:46:39 crc kubenswrapper[4946]: I1128 08:46:39.392939 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a0ae4a4-be53-42c7-a400-7d49ea62d95d-scripts\") pod \"keystone-c6c9f9658-b6wk7\" (UID: \"5a0ae4a4-be53-42c7-a400-7d49ea62d95d\") " pod="openstack/keystone-c6c9f9658-b6wk7" Nov 28 08:46:39 crc kubenswrapper[4946]: I1128 08:46:39.394138 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a0ae4a4-be53-42c7-a400-7d49ea62d95d-combined-ca-bundle\") pod \"keystone-c6c9f9658-b6wk7\" (UID: \"5a0ae4a4-be53-42c7-a400-7d49ea62d95d\") " pod="openstack/keystone-c6c9f9658-b6wk7" Nov 28 08:46:39 crc kubenswrapper[4946]: I1128 08:46:39.394627 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a0ae4a4-be53-42c7-a400-7d49ea62d95d-config-data\") pod \"keystone-c6c9f9658-b6wk7\" (UID: \"5a0ae4a4-be53-42c7-a400-7d49ea62d95d\") " pod="openstack/keystone-c6c9f9658-b6wk7" Nov 28 08:46:39 crc kubenswrapper[4946]: I1128 08:46:39.398330 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5a0ae4a4-be53-42c7-a400-7d49ea62d95d-fernet-keys\") pod \"keystone-c6c9f9658-b6wk7\" (UID: \"5a0ae4a4-be53-42c7-a400-7d49ea62d95d\") " pod="openstack/keystone-c6c9f9658-b6wk7" Nov 28 08:46:39 crc kubenswrapper[4946]: I1128 08:46:39.404013 4946 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5a0ae4a4-be53-42c7-a400-7d49ea62d95d-credential-keys\") pod \"keystone-c6c9f9658-b6wk7\" (UID: \"5a0ae4a4-be53-42c7-a400-7d49ea62d95d\") " pod="openstack/keystone-c6c9f9658-b6wk7" Nov 28 08:46:39 crc kubenswrapper[4946]: I1128 08:46:39.407957 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dcxl\" (UniqueName: \"kubernetes.io/projected/5a0ae4a4-be53-42c7-a400-7d49ea62d95d-kube-api-access-4dcxl\") pod \"keystone-c6c9f9658-b6wk7\" (UID: \"5a0ae4a4-be53-42c7-a400-7d49ea62d95d\") " pod="openstack/keystone-c6c9f9658-b6wk7" Nov 28 08:46:39 crc kubenswrapper[4946]: I1128 08:46:39.465727 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-c6c9f9658-b6wk7" Nov 28 08:46:40 crc kubenswrapper[4946]: I1128 08:46:40.032185 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-c6c9f9658-b6wk7"] Nov 28 08:46:41 crc kubenswrapper[4946]: I1128 08:46:41.025686 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c6c9f9658-b6wk7" event={"ID":"5a0ae4a4-be53-42c7-a400-7d49ea62d95d","Type":"ContainerStarted","Data":"4c5ef727bc90ddcb10c598bd1f497763228e381b1c1f54df06d985df19127c46"} Nov 28 08:46:41 crc kubenswrapper[4946]: I1128 08:46:41.026233 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-c6c9f9658-b6wk7" Nov 28 08:46:41 crc kubenswrapper[4946]: I1128 08:46:41.026259 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c6c9f9658-b6wk7" event={"ID":"5a0ae4a4-be53-42c7-a400-7d49ea62d95d","Type":"ContainerStarted","Data":"37b47231eb101fec53ba68139dfdefd91813c256ac798fa596ac55ec89bfb0f4"} Nov 28 08:46:41 crc kubenswrapper[4946]: I1128 08:46:41.053531 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-c6c9f9658-b6wk7" podStartSLOduration=2.053457747 podStartE2EDuration="2.053457747s" podCreationTimestamp="2025-11-28 08:46:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 08:46:41.052820702 +0000 UTC m=+6855.430885853" watchObservedRunningTime="2025-11-28 08:46:41.053457747 +0000 UTC m=+6855.431522898" Nov 28 08:46:44 crc kubenswrapper[4946]: I1128 08:46:44.990292 4946 scope.go:117] "RemoveContainer" containerID="350d49558ff5edf402a167963260dd78bc11ac007c0bdaee3d45f6814cfb23bd" Nov 28 08:46:44 crc kubenswrapper[4946]: E1128 08:46:44.991338 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:46:56 crc kubenswrapper[4946]: I1128 08:46:56.077156 4946 scope.go:117] "RemoveContainer" containerID="350d49558ff5edf402a167963260dd78bc11ac007c0bdaee3d45f6814cfb23bd" Nov 28 08:46:56 crc kubenswrapper[4946]: E1128 08:46:56.077842 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:47:10 crc kubenswrapper[4946]: I1128 08:47:10.932348 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-c6c9f9658-b6wk7" Nov 28 08:47:10 crc kubenswrapper[4946]: I1128 08:47:10.989394 4946 scope.go:117] "RemoveContainer" containerID="350d49558ff5edf402a167963260dd78bc11ac007c0bdaee3d45f6814cfb23bd" Nov 28 08:47:10 crc kubenswrapper[4946]: E1128 08:47:10.989663 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:47:14 crc kubenswrapper[4946]: I1128 08:47:14.676455 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 28 08:47:14 crc kubenswrapper[4946]: I1128 08:47:14.678891 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 28 08:47:14 crc kubenswrapper[4946]: I1128 08:47:14.681627 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 28 08:47:14 crc kubenswrapper[4946]: I1128 08:47:14.681795 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-s82vx" Nov 28 08:47:14 crc kubenswrapper[4946]: I1128 08:47:14.682900 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 28 08:47:14 crc kubenswrapper[4946]: I1128 08:47:14.694246 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 28 08:47:14 crc kubenswrapper[4946]: I1128 08:47:14.837881 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fca9a6bf-87fc-4862-bb8a-7aa72eeabc64-openstack-config\") pod \"openstackclient\" (UID: \"fca9a6bf-87fc-4862-bb8a-7aa72eeabc64\") " pod="openstack/openstackclient" Nov 28 08:47:14 crc kubenswrapper[4946]: I1128 08:47:14.838140 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m6nr\" (UniqueName: \"kubernetes.io/projected/fca9a6bf-87fc-4862-bb8a-7aa72eeabc64-kube-api-access-4m6nr\") pod \"openstackclient\" (UID: \"fca9a6bf-87fc-4862-bb8a-7aa72eeabc64\") " pod="openstack/openstackclient" Nov 28 08:47:14 crc kubenswrapper[4946]: I1128 08:47:14.838240 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fca9a6bf-87fc-4862-bb8a-7aa72eeabc64-openstack-config-secret\") pod \"openstackclient\" (UID: \"fca9a6bf-87fc-4862-bb8a-7aa72eeabc64\") " pod="openstack/openstackclient" Nov 28 08:47:14 crc kubenswrapper[4946]: I1128 08:47:14.940047 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fca9a6bf-87fc-4862-bb8a-7aa72eeabc64-openstack-config\") pod \"openstackclient\" (UID: 
\"fca9a6bf-87fc-4862-bb8a-7aa72eeabc64\") " pod="openstack/openstackclient" Nov 28 08:47:14 crc kubenswrapper[4946]: I1128 08:47:14.940155 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4m6nr\" (UniqueName: \"kubernetes.io/projected/fca9a6bf-87fc-4862-bb8a-7aa72eeabc64-kube-api-access-4m6nr\") pod \"openstackclient\" (UID: \"fca9a6bf-87fc-4862-bb8a-7aa72eeabc64\") " pod="openstack/openstackclient" Nov 28 08:47:14 crc kubenswrapper[4946]: I1128 08:47:14.940188 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fca9a6bf-87fc-4862-bb8a-7aa72eeabc64-openstack-config-secret\") pod \"openstackclient\" (UID: \"fca9a6bf-87fc-4862-bb8a-7aa72eeabc64\") " pod="openstack/openstackclient" Nov 28 08:47:14 crc kubenswrapper[4946]: I1128 08:47:14.941160 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fca9a6bf-87fc-4862-bb8a-7aa72eeabc64-openstack-config\") pod \"openstackclient\" (UID: \"fca9a6bf-87fc-4862-bb8a-7aa72eeabc64\") " pod="openstack/openstackclient" Nov 28 08:47:14 crc kubenswrapper[4946]: I1128 08:47:14.946866 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fca9a6bf-87fc-4862-bb8a-7aa72eeabc64-openstack-config-secret\") pod \"openstackclient\" (UID: \"fca9a6bf-87fc-4862-bb8a-7aa72eeabc64\") " pod="openstack/openstackclient" Nov 28 08:47:14 crc kubenswrapper[4946]: I1128 08:47:14.961033 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4m6nr\" (UniqueName: \"kubernetes.io/projected/fca9a6bf-87fc-4862-bb8a-7aa72eeabc64-kube-api-access-4m6nr\") pod \"openstackclient\" (UID: \"fca9a6bf-87fc-4862-bb8a-7aa72eeabc64\") " pod="openstack/openstackclient" Nov 28 08:47:15 crc kubenswrapper[4946]: I1128 08:47:15.026625 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 28 08:47:15 crc kubenswrapper[4946]: I1128 08:47:15.298814 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 28 08:47:15 crc kubenswrapper[4946]: I1128 08:47:15.406579 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"fca9a6bf-87fc-4862-bb8a-7aa72eeabc64","Type":"ContainerStarted","Data":"bf8297856e4704c568b69800d1fc921fd6ff5b4f78e6083032ff01212912bfb5"} Nov 28 08:47:22 crc kubenswrapper[4946]: I1128 08:47:22.990299 4946 scope.go:117] "RemoveContainer" containerID="350d49558ff5edf402a167963260dd78bc11ac007c0bdaee3d45f6814cfb23bd" Nov 28 08:47:22 crc kubenswrapper[4946]: E1128 08:47:22.991130 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:47:26 crc kubenswrapper[4946]: I1128 08:47:26.495765 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"fca9a6bf-87fc-4862-bb8a-7aa72eeabc64","Type":"ContainerStarted","Data":"bc7e8bd3178a163ac1d345a60a8e45120428b9eef9eb6bfebe77d056216515b0"} Nov 28 08:47:26 crc kubenswrapper[4946]: I1128 08:47:26.519580 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.800360209 podStartE2EDuration="12.519550456s" podCreationTimestamp="2025-11-28 08:47:14 +0000 UTC" firstStartedPulling="2025-11-28 08:47:15.308147012 +0000 UTC m=+6889.686212123" lastFinishedPulling="2025-11-28 08:47:26.027337259 +0000 UTC m=+6900.405402370" observedRunningTime="2025-11-28 08:47:26.5104051 +0000 UTC m=+6900.888470231" watchObservedRunningTime="2025-11-28 08:47:26.519550456 +0000 UTC m=+6900.897615567" Nov 28 08:47:34 crc kubenswrapper[4946]: I1128 08:47:34.990770 4946 scope.go:117] "RemoveContainer" containerID="350d49558ff5edf402a167963260dd78bc11ac007c0bdaee3d45f6814cfb23bd" Nov 28 08:47:34 crc kubenswrapper[4946]: E1128 08:47:34.992099 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:47:49 crc kubenswrapper[4946]: I1128 08:47:49.991078 4946 scope.go:117] "RemoveContainer" containerID="350d49558ff5edf402a167963260dd78bc11ac007c0bdaee3d45f6814cfb23bd" Nov 28 08:47:49 crc kubenswrapper[4946]: E1128 08:47:49.992439 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:48:02 crc kubenswrapper[4946]: I1128 08:48:02.990210 4946 scope.go:117] "RemoveContainer" 
containerID="350d49558ff5edf402a167963260dd78bc11ac007c0bdaee3d45f6814cfb23bd" Nov 28 08:48:02 crc kubenswrapper[4946]: E1128 08:48:02.991132 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:48:13 crc kubenswrapper[4946]: I1128 08:48:13.990089 4946 scope.go:117] "RemoveContainer" containerID="350d49558ff5edf402a167963260dd78bc11ac007c0bdaee3d45f6814cfb23bd" Nov 28 08:48:13 crc kubenswrapper[4946]: E1128 08:48:13.990853 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:48:28 crc kubenswrapper[4946]: I1128 08:48:28.990787 4946 scope.go:117] "RemoveContainer" containerID="350d49558ff5edf402a167963260dd78bc11ac007c0bdaee3d45f6814cfb23bd" Nov 28 08:48:28 crc kubenswrapper[4946]: E1128 08:48:28.993572 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:48:39 crc kubenswrapper[4946]: I1128 08:48:39.990100 4946 scope.go:117] "RemoveContainer" containerID="350d49558ff5edf402a167963260dd78bc11ac007c0bdaee3d45f6814cfb23bd" Nov 28 08:48:39 crc kubenswrapper[4946]: E1128 08:48:39.991172 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:48:51 crc kubenswrapper[4946]: I1128 08:48:51.539785 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-kswft"] Nov 28 08:48:51 crc kubenswrapper[4946]: I1128 08:48:51.541737 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-kswft" Nov 28 08:48:51 crc kubenswrapper[4946]: I1128 08:48:51.556159 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-kswft"] Nov 28 08:48:51 crc kubenswrapper[4946]: I1128 08:48:51.623855 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-6601-account-create-update-6gc7l"] Nov 28 08:48:51 crc kubenswrapper[4946]: I1128 08:48:51.625198 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-6601-account-create-update-6gc7l" Nov 28 08:48:51 crc kubenswrapper[4946]: I1128 08:48:51.626835 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Nov 28 08:48:51 crc kubenswrapper[4946]: I1128 08:48:51.632978 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-6601-account-create-update-6gc7l"] Nov 28 08:48:51 crc kubenswrapper[4946]: I1128 08:48:51.706972 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7e99b54-40de-4764-a18b-ecccd01a9887-operator-scripts\") pod \"barbican-6601-account-create-update-6gc7l\" (UID: \"d7e99b54-40de-4764-a18b-ecccd01a9887\") " pod="openstack/barbican-6601-account-create-update-6gc7l" Nov 28 08:48:51 crc kubenswrapper[4946]: I1128 08:48:51.707375 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn9x6\" (UniqueName: \"kubernetes.io/projected/d7e99b54-40de-4764-a18b-ecccd01a9887-kube-api-access-tn9x6\") pod \"barbican-6601-account-create-update-6gc7l\" (UID: \"d7e99b54-40de-4764-a18b-ecccd01a9887\") " pod="openstack/barbican-6601-account-create-update-6gc7l" Nov 28 08:48:51 crc kubenswrapper[4946]: I1128 08:48:51.707490 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxz7v\" (UniqueName: \"kubernetes.io/projected/23efb7b3-ab4a-4127-9b80-2475de4c5c17-kube-api-access-vxz7v\") pod \"barbican-db-create-kswft\" (UID: \"23efb7b3-ab4a-4127-9b80-2475de4c5c17\") " pod="openstack/barbican-db-create-kswft" Nov 28 08:48:51 crc kubenswrapper[4946]: I1128 08:48:51.707633 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23efb7b3-ab4a-4127-9b80-2475de4c5c17-operator-scripts\") pod \"barbican-db-create-kswft\" (UID: \"23efb7b3-ab4a-4127-9b80-2475de4c5c17\") " pod="openstack/barbican-db-create-kswft" Nov 28 08:48:51 crc kubenswrapper[4946]: I1128 08:48:51.808992 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7e99b54-40de-4764-a18b-ecccd01a9887-operator-scripts\") pod \"barbican-6601-account-create-update-6gc7l\" (UID: \"d7e99b54-40de-4764-a18b-ecccd01a9887\") " pod="openstack/barbican-6601-account-create-update-6gc7l" Nov 28 08:48:51 crc kubenswrapper[4946]: I1128 08:48:51.809104 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tn9x6\" (UniqueName: \"kubernetes.io/projected/d7e99b54-40de-4764-a18b-ecccd01a9887-kube-api-access-tn9x6\") pod \"barbican-6601-account-create-update-6gc7l\" (UID: \"d7e99b54-40de-4764-a18b-ecccd01a9887\") " pod="openstack/barbican-6601-account-create-update-6gc7l" Nov 28 08:48:51 crc kubenswrapper[4946]: I1128 08:48:51.809154 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxz7v\" (UniqueName: \"kubernetes.io/projected/23efb7b3-ab4a-4127-9b80-2475de4c5c17-kube-api-access-vxz7v\") pod \"barbican-db-create-kswft\" (UID: \"23efb7b3-ab4a-4127-9b80-2475de4c5c17\") " pod="openstack/barbican-db-create-kswft" Nov 28 08:48:51 crc kubenswrapper[4946]: I1128 08:48:51.809192 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/23efb7b3-ab4a-4127-9b80-2475de4c5c17-operator-scripts\") pod \"barbican-db-create-kswft\" (UID: \"23efb7b3-ab4a-4127-9b80-2475de4c5c17\") " pod="openstack/barbican-db-create-kswft" Nov 28 08:48:51 crc kubenswrapper[4946]: I1128 08:48:51.809957 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7e99b54-40de-4764-a18b-ecccd01a9887-operator-scripts\") pod \"barbican-6601-account-create-update-6gc7l\" (UID: \"d7e99b54-40de-4764-a18b-ecccd01a9887\") " pod="openstack/barbican-6601-account-create-update-6gc7l" Nov 28 08:48:51 crc kubenswrapper[4946]: I1128 08:48:51.810366 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23efb7b3-ab4a-4127-9b80-2475de4c5c17-operator-scripts\") pod \"barbican-db-create-kswft\" (UID: \"23efb7b3-ab4a-4127-9b80-2475de4c5c17\") " pod="openstack/barbican-db-create-kswft" Nov 28 08:48:51 crc kubenswrapper[4946]: I1128 08:48:51.826795 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tn9x6\" (UniqueName: \"kubernetes.io/projected/d7e99b54-40de-4764-a18b-ecccd01a9887-kube-api-access-tn9x6\") pod \"barbican-6601-account-create-update-6gc7l\" (UID: \"d7e99b54-40de-4764-a18b-ecccd01a9887\") " pod="openstack/barbican-6601-account-create-update-6gc7l" Nov 28 08:48:51 crc kubenswrapper[4946]: I1128 08:48:51.834816 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxz7v\" (UniqueName: \"kubernetes.io/projected/23efb7b3-ab4a-4127-9b80-2475de4c5c17-kube-api-access-vxz7v\") pod \"barbican-db-create-kswft\" (UID: \"23efb7b3-ab4a-4127-9b80-2475de4c5c17\") " pod="openstack/barbican-db-create-kswft" Nov 28 08:48:51 crc kubenswrapper[4946]: I1128 08:48:51.862416 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-kswft" Nov 28 08:48:51 crc kubenswrapper[4946]: I1128 08:48:51.939572 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-6601-account-create-update-6gc7l" Nov 28 08:48:52 crc kubenswrapper[4946]: I1128 08:48:52.324210 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-kswft"] Nov 28 08:48:52 crc kubenswrapper[4946]: I1128 08:48:52.385740 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-6601-account-create-update-6gc7l"] Nov 28 08:48:52 crc kubenswrapper[4946]: I1128 08:48:52.387212 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-kswft" event={"ID":"23efb7b3-ab4a-4127-9b80-2475de4c5c17","Type":"ContainerStarted","Data":"de2e31259c690e7e8d546f462096fd353ea8ea33c7d3466cfd24b7cfe730740b"} Nov 28 08:48:52 crc kubenswrapper[4946]: W1128 08:48:52.393843 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7e99b54_40de_4764_a18b_ecccd01a9887.slice/crio-531bc2987e74f822709ceaf4214a56503726e9a7a220b47a4404f21f967b06cb WatchSource:0}: Error finding container 531bc2987e74f822709ceaf4214a56503726e9a7a220b47a4404f21f967b06cb: Status 404 returned error can't find the container with id 531bc2987e74f822709ceaf4214a56503726e9a7a220b47a4404f21f967b06cb Nov 28 08:48:53 crc kubenswrapper[4946]: I1128 08:48:53.401511 4946 generic.go:334] "Generic (PLEG): container finished" podID="23efb7b3-ab4a-4127-9b80-2475de4c5c17" containerID="10b925f316dd4a74af64efce73ab4dd436122184227ab7db1ba402b1f2eb3727" exitCode=0 Nov 28 08:48:53 crc kubenswrapper[4946]: I1128 08:48:53.401632 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-kswft" event={"ID":"23efb7b3-ab4a-4127-9b80-2475de4c5c17","Type":"ContainerDied","Data":"10b925f316dd4a74af64efce73ab4dd436122184227ab7db1ba402b1f2eb3727"} Nov 28 08:48:53 crc kubenswrapper[4946]: I1128 08:48:53.405044 4946 generic.go:334] "Generic (PLEG): container finished" podID="d7e99b54-40de-4764-a18b-ecccd01a9887" containerID="b568b9dfffd5e27aaaf2f400cb75ee51512dfc416c665867c9c086e46a354d2f" exitCode=0 Nov 28 08:48:53 crc kubenswrapper[4946]: I1128 08:48:53.405102 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-6601-account-create-update-6gc7l" event={"ID":"d7e99b54-40de-4764-a18b-ecccd01a9887","Type":"ContainerDied","Data":"b568b9dfffd5e27aaaf2f400cb75ee51512dfc416c665867c9c086e46a354d2f"} Nov 28 08:48:53 crc kubenswrapper[4946]: I1128 08:48:53.405135 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-6601-account-create-update-6gc7l" event={"ID":"d7e99b54-40de-4764-a18b-ecccd01a9887","Type":"ContainerStarted","Data":"531bc2987e74f822709ceaf4214a56503726e9a7a220b47a4404f21f967b06cb"} Nov 28 08:48:53 crc kubenswrapper[4946]: I1128 08:48:53.990414 4946 scope.go:117] "RemoveContainer" containerID="350d49558ff5edf402a167963260dd78bc11ac007c0bdaee3d45f6814cfb23bd" Nov 28 08:48:53 crc kubenswrapper[4946]: E1128 08:48:53.990784 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:48:54 crc kubenswrapper[4946]: I1128 08:48:54.871195 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-6601-account-create-update-6gc7l" Nov 28 08:48:54 crc kubenswrapper[4946]: I1128 08:48:54.878128 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-kswft" Nov 28 08:48:54 crc kubenswrapper[4946]: I1128 08:48:54.966525 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23efb7b3-ab4a-4127-9b80-2475de4c5c17-operator-scripts\") pod \"23efb7b3-ab4a-4127-9b80-2475de4c5c17\" (UID: \"23efb7b3-ab4a-4127-9b80-2475de4c5c17\") " Nov 28 08:48:54 crc kubenswrapper[4946]: I1128 08:48:54.966748 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxz7v\" (UniqueName: \"kubernetes.io/projected/23efb7b3-ab4a-4127-9b80-2475de4c5c17-kube-api-access-vxz7v\") pod \"23efb7b3-ab4a-4127-9b80-2475de4c5c17\" (UID: \"23efb7b3-ab4a-4127-9b80-2475de4c5c17\") " Nov 28 08:48:54 crc kubenswrapper[4946]: I1128 08:48:54.966845 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7e99b54-40de-4764-a18b-ecccd01a9887-operator-scripts\") pod \"d7e99b54-40de-4764-a18b-ecccd01a9887\" (UID: \"d7e99b54-40de-4764-a18b-ecccd01a9887\") " Nov 28 08:48:54 crc kubenswrapper[4946]: I1128 08:48:54.967233 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tn9x6\" (UniqueName: \"kubernetes.io/projected/d7e99b54-40de-4764-a18b-ecccd01a9887-kube-api-access-tn9x6\") pod \"d7e99b54-40de-4764-a18b-ecccd01a9887\" (UID: \"d7e99b54-40de-4764-a18b-ecccd01a9887\") " Nov 28 08:48:54 crc kubenswrapper[4946]: I1128 08:48:54.967336 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23efb7b3-ab4a-4127-9b80-2475de4c5c17-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "23efb7b3-ab4a-4127-9b80-2475de4c5c17" (UID: "23efb7b3-ab4a-4127-9b80-2475de4c5c17"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:48:54 crc kubenswrapper[4946]: I1128 08:48:54.968024 4946 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23efb7b3-ab4a-4127-9b80-2475de4c5c17-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 08:48:54 crc kubenswrapper[4946]: I1128 08:48:54.968080 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e99b54-40de-4764-a18b-ecccd01a9887-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d7e99b54-40de-4764-a18b-ecccd01a9887" (UID: "d7e99b54-40de-4764-a18b-ecccd01a9887"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:48:54 crc kubenswrapper[4946]: I1128 08:48:54.972898 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e99b54-40de-4764-a18b-ecccd01a9887-kube-api-access-tn9x6" (OuterVolumeSpecName: "kube-api-access-tn9x6") pod "d7e99b54-40de-4764-a18b-ecccd01a9887" (UID: "d7e99b54-40de-4764-a18b-ecccd01a9887"). InnerVolumeSpecName "kube-api-access-tn9x6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:48:54 crc kubenswrapper[4946]: I1128 08:48:54.973983 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23efb7b3-ab4a-4127-9b80-2475de4c5c17-kube-api-access-vxz7v" (OuterVolumeSpecName: "kube-api-access-vxz7v") pod "23efb7b3-ab4a-4127-9b80-2475de4c5c17" (UID: "23efb7b3-ab4a-4127-9b80-2475de4c5c17"). InnerVolumeSpecName "kube-api-access-vxz7v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:48:55 crc kubenswrapper[4946]: I1128 08:48:55.070023 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxz7v\" (UniqueName: \"kubernetes.io/projected/23efb7b3-ab4a-4127-9b80-2475de4c5c17-kube-api-access-vxz7v\") on node \"crc\" DevicePath \"\"" Nov 28 08:48:55 crc kubenswrapper[4946]: I1128 08:48:55.070080 4946 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7e99b54-40de-4764-a18b-ecccd01a9887-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 08:48:55 crc kubenswrapper[4946]: I1128 08:48:55.070098 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tn9x6\" (UniqueName: \"kubernetes.io/projected/d7e99b54-40de-4764-a18b-ecccd01a9887-kube-api-access-tn9x6\") on node \"crc\" DevicePath \"\"" Nov 28 08:48:55 crc kubenswrapper[4946]: I1128 08:48:55.470914 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-6601-account-create-update-6gc7l" event={"ID":"d7e99b54-40de-4764-a18b-ecccd01a9887","Type":"ContainerDied","Data":"531bc2987e74f822709ceaf4214a56503726e9a7a220b47a4404f21f967b06cb"} Nov 28 08:48:55 crc kubenswrapper[4946]: I1128 08:48:55.470961 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="531bc2987e74f822709ceaf4214a56503726e9a7a220b47a4404f21f967b06cb" Nov 28 08:48:55 crc kubenswrapper[4946]: I1128 08:48:55.471002 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-6601-account-create-update-6gc7l" Nov 28 08:48:55 crc kubenswrapper[4946]: I1128 08:48:55.472931 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-kswft" event={"ID":"23efb7b3-ab4a-4127-9b80-2475de4c5c17","Type":"ContainerDied","Data":"de2e31259c690e7e8d546f462096fd353ea8ea33c7d3466cfd24b7cfe730740b"} Nov 28 08:48:55 crc kubenswrapper[4946]: I1128 08:48:55.472955 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de2e31259c690e7e8d546f462096fd353ea8ea33c7d3466cfd24b7cfe730740b" Nov 28 08:48:55 crc kubenswrapper[4946]: I1128 08:48:55.473027 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-kswft" Nov 28 08:48:56 crc kubenswrapper[4946]: I1128 08:48:56.911937 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-qhf5g"] Nov 28 08:48:56 crc kubenswrapper[4946]: E1128 08:48:56.912747 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7e99b54-40de-4764-a18b-ecccd01a9887" containerName="mariadb-account-create-update" Nov 28 08:48:56 crc kubenswrapper[4946]: I1128 08:48:56.912762 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7e99b54-40de-4764-a18b-ecccd01a9887" containerName="mariadb-account-create-update" Nov 28 08:48:56 crc kubenswrapper[4946]: E1128 08:48:56.912784 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23efb7b3-ab4a-4127-9b80-2475de4c5c17" containerName="mariadb-database-create" Nov 28 08:48:56 crc kubenswrapper[4946]: I1128 08:48:56.912790 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="23efb7b3-ab4a-4127-9b80-2475de4c5c17" containerName="mariadb-database-create" Nov 28 08:48:56 crc kubenswrapper[4946]: I1128 08:48:56.912976 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7e99b54-40de-4764-a18b-ecccd01a9887" containerName="mariadb-account-create-update" Nov 28 08:48:56 crc kubenswrapper[4946]: I1128 08:48:56.913005 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="23efb7b3-ab4a-4127-9b80-2475de4c5c17" containerName="mariadb-database-create" Nov 28 08:48:56 crc kubenswrapper[4946]: I1128 08:48:56.913696 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-qhf5g" Nov 28 08:48:56 crc kubenswrapper[4946]: I1128 08:48:56.916228 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-f8rdt" Nov 28 08:48:56 crc kubenswrapper[4946]: I1128 08:48:56.916715 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 28 08:48:56 crc kubenswrapper[4946]: I1128 08:48:56.928533 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-qhf5g"] Nov 28 08:48:57 crc kubenswrapper[4946]: I1128 08:48:57.002044 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecd158f2-cbc1-4965-bcbf-68d0ab35afaf-combined-ca-bundle\") pod \"barbican-db-sync-qhf5g\" (UID: \"ecd158f2-cbc1-4965-bcbf-68d0ab35afaf\") " pod="openstack/barbican-db-sync-qhf5g" Nov 28 08:48:57 crc kubenswrapper[4946]: I1128 08:48:57.002129 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g8vt\" (UniqueName: \"kubernetes.io/projected/ecd158f2-cbc1-4965-bcbf-68d0ab35afaf-kube-api-access-4g8vt\") pod \"barbican-db-sync-qhf5g\" (UID: \"ecd158f2-cbc1-4965-bcbf-68d0ab35afaf\") " pod="openstack/barbican-db-sync-qhf5g" Nov 28 08:48:57 crc kubenswrapper[4946]: I1128 08:48:57.002476 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ecd158f2-cbc1-4965-bcbf-68d0ab35afaf-db-sync-config-data\") pod \"barbican-db-sync-qhf5g\" (UID: \"ecd158f2-cbc1-4965-bcbf-68d0ab35afaf\") " pod="openstack/barbican-db-sync-qhf5g" Nov 28 08:48:57 crc kubenswrapper[4946]: I1128 08:48:57.120203 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/ecd158f2-cbc1-4965-bcbf-68d0ab35afaf-combined-ca-bundle\") pod \"barbican-db-sync-qhf5g\" (UID: \"ecd158f2-cbc1-4965-bcbf-68d0ab35afaf\") " pod="openstack/barbican-db-sync-qhf5g" Nov 28 08:48:57 crc kubenswrapper[4946]: I1128 08:48:57.120347 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4g8vt\" (UniqueName: \"kubernetes.io/projected/ecd158f2-cbc1-4965-bcbf-68d0ab35afaf-kube-api-access-4g8vt\") pod \"barbican-db-sync-qhf5g\" (UID: \"ecd158f2-cbc1-4965-bcbf-68d0ab35afaf\") " pod="openstack/barbican-db-sync-qhf5g" Nov 28 08:48:57 crc kubenswrapper[4946]: I1128 08:48:57.122066 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ecd158f2-cbc1-4965-bcbf-68d0ab35afaf-db-sync-config-data\") pod \"barbican-db-sync-qhf5g\" (UID: \"ecd158f2-cbc1-4965-bcbf-68d0ab35afaf\") " pod="openstack/barbican-db-sync-qhf5g" Nov 28 08:48:57 crc kubenswrapper[4946]: I1128 08:48:57.130734 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ecd158f2-cbc1-4965-bcbf-68d0ab35afaf-db-sync-config-data\") pod \"barbican-db-sync-qhf5g\" (UID: \"ecd158f2-cbc1-4965-bcbf-68d0ab35afaf\") " pod="openstack/barbican-db-sync-qhf5g" Nov 28 08:48:57 crc kubenswrapper[4946]: I1128 08:48:57.139408 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecd158f2-cbc1-4965-bcbf-68d0ab35afaf-combined-ca-bundle\") pod \"barbican-db-sync-qhf5g\" (UID: \"ecd158f2-cbc1-4965-bcbf-68d0ab35afaf\") " pod="openstack/barbican-db-sync-qhf5g" Nov 28 08:48:57 crc kubenswrapper[4946]: I1128 08:48:57.144167 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4g8vt\" (UniqueName: \"kubernetes.io/projected/ecd158f2-cbc1-4965-bcbf-68d0ab35afaf-kube-api-access-4g8vt\") pod \"barbican-db-sync-qhf5g\" (UID: \"ecd158f2-cbc1-4965-bcbf-68d0ab35afaf\") " pod="openstack/barbican-db-sync-qhf5g" Nov 28 08:48:57 crc kubenswrapper[4946]: I1128 08:48:57.238607 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-qhf5g" Nov 28 08:48:57 crc kubenswrapper[4946]: I1128 08:48:57.723066 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-qhf5g"] Nov 28 08:48:58 crc kubenswrapper[4946]: I1128 08:48:58.506295 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-qhf5g" event={"ID":"ecd158f2-cbc1-4965-bcbf-68d0ab35afaf","Type":"ContainerStarted","Data":"15944ceba04f00598e5d2640358d2b1fe107aee0f00f73dce598c2cc1e5b5021"} Nov 28 08:49:03 crc kubenswrapper[4946]: I1128 08:49:03.552374 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-qhf5g" event={"ID":"ecd158f2-cbc1-4965-bcbf-68d0ab35afaf","Type":"ContainerStarted","Data":"83bc162eac8bc64d5e2ddc5a5afb207bf352aa7239de745e33448cb364cfa2bf"} Nov 28 08:49:03 crc kubenswrapper[4946]: I1128 08:49:03.570886 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-qhf5g" podStartSLOduration=2.724287378 podStartE2EDuration="7.570865619s" podCreationTimestamp="2025-11-28 08:48:56 +0000 UTC" firstStartedPulling="2025-11-28 08:48:57.727037896 +0000 UTC m=+6992.105103017" lastFinishedPulling="2025-11-28 08:49:02.573616127 +0000 UTC m=+6996.951681258" observedRunningTime="2025-11-28 08:49:03.569571417 +0000 UTC m=+6997.947636578" watchObservedRunningTime="2025-11-28 08:49:03.570865619 +0000 UTC m=+6997.948930740" Nov 28 08:49:05 crc kubenswrapper[4946]: I1128 08:49:05.579806 4946 generic.go:334] "Generic (PLEG): container finished" podID="ecd158f2-cbc1-4965-bcbf-68d0ab35afaf" containerID="83bc162eac8bc64d5e2ddc5a5afb207bf352aa7239de745e33448cb364cfa2bf" exitCode=0 Nov 28 08:49:05 crc kubenswrapper[4946]: I1128 08:49:05.580187 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-qhf5g" event={"ID":"ecd158f2-cbc1-4965-bcbf-68d0ab35afaf","Type":"ContainerDied","Data":"83bc162eac8bc64d5e2ddc5a5afb207bf352aa7239de745e33448cb364cfa2bf"} Nov 28 08:49:05 crc kubenswrapper[4946]: I1128 08:49:05.995766 4946 scope.go:117] "RemoveContainer" containerID="350d49558ff5edf402a167963260dd78bc11ac007c0bdaee3d45f6814cfb23bd" Nov 28 08:49:06 crc kubenswrapper[4946]: I1128 08:49:06.592782 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerStarted","Data":"5d101f00978c0d1403b405e85a9fdb8ca2bcf5d89e67e93ef9450e8af3a21554"} Nov 28 08:49:06 crc kubenswrapper[4946]: I1128 08:49:06.951106 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-qhf5g" Nov 28 08:49:07 crc kubenswrapper[4946]: I1128 08:49:07.104487 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecd158f2-cbc1-4965-bcbf-68d0ab35afaf-combined-ca-bundle\") pod \"ecd158f2-cbc1-4965-bcbf-68d0ab35afaf\" (UID: \"ecd158f2-cbc1-4965-bcbf-68d0ab35afaf\") " Nov 28 08:49:07 crc kubenswrapper[4946]: I1128 08:49:07.104535 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g8vt\" (UniqueName: \"kubernetes.io/projected/ecd158f2-cbc1-4965-bcbf-68d0ab35afaf-kube-api-access-4g8vt\") pod \"ecd158f2-cbc1-4965-bcbf-68d0ab35afaf\" (UID: \"ecd158f2-cbc1-4965-bcbf-68d0ab35afaf\") " Nov 28 08:49:07 crc kubenswrapper[4946]: I1128 08:49:07.104755 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ecd158f2-cbc1-4965-bcbf-68d0ab35afaf-db-sync-config-data\") pod \"ecd158f2-cbc1-4965-bcbf-68d0ab35afaf\" (UID: \"ecd158f2-cbc1-4965-bcbf-68d0ab35afaf\") " Nov 28 08:49:07 crc kubenswrapper[4946]: I1128 08:49:07.110068 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecd158f2-cbc1-4965-bcbf-68d0ab35afaf-kube-api-access-4g8vt" (OuterVolumeSpecName: "kube-api-access-4g8vt") pod "ecd158f2-cbc1-4965-bcbf-68d0ab35afaf" (UID: "ecd158f2-cbc1-4965-bcbf-68d0ab35afaf"). InnerVolumeSpecName "kube-api-access-4g8vt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:49:07 crc kubenswrapper[4946]: I1128 08:49:07.110611 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecd158f2-cbc1-4965-bcbf-68d0ab35afaf-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "ecd158f2-cbc1-4965-bcbf-68d0ab35afaf" (UID: "ecd158f2-cbc1-4965-bcbf-68d0ab35afaf"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:49:07 crc kubenswrapper[4946]: I1128 08:49:07.126989 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecd158f2-cbc1-4965-bcbf-68d0ab35afaf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ecd158f2-cbc1-4965-bcbf-68d0ab35afaf" (UID: "ecd158f2-cbc1-4965-bcbf-68d0ab35afaf"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:49:07 crc kubenswrapper[4946]: I1128 08:49:07.206422 4946 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ecd158f2-cbc1-4965-bcbf-68d0ab35afaf-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 08:49:07 crc kubenswrapper[4946]: I1128 08:49:07.206481 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4g8vt\" (UniqueName: \"kubernetes.io/projected/ecd158f2-cbc1-4965-bcbf-68d0ab35afaf-kube-api-access-4g8vt\") on node \"crc\" DevicePath \"\"" Nov 28 08:49:07 crc kubenswrapper[4946]: I1128 08:49:07.206497 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecd158f2-cbc1-4965-bcbf-68d0ab35afaf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 08:49:07 crc kubenswrapper[4946]: I1128 08:49:07.604552 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-qhf5g" event={"ID":"ecd158f2-cbc1-4965-bcbf-68d0ab35afaf","Type":"ContainerDied","Data":"15944ceba04f00598e5d2640358d2b1fe107aee0f00f73dce598c2cc1e5b5021"} Nov 28 08:49:07 crc kubenswrapper[4946]: I1128 08:49:07.604617 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-qhf5g" Nov 28 08:49:07 crc kubenswrapper[4946]: I1128 08:49:07.604623 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15944ceba04f00598e5d2640358d2b1fe107aee0f00f73dce598c2cc1e5b5021" Nov 28 08:49:07 crc kubenswrapper[4946]: I1128 08:49:07.876342 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-77b5dd78d9-268ll"] Nov 28 08:49:07 crc kubenswrapper[4946]: E1128 08:49:07.876703 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecd158f2-cbc1-4965-bcbf-68d0ab35afaf" containerName="barbican-db-sync" Nov 28 08:49:07 crc kubenswrapper[4946]: I1128 08:49:07.876714 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecd158f2-cbc1-4965-bcbf-68d0ab35afaf" containerName="barbican-db-sync" Nov 28 08:49:07 crc kubenswrapper[4946]: I1128 08:49:07.876886 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecd158f2-cbc1-4965-bcbf-68d0ab35afaf" containerName="barbican-db-sync" Nov 28 08:49:07 crc kubenswrapper[4946]: I1128 08:49:07.877664 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-77b5dd78d9-268ll" Nov 28 08:49:07 crc kubenswrapper[4946]: I1128 08:49:07.889325 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 28 08:49:07 crc kubenswrapper[4946]: I1128 08:49:07.889684 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Nov 28 08:49:07 crc kubenswrapper[4946]: I1128 08:49:07.890159 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-f8rdt" Nov 28 08:49:07 crc kubenswrapper[4946]: I1128 08:49:07.890244 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-77b5dd78d9-268ll"] Nov 28 08:49:07 crc kubenswrapper[4946]: I1128 08:49:07.899992 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-6968f5d7bb-x689b"] Nov 28 08:49:07 crc kubenswrapper[4946]: I1128 08:49:07.901240 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-6968f5d7bb-x689b" Nov 28 08:49:07 crc kubenswrapper[4946]: I1128 08:49:07.904791 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Nov 28 08:49:07 crc kubenswrapper[4946]: I1128 08:49:07.963749 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6968f5d7bb-x689b"] Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.031218 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04af3580-115f-4b0e-a549-b66d28ccce66-combined-ca-bundle\") pod \"barbican-keystone-listener-6968f5d7bb-x689b\" (UID: \"04af3580-115f-4b0e-a549-b66d28ccce66\") " pod="openstack/barbican-keystone-listener-6968f5d7bb-x689b" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.039348 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f5194021-7a69-42d3-8a19-8bfb471434db-config-data-custom\") pod \"barbican-worker-77b5dd78d9-268ll\" (UID: \"f5194021-7a69-42d3-8a19-8bfb471434db\") " pod="openstack/barbican-worker-77b5dd78d9-268ll" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.039516 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04af3580-115f-4b0e-a549-b66d28ccce66-config-data-custom\") pod \"barbican-keystone-listener-6968f5d7bb-x689b\" (UID: \"04af3580-115f-4b0e-a549-b66d28ccce66\") " pod="openstack/barbican-keystone-listener-6968f5d7bb-x689b" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.039597 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl8jb\" (UniqueName: \"kubernetes.io/projected/04af3580-115f-4b0e-a549-b66d28ccce66-kube-api-access-xl8jb\") pod \"barbican-keystone-listener-6968f5d7bb-x689b\" (UID: \"04af3580-115f-4b0e-a549-b66d28ccce66\") " pod="openstack/barbican-keystone-listener-6968f5d7bb-x689b" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.039629 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04af3580-115f-4b0e-a549-b66d28ccce66-config-data\") pod \"barbican-keystone-listener-6968f5d7bb-x689b\" (UID: \"04af3580-115f-4b0e-a549-b66d28ccce66\") " pod="openstack/barbican-keystone-listener-6968f5d7bb-x689b" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.039718 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04af3580-115f-4b0e-a549-b66d28ccce66-logs\") pod \"barbican-keystone-listener-6968f5d7bb-x689b\" (UID: \"04af3580-115f-4b0e-a549-b66d28ccce66\") " pod="openstack/barbican-keystone-listener-6968f5d7bb-x689b" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.039758 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5194021-7a69-42d3-8a19-8bfb471434db-logs\") pod \"barbican-worker-77b5dd78d9-268ll\" (UID: \"f5194021-7a69-42d3-8a19-8bfb471434db\") " pod="openstack/barbican-worker-77b5dd78d9-268ll" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.039794 4946 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhbmj\" (UniqueName: \"kubernetes.io/projected/f5194021-7a69-42d3-8a19-8bfb471434db-kube-api-access-qhbmj\") pod \"barbican-worker-77b5dd78d9-268ll\" (UID: \"f5194021-7a69-42d3-8a19-8bfb471434db\") " pod="openstack/barbican-worker-77b5dd78d9-268ll" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.039832 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5194021-7a69-42d3-8a19-8bfb471434db-combined-ca-bundle\") pod \"barbican-worker-77b5dd78d9-268ll\" (UID: \"f5194021-7a69-42d3-8a19-8bfb471434db\") " pod="openstack/barbican-worker-77b5dd78d9-268ll" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.039848 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5194021-7a69-42d3-8a19-8bfb471434db-config-data\") pod \"barbican-worker-77b5dd78d9-268ll\" (UID: \"f5194021-7a69-42d3-8a19-8bfb471434db\") " pod="openstack/barbican-worker-77b5dd78d9-268ll" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.058476 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57fb645c4f-tglxs"] Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.059876 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57fb645c4f-tglxs" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.079198 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57fb645c4f-tglxs"] Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.107517 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5dfc584b48-xv5w9"] Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.108997 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5dfc584b48-xv5w9"] Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.109085 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5dfc584b48-xv5w9" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.110724 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.141226 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f5194021-7a69-42d3-8a19-8bfb471434db-config-data-custom\") pod \"barbican-worker-77b5dd78d9-268ll\" (UID: \"f5194021-7a69-42d3-8a19-8bfb471434db\") " pod="openstack/barbican-worker-77b5dd78d9-268ll" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.141335 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04af3580-115f-4b0e-a549-b66d28ccce66-config-data-custom\") pod \"barbican-keystone-listener-6968f5d7bb-x689b\" (UID: \"04af3580-115f-4b0e-a549-b66d28ccce66\") " pod="openstack/barbican-keystone-listener-6968f5d7bb-x689b" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.141376 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xl8jb\" (UniqueName: \"kubernetes.io/projected/04af3580-115f-4b0e-a549-b66d28ccce66-kube-api-access-xl8jb\") pod \"barbican-keystone-listener-6968f5d7bb-x689b\" (UID: \"04af3580-115f-4b0e-a549-b66d28ccce66\") " pod="openstack/barbican-keystone-listener-6968f5d7bb-x689b" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.141416 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04af3580-115f-4b0e-a549-b66d28ccce66-config-data\") pod \"barbican-keystone-listener-6968f5d7bb-x689b\" (UID: \"04af3580-115f-4b0e-a549-b66d28ccce66\") " pod="openstack/barbican-keystone-listener-6968f5d7bb-x689b" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.141456 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04af3580-115f-4b0e-a549-b66d28ccce66-logs\") pod \"barbican-keystone-listener-6968f5d7bb-x689b\" (UID: \"04af3580-115f-4b0e-a549-b66d28ccce66\") " pod="openstack/barbican-keystone-listener-6968f5d7bb-x689b" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.141550 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5194021-7a69-42d3-8a19-8bfb471434db-logs\") pod \"barbican-worker-77b5dd78d9-268ll\" (UID: \"f5194021-7a69-42d3-8a19-8bfb471434db\") " pod="openstack/barbican-worker-77b5dd78d9-268ll" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.141643 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhbmj\" (UniqueName: \"kubernetes.io/projected/f5194021-7a69-42d3-8a19-8bfb471434db-kube-api-access-qhbmj\") pod \"barbican-worker-77b5dd78d9-268ll\" (UID: \"f5194021-7a69-42d3-8a19-8bfb471434db\") " pod="openstack/barbican-worker-77b5dd78d9-268ll" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.141712 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5194021-7a69-42d3-8a19-8bfb471434db-config-data\") pod \"barbican-worker-77b5dd78d9-268ll\" (UID: \"f5194021-7a69-42d3-8a19-8bfb471434db\") " pod="openstack/barbican-worker-77b5dd78d9-268ll" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.141728 4946 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5194021-7a69-42d3-8a19-8bfb471434db-combined-ca-bundle\") pod \"barbican-worker-77b5dd78d9-268ll\" (UID: \"f5194021-7a69-42d3-8a19-8bfb471434db\") " pod="openstack/barbican-worker-77b5dd78d9-268ll" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.141789 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04af3580-115f-4b0e-a549-b66d28ccce66-combined-ca-bundle\") pod \"barbican-keystone-listener-6968f5d7bb-x689b\" (UID: \"04af3580-115f-4b0e-a549-b66d28ccce66\") " pod="openstack/barbican-keystone-listener-6968f5d7bb-x689b" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.143312 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04af3580-115f-4b0e-a549-b66d28ccce66-logs\") pod \"barbican-keystone-listener-6968f5d7bb-x689b\" (UID: \"04af3580-115f-4b0e-a549-b66d28ccce66\") " pod="openstack/barbican-keystone-listener-6968f5d7bb-x689b" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.144023 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5194021-7a69-42d3-8a19-8bfb471434db-logs\") pod \"barbican-worker-77b5dd78d9-268ll\" (UID: \"f5194021-7a69-42d3-8a19-8bfb471434db\") " pod="openstack/barbican-worker-77b5dd78d9-268ll" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.148531 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04af3580-115f-4b0e-a549-b66d28ccce66-combined-ca-bundle\") pod \"barbican-keystone-listener-6968f5d7bb-x689b\" (UID: \"04af3580-115f-4b0e-a549-b66d28ccce66\") " pod="openstack/barbican-keystone-listener-6968f5d7bb-x689b" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.148695 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5194021-7a69-42d3-8a19-8bfb471434db-config-data\") pod \"barbican-worker-77b5dd78d9-268ll\" (UID: \"f5194021-7a69-42d3-8a19-8bfb471434db\") " pod="openstack/barbican-worker-77b5dd78d9-268ll" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.148711 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04af3580-115f-4b0e-a549-b66d28ccce66-config-data\") pod \"barbican-keystone-listener-6968f5d7bb-x689b\" (UID: \"04af3580-115f-4b0e-a549-b66d28ccce66\") " pod="openstack/barbican-keystone-listener-6968f5d7bb-x689b" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.149218 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5194021-7a69-42d3-8a19-8bfb471434db-combined-ca-bundle\") pod \"barbican-worker-77b5dd78d9-268ll\" (UID: \"f5194021-7a69-42d3-8a19-8bfb471434db\") " pod="openstack/barbican-worker-77b5dd78d9-268ll" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.150081 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f5194021-7a69-42d3-8a19-8bfb471434db-config-data-custom\") pod \"barbican-worker-77b5dd78d9-268ll\" (UID: \"f5194021-7a69-42d3-8a19-8bfb471434db\") " pod="openstack/barbican-worker-77b5dd78d9-268ll" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 
08:49:08.160213 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xl8jb\" (UniqueName: \"kubernetes.io/projected/04af3580-115f-4b0e-a549-b66d28ccce66-kube-api-access-xl8jb\") pod \"barbican-keystone-listener-6968f5d7bb-x689b\" (UID: \"04af3580-115f-4b0e-a549-b66d28ccce66\") " pod="openstack/barbican-keystone-listener-6968f5d7bb-x689b" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.163661 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhbmj\" (UniqueName: \"kubernetes.io/projected/f5194021-7a69-42d3-8a19-8bfb471434db-kube-api-access-qhbmj\") pod \"barbican-worker-77b5dd78d9-268ll\" (UID: \"f5194021-7a69-42d3-8a19-8bfb471434db\") " pod="openstack/barbican-worker-77b5dd78d9-268ll" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.164361 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04af3580-115f-4b0e-a549-b66d28ccce66-config-data-custom\") pod \"barbican-keystone-listener-6968f5d7bb-x689b\" (UID: \"04af3580-115f-4b0e-a549-b66d28ccce66\") " pod="openstack/barbican-keystone-listener-6968f5d7bb-x689b" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.202740 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-77b5dd78d9-268ll" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.247217 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/409ff561-0177-4450-b505-7225027f0b06-config-data-custom\") pod \"barbican-api-5dfc584b48-xv5w9\" (UID: \"409ff561-0177-4450-b505-7225027f0b06\") " pod="openstack/barbican-api-5dfc584b48-xv5w9" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.247287 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcwrk\" (UniqueName: \"kubernetes.io/projected/a1fcc079-c977-42f0-b8ea-3abeb7c64cd4-kube-api-access-xcwrk\") pod \"dnsmasq-dns-57fb645c4f-tglxs\" (UID: \"a1fcc079-c977-42f0-b8ea-3abeb7c64cd4\") " pod="openstack/dnsmasq-dns-57fb645c4f-tglxs" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.247328 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a1fcc079-c977-42f0-b8ea-3abeb7c64cd4-ovsdbserver-nb\") pod \"dnsmasq-dns-57fb645c4f-tglxs\" (UID: \"a1fcc079-c977-42f0-b8ea-3abeb7c64cd4\") " pod="openstack/dnsmasq-dns-57fb645c4f-tglxs" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.247349 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1fcc079-c977-42f0-b8ea-3abeb7c64cd4-dns-svc\") pod \"dnsmasq-dns-57fb645c4f-tglxs\" (UID: \"a1fcc079-c977-42f0-b8ea-3abeb7c64cd4\") " pod="openstack/dnsmasq-dns-57fb645c4f-tglxs" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.247387 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/409ff561-0177-4450-b505-7225027f0b06-config-data\") pod \"barbican-api-5dfc584b48-xv5w9\" (UID: \"409ff561-0177-4450-b505-7225027f0b06\") " pod="openstack/barbican-api-5dfc584b48-xv5w9" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.247405 4946 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/409ff561-0177-4450-b505-7225027f0b06-combined-ca-bundle\") pod \"barbican-api-5dfc584b48-xv5w9\" (UID: \"409ff561-0177-4450-b505-7225027f0b06\") " pod="openstack/barbican-api-5dfc584b48-xv5w9" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.247420 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a1fcc079-c977-42f0-b8ea-3abeb7c64cd4-ovsdbserver-sb\") pod \"dnsmasq-dns-57fb645c4f-tglxs\" (UID: \"a1fcc079-c977-42f0-b8ea-3abeb7c64cd4\") " pod="openstack/dnsmasq-dns-57fb645c4f-tglxs" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.247436 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgv9f\" (UniqueName: \"kubernetes.io/projected/409ff561-0177-4450-b505-7225027f0b06-kube-api-access-dgv9f\") pod \"barbican-api-5dfc584b48-xv5w9\" (UID: \"409ff561-0177-4450-b505-7225027f0b06\") " pod="openstack/barbican-api-5dfc584b48-xv5w9" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.247458 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1fcc079-c977-42f0-b8ea-3abeb7c64cd4-config\") pod \"dnsmasq-dns-57fb645c4f-tglxs\" (UID: \"a1fcc079-c977-42f0-b8ea-3abeb7c64cd4\") " pod="openstack/dnsmasq-dns-57fb645c4f-tglxs" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.247492 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/409ff561-0177-4450-b505-7225027f0b06-logs\") pod \"barbican-api-5dfc584b48-xv5w9\" (UID: \"409ff561-0177-4450-b505-7225027f0b06\") " pod="openstack/barbican-api-5dfc584b48-xv5w9" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.247557 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-6968f5d7bb-x689b" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.349047 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/409ff561-0177-4450-b505-7225027f0b06-config-data-custom\") pod \"barbican-api-5dfc584b48-xv5w9\" (UID: \"409ff561-0177-4450-b505-7225027f0b06\") " pod="openstack/barbican-api-5dfc584b48-xv5w9" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.349416 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcwrk\" (UniqueName: \"kubernetes.io/projected/a1fcc079-c977-42f0-b8ea-3abeb7c64cd4-kube-api-access-xcwrk\") pod \"dnsmasq-dns-57fb645c4f-tglxs\" (UID: \"a1fcc079-c977-42f0-b8ea-3abeb7c64cd4\") " pod="openstack/dnsmasq-dns-57fb645c4f-tglxs" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.349473 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a1fcc079-c977-42f0-b8ea-3abeb7c64cd4-ovsdbserver-nb\") pod \"dnsmasq-dns-57fb645c4f-tglxs\" (UID: \"a1fcc079-c977-42f0-b8ea-3abeb7c64cd4\") " pod="openstack/dnsmasq-dns-57fb645c4f-tglxs" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.349493 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1fcc079-c977-42f0-b8ea-3abeb7c64cd4-dns-svc\") pod \"dnsmasq-dns-57fb645c4f-tglxs\" (UID: \"a1fcc079-c977-42f0-b8ea-3abeb7c64cd4\") " pod="openstack/dnsmasq-dns-57fb645c4f-tglxs" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.349529 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/409ff561-0177-4450-b505-7225027f0b06-config-data\") pod \"barbican-api-5dfc584b48-xv5w9\" (UID: \"409ff561-0177-4450-b505-7225027f0b06\") " pod="openstack/barbican-api-5dfc584b48-xv5w9" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.349546 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/409ff561-0177-4450-b505-7225027f0b06-combined-ca-bundle\") pod \"barbican-api-5dfc584b48-xv5w9\" (UID: \"409ff561-0177-4450-b505-7225027f0b06\") " pod="openstack/barbican-api-5dfc584b48-xv5w9" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.349563 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a1fcc079-c977-42f0-b8ea-3abeb7c64cd4-ovsdbserver-sb\") pod \"dnsmasq-dns-57fb645c4f-tglxs\" (UID: \"a1fcc079-c977-42f0-b8ea-3abeb7c64cd4\") " pod="openstack/dnsmasq-dns-57fb645c4f-tglxs" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.349581 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgv9f\" (UniqueName: \"kubernetes.io/projected/409ff561-0177-4450-b505-7225027f0b06-kube-api-access-dgv9f\") pod \"barbican-api-5dfc584b48-xv5w9\" (UID: \"409ff561-0177-4450-b505-7225027f0b06\") " pod="openstack/barbican-api-5dfc584b48-xv5w9" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.349603 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1fcc079-c977-42f0-b8ea-3abeb7c64cd4-config\") pod \"dnsmasq-dns-57fb645c4f-tglxs\" (UID: \"a1fcc079-c977-42f0-b8ea-3abeb7c64cd4\") " 
pod="openstack/dnsmasq-dns-57fb645c4f-tglxs" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.349621 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/409ff561-0177-4450-b505-7225027f0b06-logs\") pod \"barbican-api-5dfc584b48-xv5w9\" (UID: \"409ff561-0177-4450-b505-7225027f0b06\") " pod="openstack/barbican-api-5dfc584b48-xv5w9" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.350016 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/409ff561-0177-4450-b505-7225027f0b06-logs\") pod \"barbican-api-5dfc584b48-xv5w9\" (UID: \"409ff561-0177-4450-b505-7225027f0b06\") " pod="openstack/barbican-api-5dfc584b48-xv5w9" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.354403 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a1fcc079-c977-42f0-b8ea-3abeb7c64cd4-ovsdbserver-sb\") pod \"dnsmasq-dns-57fb645c4f-tglxs\" (UID: \"a1fcc079-c977-42f0-b8ea-3abeb7c64cd4\") " pod="openstack/dnsmasq-dns-57fb645c4f-tglxs" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.360178 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/409ff561-0177-4450-b505-7225027f0b06-config-data-custom\") pod \"barbican-api-5dfc584b48-xv5w9\" (UID: \"409ff561-0177-4450-b505-7225027f0b06\") " pod="openstack/barbican-api-5dfc584b48-xv5w9" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.360405 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/409ff561-0177-4450-b505-7225027f0b06-combined-ca-bundle\") pod \"barbican-api-5dfc584b48-xv5w9\" (UID: \"409ff561-0177-4450-b505-7225027f0b06\") " pod="openstack/barbican-api-5dfc584b48-xv5w9" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.362138 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1fcc079-c977-42f0-b8ea-3abeb7c64cd4-config\") pod \"dnsmasq-dns-57fb645c4f-tglxs\" (UID: \"a1fcc079-c977-42f0-b8ea-3abeb7c64cd4\") " pod="openstack/dnsmasq-dns-57fb645c4f-tglxs" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.364889 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a1fcc079-c977-42f0-b8ea-3abeb7c64cd4-ovsdbserver-nb\") pod \"dnsmasq-dns-57fb645c4f-tglxs\" (UID: \"a1fcc079-c977-42f0-b8ea-3abeb7c64cd4\") " pod="openstack/dnsmasq-dns-57fb645c4f-tglxs" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.365742 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1fcc079-c977-42f0-b8ea-3abeb7c64cd4-dns-svc\") pod \"dnsmasq-dns-57fb645c4f-tglxs\" (UID: \"a1fcc079-c977-42f0-b8ea-3abeb7c64cd4\") " pod="openstack/dnsmasq-dns-57fb645c4f-tglxs" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.367000 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcwrk\" (UniqueName: \"kubernetes.io/projected/a1fcc079-c977-42f0-b8ea-3abeb7c64cd4-kube-api-access-xcwrk\") pod \"dnsmasq-dns-57fb645c4f-tglxs\" (UID: \"a1fcc079-c977-42f0-b8ea-3abeb7c64cd4\") " pod="openstack/dnsmasq-dns-57fb645c4f-tglxs" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.378805 4946 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/409ff561-0177-4450-b505-7225027f0b06-config-data\") pod \"barbican-api-5dfc584b48-xv5w9\" (UID: \"409ff561-0177-4450-b505-7225027f0b06\") " pod="openstack/barbican-api-5dfc584b48-xv5w9" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.379122 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgv9f\" (UniqueName: \"kubernetes.io/projected/409ff561-0177-4450-b505-7225027f0b06-kube-api-access-dgv9f\") pod \"barbican-api-5dfc584b48-xv5w9\" (UID: \"409ff561-0177-4450-b505-7225027f0b06\") " pod="openstack/barbican-api-5dfc584b48-xv5w9" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.399961 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57fb645c4f-tglxs" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.434514 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5dfc584b48-xv5w9" Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.680276 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-77b5dd78d9-268ll"] Nov 28 08:49:08 crc kubenswrapper[4946]: W1128 08:49:08.688770 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5194021_7a69_42d3_8a19_8bfb471434db.slice/crio-287e23a5950b0fc305d6d85562f3d1bb5074a5ab79d798185931407016f979b8 WatchSource:0}: Error finding container 287e23a5950b0fc305d6d85562f3d1bb5074a5ab79d798185931407016f979b8: Status 404 returned error can't find the container with id 287e23a5950b0fc305d6d85562f3d1bb5074a5ab79d798185931407016f979b8 Nov 28 08:49:08 crc kubenswrapper[4946]: W1128 08:49:08.771332 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04af3580_115f_4b0e_a549_b66d28ccce66.slice/crio-b0a4de57a0495a67f8fe0677110f978a299812e394f28751f71e5b6017022864 WatchSource:0}: Error finding container b0a4de57a0495a67f8fe0677110f978a299812e394f28751f71e5b6017022864: Status 404 returned error can't find the container with id b0a4de57a0495a67f8fe0677110f978a299812e394f28751f71e5b6017022864 Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.773704 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6968f5d7bb-x689b"] Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.897544 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57fb645c4f-tglxs"] Nov 28 08:49:08 crc kubenswrapper[4946]: I1128 08:49:08.910143 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5dfc584b48-xv5w9"] Nov 28 08:49:08 crc kubenswrapper[4946]: W1128 08:49:08.911895 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1fcc079_c977_42f0_b8ea_3abeb7c64cd4.slice/crio-89b08e63ed9c4be97d1519b4b4dc2576d9f7233e10e3063811d6927723f791f4 WatchSource:0}: Error finding container 89b08e63ed9c4be97d1519b4b4dc2576d9f7233e10e3063811d6927723f791f4: Status 404 returned error can't find the container with id 89b08e63ed9c4be97d1519b4b4dc2576d9f7233e10e3063811d6927723f791f4 Nov 28 08:49:09 crc kubenswrapper[4946]: I1128 08:49:09.626147 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6968f5d7bb-x689b" 
event={"ID":"04af3580-115f-4b0e-a549-b66d28ccce66","Type":"ContainerStarted","Data":"b0a4de57a0495a67f8fe0677110f978a299812e394f28751f71e5b6017022864"} Nov 28 08:49:09 crc kubenswrapper[4946]: I1128 08:49:09.628302 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5dfc584b48-xv5w9" event={"ID":"409ff561-0177-4450-b505-7225027f0b06","Type":"ContainerStarted","Data":"b98213f4262483e1d99a244175b1218f8b58aef976e8ade01fa93aa2b8a00747"} Nov 28 08:49:09 crc kubenswrapper[4946]: I1128 08:49:09.628340 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5dfc584b48-xv5w9" event={"ID":"409ff561-0177-4450-b505-7225027f0b06","Type":"ContainerStarted","Data":"272f43476e541ae284b9c539502155cf1fe7ee32d45ce4231f1678fa8262be5e"} Nov 28 08:49:09 crc kubenswrapper[4946]: I1128 08:49:09.628353 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5dfc584b48-xv5w9" event={"ID":"409ff561-0177-4450-b505-7225027f0b06","Type":"ContainerStarted","Data":"13821c01f441f0988de61a5b7b0645944d6f773bfe8e8ef3e62eaf830384dce2"} Nov 28 08:49:09 crc kubenswrapper[4946]: I1128 08:49:09.629594 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5dfc584b48-xv5w9" Nov 28 08:49:09 crc kubenswrapper[4946]: I1128 08:49:09.629620 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5dfc584b48-xv5w9" Nov 28 08:49:09 crc kubenswrapper[4946]: I1128 08:49:09.630530 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-77b5dd78d9-268ll" event={"ID":"f5194021-7a69-42d3-8a19-8bfb471434db","Type":"ContainerStarted","Data":"287e23a5950b0fc305d6d85562f3d1bb5074a5ab79d798185931407016f979b8"} Nov 28 08:49:09 crc kubenswrapper[4946]: I1128 08:49:09.638006 4946 generic.go:334] "Generic (PLEG): container finished" podID="a1fcc079-c977-42f0-b8ea-3abeb7c64cd4" containerID="62b0d6d2bd1461420c6b5f8d128b07fa496148708db84d47f6c21819c2cb68cf" exitCode=0 Nov 28 08:49:09 crc kubenswrapper[4946]: I1128 08:49:09.638048 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57fb645c4f-tglxs" event={"ID":"a1fcc079-c977-42f0-b8ea-3abeb7c64cd4","Type":"ContainerDied","Data":"62b0d6d2bd1461420c6b5f8d128b07fa496148708db84d47f6c21819c2cb68cf"} Nov 28 08:49:09 crc kubenswrapper[4946]: I1128 08:49:09.638071 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57fb645c4f-tglxs" event={"ID":"a1fcc079-c977-42f0-b8ea-3abeb7c64cd4","Type":"ContainerStarted","Data":"89b08e63ed9c4be97d1519b4b4dc2576d9f7233e10e3063811d6927723f791f4"} Nov 28 08:49:09 crc kubenswrapper[4946]: I1128 08:49:09.647772 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5dfc584b48-xv5w9" podStartSLOduration=1.647756808 podStartE2EDuration="1.647756808s" podCreationTimestamp="2025-11-28 08:49:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 08:49:09.645611695 +0000 UTC m=+7004.023676806" watchObservedRunningTime="2025-11-28 08:49:09.647756808 +0000 UTC m=+7004.025821919" Nov 28 08:49:10 crc kubenswrapper[4946]: I1128 08:49:10.650570 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6968f5d7bb-x689b" 
event={"ID":"04af3580-115f-4b0e-a549-b66d28ccce66","Type":"ContainerStarted","Data":"86cafb1f2384835bed343402a0f63746460bc4675d9773b13b95939184d472aa"} Nov 28 08:49:10 crc kubenswrapper[4946]: I1128 08:49:10.651035 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6968f5d7bb-x689b" event={"ID":"04af3580-115f-4b0e-a549-b66d28ccce66","Type":"ContainerStarted","Data":"2d77503646c177143a760950f7e97435ffe3d45b37afe315480533073b48822e"} Nov 28 08:49:10 crc kubenswrapper[4946]: I1128 08:49:10.654353 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-77b5dd78d9-268ll" event={"ID":"f5194021-7a69-42d3-8a19-8bfb471434db","Type":"ContainerStarted","Data":"c736e743e57aee5e889e660e69c947b6e7d61353cc4305d47fa99baf1f374582"} Nov 28 08:49:10 crc kubenswrapper[4946]: I1128 08:49:10.654520 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-77b5dd78d9-268ll" event={"ID":"f5194021-7a69-42d3-8a19-8bfb471434db","Type":"ContainerStarted","Data":"fecf4822879ad0e9268dcbd359b21d314f8446ebdf37582618b2d3be180e9020"} Nov 28 08:49:10 crc kubenswrapper[4946]: I1128 08:49:10.658783 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57fb645c4f-tglxs" event={"ID":"a1fcc079-c977-42f0-b8ea-3abeb7c64cd4","Type":"ContainerStarted","Data":"0411449c80fa1a3c76a139822e13a30410c8a1cd077b702f6140f30f8b6f03ae"} Nov 28 08:49:10 crc kubenswrapper[4946]: I1128 08:49:10.658827 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57fb645c4f-tglxs" Nov 28 08:49:10 crc kubenswrapper[4946]: I1128 08:49:10.686881 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-6968f5d7bb-x689b" podStartSLOduration=2.552488753 podStartE2EDuration="3.686850505s" podCreationTimestamp="2025-11-28 08:49:07 +0000 UTC" firstStartedPulling="2025-11-28 08:49:08.773600273 +0000 UTC m=+7003.151665394" lastFinishedPulling="2025-11-28 08:49:09.907962045 +0000 UTC m=+7004.286027146" observedRunningTime="2025-11-28 08:49:10.676752345 +0000 UTC m=+7005.054817486" watchObservedRunningTime="2025-11-28 08:49:10.686850505 +0000 UTC m=+7005.064915656" Nov 28 08:49:10 crc kubenswrapper[4946]: I1128 08:49:10.709482 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-77b5dd78d9-268ll" podStartSLOduration=2.495780799 podStartE2EDuration="3.709420133s" podCreationTimestamp="2025-11-28 08:49:07 +0000 UTC" firstStartedPulling="2025-11-28 08:49:08.691130812 +0000 UTC m=+7003.069195923" lastFinishedPulling="2025-11-28 08:49:09.904770146 +0000 UTC m=+7004.282835257" observedRunningTime="2025-11-28 08:49:10.701340233 +0000 UTC m=+7005.079405384" watchObservedRunningTime="2025-11-28 08:49:10.709420133 +0000 UTC m=+7005.087485274" Nov 28 08:49:10 crc kubenswrapper[4946]: I1128 08:49:10.737134 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57fb645c4f-tglxs" podStartSLOduration=3.737112849 podStartE2EDuration="3.737112849s" podCreationTimestamp="2025-11-28 08:49:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 08:49:10.732505725 +0000 UTC m=+7005.110570836" watchObservedRunningTime="2025-11-28 08:49:10.737112849 +0000 UTC m=+7005.115177960" Nov 28 08:49:18 crc kubenswrapper[4946]: I1128 08:49:18.401710 4946 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57fb645c4f-tglxs" Nov 28 08:49:18 crc kubenswrapper[4946]: I1128 08:49:18.507423 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65b98775c-hq9l6"] Nov 28 08:49:18 crc kubenswrapper[4946]: I1128 08:49:18.507694 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-65b98775c-hq9l6" podUID="43286ac2-75c9-470f-902a-e77da7b447fb" containerName="dnsmasq-dns" containerID="cri-o://398d867ed212e31cb88fbda498bdf894b9addc66a9ec67f2dab7bb743886fb14" gracePeriod=10 Nov 28 08:49:18 crc kubenswrapper[4946]: I1128 08:49:18.740509 4946 generic.go:334] "Generic (PLEG): container finished" podID="43286ac2-75c9-470f-902a-e77da7b447fb" containerID="398d867ed212e31cb88fbda498bdf894b9addc66a9ec67f2dab7bb743886fb14" exitCode=0 Nov 28 08:49:18 crc kubenswrapper[4946]: I1128 08:49:18.740580 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65b98775c-hq9l6" event={"ID":"43286ac2-75c9-470f-902a-e77da7b447fb","Type":"ContainerDied","Data":"398d867ed212e31cb88fbda498bdf894b9addc66a9ec67f2dab7bb743886fb14"} Nov 28 08:49:18 crc kubenswrapper[4946]: I1128 08:49:18.994053 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65b98775c-hq9l6" Nov 28 08:49:19 crc kubenswrapper[4946]: I1128 08:49:19.177837 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fsnc\" (UniqueName: \"kubernetes.io/projected/43286ac2-75c9-470f-902a-e77da7b447fb-kube-api-access-9fsnc\") pod \"43286ac2-75c9-470f-902a-e77da7b447fb\" (UID: \"43286ac2-75c9-470f-902a-e77da7b447fb\") " Nov 28 08:49:19 crc kubenswrapper[4946]: I1128 08:49:19.177936 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/43286ac2-75c9-470f-902a-e77da7b447fb-ovsdbserver-nb\") pod \"43286ac2-75c9-470f-902a-e77da7b447fb\" (UID: \"43286ac2-75c9-470f-902a-e77da7b447fb\") " Nov 28 08:49:19 crc kubenswrapper[4946]: I1128 08:49:19.178119 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43286ac2-75c9-470f-902a-e77da7b447fb-config\") pod \"43286ac2-75c9-470f-902a-e77da7b447fb\" (UID: \"43286ac2-75c9-470f-902a-e77da7b447fb\") " Nov 28 08:49:19 crc kubenswrapper[4946]: I1128 08:49:19.178156 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43286ac2-75c9-470f-902a-e77da7b447fb-dns-svc\") pod \"43286ac2-75c9-470f-902a-e77da7b447fb\" (UID: \"43286ac2-75c9-470f-902a-e77da7b447fb\") " Nov 28 08:49:19 crc kubenswrapper[4946]: I1128 08:49:19.178202 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/43286ac2-75c9-470f-902a-e77da7b447fb-ovsdbserver-sb\") pod \"43286ac2-75c9-470f-902a-e77da7b447fb\" (UID: \"43286ac2-75c9-470f-902a-e77da7b447fb\") " Nov 28 08:49:19 crc kubenswrapper[4946]: I1128 08:49:19.187594 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43286ac2-75c9-470f-902a-e77da7b447fb-kube-api-access-9fsnc" (OuterVolumeSpecName: "kube-api-access-9fsnc") pod "43286ac2-75c9-470f-902a-e77da7b447fb" (UID: "43286ac2-75c9-470f-902a-e77da7b447fb"). InnerVolumeSpecName "kube-api-access-9fsnc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:49:19 crc kubenswrapper[4946]: I1128 08:49:19.246403 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43286ac2-75c9-470f-902a-e77da7b447fb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "43286ac2-75c9-470f-902a-e77da7b447fb" (UID: "43286ac2-75c9-470f-902a-e77da7b447fb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:49:19 crc kubenswrapper[4946]: I1128 08:49:19.252359 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43286ac2-75c9-470f-902a-e77da7b447fb-config" (OuterVolumeSpecName: "config") pod "43286ac2-75c9-470f-902a-e77da7b447fb" (UID: "43286ac2-75c9-470f-902a-e77da7b447fb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:49:19 crc kubenswrapper[4946]: I1128 08:49:19.267606 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43286ac2-75c9-470f-902a-e77da7b447fb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "43286ac2-75c9-470f-902a-e77da7b447fb" (UID: "43286ac2-75c9-470f-902a-e77da7b447fb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:49:19 crc kubenswrapper[4946]: I1128 08:49:19.267880 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43286ac2-75c9-470f-902a-e77da7b447fb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "43286ac2-75c9-470f-902a-e77da7b447fb" (UID: "43286ac2-75c9-470f-902a-e77da7b447fb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:49:19 crc kubenswrapper[4946]: I1128 08:49:19.283078 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9fsnc\" (UniqueName: \"kubernetes.io/projected/43286ac2-75c9-470f-902a-e77da7b447fb-kube-api-access-9fsnc\") on node \"crc\" DevicePath \"\"" Nov 28 08:49:19 crc kubenswrapper[4946]: I1128 08:49:19.283823 4946 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/43286ac2-75c9-470f-902a-e77da7b447fb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 08:49:19 crc kubenswrapper[4946]: I1128 08:49:19.283944 4946 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43286ac2-75c9-470f-902a-e77da7b447fb-config\") on node \"crc\" DevicePath \"\"" Nov 28 08:49:19 crc kubenswrapper[4946]: I1128 08:49:19.283989 4946 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43286ac2-75c9-470f-902a-e77da7b447fb-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 08:49:19 crc kubenswrapper[4946]: I1128 08:49:19.284018 4946 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/43286ac2-75c9-470f-902a-e77da7b447fb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 08:49:19 crc kubenswrapper[4946]: I1128 08:49:19.757130 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5dfc584b48-xv5w9" Nov 28 08:49:19 crc kubenswrapper[4946]: I1128 08:49:19.762185 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65b98775c-hq9l6" 
event={"ID":"43286ac2-75c9-470f-902a-e77da7b447fb","Type":"ContainerDied","Data":"2c549c2afd503aaf2f794b47425827a89e3e568f520f1f5503ef86e790817f97"} Nov 28 08:49:19 crc kubenswrapper[4946]: I1128 08:49:19.762284 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65b98775c-hq9l6" Nov 28 08:49:19 crc kubenswrapper[4946]: I1128 08:49:19.762333 4946 scope.go:117] "RemoveContainer" containerID="398d867ed212e31cb88fbda498bdf894b9addc66a9ec67f2dab7bb743886fb14" Nov 28 08:49:19 crc kubenswrapper[4946]: I1128 08:49:19.802998 4946 scope.go:117] "RemoveContainer" containerID="08b006942894dcd13ac42b15081e2980b4aa84b1d8a1177ff0eb24e1f1819ade" Nov 28 08:49:19 crc kubenswrapper[4946]: I1128 08:49:19.831489 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65b98775c-hq9l6"] Nov 28 08:49:19 crc kubenswrapper[4946]: I1128 08:49:19.845923 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-65b98775c-hq9l6"] Nov 28 08:49:19 crc kubenswrapper[4946]: I1128 08:49:19.874647 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5dfc584b48-xv5w9" Nov 28 08:49:20 crc kubenswrapper[4946]: I1128 08:49:20.009237 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43286ac2-75c9-470f-902a-e77da7b447fb" path="/var/lib/kubelet/pods/43286ac2-75c9-470f-902a-e77da7b447fb/volumes" Nov 28 08:49:24 crc kubenswrapper[4946]: I1128 08:49:24.303007 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zqhsb"] Nov 28 08:49:24 crc kubenswrapper[4946]: E1128 08:49:24.303985 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43286ac2-75c9-470f-902a-e77da7b447fb" containerName="dnsmasq-dns" Nov 28 08:49:24 crc kubenswrapper[4946]: I1128 08:49:24.304004 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="43286ac2-75c9-470f-902a-e77da7b447fb" containerName="dnsmasq-dns" Nov 28 08:49:24 crc kubenswrapper[4946]: E1128 08:49:24.304054 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43286ac2-75c9-470f-902a-e77da7b447fb" containerName="init" Nov 28 08:49:24 crc kubenswrapper[4946]: I1128 08:49:24.304063 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="43286ac2-75c9-470f-902a-e77da7b447fb" containerName="init" Nov 28 08:49:24 crc kubenswrapper[4946]: I1128 08:49:24.304280 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="43286ac2-75c9-470f-902a-e77da7b447fb" containerName="dnsmasq-dns" Nov 28 08:49:24 crc kubenswrapper[4946]: I1128 08:49:24.305716 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zqhsb" Nov 28 08:49:24 crc kubenswrapper[4946]: I1128 08:49:24.316243 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zqhsb"] Nov 28 08:49:24 crc kubenswrapper[4946]: I1128 08:49:24.487813 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83d1400b-13ee-48a0-a8a3-57a12a80ad78-catalog-content\") pod \"community-operators-zqhsb\" (UID: \"83d1400b-13ee-48a0-a8a3-57a12a80ad78\") " pod="openshift-marketplace/community-operators-zqhsb" Nov 28 08:49:24 crc kubenswrapper[4946]: I1128 08:49:24.488177 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83d1400b-13ee-48a0-a8a3-57a12a80ad78-utilities\") pod \"community-operators-zqhsb\" (UID: \"83d1400b-13ee-48a0-a8a3-57a12a80ad78\") " pod="openshift-marketplace/community-operators-zqhsb" Nov 28 08:49:24 crc kubenswrapper[4946]: I1128 08:49:24.488262 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xv2w\" (UniqueName: \"kubernetes.io/projected/83d1400b-13ee-48a0-a8a3-57a12a80ad78-kube-api-access-6xv2w\") pod \"community-operators-zqhsb\" (UID: \"83d1400b-13ee-48a0-a8a3-57a12a80ad78\") " pod="openshift-marketplace/community-operators-zqhsb" Nov 28 08:49:24 crc kubenswrapper[4946]: I1128 08:49:24.589890 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83d1400b-13ee-48a0-a8a3-57a12a80ad78-catalog-content\") pod \"community-operators-zqhsb\" (UID: \"83d1400b-13ee-48a0-a8a3-57a12a80ad78\") " pod="openshift-marketplace/community-operators-zqhsb" Nov 28 08:49:24 crc kubenswrapper[4946]: I1128 08:49:24.589964 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83d1400b-13ee-48a0-a8a3-57a12a80ad78-utilities\") pod \"community-operators-zqhsb\" (UID: \"83d1400b-13ee-48a0-a8a3-57a12a80ad78\") " pod="openshift-marketplace/community-operators-zqhsb" Nov 28 08:49:24 crc kubenswrapper[4946]: I1128 08:49:24.590001 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xv2w\" (UniqueName: \"kubernetes.io/projected/83d1400b-13ee-48a0-a8a3-57a12a80ad78-kube-api-access-6xv2w\") pod \"community-operators-zqhsb\" (UID: \"83d1400b-13ee-48a0-a8a3-57a12a80ad78\") " pod="openshift-marketplace/community-operators-zqhsb" Nov 28 08:49:24 crc kubenswrapper[4946]: I1128 08:49:24.590489 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83d1400b-13ee-48a0-a8a3-57a12a80ad78-catalog-content\") pod \"community-operators-zqhsb\" (UID: \"83d1400b-13ee-48a0-a8a3-57a12a80ad78\") " pod="openshift-marketplace/community-operators-zqhsb" Nov 28 08:49:24 crc kubenswrapper[4946]: I1128 08:49:24.590557 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83d1400b-13ee-48a0-a8a3-57a12a80ad78-utilities\") pod \"community-operators-zqhsb\" (UID: \"83d1400b-13ee-48a0-a8a3-57a12a80ad78\") " pod="openshift-marketplace/community-operators-zqhsb" Nov 28 08:49:24 crc kubenswrapper[4946]: I1128 08:49:24.610435 4946 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6xv2w\" (UniqueName: \"kubernetes.io/projected/83d1400b-13ee-48a0-a8a3-57a12a80ad78-kube-api-access-6xv2w\") pod \"community-operators-zqhsb\" (UID: \"83d1400b-13ee-48a0-a8a3-57a12a80ad78\") " pod="openshift-marketplace/community-operators-zqhsb" Nov 28 08:49:24 crc kubenswrapper[4946]: I1128 08:49:24.689631 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zqhsb" Nov 28 08:49:25 crc kubenswrapper[4946]: I1128 08:49:25.177512 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zqhsb"] Nov 28 08:49:25 crc kubenswrapper[4946]: I1128 08:49:25.835453 4946 generic.go:334] "Generic (PLEG): container finished" podID="83d1400b-13ee-48a0-a8a3-57a12a80ad78" containerID="20f253f15f14cc8d4bbaa9edab6b8d23b60f4dd621e1318a1125bd9119bb48b6" exitCode=0 Nov 28 08:49:25 crc kubenswrapper[4946]: I1128 08:49:25.835613 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zqhsb" event={"ID":"83d1400b-13ee-48a0-a8a3-57a12a80ad78","Type":"ContainerDied","Data":"20f253f15f14cc8d4bbaa9edab6b8d23b60f4dd621e1318a1125bd9119bb48b6"} Nov 28 08:49:25 crc kubenswrapper[4946]: I1128 08:49:25.835824 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zqhsb" event={"ID":"83d1400b-13ee-48a0-a8a3-57a12a80ad78","Type":"ContainerStarted","Data":"e77130319540dadee4caf0ea889b96dfd45524c4bdaf4759bb18e88fad17fb8c"} Nov 28 08:49:26 crc kubenswrapper[4946]: I1128 08:49:26.591153 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-zkc24"] Nov 28 08:49:26 crc kubenswrapper[4946]: I1128 08:49:26.592452 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-zkc24" Nov 28 08:49:26 crc kubenswrapper[4946]: I1128 08:49:26.602449 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-zkc24"] Nov 28 08:49:26 crc kubenswrapper[4946]: I1128 08:49:26.682585 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-1d3f-account-create-update-q8h5l"] Nov 28 08:49:26 crc kubenswrapper[4946]: I1128 08:49:26.683654 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-1d3f-account-create-update-q8h5l" Nov 28 08:49:26 crc kubenswrapper[4946]: I1128 08:49:26.685694 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Nov 28 08:49:26 crc kubenswrapper[4946]: I1128 08:49:26.696528 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-1d3f-account-create-update-q8h5l"] Nov 28 08:49:26 crc kubenswrapper[4946]: I1128 08:49:26.729836 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d4e2519-b955-43b3-9110-690e645f9d15-operator-scripts\") pod \"neutron-db-create-zkc24\" (UID: \"1d4e2519-b955-43b3-9110-690e645f9d15\") " pod="openstack/neutron-db-create-zkc24" Nov 28 08:49:26 crc kubenswrapper[4946]: I1128 08:49:26.729900 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf2fw\" (UniqueName: \"kubernetes.io/projected/1d4e2519-b955-43b3-9110-690e645f9d15-kube-api-access-hf2fw\") pod \"neutron-db-create-zkc24\" (UID: \"1d4e2519-b955-43b3-9110-690e645f9d15\") " pod="openstack/neutron-db-create-zkc24" Nov 28 08:49:26 crc kubenswrapper[4946]: I1128 08:49:26.831250 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67c87597-b262-40ac-a349-9fd176f7e0da-operator-scripts\") pod \"neutron-1d3f-account-create-update-q8h5l\" (UID: \"67c87597-b262-40ac-a349-9fd176f7e0da\") " pod="openstack/neutron-1d3f-account-create-update-q8h5l" Nov 28 08:49:26 crc kubenswrapper[4946]: I1128 08:49:26.831351 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d4e2519-b955-43b3-9110-690e645f9d15-operator-scripts\") pod \"neutron-db-create-zkc24\" (UID: \"1d4e2519-b955-43b3-9110-690e645f9d15\") " pod="openstack/neutron-db-create-zkc24" Nov 28 08:49:26 crc kubenswrapper[4946]: I1128 08:49:26.831394 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hf2fw\" (UniqueName: \"kubernetes.io/projected/1d4e2519-b955-43b3-9110-690e645f9d15-kube-api-access-hf2fw\") pod \"neutron-db-create-zkc24\" (UID: \"1d4e2519-b955-43b3-9110-690e645f9d15\") " pod="openstack/neutron-db-create-zkc24" Nov 28 08:49:26 crc kubenswrapper[4946]: I1128 08:49:26.831447 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62xcg\" (UniqueName: \"kubernetes.io/projected/67c87597-b262-40ac-a349-9fd176f7e0da-kube-api-access-62xcg\") pod \"neutron-1d3f-account-create-update-q8h5l\" (UID: \"67c87597-b262-40ac-a349-9fd176f7e0da\") " pod="openstack/neutron-1d3f-account-create-update-q8h5l" Nov 28 08:49:26 crc kubenswrapper[4946]: I1128 08:49:26.832219 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d4e2519-b955-43b3-9110-690e645f9d15-operator-scripts\") pod \"neutron-db-create-zkc24\" (UID: \"1d4e2519-b955-43b3-9110-690e645f9d15\") " pod="openstack/neutron-db-create-zkc24" Nov 28 08:49:26 crc kubenswrapper[4946]: I1128 08:49:26.860635 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hf2fw\" (UniqueName: \"kubernetes.io/projected/1d4e2519-b955-43b3-9110-690e645f9d15-kube-api-access-hf2fw\") pod 
\"neutron-db-create-zkc24\" (UID: \"1d4e2519-b955-43b3-9110-690e645f9d15\") " pod="openstack/neutron-db-create-zkc24" Nov 28 08:49:26 crc kubenswrapper[4946]: I1128 08:49:26.910006 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-zkc24" Nov 28 08:49:26 crc kubenswrapper[4946]: I1128 08:49:26.933791 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62xcg\" (UniqueName: \"kubernetes.io/projected/67c87597-b262-40ac-a349-9fd176f7e0da-kube-api-access-62xcg\") pod \"neutron-1d3f-account-create-update-q8h5l\" (UID: \"67c87597-b262-40ac-a349-9fd176f7e0da\") " pod="openstack/neutron-1d3f-account-create-update-q8h5l" Nov 28 08:49:26 crc kubenswrapper[4946]: I1128 08:49:26.933955 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67c87597-b262-40ac-a349-9fd176f7e0da-operator-scripts\") pod \"neutron-1d3f-account-create-update-q8h5l\" (UID: \"67c87597-b262-40ac-a349-9fd176f7e0da\") " pod="openstack/neutron-1d3f-account-create-update-q8h5l" Nov 28 08:49:26 crc kubenswrapper[4946]: I1128 08:49:26.934849 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67c87597-b262-40ac-a349-9fd176f7e0da-operator-scripts\") pod \"neutron-1d3f-account-create-update-q8h5l\" (UID: \"67c87597-b262-40ac-a349-9fd176f7e0da\") " pod="openstack/neutron-1d3f-account-create-update-q8h5l" Nov 28 08:49:26 crc kubenswrapper[4946]: I1128 08:49:26.953197 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62xcg\" (UniqueName: \"kubernetes.io/projected/67c87597-b262-40ac-a349-9fd176f7e0da-kube-api-access-62xcg\") pod \"neutron-1d3f-account-create-update-q8h5l\" (UID: \"67c87597-b262-40ac-a349-9fd176f7e0da\") " pod="openstack/neutron-1d3f-account-create-update-q8h5l" Nov 28 08:49:27 crc kubenswrapper[4946]: I1128 08:49:27.000299 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-1d3f-account-create-update-q8h5l" Nov 28 08:49:27 crc kubenswrapper[4946]: I1128 08:49:27.369173 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-zkc24"] Nov 28 08:49:27 crc kubenswrapper[4946]: W1128 08:49:27.373871 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d4e2519_b955_43b3_9110_690e645f9d15.slice/crio-1cecbc7a01dd6a8207af0eb2f8926c3b1b02322ba1bd197ca93ad11f415adbb7 WatchSource:0}: Error finding container 1cecbc7a01dd6a8207af0eb2f8926c3b1b02322ba1bd197ca93ad11f415adbb7: Status 404 returned error can't find the container with id 1cecbc7a01dd6a8207af0eb2f8926c3b1b02322ba1bd197ca93ad11f415adbb7 Nov 28 08:49:27 crc kubenswrapper[4946]: I1128 08:49:27.481358 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-1d3f-account-create-update-q8h5l"] Nov 28 08:49:27 crc kubenswrapper[4946]: W1128 08:49:27.492676 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67c87597_b262_40ac_a349_9fd176f7e0da.slice/crio-cec79754e57447ea5a9839537f00e84eec0d505a640927cda2a141dfa004dd4a WatchSource:0}: Error finding container cec79754e57447ea5a9839537f00e84eec0d505a640927cda2a141dfa004dd4a: Status 404 returned error can't find the container with id cec79754e57447ea5a9839537f00e84eec0d505a640927cda2a141dfa004dd4a Nov 28 08:49:27 crc kubenswrapper[4946]: I1128 08:49:27.855653 4946 generic.go:334] "Generic (PLEG): container finished" podID="83d1400b-13ee-48a0-a8a3-57a12a80ad78" containerID="f9424838fa78827c5d474f614a4799aa52a18ed35e801d6224c1f07ec2e1bd9b" exitCode=0 Nov 28 08:49:27 crc kubenswrapper[4946]: I1128 08:49:27.856082 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zqhsb" event={"ID":"83d1400b-13ee-48a0-a8a3-57a12a80ad78","Type":"ContainerDied","Data":"f9424838fa78827c5d474f614a4799aa52a18ed35e801d6224c1f07ec2e1bd9b"} Nov 28 08:49:27 crc kubenswrapper[4946]: I1128 08:49:27.858100 4946 generic.go:334] "Generic (PLEG): container finished" podID="1d4e2519-b955-43b3-9110-690e645f9d15" containerID="d1e3230019f670958d19752551793badc7d7b5bace0fafa532f0f1ff77a392c0" exitCode=0 Nov 28 08:49:27 crc kubenswrapper[4946]: I1128 08:49:27.858178 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-zkc24" event={"ID":"1d4e2519-b955-43b3-9110-690e645f9d15","Type":"ContainerDied","Data":"d1e3230019f670958d19752551793badc7d7b5bace0fafa532f0f1ff77a392c0"} Nov 28 08:49:27 crc kubenswrapper[4946]: I1128 08:49:27.858260 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-zkc24" event={"ID":"1d4e2519-b955-43b3-9110-690e645f9d15","Type":"ContainerStarted","Data":"1cecbc7a01dd6a8207af0eb2f8926c3b1b02322ba1bd197ca93ad11f415adbb7"} Nov 28 08:49:27 crc kubenswrapper[4946]: I1128 08:49:27.867102 4946 generic.go:334] "Generic (PLEG): container finished" podID="67c87597-b262-40ac-a349-9fd176f7e0da" containerID="4923a975f389ceb2a20894cd7508fac09ae3ded9f07d16ff931787b058e7fbda" exitCode=0 Nov 28 08:49:27 crc kubenswrapper[4946]: I1128 08:49:27.867128 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-1d3f-account-create-update-q8h5l" event={"ID":"67c87597-b262-40ac-a349-9fd176f7e0da","Type":"ContainerDied","Data":"4923a975f389ceb2a20894cd7508fac09ae3ded9f07d16ff931787b058e7fbda"} Nov 28 08:49:27 crc 
kubenswrapper[4946]: I1128 08:49:27.867168 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-1d3f-account-create-update-q8h5l" event={"ID":"67c87597-b262-40ac-a349-9fd176f7e0da","Type":"ContainerStarted","Data":"cec79754e57447ea5a9839537f00e84eec0d505a640927cda2a141dfa004dd4a"} Nov 28 08:49:29 crc kubenswrapper[4946]: I1128 08:49:29.330297 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-1d3f-account-create-update-q8h5l" Nov 28 08:49:29 crc kubenswrapper[4946]: I1128 08:49:29.336817 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-zkc24" Nov 28 08:49:29 crc kubenswrapper[4946]: I1128 08:49:29.483712 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hf2fw\" (UniqueName: \"kubernetes.io/projected/1d4e2519-b955-43b3-9110-690e645f9d15-kube-api-access-hf2fw\") pod \"1d4e2519-b955-43b3-9110-690e645f9d15\" (UID: \"1d4e2519-b955-43b3-9110-690e645f9d15\") " Nov 28 08:49:29 crc kubenswrapper[4946]: I1128 08:49:29.483890 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d4e2519-b955-43b3-9110-690e645f9d15-operator-scripts\") pod \"1d4e2519-b955-43b3-9110-690e645f9d15\" (UID: \"1d4e2519-b955-43b3-9110-690e645f9d15\") " Nov 28 08:49:29 crc kubenswrapper[4946]: I1128 08:49:29.484007 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67c87597-b262-40ac-a349-9fd176f7e0da-operator-scripts\") pod \"67c87597-b262-40ac-a349-9fd176f7e0da\" (UID: \"67c87597-b262-40ac-a349-9fd176f7e0da\") " Nov 28 08:49:29 crc kubenswrapper[4946]: I1128 08:49:29.484223 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62xcg\" (UniqueName: \"kubernetes.io/projected/67c87597-b262-40ac-a349-9fd176f7e0da-kube-api-access-62xcg\") pod \"67c87597-b262-40ac-a349-9fd176f7e0da\" (UID: \"67c87597-b262-40ac-a349-9fd176f7e0da\") " Nov 28 08:49:29 crc kubenswrapper[4946]: I1128 08:49:29.484766 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d4e2519-b955-43b3-9110-690e645f9d15-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1d4e2519-b955-43b3-9110-690e645f9d15" (UID: "1d4e2519-b955-43b3-9110-690e645f9d15"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:49:29 crc kubenswrapper[4946]: I1128 08:49:29.484771 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67c87597-b262-40ac-a349-9fd176f7e0da-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "67c87597-b262-40ac-a349-9fd176f7e0da" (UID: "67c87597-b262-40ac-a349-9fd176f7e0da"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:49:29 crc kubenswrapper[4946]: I1128 08:49:29.490519 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67c87597-b262-40ac-a349-9fd176f7e0da-kube-api-access-62xcg" (OuterVolumeSpecName: "kube-api-access-62xcg") pod "67c87597-b262-40ac-a349-9fd176f7e0da" (UID: "67c87597-b262-40ac-a349-9fd176f7e0da"). InnerVolumeSpecName "kube-api-access-62xcg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:49:29 crc kubenswrapper[4946]: I1128 08:49:29.490672 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d4e2519-b955-43b3-9110-690e645f9d15-kube-api-access-hf2fw" (OuterVolumeSpecName: "kube-api-access-hf2fw") pod "1d4e2519-b955-43b3-9110-690e645f9d15" (UID: "1d4e2519-b955-43b3-9110-690e645f9d15"). InnerVolumeSpecName "kube-api-access-hf2fw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:49:29 crc kubenswrapper[4946]: I1128 08:49:29.586208 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hf2fw\" (UniqueName: \"kubernetes.io/projected/1d4e2519-b955-43b3-9110-690e645f9d15-kube-api-access-hf2fw\") on node \"crc\" DevicePath \"\"" Nov 28 08:49:29 crc kubenswrapper[4946]: I1128 08:49:29.586244 4946 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d4e2519-b955-43b3-9110-690e645f9d15-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 08:49:29 crc kubenswrapper[4946]: I1128 08:49:29.586255 4946 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67c87597-b262-40ac-a349-9fd176f7e0da-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 08:49:29 crc kubenswrapper[4946]: I1128 08:49:29.586264 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62xcg\" (UniqueName: \"kubernetes.io/projected/67c87597-b262-40ac-a349-9fd176f7e0da-kube-api-access-62xcg\") on node \"crc\" DevicePath \"\"" Nov 28 08:49:29 crc kubenswrapper[4946]: I1128 08:49:29.896676 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-1d3f-account-create-update-q8h5l" event={"ID":"67c87597-b262-40ac-a349-9fd176f7e0da","Type":"ContainerDied","Data":"cec79754e57447ea5a9839537f00e84eec0d505a640927cda2a141dfa004dd4a"} Nov 28 08:49:29 crc kubenswrapper[4946]: I1128 08:49:29.896730 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cec79754e57447ea5a9839537f00e84eec0d505a640927cda2a141dfa004dd4a" Nov 28 08:49:29 crc kubenswrapper[4946]: I1128 08:49:29.896739 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-1d3f-account-create-update-q8h5l" Nov 28 08:49:29 crc kubenswrapper[4946]: I1128 08:49:29.900088 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-zkc24" event={"ID":"1d4e2519-b955-43b3-9110-690e645f9d15","Type":"ContainerDied","Data":"1cecbc7a01dd6a8207af0eb2f8926c3b1b02322ba1bd197ca93ad11f415adbb7"} Nov 28 08:49:29 crc kubenswrapper[4946]: I1128 08:49:29.900108 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cecbc7a01dd6a8207af0eb2f8926c3b1b02322ba1bd197ca93ad11f415adbb7" Nov 28 08:49:29 crc kubenswrapper[4946]: I1128 08:49:29.900166 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-zkc24" Nov 28 08:49:30 crc kubenswrapper[4946]: I1128 08:49:30.918918 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zqhsb" event={"ID":"83d1400b-13ee-48a0-a8a3-57a12a80ad78","Type":"ContainerStarted","Data":"78c9ae7a637d4de266787325c00bf8da07ebbcd958faec3cdcce0c89799fe734"} Nov 28 08:49:30 crc kubenswrapper[4946]: I1128 08:49:30.953674 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zqhsb" podStartSLOduration=2.955129334 podStartE2EDuration="6.953562984s" podCreationTimestamp="2025-11-28 08:49:24 +0000 UTC" firstStartedPulling="2025-11-28 08:49:25.841635717 +0000 UTC m=+7020.219700858" lastFinishedPulling="2025-11-28 08:49:29.840069397 +0000 UTC m=+7024.218134508" observedRunningTime="2025-11-28 08:49:30.943837623 +0000 UTC m=+7025.321902834" watchObservedRunningTime="2025-11-28 08:49:30.953562984 +0000 UTC m=+7025.331628165" Nov 28 08:49:31 crc kubenswrapper[4946]: I1128 08:49:31.890095 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-srwds"] Nov 28 08:49:31 crc kubenswrapper[4946]: E1128 08:49:31.890798 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67c87597-b262-40ac-a349-9fd176f7e0da" containerName="mariadb-account-create-update" Nov 28 08:49:31 crc kubenswrapper[4946]: I1128 08:49:31.890817 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="67c87597-b262-40ac-a349-9fd176f7e0da" containerName="mariadb-account-create-update" Nov 28 08:49:31 crc kubenswrapper[4946]: E1128 08:49:31.890853 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d4e2519-b955-43b3-9110-690e645f9d15" containerName="mariadb-database-create" Nov 28 08:49:31 crc kubenswrapper[4946]: I1128 08:49:31.890862 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d4e2519-b955-43b3-9110-690e645f9d15" containerName="mariadb-database-create" Nov 28 08:49:31 crc kubenswrapper[4946]: I1128 08:49:31.891093 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d4e2519-b955-43b3-9110-690e645f9d15" containerName="mariadb-database-create" Nov 28 08:49:31 crc kubenswrapper[4946]: I1128 08:49:31.891118 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="67c87597-b262-40ac-a349-9fd176f7e0da" containerName="mariadb-account-create-update" Nov 28 08:49:31 crc kubenswrapper[4946]: I1128 08:49:31.891686 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-srwds" Nov 28 08:49:31 crc kubenswrapper[4946]: I1128 08:49:31.894139 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 28 08:49:31 crc kubenswrapper[4946]: I1128 08:49:31.895844 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-kpp6g" Nov 28 08:49:31 crc kubenswrapper[4946]: I1128 08:49:31.895896 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 28 08:49:31 crc kubenswrapper[4946]: I1128 08:49:31.915417 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-srwds"] Nov 28 08:49:32 crc kubenswrapper[4946]: I1128 08:49:32.029985 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh5cs\" (UniqueName: \"kubernetes.io/projected/bff22882-abd9-45cb-b73b-f71e66526523-kube-api-access-wh5cs\") pod \"neutron-db-sync-srwds\" (UID: \"bff22882-abd9-45cb-b73b-f71e66526523\") " pod="openstack/neutron-db-sync-srwds" Nov 28 08:49:32 crc kubenswrapper[4946]: I1128 08:49:32.030068 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bff22882-abd9-45cb-b73b-f71e66526523-combined-ca-bundle\") pod \"neutron-db-sync-srwds\" (UID: \"bff22882-abd9-45cb-b73b-f71e66526523\") " pod="openstack/neutron-db-sync-srwds" Nov 28 08:49:32 crc kubenswrapper[4946]: I1128 08:49:32.030110 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bff22882-abd9-45cb-b73b-f71e66526523-config\") pod \"neutron-db-sync-srwds\" (UID: \"bff22882-abd9-45cb-b73b-f71e66526523\") " pod="openstack/neutron-db-sync-srwds" Nov 28 08:49:32 crc kubenswrapper[4946]: I1128 08:49:32.132081 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bff22882-abd9-45cb-b73b-f71e66526523-config\") pod \"neutron-db-sync-srwds\" (UID: \"bff22882-abd9-45cb-b73b-f71e66526523\") " pod="openstack/neutron-db-sync-srwds" Nov 28 08:49:32 crc kubenswrapper[4946]: I1128 08:49:32.132226 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wh5cs\" (UniqueName: \"kubernetes.io/projected/bff22882-abd9-45cb-b73b-f71e66526523-kube-api-access-wh5cs\") pod \"neutron-db-sync-srwds\" (UID: \"bff22882-abd9-45cb-b73b-f71e66526523\") " pod="openstack/neutron-db-sync-srwds" Nov 28 08:49:32 crc kubenswrapper[4946]: I1128 08:49:32.132290 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bff22882-abd9-45cb-b73b-f71e66526523-combined-ca-bundle\") pod \"neutron-db-sync-srwds\" (UID: \"bff22882-abd9-45cb-b73b-f71e66526523\") " pod="openstack/neutron-db-sync-srwds" Nov 28 08:49:32 crc kubenswrapper[4946]: I1128 08:49:32.138421 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/bff22882-abd9-45cb-b73b-f71e66526523-config\") pod \"neutron-db-sync-srwds\" (UID: \"bff22882-abd9-45cb-b73b-f71e66526523\") " pod="openstack/neutron-db-sync-srwds" Nov 28 08:49:32 crc kubenswrapper[4946]: I1128 08:49:32.138716 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bff22882-abd9-45cb-b73b-f71e66526523-combined-ca-bundle\") pod \"neutron-db-sync-srwds\" (UID: \"bff22882-abd9-45cb-b73b-f71e66526523\") " pod="openstack/neutron-db-sync-srwds" Nov 28 08:49:32 crc kubenswrapper[4946]: I1128 08:49:32.153683 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wh5cs\" (UniqueName: \"kubernetes.io/projected/bff22882-abd9-45cb-b73b-f71e66526523-kube-api-access-wh5cs\") pod \"neutron-db-sync-srwds\" (UID: \"bff22882-abd9-45cb-b73b-f71e66526523\") " pod="openstack/neutron-db-sync-srwds" Nov 28 08:49:32 crc kubenswrapper[4946]: I1128 08:49:32.209735 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-srwds" Nov 28 08:49:32 crc kubenswrapper[4946]: I1128 08:49:32.689109 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-srwds"] Nov 28 08:49:32 crc kubenswrapper[4946]: I1128 08:49:32.939416 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-srwds" event={"ID":"bff22882-abd9-45cb-b73b-f71e66526523","Type":"ContainerStarted","Data":"ef70ec24a253d7b5111401994b607d37cf93f5710192a3dd7f04d143d9234f8d"} Nov 28 08:49:33 crc kubenswrapper[4946]: I1128 08:49:33.956366 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-srwds" event={"ID":"bff22882-abd9-45cb-b73b-f71e66526523","Type":"ContainerStarted","Data":"8666d385cd752a53250dd22410d8fbb4741e93836b5b7bf2960a49f41b510058"} Nov 28 08:49:33 crc kubenswrapper[4946]: I1128 08:49:33.979796 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-srwds" podStartSLOduration=2.9797760220000002 podStartE2EDuration="2.979776022s" podCreationTimestamp="2025-11-28 08:49:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 08:49:33.970849281 +0000 UTC m=+7028.348914412" watchObservedRunningTime="2025-11-28 08:49:33.979776022 +0000 UTC m=+7028.357841143" Nov 28 08:49:34 crc kubenswrapper[4946]: I1128 08:49:34.690596 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zqhsb" Nov 28 08:49:34 crc kubenswrapper[4946]: I1128 08:49:34.693022 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zqhsb" Nov 28 08:49:34 crc kubenswrapper[4946]: I1128 08:49:34.768931 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zqhsb" Nov 28 08:49:36 crc kubenswrapper[4946]: I1128 08:49:36.044639 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zqhsb" Nov 28 08:49:36 crc kubenswrapper[4946]: I1128 08:49:36.119127 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zqhsb"] Nov 28 08:49:36 crc kubenswrapper[4946]: I1128 08:49:36.993058 4946 generic.go:334] "Generic (PLEG): container finished" podID="bff22882-abd9-45cb-b73b-f71e66526523" containerID="8666d385cd752a53250dd22410d8fbb4741e93836b5b7bf2960a49f41b510058" exitCode=0 Nov 28 08:49:36 crc kubenswrapper[4946]: I1128 08:49:36.993156 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-srwds" 
event={"ID":"bff22882-abd9-45cb-b73b-f71e66526523","Type":"ContainerDied","Data":"8666d385cd752a53250dd22410d8fbb4741e93836b5b7bf2960a49f41b510058"} Nov 28 08:49:38 crc kubenswrapper[4946]: I1128 08:49:38.007128 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zqhsb" podUID="83d1400b-13ee-48a0-a8a3-57a12a80ad78" containerName="registry-server" containerID="cri-o://78c9ae7a637d4de266787325c00bf8da07ebbcd958faec3cdcce0c89799fe734" gracePeriod=2 Nov 28 08:49:38 crc kubenswrapper[4946]: I1128 08:49:38.440804 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-srwds" Nov 28 08:49:38 crc kubenswrapper[4946]: I1128 08:49:38.583167 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wh5cs\" (UniqueName: \"kubernetes.io/projected/bff22882-abd9-45cb-b73b-f71e66526523-kube-api-access-wh5cs\") pod \"bff22882-abd9-45cb-b73b-f71e66526523\" (UID: \"bff22882-abd9-45cb-b73b-f71e66526523\") " Nov 28 08:49:38 crc kubenswrapper[4946]: I1128 08:49:38.583316 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bff22882-abd9-45cb-b73b-f71e66526523-config\") pod \"bff22882-abd9-45cb-b73b-f71e66526523\" (UID: \"bff22882-abd9-45cb-b73b-f71e66526523\") " Nov 28 08:49:38 crc kubenswrapper[4946]: I1128 08:49:38.583422 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bff22882-abd9-45cb-b73b-f71e66526523-combined-ca-bundle\") pod \"bff22882-abd9-45cb-b73b-f71e66526523\" (UID: \"bff22882-abd9-45cb-b73b-f71e66526523\") " Nov 28 08:49:38 crc kubenswrapper[4946]: I1128 08:49:38.595688 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bff22882-abd9-45cb-b73b-f71e66526523-kube-api-access-wh5cs" (OuterVolumeSpecName: "kube-api-access-wh5cs") pod "bff22882-abd9-45cb-b73b-f71e66526523" (UID: "bff22882-abd9-45cb-b73b-f71e66526523"). InnerVolumeSpecName "kube-api-access-wh5cs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:49:38 crc kubenswrapper[4946]: I1128 08:49:38.610177 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bff22882-abd9-45cb-b73b-f71e66526523-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bff22882-abd9-45cb-b73b-f71e66526523" (UID: "bff22882-abd9-45cb-b73b-f71e66526523"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:49:38 crc kubenswrapper[4946]: I1128 08:49:38.610291 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bff22882-abd9-45cb-b73b-f71e66526523-config" (OuterVolumeSpecName: "config") pod "bff22882-abd9-45cb-b73b-f71e66526523" (UID: "bff22882-abd9-45cb-b73b-f71e66526523"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:49:38 crc kubenswrapper[4946]: I1128 08:49:38.684963 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wh5cs\" (UniqueName: \"kubernetes.io/projected/bff22882-abd9-45cb-b73b-f71e66526523-kube-api-access-wh5cs\") on node \"crc\" DevicePath \"\"" Nov 28 08:49:38 crc kubenswrapper[4946]: I1128 08:49:38.684997 4946 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/bff22882-abd9-45cb-b73b-f71e66526523-config\") on node \"crc\" DevicePath \"\"" Nov 28 08:49:38 crc kubenswrapper[4946]: I1128 08:49:38.685010 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bff22882-abd9-45cb-b73b-f71e66526523-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 08:49:39 crc kubenswrapper[4946]: I1128 08:49:39.047361 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-srwds" event={"ID":"bff22882-abd9-45cb-b73b-f71e66526523","Type":"ContainerDied","Data":"ef70ec24a253d7b5111401994b607d37cf93f5710192a3dd7f04d143d9234f8d"} Nov 28 08:49:39 crc kubenswrapper[4946]: I1128 08:49:39.047412 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef70ec24a253d7b5111401994b607d37cf93f5710192a3dd7f04d143d9234f8d" Nov 28 08:49:39 crc kubenswrapper[4946]: I1128 08:49:39.047531 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-srwds" Nov 28 08:49:39 crc kubenswrapper[4946]: I1128 08:49:39.060387 4946 generic.go:334] "Generic (PLEG): container finished" podID="83d1400b-13ee-48a0-a8a3-57a12a80ad78" containerID="78c9ae7a637d4de266787325c00bf8da07ebbcd958faec3cdcce0c89799fe734" exitCode=0 Nov 28 08:49:39 crc kubenswrapper[4946]: I1128 08:49:39.060427 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zqhsb" event={"ID":"83d1400b-13ee-48a0-a8a3-57a12a80ad78","Type":"ContainerDied","Data":"78c9ae7a637d4de266787325c00bf8da07ebbcd958faec3cdcce0c89799fe734"} Nov 28 08:49:39 crc kubenswrapper[4946]: I1128 08:49:39.096382 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zqhsb" Nov 28 08:49:39 crc kubenswrapper[4946]: I1128 08:49:39.185781 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-c69597795-dsvbw"] Nov 28 08:49:39 crc kubenswrapper[4946]: E1128 08:49:39.186161 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83d1400b-13ee-48a0-a8a3-57a12a80ad78" containerName="extract-utilities" Nov 28 08:49:39 crc kubenswrapper[4946]: I1128 08:49:39.186179 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="83d1400b-13ee-48a0-a8a3-57a12a80ad78" containerName="extract-utilities" Nov 28 08:49:39 crc kubenswrapper[4946]: E1128 08:49:39.186209 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bff22882-abd9-45cb-b73b-f71e66526523" containerName="neutron-db-sync" Nov 28 08:49:39 crc kubenswrapper[4946]: I1128 08:49:39.186217 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="bff22882-abd9-45cb-b73b-f71e66526523" containerName="neutron-db-sync" Nov 28 08:49:39 crc kubenswrapper[4946]: E1128 08:49:39.186239 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83d1400b-13ee-48a0-a8a3-57a12a80ad78" containerName="registry-server" Nov 28 08:49:39 crc kubenswrapper[4946]: I1128 08:49:39.186245 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="83d1400b-13ee-48a0-a8a3-57a12a80ad78" containerName="registry-server" Nov 28 08:49:39 crc kubenswrapper[4946]: E1128 08:49:39.186260 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83d1400b-13ee-48a0-a8a3-57a12a80ad78" containerName="extract-content" Nov 28 08:49:39 crc kubenswrapper[4946]: I1128 08:49:39.186265 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="83d1400b-13ee-48a0-a8a3-57a12a80ad78" containerName="extract-content" Nov 28 08:49:39 crc kubenswrapper[4946]: I1128 08:49:39.186429 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="83d1400b-13ee-48a0-a8a3-57a12a80ad78" containerName="registry-server" Nov 28 08:49:39 crc kubenswrapper[4946]: I1128 08:49:39.186441 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="bff22882-abd9-45cb-b73b-f71e66526523" containerName="neutron-db-sync" Nov 28 08:49:39 crc kubenswrapper[4946]: I1128 08:49:39.187388 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c69597795-dsvbw" Nov 28 08:49:39 crc kubenswrapper[4946]: I1128 08:49:39.196417 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83d1400b-13ee-48a0-a8a3-57a12a80ad78-catalog-content\") pod \"83d1400b-13ee-48a0-a8a3-57a12a80ad78\" (UID: \"83d1400b-13ee-48a0-a8a3-57a12a80ad78\") " Nov 28 08:49:39 crc kubenswrapper[4946]: I1128 08:49:39.196504 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xv2w\" (UniqueName: \"kubernetes.io/projected/83d1400b-13ee-48a0-a8a3-57a12a80ad78-kube-api-access-6xv2w\") pod \"83d1400b-13ee-48a0-a8a3-57a12a80ad78\" (UID: \"83d1400b-13ee-48a0-a8a3-57a12a80ad78\") " Nov 28 08:49:39 crc kubenswrapper[4946]: I1128 08:49:39.196627 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83d1400b-13ee-48a0-a8a3-57a12a80ad78-utilities\") pod \"83d1400b-13ee-48a0-a8a3-57a12a80ad78\" (UID: \"83d1400b-13ee-48a0-a8a3-57a12a80ad78\") " Nov 28 08:49:39 crc kubenswrapper[4946]: I1128 08:49:39.197747 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83d1400b-13ee-48a0-a8a3-57a12a80ad78-utilities" (OuterVolumeSpecName: "utilities") pod "83d1400b-13ee-48a0-a8a3-57a12a80ad78" (UID: "83d1400b-13ee-48a0-a8a3-57a12a80ad78"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 08:49:39 crc kubenswrapper[4946]: I1128 08:49:39.197761 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c69597795-dsvbw"] Nov 28 08:49:39 crc kubenswrapper[4946]: I1128 08:49:39.208682 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83d1400b-13ee-48a0-a8a3-57a12a80ad78-kube-api-access-6xv2w" (OuterVolumeSpecName: "kube-api-access-6xv2w") pod "83d1400b-13ee-48a0-a8a3-57a12a80ad78" (UID: "83d1400b-13ee-48a0-a8a3-57a12a80ad78"). InnerVolumeSpecName "kube-api-access-6xv2w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:49:39 crc kubenswrapper[4946]: I1128 08:49:39.261376 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-66757bf5bf-g455z"] Nov 28 08:49:39 crc kubenswrapper[4946]: I1128 08:49:39.262853 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-66757bf5bf-g455z" Nov 28 08:49:39 crc kubenswrapper[4946]: I1128 08:49:39.265810 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-kpp6g" Nov 28 08:49:39 crc kubenswrapper[4946]: I1128 08:49:39.266087 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 28 08:49:39 crc kubenswrapper[4946]: I1128 08:49:39.266237 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 28 08:49:39 crc kubenswrapper[4946]: I1128 08:49:39.274166 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83d1400b-13ee-48a0-a8a3-57a12a80ad78-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "83d1400b-13ee-48a0-a8a3-57a12a80ad78" (UID: "83d1400b-13ee-48a0-a8a3-57a12a80ad78"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 08:49:39 crc kubenswrapper[4946]: I1128 08:49:39.287708 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-66757bf5bf-g455z"] Nov 28 08:49:39 crc kubenswrapper[4946]: I1128 08:49:39.298390 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4594554-584e-461f-adf9-5864bbc232af-ovsdbserver-sb\") pod \"dnsmasq-dns-c69597795-dsvbw\" (UID: \"d4594554-584e-461f-adf9-5864bbc232af\") " pod="openstack/dnsmasq-dns-c69597795-dsvbw" Nov 28 08:49:39 crc kubenswrapper[4946]: I1128 08:49:39.298474 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4594554-584e-461f-adf9-5864bbc232af-dns-svc\") pod \"dnsmasq-dns-c69597795-dsvbw\" (UID: \"d4594554-584e-461f-adf9-5864bbc232af\") " pod="openstack/dnsmasq-dns-c69597795-dsvbw" Nov 28 08:49:39 crc kubenswrapper[4946]: I1128 08:49:39.298521 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4594554-584e-461f-adf9-5864bbc232af-ovsdbserver-nb\") pod \"dnsmasq-dns-c69597795-dsvbw\" (UID: \"d4594554-584e-461f-adf9-5864bbc232af\") " pod="openstack/dnsmasq-dns-c69597795-dsvbw" Nov 28 08:49:39 crc kubenswrapper[4946]: I1128 08:49:39.298605 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p629m\" (UniqueName: \"kubernetes.io/projected/d4594554-584e-461f-adf9-5864bbc232af-kube-api-access-p629m\") pod \"dnsmasq-dns-c69597795-dsvbw\" (UID: \"d4594554-584e-461f-adf9-5864bbc232af\") " pod="openstack/dnsmasq-dns-c69597795-dsvbw" Nov 28 08:49:39 crc kubenswrapper[4946]: I1128 08:49:39.298666 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4594554-584e-461f-adf9-5864bbc232af-config\") pod \"dnsmasq-dns-c69597795-dsvbw\" (UID: \"d4594554-584e-461f-adf9-5864bbc232af\") " pod="openstack/dnsmasq-dns-c69597795-dsvbw" Nov 28 08:49:39 crc kubenswrapper[4946]: I1128 08:49:39.298714 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83d1400b-13ee-48a0-a8a3-57a12a80ad78-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 08:49:39 crc kubenswrapper[4946]: I1128 08:49:39.298726 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83d1400b-13ee-48a0-a8a3-57a12a80ad78-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 08:49:39 crc kubenswrapper[4946]: I1128 08:49:39.298737 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xv2w\" (UniqueName: \"kubernetes.io/projected/83d1400b-13ee-48a0-a8a3-57a12a80ad78-kube-api-access-6xv2w\") on node \"crc\" DevicePath \"\"" Nov 28 08:49:39 crc kubenswrapper[4946]: I1128 08:49:39.399927 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/69c4c930-04a2-4502-8802-01d04186e378-httpd-config\") pod \"neutron-66757bf5bf-g455z\" (UID: \"69c4c930-04a2-4502-8802-01d04186e378\") " pod="openstack/neutron-66757bf5bf-g455z" Nov 28 08:49:39 crc kubenswrapper[4946]: I1128 08:49:39.399977 4946 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4594554-584e-461f-adf9-5864bbc232af-config\") pod \"dnsmasq-dns-c69597795-dsvbw\" (UID: \"d4594554-584e-461f-adf9-5864bbc232af\") " pod="openstack/dnsmasq-dns-c69597795-dsvbw" Nov 28 08:49:39 crc kubenswrapper[4946]: I1128 08:49:39.400011 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4594554-584e-461f-adf9-5864bbc232af-ovsdbserver-sb\") pod \"dnsmasq-dns-c69597795-dsvbw\" (UID: \"d4594554-584e-461f-adf9-5864bbc232af\") " pod="openstack/dnsmasq-dns-c69597795-dsvbw" Nov 28 08:49:39 crc kubenswrapper[4946]: I1128 08:49:39.400238 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4594554-584e-461f-adf9-5864bbc232af-dns-svc\") pod \"dnsmasq-dns-c69597795-dsvbw\" (UID: \"d4594554-584e-461f-adf9-5864bbc232af\") " pod="openstack/dnsmasq-dns-c69597795-dsvbw" Nov 28 08:49:39 crc kubenswrapper[4946]: I1128 08:49:39.400309 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dthlh\" (UniqueName: \"kubernetes.io/projected/69c4c930-04a2-4502-8802-01d04186e378-kube-api-access-dthlh\") pod \"neutron-66757bf5bf-g455z\" (UID: \"69c4c930-04a2-4502-8802-01d04186e378\") " pod="openstack/neutron-66757bf5bf-g455z" Nov 28 08:49:39 crc kubenswrapper[4946]: I1128 08:49:39.400351 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4594554-584e-461f-adf9-5864bbc232af-ovsdbserver-nb\") pod \"dnsmasq-dns-c69597795-dsvbw\" (UID: \"d4594554-584e-461f-adf9-5864bbc232af\") " pod="openstack/dnsmasq-dns-c69597795-dsvbw" Nov 28 08:49:39 crc kubenswrapper[4946]: I1128 08:49:39.400429 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p629m\" (UniqueName: \"kubernetes.io/projected/d4594554-584e-461f-adf9-5864bbc232af-kube-api-access-p629m\") pod \"dnsmasq-dns-c69597795-dsvbw\" (UID: \"d4594554-584e-461f-adf9-5864bbc232af\") " pod="openstack/dnsmasq-dns-c69597795-dsvbw" Nov 28 08:49:39 crc kubenswrapper[4946]: I1128 08:49:39.400458 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/69c4c930-04a2-4502-8802-01d04186e378-config\") pod \"neutron-66757bf5bf-g455z\" (UID: \"69c4c930-04a2-4502-8802-01d04186e378\") " pod="openstack/neutron-66757bf5bf-g455z" Nov 28 08:49:39 crc kubenswrapper[4946]: I1128 08:49:39.400518 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69c4c930-04a2-4502-8802-01d04186e378-combined-ca-bundle\") pod \"neutron-66757bf5bf-g455z\" (UID: \"69c4c930-04a2-4502-8802-01d04186e378\") " pod="openstack/neutron-66757bf5bf-g455z" Nov 28 08:49:39 crc kubenswrapper[4946]: I1128 08:49:39.400787 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4594554-584e-461f-adf9-5864bbc232af-ovsdbserver-sb\") pod \"dnsmasq-dns-c69597795-dsvbw\" (UID: \"d4594554-584e-461f-adf9-5864bbc232af\") " pod="openstack/dnsmasq-dns-c69597795-dsvbw" Nov 28 08:49:39 crc kubenswrapper[4946]: I1128 08:49:39.401093 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4594554-584e-461f-adf9-5864bbc232af-ovsdbserver-nb\") pod \"dnsmasq-dns-c69597795-dsvbw\" (UID: \"d4594554-584e-461f-adf9-5864bbc232af\") " pod="openstack/dnsmasq-dns-c69597795-dsvbw" Nov 28 08:49:39 crc kubenswrapper[4946]: I1128 08:49:39.401418 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4594554-584e-461f-adf9-5864bbc232af-config\") pod \"dnsmasq-dns-c69597795-dsvbw\" (UID: \"d4594554-584e-461f-adf9-5864bbc232af\") " pod="openstack/dnsmasq-dns-c69597795-dsvbw" Nov 28 08:49:39 crc kubenswrapper[4946]: I1128 08:49:39.401548 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4594554-584e-461f-adf9-5864bbc232af-dns-svc\") pod \"dnsmasq-dns-c69597795-dsvbw\" (UID: \"d4594554-584e-461f-adf9-5864bbc232af\") " pod="openstack/dnsmasq-dns-c69597795-dsvbw" Nov 28 08:49:39 crc kubenswrapper[4946]: I1128 08:49:39.416219 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p629m\" (UniqueName: \"kubernetes.io/projected/d4594554-584e-461f-adf9-5864bbc232af-kube-api-access-p629m\") pod \"dnsmasq-dns-c69597795-dsvbw\" (UID: \"d4594554-584e-461f-adf9-5864bbc232af\") " pod="openstack/dnsmasq-dns-c69597795-dsvbw" Nov 28 08:49:39 crc kubenswrapper[4946]: I1128 08:49:39.502479 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/69c4c930-04a2-4502-8802-01d04186e378-config\") pod \"neutron-66757bf5bf-g455z\" (UID: \"69c4c930-04a2-4502-8802-01d04186e378\") " pod="openstack/neutron-66757bf5bf-g455z" Nov 28 08:49:39 crc kubenswrapper[4946]: I1128 08:49:39.502638 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69c4c930-04a2-4502-8802-01d04186e378-combined-ca-bundle\") pod \"neutron-66757bf5bf-g455z\" (UID: \"69c4c930-04a2-4502-8802-01d04186e378\") " pod="openstack/neutron-66757bf5bf-g455z" Nov 28 08:49:39 crc kubenswrapper[4946]: I1128 08:49:39.502699 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/69c4c930-04a2-4502-8802-01d04186e378-httpd-config\") pod \"neutron-66757bf5bf-g455z\" (UID: \"69c4c930-04a2-4502-8802-01d04186e378\") " pod="openstack/neutron-66757bf5bf-g455z" Nov 28 08:49:39 crc kubenswrapper[4946]: I1128 08:49:39.502766 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dthlh\" (UniqueName: \"kubernetes.io/projected/69c4c930-04a2-4502-8802-01d04186e378-kube-api-access-dthlh\") pod \"neutron-66757bf5bf-g455z\" (UID: \"69c4c930-04a2-4502-8802-01d04186e378\") " pod="openstack/neutron-66757bf5bf-g455z" Nov 28 08:49:39 crc kubenswrapper[4946]: I1128 08:49:39.506030 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69c4c930-04a2-4502-8802-01d04186e378-combined-ca-bundle\") pod \"neutron-66757bf5bf-g455z\" (UID: \"69c4c930-04a2-4502-8802-01d04186e378\") " pod="openstack/neutron-66757bf5bf-g455z" Nov 28 08:49:39 crc kubenswrapper[4946]: I1128 08:49:39.507241 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/69c4c930-04a2-4502-8802-01d04186e378-config\") pod \"neutron-66757bf5bf-g455z\" (UID: 
\"69c4c930-04a2-4502-8802-01d04186e378\") " pod="openstack/neutron-66757bf5bf-g455z" Nov 28 08:49:39 crc kubenswrapper[4946]: I1128 08:49:39.511188 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/69c4c930-04a2-4502-8802-01d04186e378-httpd-config\") pod \"neutron-66757bf5bf-g455z\" (UID: \"69c4c930-04a2-4502-8802-01d04186e378\") " pod="openstack/neutron-66757bf5bf-g455z" Nov 28 08:49:39 crc kubenswrapper[4946]: I1128 08:49:39.531304 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dthlh\" (UniqueName: \"kubernetes.io/projected/69c4c930-04a2-4502-8802-01d04186e378-kube-api-access-dthlh\") pod \"neutron-66757bf5bf-g455z\" (UID: \"69c4c930-04a2-4502-8802-01d04186e378\") " pod="openstack/neutron-66757bf5bf-g455z" Nov 28 08:49:39 crc kubenswrapper[4946]: I1128 08:49:39.557509 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c69597795-dsvbw" Nov 28 08:49:39 crc kubenswrapper[4946]: I1128 08:49:39.585546 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-66757bf5bf-g455z" Nov 28 08:49:40 crc kubenswrapper[4946]: I1128 08:49:40.055312 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c69597795-dsvbw"] Nov 28 08:49:40 crc kubenswrapper[4946]: I1128 08:49:40.070208 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c69597795-dsvbw" event={"ID":"d4594554-584e-461f-adf9-5864bbc232af","Type":"ContainerStarted","Data":"b640eaeb2bfcf64fcbe11183a8da0d3012c90e2e4b1e33cfce5658155fa018a3"} Nov 28 08:49:40 crc kubenswrapper[4946]: I1128 08:49:40.071822 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zqhsb" event={"ID":"83d1400b-13ee-48a0-a8a3-57a12a80ad78","Type":"ContainerDied","Data":"e77130319540dadee4caf0ea889b96dfd45524c4bdaf4759bb18e88fad17fb8c"} Nov 28 08:49:40 crc kubenswrapper[4946]: I1128 08:49:40.071855 4946 scope.go:117] "RemoveContainer" containerID="78c9ae7a637d4de266787325c00bf8da07ebbcd958faec3cdcce0c89799fe734" Nov 28 08:49:40 crc kubenswrapper[4946]: I1128 08:49:40.071970 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zqhsb" Nov 28 08:49:40 crc kubenswrapper[4946]: I1128 08:49:40.107359 4946 scope.go:117] "RemoveContainer" containerID="f9424838fa78827c5d474f614a4799aa52a18ed35e801d6224c1f07ec2e1bd9b" Nov 28 08:49:40 crc kubenswrapper[4946]: I1128 08:49:40.107556 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zqhsb"] Nov 28 08:49:40 crc kubenswrapper[4946]: I1128 08:49:40.116280 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zqhsb"] Nov 28 08:49:40 crc kubenswrapper[4946]: I1128 08:49:40.143409 4946 scope.go:117] "RemoveContainer" containerID="20f253f15f14cc8d4bbaa9edab6b8d23b60f4dd621e1318a1125bd9119bb48b6" Nov 28 08:49:40 crc kubenswrapper[4946]: I1128 08:49:40.161275 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-66757bf5bf-g455z"] Nov 28 08:49:40 crc kubenswrapper[4946]: W1128 08:49:40.175322 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69c4c930_04a2_4502_8802_01d04186e378.slice/crio-804170b8ac76905885787f51ff1988871af2c8a95a98a5c1ef3bfd25e4a8907f WatchSource:0}: Error finding container 804170b8ac76905885787f51ff1988871af2c8a95a98a5c1ef3bfd25e4a8907f: Status 404 returned error can't find the container with id 804170b8ac76905885787f51ff1988871af2c8a95a98a5c1ef3bfd25e4a8907f Nov 28 08:49:41 crc kubenswrapper[4946]: I1128 08:49:41.083204 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66757bf5bf-g455z" event={"ID":"69c4c930-04a2-4502-8802-01d04186e378","Type":"ContainerStarted","Data":"2bc5b388d82ada63ecf956f4789d6178d06f32a42f260a98ac2bc785e11d0c84"} Nov 28 08:49:41 crc kubenswrapper[4946]: I1128 08:49:41.083632 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66757bf5bf-g455z" event={"ID":"69c4c930-04a2-4502-8802-01d04186e378","Type":"ContainerStarted","Data":"740dcc1787f1e0edb6cef725147ca35dc12c754cacd22f5f0a4e22aa87a36848"} Nov 28 08:49:41 crc kubenswrapper[4946]: I1128 08:49:41.083656 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66757bf5bf-g455z" event={"ID":"69c4c930-04a2-4502-8802-01d04186e378","Type":"ContainerStarted","Data":"804170b8ac76905885787f51ff1988871af2c8a95a98a5c1ef3bfd25e4a8907f"} Nov 28 08:49:41 crc kubenswrapper[4946]: I1128 08:49:41.083683 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-66757bf5bf-g455z" Nov 28 08:49:41 crc kubenswrapper[4946]: I1128 08:49:41.085133 4946 generic.go:334] "Generic (PLEG): container finished" podID="d4594554-584e-461f-adf9-5864bbc232af" containerID="1425c81f485e45aab5264cd0ce2a5d46d5020eb887f827a95f23686b782acfce" exitCode=0 Nov 28 08:49:41 crc kubenswrapper[4946]: I1128 08:49:41.085183 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c69597795-dsvbw" event={"ID":"d4594554-584e-461f-adf9-5864bbc232af","Type":"ContainerDied","Data":"1425c81f485e45aab5264cd0ce2a5d46d5020eb887f827a95f23686b782acfce"} Nov 28 08:49:41 crc kubenswrapper[4946]: I1128 08:49:41.104760 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-66757bf5bf-g455z" podStartSLOduration=2.10473827 podStartE2EDuration="2.10473827s" podCreationTimestamp="2025-11-28 08:49:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2025-11-28 08:49:41.103544 +0000 UTC m=+7035.481609111" watchObservedRunningTime="2025-11-28 08:49:41.10473827 +0000 UTC m=+7035.482803381" Nov 28 08:49:42 crc kubenswrapper[4946]: I1128 08:49:42.040663 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83d1400b-13ee-48a0-a8a3-57a12a80ad78" path="/var/lib/kubelet/pods/83d1400b-13ee-48a0-a8a3-57a12a80ad78/volumes" Nov 28 08:49:42 crc kubenswrapper[4946]: I1128 08:49:42.099824 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c69597795-dsvbw" event={"ID":"d4594554-584e-461f-adf9-5864bbc232af","Type":"ContainerStarted","Data":"ba9a54fd8702778962834ae12a77b15dbfbeeb9eb50360aca7dc0e1b209f6dd3"} Nov 28 08:49:42 crc kubenswrapper[4946]: I1128 08:49:42.132838 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-c69597795-dsvbw" podStartSLOduration=3.132812953 podStartE2EDuration="3.132812953s" podCreationTimestamp="2025-11-28 08:49:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 08:49:42.12178016 +0000 UTC m=+7036.499845301" watchObservedRunningTime="2025-11-28 08:49:42.132812953 +0000 UTC m=+7036.510878084" Nov 28 08:49:43 crc kubenswrapper[4946]: I1128 08:49:43.110628 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-c69597795-dsvbw" Nov 28 08:49:49 crc kubenswrapper[4946]: I1128 08:49:49.559698 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-c69597795-dsvbw" Nov 28 08:49:49 crc kubenswrapper[4946]: I1128 08:49:49.625891 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57fb645c4f-tglxs"] Nov 28 08:49:49 crc kubenswrapper[4946]: I1128 08:49:49.626129 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57fb645c4f-tglxs" podUID="a1fcc079-c977-42f0-b8ea-3abeb7c64cd4" containerName="dnsmasq-dns" containerID="cri-o://0411449c80fa1a3c76a139822e13a30410c8a1cd077b702f6140f30f8b6f03ae" gracePeriod=10 Nov 28 08:49:50 crc kubenswrapper[4946]: I1128 08:49:50.166177 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57fb645c4f-tglxs" Nov 28 08:49:50 crc kubenswrapper[4946]: I1128 08:49:50.169269 4946 generic.go:334] "Generic (PLEG): container finished" podID="a1fcc079-c977-42f0-b8ea-3abeb7c64cd4" containerID="0411449c80fa1a3c76a139822e13a30410c8a1cd077b702f6140f30f8b6f03ae" exitCode=0 Nov 28 08:49:50 crc kubenswrapper[4946]: I1128 08:49:50.169311 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57fb645c4f-tglxs" event={"ID":"a1fcc079-c977-42f0-b8ea-3abeb7c64cd4","Type":"ContainerDied","Data":"0411449c80fa1a3c76a139822e13a30410c8a1cd077b702f6140f30f8b6f03ae"} Nov 28 08:49:50 crc kubenswrapper[4946]: I1128 08:49:50.169363 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57fb645c4f-tglxs" event={"ID":"a1fcc079-c977-42f0-b8ea-3abeb7c64cd4","Type":"ContainerDied","Data":"89b08e63ed9c4be97d1519b4b4dc2576d9f7233e10e3063811d6927723f791f4"} Nov 28 08:49:50 crc kubenswrapper[4946]: I1128 08:49:50.169380 4946 scope.go:117] "RemoveContainer" containerID="0411449c80fa1a3c76a139822e13a30410c8a1cd077b702f6140f30f8b6f03ae" Nov 28 08:49:50 crc kubenswrapper[4946]: I1128 08:49:50.169540 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57fb645c4f-tglxs" Nov 28 08:49:50 crc kubenswrapper[4946]: I1128 08:49:50.189574 4946 scope.go:117] "RemoveContainer" containerID="62b0d6d2bd1461420c6b5f8d128b07fa496148708db84d47f6c21819c2cb68cf" Nov 28 08:49:50 crc kubenswrapper[4946]: I1128 08:49:50.220643 4946 scope.go:117] "RemoveContainer" containerID="0411449c80fa1a3c76a139822e13a30410c8a1cd077b702f6140f30f8b6f03ae" Nov 28 08:49:50 crc kubenswrapper[4946]: E1128 08:49:50.224787 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0411449c80fa1a3c76a139822e13a30410c8a1cd077b702f6140f30f8b6f03ae\": container with ID starting with 0411449c80fa1a3c76a139822e13a30410c8a1cd077b702f6140f30f8b6f03ae not found: ID does not exist" containerID="0411449c80fa1a3c76a139822e13a30410c8a1cd077b702f6140f30f8b6f03ae" Nov 28 08:49:50 crc kubenswrapper[4946]: I1128 08:49:50.224820 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0411449c80fa1a3c76a139822e13a30410c8a1cd077b702f6140f30f8b6f03ae"} err="failed to get container status \"0411449c80fa1a3c76a139822e13a30410c8a1cd077b702f6140f30f8b6f03ae\": rpc error: code = NotFound desc = could not find container \"0411449c80fa1a3c76a139822e13a30410c8a1cd077b702f6140f30f8b6f03ae\": container with ID starting with 0411449c80fa1a3c76a139822e13a30410c8a1cd077b702f6140f30f8b6f03ae not found: ID does not exist" Nov 28 08:49:50 crc kubenswrapper[4946]: I1128 08:49:50.224844 4946 scope.go:117] "RemoveContainer" containerID="62b0d6d2bd1461420c6b5f8d128b07fa496148708db84d47f6c21819c2cb68cf" Nov 28 08:49:50 crc kubenswrapper[4946]: E1128 08:49:50.231361 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62b0d6d2bd1461420c6b5f8d128b07fa496148708db84d47f6c21819c2cb68cf\": container with ID starting with 62b0d6d2bd1461420c6b5f8d128b07fa496148708db84d47f6c21819c2cb68cf not found: ID does not exist" containerID="62b0d6d2bd1461420c6b5f8d128b07fa496148708db84d47f6c21819c2cb68cf" Nov 28 08:49:50 crc kubenswrapper[4946]: I1128 08:49:50.231408 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62b0d6d2bd1461420c6b5f8d128b07fa496148708db84d47f6c21819c2cb68cf"} err="failed to get container status \"62b0d6d2bd1461420c6b5f8d128b07fa496148708db84d47f6c21819c2cb68cf\": rpc error: code = NotFound desc = could not find container \"62b0d6d2bd1461420c6b5f8d128b07fa496148708db84d47f6c21819c2cb68cf\": container with ID starting with 62b0d6d2bd1461420c6b5f8d128b07fa496148708db84d47f6c21819c2cb68cf not found: ID does not exist" Nov 28 08:49:50 crc kubenswrapper[4946]: I1128 08:49:50.298496 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a1fcc079-c977-42f0-b8ea-3abeb7c64cd4-ovsdbserver-sb\") pod \"a1fcc079-c977-42f0-b8ea-3abeb7c64cd4\" (UID: \"a1fcc079-c977-42f0-b8ea-3abeb7c64cd4\") " Nov 28 08:49:50 crc kubenswrapper[4946]: I1128 08:49:50.298609 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a1fcc079-c977-42f0-b8ea-3abeb7c64cd4-ovsdbserver-nb\") pod \"a1fcc079-c977-42f0-b8ea-3abeb7c64cd4\" (UID: \"a1fcc079-c977-42f0-b8ea-3abeb7c64cd4\") " Nov 28 08:49:50 crc kubenswrapper[4946]: I1128 08:49:50.298629 4946 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-xcwrk\" (UniqueName: \"kubernetes.io/projected/a1fcc079-c977-42f0-b8ea-3abeb7c64cd4-kube-api-access-xcwrk\") pod \"a1fcc079-c977-42f0-b8ea-3abeb7c64cd4\" (UID: \"a1fcc079-c977-42f0-b8ea-3abeb7c64cd4\") " Nov 28 08:49:50 crc kubenswrapper[4946]: I1128 08:49:50.298653 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1fcc079-c977-42f0-b8ea-3abeb7c64cd4-dns-svc\") pod \"a1fcc079-c977-42f0-b8ea-3abeb7c64cd4\" (UID: \"a1fcc079-c977-42f0-b8ea-3abeb7c64cd4\") " Nov 28 08:49:50 crc kubenswrapper[4946]: I1128 08:49:50.298683 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1fcc079-c977-42f0-b8ea-3abeb7c64cd4-config\") pod \"a1fcc079-c977-42f0-b8ea-3abeb7c64cd4\" (UID: \"a1fcc079-c977-42f0-b8ea-3abeb7c64cd4\") " Nov 28 08:49:50 crc kubenswrapper[4946]: I1128 08:49:50.304595 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1fcc079-c977-42f0-b8ea-3abeb7c64cd4-kube-api-access-xcwrk" (OuterVolumeSpecName: "kube-api-access-xcwrk") pod "a1fcc079-c977-42f0-b8ea-3abeb7c64cd4" (UID: "a1fcc079-c977-42f0-b8ea-3abeb7c64cd4"). InnerVolumeSpecName "kube-api-access-xcwrk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:49:50 crc kubenswrapper[4946]: I1128 08:49:50.336887 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1fcc079-c977-42f0-b8ea-3abeb7c64cd4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a1fcc079-c977-42f0-b8ea-3abeb7c64cd4" (UID: "a1fcc079-c977-42f0-b8ea-3abeb7c64cd4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:49:50 crc kubenswrapper[4946]: I1128 08:49:50.337286 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1fcc079-c977-42f0-b8ea-3abeb7c64cd4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a1fcc079-c977-42f0-b8ea-3abeb7c64cd4" (UID: "a1fcc079-c977-42f0-b8ea-3abeb7c64cd4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:49:50 crc kubenswrapper[4946]: I1128 08:49:50.337357 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1fcc079-c977-42f0-b8ea-3abeb7c64cd4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a1fcc079-c977-42f0-b8ea-3abeb7c64cd4" (UID: "a1fcc079-c977-42f0-b8ea-3abeb7c64cd4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:49:50 crc kubenswrapper[4946]: I1128 08:49:50.340669 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1fcc079-c977-42f0-b8ea-3abeb7c64cd4-config" (OuterVolumeSpecName: "config") pod "a1fcc079-c977-42f0-b8ea-3abeb7c64cd4" (UID: "a1fcc079-c977-42f0-b8ea-3abeb7c64cd4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:49:50 crc kubenswrapper[4946]: I1128 08:49:50.399912 4946 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a1fcc079-c977-42f0-b8ea-3abeb7c64cd4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 08:49:50 crc kubenswrapper[4946]: I1128 08:49:50.400159 4946 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a1fcc079-c977-42f0-b8ea-3abeb7c64cd4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 08:49:50 crc kubenswrapper[4946]: I1128 08:49:50.400271 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcwrk\" (UniqueName: \"kubernetes.io/projected/a1fcc079-c977-42f0-b8ea-3abeb7c64cd4-kube-api-access-xcwrk\") on node \"crc\" DevicePath \"\"" Nov 28 08:49:50 crc kubenswrapper[4946]: I1128 08:49:50.400352 4946 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1fcc079-c977-42f0-b8ea-3abeb7c64cd4-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 08:49:50 crc kubenswrapper[4946]: I1128 08:49:50.400418 4946 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1fcc079-c977-42f0-b8ea-3abeb7c64cd4-config\") on node \"crc\" DevicePath \"\"" Nov 28 08:49:50 crc kubenswrapper[4946]: I1128 08:49:50.504406 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57fb645c4f-tglxs"] Nov 28 08:49:50 crc kubenswrapper[4946]: I1128 08:49:50.511195 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57fb645c4f-tglxs"] Nov 28 08:49:52 crc kubenswrapper[4946]: I1128 08:49:52.001327 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1fcc079-c977-42f0-b8ea-3abeb7c64cd4" path="/var/lib/kubelet/pods/a1fcc079-c977-42f0-b8ea-3abeb7c64cd4/volumes" Nov 28 08:50:09 crc kubenswrapper[4946]: I1128 08:50:09.604132 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-66757bf5bf-g455z" Nov 28 08:50:17 crc kubenswrapper[4946]: I1128 08:50:17.002035 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-q9d9x"] Nov 28 08:50:17 crc kubenswrapper[4946]: E1128 08:50:17.002847 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1fcc079-c977-42f0-b8ea-3abeb7c64cd4" containerName="dnsmasq-dns" Nov 28 08:50:17 crc kubenswrapper[4946]: I1128 08:50:17.002861 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1fcc079-c977-42f0-b8ea-3abeb7c64cd4" containerName="dnsmasq-dns" Nov 28 08:50:17 crc kubenswrapper[4946]: E1128 08:50:17.002877 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1fcc079-c977-42f0-b8ea-3abeb7c64cd4" containerName="init" Nov 28 08:50:17 crc kubenswrapper[4946]: I1128 08:50:17.002883 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1fcc079-c977-42f0-b8ea-3abeb7c64cd4" containerName="init" Nov 28 08:50:17 crc kubenswrapper[4946]: I1128 08:50:17.003021 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1fcc079-c977-42f0-b8ea-3abeb7c64cd4" containerName="dnsmasq-dns" Nov 28 08:50:17 crc kubenswrapper[4946]: I1128 08:50:17.003853 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-q9d9x" Nov 28 08:50:17 crc kubenswrapper[4946]: I1128 08:50:17.012055 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-q9d9x"] Nov 28 08:50:17 crc kubenswrapper[4946]: I1128 08:50:17.182190 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52vcn\" (UniqueName: \"kubernetes.io/projected/8e4dd7c8-ff87-4edc-bf02-52eae38dc702-kube-api-access-52vcn\") pod \"glance-db-create-q9d9x\" (UID: \"8e4dd7c8-ff87-4edc-bf02-52eae38dc702\") " pod="openstack/glance-db-create-q9d9x" Nov 28 08:50:17 crc kubenswrapper[4946]: I1128 08:50:17.182452 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e4dd7c8-ff87-4edc-bf02-52eae38dc702-operator-scripts\") pod \"glance-db-create-q9d9x\" (UID: \"8e4dd7c8-ff87-4edc-bf02-52eae38dc702\") " pod="openstack/glance-db-create-q9d9x" Nov 28 08:50:17 crc kubenswrapper[4946]: I1128 08:50:17.207802 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-283e-account-create-update-zkp2w"] Nov 28 08:50:17 crc kubenswrapper[4946]: I1128 08:50:17.210923 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-283e-account-create-update-zkp2w" Nov 28 08:50:17 crc kubenswrapper[4946]: I1128 08:50:17.213637 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Nov 28 08:50:17 crc kubenswrapper[4946]: I1128 08:50:17.251430 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-283e-account-create-update-zkp2w"] Nov 28 08:50:17 crc kubenswrapper[4946]: I1128 08:50:17.284537 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52vcn\" (UniqueName: \"kubernetes.io/projected/8e4dd7c8-ff87-4edc-bf02-52eae38dc702-kube-api-access-52vcn\") pod \"glance-db-create-q9d9x\" (UID: \"8e4dd7c8-ff87-4edc-bf02-52eae38dc702\") " pod="openstack/glance-db-create-q9d9x" Nov 28 08:50:17 crc kubenswrapper[4946]: I1128 08:50:17.284798 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e4dd7c8-ff87-4edc-bf02-52eae38dc702-operator-scripts\") pod \"glance-db-create-q9d9x\" (UID: \"8e4dd7c8-ff87-4edc-bf02-52eae38dc702\") " pod="openstack/glance-db-create-q9d9x" Nov 28 08:50:17 crc kubenswrapper[4946]: I1128 08:50:17.285661 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e4dd7c8-ff87-4edc-bf02-52eae38dc702-operator-scripts\") pod \"glance-db-create-q9d9x\" (UID: \"8e4dd7c8-ff87-4edc-bf02-52eae38dc702\") " pod="openstack/glance-db-create-q9d9x" Nov 28 08:50:17 crc kubenswrapper[4946]: I1128 08:50:17.302132 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52vcn\" (UniqueName: \"kubernetes.io/projected/8e4dd7c8-ff87-4edc-bf02-52eae38dc702-kube-api-access-52vcn\") pod \"glance-db-create-q9d9x\" (UID: \"8e4dd7c8-ff87-4edc-bf02-52eae38dc702\") " pod="openstack/glance-db-create-q9d9x" Nov 28 08:50:17 crc kubenswrapper[4946]: I1128 08:50:17.321197 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-q9d9x" Nov 28 08:50:17 crc kubenswrapper[4946]: I1128 08:50:17.386719 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4e4f3ce-b467-44bd-88ed-22ab591f9730-operator-scripts\") pod \"glance-283e-account-create-update-zkp2w\" (UID: \"b4e4f3ce-b467-44bd-88ed-22ab591f9730\") " pod="openstack/glance-283e-account-create-update-zkp2w" Nov 28 08:50:17 crc kubenswrapper[4946]: I1128 08:50:17.393518 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h45cd\" (UniqueName: \"kubernetes.io/projected/b4e4f3ce-b467-44bd-88ed-22ab591f9730-kube-api-access-h45cd\") pod \"glance-283e-account-create-update-zkp2w\" (UID: \"b4e4f3ce-b467-44bd-88ed-22ab591f9730\") " pod="openstack/glance-283e-account-create-update-zkp2w" Nov 28 08:50:17 crc kubenswrapper[4946]: I1128 08:50:17.495258 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h45cd\" (UniqueName: \"kubernetes.io/projected/b4e4f3ce-b467-44bd-88ed-22ab591f9730-kube-api-access-h45cd\") pod \"glance-283e-account-create-update-zkp2w\" (UID: \"b4e4f3ce-b467-44bd-88ed-22ab591f9730\") " pod="openstack/glance-283e-account-create-update-zkp2w" Nov 28 08:50:17 crc kubenswrapper[4946]: I1128 08:50:17.495643 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4e4f3ce-b467-44bd-88ed-22ab591f9730-operator-scripts\") pod \"glance-283e-account-create-update-zkp2w\" (UID: \"b4e4f3ce-b467-44bd-88ed-22ab591f9730\") " pod="openstack/glance-283e-account-create-update-zkp2w" Nov 28 08:50:17 crc kubenswrapper[4946]: I1128 08:50:17.496544 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4e4f3ce-b467-44bd-88ed-22ab591f9730-operator-scripts\") pod \"glance-283e-account-create-update-zkp2w\" (UID: \"b4e4f3ce-b467-44bd-88ed-22ab591f9730\") " pod="openstack/glance-283e-account-create-update-zkp2w" Nov 28 08:50:17 crc kubenswrapper[4946]: I1128 08:50:17.527855 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h45cd\" (UniqueName: \"kubernetes.io/projected/b4e4f3ce-b467-44bd-88ed-22ab591f9730-kube-api-access-h45cd\") pod \"glance-283e-account-create-update-zkp2w\" (UID: \"b4e4f3ce-b467-44bd-88ed-22ab591f9730\") " pod="openstack/glance-283e-account-create-update-zkp2w" Nov 28 08:50:17 crc kubenswrapper[4946]: I1128 08:50:17.532940 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-283e-account-create-update-zkp2w" Nov 28 08:50:17 crc kubenswrapper[4946]: I1128 08:50:17.792555 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-283e-account-create-update-zkp2w"] Nov 28 08:50:17 crc kubenswrapper[4946]: I1128 08:50:17.819821 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-q9d9x"] Nov 28 08:50:17 crc kubenswrapper[4946]: W1128 08:50:17.821025 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e4dd7c8_ff87_4edc_bf02_52eae38dc702.slice/crio-65c6730be80ebf323868b470f9e9c1911d26756920b742bf3e2f0a8326dc636c WatchSource:0}: Error finding container 65c6730be80ebf323868b470f9e9c1911d26756920b742bf3e2f0a8326dc636c: Status 404 returned error can't find the container with id 65c6730be80ebf323868b470f9e9c1911d26756920b742bf3e2f0a8326dc636c Nov 28 08:50:18 crc kubenswrapper[4946]: I1128 08:50:18.667538 4946 generic.go:334] "Generic (PLEG): container finished" podID="8e4dd7c8-ff87-4edc-bf02-52eae38dc702" containerID="2ba82a0c1d0004ee4b24689508e2228492dcda52e90819cf398c2119b21367aa" exitCode=0 Nov 28 08:50:18 crc kubenswrapper[4946]: I1128 08:50:18.667730 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-q9d9x" event={"ID":"8e4dd7c8-ff87-4edc-bf02-52eae38dc702","Type":"ContainerDied","Data":"2ba82a0c1d0004ee4b24689508e2228492dcda52e90819cf398c2119b21367aa"} Nov 28 08:50:18 crc kubenswrapper[4946]: I1128 08:50:18.667927 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-q9d9x" event={"ID":"8e4dd7c8-ff87-4edc-bf02-52eae38dc702","Type":"ContainerStarted","Data":"65c6730be80ebf323868b470f9e9c1911d26756920b742bf3e2f0a8326dc636c"} Nov 28 08:50:18 crc kubenswrapper[4946]: I1128 08:50:18.671455 4946 generic.go:334] "Generic (PLEG): container finished" podID="b4e4f3ce-b467-44bd-88ed-22ab591f9730" containerID="d86d4ba36e919dfcdbf4c57e794f76aa6951025eb3be951f3ce4c323076eed6e" exitCode=0 Nov 28 08:50:18 crc kubenswrapper[4946]: I1128 08:50:18.671510 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-283e-account-create-update-zkp2w" event={"ID":"b4e4f3ce-b467-44bd-88ed-22ab591f9730","Type":"ContainerDied","Data":"d86d4ba36e919dfcdbf4c57e794f76aa6951025eb3be951f3ce4c323076eed6e"} Nov 28 08:50:18 crc kubenswrapper[4946]: I1128 08:50:18.671541 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-283e-account-create-update-zkp2w" event={"ID":"b4e4f3ce-b467-44bd-88ed-22ab591f9730","Type":"ContainerStarted","Data":"3be09e0350a0f60c38d57736d5fe7563448d82be095ed865c7ea36e4cc56f55a"} Nov 28 08:50:20 crc kubenswrapper[4946]: I1128 08:50:20.118221 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-283e-account-create-update-zkp2w" Nov 28 08:50:20 crc kubenswrapper[4946]: I1128 08:50:20.124705 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-q9d9x" Nov 28 08:50:20 crc kubenswrapper[4946]: I1128 08:50:20.246608 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e4dd7c8-ff87-4edc-bf02-52eae38dc702-operator-scripts\") pod \"8e4dd7c8-ff87-4edc-bf02-52eae38dc702\" (UID: \"8e4dd7c8-ff87-4edc-bf02-52eae38dc702\") " Nov 28 08:50:20 crc kubenswrapper[4946]: I1128 08:50:20.246669 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52vcn\" (UniqueName: \"kubernetes.io/projected/8e4dd7c8-ff87-4edc-bf02-52eae38dc702-kube-api-access-52vcn\") pod \"8e4dd7c8-ff87-4edc-bf02-52eae38dc702\" (UID: \"8e4dd7c8-ff87-4edc-bf02-52eae38dc702\") " Nov 28 08:50:20 crc kubenswrapper[4946]: I1128 08:50:20.246814 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h45cd\" (UniqueName: \"kubernetes.io/projected/b4e4f3ce-b467-44bd-88ed-22ab591f9730-kube-api-access-h45cd\") pod \"b4e4f3ce-b467-44bd-88ed-22ab591f9730\" (UID: \"b4e4f3ce-b467-44bd-88ed-22ab591f9730\") " Nov 28 08:50:20 crc kubenswrapper[4946]: I1128 08:50:20.246853 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4e4f3ce-b467-44bd-88ed-22ab591f9730-operator-scripts\") pod \"b4e4f3ce-b467-44bd-88ed-22ab591f9730\" (UID: \"b4e4f3ce-b467-44bd-88ed-22ab591f9730\") " Nov 28 08:50:20 crc kubenswrapper[4946]: I1128 08:50:20.247237 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e4dd7c8-ff87-4edc-bf02-52eae38dc702-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8e4dd7c8-ff87-4edc-bf02-52eae38dc702" (UID: "8e4dd7c8-ff87-4edc-bf02-52eae38dc702"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:50:20 crc kubenswrapper[4946]: I1128 08:50:20.247738 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4e4f3ce-b467-44bd-88ed-22ab591f9730-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b4e4f3ce-b467-44bd-88ed-22ab591f9730" (UID: "b4e4f3ce-b467-44bd-88ed-22ab591f9730"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:50:20 crc kubenswrapper[4946]: I1128 08:50:20.268377 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e4dd7c8-ff87-4edc-bf02-52eae38dc702-kube-api-access-52vcn" (OuterVolumeSpecName: "kube-api-access-52vcn") pod "8e4dd7c8-ff87-4edc-bf02-52eae38dc702" (UID: "8e4dd7c8-ff87-4edc-bf02-52eae38dc702"). InnerVolumeSpecName "kube-api-access-52vcn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:50:20 crc kubenswrapper[4946]: I1128 08:50:20.268679 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4e4f3ce-b467-44bd-88ed-22ab591f9730-kube-api-access-h45cd" (OuterVolumeSpecName: "kube-api-access-h45cd") pod "b4e4f3ce-b467-44bd-88ed-22ab591f9730" (UID: "b4e4f3ce-b467-44bd-88ed-22ab591f9730"). InnerVolumeSpecName "kube-api-access-h45cd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:50:20 crc kubenswrapper[4946]: I1128 08:50:20.348659 4946 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4e4f3ce-b467-44bd-88ed-22ab591f9730-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 08:50:20 crc kubenswrapper[4946]: I1128 08:50:20.349059 4946 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e4dd7c8-ff87-4edc-bf02-52eae38dc702-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 08:50:20 crc kubenswrapper[4946]: I1128 08:50:20.349073 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52vcn\" (UniqueName: \"kubernetes.io/projected/8e4dd7c8-ff87-4edc-bf02-52eae38dc702-kube-api-access-52vcn\") on node \"crc\" DevicePath \"\"" Nov 28 08:50:20 crc kubenswrapper[4946]: I1128 08:50:20.349089 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h45cd\" (UniqueName: \"kubernetes.io/projected/b4e4f3ce-b467-44bd-88ed-22ab591f9730-kube-api-access-h45cd\") on node \"crc\" DevicePath \"\"" Nov 28 08:50:20 crc kubenswrapper[4946]: I1128 08:50:20.696231 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-q9d9x" Nov 28 08:50:20 crc kubenswrapper[4946]: I1128 08:50:20.696653 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-q9d9x" event={"ID":"8e4dd7c8-ff87-4edc-bf02-52eae38dc702","Type":"ContainerDied","Data":"65c6730be80ebf323868b470f9e9c1911d26756920b742bf3e2f0a8326dc636c"} Nov 28 08:50:20 crc kubenswrapper[4946]: I1128 08:50:20.696791 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65c6730be80ebf323868b470f9e9c1911d26756920b742bf3e2f0a8326dc636c" Nov 28 08:50:20 crc kubenswrapper[4946]: I1128 08:50:20.699855 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-283e-account-create-update-zkp2w" event={"ID":"b4e4f3ce-b467-44bd-88ed-22ab591f9730","Type":"ContainerDied","Data":"3be09e0350a0f60c38d57736d5fe7563448d82be095ed865c7ea36e4cc56f55a"} Nov 28 08:50:20 crc kubenswrapper[4946]: I1128 08:50:20.699893 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3be09e0350a0f60c38d57736d5fe7563448d82be095ed865c7ea36e4cc56f55a" Nov 28 08:50:20 crc kubenswrapper[4946]: I1128 08:50:20.699968 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-283e-account-create-update-zkp2w" Nov 28 08:50:22 crc kubenswrapper[4946]: I1128 08:50:22.425274 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-rxtds"] Nov 28 08:50:22 crc kubenswrapper[4946]: E1128 08:50:22.425641 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4e4f3ce-b467-44bd-88ed-22ab591f9730" containerName="mariadb-account-create-update" Nov 28 08:50:22 crc kubenswrapper[4946]: I1128 08:50:22.425660 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4e4f3ce-b467-44bd-88ed-22ab591f9730" containerName="mariadb-account-create-update" Nov 28 08:50:22 crc kubenswrapper[4946]: E1128 08:50:22.425696 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e4dd7c8-ff87-4edc-bf02-52eae38dc702" containerName="mariadb-database-create" Nov 28 08:50:22 crc kubenswrapper[4946]: I1128 08:50:22.425702 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e4dd7c8-ff87-4edc-bf02-52eae38dc702" containerName="mariadb-database-create" Nov 28 08:50:22 crc kubenswrapper[4946]: I1128 08:50:22.425850 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e4dd7c8-ff87-4edc-bf02-52eae38dc702" containerName="mariadb-database-create" Nov 28 08:50:22 crc kubenswrapper[4946]: I1128 08:50:22.425881 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4e4f3ce-b467-44bd-88ed-22ab591f9730" containerName="mariadb-account-create-update" Nov 28 08:50:22 crc kubenswrapper[4946]: I1128 08:50:22.426392 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-rxtds" Nov 28 08:50:22 crc kubenswrapper[4946]: I1128 08:50:22.431689 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-9spxc" Nov 28 08:50:22 crc kubenswrapper[4946]: I1128 08:50:22.434256 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Nov 28 08:50:22 crc kubenswrapper[4946]: I1128 08:50:22.450231 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-rxtds"] Nov 28 08:50:22 crc kubenswrapper[4946]: I1128 08:50:22.586683 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/870b12ba-c2ed-4359-b100-b68529bda2a0-combined-ca-bundle\") pod \"glance-db-sync-rxtds\" (UID: \"870b12ba-c2ed-4359-b100-b68529bda2a0\") " pod="openstack/glance-db-sync-rxtds" Nov 28 08:50:22 crc kubenswrapper[4946]: I1128 08:50:22.587073 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/870b12ba-c2ed-4359-b100-b68529bda2a0-db-sync-config-data\") pod \"glance-db-sync-rxtds\" (UID: \"870b12ba-c2ed-4359-b100-b68529bda2a0\") " pod="openstack/glance-db-sync-rxtds" Nov 28 08:50:22 crc kubenswrapper[4946]: I1128 08:50:22.587129 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/870b12ba-c2ed-4359-b100-b68529bda2a0-config-data\") pod \"glance-db-sync-rxtds\" (UID: \"870b12ba-c2ed-4359-b100-b68529bda2a0\") " pod="openstack/glance-db-sync-rxtds" Nov 28 08:50:22 crc kubenswrapper[4946]: I1128 08:50:22.587155 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlzc5\" (UniqueName: 
\"kubernetes.io/projected/870b12ba-c2ed-4359-b100-b68529bda2a0-kube-api-access-mlzc5\") pod \"glance-db-sync-rxtds\" (UID: \"870b12ba-c2ed-4359-b100-b68529bda2a0\") " pod="openstack/glance-db-sync-rxtds" Nov 28 08:50:22 crc kubenswrapper[4946]: I1128 08:50:22.688703 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/870b12ba-c2ed-4359-b100-b68529bda2a0-combined-ca-bundle\") pod \"glance-db-sync-rxtds\" (UID: \"870b12ba-c2ed-4359-b100-b68529bda2a0\") " pod="openstack/glance-db-sync-rxtds" Nov 28 08:50:22 crc kubenswrapper[4946]: I1128 08:50:22.688812 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/870b12ba-c2ed-4359-b100-b68529bda2a0-db-sync-config-data\") pod \"glance-db-sync-rxtds\" (UID: \"870b12ba-c2ed-4359-b100-b68529bda2a0\") " pod="openstack/glance-db-sync-rxtds" Nov 28 08:50:22 crc kubenswrapper[4946]: I1128 08:50:22.688898 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/870b12ba-c2ed-4359-b100-b68529bda2a0-config-data\") pod \"glance-db-sync-rxtds\" (UID: \"870b12ba-c2ed-4359-b100-b68529bda2a0\") " pod="openstack/glance-db-sync-rxtds" Nov 28 08:50:22 crc kubenswrapper[4946]: I1128 08:50:22.688943 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlzc5\" (UniqueName: \"kubernetes.io/projected/870b12ba-c2ed-4359-b100-b68529bda2a0-kube-api-access-mlzc5\") pod \"glance-db-sync-rxtds\" (UID: \"870b12ba-c2ed-4359-b100-b68529bda2a0\") " pod="openstack/glance-db-sync-rxtds" Nov 28 08:50:22 crc kubenswrapper[4946]: I1128 08:50:22.695847 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/870b12ba-c2ed-4359-b100-b68529bda2a0-db-sync-config-data\") pod \"glance-db-sync-rxtds\" (UID: \"870b12ba-c2ed-4359-b100-b68529bda2a0\") " pod="openstack/glance-db-sync-rxtds" Nov 28 08:50:22 crc kubenswrapper[4946]: I1128 08:50:22.696221 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/870b12ba-c2ed-4359-b100-b68529bda2a0-combined-ca-bundle\") pod \"glance-db-sync-rxtds\" (UID: \"870b12ba-c2ed-4359-b100-b68529bda2a0\") " pod="openstack/glance-db-sync-rxtds" Nov 28 08:50:22 crc kubenswrapper[4946]: I1128 08:50:22.699071 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/870b12ba-c2ed-4359-b100-b68529bda2a0-config-data\") pod \"glance-db-sync-rxtds\" (UID: \"870b12ba-c2ed-4359-b100-b68529bda2a0\") " pod="openstack/glance-db-sync-rxtds" Nov 28 08:50:22 crc kubenswrapper[4946]: I1128 08:50:22.714113 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlzc5\" (UniqueName: \"kubernetes.io/projected/870b12ba-c2ed-4359-b100-b68529bda2a0-kube-api-access-mlzc5\") pod \"glance-db-sync-rxtds\" (UID: \"870b12ba-c2ed-4359-b100-b68529bda2a0\") " pod="openstack/glance-db-sync-rxtds" Nov 28 08:50:22 crc kubenswrapper[4946]: I1128 08:50:22.787790 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-rxtds" Nov 28 08:50:23 crc kubenswrapper[4946]: I1128 08:50:23.312817 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-rxtds"] Nov 28 08:50:23 crc kubenswrapper[4946]: I1128 08:50:23.320817 4946 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 08:50:23 crc kubenswrapper[4946]: I1128 08:50:23.734012 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-rxtds" event={"ID":"870b12ba-c2ed-4359-b100-b68529bda2a0","Type":"ContainerStarted","Data":"9b4511df094c41a51a63ee8cfe7dc82a28f1dc81fb6bdcbdf5f9322380d8242b"} Nov 28 08:50:39 crc kubenswrapper[4946]: I1128 08:50:39.889285 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-rxtds" event={"ID":"870b12ba-c2ed-4359-b100-b68529bda2a0","Type":"ContainerStarted","Data":"bad7c0788e8564179929cfafe33edaa496b34c3f4e55c8ace27e8eea8bb1ad14"} Nov 28 08:50:39 crc kubenswrapper[4946]: I1128 08:50:39.908894 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-rxtds" podStartSLOduration=2.752056017 podStartE2EDuration="17.908875246s" podCreationTimestamp="2025-11-28 08:50:22 +0000 UTC" firstStartedPulling="2025-11-28 08:50:23.320631677 +0000 UTC m=+7077.698696788" lastFinishedPulling="2025-11-28 08:50:38.477450906 +0000 UTC m=+7092.855516017" observedRunningTime="2025-11-28 08:50:39.906084967 +0000 UTC m=+7094.284150078" watchObservedRunningTime="2025-11-28 08:50:39.908875246 +0000 UTC m=+7094.286940367" Nov 28 08:50:42 crc kubenswrapper[4946]: I1128 08:50:42.926708 4946 generic.go:334] "Generic (PLEG): container finished" podID="870b12ba-c2ed-4359-b100-b68529bda2a0" containerID="bad7c0788e8564179929cfafe33edaa496b34c3f4e55c8ace27e8eea8bb1ad14" exitCode=0 Nov 28 08:50:42 crc kubenswrapper[4946]: I1128 08:50:42.926793 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-rxtds" event={"ID":"870b12ba-c2ed-4359-b100-b68529bda2a0","Type":"ContainerDied","Data":"bad7c0788e8564179929cfafe33edaa496b34c3f4e55c8ace27e8eea8bb1ad14"} Nov 28 08:50:44 crc kubenswrapper[4946]: I1128 08:50:44.332565 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-rxtds" Nov 28 08:50:44 crc kubenswrapper[4946]: I1128 08:50:44.415359 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/870b12ba-c2ed-4359-b100-b68529bda2a0-config-data\") pod \"870b12ba-c2ed-4359-b100-b68529bda2a0\" (UID: \"870b12ba-c2ed-4359-b100-b68529bda2a0\") " Nov 28 08:50:44 crc kubenswrapper[4946]: I1128 08:50:44.415993 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/870b12ba-c2ed-4359-b100-b68529bda2a0-combined-ca-bundle\") pod \"870b12ba-c2ed-4359-b100-b68529bda2a0\" (UID: \"870b12ba-c2ed-4359-b100-b68529bda2a0\") " Nov 28 08:50:44 crc kubenswrapper[4946]: I1128 08:50:44.416045 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/870b12ba-c2ed-4359-b100-b68529bda2a0-db-sync-config-data\") pod \"870b12ba-c2ed-4359-b100-b68529bda2a0\" (UID: \"870b12ba-c2ed-4359-b100-b68529bda2a0\") " Nov 28 08:50:44 crc kubenswrapper[4946]: I1128 08:50:44.416105 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlzc5\" (UniqueName: \"kubernetes.io/projected/870b12ba-c2ed-4359-b100-b68529bda2a0-kube-api-access-mlzc5\") pod \"870b12ba-c2ed-4359-b100-b68529bda2a0\" (UID: \"870b12ba-c2ed-4359-b100-b68529bda2a0\") " Nov 28 08:50:44 crc kubenswrapper[4946]: I1128 08:50:44.421402 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/870b12ba-c2ed-4359-b100-b68529bda2a0-kube-api-access-mlzc5" (OuterVolumeSpecName: "kube-api-access-mlzc5") pod "870b12ba-c2ed-4359-b100-b68529bda2a0" (UID: "870b12ba-c2ed-4359-b100-b68529bda2a0"). InnerVolumeSpecName "kube-api-access-mlzc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:50:44 crc kubenswrapper[4946]: I1128 08:50:44.421464 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/870b12ba-c2ed-4359-b100-b68529bda2a0-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "870b12ba-c2ed-4359-b100-b68529bda2a0" (UID: "870b12ba-c2ed-4359-b100-b68529bda2a0"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:50:44 crc kubenswrapper[4946]: I1128 08:50:44.441902 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/870b12ba-c2ed-4359-b100-b68529bda2a0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "870b12ba-c2ed-4359-b100-b68529bda2a0" (UID: "870b12ba-c2ed-4359-b100-b68529bda2a0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:50:44 crc kubenswrapper[4946]: I1128 08:50:44.476203 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/870b12ba-c2ed-4359-b100-b68529bda2a0-config-data" (OuterVolumeSpecName: "config-data") pod "870b12ba-c2ed-4359-b100-b68529bda2a0" (UID: "870b12ba-c2ed-4359-b100-b68529bda2a0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:50:44 crc kubenswrapper[4946]: I1128 08:50:44.519172 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/870b12ba-c2ed-4359-b100-b68529bda2a0-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 08:50:44 crc kubenswrapper[4946]: I1128 08:50:44.519239 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/870b12ba-c2ed-4359-b100-b68529bda2a0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 08:50:44 crc kubenswrapper[4946]: I1128 08:50:44.519270 4946 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/870b12ba-c2ed-4359-b100-b68529bda2a0-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 08:50:44 crc kubenswrapper[4946]: I1128 08:50:44.519294 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlzc5\" (UniqueName: \"kubernetes.io/projected/870b12ba-c2ed-4359-b100-b68529bda2a0-kube-api-access-mlzc5\") on node \"crc\" DevicePath \"\"" Nov 28 08:50:44 crc kubenswrapper[4946]: I1128 08:50:44.953911 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-rxtds" event={"ID":"870b12ba-c2ed-4359-b100-b68529bda2a0","Type":"ContainerDied","Data":"9b4511df094c41a51a63ee8cfe7dc82a28f1dc81fb6bdcbdf5f9322380d8242b"} Nov 28 08:50:44 crc kubenswrapper[4946]: I1128 08:50:44.953970 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b4511df094c41a51a63ee8cfe7dc82a28f1dc81fb6bdcbdf5f9322380d8242b" Nov 28 08:50:44 crc kubenswrapper[4946]: I1128 08:50:44.954056 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-rxtds" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.307822 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 08:50:45 crc kubenswrapper[4946]: E1128 08:50:45.308190 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="870b12ba-c2ed-4359-b100-b68529bda2a0" containerName="glance-db-sync" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.308203 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="870b12ba-c2ed-4359-b100-b68529bda2a0" containerName="glance-db-sync" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.308362 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="870b12ba-c2ed-4359-b100-b68529bda2a0" containerName="glance-db-sync" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.312418 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.314452 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.314596 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.315783 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.316173 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-9spxc" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.325341 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.427277 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86dd75bcb7-cxcts"] Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.429767 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86dd75bcb7-cxcts" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.440120 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ad43077-11d0-4fef-b7f0-4a59f1187e6e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0ad43077-11d0-4fef-b7f0-4a59f1187e6e\") " pod="openstack/glance-default-external-api-0" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.440242 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5sjv\" (UniqueName: \"kubernetes.io/projected/0ad43077-11d0-4fef-b7f0-4a59f1187e6e-kube-api-access-l5sjv\") pod \"glance-default-external-api-0\" (UID: \"0ad43077-11d0-4fef-b7f0-4a59f1187e6e\") " pod="openstack/glance-default-external-api-0" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.440455 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ad43077-11d0-4fef-b7f0-4a59f1187e6e-logs\") pod \"glance-default-external-api-0\" (UID: \"0ad43077-11d0-4fef-b7f0-4a59f1187e6e\") " pod="openstack/glance-default-external-api-0" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.440556 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ad43077-11d0-4fef-b7f0-4a59f1187e6e-config-data\") pod \"glance-default-external-api-0\" (UID: \"0ad43077-11d0-4fef-b7f0-4a59f1187e6e\") " pod="openstack/glance-default-external-api-0" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.440583 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0ad43077-11d0-4fef-b7f0-4a59f1187e6e-ceph\") pod \"glance-default-external-api-0\" (UID: \"0ad43077-11d0-4fef-b7f0-4a59f1187e6e\") " pod="openstack/glance-default-external-api-0" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.440802 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0ad43077-11d0-4fef-b7f0-4a59f1187e6e-httpd-run\") pod 
\"glance-default-external-api-0\" (UID: \"0ad43077-11d0-4fef-b7f0-4a59f1187e6e\") " pod="openstack/glance-default-external-api-0" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.440908 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ad43077-11d0-4fef-b7f0-4a59f1187e6e-scripts\") pod \"glance-default-external-api-0\" (UID: \"0ad43077-11d0-4fef-b7f0-4a59f1187e6e\") " pod="openstack/glance-default-external-api-0" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.476848 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86dd75bcb7-cxcts"] Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.542131 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff0b524c-9061-4af0-8192-95c1cb73259b-dns-svc\") pod \"dnsmasq-dns-86dd75bcb7-cxcts\" (UID: \"ff0b524c-9061-4af0-8192-95c1cb73259b\") " pod="openstack/dnsmasq-dns-86dd75bcb7-cxcts" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.542175 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ad43077-11d0-4fef-b7f0-4a59f1187e6e-logs\") pod \"glance-default-external-api-0\" (UID: \"0ad43077-11d0-4fef-b7f0-4a59f1187e6e\") " pod="openstack/glance-default-external-api-0" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.542205 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ad43077-11d0-4fef-b7f0-4a59f1187e6e-config-data\") pod \"glance-default-external-api-0\" (UID: \"0ad43077-11d0-4fef-b7f0-4a59f1187e6e\") " pod="openstack/glance-default-external-api-0" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.542226 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4hdq\" (UniqueName: \"kubernetes.io/projected/ff0b524c-9061-4af0-8192-95c1cb73259b-kube-api-access-r4hdq\") pod \"dnsmasq-dns-86dd75bcb7-cxcts\" (UID: \"ff0b524c-9061-4af0-8192-95c1cb73259b\") " pod="openstack/dnsmasq-dns-86dd75bcb7-cxcts" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.542243 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0ad43077-11d0-4fef-b7f0-4a59f1187e6e-ceph\") pod \"glance-default-external-api-0\" (UID: \"0ad43077-11d0-4fef-b7f0-4a59f1187e6e\") " pod="openstack/glance-default-external-api-0" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.542280 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff0b524c-9061-4af0-8192-95c1cb73259b-ovsdbserver-sb\") pod \"dnsmasq-dns-86dd75bcb7-cxcts\" (UID: \"ff0b524c-9061-4af0-8192-95c1cb73259b\") " pod="openstack/dnsmasq-dns-86dd75bcb7-cxcts" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.542301 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff0b524c-9061-4af0-8192-95c1cb73259b-config\") pod \"dnsmasq-dns-86dd75bcb7-cxcts\" (UID: \"ff0b524c-9061-4af0-8192-95c1cb73259b\") " pod="openstack/dnsmasq-dns-86dd75bcb7-cxcts" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.542317 4946 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff0b524c-9061-4af0-8192-95c1cb73259b-ovsdbserver-nb\") pod \"dnsmasq-dns-86dd75bcb7-cxcts\" (UID: \"ff0b524c-9061-4af0-8192-95c1cb73259b\") " pod="openstack/dnsmasq-dns-86dd75bcb7-cxcts" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.542340 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0ad43077-11d0-4fef-b7f0-4a59f1187e6e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0ad43077-11d0-4fef-b7f0-4a59f1187e6e\") " pod="openstack/glance-default-external-api-0" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.542368 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ad43077-11d0-4fef-b7f0-4a59f1187e6e-scripts\") pod \"glance-default-external-api-0\" (UID: \"0ad43077-11d0-4fef-b7f0-4a59f1187e6e\") " pod="openstack/glance-default-external-api-0" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.542413 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ad43077-11d0-4fef-b7f0-4a59f1187e6e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0ad43077-11d0-4fef-b7f0-4a59f1187e6e\") " pod="openstack/glance-default-external-api-0" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.542438 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5sjv\" (UniqueName: \"kubernetes.io/projected/0ad43077-11d0-4fef-b7f0-4a59f1187e6e-kube-api-access-l5sjv\") pod \"glance-default-external-api-0\" (UID: \"0ad43077-11d0-4fef-b7f0-4a59f1187e6e\") " pod="openstack/glance-default-external-api-0" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.542690 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ad43077-11d0-4fef-b7f0-4a59f1187e6e-logs\") pod \"glance-default-external-api-0\" (UID: \"0ad43077-11d0-4fef-b7f0-4a59f1187e6e\") " pod="openstack/glance-default-external-api-0" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.543621 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0ad43077-11d0-4fef-b7f0-4a59f1187e6e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0ad43077-11d0-4fef-b7f0-4a59f1187e6e\") " pod="openstack/glance-default-external-api-0" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.546452 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ad43077-11d0-4fef-b7f0-4a59f1187e6e-config-data\") pod \"glance-default-external-api-0\" (UID: \"0ad43077-11d0-4fef-b7f0-4a59f1187e6e\") " pod="openstack/glance-default-external-api-0" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.547504 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ad43077-11d0-4fef-b7f0-4a59f1187e6e-scripts\") pod \"glance-default-external-api-0\" (UID: \"0ad43077-11d0-4fef-b7f0-4a59f1187e6e\") " pod="openstack/glance-default-external-api-0" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.552264 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0ad43077-11d0-4fef-b7f0-4a59f1187e6e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0ad43077-11d0-4fef-b7f0-4a59f1187e6e\") " pod="openstack/glance-default-external-api-0" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.558429 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5sjv\" (UniqueName: \"kubernetes.io/projected/0ad43077-11d0-4fef-b7f0-4a59f1187e6e-kube-api-access-l5sjv\") pod \"glance-default-external-api-0\" (UID: \"0ad43077-11d0-4fef-b7f0-4a59f1187e6e\") " pod="openstack/glance-default-external-api-0" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.558765 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0ad43077-11d0-4fef-b7f0-4a59f1187e6e-ceph\") pod \"glance-default-external-api-0\" (UID: \"0ad43077-11d0-4fef-b7f0-4a59f1187e6e\") " pod="openstack/glance-default-external-api-0" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.599449 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.600891 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.605197 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.611367 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.643753 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff0b524c-9061-4af0-8192-95c1cb73259b-dns-svc\") pod \"dnsmasq-dns-86dd75bcb7-cxcts\" (UID: \"ff0b524c-9061-4af0-8192-95c1cb73259b\") " pod="openstack/dnsmasq-dns-86dd75bcb7-cxcts" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.643810 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4hdq\" (UniqueName: \"kubernetes.io/projected/ff0b524c-9061-4af0-8192-95c1cb73259b-kube-api-access-r4hdq\") pod \"dnsmasq-dns-86dd75bcb7-cxcts\" (UID: \"ff0b524c-9061-4af0-8192-95c1cb73259b\") " pod="openstack/dnsmasq-dns-86dd75bcb7-cxcts" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.643853 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff0b524c-9061-4af0-8192-95c1cb73259b-ovsdbserver-sb\") pod \"dnsmasq-dns-86dd75bcb7-cxcts\" (UID: \"ff0b524c-9061-4af0-8192-95c1cb73259b\") " pod="openstack/dnsmasq-dns-86dd75bcb7-cxcts" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.643880 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff0b524c-9061-4af0-8192-95c1cb73259b-config\") pod \"dnsmasq-dns-86dd75bcb7-cxcts\" (UID: \"ff0b524c-9061-4af0-8192-95c1cb73259b\") " pod="openstack/dnsmasq-dns-86dd75bcb7-cxcts" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.643901 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff0b524c-9061-4af0-8192-95c1cb73259b-ovsdbserver-nb\") pod \"dnsmasq-dns-86dd75bcb7-cxcts\" (UID: \"ff0b524c-9061-4af0-8192-95c1cb73259b\") " 
pod="openstack/dnsmasq-dns-86dd75bcb7-cxcts" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.644951 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff0b524c-9061-4af0-8192-95c1cb73259b-ovsdbserver-nb\") pod \"dnsmasq-dns-86dd75bcb7-cxcts\" (UID: \"ff0b524c-9061-4af0-8192-95c1cb73259b\") " pod="openstack/dnsmasq-dns-86dd75bcb7-cxcts" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.645494 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff0b524c-9061-4af0-8192-95c1cb73259b-dns-svc\") pod \"dnsmasq-dns-86dd75bcb7-cxcts\" (UID: \"ff0b524c-9061-4af0-8192-95c1cb73259b\") " pod="openstack/dnsmasq-dns-86dd75bcb7-cxcts" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.645565 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff0b524c-9061-4af0-8192-95c1cb73259b-ovsdbserver-sb\") pod \"dnsmasq-dns-86dd75bcb7-cxcts\" (UID: \"ff0b524c-9061-4af0-8192-95c1cb73259b\") " pod="openstack/dnsmasq-dns-86dd75bcb7-cxcts" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.645867 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.646639 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff0b524c-9061-4af0-8192-95c1cb73259b-config\") pod \"dnsmasq-dns-86dd75bcb7-cxcts\" (UID: \"ff0b524c-9061-4af0-8192-95c1cb73259b\") " pod="openstack/dnsmasq-dns-86dd75bcb7-cxcts" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.666669 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4hdq\" (UniqueName: \"kubernetes.io/projected/ff0b524c-9061-4af0-8192-95c1cb73259b-kube-api-access-r4hdq\") pod \"dnsmasq-dns-86dd75bcb7-cxcts\" (UID: \"ff0b524c-9061-4af0-8192-95c1cb73259b\") " pod="openstack/dnsmasq-dns-86dd75bcb7-cxcts" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.748402 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38b63ee2-92e4-4dfe-afef-4f1a1391f80b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"38b63ee2-92e4-4dfe-afef-4f1a1391f80b\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.748793 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38b63ee2-92e4-4dfe-afef-4f1a1391f80b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"38b63ee2-92e4-4dfe-afef-4f1a1391f80b\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.748831 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhvlv\" (UniqueName: \"kubernetes.io/projected/38b63ee2-92e4-4dfe-afef-4f1a1391f80b-kube-api-access-fhvlv\") pod \"glance-default-internal-api-0\" (UID: \"38b63ee2-92e4-4dfe-afef-4f1a1391f80b\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.748846 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/38b63ee2-92e4-4dfe-afef-4f1a1391f80b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"38b63ee2-92e4-4dfe-afef-4f1a1391f80b\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.748863 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/38b63ee2-92e4-4dfe-afef-4f1a1391f80b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"38b63ee2-92e4-4dfe-afef-4f1a1391f80b\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.748910 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/38b63ee2-92e4-4dfe-afef-4f1a1391f80b-ceph\") pod \"glance-default-internal-api-0\" (UID: \"38b63ee2-92e4-4dfe-afef-4f1a1391f80b\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.748942 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38b63ee2-92e4-4dfe-afef-4f1a1391f80b-logs\") pod \"glance-default-internal-api-0\" (UID: \"38b63ee2-92e4-4dfe-afef-4f1a1391f80b\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.770934 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86dd75bcb7-cxcts" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.853356 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38b63ee2-92e4-4dfe-afef-4f1a1391f80b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"38b63ee2-92e4-4dfe-afef-4f1a1391f80b\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.853427 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38b63ee2-92e4-4dfe-afef-4f1a1391f80b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"38b63ee2-92e4-4dfe-afef-4f1a1391f80b\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.853493 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhvlv\" (UniqueName: \"kubernetes.io/projected/38b63ee2-92e4-4dfe-afef-4f1a1391f80b-kube-api-access-fhvlv\") pod \"glance-default-internal-api-0\" (UID: \"38b63ee2-92e4-4dfe-afef-4f1a1391f80b\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.853519 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38b63ee2-92e4-4dfe-afef-4f1a1391f80b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"38b63ee2-92e4-4dfe-afef-4f1a1391f80b\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.853552 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/38b63ee2-92e4-4dfe-afef-4f1a1391f80b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"38b63ee2-92e4-4dfe-afef-4f1a1391f80b\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 
08:50:45.853657 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/38b63ee2-92e4-4dfe-afef-4f1a1391f80b-ceph\") pod \"glance-default-internal-api-0\" (UID: \"38b63ee2-92e4-4dfe-afef-4f1a1391f80b\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.853729 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38b63ee2-92e4-4dfe-afef-4f1a1391f80b-logs\") pod \"glance-default-internal-api-0\" (UID: \"38b63ee2-92e4-4dfe-afef-4f1a1391f80b\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.854578 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38b63ee2-92e4-4dfe-afef-4f1a1391f80b-logs\") pod \"glance-default-internal-api-0\" (UID: \"38b63ee2-92e4-4dfe-afef-4f1a1391f80b\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.855276 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/38b63ee2-92e4-4dfe-afef-4f1a1391f80b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"38b63ee2-92e4-4dfe-afef-4f1a1391f80b\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.859411 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38b63ee2-92e4-4dfe-afef-4f1a1391f80b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"38b63ee2-92e4-4dfe-afef-4f1a1391f80b\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.860742 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/38b63ee2-92e4-4dfe-afef-4f1a1391f80b-ceph\") pod \"glance-default-internal-api-0\" (UID: \"38b63ee2-92e4-4dfe-afef-4f1a1391f80b\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.860744 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38b63ee2-92e4-4dfe-afef-4f1a1391f80b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"38b63ee2-92e4-4dfe-afef-4f1a1391f80b\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.875102 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhvlv\" (UniqueName: \"kubernetes.io/projected/38b63ee2-92e4-4dfe-afef-4f1a1391f80b-kube-api-access-fhvlv\") pod \"glance-default-internal-api-0\" (UID: \"38b63ee2-92e4-4dfe-afef-4f1a1391f80b\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.876975 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38b63ee2-92e4-4dfe-afef-4f1a1391f80b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"38b63ee2-92e4-4dfe-afef-4f1a1391f80b\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.956978 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 08:50:45 crc kubenswrapper[4946]: I1128 08:50:45.966144 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 08:50:46 crc kubenswrapper[4946]: I1128 08:50:46.241933 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86dd75bcb7-cxcts"] Nov 28 08:50:46 crc kubenswrapper[4946]: I1128 08:50:46.365067 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 08:50:46 crc kubenswrapper[4946]: I1128 08:50:46.532937 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 08:50:46 crc kubenswrapper[4946]: W1128 08:50:46.567655 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38b63ee2_92e4_4dfe_afef_4f1a1391f80b.slice/crio-c8f4e446da49087130dc426391da649debef79851c828237e5903a9e32975525 WatchSource:0}: Error finding container c8f4e446da49087130dc426391da649debef79851c828237e5903a9e32975525: Status 404 returned error can't find the container with id c8f4e446da49087130dc426391da649debef79851c828237e5903a9e32975525 Nov 28 08:50:46 crc kubenswrapper[4946]: I1128 08:50:46.985511 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0ad43077-11d0-4fef-b7f0-4a59f1187e6e","Type":"ContainerStarted","Data":"969f4a4765fdb142270e0c850c205a00035f6e46e8638b47cff9336849bc61c4"} Nov 28 08:50:46 crc kubenswrapper[4946]: I1128 08:50:46.985980 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0ad43077-11d0-4fef-b7f0-4a59f1187e6e","Type":"ContainerStarted","Data":"307b2627735676eeaae95f5f8773b3fbf62e7647ec50f1c5e7c9b943cbc073ee"} Nov 28 08:50:46 crc kubenswrapper[4946]: I1128 08:50:46.989859 4946 generic.go:334] "Generic (PLEG): container finished" podID="ff0b524c-9061-4af0-8192-95c1cb73259b" containerID="84d9e41ede266d8a1d8e25460269f10f16e3b01bf5c26ac7c7b98906b79f2654" exitCode=0 Nov 28 08:50:46 crc kubenswrapper[4946]: I1128 08:50:46.989919 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86dd75bcb7-cxcts" event={"ID":"ff0b524c-9061-4af0-8192-95c1cb73259b","Type":"ContainerDied","Data":"84d9e41ede266d8a1d8e25460269f10f16e3b01bf5c26ac7c7b98906b79f2654"} Nov 28 08:50:46 crc kubenswrapper[4946]: I1128 08:50:46.989940 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86dd75bcb7-cxcts" event={"ID":"ff0b524c-9061-4af0-8192-95c1cb73259b","Type":"ContainerStarted","Data":"6e22f8bc18ab3917f30d0e3b0691d2bd8ae77aa27a1d7a4b2b216277e6de9003"} Nov 28 08:50:46 crc kubenswrapper[4946]: I1128 08:50:46.992107 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"38b63ee2-92e4-4dfe-afef-4f1a1391f80b","Type":"ContainerStarted","Data":"c8f4e446da49087130dc426391da649debef79851c828237e5903a9e32975525"} Nov 28 08:50:48 crc kubenswrapper[4946]: I1128 08:50:48.005130 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86dd75bcb7-cxcts" event={"ID":"ff0b524c-9061-4af0-8192-95c1cb73259b","Type":"ContainerStarted","Data":"f9d32dcbfdc80df2a90d60ca2f7d86096048bf0c5e51b9c050b0d324ad1d69f3"} Nov 28 08:50:48 crc kubenswrapper[4946]: I1128 08:50:48.005610 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-86dd75bcb7-cxcts" Nov 28 08:50:48 crc kubenswrapper[4946]: I1128 08:50:48.007507 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"38b63ee2-92e4-4dfe-afef-4f1a1391f80b","Type":"ContainerStarted","Data":"98c57c9119e6de43fd42e785576cc83f557c9da4179da0c3a7b68e2e534e8772"} Nov 28 08:50:48 crc kubenswrapper[4946]: I1128 08:50:48.007550 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"38b63ee2-92e4-4dfe-afef-4f1a1391f80b","Type":"ContainerStarted","Data":"dc858b15d59d248ea08e470de33c6af4da9ac5604b9704e695257802a06ebd37"} Nov 28 08:50:48 crc kubenswrapper[4946]: I1128 08:50:48.015926 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0ad43077-11d0-4fef-b7f0-4a59f1187e6e","Type":"ContainerStarted","Data":"e4793a13632a8d5471df46a9aa66a7b1b63234970bc0b948c8035f1fd803f4a3"} Nov 28 08:50:48 crc kubenswrapper[4946]: I1128 08:50:48.016156 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="0ad43077-11d0-4fef-b7f0-4a59f1187e6e" containerName="glance-log" containerID="cri-o://969f4a4765fdb142270e0c850c205a00035f6e46e8638b47cff9336849bc61c4" gracePeriod=30 Nov 28 08:50:48 crc kubenswrapper[4946]: I1128 08:50:48.016437 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="0ad43077-11d0-4fef-b7f0-4a59f1187e6e" containerName="glance-httpd" containerID="cri-o://e4793a13632a8d5471df46a9aa66a7b1b63234970bc0b948c8035f1fd803f4a3" gracePeriod=30 Nov 28 08:50:48 crc kubenswrapper[4946]: I1128 08:50:48.024228 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86dd75bcb7-cxcts" podStartSLOduration=3.024210843 podStartE2EDuration="3.024210843s" podCreationTimestamp="2025-11-28 08:50:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 08:50:48.023560867 +0000 UTC m=+7102.401625968" watchObservedRunningTime="2025-11-28 08:50:48.024210843 +0000 UTC m=+7102.402275954" Nov 28 08:50:48 crc kubenswrapper[4946]: I1128 08:50:48.051302 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.051287214 podStartE2EDuration="3.051287214s" podCreationTimestamp="2025-11-28 08:50:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 08:50:48.045571042 +0000 UTC m=+7102.423636183" watchObservedRunningTime="2025-11-28 08:50:48.051287214 +0000 UTC m=+7102.429352325" Nov 28 08:50:48 crc kubenswrapper[4946]: I1128 08:50:48.078229 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.07820273 podStartE2EDuration="3.07820273s" podCreationTimestamp="2025-11-28 08:50:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 08:50:48.065823414 +0000 UTC m=+7102.443888525" watchObservedRunningTime="2025-11-28 08:50:48.07820273 +0000 UTC m=+7102.456267841" Nov 28 08:50:48 crc kubenswrapper[4946]: I1128 08:50:48.103909 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/glance-default-internal-api-0"] Nov 28 08:50:48 crc kubenswrapper[4946]: I1128 08:50:48.533575 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 08:50:48 crc kubenswrapper[4946]: I1128 08:50:48.607958 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0ad43077-11d0-4fef-b7f0-4a59f1187e6e-httpd-run\") pod \"0ad43077-11d0-4fef-b7f0-4a59f1187e6e\" (UID: \"0ad43077-11d0-4fef-b7f0-4a59f1187e6e\") " Nov 28 08:50:48 crc kubenswrapper[4946]: I1128 08:50:48.608034 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0ad43077-11d0-4fef-b7f0-4a59f1187e6e-ceph\") pod \"0ad43077-11d0-4fef-b7f0-4a59f1187e6e\" (UID: \"0ad43077-11d0-4fef-b7f0-4a59f1187e6e\") " Nov 28 08:50:48 crc kubenswrapper[4946]: I1128 08:50:48.608070 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5sjv\" (UniqueName: \"kubernetes.io/projected/0ad43077-11d0-4fef-b7f0-4a59f1187e6e-kube-api-access-l5sjv\") pod \"0ad43077-11d0-4fef-b7f0-4a59f1187e6e\" (UID: \"0ad43077-11d0-4fef-b7f0-4a59f1187e6e\") " Nov 28 08:50:48 crc kubenswrapper[4946]: I1128 08:50:48.608199 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ad43077-11d0-4fef-b7f0-4a59f1187e6e-config-data\") pod \"0ad43077-11d0-4fef-b7f0-4a59f1187e6e\" (UID: \"0ad43077-11d0-4fef-b7f0-4a59f1187e6e\") " Nov 28 08:50:48 crc kubenswrapper[4946]: I1128 08:50:48.608216 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ad43077-11d0-4fef-b7f0-4a59f1187e6e-scripts\") pod \"0ad43077-11d0-4fef-b7f0-4a59f1187e6e\" (UID: \"0ad43077-11d0-4fef-b7f0-4a59f1187e6e\") " Nov 28 08:50:48 crc kubenswrapper[4946]: I1128 08:50:48.608232 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ad43077-11d0-4fef-b7f0-4a59f1187e6e-logs\") pod \"0ad43077-11d0-4fef-b7f0-4a59f1187e6e\" (UID: \"0ad43077-11d0-4fef-b7f0-4a59f1187e6e\") " Nov 28 08:50:48 crc kubenswrapper[4946]: I1128 08:50:48.608290 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ad43077-11d0-4fef-b7f0-4a59f1187e6e-combined-ca-bundle\") pod \"0ad43077-11d0-4fef-b7f0-4a59f1187e6e\" (UID: \"0ad43077-11d0-4fef-b7f0-4a59f1187e6e\") " Nov 28 08:50:48 crc kubenswrapper[4946]: I1128 08:50:48.608476 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ad43077-11d0-4fef-b7f0-4a59f1187e6e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "0ad43077-11d0-4fef-b7f0-4a59f1187e6e" (UID: "0ad43077-11d0-4fef-b7f0-4a59f1187e6e"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 08:50:48 crc kubenswrapper[4946]: I1128 08:50:48.608863 4946 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0ad43077-11d0-4fef-b7f0-4a59f1187e6e-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 08:50:48 crc kubenswrapper[4946]: I1128 08:50:48.609346 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ad43077-11d0-4fef-b7f0-4a59f1187e6e-logs" (OuterVolumeSpecName: "logs") pod "0ad43077-11d0-4fef-b7f0-4a59f1187e6e" (UID: "0ad43077-11d0-4fef-b7f0-4a59f1187e6e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 08:50:48 crc kubenswrapper[4946]: I1128 08:50:48.614797 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ad43077-11d0-4fef-b7f0-4a59f1187e6e-ceph" (OuterVolumeSpecName: "ceph") pod "0ad43077-11d0-4fef-b7f0-4a59f1187e6e" (UID: "0ad43077-11d0-4fef-b7f0-4a59f1187e6e"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:50:48 crc kubenswrapper[4946]: I1128 08:50:48.616184 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ad43077-11d0-4fef-b7f0-4a59f1187e6e-kube-api-access-l5sjv" (OuterVolumeSpecName: "kube-api-access-l5sjv") pod "0ad43077-11d0-4fef-b7f0-4a59f1187e6e" (UID: "0ad43077-11d0-4fef-b7f0-4a59f1187e6e"). InnerVolumeSpecName "kube-api-access-l5sjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:50:48 crc kubenswrapper[4946]: I1128 08:50:48.626826 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ad43077-11d0-4fef-b7f0-4a59f1187e6e-scripts" (OuterVolumeSpecName: "scripts") pod "0ad43077-11d0-4fef-b7f0-4a59f1187e6e" (UID: "0ad43077-11d0-4fef-b7f0-4a59f1187e6e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:50:48 crc kubenswrapper[4946]: I1128 08:50:48.639757 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ad43077-11d0-4fef-b7f0-4a59f1187e6e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ad43077-11d0-4fef-b7f0-4a59f1187e6e" (UID: "0ad43077-11d0-4fef-b7f0-4a59f1187e6e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:50:48 crc kubenswrapper[4946]: I1128 08:50:48.671033 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ad43077-11d0-4fef-b7f0-4a59f1187e6e-config-data" (OuterVolumeSpecName: "config-data") pod "0ad43077-11d0-4fef-b7f0-4a59f1187e6e" (UID: "0ad43077-11d0-4fef-b7f0-4a59f1187e6e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:50:48 crc kubenswrapper[4946]: I1128 08:50:48.710626 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ad43077-11d0-4fef-b7f0-4a59f1187e6e-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 08:50:48 crc kubenswrapper[4946]: I1128 08:50:48.710657 4946 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ad43077-11d0-4fef-b7f0-4a59f1187e6e-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 08:50:48 crc kubenswrapper[4946]: I1128 08:50:48.710668 4946 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ad43077-11d0-4fef-b7f0-4a59f1187e6e-logs\") on node \"crc\" DevicePath \"\"" Nov 28 08:50:48 crc kubenswrapper[4946]: I1128 08:50:48.710678 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ad43077-11d0-4fef-b7f0-4a59f1187e6e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 08:50:48 crc kubenswrapper[4946]: I1128 08:50:48.710690 4946 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0ad43077-11d0-4fef-b7f0-4a59f1187e6e-ceph\") on node \"crc\" DevicePath \"\"" Nov 28 08:50:48 crc kubenswrapper[4946]: I1128 08:50:48.710699 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5sjv\" (UniqueName: \"kubernetes.io/projected/0ad43077-11d0-4fef-b7f0-4a59f1187e6e-kube-api-access-l5sjv\") on node \"crc\" DevicePath \"\"" Nov 28 08:50:49 crc kubenswrapper[4946]: I1128 08:50:49.032227 4946 generic.go:334] "Generic (PLEG): container finished" podID="0ad43077-11d0-4fef-b7f0-4a59f1187e6e" containerID="e4793a13632a8d5471df46a9aa66a7b1b63234970bc0b948c8035f1fd803f4a3" exitCode=0 Nov 28 08:50:49 crc kubenswrapper[4946]: I1128 08:50:49.032750 4946 generic.go:334] "Generic (PLEG): container finished" podID="0ad43077-11d0-4fef-b7f0-4a59f1187e6e" containerID="969f4a4765fdb142270e0c850c205a00035f6e46e8638b47cff9336849bc61c4" exitCode=143 Nov 28 08:50:49 crc kubenswrapper[4946]: I1128 08:50:49.032342 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0ad43077-11d0-4fef-b7f0-4a59f1187e6e","Type":"ContainerDied","Data":"e4793a13632a8d5471df46a9aa66a7b1b63234970bc0b948c8035f1fd803f4a3"} Nov 28 08:50:49 crc kubenswrapper[4946]: I1128 08:50:49.032860 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0ad43077-11d0-4fef-b7f0-4a59f1187e6e","Type":"ContainerDied","Data":"969f4a4765fdb142270e0c850c205a00035f6e46e8638b47cff9336849bc61c4"} Nov 28 08:50:49 crc kubenswrapper[4946]: I1128 08:50:49.032899 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0ad43077-11d0-4fef-b7f0-4a59f1187e6e","Type":"ContainerDied","Data":"307b2627735676eeaae95f5f8773b3fbf62e7647ec50f1c5e7c9b943cbc073ee"} Nov 28 08:50:49 crc kubenswrapper[4946]: I1128 08:50:49.032939 4946 scope.go:117] "RemoveContainer" containerID="e4793a13632a8d5471df46a9aa66a7b1b63234970bc0b948c8035f1fd803f4a3" Nov 28 08:50:49 crc kubenswrapper[4946]: I1128 08:50:49.032331 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 08:50:49 crc kubenswrapper[4946]: I1128 08:50:49.069088 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 08:50:49 crc kubenswrapper[4946]: I1128 08:50:49.075558 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 08:50:49 crc kubenswrapper[4946]: I1128 08:50:49.083607 4946 scope.go:117] "RemoveContainer" containerID="969f4a4765fdb142270e0c850c205a00035f6e46e8638b47cff9336849bc61c4" Nov 28 08:50:49 crc kubenswrapper[4946]: I1128 08:50:49.117415 4946 scope.go:117] "RemoveContainer" containerID="e4793a13632a8d5471df46a9aa66a7b1b63234970bc0b948c8035f1fd803f4a3" Nov 28 08:50:49 crc kubenswrapper[4946]: E1128 08:50:49.118732 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4793a13632a8d5471df46a9aa66a7b1b63234970bc0b948c8035f1fd803f4a3\": container with ID starting with e4793a13632a8d5471df46a9aa66a7b1b63234970bc0b948c8035f1fd803f4a3 not found: ID does not exist" containerID="e4793a13632a8d5471df46a9aa66a7b1b63234970bc0b948c8035f1fd803f4a3" Nov 28 08:50:49 crc kubenswrapper[4946]: I1128 08:50:49.118763 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4793a13632a8d5471df46a9aa66a7b1b63234970bc0b948c8035f1fd803f4a3"} err="failed to get container status \"e4793a13632a8d5471df46a9aa66a7b1b63234970bc0b948c8035f1fd803f4a3\": rpc error: code = NotFound desc = could not find container \"e4793a13632a8d5471df46a9aa66a7b1b63234970bc0b948c8035f1fd803f4a3\": container with ID starting with e4793a13632a8d5471df46a9aa66a7b1b63234970bc0b948c8035f1fd803f4a3 not found: ID does not exist" Nov 28 08:50:49 crc kubenswrapper[4946]: I1128 08:50:49.118783 4946 scope.go:117] "RemoveContainer" containerID="969f4a4765fdb142270e0c850c205a00035f6e46e8638b47cff9336849bc61c4" Nov 28 08:50:49 crc kubenswrapper[4946]: E1128 08:50:49.119020 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"969f4a4765fdb142270e0c850c205a00035f6e46e8638b47cff9336849bc61c4\": container with ID starting with 969f4a4765fdb142270e0c850c205a00035f6e46e8638b47cff9336849bc61c4 not found: ID does not exist" containerID="969f4a4765fdb142270e0c850c205a00035f6e46e8638b47cff9336849bc61c4" Nov 28 08:50:49 crc kubenswrapper[4946]: I1128 08:50:49.119036 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"969f4a4765fdb142270e0c850c205a00035f6e46e8638b47cff9336849bc61c4"} err="failed to get container status \"969f4a4765fdb142270e0c850c205a00035f6e46e8638b47cff9336849bc61c4\": rpc error: code = NotFound desc = could not find container \"969f4a4765fdb142270e0c850c205a00035f6e46e8638b47cff9336849bc61c4\": container with ID starting with 969f4a4765fdb142270e0c850c205a00035f6e46e8638b47cff9336849bc61c4 not found: ID does not exist" Nov 28 08:50:49 crc kubenswrapper[4946]: I1128 08:50:49.119048 4946 scope.go:117] "RemoveContainer" containerID="e4793a13632a8d5471df46a9aa66a7b1b63234970bc0b948c8035f1fd803f4a3" Nov 28 08:50:49 crc kubenswrapper[4946]: I1128 08:50:49.119228 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4793a13632a8d5471df46a9aa66a7b1b63234970bc0b948c8035f1fd803f4a3"} err="failed to get container status 
\"e4793a13632a8d5471df46a9aa66a7b1b63234970bc0b948c8035f1fd803f4a3\": rpc error: code = NotFound desc = could not find container \"e4793a13632a8d5471df46a9aa66a7b1b63234970bc0b948c8035f1fd803f4a3\": container with ID starting with e4793a13632a8d5471df46a9aa66a7b1b63234970bc0b948c8035f1fd803f4a3 not found: ID does not exist" Nov 28 08:50:49 crc kubenswrapper[4946]: I1128 08:50:49.119243 4946 scope.go:117] "RemoveContainer" containerID="969f4a4765fdb142270e0c850c205a00035f6e46e8638b47cff9336849bc61c4" Nov 28 08:50:49 crc kubenswrapper[4946]: I1128 08:50:49.119407 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"969f4a4765fdb142270e0c850c205a00035f6e46e8638b47cff9336849bc61c4"} err="failed to get container status \"969f4a4765fdb142270e0c850c205a00035f6e46e8638b47cff9336849bc61c4\": rpc error: code = NotFound desc = could not find container \"969f4a4765fdb142270e0c850c205a00035f6e46e8638b47cff9336849bc61c4\": container with ID starting with 969f4a4765fdb142270e0c850c205a00035f6e46e8638b47cff9336849bc61c4 not found: ID does not exist" Nov 28 08:50:49 crc kubenswrapper[4946]: I1128 08:50:49.124135 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 08:50:49 crc kubenswrapper[4946]: E1128 08:50:49.125219 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ad43077-11d0-4fef-b7f0-4a59f1187e6e" containerName="glance-httpd" Nov 28 08:50:49 crc kubenswrapper[4946]: I1128 08:50:49.125441 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ad43077-11d0-4fef-b7f0-4a59f1187e6e" containerName="glance-httpd" Nov 28 08:50:49 crc kubenswrapper[4946]: E1128 08:50:49.125635 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ad43077-11d0-4fef-b7f0-4a59f1187e6e" containerName="glance-log" Nov 28 08:50:49 crc kubenswrapper[4946]: I1128 08:50:49.125774 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ad43077-11d0-4fef-b7f0-4a59f1187e6e" containerName="glance-log" Nov 28 08:50:49 crc kubenswrapper[4946]: I1128 08:50:49.126256 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ad43077-11d0-4fef-b7f0-4a59f1187e6e" containerName="glance-log" Nov 28 08:50:49 crc kubenswrapper[4946]: I1128 08:50:49.126420 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ad43077-11d0-4fef-b7f0-4a59f1187e6e" containerName="glance-httpd" Nov 28 08:50:49 crc kubenswrapper[4946]: I1128 08:50:49.128353 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 08:50:49 crc kubenswrapper[4946]: I1128 08:50:49.138204 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 28 08:50:49 crc kubenswrapper[4946]: I1128 08:50:49.142391 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 08:50:49 crc kubenswrapper[4946]: I1128 08:50:49.225383 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e32f1876-143a-4c22-b70f-19c9beaee9f5-ceph\") pod \"glance-default-external-api-0\" (UID: \"e32f1876-143a-4c22-b70f-19c9beaee9f5\") " pod="openstack/glance-default-external-api-0" Nov 28 08:50:49 crc kubenswrapper[4946]: I1128 08:50:49.225467 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rqsv\" (UniqueName: \"kubernetes.io/projected/e32f1876-143a-4c22-b70f-19c9beaee9f5-kube-api-access-8rqsv\") pod \"glance-default-external-api-0\" (UID: \"e32f1876-143a-4c22-b70f-19c9beaee9f5\") " pod="openstack/glance-default-external-api-0" Nov 28 08:50:49 crc kubenswrapper[4946]: I1128 08:50:49.225586 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e32f1876-143a-4c22-b70f-19c9beaee9f5-logs\") pod \"glance-default-external-api-0\" (UID: \"e32f1876-143a-4c22-b70f-19c9beaee9f5\") " pod="openstack/glance-default-external-api-0" Nov 28 08:50:49 crc kubenswrapper[4946]: I1128 08:50:49.225661 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e32f1876-143a-4c22-b70f-19c9beaee9f5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e32f1876-143a-4c22-b70f-19c9beaee9f5\") " pod="openstack/glance-default-external-api-0" Nov 28 08:50:49 crc kubenswrapper[4946]: I1128 08:50:49.225795 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e32f1876-143a-4c22-b70f-19c9beaee9f5-scripts\") pod \"glance-default-external-api-0\" (UID: \"e32f1876-143a-4c22-b70f-19c9beaee9f5\") " pod="openstack/glance-default-external-api-0" Nov 28 08:50:49 crc kubenswrapper[4946]: I1128 08:50:49.225931 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e32f1876-143a-4c22-b70f-19c9beaee9f5-config-data\") pod \"glance-default-external-api-0\" (UID: \"e32f1876-143a-4c22-b70f-19c9beaee9f5\") " pod="openstack/glance-default-external-api-0" Nov 28 08:50:49 crc kubenswrapper[4946]: I1128 08:50:49.226044 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e32f1876-143a-4c22-b70f-19c9beaee9f5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e32f1876-143a-4c22-b70f-19c9beaee9f5\") " pod="openstack/glance-default-external-api-0" Nov 28 08:50:49 crc kubenswrapper[4946]: I1128 08:50:49.327324 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e32f1876-143a-4c22-b70f-19c9beaee9f5-httpd-run\") pod \"glance-default-external-api-0\" (UID: 
\"e32f1876-143a-4c22-b70f-19c9beaee9f5\") " pod="openstack/glance-default-external-api-0" Nov 28 08:50:49 crc kubenswrapper[4946]: I1128 08:50:49.327501 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e32f1876-143a-4c22-b70f-19c9beaee9f5-ceph\") pod \"glance-default-external-api-0\" (UID: \"e32f1876-143a-4c22-b70f-19c9beaee9f5\") " pod="openstack/glance-default-external-api-0" Nov 28 08:50:49 crc kubenswrapper[4946]: I1128 08:50:49.327546 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rqsv\" (UniqueName: \"kubernetes.io/projected/e32f1876-143a-4c22-b70f-19c9beaee9f5-kube-api-access-8rqsv\") pod \"glance-default-external-api-0\" (UID: \"e32f1876-143a-4c22-b70f-19c9beaee9f5\") " pod="openstack/glance-default-external-api-0" Nov 28 08:50:49 crc kubenswrapper[4946]: I1128 08:50:49.327597 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e32f1876-143a-4c22-b70f-19c9beaee9f5-logs\") pod \"glance-default-external-api-0\" (UID: \"e32f1876-143a-4c22-b70f-19c9beaee9f5\") " pod="openstack/glance-default-external-api-0" Nov 28 08:50:49 crc kubenswrapper[4946]: I1128 08:50:49.327679 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e32f1876-143a-4c22-b70f-19c9beaee9f5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e32f1876-143a-4c22-b70f-19c9beaee9f5\") " pod="openstack/glance-default-external-api-0" Nov 28 08:50:49 crc kubenswrapper[4946]: I1128 08:50:49.327717 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e32f1876-143a-4c22-b70f-19c9beaee9f5-scripts\") pod \"glance-default-external-api-0\" (UID: \"e32f1876-143a-4c22-b70f-19c9beaee9f5\") " pod="openstack/glance-default-external-api-0" Nov 28 08:50:49 crc kubenswrapper[4946]: I1128 08:50:49.327760 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e32f1876-143a-4c22-b70f-19c9beaee9f5-config-data\") pod \"glance-default-external-api-0\" (UID: \"e32f1876-143a-4c22-b70f-19c9beaee9f5\") " pod="openstack/glance-default-external-api-0" Nov 28 08:50:49 crc kubenswrapper[4946]: I1128 08:50:49.327957 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e32f1876-143a-4c22-b70f-19c9beaee9f5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e32f1876-143a-4c22-b70f-19c9beaee9f5\") " pod="openstack/glance-default-external-api-0" Nov 28 08:50:49 crc kubenswrapper[4946]: I1128 08:50:49.329116 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e32f1876-143a-4c22-b70f-19c9beaee9f5-logs\") pod \"glance-default-external-api-0\" (UID: \"e32f1876-143a-4c22-b70f-19c9beaee9f5\") " pod="openstack/glance-default-external-api-0" Nov 28 08:50:49 crc kubenswrapper[4946]: I1128 08:50:49.332537 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e32f1876-143a-4c22-b70f-19c9beaee9f5-scripts\") pod \"glance-default-external-api-0\" (UID: \"e32f1876-143a-4c22-b70f-19c9beaee9f5\") " pod="openstack/glance-default-external-api-0" Nov 28 08:50:49 crc kubenswrapper[4946]: I1128 08:50:49.335204 
4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e32f1876-143a-4c22-b70f-19c9beaee9f5-config-data\") pod \"glance-default-external-api-0\" (UID: \"e32f1876-143a-4c22-b70f-19c9beaee9f5\") " pod="openstack/glance-default-external-api-0" Nov 28 08:50:49 crc kubenswrapper[4946]: I1128 08:50:49.341009 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e32f1876-143a-4c22-b70f-19c9beaee9f5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e32f1876-143a-4c22-b70f-19c9beaee9f5\") " pod="openstack/glance-default-external-api-0" Nov 28 08:50:49 crc kubenswrapper[4946]: I1128 08:50:49.341899 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e32f1876-143a-4c22-b70f-19c9beaee9f5-ceph\") pod \"glance-default-external-api-0\" (UID: \"e32f1876-143a-4c22-b70f-19c9beaee9f5\") " pod="openstack/glance-default-external-api-0" Nov 28 08:50:49 crc kubenswrapper[4946]: I1128 08:50:49.351102 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rqsv\" (UniqueName: \"kubernetes.io/projected/e32f1876-143a-4c22-b70f-19c9beaee9f5-kube-api-access-8rqsv\") pod \"glance-default-external-api-0\" (UID: \"e32f1876-143a-4c22-b70f-19c9beaee9f5\") " pod="openstack/glance-default-external-api-0" Nov 28 08:50:49 crc kubenswrapper[4946]: I1128 08:50:49.451696 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 08:50:50 crc kubenswrapper[4946]: I1128 08:50:50.003179 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ad43077-11d0-4fef-b7f0-4a59f1187e6e" path="/var/lib/kubelet/pods/0ad43077-11d0-4fef-b7f0-4a59f1187e6e/volumes" Nov 28 08:50:50 crc kubenswrapper[4946]: I1128 08:50:50.043009 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 08:50:50 crc kubenswrapper[4946]: I1128 08:50:50.045234 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="38b63ee2-92e4-4dfe-afef-4f1a1391f80b" containerName="glance-log" containerID="cri-o://dc858b15d59d248ea08e470de33c6af4da9ac5604b9704e695257802a06ebd37" gracePeriod=30 Nov 28 08:50:50 crc kubenswrapper[4946]: I1128 08:50:50.045690 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="38b63ee2-92e4-4dfe-afef-4f1a1391f80b" containerName="glance-httpd" containerID="cri-o://98c57c9119e6de43fd42e785576cc83f557c9da4179da0c3a7b68e2e534e8772" gracePeriod=30 Nov 28 08:50:50 crc kubenswrapper[4946]: I1128 08:50:50.571885 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 08:50:50 crc kubenswrapper[4946]: I1128 08:50:50.649688 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhvlv\" (UniqueName: \"kubernetes.io/projected/38b63ee2-92e4-4dfe-afef-4f1a1391f80b-kube-api-access-fhvlv\") pod \"38b63ee2-92e4-4dfe-afef-4f1a1391f80b\" (UID: \"38b63ee2-92e4-4dfe-afef-4f1a1391f80b\") " Nov 28 08:50:50 crc kubenswrapper[4946]: I1128 08:50:50.649758 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/38b63ee2-92e4-4dfe-afef-4f1a1391f80b-httpd-run\") pod \"38b63ee2-92e4-4dfe-afef-4f1a1391f80b\" (UID: \"38b63ee2-92e4-4dfe-afef-4f1a1391f80b\") " Nov 28 08:50:50 crc kubenswrapper[4946]: I1128 08:50:50.649788 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/38b63ee2-92e4-4dfe-afef-4f1a1391f80b-ceph\") pod \"38b63ee2-92e4-4dfe-afef-4f1a1391f80b\" (UID: \"38b63ee2-92e4-4dfe-afef-4f1a1391f80b\") " Nov 28 08:50:50 crc kubenswrapper[4946]: I1128 08:50:50.649837 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38b63ee2-92e4-4dfe-afef-4f1a1391f80b-logs\") pod \"38b63ee2-92e4-4dfe-afef-4f1a1391f80b\" (UID: \"38b63ee2-92e4-4dfe-afef-4f1a1391f80b\") " Nov 28 08:50:50 crc kubenswrapper[4946]: I1128 08:50:50.649908 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38b63ee2-92e4-4dfe-afef-4f1a1391f80b-combined-ca-bundle\") pod \"38b63ee2-92e4-4dfe-afef-4f1a1391f80b\" (UID: \"38b63ee2-92e4-4dfe-afef-4f1a1391f80b\") " Nov 28 08:50:50 crc kubenswrapper[4946]: I1128 08:50:50.649945 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38b63ee2-92e4-4dfe-afef-4f1a1391f80b-scripts\") pod \"38b63ee2-92e4-4dfe-afef-4f1a1391f80b\" (UID: \"38b63ee2-92e4-4dfe-afef-4f1a1391f80b\") " Nov 28 08:50:50 crc kubenswrapper[4946]: I1128 08:50:50.650047 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38b63ee2-92e4-4dfe-afef-4f1a1391f80b-config-data\") pod \"38b63ee2-92e4-4dfe-afef-4f1a1391f80b\" (UID: \"38b63ee2-92e4-4dfe-afef-4f1a1391f80b\") " Nov 28 08:50:50 crc kubenswrapper[4946]: I1128 08:50:50.650773 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38b63ee2-92e4-4dfe-afef-4f1a1391f80b-logs" (OuterVolumeSpecName: "logs") pod "38b63ee2-92e4-4dfe-afef-4f1a1391f80b" (UID: "38b63ee2-92e4-4dfe-afef-4f1a1391f80b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 08:50:50 crc kubenswrapper[4946]: I1128 08:50:50.650901 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38b63ee2-92e4-4dfe-afef-4f1a1391f80b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "38b63ee2-92e4-4dfe-afef-4f1a1391f80b" (UID: "38b63ee2-92e4-4dfe-afef-4f1a1391f80b"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 08:50:50 crc kubenswrapper[4946]: I1128 08:50:50.654042 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38b63ee2-92e4-4dfe-afef-4f1a1391f80b-ceph" (OuterVolumeSpecName: "ceph") pod "38b63ee2-92e4-4dfe-afef-4f1a1391f80b" (UID: "38b63ee2-92e4-4dfe-afef-4f1a1391f80b"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:50:50 crc kubenswrapper[4946]: I1128 08:50:50.658153 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38b63ee2-92e4-4dfe-afef-4f1a1391f80b-scripts" (OuterVolumeSpecName: "scripts") pod "38b63ee2-92e4-4dfe-afef-4f1a1391f80b" (UID: "38b63ee2-92e4-4dfe-afef-4f1a1391f80b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:50:50 crc kubenswrapper[4946]: I1128 08:50:50.660102 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38b63ee2-92e4-4dfe-afef-4f1a1391f80b-kube-api-access-fhvlv" (OuterVolumeSpecName: "kube-api-access-fhvlv") pod "38b63ee2-92e4-4dfe-afef-4f1a1391f80b" (UID: "38b63ee2-92e4-4dfe-afef-4f1a1391f80b"). InnerVolumeSpecName "kube-api-access-fhvlv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:50:50 crc kubenswrapper[4946]: I1128 08:50:50.681210 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38b63ee2-92e4-4dfe-afef-4f1a1391f80b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38b63ee2-92e4-4dfe-afef-4f1a1391f80b" (UID: "38b63ee2-92e4-4dfe-afef-4f1a1391f80b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:50:50 crc kubenswrapper[4946]: I1128 08:50:50.735304 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38b63ee2-92e4-4dfe-afef-4f1a1391f80b-config-data" (OuterVolumeSpecName: "config-data") pod "38b63ee2-92e4-4dfe-afef-4f1a1391f80b" (UID: "38b63ee2-92e4-4dfe-afef-4f1a1391f80b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:50:50 crc kubenswrapper[4946]: I1128 08:50:50.751939 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38b63ee2-92e4-4dfe-afef-4f1a1391f80b-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 08:50:50 crc kubenswrapper[4946]: I1128 08:50:50.751979 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhvlv\" (UniqueName: \"kubernetes.io/projected/38b63ee2-92e4-4dfe-afef-4f1a1391f80b-kube-api-access-fhvlv\") on node \"crc\" DevicePath \"\"" Nov 28 08:50:50 crc kubenswrapper[4946]: I1128 08:50:50.751997 4946 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/38b63ee2-92e4-4dfe-afef-4f1a1391f80b-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 08:50:50 crc kubenswrapper[4946]: I1128 08:50:50.752014 4946 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/38b63ee2-92e4-4dfe-afef-4f1a1391f80b-ceph\") on node \"crc\" DevicePath \"\"" Nov 28 08:50:50 crc kubenswrapper[4946]: I1128 08:50:50.752028 4946 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38b63ee2-92e4-4dfe-afef-4f1a1391f80b-logs\") on node \"crc\" DevicePath \"\"" Nov 28 08:50:50 crc kubenswrapper[4946]: I1128 08:50:50.752042 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38b63ee2-92e4-4dfe-afef-4f1a1391f80b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 08:50:50 crc kubenswrapper[4946]: I1128 08:50:50.752056 4946 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38b63ee2-92e4-4dfe-afef-4f1a1391f80b-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 08:50:51 crc kubenswrapper[4946]: I1128 08:50:51.067329 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e32f1876-143a-4c22-b70f-19c9beaee9f5","Type":"ContainerStarted","Data":"31cf236dfe5b715766773673ba523d31b8564e0d30a333ed9e7c013532a72679"} Nov 28 08:50:51 crc kubenswrapper[4946]: I1128 08:50:51.067399 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e32f1876-143a-4c22-b70f-19c9beaee9f5","Type":"ContainerStarted","Data":"69ac4319b546b1d76b7207382b0ee2501ccc1b4e554f45fa30c0ce9dac86d731"} Nov 28 08:50:51 crc kubenswrapper[4946]: I1128 08:50:51.070488 4946 generic.go:334] "Generic (PLEG): container finished" podID="38b63ee2-92e4-4dfe-afef-4f1a1391f80b" containerID="98c57c9119e6de43fd42e785576cc83f557c9da4179da0c3a7b68e2e534e8772" exitCode=0 Nov 28 08:50:51 crc kubenswrapper[4946]: I1128 08:50:51.070524 4946 generic.go:334] "Generic (PLEG): container finished" podID="38b63ee2-92e4-4dfe-afef-4f1a1391f80b" containerID="dc858b15d59d248ea08e470de33c6af4da9ac5604b9704e695257802a06ebd37" exitCode=143 Nov 28 08:50:51 crc kubenswrapper[4946]: I1128 08:50:51.070552 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"38b63ee2-92e4-4dfe-afef-4f1a1391f80b","Type":"ContainerDied","Data":"98c57c9119e6de43fd42e785576cc83f557c9da4179da0c3a7b68e2e534e8772"} Nov 28 08:50:51 crc kubenswrapper[4946]: I1128 08:50:51.070569 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 08:50:51 crc kubenswrapper[4946]: I1128 08:50:51.070591 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"38b63ee2-92e4-4dfe-afef-4f1a1391f80b","Type":"ContainerDied","Data":"dc858b15d59d248ea08e470de33c6af4da9ac5604b9704e695257802a06ebd37"} Nov 28 08:50:51 crc kubenswrapper[4946]: I1128 08:50:51.070608 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"38b63ee2-92e4-4dfe-afef-4f1a1391f80b","Type":"ContainerDied","Data":"c8f4e446da49087130dc426391da649debef79851c828237e5903a9e32975525"} Nov 28 08:50:51 crc kubenswrapper[4946]: I1128 08:50:51.070633 4946 scope.go:117] "RemoveContainer" containerID="98c57c9119e6de43fd42e785576cc83f557c9da4179da0c3a7b68e2e534e8772" Nov 28 08:50:51 crc kubenswrapper[4946]: I1128 08:50:51.101343 4946 scope.go:117] "RemoveContainer" containerID="dc858b15d59d248ea08e470de33c6af4da9ac5604b9704e695257802a06ebd37" Nov 28 08:50:51 crc kubenswrapper[4946]: I1128 08:50:51.149855 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 08:50:51 crc kubenswrapper[4946]: I1128 08:50:51.155255 4946 scope.go:117] "RemoveContainer" containerID="98c57c9119e6de43fd42e785576cc83f557c9da4179da0c3a7b68e2e534e8772" Nov 28 08:50:51 crc kubenswrapper[4946]: E1128 08:50:51.155854 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98c57c9119e6de43fd42e785576cc83f557c9da4179da0c3a7b68e2e534e8772\": container with ID starting with 98c57c9119e6de43fd42e785576cc83f557c9da4179da0c3a7b68e2e534e8772 not found: ID does not exist" containerID="98c57c9119e6de43fd42e785576cc83f557c9da4179da0c3a7b68e2e534e8772" Nov 28 08:50:51 crc kubenswrapper[4946]: I1128 08:50:51.155894 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98c57c9119e6de43fd42e785576cc83f557c9da4179da0c3a7b68e2e534e8772"} err="failed to get container status \"98c57c9119e6de43fd42e785576cc83f557c9da4179da0c3a7b68e2e534e8772\": rpc error: code = NotFound desc = could not find container \"98c57c9119e6de43fd42e785576cc83f557c9da4179da0c3a7b68e2e534e8772\": container with ID starting with 98c57c9119e6de43fd42e785576cc83f557c9da4179da0c3a7b68e2e534e8772 not found: ID does not exist" Nov 28 08:50:51 crc kubenswrapper[4946]: I1128 08:50:51.155918 4946 scope.go:117] "RemoveContainer" containerID="dc858b15d59d248ea08e470de33c6af4da9ac5604b9704e695257802a06ebd37" Nov 28 08:50:51 crc kubenswrapper[4946]: E1128 08:50:51.156228 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc858b15d59d248ea08e470de33c6af4da9ac5604b9704e695257802a06ebd37\": container with ID starting with dc858b15d59d248ea08e470de33c6af4da9ac5604b9704e695257802a06ebd37 not found: ID does not exist" containerID="dc858b15d59d248ea08e470de33c6af4da9ac5604b9704e695257802a06ebd37" Nov 28 08:50:51 crc kubenswrapper[4946]: I1128 08:50:51.156255 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc858b15d59d248ea08e470de33c6af4da9ac5604b9704e695257802a06ebd37"} err="failed to get container status \"dc858b15d59d248ea08e470de33c6af4da9ac5604b9704e695257802a06ebd37\": rpc error: code = NotFound desc = could not find container 
\"dc858b15d59d248ea08e470de33c6af4da9ac5604b9704e695257802a06ebd37\": container with ID starting with dc858b15d59d248ea08e470de33c6af4da9ac5604b9704e695257802a06ebd37 not found: ID does not exist" Nov 28 08:50:51 crc kubenswrapper[4946]: I1128 08:50:51.156271 4946 scope.go:117] "RemoveContainer" containerID="98c57c9119e6de43fd42e785576cc83f557c9da4179da0c3a7b68e2e534e8772" Nov 28 08:50:51 crc kubenswrapper[4946]: I1128 08:50:51.156705 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98c57c9119e6de43fd42e785576cc83f557c9da4179da0c3a7b68e2e534e8772"} err="failed to get container status \"98c57c9119e6de43fd42e785576cc83f557c9da4179da0c3a7b68e2e534e8772\": rpc error: code = NotFound desc = could not find container \"98c57c9119e6de43fd42e785576cc83f557c9da4179da0c3a7b68e2e534e8772\": container with ID starting with 98c57c9119e6de43fd42e785576cc83f557c9da4179da0c3a7b68e2e534e8772 not found: ID does not exist" Nov 28 08:50:51 crc kubenswrapper[4946]: I1128 08:50:51.156766 4946 scope.go:117] "RemoveContainer" containerID="dc858b15d59d248ea08e470de33c6af4da9ac5604b9704e695257802a06ebd37" Nov 28 08:50:51 crc kubenswrapper[4946]: I1128 08:50:51.157123 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc858b15d59d248ea08e470de33c6af4da9ac5604b9704e695257802a06ebd37"} err="failed to get container status \"dc858b15d59d248ea08e470de33c6af4da9ac5604b9704e695257802a06ebd37\": rpc error: code = NotFound desc = could not find container \"dc858b15d59d248ea08e470de33c6af4da9ac5604b9704e695257802a06ebd37\": container with ID starting with dc858b15d59d248ea08e470de33c6af4da9ac5604b9704e695257802a06ebd37 not found: ID does not exist" Nov 28 08:50:51 crc kubenswrapper[4946]: I1128 08:50:51.159437 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 08:50:51 crc kubenswrapper[4946]: I1128 08:50:51.174058 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 08:50:51 crc kubenswrapper[4946]: E1128 08:50:51.174649 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38b63ee2-92e4-4dfe-afef-4f1a1391f80b" containerName="glance-httpd" Nov 28 08:50:51 crc kubenswrapper[4946]: I1128 08:50:51.174671 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="38b63ee2-92e4-4dfe-afef-4f1a1391f80b" containerName="glance-httpd" Nov 28 08:50:51 crc kubenswrapper[4946]: E1128 08:50:51.174689 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38b63ee2-92e4-4dfe-afef-4f1a1391f80b" containerName="glance-log" Nov 28 08:50:51 crc kubenswrapper[4946]: I1128 08:50:51.174700 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="38b63ee2-92e4-4dfe-afef-4f1a1391f80b" containerName="glance-log" Nov 28 08:50:51 crc kubenswrapper[4946]: I1128 08:50:51.175184 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="38b63ee2-92e4-4dfe-afef-4f1a1391f80b" containerName="glance-httpd" Nov 28 08:50:51 crc kubenswrapper[4946]: I1128 08:50:51.175229 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="38b63ee2-92e4-4dfe-afef-4f1a1391f80b" containerName="glance-log" Nov 28 08:50:51 crc kubenswrapper[4946]: I1128 08:50:51.178013 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 08:50:51 crc kubenswrapper[4946]: I1128 08:50:51.189176 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 28 08:50:51 crc kubenswrapper[4946]: I1128 08:50:51.199111 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 08:50:51 crc kubenswrapper[4946]: I1128 08:50:51.266841 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/896ed951-3552-439f-836f-245b73b84c91-ceph\") pod \"glance-default-internal-api-0\" (UID: \"896ed951-3552-439f-836f-245b73b84c91\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:50:51 crc kubenswrapper[4946]: I1128 08:50:51.267171 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/896ed951-3552-439f-836f-245b73b84c91-scripts\") pod \"glance-default-internal-api-0\" (UID: \"896ed951-3552-439f-836f-245b73b84c91\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:50:51 crc kubenswrapper[4946]: I1128 08:50:51.267348 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/896ed951-3552-439f-836f-245b73b84c91-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"896ed951-3552-439f-836f-245b73b84c91\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:50:51 crc kubenswrapper[4946]: I1128 08:50:51.267482 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/896ed951-3552-439f-836f-245b73b84c91-config-data\") pod \"glance-default-internal-api-0\" (UID: \"896ed951-3552-439f-836f-245b73b84c91\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:50:51 crc kubenswrapper[4946]: I1128 08:50:51.267591 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/896ed951-3552-439f-836f-245b73b84c91-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"896ed951-3552-439f-836f-245b73b84c91\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:50:51 crc kubenswrapper[4946]: I1128 08:50:51.267723 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw67t\" (UniqueName: \"kubernetes.io/projected/896ed951-3552-439f-836f-245b73b84c91-kube-api-access-cw67t\") pod \"glance-default-internal-api-0\" (UID: \"896ed951-3552-439f-836f-245b73b84c91\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:50:51 crc kubenswrapper[4946]: I1128 08:50:51.267885 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/896ed951-3552-439f-836f-245b73b84c91-logs\") pod \"glance-default-internal-api-0\" (UID: \"896ed951-3552-439f-836f-245b73b84c91\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:50:51 crc kubenswrapper[4946]: I1128 08:50:51.370069 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/896ed951-3552-439f-836f-245b73b84c91-config-data\") pod \"glance-default-internal-api-0\" (UID: 
\"896ed951-3552-439f-836f-245b73b84c91\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:50:51 crc kubenswrapper[4946]: I1128 08:50:51.370452 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/896ed951-3552-439f-836f-245b73b84c91-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"896ed951-3552-439f-836f-245b73b84c91\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:50:51 crc kubenswrapper[4946]: I1128 08:50:51.370594 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cw67t\" (UniqueName: \"kubernetes.io/projected/896ed951-3552-439f-836f-245b73b84c91-kube-api-access-cw67t\") pod \"glance-default-internal-api-0\" (UID: \"896ed951-3552-439f-836f-245b73b84c91\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:50:51 crc kubenswrapper[4946]: I1128 08:50:51.370696 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/896ed951-3552-439f-836f-245b73b84c91-logs\") pod \"glance-default-internal-api-0\" (UID: \"896ed951-3552-439f-836f-245b73b84c91\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:50:51 crc kubenswrapper[4946]: I1128 08:50:51.370808 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/896ed951-3552-439f-836f-245b73b84c91-ceph\") pod \"glance-default-internal-api-0\" (UID: \"896ed951-3552-439f-836f-245b73b84c91\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:50:51 crc kubenswrapper[4946]: I1128 08:50:51.370927 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/896ed951-3552-439f-836f-245b73b84c91-scripts\") pod \"glance-default-internal-api-0\" (UID: \"896ed951-3552-439f-836f-245b73b84c91\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:50:51 crc kubenswrapper[4946]: I1128 08:50:51.371064 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/896ed951-3552-439f-836f-245b73b84c91-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"896ed951-3552-439f-836f-245b73b84c91\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:50:51 crc kubenswrapper[4946]: I1128 08:50:51.371121 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/896ed951-3552-439f-836f-245b73b84c91-logs\") pod \"glance-default-internal-api-0\" (UID: \"896ed951-3552-439f-836f-245b73b84c91\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:50:51 crc kubenswrapper[4946]: I1128 08:50:51.370945 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/896ed951-3552-439f-836f-245b73b84c91-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"896ed951-3552-439f-836f-245b73b84c91\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:50:51 crc kubenswrapper[4946]: I1128 08:50:51.378222 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/896ed951-3552-439f-836f-245b73b84c91-config-data\") pod \"glance-default-internal-api-0\" (UID: \"896ed951-3552-439f-836f-245b73b84c91\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:50:51 crc kubenswrapper[4946]: I1128 
08:50:51.379150 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/896ed951-3552-439f-836f-245b73b84c91-scripts\") pod \"glance-default-internal-api-0\" (UID: \"896ed951-3552-439f-836f-245b73b84c91\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:50:51 crc kubenswrapper[4946]: I1128 08:50:51.384188 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/896ed951-3552-439f-836f-245b73b84c91-ceph\") pod \"glance-default-internal-api-0\" (UID: \"896ed951-3552-439f-836f-245b73b84c91\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:50:51 crc kubenswrapper[4946]: I1128 08:50:51.386212 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/896ed951-3552-439f-836f-245b73b84c91-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"896ed951-3552-439f-836f-245b73b84c91\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:50:51 crc kubenswrapper[4946]: I1128 08:50:51.421937 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw67t\" (UniqueName: \"kubernetes.io/projected/896ed951-3552-439f-836f-245b73b84c91-kube-api-access-cw67t\") pod \"glance-default-internal-api-0\" (UID: \"896ed951-3552-439f-836f-245b73b84c91\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:50:51 crc kubenswrapper[4946]: I1128 08:50:51.518797 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 08:50:52 crc kubenswrapper[4946]: I1128 08:50:52.004574 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38b63ee2-92e4-4dfe-afef-4f1a1391f80b" path="/var/lib/kubelet/pods/38b63ee2-92e4-4dfe-afef-4f1a1391f80b/volumes" Nov 28 08:50:52 crc kubenswrapper[4946]: I1128 08:50:52.080390 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e32f1876-143a-4c22-b70f-19c9beaee9f5","Type":"ContainerStarted","Data":"086b37a34495e44d8f45fc6bdd971d9cf1bf95f0b2df9805cbcfaf057ca1e41f"} Nov 28 08:50:52 crc kubenswrapper[4946]: I1128 08:50:52.105992 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.105972773 podStartE2EDuration="3.105972773s" podCreationTimestamp="2025-11-28 08:50:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 08:50:52.099885302 +0000 UTC m=+7106.477950413" watchObservedRunningTime="2025-11-28 08:50:52.105972773 +0000 UTC m=+7106.484037884" Nov 28 08:50:52 crc kubenswrapper[4946]: I1128 08:50:52.940379 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 08:50:53 crc kubenswrapper[4946]: I1128 08:50:53.092692 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"896ed951-3552-439f-836f-245b73b84c91","Type":"ContainerStarted","Data":"bae2944c8e4275496b1088ba927ca3b909a54bb6334e3cdc5643183c52cc059b"} Nov 28 08:50:54 crc kubenswrapper[4946]: I1128 08:50:54.103395 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"896ed951-3552-439f-836f-245b73b84c91","Type":"ContainerStarted","Data":"39b1e173baf848e7f2d3eed8b494a2a7cdd12f726ffd780d4714c4716666d117"} Nov 28 08:50:55 crc kubenswrapper[4946]: I1128 08:50:55.115134 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"896ed951-3552-439f-836f-245b73b84c91","Type":"ContainerStarted","Data":"3575c48b97a5fa07ec0dcbc91c1d074c45ad8e692b4e83b4731bd5fc878ab9e6"} Nov 28 08:50:55 crc kubenswrapper[4946]: I1128 08:50:55.141576 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.141558083 podStartE2EDuration="4.141558083s" podCreationTimestamp="2025-11-28 08:50:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 08:50:55.133661138 +0000 UTC m=+7109.511726259" watchObservedRunningTime="2025-11-28 08:50:55.141558083 +0000 UTC m=+7109.519623194" Nov 28 08:50:55 crc kubenswrapper[4946]: I1128 08:50:55.773703 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86dd75bcb7-cxcts" Nov 28 08:50:55 crc kubenswrapper[4946]: I1128 08:50:55.888174 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c69597795-dsvbw"] Nov 28 08:50:55 crc kubenswrapper[4946]: I1128 08:50:55.888569 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-c69597795-dsvbw" podUID="d4594554-584e-461f-adf9-5864bbc232af" containerName="dnsmasq-dns" containerID="cri-o://ba9a54fd8702778962834ae12a77b15dbfbeeb9eb50360aca7dc0e1b209f6dd3" gracePeriod=10 Nov 28 08:50:57 crc kubenswrapper[4946]: I1128 08:50:57.134270 4946 generic.go:334] "Generic (PLEG): container finished" podID="d4594554-584e-461f-adf9-5864bbc232af" containerID="ba9a54fd8702778962834ae12a77b15dbfbeeb9eb50360aca7dc0e1b209f6dd3" exitCode=0 Nov 28 08:50:57 crc kubenswrapper[4946]: I1128 08:50:57.134371 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c69597795-dsvbw" event={"ID":"d4594554-584e-461f-adf9-5864bbc232af","Type":"ContainerDied","Data":"ba9a54fd8702778962834ae12a77b15dbfbeeb9eb50360aca7dc0e1b209f6dd3"} Nov 28 08:50:57 crc kubenswrapper[4946]: I1128 08:50:57.401553 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c69597795-dsvbw" Nov 28 08:50:57 crc kubenswrapper[4946]: I1128 08:50:57.590357 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4594554-584e-461f-adf9-5864bbc232af-ovsdbserver-sb\") pod \"d4594554-584e-461f-adf9-5864bbc232af\" (UID: \"d4594554-584e-461f-adf9-5864bbc232af\") " Nov 28 08:50:57 crc kubenswrapper[4946]: I1128 08:50:57.590421 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4594554-584e-461f-adf9-5864bbc232af-ovsdbserver-nb\") pod \"d4594554-584e-461f-adf9-5864bbc232af\" (UID: \"d4594554-584e-461f-adf9-5864bbc232af\") " Nov 28 08:50:57 crc kubenswrapper[4946]: I1128 08:50:57.590536 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4594554-584e-461f-adf9-5864bbc232af-dns-svc\") pod \"d4594554-584e-461f-adf9-5864bbc232af\" (UID: \"d4594554-584e-461f-adf9-5864bbc232af\") " Nov 28 08:50:57 crc kubenswrapper[4946]: I1128 08:50:57.590661 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4594554-584e-461f-adf9-5864bbc232af-config\") pod \"d4594554-584e-461f-adf9-5864bbc232af\" (UID: \"d4594554-584e-461f-adf9-5864bbc232af\") " Nov 28 08:50:57 crc kubenswrapper[4946]: I1128 08:50:57.590746 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p629m\" (UniqueName: \"kubernetes.io/projected/d4594554-584e-461f-adf9-5864bbc232af-kube-api-access-p629m\") pod \"d4594554-584e-461f-adf9-5864bbc232af\" (UID: \"d4594554-584e-461f-adf9-5864bbc232af\") " Nov 28 08:50:57 crc kubenswrapper[4946]: I1128 08:50:57.600392 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4594554-584e-461f-adf9-5864bbc232af-kube-api-access-p629m" (OuterVolumeSpecName: "kube-api-access-p629m") pod "d4594554-584e-461f-adf9-5864bbc232af" (UID: "d4594554-584e-461f-adf9-5864bbc232af"). InnerVolumeSpecName "kube-api-access-p629m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:50:57 crc kubenswrapper[4946]: I1128 08:50:57.647177 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4594554-584e-461f-adf9-5864bbc232af-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d4594554-584e-461f-adf9-5864bbc232af" (UID: "d4594554-584e-461f-adf9-5864bbc232af"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:50:57 crc kubenswrapper[4946]: I1128 08:50:57.650172 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4594554-584e-461f-adf9-5864bbc232af-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d4594554-584e-461f-adf9-5864bbc232af" (UID: "d4594554-584e-461f-adf9-5864bbc232af"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:50:57 crc kubenswrapper[4946]: I1128 08:50:57.657429 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4594554-584e-461f-adf9-5864bbc232af-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d4594554-584e-461f-adf9-5864bbc232af" (UID: "d4594554-584e-461f-adf9-5864bbc232af"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:50:57 crc kubenswrapper[4946]: I1128 08:50:57.663922 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4594554-584e-461f-adf9-5864bbc232af-config" (OuterVolumeSpecName: "config") pod "d4594554-584e-461f-adf9-5864bbc232af" (UID: "d4594554-584e-461f-adf9-5864bbc232af"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:50:57 crc kubenswrapper[4946]: I1128 08:50:57.693228 4946 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4594554-584e-461f-adf9-5864bbc232af-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 08:50:57 crc kubenswrapper[4946]: I1128 08:50:57.693254 4946 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4594554-584e-461f-adf9-5864bbc232af-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 08:50:57 crc kubenswrapper[4946]: I1128 08:50:57.693264 4946 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4594554-584e-461f-adf9-5864bbc232af-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 08:50:57 crc kubenswrapper[4946]: I1128 08:50:57.693273 4946 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4594554-584e-461f-adf9-5864bbc232af-config\") on node \"crc\" DevicePath \"\"" Nov 28 08:50:57 crc kubenswrapper[4946]: I1128 08:50:57.693283 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p629m\" (UniqueName: \"kubernetes.io/projected/d4594554-584e-461f-adf9-5864bbc232af-kube-api-access-p629m\") on node \"crc\" DevicePath \"\"" Nov 28 08:50:58 crc kubenswrapper[4946]: I1128 08:50:58.146507 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c69597795-dsvbw" event={"ID":"d4594554-584e-461f-adf9-5864bbc232af","Type":"ContainerDied","Data":"b640eaeb2bfcf64fcbe11183a8da0d3012c90e2e4b1e33cfce5658155fa018a3"} Nov 28 08:50:58 crc kubenswrapper[4946]: I1128 08:50:58.146570 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c69597795-dsvbw" Nov 28 08:50:58 crc kubenswrapper[4946]: I1128 08:50:58.146820 4946 scope.go:117] "RemoveContainer" containerID="ba9a54fd8702778962834ae12a77b15dbfbeeb9eb50360aca7dc0e1b209f6dd3" Nov 28 08:50:58 crc kubenswrapper[4946]: I1128 08:50:58.173713 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c69597795-dsvbw"] Nov 28 08:50:58 crc kubenswrapper[4946]: I1128 08:50:58.174949 4946 scope.go:117] "RemoveContainer" containerID="1425c81f485e45aab5264cd0ce2a5d46d5020eb887f827a95f23686b782acfce" Nov 28 08:50:58 crc kubenswrapper[4946]: I1128 08:50:58.180543 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-c69597795-dsvbw"] Nov 28 08:50:59 crc kubenswrapper[4946]: I1128 08:50:59.452428 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 28 08:50:59 crc kubenswrapper[4946]: I1128 08:50:59.452981 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 28 08:50:59 crc kubenswrapper[4946]: I1128 08:50:59.497555 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 28 08:50:59 crc kubenswrapper[4946]: I1128 08:50:59.511863 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 28 08:51:00 crc kubenswrapper[4946]: I1128 08:51:00.004681 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4594554-584e-461f-adf9-5864bbc232af" path="/var/lib/kubelet/pods/d4594554-584e-461f-adf9-5864bbc232af/volumes" Nov 28 08:51:00 crc kubenswrapper[4946]: I1128 08:51:00.175352 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 28 08:51:00 crc kubenswrapper[4946]: I1128 08:51:00.175421 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 28 08:51:01 crc kubenswrapper[4946]: I1128 08:51:01.520017 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 28 08:51:01 crc kubenswrapper[4946]: I1128 08:51:01.520647 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 28 08:51:01 crc kubenswrapper[4946]: I1128 08:51:01.571263 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 28 08:51:01 crc kubenswrapper[4946]: I1128 08:51:01.588013 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 28 08:51:02 crc kubenswrapper[4946]: I1128 08:51:02.032412 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 28 08:51:02 crc kubenswrapper[4946]: I1128 08:51:02.036708 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 28 08:51:02 crc kubenswrapper[4946]: I1128 08:51:02.196437 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 28 08:51:02 crc kubenswrapper[4946]: I1128 08:51:02.196485 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" 
Nov 28 08:51:03 crc kubenswrapper[4946]: I1128 08:51:03.987216 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 28 08:51:04 crc kubenswrapper[4946]: I1128 08:51:04.034877 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 28 08:51:14 crc kubenswrapper[4946]: I1128 08:51:14.067485 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-c9h9f"] Nov 28 08:51:14 crc kubenswrapper[4946]: E1128 08:51:14.068186 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4594554-584e-461f-adf9-5864bbc232af" containerName="init" Nov 28 08:51:14 crc kubenswrapper[4946]: I1128 08:51:14.068198 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4594554-584e-461f-adf9-5864bbc232af" containerName="init" Nov 28 08:51:14 crc kubenswrapper[4946]: E1128 08:51:14.068224 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4594554-584e-461f-adf9-5864bbc232af" containerName="dnsmasq-dns" Nov 28 08:51:14 crc kubenswrapper[4946]: I1128 08:51:14.068230 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4594554-584e-461f-adf9-5864bbc232af" containerName="dnsmasq-dns" Nov 28 08:51:14 crc kubenswrapper[4946]: I1128 08:51:14.068430 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4594554-584e-461f-adf9-5864bbc232af" containerName="dnsmasq-dns" Nov 28 08:51:14 crc kubenswrapper[4946]: I1128 08:51:14.069156 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-c9h9f" Nov 28 08:51:14 crc kubenswrapper[4946]: I1128 08:51:14.077754 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-c9h9f"] Nov 28 08:51:14 crc kubenswrapper[4946]: I1128 08:51:14.173214 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-bc0c-account-create-update-lq5dr"] Nov 28 08:51:14 crc kubenswrapper[4946]: I1128 08:51:14.174362 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-bc0c-account-create-update-lq5dr" Nov 28 08:51:14 crc kubenswrapper[4946]: I1128 08:51:14.174573 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85e917fc-a566-491c-abc5-515295f5b38e-operator-scripts\") pod \"placement-db-create-c9h9f\" (UID: \"85e917fc-a566-491c-abc5-515295f5b38e\") " pod="openstack/placement-db-create-c9h9f" Nov 28 08:51:14 crc kubenswrapper[4946]: I1128 08:51:14.175018 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7wrg\" (UniqueName: \"kubernetes.io/projected/85e917fc-a566-491c-abc5-515295f5b38e-kube-api-access-s7wrg\") pod \"placement-db-create-c9h9f\" (UID: \"85e917fc-a566-491c-abc5-515295f5b38e\") " pod="openstack/placement-db-create-c9h9f" Nov 28 08:51:14 crc kubenswrapper[4946]: I1128 08:51:14.176586 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Nov 28 08:51:14 crc kubenswrapper[4946]: I1128 08:51:14.184765 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-bc0c-account-create-update-lq5dr"] Nov 28 08:51:14 crc kubenswrapper[4946]: I1128 08:51:14.276414 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7wrg\" (UniqueName: \"kubernetes.io/projected/85e917fc-a566-491c-abc5-515295f5b38e-kube-api-access-s7wrg\") pod \"placement-db-create-c9h9f\" (UID: \"85e917fc-a566-491c-abc5-515295f5b38e\") " pod="openstack/placement-db-create-c9h9f" Nov 28 08:51:14 crc kubenswrapper[4946]: I1128 08:51:14.276493 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85e917fc-a566-491c-abc5-515295f5b38e-operator-scripts\") pod \"placement-db-create-c9h9f\" (UID: \"85e917fc-a566-491c-abc5-515295f5b38e\") " pod="openstack/placement-db-create-c9h9f" Nov 28 08:51:14 crc kubenswrapper[4946]: I1128 08:51:14.276535 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f0e1c79-d58d-4aeb-99b0-4ee1ecedf96e-operator-scripts\") pod \"placement-bc0c-account-create-update-lq5dr\" (UID: \"0f0e1c79-d58d-4aeb-99b0-4ee1ecedf96e\") " pod="openstack/placement-bc0c-account-create-update-lq5dr" Nov 28 08:51:14 crc kubenswrapper[4946]: I1128 08:51:14.276603 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v58lh\" (UniqueName: \"kubernetes.io/projected/0f0e1c79-d58d-4aeb-99b0-4ee1ecedf96e-kube-api-access-v58lh\") pod \"placement-bc0c-account-create-update-lq5dr\" (UID: \"0f0e1c79-d58d-4aeb-99b0-4ee1ecedf96e\") " pod="openstack/placement-bc0c-account-create-update-lq5dr" Nov 28 08:51:14 crc kubenswrapper[4946]: I1128 08:51:14.277324 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85e917fc-a566-491c-abc5-515295f5b38e-operator-scripts\") pod \"placement-db-create-c9h9f\" (UID: \"85e917fc-a566-491c-abc5-515295f5b38e\") " pod="openstack/placement-db-create-c9h9f" Nov 28 08:51:14 crc kubenswrapper[4946]: I1128 08:51:14.293667 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7wrg\" (UniqueName: 
\"kubernetes.io/projected/85e917fc-a566-491c-abc5-515295f5b38e-kube-api-access-s7wrg\") pod \"placement-db-create-c9h9f\" (UID: \"85e917fc-a566-491c-abc5-515295f5b38e\") " pod="openstack/placement-db-create-c9h9f" Nov 28 08:51:14 crc kubenswrapper[4946]: I1128 08:51:14.379601 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f0e1c79-d58d-4aeb-99b0-4ee1ecedf96e-operator-scripts\") pod \"placement-bc0c-account-create-update-lq5dr\" (UID: \"0f0e1c79-d58d-4aeb-99b0-4ee1ecedf96e\") " pod="openstack/placement-bc0c-account-create-update-lq5dr" Nov 28 08:51:14 crc kubenswrapper[4946]: I1128 08:51:14.380217 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v58lh\" (UniqueName: \"kubernetes.io/projected/0f0e1c79-d58d-4aeb-99b0-4ee1ecedf96e-kube-api-access-v58lh\") pod \"placement-bc0c-account-create-update-lq5dr\" (UID: \"0f0e1c79-d58d-4aeb-99b0-4ee1ecedf96e\") " pod="openstack/placement-bc0c-account-create-update-lq5dr" Nov 28 08:51:14 crc kubenswrapper[4946]: I1128 08:51:14.380256 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f0e1c79-d58d-4aeb-99b0-4ee1ecedf96e-operator-scripts\") pod \"placement-bc0c-account-create-update-lq5dr\" (UID: \"0f0e1c79-d58d-4aeb-99b0-4ee1ecedf96e\") " pod="openstack/placement-bc0c-account-create-update-lq5dr" Nov 28 08:51:14 crc kubenswrapper[4946]: I1128 08:51:14.401749 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-c9h9f" Nov 28 08:51:14 crc kubenswrapper[4946]: I1128 08:51:14.413576 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v58lh\" (UniqueName: \"kubernetes.io/projected/0f0e1c79-d58d-4aeb-99b0-4ee1ecedf96e-kube-api-access-v58lh\") pod \"placement-bc0c-account-create-update-lq5dr\" (UID: \"0f0e1c79-d58d-4aeb-99b0-4ee1ecedf96e\") " pod="openstack/placement-bc0c-account-create-update-lq5dr" Nov 28 08:51:14 crc kubenswrapper[4946]: I1128 08:51:14.493708 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-bc0c-account-create-update-lq5dr" Nov 28 08:51:14 crc kubenswrapper[4946]: W1128 08:51:14.907999 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85e917fc_a566_491c_abc5_515295f5b38e.slice/crio-ce7a1298ad056e765fa4249c0bc1e8063086f8b41d6214308c15c97cc1ee80a8 WatchSource:0}: Error finding container ce7a1298ad056e765fa4249c0bc1e8063086f8b41d6214308c15c97cc1ee80a8: Status 404 returned error can't find the container with id ce7a1298ad056e765fa4249c0bc1e8063086f8b41d6214308c15c97cc1ee80a8 Nov 28 08:51:14 crc kubenswrapper[4946]: I1128 08:51:14.911673 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-c9h9f"] Nov 28 08:51:15 crc kubenswrapper[4946]: I1128 08:51:15.006836 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-bc0c-account-create-update-lq5dr"] Nov 28 08:51:15 crc kubenswrapper[4946]: W1128 08:51:15.009972 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f0e1c79_d58d_4aeb_99b0_4ee1ecedf96e.slice/crio-4d6a7e823b69b6409ef1a615da76fcd2b17e2f220a5d7250e80521c8ac8e6745 WatchSource:0}: Error finding container 4d6a7e823b69b6409ef1a615da76fcd2b17e2f220a5d7250e80521c8ac8e6745: Status 404 returned error can't find the container with id 4d6a7e823b69b6409ef1a615da76fcd2b17e2f220a5d7250e80521c8ac8e6745 Nov 28 08:51:15 crc kubenswrapper[4946]: I1128 08:51:15.351991 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-bc0c-account-create-update-lq5dr" event={"ID":"0f0e1c79-d58d-4aeb-99b0-4ee1ecedf96e","Type":"ContainerStarted","Data":"f976139ca0f06d5441ab8fd016b08f4706bed58c9dd81ce2d69b1c6c2a8a026a"} Nov 28 08:51:15 crc kubenswrapper[4946]: I1128 08:51:15.352349 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-bc0c-account-create-update-lq5dr" event={"ID":"0f0e1c79-d58d-4aeb-99b0-4ee1ecedf96e","Type":"ContainerStarted","Data":"4d6a7e823b69b6409ef1a615da76fcd2b17e2f220a5d7250e80521c8ac8e6745"} Nov 28 08:51:15 crc kubenswrapper[4946]: I1128 08:51:15.353380 4946 generic.go:334] "Generic (PLEG): container finished" podID="85e917fc-a566-491c-abc5-515295f5b38e" containerID="94ee65fed67d23a723e2eaf61a3f2a4c20b09e4c9fe5166404d7ff0b39c66c77" exitCode=0 Nov 28 08:51:15 crc kubenswrapper[4946]: I1128 08:51:15.353426 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-c9h9f" event={"ID":"85e917fc-a566-491c-abc5-515295f5b38e","Type":"ContainerDied","Data":"94ee65fed67d23a723e2eaf61a3f2a4c20b09e4c9fe5166404d7ff0b39c66c77"} Nov 28 08:51:15 crc kubenswrapper[4946]: I1128 08:51:15.353454 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-c9h9f" event={"ID":"85e917fc-a566-491c-abc5-515295f5b38e","Type":"ContainerStarted","Data":"ce7a1298ad056e765fa4249c0bc1e8063086f8b41d6214308c15c97cc1ee80a8"} Nov 28 08:51:15 crc kubenswrapper[4946]: I1128 08:51:15.369099 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-bc0c-account-create-update-lq5dr" podStartSLOduration=1.369078363 podStartE2EDuration="1.369078363s" podCreationTimestamp="2025-11-28 08:51:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 08:51:15.363760952 +0000 UTC m=+7129.741826063" 
watchObservedRunningTime="2025-11-28 08:51:15.369078363 +0000 UTC m=+7129.747143474" Nov 28 08:51:16 crc kubenswrapper[4946]: I1128 08:51:16.367156 4946 generic.go:334] "Generic (PLEG): container finished" podID="0f0e1c79-d58d-4aeb-99b0-4ee1ecedf96e" containerID="f976139ca0f06d5441ab8fd016b08f4706bed58c9dd81ce2d69b1c6c2a8a026a" exitCode=0 Nov 28 08:51:16 crc kubenswrapper[4946]: I1128 08:51:16.367226 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-bc0c-account-create-update-lq5dr" event={"ID":"0f0e1c79-d58d-4aeb-99b0-4ee1ecedf96e","Type":"ContainerDied","Data":"f976139ca0f06d5441ab8fd016b08f4706bed58c9dd81ce2d69b1c6c2a8a026a"} Nov 28 08:51:16 crc kubenswrapper[4946]: I1128 08:51:16.810824 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-c9h9f" Nov 28 08:51:16 crc kubenswrapper[4946]: I1128 08:51:16.932428 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7wrg\" (UniqueName: \"kubernetes.io/projected/85e917fc-a566-491c-abc5-515295f5b38e-kube-api-access-s7wrg\") pod \"85e917fc-a566-491c-abc5-515295f5b38e\" (UID: \"85e917fc-a566-491c-abc5-515295f5b38e\") " Nov 28 08:51:16 crc kubenswrapper[4946]: I1128 08:51:16.932592 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85e917fc-a566-491c-abc5-515295f5b38e-operator-scripts\") pod \"85e917fc-a566-491c-abc5-515295f5b38e\" (UID: \"85e917fc-a566-491c-abc5-515295f5b38e\") " Nov 28 08:51:16 crc kubenswrapper[4946]: I1128 08:51:16.933165 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85e917fc-a566-491c-abc5-515295f5b38e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "85e917fc-a566-491c-abc5-515295f5b38e" (UID: "85e917fc-a566-491c-abc5-515295f5b38e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:51:16 crc kubenswrapper[4946]: I1128 08:51:16.939000 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85e917fc-a566-491c-abc5-515295f5b38e-kube-api-access-s7wrg" (OuterVolumeSpecName: "kube-api-access-s7wrg") pod "85e917fc-a566-491c-abc5-515295f5b38e" (UID: "85e917fc-a566-491c-abc5-515295f5b38e"). InnerVolumeSpecName "kube-api-access-s7wrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:51:17 crc kubenswrapper[4946]: I1128 08:51:17.035670 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7wrg\" (UniqueName: \"kubernetes.io/projected/85e917fc-a566-491c-abc5-515295f5b38e-kube-api-access-s7wrg\") on node \"crc\" DevicePath \"\"" Nov 28 08:51:17 crc kubenswrapper[4946]: I1128 08:51:17.035736 4946 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85e917fc-a566-491c-abc5-515295f5b38e-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 08:51:17 crc kubenswrapper[4946]: I1128 08:51:17.387701 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-c9h9f"
Nov 28 08:51:17 crc kubenswrapper[4946]: I1128 08:51:17.387661 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-c9h9f" event={"ID":"85e917fc-a566-491c-abc5-515295f5b38e","Type":"ContainerDied","Data":"ce7a1298ad056e765fa4249c0bc1e8063086f8b41d6214308c15c97cc1ee80a8"}
Nov 28 08:51:17 crc kubenswrapper[4946]: I1128 08:51:17.387913 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce7a1298ad056e765fa4249c0bc1e8063086f8b41d6214308c15c97cc1ee80a8"
Nov 28 08:51:17 crc kubenswrapper[4946]: I1128 08:51:17.791212 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-bc0c-account-create-update-lq5dr"
Nov 28 08:51:17 crc kubenswrapper[4946]: I1128 08:51:17.957076 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f0e1c79-d58d-4aeb-99b0-4ee1ecedf96e-operator-scripts\") pod \"0f0e1c79-d58d-4aeb-99b0-4ee1ecedf96e\" (UID: \"0f0e1c79-d58d-4aeb-99b0-4ee1ecedf96e\") "
Nov 28 08:51:17 crc kubenswrapper[4946]: I1128 08:51:17.957233 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v58lh\" (UniqueName: \"kubernetes.io/projected/0f0e1c79-d58d-4aeb-99b0-4ee1ecedf96e-kube-api-access-v58lh\") pod \"0f0e1c79-d58d-4aeb-99b0-4ee1ecedf96e\" (UID: \"0f0e1c79-d58d-4aeb-99b0-4ee1ecedf96e\") "
Nov 28 08:51:17 crc kubenswrapper[4946]: I1128 08:51:17.958065 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f0e1c79-d58d-4aeb-99b0-4ee1ecedf96e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0f0e1c79-d58d-4aeb-99b0-4ee1ecedf96e" (UID: "0f0e1c79-d58d-4aeb-99b0-4ee1ecedf96e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 08:51:17 crc kubenswrapper[4946]: I1128 08:51:17.963380 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f0e1c79-d58d-4aeb-99b0-4ee1ecedf96e-kube-api-access-v58lh" (OuterVolumeSpecName: "kube-api-access-v58lh") pod "0f0e1c79-d58d-4aeb-99b0-4ee1ecedf96e" (UID: "0f0e1c79-d58d-4aeb-99b0-4ee1ecedf96e"). InnerVolumeSpecName "kube-api-access-v58lh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 08:51:18 crc kubenswrapper[4946]: I1128 08:51:18.059683 4946 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f0e1c79-d58d-4aeb-99b0-4ee1ecedf96e-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 28 08:51:18 crc kubenswrapper[4946]: I1128 08:51:18.059712 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v58lh\" (UniqueName: \"kubernetes.io/projected/0f0e1c79-d58d-4aeb-99b0-4ee1ecedf96e-kube-api-access-v58lh\") on node \"crc\" DevicePath \"\""
Nov 28 08:51:18 crc kubenswrapper[4946]: I1128 08:51:18.419064 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-bc0c-account-create-update-lq5dr" event={"ID":"0f0e1c79-d58d-4aeb-99b0-4ee1ecedf96e","Type":"ContainerDied","Data":"4d6a7e823b69b6409ef1a615da76fcd2b17e2f220a5d7250e80521c8ac8e6745"}
Nov 28 08:51:18 crc kubenswrapper[4946]: I1128 08:51:18.419109 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d6a7e823b69b6409ef1a615da76fcd2b17e2f220a5d7250e80521c8ac8e6745"
Nov 28 08:51:18 crc kubenswrapper[4946]: I1128 08:51:18.419121 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-bc0c-account-create-update-lq5dr"
Nov 28 08:51:19 crc kubenswrapper[4946]: I1128 08:51:19.581586 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-j2fl5"]
Nov 28 08:51:19 crc kubenswrapper[4946]: E1128 08:51:19.582132 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f0e1c79-d58d-4aeb-99b0-4ee1ecedf96e" containerName="mariadb-account-create-update"
Nov 28 08:51:19 crc kubenswrapper[4946]: I1128 08:51:19.582143 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f0e1c79-d58d-4aeb-99b0-4ee1ecedf96e" containerName="mariadb-account-create-update"
Nov 28 08:51:19 crc kubenswrapper[4946]: E1128 08:51:19.582171 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85e917fc-a566-491c-abc5-515295f5b38e" containerName="mariadb-database-create"
Nov 28 08:51:19 crc kubenswrapper[4946]: I1128 08:51:19.582177 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="85e917fc-a566-491c-abc5-515295f5b38e" containerName="mariadb-database-create"
Nov 28 08:51:19 crc kubenswrapper[4946]: I1128 08:51:19.582324 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f0e1c79-d58d-4aeb-99b0-4ee1ecedf96e" containerName="mariadb-account-create-update"
Nov 28 08:51:19 crc kubenswrapper[4946]: I1128 08:51:19.582341 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="85e917fc-a566-491c-abc5-515295f5b38e" containerName="mariadb-database-create"
Nov 28 08:51:19 crc kubenswrapper[4946]: I1128 08:51:19.582883 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-j2fl5"
Nov 28 08:51:19 crc kubenswrapper[4946]: I1128 08:51:19.591682 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-v2m2q"
Nov 28 08:51:19 crc kubenswrapper[4946]: I1128 08:51:19.591928 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Nov 28 08:51:19 crc kubenswrapper[4946]: I1128 08:51:19.594509 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Nov 28 08:51:19 crc kubenswrapper[4946]: I1128 08:51:19.606840 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c5f444ccf-zm87m"]
Nov 28 08:51:19 crc kubenswrapper[4946]: I1128 08:51:19.608372 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c5f444ccf-zm87m"
Nov 28 08:51:19 crc kubenswrapper[4946]: I1128 08:51:19.615766 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-j2fl5"]
Nov 28 08:51:19 crc kubenswrapper[4946]: I1128 08:51:19.691961 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47b7d54b-d04e-4c47-86dc-cc283143f3d7-config-data\") pod \"placement-db-sync-j2fl5\" (UID: \"47b7d54b-d04e-4c47-86dc-cc283143f3d7\") " pod="openstack/placement-db-sync-j2fl5"
Nov 28 08:51:19 crc kubenswrapper[4946]: I1128 08:51:19.692030 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6ds8\" (UniqueName: \"kubernetes.io/projected/47b7d54b-d04e-4c47-86dc-cc283143f3d7-kube-api-access-n6ds8\") pod \"placement-db-sync-j2fl5\" (UID: \"47b7d54b-d04e-4c47-86dc-cc283143f3d7\") " pod="openstack/placement-db-sync-j2fl5"
Nov 28 08:51:19 crc kubenswrapper[4946]: I1128 08:51:19.692072 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47b7d54b-d04e-4c47-86dc-cc283143f3d7-combined-ca-bundle\") pod \"placement-db-sync-j2fl5\" (UID: \"47b7d54b-d04e-4c47-86dc-cc283143f3d7\") " pod="openstack/placement-db-sync-j2fl5"
Nov 28 08:51:19 crc kubenswrapper[4946]: I1128 08:51:19.692097 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47b7d54b-d04e-4c47-86dc-cc283143f3d7-logs\") pod \"placement-db-sync-j2fl5\" (UID: \"47b7d54b-d04e-4c47-86dc-cc283143f3d7\") " pod="openstack/placement-db-sync-j2fl5"
Nov 28 08:51:19 crc kubenswrapper[4946]: I1128 08:51:19.692132 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47b7d54b-d04e-4c47-86dc-cc283143f3d7-scripts\") pod \"placement-db-sync-j2fl5\" (UID: \"47b7d54b-d04e-4c47-86dc-cc283143f3d7\") " pod="openstack/placement-db-sync-j2fl5"
Nov 28 08:51:19 crc kubenswrapper[4946]: I1128 08:51:19.742306 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c5f444ccf-zm87m"]
Nov 28 08:51:19 crc kubenswrapper[4946]: I1128 08:51:19.793497 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47b7d54b-d04e-4c47-86dc-cc283143f3d7-combined-ca-bundle\") pod \"placement-db-sync-j2fl5\" (UID: \"47b7d54b-d04e-4c47-86dc-cc283143f3d7\") " pod="openstack/placement-db-sync-j2fl5"
Nov 28 08:51:19 crc kubenswrapper[4946]: I1128 08:51:19.793551 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47b7d54b-d04e-4c47-86dc-cc283143f3d7-logs\") pod \"placement-db-sync-j2fl5\" (UID: \"47b7d54b-d04e-4c47-86dc-cc283143f3d7\") " pod="openstack/placement-db-sync-j2fl5"
Nov 28 08:51:19 crc kubenswrapper[4946]: I1128 08:51:19.793615 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p95q5\" (UniqueName: \"kubernetes.io/projected/23f2a6f0-0a54-4926-8b15-e05851d7f142-kube-api-access-p95q5\") pod \"dnsmasq-dns-5c5f444ccf-zm87m\" (UID: \"23f2a6f0-0a54-4926-8b15-e05851d7f142\") " pod="openstack/dnsmasq-dns-5c5f444ccf-zm87m"
Nov 28 08:51:19 crc kubenswrapper[4946]: I1128 08:51:19.794112 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47b7d54b-d04e-4c47-86dc-cc283143f3d7-logs\") pod \"placement-db-sync-j2fl5\" (UID: \"47b7d54b-d04e-4c47-86dc-cc283143f3d7\") " pod="openstack/placement-db-sync-j2fl5"
Nov 28 08:51:19 crc kubenswrapper[4946]: I1128 08:51:19.794154 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47b7d54b-d04e-4c47-86dc-cc283143f3d7-scripts\") pod \"placement-db-sync-j2fl5\" (UID: \"47b7d54b-d04e-4c47-86dc-cc283143f3d7\") " pod="openstack/placement-db-sync-j2fl5"
Nov 28 08:51:19 crc kubenswrapper[4946]: I1128 08:51:19.794213 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23f2a6f0-0a54-4926-8b15-e05851d7f142-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5f444ccf-zm87m\" (UID: \"23f2a6f0-0a54-4926-8b15-e05851d7f142\") " pod="openstack/dnsmasq-dns-5c5f444ccf-zm87m"
Nov 28 08:51:19 crc kubenswrapper[4946]: I1128 08:51:19.794246 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23f2a6f0-0a54-4926-8b15-e05851d7f142-ovsdbserver-sb\") pod \"dnsmasq-dns-5c5f444ccf-zm87m\" (UID: \"23f2a6f0-0a54-4926-8b15-e05851d7f142\") " pod="openstack/dnsmasq-dns-5c5f444ccf-zm87m"
Nov 28 08:51:19 crc kubenswrapper[4946]: I1128 08:51:19.794307 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47b7d54b-d04e-4c47-86dc-cc283143f3d7-config-data\") pod \"placement-db-sync-j2fl5\" (UID: \"47b7d54b-d04e-4c47-86dc-cc283143f3d7\") " pod="openstack/placement-db-sync-j2fl5"
Nov 28 08:51:19 crc kubenswrapper[4946]: I1128 08:51:19.794347 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6ds8\" (UniqueName: \"kubernetes.io/projected/47b7d54b-d04e-4c47-86dc-cc283143f3d7-kube-api-access-n6ds8\") pod \"placement-db-sync-j2fl5\" (UID: \"47b7d54b-d04e-4c47-86dc-cc283143f3d7\") " pod="openstack/placement-db-sync-j2fl5"
Nov 28 08:51:19 crc kubenswrapper[4946]: I1128 08:51:19.794371 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23f2a6f0-0a54-4926-8b15-e05851d7f142-dns-svc\") pod \"dnsmasq-dns-5c5f444ccf-zm87m\" (UID: \"23f2a6f0-0a54-4926-8b15-e05851d7f142\") " pod="openstack/dnsmasq-dns-5c5f444ccf-zm87m"
Nov 28 08:51:19 crc kubenswrapper[4946]: I1128 08:51:19.794394 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23f2a6f0-0a54-4926-8b15-e05851d7f142-config\") pod \"dnsmasq-dns-5c5f444ccf-zm87m\" (UID: \"23f2a6f0-0a54-4926-8b15-e05851d7f142\") " pod="openstack/dnsmasq-dns-5c5f444ccf-zm87m"
Nov 28 08:51:19 crc kubenswrapper[4946]: I1128 08:51:19.799013 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47b7d54b-d04e-4c47-86dc-cc283143f3d7-config-data\") pod \"placement-db-sync-j2fl5\" (UID: \"47b7d54b-d04e-4c47-86dc-cc283143f3d7\") " pod="openstack/placement-db-sync-j2fl5"
Nov 28 08:51:19 crc kubenswrapper[4946]: I1128 08:51:19.802293 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47b7d54b-d04e-4c47-86dc-cc283143f3d7-combined-ca-bundle\") pod \"placement-db-sync-j2fl5\" (UID: \"47b7d54b-d04e-4c47-86dc-cc283143f3d7\") " pod="openstack/placement-db-sync-j2fl5"
Nov 28 08:51:19 crc kubenswrapper[4946]: I1128 08:51:19.814905 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6ds8\" (UniqueName: \"kubernetes.io/projected/47b7d54b-d04e-4c47-86dc-cc283143f3d7-kube-api-access-n6ds8\") pod \"placement-db-sync-j2fl5\" (UID: \"47b7d54b-d04e-4c47-86dc-cc283143f3d7\") " pod="openstack/placement-db-sync-j2fl5"
Nov 28 08:51:19 crc kubenswrapper[4946]: I1128 08:51:19.816878 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47b7d54b-d04e-4c47-86dc-cc283143f3d7-scripts\") pod \"placement-db-sync-j2fl5\" (UID: \"47b7d54b-d04e-4c47-86dc-cc283143f3d7\") " pod="openstack/placement-db-sync-j2fl5"
Nov 28 08:51:19 crc kubenswrapper[4946]: I1128 08:51:19.896220 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23f2a6f0-0a54-4926-8b15-e05851d7f142-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5f444ccf-zm87m\" (UID: \"23f2a6f0-0a54-4926-8b15-e05851d7f142\") " pod="openstack/dnsmasq-dns-5c5f444ccf-zm87m"
Nov 28 08:51:19 crc kubenswrapper[4946]: I1128 08:51:19.896471 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23f2a6f0-0a54-4926-8b15-e05851d7f142-ovsdbserver-sb\") pod \"dnsmasq-dns-5c5f444ccf-zm87m\" (UID: \"23f2a6f0-0a54-4926-8b15-e05851d7f142\") " pod="openstack/dnsmasq-dns-5c5f444ccf-zm87m"
Nov 28 08:51:19 crc kubenswrapper[4946]: I1128 08:51:19.896549 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23f2a6f0-0a54-4926-8b15-e05851d7f142-dns-svc\") pod \"dnsmasq-dns-5c5f444ccf-zm87m\" (UID: \"23f2a6f0-0a54-4926-8b15-e05851d7f142\") " pod="openstack/dnsmasq-dns-5c5f444ccf-zm87m"
Nov 28 08:51:19 crc kubenswrapper[4946]: I1128 08:51:19.896574 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23f2a6f0-0a54-4926-8b15-e05851d7f142-config\") pod \"dnsmasq-dns-5c5f444ccf-zm87m\" (UID: \"23f2a6f0-0a54-4926-8b15-e05851d7f142\") " pod="openstack/dnsmasq-dns-5c5f444ccf-zm87m"
Nov 28 08:51:19 crc kubenswrapper[4946]: I1128 08:51:19.897272 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23f2a6f0-0a54-4926-8b15-e05851d7f142-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5f444ccf-zm87m\" (UID: \"23f2a6f0-0a54-4926-8b15-e05851d7f142\") " pod="openstack/dnsmasq-dns-5c5f444ccf-zm87m"
Nov 28 08:51:19 crc kubenswrapper[4946]: I1128 08:51:19.897309 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p95q5\" (UniqueName: \"kubernetes.io/projected/23f2a6f0-0a54-4926-8b15-e05851d7f142-kube-api-access-p95q5\") pod \"dnsmasq-dns-5c5f444ccf-zm87m\" (UID: \"23f2a6f0-0a54-4926-8b15-e05851d7f142\") " pod="openstack/dnsmasq-dns-5c5f444ccf-zm87m"
Nov 28 08:51:19 crc kubenswrapper[4946]: I1128 08:51:19.897911 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23f2a6f0-0a54-4926-8b15-e05851d7f142-config\") pod \"dnsmasq-dns-5c5f444ccf-zm87m\" (UID: \"23f2a6f0-0a54-4926-8b15-e05851d7f142\") " pod="openstack/dnsmasq-dns-5c5f444ccf-zm87m"
Nov 28 08:51:19 crc kubenswrapper[4946]: I1128 08:51:19.898975 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23f2a6f0-0a54-4926-8b15-e05851d7f142-dns-svc\") pod \"dnsmasq-dns-5c5f444ccf-zm87m\" (UID: \"23f2a6f0-0a54-4926-8b15-e05851d7f142\") " pod="openstack/dnsmasq-dns-5c5f444ccf-zm87m"
Nov 28 08:51:19 crc kubenswrapper[4946]: I1128 08:51:19.899131 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23f2a6f0-0a54-4926-8b15-e05851d7f142-ovsdbserver-sb\") pod \"dnsmasq-dns-5c5f444ccf-zm87m\" (UID: \"23f2a6f0-0a54-4926-8b15-e05851d7f142\") " pod="openstack/dnsmasq-dns-5c5f444ccf-zm87m"
Nov 28 08:51:19 crc kubenswrapper[4946]: I1128 08:51:19.907002 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-j2fl5"
Nov 28 08:51:19 crc kubenswrapper[4946]: I1128 08:51:19.916342 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p95q5\" (UniqueName: \"kubernetes.io/projected/23f2a6f0-0a54-4926-8b15-e05851d7f142-kube-api-access-p95q5\") pod \"dnsmasq-dns-5c5f444ccf-zm87m\" (UID: \"23f2a6f0-0a54-4926-8b15-e05851d7f142\") " pod="openstack/dnsmasq-dns-5c5f444ccf-zm87m"
Nov 28 08:51:19 crc kubenswrapper[4946]: I1128 08:51:19.967085 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c5f444ccf-zm87m"
Nov 28 08:51:20 crc kubenswrapper[4946]: I1128 08:51:20.372613 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-j2fl5"]
Nov 28 08:51:20 crc kubenswrapper[4946]: I1128 08:51:20.439752 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-j2fl5" event={"ID":"47b7d54b-d04e-4c47-86dc-cc283143f3d7","Type":"ContainerStarted","Data":"a07f63349f40c78966179e33b1f66587fdd409291ad562b2c944534b6f4ea6e4"}
Nov 28 08:51:20 crc kubenswrapper[4946]: I1128 08:51:20.471845 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c5f444ccf-zm87m"]
Nov 28 08:51:20 crc kubenswrapper[4946]: W1128 08:51:20.481811 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod23f2a6f0_0a54_4926_8b15_e05851d7f142.slice/crio-de5282fe949706cf2eae4f774504be5cdae3aed1e2164900fd77ccc92d6fd1ce WatchSource:0}: Error finding container de5282fe949706cf2eae4f774504be5cdae3aed1e2164900fd77ccc92d6fd1ce: Status 404 returned error can't find the container with id de5282fe949706cf2eae4f774504be5cdae3aed1e2164900fd77ccc92d6fd1ce
Nov 28 08:51:21 crc kubenswrapper[4946]: I1128 08:51:21.452830 4946 generic.go:334] "Generic (PLEG): container finished" podID="23f2a6f0-0a54-4926-8b15-e05851d7f142" containerID="64970413d4401a2be1c65e689f16be4bdc0265c800501eed282a2eade2737bfe" exitCode=0
Nov 28 08:51:21 crc kubenswrapper[4946]: I1128 08:51:21.453421 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5f444ccf-zm87m" event={"ID":"23f2a6f0-0a54-4926-8b15-e05851d7f142","Type":"ContainerDied","Data":"64970413d4401a2be1c65e689f16be4bdc0265c800501eed282a2eade2737bfe"}
Nov 28 08:51:21 crc kubenswrapper[4946]: I1128 08:51:21.456405 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5f444ccf-zm87m" event={"ID":"23f2a6f0-0a54-4926-8b15-e05851d7f142","Type":"ContainerStarted","Data":"de5282fe949706cf2eae4f774504be5cdae3aed1e2164900fd77ccc92d6fd1ce"}
Nov 28 08:51:22 crc kubenswrapper[4946]: I1128 08:51:22.469944 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5f444ccf-zm87m" event={"ID":"23f2a6f0-0a54-4926-8b15-e05851d7f142","Type":"ContainerStarted","Data":"c778bb7d442bce7c504805b7ed9c3630d155ec8ed86fa0f7fc5b4539f42ed3a2"}
Nov 28 08:51:22 crc kubenswrapper[4946]: I1128 08:51:22.471609 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c5f444ccf-zm87m"
Nov 28 08:51:22 crc kubenswrapper[4946]: I1128 08:51:22.492175 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c5f444ccf-zm87m" podStartSLOduration=3.492154076 podStartE2EDuration="3.492154076s" podCreationTimestamp="2025-11-28 08:51:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 08:51:22.487116531 +0000 UTC m=+7136.865181642" watchObservedRunningTime="2025-11-28 08:51:22.492154076 +0000 UTC m=+7136.870219187"
Nov 28 08:51:24 crc kubenswrapper[4946]: I1128 08:51:24.490211 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-j2fl5" event={"ID":"47b7d54b-d04e-4c47-86dc-cc283143f3d7","Type":"ContainerStarted","Data":"322ddc7bc8022960bc74b4e656c68b690b89f9a6d42fc613213f26a4515a78a1"}
Nov 28 08:51:24 crc kubenswrapper[4946]: I1128 08:51:24.520721 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-j2fl5" podStartSLOduration=2.373270476 podStartE2EDuration="5.520697276s" podCreationTimestamp="2025-11-28 08:51:19 +0000 UTC" firstStartedPulling="2025-11-28 08:51:20.375200657 +0000 UTC m=+7134.753265768" lastFinishedPulling="2025-11-28 08:51:23.522627457 +0000 UTC m=+7137.900692568" observedRunningTime="2025-11-28 08:51:24.513526978 +0000 UTC m=+7138.891592129" watchObservedRunningTime="2025-11-28 08:51:24.520697276 +0000 UTC m=+7138.898762427"
Nov 28 08:51:24 crc kubenswrapper[4946]: I1128 08:51:24.730801 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 08:51:24 crc kubenswrapper[4946]: I1128 08:51:24.730870 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 08:51:25 crc kubenswrapper[4946]: I1128 08:51:25.501600 4946 generic.go:334] "Generic (PLEG): container finished" podID="47b7d54b-d04e-4c47-86dc-cc283143f3d7" containerID="322ddc7bc8022960bc74b4e656c68b690b89f9a6d42fc613213f26a4515a78a1" exitCode=0
Nov 28 08:51:25 crc kubenswrapper[4946]: I1128 08:51:25.501674 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-j2fl5" event={"ID":"47b7d54b-d04e-4c47-86dc-cc283143f3d7","Type":"ContainerDied","Data":"322ddc7bc8022960bc74b4e656c68b690b89f9a6d42fc613213f26a4515a78a1"}
Nov 28 08:51:26 crc kubenswrapper[4946]: I1128 08:51:26.949096 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-j2fl5"
Nov 28 08:51:27 crc kubenswrapper[4946]: I1128 08:51:27.149834 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47b7d54b-d04e-4c47-86dc-cc283143f3d7-logs\") pod \"47b7d54b-d04e-4c47-86dc-cc283143f3d7\" (UID: \"47b7d54b-d04e-4c47-86dc-cc283143f3d7\") "
Nov 28 08:51:27 crc kubenswrapper[4946]: I1128 08:51:27.149972 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6ds8\" (UniqueName: \"kubernetes.io/projected/47b7d54b-d04e-4c47-86dc-cc283143f3d7-kube-api-access-n6ds8\") pod \"47b7d54b-d04e-4c47-86dc-cc283143f3d7\" (UID: \"47b7d54b-d04e-4c47-86dc-cc283143f3d7\") "
Nov 28 08:51:27 crc kubenswrapper[4946]: I1128 08:51:27.150077 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47b7d54b-d04e-4c47-86dc-cc283143f3d7-scripts\") pod \"47b7d54b-d04e-4c47-86dc-cc283143f3d7\" (UID: \"47b7d54b-d04e-4c47-86dc-cc283143f3d7\") "
Nov 28 08:51:27 crc kubenswrapper[4946]: I1128 08:51:27.150153 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47b7d54b-d04e-4c47-86dc-cc283143f3d7-config-data\") pod \"47b7d54b-d04e-4c47-86dc-cc283143f3d7\" (UID: \"47b7d54b-d04e-4c47-86dc-cc283143f3d7\") "
Nov 28 08:51:27 crc kubenswrapper[4946]: I1128 08:51:27.150209 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47b7d54b-d04e-4c47-86dc-cc283143f3d7-combined-ca-bundle\") pod \"47b7d54b-d04e-4c47-86dc-cc283143f3d7\" (UID: \"47b7d54b-d04e-4c47-86dc-cc283143f3d7\") "
Nov 28 08:51:27 crc kubenswrapper[4946]: I1128 08:51:27.151356 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47b7d54b-d04e-4c47-86dc-cc283143f3d7-logs" (OuterVolumeSpecName: "logs") pod "47b7d54b-d04e-4c47-86dc-cc283143f3d7" (UID: "47b7d54b-d04e-4c47-86dc-cc283143f3d7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 08:51:27 crc kubenswrapper[4946]: I1128 08:51:27.151793 4946 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47b7d54b-d04e-4c47-86dc-cc283143f3d7-logs\") on node \"crc\" DevicePath \"\""
Nov 28 08:51:27 crc kubenswrapper[4946]: I1128 08:51:27.158685 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47b7d54b-d04e-4c47-86dc-cc283143f3d7-kube-api-access-n6ds8" (OuterVolumeSpecName: "kube-api-access-n6ds8") pod "47b7d54b-d04e-4c47-86dc-cc283143f3d7" (UID: "47b7d54b-d04e-4c47-86dc-cc283143f3d7"). InnerVolumeSpecName "kube-api-access-n6ds8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 08:51:27 crc kubenswrapper[4946]: I1128 08:51:27.164697 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47b7d54b-d04e-4c47-86dc-cc283143f3d7-scripts" (OuterVolumeSpecName: "scripts") pod "47b7d54b-d04e-4c47-86dc-cc283143f3d7" (UID: "47b7d54b-d04e-4c47-86dc-cc283143f3d7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 08:51:27 crc kubenswrapper[4946]: I1128 08:51:27.197151 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47b7d54b-d04e-4c47-86dc-cc283143f3d7-config-data" (OuterVolumeSpecName: "config-data") pod "47b7d54b-d04e-4c47-86dc-cc283143f3d7" (UID: "47b7d54b-d04e-4c47-86dc-cc283143f3d7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 08:51:27 crc kubenswrapper[4946]: I1128 08:51:27.202771 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47b7d54b-d04e-4c47-86dc-cc283143f3d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "47b7d54b-d04e-4c47-86dc-cc283143f3d7" (UID: "47b7d54b-d04e-4c47-86dc-cc283143f3d7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 08:51:27 crc kubenswrapper[4946]: I1128 08:51:27.254173 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47b7d54b-d04e-4c47-86dc-cc283143f3d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 08:51:27 crc kubenswrapper[4946]: I1128 08:51:27.254242 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6ds8\" (UniqueName: \"kubernetes.io/projected/47b7d54b-d04e-4c47-86dc-cc283143f3d7-kube-api-access-n6ds8\") on node \"crc\" DevicePath \"\""
Nov 28 08:51:27 crc kubenswrapper[4946]: I1128 08:51:27.254258 4946 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47b7d54b-d04e-4c47-86dc-cc283143f3d7-scripts\") on node \"crc\" DevicePath \"\""
Nov 28 08:51:27 crc kubenswrapper[4946]: I1128 08:51:27.254270 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47b7d54b-d04e-4c47-86dc-cc283143f3d7-config-data\") on node \"crc\" DevicePath \"\""
Nov 28 08:51:27 crc kubenswrapper[4946]: I1128 08:51:27.535977 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-j2fl5" event={"ID":"47b7d54b-d04e-4c47-86dc-cc283143f3d7","Type":"ContainerDied","Data":"a07f63349f40c78966179e33b1f66587fdd409291ad562b2c944534b6f4ea6e4"}
Nov 28 08:51:27 crc kubenswrapper[4946]: I1128 08:51:27.536214 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-j2fl5"
Nov 28 08:51:27 crc kubenswrapper[4946]: I1128 08:51:27.537576 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a07f63349f40c78966179e33b1f66587fdd409291ad562b2c944534b6f4ea6e4"
Nov 28 08:51:27 crc kubenswrapper[4946]: I1128 08:51:27.645563 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-77d5c54d5b-8wbwm"]
Nov 28 08:51:27 crc kubenswrapper[4946]: E1128 08:51:27.645984 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47b7d54b-d04e-4c47-86dc-cc283143f3d7" containerName="placement-db-sync"
Nov 28 08:51:27 crc kubenswrapper[4946]: I1128 08:51:27.646005 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="47b7d54b-d04e-4c47-86dc-cc283143f3d7" containerName="placement-db-sync"
Nov 28 08:51:27 crc kubenswrapper[4946]: I1128 08:51:27.646219 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="47b7d54b-d04e-4c47-86dc-cc283143f3d7" containerName="placement-db-sync"
Nov 28 08:51:27 crc kubenswrapper[4946]: I1128 08:51:27.647341 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-77d5c54d5b-8wbwm"
Nov 28 08:51:27 crc kubenswrapper[4946]: I1128 08:51:27.649945 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Nov 28 08:51:27 crc kubenswrapper[4946]: I1128 08:51:27.650153 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-v2m2q"
Nov 28 08:51:27 crc kubenswrapper[4946]: I1128 08:51:27.653896 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Nov 28 08:51:27 crc kubenswrapper[4946]: I1128 08:51:27.668546 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-77d5c54d5b-8wbwm"]
Nov 28 08:51:27 crc kubenswrapper[4946]: I1128 08:51:27.767945 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f01cd93-53ff-47a5-89e3-d9354feeb065-combined-ca-bundle\") pod \"placement-77d5c54d5b-8wbwm\" (UID: \"6f01cd93-53ff-47a5-89e3-d9354feeb065\") " pod="openstack/placement-77d5c54d5b-8wbwm"
Nov 28 08:51:27 crc kubenswrapper[4946]: I1128 08:51:27.768010 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f01cd93-53ff-47a5-89e3-d9354feeb065-config-data\") pod \"placement-77d5c54d5b-8wbwm\" (UID: \"6f01cd93-53ff-47a5-89e3-d9354feeb065\") " pod="openstack/placement-77d5c54d5b-8wbwm"
Nov 28 08:51:27 crc kubenswrapper[4946]: I1128 08:51:27.768035 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f01cd93-53ff-47a5-89e3-d9354feeb065-logs\") pod \"placement-77d5c54d5b-8wbwm\" (UID: \"6f01cd93-53ff-47a5-89e3-d9354feeb065\") " pod="openstack/placement-77d5c54d5b-8wbwm"
Nov 28 08:51:27 crc kubenswrapper[4946]: I1128 08:51:27.768709 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f01cd93-53ff-47a5-89e3-d9354feeb065-scripts\") pod \"placement-77d5c54d5b-8wbwm\" (UID: \"6f01cd93-53ff-47a5-89e3-d9354feeb065\") " pod="openstack/placement-77d5c54d5b-8wbwm"
Nov 28 08:51:27 crc kubenswrapper[4946]: I1128 08:51:27.768778 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4jst\" (UniqueName: \"kubernetes.io/projected/6f01cd93-53ff-47a5-89e3-d9354feeb065-kube-api-access-n4jst\") pod \"placement-77d5c54d5b-8wbwm\" (UID: \"6f01cd93-53ff-47a5-89e3-d9354feeb065\") " pod="openstack/placement-77d5c54d5b-8wbwm"
Nov 28 08:51:27 crc kubenswrapper[4946]: I1128 08:51:27.870047 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f01cd93-53ff-47a5-89e3-d9354feeb065-combined-ca-bundle\") pod \"placement-77d5c54d5b-8wbwm\" (UID: \"6f01cd93-53ff-47a5-89e3-d9354feeb065\") " pod="openstack/placement-77d5c54d5b-8wbwm"
Nov 28 08:51:27 crc kubenswrapper[4946]: I1128 08:51:27.870133 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f01cd93-53ff-47a5-89e3-d9354feeb065-config-data\") pod \"placement-77d5c54d5b-8wbwm\" (UID: \"6f01cd93-53ff-47a5-89e3-d9354feeb065\") " pod="openstack/placement-77d5c54d5b-8wbwm"
Nov 28 08:51:27 crc kubenswrapper[4946]: I1128 08:51:27.870160 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f01cd93-53ff-47a5-89e3-d9354feeb065-logs\") pod \"placement-77d5c54d5b-8wbwm\" (UID: \"6f01cd93-53ff-47a5-89e3-d9354feeb065\") " pod="openstack/placement-77d5c54d5b-8wbwm"
Nov 28 08:51:27 crc kubenswrapper[4946]: I1128 08:51:27.870307 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f01cd93-53ff-47a5-89e3-d9354feeb065-scripts\") pod \"placement-77d5c54d5b-8wbwm\" (UID: \"6f01cd93-53ff-47a5-89e3-d9354feeb065\") " pod="openstack/placement-77d5c54d5b-8wbwm"
Nov 28 08:51:27 crc kubenswrapper[4946]: I1128 08:51:27.870334 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4jst\" (UniqueName: \"kubernetes.io/projected/6f01cd93-53ff-47a5-89e3-d9354feeb065-kube-api-access-n4jst\") pod \"placement-77d5c54d5b-8wbwm\" (UID: \"6f01cd93-53ff-47a5-89e3-d9354feeb065\") " pod="openstack/placement-77d5c54d5b-8wbwm"
Nov 28 08:51:27 crc kubenswrapper[4946]: I1128 08:51:27.870696 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f01cd93-53ff-47a5-89e3-d9354feeb065-logs\") pod \"placement-77d5c54d5b-8wbwm\" (UID: \"6f01cd93-53ff-47a5-89e3-d9354feeb065\") " pod="openstack/placement-77d5c54d5b-8wbwm"
Nov 28 08:51:27 crc kubenswrapper[4946]: I1128 08:51:27.876367 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f01cd93-53ff-47a5-89e3-d9354feeb065-scripts\") pod \"placement-77d5c54d5b-8wbwm\" (UID: \"6f01cd93-53ff-47a5-89e3-d9354feeb065\") " pod="openstack/placement-77d5c54d5b-8wbwm"
Nov 28 08:51:27 crc kubenswrapper[4946]: I1128 08:51:27.876492 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f01cd93-53ff-47a5-89e3-d9354feeb065-config-data\") pod \"placement-77d5c54d5b-8wbwm\" (UID: \"6f01cd93-53ff-47a5-89e3-d9354feeb065\") " pod="openstack/placement-77d5c54d5b-8wbwm"
Nov 28 08:51:27 crc kubenswrapper[4946]: I1128 08:51:27.878171 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f01cd93-53ff-47a5-89e3-d9354feeb065-combined-ca-bundle\") pod \"placement-77d5c54d5b-8wbwm\" (UID: \"6f01cd93-53ff-47a5-89e3-d9354feeb065\") " pod="openstack/placement-77d5c54d5b-8wbwm"
Nov 28 08:51:27 crc kubenswrapper[4946]: I1128 08:51:27.893171 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4jst\" (UniqueName: \"kubernetes.io/projected/6f01cd93-53ff-47a5-89e3-d9354feeb065-kube-api-access-n4jst\") pod \"placement-77d5c54d5b-8wbwm\" (UID: \"6f01cd93-53ff-47a5-89e3-d9354feeb065\") " pod="openstack/placement-77d5c54d5b-8wbwm"
Nov 28 08:51:27 crc kubenswrapper[4946]: I1128 08:51:27.980489 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-77d5c54d5b-8wbwm"
Nov 28 08:51:28 crc kubenswrapper[4946]: I1128 08:51:28.440314 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-77d5c54d5b-8wbwm"]
Nov 28 08:51:28 crc kubenswrapper[4946]: I1128 08:51:28.550448 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-77d5c54d5b-8wbwm" event={"ID":"6f01cd93-53ff-47a5-89e3-d9354feeb065","Type":"ContainerStarted","Data":"274f693905a2d29a6d1eecfdfd1e3577be27dea7176bf64cdae5ee125ec9ccd3"}
Nov 28 08:51:29 crc kubenswrapper[4946]: I1128 08:51:29.568405 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-77d5c54d5b-8wbwm" event={"ID":"6f01cd93-53ff-47a5-89e3-d9354feeb065","Type":"ContainerStarted","Data":"413e1ec982e7a6f460c20188673cb16804b2655c6567980d0d91f3cc4f7b0c7a"}
Nov 28 08:51:29 crc kubenswrapper[4946]: I1128 08:51:29.568745 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-77d5c54d5b-8wbwm" event={"ID":"6f01cd93-53ff-47a5-89e3-d9354feeb065","Type":"ContainerStarted","Data":"3ba121991d651cdb0110fb14cf2521696a063660be561625c66f15141dc4701d"}
Nov 28 08:51:29 crc kubenswrapper[4946]: I1128 08:51:29.568984 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-77d5c54d5b-8wbwm"
Nov 28 08:51:29 crc kubenswrapper[4946]: I1128 08:51:29.569028 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-77d5c54d5b-8wbwm"
Nov 28 08:51:29 crc kubenswrapper[4946]: I1128 08:51:29.593947 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-77d5c54d5b-8wbwm" podStartSLOduration=2.593918018 podStartE2EDuration="2.593918018s" podCreationTimestamp="2025-11-28 08:51:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 08:51:29.585632223 +0000 UTC m=+7143.963697374" watchObservedRunningTime="2025-11-28 08:51:29.593918018 +0000 UTC m=+7143.971983139"
Nov 28 08:51:29 crc kubenswrapper[4946]: I1128 08:51:29.968877 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c5f444ccf-zm87m"
Nov 28 08:51:30 crc kubenswrapper[4946]: I1128 08:51:30.055127 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86dd75bcb7-cxcts"]
Nov 28 08:51:30 crc kubenswrapper[4946]: I1128 08:51:30.055384 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86dd75bcb7-cxcts" podUID="ff0b524c-9061-4af0-8192-95c1cb73259b" containerName="dnsmasq-dns" containerID="cri-o://f9d32dcbfdc80df2a90d60ca2f7d86096048bf0c5e51b9c050b0d324ad1d69f3" gracePeriod=10
Nov 28 08:51:30 crc kubenswrapper[4946]: I1128 08:51:30.525264 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86dd75bcb7-cxcts"
Nov 28 08:51:30 crc kubenswrapper[4946]: I1128 08:51:30.580607 4946 generic.go:334] "Generic (PLEG): container finished" podID="ff0b524c-9061-4af0-8192-95c1cb73259b" containerID="f9d32dcbfdc80df2a90d60ca2f7d86096048bf0c5e51b9c050b0d324ad1d69f3" exitCode=0
Nov 28 08:51:30 crc kubenswrapper[4946]: I1128 08:51:30.580653 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86dd75bcb7-cxcts" event={"ID":"ff0b524c-9061-4af0-8192-95c1cb73259b","Type":"ContainerDied","Data":"f9d32dcbfdc80df2a90d60ca2f7d86096048bf0c5e51b9c050b0d324ad1d69f3"}
Nov 28 08:51:30 crc kubenswrapper[4946]: I1128 08:51:30.580677 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86dd75bcb7-cxcts"
Nov 28 08:51:30 crc kubenswrapper[4946]: I1128 08:51:30.580706 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86dd75bcb7-cxcts" event={"ID":"ff0b524c-9061-4af0-8192-95c1cb73259b","Type":"ContainerDied","Data":"6e22f8bc18ab3917f30d0e3b0691d2bd8ae77aa27a1d7a4b2b216277e6de9003"}
Nov 28 08:51:30 crc kubenswrapper[4946]: I1128 08:51:30.580726 4946 scope.go:117] "RemoveContainer" containerID="f9d32dcbfdc80df2a90d60ca2f7d86096048bf0c5e51b9c050b0d324ad1d69f3"
Nov 28 08:51:30 crc kubenswrapper[4946]: I1128 08:51:30.603564 4946 scope.go:117] "RemoveContainer" containerID="84d9e41ede266d8a1d8e25460269f10f16e3b01bf5c26ac7c7b98906b79f2654"
Nov 28 08:51:30 crc kubenswrapper[4946]: I1128 08:51:30.623623 4946 scope.go:117] "RemoveContainer" containerID="f9d32dcbfdc80df2a90d60ca2f7d86096048bf0c5e51b9c050b0d324ad1d69f3"
Nov 28 08:51:30 crc kubenswrapper[4946]: E1128 08:51:30.624219 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9d32dcbfdc80df2a90d60ca2f7d86096048bf0c5e51b9c050b0d324ad1d69f3\": container with ID starting with f9d32dcbfdc80df2a90d60ca2f7d86096048bf0c5e51b9c050b0d324ad1d69f3 not found: ID does not exist" containerID="f9d32dcbfdc80df2a90d60ca2f7d86096048bf0c5e51b9c050b0d324ad1d69f3"
Nov 28 08:51:30 crc kubenswrapper[4946]: I1128 08:51:30.624255 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9d32dcbfdc80df2a90d60ca2f7d86096048bf0c5e51b9c050b0d324ad1d69f3"} err="failed to get container status \"f9d32dcbfdc80df2a90d60ca2f7d86096048bf0c5e51b9c050b0d324ad1d69f3\": rpc error: code = NotFound desc = could not find container \"f9d32dcbfdc80df2a90d60ca2f7d86096048bf0c5e51b9c050b0d324ad1d69f3\": container with ID starting with f9d32dcbfdc80df2a90d60ca2f7d86096048bf0c5e51b9c050b0d324ad1d69f3 not found: ID does not exist"
Nov 28 08:51:30 crc kubenswrapper[4946]: I1128 08:51:30.624281 4946 scope.go:117] "RemoveContainer" containerID="84d9e41ede266d8a1d8e25460269f10f16e3b01bf5c26ac7c7b98906b79f2654"
Nov 28 08:51:30 crc kubenswrapper[4946]: E1128 08:51:30.624677 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84d9e41ede266d8a1d8e25460269f10f16e3b01bf5c26ac7c7b98906b79f2654\": container with ID starting with 84d9e41ede266d8a1d8e25460269f10f16e3b01bf5c26ac7c7b98906b79f2654 not found: ID does not exist" containerID="84d9e41ede266d8a1d8e25460269f10f16e3b01bf5c26ac7c7b98906b79f2654"
Nov 28 08:51:30 crc kubenswrapper[4946]: I1128 08:51:30.624724 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84d9e41ede266d8a1d8e25460269f10f16e3b01bf5c26ac7c7b98906b79f2654"} err="failed to get container status \"84d9e41ede266d8a1d8e25460269f10f16e3b01bf5c26ac7c7b98906b79f2654\": rpc error: code = NotFound desc = could not find container \"84d9e41ede266d8a1d8e25460269f10f16e3b01bf5c26ac7c7b98906b79f2654\": container with ID starting with 84d9e41ede266d8a1d8e25460269f10f16e3b01bf5c26ac7c7b98906b79f2654 not found: ID does not exist"
Nov 28 08:51:30 crc kubenswrapper[4946]: I1128 08:51:30.632554 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff0b524c-9061-4af0-8192-95c1cb73259b-ovsdbserver-nb\") pod \"ff0b524c-9061-4af0-8192-95c1cb73259b\" (UID: \"ff0b524c-9061-4af0-8192-95c1cb73259b\") "
Nov 28 08:51:30 crc kubenswrapper[4946]: I1128 08:51:30.632648 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4hdq\" (UniqueName: \"kubernetes.io/projected/ff0b524c-9061-4af0-8192-95c1cb73259b-kube-api-access-r4hdq\") pod \"ff0b524c-9061-4af0-8192-95c1cb73259b\" (UID: \"ff0b524c-9061-4af0-8192-95c1cb73259b\") "
Nov 28 08:51:30 crc kubenswrapper[4946]: I1128 08:51:30.633013 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff0b524c-9061-4af0-8192-95c1cb73259b-config\") pod \"ff0b524c-9061-4af0-8192-95c1cb73259b\" (UID: \"ff0b524c-9061-4af0-8192-95c1cb73259b\") "
Nov 28 08:51:30 crc kubenswrapper[4946]: I1128 08:51:30.633152 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff0b524c-9061-4af0-8192-95c1cb73259b-dns-svc\") pod \"ff0b524c-9061-4af0-8192-95c1cb73259b\" (UID: \"ff0b524c-9061-4af0-8192-95c1cb73259b\") "
Nov 28 08:51:30 crc kubenswrapper[4946]: I1128 08:51:30.633526 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff0b524c-9061-4af0-8192-95c1cb73259b-ovsdbserver-sb\") pod \"ff0b524c-9061-4af0-8192-95c1cb73259b\" (UID: \"ff0b524c-9061-4af0-8192-95c1cb73259b\") "
Nov 28 08:51:30 crc kubenswrapper[4946]: I1128 08:51:30.639368 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff0b524c-9061-4af0-8192-95c1cb73259b-kube-api-access-r4hdq" (OuterVolumeSpecName: "kube-api-access-r4hdq") pod "ff0b524c-9061-4af0-8192-95c1cb73259b" (UID: "ff0b524c-9061-4af0-8192-95c1cb73259b"). InnerVolumeSpecName "kube-api-access-r4hdq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 08:51:30 crc kubenswrapper[4946]: I1128 08:51:30.672590 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff0b524c-9061-4af0-8192-95c1cb73259b-config" (OuterVolumeSpecName: "config") pod "ff0b524c-9061-4af0-8192-95c1cb73259b" (UID: "ff0b524c-9061-4af0-8192-95c1cb73259b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 08:51:30 crc kubenswrapper[4946]: I1128 08:51:30.681423 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff0b524c-9061-4af0-8192-95c1cb73259b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ff0b524c-9061-4af0-8192-95c1cb73259b" (UID: "ff0b524c-9061-4af0-8192-95c1cb73259b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 08:51:30 crc kubenswrapper[4946]: I1128 08:51:30.687356 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff0b524c-9061-4af0-8192-95c1cb73259b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ff0b524c-9061-4af0-8192-95c1cb73259b" (UID: "ff0b524c-9061-4af0-8192-95c1cb73259b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 08:51:30 crc kubenswrapper[4946]: I1128 08:51:30.700079 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff0b524c-9061-4af0-8192-95c1cb73259b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ff0b524c-9061-4af0-8192-95c1cb73259b" (UID: "ff0b524c-9061-4af0-8192-95c1cb73259b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 08:51:30 crc kubenswrapper[4946]: I1128 08:51:30.735219 4946 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff0b524c-9061-4af0-8192-95c1cb73259b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 28 08:51:30 crc kubenswrapper[4946]: I1128 08:51:30.735261 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4hdq\" (UniqueName: \"kubernetes.io/projected/ff0b524c-9061-4af0-8192-95c1cb73259b-kube-api-access-r4hdq\") on node \"crc\" DevicePath \"\""
Nov 28 08:51:30 crc kubenswrapper[4946]: I1128 08:51:30.735280 4946 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff0b524c-9061-4af0-8192-95c1cb73259b-config\") on node \"crc\" DevicePath \"\""
Nov 28 08:51:30 crc kubenswrapper[4946]: I1128 08:51:30.735292 4946 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff0b524c-9061-4af0-8192-95c1cb73259b-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 28 08:51:30 crc kubenswrapper[4946]: I1128 08:51:30.735303 4946 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff0b524c-9061-4af0-8192-95c1cb73259b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 28 08:51:30 crc kubenswrapper[4946]: I1128 08:51:30.924889 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86dd75bcb7-cxcts"]
Nov 28 08:51:30 crc kubenswrapper[4946]: I1128 08:51:30.935978 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86dd75bcb7-cxcts"]
Nov 28 08:51:32 crc kubenswrapper[4946]: I1128 08:51:32.009781 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff0b524c-9061-4af0-8192-95c1cb73259b" path="/var/lib/kubelet/pods/ff0b524c-9061-4af0-8192-95c1cb73259b/volumes"
Nov 28 08:51:38 crc kubenswrapper[4946]: I1128 08:51:38.409378 4946 scope.go:117] "RemoveContainer" containerID="ce3518c124a5df086af9397de0857ce5ca25d2d274726a1af2b73491ba41a4f7"
Nov 28 08:51:38 crc kubenswrapper[4946]: I1128 08:51:38.450173 4946 scope.go:117] "RemoveContainer" containerID="afccba0e154a47ac23fa6a92a761538bf8108de89683898cd10d868362c44ffa"
Nov 28 08:51:54 crc kubenswrapper[4946]: I1128 08:51:54.730414 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 08:51:54 crc kubenswrapper[4946]: I1128 08:51:54.732594 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 08:51:59 crc kubenswrapper[4946]: I1128 08:51:59.014016 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-77d5c54d5b-8wbwm"
Nov 28 08:51:59 crc kubenswrapper[4946]: I1128 08:51:59.017993 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-77d5c54d5b-8wbwm"
Nov 28 08:52:23 crc kubenswrapper[4946]: I1128 08:52:23.554533 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-tz944"]
Nov 28 08:52:23 crc kubenswrapper[4946]: E1128 08:52:23.555744 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff0b524c-9061-4af0-8192-95c1cb73259b" containerName="dnsmasq-dns"
Nov 28 08:52:23 crc kubenswrapper[4946]: I1128 08:52:23.555839 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff0b524c-9061-4af0-8192-95c1cb73259b" containerName="dnsmasq-dns"
Nov 28 08:52:23 crc kubenswrapper[4946]: E1128 08:52:23.555861 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff0b524c-9061-4af0-8192-95c1cb73259b" containerName="init"
Nov 28 08:52:23 crc kubenswrapper[4946]: I1128 08:52:23.555867 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff0b524c-9061-4af0-8192-95c1cb73259b" containerName="init"
Nov 28 08:52:23 crc kubenswrapper[4946]: I1128 08:52:23.556138 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff0b524c-9061-4af0-8192-95c1cb73259b" containerName="dnsmasq-dns"
Nov 28 08:52:23 crc kubenswrapper[4946]: I1128 08:52:23.558371 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-tz944"
Nov 28 08:52:23 crc kubenswrapper[4946]: I1128 08:52:23.567891 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-tz944"]
Nov 28 08:52:23 crc kubenswrapper[4946]: I1128 08:52:23.612178 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60ea7f40-c928-4e4d-afe5-ab0b4a76e918-operator-scripts\") pod \"nova-api-db-create-tz944\" (UID: \"60ea7f40-c928-4e4d-afe5-ab0b4a76e918\") " pod="openstack/nova-api-db-create-tz944"
Nov 28 08:52:23 crc kubenswrapper[4946]: I1128 08:52:23.612234 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vjtw\" (UniqueName: \"kubernetes.io/projected/60ea7f40-c928-4e4d-afe5-ab0b4a76e918-kube-api-access-6vjtw\") pod \"nova-api-db-create-tz944\" (UID: \"60ea7f40-c928-4e4d-afe5-ab0b4a76e918\") " pod="openstack/nova-api-db-create-tz944"
Nov 28 08:52:23 crc kubenswrapper[4946]: I1128 08:52:23.658941 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-rzjfc"]
Nov 28 08:52:23 crc kubenswrapper[4946]: I1128 08:52:23.660380 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-rzjfc"
Nov 28 08:52:23 crc kubenswrapper[4946]: I1128 08:52:23.668894 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-rzjfc"]
Nov 28 08:52:23 crc kubenswrapper[4946]: I1128 08:52:23.712643 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqvpb\" (UniqueName: \"kubernetes.io/projected/9d510bda-fc5e-412b-be7a-e7f6154da6ff-kube-api-access-qqvpb\") pod \"nova-cell0-db-create-rzjfc\" (UID: \"9d510bda-fc5e-412b-be7a-e7f6154da6ff\") " pod="openstack/nova-cell0-db-create-rzjfc"
Nov 28 08:52:23 crc kubenswrapper[4946]: I1128 08:52:23.712717 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d510bda-fc5e-412b-be7a-e7f6154da6ff-operator-scripts\") pod \"nova-cell0-db-create-rzjfc\" (UID: \"9d510bda-fc5e-412b-be7a-e7f6154da6ff\") " pod="openstack/nova-cell0-db-create-rzjfc"
Nov 28 08:52:23 crc kubenswrapper[4946]: I1128 08:52:23.712755 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60ea7f40-c928-4e4d-afe5-ab0b4a76e918-operator-scripts\") pod \"nova-api-db-create-tz944\" (UID: \"60ea7f40-c928-4e4d-afe5-ab0b4a76e918\") " pod="openstack/nova-api-db-create-tz944"
Nov 28 08:52:23 crc kubenswrapper[4946]: I1128 08:52:23.712782 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vjtw\" (UniqueName: \"kubernetes.io/projected/60ea7f40-c928-4e4d-afe5-ab0b4a76e918-kube-api-access-6vjtw\") pod \"nova-api-db-create-tz944\" (UID: \"60ea7f40-c928-4e4d-afe5-ab0b4a76e918\") " pod="openstack/nova-api-db-create-tz944"
Nov 28 08:52:23 crc kubenswrapper[4946]: I1128 08:52:23.713451 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60ea7f40-c928-4e4d-afe5-ab0b4a76e918-operator-scripts\") pod \"nova-api-db-create-tz944\" (UID: \"60ea7f40-c928-4e4d-afe5-ab0b4a76e918\") " pod="openstack/nova-api-db-create-tz944"
Nov 28 08:52:23 crc kubenswrapper[4946]: I1128 08:52:23.753166 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vjtw\" (UniqueName: \"kubernetes.io/projected/60ea7f40-c928-4e4d-afe5-ab0b4a76e918-kube-api-access-6vjtw\") pod \"nova-api-db-create-tz944\" (UID: \"60ea7f40-c928-4e4d-afe5-ab0b4a76e918\") " pod="openstack/nova-api-db-create-tz944"
Nov 28 08:52:23 crc kubenswrapper[4946]: I1128 08:52:23.761682 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-ztrmk"]
Nov 28 08:52:23 crc kubenswrapper[4946]: I1128 08:52:23.762841 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-ztrmk"
Nov 28 08:52:23 crc kubenswrapper[4946]: I1128 08:52:23.787698 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-263b-account-create-update-lr5fj"]
Nov 28 08:52:23 crc kubenswrapper[4946]: I1128 08:52:23.788885 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-263b-account-create-update-lr5fj"
Nov 28 08:52:23 crc kubenswrapper[4946]: I1128 08:52:23.790676 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Nov 28 08:52:23 crc kubenswrapper[4946]: I1128 08:52:23.811925 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-ztrmk"]
Nov 28 08:52:23 crc kubenswrapper[4946]: I1128 08:52:23.814272 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqvpb\" (UniqueName: \"kubernetes.io/projected/9d510bda-fc5e-412b-be7a-e7f6154da6ff-kube-api-access-qqvpb\") pod \"nova-cell0-db-create-rzjfc\" (UID: \"9d510bda-fc5e-412b-be7a-e7f6154da6ff\") " pod="openstack/nova-cell0-db-create-rzjfc"
Nov 28 08:52:23 crc kubenswrapper[4946]: I1128 08:52:23.814340 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnf7b\" (UniqueName: \"kubernetes.io/projected/7d6ecba1-ee43-4308-ba77-ea66427ac798-kube-api-access-dnf7b\") pod \"nova-api-263b-account-create-update-lr5fj\" (UID: \"7d6ecba1-ee43-4308-ba77-ea66427ac798\") " pod="openstack/nova-api-263b-account-create-update-lr5fj"
Nov 28 08:52:23 crc kubenswrapper[4946]: I1128 08:52:23.814368 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d510bda-fc5e-412b-be7a-e7f6154da6ff-operator-scripts\") pod \"nova-cell0-db-create-rzjfc\" (UID: \"9d510bda-fc5e-412b-be7a-e7f6154da6ff\") " pod="openstack/nova-cell0-db-create-rzjfc"
Nov 28 08:52:23 crc kubenswrapper[4946]: I1128 08:52:23.814402 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9dbfdfa4-2fc0-42de-bee6-5657463640c6-operator-scripts\") pod \"nova-cell1-db-create-ztrmk\" (UID: \"9dbfdfa4-2fc0-42de-bee6-5657463640c6\") " pod="openstack/nova-cell1-db-create-ztrmk"
Nov 28 08:52:23 crc kubenswrapper[4946]: I1128 08:52:23.814441 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d6ecba1-ee43-4308-ba77-ea66427ac798-operator-scripts\") pod \"nova-api-263b-account-create-update-lr5fj\" (UID: \"7d6ecba1-ee43-4308-ba77-ea66427ac798\") " pod="openstack/nova-api-263b-account-create-update-lr5fj"
Nov 28 08:52:23 crc kubenswrapper[4946]: I1128 08:52:23.814480 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvh9l\" (UniqueName: \"kubernetes.io/projected/9dbfdfa4-2fc0-42de-bee6-5657463640c6-kube-api-access-bvh9l\") pod \"nova-cell1-db-create-ztrmk\" (UID: \"9dbfdfa4-2fc0-42de-bee6-5657463640c6\") " pod="openstack/nova-cell1-db-create-ztrmk"
Nov 28 08:52:23 crc kubenswrapper[4946]: I1128 08:52:23.815372 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d510bda-fc5e-412b-be7a-e7f6154da6ff-operator-scripts\") pod \"nova-cell0-db-create-rzjfc\" (UID: \"9d510bda-fc5e-412b-be7a-e7f6154da6ff\") " pod="openstack/nova-cell0-db-create-rzjfc"
Nov 28 08:52:23 crc kubenswrapper[4946]: I1128 08:52:23.840607 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-263b-account-create-update-lr5fj"]
Nov 28 08:52:23 crc kubenswrapper[4946]: I1128 08:52:23.844165 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqvpb\" (UniqueName: \"kubernetes.io/projected/9d510bda-fc5e-412b-be7a-e7f6154da6ff-kube-api-access-qqvpb\") pod \"nova-cell0-db-create-rzjfc\" (UID: \"9d510bda-fc5e-412b-be7a-e7f6154da6ff\") " pod="openstack/nova-cell0-db-create-rzjfc"
Nov 28 08:52:23 crc kubenswrapper[4946]: I1128 08:52:23.876515 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-tz944"
Nov 28 08:52:23 crc kubenswrapper[4946]: I1128 08:52:23.915355 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9dbfdfa4-2fc0-42de-bee6-5657463640c6-operator-scripts\") pod \"nova-cell1-db-create-ztrmk\" (UID: \"9dbfdfa4-2fc0-42de-bee6-5657463640c6\") " pod="openstack/nova-cell1-db-create-ztrmk"
Nov 28 08:52:23 crc kubenswrapper[4946]: I1128 08:52:23.915566 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d6ecba1-ee43-4308-ba77-ea66427ac798-operator-scripts\") pod \"nova-api-263b-account-create-update-lr5fj\" (UID: \"7d6ecba1-ee43-4308-ba77-ea66427ac798\") " pod="openstack/nova-api-263b-account-create-update-lr5fj"
Nov 28 08:52:23 crc kubenswrapper[4946]: I1128 08:52:23.915644 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvh9l\" (UniqueName: \"kubernetes.io/projected/9dbfdfa4-2fc0-42de-bee6-5657463640c6-kube-api-access-bvh9l\") pod \"nova-cell1-db-create-ztrmk\" (UID: \"9dbfdfa4-2fc0-42de-bee6-5657463640c6\") " pod="openstack/nova-cell1-db-create-ztrmk"
Nov 28 08:52:23 crc kubenswrapper[4946]: I1128 08:52:23.915777 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnf7b\" (UniqueName: \"kubernetes.io/projected/7d6ecba1-ee43-4308-ba77-ea66427ac798-kube-api-access-dnf7b\") pod \"nova-api-263b-account-create-update-lr5fj\" (UID: \"7d6ecba1-ee43-4308-ba77-ea66427ac798\") " pod="openstack/nova-api-263b-account-create-update-lr5fj"
Nov 28 08:52:23 crc kubenswrapper[4946]: I1128 08:52:23.916623 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9dbfdfa4-2fc0-42de-bee6-5657463640c6-operator-scripts\") pod \"nova-cell1-db-create-ztrmk\" (UID: \"9dbfdfa4-2fc0-42de-bee6-5657463640c6\") " pod="openstack/nova-cell1-db-create-ztrmk"
Nov 28 08:52:23 crc kubenswrapper[4946]: I1128 08:52:23.917215 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d6ecba1-ee43-4308-ba77-ea66427ac798-operator-scripts\") pod \"nova-api-263b-account-create-update-lr5fj\" (UID: \"7d6ecba1-ee43-4308-ba77-ea66427ac798\") " pod="openstack/nova-api-263b-account-create-update-lr5fj"
Nov 28 08:52:23 crc kubenswrapper[4946]: I1128 08:52:23.934030 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnf7b\" (UniqueName: \"kubernetes.io/projected/7d6ecba1-ee43-4308-ba77-ea66427ac798-kube-api-access-dnf7b\") pod \"nova-api-263b-account-create-update-lr5fj\" (UID: \"7d6ecba1-ee43-4308-ba77-ea66427ac798\") " pod="openstack/nova-api-263b-account-create-update-lr5fj"
Nov 28 08:52:23 crc kubenswrapper[4946]: I1128 08:52:23.937880 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvh9l\" (UniqueName: \"kubernetes.io/projected/9dbfdfa4-2fc0-42de-bee6-5657463640c6-kube-api-access-bvh9l\") pod \"nova-cell1-db-create-ztrmk\" (UID: \"9dbfdfa4-2fc0-42de-bee6-5657463640c6\") " pod="openstack/nova-cell1-db-create-ztrmk"
Nov 28 08:52:23 crc kubenswrapper[4946]: I1128 08:52:23.964207 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-5268-account-create-update-nkrh2"]
Nov 28 08:52:23 crc kubenswrapper[4946]: I1128 08:52:23.965624 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5268-account-create-update-nkrh2"
Nov 28 08:52:23 crc kubenswrapper[4946]: I1128 08:52:23.973065 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Nov 28 08:52:23 crc kubenswrapper[4946]: I1128 08:52:23.975424 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-5268-account-create-update-nkrh2"]
Nov 28 08:52:24 crc kubenswrapper[4946]: I1128 08:52:24.005208 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-rzjfc"
Nov 28 08:52:24 crc kubenswrapper[4946]: I1128 08:52:24.018070 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q82c8\" (UniqueName: \"kubernetes.io/projected/df17e46e-2f32-4c8d-9aa7-cba30670c85f-kube-api-access-q82c8\") pod \"nova-cell0-5268-account-create-update-nkrh2\" (UID: \"df17e46e-2f32-4c8d-9aa7-cba30670c85f\") " pod="openstack/nova-cell0-5268-account-create-update-nkrh2"
Nov 28 08:52:24 crc kubenswrapper[4946]: I1128 08:52:24.018142 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df17e46e-2f32-4c8d-9aa7-cba30670c85f-operator-scripts\") pod \"nova-cell0-5268-account-create-update-nkrh2\" (UID: \"df17e46e-2f32-4c8d-9aa7-cba30670c85f\") " pod="openstack/nova-cell0-5268-account-create-update-nkrh2"
Nov 28 08:52:24 crc kubenswrapper[4946]: I1128 08:52:24.101213 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-ztrmk"
Nov 28 08:52:24 crc kubenswrapper[4946]: I1128 08:52:24.109587 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-263b-account-create-update-lr5fj"
Nov 28 08:52:24 crc kubenswrapper[4946]: I1128 08:52:24.119771 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q82c8\" (UniqueName: \"kubernetes.io/projected/df17e46e-2f32-4c8d-9aa7-cba30670c85f-kube-api-access-q82c8\") pod \"nova-cell0-5268-account-create-update-nkrh2\" (UID: \"df17e46e-2f32-4c8d-9aa7-cba30670c85f\") " pod="openstack/nova-cell0-5268-account-create-update-nkrh2"
Nov 28 08:52:24 crc kubenswrapper[4946]: I1128 08:52:24.119831 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df17e46e-2f32-4c8d-9aa7-cba30670c85f-operator-scripts\") pod \"nova-cell0-5268-account-create-update-nkrh2\" (UID: \"df17e46e-2f32-4c8d-9aa7-cba30670c85f\") " pod="openstack/nova-cell0-5268-account-create-update-nkrh2"
Nov 28 08:52:24 crc kubenswrapper[4946]: I1128 08:52:24.120451 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df17e46e-2f32-4c8d-9aa7-cba30670c85f-operator-scripts\") pod \"nova-cell0-5268-account-create-update-nkrh2\" (UID: \"df17e46e-2f32-4c8d-9aa7-cba30670c85f\") " pod="openstack/nova-cell0-5268-account-create-update-nkrh2"
Nov 28 08:52:24 crc kubenswrapper[4946]: I1128 08:52:24.151914 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q82c8\" (UniqueName: \"kubernetes.io/projected/df17e46e-2f32-4c8d-9aa7-cba30670c85f-kube-api-access-q82c8\") pod \"nova-cell0-5268-account-create-update-nkrh2\" (UID: \"df17e46e-2f32-4c8d-9aa7-cba30670c85f\") " pod="openstack/nova-cell0-5268-account-create-update-nkrh2"
Nov 28 08:52:24 crc kubenswrapper[4946]: I1128 08:52:24.164306 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-66e9-account-create-update-bjp9j"]
Nov 28 08:52:24 crc kubenswrapper[4946]: I1128 08:52:24.165324 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-66e9-account-create-update-bjp9j"
Nov 28 08:52:24 crc kubenswrapper[4946]: I1128 08:52:24.174032 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Nov 28 08:52:24 crc kubenswrapper[4946]: I1128 08:52:24.202959 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-66e9-account-create-update-bjp9j"]
Nov 28 08:52:24 crc kubenswrapper[4946]: I1128 08:52:24.321753 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-5268-account-create-update-nkrh2" Nov 28 08:52:24 crc kubenswrapper[4946]: I1128 08:52:24.323445 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/574aecae-6184-4f4c-abed-81ec30f03ac0-operator-scripts\") pod \"nova-cell1-66e9-account-create-update-bjp9j\" (UID: \"574aecae-6184-4f4c-abed-81ec30f03ac0\") " pod="openstack/nova-cell1-66e9-account-create-update-bjp9j" Nov 28 08:52:24 crc kubenswrapper[4946]: I1128 08:52:24.323607 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxxns\" (UniqueName: \"kubernetes.io/projected/574aecae-6184-4f4c-abed-81ec30f03ac0-kube-api-access-pxxns\") pod \"nova-cell1-66e9-account-create-update-bjp9j\" (UID: \"574aecae-6184-4f4c-abed-81ec30f03ac0\") " pod="openstack/nova-cell1-66e9-account-create-update-bjp9j" Nov 28 08:52:24 crc kubenswrapper[4946]: I1128 08:52:24.347851 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-tz944"] Nov 28 08:52:24 crc kubenswrapper[4946]: I1128 08:52:24.425035 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxxns\" (UniqueName: \"kubernetes.io/projected/574aecae-6184-4f4c-abed-81ec30f03ac0-kube-api-access-pxxns\") pod \"nova-cell1-66e9-account-create-update-bjp9j\" (UID: \"574aecae-6184-4f4c-abed-81ec30f03ac0\") " pod="openstack/nova-cell1-66e9-account-create-update-bjp9j" Nov 28 08:52:24 crc kubenswrapper[4946]: I1128 08:52:24.425088 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/574aecae-6184-4f4c-abed-81ec30f03ac0-operator-scripts\") pod \"nova-cell1-66e9-account-create-update-bjp9j\" (UID: \"574aecae-6184-4f4c-abed-81ec30f03ac0\") " pod="openstack/nova-cell1-66e9-account-create-update-bjp9j" Nov 28 08:52:24 crc kubenswrapper[4946]: I1128 08:52:24.437332 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/574aecae-6184-4f4c-abed-81ec30f03ac0-operator-scripts\") pod \"nova-cell1-66e9-account-create-update-bjp9j\" (UID: \"574aecae-6184-4f4c-abed-81ec30f03ac0\") " pod="openstack/nova-cell1-66e9-account-create-update-bjp9j" Nov 28 08:52:24 crc kubenswrapper[4946]: I1128 08:52:24.450012 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxxns\" (UniqueName: \"kubernetes.io/projected/574aecae-6184-4f4c-abed-81ec30f03ac0-kube-api-access-pxxns\") pod \"nova-cell1-66e9-account-create-update-bjp9j\" (UID: \"574aecae-6184-4f4c-abed-81ec30f03ac0\") " pod="openstack/nova-cell1-66e9-account-create-update-bjp9j" Nov 28 08:52:24 crc kubenswrapper[4946]: I1128 08:52:24.494558 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-66e9-account-create-update-bjp9j" Nov 28 08:52:24 crc kubenswrapper[4946]: I1128 08:52:24.537229 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-rzjfc"] Nov 28 08:52:24 crc kubenswrapper[4946]: I1128 08:52:24.726317 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-263b-account-create-update-lr5fj"] Nov 28 08:52:24 crc kubenswrapper[4946]: W1128 08:52:24.729387 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d6ecba1_ee43_4308_ba77_ea66427ac798.slice/crio-43b2c3861d6bd0e3e6600e0e5c8af6e005af0b75662d17836e36b5cc77450456 WatchSource:0}: Error finding container 43b2c3861d6bd0e3e6600e0e5c8af6e005af0b75662d17836e36b5cc77450456: Status 404 returned error can't find the container with id 43b2c3861d6bd0e3e6600e0e5c8af6e005af0b75662d17836e36b5cc77450456 Nov 28 08:52:24 crc kubenswrapper[4946]: I1128 08:52:24.730227 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 08:52:24 crc kubenswrapper[4946]: I1128 08:52:24.730263 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 08:52:24 crc kubenswrapper[4946]: I1128 08:52:24.730339 4946 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" Nov 28 08:52:24 crc kubenswrapper[4946]: I1128 08:52:24.730762 4946 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5d101f00978c0d1403b405e85a9fdb8ca2bcf5d89e67e93ef9450e8af3a21554"} pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 08:52:24 crc kubenswrapper[4946]: I1128 08:52:24.730814 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" containerID="cri-o://5d101f00978c0d1403b405e85a9fdb8ca2bcf5d89e67e93ef9450e8af3a21554" gracePeriod=600 Nov 28 08:52:24 crc kubenswrapper[4946]: I1128 08:52:24.744154 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-ztrmk"] Nov 28 08:52:24 crc kubenswrapper[4946]: I1128 08:52:24.873778 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-5268-account-create-update-nkrh2"] Nov 28 08:52:25 crc kubenswrapper[4946]: I1128 08:52:25.033395 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-66e9-account-create-update-bjp9j"] Nov 28 08:52:25 crc kubenswrapper[4946]: I1128 08:52:25.225108 4946 generic.go:334] "Generic (PLEG): container finished" podID="7450befc-262f-45d1-a5f4-f445e540185b" containerID="5d101f00978c0d1403b405e85a9fdb8ca2bcf5d89e67e93ef9450e8af3a21554" exitCode=0 Nov 28 08:52:25 crc 
kubenswrapper[4946]: I1128 08:52:25.225168 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerDied","Data":"5d101f00978c0d1403b405e85a9fdb8ca2bcf5d89e67e93ef9450e8af3a21554"} Nov 28 08:52:25 crc kubenswrapper[4946]: I1128 08:52:25.225194 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerStarted","Data":"39cf365640a94edb527a038b876ca944a2f17e9a393ff02023ee1ab525b0e376"} Nov 28 08:52:25 crc kubenswrapper[4946]: I1128 08:52:25.225209 4946 scope.go:117] "RemoveContainer" containerID="350d49558ff5edf402a167963260dd78bc11ac007c0bdaee3d45f6814cfb23bd" Nov 28 08:52:25 crc kubenswrapper[4946]: I1128 08:52:25.234184 4946 generic.go:334] "Generic (PLEG): container finished" podID="9d510bda-fc5e-412b-be7a-e7f6154da6ff" containerID="d71f454e72d4601e82dbfb46fbbdb717a54eb183f4d5d8fc4f330c6324b0ba2a" exitCode=0 Nov 28 08:52:25 crc kubenswrapper[4946]: I1128 08:52:25.234259 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-rzjfc" event={"ID":"9d510bda-fc5e-412b-be7a-e7f6154da6ff","Type":"ContainerDied","Data":"d71f454e72d4601e82dbfb46fbbdb717a54eb183f4d5d8fc4f330c6324b0ba2a"} Nov 28 08:52:25 crc kubenswrapper[4946]: I1128 08:52:25.234292 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-rzjfc" event={"ID":"9d510bda-fc5e-412b-be7a-e7f6154da6ff","Type":"ContainerStarted","Data":"79d48c6a2b9f3b598544ec69ccca5552187cdbf2cf777c4de7f1490e98c17f94"} Nov 28 08:52:25 crc kubenswrapper[4946]: I1128 08:52:25.238672 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-66e9-account-create-update-bjp9j" event={"ID":"574aecae-6184-4f4c-abed-81ec30f03ac0","Type":"ContainerStarted","Data":"1906fa8c2fc01e5041ffc5563fece78be7c087ab33583e605e780ed08b55ac8d"} Nov 28 08:52:25 crc kubenswrapper[4946]: I1128 08:52:25.247991 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-263b-account-create-update-lr5fj" event={"ID":"7d6ecba1-ee43-4308-ba77-ea66427ac798","Type":"ContainerStarted","Data":"06270bef99c40dec19eab6dc72c4ad9ad5a33f12cbd902dd086ce31d00d4f23b"} Nov 28 08:52:25 crc kubenswrapper[4946]: I1128 08:52:25.248043 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-263b-account-create-update-lr5fj" event={"ID":"7d6ecba1-ee43-4308-ba77-ea66427ac798","Type":"ContainerStarted","Data":"43b2c3861d6bd0e3e6600e0e5c8af6e005af0b75662d17836e36b5cc77450456"} Nov 28 08:52:25 crc kubenswrapper[4946]: I1128 08:52:25.252142 4946 generic.go:334] "Generic (PLEG): container finished" podID="60ea7f40-c928-4e4d-afe5-ab0b4a76e918" containerID="2120d25158b240e41e34e6642365a529363c3785e0137bdf59590a41e526f192" exitCode=0 Nov 28 08:52:25 crc kubenswrapper[4946]: I1128 08:52:25.252221 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-tz944" event={"ID":"60ea7f40-c928-4e4d-afe5-ab0b4a76e918","Type":"ContainerDied","Data":"2120d25158b240e41e34e6642365a529363c3785e0137bdf59590a41e526f192"} Nov 28 08:52:25 crc kubenswrapper[4946]: I1128 08:52:25.252247 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-tz944" 
event={"ID":"60ea7f40-c928-4e4d-afe5-ab0b4a76e918","Type":"ContainerStarted","Data":"ecf79b4a3c63c9cff25f1d3b7dc6277a7ecdea20a1aaa3360ca42c478af0521d"} Nov 28 08:52:25 crc kubenswrapper[4946]: I1128 08:52:25.255980 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-ztrmk" event={"ID":"9dbfdfa4-2fc0-42de-bee6-5657463640c6","Type":"ContainerStarted","Data":"c9e18741ca662196f97cc7cadf00628471e1f36f0c701a8c5526ce71a63523b2"} Nov 28 08:52:25 crc kubenswrapper[4946]: I1128 08:52:25.256033 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-ztrmk" event={"ID":"9dbfdfa4-2fc0-42de-bee6-5657463640c6","Type":"ContainerStarted","Data":"5ca8068c36f30c2c0a629b9e0fecd92183c7f85bdb6793b49830d7d9acc588a5"} Nov 28 08:52:25 crc kubenswrapper[4946]: I1128 08:52:25.261408 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5268-account-create-update-nkrh2" event={"ID":"df17e46e-2f32-4c8d-9aa7-cba30670c85f","Type":"ContainerStarted","Data":"02138abdb64b2212cf27612e629175161f4f8d3c5c53c833a03c2a3c1486100b"} Nov 28 08:52:25 crc kubenswrapper[4946]: I1128 08:52:25.285122 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-263b-account-create-update-lr5fj" podStartSLOduration=2.285106258 podStartE2EDuration="2.285106258s" podCreationTimestamp="2025-11-28 08:52:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 08:52:25.277053129 +0000 UTC m=+7199.655118240" watchObservedRunningTime="2025-11-28 08:52:25.285106258 +0000 UTC m=+7199.663171369" Nov 28 08:52:25 crc kubenswrapper[4946]: I1128 08:52:25.296698 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-ztrmk" podStartSLOduration=2.296677375 podStartE2EDuration="2.296677375s" podCreationTimestamp="2025-11-28 08:52:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 08:52:25.294953352 +0000 UTC m=+7199.673018463" watchObservedRunningTime="2025-11-28 08:52:25.296677375 +0000 UTC m=+7199.674742486" Nov 28 08:52:25 crc kubenswrapper[4946]: I1128 08:52:25.325866 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-5268-account-create-update-nkrh2" podStartSLOduration=2.325850317 podStartE2EDuration="2.325850317s" podCreationTimestamp="2025-11-28 08:52:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 08:52:25.321763606 +0000 UTC m=+7199.699828717" watchObservedRunningTime="2025-11-28 08:52:25.325850317 +0000 UTC m=+7199.703915428" Nov 28 08:52:26 crc kubenswrapper[4946]: I1128 08:52:26.288172 4946 generic.go:334] "Generic (PLEG): container finished" podID="9dbfdfa4-2fc0-42de-bee6-5657463640c6" containerID="c9e18741ca662196f97cc7cadf00628471e1f36f0c701a8c5526ce71a63523b2" exitCode=0 Nov 28 08:52:26 crc kubenswrapper[4946]: I1128 08:52:26.288307 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-ztrmk" event={"ID":"9dbfdfa4-2fc0-42de-bee6-5657463640c6","Type":"ContainerDied","Data":"c9e18741ca662196f97cc7cadf00628471e1f36f0c701a8c5526ce71a63523b2"} Nov 28 08:52:26 crc kubenswrapper[4946]: I1128 08:52:26.291880 4946 generic.go:334] "Generic (PLEG): container finished" 
podID="df17e46e-2f32-4c8d-9aa7-cba30670c85f" containerID="b25e6a0e37465a270b99cc767412f5d71891fc08a3ed17e43ce822fa6e38699c" exitCode=0 Nov 28 08:52:26 crc kubenswrapper[4946]: I1128 08:52:26.291955 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5268-account-create-update-nkrh2" event={"ID":"df17e46e-2f32-4c8d-9aa7-cba30670c85f","Type":"ContainerDied","Data":"b25e6a0e37465a270b99cc767412f5d71891fc08a3ed17e43ce822fa6e38699c"} Nov 28 08:52:26 crc kubenswrapper[4946]: I1128 08:52:26.315816 4946 generic.go:334] "Generic (PLEG): container finished" podID="574aecae-6184-4f4c-abed-81ec30f03ac0" containerID="cf02778d1f39a71494c7dc1c914b706e50d6c137d07c843c16caf662ecdfdeb7" exitCode=0 Nov 28 08:52:26 crc kubenswrapper[4946]: I1128 08:52:26.316044 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-66e9-account-create-update-bjp9j" event={"ID":"574aecae-6184-4f4c-abed-81ec30f03ac0","Type":"ContainerDied","Data":"cf02778d1f39a71494c7dc1c914b706e50d6c137d07c843c16caf662ecdfdeb7"} Nov 28 08:52:26 crc kubenswrapper[4946]: I1128 08:52:26.323304 4946 generic.go:334] "Generic (PLEG): container finished" podID="7d6ecba1-ee43-4308-ba77-ea66427ac798" containerID="06270bef99c40dec19eab6dc72c4ad9ad5a33f12cbd902dd086ce31d00d4f23b" exitCode=0 Nov 28 08:52:26 crc kubenswrapper[4946]: I1128 08:52:26.323518 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-263b-account-create-update-lr5fj" event={"ID":"7d6ecba1-ee43-4308-ba77-ea66427ac798","Type":"ContainerDied","Data":"06270bef99c40dec19eab6dc72c4ad9ad5a33f12cbd902dd086ce31d00d4f23b"} Nov 28 08:52:26 crc kubenswrapper[4946]: I1128 08:52:26.752963 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-tz944" Nov 28 08:52:26 crc kubenswrapper[4946]: I1128 08:52:26.757920 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-rzjfc" Nov 28 08:52:26 crc kubenswrapper[4946]: I1128 08:52:26.879158 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqvpb\" (UniqueName: \"kubernetes.io/projected/9d510bda-fc5e-412b-be7a-e7f6154da6ff-kube-api-access-qqvpb\") pod \"9d510bda-fc5e-412b-be7a-e7f6154da6ff\" (UID: \"9d510bda-fc5e-412b-be7a-e7f6154da6ff\") " Nov 28 08:52:26 crc kubenswrapper[4946]: I1128 08:52:26.879216 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d510bda-fc5e-412b-be7a-e7f6154da6ff-operator-scripts\") pod \"9d510bda-fc5e-412b-be7a-e7f6154da6ff\" (UID: \"9d510bda-fc5e-412b-be7a-e7f6154da6ff\") " Nov 28 08:52:26 crc kubenswrapper[4946]: I1128 08:52:26.879288 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60ea7f40-c928-4e4d-afe5-ab0b4a76e918-operator-scripts\") pod \"60ea7f40-c928-4e4d-afe5-ab0b4a76e918\" (UID: \"60ea7f40-c928-4e4d-afe5-ab0b4a76e918\") " Nov 28 08:52:26 crc kubenswrapper[4946]: I1128 08:52:26.879391 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vjtw\" (UniqueName: \"kubernetes.io/projected/60ea7f40-c928-4e4d-afe5-ab0b4a76e918-kube-api-access-6vjtw\") pod \"60ea7f40-c928-4e4d-afe5-ab0b4a76e918\" (UID: \"60ea7f40-c928-4e4d-afe5-ab0b4a76e918\") " Nov 28 08:52:26 crc kubenswrapper[4946]: I1128 08:52:26.880051 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60ea7f40-c928-4e4d-afe5-ab0b4a76e918-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "60ea7f40-c928-4e4d-afe5-ab0b4a76e918" (UID: "60ea7f40-c928-4e4d-afe5-ab0b4a76e918"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:52:26 crc kubenswrapper[4946]: I1128 08:52:26.880663 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d510bda-fc5e-412b-be7a-e7f6154da6ff-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9d510bda-fc5e-412b-be7a-e7f6154da6ff" (UID: "9d510bda-fc5e-412b-be7a-e7f6154da6ff"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:52:26 crc kubenswrapper[4946]: I1128 08:52:26.889815 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60ea7f40-c928-4e4d-afe5-ab0b4a76e918-kube-api-access-6vjtw" (OuterVolumeSpecName: "kube-api-access-6vjtw") pod "60ea7f40-c928-4e4d-afe5-ab0b4a76e918" (UID: "60ea7f40-c928-4e4d-afe5-ab0b4a76e918"). InnerVolumeSpecName "kube-api-access-6vjtw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:52:26 crc kubenswrapper[4946]: I1128 08:52:26.889850 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d510bda-fc5e-412b-be7a-e7f6154da6ff-kube-api-access-qqvpb" (OuterVolumeSpecName: "kube-api-access-qqvpb") pod "9d510bda-fc5e-412b-be7a-e7f6154da6ff" (UID: "9d510bda-fc5e-412b-be7a-e7f6154da6ff"). InnerVolumeSpecName "kube-api-access-qqvpb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:52:26 crc kubenswrapper[4946]: I1128 08:52:26.982177 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vjtw\" (UniqueName: \"kubernetes.io/projected/60ea7f40-c928-4e4d-afe5-ab0b4a76e918-kube-api-access-6vjtw\") on node \"crc\" DevicePath \"\"" Nov 28 08:52:26 crc kubenswrapper[4946]: I1128 08:52:26.982256 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqvpb\" (UniqueName: \"kubernetes.io/projected/9d510bda-fc5e-412b-be7a-e7f6154da6ff-kube-api-access-qqvpb\") on node \"crc\" DevicePath \"\"" Nov 28 08:52:26 crc kubenswrapper[4946]: I1128 08:52:26.982277 4946 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d510bda-fc5e-412b-be7a-e7f6154da6ff-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 08:52:26 crc kubenswrapper[4946]: I1128 08:52:26.982295 4946 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60ea7f40-c928-4e4d-afe5-ab0b4a76e918-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 08:52:27 crc kubenswrapper[4946]: I1128 08:52:27.337220 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-rzjfc" event={"ID":"9d510bda-fc5e-412b-be7a-e7f6154da6ff","Type":"ContainerDied","Data":"79d48c6a2b9f3b598544ec69ccca5552187cdbf2cf777c4de7f1490e98c17f94"} Nov 28 08:52:27 crc kubenswrapper[4946]: I1128 08:52:27.337300 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79d48c6a2b9f3b598544ec69ccca5552187cdbf2cf777c4de7f1490e98c17f94" Nov 28 08:52:27 crc kubenswrapper[4946]: I1128 08:52:27.337259 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-rzjfc" Nov 28 08:52:27 crc kubenswrapper[4946]: I1128 08:52:27.341511 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-tz944" event={"ID":"60ea7f40-c928-4e4d-afe5-ab0b4a76e918","Type":"ContainerDied","Data":"ecf79b4a3c63c9cff25f1d3b7dc6277a7ecdea20a1aaa3360ca42c478af0521d"} Nov 28 08:52:27 crc kubenswrapper[4946]: I1128 08:52:27.341568 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecf79b4a3c63c9cff25f1d3b7dc6277a7ecdea20a1aaa3360ca42c478af0521d" Nov 28 08:52:27 crc kubenswrapper[4946]: I1128 08:52:27.341731 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-tz944" Nov 28 08:52:27 crc kubenswrapper[4946]: I1128 08:52:27.878047 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-263b-account-create-update-lr5fj" Nov 28 08:52:27 crc kubenswrapper[4946]: I1128 08:52:27.916661 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d6ecba1-ee43-4308-ba77-ea66427ac798-operator-scripts\") pod \"7d6ecba1-ee43-4308-ba77-ea66427ac798\" (UID: \"7d6ecba1-ee43-4308-ba77-ea66427ac798\") " Nov 28 08:52:27 crc kubenswrapper[4946]: I1128 08:52:27.918238 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnf7b\" (UniqueName: \"kubernetes.io/projected/7d6ecba1-ee43-4308-ba77-ea66427ac798-kube-api-access-dnf7b\") pod \"7d6ecba1-ee43-4308-ba77-ea66427ac798\" (UID: \"7d6ecba1-ee43-4308-ba77-ea66427ac798\") " Nov 28 08:52:27 crc kubenswrapper[4946]: I1128 08:52:27.919017 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d6ecba1-ee43-4308-ba77-ea66427ac798-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7d6ecba1-ee43-4308-ba77-ea66427ac798" (UID: "7d6ecba1-ee43-4308-ba77-ea66427ac798"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:52:27 crc kubenswrapper[4946]: I1128 08:52:27.949672 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d6ecba1-ee43-4308-ba77-ea66427ac798-kube-api-access-dnf7b" (OuterVolumeSpecName: "kube-api-access-dnf7b") pod "7d6ecba1-ee43-4308-ba77-ea66427ac798" (UID: "7d6ecba1-ee43-4308-ba77-ea66427ac798"). InnerVolumeSpecName "kube-api-access-dnf7b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:52:28 crc kubenswrapper[4946]: I1128 08:52:28.028505 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnf7b\" (UniqueName: \"kubernetes.io/projected/7d6ecba1-ee43-4308-ba77-ea66427ac798-kube-api-access-dnf7b\") on node \"crc\" DevicePath \"\"" Nov 28 08:52:28 crc kubenswrapper[4946]: I1128 08:52:28.028709 4946 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d6ecba1-ee43-4308-ba77-ea66427ac798-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 08:52:28 crc kubenswrapper[4946]: I1128 08:52:28.108215 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-ztrmk" Nov 28 08:52:28 crc kubenswrapper[4946]: I1128 08:52:28.117279 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-66e9-account-create-update-bjp9j" Nov 28 08:52:28 crc kubenswrapper[4946]: I1128 08:52:28.125751 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-5268-account-create-update-nkrh2" Nov 28 08:52:28 crc kubenswrapper[4946]: I1128 08:52:28.129791 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/574aecae-6184-4f4c-abed-81ec30f03ac0-operator-scripts\") pod \"574aecae-6184-4f4c-abed-81ec30f03ac0\" (UID: \"574aecae-6184-4f4c-abed-81ec30f03ac0\") " Nov 28 08:52:28 crc kubenswrapper[4946]: I1128 08:52:28.129897 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9dbfdfa4-2fc0-42de-bee6-5657463640c6-operator-scripts\") pod \"9dbfdfa4-2fc0-42de-bee6-5657463640c6\" (UID: \"9dbfdfa4-2fc0-42de-bee6-5657463640c6\") " Nov 28 08:52:28 crc kubenswrapper[4946]: I1128 08:52:28.129933 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxxns\" (UniqueName: \"kubernetes.io/projected/574aecae-6184-4f4c-abed-81ec30f03ac0-kube-api-access-pxxns\") pod \"574aecae-6184-4f4c-abed-81ec30f03ac0\" (UID: \"574aecae-6184-4f4c-abed-81ec30f03ac0\") " Nov 28 08:52:28 crc kubenswrapper[4946]: I1128 08:52:28.130004 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvh9l\" (UniqueName: \"kubernetes.io/projected/9dbfdfa4-2fc0-42de-bee6-5657463640c6-kube-api-access-bvh9l\") pod \"9dbfdfa4-2fc0-42de-bee6-5657463640c6\" (UID: \"9dbfdfa4-2fc0-42de-bee6-5657463640c6\") " Nov 28 08:52:28 crc kubenswrapper[4946]: I1128 08:52:28.130263 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/574aecae-6184-4f4c-abed-81ec30f03ac0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "574aecae-6184-4f4c-abed-81ec30f03ac0" (UID: "574aecae-6184-4f4c-abed-81ec30f03ac0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:52:28 crc kubenswrapper[4946]: I1128 08:52:28.130439 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9dbfdfa4-2fc0-42de-bee6-5657463640c6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9dbfdfa4-2fc0-42de-bee6-5657463640c6" (UID: "9dbfdfa4-2fc0-42de-bee6-5657463640c6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:52:28 crc kubenswrapper[4946]: I1128 08:52:28.130550 4946 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/574aecae-6184-4f4c-abed-81ec30f03ac0-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 08:52:28 crc kubenswrapper[4946]: I1128 08:52:28.146672 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dbfdfa4-2fc0-42de-bee6-5657463640c6-kube-api-access-bvh9l" (OuterVolumeSpecName: "kube-api-access-bvh9l") pod "9dbfdfa4-2fc0-42de-bee6-5657463640c6" (UID: "9dbfdfa4-2fc0-42de-bee6-5657463640c6"). InnerVolumeSpecName "kube-api-access-bvh9l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:52:28 crc kubenswrapper[4946]: I1128 08:52:28.146799 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/574aecae-6184-4f4c-abed-81ec30f03ac0-kube-api-access-pxxns" (OuterVolumeSpecName: "kube-api-access-pxxns") pod "574aecae-6184-4f4c-abed-81ec30f03ac0" (UID: "574aecae-6184-4f4c-abed-81ec30f03ac0"). 
InnerVolumeSpecName "kube-api-access-pxxns". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:52:28 crc kubenswrapper[4946]: I1128 08:52:28.231563 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q82c8\" (UniqueName: \"kubernetes.io/projected/df17e46e-2f32-4c8d-9aa7-cba30670c85f-kube-api-access-q82c8\") pod \"df17e46e-2f32-4c8d-9aa7-cba30670c85f\" (UID: \"df17e46e-2f32-4c8d-9aa7-cba30670c85f\") " Nov 28 08:52:28 crc kubenswrapper[4946]: I1128 08:52:28.231882 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df17e46e-2f32-4c8d-9aa7-cba30670c85f-operator-scripts\") pod \"df17e46e-2f32-4c8d-9aa7-cba30670c85f\" (UID: \"df17e46e-2f32-4c8d-9aa7-cba30670c85f\") " Nov 28 08:52:28 crc kubenswrapper[4946]: I1128 08:52:28.232454 4946 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9dbfdfa4-2fc0-42de-bee6-5657463640c6-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 08:52:28 crc kubenswrapper[4946]: I1128 08:52:28.232566 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxxns\" (UniqueName: \"kubernetes.io/projected/574aecae-6184-4f4c-abed-81ec30f03ac0-kube-api-access-pxxns\") on node \"crc\" DevicePath \"\"" Nov 28 08:52:28 crc kubenswrapper[4946]: I1128 08:52:28.232945 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvh9l\" (UniqueName: \"kubernetes.io/projected/9dbfdfa4-2fc0-42de-bee6-5657463640c6-kube-api-access-bvh9l\") on node \"crc\" DevicePath \"\"" Nov 28 08:52:28 crc kubenswrapper[4946]: I1128 08:52:28.233061 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df17e46e-2f32-4c8d-9aa7-cba30670c85f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "df17e46e-2f32-4c8d-9aa7-cba30670c85f" (UID: "df17e46e-2f32-4c8d-9aa7-cba30670c85f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:52:28 crc kubenswrapper[4946]: I1128 08:52:28.235003 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df17e46e-2f32-4c8d-9aa7-cba30670c85f-kube-api-access-q82c8" (OuterVolumeSpecName: "kube-api-access-q82c8") pod "df17e46e-2f32-4c8d-9aa7-cba30670c85f" (UID: "df17e46e-2f32-4c8d-9aa7-cba30670c85f"). InnerVolumeSpecName "kube-api-access-q82c8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:52:28 crc kubenswrapper[4946]: I1128 08:52:28.335388 4946 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df17e46e-2f32-4c8d-9aa7-cba30670c85f-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 08:52:28 crc kubenswrapper[4946]: I1128 08:52:28.335447 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q82c8\" (UniqueName: \"kubernetes.io/projected/df17e46e-2f32-4c8d-9aa7-cba30670c85f-kube-api-access-q82c8\") on node \"crc\" DevicePath \"\"" Nov 28 08:52:28 crc kubenswrapper[4946]: I1128 08:52:28.354329 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-ztrmk" event={"ID":"9dbfdfa4-2fc0-42de-bee6-5657463640c6","Type":"ContainerDied","Data":"5ca8068c36f30c2c0a629b9e0fecd92183c7f85bdb6793b49830d7d9acc588a5"} Nov 28 08:52:28 crc kubenswrapper[4946]: I1128 08:52:28.355509 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ca8068c36f30c2c0a629b9e0fecd92183c7f85bdb6793b49830d7d9acc588a5" Nov 28 08:52:28 crc kubenswrapper[4946]: I1128 08:52:28.354390 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-ztrmk" Nov 28 08:52:28 crc kubenswrapper[4946]: I1128 08:52:28.356908 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5268-account-create-update-nkrh2" event={"ID":"df17e46e-2f32-4c8d-9aa7-cba30670c85f","Type":"ContainerDied","Data":"02138abdb64b2212cf27612e629175161f4f8d3c5c53c833a03c2a3c1486100b"} Nov 28 08:52:28 crc kubenswrapper[4946]: I1128 08:52:28.356951 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02138abdb64b2212cf27612e629175161f4f8d3c5c53c833a03c2a3c1486100b" Nov 28 08:52:28 crc kubenswrapper[4946]: I1128 08:52:28.356992 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5268-account-create-update-nkrh2" Nov 28 08:52:28 crc kubenswrapper[4946]: I1128 08:52:28.360915 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-66e9-account-create-update-bjp9j" event={"ID":"574aecae-6184-4f4c-abed-81ec30f03ac0","Type":"ContainerDied","Data":"1906fa8c2fc01e5041ffc5563fece78be7c087ab33583e605e780ed08b55ac8d"} Nov 28 08:52:28 crc kubenswrapper[4946]: I1128 08:52:28.360959 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1906fa8c2fc01e5041ffc5563fece78be7c087ab33583e605e780ed08b55ac8d" Nov 28 08:52:28 crc kubenswrapper[4946]: I1128 08:52:28.361001 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-66e9-account-create-update-bjp9j" Nov 28 08:52:28 crc kubenswrapper[4946]: I1128 08:52:28.363253 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-263b-account-create-update-lr5fj" event={"ID":"7d6ecba1-ee43-4308-ba77-ea66427ac798","Type":"ContainerDied","Data":"43b2c3861d6bd0e3e6600e0e5c8af6e005af0b75662d17836e36b5cc77450456"} Nov 28 08:52:28 crc kubenswrapper[4946]: I1128 08:52:28.363302 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43b2c3861d6bd0e3e6600e0e5c8af6e005af0b75662d17836e36b5cc77450456" Nov 28 08:52:28 crc kubenswrapper[4946]: I1128 08:52:28.363395 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-263b-account-create-update-lr5fj" Nov 28 08:52:34 crc kubenswrapper[4946]: I1128 08:52:34.216075 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-m8sbg"] Nov 28 08:52:34 crc kubenswrapper[4946]: E1128 08:52:34.216884 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dbfdfa4-2fc0-42de-bee6-5657463640c6" containerName="mariadb-database-create" Nov 28 08:52:34 crc kubenswrapper[4946]: I1128 08:52:34.216899 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dbfdfa4-2fc0-42de-bee6-5657463640c6" containerName="mariadb-database-create" Nov 28 08:52:34 crc kubenswrapper[4946]: E1128 08:52:34.216920 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df17e46e-2f32-4c8d-9aa7-cba30670c85f" containerName="mariadb-account-create-update" Nov 28 08:52:34 crc kubenswrapper[4946]: I1128 08:52:34.216928 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="df17e46e-2f32-4c8d-9aa7-cba30670c85f" containerName="mariadb-account-create-update" Nov 28 08:52:34 crc kubenswrapper[4946]: E1128 08:52:34.216947 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d510bda-fc5e-412b-be7a-e7f6154da6ff" containerName="mariadb-database-create" Nov 28 08:52:34 crc kubenswrapper[4946]: I1128 08:52:34.216955 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d510bda-fc5e-412b-be7a-e7f6154da6ff" containerName="mariadb-database-create" Nov 28 08:52:34 crc kubenswrapper[4946]: E1128 08:52:34.216975 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60ea7f40-c928-4e4d-afe5-ab0b4a76e918" containerName="mariadb-database-create" Nov 28 08:52:34 crc kubenswrapper[4946]: I1128 08:52:34.216983 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="60ea7f40-c928-4e4d-afe5-ab0b4a76e918" containerName="mariadb-database-create" Nov 28 08:52:34 crc kubenswrapper[4946]: E1128 08:52:34.217003 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="574aecae-6184-4f4c-abed-81ec30f03ac0" containerName="mariadb-account-create-update" Nov 28 08:52:34 crc kubenswrapper[4946]: I1128 08:52:34.217010 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="574aecae-6184-4f4c-abed-81ec30f03ac0" containerName="mariadb-account-create-update" Nov 28 08:52:34 crc kubenswrapper[4946]: E1128 08:52:34.217028 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d6ecba1-ee43-4308-ba77-ea66427ac798" containerName="mariadb-account-create-update" Nov 28 08:52:34 crc kubenswrapper[4946]: I1128 08:52:34.217035 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d6ecba1-ee43-4308-ba77-ea66427ac798" containerName="mariadb-account-create-update" Nov 28 08:52:34 crc kubenswrapper[4946]: I1128 08:52:34.217212 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d510bda-fc5e-412b-be7a-e7f6154da6ff" containerName="mariadb-database-create" Nov 28 08:52:34 crc kubenswrapper[4946]: I1128 08:52:34.217236 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d6ecba1-ee43-4308-ba77-ea66427ac798" containerName="mariadb-account-create-update" Nov 28 08:52:34 crc kubenswrapper[4946]: I1128 08:52:34.217245 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="df17e46e-2f32-4c8d-9aa7-cba30670c85f" containerName="mariadb-account-create-update" Nov 28 08:52:34 crc kubenswrapper[4946]: I1128 08:52:34.217263 4946 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="574aecae-6184-4f4c-abed-81ec30f03ac0" containerName="mariadb-account-create-update" Nov 28 08:52:34 crc kubenswrapper[4946]: I1128 08:52:34.217281 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dbfdfa4-2fc0-42de-bee6-5657463640c6" containerName="mariadb-database-create" Nov 28 08:52:34 crc kubenswrapper[4946]: I1128 08:52:34.217294 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="60ea7f40-c928-4e4d-afe5-ab0b4a76e918" containerName="mariadb-database-create" Nov 28 08:52:34 crc kubenswrapper[4946]: I1128 08:52:34.217916 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-m8sbg" Nov 28 08:52:34 crc kubenswrapper[4946]: I1128 08:52:34.220005 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Nov 28 08:52:34 crc kubenswrapper[4946]: I1128 08:52:34.220987 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 28 08:52:34 crc kubenswrapper[4946]: I1128 08:52:34.221118 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-8qt4g" Nov 28 08:52:34 crc kubenswrapper[4946]: I1128 08:52:34.262981 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-m8sbg"] Nov 28 08:52:34 crc kubenswrapper[4946]: I1128 08:52:34.269072 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be2290f7-78ba-4e50-9e73-8732b2e2639c-config-data\") pod \"nova-cell0-conductor-db-sync-m8sbg\" (UID: \"be2290f7-78ba-4e50-9e73-8732b2e2639c\") " pod="openstack/nova-cell0-conductor-db-sync-m8sbg" Nov 28 08:52:34 crc kubenswrapper[4946]: I1128 08:52:34.269159 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dm7br\" (UniqueName: \"kubernetes.io/projected/be2290f7-78ba-4e50-9e73-8732b2e2639c-kube-api-access-dm7br\") pod \"nova-cell0-conductor-db-sync-m8sbg\" (UID: \"be2290f7-78ba-4e50-9e73-8732b2e2639c\") " pod="openstack/nova-cell0-conductor-db-sync-m8sbg" Nov 28 08:52:34 crc kubenswrapper[4946]: I1128 08:52:34.269334 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be2290f7-78ba-4e50-9e73-8732b2e2639c-scripts\") pod \"nova-cell0-conductor-db-sync-m8sbg\" (UID: \"be2290f7-78ba-4e50-9e73-8732b2e2639c\") " pod="openstack/nova-cell0-conductor-db-sync-m8sbg" Nov 28 08:52:34 crc kubenswrapper[4946]: I1128 08:52:34.269492 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be2290f7-78ba-4e50-9e73-8732b2e2639c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-m8sbg\" (UID: \"be2290f7-78ba-4e50-9e73-8732b2e2639c\") " pod="openstack/nova-cell0-conductor-db-sync-m8sbg" Nov 28 08:52:34 crc kubenswrapper[4946]: I1128 08:52:34.371376 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be2290f7-78ba-4e50-9e73-8732b2e2639c-scripts\") pod \"nova-cell0-conductor-db-sync-m8sbg\" (UID: \"be2290f7-78ba-4e50-9e73-8732b2e2639c\") " pod="openstack/nova-cell0-conductor-db-sync-m8sbg" Nov 28 08:52:34 crc kubenswrapper[4946]: I1128 08:52:34.371564 4946 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be2290f7-78ba-4e50-9e73-8732b2e2639c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-m8sbg\" (UID: \"be2290f7-78ba-4e50-9e73-8732b2e2639c\") " pod="openstack/nova-cell0-conductor-db-sync-m8sbg" Nov 28 08:52:34 crc kubenswrapper[4946]: I1128 08:52:34.371638 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be2290f7-78ba-4e50-9e73-8732b2e2639c-config-data\") pod \"nova-cell0-conductor-db-sync-m8sbg\" (UID: \"be2290f7-78ba-4e50-9e73-8732b2e2639c\") " pod="openstack/nova-cell0-conductor-db-sync-m8sbg" Nov 28 08:52:34 crc kubenswrapper[4946]: I1128 08:52:34.371695 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dm7br\" (UniqueName: \"kubernetes.io/projected/be2290f7-78ba-4e50-9e73-8732b2e2639c-kube-api-access-dm7br\") pod \"nova-cell0-conductor-db-sync-m8sbg\" (UID: \"be2290f7-78ba-4e50-9e73-8732b2e2639c\") " pod="openstack/nova-cell0-conductor-db-sync-m8sbg" Nov 28 08:52:34 crc kubenswrapper[4946]: I1128 08:52:34.376982 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be2290f7-78ba-4e50-9e73-8732b2e2639c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-m8sbg\" (UID: \"be2290f7-78ba-4e50-9e73-8732b2e2639c\") " pod="openstack/nova-cell0-conductor-db-sync-m8sbg" Nov 28 08:52:34 crc kubenswrapper[4946]: I1128 08:52:34.378945 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be2290f7-78ba-4e50-9e73-8732b2e2639c-scripts\") pod \"nova-cell0-conductor-db-sync-m8sbg\" (UID: \"be2290f7-78ba-4e50-9e73-8732b2e2639c\") " pod="openstack/nova-cell0-conductor-db-sync-m8sbg" Nov 28 08:52:34 crc kubenswrapper[4946]: I1128 08:52:34.383956 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be2290f7-78ba-4e50-9e73-8732b2e2639c-config-data\") pod \"nova-cell0-conductor-db-sync-m8sbg\" (UID: \"be2290f7-78ba-4e50-9e73-8732b2e2639c\") " pod="openstack/nova-cell0-conductor-db-sync-m8sbg" Nov 28 08:52:34 crc kubenswrapper[4946]: I1128 08:52:34.394678 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dm7br\" (UniqueName: \"kubernetes.io/projected/be2290f7-78ba-4e50-9e73-8732b2e2639c-kube-api-access-dm7br\") pod \"nova-cell0-conductor-db-sync-m8sbg\" (UID: \"be2290f7-78ba-4e50-9e73-8732b2e2639c\") " pod="openstack/nova-cell0-conductor-db-sync-m8sbg" Nov 28 08:52:34 crc kubenswrapper[4946]: I1128 08:52:34.574422 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-m8sbg" Nov 28 08:52:34 crc kubenswrapper[4946]: I1128 08:52:34.852577 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-m8sbg"] Nov 28 08:52:35 crc kubenswrapper[4946]: I1128 08:52:35.455628 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-m8sbg" event={"ID":"be2290f7-78ba-4e50-9e73-8732b2e2639c","Type":"ContainerStarted","Data":"6c48b49f5da9feff8a07862e22d32a9f2ec5222399933a628261e42a970daf2b"} Nov 28 08:52:38 crc kubenswrapper[4946]: I1128 08:52:38.591292 4946 scope.go:117] "RemoveContainer" containerID="538b5221263527cec4c2ef9f4ec8ab4e45a7f9afb89feed17ffc1295c9c13c28" Nov 28 08:52:44 crc kubenswrapper[4946]: I1128 08:52:44.548425 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-m8sbg" event={"ID":"be2290f7-78ba-4e50-9e73-8732b2e2639c","Type":"ContainerStarted","Data":"9b96b891ac92b8dbe02be64f74d1319b150ab3f2328c605b8b6b3765faa93a88"} Nov 28 08:52:44 crc kubenswrapper[4946]: I1128 08:52:44.593932 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-m8sbg" podStartSLOduration=1.378211126 podStartE2EDuration="10.593898225s" podCreationTimestamp="2025-11-28 08:52:34 +0000 UTC" firstStartedPulling="2025-11-28 08:52:34.856911736 +0000 UTC m=+7209.234976847" lastFinishedPulling="2025-11-28 08:52:44.072598795 +0000 UTC m=+7218.450663946" observedRunningTime="2025-11-28 08:52:44.591239609 +0000 UTC m=+7218.969304750" watchObservedRunningTime="2025-11-28 08:52:44.593898225 +0000 UTC m=+7218.971963406" Nov 28 08:52:49 crc kubenswrapper[4946]: I1128 08:52:49.603785 4946 generic.go:334] "Generic (PLEG): container finished" podID="be2290f7-78ba-4e50-9e73-8732b2e2639c" containerID="9b96b891ac92b8dbe02be64f74d1319b150ab3f2328c605b8b6b3765faa93a88" exitCode=0 Nov 28 08:52:49 crc kubenswrapper[4946]: I1128 08:52:49.603914 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-m8sbg" event={"ID":"be2290f7-78ba-4e50-9e73-8732b2e2639c","Type":"ContainerDied","Data":"9b96b891ac92b8dbe02be64f74d1319b150ab3f2328c605b8b6b3765faa93a88"} Nov 28 08:52:51 crc kubenswrapper[4946]: I1128 08:52:51.028268 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-m8sbg" Nov 28 08:52:51 crc kubenswrapper[4946]: I1128 08:52:51.102647 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dm7br\" (UniqueName: \"kubernetes.io/projected/be2290f7-78ba-4e50-9e73-8732b2e2639c-kube-api-access-dm7br\") pod \"be2290f7-78ba-4e50-9e73-8732b2e2639c\" (UID: \"be2290f7-78ba-4e50-9e73-8732b2e2639c\") " Nov 28 08:52:51 crc kubenswrapper[4946]: I1128 08:52:51.102785 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be2290f7-78ba-4e50-9e73-8732b2e2639c-scripts\") pod \"be2290f7-78ba-4e50-9e73-8732b2e2639c\" (UID: \"be2290f7-78ba-4e50-9e73-8732b2e2639c\") " Nov 28 08:52:51 crc kubenswrapper[4946]: I1128 08:52:51.102960 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be2290f7-78ba-4e50-9e73-8732b2e2639c-combined-ca-bundle\") pod \"be2290f7-78ba-4e50-9e73-8732b2e2639c\" (UID: \"be2290f7-78ba-4e50-9e73-8732b2e2639c\") " Nov 28 08:52:51 crc kubenswrapper[4946]: I1128 08:52:51.103013 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be2290f7-78ba-4e50-9e73-8732b2e2639c-config-data\") pod \"be2290f7-78ba-4e50-9e73-8732b2e2639c\" (UID: \"be2290f7-78ba-4e50-9e73-8732b2e2639c\") " Nov 28 08:52:51 crc kubenswrapper[4946]: I1128 08:52:51.109290 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be2290f7-78ba-4e50-9e73-8732b2e2639c-kube-api-access-dm7br" (OuterVolumeSpecName: "kube-api-access-dm7br") pod "be2290f7-78ba-4e50-9e73-8732b2e2639c" (UID: "be2290f7-78ba-4e50-9e73-8732b2e2639c"). InnerVolumeSpecName "kube-api-access-dm7br". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:52:51 crc kubenswrapper[4946]: I1128 08:52:51.109309 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be2290f7-78ba-4e50-9e73-8732b2e2639c-scripts" (OuterVolumeSpecName: "scripts") pod "be2290f7-78ba-4e50-9e73-8732b2e2639c" (UID: "be2290f7-78ba-4e50-9e73-8732b2e2639c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:52:51 crc kubenswrapper[4946]: I1128 08:52:51.137072 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be2290f7-78ba-4e50-9e73-8732b2e2639c-config-data" (OuterVolumeSpecName: "config-data") pod "be2290f7-78ba-4e50-9e73-8732b2e2639c" (UID: "be2290f7-78ba-4e50-9e73-8732b2e2639c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:52:51 crc kubenswrapper[4946]: I1128 08:52:51.148203 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be2290f7-78ba-4e50-9e73-8732b2e2639c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "be2290f7-78ba-4e50-9e73-8732b2e2639c" (UID: "be2290f7-78ba-4e50-9e73-8732b2e2639c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:52:51 crc kubenswrapper[4946]: I1128 08:52:51.204726 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be2290f7-78ba-4e50-9e73-8732b2e2639c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 08:52:51 crc kubenswrapper[4946]: I1128 08:52:51.204763 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be2290f7-78ba-4e50-9e73-8732b2e2639c-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 08:52:51 crc kubenswrapper[4946]: I1128 08:52:51.204778 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dm7br\" (UniqueName: \"kubernetes.io/projected/be2290f7-78ba-4e50-9e73-8732b2e2639c-kube-api-access-dm7br\") on node \"crc\" DevicePath \"\"" Nov 28 08:52:51 crc kubenswrapper[4946]: I1128 08:52:51.204791 4946 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be2290f7-78ba-4e50-9e73-8732b2e2639c-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 08:52:51 crc kubenswrapper[4946]: I1128 08:52:51.625562 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-m8sbg" event={"ID":"be2290f7-78ba-4e50-9e73-8732b2e2639c","Type":"ContainerDied","Data":"6c48b49f5da9feff8a07862e22d32a9f2ec5222399933a628261e42a970daf2b"} Nov 28 08:52:51 crc kubenswrapper[4946]: I1128 08:52:51.625606 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c48b49f5da9feff8a07862e22d32a9f2ec5222399933a628261e42a970daf2b" Nov 28 08:52:51 crc kubenswrapper[4946]: I1128 08:52:51.625661 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-m8sbg" Nov 28 08:52:51 crc kubenswrapper[4946]: I1128 08:52:51.713546 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 28 08:52:51 crc kubenswrapper[4946]: E1128 08:52:51.714361 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be2290f7-78ba-4e50-9e73-8732b2e2639c" containerName="nova-cell0-conductor-db-sync" Nov 28 08:52:51 crc kubenswrapper[4946]: I1128 08:52:51.714518 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="be2290f7-78ba-4e50-9e73-8732b2e2639c" containerName="nova-cell0-conductor-db-sync" Nov 28 08:52:51 crc kubenswrapper[4946]: I1128 08:52:51.714889 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="be2290f7-78ba-4e50-9e73-8732b2e2639c" containerName="nova-cell0-conductor-db-sync" Nov 28 08:52:51 crc kubenswrapper[4946]: I1128 08:52:51.715811 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 28 08:52:51 crc kubenswrapper[4946]: I1128 08:52:51.719249 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 28 08:52:51 crc kubenswrapper[4946]: I1128 08:52:51.719560 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-8qt4g" Nov 28 08:52:51 crc kubenswrapper[4946]: I1128 08:52:51.726609 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 28 08:52:51 crc kubenswrapper[4946]: I1128 08:52:51.817431 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg5qx\" (UniqueName: \"kubernetes.io/projected/95bd6a55-27ab-4832-974d-20e84433add3-kube-api-access-rg5qx\") pod \"nova-cell0-conductor-0\" (UID: \"95bd6a55-27ab-4832-974d-20e84433add3\") " pod="openstack/nova-cell0-conductor-0" Nov 28 08:52:51 crc kubenswrapper[4946]: I1128 08:52:51.817664 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95bd6a55-27ab-4832-974d-20e84433add3-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"95bd6a55-27ab-4832-974d-20e84433add3\") " pod="openstack/nova-cell0-conductor-0" Nov 28 08:52:51 crc kubenswrapper[4946]: I1128 08:52:51.817706 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95bd6a55-27ab-4832-974d-20e84433add3-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"95bd6a55-27ab-4832-974d-20e84433add3\") " pod="openstack/nova-cell0-conductor-0" Nov 28 08:52:51 crc kubenswrapper[4946]: I1128 08:52:51.920093 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95bd6a55-27ab-4832-974d-20e84433add3-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"95bd6a55-27ab-4832-974d-20e84433add3\") " pod="openstack/nova-cell0-conductor-0" Nov 28 08:52:51 crc kubenswrapper[4946]: I1128 08:52:51.920154 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95bd6a55-27ab-4832-974d-20e84433add3-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"95bd6a55-27ab-4832-974d-20e84433add3\") " pod="openstack/nova-cell0-conductor-0" Nov 28 08:52:51 crc kubenswrapper[4946]: I1128 08:52:51.920273 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rg5qx\" (UniqueName: \"kubernetes.io/projected/95bd6a55-27ab-4832-974d-20e84433add3-kube-api-access-rg5qx\") pod \"nova-cell0-conductor-0\" (UID: \"95bd6a55-27ab-4832-974d-20e84433add3\") " pod="openstack/nova-cell0-conductor-0" Nov 28 08:52:51 crc kubenswrapper[4946]: I1128 08:52:51.925922 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95bd6a55-27ab-4832-974d-20e84433add3-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"95bd6a55-27ab-4832-974d-20e84433add3\") " pod="openstack/nova-cell0-conductor-0" Nov 28 08:52:51 crc kubenswrapper[4946]: I1128 08:52:51.926015 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95bd6a55-27ab-4832-974d-20e84433add3-config-data\") pod \"nova-cell0-conductor-0\" 
(UID: \"95bd6a55-27ab-4832-974d-20e84433add3\") " pod="openstack/nova-cell0-conductor-0" Nov 28 08:52:51 crc kubenswrapper[4946]: I1128 08:52:51.937592 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg5qx\" (UniqueName: \"kubernetes.io/projected/95bd6a55-27ab-4832-974d-20e84433add3-kube-api-access-rg5qx\") pod \"nova-cell0-conductor-0\" (UID: \"95bd6a55-27ab-4832-974d-20e84433add3\") " pod="openstack/nova-cell0-conductor-0" Nov 28 08:52:52 crc kubenswrapper[4946]: I1128 08:52:52.049506 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 28 08:52:52 crc kubenswrapper[4946]: I1128 08:52:52.482071 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 28 08:52:52 crc kubenswrapper[4946]: I1128 08:52:52.634409 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"95bd6a55-27ab-4832-974d-20e84433add3","Type":"ContainerStarted","Data":"76d9aa500fcc544b24c152a13a19efb9958fbc71585c5e4e59537b0f81733fe4"} Nov 28 08:52:53 crc kubenswrapper[4946]: I1128 08:52:53.646617 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"95bd6a55-27ab-4832-974d-20e84433add3","Type":"ContainerStarted","Data":"a068932e83858cfa3df73cfe29e939011a97ad3c4f3466fc554b218170927fc6"} Nov 28 08:52:53 crc kubenswrapper[4946]: I1128 08:52:53.646993 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 28 08:52:53 crc kubenswrapper[4946]: I1128 08:52:53.686074 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.686036623 podStartE2EDuration="2.686036623s" podCreationTimestamp="2025-11-28 08:52:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 08:52:53.677207744 +0000 UTC m=+7228.055272895" watchObservedRunningTime="2025-11-28 08:52:53.686036623 +0000 UTC m=+7228.064101764" Nov 28 08:52:57 crc kubenswrapper[4946]: I1128 08:52:57.078528 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 28 08:52:57 crc kubenswrapper[4946]: I1128 08:52:57.549600 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-qjtsc"] Nov 28 08:52:57 crc kubenswrapper[4946]: I1128 08:52:57.550677 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qjtsc" Nov 28 08:52:57 crc kubenswrapper[4946]: I1128 08:52:57.552585 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Nov 28 08:52:57 crc kubenswrapper[4946]: I1128 08:52:57.552807 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Nov 28 08:52:57 crc kubenswrapper[4946]: I1128 08:52:57.561642 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-qjtsc"] Nov 28 08:52:57 crc kubenswrapper[4946]: I1128 08:52:57.707924 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/630068c6-dca8-4f28-b680-9c730c477328-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-qjtsc\" (UID: \"630068c6-dca8-4f28-b680-9c730c477328\") " pod="openstack/nova-cell0-cell-mapping-qjtsc" Nov 28 08:52:57 crc kubenswrapper[4946]: I1128 08:52:57.708503 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/630068c6-dca8-4f28-b680-9c730c477328-config-data\") pod \"nova-cell0-cell-mapping-qjtsc\" (UID: \"630068c6-dca8-4f28-b680-9c730c477328\") " pod="openstack/nova-cell0-cell-mapping-qjtsc" Nov 28 08:52:57 crc kubenswrapper[4946]: I1128 08:52:57.708836 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/630068c6-dca8-4f28-b680-9c730c477328-scripts\") pod \"nova-cell0-cell-mapping-qjtsc\" (UID: \"630068c6-dca8-4f28-b680-9c730c477328\") " pod="openstack/nova-cell0-cell-mapping-qjtsc" Nov 28 08:52:57 crc kubenswrapper[4946]: I1128 08:52:57.709106 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlcxh\" (UniqueName: \"kubernetes.io/projected/630068c6-dca8-4f28-b680-9c730c477328-kube-api-access-nlcxh\") pod \"nova-cell0-cell-mapping-qjtsc\" (UID: \"630068c6-dca8-4f28-b680-9c730c477328\") " pod="openstack/nova-cell0-cell-mapping-qjtsc" Nov 28 08:52:57 crc kubenswrapper[4946]: I1128 08:52:57.754638 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 28 08:52:57 crc kubenswrapper[4946]: I1128 08:52:57.759824 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 08:52:57 crc kubenswrapper[4946]: I1128 08:52:57.770942 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 28 08:52:57 crc kubenswrapper[4946]: I1128 08:52:57.777866 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 08:52:57 crc kubenswrapper[4946]: I1128 08:52:57.791454 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 08:52:57 crc kubenswrapper[4946]: I1128 08:52:57.793804 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 08:52:57 crc kubenswrapper[4946]: I1128 08:52:57.799876 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 28 08:52:57 crc kubenswrapper[4946]: I1128 08:52:57.817085 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/630068c6-dca8-4f28-b680-9c730c477328-scripts\") pod \"nova-cell0-cell-mapping-qjtsc\" (UID: \"630068c6-dca8-4f28-b680-9c730c477328\") " pod="openstack/nova-cell0-cell-mapping-qjtsc" Nov 28 08:52:57 crc kubenswrapper[4946]: I1128 08:52:57.817151 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlcxh\" (UniqueName: \"kubernetes.io/projected/630068c6-dca8-4f28-b680-9c730c477328-kube-api-access-nlcxh\") pod \"nova-cell0-cell-mapping-qjtsc\" (UID: \"630068c6-dca8-4f28-b680-9c730c477328\") " pod="openstack/nova-cell0-cell-mapping-qjtsc" Nov 28 08:52:57 crc kubenswrapper[4946]: I1128 08:52:57.817214 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/630068c6-dca8-4f28-b680-9c730c477328-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-qjtsc\" (UID: \"630068c6-dca8-4f28-b680-9c730c477328\") " pod="openstack/nova-cell0-cell-mapping-qjtsc" Nov 28 08:52:57 crc kubenswrapper[4946]: I1128 08:52:57.817236 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/630068c6-dca8-4f28-b680-9c730c477328-config-data\") pod \"nova-cell0-cell-mapping-qjtsc\" (UID: \"630068c6-dca8-4f28-b680-9c730c477328\") " pod="openstack/nova-cell0-cell-mapping-qjtsc" Nov 28 08:52:57 crc kubenswrapper[4946]: I1128 08:52:57.826155 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/630068c6-dca8-4f28-b680-9c730c477328-scripts\") pod \"nova-cell0-cell-mapping-qjtsc\" (UID: \"630068c6-dca8-4f28-b680-9c730c477328\") " pod="openstack/nova-cell0-cell-mapping-qjtsc" Nov 28 08:52:57 crc kubenswrapper[4946]: I1128 08:52:57.826777 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/630068c6-dca8-4f28-b680-9c730c477328-config-data\") pod \"nova-cell0-cell-mapping-qjtsc\" (UID: \"630068c6-dca8-4f28-b680-9c730c477328\") " pod="openstack/nova-cell0-cell-mapping-qjtsc" Nov 28 08:52:57 crc kubenswrapper[4946]: I1128 08:52:57.833212 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/630068c6-dca8-4f28-b680-9c730c477328-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-qjtsc\" (UID: \"630068c6-dca8-4f28-b680-9c730c477328\") " pod="openstack/nova-cell0-cell-mapping-qjtsc" Nov 28 08:52:57 crc kubenswrapper[4946]: I1128 08:52:57.837617 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 08:52:57 crc kubenswrapper[4946]: I1128 08:52:57.840766 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlcxh\" (UniqueName: \"kubernetes.io/projected/630068c6-dca8-4f28-b680-9c730c477328-kube-api-access-nlcxh\") pod \"nova-cell0-cell-mapping-qjtsc\" (UID: \"630068c6-dca8-4f28-b680-9c730c477328\") " pod="openstack/nova-cell0-cell-mapping-qjtsc" Nov 28 08:52:57 crc kubenswrapper[4946]: I1128 08:52:57.868414 4946 util.go:30] "No sandbox 
Nov 28 08:52:57 crc kubenswrapper[4946]: I1128 08:52:57.878564 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Nov 28 08:52:57 crc kubenswrapper[4946]: I1128 08:52:57.880317 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Nov 28 08:52:57 crc kubenswrapper[4946]: I1128 08:52:57.893479 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Nov 28 08:52:57 crc kubenswrapper[4946]: I1128 08:52:57.919137 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8mlh\" (UniqueName: \"kubernetes.io/projected/0cac59fd-5161-47c4-a5aa-2a268962215a-kube-api-access-f8mlh\") pod \"nova-api-0\" (UID: \"0cac59fd-5161-47c4-a5aa-2a268962215a\") " pod="openstack/nova-api-0"
Nov 28 08:52:57 crc kubenswrapper[4946]: I1128 08:52:57.919189 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7c3bcb6-9ac5-4bae-b7a3-64e31022516d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e7c3bcb6-9ac5-4bae-b7a3-64e31022516d\") " pod="openstack/nova-scheduler-0"
Nov 28 08:52:57 crc kubenswrapper[4946]: I1128 08:52:57.925238 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Nov 28 08:52:57 crc kubenswrapper[4946]: I1128 08:52:57.927586 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cac59fd-5161-47c4-a5aa-2a268962215a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0cac59fd-5161-47c4-a5aa-2a268962215a\") " pod="openstack/nova-api-0"
Nov 28 08:52:57 crc kubenswrapper[4946]: I1128 08:52:57.927707 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxr5b\" (UniqueName: \"kubernetes.io/projected/e7c3bcb6-9ac5-4bae-b7a3-64e31022516d-kube-api-access-mxr5b\") pod \"nova-scheduler-0\" (UID: \"e7c3bcb6-9ac5-4bae-b7a3-64e31022516d\") " pod="openstack/nova-scheduler-0"
Nov 28 08:52:57 crc kubenswrapper[4946]: I1128 08:52:57.933708 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Nov 28 08:52:57 crc kubenswrapper[4946]: I1128 08:52:57.935412 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 28 08:52:57 crc kubenswrapper[4946]: I1128 08:52:57.942195 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Nov 28 08:52:57 crc kubenswrapper[4946]: I1128 08:52:57.942582 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Nov 28 08:52:57 crc kubenswrapper[4946]: I1128 08:52:57.946571 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0cac59fd-5161-47c4-a5aa-2a268962215a-logs\") pod \"nova-api-0\" (UID: \"0cac59fd-5161-47c4-a5aa-2a268962215a\") " pod="openstack/nova-api-0"
Nov 28 08:52:57 crc kubenswrapper[4946]: I1128 08:52:57.946613 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cac59fd-5161-47c4-a5aa-2a268962215a-config-data\") pod \"nova-api-0\" (UID: \"0cac59fd-5161-47c4-a5aa-2a268962215a\") " pod="openstack/nova-api-0"
Nov 28 08:52:57 crc kubenswrapper[4946]: I1128 08:52:57.946658 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7c3bcb6-9ac5-4bae-b7a3-64e31022516d-config-data\") pod \"nova-scheduler-0\" (UID: \"e7c3bcb6-9ac5-4bae-b7a3-64e31022516d\") " pod="openstack/nova-scheduler-0"
Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.015726 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b748879ff-rm7x4"]
Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.026127 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b748879ff-rm7x4"
Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.037519 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b748879ff-rm7x4"]
Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.053459 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0cac59fd-5161-47c4-a5aa-2a268962215a-logs\") pod \"nova-api-0\" (UID: \"0cac59fd-5161-47c4-a5aa-2a268962215a\") " pod="openstack/nova-api-0"
Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.053524 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cac59fd-5161-47c4-a5aa-2a268962215a-config-data\") pod \"nova-api-0\" (UID: \"0cac59fd-5161-47c4-a5aa-2a268962215a\") " pod="openstack/nova-api-0"
Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.053550 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbdcee07-b175-4410-b443-b80370342779-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"dbdcee07-b175-4410-b443-b80370342779\") " pod="openstack/nova-metadata-0"
Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.053570 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7c3bcb6-9ac5-4bae-b7a3-64e31022516d-config-data\") pod \"nova-scheduler-0\" (UID: \"e7c3bcb6-9ac5-4bae-b7a3-64e31022516d\") " pod="openstack/nova-scheduler-0"
Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.053588 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/678f7591-c9d6-43dc-8ece-0d4f0f4965f4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"678f7591-c9d6-43dc-8ece-0d4f0f4965f4\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.053612 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4tsm\" (UniqueName: \"kubernetes.io/projected/678f7591-c9d6-43dc-8ece-0d4f0f4965f4-kube-api-access-w4tsm\") pod \"nova-cell1-novncproxy-0\" (UID: \"678f7591-c9d6-43dc-8ece-0d4f0f4965f4\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.053640 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8mlh\" (UniqueName: \"kubernetes.io/projected/0cac59fd-5161-47c4-a5aa-2a268962215a-kube-api-access-f8mlh\") pod \"nova-api-0\" (UID: \"0cac59fd-5161-47c4-a5aa-2a268962215a\") " pod="openstack/nova-api-0"
Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.053673 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7c3bcb6-9ac5-4bae-b7a3-64e31022516d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e7c3bcb6-9ac5-4bae-b7a3-64e31022516d\") " pod="openstack/nova-scheduler-0"
Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.053718 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wk54\" (UniqueName: \"kubernetes.io/projected/dbdcee07-b175-4410-b443-b80370342779-kube-api-access-7wk54\") pod \"nova-metadata-0\" (UID: \"dbdcee07-b175-4410-b443-b80370342779\") " pod="openstack/nova-metadata-0"
Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.053744 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/678f7591-c9d6-43dc-8ece-0d4f0f4965f4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"678f7591-c9d6-43dc-8ece-0d4f0f4965f4\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.053767 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbdcee07-b175-4410-b443-b80370342779-logs\") pod \"nova-metadata-0\" (UID: \"dbdcee07-b175-4410-b443-b80370342779\") " pod="openstack/nova-metadata-0"
Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.053783 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cac59fd-5161-47c4-a5aa-2a268962215a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0cac59fd-5161-47c4-a5aa-2a268962215a\") " pod="openstack/nova-api-0"
Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.053828 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxr5b\" (UniqueName: \"kubernetes.io/projected/e7c3bcb6-9ac5-4bae-b7a3-64e31022516d-kube-api-access-mxr5b\") pod \"nova-scheduler-0\" (UID: \"e7c3bcb6-9ac5-4bae-b7a3-64e31022516d\") " pod="openstack/nova-scheduler-0"
Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.053857 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbdcee07-b175-4410-b443-b80370342779-config-data\") pod \"nova-metadata-0\" (UID: \"dbdcee07-b175-4410-b443-b80370342779\") " pod="openstack/nova-metadata-0"
Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.059627 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0cac59fd-5161-47c4-a5aa-2a268962215a-logs\") pod \"nova-api-0\" (UID: \"0cac59fd-5161-47c4-a5aa-2a268962215a\") " pod="openstack/nova-api-0"
Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.063141 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7c3bcb6-9ac5-4bae-b7a3-64e31022516d-config-data\") pod \"nova-scheduler-0\" (UID: \"e7c3bcb6-9ac5-4bae-b7a3-64e31022516d\") " pod="openstack/nova-scheduler-0"
Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.063502 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cac59fd-5161-47c4-a5aa-2a268962215a-config-data\") pod \"nova-api-0\" (UID: \"0cac59fd-5161-47c4-a5aa-2a268962215a\") " pod="openstack/nova-api-0"
Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.069743 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cac59fd-5161-47c4-a5aa-2a268962215a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0cac59fd-5161-47c4-a5aa-2a268962215a\") " pod="openstack/nova-api-0"
Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.077687 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxr5b\" (UniqueName: \"kubernetes.io/projected/e7c3bcb6-9ac5-4bae-b7a3-64e31022516d-kube-api-access-mxr5b\") pod \"nova-scheduler-0\" (UID: \"e7c3bcb6-9ac5-4bae-b7a3-64e31022516d\") " pod="openstack/nova-scheduler-0"
Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.083188 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7c3bcb6-9ac5-4bae-b7a3-64e31022516d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e7c3bcb6-9ac5-4bae-b7a3-64e31022516d\") " pod="openstack/nova-scheduler-0"
Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.090596 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8mlh\" (UniqueName: \"kubernetes.io/projected/0cac59fd-5161-47c4-a5aa-2a268962215a-kube-api-access-f8mlh\") pod \"nova-api-0\" (UID: \"0cac59fd-5161-47c4-a5aa-2a268962215a\") " pod="openstack/nova-api-0"
Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.155638 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbdcee07-b175-4410-b443-b80370342779-config-data\") pod \"nova-metadata-0\" (UID: \"dbdcee07-b175-4410-b443-b80370342779\") " pod="openstack/nova-metadata-0"
Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.155678 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8b3118f-a393-4a4d-9fff-0846fb6ac6f1-dns-svc\") pod \"dnsmasq-dns-b748879ff-rm7x4\" (UID: \"d8b3118f-a393-4a4d-9fff-0846fb6ac6f1\") " pod="openstack/dnsmasq-dns-b748879ff-rm7x4"
Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.155747 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbdcee07-b175-4410-b443-b80370342779-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"dbdcee07-b175-4410-b443-b80370342779\") " pod="openstack/nova-metadata-0"
Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.155767 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/678f7591-c9d6-43dc-8ece-0d4f0f4965f4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"678f7591-c9d6-43dc-8ece-0d4f0f4965f4\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.155862 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d8b3118f-a393-4a4d-9fff-0846fb6ac6f1-ovsdbserver-sb\") pod \"dnsmasq-dns-b748879ff-rm7x4\" (UID: \"d8b3118f-a393-4a4d-9fff-0846fb6ac6f1\") " pod="openstack/dnsmasq-dns-b748879ff-rm7x4"
Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.155919 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4tsm\" (UniqueName: \"kubernetes.io/projected/678f7591-c9d6-43dc-8ece-0d4f0f4965f4-kube-api-access-w4tsm\") pod \"nova-cell1-novncproxy-0\" (UID: \"678f7591-c9d6-43dc-8ece-0d4f0f4965f4\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.155946 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgjmj\" (UniqueName: \"kubernetes.io/projected/d8b3118f-a393-4a4d-9fff-0846fb6ac6f1-kube-api-access-mgjmj\") pod \"dnsmasq-dns-b748879ff-rm7x4\" (UID: \"d8b3118f-a393-4a4d-9fff-0846fb6ac6f1\") " pod="openstack/dnsmasq-dns-b748879ff-rm7x4"
Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.156023 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8b3118f-a393-4a4d-9fff-0846fb6ac6f1-config\") pod \"dnsmasq-dns-b748879ff-rm7x4\" (UID: \"d8b3118f-a393-4a4d-9fff-0846fb6ac6f1\") " pod="openstack/dnsmasq-dns-b748879ff-rm7x4"
Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.156061 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wk54\" (UniqueName: \"kubernetes.io/projected/dbdcee07-b175-4410-b443-b80370342779-kube-api-access-7wk54\") pod \"nova-metadata-0\" (UID: \"dbdcee07-b175-4410-b443-b80370342779\") " pod="openstack/nova-metadata-0"
Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.156102 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/678f7591-c9d6-43dc-8ece-0d4f0f4965f4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"678f7591-c9d6-43dc-8ece-0d4f0f4965f4\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.156124 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbdcee07-b175-4410-b443-b80370342779-logs\") pod \"nova-metadata-0\" (UID: \"dbdcee07-b175-4410-b443-b80370342779\") " pod="openstack/nova-metadata-0"
Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.156148 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d8b3118f-a393-4a4d-9fff-0846fb6ac6f1-ovsdbserver-nb\") pod \"dnsmasq-dns-b748879ff-rm7x4\" (UID: \"d8b3118f-a393-4a4d-9fff-0846fb6ac6f1\") " pod="openstack/dnsmasq-dns-b748879ff-rm7x4"
\"dnsmasq-dns-b748879ff-rm7x4\" (UID: \"d8b3118f-a393-4a4d-9fff-0846fb6ac6f1\") " pod="openstack/dnsmasq-dns-b748879ff-rm7x4" Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.160958 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbdcee07-b175-4410-b443-b80370342779-logs\") pod \"nova-metadata-0\" (UID: \"dbdcee07-b175-4410-b443-b80370342779\") " pod="openstack/nova-metadata-0" Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.161195 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbdcee07-b175-4410-b443-b80370342779-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"dbdcee07-b175-4410-b443-b80370342779\") " pod="openstack/nova-metadata-0" Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.162312 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbdcee07-b175-4410-b443-b80370342779-config-data\") pod \"nova-metadata-0\" (UID: \"dbdcee07-b175-4410-b443-b80370342779\") " pod="openstack/nova-metadata-0" Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.164267 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/678f7591-c9d6-43dc-8ece-0d4f0f4965f4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"678f7591-c9d6-43dc-8ece-0d4f0f4965f4\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.169973 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/678f7591-c9d6-43dc-8ece-0d4f0f4965f4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"678f7591-c9d6-43dc-8ece-0d4f0f4965f4\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.172247 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4tsm\" (UniqueName: \"kubernetes.io/projected/678f7591-c9d6-43dc-8ece-0d4f0f4965f4-kube-api-access-w4tsm\") pod \"nova-cell1-novncproxy-0\" (UID: \"678f7591-c9d6-43dc-8ece-0d4f0f4965f4\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.173807 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wk54\" (UniqueName: \"kubernetes.io/projected/dbdcee07-b175-4410-b443-b80370342779-kube-api-access-7wk54\") pod \"nova-metadata-0\" (UID: \"dbdcee07-b175-4410-b443-b80370342779\") " pod="openstack/nova-metadata-0" Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.195148 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.220287 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.258365 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8b3118f-a393-4a4d-9fff-0846fb6ac6f1-config\") pod \"dnsmasq-dns-b748879ff-rm7x4\" (UID: \"d8b3118f-a393-4a4d-9fff-0846fb6ac6f1\") " pod="openstack/dnsmasq-dns-b748879ff-rm7x4" Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.258563 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d8b3118f-a393-4a4d-9fff-0846fb6ac6f1-ovsdbserver-nb\") pod \"dnsmasq-dns-b748879ff-rm7x4\" (UID: \"d8b3118f-a393-4a4d-9fff-0846fb6ac6f1\") " pod="openstack/dnsmasq-dns-b748879ff-rm7x4" Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.258648 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8b3118f-a393-4a4d-9fff-0846fb6ac6f1-dns-svc\") pod \"dnsmasq-dns-b748879ff-rm7x4\" (UID: \"d8b3118f-a393-4a4d-9fff-0846fb6ac6f1\") " pod="openstack/dnsmasq-dns-b748879ff-rm7x4" Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.258748 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d8b3118f-a393-4a4d-9fff-0846fb6ac6f1-ovsdbserver-sb\") pod \"dnsmasq-dns-b748879ff-rm7x4\" (UID: \"d8b3118f-a393-4a4d-9fff-0846fb6ac6f1\") " pod="openstack/dnsmasq-dns-b748879ff-rm7x4" Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.258804 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgjmj\" (UniqueName: \"kubernetes.io/projected/d8b3118f-a393-4a4d-9fff-0846fb6ac6f1-kube-api-access-mgjmj\") pod \"dnsmasq-dns-b748879ff-rm7x4\" (UID: \"d8b3118f-a393-4a4d-9fff-0846fb6ac6f1\") " pod="openstack/dnsmasq-dns-b748879ff-rm7x4" Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.260534 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8b3118f-a393-4a4d-9fff-0846fb6ac6f1-dns-svc\") pod \"dnsmasq-dns-b748879ff-rm7x4\" (UID: \"d8b3118f-a393-4a4d-9fff-0846fb6ac6f1\") " pod="openstack/dnsmasq-dns-b748879ff-rm7x4" Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.261154 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d8b3118f-a393-4a4d-9fff-0846fb6ac6f1-ovsdbserver-nb\") pod \"dnsmasq-dns-b748879ff-rm7x4\" (UID: \"d8b3118f-a393-4a4d-9fff-0846fb6ac6f1\") " pod="openstack/dnsmasq-dns-b748879ff-rm7x4" Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.261445 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8b3118f-a393-4a4d-9fff-0846fb6ac6f1-config\") pod \"dnsmasq-dns-b748879ff-rm7x4\" (UID: \"d8b3118f-a393-4a4d-9fff-0846fb6ac6f1\") " pod="openstack/dnsmasq-dns-b748879ff-rm7x4" Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.262278 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d8b3118f-a393-4a4d-9fff-0846fb6ac6f1-ovsdbserver-sb\") pod \"dnsmasq-dns-b748879ff-rm7x4\" (UID: \"d8b3118f-a393-4a4d-9fff-0846fb6ac6f1\") " pod="openstack/dnsmasq-dns-b748879ff-rm7x4" Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.276288 4946 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgjmj\" (UniqueName: \"kubernetes.io/projected/d8b3118f-a393-4a4d-9fff-0846fb6ac6f1-kube-api-access-mgjmj\") pod \"dnsmasq-dns-b748879ff-rm7x4\" (UID: \"d8b3118f-a393-4a4d-9fff-0846fb6ac6f1\") " pod="openstack/dnsmasq-dns-b748879ff-rm7x4" Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.335426 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.384024 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.407400 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b748879ff-rm7x4" Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.498276 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-qjtsc"] Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.527659 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-cwv2r"] Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.529028 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-cwv2r" Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.530289 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.531065 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.539469 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-cwv2r"] Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.667675 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7724247-b3f0-4422-9120-b4ab79b4e613-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-cwv2r\" (UID: \"a7724247-b3f0-4422-9120-b4ab79b4e613\") " pod="openstack/nova-cell1-conductor-db-sync-cwv2r" Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.667756 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7724247-b3f0-4422-9120-b4ab79b4e613-scripts\") pod \"nova-cell1-conductor-db-sync-cwv2r\" (UID: \"a7724247-b3f0-4422-9120-b4ab79b4e613\") " pod="openstack/nova-cell1-conductor-db-sync-cwv2r" Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.667797 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7724247-b3f0-4422-9120-b4ab79b4e613-config-data\") pod \"nova-cell1-conductor-db-sync-cwv2r\" (UID: \"a7724247-b3f0-4422-9120-b4ab79b4e613\") " pod="openstack/nova-cell1-conductor-db-sync-cwv2r" Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.667822 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd9nn\" (UniqueName: \"kubernetes.io/projected/a7724247-b3f0-4422-9120-b4ab79b4e613-kube-api-access-dd9nn\") pod \"nova-cell1-conductor-db-sync-cwv2r\" (UID: \"a7724247-b3f0-4422-9120-b4ab79b4e613\") " 
pod="openstack/nova-cell1-conductor-db-sync-cwv2r" Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.688743 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qjtsc" event={"ID":"630068c6-dca8-4f28-b680-9c730c477328","Type":"ContainerStarted","Data":"d777a2be50fb82a675422687f6c46176fe81f3a274f6e493ec222618ff4f5d89"} Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.769233 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7724247-b3f0-4422-9120-b4ab79b4e613-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-cwv2r\" (UID: \"a7724247-b3f0-4422-9120-b4ab79b4e613\") " pod="openstack/nova-cell1-conductor-db-sync-cwv2r" Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.769320 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7724247-b3f0-4422-9120-b4ab79b4e613-scripts\") pod \"nova-cell1-conductor-db-sync-cwv2r\" (UID: \"a7724247-b3f0-4422-9120-b4ab79b4e613\") " pod="openstack/nova-cell1-conductor-db-sync-cwv2r" Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.769364 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7724247-b3f0-4422-9120-b4ab79b4e613-config-data\") pod \"nova-cell1-conductor-db-sync-cwv2r\" (UID: \"a7724247-b3f0-4422-9120-b4ab79b4e613\") " pod="openstack/nova-cell1-conductor-db-sync-cwv2r" Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.769389 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dd9nn\" (UniqueName: \"kubernetes.io/projected/a7724247-b3f0-4422-9120-b4ab79b4e613-kube-api-access-dd9nn\") pod \"nova-cell1-conductor-db-sync-cwv2r\" (UID: \"a7724247-b3f0-4422-9120-b4ab79b4e613\") " pod="openstack/nova-cell1-conductor-db-sync-cwv2r" Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.776450 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.779253 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7724247-b3f0-4422-9120-b4ab79b4e613-config-data\") pod \"nova-cell1-conductor-db-sync-cwv2r\" (UID: \"a7724247-b3f0-4422-9120-b4ab79b4e613\") " pod="openstack/nova-cell1-conductor-db-sync-cwv2r" Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.779615 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7724247-b3f0-4422-9120-b4ab79b4e613-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-cwv2r\" (UID: \"a7724247-b3f0-4422-9120-b4ab79b4e613\") " pod="openstack/nova-cell1-conductor-db-sync-cwv2r" Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.784265 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7724247-b3f0-4422-9120-b4ab79b4e613-scripts\") pod \"nova-cell1-conductor-db-sync-cwv2r\" (UID: \"a7724247-b3f0-4422-9120-b4ab79b4e613\") " pod="openstack/nova-cell1-conductor-db-sync-cwv2r" Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.794560 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dd9nn\" (UniqueName: \"kubernetes.io/projected/a7724247-b3f0-4422-9120-b4ab79b4e613-kube-api-access-dd9nn\") pod 
\"nova-cell1-conductor-db-sync-cwv2r\" (UID: \"a7724247-b3f0-4422-9120-b4ab79b4e613\") " pod="openstack/nova-cell1-conductor-db-sync-cwv2r" Nov 28 08:52:58 crc kubenswrapper[4946]: W1128 08:52:58.816812 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7c3bcb6_9ac5_4bae_b7a3_64e31022516d.slice/crio-ae7d27bdef6cbb14702e30a0335bd9fc0f2b89577847a7a982087c542c04d1d7 WatchSource:0}: Error finding container ae7d27bdef6cbb14702e30a0335bd9fc0f2b89577847a7a982087c542c04d1d7: Status 404 returned error can't find the container with id ae7d27bdef6cbb14702e30a0335bd9fc0f2b89577847a7a982087c542c04d1d7 Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.857134 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-cwv2r" Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.872512 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 08:52:58 crc kubenswrapper[4946]: I1128 08:52:58.912144 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 08:52:59 crc kubenswrapper[4946]: I1128 08:52:59.007223 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 08:52:59 crc kubenswrapper[4946]: W1128 08:52:59.013776 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0cac59fd_5161_47c4_a5aa_2a268962215a.slice/crio-f9b384989023075021d81f20a8653adf556ad1020af3b249baf88b654cf8e433 WatchSource:0}: Error finding container f9b384989023075021d81f20a8653adf556ad1020af3b249baf88b654cf8e433: Status 404 returned error can't find the container with id f9b384989023075021d81f20a8653adf556ad1020af3b249baf88b654cf8e433 Nov 28 08:52:59 crc kubenswrapper[4946]: I1128 08:52:59.113749 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b748879ff-rm7x4"] Nov 28 08:52:59 crc kubenswrapper[4946]: W1128 08:52:59.138441 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8b3118f_a393_4a4d_9fff_0846fb6ac6f1.slice/crio-3d18fdc3cb019a932f3edc8d6a6119859fd5763266f0430223229713440c6895 WatchSource:0}: Error finding container 3d18fdc3cb019a932f3edc8d6a6119859fd5763266f0430223229713440c6895: Status 404 returned error can't find the container with id 3d18fdc3cb019a932f3edc8d6a6119859fd5763266f0430223229713440c6895 Nov 28 08:52:59 crc kubenswrapper[4946]: W1128 08:52:59.437393 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7724247_b3f0_4422_9120_b4ab79b4e613.slice/crio-7fbd3e233e282a37d299186cce92f136cd6e7d4cf80585cf92737bbf6711b279 WatchSource:0}: Error finding container 7fbd3e233e282a37d299186cce92f136cd6e7d4cf80585cf92737bbf6711b279: Status 404 returned error can't find the container with id 7fbd3e233e282a37d299186cce92f136cd6e7d4cf80585cf92737bbf6711b279 Nov 28 08:52:59 crc kubenswrapper[4946]: I1128 08:52:59.441139 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-cwv2r"] Nov 28 08:52:59 crc kubenswrapper[4946]: I1128 08:52:59.718026 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"e7c3bcb6-9ac5-4bae-b7a3-64e31022516d","Type":"ContainerStarted","Data":"ae7d27bdef6cbb14702e30a0335bd9fc0f2b89577847a7a982087c542c04d1d7"} Nov 28 08:52:59 crc kubenswrapper[4946]: I1128 08:52:59.719315 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-cwv2r" event={"ID":"a7724247-b3f0-4422-9120-b4ab79b4e613","Type":"ContainerStarted","Data":"7fbd3e233e282a37d299186cce92f136cd6e7d4cf80585cf92737bbf6711b279"} Nov 28 08:52:59 crc kubenswrapper[4946]: I1128 08:52:59.720631 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0cac59fd-5161-47c4-a5aa-2a268962215a","Type":"ContainerStarted","Data":"f9b384989023075021d81f20a8653adf556ad1020af3b249baf88b654cf8e433"} Nov 28 08:52:59 crc kubenswrapper[4946]: I1128 08:52:59.721838 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dbdcee07-b175-4410-b443-b80370342779","Type":"ContainerStarted","Data":"e80e089298f26813dd3ba30f98d1b9eb34834fc49c4437f4f50adc9b48fd795a"} Nov 28 08:52:59 crc kubenswrapper[4946]: I1128 08:52:59.723113 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qjtsc" event={"ID":"630068c6-dca8-4f28-b680-9c730c477328","Type":"ContainerStarted","Data":"4f8e173b34d1506a6bed0691341522603cad87d30ac6502d8ff321cf68fcd7a5"} Nov 28 08:52:59 crc kubenswrapper[4946]: I1128 08:52:59.724909 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"678f7591-c9d6-43dc-8ece-0d4f0f4965f4","Type":"ContainerStarted","Data":"2bc414c9c4f2bdfbf327f3c4ca343437ec0d2005319a4a80edfaebbb4ac10250"} Nov 28 08:52:59 crc kubenswrapper[4946]: I1128 08:52:59.727399 4946 generic.go:334] "Generic (PLEG): container finished" podID="d8b3118f-a393-4a4d-9fff-0846fb6ac6f1" containerID="2a679ecab020588299dbe05fec513d8b0793950c31cc5ebdf5348f7c91c65beb" exitCode=0 Nov 28 08:52:59 crc kubenswrapper[4946]: I1128 08:52:59.727452 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b748879ff-rm7x4" event={"ID":"d8b3118f-a393-4a4d-9fff-0846fb6ac6f1","Type":"ContainerDied","Data":"2a679ecab020588299dbe05fec513d8b0793950c31cc5ebdf5348f7c91c65beb"} Nov 28 08:52:59 crc kubenswrapper[4946]: I1128 08:52:59.727503 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b748879ff-rm7x4" event={"ID":"d8b3118f-a393-4a4d-9fff-0846fb6ac6f1","Type":"ContainerStarted","Data":"3d18fdc3cb019a932f3edc8d6a6119859fd5763266f0430223229713440c6895"} Nov 28 08:52:59 crc kubenswrapper[4946]: I1128 08:52:59.742261 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-qjtsc" podStartSLOduration=2.742245832 podStartE2EDuration="2.742245832s" podCreationTimestamp="2025-11-28 08:52:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 08:52:59.739130805 +0000 UTC m=+7234.117195916" watchObservedRunningTime="2025-11-28 08:52:59.742245832 +0000 UTC m=+7234.120310943" Nov 28 08:53:00 crc kubenswrapper[4946]: I1128 08:53:00.740989 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-cwv2r" event={"ID":"a7724247-b3f0-4422-9120-b4ab79b4e613","Type":"ContainerStarted","Data":"f65e759f88c4f12322679858a51b2efb8e0875588667933b40e551ad478f7e26"} Nov 28 08:53:00 crc kubenswrapper[4946]: I1128 08:53:00.756744 4946 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-cwv2r" podStartSLOduration=2.756726658 podStartE2EDuration="2.756726658s" podCreationTimestamp="2025-11-28 08:52:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 08:53:00.75479409 +0000 UTC m=+7235.132859211" watchObservedRunningTime="2025-11-28 08:53:00.756726658 +0000 UTC m=+7235.134791769" Nov 28 08:53:01 crc kubenswrapper[4946]: I1128 08:53:01.751934 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dbdcee07-b175-4410-b443-b80370342779","Type":"ContainerStarted","Data":"4a357200b73160dd9d102ab83d63605be106616f5b86a066e6e4403add3c0467"} Nov 28 08:53:01 crc kubenswrapper[4946]: I1128 08:53:01.754231 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"678f7591-c9d6-43dc-8ece-0d4f0f4965f4","Type":"ContainerStarted","Data":"18d400fa8f5cb27d5a2fc63ee636aa43f1e420c49f73941dae45801c363dfbb9"} Nov 28 08:53:01 crc kubenswrapper[4946]: I1128 08:53:01.756834 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b748879ff-rm7x4" event={"ID":"d8b3118f-a393-4a4d-9fff-0846fb6ac6f1","Type":"ContainerStarted","Data":"83ce25e78bc0d9d1d4d261d549c85653a9e589b62b53e87c7c2251d3f88453aa"} Nov 28 08:53:01 crc kubenswrapper[4946]: I1128 08:53:01.756964 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b748879ff-rm7x4" Nov 28 08:53:01 crc kubenswrapper[4946]: I1128 08:53:01.758368 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e7c3bcb6-9ac5-4bae-b7a3-64e31022516d","Type":"ContainerStarted","Data":"f03364a41b69272215cc50a2ec670ad9728def8ef4f7c69a999316875457909e"} Nov 28 08:53:01 crc kubenswrapper[4946]: I1128 08:53:01.759482 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0cac59fd-5161-47c4-a5aa-2a268962215a","Type":"ContainerStarted","Data":"248f52e63cb3f79285e7710f22227d97b7448dc724584e7cd5e9baedabcf32ea"} Nov 28 08:53:01 crc kubenswrapper[4946]: I1128 08:53:01.780100 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.505724465 podStartE2EDuration="4.780083083s" podCreationTimestamp="2025-11-28 08:52:57 +0000 UTC" firstStartedPulling="2025-11-28 08:52:58.974128869 +0000 UTC m=+7233.352193980" lastFinishedPulling="2025-11-28 08:53:01.248487487 +0000 UTC m=+7235.626552598" observedRunningTime="2025-11-28 08:53:01.773739436 +0000 UTC m=+7236.151804547" watchObservedRunningTime="2025-11-28 08:53:01.780083083 +0000 UTC m=+7236.158148194" Nov 28 08:53:01 crc kubenswrapper[4946]: I1128 08:53:01.792258 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.39045517 podStartE2EDuration="4.792242234s" podCreationTimestamp="2025-11-28 08:52:57 +0000 UTC" firstStartedPulling="2025-11-28 08:52:58.846635991 +0000 UTC m=+7233.224701102" lastFinishedPulling="2025-11-28 08:53:01.248423055 +0000 UTC m=+7235.626488166" observedRunningTime="2025-11-28 08:53:01.79048329 +0000 UTC m=+7236.168548401" watchObservedRunningTime="2025-11-28 08:53:01.792242234 +0000 UTC m=+7236.170307335" Nov 28 08:53:01 crc kubenswrapper[4946]: I1128 08:53:01.816740 4946 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/dnsmasq-dns-b748879ff-rm7x4" podStartSLOduration=4.81671649 podStartE2EDuration="4.81671649s" podCreationTimestamp="2025-11-28 08:52:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 08:53:01.814548186 +0000 UTC m=+7236.192613297" watchObservedRunningTime="2025-11-28 08:53:01.81671649 +0000 UTC m=+7236.194781601" Nov 28 08:53:02 crc kubenswrapper[4946]: I1128 08:53:02.802354 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0cac59fd-5161-47c4-a5aa-2a268962215a","Type":"ContainerStarted","Data":"bf92841c98eae6e14a784992b1bb49c0da15b13814f60cb6d38bb2c2e55c9575"} Nov 28 08:53:02 crc kubenswrapper[4946]: I1128 08:53:02.807677 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dbdcee07-b175-4410-b443-b80370342779","Type":"ContainerStarted","Data":"2f4824f7998447412b929f4958504278353350c579d1469c3671acbff004e14e"} Nov 28 08:53:02 crc kubenswrapper[4946]: I1128 08:53:02.854072 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.594205702 podStartE2EDuration="5.85403093s" podCreationTimestamp="2025-11-28 08:52:57 +0000 UTC" firstStartedPulling="2025-11-28 08:52:59.016771975 +0000 UTC m=+7233.394837086" lastFinishedPulling="2025-11-28 08:53:01.276597203 +0000 UTC m=+7235.654662314" observedRunningTime="2025-11-28 08:53:02.834709151 +0000 UTC m=+7237.212774282" watchObservedRunningTime="2025-11-28 08:53:02.85403093 +0000 UTC m=+7237.232096081" Nov 28 08:53:02 crc kubenswrapper[4946]: I1128 08:53:02.867107 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.590847939 podStartE2EDuration="5.867084203s" podCreationTimestamp="2025-11-28 08:52:57 +0000 UTC" firstStartedPulling="2025-11-28 08:52:58.971046923 +0000 UTC m=+7233.349112034" lastFinishedPulling="2025-11-28 08:53:01.247283187 +0000 UTC m=+7235.625348298" observedRunningTime="2025-11-28 08:53:02.85970848 +0000 UTC m=+7237.237773591" watchObservedRunningTime="2025-11-28 08:53:02.867084203 +0000 UTC m=+7237.245149344" Nov 28 08:53:03 crc kubenswrapper[4946]: I1128 08:53:03.195724 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 28 08:53:03 crc kubenswrapper[4946]: I1128 08:53:03.221810 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 28 08:53:03 crc kubenswrapper[4946]: I1128 08:53:03.336644 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 28 08:53:03 crc kubenswrapper[4946]: I1128 08:53:03.336776 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 28 08:53:03 crc kubenswrapper[4946]: I1128 08:53:03.821372 4946 generic.go:334] "Generic (PLEG): container finished" podID="630068c6-dca8-4f28-b680-9c730c477328" containerID="4f8e173b34d1506a6bed0691341522603cad87d30ac6502d8ff321cf68fcd7a5" exitCode=0 Nov 28 08:53:03 crc kubenswrapper[4946]: I1128 08:53:03.821439 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qjtsc" event={"ID":"630068c6-dca8-4f28-b680-9c730c477328","Type":"ContainerDied","Data":"4f8e173b34d1506a6bed0691341522603cad87d30ac6502d8ff321cf68fcd7a5"} Nov 28 08:53:03 crc kubenswrapper[4946]: 
Nov 28 08:53:03 crc kubenswrapper[4946]: I1128 08:53:03.825492 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-cwv2r" event={"ID":"a7724247-b3f0-4422-9120-b4ab79b4e613","Type":"ContainerDied","Data":"f65e759f88c4f12322679858a51b2efb8e0875588667933b40e551ad478f7e26"}
Nov 28 08:53:05 crc kubenswrapper[4946]: I1128 08:53:05.366342 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-cwv2r"
Nov 28 08:53:05 crc kubenswrapper[4946]: I1128 08:53:05.374224 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qjtsc"
Nov 28 08:53:05 crc kubenswrapper[4946]: I1128 08:53:05.504406 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7724247-b3f0-4422-9120-b4ab79b4e613-config-data\") pod \"a7724247-b3f0-4422-9120-b4ab79b4e613\" (UID: \"a7724247-b3f0-4422-9120-b4ab79b4e613\") "
Nov 28 08:53:05 crc kubenswrapper[4946]: I1128 08:53:05.504596 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/630068c6-dca8-4f28-b680-9c730c477328-scripts\") pod \"630068c6-dca8-4f28-b680-9c730c477328\" (UID: \"630068c6-dca8-4f28-b680-9c730c477328\") "
Nov 28 08:53:05 crc kubenswrapper[4946]: I1128 08:53:05.504781 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7724247-b3f0-4422-9120-b4ab79b4e613-combined-ca-bundle\") pod \"a7724247-b3f0-4422-9120-b4ab79b4e613\" (UID: \"a7724247-b3f0-4422-9120-b4ab79b4e613\") "
Nov 28 08:53:05 crc kubenswrapper[4946]: I1128 08:53:05.504822 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dd9nn\" (UniqueName: \"kubernetes.io/projected/a7724247-b3f0-4422-9120-b4ab79b4e613-kube-api-access-dd9nn\") pod \"a7724247-b3f0-4422-9120-b4ab79b4e613\" (UID: \"a7724247-b3f0-4422-9120-b4ab79b4e613\") "
Nov 28 08:53:05 crc kubenswrapper[4946]: I1128 08:53:05.504846 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/630068c6-dca8-4f28-b680-9c730c477328-combined-ca-bundle\") pod \"630068c6-dca8-4f28-b680-9c730c477328\" (UID: \"630068c6-dca8-4f28-b680-9c730c477328\") "
Nov 28 08:53:05 crc kubenswrapper[4946]: I1128 08:53:05.504890 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/630068c6-dca8-4f28-b680-9c730c477328-config-data\") pod \"630068c6-dca8-4f28-b680-9c730c477328\" (UID: \"630068c6-dca8-4f28-b680-9c730c477328\") "
Nov 28 08:53:05 crc kubenswrapper[4946]: I1128 08:53:05.504987 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlcxh\" (UniqueName: \"kubernetes.io/projected/630068c6-dca8-4f28-b680-9c730c477328-kube-api-access-nlcxh\") pod \"630068c6-dca8-4f28-b680-9c730c477328\" (UID: \"630068c6-dca8-4f28-b680-9c730c477328\") "
Nov 28 08:53:05 crc kubenswrapper[4946]: I1128 08:53:05.505013 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7724247-b3f0-4422-9120-b4ab79b4e613-scripts\") pod \"a7724247-b3f0-4422-9120-b4ab79b4e613\" (UID: \"a7724247-b3f0-4422-9120-b4ab79b4e613\") "
Nov 28 08:53:05 crc kubenswrapper[4946]: I1128 08:53:05.510337 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/630068c6-dca8-4f28-b680-9c730c477328-kube-api-access-nlcxh" (OuterVolumeSpecName: "kube-api-access-nlcxh") pod "630068c6-dca8-4f28-b680-9c730c477328" (UID: "630068c6-dca8-4f28-b680-9c730c477328"). InnerVolumeSpecName "kube-api-access-nlcxh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 08:53:05 crc kubenswrapper[4946]: I1128 08:53:05.511073 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7724247-b3f0-4422-9120-b4ab79b4e613-kube-api-access-dd9nn" (OuterVolumeSpecName: "kube-api-access-dd9nn") pod "a7724247-b3f0-4422-9120-b4ab79b4e613" (UID: "a7724247-b3f0-4422-9120-b4ab79b4e613"). InnerVolumeSpecName "kube-api-access-dd9nn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 08:53:05 crc kubenswrapper[4946]: I1128 08:53:05.513201 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/630068c6-dca8-4f28-b680-9c730c477328-scripts" (OuterVolumeSpecName: "scripts") pod "630068c6-dca8-4f28-b680-9c730c477328" (UID: "630068c6-dca8-4f28-b680-9c730c477328"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 08:53:05 crc kubenswrapper[4946]: I1128 08:53:05.513966 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7724247-b3f0-4422-9120-b4ab79b4e613-scripts" (OuterVolumeSpecName: "scripts") pod "a7724247-b3f0-4422-9120-b4ab79b4e613" (UID: "a7724247-b3f0-4422-9120-b4ab79b4e613"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 08:53:05 crc kubenswrapper[4946]: I1128 08:53:05.531016 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7724247-b3f0-4422-9120-b4ab79b4e613-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a7724247-b3f0-4422-9120-b4ab79b4e613" (UID: "a7724247-b3f0-4422-9120-b4ab79b4e613"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 08:53:05 crc kubenswrapper[4946]: I1128 08:53:05.533452 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/630068c6-dca8-4f28-b680-9c730c477328-config-data" (OuterVolumeSpecName: "config-data") pod "630068c6-dca8-4f28-b680-9c730c477328" (UID: "630068c6-dca8-4f28-b680-9c730c477328"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 08:53:05 crc kubenswrapper[4946]: I1128 08:53:05.536750 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/630068c6-dca8-4f28-b680-9c730c477328-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "630068c6-dca8-4f28-b680-9c730c477328" (UID: "630068c6-dca8-4f28-b680-9c730c477328"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 08:53:05 crc kubenswrapper[4946]: I1128 08:53:05.544348 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7724247-b3f0-4422-9120-b4ab79b4e613-config-data" (OuterVolumeSpecName: "config-data") pod "a7724247-b3f0-4422-9120-b4ab79b4e613" (UID: "a7724247-b3f0-4422-9120-b4ab79b4e613"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 08:53:05 crc kubenswrapper[4946]: I1128 08:53:05.607524 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7724247-b3f0-4422-9120-b4ab79b4e613-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 08:53:05 crc kubenswrapper[4946]: I1128 08:53:05.607571 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dd9nn\" (UniqueName: \"kubernetes.io/projected/a7724247-b3f0-4422-9120-b4ab79b4e613-kube-api-access-dd9nn\") on node \"crc\" DevicePath \"\""
Nov 28 08:53:05 crc kubenswrapper[4946]: I1128 08:53:05.607592 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/630068c6-dca8-4f28-b680-9c730c477328-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 08:53:05 crc kubenswrapper[4946]: I1128 08:53:05.607612 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/630068c6-dca8-4f28-b680-9c730c477328-config-data\") on node \"crc\" DevicePath \"\""
Nov 28 08:53:05 crc kubenswrapper[4946]: I1128 08:53:05.607629 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlcxh\" (UniqueName: \"kubernetes.io/projected/630068c6-dca8-4f28-b680-9c730c477328-kube-api-access-nlcxh\") on node \"crc\" DevicePath \"\""
Nov 28 08:53:05 crc kubenswrapper[4946]: I1128 08:53:05.607645 4946 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7724247-b3f0-4422-9120-b4ab79b4e613-scripts\") on node \"crc\" DevicePath \"\""
Nov 28 08:53:05 crc kubenswrapper[4946]: I1128 08:53:05.607661 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7724247-b3f0-4422-9120-b4ab79b4e613-config-data\") on node \"crc\" DevicePath \"\""
Nov 28 08:53:05 crc kubenswrapper[4946]: I1128 08:53:05.607676 4946 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/630068c6-dca8-4f28-b680-9c730c477328-scripts\") on node \"crc\" DevicePath \"\""
Nov 28 08:53:05 crc kubenswrapper[4946]: I1128 08:53:05.847140 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-cwv2r" event={"ID":"a7724247-b3f0-4422-9120-b4ab79b4e613","Type":"ContainerDied","Data":"7fbd3e233e282a37d299186cce92f136cd6e7d4cf80585cf92737bbf6711b279"}
Nov 28 08:53:05 crc kubenswrapper[4946]: I1128 08:53:05.847390 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fbd3e233e282a37d299186cce92f136cd6e7d4cf80585cf92737bbf6711b279"
Nov 28 08:53:05 crc kubenswrapper[4946]: I1128 08:53:05.847226 4946 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-cwv2r" Nov 28 08:53:05 crc kubenswrapper[4946]: I1128 08:53:05.848628 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qjtsc" event={"ID":"630068c6-dca8-4f28-b680-9c730c477328","Type":"ContainerDied","Data":"d777a2be50fb82a675422687f6c46176fe81f3a274f6e493ec222618ff4f5d89"} Nov 28 08:53:05 crc kubenswrapper[4946]: I1128 08:53:05.848645 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qjtsc" Nov 28 08:53:05 crc kubenswrapper[4946]: I1128 08:53:05.848668 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d777a2be50fb82a675422687f6c46176fe81f3a274f6e493ec222618ff4f5d89" Nov 28 08:53:05 crc kubenswrapper[4946]: I1128 08:53:05.954379 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 28 08:53:05 crc kubenswrapper[4946]: E1128 08:53:05.954755 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7724247-b3f0-4422-9120-b4ab79b4e613" containerName="nova-cell1-conductor-db-sync" Nov 28 08:53:05 crc kubenswrapper[4946]: I1128 08:53:05.954771 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7724247-b3f0-4422-9120-b4ab79b4e613" containerName="nova-cell1-conductor-db-sync" Nov 28 08:53:05 crc kubenswrapper[4946]: E1128 08:53:05.954802 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="630068c6-dca8-4f28-b680-9c730c477328" containerName="nova-manage" Nov 28 08:53:05 crc kubenswrapper[4946]: I1128 08:53:05.954808 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="630068c6-dca8-4f28-b680-9c730c477328" containerName="nova-manage" Nov 28 08:53:05 crc kubenswrapper[4946]: I1128 08:53:05.954971 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7724247-b3f0-4422-9120-b4ab79b4e613" containerName="nova-cell1-conductor-db-sync" Nov 28 08:53:05 crc kubenswrapper[4946]: I1128 08:53:05.954982 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="630068c6-dca8-4f28-b680-9c730c477328" containerName="nova-manage" Nov 28 08:53:05 crc kubenswrapper[4946]: I1128 08:53:05.955580 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 28 08:53:05 crc kubenswrapper[4946]: I1128 08:53:05.958575 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 28 08:53:05 crc kubenswrapper[4946]: I1128 08:53:05.968904 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.045421 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.045652 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0cac59fd-5161-47c4-a5aa-2a268962215a" containerName="nova-api-log" containerID="cri-o://248f52e63cb3f79285e7710f22227d97b7448dc724584e7cd5e9baedabcf32ea" gracePeriod=30 Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.045790 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0cac59fd-5161-47c4-a5aa-2a268962215a" containerName="nova-api-api" containerID="cri-o://bf92841c98eae6e14a784992b1bb49c0da15b13814f60cb6d38bb2c2e55c9575" gracePeriod=30 Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.061538 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.061849 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="e7c3bcb6-9ac5-4bae-b7a3-64e31022516d" containerName="nova-scheduler-scheduler" containerID="cri-o://f03364a41b69272215cc50a2ec670ad9728def8ef4f7c69a999316875457909e" gracePeriod=30 Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.084147 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.084447 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="dbdcee07-b175-4410-b443-b80370342779" containerName="nova-metadata-log" containerID="cri-o://4a357200b73160dd9d102ab83d63605be106616f5b86a066e6e4403add3c0467" gracePeriod=30 Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.084532 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="dbdcee07-b175-4410-b443-b80370342779" containerName="nova-metadata-metadata" containerID="cri-o://2f4824f7998447412b929f4958504278353350c579d1469c3671acbff004e14e" gracePeriod=30 Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.117759 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0eaadce7-5bc4-4f66-ba4c-2c4b11cb0fec-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"0eaadce7-5bc4-4f66-ba4c-2c4b11cb0fec\") " pod="openstack/nova-cell1-conductor-0" Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.117873 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0eaadce7-5bc4-4f66-ba4c-2c4b11cb0fec-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"0eaadce7-5bc4-4f66-ba4c-2c4b11cb0fec\") " pod="openstack/nova-cell1-conductor-0" Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.118020 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-m29ds\" (UniqueName: \"kubernetes.io/projected/0eaadce7-5bc4-4f66-ba4c-2c4b11cb0fec-kube-api-access-m29ds\") pod \"nova-cell1-conductor-0\" (UID: \"0eaadce7-5bc4-4f66-ba4c-2c4b11cb0fec\") " pod="openstack/nova-cell1-conductor-0" Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.219597 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0eaadce7-5bc4-4f66-ba4c-2c4b11cb0fec-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"0eaadce7-5bc4-4f66-ba4c-2c4b11cb0fec\") " pod="openstack/nova-cell1-conductor-0" Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.219944 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0eaadce7-5bc4-4f66-ba4c-2c4b11cb0fec-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"0eaadce7-5bc4-4f66-ba4c-2c4b11cb0fec\") " pod="openstack/nova-cell1-conductor-0" Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.219981 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m29ds\" (UniqueName: \"kubernetes.io/projected/0eaadce7-5bc4-4f66-ba4c-2c4b11cb0fec-kube-api-access-m29ds\") pod \"nova-cell1-conductor-0\" (UID: \"0eaadce7-5bc4-4f66-ba4c-2c4b11cb0fec\") " pod="openstack/nova-cell1-conductor-0" Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.224529 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0eaadce7-5bc4-4f66-ba4c-2c4b11cb0fec-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"0eaadce7-5bc4-4f66-ba4c-2c4b11cb0fec\") " pod="openstack/nova-cell1-conductor-0" Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.226927 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0eaadce7-5bc4-4f66-ba4c-2c4b11cb0fec-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"0eaadce7-5bc4-4f66-ba4c-2c4b11cb0fec\") " pod="openstack/nova-cell1-conductor-0" Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.255107 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m29ds\" (UniqueName: \"kubernetes.io/projected/0eaadce7-5bc4-4f66-ba4c-2c4b11cb0fec-kube-api-access-m29ds\") pod \"nova-cell1-conductor-0\" (UID: \"0eaadce7-5bc4-4f66-ba4c-2c4b11cb0fec\") " pod="openstack/nova-cell1-conductor-0" Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.282619 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.616267 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.709672 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.728241 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbdcee07-b175-4410-b443-b80370342779-combined-ca-bundle\") pod \"dbdcee07-b175-4410-b443-b80370342779\" (UID: \"dbdcee07-b175-4410-b443-b80370342779\") " Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.728362 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbdcee07-b175-4410-b443-b80370342779-logs\") pod \"dbdcee07-b175-4410-b443-b80370342779\" (UID: \"dbdcee07-b175-4410-b443-b80370342779\") " Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.728395 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wk54\" (UniqueName: \"kubernetes.io/projected/dbdcee07-b175-4410-b443-b80370342779-kube-api-access-7wk54\") pod \"dbdcee07-b175-4410-b443-b80370342779\" (UID: \"dbdcee07-b175-4410-b443-b80370342779\") " Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.728428 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbdcee07-b175-4410-b443-b80370342779-config-data\") pod \"dbdcee07-b175-4410-b443-b80370342779\" (UID: \"dbdcee07-b175-4410-b443-b80370342779\") " Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.729774 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbdcee07-b175-4410-b443-b80370342779-logs" (OuterVolumeSpecName: "logs") pod "dbdcee07-b175-4410-b443-b80370342779" (UID: "dbdcee07-b175-4410-b443-b80370342779"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.734514 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbdcee07-b175-4410-b443-b80370342779-kube-api-access-7wk54" (OuterVolumeSpecName: "kube-api-access-7wk54") pod "dbdcee07-b175-4410-b443-b80370342779" (UID: "dbdcee07-b175-4410-b443-b80370342779"). InnerVolumeSpecName "kube-api-access-7wk54". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.758291 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbdcee07-b175-4410-b443-b80370342779-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dbdcee07-b175-4410-b443-b80370342779" (UID: "dbdcee07-b175-4410-b443-b80370342779"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.758778 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbdcee07-b175-4410-b443-b80370342779-config-data" (OuterVolumeSpecName: "config-data") pod "dbdcee07-b175-4410-b443-b80370342779" (UID: "dbdcee07-b175-4410-b443-b80370342779"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.829612 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8mlh\" (UniqueName: \"kubernetes.io/projected/0cac59fd-5161-47c4-a5aa-2a268962215a-kube-api-access-f8mlh\") pod \"0cac59fd-5161-47c4-a5aa-2a268962215a\" (UID: \"0cac59fd-5161-47c4-a5aa-2a268962215a\") " Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.829713 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0cac59fd-5161-47c4-a5aa-2a268962215a-logs\") pod \"0cac59fd-5161-47c4-a5aa-2a268962215a\" (UID: \"0cac59fd-5161-47c4-a5aa-2a268962215a\") " Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.829833 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cac59fd-5161-47c4-a5aa-2a268962215a-combined-ca-bundle\") pod \"0cac59fd-5161-47c4-a5aa-2a268962215a\" (UID: \"0cac59fd-5161-47c4-a5aa-2a268962215a\") " Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.829897 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cac59fd-5161-47c4-a5aa-2a268962215a-config-data\") pod \"0cac59fd-5161-47c4-a5aa-2a268962215a\" (UID: \"0cac59fd-5161-47c4-a5aa-2a268962215a\") " Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.830389 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wk54\" (UniqueName: \"kubernetes.io/projected/dbdcee07-b175-4410-b443-b80370342779-kube-api-access-7wk54\") on node \"crc\" DevicePath \"\"" Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.830407 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbdcee07-b175-4410-b443-b80370342779-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.830419 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbdcee07-b175-4410-b443-b80370342779-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.830429 4946 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbdcee07-b175-4410-b443-b80370342779-logs\") on node \"crc\" DevicePath \"\"" Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.831192 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cac59fd-5161-47c4-a5aa-2a268962215a-logs" (OuterVolumeSpecName: "logs") pod "0cac59fd-5161-47c4-a5aa-2a268962215a" (UID: "0cac59fd-5161-47c4-a5aa-2a268962215a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.835683 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cac59fd-5161-47c4-a5aa-2a268962215a-kube-api-access-f8mlh" (OuterVolumeSpecName: "kube-api-access-f8mlh") pod "0cac59fd-5161-47c4-a5aa-2a268962215a" (UID: "0cac59fd-5161-47c4-a5aa-2a268962215a"). InnerVolumeSpecName "kube-api-access-f8mlh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.857880 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cac59fd-5161-47c4-a5aa-2a268962215a-config-data" (OuterVolumeSpecName: "config-data") pod "0cac59fd-5161-47c4-a5aa-2a268962215a" (UID: "0cac59fd-5161-47c4-a5aa-2a268962215a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.858020 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cac59fd-5161-47c4-a5aa-2a268962215a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0cac59fd-5161-47c4-a5aa-2a268962215a" (UID: "0cac59fd-5161-47c4-a5aa-2a268962215a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.883900 4946 generic.go:334] "Generic (PLEG): container finished" podID="0cac59fd-5161-47c4-a5aa-2a268962215a" containerID="bf92841c98eae6e14a784992b1bb49c0da15b13814f60cb6d38bb2c2e55c9575" exitCode=0 Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.883944 4946 generic.go:334] "Generic (PLEG): container finished" podID="0cac59fd-5161-47c4-a5aa-2a268962215a" containerID="248f52e63cb3f79285e7710f22227d97b7448dc724584e7cd5e9baedabcf32ea" exitCode=143 Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.883971 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.884017 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0cac59fd-5161-47c4-a5aa-2a268962215a","Type":"ContainerDied","Data":"bf92841c98eae6e14a784992b1bb49c0da15b13814f60cb6d38bb2c2e55c9575"} Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.884054 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0cac59fd-5161-47c4-a5aa-2a268962215a","Type":"ContainerDied","Data":"248f52e63cb3f79285e7710f22227d97b7448dc724584e7cd5e9baedabcf32ea"} Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.884070 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0cac59fd-5161-47c4-a5aa-2a268962215a","Type":"ContainerDied","Data":"f9b384989023075021d81f20a8653adf556ad1020af3b249baf88b654cf8e433"} Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.884087 4946 scope.go:117] "RemoveContainer" containerID="bf92841c98eae6e14a784992b1bb49c0da15b13814f60cb6d38bb2c2e55c9575" Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.889007 4946 generic.go:334] "Generic (PLEG): container finished" podID="dbdcee07-b175-4410-b443-b80370342779" containerID="2f4824f7998447412b929f4958504278353350c579d1469c3671acbff004e14e" exitCode=0 Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.889038 4946 generic.go:334] "Generic (PLEG): container finished" podID="dbdcee07-b175-4410-b443-b80370342779" containerID="4a357200b73160dd9d102ab83d63605be106616f5b86a066e6e4403add3c0467" exitCode=143 Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.889082 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.889094 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dbdcee07-b175-4410-b443-b80370342779","Type":"ContainerDied","Data":"2f4824f7998447412b929f4958504278353350c579d1469c3671acbff004e14e"} Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.889239 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dbdcee07-b175-4410-b443-b80370342779","Type":"ContainerDied","Data":"4a357200b73160dd9d102ab83d63605be106616f5b86a066e6e4403add3c0467"} Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.889255 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dbdcee07-b175-4410-b443-b80370342779","Type":"ContainerDied","Data":"e80e089298f26813dd3ba30f98d1b9eb34834fc49c4437f4f50adc9b48fd795a"} Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.891691 4946 generic.go:334] "Generic (PLEG): container finished" podID="e7c3bcb6-9ac5-4bae-b7a3-64e31022516d" containerID="f03364a41b69272215cc50a2ec670ad9728def8ef4f7c69a999316875457909e" exitCode=0 Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.891729 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e7c3bcb6-9ac5-4bae-b7a3-64e31022516d","Type":"ContainerDied","Data":"f03364a41b69272215cc50a2ec670ad9728def8ef4f7c69a999316875457909e"} Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.907449 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.921812 4946 scope.go:117] "RemoveContainer" containerID="248f52e63cb3f79285e7710f22227d97b7448dc724584e7cd5e9baedabcf32ea" Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.932375 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.939509 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.943048 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8mlh\" (UniqueName: \"kubernetes.io/projected/0cac59fd-5161-47c4-a5aa-2a268962215a-kube-api-access-f8mlh\") on node \"crc\" DevicePath \"\"" Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.943550 4946 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0cac59fd-5161-47c4-a5aa-2a268962215a-logs\") on node \"crc\" DevicePath \"\"" Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.943568 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cac59fd-5161-47c4-a5aa-2a268962215a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.943582 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cac59fd-5161-47c4-a5aa-2a268962215a-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.949139 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 28 08:53:06 crc kubenswrapper[4946]: E1128 08:53:06.949581 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbdcee07-b175-4410-b443-b80370342779" 
containerName="nova-metadata-metadata" Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.949597 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbdcee07-b175-4410-b443-b80370342779" containerName="nova-metadata-metadata" Nov 28 08:53:06 crc kubenswrapper[4946]: E1128 08:53:06.949617 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbdcee07-b175-4410-b443-b80370342779" containerName="nova-metadata-log" Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.949626 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbdcee07-b175-4410-b443-b80370342779" containerName="nova-metadata-log" Nov 28 08:53:06 crc kubenswrapper[4946]: E1128 08:53:06.949651 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cac59fd-5161-47c4-a5aa-2a268962215a" containerName="nova-api-log" Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.949656 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cac59fd-5161-47c4-a5aa-2a268962215a" containerName="nova-api-log" Nov 28 08:53:06 crc kubenswrapper[4946]: E1128 08:53:06.949672 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cac59fd-5161-47c4-a5aa-2a268962215a" containerName="nova-api-api" Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.949678 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cac59fd-5161-47c4-a5aa-2a268962215a" containerName="nova-api-api" Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.949844 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbdcee07-b175-4410-b443-b80370342779" containerName="nova-metadata-log" Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.949859 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cac59fd-5161-47c4-a5aa-2a268962215a" containerName="nova-api-api" Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.949872 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cac59fd-5161-47c4-a5aa-2a268962215a" containerName="nova-api-log" Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.949885 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbdcee07-b175-4410-b443-b80370342779" containerName="nova-metadata-metadata" Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.952012 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.959015 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.971853 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.983083 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 28 08:53:06 crc kubenswrapper[4946]: I1128 08:53:06.992208 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.000619 4946 scope.go:117] "RemoveContainer" containerID="bf92841c98eae6e14a784992b1bb49c0da15b13814f60cb6d38bb2c2e55c9575" Nov 28 08:53:07 crc kubenswrapper[4946]: E1128 08:53:07.001175 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf92841c98eae6e14a784992b1bb49c0da15b13814f60cb6d38bb2c2e55c9575\": container with ID starting with bf92841c98eae6e14a784992b1bb49c0da15b13814f60cb6d38bb2c2e55c9575 not found: ID does not exist" containerID="bf92841c98eae6e14a784992b1bb49c0da15b13814f60cb6d38bb2c2e55c9575" Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.001218 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf92841c98eae6e14a784992b1bb49c0da15b13814f60cb6d38bb2c2e55c9575"} err="failed to get container status \"bf92841c98eae6e14a784992b1bb49c0da15b13814f60cb6d38bb2c2e55c9575\": rpc error: code = NotFound desc = could not find container \"bf92841c98eae6e14a784992b1bb49c0da15b13814f60cb6d38bb2c2e55c9575\": container with ID starting with bf92841c98eae6e14a784992b1bb49c0da15b13814f60cb6d38bb2c2e55c9575 not found: ID does not exist" Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.001249 4946 scope.go:117] "RemoveContainer" containerID="248f52e63cb3f79285e7710f22227d97b7448dc724584e7cd5e9baedabcf32ea" Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.002946 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.004735 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 08:53:07 crc kubenswrapper[4946]: E1128 08:53:07.004760 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"248f52e63cb3f79285e7710f22227d97b7448dc724584e7cd5e9baedabcf32ea\": container with ID starting with 248f52e63cb3f79285e7710f22227d97b7448dc724584e7cd5e9baedabcf32ea not found: ID does not exist" containerID="248f52e63cb3f79285e7710f22227d97b7448dc724584e7cd5e9baedabcf32ea" Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.004795 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"248f52e63cb3f79285e7710f22227d97b7448dc724584e7cd5e9baedabcf32ea"} err="failed to get container status \"248f52e63cb3f79285e7710f22227d97b7448dc724584e7cd5e9baedabcf32ea\": rpc error: code = NotFound desc = could not find container \"248f52e63cb3f79285e7710f22227d97b7448dc724584e7cd5e9baedabcf32ea\": container with ID starting with 248f52e63cb3f79285e7710f22227d97b7448dc724584e7cd5e9baedabcf32ea not found: ID does not exist" Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.004839 4946 scope.go:117] "RemoveContainer" containerID="bf92841c98eae6e14a784992b1bb49c0da15b13814f60cb6d38bb2c2e55c9575" Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.007113 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf92841c98eae6e14a784992b1bb49c0da15b13814f60cb6d38bb2c2e55c9575"} err="failed to get container status \"bf92841c98eae6e14a784992b1bb49c0da15b13814f60cb6d38bb2c2e55c9575\": rpc error: code = NotFound desc = could not find container \"bf92841c98eae6e14a784992b1bb49c0da15b13814f60cb6d38bb2c2e55c9575\": container with ID starting with bf92841c98eae6e14a784992b1bb49c0da15b13814f60cb6d38bb2c2e55c9575 not found: ID does not exist" Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.007147 4946 scope.go:117] "RemoveContainer" containerID="248f52e63cb3f79285e7710f22227d97b7448dc724584e7cd5e9baedabcf32ea" Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.007638 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"248f52e63cb3f79285e7710f22227d97b7448dc724584e7cd5e9baedabcf32ea"} err="failed to get container status \"248f52e63cb3f79285e7710f22227d97b7448dc724584e7cd5e9baedabcf32ea\": rpc error: code = NotFound desc = could not find container \"248f52e63cb3f79285e7710f22227d97b7448dc724584e7cd5e9baedabcf32ea\": container with ID starting with 248f52e63cb3f79285e7710f22227d97b7448dc724584e7cd5e9baedabcf32ea not found: ID does not exist" Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.007666 4946 scope.go:117] "RemoveContainer" containerID="2f4824f7998447412b929f4958504278353350c579d1469c3671acbff004e14e" Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.007882 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.011541 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.035509 4946 scope.go:117] "RemoveContainer" containerID="4a357200b73160dd9d102ab83d63605be106616f5b86a066e6e4403add3c0467" Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.071481 4946 scope.go:117] "RemoveContainer" containerID="2f4824f7998447412b929f4958504278353350c579d1469c3671acbff004e14e" Nov 28 08:53:07 crc 
kubenswrapper[4946]: E1128 08:53:07.071846 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f4824f7998447412b929f4958504278353350c579d1469c3671acbff004e14e\": container with ID starting with 2f4824f7998447412b929f4958504278353350c579d1469c3671acbff004e14e not found: ID does not exist" containerID="2f4824f7998447412b929f4958504278353350c579d1469c3671acbff004e14e" Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.071887 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f4824f7998447412b929f4958504278353350c579d1469c3671acbff004e14e"} err="failed to get container status \"2f4824f7998447412b929f4958504278353350c579d1469c3671acbff004e14e\": rpc error: code = NotFound desc = could not find container \"2f4824f7998447412b929f4958504278353350c579d1469c3671acbff004e14e\": container with ID starting with 2f4824f7998447412b929f4958504278353350c579d1469c3671acbff004e14e not found: ID does not exist" Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.071921 4946 scope.go:117] "RemoveContainer" containerID="4a357200b73160dd9d102ab83d63605be106616f5b86a066e6e4403add3c0467" Nov 28 08:53:07 crc kubenswrapper[4946]: E1128 08:53:07.072329 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a357200b73160dd9d102ab83d63605be106616f5b86a066e6e4403add3c0467\": container with ID starting with 4a357200b73160dd9d102ab83d63605be106616f5b86a066e6e4403add3c0467 not found: ID does not exist" containerID="4a357200b73160dd9d102ab83d63605be106616f5b86a066e6e4403add3c0467" Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.072356 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a357200b73160dd9d102ab83d63605be106616f5b86a066e6e4403add3c0467"} err="failed to get container status \"4a357200b73160dd9d102ab83d63605be106616f5b86a066e6e4403add3c0467\": rpc error: code = NotFound desc = could not find container \"4a357200b73160dd9d102ab83d63605be106616f5b86a066e6e4403add3c0467\": container with ID starting with 4a357200b73160dd9d102ab83d63605be106616f5b86a066e6e4403add3c0467 not found: ID does not exist" Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.072374 4946 scope.go:117] "RemoveContainer" containerID="2f4824f7998447412b929f4958504278353350c579d1469c3671acbff004e14e" Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.072657 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f4824f7998447412b929f4958504278353350c579d1469c3671acbff004e14e"} err="failed to get container status \"2f4824f7998447412b929f4958504278353350c579d1469c3671acbff004e14e\": rpc error: code = NotFound desc = could not find container \"2f4824f7998447412b929f4958504278353350c579d1469c3671acbff004e14e\": container with ID starting with 2f4824f7998447412b929f4958504278353350c579d1469c3671acbff004e14e not found: ID does not exist" Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.072682 4946 scope.go:117] "RemoveContainer" containerID="4a357200b73160dd9d102ab83d63605be106616f5b86a066e6e4403add3c0467" Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.072939 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a357200b73160dd9d102ab83d63605be106616f5b86a066e6e4403add3c0467"} err="failed to get container status 
\"4a357200b73160dd9d102ab83d63605be106616f5b86a066e6e4403add3c0467\": rpc error: code = NotFound desc = could not find container \"4a357200b73160dd9d102ab83d63605be106616f5b86a066e6e4403add3c0467\": container with ID starting with 4a357200b73160dd9d102ab83d63605be106616f5b86a066e6e4403add3c0467 not found: ID does not exist" Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.084786 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.146623 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10dc02b8-ef84-40dd-8eca-07afbf6bfc6a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"10dc02b8-ef84-40dd-8eca-07afbf6bfc6a\") " pod="openstack/nova-metadata-0" Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.146656 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1caea475-b49d-4ac7-8036-80a59d8fa7d6-config-data\") pod \"nova-api-0\" (UID: \"1caea475-b49d-4ac7-8036-80a59d8fa7d6\") " pod="openstack/nova-api-0" Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.146679 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10dc02b8-ef84-40dd-8eca-07afbf6bfc6a-config-data\") pod \"nova-metadata-0\" (UID: \"10dc02b8-ef84-40dd-8eca-07afbf6bfc6a\") " pod="openstack/nova-metadata-0" Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.147003 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10dc02b8-ef84-40dd-8eca-07afbf6bfc6a-logs\") pod \"nova-metadata-0\" (UID: \"10dc02b8-ef84-40dd-8eca-07afbf6bfc6a\") " pod="openstack/nova-metadata-0" Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.147236 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1caea475-b49d-4ac7-8036-80a59d8fa7d6-logs\") pod \"nova-api-0\" (UID: \"1caea475-b49d-4ac7-8036-80a59d8fa7d6\") " pod="openstack/nova-api-0" Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.147301 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzm4g\" (UniqueName: \"kubernetes.io/projected/1caea475-b49d-4ac7-8036-80a59d8fa7d6-kube-api-access-hzm4g\") pod \"nova-api-0\" (UID: \"1caea475-b49d-4ac7-8036-80a59d8fa7d6\") " pod="openstack/nova-api-0" Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.147330 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1caea475-b49d-4ac7-8036-80a59d8fa7d6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1caea475-b49d-4ac7-8036-80a59d8fa7d6\") " pod="openstack/nova-api-0" Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.147394 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cvrz\" (UniqueName: \"kubernetes.io/projected/10dc02b8-ef84-40dd-8eca-07afbf6bfc6a-kube-api-access-4cvrz\") pod \"nova-metadata-0\" (UID: \"10dc02b8-ef84-40dd-8eca-07afbf6bfc6a\") " pod="openstack/nova-metadata-0" Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 
08:53:07.248789 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxr5b\" (UniqueName: \"kubernetes.io/projected/e7c3bcb6-9ac5-4bae-b7a3-64e31022516d-kube-api-access-mxr5b\") pod \"e7c3bcb6-9ac5-4bae-b7a3-64e31022516d\" (UID: \"e7c3bcb6-9ac5-4bae-b7a3-64e31022516d\") " Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.249116 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7c3bcb6-9ac5-4bae-b7a3-64e31022516d-combined-ca-bundle\") pod \"e7c3bcb6-9ac5-4bae-b7a3-64e31022516d\" (UID: \"e7c3bcb6-9ac5-4bae-b7a3-64e31022516d\") " Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.249249 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7c3bcb6-9ac5-4bae-b7a3-64e31022516d-config-data\") pod \"e7c3bcb6-9ac5-4bae-b7a3-64e31022516d\" (UID: \"e7c3bcb6-9ac5-4bae-b7a3-64e31022516d\") " Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.249670 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1caea475-b49d-4ac7-8036-80a59d8fa7d6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1caea475-b49d-4ac7-8036-80a59d8fa7d6\") " pod="openstack/nova-api-0" Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.249784 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cvrz\" (UniqueName: \"kubernetes.io/projected/10dc02b8-ef84-40dd-8eca-07afbf6bfc6a-kube-api-access-4cvrz\") pod \"nova-metadata-0\" (UID: \"10dc02b8-ef84-40dd-8eca-07afbf6bfc6a\") " pod="openstack/nova-metadata-0" Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.249905 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10dc02b8-ef84-40dd-8eca-07afbf6bfc6a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"10dc02b8-ef84-40dd-8eca-07afbf6bfc6a\") " pod="openstack/nova-metadata-0" Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.250029 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1caea475-b49d-4ac7-8036-80a59d8fa7d6-config-data\") pod \"nova-api-0\" (UID: \"1caea475-b49d-4ac7-8036-80a59d8fa7d6\") " pod="openstack/nova-api-0" Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.250119 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10dc02b8-ef84-40dd-8eca-07afbf6bfc6a-config-data\") pod \"nova-metadata-0\" (UID: \"10dc02b8-ef84-40dd-8eca-07afbf6bfc6a\") " pod="openstack/nova-metadata-0" Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.250282 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10dc02b8-ef84-40dd-8eca-07afbf6bfc6a-logs\") pod \"nova-metadata-0\" (UID: \"10dc02b8-ef84-40dd-8eca-07afbf6bfc6a\") " pod="openstack/nova-metadata-0" Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.250432 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1caea475-b49d-4ac7-8036-80a59d8fa7d6-logs\") pod \"nova-api-0\" (UID: \"1caea475-b49d-4ac7-8036-80a59d8fa7d6\") " pod="openstack/nova-api-0" Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 
08:53:07.250567 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzm4g\" (UniqueName: \"kubernetes.io/projected/1caea475-b49d-4ac7-8036-80a59d8fa7d6-kube-api-access-hzm4g\") pod \"nova-api-0\" (UID: \"1caea475-b49d-4ac7-8036-80a59d8fa7d6\") " pod="openstack/nova-api-0" Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.251900 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10dc02b8-ef84-40dd-8eca-07afbf6bfc6a-logs\") pod \"nova-metadata-0\" (UID: \"10dc02b8-ef84-40dd-8eca-07afbf6bfc6a\") " pod="openstack/nova-metadata-0" Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.252147 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1caea475-b49d-4ac7-8036-80a59d8fa7d6-logs\") pod \"nova-api-0\" (UID: \"1caea475-b49d-4ac7-8036-80a59d8fa7d6\") " pod="openstack/nova-api-0" Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.253740 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7c3bcb6-9ac5-4bae-b7a3-64e31022516d-kube-api-access-mxr5b" (OuterVolumeSpecName: "kube-api-access-mxr5b") pod "e7c3bcb6-9ac5-4bae-b7a3-64e31022516d" (UID: "e7c3bcb6-9ac5-4bae-b7a3-64e31022516d"). InnerVolumeSpecName "kube-api-access-mxr5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.256425 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1caea475-b49d-4ac7-8036-80a59d8fa7d6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1caea475-b49d-4ac7-8036-80a59d8fa7d6\") " pod="openstack/nova-api-0" Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.256433 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10dc02b8-ef84-40dd-8eca-07afbf6bfc6a-config-data\") pod \"nova-metadata-0\" (UID: \"10dc02b8-ef84-40dd-8eca-07afbf6bfc6a\") " pod="openstack/nova-metadata-0" Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.256922 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10dc02b8-ef84-40dd-8eca-07afbf6bfc6a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"10dc02b8-ef84-40dd-8eca-07afbf6bfc6a\") " pod="openstack/nova-metadata-0" Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.257869 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1caea475-b49d-4ac7-8036-80a59d8fa7d6-config-data\") pod \"nova-api-0\" (UID: \"1caea475-b49d-4ac7-8036-80a59d8fa7d6\") " pod="openstack/nova-api-0" Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.270170 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cvrz\" (UniqueName: \"kubernetes.io/projected/10dc02b8-ef84-40dd-8eca-07afbf6bfc6a-kube-api-access-4cvrz\") pod \"nova-metadata-0\" (UID: \"10dc02b8-ef84-40dd-8eca-07afbf6bfc6a\") " pod="openstack/nova-metadata-0" Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.272856 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzm4g\" (UniqueName: \"kubernetes.io/projected/1caea475-b49d-4ac7-8036-80a59d8fa7d6-kube-api-access-hzm4g\") pod \"nova-api-0\" (UID: \"1caea475-b49d-4ac7-8036-80a59d8fa7d6\") " 
pod="openstack/nova-api-0" Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.280059 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7c3bcb6-9ac5-4bae-b7a3-64e31022516d-config-data" (OuterVolumeSpecName: "config-data") pod "e7c3bcb6-9ac5-4bae-b7a3-64e31022516d" (UID: "e7c3bcb6-9ac5-4bae-b7a3-64e31022516d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.295418 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7c3bcb6-9ac5-4bae-b7a3-64e31022516d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e7c3bcb6-9ac5-4bae-b7a3-64e31022516d" (UID: "e7c3bcb6-9ac5-4bae-b7a3-64e31022516d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.305898 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.325209 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.352654 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7c3bcb6-9ac5-4bae-b7a3-64e31022516d-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.353166 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxr5b\" (UniqueName: \"kubernetes.io/projected/e7c3bcb6-9ac5-4bae-b7a3-64e31022516d-kube-api-access-mxr5b\") on node \"crc\" DevicePath \"\"" Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.353222 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7c3bcb6-9ac5-4bae-b7a3-64e31022516d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.859092 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.918706 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"0eaadce7-5bc4-4f66-ba4c-2c4b11cb0fec","Type":"ContainerStarted","Data":"1548764c7bcf6b716ed869f5f9ddf284c9c3e888cfe8d038f8e9bedda9eb85d1"} Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.918939 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"0eaadce7-5bc4-4f66-ba4c-2c4b11cb0fec","Type":"ContainerStarted","Data":"daa3098a4cc87ace905a9dbf80dc89625dac7cafbebcdd03c613a3b43cf3e693"} Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.921261 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.924973 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"10dc02b8-ef84-40dd-8eca-07afbf6bfc6a","Type":"ContainerStarted","Data":"0383bdff11a1755545ea2e465c38d0cb9faccaea3869f879f5effde4031da2ac"} Nov 28 08:53:07 crc kubenswrapper[4946]: W1128 08:53:07.926035 4946 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1caea475_b49d_4ac7_8036_80a59d8fa7d6.slice/crio-2237aec17d10612c28e7ecfc11cf36d1264fedb9c4ebede9968ec362a407c72c WatchSource:0}: Error finding container 2237aec17d10612c28e7ecfc11cf36d1264fedb9c4ebede9968ec362a407c72c: Status 404 returned error can't find the container with id 2237aec17d10612c28e7ecfc11cf36d1264fedb9c4ebede9968ec362a407c72c Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.927088 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e7c3bcb6-9ac5-4bae-b7a3-64e31022516d","Type":"ContainerDied","Data":"ae7d27bdef6cbb14702e30a0335bd9fc0f2b89577847a7a982087c542c04d1d7"} Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.927143 4946 scope.go:117] "RemoveContainer" containerID="f03364a41b69272215cc50a2ec670ad9728def8ef4f7c69a999316875457909e" Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.927192 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.927743 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.947156 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.947134357 podStartE2EDuration="2.947134357s" podCreationTimestamp="2025-11-28 08:53:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 08:53:07.938213266 +0000 UTC m=+7242.316278417" watchObservedRunningTime="2025-11-28 08:53:07.947134357 +0000 UTC m=+7242.325199478" Nov 28 08:53:07 crc kubenswrapper[4946]: I1128 08:53:07.994094 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 08:53:08 crc kubenswrapper[4946]: I1128 08:53:08.018520 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cac59fd-5161-47c4-a5aa-2a268962215a" path="/var/lib/kubelet/pods/0cac59fd-5161-47c4-a5aa-2a268962215a/volumes" Nov 28 08:53:08 crc kubenswrapper[4946]: I1128 08:53:08.019541 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbdcee07-b175-4410-b443-b80370342779" path="/var/lib/kubelet/pods/dbdcee07-b175-4410-b443-b80370342779/volumes" Nov 28 08:53:08 crc kubenswrapper[4946]: I1128 08:53:08.021389 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 08:53:08 crc kubenswrapper[4946]: I1128 08:53:08.030233 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 08:53:08 crc kubenswrapper[4946]: E1128 08:53:08.030871 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7c3bcb6-9ac5-4bae-b7a3-64e31022516d" containerName="nova-scheduler-scheduler" Nov 28 08:53:08 crc kubenswrapper[4946]: I1128 08:53:08.030959 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7c3bcb6-9ac5-4bae-b7a3-64e31022516d" containerName="nova-scheduler-scheduler" Nov 28 08:53:08 crc kubenswrapper[4946]: I1128 08:53:08.031258 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7c3bcb6-9ac5-4bae-b7a3-64e31022516d" containerName="nova-scheduler-scheduler" Nov 28 08:53:08 crc kubenswrapper[4946]: I1128 08:53:08.032087 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 08:53:08 crc kubenswrapper[4946]: I1128 08:53:08.036067 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 28 08:53:08 crc kubenswrapper[4946]: I1128 08:53:08.042202 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 08:53:08 crc kubenswrapper[4946]: I1128 08:53:08.092364 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53807ecb-fa2d-4cab-a27c-1cbea80b44c3-config-data\") pod \"nova-scheduler-0\" (UID: \"53807ecb-fa2d-4cab-a27c-1cbea80b44c3\") " pod="openstack/nova-scheduler-0" Nov 28 08:53:08 crc kubenswrapper[4946]: I1128 08:53:08.092420 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53807ecb-fa2d-4cab-a27c-1cbea80b44c3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"53807ecb-fa2d-4cab-a27c-1cbea80b44c3\") " pod="openstack/nova-scheduler-0" Nov 28 08:53:08 crc kubenswrapper[4946]: I1128 08:53:08.092480 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fhlp\" (UniqueName: \"kubernetes.io/projected/53807ecb-fa2d-4cab-a27c-1cbea80b44c3-kube-api-access-6fhlp\") pod \"nova-scheduler-0\" (UID: \"53807ecb-fa2d-4cab-a27c-1cbea80b44c3\") " pod="openstack/nova-scheduler-0" Nov 28 08:53:08 crc kubenswrapper[4946]: I1128 08:53:08.194590 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53807ecb-fa2d-4cab-a27c-1cbea80b44c3-config-data\") pod \"nova-scheduler-0\" (UID: \"53807ecb-fa2d-4cab-a27c-1cbea80b44c3\") " pod="openstack/nova-scheduler-0" Nov 28 08:53:08 crc kubenswrapper[4946]: I1128 08:53:08.194732 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53807ecb-fa2d-4cab-a27c-1cbea80b44c3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"53807ecb-fa2d-4cab-a27c-1cbea80b44c3\") " pod="openstack/nova-scheduler-0" Nov 28 08:53:08 crc kubenswrapper[4946]: I1128 08:53:08.194843 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fhlp\" (UniqueName: \"kubernetes.io/projected/53807ecb-fa2d-4cab-a27c-1cbea80b44c3-kube-api-access-6fhlp\") pod \"nova-scheduler-0\" (UID: \"53807ecb-fa2d-4cab-a27c-1cbea80b44c3\") " pod="openstack/nova-scheduler-0" Nov 28 08:53:08 crc kubenswrapper[4946]: I1128 08:53:08.198498 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53807ecb-fa2d-4cab-a27c-1cbea80b44c3-config-data\") pod \"nova-scheduler-0\" (UID: \"53807ecb-fa2d-4cab-a27c-1cbea80b44c3\") " pod="openstack/nova-scheduler-0" Nov 28 08:53:08 crc kubenswrapper[4946]: I1128 08:53:08.199863 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53807ecb-fa2d-4cab-a27c-1cbea80b44c3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"53807ecb-fa2d-4cab-a27c-1cbea80b44c3\") " pod="openstack/nova-scheduler-0" Nov 28 08:53:08 crc kubenswrapper[4946]: I1128 08:53:08.221273 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 28 08:53:08 
crc kubenswrapper[4946]: I1128 08:53:08.224332 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fhlp\" (UniqueName: \"kubernetes.io/projected/53807ecb-fa2d-4cab-a27c-1cbea80b44c3-kube-api-access-6fhlp\") pod \"nova-scheduler-0\" (UID: \"53807ecb-fa2d-4cab-a27c-1cbea80b44c3\") " pod="openstack/nova-scheduler-0" Nov 28 08:53:08 crc kubenswrapper[4946]: I1128 08:53:08.238930 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 28 08:53:08 crc kubenswrapper[4946]: I1128 08:53:08.361857 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 08:53:08 crc kubenswrapper[4946]: I1128 08:53:08.409769 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b748879ff-rm7x4" Nov 28 08:53:08 crc kubenswrapper[4946]: I1128 08:53:08.486923 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c5f444ccf-zm87m"] Nov 28 08:53:08 crc kubenswrapper[4946]: I1128 08:53:08.496703 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c5f444ccf-zm87m" podUID="23f2a6f0-0a54-4926-8b15-e05851d7f142" containerName="dnsmasq-dns" containerID="cri-o://c778bb7d442bce7c504805b7ed9c3630d155ec8ed86fa0f7fc5b4539f42ed3a2" gracePeriod=10 Nov 28 08:53:08 crc kubenswrapper[4946]: W1128 08:53:08.928323 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53807ecb_fa2d_4cab_a27c_1cbea80b44c3.slice/crio-9c93dcb648558d21d2fae84277abb17cd38d877d00de8cd3f5d9a7ac1e5c9e52 WatchSource:0}: Error finding container 9c93dcb648558d21d2fae84277abb17cd38d877d00de8cd3f5d9a7ac1e5c9e52: Status 404 returned error can't find the container with id 9c93dcb648558d21d2fae84277abb17cd38d877d00de8cd3f5d9a7ac1e5c9e52 Nov 28 08:53:08 crc kubenswrapper[4946]: I1128 08:53:08.931886 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 08:53:08 crc kubenswrapper[4946]: I1128 08:53:08.942316 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1caea475-b49d-4ac7-8036-80a59d8fa7d6","Type":"ContainerStarted","Data":"a3db228c7d261a2641f0b413610ff6ba7113768a1dc51af33ea05b80ff49c90d"} Nov 28 08:53:08 crc kubenswrapper[4946]: I1128 08:53:08.942349 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1caea475-b49d-4ac7-8036-80a59d8fa7d6","Type":"ContainerStarted","Data":"370599193f0331027766a833e00bdc25cafe0f66005a71f9ff8349f437daef62"} Nov 28 08:53:08 crc kubenswrapper[4946]: I1128 08:53:08.942359 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1caea475-b49d-4ac7-8036-80a59d8fa7d6","Type":"ContainerStarted","Data":"2237aec17d10612c28e7ecfc11cf36d1264fedb9c4ebede9968ec362a407c72c"} Nov 28 08:53:08 crc kubenswrapper[4946]: I1128 08:53:08.948076 4946 generic.go:334] "Generic (PLEG): container finished" podID="23f2a6f0-0a54-4926-8b15-e05851d7f142" containerID="c778bb7d442bce7c504805b7ed9c3630d155ec8ed86fa0f7fc5b4539f42ed3a2" exitCode=0 Nov 28 08:53:08 crc kubenswrapper[4946]: I1128 08:53:08.948138 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5f444ccf-zm87m" 
event={"ID":"23f2a6f0-0a54-4926-8b15-e05851d7f142","Type":"ContainerDied","Data":"c778bb7d442bce7c504805b7ed9c3630d155ec8ed86fa0f7fc5b4539f42ed3a2"} Nov 28 08:53:08 crc kubenswrapper[4946]: I1128 08:53:08.948163 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5f444ccf-zm87m" event={"ID":"23f2a6f0-0a54-4926-8b15-e05851d7f142","Type":"ContainerDied","Data":"de5282fe949706cf2eae4f774504be5cdae3aed1e2164900fd77ccc92d6fd1ce"} Nov 28 08:53:08 crc kubenswrapper[4946]: I1128 08:53:08.948173 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de5282fe949706cf2eae4f774504be5cdae3aed1e2164900fd77ccc92d6fd1ce" Nov 28 08:53:08 crc kubenswrapper[4946]: I1128 08:53:08.950335 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"10dc02b8-ef84-40dd-8eca-07afbf6bfc6a","Type":"ContainerStarted","Data":"5207802eb333f4710080c106c7a3dd13b70bd735dd32e9a90dd84b95dbcd18e4"} Nov 28 08:53:08 crc kubenswrapper[4946]: I1128 08:53:08.950365 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"10dc02b8-ef84-40dd-8eca-07afbf6bfc6a","Type":"ContainerStarted","Data":"71e20ab7db04fc7fd1f9003ebfcb974bde978e30dde3ae948d877adbdc28fb33"} Nov 28 08:53:08 crc kubenswrapper[4946]: I1128 08:53:08.962328 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 28 08:53:08 crc kubenswrapper[4946]: I1128 08:53:08.963836 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c5f444ccf-zm87m" Nov 28 08:53:08 crc kubenswrapper[4946]: I1128 08:53:08.966874 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.966860342 podStartE2EDuration="2.966860342s" podCreationTimestamp="2025-11-28 08:53:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 08:53:08.96476035 +0000 UTC m=+7243.342825471" watchObservedRunningTime="2025-11-28 08:53:08.966860342 +0000 UTC m=+7243.344925463" Nov 28 08:53:09 crc kubenswrapper[4946]: I1128 08:53:09.031288 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p95q5\" (UniqueName: \"kubernetes.io/projected/23f2a6f0-0a54-4926-8b15-e05851d7f142-kube-api-access-p95q5\") pod \"23f2a6f0-0a54-4926-8b15-e05851d7f142\" (UID: \"23f2a6f0-0a54-4926-8b15-e05851d7f142\") " Nov 28 08:53:09 crc kubenswrapper[4946]: I1128 08:53:09.031352 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23f2a6f0-0a54-4926-8b15-e05851d7f142-config\") pod \"23f2a6f0-0a54-4926-8b15-e05851d7f142\" (UID: \"23f2a6f0-0a54-4926-8b15-e05851d7f142\") " Nov 28 08:53:09 crc kubenswrapper[4946]: I1128 08:53:09.031416 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23f2a6f0-0a54-4926-8b15-e05851d7f142-ovsdbserver-nb\") pod \"23f2a6f0-0a54-4926-8b15-e05851d7f142\" (UID: \"23f2a6f0-0a54-4926-8b15-e05851d7f142\") " Nov 28 08:53:09 crc kubenswrapper[4946]: I1128 08:53:09.031515 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23f2a6f0-0a54-4926-8b15-e05851d7f142-dns-svc\") pod \"23f2a6f0-0a54-4926-8b15-e05851d7f142\" 
(UID: \"23f2a6f0-0a54-4926-8b15-e05851d7f142\") " Nov 28 08:53:09 crc kubenswrapper[4946]: I1128 08:53:09.044227 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.044212178 podStartE2EDuration="3.044212178s" podCreationTimestamp="2025-11-28 08:53:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 08:53:09.040593859 +0000 UTC m=+7243.418658980" watchObservedRunningTime="2025-11-28 08:53:09.044212178 +0000 UTC m=+7243.422277289" Nov 28 08:53:09 crc kubenswrapper[4946]: I1128 08:53:09.048710 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23f2a6f0-0a54-4926-8b15-e05851d7f142-kube-api-access-p95q5" (OuterVolumeSpecName: "kube-api-access-p95q5") pod "23f2a6f0-0a54-4926-8b15-e05851d7f142" (UID: "23f2a6f0-0a54-4926-8b15-e05851d7f142"). InnerVolumeSpecName "kube-api-access-p95q5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:53:09 crc kubenswrapper[4946]: I1128 08:53:09.099363 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23f2a6f0-0a54-4926-8b15-e05851d7f142-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "23f2a6f0-0a54-4926-8b15-e05851d7f142" (UID: "23f2a6f0-0a54-4926-8b15-e05851d7f142"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:53:09 crc kubenswrapper[4946]: I1128 08:53:09.100874 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23f2a6f0-0a54-4926-8b15-e05851d7f142-config" (OuterVolumeSpecName: "config") pod "23f2a6f0-0a54-4926-8b15-e05851d7f142" (UID: "23f2a6f0-0a54-4926-8b15-e05851d7f142"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:53:09 crc kubenswrapper[4946]: I1128 08:53:09.111000 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23f2a6f0-0a54-4926-8b15-e05851d7f142-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "23f2a6f0-0a54-4926-8b15-e05851d7f142" (UID: "23f2a6f0-0a54-4926-8b15-e05851d7f142"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:53:09 crc kubenswrapper[4946]: I1128 08:53:09.140188 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23f2a6f0-0a54-4926-8b15-e05851d7f142-ovsdbserver-sb\") pod \"23f2a6f0-0a54-4926-8b15-e05851d7f142\" (UID: \"23f2a6f0-0a54-4926-8b15-e05851d7f142\") " Nov 28 08:53:09 crc kubenswrapper[4946]: I1128 08:53:09.140767 4946 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23f2a6f0-0a54-4926-8b15-e05851d7f142-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 08:53:09 crc kubenswrapper[4946]: I1128 08:53:09.140788 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p95q5\" (UniqueName: \"kubernetes.io/projected/23f2a6f0-0a54-4926-8b15-e05851d7f142-kube-api-access-p95q5\") on node \"crc\" DevicePath \"\"" Nov 28 08:53:09 crc kubenswrapper[4946]: I1128 08:53:09.140806 4946 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23f2a6f0-0a54-4926-8b15-e05851d7f142-config\") on node \"crc\" DevicePath \"\"" Nov 28 08:53:09 crc kubenswrapper[4946]: I1128 08:53:09.140815 4946 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23f2a6f0-0a54-4926-8b15-e05851d7f142-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 08:53:09 crc kubenswrapper[4946]: I1128 08:53:09.194591 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23f2a6f0-0a54-4926-8b15-e05851d7f142-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "23f2a6f0-0a54-4926-8b15-e05851d7f142" (UID: "23f2a6f0-0a54-4926-8b15-e05851d7f142"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:53:09 crc kubenswrapper[4946]: I1128 08:53:09.242567 4946 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23f2a6f0-0a54-4926-8b15-e05851d7f142-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 08:53:09 crc kubenswrapper[4946]: I1128 08:53:09.961253 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"53807ecb-fa2d-4cab-a27c-1cbea80b44c3","Type":"ContainerStarted","Data":"5826e0d51a4d3c8db1dad6a3d3b3fcca079213cbc8badcf69a4dce74ac1c728e"} Nov 28 08:53:09 crc kubenswrapper[4946]: I1128 08:53:09.961312 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"53807ecb-fa2d-4cab-a27c-1cbea80b44c3","Type":"ContainerStarted","Data":"9c93dcb648558d21d2fae84277abb17cd38d877d00de8cd3f5d9a7ac1e5c9e52"} Nov 28 08:53:09 crc kubenswrapper[4946]: I1128 08:53:09.961344 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c5f444ccf-zm87m" Nov 28 08:53:09 crc kubenswrapper[4946]: I1128 08:53:09.992516 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.992499093 podStartE2EDuration="2.992499093s" podCreationTimestamp="2025-11-28 08:53:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 08:53:09.990062263 +0000 UTC m=+7244.368127394" watchObservedRunningTime="2025-11-28 08:53:09.992499093 +0000 UTC m=+7244.370564214" Nov 28 08:53:10 crc kubenswrapper[4946]: I1128 08:53:10.016132 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7c3bcb6-9ac5-4bae-b7a3-64e31022516d" path="/var/lib/kubelet/pods/e7c3bcb6-9ac5-4bae-b7a3-64e31022516d/volumes" Nov 28 08:53:10 crc kubenswrapper[4946]: I1128 08:53:10.028476 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c5f444ccf-zm87m"] Nov 28 08:53:10 crc kubenswrapper[4946]: I1128 08:53:10.044723 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c5f444ccf-zm87m"] Nov 28 08:53:12 crc kubenswrapper[4946]: I1128 08:53:12.000347 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23f2a6f0-0a54-4926-8b15-e05851d7f142" path="/var/lib/kubelet/pods/23f2a6f0-0a54-4926-8b15-e05851d7f142/volumes" Nov 28 08:53:12 crc kubenswrapper[4946]: I1128 08:53:12.325902 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 28 08:53:12 crc kubenswrapper[4946]: I1128 08:53:12.325987 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 28 08:53:13 crc kubenswrapper[4946]: I1128 08:53:13.362529 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 28 08:53:16 crc kubenswrapper[4946]: I1128 08:53:16.320388 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 28 08:53:16 crc kubenswrapper[4946]: I1128 08:53:16.882885 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-62q66"] Nov 28 08:53:16 crc kubenswrapper[4946]: E1128 08:53:16.883318 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23f2a6f0-0a54-4926-8b15-e05851d7f142" containerName="dnsmasq-dns" Nov 28 08:53:16 crc kubenswrapper[4946]: I1128 08:53:16.883334 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="23f2a6f0-0a54-4926-8b15-e05851d7f142" containerName="dnsmasq-dns" Nov 28 08:53:16 crc kubenswrapper[4946]: E1128 08:53:16.883366 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23f2a6f0-0a54-4926-8b15-e05851d7f142" containerName="init" Nov 28 08:53:16 crc kubenswrapper[4946]: I1128 08:53:16.883372 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="23f2a6f0-0a54-4926-8b15-e05851d7f142" containerName="init" Nov 28 08:53:16 crc kubenswrapper[4946]: I1128 08:53:16.883576 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="23f2a6f0-0a54-4926-8b15-e05851d7f142" containerName="dnsmasq-dns" Nov 28 08:53:16 crc kubenswrapper[4946]: I1128 08:53:16.884238 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-62q66" Nov 28 08:53:16 crc kubenswrapper[4946]: I1128 08:53:16.886489 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Nov 28 08:53:16 crc kubenswrapper[4946]: I1128 08:53:16.887580 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Nov 28 08:53:16 crc kubenswrapper[4946]: I1128 08:53:16.901253 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-62q66"] Nov 28 08:53:16 crc kubenswrapper[4946]: I1128 08:53:16.987265 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ece55576-ba44-491f-8be3-84adf5c7f1c4-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-62q66\" (UID: \"ece55576-ba44-491f-8be3-84adf5c7f1c4\") " pod="openstack/nova-cell1-cell-mapping-62q66" Nov 28 08:53:16 crc kubenswrapper[4946]: I1128 08:53:16.987379 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmj55\" (UniqueName: \"kubernetes.io/projected/ece55576-ba44-491f-8be3-84adf5c7f1c4-kube-api-access-rmj55\") pod \"nova-cell1-cell-mapping-62q66\" (UID: \"ece55576-ba44-491f-8be3-84adf5c7f1c4\") " pod="openstack/nova-cell1-cell-mapping-62q66" Nov 28 08:53:16 crc kubenswrapper[4946]: I1128 08:53:16.987432 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ece55576-ba44-491f-8be3-84adf5c7f1c4-config-data\") pod \"nova-cell1-cell-mapping-62q66\" (UID: \"ece55576-ba44-491f-8be3-84adf5c7f1c4\") " pod="openstack/nova-cell1-cell-mapping-62q66" Nov 28 08:53:16 crc kubenswrapper[4946]: I1128 08:53:16.987715 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ece55576-ba44-491f-8be3-84adf5c7f1c4-scripts\") pod \"nova-cell1-cell-mapping-62q66\" (UID: \"ece55576-ba44-491f-8be3-84adf5c7f1c4\") " pod="openstack/nova-cell1-cell-mapping-62q66" Nov 28 08:53:17 crc kubenswrapper[4946]: I1128 08:53:17.090104 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ece55576-ba44-491f-8be3-84adf5c7f1c4-scripts\") pod \"nova-cell1-cell-mapping-62q66\" (UID: \"ece55576-ba44-491f-8be3-84adf5c7f1c4\") " pod="openstack/nova-cell1-cell-mapping-62q66" Nov 28 08:53:17 crc kubenswrapper[4946]: I1128 08:53:17.090330 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ece55576-ba44-491f-8be3-84adf5c7f1c4-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-62q66\" (UID: \"ece55576-ba44-491f-8be3-84adf5c7f1c4\") " pod="openstack/nova-cell1-cell-mapping-62q66" Nov 28 08:53:17 crc kubenswrapper[4946]: I1128 08:53:17.090413 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmj55\" (UniqueName: \"kubernetes.io/projected/ece55576-ba44-491f-8be3-84adf5c7f1c4-kube-api-access-rmj55\") pod \"nova-cell1-cell-mapping-62q66\" (UID: \"ece55576-ba44-491f-8be3-84adf5c7f1c4\") " pod="openstack/nova-cell1-cell-mapping-62q66" Nov 28 08:53:17 crc kubenswrapper[4946]: I1128 08:53:17.090545 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/ece55576-ba44-491f-8be3-84adf5c7f1c4-config-data\") pod \"nova-cell1-cell-mapping-62q66\" (UID: \"ece55576-ba44-491f-8be3-84adf5c7f1c4\") " pod="openstack/nova-cell1-cell-mapping-62q66" Nov 28 08:53:17 crc kubenswrapper[4946]: I1128 08:53:17.110777 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ece55576-ba44-491f-8be3-84adf5c7f1c4-config-data\") pod \"nova-cell1-cell-mapping-62q66\" (UID: \"ece55576-ba44-491f-8be3-84adf5c7f1c4\") " pod="openstack/nova-cell1-cell-mapping-62q66" Nov 28 08:53:17 crc kubenswrapper[4946]: I1128 08:53:17.110783 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ece55576-ba44-491f-8be3-84adf5c7f1c4-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-62q66\" (UID: \"ece55576-ba44-491f-8be3-84adf5c7f1c4\") " pod="openstack/nova-cell1-cell-mapping-62q66" Nov 28 08:53:17 crc kubenswrapper[4946]: I1128 08:53:17.120321 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ece55576-ba44-491f-8be3-84adf5c7f1c4-scripts\") pod \"nova-cell1-cell-mapping-62q66\" (UID: \"ece55576-ba44-491f-8be3-84adf5c7f1c4\") " pod="openstack/nova-cell1-cell-mapping-62q66" Nov 28 08:53:17 crc kubenswrapper[4946]: I1128 08:53:17.120812 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmj55\" (UniqueName: \"kubernetes.io/projected/ece55576-ba44-491f-8be3-84adf5c7f1c4-kube-api-access-rmj55\") pod \"nova-cell1-cell-mapping-62q66\" (UID: \"ece55576-ba44-491f-8be3-84adf5c7f1c4\") " pod="openstack/nova-cell1-cell-mapping-62q66" Nov 28 08:53:17 crc kubenswrapper[4946]: I1128 08:53:17.206609 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-62q66" Nov 28 08:53:17 crc kubenswrapper[4946]: I1128 08:53:17.307500 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 28 08:53:17 crc kubenswrapper[4946]: I1128 08:53:17.307739 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 28 08:53:17 crc kubenswrapper[4946]: I1128 08:53:17.326616 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 28 08:53:17 crc kubenswrapper[4946]: I1128 08:53:17.326945 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 28 08:53:17 crc kubenswrapper[4946]: I1128 08:53:17.764107 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-62q66"] Nov 28 08:53:17 crc kubenswrapper[4946]: W1128 08:53:17.768086 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podece55576_ba44_491f_8be3_84adf5c7f1c4.slice/crio-1fcd722afc27ad92b30b8818e2d7c4a3d524f0b9eaf917a22a8d56321230b08d WatchSource:0}: Error finding container 1fcd722afc27ad92b30b8818e2d7c4a3d524f0b9eaf917a22a8d56321230b08d: Status 404 returned error can't find the container with id 1fcd722afc27ad92b30b8818e2d7c4a3d524f0b9eaf917a22a8d56321230b08d Nov 28 08:53:18 crc kubenswrapper[4946]: I1128 08:53:18.070300 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-62q66" event={"ID":"ece55576-ba44-491f-8be3-84adf5c7f1c4","Type":"ContainerStarted","Data":"461239738a7a06da605c8471f4b12b183a6dc41331b73b3e3f3c177572bcb55b"} Nov 28 08:53:18 crc kubenswrapper[4946]: I1128 08:53:18.070642 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-62q66" event={"ID":"ece55576-ba44-491f-8be3-84adf5c7f1c4","Type":"ContainerStarted","Data":"1fcd722afc27ad92b30b8818e2d7c4a3d524f0b9eaf917a22a8d56321230b08d"} Nov 28 08:53:18 crc kubenswrapper[4946]: I1128 08:53:18.101010 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-62q66" podStartSLOduration=2.100992601 podStartE2EDuration="2.100992601s" podCreationTimestamp="2025-11-28 08:53:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 08:53:18.098606341 +0000 UTC m=+7252.476671452" watchObservedRunningTime="2025-11-28 08:53:18.100992601 +0000 UTC m=+7252.479057712" Nov 28 08:53:18 crc kubenswrapper[4946]: I1128 08:53:18.362410 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 28 08:53:18 crc kubenswrapper[4946]: I1128 08:53:18.389645 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 28 08:53:18 crc kubenswrapper[4946]: I1128 08:53:18.475702 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1caea475-b49d-4ac7-8036-80a59d8fa7d6" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.72:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 28 08:53:18 crc kubenswrapper[4946]: I1128 08:53:18.475737 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="10dc02b8-ef84-40dd-8eca-07afbf6bfc6a" 
containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.73:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 28 08:53:18 crc kubenswrapper[4946]: I1128 08:53:18.475746 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1caea475-b49d-4ac7-8036-80a59d8fa7d6" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.72:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 28 08:53:18 crc kubenswrapper[4946]: I1128 08:53:18.475702 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="10dc02b8-ef84-40dd-8eca-07afbf6bfc6a" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.73:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 28 08:53:19 crc kubenswrapper[4946]: I1128 08:53:19.121878 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 28 08:53:23 crc kubenswrapper[4946]: I1128 08:53:23.126728 4946 generic.go:334] "Generic (PLEG): container finished" podID="ece55576-ba44-491f-8be3-84adf5c7f1c4" containerID="461239738a7a06da605c8471f4b12b183a6dc41331b73b3e3f3c177572bcb55b" exitCode=0 Nov 28 08:53:23 crc kubenswrapper[4946]: I1128 08:53:23.126879 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-62q66" event={"ID":"ece55576-ba44-491f-8be3-84adf5c7f1c4","Type":"ContainerDied","Data":"461239738a7a06da605c8471f4b12b183a6dc41331b73b3e3f3c177572bcb55b"} Nov 28 08:53:24 crc kubenswrapper[4946]: I1128 08:53:24.546270 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-62q66" Nov 28 08:53:24 crc kubenswrapper[4946]: I1128 08:53:24.631677 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ece55576-ba44-491f-8be3-84adf5c7f1c4-scripts\") pod \"ece55576-ba44-491f-8be3-84adf5c7f1c4\" (UID: \"ece55576-ba44-491f-8be3-84adf5c7f1c4\") " Nov 28 08:53:24 crc kubenswrapper[4946]: I1128 08:53:24.631741 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ece55576-ba44-491f-8be3-84adf5c7f1c4-combined-ca-bundle\") pod \"ece55576-ba44-491f-8be3-84adf5c7f1c4\" (UID: \"ece55576-ba44-491f-8be3-84adf5c7f1c4\") " Nov 28 08:53:24 crc kubenswrapper[4946]: I1128 08:53:24.631834 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmj55\" (UniqueName: \"kubernetes.io/projected/ece55576-ba44-491f-8be3-84adf5c7f1c4-kube-api-access-rmj55\") pod \"ece55576-ba44-491f-8be3-84adf5c7f1c4\" (UID: \"ece55576-ba44-491f-8be3-84adf5c7f1c4\") " Nov 28 08:53:24 crc kubenswrapper[4946]: I1128 08:53:24.631932 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ece55576-ba44-491f-8be3-84adf5c7f1c4-config-data\") pod \"ece55576-ba44-491f-8be3-84adf5c7f1c4\" (UID: \"ece55576-ba44-491f-8be3-84adf5c7f1c4\") " Nov 28 08:53:24 crc kubenswrapper[4946]: I1128 08:53:24.638360 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ece55576-ba44-491f-8be3-84adf5c7f1c4-scripts" (OuterVolumeSpecName: "scripts") pod "ece55576-ba44-491f-8be3-84adf5c7f1c4" (UID: 
"ece55576-ba44-491f-8be3-84adf5c7f1c4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:53:24 crc kubenswrapper[4946]: I1128 08:53:24.638652 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ece55576-ba44-491f-8be3-84adf5c7f1c4-kube-api-access-rmj55" (OuterVolumeSpecName: "kube-api-access-rmj55") pod "ece55576-ba44-491f-8be3-84adf5c7f1c4" (UID: "ece55576-ba44-491f-8be3-84adf5c7f1c4"). InnerVolumeSpecName "kube-api-access-rmj55". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:53:24 crc kubenswrapper[4946]: I1128 08:53:24.667452 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ece55576-ba44-491f-8be3-84adf5c7f1c4-config-data" (OuterVolumeSpecName: "config-data") pod "ece55576-ba44-491f-8be3-84adf5c7f1c4" (UID: "ece55576-ba44-491f-8be3-84adf5c7f1c4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:53:24 crc kubenswrapper[4946]: I1128 08:53:24.667852 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ece55576-ba44-491f-8be3-84adf5c7f1c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ece55576-ba44-491f-8be3-84adf5c7f1c4" (UID: "ece55576-ba44-491f-8be3-84adf5c7f1c4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:53:24 crc kubenswrapper[4946]: I1128 08:53:24.735045 4946 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ece55576-ba44-491f-8be3-84adf5c7f1c4-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 08:53:24 crc kubenswrapper[4946]: I1128 08:53:24.735106 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ece55576-ba44-491f-8be3-84adf5c7f1c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 08:53:24 crc kubenswrapper[4946]: I1128 08:53:24.735130 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmj55\" (UniqueName: \"kubernetes.io/projected/ece55576-ba44-491f-8be3-84adf5c7f1c4-kube-api-access-rmj55\") on node \"crc\" DevicePath \"\"" Nov 28 08:53:24 crc kubenswrapper[4946]: I1128 08:53:24.735149 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ece55576-ba44-491f-8be3-84adf5c7f1c4-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 08:53:25 crc kubenswrapper[4946]: I1128 08:53:25.157663 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-62q66" event={"ID":"ece55576-ba44-491f-8be3-84adf5c7f1c4","Type":"ContainerDied","Data":"1fcd722afc27ad92b30b8818e2d7c4a3d524f0b9eaf917a22a8d56321230b08d"} Nov 28 08:53:25 crc kubenswrapper[4946]: I1128 08:53:25.157721 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fcd722afc27ad92b30b8818e2d7c4a3d524f0b9eaf917a22a8d56321230b08d" Nov 28 08:53:25 crc kubenswrapper[4946]: I1128 08:53:25.157838 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-62q66" Nov 28 08:53:25 crc kubenswrapper[4946]: I1128 08:53:25.344260 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 28 08:53:25 crc kubenswrapper[4946]: I1128 08:53:25.344728 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1caea475-b49d-4ac7-8036-80a59d8fa7d6" containerName="nova-api-log" containerID="cri-o://370599193f0331027766a833e00bdc25cafe0f66005a71f9ff8349f437daef62" gracePeriod=30 Nov 28 08:53:25 crc kubenswrapper[4946]: I1128 08:53:25.344850 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1caea475-b49d-4ac7-8036-80a59d8fa7d6" containerName="nova-api-api" containerID="cri-o://a3db228c7d261a2641f0b413610ff6ba7113768a1dc51af33ea05b80ff49c90d" gracePeriod=30 Nov 28 08:53:25 crc kubenswrapper[4946]: I1128 08:53:25.373717 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 08:53:25 crc kubenswrapper[4946]: I1128 08:53:25.374091 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="53807ecb-fa2d-4cab-a27c-1cbea80b44c3" containerName="nova-scheduler-scheduler" containerID="cri-o://5826e0d51a4d3c8db1dad6a3d3b3fcca079213cbc8badcf69a4dce74ac1c728e" gracePeriod=30 Nov 28 08:53:25 crc kubenswrapper[4946]: I1128 08:53:25.455200 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 08:53:25 crc kubenswrapper[4946]: I1128 08:53:25.455543 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="10dc02b8-ef84-40dd-8eca-07afbf6bfc6a" containerName="nova-metadata-log" containerID="cri-o://71e20ab7db04fc7fd1f9003ebfcb974bde978e30dde3ae948d877adbdc28fb33" gracePeriod=30 Nov 28 08:53:25 crc kubenswrapper[4946]: I1128 08:53:25.455838 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="10dc02b8-ef84-40dd-8eca-07afbf6bfc6a" containerName="nova-metadata-metadata" containerID="cri-o://5207802eb333f4710080c106c7a3dd13b70bd735dd32e9a90dd84b95dbcd18e4" gracePeriod=30 Nov 28 08:53:26 crc kubenswrapper[4946]: I1128 08:53:26.170949 4946 generic.go:334] "Generic (PLEG): container finished" podID="1caea475-b49d-4ac7-8036-80a59d8fa7d6" containerID="370599193f0331027766a833e00bdc25cafe0f66005a71f9ff8349f437daef62" exitCode=143 Nov 28 08:53:26 crc kubenswrapper[4946]: I1128 08:53:26.171242 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1caea475-b49d-4ac7-8036-80a59d8fa7d6","Type":"ContainerDied","Data":"370599193f0331027766a833e00bdc25cafe0f66005a71f9ff8349f437daef62"} Nov 28 08:53:26 crc kubenswrapper[4946]: I1128 08:53:26.172833 4946 generic.go:334] "Generic (PLEG): container finished" podID="10dc02b8-ef84-40dd-8eca-07afbf6bfc6a" containerID="71e20ab7db04fc7fd1f9003ebfcb974bde978e30dde3ae948d877adbdc28fb33" exitCode=143 Nov 28 08:53:26 crc kubenswrapper[4946]: I1128 08:53:26.172864 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"10dc02b8-ef84-40dd-8eca-07afbf6bfc6a","Type":"ContainerDied","Data":"71e20ab7db04fc7fd1f9003ebfcb974bde978e30dde3ae948d877adbdc28fb33"} Nov 28 08:53:28 crc kubenswrapper[4946]: E1128 08:53:28.364043 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command 
error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5826e0d51a4d3c8db1dad6a3d3b3fcca079213cbc8badcf69a4dce74ac1c728e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 28 08:53:28 crc kubenswrapper[4946]: E1128 08:53:28.366810 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5826e0d51a4d3c8db1dad6a3d3b3fcca079213cbc8badcf69a4dce74ac1c728e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 28 08:53:28 crc kubenswrapper[4946]: E1128 08:53:28.368548 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5826e0d51a4d3c8db1dad6a3d3b3fcca079213cbc8badcf69a4dce74ac1c728e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 28 08:53:28 crc kubenswrapper[4946]: E1128 08:53:28.368634 4946 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="53807ecb-fa2d-4cab-a27c-1cbea80b44c3" containerName="nova-scheduler-scheduler" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.003680 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.062720 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.120155 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1caea475-b49d-4ac7-8036-80a59d8fa7d6-combined-ca-bundle\") pod \"1caea475-b49d-4ac7-8036-80a59d8fa7d6\" (UID: \"1caea475-b49d-4ac7-8036-80a59d8fa7d6\") " Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.120280 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10dc02b8-ef84-40dd-8eca-07afbf6bfc6a-logs\") pod \"10dc02b8-ef84-40dd-8eca-07afbf6bfc6a\" (UID: \"10dc02b8-ef84-40dd-8eca-07afbf6bfc6a\") " Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.120311 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1caea475-b49d-4ac7-8036-80a59d8fa7d6-logs\") pod \"1caea475-b49d-4ac7-8036-80a59d8fa7d6\" (UID: \"1caea475-b49d-4ac7-8036-80a59d8fa7d6\") " Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.120351 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cvrz\" (UniqueName: \"kubernetes.io/projected/10dc02b8-ef84-40dd-8eca-07afbf6bfc6a-kube-api-access-4cvrz\") pod \"10dc02b8-ef84-40dd-8eca-07afbf6bfc6a\" (UID: \"10dc02b8-ef84-40dd-8eca-07afbf6bfc6a\") " Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.120397 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzm4g\" (UniqueName: \"kubernetes.io/projected/1caea475-b49d-4ac7-8036-80a59d8fa7d6-kube-api-access-hzm4g\") pod \"1caea475-b49d-4ac7-8036-80a59d8fa7d6\" (UID: \"1caea475-b49d-4ac7-8036-80a59d8fa7d6\") " Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 
08:53:29.120431 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10dc02b8-ef84-40dd-8eca-07afbf6bfc6a-combined-ca-bundle\") pod \"10dc02b8-ef84-40dd-8eca-07afbf6bfc6a\" (UID: \"10dc02b8-ef84-40dd-8eca-07afbf6bfc6a\") " Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.120504 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1caea475-b49d-4ac7-8036-80a59d8fa7d6-config-data\") pod \"1caea475-b49d-4ac7-8036-80a59d8fa7d6\" (UID: \"1caea475-b49d-4ac7-8036-80a59d8fa7d6\") " Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.120552 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10dc02b8-ef84-40dd-8eca-07afbf6bfc6a-config-data\") pod \"10dc02b8-ef84-40dd-8eca-07afbf6bfc6a\" (UID: \"10dc02b8-ef84-40dd-8eca-07afbf6bfc6a\") " Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.120962 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10dc02b8-ef84-40dd-8eca-07afbf6bfc6a-logs" (OuterVolumeSpecName: "logs") pod "10dc02b8-ef84-40dd-8eca-07afbf6bfc6a" (UID: "10dc02b8-ef84-40dd-8eca-07afbf6bfc6a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.121033 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1caea475-b49d-4ac7-8036-80a59d8fa7d6-logs" (OuterVolumeSpecName: "logs") pod "1caea475-b49d-4ac7-8036-80a59d8fa7d6" (UID: "1caea475-b49d-4ac7-8036-80a59d8fa7d6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.121102 4946 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10dc02b8-ef84-40dd-8eca-07afbf6bfc6a-logs\") on node \"crc\" DevicePath \"\"" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.125587 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10dc02b8-ef84-40dd-8eca-07afbf6bfc6a-kube-api-access-4cvrz" (OuterVolumeSpecName: "kube-api-access-4cvrz") pod "10dc02b8-ef84-40dd-8eca-07afbf6bfc6a" (UID: "10dc02b8-ef84-40dd-8eca-07afbf6bfc6a"). InnerVolumeSpecName "kube-api-access-4cvrz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.126741 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1caea475-b49d-4ac7-8036-80a59d8fa7d6-kube-api-access-hzm4g" (OuterVolumeSpecName: "kube-api-access-hzm4g") pod "1caea475-b49d-4ac7-8036-80a59d8fa7d6" (UID: "1caea475-b49d-4ac7-8036-80a59d8fa7d6"). InnerVolumeSpecName "kube-api-access-hzm4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.146918 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1caea475-b49d-4ac7-8036-80a59d8fa7d6-config-data" (OuterVolumeSpecName: "config-data") pod "1caea475-b49d-4ac7-8036-80a59d8fa7d6" (UID: "1caea475-b49d-4ac7-8036-80a59d8fa7d6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.147671 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1caea475-b49d-4ac7-8036-80a59d8fa7d6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1caea475-b49d-4ac7-8036-80a59d8fa7d6" (UID: "1caea475-b49d-4ac7-8036-80a59d8fa7d6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.148363 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10dc02b8-ef84-40dd-8eca-07afbf6bfc6a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "10dc02b8-ef84-40dd-8eca-07afbf6bfc6a" (UID: "10dc02b8-ef84-40dd-8eca-07afbf6bfc6a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.151223 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10dc02b8-ef84-40dd-8eca-07afbf6bfc6a-config-data" (OuterVolumeSpecName: "config-data") pod "10dc02b8-ef84-40dd-8eca-07afbf6bfc6a" (UID: "10dc02b8-ef84-40dd-8eca-07afbf6bfc6a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.210556 4946 generic.go:334] "Generic (PLEG): container finished" podID="10dc02b8-ef84-40dd-8eca-07afbf6bfc6a" containerID="5207802eb333f4710080c106c7a3dd13b70bd735dd32e9a90dd84b95dbcd18e4" exitCode=0 Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.210617 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"10dc02b8-ef84-40dd-8eca-07afbf6bfc6a","Type":"ContainerDied","Data":"5207802eb333f4710080c106c7a3dd13b70bd735dd32e9a90dd84b95dbcd18e4"} Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.210646 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"10dc02b8-ef84-40dd-8eca-07afbf6bfc6a","Type":"ContainerDied","Data":"0383bdff11a1755545ea2e465c38d0cb9faccaea3869f879f5effde4031da2ac"} Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.210663 4946 scope.go:117] "RemoveContainer" containerID="5207802eb333f4710080c106c7a3dd13b70bd735dd32e9a90dd84b95dbcd18e4" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.210777 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.215856 4946 generic.go:334] "Generic (PLEG): container finished" podID="1caea475-b49d-4ac7-8036-80a59d8fa7d6" containerID="a3db228c7d261a2641f0b413610ff6ba7113768a1dc51af33ea05b80ff49c90d" exitCode=0 Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.215896 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1caea475-b49d-4ac7-8036-80a59d8fa7d6","Type":"ContainerDied","Data":"a3db228c7d261a2641f0b413610ff6ba7113768a1dc51af33ea05b80ff49c90d"} Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.215924 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1caea475-b49d-4ac7-8036-80a59d8fa7d6","Type":"ContainerDied","Data":"2237aec17d10612c28e7ecfc11cf36d1264fedb9c4ebede9968ec362a407c72c"} Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.215955 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.223192 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10dc02b8-ef84-40dd-8eca-07afbf6bfc6a-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.223223 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1caea475-b49d-4ac7-8036-80a59d8fa7d6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.223237 4946 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1caea475-b49d-4ac7-8036-80a59d8fa7d6-logs\") on node \"crc\" DevicePath \"\"" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.223248 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cvrz\" (UniqueName: \"kubernetes.io/projected/10dc02b8-ef84-40dd-8eca-07afbf6bfc6a-kube-api-access-4cvrz\") on node \"crc\" DevicePath \"\"" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.223260 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzm4g\" (UniqueName: \"kubernetes.io/projected/1caea475-b49d-4ac7-8036-80a59d8fa7d6-kube-api-access-hzm4g\") on node \"crc\" DevicePath \"\"" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.223270 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10dc02b8-ef84-40dd-8eca-07afbf6bfc6a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.223280 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1caea475-b49d-4ac7-8036-80a59d8fa7d6-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.237388 4946 scope.go:117] "RemoveContainer" containerID="71e20ab7db04fc7fd1f9003ebfcb974bde978e30dde3ae948d877adbdc28fb33" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.248829 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.279673 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.302699 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.302842 4946 scope.go:117] "RemoveContainer" containerID="5207802eb333f4710080c106c7a3dd13b70bd735dd32e9a90dd84b95dbcd18e4" Nov 28 08:53:29 crc kubenswrapper[4946]: E1128 08:53:29.303513 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5207802eb333f4710080c106c7a3dd13b70bd735dd32e9a90dd84b95dbcd18e4\": container with ID starting with 5207802eb333f4710080c106c7a3dd13b70bd735dd32e9a90dd84b95dbcd18e4 not found: ID does not exist" containerID="5207802eb333f4710080c106c7a3dd13b70bd735dd32e9a90dd84b95dbcd18e4" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.303548 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5207802eb333f4710080c106c7a3dd13b70bd735dd32e9a90dd84b95dbcd18e4"} err="failed to get container status \"5207802eb333f4710080c106c7a3dd13b70bd735dd32e9a90dd84b95dbcd18e4\": rpc error: code = 
NotFound desc = could not find container \"5207802eb333f4710080c106c7a3dd13b70bd735dd32e9a90dd84b95dbcd18e4\": container with ID starting with 5207802eb333f4710080c106c7a3dd13b70bd735dd32e9a90dd84b95dbcd18e4 not found: ID does not exist" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.303573 4946 scope.go:117] "RemoveContainer" containerID="71e20ab7db04fc7fd1f9003ebfcb974bde978e30dde3ae948d877adbdc28fb33" Nov 28 08:53:29 crc kubenswrapper[4946]: E1128 08:53:29.308018 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71e20ab7db04fc7fd1f9003ebfcb974bde978e30dde3ae948d877adbdc28fb33\": container with ID starting with 71e20ab7db04fc7fd1f9003ebfcb974bde978e30dde3ae948d877adbdc28fb33 not found: ID does not exist" containerID="71e20ab7db04fc7fd1f9003ebfcb974bde978e30dde3ae948d877adbdc28fb33" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.308068 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71e20ab7db04fc7fd1f9003ebfcb974bde978e30dde3ae948d877adbdc28fb33"} err="failed to get container status \"71e20ab7db04fc7fd1f9003ebfcb974bde978e30dde3ae948d877adbdc28fb33\": rpc error: code = NotFound desc = could not find container \"71e20ab7db04fc7fd1f9003ebfcb974bde978e30dde3ae948d877adbdc28fb33\": container with ID starting with 71e20ab7db04fc7fd1f9003ebfcb974bde978e30dde3ae948d877adbdc28fb33 not found: ID does not exist" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.308095 4946 scope.go:117] "RemoveContainer" containerID="a3db228c7d261a2641f0b413610ff6ba7113768a1dc51af33ea05b80ff49c90d" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.315617 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.332105 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 28 08:53:29 crc kubenswrapper[4946]: E1128 08:53:29.332636 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10dc02b8-ef84-40dd-8eca-07afbf6bfc6a" containerName="nova-metadata-log" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.332658 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="10dc02b8-ef84-40dd-8eca-07afbf6bfc6a" containerName="nova-metadata-log" Nov 28 08:53:29 crc kubenswrapper[4946]: E1128 08:53:29.332686 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ece55576-ba44-491f-8be3-84adf5c7f1c4" containerName="nova-manage" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.332692 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="ece55576-ba44-491f-8be3-84adf5c7f1c4" containerName="nova-manage" Nov 28 08:53:29 crc kubenswrapper[4946]: E1128 08:53:29.332706 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1caea475-b49d-4ac7-8036-80a59d8fa7d6" containerName="nova-api-api" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.332713 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="1caea475-b49d-4ac7-8036-80a59d8fa7d6" containerName="nova-api-api" Nov 28 08:53:29 crc kubenswrapper[4946]: E1128 08:53:29.332726 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1caea475-b49d-4ac7-8036-80a59d8fa7d6" containerName="nova-api-log" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.332732 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="1caea475-b49d-4ac7-8036-80a59d8fa7d6" containerName="nova-api-log" Nov 28 08:53:29 crc 
kubenswrapper[4946]: E1128 08:53:29.332742 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10dc02b8-ef84-40dd-8eca-07afbf6bfc6a" containerName="nova-metadata-metadata" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.332748 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="10dc02b8-ef84-40dd-8eca-07afbf6bfc6a" containerName="nova-metadata-metadata" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.332904 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="1caea475-b49d-4ac7-8036-80a59d8fa7d6" containerName="nova-api-api" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.332917 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="10dc02b8-ef84-40dd-8eca-07afbf6bfc6a" containerName="nova-metadata-log" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.332930 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="ece55576-ba44-491f-8be3-84adf5c7f1c4" containerName="nova-manage" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.332947 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="10dc02b8-ef84-40dd-8eca-07afbf6bfc6a" containerName="nova-metadata-metadata" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.332959 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="1caea475-b49d-4ac7-8036-80a59d8fa7d6" containerName="nova-api-log" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.333107 4946 scope.go:117] "RemoveContainer" containerID="370599193f0331027766a833e00bdc25cafe0f66005a71f9ff8349f437daef62" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.334028 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.336079 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.340440 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.348723 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.353913 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.355877 4946 scope.go:117] "RemoveContainer" containerID="a3db228c7d261a2641f0b413610ff6ba7113768a1dc51af33ea05b80ff49c90d" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.357142 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 08:53:29 crc kubenswrapper[4946]: E1128 08:53:29.361391 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3db228c7d261a2641f0b413610ff6ba7113768a1dc51af33ea05b80ff49c90d\": container with ID starting with a3db228c7d261a2641f0b413610ff6ba7113768a1dc51af33ea05b80ff49c90d not found: ID does not exist" containerID="a3db228c7d261a2641f0b413610ff6ba7113768a1dc51af33ea05b80ff49c90d" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.361431 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3db228c7d261a2641f0b413610ff6ba7113768a1dc51af33ea05b80ff49c90d"} err="failed to get container status \"a3db228c7d261a2641f0b413610ff6ba7113768a1dc51af33ea05b80ff49c90d\": rpc error: code = NotFound desc = could not find container \"a3db228c7d261a2641f0b413610ff6ba7113768a1dc51af33ea05b80ff49c90d\": container with ID starting with a3db228c7d261a2641f0b413610ff6ba7113768a1dc51af33ea05b80ff49c90d not found: ID does not exist" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.361457 4946 scope.go:117] "RemoveContainer" containerID="370599193f0331027766a833e00bdc25cafe0f66005a71f9ff8349f437daef62" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.362199 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 28 08:53:29 crc kubenswrapper[4946]: E1128 08:53:29.363861 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"370599193f0331027766a833e00bdc25cafe0f66005a71f9ff8349f437daef62\": container with ID starting with 370599193f0331027766a833e00bdc25cafe0f66005a71f9ff8349f437daef62 not found: ID does not exist" containerID="370599193f0331027766a833e00bdc25cafe0f66005a71f9ff8349f437daef62" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.363891 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"370599193f0331027766a833e00bdc25cafe0f66005a71f9ff8349f437daef62"} err="failed to get container status \"370599193f0331027766a833e00bdc25cafe0f66005a71f9ff8349f437daef62\": rpc error: code = NotFound desc = could not find container \"370599193f0331027766a833e00bdc25cafe0f66005a71f9ff8349f437daef62\": container with ID starting with 370599193f0331027766a833e00bdc25cafe0f66005a71f9ff8349f437daef62 not found: ID does not exist" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.426861 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6794f905-f1ca-48a1-babd-115981e72ed4-config-data\") pod \"nova-metadata-0\" (UID: \"6794f905-f1ca-48a1-babd-115981e72ed4\") " pod="openstack/nova-metadata-0" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.426909 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6794f905-f1ca-48a1-babd-115981e72ed4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"6794f905-f1ca-48a1-babd-115981e72ed4\") " pod="openstack/nova-metadata-0" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.426946 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00d946a1-a238-4ab3-b8e6-e276aac8bb22-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"00d946a1-a238-4ab3-b8e6-e276aac8bb22\") " pod="openstack/nova-api-0" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.427128 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00d946a1-a238-4ab3-b8e6-e276aac8bb22-config-data\") pod \"nova-api-0\" (UID: \"00d946a1-a238-4ab3-b8e6-e276aac8bb22\") " pod="openstack/nova-api-0" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.427254 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00d946a1-a238-4ab3-b8e6-e276aac8bb22-logs\") pod \"nova-api-0\" (UID: \"00d946a1-a238-4ab3-b8e6-e276aac8bb22\") " pod="openstack/nova-api-0" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.427282 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv8jq\" (UniqueName: \"kubernetes.io/projected/00d946a1-a238-4ab3-b8e6-e276aac8bb22-kube-api-access-xv8jq\") pod \"nova-api-0\" (UID: \"00d946a1-a238-4ab3-b8e6-e276aac8bb22\") " pod="openstack/nova-api-0" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.427343 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brmrk\" (UniqueName: \"kubernetes.io/projected/6794f905-f1ca-48a1-babd-115981e72ed4-kube-api-access-brmrk\") pod \"nova-metadata-0\" (UID: \"6794f905-f1ca-48a1-babd-115981e72ed4\") " pod="openstack/nova-metadata-0" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.427457 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6794f905-f1ca-48a1-babd-115981e72ed4-logs\") pod \"nova-metadata-0\" (UID: \"6794f905-f1ca-48a1-babd-115981e72ed4\") " pod="openstack/nova-metadata-0" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.529340 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00d946a1-a238-4ab3-b8e6-e276aac8bb22-logs\") pod \"nova-api-0\" (UID: \"00d946a1-a238-4ab3-b8e6-e276aac8bb22\") " pod="openstack/nova-api-0" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.529603 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv8jq\" (UniqueName: \"kubernetes.io/projected/00d946a1-a238-4ab3-b8e6-e276aac8bb22-kube-api-access-xv8jq\") pod \"nova-api-0\" (UID: \"00d946a1-a238-4ab3-b8e6-e276aac8bb22\") " pod="openstack/nova-api-0" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.529643 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brmrk\" (UniqueName: \"kubernetes.io/projected/6794f905-f1ca-48a1-babd-115981e72ed4-kube-api-access-brmrk\") pod \"nova-metadata-0\" (UID: \"6794f905-f1ca-48a1-babd-115981e72ed4\") " pod="openstack/nova-metadata-0" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.529693 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/6794f905-f1ca-48a1-babd-115981e72ed4-logs\") pod \"nova-metadata-0\" (UID: \"6794f905-f1ca-48a1-babd-115981e72ed4\") " pod="openstack/nova-metadata-0" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.529714 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6794f905-f1ca-48a1-babd-115981e72ed4-config-data\") pod \"nova-metadata-0\" (UID: \"6794f905-f1ca-48a1-babd-115981e72ed4\") " pod="openstack/nova-metadata-0" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.529784 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00d946a1-a238-4ab3-b8e6-e276aac8bb22-logs\") pod \"nova-api-0\" (UID: \"00d946a1-a238-4ab3-b8e6-e276aac8bb22\") " pod="openstack/nova-api-0" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.530095 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6794f905-f1ca-48a1-babd-115981e72ed4-logs\") pod \"nova-metadata-0\" (UID: \"6794f905-f1ca-48a1-babd-115981e72ed4\") " pod="openstack/nova-metadata-0" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.529740 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6794f905-f1ca-48a1-babd-115981e72ed4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6794f905-f1ca-48a1-babd-115981e72ed4\") " pod="openstack/nova-metadata-0" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.530181 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00d946a1-a238-4ab3-b8e6-e276aac8bb22-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"00d946a1-a238-4ab3-b8e6-e276aac8bb22\") " pod="openstack/nova-api-0" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.530620 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00d946a1-a238-4ab3-b8e6-e276aac8bb22-config-data\") pod \"nova-api-0\" (UID: \"00d946a1-a238-4ab3-b8e6-e276aac8bb22\") " pod="openstack/nova-api-0" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.535172 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6794f905-f1ca-48a1-babd-115981e72ed4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6794f905-f1ca-48a1-babd-115981e72ed4\") " pod="openstack/nova-metadata-0" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.535496 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6794f905-f1ca-48a1-babd-115981e72ed4-config-data\") pod \"nova-metadata-0\" (UID: \"6794f905-f1ca-48a1-babd-115981e72ed4\") " pod="openstack/nova-metadata-0" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.538037 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00d946a1-a238-4ab3-b8e6-e276aac8bb22-config-data\") pod \"nova-api-0\" (UID: \"00d946a1-a238-4ab3-b8e6-e276aac8bb22\") " pod="openstack/nova-api-0" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.538742 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/00d946a1-a238-4ab3-b8e6-e276aac8bb22-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"00d946a1-a238-4ab3-b8e6-e276aac8bb22\") " pod="openstack/nova-api-0" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.544135 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv8jq\" (UniqueName: \"kubernetes.io/projected/00d946a1-a238-4ab3-b8e6-e276aac8bb22-kube-api-access-xv8jq\") pod \"nova-api-0\" (UID: \"00d946a1-a238-4ab3-b8e6-e276aac8bb22\") " pod="openstack/nova-api-0" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.549958 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brmrk\" (UniqueName: \"kubernetes.io/projected/6794f905-f1ca-48a1-babd-115981e72ed4-kube-api-access-brmrk\") pod \"nova-metadata-0\" (UID: \"6794f905-f1ca-48a1-babd-115981e72ed4\") " pod="openstack/nova-metadata-0" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.665331 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 08:53:29 crc kubenswrapper[4946]: I1128 08:53:29.680300 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 08:53:30 crc kubenswrapper[4946]: I1128 08:53:30.001047 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10dc02b8-ef84-40dd-8eca-07afbf6bfc6a" path="/var/lib/kubelet/pods/10dc02b8-ef84-40dd-8eca-07afbf6bfc6a/volumes" Nov 28 08:53:30 crc kubenswrapper[4946]: I1128 08:53:30.002095 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1caea475-b49d-4ac7-8036-80a59d8fa7d6" path="/var/lib/kubelet/pods/1caea475-b49d-4ac7-8036-80a59d8fa7d6/volumes" Nov 28 08:53:30 crc kubenswrapper[4946]: I1128 08:53:30.134758 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 08:53:30 crc kubenswrapper[4946]: I1128 08:53:30.231845 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6794f905-f1ca-48a1-babd-115981e72ed4","Type":"ContainerStarted","Data":"0387a0c373947778b4adb9206de8894718cbcd5358fa42122e27ef4db6ab8434"} Nov 28 08:53:30 crc kubenswrapper[4946]: I1128 08:53:30.232726 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 08:53:30 crc kubenswrapper[4946]: I1128 08:53:30.574580 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 08:53:30 crc kubenswrapper[4946]: I1128 08:53:30.655042 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53807ecb-fa2d-4cab-a27c-1cbea80b44c3-config-data\") pod \"53807ecb-fa2d-4cab-a27c-1cbea80b44c3\" (UID: \"53807ecb-fa2d-4cab-a27c-1cbea80b44c3\") " Nov 28 08:53:30 crc kubenswrapper[4946]: I1128 08:53:30.655185 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53807ecb-fa2d-4cab-a27c-1cbea80b44c3-combined-ca-bundle\") pod \"53807ecb-fa2d-4cab-a27c-1cbea80b44c3\" (UID: \"53807ecb-fa2d-4cab-a27c-1cbea80b44c3\") " Nov 28 08:53:30 crc kubenswrapper[4946]: I1128 08:53:30.655341 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fhlp\" (UniqueName: \"kubernetes.io/projected/53807ecb-fa2d-4cab-a27c-1cbea80b44c3-kube-api-access-6fhlp\") pod \"53807ecb-fa2d-4cab-a27c-1cbea80b44c3\" (UID: \"53807ecb-fa2d-4cab-a27c-1cbea80b44c3\") " Nov 28 08:53:30 crc kubenswrapper[4946]: I1128 08:53:30.659989 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53807ecb-fa2d-4cab-a27c-1cbea80b44c3-kube-api-access-6fhlp" (OuterVolumeSpecName: "kube-api-access-6fhlp") pod "53807ecb-fa2d-4cab-a27c-1cbea80b44c3" (UID: "53807ecb-fa2d-4cab-a27c-1cbea80b44c3"). InnerVolumeSpecName "kube-api-access-6fhlp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:53:30 crc kubenswrapper[4946]: I1128 08:53:30.709795 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53807ecb-fa2d-4cab-a27c-1cbea80b44c3-config-data" (OuterVolumeSpecName: "config-data") pod "53807ecb-fa2d-4cab-a27c-1cbea80b44c3" (UID: "53807ecb-fa2d-4cab-a27c-1cbea80b44c3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:53:30 crc kubenswrapper[4946]: I1128 08:53:30.722646 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53807ecb-fa2d-4cab-a27c-1cbea80b44c3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "53807ecb-fa2d-4cab-a27c-1cbea80b44c3" (UID: "53807ecb-fa2d-4cab-a27c-1cbea80b44c3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:53:30 crc kubenswrapper[4946]: I1128 08:53:30.757022 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6fhlp\" (UniqueName: \"kubernetes.io/projected/53807ecb-fa2d-4cab-a27c-1cbea80b44c3-kube-api-access-6fhlp\") on node \"crc\" DevicePath \"\"" Nov 28 08:53:30 crc kubenswrapper[4946]: I1128 08:53:30.757059 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53807ecb-fa2d-4cab-a27c-1cbea80b44c3-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 08:53:30 crc kubenswrapper[4946]: I1128 08:53:30.757069 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53807ecb-fa2d-4cab-a27c-1cbea80b44c3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 08:53:31 crc kubenswrapper[4946]: I1128 08:53:31.256595 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"00d946a1-a238-4ab3-b8e6-e276aac8bb22","Type":"ContainerStarted","Data":"70895f70e30891e0fb785baa90a70cffd83e5316b032ce5ba2d16473dc141fc8"} Nov 28 08:53:31 crc kubenswrapper[4946]: I1128 08:53:31.256634 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"00d946a1-a238-4ab3-b8e6-e276aac8bb22","Type":"ContainerStarted","Data":"e920ef783f678ccf0f74b883b2a2698545b071cd344d92d74547f8a3d6c0af40"} Nov 28 08:53:31 crc kubenswrapper[4946]: I1128 08:53:31.256648 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"00d946a1-a238-4ab3-b8e6-e276aac8bb22","Type":"ContainerStarted","Data":"a4be75b1f2bf5db8f9dfddea2627b4bd9e4efd0405d3ecac10e7ec1c059dc29c"} Nov 28 08:53:31 crc kubenswrapper[4946]: I1128 08:53:31.260147 4946 generic.go:334] "Generic (PLEG): container finished" podID="53807ecb-fa2d-4cab-a27c-1cbea80b44c3" containerID="5826e0d51a4d3c8db1dad6a3d3b3fcca079213cbc8badcf69a4dce74ac1c728e" exitCode=0 Nov 28 08:53:31 crc kubenswrapper[4946]: I1128 08:53:31.260209 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"53807ecb-fa2d-4cab-a27c-1cbea80b44c3","Type":"ContainerDied","Data":"5826e0d51a4d3c8db1dad6a3d3b3fcca079213cbc8badcf69a4dce74ac1c728e"} Nov 28 08:53:31 crc kubenswrapper[4946]: I1128 08:53:31.260658 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"53807ecb-fa2d-4cab-a27c-1cbea80b44c3","Type":"ContainerDied","Data":"9c93dcb648558d21d2fae84277abb17cd38d877d00de8cd3f5d9a7ac1e5c9e52"} Nov 28 08:53:31 crc kubenswrapper[4946]: I1128 08:53:31.260836 4946 scope.go:117] "RemoveContainer" containerID="5826e0d51a4d3c8db1dad6a3d3b3fcca079213cbc8badcf69a4dce74ac1c728e" Nov 28 08:53:31 crc kubenswrapper[4946]: I1128 08:53:31.260310 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 08:53:31 crc kubenswrapper[4946]: I1128 08:53:31.264281 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6794f905-f1ca-48a1-babd-115981e72ed4","Type":"ContainerStarted","Data":"bfc1a51101b4c552d61c7f5867477bb7aef0cb80c9a1b719d2163295d47914f3"} Nov 28 08:53:31 crc kubenswrapper[4946]: I1128 08:53:31.264319 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6794f905-f1ca-48a1-babd-115981e72ed4","Type":"ContainerStarted","Data":"936bb085b6e5bb5dad263a9c8cccde5f9e9da7f530a1d756aec854c71404c588"} Nov 28 08:53:31 crc kubenswrapper[4946]: I1128 08:53:31.298106 4946 scope.go:117] "RemoveContainer" containerID="5826e0d51a4d3c8db1dad6a3d3b3fcca079213cbc8badcf69a4dce74ac1c728e" Nov 28 08:53:31 crc kubenswrapper[4946]: E1128 08:53:31.298916 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5826e0d51a4d3c8db1dad6a3d3b3fcca079213cbc8badcf69a4dce74ac1c728e\": container with ID starting with 5826e0d51a4d3c8db1dad6a3d3b3fcca079213cbc8badcf69a4dce74ac1c728e not found: ID does not exist" containerID="5826e0d51a4d3c8db1dad6a3d3b3fcca079213cbc8badcf69a4dce74ac1c728e" Nov 28 08:53:31 crc kubenswrapper[4946]: I1128 08:53:31.298964 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5826e0d51a4d3c8db1dad6a3d3b3fcca079213cbc8badcf69a4dce74ac1c728e"} err="failed to get container status \"5826e0d51a4d3c8db1dad6a3d3b3fcca079213cbc8badcf69a4dce74ac1c728e\": rpc error: code = NotFound desc = could not find container \"5826e0d51a4d3c8db1dad6a3d3b3fcca079213cbc8badcf69a4dce74ac1c728e\": container with ID starting with 5826e0d51a4d3c8db1dad6a3d3b3fcca079213cbc8badcf69a4dce74ac1c728e not found: ID does not exist" Nov 28 08:53:31 crc kubenswrapper[4946]: I1128 08:53:31.300394 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.300361449 podStartE2EDuration="2.300361449s" podCreationTimestamp="2025-11-28 08:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 08:53:31.281589834 +0000 UTC m=+7265.659654945" watchObservedRunningTime="2025-11-28 08:53:31.300361449 +0000 UTC m=+7265.678426590" Nov 28 08:53:31 crc kubenswrapper[4946]: I1128 08:53:31.316412 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.316391016 podStartE2EDuration="2.316391016s" podCreationTimestamp="2025-11-28 08:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 08:53:31.307194829 +0000 UTC m=+7265.685259950" watchObservedRunningTime="2025-11-28 08:53:31.316391016 +0000 UTC m=+7265.694456137" Nov 28 08:53:31 crc kubenswrapper[4946]: I1128 08:53:31.368677 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 08:53:31 crc kubenswrapper[4946]: I1128 08:53:31.384791 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 08:53:31 crc kubenswrapper[4946]: I1128 08:53:31.399852 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 08:53:31 crc kubenswrapper[4946]: E1128 08:53:31.400558 4946 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="53807ecb-fa2d-4cab-a27c-1cbea80b44c3" containerName="nova-scheduler-scheduler" Nov 28 08:53:31 crc kubenswrapper[4946]: I1128 08:53:31.400579 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="53807ecb-fa2d-4cab-a27c-1cbea80b44c3" containerName="nova-scheduler-scheduler" Nov 28 08:53:31 crc kubenswrapper[4946]: I1128 08:53:31.400824 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="53807ecb-fa2d-4cab-a27c-1cbea80b44c3" containerName="nova-scheduler-scheduler" Nov 28 08:53:31 crc kubenswrapper[4946]: I1128 08:53:31.401688 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 08:53:31 crc kubenswrapper[4946]: I1128 08:53:31.403845 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 28 08:53:31 crc kubenswrapper[4946]: I1128 08:53:31.407984 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 08:53:31 crc kubenswrapper[4946]: I1128 08:53:31.483414 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f17464c0-2039-4016-a165-cde04f10c349-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f17464c0-2039-4016-a165-cde04f10c349\") " pod="openstack/nova-scheduler-0" Nov 28 08:53:31 crc kubenswrapper[4946]: I1128 08:53:31.483812 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x2mq\" (UniqueName: \"kubernetes.io/projected/f17464c0-2039-4016-a165-cde04f10c349-kube-api-access-4x2mq\") pod \"nova-scheduler-0\" (UID: \"f17464c0-2039-4016-a165-cde04f10c349\") " pod="openstack/nova-scheduler-0" Nov 28 08:53:31 crc kubenswrapper[4946]: I1128 08:53:31.483867 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f17464c0-2039-4016-a165-cde04f10c349-config-data\") pod \"nova-scheduler-0\" (UID: \"f17464c0-2039-4016-a165-cde04f10c349\") " pod="openstack/nova-scheduler-0" Nov 28 08:53:31 crc kubenswrapper[4946]: I1128 08:53:31.585623 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4x2mq\" (UniqueName: \"kubernetes.io/projected/f17464c0-2039-4016-a165-cde04f10c349-kube-api-access-4x2mq\") pod \"nova-scheduler-0\" (UID: \"f17464c0-2039-4016-a165-cde04f10c349\") " pod="openstack/nova-scheduler-0" Nov 28 08:53:31 crc kubenswrapper[4946]: I1128 08:53:31.585695 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f17464c0-2039-4016-a165-cde04f10c349-config-data\") pod \"nova-scheduler-0\" (UID: \"f17464c0-2039-4016-a165-cde04f10c349\") " pod="openstack/nova-scheduler-0" Nov 28 08:53:31 crc kubenswrapper[4946]: I1128 08:53:31.585788 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f17464c0-2039-4016-a165-cde04f10c349-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f17464c0-2039-4016-a165-cde04f10c349\") " pod="openstack/nova-scheduler-0" Nov 28 08:53:31 crc kubenswrapper[4946]: I1128 08:53:31.590452 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f17464c0-2039-4016-a165-cde04f10c349-config-data\") pod 
\"nova-scheduler-0\" (UID: \"f17464c0-2039-4016-a165-cde04f10c349\") " pod="openstack/nova-scheduler-0" Nov 28 08:53:31 crc kubenswrapper[4946]: I1128 08:53:31.592243 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f17464c0-2039-4016-a165-cde04f10c349-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f17464c0-2039-4016-a165-cde04f10c349\") " pod="openstack/nova-scheduler-0" Nov 28 08:53:31 crc kubenswrapper[4946]: I1128 08:53:31.607093 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4x2mq\" (UniqueName: \"kubernetes.io/projected/f17464c0-2039-4016-a165-cde04f10c349-kube-api-access-4x2mq\") pod \"nova-scheduler-0\" (UID: \"f17464c0-2039-4016-a165-cde04f10c349\") " pod="openstack/nova-scheduler-0" Nov 28 08:53:31 crc kubenswrapper[4946]: I1128 08:53:31.731082 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 08:53:32 crc kubenswrapper[4946]: I1128 08:53:32.007925 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53807ecb-fa2d-4cab-a27c-1cbea80b44c3" path="/var/lib/kubelet/pods/53807ecb-fa2d-4cab-a27c-1cbea80b44c3/volumes" Nov 28 08:53:32 crc kubenswrapper[4946]: I1128 08:53:32.175211 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 08:53:32 crc kubenswrapper[4946]: I1128 08:53:32.273765 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f17464c0-2039-4016-a165-cde04f10c349","Type":"ContainerStarted","Data":"2f6ad373ced78c6ba28e53eb490b0a6c2562892de67b99020284b8483f716967"} Nov 28 08:53:33 crc kubenswrapper[4946]: I1128 08:53:33.288221 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f17464c0-2039-4016-a165-cde04f10c349","Type":"ContainerStarted","Data":"30159f8805f5a41f294e2e2eb1f1b2c6cb9ab9f6f3bc9a01ec785471c9150ef6"} Nov 28 08:53:33 crc kubenswrapper[4946]: I1128 08:53:33.324586 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.324559872 podStartE2EDuration="2.324559872s" podCreationTimestamp="2025-11-28 08:53:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 08:53:33.318784249 +0000 UTC m=+7267.696849400" watchObservedRunningTime="2025-11-28 08:53:33.324559872 +0000 UTC m=+7267.702625013" Nov 28 08:53:34 crc kubenswrapper[4946]: I1128 08:53:34.666052 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 28 08:53:34 crc kubenswrapper[4946]: I1128 08:53:34.666386 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 28 08:53:36 crc kubenswrapper[4946]: I1128 08:53:36.731757 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 28 08:53:39 crc kubenswrapper[4946]: I1128 08:53:39.666388 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 28 08:53:39 crc kubenswrapper[4946]: I1128 08:53:39.667062 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 28 08:53:39 crc kubenswrapper[4946]: I1128 08:53:39.681964 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-api-0" Nov 28 08:53:39 crc kubenswrapper[4946]: I1128 08:53:39.682029 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 28 08:53:40 crc kubenswrapper[4946]: I1128 08:53:40.831752 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="00d946a1-a238-4ab3-b8e6-e276aac8bb22" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.77:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 28 08:53:40 crc kubenswrapper[4946]: I1128 08:53:40.831904 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="6794f905-f1ca-48a1-babd-115981e72ed4" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.76:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 28 08:53:40 crc kubenswrapper[4946]: I1128 08:53:40.832029 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="6794f905-f1ca-48a1-babd-115981e72ed4" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.76:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 28 08:53:40 crc kubenswrapper[4946]: I1128 08:53:40.832069 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="00d946a1-a238-4ab3-b8e6-e276aac8bb22" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.77:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 28 08:53:41 crc kubenswrapper[4946]: I1128 08:53:41.732050 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 28 08:53:41 crc kubenswrapper[4946]: I1128 08:53:41.780654 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 28 08:53:42 crc kubenswrapper[4946]: I1128 08:53:42.417697 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 28 08:53:49 crc kubenswrapper[4946]: I1128 08:53:49.702871 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 28 08:53:49 crc kubenswrapper[4946]: I1128 08:53:49.713019 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 28 08:53:49 crc kubenswrapper[4946]: I1128 08:53:49.774711 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 28 08:53:49 crc kubenswrapper[4946]: I1128 08:53:49.774785 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 28 08:53:49 crc kubenswrapper[4946]: I1128 08:53:49.775356 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 28 08:53:49 crc kubenswrapper[4946]: I1128 08:53:49.778017 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 28 08:53:49 crc kubenswrapper[4946]: I1128 08:53:49.784181 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 28 08:53:50 crc kubenswrapper[4946]: I1128 08:53:50.468902 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 28 08:53:50 crc kubenswrapper[4946]: I1128 08:53:50.472311 4946 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 28 08:53:50 crc kubenswrapper[4946]: I1128 08:53:50.473408 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 28 08:53:50 crc kubenswrapper[4946]: I1128 08:53:50.736309 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55755df8dc-9rgd9"] Nov 28 08:53:50 crc kubenswrapper[4946]: I1128 08:53:50.740119 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55755df8dc-9rgd9" Nov 28 08:53:50 crc kubenswrapper[4946]: I1128 08:53:50.756344 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55755df8dc-9rgd9"] Nov 28 08:53:50 crc kubenswrapper[4946]: I1128 08:53:50.774901 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2856cac5-9342-490f-9649-42fbb5a9e26a-ovsdbserver-sb\") pod \"dnsmasq-dns-55755df8dc-9rgd9\" (UID: \"2856cac5-9342-490f-9649-42fbb5a9e26a\") " pod="openstack/dnsmasq-dns-55755df8dc-9rgd9" Nov 28 08:53:50 crc kubenswrapper[4946]: I1128 08:53:50.774956 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2856cac5-9342-490f-9649-42fbb5a9e26a-ovsdbserver-nb\") pod \"dnsmasq-dns-55755df8dc-9rgd9\" (UID: \"2856cac5-9342-490f-9649-42fbb5a9e26a\") " pod="openstack/dnsmasq-dns-55755df8dc-9rgd9" Nov 28 08:53:50 crc kubenswrapper[4946]: I1128 08:53:50.775001 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2856cac5-9342-490f-9649-42fbb5a9e26a-config\") pod \"dnsmasq-dns-55755df8dc-9rgd9\" (UID: \"2856cac5-9342-490f-9649-42fbb5a9e26a\") " pod="openstack/dnsmasq-dns-55755df8dc-9rgd9" Nov 28 08:53:50 crc kubenswrapper[4946]: I1128 08:53:50.775028 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2856cac5-9342-490f-9649-42fbb5a9e26a-dns-svc\") pod \"dnsmasq-dns-55755df8dc-9rgd9\" (UID: \"2856cac5-9342-490f-9649-42fbb5a9e26a\") " pod="openstack/dnsmasq-dns-55755df8dc-9rgd9" Nov 28 08:53:50 crc kubenswrapper[4946]: I1128 08:53:50.775113 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbgq4\" (UniqueName: \"kubernetes.io/projected/2856cac5-9342-490f-9649-42fbb5a9e26a-kube-api-access-bbgq4\") pod \"dnsmasq-dns-55755df8dc-9rgd9\" (UID: \"2856cac5-9342-490f-9649-42fbb5a9e26a\") " pod="openstack/dnsmasq-dns-55755df8dc-9rgd9" Nov 28 08:53:50 crc kubenswrapper[4946]: I1128 08:53:50.876707 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbgq4\" (UniqueName: \"kubernetes.io/projected/2856cac5-9342-490f-9649-42fbb5a9e26a-kube-api-access-bbgq4\") pod \"dnsmasq-dns-55755df8dc-9rgd9\" (UID: \"2856cac5-9342-490f-9649-42fbb5a9e26a\") " pod="openstack/dnsmasq-dns-55755df8dc-9rgd9" Nov 28 08:53:50 crc kubenswrapper[4946]: I1128 08:53:50.876781 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2856cac5-9342-490f-9649-42fbb5a9e26a-ovsdbserver-sb\") pod \"dnsmasq-dns-55755df8dc-9rgd9\" (UID: \"2856cac5-9342-490f-9649-42fbb5a9e26a\") " 
pod="openstack/dnsmasq-dns-55755df8dc-9rgd9" Nov 28 08:53:50 crc kubenswrapper[4946]: I1128 08:53:50.876819 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2856cac5-9342-490f-9649-42fbb5a9e26a-ovsdbserver-nb\") pod \"dnsmasq-dns-55755df8dc-9rgd9\" (UID: \"2856cac5-9342-490f-9649-42fbb5a9e26a\") " pod="openstack/dnsmasq-dns-55755df8dc-9rgd9" Nov 28 08:53:50 crc kubenswrapper[4946]: I1128 08:53:50.876866 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2856cac5-9342-490f-9649-42fbb5a9e26a-config\") pod \"dnsmasq-dns-55755df8dc-9rgd9\" (UID: \"2856cac5-9342-490f-9649-42fbb5a9e26a\") " pod="openstack/dnsmasq-dns-55755df8dc-9rgd9" Nov 28 08:53:50 crc kubenswrapper[4946]: I1128 08:53:50.876888 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2856cac5-9342-490f-9649-42fbb5a9e26a-dns-svc\") pod \"dnsmasq-dns-55755df8dc-9rgd9\" (UID: \"2856cac5-9342-490f-9649-42fbb5a9e26a\") " pod="openstack/dnsmasq-dns-55755df8dc-9rgd9" Nov 28 08:53:50 crc kubenswrapper[4946]: I1128 08:53:50.878013 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2856cac5-9342-490f-9649-42fbb5a9e26a-dns-svc\") pod \"dnsmasq-dns-55755df8dc-9rgd9\" (UID: \"2856cac5-9342-490f-9649-42fbb5a9e26a\") " pod="openstack/dnsmasq-dns-55755df8dc-9rgd9" Nov 28 08:53:50 crc kubenswrapper[4946]: I1128 08:53:50.878013 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2856cac5-9342-490f-9649-42fbb5a9e26a-config\") pod \"dnsmasq-dns-55755df8dc-9rgd9\" (UID: \"2856cac5-9342-490f-9649-42fbb5a9e26a\") " pod="openstack/dnsmasq-dns-55755df8dc-9rgd9" Nov 28 08:53:50 crc kubenswrapper[4946]: I1128 08:53:50.878219 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2856cac5-9342-490f-9649-42fbb5a9e26a-ovsdbserver-sb\") pod \"dnsmasq-dns-55755df8dc-9rgd9\" (UID: \"2856cac5-9342-490f-9649-42fbb5a9e26a\") " pod="openstack/dnsmasq-dns-55755df8dc-9rgd9" Nov 28 08:53:50 crc kubenswrapper[4946]: I1128 08:53:50.878238 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2856cac5-9342-490f-9649-42fbb5a9e26a-ovsdbserver-nb\") pod \"dnsmasq-dns-55755df8dc-9rgd9\" (UID: \"2856cac5-9342-490f-9649-42fbb5a9e26a\") " pod="openstack/dnsmasq-dns-55755df8dc-9rgd9" Nov 28 08:53:50 crc kubenswrapper[4946]: I1128 08:53:50.899324 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbgq4\" (UniqueName: \"kubernetes.io/projected/2856cac5-9342-490f-9649-42fbb5a9e26a-kube-api-access-bbgq4\") pod \"dnsmasq-dns-55755df8dc-9rgd9\" (UID: \"2856cac5-9342-490f-9649-42fbb5a9e26a\") " pod="openstack/dnsmasq-dns-55755df8dc-9rgd9" Nov 28 08:53:51 crc kubenswrapper[4946]: I1128 08:53:51.071867 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55755df8dc-9rgd9" Nov 28 08:53:51 crc kubenswrapper[4946]: I1128 08:53:51.578969 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55755df8dc-9rgd9"] Nov 28 08:53:52 crc kubenswrapper[4946]: I1128 08:53:52.495731 4946 generic.go:334] "Generic (PLEG): container finished" podID="2856cac5-9342-490f-9649-42fbb5a9e26a" containerID="5751103456d3793cf534eca7c18580b530b381ef220ecc9e46f342ac1ee679c5" exitCode=0 Nov 28 08:53:52 crc kubenswrapper[4946]: I1128 08:53:52.495836 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55755df8dc-9rgd9" event={"ID":"2856cac5-9342-490f-9649-42fbb5a9e26a","Type":"ContainerDied","Data":"5751103456d3793cf534eca7c18580b530b381ef220ecc9e46f342ac1ee679c5"} Nov 28 08:53:52 crc kubenswrapper[4946]: I1128 08:53:52.496235 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55755df8dc-9rgd9" event={"ID":"2856cac5-9342-490f-9649-42fbb5a9e26a","Type":"ContainerStarted","Data":"98b737e505c5c63ac947368f2eb19660ba95ae93ea4005332cfdf3f2be748abe"} Nov 28 08:53:53 crc kubenswrapper[4946]: I1128 08:53:53.504783 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55755df8dc-9rgd9" event={"ID":"2856cac5-9342-490f-9649-42fbb5a9e26a","Type":"ContainerStarted","Data":"45f7ad6c99554068fecdd7dddef907d640eaf4b8d06dab276439d31c4b655171"} Nov 28 08:53:53 crc kubenswrapper[4946]: I1128 08:53:53.505293 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55755df8dc-9rgd9" Nov 28 08:53:53 crc kubenswrapper[4946]: I1128 08:53:53.525677 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55755df8dc-9rgd9" podStartSLOduration=3.525655687 podStartE2EDuration="3.525655687s" podCreationTimestamp="2025-11-28 08:53:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 08:53:53.520765536 +0000 UTC m=+7287.898830677" watchObservedRunningTime="2025-11-28 08:53:53.525655687 +0000 UTC m=+7287.903720798" Nov 28 08:54:01 crc kubenswrapper[4946]: I1128 08:54:01.074750 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55755df8dc-9rgd9" Nov 28 08:54:01 crc kubenswrapper[4946]: I1128 08:54:01.166872 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b748879ff-rm7x4"] Nov 28 08:54:01 crc kubenswrapper[4946]: I1128 08:54:01.167159 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b748879ff-rm7x4" podUID="d8b3118f-a393-4a4d-9fff-0846fb6ac6f1" containerName="dnsmasq-dns" containerID="cri-o://83ce25e78bc0d9d1d4d261d549c85653a9e589b62b53e87c7c2251d3f88453aa" gracePeriod=10 Nov 28 08:54:01 crc kubenswrapper[4946]: I1128 08:54:01.598380 4946 generic.go:334] "Generic (PLEG): container finished" podID="d8b3118f-a393-4a4d-9fff-0846fb6ac6f1" containerID="83ce25e78bc0d9d1d4d261d549c85653a9e589b62b53e87c7c2251d3f88453aa" exitCode=0 Nov 28 08:54:01 crc kubenswrapper[4946]: I1128 08:54:01.598468 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b748879ff-rm7x4" event={"ID":"d8b3118f-a393-4a4d-9fff-0846fb6ac6f1","Type":"ContainerDied","Data":"83ce25e78bc0d9d1d4d261d549c85653a9e589b62b53e87c7c2251d3f88453aa"} Nov 28 08:54:01 crc kubenswrapper[4946]: I1128 08:54:01.598763 4946 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/dnsmasq-dns-b748879ff-rm7x4" event={"ID":"d8b3118f-a393-4a4d-9fff-0846fb6ac6f1","Type":"ContainerDied","Data":"3d18fdc3cb019a932f3edc8d6a6119859fd5763266f0430223229713440c6895"} Nov 28 08:54:01 crc kubenswrapper[4946]: I1128 08:54:01.598784 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d18fdc3cb019a932f3edc8d6a6119859fd5763266f0430223229713440c6895" Nov 28 08:54:01 crc kubenswrapper[4946]: I1128 08:54:01.618995 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b748879ff-rm7x4" Nov 28 08:54:01 crc kubenswrapper[4946]: I1128 08:54:01.724582 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgjmj\" (UniqueName: \"kubernetes.io/projected/d8b3118f-a393-4a4d-9fff-0846fb6ac6f1-kube-api-access-mgjmj\") pod \"d8b3118f-a393-4a4d-9fff-0846fb6ac6f1\" (UID: \"d8b3118f-a393-4a4d-9fff-0846fb6ac6f1\") " Nov 28 08:54:01 crc kubenswrapper[4946]: I1128 08:54:01.724696 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d8b3118f-a393-4a4d-9fff-0846fb6ac6f1-ovsdbserver-sb\") pod \"d8b3118f-a393-4a4d-9fff-0846fb6ac6f1\" (UID: \"d8b3118f-a393-4a4d-9fff-0846fb6ac6f1\") " Nov 28 08:54:01 crc kubenswrapper[4946]: I1128 08:54:01.724800 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8b3118f-a393-4a4d-9fff-0846fb6ac6f1-dns-svc\") pod \"d8b3118f-a393-4a4d-9fff-0846fb6ac6f1\" (UID: \"d8b3118f-a393-4a4d-9fff-0846fb6ac6f1\") " Nov 28 08:54:01 crc kubenswrapper[4946]: I1128 08:54:01.724847 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d8b3118f-a393-4a4d-9fff-0846fb6ac6f1-ovsdbserver-nb\") pod \"d8b3118f-a393-4a4d-9fff-0846fb6ac6f1\" (UID: \"d8b3118f-a393-4a4d-9fff-0846fb6ac6f1\") " Nov 28 08:54:01 crc kubenswrapper[4946]: I1128 08:54:01.724875 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8b3118f-a393-4a4d-9fff-0846fb6ac6f1-config\") pod \"d8b3118f-a393-4a4d-9fff-0846fb6ac6f1\" (UID: \"d8b3118f-a393-4a4d-9fff-0846fb6ac6f1\") " Nov 28 08:54:01 crc kubenswrapper[4946]: I1128 08:54:01.732631 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8b3118f-a393-4a4d-9fff-0846fb6ac6f1-kube-api-access-mgjmj" (OuterVolumeSpecName: "kube-api-access-mgjmj") pod "d8b3118f-a393-4a4d-9fff-0846fb6ac6f1" (UID: "d8b3118f-a393-4a4d-9fff-0846fb6ac6f1"). InnerVolumeSpecName "kube-api-access-mgjmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:54:01 crc kubenswrapper[4946]: I1128 08:54:01.777845 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8b3118f-a393-4a4d-9fff-0846fb6ac6f1-config" (OuterVolumeSpecName: "config") pod "d8b3118f-a393-4a4d-9fff-0846fb6ac6f1" (UID: "d8b3118f-a393-4a4d-9fff-0846fb6ac6f1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:54:01 crc kubenswrapper[4946]: I1128 08:54:01.778848 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8b3118f-a393-4a4d-9fff-0846fb6ac6f1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d8b3118f-a393-4a4d-9fff-0846fb6ac6f1" (UID: "d8b3118f-a393-4a4d-9fff-0846fb6ac6f1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:54:01 crc kubenswrapper[4946]: I1128 08:54:01.782347 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8b3118f-a393-4a4d-9fff-0846fb6ac6f1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d8b3118f-a393-4a4d-9fff-0846fb6ac6f1" (UID: "d8b3118f-a393-4a4d-9fff-0846fb6ac6f1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:54:01 crc kubenswrapper[4946]: I1128 08:54:01.799921 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8b3118f-a393-4a4d-9fff-0846fb6ac6f1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d8b3118f-a393-4a4d-9fff-0846fb6ac6f1" (UID: "d8b3118f-a393-4a4d-9fff-0846fb6ac6f1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:54:01 crc kubenswrapper[4946]: I1128 08:54:01.827183 4946 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8b3118f-a393-4a4d-9fff-0846fb6ac6f1-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 08:54:01 crc kubenswrapper[4946]: I1128 08:54:01.827221 4946 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d8b3118f-a393-4a4d-9fff-0846fb6ac6f1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 08:54:01 crc kubenswrapper[4946]: I1128 08:54:01.827234 4946 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8b3118f-a393-4a4d-9fff-0846fb6ac6f1-config\") on node \"crc\" DevicePath \"\"" Nov 28 08:54:01 crc kubenswrapper[4946]: I1128 08:54:01.827246 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgjmj\" (UniqueName: \"kubernetes.io/projected/d8b3118f-a393-4a4d-9fff-0846fb6ac6f1-kube-api-access-mgjmj\") on node \"crc\" DevicePath \"\"" Nov 28 08:54:01 crc kubenswrapper[4946]: I1128 08:54:01.827257 4946 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d8b3118f-a393-4a4d-9fff-0846fb6ac6f1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 08:54:02 crc kubenswrapper[4946]: I1128 08:54:02.606042 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b748879ff-rm7x4" Nov 28 08:54:02 crc kubenswrapper[4946]: I1128 08:54:02.633542 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b748879ff-rm7x4"] Nov 28 08:54:02 crc kubenswrapper[4946]: I1128 08:54:02.640683 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b748879ff-rm7x4"] Nov 28 08:54:04 crc kubenswrapper[4946]: I1128 08:54:04.019991 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8b3118f-a393-4a4d-9fff-0846fb6ac6f1" path="/var/lib/kubelet/pods/d8b3118f-a393-4a4d-9fff-0846fb6ac6f1/volumes" Nov 28 08:54:04 crc kubenswrapper[4946]: I1128 08:54:04.528383 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-qw9dr"] Nov 28 08:54:04 crc kubenswrapper[4946]: E1128 08:54:04.528921 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8b3118f-a393-4a4d-9fff-0846fb6ac6f1" containerName="init" Nov 28 08:54:04 crc kubenswrapper[4946]: I1128 08:54:04.528947 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8b3118f-a393-4a4d-9fff-0846fb6ac6f1" containerName="init" Nov 28 08:54:04 crc kubenswrapper[4946]: E1128 08:54:04.528967 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8b3118f-a393-4a4d-9fff-0846fb6ac6f1" containerName="dnsmasq-dns" Nov 28 08:54:04 crc kubenswrapper[4946]: I1128 08:54:04.528977 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8b3118f-a393-4a4d-9fff-0846fb6ac6f1" containerName="dnsmasq-dns" Nov 28 08:54:04 crc kubenswrapper[4946]: I1128 08:54:04.529218 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8b3118f-a393-4a4d-9fff-0846fb6ac6f1" containerName="dnsmasq-dns" Nov 28 08:54:04 crc kubenswrapper[4946]: I1128 08:54:04.530035 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-qw9dr" Nov 28 08:54:04 crc kubenswrapper[4946]: I1128 08:54:04.537921 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-qw9dr"] Nov 28 08:54:04 crc kubenswrapper[4946]: I1128 08:54:04.580239 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd203623-2c1b-4be7-baf7-cf2238097ed1-operator-scripts\") pod \"cinder-db-create-qw9dr\" (UID: \"cd203623-2c1b-4be7-baf7-cf2238097ed1\") " pod="openstack/cinder-db-create-qw9dr" Nov 28 08:54:04 crc kubenswrapper[4946]: I1128 08:54:04.580344 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjk8s\" (UniqueName: \"kubernetes.io/projected/cd203623-2c1b-4be7-baf7-cf2238097ed1-kube-api-access-qjk8s\") pod \"cinder-db-create-qw9dr\" (UID: \"cd203623-2c1b-4be7-baf7-cf2238097ed1\") " pod="openstack/cinder-db-create-qw9dr" Nov 28 08:54:04 crc kubenswrapper[4946]: I1128 08:54:04.626882 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-9652-account-create-update-xdcdt"] Nov 28 08:54:04 crc kubenswrapper[4946]: I1128 08:54:04.628152 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-9652-account-create-update-xdcdt" Nov 28 08:54:04 crc kubenswrapper[4946]: I1128 08:54:04.630148 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Nov 28 08:54:04 crc kubenswrapper[4946]: I1128 08:54:04.649924 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-9652-account-create-update-xdcdt"] Nov 28 08:54:04 crc kubenswrapper[4946]: I1128 08:54:04.681675 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0cb24b0-63f6-4b88-bf38-35f389e32715-operator-scripts\") pod \"cinder-9652-account-create-update-xdcdt\" (UID: \"e0cb24b0-63f6-4b88-bf38-35f389e32715\") " pod="openstack/cinder-9652-account-create-update-xdcdt" Nov 28 08:54:04 crc kubenswrapper[4946]: I1128 08:54:04.681729 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjk8s\" (UniqueName: \"kubernetes.io/projected/cd203623-2c1b-4be7-baf7-cf2238097ed1-kube-api-access-qjk8s\") pod \"cinder-db-create-qw9dr\" (UID: \"cd203623-2c1b-4be7-baf7-cf2238097ed1\") " pod="openstack/cinder-db-create-qw9dr" Nov 28 08:54:04 crc kubenswrapper[4946]: I1128 08:54:04.681885 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd203623-2c1b-4be7-baf7-cf2238097ed1-operator-scripts\") pod \"cinder-db-create-qw9dr\" (UID: \"cd203623-2c1b-4be7-baf7-cf2238097ed1\") " pod="openstack/cinder-db-create-qw9dr" Nov 28 08:54:04 crc kubenswrapper[4946]: I1128 08:54:04.681914 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dswc\" (UniqueName: \"kubernetes.io/projected/e0cb24b0-63f6-4b88-bf38-35f389e32715-kube-api-access-2dswc\") pod \"cinder-9652-account-create-update-xdcdt\" (UID: \"e0cb24b0-63f6-4b88-bf38-35f389e32715\") " pod="openstack/cinder-9652-account-create-update-xdcdt" Nov 28 08:54:04 crc kubenswrapper[4946]: I1128 08:54:04.683129 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd203623-2c1b-4be7-baf7-cf2238097ed1-operator-scripts\") pod \"cinder-db-create-qw9dr\" (UID: \"cd203623-2c1b-4be7-baf7-cf2238097ed1\") " pod="openstack/cinder-db-create-qw9dr" Nov 28 08:54:04 crc kubenswrapper[4946]: I1128 08:54:04.703187 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjk8s\" (UniqueName: \"kubernetes.io/projected/cd203623-2c1b-4be7-baf7-cf2238097ed1-kube-api-access-qjk8s\") pod \"cinder-db-create-qw9dr\" (UID: \"cd203623-2c1b-4be7-baf7-cf2238097ed1\") " pod="openstack/cinder-db-create-qw9dr" Nov 28 08:54:04 crc kubenswrapper[4946]: I1128 08:54:04.783318 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dswc\" (UniqueName: \"kubernetes.io/projected/e0cb24b0-63f6-4b88-bf38-35f389e32715-kube-api-access-2dswc\") pod \"cinder-9652-account-create-update-xdcdt\" (UID: \"e0cb24b0-63f6-4b88-bf38-35f389e32715\") " pod="openstack/cinder-9652-account-create-update-xdcdt" Nov 28 08:54:04 crc kubenswrapper[4946]: I1128 08:54:04.783716 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0cb24b0-63f6-4b88-bf38-35f389e32715-operator-scripts\") pod 
\"cinder-9652-account-create-update-xdcdt\" (UID: \"e0cb24b0-63f6-4b88-bf38-35f389e32715\") " pod="openstack/cinder-9652-account-create-update-xdcdt" Nov 28 08:54:04 crc kubenswrapper[4946]: I1128 08:54:04.784330 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0cb24b0-63f6-4b88-bf38-35f389e32715-operator-scripts\") pod \"cinder-9652-account-create-update-xdcdt\" (UID: \"e0cb24b0-63f6-4b88-bf38-35f389e32715\") " pod="openstack/cinder-9652-account-create-update-xdcdt" Nov 28 08:54:04 crc kubenswrapper[4946]: I1128 08:54:04.798761 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dswc\" (UniqueName: \"kubernetes.io/projected/e0cb24b0-63f6-4b88-bf38-35f389e32715-kube-api-access-2dswc\") pod \"cinder-9652-account-create-update-xdcdt\" (UID: \"e0cb24b0-63f6-4b88-bf38-35f389e32715\") " pod="openstack/cinder-9652-account-create-update-xdcdt" Nov 28 08:54:04 crc kubenswrapper[4946]: I1128 08:54:04.861759 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-qw9dr" Nov 28 08:54:04 crc kubenswrapper[4946]: I1128 08:54:04.951915 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9652-account-create-update-xdcdt" Nov 28 08:54:05 crc kubenswrapper[4946]: I1128 08:54:05.361219 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-qw9dr"] Nov 28 08:54:05 crc kubenswrapper[4946]: W1128 08:54:05.361905 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd203623_2c1b_4be7_baf7_cf2238097ed1.slice/crio-846bc8f32230b59d91cde4fb35bbf3302d27a1f7cf1e3107597f4fac503b84a8 WatchSource:0}: Error finding container 846bc8f32230b59d91cde4fb35bbf3302d27a1f7cf1e3107597f4fac503b84a8: Status 404 returned error can't find the container with id 846bc8f32230b59d91cde4fb35bbf3302d27a1f7cf1e3107597f4fac503b84a8 Nov 28 08:54:05 crc kubenswrapper[4946]: I1128 08:54:05.470507 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-9652-account-create-update-xdcdt"] Nov 28 08:54:05 crc kubenswrapper[4946]: W1128 08:54:05.478118 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0cb24b0_63f6_4b88_bf38_35f389e32715.slice/crio-bf3b692c13a4317c030855b5bcf71ced0fdc6041aa8d966f37073c4f9d13ec1e WatchSource:0}: Error finding container bf3b692c13a4317c030855b5bcf71ced0fdc6041aa8d966f37073c4f9d13ec1e: Status 404 returned error can't find the container with id bf3b692c13a4317c030855b5bcf71ced0fdc6041aa8d966f37073c4f9d13ec1e Nov 28 08:54:05 crc kubenswrapper[4946]: I1128 08:54:05.638208 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9652-account-create-update-xdcdt" event={"ID":"e0cb24b0-63f6-4b88-bf38-35f389e32715","Type":"ContainerStarted","Data":"f0f1801f8e4c0a7eec8dd6fc1bf01473999ebdec4b25ac1b518d836278a3232f"} Nov 28 08:54:05 crc kubenswrapper[4946]: I1128 08:54:05.638265 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9652-account-create-update-xdcdt" event={"ID":"e0cb24b0-63f6-4b88-bf38-35f389e32715","Type":"ContainerStarted","Data":"bf3b692c13a4317c030855b5bcf71ced0fdc6041aa8d966f37073c4f9d13ec1e"} Nov 28 08:54:05 crc kubenswrapper[4946]: I1128 08:54:05.640558 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-db-create-qw9dr" event={"ID":"cd203623-2c1b-4be7-baf7-cf2238097ed1","Type":"ContainerStarted","Data":"a612814cb9da1360fac311a79a14baf6bad243c6d53ea79be2931a0e660794ec"} Nov 28 08:54:05 crc kubenswrapper[4946]: I1128 08:54:05.640666 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-qw9dr" event={"ID":"cd203623-2c1b-4be7-baf7-cf2238097ed1","Type":"ContainerStarted","Data":"846bc8f32230b59d91cde4fb35bbf3302d27a1f7cf1e3107597f4fac503b84a8"} Nov 28 08:54:05 crc kubenswrapper[4946]: I1128 08:54:05.650932 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-9652-account-create-update-xdcdt" podStartSLOduration=1.650910595 podStartE2EDuration="1.650910595s" podCreationTimestamp="2025-11-28 08:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 08:54:05.64871755 +0000 UTC m=+7300.026782671" watchObservedRunningTime="2025-11-28 08:54:05.650910595 +0000 UTC m=+7300.028975716" Nov 28 08:54:05 crc kubenswrapper[4946]: I1128 08:54:05.666001 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-qw9dr" podStartSLOduration=1.665982558 podStartE2EDuration="1.665982558s" podCreationTimestamp="2025-11-28 08:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 08:54:05.660863311 +0000 UTC m=+7300.038928432" watchObservedRunningTime="2025-11-28 08:54:05.665982558 +0000 UTC m=+7300.044047669" Nov 28 08:54:06 crc kubenswrapper[4946]: I1128 08:54:06.657789 4946 generic.go:334] "Generic (PLEG): container finished" podID="cd203623-2c1b-4be7-baf7-cf2238097ed1" containerID="a612814cb9da1360fac311a79a14baf6bad243c6d53ea79be2931a0e660794ec" exitCode=0 Nov 28 08:54:06 crc kubenswrapper[4946]: I1128 08:54:06.657827 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-qw9dr" event={"ID":"cd203623-2c1b-4be7-baf7-cf2238097ed1","Type":"ContainerDied","Data":"a612814cb9da1360fac311a79a14baf6bad243c6d53ea79be2931a0e660794ec"} Nov 28 08:54:06 crc kubenswrapper[4946]: I1128 08:54:06.661182 4946 generic.go:334] "Generic (PLEG): container finished" podID="e0cb24b0-63f6-4b88-bf38-35f389e32715" containerID="f0f1801f8e4c0a7eec8dd6fc1bf01473999ebdec4b25ac1b518d836278a3232f" exitCode=0 Nov 28 08:54:06 crc kubenswrapper[4946]: I1128 08:54:06.661256 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9652-account-create-update-xdcdt" event={"ID":"e0cb24b0-63f6-4b88-bf38-35f389e32715","Type":"ContainerDied","Data":"f0f1801f8e4c0a7eec8dd6fc1bf01473999ebdec4b25ac1b518d836278a3232f"} Nov 28 08:54:08 crc kubenswrapper[4946]: I1128 08:54:08.211691 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-qw9dr" Nov 28 08:54:08 crc kubenswrapper[4946]: I1128 08:54:08.218803 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-9652-account-create-update-xdcdt" Nov 28 08:54:08 crc kubenswrapper[4946]: I1128 08:54:08.353110 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjk8s\" (UniqueName: \"kubernetes.io/projected/cd203623-2c1b-4be7-baf7-cf2238097ed1-kube-api-access-qjk8s\") pod \"cd203623-2c1b-4be7-baf7-cf2238097ed1\" (UID: \"cd203623-2c1b-4be7-baf7-cf2238097ed1\") " Nov 28 08:54:08 crc kubenswrapper[4946]: I1128 08:54:08.353207 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd203623-2c1b-4be7-baf7-cf2238097ed1-operator-scripts\") pod \"cd203623-2c1b-4be7-baf7-cf2238097ed1\" (UID: \"cd203623-2c1b-4be7-baf7-cf2238097ed1\") " Nov 28 08:54:08 crc kubenswrapper[4946]: I1128 08:54:08.353329 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0cb24b0-63f6-4b88-bf38-35f389e32715-operator-scripts\") pod \"e0cb24b0-63f6-4b88-bf38-35f389e32715\" (UID: \"e0cb24b0-63f6-4b88-bf38-35f389e32715\") " Nov 28 08:54:08 crc kubenswrapper[4946]: I1128 08:54:08.353408 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dswc\" (UniqueName: \"kubernetes.io/projected/e0cb24b0-63f6-4b88-bf38-35f389e32715-kube-api-access-2dswc\") pod \"e0cb24b0-63f6-4b88-bf38-35f389e32715\" (UID: \"e0cb24b0-63f6-4b88-bf38-35f389e32715\") " Nov 28 08:54:08 crc kubenswrapper[4946]: I1128 08:54:08.354230 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0cb24b0-63f6-4b88-bf38-35f389e32715-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e0cb24b0-63f6-4b88-bf38-35f389e32715" (UID: "e0cb24b0-63f6-4b88-bf38-35f389e32715"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:54:08 crc kubenswrapper[4946]: I1128 08:54:08.354818 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd203623-2c1b-4be7-baf7-cf2238097ed1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cd203623-2c1b-4be7-baf7-cf2238097ed1" (UID: "cd203623-2c1b-4be7-baf7-cf2238097ed1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:54:08 crc kubenswrapper[4946]: I1128 08:54:08.361608 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd203623-2c1b-4be7-baf7-cf2238097ed1-kube-api-access-qjk8s" (OuterVolumeSpecName: "kube-api-access-qjk8s") pod "cd203623-2c1b-4be7-baf7-cf2238097ed1" (UID: "cd203623-2c1b-4be7-baf7-cf2238097ed1"). InnerVolumeSpecName "kube-api-access-qjk8s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:54:08 crc kubenswrapper[4946]: I1128 08:54:08.362275 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0cb24b0-63f6-4b88-bf38-35f389e32715-kube-api-access-2dswc" (OuterVolumeSpecName: "kube-api-access-2dswc") pod "e0cb24b0-63f6-4b88-bf38-35f389e32715" (UID: "e0cb24b0-63f6-4b88-bf38-35f389e32715"). InnerVolumeSpecName "kube-api-access-2dswc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:54:08 crc kubenswrapper[4946]: I1128 08:54:08.455426 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dswc\" (UniqueName: \"kubernetes.io/projected/e0cb24b0-63f6-4b88-bf38-35f389e32715-kube-api-access-2dswc\") on node \"crc\" DevicePath \"\"" Nov 28 08:54:08 crc kubenswrapper[4946]: I1128 08:54:08.455543 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjk8s\" (UniqueName: \"kubernetes.io/projected/cd203623-2c1b-4be7-baf7-cf2238097ed1-kube-api-access-qjk8s\") on node \"crc\" DevicePath \"\"" Nov 28 08:54:08 crc kubenswrapper[4946]: I1128 08:54:08.455566 4946 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd203623-2c1b-4be7-baf7-cf2238097ed1-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 08:54:08 crc kubenswrapper[4946]: I1128 08:54:08.455583 4946 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0cb24b0-63f6-4b88-bf38-35f389e32715-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 08:54:08 crc kubenswrapper[4946]: I1128 08:54:08.694432 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-qw9dr" event={"ID":"cd203623-2c1b-4be7-baf7-cf2238097ed1","Type":"ContainerDied","Data":"846bc8f32230b59d91cde4fb35bbf3302d27a1f7cf1e3107597f4fac503b84a8"} Nov 28 08:54:08 crc kubenswrapper[4946]: I1128 08:54:08.694537 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="846bc8f32230b59d91cde4fb35bbf3302d27a1f7cf1e3107597f4fac503b84a8" Nov 28 08:54:08 crc kubenswrapper[4946]: I1128 08:54:08.694500 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-qw9dr" Nov 28 08:54:08 crc kubenswrapper[4946]: I1128 08:54:08.696839 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9652-account-create-update-xdcdt" event={"ID":"e0cb24b0-63f6-4b88-bf38-35f389e32715","Type":"ContainerDied","Data":"bf3b692c13a4317c030855b5bcf71ced0fdc6041aa8d966f37073c4f9d13ec1e"} Nov 28 08:54:08 crc kubenswrapper[4946]: I1128 08:54:08.696894 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf3b692c13a4317c030855b5bcf71ced0fdc6041aa8d966f37073c4f9d13ec1e" Nov 28 08:54:08 crc kubenswrapper[4946]: I1128 08:54:08.696902 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-9652-account-create-update-xdcdt" Nov 28 08:54:09 crc kubenswrapper[4946]: I1128 08:54:09.892246 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-4jgss"] Nov 28 08:54:09 crc kubenswrapper[4946]: E1128 08:54:09.892833 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd203623-2c1b-4be7-baf7-cf2238097ed1" containerName="mariadb-database-create" Nov 28 08:54:09 crc kubenswrapper[4946]: I1128 08:54:09.892858 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd203623-2c1b-4be7-baf7-cf2238097ed1" containerName="mariadb-database-create" Nov 28 08:54:09 crc kubenswrapper[4946]: E1128 08:54:09.892911 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0cb24b0-63f6-4b88-bf38-35f389e32715" containerName="mariadb-account-create-update" Nov 28 08:54:09 crc kubenswrapper[4946]: I1128 08:54:09.892922 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0cb24b0-63f6-4b88-bf38-35f389e32715" containerName="mariadb-account-create-update" Nov 28 08:54:09 crc kubenswrapper[4946]: I1128 08:54:09.893188 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0cb24b0-63f6-4b88-bf38-35f389e32715" containerName="mariadb-account-create-update" Nov 28 08:54:09 crc kubenswrapper[4946]: I1128 08:54:09.893221 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd203623-2c1b-4be7-baf7-cf2238097ed1" containerName="mariadb-database-create" Nov 28 08:54:09 crc kubenswrapper[4946]: I1128 08:54:09.894156 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-4jgss" Nov 28 08:54:09 crc kubenswrapper[4946]: I1128 08:54:09.897802 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 28 08:54:09 crc kubenswrapper[4946]: I1128 08:54:09.897896 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-pnzlq" Nov 28 08:54:09 crc kubenswrapper[4946]: I1128 08:54:09.897813 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 28 08:54:09 crc kubenswrapper[4946]: I1128 08:54:09.928121 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-4jgss"] Nov 28 08:54:10 crc kubenswrapper[4946]: I1128 08:54:10.086166 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/75f84d54-d949-4047-becb-0d08af1d91b5-etc-machine-id\") pod \"cinder-db-sync-4jgss\" (UID: \"75f84d54-d949-4047-becb-0d08af1d91b5\") " pod="openstack/cinder-db-sync-4jgss" Nov 28 08:54:10 crc kubenswrapper[4946]: I1128 08:54:10.086223 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75f84d54-d949-4047-becb-0d08af1d91b5-config-data\") pod \"cinder-db-sync-4jgss\" (UID: \"75f84d54-d949-4047-becb-0d08af1d91b5\") " pod="openstack/cinder-db-sync-4jgss" Nov 28 08:54:10 crc kubenswrapper[4946]: I1128 08:54:10.086271 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/75f84d54-d949-4047-becb-0d08af1d91b5-db-sync-config-data\") pod \"cinder-db-sync-4jgss\" (UID: \"75f84d54-d949-4047-becb-0d08af1d91b5\") " pod="openstack/cinder-db-sync-4jgss" Nov 28 08:54:10 crc kubenswrapper[4946]: I1128 
08:54:10.086385 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75f84d54-d949-4047-becb-0d08af1d91b5-scripts\") pod \"cinder-db-sync-4jgss\" (UID: \"75f84d54-d949-4047-becb-0d08af1d91b5\") " pod="openstack/cinder-db-sync-4jgss" Nov 28 08:54:10 crc kubenswrapper[4946]: I1128 08:54:10.086778 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5pdd\" (UniqueName: \"kubernetes.io/projected/75f84d54-d949-4047-becb-0d08af1d91b5-kube-api-access-l5pdd\") pod \"cinder-db-sync-4jgss\" (UID: \"75f84d54-d949-4047-becb-0d08af1d91b5\") " pod="openstack/cinder-db-sync-4jgss" Nov 28 08:54:10 crc kubenswrapper[4946]: I1128 08:54:10.086830 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75f84d54-d949-4047-becb-0d08af1d91b5-combined-ca-bundle\") pod \"cinder-db-sync-4jgss\" (UID: \"75f84d54-d949-4047-becb-0d08af1d91b5\") " pod="openstack/cinder-db-sync-4jgss" Nov 28 08:54:10 crc kubenswrapper[4946]: I1128 08:54:10.188965 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75f84d54-d949-4047-becb-0d08af1d91b5-combined-ca-bundle\") pod \"cinder-db-sync-4jgss\" (UID: \"75f84d54-d949-4047-becb-0d08af1d91b5\") " pod="openstack/cinder-db-sync-4jgss" Nov 28 08:54:10 crc kubenswrapper[4946]: I1128 08:54:10.189537 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/75f84d54-d949-4047-becb-0d08af1d91b5-etc-machine-id\") pod \"cinder-db-sync-4jgss\" (UID: \"75f84d54-d949-4047-becb-0d08af1d91b5\") " pod="openstack/cinder-db-sync-4jgss" Nov 28 08:54:10 crc kubenswrapper[4946]: I1128 08:54:10.189596 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75f84d54-d949-4047-becb-0d08af1d91b5-config-data\") pod \"cinder-db-sync-4jgss\" (UID: \"75f84d54-d949-4047-becb-0d08af1d91b5\") " pod="openstack/cinder-db-sync-4jgss" Nov 28 08:54:10 crc kubenswrapper[4946]: I1128 08:54:10.189703 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/75f84d54-d949-4047-becb-0d08af1d91b5-etc-machine-id\") pod \"cinder-db-sync-4jgss\" (UID: \"75f84d54-d949-4047-becb-0d08af1d91b5\") " pod="openstack/cinder-db-sync-4jgss" Nov 28 08:54:10 crc kubenswrapper[4946]: I1128 08:54:10.189783 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/75f84d54-d949-4047-becb-0d08af1d91b5-db-sync-config-data\") pod \"cinder-db-sync-4jgss\" (UID: \"75f84d54-d949-4047-becb-0d08af1d91b5\") " pod="openstack/cinder-db-sync-4jgss" Nov 28 08:54:10 crc kubenswrapper[4946]: I1128 08:54:10.190639 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75f84d54-d949-4047-becb-0d08af1d91b5-scripts\") pod \"cinder-db-sync-4jgss\" (UID: \"75f84d54-d949-4047-becb-0d08af1d91b5\") " pod="openstack/cinder-db-sync-4jgss" Nov 28 08:54:10 crc kubenswrapper[4946]: I1128 08:54:10.191269 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5pdd\" (UniqueName: 
\"kubernetes.io/projected/75f84d54-d949-4047-becb-0d08af1d91b5-kube-api-access-l5pdd\") pod \"cinder-db-sync-4jgss\" (UID: \"75f84d54-d949-4047-becb-0d08af1d91b5\") " pod="openstack/cinder-db-sync-4jgss" Nov 28 08:54:10 crc kubenswrapper[4946]: I1128 08:54:10.195658 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/75f84d54-d949-4047-becb-0d08af1d91b5-db-sync-config-data\") pod \"cinder-db-sync-4jgss\" (UID: \"75f84d54-d949-4047-becb-0d08af1d91b5\") " pod="openstack/cinder-db-sync-4jgss" Nov 28 08:54:10 crc kubenswrapper[4946]: I1128 08:54:10.202096 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75f84d54-d949-4047-becb-0d08af1d91b5-scripts\") pod \"cinder-db-sync-4jgss\" (UID: \"75f84d54-d949-4047-becb-0d08af1d91b5\") " pod="openstack/cinder-db-sync-4jgss" Nov 28 08:54:10 crc kubenswrapper[4946]: I1128 08:54:10.210876 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75f84d54-d949-4047-becb-0d08af1d91b5-config-data\") pod \"cinder-db-sync-4jgss\" (UID: \"75f84d54-d949-4047-becb-0d08af1d91b5\") " pod="openstack/cinder-db-sync-4jgss" Nov 28 08:54:10 crc kubenswrapper[4946]: I1128 08:54:10.216092 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75f84d54-d949-4047-becb-0d08af1d91b5-combined-ca-bundle\") pod \"cinder-db-sync-4jgss\" (UID: \"75f84d54-d949-4047-becb-0d08af1d91b5\") " pod="openstack/cinder-db-sync-4jgss" Nov 28 08:54:10 crc kubenswrapper[4946]: I1128 08:54:10.220961 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5pdd\" (UniqueName: \"kubernetes.io/projected/75f84d54-d949-4047-becb-0d08af1d91b5-kube-api-access-l5pdd\") pod \"cinder-db-sync-4jgss\" (UID: \"75f84d54-d949-4047-becb-0d08af1d91b5\") " pod="openstack/cinder-db-sync-4jgss" Nov 28 08:54:10 crc kubenswrapper[4946]: I1128 08:54:10.516630 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-4jgss" Nov 28 08:54:10 crc kubenswrapper[4946]: I1128 08:54:10.973139 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-4jgss"] Nov 28 08:54:11 crc kubenswrapper[4946]: I1128 08:54:11.729193 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4jgss" event={"ID":"75f84d54-d949-4047-becb-0d08af1d91b5","Type":"ContainerStarted","Data":"616808e8cbd2e6d1bca67dedb0c760c5c7ba833687f7b33d5a5e94017d347b80"} Nov 28 08:54:30 crc kubenswrapper[4946]: I1128 08:54:30.919185 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4jgss" event={"ID":"75f84d54-d949-4047-becb-0d08af1d91b5","Type":"ContainerStarted","Data":"342b9a426b093662706492b35854dec560b3a681a5cfa698c6bf87229c603473"} Nov 28 08:54:30 crc kubenswrapper[4946]: I1128 08:54:30.950795 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-4jgss" podStartSLOduration=2.896454843 podStartE2EDuration="21.950766118s" podCreationTimestamp="2025-11-28 08:54:09 +0000 UTC" firstStartedPulling="2025-11-28 08:54:10.974335486 +0000 UTC m=+7305.352400597" lastFinishedPulling="2025-11-28 08:54:30.028646751 +0000 UTC m=+7324.406711872" observedRunningTime="2025-11-28 08:54:30.944551214 +0000 UTC m=+7325.322616365" watchObservedRunningTime="2025-11-28 08:54:30.950766118 +0000 UTC m=+7325.328831269" Nov 28 08:54:34 crc kubenswrapper[4946]: I1128 08:54:34.018500 4946 generic.go:334] "Generic (PLEG): container finished" podID="75f84d54-d949-4047-becb-0d08af1d91b5" containerID="342b9a426b093662706492b35854dec560b3a681a5cfa698c6bf87229c603473" exitCode=0 Nov 28 08:54:34 crc kubenswrapper[4946]: I1128 08:54:34.018568 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4jgss" event={"ID":"75f84d54-d949-4047-becb-0d08af1d91b5","Type":"ContainerDied","Data":"342b9a426b093662706492b35854dec560b3a681a5cfa698c6bf87229c603473"} Nov 28 08:54:35 crc kubenswrapper[4946]: I1128 08:54:35.428007 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-4jgss" Nov 28 08:54:35 crc kubenswrapper[4946]: I1128 08:54:35.448495 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75f84d54-d949-4047-becb-0d08af1d91b5-config-data\") pod \"75f84d54-d949-4047-becb-0d08af1d91b5\" (UID: \"75f84d54-d949-4047-becb-0d08af1d91b5\") " Nov 28 08:54:35 crc kubenswrapper[4946]: I1128 08:54:35.448630 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75f84d54-d949-4047-becb-0d08af1d91b5-combined-ca-bundle\") pod \"75f84d54-d949-4047-becb-0d08af1d91b5\" (UID: \"75f84d54-d949-4047-becb-0d08af1d91b5\") " Nov 28 08:54:35 crc kubenswrapper[4946]: I1128 08:54:35.448719 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/75f84d54-d949-4047-becb-0d08af1d91b5-db-sync-config-data\") pod \"75f84d54-d949-4047-becb-0d08af1d91b5\" (UID: \"75f84d54-d949-4047-becb-0d08af1d91b5\") " Nov 28 08:54:35 crc kubenswrapper[4946]: I1128 08:54:35.448747 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5pdd\" (UniqueName: \"kubernetes.io/projected/75f84d54-d949-4047-becb-0d08af1d91b5-kube-api-access-l5pdd\") pod \"75f84d54-d949-4047-becb-0d08af1d91b5\" (UID: \"75f84d54-d949-4047-becb-0d08af1d91b5\") " Nov 28 08:54:35 crc kubenswrapper[4946]: I1128 08:54:35.453994 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/75f84d54-d949-4047-becb-0d08af1d91b5-etc-machine-id\") pod \"75f84d54-d949-4047-becb-0d08af1d91b5\" (UID: \"75f84d54-d949-4047-becb-0d08af1d91b5\") " Nov 28 08:54:35 crc kubenswrapper[4946]: I1128 08:54:35.454064 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75f84d54-d949-4047-becb-0d08af1d91b5-scripts\") pod \"75f84d54-d949-4047-becb-0d08af1d91b5\" (UID: \"75f84d54-d949-4047-becb-0d08af1d91b5\") " Nov 28 08:54:35 crc kubenswrapper[4946]: I1128 08:54:35.454359 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75f84d54-d949-4047-becb-0d08af1d91b5-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "75f84d54-d949-4047-becb-0d08af1d91b5" (UID: "75f84d54-d949-4047-becb-0d08af1d91b5"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 08:54:35 crc kubenswrapper[4946]: I1128 08:54:35.454990 4946 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/75f84d54-d949-4047-becb-0d08af1d91b5-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 28 08:54:35 crc kubenswrapper[4946]: I1128 08:54:35.456874 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75f84d54-d949-4047-becb-0d08af1d91b5-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "75f84d54-d949-4047-becb-0d08af1d91b5" (UID: "75f84d54-d949-4047-becb-0d08af1d91b5"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:54:35 crc kubenswrapper[4946]: I1128 08:54:35.459704 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75f84d54-d949-4047-becb-0d08af1d91b5-kube-api-access-l5pdd" (OuterVolumeSpecName: "kube-api-access-l5pdd") pod "75f84d54-d949-4047-becb-0d08af1d91b5" (UID: "75f84d54-d949-4047-becb-0d08af1d91b5"). InnerVolumeSpecName "kube-api-access-l5pdd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:54:35 crc kubenswrapper[4946]: I1128 08:54:35.473648 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75f84d54-d949-4047-becb-0d08af1d91b5-scripts" (OuterVolumeSpecName: "scripts") pod "75f84d54-d949-4047-becb-0d08af1d91b5" (UID: "75f84d54-d949-4047-becb-0d08af1d91b5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:54:35 crc kubenswrapper[4946]: I1128 08:54:35.507114 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75f84d54-d949-4047-becb-0d08af1d91b5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75f84d54-d949-4047-becb-0d08af1d91b5" (UID: "75f84d54-d949-4047-becb-0d08af1d91b5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:54:35 crc kubenswrapper[4946]: I1128 08:54:35.510238 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75f84d54-d949-4047-becb-0d08af1d91b5-config-data" (OuterVolumeSpecName: "config-data") pod "75f84d54-d949-4047-becb-0d08af1d91b5" (UID: "75f84d54-d949-4047-becb-0d08af1d91b5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:54:35 crc kubenswrapper[4946]: I1128 08:54:35.557144 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75f84d54-d949-4047-becb-0d08af1d91b5-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 08:54:35 crc kubenswrapper[4946]: I1128 08:54:35.557178 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75f84d54-d949-4047-becb-0d08af1d91b5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 08:54:35 crc kubenswrapper[4946]: I1128 08:54:35.557192 4946 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/75f84d54-d949-4047-becb-0d08af1d91b5-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 08:54:35 crc kubenswrapper[4946]: I1128 08:54:35.557204 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5pdd\" (UniqueName: \"kubernetes.io/projected/75f84d54-d949-4047-becb-0d08af1d91b5-kube-api-access-l5pdd\") on node \"crc\" DevicePath \"\"" Nov 28 08:54:35 crc kubenswrapper[4946]: I1128 08:54:35.557217 4946 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75f84d54-d949-4047-becb-0d08af1d91b5-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 08:54:36 crc kubenswrapper[4946]: I1128 08:54:36.048130 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4jgss" event={"ID":"75f84d54-d949-4047-becb-0d08af1d91b5","Type":"ContainerDied","Data":"616808e8cbd2e6d1bca67dedb0c760c5c7ba833687f7b33d5a5e94017d347b80"} Nov 28 08:54:36 crc kubenswrapper[4946]: I1128 08:54:36.048177 4946 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="616808e8cbd2e6d1bca67dedb0c760c5c7ba833687f7b33d5a5e94017d347b80" Nov 28 08:54:36 crc kubenswrapper[4946]: I1128 08:54:36.048185 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-4jgss" Nov 28 08:54:36 crc kubenswrapper[4946]: I1128 08:54:36.488187 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-657f9b46c5-6chv7"] Nov 28 08:54:36 crc kubenswrapper[4946]: E1128 08:54:36.489262 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75f84d54-d949-4047-becb-0d08af1d91b5" containerName="cinder-db-sync" Nov 28 08:54:36 crc kubenswrapper[4946]: I1128 08:54:36.489280 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="75f84d54-d949-4047-becb-0d08af1d91b5" containerName="cinder-db-sync" Nov 28 08:54:36 crc kubenswrapper[4946]: I1128 08:54:36.489554 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="75f84d54-d949-4047-becb-0d08af1d91b5" containerName="cinder-db-sync" Nov 28 08:54:36 crc kubenswrapper[4946]: I1128 08:54:36.493360 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-657f9b46c5-6chv7" Nov 28 08:54:36 crc kubenswrapper[4946]: I1128 08:54:36.520457 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-657f9b46c5-6chv7"] Nov 28 08:54:36 crc kubenswrapper[4946]: I1128 08:54:36.601609 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hpz5\" (UniqueName: \"kubernetes.io/projected/ea2a6b0f-c72f-4623-acf1-efe087189a34-kube-api-access-7hpz5\") pod \"dnsmasq-dns-657f9b46c5-6chv7\" (UID: \"ea2a6b0f-c72f-4623-acf1-efe087189a34\") " pod="openstack/dnsmasq-dns-657f9b46c5-6chv7" Nov 28 08:54:36 crc kubenswrapper[4946]: I1128 08:54:36.601673 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea2a6b0f-c72f-4623-acf1-efe087189a34-config\") pod \"dnsmasq-dns-657f9b46c5-6chv7\" (UID: \"ea2a6b0f-c72f-4623-acf1-efe087189a34\") " pod="openstack/dnsmasq-dns-657f9b46c5-6chv7" Nov 28 08:54:36 crc kubenswrapper[4946]: I1128 08:54:36.601706 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea2a6b0f-c72f-4623-acf1-efe087189a34-dns-svc\") pod \"dnsmasq-dns-657f9b46c5-6chv7\" (UID: \"ea2a6b0f-c72f-4623-acf1-efe087189a34\") " pod="openstack/dnsmasq-dns-657f9b46c5-6chv7" Nov 28 08:54:36 crc kubenswrapper[4946]: I1128 08:54:36.601722 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea2a6b0f-c72f-4623-acf1-efe087189a34-ovsdbserver-sb\") pod \"dnsmasq-dns-657f9b46c5-6chv7\" (UID: \"ea2a6b0f-c72f-4623-acf1-efe087189a34\") " pod="openstack/dnsmasq-dns-657f9b46c5-6chv7" Nov 28 08:54:36 crc kubenswrapper[4946]: I1128 08:54:36.601783 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea2a6b0f-c72f-4623-acf1-efe087189a34-ovsdbserver-nb\") pod \"dnsmasq-dns-657f9b46c5-6chv7\" (UID: \"ea2a6b0f-c72f-4623-acf1-efe087189a34\") " pod="openstack/dnsmasq-dns-657f9b46c5-6chv7" Nov 28 08:54:36 crc kubenswrapper[4946]: I1128 08:54:36.721869 4946 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea2a6b0f-c72f-4623-acf1-efe087189a34-ovsdbserver-nb\") pod \"dnsmasq-dns-657f9b46c5-6chv7\" (UID: \"ea2a6b0f-c72f-4623-acf1-efe087189a34\") " pod="openstack/dnsmasq-dns-657f9b46c5-6chv7" Nov 28 08:54:36 crc kubenswrapper[4946]: I1128 08:54:36.724120 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hpz5\" (UniqueName: \"kubernetes.io/projected/ea2a6b0f-c72f-4623-acf1-efe087189a34-kube-api-access-7hpz5\") pod \"dnsmasq-dns-657f9b46c5-6chv7\" (UID: \"ea2a6b0f-c72f-4623-acf1-efe087189a34\") " pod="openstack/dnsmasq-dns-657f9b46c5-6chv7" Nov 28 08:54:36 crc kubenswrapper[4946]: I1128 08:54:36.728845 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea2a6b0f-c72f-4623-acf1-efe087189a34-config\") pod \"dnsmasq-dns-657f9b46c5-6chv7\" (UID: \"ea2a6b0f-c72f-4623-acf1-efe087189a34\") " pod="openstack/dnsmasq-dns-657f9b46c5-6chv7" Nov 28 08:54:36 crc kubenswrapper[4946]: I1128 08:54:36.729194 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea2a6b0f-c72f-4623-acf1-efe087189a34-dns-svc\") pod \"dnsmasq-dns-657f9b46c5-6chv7\" (UID: \"ea2a6b0f-c72f-4623-acf1-efe087189a34\") " pod="openstack/dnsmasq-dns-657f9b46c5-6chv7" Nov 28 08:54:36 crc kubenswrapper[4946]: I1128 08:54:36.729305 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea2a6b0f-c72f-4623-acf1-efe087189a34-ovsdbserver-sb\") pod \"dnsmasq-dns-657f9b46c5-6chv7\" (UID: \"ea2a6b0f-c72f-4623-acf1-efe087189a34\") " pod="openstack/dnsmasq-dns-657f9b46c5-6chv7" Nov 28 08:54:36 crc kubenswrapper[4946]: I1128 08:54:36.724727 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea2a6b0f-c72f-4623-acf1-efe087189a34-ovsdbserver-nb\") pod \"dnsmasq-dns-657f9b46c5-6chv7\" (UID: \"ea2a6b0f-c72f-4623-acf1-efe087189a34\") " pod="openstack/dnsmasq-dns-657f9b46c5-6chv7" Nov 28 08:54:36 crc kubenswrapper[4946]: I1128 08:54:36.731698 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 28 08:54:36 crc kubenswrapper[4946]: I1128 08:54:36.731921 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea2a6b0f-c72f-4623-acf1-efe087189a34-config\") pod \"dnsmasq-dns-657f9b46c5-6chv7\" (UID: \"ea2a6b0f-c72f-4623-acf1-efe087189a34\") " pod="openstack/dnsmasq-dns-657f9b46c5-6chv7" Nov 28 08:54:36 crc kubenswrapper[4946]: I1128 08:54:36.733213 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 28 08:54:36 crc kubenswrapper[4946]: I1128 08:54:36.734999 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea2a6b0f-c72f-4623-acf1-efe087189a34-dns-svc\") pod \"dnsmasq-dns-657f9b46c5-6chv7\" (UID: \"ea2a6b0f-c72f-4623-acf1-efe087189a34\") " pod="openstack/dnsmasq-dns-657f9b46c5-6chv7" Nov 28 08:54:36 crc kubenswrapper[4946]: I1128 08:54:36.738823 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea2a6b0f-c72f-4623-acf1-efe087189a34-ovsdbserver-sb\") pod \"dnsmasq-dns-657f9b46c5-6chv7\" (UID: \"ea2a6b0f-c72f-4623-acf1-efe087189a34\") " pod="openstack/dnsmasq-dns-657f9b46c5-6chv7" Nov 28 08:54:36 crc kubenswrapper[4946]: I1128 08:54:36.762047 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 28 08:54:36 crc kubenswrapper[4946]: I1128 08:54:36.762913 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 28 08:54:36 crc kubenswrapper[4946]: I1128 08:54:36.763321 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 28 08:54:36 crc kubenswrapper[4946]: I1128 08:54:36.763537 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-pnzlq" Nov 28 08:54:36 crc kubenswrapper[4946]: I1128 08:54:36.766778 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 28 08:54:36 crc kubenswrapper[4946]: I1128 08:54:36.775266 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hpz5\" (UniqueName: \"kubernetes.io/projected/ea2a6b0f-c72f-4623-acf1-efe087189a34-kube-api-access-7hpz5\") pod \"dnsmasq-dns-657f9b46c5-6chv7\" (UID: \"ea2a6b0f-c72f-4623-acf1-efe087189a34\") " pod="openstack/dnsmasq-dns-657f9b46c5-6chv7" Nov 28 08:54:36 crc kubenswrapper[4946]: I1128 08:54:36.835121 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eeece55-15ef-4cd1-9f27-525991788887-config-data\") pod \"cinder-api-0\" (UID: \"9eeece55-15ef-4cd1-9f27-525991788887\") " pod="openstack/cinder-api-0" Nov 28 08:54:36 crc kubenswrapper[4946]: I1128 08:54:36.835179 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4l4q\" (UniqueName: \"kubernetes.io/projected/9eeece55-15ef-4cd1-9f27-525991788887-kube-api-access-l4l4q\") pod \"cinder-api-0\" (UID: \"9eeece55-15ef-4cd1-9f27-525991788887\") " pod="openstack/cinder-api-0" Nov 28 08:54:36 crc kubenswrapper[4946]: I1128 08:54:36.835208 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9eeece55-15ef-4cd1-9f27-525991788887-config-data-custom\") pod \"cinder-api-0\" (UID: \"9eeece55-15ef-4cd1-9f27-525991788887\") " pod="openstack/cinder-api-0" Nov 28 08:54:36 crc kubenswrapper[4946]: I1128 08:54:36.835238 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eeece55-15ef-4cd1-9f27-525991788887-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9eeece55-15ef-4cd1-9f27-525991788887\") " pod="openstack/cinder-api-0" Nov 28 08:54:36 crc 
kubenswrapper[4946]: I1128 08:54:36.835267 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9eeece55-15ef-4cd1-9f27-525991788887-scripts\") pod \"cinder-api-0\" (UID: \"9eeece55-15ef-4cd1-9f27-525991788887\") " pod="openstack/cinder-api-0" Nov 28 08:54:36 crc kubenswrapper[4946]: I1128 08:54:36.835283 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9eeece55-15ef-4cd1-9f27-525991788887-logs\") pod \"cinder-api-0\" (UID: \"9eeece55-15ef-4cd1-9f27-525991788887\") " pod="openstack/cinder-api-0" Nov 28 08:54:36 crc kubenswrapper[4946]: I1128 08:54:36.835307 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9eeece55-15ef-4cd1-9f27-525991788887-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9eeece55-15ef-4cd1-9f27-525991788887\") " pod="openstack/cinder-api-0" Nov 28 08:54:36 crc kubenswrapper[4946]: I1128 08:54:36.868596 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-657f9b46c5-6chv7" Nov 28 08:54:36 crc kubenswrapper[4946]: I1128 08:54:36.938631 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4l4q\" (UniqueName: \"kubernetes.io/projected/9eeece55-15ef-4cd1-9f27-525991788887-kube-api-access-l4l4q\") pod \"cinder-api-0\" (UID: \"9eeece55-15ef-4cd1-9f27-525991788887\") " pod="openstack/cinder-api-0" Nov 28 08:54:36 crc kubenswrapper[4946]: I1128 08:54:36.939410 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9eeece55-15ef-4cd1-9f27-525991788887-config-data-custom\") pod \"cinder-api-0\" (UID: \"9eeece55-15ef-4cd1-9f27-525991788887\") " pod="openstack/cinder-api-0" Nov 28 08:54:36 crc kubenswrapper[4946]: I1128 08:54:36.939578 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eeece55-15ef-4cd1-9f27-525991788887-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9eeece55-15ef-4cd1-9f27-525991788887\") " pod="openstack/cinder-api-0" Nov 28 08:54:36 crc kubenswrapper[4946]: I1128 08:54:36.939694 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9eeece55-15ef-4cd1-9f27-525991788887-logs\") pod \"cinder-api-0\" (UID: \"9eeece55-15ef-4cd1-9f27-525991788887\") " pod="openstack/cinder-api-0" Nov 28 08:54:36 crc kubenswrapper[4946]: I1128 08:54:36.939788 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9eeece55-15ef-4cd1-9f27-525991788887-scripts\") pod \"cinder-api-0\" (UID: \"9eeece55-15ef-4cd1-9f27-525991788887\") " pod="openstack/cinder-api-0" Nov 28 08:54:36 crc kubenswrapper[4946]: I1128 08:54:36.939897 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9eeece55-15ef-4cd1-9f27-525991788887-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9eeece55-15ef-4cd1-9f27-525991788887\") " pod="openstack/cinder-api-0" Nov 28 08:54:36 crc kubenswrapper[4946]: I1128 08:54:36.940076 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/9eeece55-15ef-4cd1-9f27-525991788887-config-data\") pod \"cinder-api-0\" (UID: \"9eeece55-15ef-4cd1-9f27-525991788887\") " pod="openstack/cinder-api-0" Nov 28 08:54:36 crc kubenswrapper[4946]: I1128 08:54:36.940894 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9eeece55-15ef-4cd1-9f27-525991788887-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9eeece55-15ef-4cd1-9f27-525991788887\") " pod="openstack/cinder-api-0" Nov 28 08:54:36 crc kubenswrapper[4946]: I1128 08:54:36.941306 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9eeece55-15ef-4cd1-9f27-525991788887-logs\") pod \"cinder-api-0\" (UID: \"9eeece55-15ef-4cd1-9f27-525991788887\") " pod="openstack/cinder-api-0" Nov 28 08:54:36 crc kubenswrapper[4946]: I1128 08:54:36.945539 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eeece55-15ef-4cd1-9f27-525991788887-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9eeece55-15ef-4cd1-9f27-525991788887\") " pod="openstack/cinder-api-0" Nov 28 08:54:36 crc kubenswrapper[4946]: I1128 08:54:36.945895 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9eeece55-15ef-4cd1-9f27-525991788887-config-data-custom\") pod \"cinder-api-0\" (UID: \"9eeece55-15ef-4cd1-9f27-525991788887\") " pod="openstack/cinder-api-0" Nov 28 08:54:36 crc kubenswrapper[4946]: I1128 08:54:36.946541 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eeece55-15ef-4cd1-9f27-525991788887-config-data\") pod \"cinder-api-0\" (UID: \"9eeece55-15ef-4cd1-9f27-525991788887\") " pod="openstack/cinder-api-0" Nov 28 08:54:36 crc kubenswrapper[4946]: I1128 08:54:36.947447 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9eeece55-15ef-4cd1-9f27-525991788887-scripts\") pod \"cinder-api-0\" (UID: \"9eeece55-15ef-4cd1-9f27-525991788887\") " pod="openstack/cinder-api-0" Nov 28 08:54:36 crc kubenswrapper[4946]: I1128 08:54:36.963949 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4l4q\" (UniqueName: \"kubernetes.io/projected/9eeece55-15ef-4cd1-9f27-525991788887-kube-api-access-l4l4q\") pod \"cinder-api-0\" (UID: \"9eeece55-15ef-4cd1-9f27-525991788887\") " pod="openstack/cinder-api-0" Nov 28 08:54:37 crc kubenswrapper[4946]: I1128 08:54:37.134796 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 28 08:54:37 crc kubenswrapper[4946]: I1128 08:54:37.391396 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-657f9b46c5-6chv7"] Nov 28 08:54:37 crc kubenswrapper[4946]: W1128 08:54:37.584754 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9eeece55_15ef_4cd1_9f27_525991788887.slice/crio-90f30ca79d75d635fc1d6da9918628a434b8fc1b12b5dc605d20dea102119c62 WatchSource:0}: Error finding container 90f30ca79d75d635fc1d6da9918628a434b8fc1b12b5dc605d20dea102119c62: Status 404 returned error can't find the container with id 90f30ca79d75d635fc1d6da9918628a434b8fc1b12b5dc605d20dea102119c62 Nov 28 08:54:37 crc kubenswrapper[4946]: I1128 08:54:37.589794 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 28 08:54:38 crc kubenswrapper[4946]: I1128 08:54:38.075557 4946 generic.go:334] "Generic (PLEG): container finished" podID="ea2a6b0f-c72f-4623-acf1-efe087189a34" containerID="71e9d96d4edbbff80a970adfcecb93a3ce3550238a5d888722b79b8c7a281894" exitCode=0 Nov 28 08:54:38 crc kubenswrapper[4946]: I1128 08:54:38.075626 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-657f9b46c5-6chv7" event={"ID":"ea2a6b0f-c72f-4623-acf1-efe087189a34","Type":"ContainerDied","Data":"71e9d96d4edbbff80a970adfcecb93a3ce3550238a5d888722b79b8c7a281894"} Nov 28 08:54:38 crc kubenswrapper[4946]: I1128 08:54:38.075844 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-657f9b46c5-6chv7" event={"ID":"ea2a6b0f-c72f-4623-acf1-efe087189a34","Type":"ContainerStarted","Data":"c3e1a06ed4f3851c01f450e43a9b14334b650a0b9040d5adc4e0ffe395d7a47a"} Nov 28 08:54:38 crc kubenswrapper[4946]: I1128 08:54:38.076942 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9eeece55-15ef-4cd1-9f27-525991788887","Type":"ContainerStarted","Data":"90f30ca79d75d635fc1d6da9918628a434b8fc1b12b5dc605d20dea102119c62"} Nov 28 08:54:39 crc kubenswrapper[4946]: I1128 08:54:39.088117 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-657f9b46c5-6chv7" event={"ID":"ea2a6b0f-c72f-4623-acf1-efe087189a34","Type":"ContainerStarted","Data":"12ff29d2a05951d419d263c8b96f417547a4bf7d4d3751c37cbc6853c7784901"} Nov 28 08:54:39 crc kubenswrapper[4946]: I1128 08:54:39.088421 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-657f9b46c5-6chv7" Nov 28 08:54:39 crc kubenswrapper[4946]: I1128 08:54:39.091687 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9eeece55-15ef-4cd1-9f27-525991788887","Type":"ContainerStarted","Data":"e5ffd8bda162451b7cc9f848be59c05d0561054138c50754a4ac4536e542e20b"} Nov 28 08:54:39 crc kubenswrapper[4946]: I1128 08:54:39.091719 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9eeece55-15ef-4cd1-9f27-525991788887","Type":"ContainerStarted","Data":"5575b73911fcc3797a0de6b22f9257ebce7992d2f6f346e66ea5081c9cff5ad6"} Nov 28 08:54:39 crc kubenswrapper[4946]: I1128 08:54:39.092547 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 28 08:54:39 crc kubenswrapper[4946]: I1128 08:54:39.129995 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.129973166 
podStartE2EDuration="3.129973166s" podCreationTimestamp="2025-11-28 08:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 08:54:39.125552516 +0000 UTC m=+7333.503617637" watchObservedRunningTime="2025-11-28 08:54:39.129973166 +0000 UTC m=+7333.508038287" Nov 28 08:54:39 crc kubenswrapper[4946]: I1128 08:54:39.138652 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-657f9b46c5-6chv7" podStartSLOduration=3.13863457 podStartE2EDuration="3.13863457s" podCreationTimestamp="2025-11-28 08:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 08:54:39.111513079 +0000 UTC m=+7333.489578200" watchObservedRunningTime="2025-11-28 08:54:39.13863457 +0000 UTC m=+7333.516699701" Nov 28 08:54:46 crc kubenswrapper[4946]: I1128 08:54:46.871730 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-657f9b46c5-6chv7" Nov 28 08:54:46 crc kubenswrapper[4946]: I1128 08:54:46.970186 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55755df8dc-9rgd9"] Nov 28 08:54:46 crc kubenswrapper[4946]: I1128 08:54:46.970487 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55755df8dc-9rgd9" podUID="2856cac5-9342-490f-9649-42fbb5a9e26a" containerName="dnsmasq-dns" containerID="cri-o://45f7ad6c99554068fecdd7dddef907d640eaf4b8d06dab276439d31c4b655171" gracePeriod=10 Nov 28 08:54:47 crc kubenswrapper[4946]: I1128 08:54:47.188147 4946 generic.go:334] "Generic (PLEG): container finished" podID="2856cac5-9342-490f-9649-42fbb5a9e26a" containerID="45f7ad6c99554068fecdd7dddef907d640eaf4b8d06dab276439d31c4b655171" exitCode=0 Nov 28 08:54:47 crc kubenswrapper[4946]: I1128 08:54:47.188197 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55755df8dc-9rgd9" event={"ID":"2856cac5-9342-490f-9649-42fbb5a9e26a","Type":"ContainerDied","Data":"45f7ad6c99554068fecdd7dddef907d640eaf4b8d06dab276439d31c4b655171"} Nov 28 08:54:47 crc kubenswrapper[4946]: I1128 08:54:47.497047 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55755df8dc-9rgd9" Nov 28 08:54:47 crc kubenswrapper[4946]: I1128 08:54:47.561967 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2856cac5-9342-490f-9649-42fbb5a9e26a-dns-svc\") pod \"2856cac5-9342-490f-9649-42fbb5a9e26a\" (UID: \"2856cac5-9342-490f-9649-42fbb5a9e26a\") " Nov 28 08:54:47 crc kubenswrapper[4946]: I1128 08:54:47.562092 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2856cac5-9342-490f-9649-42fbb5a9e26a-ovsdbserver-nb\") pod \"2856cac5-9342-490f-9649-42fbb5a9e26a\" (UID: \"2856cac5-9342-490f-9649-42fbb5a9e26a\") " Nov 28 08:54:47 crc kubenswrapper[4946]: I1128 08:54:47.562147 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2856cac5-9342-490f-9649-42fbb5a9e26a-ovsdbserver-sb\") pod \"2856cac5-9342-490f-9649-42fbb5a9e26a\" (UID: \"2856cac5-9342-490f-9649-42fbb5a9e26a\") " Nov 28 08:54:47 crc kubenswrapper[4946]: I1128 08:54:47.562272 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbgq4\" (UniqueName: \"kubernetes.io/projected/2856cac5-9342-490f-9649-42fbb5a9e26a-kube-api-access-bbgq4\") pod \"2856cac5-9342-490f-9649-42fbb5a9e26a\" (UID: \"2856cac5-9342-490f-9649-42fbb5a9e26a\") " Nov 28 08:54:47 crc kubenswrapper[4946]: I1128 08:54:47.562322 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2856cac5-9342-490f-9649-42fbb5a9e26a-config\") pod \"2856cac5-9342-490f-9649-42fbb5a9e26a\" (UID: \"2856cac5-9342-490f-9649-42fbb5a9e26a\") " Nov 28 08:54:47 crc kubenswrapper[4946]: I1128 08:54:47.568251 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2856cac5-9342-490f-9649-42fbb5a9e26a-kube-api-access-bbgq4" (OuterVolumeSpecName: "kube-api-access-bbgq4") pod "2856cac5-9342-490f-9649-42fbb5a9e26a" (UID: "2856cac5-9342-490f-9649-42fbb5a9e26a"). InnerVolumeSpecName "kube-api-access-bbgq4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:54:47 crc kubenswrapper[4946]: I1128 08:54:47.621684 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2856cac5-9342-490f-9649-42fbb5a9e26a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2856cac5-9342-490f-9649-42fbb5a9e26a" (UID: "2856cac5-9342-490f-9649-42fbb5a9e26a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:54:47 crc kubenswrapper[4946]: I1128 08:54:47.622291 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2856cac5-9342-490f-9649-42fbb5a9e26a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2856cac5-9342-490f-9649-42fbb5a9e26a" (UID: "2856cac5-9342-490f-9649-42fbb5a9e26a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:54:47 crc kubenswrapper[4946]: I1128 08:54:47.638076 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2856cac5-9342-490f-9649-42fbb5a9e26a-config" (OuterVolumeSpecName: "config") pod "2856cac5-9342-490f-9649-42fbb5a9e26a" (UID: "2856cac5-9342-490f-9649-42fbb5a9e26a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:54:47 crc kubenswrapper[4946]: I1128 08:54:47.639292 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2856cac5-9342-490f-9649-42fbb5a9e26a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2856cac5-9342-490f-9649-42fbb5a9e26a" (UID: "2856cac5-9342-490f-9649-42fbb5a9e26a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:54:47 crc kubenswrapper[4946]: I1128 08:54:47.664222 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbgq4\" (UniqueName: \"kubernetes.io/projected/2856cac5-9342-490f-9649-42fbb5a9e26a-kube-api-access-bbgq4\") on node \"crc\" DevicePath \"\"" Nov 28 08:54:47 crc kubenswrapper[4946]: I1128 08:54:47.664261 4946 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2856cac5-9342-490f-9649-42fbb5a9e26a-config\") on node \"crc\" DevicePath \"\"" Nov 28 08:54:47 crc kubenswrapper[4946]: I1128 08:54:47.664271 4946 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2856cac5-9342-490f-9649-42fbb5a9e26a-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 08:54:47 crc kubenswrapper[4946]: I1128 08:54:47.664280 4946 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2856cac5-9342-490f-9649-42fbb5a9e26a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 08:54:47 crc kubenswrapper[4946]: I1128 08:54:47.664288 4946 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2856cac5-9342-490f-9649-42fbb5a9e26a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 08:54:48 crc kubenswrapper[4946]: I1128 08:54:48.199432 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55755df8dc-9rgd9" event={"ID":"2856cac5-9342-490f-9649-42fbb5a9e26a","Type":"ContainerDied","Data":"98b737e505c5c63ac947368f2eb19660ba95ae93ea4005332cfdf3f2be748abe"} Nov 28 08:54:48 crc kubenswrapper[4946]: I1128 08:54:48.199824 4946 scope.go:117] "RemoveContainer" containerID="45f7ad6c99554068fecdd7dddef907d640eaf4b8d06dab276439d31c4b655171" Nov 28 08:54:48 crc kubenswrapper[4946]: I1128 08:54:48.199937 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55755df8dc-9rgd9" Nov 28 08:54:48 crc kubenswrapper[4946]: I1128 08:54:48.238349 4946 scope.go:117] "RemoveContainer" containerID="5751103456d3793cf534eca7c18580b530b381ef220ecc9e46f342ac1ee679c5" Nov 28 08:54:48 crc kubenswrapper[4946]: I1128 08:54:48.251090 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55755df8dc-9rgd9"] Nov 28 08:54:48 crc kubenswrapper[4946]: I1128 08:54:48.269504 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55755df8dc-9rgd9"] Nov 28 08:54:48 crc kubenswrapper[4946]: I1128 08:54:48.580865 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 08:54:48 crc kubenswrapper[4946]: I1128 08:54:48.581094 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6794f905-f1ca-48a1-babd-115981e72ed4" containerName="nova-metadata-log" containerID="cri-o://936bb085b6e5bb5dad263a9c8cccde5f9e9da7f530a1d756aec854c71404c588" gracePeriod=30 Nov 28 08:54:48 crc kubenswrapper[4946]: I1128 08:54:48.581251 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6794f905-f1ca-48a1-babd-115981e72ed4" containerName="nova-metadata-metadata" containerID="cri-o://bfc1a51101b4c552d61c7f5867477bb7aef0cb80c9a1b719d2163295d47914f3" gracePeriod=30 Nov 28 08:54:48 crc kubenswrapper[4946]: I1128 08:54:48.599591 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 08:54:48 crc kubenswrapper[4946]: I1128 08:54:48.601079 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="f17464c0-2039-4016-a165-cde04f10c349" containerName="nova-scheduler-scheduler" containerID="cri-o://30159f8805f5a41f294e2e2eb1f1b2c6cb9ab9f6f3bc9a01ec785471c9150ef6" gracePeriod=30 Nov 28 08:54:48 crc kubenswrapper[4946]: I1128 08:54:48.657430 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 08:54:48 crc kubenswrapper[4946]: I1128 08:54:48.657778 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="678f7591-c9d6-43dc-8ece-0d4f0f4965f4" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://18d400fa8f5cb27d5a2fc63ee636aa43f1e420c49f73941dae45801c363dfbb9" gracePeriod=30 Nov 28 08:54:48 crc kubenswrapper[4946]: I1128 08:54:48.686175 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 28 08:54:48 crc kubenswrapper[4946]: I1128 08:54:48.686597 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="95bd6a55-27ab-4832-974d-20e84433add3" containerName="nova-cell0-conductor-conductor" containerID="cri-o://a068932e83858cfa3df73cfe29e939011a97ad3c4f3466fc554b218170927fc6" gracePeriod=30 Nov 28 08:54:48 crc kubenswrapper[4946]: I1128 08:54:48.695962 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 28 08:54:48 crc kubenswrapper[4946]: I1128 08:54:48.696168 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="00d946a1-a238-4ab3-b8e6-e276aac8bb22" containerName="nova-api-log" containerID="cri-o://e920ef783f678ccf0f74b883b2a2698545b071cd344d92d74547f8a3d6c0af40" gracePeriod=30 Nov 28 08:54:48 crc kubenswrapper[4946]: I1128 
08:54:48.696609 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="00d946a1-a238-4ab3-b8e6-e276aac8bb22" containerName="nova-api-api" containerID="cri-o://70895f70e30891e0fb785baa90a70cffd83e5316b032ce5ba2d16473dc141fc8" gracePeriod=30 Nov 28 08:54:49 crc kubenswrapper[4946]: I1128 08:54:49.180726 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 28 08:54:49 crc kubenswrapper[4946]: I1128 08:54:49.231147 4946 generic.go:334] "Generic (PLEG): container finished" podID="678f7591-c9d6-43dc-8ece-0d4f0f4965f4" containerID="18d400fa8f5cb27d5a2fc63ee636aa43f1e420c49f73941dae45801c363dfbb9" exitCode=0 Nov 28 08:54:49 crc kubenswrapper[4946]: I1128 08:54:49.231247 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"678f7591-c9d6-43dc-8ece-0d4f0f4965f4","Type":"ContainerDied","Data":"18d400fa8f5cb27d5a2fc63ee636aa43f1e420c49f73941dae45801c363dfbb9"} Nov 28 08:54:49 crc kubenswrapper[4946]: I1128 08:54:49.248302 4946 generic.go:334] "Generic (PLEG): container finished" podID="00d946a1-a238-4ab3-b8e6-e276aac8bb22" containerID="e920ef783f678ccf0f74b883b2a2698545b071cd344d92d74547f8a3d6c0af40" exitCode=143 Nov 28 08:54:49 crc kubenswrapper[4946]: I1128 08:54:49.248385 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"00d946a1-a238-4ab3-b8e6-e276aac8bb22","Type":"ContainerDied","Data":"e920ef783f678ccf0f74b883b2a2698545b071cd344d92d74547f8a3d6c0af40"} Nov 28 08:54:49 crc kubenswrapper[4946]: I1128 08:54:49.253902 4946 generic.go:334] "Generic (PLEG): container finished" podID="6794f905-f1ca-48a1-babd-115981e72ed4" containerID="936bb085b6e5bb5dad263a9c8cccde5f9e9da7f530a1d756aec854c71404c588" exitCode=143 Nov 28 08:54:49 crc kubenswrapper[4946]: I1128 08:54:49.253957 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6794f905-f1ca-48a1-babd-115981e72ed4","Type":"ContainerDied","Data":"936bb085b6e5bb5dad263a9c8cccde5f9e9da7f530a1d756aec854c71404c588"} Nov 28 08:54:49 crc kubenswrapper[4946]: I1128 08:54:49.569725 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 28 08:54:49 crc kubenswrapper[4946]: I1128 08:54:49.605415 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/678f7591-c9d6-43dc-8ece-0d4f0f4965f4-combined-ca-bundle\") pod \"678f7591-c9d6-43dc-8ece-0d4f0f4965f4\" (UID: \"678f7591-c9d6-43dc-8ece-0d4f0f4965f4\") " Nov 28 08:54:49 crc kubenswrapper[4946]: I1128 08:54:49.605492 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4tsm\" (UniqueName: \"kubernetes.io/projected/678f7591-c9d6-43dc-8ece-0d4f0f4965f4-kube-api-access-w4tsm\") pod \"678f7591-c9d6-43dc-8ece-0d4f0f4965f4\" (UID: \"678f7591-c9d6-43dc-8ece-0d4f0f4965f4\") " Nov 28 08:54:49 crc kubenswrapper[4946]: I1128 08:54:49.605529 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/678f7591-c9d6-43dc-8ece-0d4f0f4965f4-config-data\") pod \"678f7591-c9d6-43dc-8ece-0d4f0f4965f4\" (UID: \"678f7591-c9d6-43dc-8ece-0d4f0f4965f4\") " Nov 28 08:54:49 crc kubenswrapper[4946]: I1128 08:54:49.617576 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/678f7591-c9d6-43dc-8ece-0d4f0f4965f4-kube-api-access-w4tsm" (OuterVolumeSpecName: "kube-api-access-w4tsm") pod "678f7591-c9d6-43dc-8ece-0d4f0f4965f4" (UID: "678f7591-c9d6-43dc-8ece-0d4f0f4965f4"). InnerVolumeSpecName "kube-api-access-w4tsm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:54:49 crc kubenswrapper[4946]: I1128 08:54:49.635353 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/678f7591-c9d6-43dc-8ece-0d4f0f4965f4-config-data" (OuterVolumeSpecName: "config-data") pod "678f7591-c9d6-43dc-8ece-0d4f0f4965f4" (UID: "678f7591-c9d6-43dc-8ece-0d4f0f4965f4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:54:49 crc kubenswrapper[4946]: I1128 08:54:49.638262 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/678f7591-c9d6-43dc-8ece-0d4f0f4965f4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "678f7591-c9d6-43dc-8ece-0d4f0f4965f4" (UID: "678f7591-c9d6-43dc-8ece-0d4f0f4965f4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:54:49 crc kubenswrapper[4946]: I1128 08:54:49.707857 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/678f7591-c9d6-43dc-8ece-0d4f0f4965f4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 08:54:49 crc kubenswrapper[4946]: I1128 08:54:49.707892 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4tsm\" (UniqueName: \"kubernetes.io/projected/678f7591-c9d6-43dc-8ece-0d4f0f4965f4-kube-api-access-w4tsm\") on node \"crc\" DevicePath \"\"" Nov 28 08:54:49 crc kubenswrapper[4946]: I1128 08:54:49.707905 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/678f7591-c9d6-43dc-8ece-0d4f0f4965f4-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 08:54:49 crc kubenswrapper[4946]: I1128 08:54:49.999814 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2856cac5-9342-490f-9649-42fbb5a9e26a" path="/var/lib/kubelet/pods/2856cac5-9342-490f-9649-42fbb5a9e26a/volumes" Nov 28 08:54:50 crc kubenswrapper[4946]: I1128 08:54:50.302329 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"678f7591-c9d6-43dc-8ece-0d4f0f4965f4","Type":"ContainerDied","Data":"2bc414c9c4f2bdfbf327f3c4ca343437ec0d2005319a4a80edfaebbb4ac10250"} Nov 28 08:54:50 crc kubenswrapper[4946]: I1128 08:54:50.302338 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 28 08:54:50 crc kubenswrapper[4946]: I1128 08:54:50.302400 4946 scope.go:117] "RemoveContainer" containerID="18d400fa8f5cb27d5a2fc63ee636aa43f1e420c49f73941dae45801c363dfbb9" Nov 28 08:54:50 crc kubenswrapper[4946]: I1128 08:54:50.305343 4946 generic.go:334] "Generic (PLEG): container finished" podID="f17464c0-2039-4016-a165-cde04f10c349" containerID="30159f8805f5a41f294e2e2eb1f1b2c6cb9ab9f6f3bc9a01ec785471c9150ef6" exitCode=0 Nov 28 08:54:50 crc kubenswrapper[4946]: I1128 08:54:50.305384 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f17464c0-2039-4016-a165-cde04f10c349","Type":"ContainerDied","Data":"30159f8805f5a41f294e2e2eb1f1b2c6cb9ab9f6f3bc9a01ec785471c9150ef6"} Nov 28 08:54:50 crc kubenswrapper[4946]: I1128 08:54:50.341515 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 08:54:50 crc kubenswrapper[4946]: I1128 08:54:50.356792 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 08:54:50 crc kubenswrapper[4946]: I1128 08:54:50.368899 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 08:54:50 crc kubenswrapper[4946]: E1128 08:54:50.369318 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2856cac5-9342-490f-9649-42fbb5a9e26a" containerName="init" Nov 28 08:54:50 crc kubenswrapper[4946]: I1128 08:54:50.369331 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="2856cac5-9342-490f-9649-42fbb5a9e26a" containerName="init" Nov 28 08:54:50 crc kubenswrapper[4946]: E1128 08:54:50.369360 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2856cac5-9342-490f-9649-42fbb5a9e26a" containerName="dnsmasq-dns" Nov 28 08:54:50 crc kubenswrapper[4946]: I1128 08:54:50.369366 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="2856cac5-9342-490f-9649-42fbb5a9e26a" 
containerName="dnsmasq-dns" Nov 28 08:54:50 crc kubenswrapper[4946]: E1128 08:54:50.369381 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="678f7591-c9d6-43dc-8ece-0d4f0f4965f4" containerName="nova-cell1-novncproxy-novncproxy" Nov 28 08:54:50 crc kubenswrapper[4946]: I1128 08:54:50.369387 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="678f7591-c9d6-43dc-8ece-0d4f0f4965f4" containerName="nova-cell1-novncproxy-novncproxy" Nov 28 08:54:50 crc kubenswrapper[4946]: I1128 08:54:50.369590 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="678f7591-c9d6-43dc-8ece-0d4f0f4965f4" containerName="nova-cell1-novncproxy-novncproxy" Nov 28 08:54:50 crc kubenswrapper[4946]: I1128 08:54:50.369601 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="2856cac5-9342-490f-9649-42fbb5a9e26a" containerName="dnsmasq-dns" Nov 28 08:54:50 crc kubenswrapper[4946]: I1128 08:54:50.375267 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 28 08:54:50 crc kubenswrapper[4946]: I1128 08:54:50.376773 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 08:54:50 crc kubenswrapper[4946]: I1128 08:54:50.379122 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 28 08:54:50 crc kubenswrapper[4946]: I1128 08:54:50.440498 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ab31999-7efc-4936-97c5-bb8592f61595-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ab31999-7efc-4936-97c5-bb8592f61595\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 08:54:50 crc kubenswrapper[4946]: I1128 08:54:50.440547 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ab31999-7efc-4936-97c5-bb8592f61595-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ab31999-7efc-4936-97c5-bb8592f61595\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 08:54:50 crc kubenswrapper[4946]: I1128 08:54:50.440611 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwxj6\" (UniqueName: \"kubernetes.io/projected/0ab31999-7efc-4936-97c5-bb8592f61595-kube-api-access-nwxj6\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ab31999-7efc-4936-97c5-bb8592f61595\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 08:54:50 crc kubenswrapper[4946]: I1128 08:54:50.542261 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ab31999-7efc-4936-97c5-bb8592f61595-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ab31999-7efc-4936-97c5-bb8592f61595\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 08:54:50 crc kubenswrapper[4946]: I1128 08:54:50.542343 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ab31999-7efc-4936-97c5-bb8592f61595-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ab31999-7efc-4936-97c5-bb8592f61595\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 08:54:50 crc kubenswrapper[4946]: I1128 08:54:50.542423 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwxj6\" (UniqueName: 
\"kubernetes.io/projected/0ab31999-7efc-4936-97c5-bb8592f61595-kube-api-access-nwxj6\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ab31999-7efc-4936-97c5-bb8592f61595\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 08:54:50 crc kubenswrapper[4946]: I1128 08:54:50.547568 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ab31999-7efc-4936-97c5-bb8592f61595-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ab31999-7efc-4936-97c5-bb8592f61595\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 08:54:50 crc kubenswrapper[4946]: I1128 08:54:50.550395 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ab31999-7efc-4936-97c5-bb8592f61595-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ab31999-7efc-4936-97c5-bb8592f61595\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 08:54:50 crc kubenswrapper[4946]: I1128 08:54:50.559041 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwxj6\" (UniqueName: \"kubernetes.io/projected/0ab31999-7efc-4936-97c5-bb8592f61595-kube-api-access-nwxj6\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ab31999-7efc-4936-97c5-bb8592f61595\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 08:54:50 crc kubenswrapper[4946]: I1128 08:54:50.629749 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 08:54:50 crc kubenswrapper[4946]: I1128 08:54:50.696188 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 28 08:54:50 crc kubenswrapper[4946]: I1128 08:54:50.746434 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f17464c0-2039-4016-a165-cde04f10c349-config-data\") pod \"f17464c0-2039-4016-a165-cde04f10c349\" (UID: \"f17464c0-2039-4016-a165-cde04f10c349\") " Nov 28 08:54:50 crc kubenswrapper[4946]: I1128 08:54:50.746529 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f17464c0-2039-4016-a165-cde04f10c349-combined-ca-bundle\") pod \"f17464c0-2039-4016-a165-cde04f10c349\" (UID: \"f17464c0-2039-4016-a165-cde04f10c349\") " Nov 28 08:54:50 crc kubenswrapper[4946]: I1128 08:54:50.746585 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4x2mq\" (UniqueName: \"kubernetes.io/projected/f17464c0-2039-4016-a165-cde04f10c349-kube-api-access-4x2mq\") pod \"f17464c0-2039-4016-a165-cde04f10c349\" (UID: \"f17464c0-2039-4016-a165-cde04f10c349\") " Nov 28 08:54:50 crc kubenswrapper[4946]: I1128 08:54:50.750188 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f17464c0-2039-4016-a165-cde04f10c349-kube-api-access-4x2mq" (OuterVolumeSpecName: "kube-api-access-4x2mq") pod "f17464c0-2039-4016-a165-cde04f10c349" (UID: "f17464c0-2039-4016-a165-cde04f10c349"). InnerVolumeSpecName "kube-api-access-4x2mq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:54:50 crc kubenswrapper[4946]: I1128 08:54:50.769804 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f17464c0-2039-4016-a165-cde04f10c349-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f17464c0-2039-4016-a165-cde04f10c349" (UID: "f17464c0-2039-4016-a165-cde04f10c349"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:54:50 crc kubenswrapper[4946]: I1128 08:54:50.780157 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f17464c0-2039-4016-a165-cde04f10c349-config-data" (OuterVolumeSpecName: "config-data") pod "f17464c0-2039-4016-a165-cde04f10c349" (UID: "f17464c0-2039-4016-a165-cde04f10c349"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:54:50 crc kubenswrapper[4946]: I1128 08:54:50.850068 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f17464c0-2039-4016-a165-cde04f10c349-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 08:54:50 crc kubenswrapper[4946]: I1128 08:54:50.850105 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f17464c0-2039-4016-a165-cde04f10c349-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 08:54:50 crc kubenswrapper[4946]: I1128 08:54:50.850119 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4x2mq\" (UniqueName: \"kubernetes.io/projected/f17464c0-2039-4016-a165-cde04f10c349-kube-api-access-4x2mq\") on node \"crc\" DevicePath \"\"" Nov 28 08:54:51 crc kubenswrapper[4946]: I1128 08:54:51.154544 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 08:54:51 crc kubenswrapper[4946]: W1128 08:54:51.160050 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ab31999_7efc_4936_97c5_bb8592f61595.slice/crio-b00b7adccdf7c79f2c083473a8ffec0cbfa6e0c98898581f19c395cad60a75bd WatchSource:0}: Error finding container b00b7adccdf7c79f2c083473a8ffec0cbfa6e0c98898581f19c395cad60a75bd: Status 404 returned error can't find the container with id b00b7adccdf7c79f2c083473a8ffec0cbfa6e0c98898581f19c395cad60a75bd Nov 28 08:54:51 crc kubenswrapper[4946]: I1128 08:54:51.318103 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f17464c0-2039-4016-a165-cde04f10c349","Type":"ContainerDied","Data":"2f6ad373ced78c6ba28e53eb490b0a6c2562892de67b99020284b8483f716967"} Nov 28 08:54:51 crc kubenswrapper[4946]: I1128 08:54:51.318419 4946 scope.go:117] "RemoveContainer" containerID="30159f8805f5a41f294e2e2eb1f1b2c6cb9ab9f6f3bc9a01ec785471c9150ef6" Nov 28 08:54:51 crc kubenswrapper[4946]: I1128 08:54:51.318128 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 08:54:51 crc kubenswrapper[4946]: I1128 08:54:51.321960 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0ab31999-7efc-4936-97c5-bb8592f61595","Type":"ContainerStarted","Data":"b00b7adccdf7c79f2c083473a8ffec0cbfa6e0c98898581f19c395cad60a75bd"} Nov 28 08:54:51 crc kubenswrapper[4946]: I1128 08:54:51.332169 4946 generic.go:334] "Generic (PLEG): container finished" podID="95bd6a55-27ab-4832-974d-20e84433add3" containerID="a068932e83858cfa3df73cfe29e939011a97ad3c4f3466fc554b218170927fc6" exitCode=0 Nov 28 08:54:51 crc kubenswrapper[4946]: I1128 08:54:51.332199 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"95bd6a55-27ab-4832-974d-20e84433add3","Type":"ContainerDied","Data":"a068932e83858cfa3df73cfe29e939011a97ad3c4f3466fc554b218170927fc6"} Nov 28 08:54:51 crc kubenswrapper[4946]: I1128 08:54:51.409143 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 08:54:51 crc kubenswrapper[4946]: I1128 08:54:51.417333 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 08:54:51 crc kubenswrapper[4946]: I1128 08:54:51.428679 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 08:54:51 crc kubenswrapper[4946]: E1128 08:54:51.429307 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f17464c0-2039-4016-a165-cde04f10c349" containerName="nova-scheduler-scheduler" Nov 28 08:54:51 crc kubenswrapper[4946]: I1128 08:54:51.429375 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="f17464c0-2039-4016-a165-cde04f10c349" containerName="nova-scheduler-scheduler" Nov 28 08:54:51 crc kubenswrapper[4946]: I1128 08:54:51.429624 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="f17464c0-2039-4016-a165-cde04f10c349" containerName="nova-scheduler-scheduler" Nov 28 08:54:51 crc kubenswrapper[4946]: I1128 08:54:51.430292 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 08:54:51 crc kubenswrapper[4946]: I1128 08:54:51.433538 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 28 08:54:51 crc kubenswrapper[4946]: I1128 08:54:51.437762 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 08:54:51 crc kubenswrapper[4946]: I1128 08:54:51.526336 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 28 08:54:51 crc kubenswrapper[4946]: I1128 08:54:51.566682 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6aad4350-9266-48ae-b3ae-b4ec6046fbb1-config-data\") pod \"nova-scheduler-0\" (UID: \"6aad4350-9266-48ae-b3ae-b4ec6046fbb1\") " pod="openstack/nova-scheduler-0" Nov 28 08:54:51 crc kubenswrapper[4946]: I1128 08:54:51.566732 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpfrb\" (UniqueName: \"kubernetes.io/projected/6aad4350-9266-48ae-b3ae-b4ec6046fbb1-kube-api-access-hpfrb\") pod \"nova-scheduler-0\" (UID: \"6aad4350-9266-48ae-b3ae-b4ec6046fbb1\") " pod="openstack/nova-scheduler-0" Nov 28 08:54:51 crc kubenswrapper[4946]: I1128 08:54:51.566836 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aad4350-9266-48ae-b3ae-b4ec6046fbb1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6aad4350-9266-48ae-b3ae-b4ec6046fbb1\") " pod="openstack/nova-scheduler-0" Nov 28 08:54:51 crc kubenswrapper[4946]: I1128 08:54:51.668253 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rg5qx\" (UniqueName: \"kubernetes.io/projected/95bd6a55-27ab-4832-974d-20e84433add3-kube-api-access-rg5qx\") pod \"95bd6a55-27ab-4832-974d-20e84433add3\" (UID: \"95bd6a55-27ab-4832-974d-20e84433add3\") " Nov 28 08:54:51 crc kubenswrapper[4946]: I1128 08:54:51.668322 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95bd6a55-27ab-4832-974d-20e84433add3-combined-ca-bundle\") pod \"95bd6a55-27ab-4832-974d-20e84433add3\" (UID: \"95bd6a55-27ab-4832-974d-20e84433add3\") " Nov 28 08:54:51 crc kubenswrapper[4946]: I1128 08:54:51.668447 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95bd6a55-27ab-4832-974d-20e84433add3-config-data\") pod \"95bd6a55-27ab-4832-974d-20e84433add3\" (UID: \"95bd6a55-27ab-4832-974d-20e84433add3\") " Nov 28 08:54:51 crc kubenswrapper[4946]: I1128 08:54:51.669171 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6aad4350-9266-48ae-b3ae-b4ec6046fbb1-config-data\") pod \"nova-scheduler-0\" (UID: \"6aad4350-9266-48ae-b3ae-b4ec6046fbb1\") " pod="openstack/nova-scheduler-0" Nov 28 08:54:51 crc kubenswrapper[4946]: I1128 08:54:51.669196 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpfrb\" (UniqueName: \"kubernetes.io/projected/6aad4350-9266-48ae-b3ae-b4ec6046fbb1-kube-api-access-hpfrb\") pod \"nova-scheduler-0\" (UID: \"6aad4350-9266-48ae-b3ae-b4ec6046fbb1\") " pod="openstack/nova-scheduler-0" Nov 28 08:54:51 crc kubenswrapper[4946]: I1128 08:54:51.669248 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aad4350-9266-48ae-b3ae-b4ec6046fbb1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6aad4350-9266-48ae-b3ae-b4ec6046fbb1\") " pod="openstack/nova-scheduler-0" Nov 28 08:54:51 crc kubenswrapper[4946]: I1128 08:54:51.672677 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/95bd6a55-27ab-4832-974d-20e84433add3-kube-api-access-rg5qx" (OuterVolumeSpecName: "kube-api-access-rg5qx") pod "95bd6a55-27ab-4832-974d-20e84433add3" (UID: "95bd6a55-27ab-4832-974d-20e84433add3"). InnerVolumeSpecName "kube-api-access-rg5qx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:54:51 crc kubenswrapper[4946]: I1128 08:54:51.672845 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aad4350-9266-48ae-b3ae-b4ec6046fbb1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6aad4350-9266-48ae-b3ae-b4ec6046fbb1\") " pod="openstack/nova-scheduler-0" Nov 28 08:54:51 crc kubenswrapper[4946]: I1128 08:54:51.676394 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6aad4350-9266-48ae-b3ae-b4ec6046fbb1-config-data\") pod \"nova-scheduler-0\" (UID: \"6aad4350-9266-48ae-b3ae-b4ec6046fbb1\") " pod="openstack/nova-scheduler-0" Nov 28 08:54:51 crc kubenswrapper[4946]: I1128 08:54:51.687414 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpfrb\" (UniqueName: \"kubernetes.io/projected/6aad4350-9266-48ae-b3ae-b4ec6046fbb1-kube-api-access-hpfrb\") pod \"nova-scheduler-0\" (UID: \"6aad4350-9266-48ae-b3ae-b4ec6046fbb1\") " pod="openstack/nova-scheduler-0" Nov 28 08:54:51 crc kubenswrapper[4946]: I1128 08:54:51.694240 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95bd6a55-27ab-4832-974d-20e84433add3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95bd6a55-27ab-4832-974d-20e84433add3" (UID: "95bd6a55-27ab-4832-974d-20e84433add3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:54:51 crc kubenswrapper[4946]: I1128 08:54:51.709230 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95bd6a55-27ab-4832-974d-20e84433add3-config-data" (OuterVolumeSpecName: "config-data") pod "95bd6a55-27ab-4832-974d-20e84433add3" (UID: "95bd6a55-27ab-4832-974d-20e84433add3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:54:51 crc kubenswrapper[4946]: I1128 08:54:51.750311 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 08:54:51 crc kubenswrapper[4946]: I1128 08:54:51.771123 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rg5qx\" (UniqueName: \"kubernetes.io/projected/95bd6a55-27ab-4832-974d-20e84433add3-kube-api-access-rg5qx\") on node \"crc\" DevicePath \"\"" Nov 28 08:54:51 crc kubenswrapper[4946]: I1128 08:54:51.771159 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95bd6a55-27ab-4832-974d-20e84433add3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 08:54:51 crc kubenswrapper[4946]: I1128 08:54:51.771169 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95bd6a55-27ab-4832-974d-20e84433add3-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 08:54:51 crc kubenswrapper[4946]: I1128 08:54:51.771259 4946 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="6794f905-f1ca-48a1-babd-115981e72ed4" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.76:8775/\": read tcp 10.217.0.2:54048->10.217.1.76:8775: read: connection reset by peer" Nov 28 08:54:51 crc kubenswrapper[4946]: I1128 08:54:51.771543 4946 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="6794f905-f1ca-48a1-babd-115981e72ed4" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.76:8775/\": read tcp 10.217.0.2:54046->10.217.1.76:8775: read: connection reset by peer" Nov 28 08:54:51 crc kubenswrapper[4946]: I1128 08:54:51.902720 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 28 08:54:51 crc kubenswrapper[4946]: I1128 08:54:51.903266 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="0eaadce7-5bc4-4f66-ba4c-2c4b11cb0fec" containerName="nova-cell1-conductor-conductor" containerID="cri-o://1548764c7bcf6b716ed869f5f9ddf284c9c3e888cfe8d038f8e9bedda9eb85d1" gracePeriod=30 Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.001680 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="678f7591-c9d6-43dc-8ece-0d4f0f4965f4" path="/var/lib/kubelet/pods/678f7591-c9d6-43dc-8ece-0d4f0f4965f4/volumes" Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.002367 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f17464c0-2039-4016-a165-cde04f10c349" path="/var/lib/kubelet/pods/f17464c0-2039-4016-a165-cde04f10c349/volumes" Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.285226 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.350302 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.350954 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"95bd6a55-27ab-4832-974d-20e84433add3","Type":"ContainerDied","Data":"76d9aa500fcc544b24c152a13a19efb9958fbc71585c5e4e59537b0f81733fe4"} Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.352347 4946 scope.go:117] "RemoveContainer" containerID="a068932e83858cfa3df73cfe29e939011a97ad3c4f3466fc554b218170927fc6" Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.359405 4946 generic.go:334] "Generic (PLEG): container finished" podID="00d946a1-a238-4ab3-b8e6-e276aac8bb22" containerID="70895f70e30891e0fb785baa90a70cffd83e5316b032ce5ba2d16473dc141fc8" exitCode=0 Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.359454 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"00d946a1-a238-4ab3-b8e6-e276aac8bb22","Type":"ContainerDied","Data":"70895f70e30891e0fb785baa90a70cffd83e5316b032ce5ba2d16473dc141fc8"} Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.361719 4946 generic.go:334] "Generic (PLEG): container finished" podID="6794f905-f1ca-48a1-babd-115981e72ed4" containerID="bfc1a51101b4c552d61c7f5867477bb7aef0cb80c9a1b719d2163295d47914f3" exitCode=0 Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.361750 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6794f905-f1ca-48a1-babd-115981e72ed4","Type":"ContainerDied","Data":"bfc1a51101b4c552d61c7f5867477bb7aef0cb80c9a1b719d2163295d47914f3"} Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.361763 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6794f905-f1ca-48a1-babd-115981e72ed4","Type":"ContainerDied","Data":"0387a0c373947778b4adb9206de8894718cbcd5358fa42122e27ef4db6ab8434"} Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.361773 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0387a0c373947778b4adb9206de8894718cbcd5358fa42122e27ef4db6ab8434" Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.362903 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0ab31999-7efc-4936-97c5-bb8592f61595","Type":"ContainerStarted","Data":"4c113c2fb5ce656c3081fcaa517a3f786c740dcc648e824c8484433db5f32e00"} Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.365613 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6aad4350-9266-48ae-b3ae-b4ec6046fbb1","Type":"ContainerStarted","Data":"034964ce32bcc7df81c867544ecbe9c3ba082f658252858409bc86d6073fce86"} Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.396403 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.396385465 podStartE2EDuration="2.396385465s" podCreationTimestamp="2025-11-28 08:54:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 08:54:52.3937603 +0000 UTC m=+7346.771825401" watchObservedRunningTime="2025-11-28 08:54:52.396385465 +0000 UTC m=+7346.774450576" Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.529759 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.549730 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.550718 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.559559 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.574612 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 28 08:54:52 crc kubenswrapper[4946]: E1128 08:54:52.575052 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00d946a1-a238-4ab3-b8e6-e276aac8bb22" containerName="nova-api-api" Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.575072 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="00d946a1-a238-4ab3-b8e6-e276aac8bb22" containerName="nova-api-api" Nov 28 08:54:52 crc kubenswrapper[4946]: E1128 08:54:52.575098 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00d946a1-a238-4ab3-b8e6-e276aac8bb22" containerName="nova-api-log" Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.575107 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="00d946a1-a238-4ab3-b8e6-e276aac8bb22" containerName="nova-api-log" Nov 28 08:54:52 crc kubenswrapper[4946]: E1128 08:54:52.575127 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6794f905-f1ca-48a1-babd-115981e72ed4" containerName="nova-metadata-metadata" Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.575137 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="6794f905-f1ca-48a1-babd-115981e72ed4" containerName="nova-metadata-metadata" Nov 28 08:54:52 crc kubenswrapper[4946]: E1128 08:54:52.575153 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6794f905-f1ca-48a1-babd-115981e72ed4" containerName="nova-metadata-log" Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.575161 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="6794f905-f1ca-48a1-babd-115981e72ed4" containerName="nova-metadata-log" Nov 28 08:54:52 crc kubenswrapper[4946]: E1128 08:54:52.575189 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95bd6a55-27ab-4832-974d-20e84433add3" containerName="nova-cell0-conductor-conductor" Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.575197 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="95bd6a55-27ab-4832-974d-20e84433add3" containerName="nova-cell0-conductor-conductor" Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.575411 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="00d946a1-a238-4ab3-b8e6-e276aac8bb22" containerName="nova-api-log" Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.575434 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="6794f905-f1ca-48a1-babd-115981e72ed4" containerName="nova-metadata-log" Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.575447 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="6794f905-f1ca-48a1-babd-115981e72ed4" containerName="nova-metadata-metadata" Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.575474 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="95bd6a55-27ab-4832-974d-20e84433add3" containerName="nova-cell0-conductor-conductor" 
Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.575490 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="00d946a1-a238-4ab3-b8e6-e276aac8bb22" containerName="nova-api-api" Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.576220 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.581853 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.602434 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6794f905-f1ca-48a1-babd-115981e72ed4-combined-ca-bundle\") pod \"6794f905-f1ca-48a1-babd-115981e72ed4\" (UID: \"6794f905-f1ca-48a1-babd-115981e72ed4\") " Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.602594 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brmrk\" (UniqueName: \"kubernetes.io/projected/6794f905-f1ca-48a1-babd-115981e72ed4-kube-api-access-brmrk\") pod \"6794f905-f1ca-48a1-babd-115981e72ed4\" (UID: \"6794f905-f1ca-48a1-babd-115981e72ed4\") " Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.602693 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6794f905-f1ca-48a1-babd-115981e72ed4-logs\") pod \"6794f905-f1ca-48a1-babd-115981e72ed4\" (UID: \"6794f905-f1ca-48a1-babd-115981e72ed4\") " Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.602808 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6794f905-f1ca-48a1-babd-115981e72ed4-config-data\") pod \"6794f905-f1ca-48a1-babd-115981e72ed4\" (UID: \"6794f905-f1ca-48a1-babd-115981e72ed4\") " Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.603298 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6794f905-f1ca-48a1-babd-115981e72ed4-logs" (OuterVolumeSpecName: "logs") pod "6794f905-f1ca-48a1-babd-115981e72ed4" (UID: "6794f905-f1ca-48a1-babd-115981e72ed4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.609992 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.611022 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6794f905-f1ca-48a1-babd-115981e72ed4-kube-api-access-brmrk" (OuterVolumeSpecName: "kube-api-access-brmrk") pod "6794f905-f1ca-48a1-babd-115981e72ed4" (UID: "6794f905-f1ca-48a1-babd-115981e72ed4"). InnerVolumeSpecName "kube-api-access-brmrk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.642133 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6794f905-f1ca-48a1-babd-115981e72ed4-config-data" (OuterVolumeSpecName: "config-data") pod "6794f905-f1ca-48a1-babd-115981e72ed4" (UID: "6794f905-f1ca-48a1-babd-115981e72ed4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.652046 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6794f905-f1ca-48a1-babd-115981e72ed4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6794f905-f1ca-48a1-babd-115981e72ed4" (UID: "6794f905-f1ca-48a1-babd-115981e72ed4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.704259 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xv8jq\" (UniqueName: \"kubernetes.io/projected/00d946a1-a238-4ab3-b8e6-e276aac8bb22-kube-api-access-xv8jq\") pod \"00d946a1-a238-4ab3-b8e6-e276aac8bb22\" (UID: \"00d946a1-a238-4ab3-b8e6-e276aac8bb22\") " Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.704304 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00d946a1-a238-4ab3-b8e6-e276aac8bb22-logs\") pod \"00d946a1-a238-4ab3-b8e6-e276aac8bb22\" (UID: \"00d946a1-a238-4ab3-b8e6-e276aac8bb22\") " Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.704350 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00d946a1-a238-4ab3-b8e6-e276aac8bb22-config-data\") pod \"00d946a1-a238-4ab3-b8e6-e276aac8bb22\" (UID: \"00d946a1-a238-4ab3-b8e6-e276aac8bb22\") " Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.704397 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00d946a1-a238-4ab3-b8e6-e276aac8bb22-combined-ca-bundle\") pod \"00d946a1-a238-4ab3-b8e6-e276aac8bb22\" (UID: \"00d946a1-a238-4ab3-b8e6-e276aac8bb22\") " Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.704721 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5a57460-cf67-4cef-80a0-bb873598b9ed-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"e5a57460-cf67-4cef-80a0-bb873598b9ed\") " pod="openstack/nova-cell0-conductor-0" Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.704753 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5a57460-cf67-4cef-80a0-bb873598b9ed-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"e5a57460-cf67-4cef-80a0-bb873598b9ed\") " pod="openstack/nova-cell0-conductor-0" Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.704777 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7q8c\" (UniqueName: \"kubernetes.io/projected/e5a57460-cf67-4cef-80a0-bb873598b9ed-kube-api-access-w7q8c\") pod \"nova-cell0-conductor-0\" (UID: \"e5a57460-cf67-4cef-80a0-bb873598b9ed\") " pod="openstack/nova-cell0-conductor-0" Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.704859 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brmrk\" (UniqueName: \"kubernetes.io/projected/6794f905-f1ca-48a1-babd-115981e72ed4-kube-api-access-brmrk\") on node \"crc\" DevicePath \"\"" Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.704871 4946 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/6794f905-f1ca-48a1-babd-115981e72ed4-logs\") on node \"crc\" DevicePath \"\"" Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.704881 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6794f905-f1ca-48a1-babd-115981e72ed4-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.704889 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6794f905-f1ca-48a1-babd-115981e72ed4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.704892 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00d946a1-a238-4ab3-b8e6-e276aac8bb22-logs" (OuterVolumeSpecName: "logs") pod "00d946a1-a238-4ab3-b8e6-e276aac8bb22" (UID: "00d946a1-a238-4ab3-b8e6-e276aac8bb22"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.707700 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00d946a1-a238-4ab3-b8e6-e276aac8bb22-kube-api-access-xv8jq" (OuterVolumeSpecName: "kube-api-access-xv8jq") pod "00d946a1-a238-4ab3-b8e6-e276aac8bb22" (UID: "00d946a1-a238-4ab3-b8e6-e276aac8bb22"). InnerVolumeSpecName "kube-api-access-xv8jq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.730573 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00d946a1-a238-4ab3-b8e6-e276aac8bb22-config-data" (OuterVolumeSpecName: "config-data") pod "00d946a1-a238-4ab3-b8e6-e276aac8bb22" (UID: "00d946a1-a238-4ab3-b8e6-e276aac8bb22"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.742993 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00d946a1-a238-4ab3-b8e6-e276aac8bb22-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "00d946a1-a238-4ab3-b8e6-e276aac8bb22" (UID: "00d946a1-a238-4ab3-b8e6-e276aac8bb22"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.807135 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5a57460-cf67-4cef-80a0-bb873598b9ed-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"e5a57460-cf67-4cef-80a0-bb873598b9ed\") " pod="openstack/nova-cell0-conductor-0" Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.807212 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5a57460-cf67-4cef-80a0-bb873598b9ed-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"e5a57460-cf67-4cef-80a0-bb873598b9ed\") " pod="openstack/nova-cell0-conductor-0" Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.807272 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7q8c\" (UniqueName: \"kubernetes.io/projected/e5a57460-cf67-4cef-80a0-bb873598b9ed-kube-api-access-w7q8c\") pod \"nova-cell0-conductor-0\" (UID: \"e5a57460-cf67-4cef-80a0-bb873598b9ed\") " pod="openstack/nova-cell0-conductor-0" Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.807415 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xv8jq\" (UniqueName: \"kubernetes.io/projected/00d946a1-a238-4ab3-b8e6-e276aac8bb22-kube-api-access-xv8jq\") on node \"crc\" DevicePath \"\"" Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.807431 4946 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00d946a1-a238-4ab3-b8e6-e276aac8bb22-logs\") on node \"crc\" DevicePath \"\"" Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.807442 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00d946a1-a238-4ab3-b8e6-e276aac8bb22-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.807456 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00d946a1-a238-4ab3-b8e6-e276aac8bb22-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.811259 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5a57460-cf67-4cef-80a0-bb873598b9ed-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"e5a57460-cf67-4cef-80a0-bb873598b9ed\") " pod="openstack/nova-cell0-conductor-0" Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.812841 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5a57460-cf67-4cef-80a0-bb873598b9ed-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"e5a57460-cf67-4cef-80a0-bb873598b9ed\") " pod="openstack/nova-cell0-conductor-0" Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.826768 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7q8c\" (UniqueName: \"kubernetes.io/projected/e5a57460-cf67-4cef-80a0-bb873598b9ed-kube-api-access-w7q8c\") pod \"nova-cell0-conductor-0\" (UID: \"e5a57460-cf67-4cef-80a0-bb873598b9ed\") " pod="openstack/nova-cell0-conductor-0" Nov 28 08:54:52 crc kubenswrapper[4946]: I1128 08:54:52.899680 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 28 08:54:53 crc kubenswrapper[4946]: I1128 08:54:53.380901 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6aad4350-9266-48ae-b3ae-b4ec6046fbb1","Type":"ContainerStarted","Data":"b9e3cc0b7dacace2f336a7508f89e0180d48d6bf43b2581788402aa77efcbf39"} Nov 28 08:54:53 crc kubenswrapper[4946]: I1128 08:54:53.388125 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 08:54:53 crc kubenswrapper[4946]: I1128 08:54:53.391587 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 08:54:53 crc kubenswrapper[4946]: I1128 08:54:53.391687 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"00d946a1-a238-4ab3-b8e6-e276aac8bb22","Type":"ContainerDied","Data":"a4be75b1f2bf5db8f9dfddea2627b4bd9e4efd0405d3ecac10e7ec1c059dc29c"} Nov 28 08:54:53 crc kubenswrapper[4946]: I1128 08:54:53.391749 4946 scope.go:117] "RemoveContainer" containerID="70895f70e30891e0fb785baa90a70cffd83e5316b032ce5ba2d16473dc141fc8" Nov 28 08:54:53 crc kubenswrapper[4946]: I1128 08:54:53.404835 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 28 08:54:53 crc kubenswrapper[4946]: I1128 08:54:53.425342 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.425323008 podStartE2EDuration="2.425323008s" podCreationTimestamp="2025-11-28 08:54:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 08:54:53.396948565 +0000 UTC m=+7347.775013676" watchObservedRunningTime="2025-11-28 08:54:53.425323008 +0000 UTC m=+7347.803388119" Nov 28 08:54:53 crc kubenswrapper[4946]: I1128 08:54:53.534735 4946 scope.go:117] "RemoveContainer" containerID="e920ef783f678ccf0f74b883b2a2698545b071cd344d92d74547f8a3d6c0af40" Nov 28 08:54:53 crc kubenswrapper[4946]: I1128 08:54:53.578930 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 28 08:54:53 crc kubenswrapper[4946]: I1128 08:54:53.589801 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 28 08:54:53 crc kubenswrapper[4946]: I1128 08:54:53.605532 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 08:54:53 crc kubenswrapper[4946]: I1128 08:54:53.610586 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 08:54:53 crc kubenswrapper[4946]: I1128 08:54:53.620008 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 28 08:54:53 crc kubenswrapper[4946]: I1128 08:54:53.621471 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 08:54:53 crc kubenswrapper[4946]: I1128 08:54:53.623773 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 28 08:54:53 crc kubenswrapper[4946]: I1128 08:54:53.634348 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 28 08:54:53 crc kubenswrapper[4946]: I1128 08:54:53.636009 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 08:54:53 crc kubenswrapper[4946]: I1128 08:54:53.637942 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 28 08:54:53 crc kubenswrapper[4946]: I1128 08:54:53.654384 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 08:54:53 crc kubenswrapper[4946]: I1128 08:54:53.685879 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 08:54:53 crc kubenswrapper[4946]: I1128 08:54:53.723881 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0710305-34b3-4a15-841d-a90a2ff20c6a-logs\") pod \"nova-api-0\" (UID: \"c0710305-34b3-4a15-841d-a90a2ff20c6a\") " pod="openstack/nova-api-0" Nov 28 08:54:53 crc kubenswrapper[4946]: I1128 08:54:53.723967 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0710305-34b3-4a15-841d-a90a2ff20c6a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c0710305-34b3-4a15-841d-a90a2ff20c6a\") " pod="openstack/nova-api-0" Nov 28 08:54:53 crc kubenswrapper[4946]: I1128 08:54:53.724033 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79w28\" (UniqueName: \"kubernetes.io/projected/c0710305-34b3-4a15-841d-a90a2ff20c6a-kube-api-access-79w28\") pod \"nova-api-0\" (UID: \"c0710305-34b3-4a15-841d-a90a2ff20c6a\") " pod="openstack/nova-api-0" Nov 28 08:54:53 crc kubenswrapper[4946]: I1128 08:54:53.724094 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a-logs\") pod \"nova-metadata-0\" (UID: \"0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a\") " pod="openstack/nova-metadata-0" Nov 28 08:54:53 crc kubenswrapper[4946]: I1128 08:54:53.724122 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0710305-34b3-4a15-841d-a90a2ff20c6a-config-data\") pod \"nova-api-0\" (UID: \"c0710305-34b3-4a15-841d-a90a2ff20c6a\") " pod="openstack/nova-api-0" Nov 28 08:54:53 crc kubenswrapper[4946]: I1128 08:54:53.724169 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxg6r\" (UniqueName: \"kubernetes.io/projected/0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a-kube-api-access-sxg6r\") pod \"nova-metadata-0\" (UID: \"0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a\") " pod="openstack/nova-metadata-0" Nov 28 08:54:53 crc kubenswrapper[4946]: I1128 08:54:53.724401 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a-config-data\") pod \"nova-metadata-0\" (UID: \"0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a\") " pod="openstack/nova-metadata-0" Nov 28 08:54:53 crc kubenswrapper[4946]: I1128 08:54:53.724502 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a\") " pod="openstack/nova-metadata-0" Nov 28 08:54:53 crc 
kubenswrapper[4946]: I1128 08:54:53.826195 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a-config-data\") pod \"nova-metadata-0\" (UID: \"0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a\") " pod="openstack/nova-metadata-0" Nov 28 08:54:53 crc kubenswrapper[4946]: I1128 08:54:53.826250 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a\") " pod="openstack/nova-metadata-0" Nov 28 08:54:53 crc kubenswrapper[4946]: I1128 08:54:53.826292 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0710305-34b3-4a15-841d-a90a2ff20c6a-logs\") pod \"nova-api-0\" (UID: \"c0710305-34b3-4a15-841d-a90a2ff20c6a\") " pod="openstack/nova-api-0" Nov 28 08:54:53 crc kubenswrapper[4946]: I1128 08:54:53.826314 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0710305-34b3-4a15-841d-a90a2ff20c6a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c0710305-34b3-4a15-841d-a90a2ff20c6a\") " pod="openstack/nova-api-0" Nov 28 08:54:53 crc kubenswrapper[4946]: I1128 08:54:53.826354 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79w28\" (UniqueName: \"kubernetes.io/projected/c0710305-34b3-4a15-841d-a90a2ff20c6a-kube-api-access-79w28\") pod \"nova-api-0\" (UID: \"c0710305-34b3-4a15-841d-a90a2ff20c6a\") " pod="openstack/nova-api-0" Nov 28 08:54:53 crc kubenswrapper[4946]: I1128 08:54:53.826384 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a-logs\") pod \"nova-metadata-0\" (UID: \"0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a\") " pod="openstack/nova-metadata-0" Nov 28 08:54:53 crc kubenswrapper[4946]: I1128 08:54:53.826404 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0710305-34b3-4a15-841d-a90a2ff20c6a-config-data\") pod \"nova-api-0\" (UID: \"c0710305-34b3-4a15-841d-a90a2ff20c6a\") " pod="openstack/nova-api-0" Nov 28 08:54:53 crc kubenswrapper[4946]: I1128 08:54:53.826446 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxg6r\" (UniqueName: \"kubernetes.io/projected/0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a-kube-api-access-sxg6r\") pod \"nova-metadata-0\" (UID: \"0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a\") " pod="openstack/nova-metadata-0" Nov 28 08:54:53 crc kubenswrapper[4946]: I1128 08:54:53.826819 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0710305-34b3-4a15-841d-a90a2ff20c6a-logs\") pod \"nova-api-0\" (UID: \"c0710305-34b3-4a15-841d-a90a2ff20c6a\") " pod="openstack/nova-api-0" Nov 28 08:54:53 crc kubenswrapper[4946]: I1128 08:54:53.831362 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a-logs\") pod \"nova-metadata-0\" (UID: \"0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a\") " pod="openstack/nova-metadata-0" Nov 28 08:54:53 crc kubenswrapper[4946]: I1128 08:54:53.833071 4946 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a-config-data\") pod \"nova-metadata-0\" (UID: \"0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a\") " pod="openstack/nova-metadata-0" Nov 28 08:54:53 crc kubenswrapper[4946]: I1128 08:54:53.837956 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a\") " pod="openstack/nova-metadata-0" Nov 28 08:54:53 crc kubenswrapper[4946]: I1128 08:54:53.838891 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0710305-34b3-4a15-841d-a90a2ff20c6a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c0710305-34b3-4a15-841d-a90a2ff20c6a\") " pod="openstack/nova-api-0" Nov 28 08:54:53 crc kubenswrapper[4946]: I1128 08:54:53.847116 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0710305-34b3-4a15-841d-a90a2ff20c6a-config-data\") pod \"nova-api-0\" (UID: \"c0710305-34b3-4a15-841d-a90a2ff20c6a\") " pod="openstack/nova-api-0" Nov 28 08:54:53 crc kubenswrapper[4946]: I1128 08:54:53.849076 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxg6r\" (UniqueName: \"kubernetes.io/projected/0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a-kube-api-access-sxg6r\") pod \"nova-metadata-0\" (UID: \"0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a\") " pod="openstack/nova-metadata-0" Nov 28 08:54:53 crc kubenswrapper[4946]: I1128 08:54:53.850800 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79w28\" (UniqueName: \"kubernetes.io/projected/c0710305-34b3-4a15-841d-a90a2ff20c6a-kube-api-access-79w28\") pod \"nova-api-0\" (UID: \"c0710305-34b3-4a15-841d-a90a2ff20c6a\") " pod="openstack/nova-api-0" Nov 28 08:54:53 crc kubenswrapper[4946]: I1128 08:54:53.948831 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 08:54:53 crc kubenswrapper[4946]: I1128 08:54:53.964093 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 08:54:54 crc kubenswrapper[4946]: I1128 08:54:54.001802 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00d946a1-a238-4ab3-b8e6-e276aac8bb22" path="/var/lib/kubelet/pods/00d946a1-a238-4ab3-b8e6-e276aac8bb22/volumes" Nov 28 08:54:54 crc kubenswrapper[4946]: I1128 08:54:54.002608 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6794f905-f1ca-48a1-babd-115981e72ed4" path="/var/lib/kubelet/pods/6794f905-f1ca-48a1-babd-115981e72ed4/volumes" Nov 28 08:54:54 crc kubenswrapper[4946]: I1128 08:54:54.003209 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95bd6a55-27ab-4832-974d-20e84433add3" path="/var/lib/kubelet/pods/95bd6a55-27ab-4832-974d-20e84433add3/volumes" Nov 28 08:54:54 crc kubenswrapper[4946]: I1128 08:54:54.411022 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"e5a57460-cf67-4cef-80a0-bb873598b9ed","Type":"ContainerStarted","Data":"d91b1d82ecac143e4eed0fe464accd8eaa0a9e7ff4df07993bbdbc1616612709"} Nov 28 08:54:54 crc kubenswrapper[4946]: I1128 08:54:54.411267 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"e5a57460-cf67-4cef-80a0-bb873598b9ed","Type":"ContainerStarted","Data":"5af1303474ed15dd81ca29b95812329e79acaccc59300d086d567af144c3072b"} Nov 28 08:54:54 crc kubenswrapper[4946]: I1128 08:54:54.412110 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 28 08:54:54 crc kubenswrapper[4946]: I1128 08:54:54.413100 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 08:54:54 crc kubenswrapper[4946]: I1128 08:54:54.444680 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.444643893 podStartE2EDuration="2.444643893s" podCreationTimestamp="2025-11-28 08:54:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 08:54:54.431938458 +0000 UTC m=+7348.810003599" watchObservedRunningTime="2025-11-28 08:54:54.444643893 +0000 UTC m=+7348.822709004" Nov 28 08:54:54 crc kubenswrapper[4946]: I1128 08:54:54.488027 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 08:54:54 crc kubenswrapper[4946]: I1128 08:54:54.731044 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 08:54:54 crc kubenswrapper[4946]: I1128 08:54:54.731100 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 08:54:55 crc kubenswrapper[4946]: I1128 08:54:55.443488 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c0710305-34b3-4a15-841d-a90a2ff20c6a","Type":"ContainerStarted","Data":"d1211a4b11855f425498bc68db67776efd44f9c78e0813df5902e7e59207f617"} Nov 28 08:54:55 crc kubenswrapper[4946]: I1128 
08:54:55.443730 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c0710305-34b3-4a15-841d-a90a2ff20c6a","Type":"ContainerStarted","Data":"26f93af648d01ce75fda4864d9831891ef89ac59f10173075768a4125def6e9d"} Nov 28 08:54:55 crc kubenswrapper[4946]: I1128 08:54:55.443740 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c0710305-34b3-4a15-841d-a90a2ff20c6a","Type":"ContainerStarted","Data":"d69c8c3fd699e4fd72a54117765d39d73454767f32349d4605eec8db039c28fc"} Nov 28 08:54:55 crc kubenswrapper[4946]: I1128 08:54:55.447915 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a","Type":"ContainerStarted","Data":"461d9f2607cac19a1f5c39e9388f04cd5c3a00c8d330d862c1f6b0f314304e8d"} Nov 28 08:54:55 crc kubenswrapper[4946]: I1128 08:54:55.447943 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a","Type":"ContainerStarted","Data":"30580d9a89c94dc908f75e10a22137d36a1aa6bc4319f9b5776f47e91685ce68"} Nov 28 08:54:55 crc kubenswrapper[4946]: I1128 08:54:55.447953 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a","Type":"ContainerStarted","Data":"7b2e33729b7e57013ceeef0c12c2268fff77a528cf19c54cbfff8b269edbf34e"} Nov 28 08:54:55 crc kubenswrapper[4946]: I1128 08:54:55.453354 4946 generic.go:334] "Generic (PLEG): container finished" podID="0eaadce7-5bc4-4f66-ba4c-2c4b11cb0fec" containerID="1548764c7bcf6b716ed869f5f9ddf284c9c3e888cfe8d038f8e9bedda9eb85d1" exitCode=0 Nov 28 08:54:55 crc kubenswrapper[4946]: I1128 08:54:55.453487 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"0eaadce7-5bc4-4f66-ba4c-2c4b11cb0fec","Type":"ContainerDied","Data":"1548764c7bcf6b716ed869f5f9ddf284c9c3e888cfe8d038f8e9bedda9eb85d1"} Nov 28 08:54:55 crc kubenswrapper[4946]: I1128 08:54:55.453559 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"0eaadce7-5bc4-4f66-ba4c-2c4b11cb0fec","Type":"ContainerDied","Data":"daa3098a4cc87ace905a9dbf80dc89625dac7cafbebcdd03c613a3b43cf3e693"} Nov 28 08:54:55 crc kubenswrapper[4946]: I1128 08:54:55.453580 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="daa3098a4cc87ace905a9dbf80dc89625dac7cafbebcdd03c613a3b43cf3e693" Nov 28 08:54:55 crc kubenswrapper[4946]: I1128 08:54:55.473131 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.473115214 podStartE2EDuration="2.473115214s" podCreationTimestamp="2025-11-28 08:54:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 08:54:55.457846256 +0000 UTC m=+7349.835911377" watchObservedRunningTime="2025-11-28 08:54:55.473115214 +0000 UTC m=+7349.851180325" Nov 28 08:54:55 crc kubenswrapper[4946]: I1128 08:54:55.494790 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 28 08:54:55 crc kubenswrapper[4946]: I1128 08:54:55.509933 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.509918056 podStartE2EDuration="2.509918056s" podCreationTimestamp="2025-11-28 08:54:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 08:54:55.488201368 +0000 UTC m=+7349.866266489" watchObservedRunningTime="2025-11-28 08:54:55.509918056 +0000 UTC m=+7349.887983157" Nov 28 08:54:55 crc kubenswrapper[4946]: I1128 08:54:55.561670 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0eaadce7-5bc4-4f66-ba4c-2c4b11cb0fec-config-data\") pod \"0eaadce7-5bc4-4f66-ba4c-2c4b11cb0fec\" (UID: \"0eaadce7-5bc4-4f66-ba4c-2c4b11cb0fec\") " Nov 28 08:54:55 crc kubenswrapper[4946]: I1128 08:54:55.561736 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m29ds\" (UniqueName: \"kubernetes.io/projected/0eaadce7-5bc4-4f66-ba4c-2c4b11cb0fec-kube-api-access-m29ds\") pod \"0eaadce7-5bc4-4f66-ba4c-2c4b11cb0fec\" (UID: \"0eaadce7-5bc4-4f66-ba4c-2c4b11cb0fec\") " Nov 28 08:54:55 crc kubenswrapper[4946]: I1128 08:54:55.561885 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0eaadce7-5bc4-4f66-ba4c-2c4b11cb0fec-combined-ca-bundle\") pod \"0eaadce7-5bc4-4f66-ba4c-2c4b11cb0fec\" (UID: \"0eaadce7-5bc4-4f66-ba4c-2c4b11cb0fec\") " Nov 28 08:54:55 crc kubenswrapper[4946]: I1128 08:54:55.567069 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0eaadce7-5bc4-4f66-ba4c-2c4b11cb0fec-kube-api-access-m29ds" (OuterVolumeSpecName: "kube-api-access-m29ds") pod "0eaadce7-5bc4-4f66-ba4c-2c4b11cb0fec" (UID: "0eaadce7-5bc4-4f66-ba4c-2c4b11cb0fec"). InnerVolumeSpecName "kube-api-access-m29ds". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:54:55 crc kubenswrapper[4946]: I1128 08:54:55.584389 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0eaadce7-5bc4-4f66-ba4c-2c4b11cb0fec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0eaadce7-5bc4-4f66-ba4c-2c4b11cb0fec" (UID: "0eaadce7-5bc4-4f66-ba4c-2c4b11cb0fec"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:54:55 crc kubenswrapper[4946]: I1128 08:54:55.595382 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0eaadce7-5bc4-4f66-ba4c-2c4b11cb0fec-config-data" (OuterVolumeSpecName: "config-data") pod "0eaadce7-5bc4-4f66-ba4c-2c4b11cb0fec" (UID: "0eaadce7-5bc4-4f66-ba4c-2c4b11cb0fec"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:54:55 crc kubenswrapper[4946]: I1128 08:54:55.663935 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0eaadce7-5bc4-4f66-ba4c-2c4b11cb0fec-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 08:54:55 crc kubenswrapper[4946]: I1128 08:54:55.663980 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m29ds\" (UniqueName: \"kubernetes.io/projected/0eaadce7-5bc4-4f66-ba4c-2c4b11cb0fec-kube-api-access-m29ds\") on node \"crc\" DevicePath \"\"" Nov 28 08:54:55 crc kubenswrapper[4946]: I1128 08:54:55.663998 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0eaadce7-5bc4-4f66-ba4c-2c4b11cb0fec-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 08:54:55 crc kubenswrapper[4946]: I1128 08:54:55.696439 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 28 08:54:56 crc kubenswrapper[4946]: I1128 08:54:56.462974 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 28 08:54:56 crc kubenswrapper[4946]: I1128 08:54:56.500737 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 28 08:54:56 crc kubenswrapper[4946]: I1128 08:54:56.540361 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 28 08:54:56 crc kubenswrapper[4946]: I1128 08:54:56.551346 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 28 08:54:56 crc kubenswrapper[4946]: E1128 08:54:56.551823 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eaadce7-5bc4-4f66-ba4c-2c4b11cb0fec" containerName="nova-cell1-conductor-conductor" Nov 28 08:54:56 crc kubenswrapper[4946]: I1128 08:54:56.551849 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eaadce7-5bc4-4f66-ba4c-2c4b11cb0fec" containerName="nova-cell1-conductor-conductor" Nov 28 08:54:56 crc kubenswrapper[4946]: I1128 08:54:56.552149 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="0eaadce7-5bc4-4f66-ba4c-2c4b11cb0fec" containerName="nova-cell1-conductor-conductor" Nov 28 08:54:56 crc kubenswrapper[4946]: I1128 08:54:56.553289 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 28 08:54:56 crc kubenswrapper[4946]: I1128 08:54:56.558666 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 28 08:54:56 crc kubenswrapper[4946]: I1128 08:54:56.562988 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 28 08:54:56 crc kubenswrapper[4946]: I1128 08:54:56.582179 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37c39660-02af-4065-8612-86ef02649a2f-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"37c39660-02af-4065-8612-86ef02649a2f\") " pod="openstack/nova-cell1-conductor-0" Nov 28 08:54:56 crc kubenswrapper[4946]: I1128 08:54:56.582478 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37c39660-02af-4065-8612-86ef02649a2f-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"37c39660-02af-4065-8612-86ef02649a2f\") " pod="openstack/nova-cell1-conductor-0" Nov 28 08:54:56 crc kubenswrapper[4946]: I1128 08:54:56.582642 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nxrx\" (UniqueName: \"kubernetes.io/projected/37c39660-02af-4065-8612-86ef02649a2f-kube-api-access-7nxrx\") pod \"nova-cell1-conductor-0\" (UID: \"37c39660-02af-4065-8612-86ef02649a2f\") " pod="openstack/nova-cell1-conductor-0" Nov 28 08:54:56 crc kubenswrapper[4946]: I1128 08:54:56.684292 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37c39660-02af-4065-8612-86ef02649a2f-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"37c39660-02af-4065-8612-86ef02649a2f\") " pod="openstack/nova-cell1-conductor-0" Nov 28 08:54:56 crc kubenswrapper[4946]: I1128 08:54:56.684775 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37c39660-02af-4065-8612-86ef02649a2f-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"37c39660-02af-4065-8612-86ef02649a2f\") " pod="openstack/nova-cell1-conductor-0" Nov 28 08:54:56 crc kubenswrapper[4946]: I1128 08:54:56.684852 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nxrx\" (UniqueName: \"kubernetes.io/projected/37c39660-02af-4065-8612-86ef02649a2f-kube-api-access-7nxrx\") pod \"nova-cell1-conductor-0\" (UID: \"37c39660-02af-4065-8612-86ef02649a2f\") " pod="openstack/nova-cell1-conductor-0" Nov 28 08:54:56 crc kubenswrapper[4946]: I1128 08:54:56.693265 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37c39660-02af-4065-8612-86ef02649a2f-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"37c39660-02af-4065-8612-86ef02649a2f\") " pod="openstack/nova-cell1-conductor-0" Nov 28 08:54:56 crc kubenswrapper[4946]: I1128 08:54:56.702271 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37c39660-02af-4065-8612-86ef02649a2f-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"37c39660-02af-4065-8612-86ef02649a2f\") " pod="openstack/nova-cell1-conductor-0" Nov 28 08:54:56 crc kubenswrapper[4946]: I1128 08:54:56.707864 4946 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nxrx\" (UniqueName: \"kubernetes.io/projected/37c39660-02af-4065-8612-86ef02649a2f-kube-api-access-7nxrx\") pod \"nova-cell1-conductor-0\" (UID: \"37c39660-02af-4065-8612-86ef02649a2f\") " pod="openstack/nova-cell1-conductor-0" Nov 28 08:54:56 crc kubenswrapper[4946]: I1128 08:54:56.751225 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 28 08:54:56 crc kubenswrapper[4946]: I1128 08:54:56.879613 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 28 08:54:57 crc kubenswrapper[4946]: I1128 08:54:57.240900 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 28 08:54:57 crc kubenswrapper[4946]: I1128 08:54:57.476429 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"37c39660-02af-4065-8612-86ef02649a2f","Type":"ContainerStarted","Data":"8b7fa9697d8b4068b043297b7d17a4fa27cafffb08f872df98cd9831b62fdbb6"} Nov 28 08:54:57 crc kubenswrapper[4946]: I1128 08:54:57.476834 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 28 08:54:57 crc kubenswrapper[4946]: I1128 08:54:57.476856 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"37c39660-02af-4065-8612-86ef02649a2f","Type":"ContainerStarted","Data":"4f0258368ebfcc395917ae6ce55052b5889576f8ce88932ac9d931ee9ca9843f"} Nov 28 08:54:57 crc kubenswrapper[4946]: I1128 08:54:57.496344 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=1.4963261509999999 podStartE2EDuration="1.496326151s" podCreationTimestamp="2025-11-28 08:54:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 08:54:57.49102927 +0000 UTC m=+7351.869094391" watchObservedRunningTime="2025-11-28 08:54:57.496326151 +0000 UTC m=+7351.874391262" Nov 28 08:54:58 crc kubenswrapper[4946]: I1128 08:54:58.001753 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0eaadce7-5bc4-4f66-ba4c-2c4b11cb0fec" path="/var/lib/kubelet/pods/0eaadce7-5bc4-4f66-ba4c-2c4b11cb0fec/volumes" Nov 28 08:54:58 crc kubenswrapper[4946]: I1128 08:54:58.964556 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 28 08:54:58 crc kubenswrapper[4946]: I1128 08:54:58.964895 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 28 08:55:00 crc kubenswrapper[4946]: I1128 08:55:00.696956 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 28 08:55:00 crc kubenswrapper[4946]: I1128 08:55:00.716061 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 28 08:55:01 crc kubenswrapper[4946]: I1128 08:55:01.535376 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 28 08:55:01 crc kubenswrapper[4946]: I1128 08:55:01.751172 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 28 08:55:01 crc kubenswrapper[4946]: I1128 08:55:01.785396 4946 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 28 08:55:02 crc kubenswrapper[4946]: I1128 08:55:02.564000 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 28 08:55:02 crc kubenswrapper[4946]: I1128 08:55:02.950381 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 28 08:55:03 crc kubenswrapper[4946]: I1128 08:55:03.949562 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 28 08:55:03 crc kubenswrapper[4946]: I1128 08:55:03.949613 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 28 08:55:03 crc kubenswrapper[4946]: I1128 08:55:03.964936 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 28 08:55:03 crc kubenswrapper[4946]: I1128 08:55:03.965005 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 28 08:55:05 crc kubenswrapper[4946]: I1128 08:55:05.033617 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c0710305-34b3-4a15-841d-a90a2ff20c6a" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.88:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 28 08:55:05 crc kubenswrapper[4946]: I1128 08:55:05.115876 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.89:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 28 08:55:05 crc kubenswrapper[4946]: I1128 08:55:05.116776 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c0710305-34b3-4a15-841d-a90a2ff20c6a" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.88:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 28 08:55:05 crc kubenswrapper[4946]: I1128 08:55:05.117238 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.89:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 28 08:55:05 crc kubenswrapper[4946]: E1128 08:55:05.288325 4946 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.2:35628->38.102.83.2:34561: write tcp 38.102.83.2:35628->38.102.83.2:34561: write: broken pipe Nov 28 08:55:06 crc kubenswrapper[4946]: I1128 08:55:06.939919 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 28 08:55:10 crc kubenswrapper[4946]: I1128 08:55:10.693833 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 08:55:10 crc kubenswrapper[4946]: I1128 08:55:10.695961 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 28 08:55:10 crc kubenswrapper[4946]: I1128 08:55:10.698112 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 28 08:55:10 crc kubenswrapper[4946]: I1128 08:55:10.714927 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 08:55:10 crc kubenswrapper[4946]: I1128 08:55:10.784994 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/767720cb-05f3-4879-8ee3-f22550a25d3a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"767720cb-05f3-4879-8ee3-f22550a25d3a\") " pod="openstack/cinder-scheduler-0" Nov 28 08:55:10 crc kubenswrapper[4946]: I1128 08:55:10.785051 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/767720cb-05f3-4879-8ee3-f22550a25d3a-config-data\") pod \"cinder-scheduler-0\" (UID: \"767720cb-05f3-4879-8ee3-f22550a25d3a\") " pod="openstack/cinder-scheduler-0" Nov 28 08:55:10 crc kubenswrapper[4946]: I1128 08:55:10.785084 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/767720cb-05f3-4879-8ee3-f22550a25d3a-scripts\") pod \"cinder-scheduler-0\" (UID: \"767720cb-05f3-4879-8ee3-f22550a25d3a\") " pod="openstack/cinder-scheduler-0" Nov 28 08:55:10 crc kubenswrapper[4946]: I1128 08:55:10.785181 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/767720cb-05f3-4879-8ee3-f22550a25d3a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"767720cb-05f3-4879-8ee3-f22550a25d3a\") " pod="openstack/cinder-scheduler-0" Nov 28 08:55:10 crc kubenswrapper[4946]: I1128 08:55:10.785237 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/767720cb-05f3-4879-8ee3-f22550a25d3a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"767720cb-05f3-4879-8ee3-f22550a25d3a\") " pod="openstack/cinder-scheduler-0" Nov 28 08:55:10 crc kubenswrapper[4946]: I1128 08:55:10.785305 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l96vb\" (UniqueName: \"kubernetes.io/projected/767720cb-05f3-4879-8ee3-f22550a25d3a-kube-api-access-l96vb\") pod \"cinder-scheduler-0\" (UID: \"767720cb-05f3-4879-8ee3-f22550a25d3a\") " pod="openstack/cinder-scheduler-0" Nov 28 08:55:10 crc kubenswrapper[4946]: I1128 08:55:10.887594 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/767720cb-05f3-4879-8ee3-f22550a25d3a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"767720cb-05f3-4879-8ee3-f22550a25d3a\") " pod="openstack/cinder-scheduler-0" Nov 28 08:55:10 crc kubenswrapper[4946]: I1128 08:55:10.887682 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/767720cb-05f3-4879-8ee3-f22550a25d3a-config-data\") pod \"cinder-scheduler-0\" (UID: \"767720cb-05f3-4879-8ee3-f22550a25d3a\") " pod="openstack/cinder-scheduler-0" Nov 28 08:55:10 crc kubenswrapper[4946]: I1128 08:55:10.887721 4946 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/767720cb-05f3-4879-8ee3-f22550a25d3a-scripts\") pod \"cinder-scheduler-0\" (UID: \"767720cb-05f3-4879-8ee3-f22550a25d3a\") " pod="openstack/cinder-scheduler-0" Nov 28 08:55:10 crc kubenswrapper[4946]: I1128 08:55:10.887784 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/767720cb-05f3-4879-8ee3-f22550a25d3a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"767720cb-05f3-4879-8ee3-f22550a25d3a\") " pod="openstack/cinder-scheduler-0" Nov 28 08:55:10 crc kubenswrapper[4946]: I1128 08:55:10.887828 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/767720cb-05f3-4879-8ee3-f22550a25d3a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"767720cb-05f3-4879-8ee3-f22550a25d3a\") " pod="openstack/cinder-scheduler-0" Nov 28 08:55:10 crc kubenswrapper[4946]: I1128 08:55:10.887896 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l96vb\" (UniqueName: \"kubernetes.io/projected/767720cb-05f3-4879-8ee3-f22550a25d3a-kube-api-access-l96vb\") pod \"cinder-scheduler-0\" (UID: \"767720cb-05f3-4879-8ee3-f22550a25d3a\") " pod="openstack/cinder-scheduler-0" Nov 28 08:55:10 crc kubenswrapper[4946]: I1128 08:55:10.888068 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/767720cb-05f3-4879-8ee3-f22550a25d3a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"767720cb-05f3-4879-8ee3-f22550a25d3a\") " pod="openstack/cinder-scheduler-0" Nov 28 08:55:10 crc kubenswrapper[4946]: I1128 08:55:10.894614 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/767720cb-05f3-4879-8ee3-f22550a25d3a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"767720cb-05f3-4879-8ee3-f22550a25d3a\") " pod="openstack/cinder-scheduler-0" Nov 28 08:55:10 crc kubenswrapper[4946]: I1128 08:55:10.898963 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/767720cb-05f3-4879-8ee3-f22550a25d3a-scripts\") pod \"cinder-scheduler-0\" (UID: \"767720cb-05f3-4879-8ee3-f22550a25d3a\") " pod="openstack/cinder-scheduler-0" Nov 28 08:55:10 crc kubenswrapper[4946]: I1128 08:55:10.899493 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/767720cb-05f3-4879-8ee3-f22550a25d3a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"767720cb-05f3-4879-8ee3-f22550a25d3a\") " pod="openstack/cinder-scheduler-0" Nov 28 08:55:10 crc kubenswrapper[4946]: I1128 08:55:10.900629 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/767720cb-05f3-4879-8ee3-f22550a25d3a-config-data\") pod \"cinder-scheduler-0\" (UID: \"767720cb-05f3-4879-8ee3-f22550a25d3a\") " pod="openstack/cinder-scheduler-0" Nov 28 08:55:10 crc kubenswrapper[4946]: I1128 08:55:10.923765 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l96vb\" (UniqueName: \"kubernetes.io/projected/767720cb-05f3-4879-8ee3-f22550a25d3a-kube-api-access-l96vb\") pod \"cinder-scheduler-0\" (UID: \"767720cb-05f3-4879-8ee3-f22550a25d3a\") " pod="openstack/cinder-scheduler-0" Nov 28 
08:55:11 crc kubenswrapper[4946]: I1128 08:55:11.043803 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 28 08:55:11 crc kubenswrapper[4946]: I1128 08:55:11.527026 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 08:55:11 crc kubenswrapper[4946]: W1128 08:55:11.532267 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod767720cb_05f3_4879_8ee3_f22550a25d3a.slice/crio-c05ff86db8b78ab3de88b40c55f3068028349afc95c5720b213851b30a84a413 WatchSource:0}: Error finding container c05ff86db8b78ab3de88b40c55f3068028349afc95c5720b213851b30a84a413: Status 404 returned error can't find the container with id c05ff86db8b78ab3de88b40c55f3068028349afc95c5720b213851b30a84a413 Nov 28 08:55:11 crc kubenswrapper[4946]: I1128 08:55:11.622624 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"767720cb-05f3-4879-8ee3-f22550a25d3a","Type":"ContainerStarted","Data":"c05ff86db8b78ab3de88b40c55f3068028349afc95c5720b213851b30a84a413"} Nov 28 08:55:12 crc kubenswrapper[4946]: I1128 08:55:12.351585 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 28 08:55:12 crc kubenswrapper[4946]: I1128 08:55:12.352176 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="9eeece55-15ef-4cd1-9f27-525991788887" containerName="cinder-api-log" containerID="cri-o://5575b73911fcc3797a0de6b22f9257ebce7992d2f6f346e66ea5081c9cff5ad6" gracePeriod=30 Nov 28 08:55:12 crc kubenswrapper[4946]: I1128 08:55:12.352585 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="9eeece55-15ef-4cd1-9f27-525991788887" containerName="cinder-api" containerID="cri-o://e5ffd8bda162451b7cc9f848be59c05d0561054138c50754a4ac4536e542e20b" gracePeriod=30 Nov 28 08:55:12 crc kubenswrapper[4946]: I1128 08:55:12.640061 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"767720cb-05f3-4879-8ee3-f22550a25d3a","Type":"ContainerStarted","Data":"a10c2561a2ccbbfdd9a72d9f2a2cb90c3eab831e68d718ea86c034c321e32670"} Nov 28 08:55:12 crc kubenswrapper[4946]: I1128 08:55:12.642686 4946 generic.go:334] "Generic (PLEG): container finished" podID="9eeece55-15ef-4cd1-9f27-525991788887" containerID="5575b73911fcc3797a0de6b22f9257ebce7992d2f6f346e66ea5081c9cff5ad6" exitCode=143 Nov 28 08:55:12 crc kubenswrapper[4946]: I1128 08:55:12.642727 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9eeece55-15ef-4cd1-9f27-525991788887","Type":"ContainerDied","Data":"5575b73911fcc3797a0de6b22f9257ebce7992d2f6f346e66ea5081c9cff5ad6"} Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.160615 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.162729 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-volume1-0" Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.165084 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.200077 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.236988 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56eca230-e485-44c4-85df-36d401200197-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"56eca230-e485-44c4-85df-36d401200197\") " pod="openstack/cinder-volume-volume1-0" Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.237089 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/56eca230-e485-44c4-85df-36d401200197-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"56eca230-e485-44c4-85df-36d401200197\") " pod="openstack/cinder-volume-volume1-0" Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.237211 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/56eca230-e485-44c4-85df-36d401200197-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"56eca230-e485-44c4-85df-36d401200197\") " pod="openstack/cinder-volume-volume1-0" Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.237266 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/56eca230-e485-44c4-85df-36d401200197-sys\") pod \"cinder-volume-volume1-0\" (UID: \"56eca230-e485-44c4-85df-36d401200197\") " pod="openstack/cinder-volume-volume1-0" Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.237327 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/56eca230-e485-44c4-85df-36d401200197-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"56eca230-e485-44c4-85df-36d401200197\") " pod="openstack/cinder-volume-volume1-0" Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.237356 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/56eca230-e485-44c4-85df-36d401200197-dev\") pod \"cinder-volume-volume1-0\" (UID: \"56eca230-e485-44c4-85df-36d401200197\") " pod="openstack/cinder-volume-volume1-0" Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.237408 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/56eca230-e485-44c4-85df-36d401200197-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"56eca230-e485-44c4-85df-36d401200197\") " pod="openstack/cinder-volume-volume1-0" Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.237435 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/56eca230-e485-44c4-85df-36d401200197-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"56eca230-e485-44c4-85df-36d401200197\") " pod="openstack/cinder-volume-volume1-0" 
Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.237502 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/56eca230-e485-44c4-85df-36d401200197-run\") pod \"cinder-volume-volume1-0\" (UID: \"56eca230-e485-44c4-85df-36d401200197\") " pod="openstack/cinder-volume-volume1-0" Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.237533 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/56eca230-e485-44c4-85df-36d401200197-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"56eca230-e485-44c4-85df-36d401200197\") " pod="openstack/cinder-volume-volume1-0" Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.237551 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/56eca230-e485-44c4-85df-36d401200197-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"56eca230-e485-44c4-85df-36d401200197\") " pod="openstack/cinder-volume-volume1-0" Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.237609 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/56eca230-e485-44c4-85df-36d401200197-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"56eca230-e485-44c4-85df-36d401200197\") " pod="openstack/cinder-volume-volume1-0" Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.237644 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56eca230-e485-44c4-85df-36d401200197-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"56eca230-e485-44c4-85df-36d401200197\") " pod="openstack/cinder-volume-volume1-0" Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.237675 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/56eca230-e485-44c4-85df-36d401200197-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"56eca230-e485-44c4-85df-36d401200197\") " pod="openstack/cinder-volume-volume1-0" Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.237728 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56eca230-e485-44c4-85df-36d401200197-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"56eca230-e485-44c4-85df-36d401200197\") " pod="openstack/cinder-volume-volume1-0" Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.237753 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svcwr\" (UniqueName: \"kubernetes.io/projected/56eca230-e485-44c4-85df-36d401200197-kube-api-access-svcwr\") pod \"cinder-volume-volume1-0\" (UID: \"56eca230-e485-44c4-85df-36d401200197\") " pod="openstack/cinder-volume-volume1-0" Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.339850 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/56eca230-e485-44c4-85df-36d401200197-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"56eca230-e485-44c4-85df-36d401200197\") " pod="openstack/cinder-volume-volume1-0" Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.339895 4946 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/56eca230-e485-44c4-85df-36d401200197-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"56eca230-e485-44c4-85df-36d401200197\") " pod="openstack/cinder-volume-volume1-0" Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.339920 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/56eca230-e485-44c4-85df-36d401200197-sys\") pod \"cinder-volume-volume1-0\" (UID: \"56eca230-e485-44c4-85df-36d401200197\") " pod="openstack/cinder-volume-volume1-0" Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.339951 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/56eca230-e485-44c4-85df-36d401200197-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"56eca230-e485-44c4-85df-36d401200197\") " pod="openstack/cinder-volume-volume1-0" Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.339968 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/56eca230-e485-44c4-85df-36d401200197-dev\") pod \"cinder-volume-volume1-0\" (UID: \"56eca230-e485-44c4-85df-36d401200197\") " pod="openstack/cinder-volume-volume1-0" Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.339986 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/56eca230-e485-44c4-85df-36d401200197-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"56eca230-e485-44c4-85df-36d401200197\") " pod="openstack/cinder-volume-volume1-0" Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.340003 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/56eca230-e485-44c4-85df-36d401200197-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"56eca230-e485-44c4-85df-36d401200197\") " pod="openstack/cinder-volume-volume1-0" Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.340000 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/56eca230-e485-44c4-85df-36d401200197-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"56eca230-e485-44c4-85df-36d401200197\") " pod="openstack/cinder-volume-volume1-0" Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.340020 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/56eca230-e485-44c4-85df-36d401200197-sys\") pod \"cinder-volume-volume1-0\" (UID: \"56eca230-e485-44c4-85df-36d401200197\") " pod="openstack/cinder-volume-volume1-0" Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.340035 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/56eca230-e485-44c4-85df-36d401200197-run\") pod \"cinder-volume-volume1-0\" (UID: \"56eca230-e485-44c4-85df-36d401200197\") " pod="openstack/cinder-volume-volume1-0" Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.340068 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/56eca230-e485-44c4-85df-36d401200197-run\") pod \"cinder-volume-volume1-0\" (UID: \"56eca230-e485-44c4-85df-36d401200197\") " 
pod="openstack/cinder-volume-volume1-0" Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.340095 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/56eca230-e485-44c4-85df-36d401200197-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"56eca230-e485-44c4-85df-36d401200197\") " pod="openstack/cinder-volume-volume1-0" Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.340133 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/56eca230-e485-44c4-85df-36d401200197-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"56eca230-e485-44c4-85df-36d401200197\") " pod="openstack/cinder-volume-volume1-0" Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.340162 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56eca230-e485-44c4-85df-36d401200197-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"56eca230-e485-44c4-85df-36d401200197\") " pod="openstack/cinder-volume-volume1-0" Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.340182 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/56eca230-e485-44c4-85df-36d401200197-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"56eca230-e485-44c4-85df-36d401200197\") " pod="openstack/cinder-volume-volume1-0" Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.340218 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/56eca230-e485-44c4-85df-36d401200197-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"56eca230-e485-44c4-85df-36d401200197\") " pod="openstack/cinder-volume-volume1-0" Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.340284 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/56eca230-e485-44c4-85df-36d401200197-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"56eca230-e485-44c4-85df-36d401200197\") " pod="openstack/cinder-volume-volume1-0" Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.340289 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56eca230-e485-44c4-85df-36d401200197-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"56eca230-e485-44c4-85df-36d401200197\") " pod="openstack/cinder-volume-volume1-0" Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.340340 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svcwr\" (UniqueName: \"kubernetes.io/projected/56eca230-e485-44c4-85df-36d401200197-kube-api-access-svcwr\") pod \"cinder-volume-volume1-0\" (UID: \"56eca230-e485-44c4-85df-36d401200197\") " pod="openstack/cinder-volume-volume1-0" Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.340605 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56eca230-e485-44c4-85df-36d401200197-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"56eca230-e485-44c4-85df-36d401200197\") " pod="openstack/cinder-volume-volume1-0" Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.340854 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" 
(UniqueName: \"kubernetes.io/host-path/56eca230-e485-44c4-85df-36d401200197-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"56eca230-e485-44c4-85df-36d401200197\") " pod="openstack/cinder-volume-volume1-0" Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.340890 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/56eca230-e485-44c4-85df-36d401200197-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"56eca230-e485-44c4-85df-36d401200197\") " pod="openstack/cinder-volume-volume1-0" Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.340891 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/56eca230-e485-44c4-85df-36d401200197-dev\") pod \"cinder-volume-volume1-0\" (UID: \"56eca230-e485-44c4-85df-36d401200197\") " pod="openstack/cinder-volume-volume1-0" Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.340915 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/56eca230-e485-44c4-85df-36d401200197-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"56eca230-e485-44c4-85df-36d401200197\") " pod="openstack/cinder-volume-volume1-0" Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.341155 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/56eca230-e485-44c4-85df-36d401200197-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"56eca230-e485-44c4-85df-36d401200197\") " pod="openstack/cinder-volume-volume1-0" Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.341155 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/56eca230-e485-44c4-85df-36d401200197-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"56eca230-e485-44c4-85df-36d401200197\") " pod="openstack/cinder-volume-volume1-0" Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.345132 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/56eca230-e485-44c4-85df-36d401200197-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"56eca230-e485-44c4-85df-36d401200197\") " pod="openstack/cinder-volume-volume1-0" Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.346017 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/56eca230-e485-44c4-85df-36d401200197-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"56eca230-e485-44c4-85df-36d401200197\") " pod="openstack/cinder-volume-volume1-0" Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.346060 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56eca230-e485-44c4-85df-36d401200197-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"56eca230-e485-44c4-85df-36d401200197\") " pod="openstack/cinder-volume-volume1-0" Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.356129 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56eca230-e485-44c4-85df-36d401200197-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"56eca230-e485-44c4-85df-36d401200197\") " pod="openstack/cinder-volume-volume1-0" Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 
08:55:13.360422 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56eca230-e485-44c4-85df-36d401200197-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"56eca230-e485-44c4-85df-36d401200197\") " pod="openstack/cinder-volume-volume1-0" Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.378789 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svcwr\" (UniqueName: \"kubernetes.io/projected/56eca230-e485-44c4-85df-36d401200197-kube-api-access-svcwr\") pod \"cinder-volume-volume1-0\" (UID: \"56eca230-e485-44c4-85df-36d401200197\") " pod="openstack/cinder-volume-volume1-0" Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.485221 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.671835 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"767720cb-05f3-4879-8ee3-f22550a25d3a","Type":"ContainerStarted","Data":"db55406c4b6170bc0f31fd3abddce1119cb817f2ed44875b9080557d81dbcd78"} Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.698304 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.270247703 podStartE2EDuration="3.698279304s" podCreationTimestamp="2025-11-28 08:55:10 +0000 UTC" firstStartedPulling="2025-11-28 08:55:11.534960856 +0000 UTC m=+7365.913025967" lastFinishedPulling="2025-11-28 08:55:11.962992457 +0000 UTC m=+7366.341057568" observedRunningTime="2025-11-28 08:55:13.690326897 +0000 UTC m=+7368.068392008" watchObservedRunningTime="2025-11-28 08:55:13.698279304 +0000 UTC m=+7368.076344425" Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.950503 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.952796 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.957092 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.966406 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.968016 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.975299 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.978553 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.983791 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.987868 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.988008 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 28 08:55:13 crc kubenswrapper[4946]: I1128 08:55:13.988088 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 28 08:55:14 crc kubenswrapper[4946]: I1128 08:55:14.053451 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/e1ca140e-917e-4491-89e9-b5d62997fb8f-etc-nvme\") pod \"cinder-backup-0\" (UID: \"e1ca140e-917e-4491-89e9-b5d62997fb8f\") " pod="openstack/cinder-backup-0" Nov 28 08:55:14 crc kubenswrapper[4946]: I1128 08:55:14.053527 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e1ca140e-917e-4491-89e9-b5d62997fb8f-sys\") pod \"cinder-backup-0\" (UID: \"e1ca140e-917e-4491-89e9-b5d62997fb8f\") " pod="openstack/cinder-backup-0" Nov 28 08:55:14 crc kubenswrapper[4946]: I1128 08:55:14.053548 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1ca140e-917e-4491-89e9-b5d62997fb8f-config-data\") pod \"cinder-backup-0\" (UID: \"e1ca140e-917e-4491-89e9-b5d62997fb8f\") " pod="openstack/cinder-backup-0" Nov 28 08:55:14 crc kubenswrapper[4946]: I1128 08:55:14.053568 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e1ca140e-917e-4491-89e9-b5d62997fb8f-config-data-custom\") pod \"cinder-backup-0\" (UID: \"e1ca140e-917e-4491-89e9-b5d62997fb8f\") " pod="openstack/cinder-backup-0" Nov 28 08:55:14 crc kubenswrapper[4946]: I1128 08:55:14.053669 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/e1ca140e-917e-4491-89e9-b5d62997fb8f-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"e1ca140e-917e-4491-89e9-b5d62997fb8f\") " pod="openstack/cinder-backup-0" Nov 28 08:55:14 crc kubenswrapper[4946]: I1128 08:55:14.053700 4946 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/e1ca140e-917e-4491-89e9-b5d62997fb8f-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"e1ca140e-917e-4491-89e9-b5d62997fb8f\") " pod="openstack/cinder-backup-0" Nov 28 08:55:14 crc kubenswrapper[4946]: I1128 08:55:14.053717 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/e1ca140e-917e-4491-89e9-b5d62997fb8f-dev\") pod \"cinder-backup-0\" (UID: \"e1ca140e-917e-4491-89e9-b5d62997fb8f\") " pod="openstack/cinder-backup-0" Nov 28 08:55:14 crc kubenswrapper[4946]: I1128 08:55:14.053740 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1ca140e-917e-4491-89e9-b5d62997fb8f-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"e1ca140e-917e-4491-89e9-b5d62997fb8f\") " pod="openstack/cinder-backup-0" Nov 28 08:55:14 crc kubenswrapper[4946]: I1128 08:55:14.053843 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e1ca140e-917e-4491-89e9-b5d62997fb8f-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"e1ca140e-917e-4491-89e9-b5d62997fb8f\") " pod="openstack/cinder-backup-0" Nov 28 08:55:14 crc kubenswrapper[4946]: I1128 08:55:14.054209 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e1ca140e-917e-4491-89e9-b5d62997fb8f-lib-modules\") pod \"cinder-backup-0\" (UID: \"e1ca140e-917e-4491-89e9-b5d62997fb8f\") " pod="openstack/cinder-backup-0" Nov 28 08:55:14 crc kubenswrapper[4946]: I1128 08:55:14.054288 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/e1ca140e-917e-4491-89e9-b5d62997fb8f-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"e1ca140e-917e-4491-89e9-b5d62997fb8f\") " pod="openstack/cinder-backup-0" Nov 28 08:55:14 crc kubenswrapper[4946]: I1128 08:55:14.054364 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e1ca140e-917e-4491-89e9-b5d62997fb8f-ceph\") pod \"cinder-backup-0\" (UID: \"e1ca140e-917e-4491-89e9-b5d62997fb8f\") " pod="openstack/cinder-backup-0" Nov 28 08:55:14 crc kubenswrapper[4946]: I1128 08:55:14.054388 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1ca140e-917e-4491-89e9-b5d62997fb8f-scripts\") pod \"cinder-backup-0\" (UID: \"e1ca140e-917e-4491-89e9-b5d62997fb8f\") " pod="openstack/cinder-backup-0" Nov 28 08:55:14 crc kubenswrapper[4946]: I1128 08:55:14.054403 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e1ca140e-917e-4491-89e9-b5d62997fb8f-run\") pod \"cinder-backup-0\" (UID: \"e1ca140e-917e-4491-89e9-b5d62997fb8f\") " pod="openstack/cinder-backup-0" Nov 28 08:55:14 crc kubenswrapper[4946]: I1128 08:55:14.054423 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/e1ca140e-917e-4491-89e9-b5d62997fb8f-var-locks-brick\") pod \"cinder-backup-0\" 
(UID: \"e1ca140e-917e-4491-89e9-b5d62997fb8f\") " pod="openstack/cinder-backup-0" Nov 28 08:55:14 crc kubenswrapper[4946]: I1128 08:55:14.054522 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct967\" (UniqueName: \"kubernetes.io/projected/e1ca140e-917e-4491-89e9-b5d62997fb8f-kube-api-access-ct967\") pod \"cinder-backup-0\" (UID: \"e1ca140e-917e-4491-89e9-b5d62997fb8f\") " pod="openstack/cinder-backup-0" Nov 28 08:55:14 crc kubenswrapper[4946]: I1128 08:55:14.120408 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Nov 28 08:55:14 crc kubenswrapper[4946]: I1128 08:55:14.155631 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e1ca140e-917e-4491-89e9-b5d62997fb8f-ceph\") pod \"cinder-backup-0\" (UID: \"e1ca140e-917e-4491-89e9-b5d62997fb8f\") " pod="openstack/cinder-backup-0" Nov 28 08:55:14 crc kubenswrapper[4946]: I1128 08:55:14.155685 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1ca140e-917e-4491-89e9-b5d62997fb8f-scripts\") pod \"cinder-backup-0\" (UID: \"e1ca140e-917e-4491-89e9-b5d62997fb8f\") " pod="openstack/cinder-backup-0" Nov 28 08:55:14 crc kubenswrapper[4946]: I1128 08:55:14.155710 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e1ca140e-917e-4491-89e9-b5d62997fb8f-run\") pod \"cinder-backup-0\" (UID: \"e1ca140e-917e-4491-89e9-b5d62997fb8f\") " pod="openstack/cinder-backup-0" Nov 28 08:55:14 crc kubenswrapper[4946]: I1128 08:55:14.155733 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/e1ca140e-917e-4491-89e9-b5d62997fb8f-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"e1ca140e-917e-4491-89e9-b5d62997fb8f\") " pod="openstack/cinder-backup-0" Nov 28 08:55:14 crc kubenswrapper[4946]: I1128 08:55:14.155775 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ct967\" (UniqueName: \"kubernetes.io/projected/e1ca140e-917e-4491-89e9-b5d62997fb8f-kube-api-access-ct967\") pod \"cinder-backup-0\" (UID: \"e1ca140e-917e-4491-89e9-b5d62997fb8f\") " pod="openstack/cinder-backup-0" Nov 28 08:55:14 crc kubenswrapper[4946]: I1128 08:55:14.155800 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/e1ca140e-917e-4491-89e9-b5d62997fb8f-etc-nvme\") pod \"cinder-backup-0\" (UID: \"e1ca140e-917e-4491-89e9-b5d62997fb8f\") " pod="openstack/cinder-backup-0" Nov 28 08:55:14 crc kubenswrapper[4946]: I1128 08:55:14.155818 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e1ca140e-917e-4491-89e9-b5d62997fb8f-sys\") pod \"cinder-backup-0\" (UID: \"e1ca140e-917e-4491-89e9-b5d62997fb8f\") " pod="openstack/cinder-backup-0" Nov 28 08:55:14 crc kubenswrapper[4946]: I1128 08:55:14.155838 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1ca140e-917e-4491-89e9-b5d62997fb8f-config-data\") pod \"cinder-backup-0\" (UID: \"e1ca140e-917e-4491-89e9-b5d62997fb8f\") " pod="openstack/cinder-backup-0" Nov 28 08:55:14 crc kubenswrapper[4946]: I1128 08:55:14.155857 4946 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e1ca140e-917e-4491-89e9-b5d62997fb8f-config-data-custom\") pod \"cinder-backup-0\" (UID: \"e1ca140e-917e-4491-89e9-b5d62997fb8f\") " pod="openstack/cinder-backup-0" Nov 28 08:55:14 crc kubenswrapper[4946]: I1128 08:55:14.155897 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/e1ca140e-917e-4491-89e9-b5d62997fb8f-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"e1ca140e-917e-4491-89e9-b5d62997fb8f\") " pod="openstack/cinder-backup-0" Nov 28 08:55:14 crc kubenswrapper[4946]: I1128 08:55:14.155921 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/e1ca140e-917e-4491-89e9-b5d62997fb8f-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"e1ca140e-917e-4491-89e9-b5d62997fb8f\") " pod="openstack/cinder-backup-0" Nov 28 08:55:14 crc kubenswrapper[4946]: I1128 08:55:14.155942 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/e1ca140e-917e-4491-89e9-b5d62997fb8f-dev\") pod \"cinder-backup-0\" (UID: \"e1ca140e-917e-4491-89e9-b5d62997fb8f\") " pod="openstack/cinder-backup-0" Nov 28 08:55:14 crc kubenswrapper[4946]: I1128 08:55:14.155967 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1ca140e-917e-4491-89e9-b5d62997fb8f-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"e1ca140e-917e-4491-89e9-b5d62997fb8f\") " pod="openstack/cinder-backup-0" Nov 28 08:55:14 crc kubenswrapper[4946]: I1128 08:55:14.155992 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e1ca140e-917e-4491-89e9-b5d62997fb8f-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"e1ca140e-917e-4491-89e9-b5d62997fb8f\") " pod="openstack/cinder-backup-0" Nov 28 08:55:14 crc kubenswrapper[4946]: I1128 08:55:14.156096 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e1ca140e-917e-4491-89e9-b5d62997fb8f-lib-modules\") pod \"cinder-backup-0\" (UID: \"e1ca140e-917e-4491-89e9-b5d62997fb8f\") " pod="openstack/cinder-backup-0" Nov 28 08:55:14 crc kubenswrapper[4946]: I1128 08:55:14.156131 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/e1ca140e-917e-4491-89e9-b5d62997fb8f-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"e1ca140e-917e-4491-89e9-b5d62997fb8f\") " pod="openstack/cinder-backup-0" Nov 28 08:55:14 crc kubenswrapper[4946]: I1128 08:55:14.156281 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/e1ca140e-917e-4491-89e9-b5d62997fb8f-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"e1ca140e-917e-4491-89e9-b5d62997fb8f\") " pod="openstack/cinder-backup-0" Nov 28 08:55:14 crc kubenswrapper[4946]: I1128 08:55:14.156630 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/e1ca140e-917e-4491-89e9-b5d62997fb8f-dev\") pod \"cinder-backup-0\" (UID: \"e1ca140e-917e-4491-89e9-b5d62997fb8f\") " pod="openstack/cinder-backup-0" Nov 28 08:55:14 crc kubenswrapper[4946]: I1128 08:55:14.156711 4946 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e1ca140e-917e-4491-89e9-b5d62997fb8f-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"e1ca140e-917e-4491-89e9-b5d62997fb8f\") " pod="openstack/cinder-backup-0" Nov 28 08:55:14 crc kubenswrapper[4946]: I1128 08:55:14.156795 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e1ca140e-917e-4491-89e9-b5d62997fb8f-sys\") pod \"cinder-backup-0\" (UID: \"e1ca140e-917e-4491-89e9-b5d62997fb8f\") " pod="openstack/cinder-backup-0" Nov 28 08:55:14 crc kubenswrapper[4946]: I1128 08:55:14.156838 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/e1ca140e-917e-4491-89e9-b5d62997fb8f-etc-nvme\") pod \"cinder-backup-0\" (UID: \"e1ca140e-917e-4491-89e9-b5d62997fb8f\") " pod="openstack/cinder-backup-0" Nov 28 08:55:14 crc kubenswrapper[4946]: I1128 08:55:14.156836 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e1ca140e-917e-4491-89e9-b5d62997fb8f-lib-modules\") pod \"cinder-backup-0\" (UID: \"e1ca140e-917e-4491-89e9-b5d62997fb8f\") " pod="openstack/cinder-backup-0" Nov 28 08:55:14 crc kubenswrapper[4946]: I1128 08:55:14.156908 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e1ca140e-917e-4491-89e9-b5d62997fb8f-run\") pod \"cinder-backup-0\" (UID: \"e1ca140e-917e-4491-89e9-b5d62997fb8f\") " pod="openstack/cinder-backup-0" Nov 28 08:55:14 crc kubenswrapper[4946]: I1128 08:55:14.156913 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/e1ca140e-917e-4491-89e9-b5d62997fb8f-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"e1ca140e-917e-4491-89e9-b5d62997fb8f\") " pod="openstack/cinder-backup-0" Nov 28 08:55:14 crc kubenswrapper[4946]: I1128 08:55:14.157001 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/e1ca140e-917e-4491-89e9-b5d62997fb8f-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"e1ca140e-917e-4491-89e9-b5d62997fb8f\") " pod="openstack/cinder-backup-0" Nov 28 08:55:14 crc kubenswrapper[4946]: I1128 08:55:14.157177 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/e1ca140e-917e-4491-89e9-b5d62997fb8f-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"e1ca140e-917e-4491-89e9-b5d62997fb8f\") " pod="openstack/cinder-backup-0" Nov 28 08:55:14 crc kubenswrapper[4946]: I1128 08:55:14.160961 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1ca140e-917e-4491-89e9-b5d62997fb8f-scripts\") pod \"cinder-backup-0\" (UID: \"e1ca140e-917e-4491-89e9-b5d62997fb8f\") " pod="openstack/cinder-backup-0" Nov 28 08:55:14 crc kubenswrapper[4946]: I1128 08:55:14.161064 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1ca140e-917e-4491-89e9-b5d62997fb8f-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"e1ca140e-917e-4491-89e9-b5d62997fb8f\") " pod="openstack/cinder-backup-0" Nov 28 08:55:14 crc kubenswrapper[4946]: I1128 08:55:14.165293 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/e1ca140e-917e-4491-89e9-b5d62997fb8f-config-data\") pod \"cinder-backup-0\" (UID: \"e1ca140e-917e-4491-89e9-b5d62997fb8f\") " pod="openstack/cinder-backup-0" Nov 28 08:55:14 crc kubenswrapper[4946]: I1128 08:55:14.169091 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e1ca140e-917e-4491-89e9-b5d62997fb8f-config-data-custom\") pod \"cinder-backup-0\" (UID: \"e1ca140e-917e-4491-89e9-b5d62997fb8f\") " pod="openstack/cinder-backup-0" Nov 28 08:55:14 crc kubenswrapper[4946]: I1128 08:55:14.172858 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e1ca140e-917e-4491-89e9-b5d62997fb8f-ceph\") pod \"cinder-backup-0\" (UID: \"e1ca140e-917e-4491-89e9-b5d62997fb8f\") " pod="openstack/cinder-backup-0" Nov 28 08:55:14 crc kubenswrapper[4946]: I1128 08:55:14.173026 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ct967\" (UniqueName: \"kubernetes.io/projected/e1ca140e-917e-4491-89e9-b5d62997fb8f-kube-api-access-ct967\") pod \"cinder-backup-0\" (UID: \"e1ca140e-917e-4491-89e9-b5d62997fb8f\") " pod="openstack/cinder-backup-0" Nov 28 08:55:14 crc kubenswrapper[4946]: I1128 08:55:14.284603 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Nov 28 08:55:14 crc kubenswrapper[4946]: I1128 08:55:14.622023 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Nov 28 08:55:14 crc kubenswrapper[4946]: I1128 08:55:14.681126 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"56eca230-e485-44c4-85df-36d401200197","Type":"ContainerStarted","Data":"643dba71e7adc16d1910a0d2e3f6cbfdff97b9ff67e1dcc5b1fc37c60ef89d72"} Nov 28 08:55:14 crc kubenswrapper[4946]: I1128 08:55:14.683347 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"e1ca140e-917e-4491-89e9-b5d62997fb8f","Type":"ContainerStarted","Data":"0e944d1f689110a06e492406df5cef78127a67955e576718e149077e1d887474"} Nov 28 08:55:14 crc kubenswrapper[4946]: I1128 08:55:14.684543 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 28 08:55:14 crc kubenswrapper[4946]: I1128 08:55:14.687947 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 28 08:55:14 crc kubenswrapper[4946]: I1128 08:55:14.697622 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 28 08:55:15 crc kubenswrapper[4946]: I1128 08:55:15.712875 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"56eca230-e485-44c4-85df-36d401200197","Type":"ContainerStarted","Data":"3e072784fa038b857229080f16a2ba697bfb5ac9557f807760a8dece3b26bb22"} Nov 28 08:55:15 crc kubenswrapper[4946]: I1128 08:55:15.715858 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"56eca230-e485-44c4-85df-36d401200197","Type":"ContainerStarted","Data":"d89ae7cb50295ed616aa56bdc243624c141878e45866a8501b6b609834b0af94"} Nov 28 08:55:15 crc kubenswrapper[4946]: I1128 08:55:15.722870 4946 generic.go:334] "Generic (PLEG): container finished" podID="9eeece55-15ef-4cd1-9f27-525991788887" 
containerID="e5ffd8bda162451b7cc9f848be59c05d0561054138c50754a4ac4536e542e20b" exitCode=0 Nov 28 08:55:15 crc kubenswrapper[4946]: I1128 08:55:15.722946 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9eeece55-15ef-4cd1-9f27-525991788887","Type":"ContainerDied","Data":"e5ffd8bda162451b7cc9f848be59c05d0561054138c50754a4ac4536e542e20b"} Nov 28 08:55:15 crc kubenswrapper[4946]: I1128 08:55:15.726217 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"e1ca140e-917e-4491-89e9-b5d62997fb8f","Type":"ContainerStarted","Data":"2035a4c6d9382b5a2c684f9bfc3368ce062d9776a5b8845f3ffa63163154f267"} Nov 28 08:55:15 crc kubenswrapper[4946]: I1128 08:55:15.726248 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"e1ca140e-917e-4491-89e9-b5d62997fb8f","Type":"ContainerStarted","Data":"409a5fdb0d0b832ab8088efe79c0dcd6fa0d5c2b4114f7bd6a1c27d35538905e"} Nov 28 08:55:15 crc kubenswrapper[4946]: I1128 08:55:15.776041 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=2.348126855 podStartE2EDuration="2.776015821s" podCreationTimestamp="2025-11-28 08:55:13 +0000 UTC" firstStartedPulling="2025-11-28 08:55:14.104389762 +0000 UTC m=+7368.482454873" lastFinishedPulling="2025-11-28 08:55:14.532278728 +0000 UTC m=+7368.910343839" observedRunningTime="2025-11-28 08:55:15.735390705 +0000 UTC m=+7370.113455816" watchObservedRunningTime="2025-11-28 08:55:15.776015821 +0000 UTC m=+7370.154080972" Nov 28 08:55:15 crc kubenswrapper[4946]: I1128 08:55:15.785346 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=2.466340452 podStartE2EDuration="2.785324342s" podCreationTimestamp="2025-11-28 08:55:13 +0000 UTC" firstStartedPulling="2025-11-28 08:55:14.624083812 +0000 UTC m=+7369.002148923" lastFinishedPulling="2025-11-28 08:55:14.943067712 +0000 UTC m=+7369.321132813" observedRunningTime="2025-11-28 08:55:15.766845814 +0000 UTC m=+7370.144910925" watchObservedRunningTime="2025-11-28 08:55:15.785324342 +0000 UTC m=+7370.163389483" Nov 28 08:55:15 crc kubenswrapper[4946]: I1128 08:55:15.915448 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 28 08:55:15 crc kubenswrapper[4946]: I1128 08:55:15.987335 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4l4q\" (UniqueName: \"kubernetes.io/projected/9eeece55-15ef-4cd1-9f27-525991788887-kube-api-access-l4l4q\") pod \"9eeece55-15ef-4cd1-9f27-525991788887\" (UID: \"9eeece55-15ef-4cd1-9f27-525991788887\") " Nov 28 08:55:15 crc kubenswrapper[4946]: I1128 08:55:15.988801 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9eeece55-15ef-4cd1-9f27-525991788887-scripts\") pod \"9eeece55-15ef-4cd1-9f27-525991788887\" (UID: \"9eeece55-15ef-4cd1-9f27-525991788887\") " Nov 28 08:55:15 crc kubenswrapper[4946]: I1128 08:55:15.989212 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eeece55-15ef-4cd1-9f27-525991788887-config-data\") pod \"9eeece55-15ef-4cd1-9f27-525991788887\" (UID: \"9eeece55-15ef-4cd1-9f27-525991788887\") " Nov 28 08:55:15 crc kubenswrapper[4946]: I1128 08:55:15.989246 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eeece55-15ef-4cd1-9f27-525991788887-combined-ca-bundle\") pod \"9eeece55-15ef-4cd1-9f27-525991788887\" (UID: \"9eeece55-15ef-4cd1-9f27-525991788887\") " Nov 28 08:55:15 crc kubenswrapper[4946]: I1128 08:55:15.989299 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9eeece55-15ef-4cd1-9f27-525991788887-config-data-custom\") pod \"9eeece55-15ef-4cd1-9f27-525991788887\" (UID: \"9eeece55-15ef-4cd1-9f27-525991788887\") " Nov 28 08:55:15 crc kubenswrapper[4946]: I1128 08:55:15.989366 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9eeece55-15ef-4cd1-9f27-525991788887-logs\") pod \"9eeece55-15ef-4cd1-9f27-525991788887\" (UID: \"9eeece55-15ef-4cd1-9f27-525991788887\") " Nov 28 08:55:15 crc kubenswrapper[4946]: I1128 08:55:15.989479 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9eeece55-15ef-4cd1-9f27-525991788887-etc-machine-id\") pod \"9eeece55-15ef-4cd1-9f27-525991788887\" (UID: \"9eeece55-15ef-4cd1-9f27-525991788887\") " Nov 28 08:55:15 crc kubenswrapper[4946]: I1128 08:55:15.990024 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9eeece55-15ef-4cd1-9f27-525991788887-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "9eeece55-15ef-4cd1-9f27-525991788887" (UID: "9eeece55-15ef-4cd1-9f27-525991788887"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 08:55:15 crc kubenswrapper[4946]: I1128 08:55:15.994079 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9eeece55-15ef-4cd1-9f27-525991788887-logs" (OuterVolumeSpecName: "logs") pod "9eeece55-15ef-4cd1-9f27-525991788887" (UID: "9eeece55-15ef-4cd1-9f27-525991788887"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 08:55:15 crc kubenswrapper[4946]: I1128 08:55:15.994649 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eeece55-15ef-4cd1-9f27-525991788887-scripts" (OuterVolumeSpecName: "scripts") pod "9eeece55-15ef-4cd1-9f27-525991788887" (UID: "9eeece55-15ef-4cd1-9f27-525991788887"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:55:15 crc kubenswrapper[4946]: I1128 08:55:15.994955 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eeece55-15ef-4cd1-9f27-525991788887-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9eeece55-15ef-4cd1-9f27-525991788887" (UID: "9eeece55-15ef-4cd1-9f27-525991788887"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:55:15 crc kubenswrapper[4946]: I1128 08:55:15.999704 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9eeece55-15ef-4cd1-9f27-525991788887-kube-api-access-l4l4q" (OuterVolumeSpecName: "kube-api-access-l4l4q") pod "9eeece55-15ef-4cd1-9f27-525991788887" (UID: "9eeece55-15ef-4cd1-9f27-525991788887"). InnerVolumeSpecName "kube-api-access-l4l4q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:55:16 crc kubenswrapper[4946]: I1128 08:55:16.035537 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eeece55-15ef-4cd1-9f27-525991788887-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9eeece55-15ef-4cd1-9f27-525991788887" (UID: "9eeece55-15ef-4cd1-9f27-525991788887"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:55:16 crc kubenswrapper[4946]: I1128 08:55:16.047618 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 28 08:55:16 crc kubenswrapper[4946]: I1128 08:55:16.061200 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eeece55-15ef-4cd1-9f27-525991788887-config-data" (OuterVolumeSpecName: "config-data") pod "9eeece55-15ef-4cd1-9f27-525991788887" (UID: "9eeece55-15ef-4cd1-9f27-525991788887"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:55:16 crc kubenswrapper[4946]: I1128 08:55:16.095841 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4l4q\" (UniqueName: \"kubernetes.io/projected/9eeece55-15ef-4cd1-9f27-525991788887-kube-api-access-l4l4q\") on node \"crc\" DevicePath \"\"" Nov 28 08:55:16 crc kubenswrapper[4946]: I1128 08:55:16.095869 4946 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9eeece55-15ef-4cd1-9f27-525991788887-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 08:55:16 crc kubenswrapper[4946]: I1128 08:55:16.095879 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eeece55-15ef-4cd1-9f27-525991788887-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 08:55:16 crc kubenswrapper[4946]: I1128 08:55:16.095888 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eeece55-15ef-4cd1-9f27-525991788887-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 08:55:16 crc kubenswrapper[4946]: I1128 08:55:16.095896 4946 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9eeece55-15ef-4cd1-9f27-525991788887-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 28 08:55:16 crc kubenswrapper[4946]: I1128 08:55:16.095904 4946 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9eeece55-15ef-4cd1-9f27-525991788887-logs\") on node \"crc\" DevicePath \"\"" Nov 28 08:55:16 crc kubenswrapper[4946]: I1128 08:55:16.095912 4946 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9eeece55-15ef-4cd1-9f27-525991788887-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 28 08:55:16 crc kubenswrapper[4946]: I1128 08:55:16.736476 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9eeece55-15ef-4cd1-9f27-525991788887","Type":"ContainerDied","Data":"90f30ca79d75d635fc1d6da9918628a434b8fc1b12b5dc605d20dea102119c62"} Nov 28 08:55:16 crc kubenswrapper[4946]: I1128 08:55:16.736715 4946 scope.go:117] "RemoveContainer" containerID="e5ffd8bda162451b7cc9f848be59c05d0561054138c50754a4ac4536e542e20b" Nov 28 08:55:16 crc kubenswrapper[4946]: I1128 08:55:16.737779 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 28 08:55:16 crc kubenswrapper[4946]: I1128 08:55:16.776944 4946 scope.go:117] "RemoveContainer" containerID="5575b73911fcc3797a0de6b22f9257ebce7992d2f6f346e66ea5081c9cff5ad6" Nov 28 08:55:16 crc kubenswrapper[4946]: I1128 08:55:16.788953 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 28 08:55:16 crc kubenswrapper[4946]: I1128 08:55:16.805819 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 28 08:55:16 crc kubenswrapper[4946]: I1128 08:55:16.818444 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 28 08:55:16 crc kubenswrapper[4946]: E1128 08:55:16.818858 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eeece55-15ef-4cd1-9f27-525991788887" containerName="cinder-api" Nov 28 08:55:16 crc kubenswrapper[4946]: I1128 08:55:16.818877 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eeece55-15ef-4cd1-9f27-525991788887" containerName="cinder-api" Nov 28 08:55:16 crc kubenswrapper[4946]: E1128 08:55:16.818887 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eeece55-15ef-4cd1-9f27-525991788887" containerName="cinder-api-log" Nov 28 08:55:16 crc kubenswrapper[4946]: I1128 08:55:16.818894 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eeece55-15ef-4cd1-9f27-525991788887" containerName="cinder-api-log" Nov 28 08:55:16 crc kubenswrapper[4946]: I1128 08:55:16.819080 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="9eeece55-15ef-4cd1-9f27-525991788887" containerName="cinder-api-log" Nov 28 08:55:16 crc kubenswrapper[4946]: I1128 08:55:16.819115 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="9eeece55-15ef-4cd1-9f27-525991788887" containerName="cinder-api" Nov 28 08:55:16 crc kubenswrapper[4946]: I1128 08:55:16.820090 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 28 08:55:16 crc kubenswrapper[4946]: I1128 08:55:16.822592 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 28 08:55:16 crc kubenswrapper[4946]: I1128 08:55:16.831909 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 28 08:55:16 crc kubenswrapper[4946]: I1128 08:55:16.913267 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/425d34b3-faf7-4523-99e6-88dcf41ed3c6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"425d34b3-faf7-4523-99e6-88dcf41ed3c6\") " pod="openstack/cinder-api-0" Nov 28 08:55:16 crc kubenswrapper[4946]: I1128 08:55:16.913440 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/425d34b3-faf7-4523-99e6-88dcf41ed3c6-logs\") pod \"cinder-api-0\" (UID: \"425d34b3-faf7-4523-99e6-88dcf41ed3c6\") " pod="openstack/cinder-api-0" Nov 28 08:55:16 crc kubenswrapper[4946]: I1128 08:55:16.913517 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/425d34b3-faf7-4523-99e6-88dcf41ed3c6-config-data-custom\") pod \"cinder-api-0\" (UID: \"425d34b3-faf7-4523-99e6-88dcf41ed3c6\") " pod="openstack/cinder-api-0" Nov 28 08:55:16 crc kubenswrapper[4946]: I1128 08:55:16.913558 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/425d34b3-faf7-4523-99e6-88dcf41ed3c6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"425d34b3-faf7-4523-99e6-88dcf41ed3c6\") " pod="openstack/cinder-api-0" Nov 28 08:55:16 crc kubenswrapper[4946]: I1128 08:55:16.913589 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/425d34b3-faf7-4523-99e6-88dcf41ed3c6-scripts\") pod \"cinder-api-0\" (UID: \"425d34b3-faf7-4523-99e6-88dcf41ed3c6\") " pod="openstack/cinder-api-0" Nov 28 08:55:16 crc kubenswrapper[4946]: I1128 08:55:16.913667 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/425d34b3-faf7-4523-99e6-88dcf41ed3c6-config-data\") pod \"cinder-api-0\" (UID: \"425d34b3-faf7-4523-99e6-88dcf41ed3c6\") " pod="openstack/cinder-api-0" Nov 28 08:55:16 crc kubenswrapper[4946]: I1128 08:55:16.913695 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzlq9\" (UniqueName: \"kubernetes.io/projected/425d34b3-faf7-4523-99e6-88dcf41ed3c6-kube-api-access-zzlq9\") pod \"cinder-api-0\" (UID: \"425d34b3-faf7-4523-99e6-88dcf41ed3c6\") " pod="openstack/cinder-api-0" Nov 28 08:55:17 crc kubenswrapper[4946]: I1128 08:55:17.015927 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/425d34b3-faf7-4523-99e6-88dcf41ed3c6-logs\") pod \"cinder-api-0\" (UID: \"425d34b3-faf7-4523-99e6-88dcf41ed3c6\") " pod="openstack/cinder-api-0" Nov 28 08:55:17 crc kubenswrapper[4946]: I1128 08:55:17.016058 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/425d34b3-faf7-4523-99e6-88dcf41ed3c6-config-data-custom\") pod \"cinder-api-0\" (UID: \"425d34b3-faf7-4523-99e6-88dcf41ed3c6\") " pod="openstack/cinder-api-0" Nov 28 08:55:17 crc kubenswrapper[4946]: I1128 08:55:17.016120 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/425d34b3-faf7-4523-99e6-88dcf41ed3c6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"425d34b3-faf7-4523-99e6-88dcf41ed3c6\") " pod="openstack/cinder-api-0" Nov 28 08:55:17 crc kubenswrapper[4946]: I1128 08:55:17.016159 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/425d34b3-faf7-4523-99e6-88dcf41ed3c6-scripts\") pod \"cinder-api-0\" (UID: \"425d34b3-faf7-4523-99e6-88dcf41ed3c6\") " pod="openstack/cinder-api-0" Nov 28 08:55:17 crc kubenswrapper[4946]: I1128 08:55:17.016309 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/425d34b3-faf7-4523-99e6-88dcf41ed3c6-config-data\") pod \"cinder-api-0\" (UID: \"425d34b3-faf7-4523-99e6-88dcf41ed3c6\") " pod="openstack/cinder-api-0" Nov 28 08:55:17 crc kubenswrapper[4946]: I1128 08:55:17.016352 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzlq9\" (UniqueName: \"kubernetes.io/projected/425d34b3-faf7-4523-99e6-88dcf41ed3c6-kube-api-access-zzlq9\") pod \"cinder-api-0\" (UID: \"425d34b3-faf7-4523-99e6-88dcf41ed3c6\") " pod="openstack/cinder-api-0" Nov 28 08:55:17 crc kubenswrapper[4946]: I1128 08:55:17.016519 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/425d34b3-faf7-4523-99e6-88dcf41ed3c6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"425d34b3-faf7-4523-99e6-88dcf41ed3c6\") " pod="openstack/cinder-api-0" Nov 28 08:55:17 crc kubenswrapper[4946]: I1128 08:55:17.016678 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/425d34b3-faf7-4523-99e6-88dcf41ed3c6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"425d34b3-faf7-4523-99e6-88dcf41ed3c6\") " pod="openstack/cinder-api-0" Nov 28 08:55:17 crc kubenswrapper[4946]: I1128 08:55:17.017172 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/425d34b3-faf7-4523-99e6-88dcf41ed3c6-logs\") pod \"cinder-api-0\" (UID: \"425d34b3-faf7-4523-99e6-88dcf41ed3c6\") " pod="openstack/cinder-api-0" Nov 28 08:55:17 crc kubenswrapper[4946]: I1128 08:55:17.031369 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/425d34b3-faf7-4523-99e6-88dcf41ed3c6-scripts\") pod \"cinder-api-0\" (UID: \"425d34b3-faf7-4523-99e6-88dcf41ed3c6\") " pod="openstack/cinder-api-0" Nov 28 08:55:17 crc kubenswrapper[4946]: I1128 08:55:17.032549 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/425d34b3-faf7-4523-99e6-88dcf41ed3c6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"425d34b3-faf7-4523-99e6-88dcf41ed3c6\") " pod="openstack/cinder-api-0" Nov 28 08:55:17 crc kubenswrapper[4946]: I1128 08:55:17.032899 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/425d34b3-faf7-4523-99e6-88dcf41ed3c6-config-data\") pod \"cinder-api-0\" (UID: \"425d34b3-faf7-4523-99e6-88dcf41ed3c6\") " pod="openstack/cinder-api-0" Nov 28 08:55:17 crc kubenswrapper[4946]: I1128 08:55:17.037323 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/425d34b3-faf7-4523-99e6-88dcf41ed3c6-config-data-custom\") pod \"cinder-api-0\" (UID: \"425d34b3-faf7-4523-99e6-88dcf41ed3c6\") " pod="openstack/cinder-api-0" Nov 28 08:55:17 crc kubenswrapper[4946]: I1128 08:55:17.039213 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzlq9\" (UniqueName: \"kubernetes.io/projected/425d34b3-faf7-4523-99e6-88dcf41ed3c6-kube-api-access-zzlq9\") pod \"cinder-api-0\" (UID: \"425d34b3-faf7-4523-99e6-88dcf41ed3c6\") " pod="openstack/cinder-api-0" Nov 28 08:55:17 crc kubenswrapper[4946]: I1128 08:55:17.153633 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 28 08:55:17 crc kubenswrapper[4946]: I1128 08:55:17.558377 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-b4swm"] Nov 28 08:55:17 crc kubenswrapper[4946]: I1128 08:55:17.560572 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b4swm" Nov 28 08:55:17 crc kubenswrapper[4946]: I1128 08:55:17.576912 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b4swm"] Nov 28 08:55:17 crc kubenswrapper[4946]: I1128 08:55:17.627119 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/419b30ad-96ca-4a84-9dbe-21048d48eaff-utilities\") pod \"redhat-operators-b4swm\" (UID: \"419b30ad-96ca-4a84-9dbe-21048d48eaff\") " pod="openshift-marketplace/redhat-operators-b4swm" Nov 28 08:55:17 crc kubenswrapper[4946]: I1128 08:55:17.627163 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85gxz\" (UniqueName: \"kubernetes.io/projected/419b30ad-96ca-4a84-9dbe-21048d48eaff-kube-api-access-85gxz\") pod \"redhat-operators-b4swm\" (UID: \"419b30ad-96ca-4a84-9dbe-21048d48eaff\") " pod="openshift-marketplace/redhat-operators-b4swm" Nov 28 08:55:17 crc kubenswrapper[4946]: I1128 08:55:17.627197 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/419b30ad-96ca-4a84-9dbe-21048d48eaff-catalog-content\") pod \"redhat-operators-b4swm\" (UID: \"419b30ad-96ca-4a84-9dbe-21048d48eaff\") " pod="openshift-marketplace/redhat-operators-b4swm" Nov 28 08:55:17 crc kubenswrapper[4946]: I1128 08:55:17.651799 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 28 08:55:17 crc kubenswrapper[4946]: W1128 08:55:17.653171 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod425d34b3_faf7_4523_99e6_88dcf41ed3c6.slice/crio-cd53f10cac326551ad549d77621c04e647966df3fdb28ec02f2a196100aa342c WatchSource:0}: Error finding container cd53f10cac326551ad549d77621c04e647966df3fdb28ec02f2a196100aa342c: Status 404 returned error can't find the container with id cd53f10cac326551ad549d77621c04e647966df3fdb28ec02f2a196100aa342c Nov 28 08:55:17 crc kubenswrapper[4946]: I1128 
08:55:17.729555 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/419b30ad-96ca-4a84-9dbe-21048d48eaff-utilities\") pod \"redhat-operators-b4swm\" (UID: \"419b30ad-96ca-4a84-9dbe-21048d48eaff\") " pod="openshift-marketplace/redhat-operators-b4swm" Nov 28 08:55:17 crc kubenswrapper[4946]: I1128 08:55:17.729609 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85gxz\" (UniqueName: \"kubernetes.io/projected/419b30ad-96ca-4a84-9dbe-21048d48eaff-kube-api-access-85gxz\") pod \"redhat-operators-b4swm\" (UID: \"419b30ad-96ca-4a84-9dbe-21048d48eaff\") " pod="openshift-marketplace/redhat-operators-b4swm" Nov 28 08:55:17 crc kubenswrapper[4946]: I1128 08:55:17.729632 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/419b30ad-96ca-4a84-9dbe-21048d48eaff-catalog-content\") pod \"redhat-operators-b4swm\" (UID: \"419b30ad-96ca-4a84-9dbe-21048d48eaff\") " pod="openshift-marketplace/redhat-operators-b4swm" Nov 28 08:55:17 crc kubenswrapper[4946]: I1128 08:55:17.730060 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/419b30ad-96ca-4a84-9dbe-21048d48eaff-utilities\") pod \"redhat-operators-b4swm\" (UID: \"419b30ad-96ca-4a84-9dbe-21048d48eaff\") " pod="openshift-marketplace/redhat-operators-b4swm" Nov 28 08:55:17 crc kubenswrapper[4946]: I1128 08:55:17.730166 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/419b30ad-96ca-4a84-9dbe-21048d48eaff-catalog-content\") pod \"redhat-operators-b4swm\" (UID: \"419b30ad-96ca-4a84-9dbe-21048d48eaff\") " pod="openshift-marketplace/redhat-operators-b4swm" Nov 28 08:55:17 crc kubenswrapper[4946]: I1128 08:55:17.745919 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"425d34b3-faf7-4523-99e6-88dcf41ed3c6","Type":"ContainerStarted","Data":"cd53f10cac326551ad549d77621c04e647966df3fdb28ec02f2a196100aa342c"} Nov 28 08:55:17 crc kubenswrapper[4946]: I1128 08:55:17.750482 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85gxz\" (UniqueName: \"kubernetes.io/projected/419b30ad-96ca-4a84-9dbe-21048d48eaff-kube-api-access-85gxz\") pod \"redhat-operators-b4swm\" (UID: \"419b30ad-96ca-4a84-9dbe-21048d48eaff\") " pod="openshift-marketplace/redhat-operators-b4swm" Nov 28 08:55:17 crc kubenswrapper[4946]: I1128 08:55:17.888314 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b4swm" Nov 28 08:55:18 crc kubenswrapper[4946]: I1128 08:55:18.027109 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9eeece55-15ef-4cd1-9f27-525991788887" path="/var/lib/kubelet/pods/9eeece55-15ef-4cd1-9f27-525991788887/volumes" Nov 28 08:55:18 crc kubenswrapper[4946]: W1128 08:55:18.364713 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod419b30ad_96ca_4a84_9dbe_21048d48eaff.slice/crio-474e0aaee289c5ffb8d1e39a536f0546087117095ca9b88287a3bba9aff06777 WatchSource:0}: Error finding container 474e0aaee289c5ffb8d1e39a536f0546087117095ca9b88287a3bba9aff06777: Status 404 returned error can't find the container with id 474e0aaee289c5ffb8d1e39a536f0546087117095ca9b88287a3bba9aff06777 Nov 28 08:55:18 crc kubenswrapper[4946]: I1128 08:55:18.367330 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b4swm"] Nov 28 08:55:18 crc kubenswrapper[4946]: I1128 08:55:18.486565 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Nov 28 08:55:18 crc kubenswrapper[4946]: I1128 08:55:18.765645 4946 generic.go:334] "Generic (PLEG): container finished" podID="419b30ad-96ca-4a84-9dbe-21048d48eaff" containerID="45ceeff5c576b48a7e04f4176e9f90f7f9f2726e5c2c317298948d0c51fd2dd6" exitCode=0 Nov 28 08:55:18 crc kubenswrapper[4946]: I1128 08:55:18.765822 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b4swm" event={"ID":"419b30ad-96ca-4a84-9dbe-21048d48eaff","Type":"ContainerDied","Data":"45ceeff5c576b48a7e04f4176e9f90f7f9f2726e5c2c317298948d0c51fd2dd6"} Nov 28 08:55:18 crc kubenswrapper[4946]: I1128 08:55:18.765983 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b4swm" event={"ID":"419b30ad-96ca-4a84-9dbe-21048d48eaff","Type":"ContainerStarted","Data":"474e0aaee289c5ffb8d1e39a536f0546087117095ca9b88287a3bba9aff06777"} Nov 28 08:55:18 crc kubenswrapper[4946]: I1128 08:55:18.770049 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"425d34b3-faf7-4523-99e6-88dcf41ed3c6","Type":"ContainerStarted","Data":"11e01a69cfe2f35f59d0322efe1c816a243f26c846646e7d034b687f53cd43c5"} Nov 28 08:55:19 crc kubenswrapper[4946]: I1128 08:55:19.284752 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Nov 28 08:55:19 crc kubenswrapper[4946]: I1128 08:55:19.793038 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b4swm" event={"ID":"419b30ad-96ca-4a84-9dbe-21048d48eaff","Type":"ContainerStarted","Data":"7340f1c94fa54c94b7315be39be09abfb18c23954e3784a3a78fe89a8b05e48a"} Nov 28 08:55:19 crc kubenswrapper[4946]: I1128 08:55:19.798760 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"425d34b3-faf7-4523-99e6-88dcf41ed3c6","Type":"ContainerStarted","Data":"a082cc9fc5a3befe110589385fa9988073ae1e18fec6ebafa913f58b8c328860"} Nov 28 08:55:19 crc kubenswrapper[4946]: I1128 08:55:19.798942 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 28 08:55:19 crc kubenswrapper[4946]: I1128 08:55:19.849908 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" 
podStartSLOduration=3.8498904659999997 podStartE2EDuration="3.849890466s" podCreationTimestamp="2025-11-28 08:55:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 08:55:19.847170369 +0000 UTC m=+7374.225235490" watchObservedRunningTime="2025-11-28 08:55:19.849890466 +0000 UTC m=+7374.227955587" Nov 28 08:55:20 crc kubenswrapper[4946]: I1128 08:55:20.812412 4946 generic.go:334] "Generic (PLEG): container finished" podID="419b30ad-96ca-4a84-9dbe-21048d48eaff" containerID="7340f1c94fa54c94b7315be39be09abfb18c23954e3784a3a78fe89a8b05e48a" exitCode=0 Nov 28 08:55:20 crc kubenswrapper[4946]: I1128 08:55:20.813573 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b4swm" event={"ID":"419b30ad-96ca-4a84-9dbe-21048d48eaff","Type":"ContainerDied","Data":"7340f1c94fa54c94b7315be39be09abfb18c23954e3784a3a78fe89a8b05e48a"} Nov 28 08:55:21 crc kubenswrapper[4946]: I1128 08:55:21.266201 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 28 08:55:21 crc kubenswrapper[4946]: I1128 08:55:21.327313 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 08:55:21 crc kubenswrapper[4946]: I1128 08:55:21.820417 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="767720cb-05f3-4879-8ee3-f22550a25d3a" containerName="cinder-scheduler" containerID="cri-o://a10c2561a2ccbbfdd9a72d9f2a2cb90c3eab831e68d718ea86c034c321e32670" gracePeriod=30 Nov 28 08:55:21 crc kubenswrapper[4946]: I1128 08:55:21.820523 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="767720cb-05f3-4879-8ee3-f22550a25d3a" containerName="probe" containerID="cri-o://db55406c4b6170bc0f31fd3abddce1119cb817f2ed44875b9080557d81dbcd78" gracePeriod=30 Nov 28 08:55:22 crc kubenswrapper[4946]: I1128 08:55:22.534535 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m85sv"] Nov 28 08:55:22 crc kubenswrapper[4946]: I1128 08:55:22.537429 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m85sv" Nov 28 08:55:22 crc kubenswrapper[4946]: I1128 08:55:22.577604 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m85sv"] Nov 28 08:55:22 crc kubenswrapper[4946]: I1128 08:55:22.644993 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa8dcfd6-ac4a-4c3a-9b0a-fa68230ba619-catalog-content\") pod \"redhat-marketplace-m85sv\" (UID: \"fa8dcfd6-ac4a-4c3a-9b0a-fa68230ba619\") " pod="openshift-marketplace/redhat-marketplace-m85sv" Nov 28 08:55:22 crc kubenswrapper[4946]: I1128 08:55:22.645283 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqgs2\" (UniqueName: \"kubernetes.io/projected/fa8dcfd6-ac4a-4c3a-9b0a-fa68230ba619-kube-api-access-cqgs2\") pod \"redhat-marketplace-m85sv\" (UID: \"fa8dcfd6-ac4a-4c3a-9b0a-fa68230ba619\") " pod="openshift-marketplace/redhat-marketplace-m85sv" Nov 28 08:55:22 crc kubenswrapper[4946]: I1128 08:55:22.645350 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa8dcfd6-ac4a-4c3a-9b0a-fa68230ba619-utilities\") pod \"redhat-marketplace-m85sv\" (UID: \"fa8dcfd6-ac4a-4c3a-9b0a-fa68230ba619\") " pod="openshift-marketplace/redhat-marketplace-m85sv" Nov 28 08:55:22 crc kubenswrapper[4946]: I1128 08:55:22.747437 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqgs2\" (UniqueName: \"kubernetes.io/projected/fa8dcfd6-ac4a-4c3a-9b0a-fa68230ba619-kube-api-access-cqgs2\") pod \"redhat-marketplace-m85sv\" (UID: \"fa8dcfd6-ac4a-4c3a-9b0a-fa68230ba619\") " pod="openshift-marketplace/redhat-marketplace-m85sv" Nov 28 08:55:22 crc kubenswrapper[4946]: I1128 08:55:22.747506 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa8dcfd6-ac4a-4c3a-9b0a-fa68230ba619-utilities\") pod \"redhat-marketplace-m85sv\" (UID: \"fa8dcfd6-ac4a-4c3a-9b0a-fa68230ba619\") " pod="openshift-marketplace/redhat-marketplace-m85sv" Nov 28 08:55:22 crc kubenswrapper[4946]: I1128 08:55:22.747631 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa8dcfd6-ac4a-4c3a-9b0a-fa68230ba619-catalog-content\") pod \"redhat-marketplace-m85sv\" (UID: \"fa8dcfd6-ac4a-4c3a-9b0a-fa68230ba619\") " pod="openshift-marketplace/redhat-marketplace-m85sv" Nov 28 08:55:22 crc kubenswrapper[4946]: I1128 08:55:22.748245 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa8dcfd6-ac4a-4c3a-9b0a-fa68230ba619-catalog-content\") pod \"redhat-marketplace-m85sv\" (UID: \"fa8dcfd6-ac4a-4c3a-9b0a-fa68230ba619\") " pod="openshift-marketplace/redhat-marketplace-m85sv" Nov 28 08:55:22 crc kubenswrapper[4946]: I1128 08:55:22.748292 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa8dcfd6-ac4a-4c3a-9b0a-fa68230ba619-utilities\") pod \"redhat-marketplace-m85sv\" (UID: \"fa8dcfd6-ac4a-4c3a-9b0a-fa68230ba619\") " pod="openshift-marketplace/redhat-marketplace-m85sv" Nov 28 08:55:22 crc kubenswrapper[4946]: I1128 08:55:22.769742 4946 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-cqgs2\" (UniqueName: \"kubernetes.io/projected/fa8dcfd6-ac4a-4c3a-9b0a-fa68230ba619-kube-api-access-cqgs2\") pod \"redhat-marketplace-m85sv\" (UID: \"fa8dcfd6-ac4a-4c3a-9b0a-fa68230ba619\") " pod="openshift-marketplace/redhat-marketplace-m85sv"
Nov 28 08:55:22 crc kubenswrapper[4946]: I1128 08:55:22.865477 4946 generic.go:334] "Generic (PLEG): container finished" podID="767720cb-05f3-4879-8ee3-f22550a25d3a" containerID="db55406c4b6170bc0f31fd3abddce1119cb817f2ed44875b9080557d81dbcd78" exitCode=0
Nov 28 08:55:22 crc kubenswrapper[4946]: I1128 08:55:22.865557 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"767720cb-05f3-4879-8ee3-f22550a25d3a","Type":"ContainerDied","Data":"db55406c4b6170bc0f31fd3abddce1119cb817f2ed44875b9080557d81dbcd78"}
Nov 28 08:55:22 crc kubenswrapper[4946]: I1128 08:55:22.869841 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b4swm" event={"ID":"419b30ad-96ca-4a84-9dbe-21048d48eaff","Type":"ContainerStarted","Data":"1f73cb2d5e1a31aef7b0b65b29569d94e6b02847d62722617553e6195a96a7c5"}
Nov 28 08:55:22 crc kubenswrapper[4946]: I1128 08:55:22.872438 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m85sv"
Nov 28 08:55:22 crc kubenswrapper[4946]: I1128 08:55:22.891338 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-b4swm" podStartSLOduration=2.558655414 podStartE2EDuration="5.891313821s" podCreationTimestamp="2025-11-28 08:55:17 +0000 UTC" firstStartedPulling="2025-11-28 08:55:18.768613167 +0000 UTC m=+7373.146678288" lastFinishedPulling="2025-11-28 08:55:22.101271584 +0000 UTC m=+7376.479336695" observedRunningTime="2025-11-28 08:55:22.886639985 +0000 UTC m=+7377.264705096" watchObservedRunningTime="2025-11-28 08:55:22.891313821 +0000 UTC m=+7377.269378932"
Nov 28 08:55:23 crc kubenswrapper[4946]: I1128 08:55:23.399069 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m85sv"]
Nov 28 08:55:23 crc kubenswrapper[4946]: W1128 08:55:23.451089 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa8dcfd6_ac4a_4c3a_9b0a_fa68230ba619.slice/crio-18d308d093dc3f2794879e2d8332cc5d76b1e2d3b2f10a4a4d0b73a0523a2d4a WatchSource:0}: Error finding container 18d308d093dc3f2794879e2d8332cc5d76b1e2d3b2f10a4a4d0b73a0523a2d4a: Status 404 returned error can't find the container with id 18d308d093dc3f2794879e2d8332cc5d76b1e2d3b2f10a4a4d0b73a0523a2d4a
Nov 28 08:55:23 crc kubenswrapper[4946]: I1128 08:55:23.745644 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0"
Nov 28 08:55:23 crc kubenswrapper[4946]: I1128 08:55:23.836532 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Nov 28 08:55:23 crc kubenswrapper[4946]: I1128 08:55:23.896108 4946 generic.go:334] "Generic (PLEG): container finished" podID="767720cb-05f3-4879-8ee3-f22550a25d3a" containerID="a10c2561a2ccbbfdd9a72d9f2a2cb90c3eab831e68d718ea86c034c321e32670" exitCode=0
Nov 28 08:55:23 crc kubenswrapper[4946]: I1128 08:55:23.896175 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"767720cb-05f3-4879-8ee3-f22550a25d3a","Type":"ContainerDied","Data":"a10c2561a2ccbbfdd9a72d9f2a2cb90c3eab831e68d718ea86c034c321e32670"}
Nov 28 08:55:23 crc kubenswrapper[4946]: I1128 08:55:23.896203 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"767720cb-05f3-4879-8ee3-f22550a25d3a","Type":"ContainerDied","Data":"c05ff86db8b78ab3de88b40c55f3068028349afc95c5720b213851b30a84a413"}
Nov 28 08:55:23 crc kubenswrapper[4946]: I1128 08:55:23.896220 4946 scope.go:117] "RemoveContainer" containerID="db55406c4b6170bc0f31fd3abddce1119cb817f2ed44875b9080557d81dbcd78"
Nov 28 08:55:23 crc kubenswrapper[4946]: I1128 08:55:23.896423 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Nov 28 08:55:23 crc kubenswrapper[4946]: I1128 08:55:23.902131 4946 generic.go:334] "Generic (PLEG): container finished" podID="fa8dcfd6-ac4a-4c3a-9b0a-fa68230ba619" containerID="dfb32396e955c2a23dc0465d5b8d5a21b3efe87a77982ec5df131d04f1a44215" exitCode=0
Nov 28 08:55:23 crc kubenswrapper[4946]: I1128 08:55:23.903331 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m85sv" event={"ID":"fa8dcfd6-ac4a-4c3a-9b0a-fa68230ba619","Type":"ContainerDied","Data":"dfb32396e955c2a23dc0465d5b8d5a21b3efe87a77982ec5df131d04f1a44215"}
Nov 28 08:55:23 crc kubenswrapper[4946]: I1128 08:55:23.903367 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m85sv" event={"ID":"fa8dcfd6-ac4a-4c3a-9b0a-fa68230ba619","Type":"ContainerStarted","Data":"18d308d093dc3f2794879e2d8332cc5d76b1e2d3b2f10a4a4d0b73a0523a2d4a"}
Nov 28 08:55:23 crc kubenswrapper[4946]: I1128 08:55:23.905836 4946 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 28 08:55:23 crc kubenswrapper[4946]: I1128 08:55:23.930391 4946 scope.go:117] "RemoveContainer" containerID="a10c2561a2ccbbfdd9a72d9f2a2cb90c3eab831e68d718ea86c034c321e32670"
Nov 28 08:55:23 crc kubenswrapper[4946]: I1128 08:55:23.963017 4946 scope.go:117] "RemoveContainer" containerID="db55406c4b6170bc0f31fd3abddce1119cb817f2ed44875b9080557d81dbcd78"
Nov 28 08:55:23 crc kubenswrapper[4946]: E1128 08:55:23.963552 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db55406c4b6170bc0f31fd3abddce1119cb817f2ed44875b9080557d81dbcd78\": container with ID starting with db55406c4b6170bc0f31fd3abddce1119cb817f2ed44875b9080557d81dbcd78 not found: ID does not exist" containerID="db55406c4b6170bc0f31fd3abddce1119cb817f2ed44875b9080557d81dbcd78"
Nov 28 08:55:23 crc kubenswrapper[4946]: I1128 08:55:23.963607 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db55406c4b6170bc0f31fd3abddce1119cb817f2ed44875b9080557d81dbcd78"} err="failed to get container status \"db55406c4b6170bc0f31fd3abddce1119cb817f2ed44875b9080557d81dbcd78\": rpc error: code = NotFound desc = could not find container \"db55406c4b6170bc0f31fd3abddce1119cb817f2ed44875b9080557d81dbcd78\": container with ID starting with db55406c4b6170bc0f31fd3abddce1119cb817f2ed44875b9080557d81dbcd78 not found: ID does not exist"
Nov 28 08:55:23 crc kubenswrapper[4946]: I1128 08:55:23.963638 4946 scope.go:117] "RemoveContainer" containerID="a10c2561a2ccbbfdd9a72d9f2a2cb90c3eab831e68d718ea86c034c321e32670"
Nov 28 08:55:23 crc kubenswrapper[4946]: E1128 08:55:23.963931 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a10c2561a2ccbbfdd9a72d9f2a2cb90c3eab831e68d718ea86c034c321e32670\": container with ID starting with a10c2561a2ccbbfdd9a72d9f2a2cb90c3eab831e68d718ea86c034c321e32670 not found: ID does not exist" containerID="a10c2561a2ccbbfdd9a72d9f2a2cb90c3eab831e68d718ea86c034c321e32670"
Nov 28 08:55:23 crc kubenswrapper[4946]: I1128 08:55:23.963955 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a10c2561a2ccbbfdd9a72d9f2a2cb90c3eab831e68d718ea86c034c321e32670"} err="failed to get container status \"a10c2561a2ccbbfdd9a72d9f2a2cb90c3eab831e68d718ea86c034c321e32670\": rpc error: code = NotFound desc = could not find container \"a10c2561a2ccbbfdd9a72d9f2a2cb90c3eab831e68d718ea86c034c321e32670\": container with ID starting with a10c2561a2ccbbfdd9a72d9f2a2cb90c3eab831e68d718ea86c034c321e32670 not found: ID does not exist"
Nov 28 08:55:23 crc kubenswrapper[4946]: I1128 08:55:23.974733 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/767720cb-05f3-4879-8ee3-f22550a25d3a-config-data-custom\") pod \"767720cb-05f3-4879-8ee3-f22550a25d3a\" (UID: \"767720cb-05f3-4879-8ee3-f22550a25d3a\") "
Nov 28 08:55:23 crc kubenswrapper[4946]: I1128 08:55:23.974887 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/767720cb-05f3-4879-8ee3-f22550a25d3a-scripts\") pod \"767720cb-05f3-4879-8ee3-f22550a25d3a\" (UID: \"767720cb-05f3-4879-8ee3-f22550a25d3a\") "
Nov 28 08:55:23 crc kubenswrapper[4946]: I1128 08:55:23.974930 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/767720cb-05f3-4879-8ee3-f22550a25d3a-combined-ca-bundle\") pod \"767720cb-05f3-4879-8ee3-f22550a25d3a\" (UID: \"767720cb-05f3-4879-8ee3-f22550a25d3a\") "
Nov 28 08:55:23 crc kubenswrapper[4946]: I1128 08:55:23.975003 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l96vb\" (UniqueName: \"kubernetes.io/projected/767720cb-05f3-4879-8ee3-f22550a25d3a-kube-api-access-l96vb\") pod \"767720cb-05f3-4879-8ee3-f22550a25d3a\" (UID: \"767720cb-05f3-4879-8ee3-f22550a25d3a\") "
Nov 28 08:55:23 crc kubenswrapper[4946]: I1128 08:55:23.975031 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/767720cb-05f3-4879-8ee3-f22550a25d3a-config-data\") pod \"767720cb-05f3-4879-8ee3-f22550a25d3a\" (UID: \"767720cb-05f3-4879-8ee3-f22550a25d3a\") "
Nov 28 08:55:23 crc kubenswrapper[4946]: I1128 08:55:23.975176 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/767720cb-05f3-4879-8ee3-f22550a25d3a-etc-machine-id\") pod \"767720cb-05f3-4879-8ee3-f22550a25d3a\" (UID: \"767720cb-05f3-4879-8ee3-f22550a25d3a\") "
Nov 28 08:55:23 crc kubenswrapper[4946]: I1128 08:55:23.975520 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/767720cb-05f3-4879-8ee3-f22550a25d3a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "767720cb-05f3-4879-8ee3-f22550a25d3a" (UID: "767720cb-05f3-4879-8ee3-f22550a25d3a"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 08:55:23 crc kubenswrapper[4946]: I1128 08:55:23.975924 4946 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/767720cb-05f3-4879-8ee3-f22550a25d3a-etc-machine-id\") on node \"crc\" DevicePath \"\""
Nov 28 08:55:23 crc kubenswrapper[4946]: I1128 08:55:23.980582 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/767720cb-05f3-4879-8ee3-f22550a25d3a-kube-api-access-l96vb" (OuterVolumeSpecName: "kube-api-access-l96vb") pod "767720cb-05f3-4879-8ee3-f22550a25d3a" (UID: "767720cb-05f3-4879-8ee3-f22550a25d3a"). InnerVolumeSpecName "kube-api-access-l96vb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 08:55:23 crc kubenswrapper[4946]: I1128 08:55:23.980604 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/767720cb-05f3-4879-8ee3-f22550a25d3a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "767720cb-05f3-4879-8ee3-f22550a25d3a" (UID: "767720cb-05f3-4879-8ee3-f22550a25d3a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 08:55:23 crc kubenswrapper[4946]: I1128 08:55:23.984138 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/767720cb-05f3-4879-8ee3-f22550a25d3a-scripts" (OuterVolumeSpecName: "scripts") pod "767720cb-05f3-4879-8ee3-f22550a25d3a" (UID: "767720cb-05f3-4879-8ee3-f22550a25d3a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 08:55:24 crc kubenswrapper[4946]: I1128 08:55:24.026744 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/767720cb-05f3-4879-8ee3-f22550a25d3a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "767720cb-05f3-4879-8ee3-f22550a25d3a" (UID: "767720cb-05f3-4879-8ee3-f22550a25d3a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 08:55:24 crc kubenswrapper[4946]: I1128 08:55:24.062931 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/767720cb-05f3-4879-8ee3-f22550a25d3a-config-data" (OuterVolumeSpecName: "config-data") pod "767720cb-05f3-4879-8ee3-f22550a25d3a" (UID: "767720cb-05f3-4879-8ee3-f22550a25d3a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 08:55:24 crc kubenswrapper[4946]: I1128 08:55:24.077428 4946 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/767720cb-05f3-4879-8ee3-f22550a25d3a-config-data-custom\") on node \"crc\" DevicePath \"\""
Nov 28 08:55:24 crc kubenswrapper[4946]: I1128 08:55:24.077454 4946 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/767720cb-05f3-4879-8ee3-f22550a25d3a-scripts\") on node \"crc\" DevicePath \"\""
Nov 28 08:55:24 crc kubenswrapper[4946]: I1128 08:55:24.077473 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/767720cb-05f3-4879-8ee3-f22550a25d3a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 08:55:24 crc kubenswrapper[4946]: I1128 08:55:24.077482 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l96vb\" (UniqueName: \"kubernetes.io/projected/767720cb-05f3-4879-8ee3-f22550a25d3a-kube-api-access-l96vb\") on node \"crc\" DevicePath \"\""
Nov 28 08:55:24 crc kubenswrapper[4946]: I1128 08:55:24.077490 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/767720cb-05f3-4879-8ee3-f22550a25d3a-config-data\") on node \"crc\" DevicePath \"\""
Nov 28 08:55:24 crc kubenswrapper[4946]: I1128 08:55:24.234313 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 28 08:55:24 crc kubenswrapper[4946]: I1128 08:55:24.245968 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 28 08:55:24 crc kubenswrapper[4946]: I1128 08:55:24.262162 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 28 08:55:24 crc kubenswrapper[4946]: E1128 08:55:24.262750 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="767720cb-05f3-4879-8ee3-f22550a25d3a" containerName="cinder-scheduler"
Nov 28 08:55:24 crc kubenswrapper[4946]: I1128 08:55:24.262779 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="767720cb-05f3-4879-8ee3-f22550a25d3a" containerName="cinder-scheduler"
Nov 28 08:55:24 crc kubenswrapper[4946]: E1128 08:55:24.262809 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="767720cb-05f3-4879-8ee3-f22550a25d3a" containerName="probe"
Nov 28 08:55:24 crc kubenswrapper[4946]: I1128 08:55:24.262818 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="767720cb-05f3-4879-8ee3-f22550a25d3a" containerName="probe"
Nov 28 08:55:24 crc kubenswrapper[4946]: I1128 08:55:24.263045 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="767720cb-05f3-4879-8ee3-f22550a25d3a" containerName="probe"
Nov 28 08:55:24 crc kubenswrapper[4946]: I1128 08:55:24.263070 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="767720cb-05f3-4879-8ee3-f22550a25d3a" containerName="cinder-scheduler"
Nov 28 08:55:24 crc kubenswrapper[4946]: I1128 08:55:24.264397 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Nov 28 08:55:24 crc kubenswrapper[4946]: I1128 08:55:24.267956 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Nov 28 08:55:24 crc kubenswrapper[4946]: I1128 08:55:24.278378 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 28 08:55:24 crc kubenswrapper[4946]: I1128 08:55:24.382776 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgg28\" (UniqueName: \"kubernetes.io/projected/5e89ce1f-b6c8-4cb9-ad32-ded744cb7f2d-kube-api-access-cgg28\") pod \"cinder-scheduler-0\" (UID: \"5e89ce1f-b6c8-4cb9-ad32-ded744cb7f2d\") " pod="openstack/cinder-scheduler-0"
Nov 28 08:55:24 crc kubenswrapper[4946]: I1128 08:55:24.382852 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e89ce1f-b6c8-4cb9-ad32-ded744cb7f2d-scripts\") pod \"cinder-scheduler-0\" (UID: \"5e89ce1f-b6c8-4cb9-ad32-ded744cb7f2d\") " pod="openstack/cinder-scheduler-0"
Nov 28 08:55:24 crc kubenswrapper[4946]: I1128 08:55:24.382913 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e89ce1f-b6c8-4cb9-ad32-ded744cb7f2d-config-data\") pod \"cinder-scheduler-0\" (UID: \"5e89ce1f-b6c8-4cb9-ad32-ded744cb7f2d\") " pod="openstack/cinder-scheduler-0"
Nov 28 08:55:24 crc kubenswrapper[4946]: I1128 08:55:24.383062 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e89ce1f-b6c8-4cb9-ad32-ded744cb7f2d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5e89ce1f-b6c8-4cb9-ad32-ded744cb7f2d\") " pod="openstack/cinder-scheduler-0"
Nov 28 08:55:24 crc kubenswrapper[4946]: I1128 08:55:24.383159 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5e89ce1f-b6c8-4cb9-ad32-ded744cb7f2d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5e89ce1f-b6c8-4cb9-ad32-ded744cb7f2d\") " pod="openstack/cinder-scheduler-0"
Nov 28 08:55:24 crc kubenswrapper[4946]: I1128 08:55:24.383199 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5e89ce1f-b6c8-4cb9-ad32-ded744cb7f2d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5e89ce1f-b6c8-4cb9-ad32-ded744cb7f2d\") " pod="openstack/cinder-scheduler-0"
Nov 28 08:55:24 crc kubenswrapper[4946]: I1128 08:55:24.485015 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e89ce1f-b6c8-4cb9-ad32-ded744cb7f2d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5e89ce1f-b6c8-4cb9-ad32-ded744cb7f2d\") " pod="openstack/cinder-scheduler-0"
Nov 28 08:55:24 crc kubenswrapper[4946]: I1128 08:55:24.485092 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5e89ce1f-b6c8-4cb9-ad32-ded744cb7f2d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5e89ce1f-b6c8-4cb9-ad32-ded744cb7f2d\") " pod="openstack/cinder-scheduler-0"
Nov 28 08:55:24 crc kubenswrapper[4946]: I1128 08:55:24.485117 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5e89ce1f-b6c8-4cb9-ad32-ded744cb7f2d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5e89ce1f-b6c8-4cb9-ad32-ded744cb7f2d\") " pod="openstack/cinder-scheduler-0"
Nov 28 08:55:24 crc kubenswrapper[4946]: I1128 08:55:24.485155 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgg28\" (UniqueName: \"kubernetes.io/projected/5e89ce1f-b6c8-4cb9-ad32-ded744cb7f2d-kube-api-access-cgg28\") pod \"cinder-scheduler-0\" (UID: \"5e89ce1f-b6c8-4cb9-ad32-ded744cb7f2d\") " pod="openstack/cinder-scheduler-0"
Nov 28 08:55:24 crc kubenswrapper[4946]: I1128 08:55:24.485177 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e89ce1f-b6c8-4cb9-ad32-ded744cb7f2d-scripts\") pod \"cinder-scheduler-0\" (UID: \"5e89ce1f-b6c8-4cb9-ad32-ded744cb7f2d\") " pod="openstack/cinder-scheduler-0"
Nov 28 08:55:24 crc kubenswrapper[4946]: I1128 08:55:24.485203 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e89ce1f-b6c8-4cb9-ad32-ded744cb7f2d-config-data\") pod \"cinder-scheduler-0\" (UID: \"5e89ce1f-b6c8-4cb9-ad32-ded744cb7f2d\") " pod="openstack/cinder-scheduler-0"
Nov 28 08:55:24 crc kubenswrapper[4946]: I1128 08:55:24.486518 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5e89ce1f-b6c8-4cb9-ad32-ded744cb7f2d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5e89ce1f-b6c8-4cb9-ad32-ded744cb7f2d\") " pod="openstack/cinder-scheduler-0"
Nov 28 08:55:24 crc kubenswrapper[4946]: I1128 08:55:24.496656 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e89ce1f-b6c8-4cb9-ad32-ded744cb7f2d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5e89ce1f-b6c8-4cb9-ad32-ded744cb7f2d\") " pod="openstack/cinder-scheduler-0"
Nov 28 08:55:24 crc kubenswrapper[4946]: I1128 08:55:24.503173 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e89ce1f-b6c8-4cb9-ad32-ded744cb7f2d-scripts\") pod \"cinder-scheduler-0\" (UID: \"5e89ce1f-b6c8-4cb9-ad32-ded744cb7f2d\") " pod="openstack/cinder-scheduler-0"
Nov 28 08:55:24 crc kubenswrapper[4946]: I1128 08:55:24.516939 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e89ce1f-b6c8-4cb9-ad32-ded744cb7f2d-config-data\") pod \"cinder-scheduler-0\" (UID: \"5e89ce1f-b6c8-4cb9-ad32-ded744cb7f2d\") " pod="openstack/cinder-scheduler-0"
Nov 28 08:55:24 crc kubenswrapper[4946]: I1128 08:55:24.525063 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5e89ce1f-b6c8-4cb9-ad32-ded744cb7f2d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5e89ce1f-b6c8-4cb9-ad32-ded744cb7f2d\") " pod="openstack/cinder-scheduler-0"
Nov 28 08:55:24 crc kubenswrapper[4946]: I1128 08:55:24.550409 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgg28\" (UniqueName: \"kubernetes.io/projected/5e89ce1f-b6c8-4cb9-ad32-ded744cb7f2d-kube-api-access-cgg28\") pod \"cinder-scheduler-0\" (UID: \"5e89ce1f-b6c8-4cb9-ad32-ded744cb7f2d\") " pod="openstack/cinder-scheduler-0"
Nov 28 08:55:24 crc kubenswrapper[4946]: I1128 08:55:24.585353 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Nov 28 08:55:24 crc kubenswrapper[4946]: I1128 08:55:24.601621 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0"
Nov 28 08:55:24 crc kubenswrapper[4946]: I1128 08:55:24.731724 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 08:55:24 crc kubenswrapper[4946]: I1128 08:55:24.732071 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 08:55:24 crc kubenswrapper[4946]: I1128 08:55:24.912601 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m85sv" event={"ID":"fa8dcfd6-ac4a-4c3a-9b0a-fa68230ba619","Type":"ContainerStarted","Data":"bfaac346690d7292b7b653c4b19d98fe2de27c912b5da552a07b475db3f38d44"}
Nov 28 08:55:25 crc kubenswrapper[4946]: W1128 08:55:25.095883 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e89ce1f_b6c8_4cb9_ad32_ded744cb7f2d.slice/crio-782363cd383865635b6f65218d8a9de869ea2b210bd3b9d2fd8a4ce5f5892e44 WatchSource:0}: Error finding container 782363cd383865635b6f65218d8a9de869ea2b210bd3b9d2fd8a4ce5f5892e44: Status 404 returned error can't find the container with id 782363cd383865635b6f65218d8a9de869ea2b210bd3b9d2fd8a4ce5f5892e44
Nov 28 08:55:25 crc kubenswrapper[4946]: I1128 08:55:25.102008 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 28 08:55:25 crc kubenswrapper[4946]: I1128 08:55:25.928303 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5e89ce1f-b6c8-4cb9-ad32-ded744cb7f2d","Type":"ContainerStarted","Data":"c978696942580f02540982edb7fd74a491f2b2ab1d307c1d64583fd39a7c524d"}
Nov 28 08:55:25 crc kubenswrapper[4946]: I1128 08:55:25.928629 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5e89ce1f-b6c8-4cb9-ad32-ded744cb7f2d","Type":"ContainerStarted","Data":"782363cd383865635b6f65218d8a9de869ea2b210bd3b9d2fd8a4ce5f5892e44"}
Nov 28 08:55:25 crc kubenswrapper[4946]: I1128 08:55:25.931303 4946 generic.go:334] "Generic (PLEG): container finished" podID="fa8dcfd6-ac4a-4c3a-9b0a-fa68230ba619" containerID="bfaac346690d7292b7b653c4b19d98fe2de27c912b5da552a07b475db3f38d44" exitCode=0
Nov 28 08:55:25 crc kubenswrapper[4946]: I1128 08:55:25.931347 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m85sv" event={"ID":"fa8dcfd6-ac4a-4c3a-9b0a-fa68230ba619","Type":"ContainerDied","Data":"bfaac346690d7292b7b653c4b19d98fe2de27c912b5da552a07b475db3f38d44"}
Nov 28 08:55:26 crc kubenswrapper[4946]: I1128 08:55:26.015937 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="767720cb-05f3-4879-8ee3-f22550a25d3a" path="/var/lib/kubelet/pods/767720cb-05f3-4879-8ee3-f22550a25d3a/volumes"
Nov 28 08:55:26 crc kubenswrapper[4946]: I1128 08:55:26.941739 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5e89ce1f-b6c8-4cb9-ad32-ded744cb7f2d","Type":"ContainerStarted","Data":"ac58d81736103151b6623bfca8a0f1298ad395840648df1b93d1c95314b138de"}
Nov 28 08:55:26 crc kubenswrapper[4946]: I1128 08:55:26.968756 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=2.968741554 podStartE2EDuration="2.968741554s" podCreationTimestamp="2025-11-28 08:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 08:55:26.961586176 +0000 UTC m=+7381.339651287" watchObservedRunningTime="2025-11-28 08:55:26.968741554 +0000 UTC m=+7381.346806665"
Nov 28 08:55:27 crc kubenswrapper[4946]: I1128 08:55:27.888849 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-b4swm"
Nov 28 08:55:27 crc kubenswrapper[4946]: I1128 08:55:27.889219 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-b4swm"
Nov 28 08:55:27 crc kubenswrapper[4946]: I1128 08:55:27.954828 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m85sv" event={"ID":"fa8dcfd6-ac4a-4c3a-9b0a-fa68230ba619","Type":"ContainerStarted","Data":"3c776a789240c6293ff5177b67c0ad74eae9274ba621b6dd7cbff1bb0908abf1"}
Nov 28 08:55:27 crc kubenswrapper[4946]: I1128 08:55:27.988373 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m85sv" podStartSLOduration=2.758560806 podStartE2EDuration="5.988346706s" podCreationTimestamp="2025-11-28 08:55:22 +0000 UTC" firstStartedPulling="2025-11-28 08:55:23.904226907 +0000 UTC m=+7378.282292018" lastFinishedPulling="2025-11-28 08:55:27.134012787 +0000 UTC m=+7381.512077918" observedRunningTime="2025-11-28 08:55:27.982085671 +0000 UTC m=+7382.360150802" watchObservedRunningTime="2025-11-28 08:55:27.988346706 +0000 UTC m=+7382.366411827"
Nov 28 08:55:28 crc kubenswrapper[4946]: I1128 08:55:28.959684 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-b4swm" podUID="419b30ad-96ca-4a84-9dbe-21048d48eaff" containerName="registry-server" probeResult="failure" output=<
Nov 28 08:55:28 crc kubenswrapper[4946]: timeout: failed to connect service ":50051" within 1s
Nov 28 08:55:28 crc kubenswrapper[4946]: >
Nov 28 08:55:28 crc kubenswrapper[4946]: I1128 08:55:28.980414 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Nov 28 08:55:29 crc kubenswrapper[4946]: I1128 08:55:29.585767 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Nov 28 08:55:30 crc kubenswrapper[4946]: I1128 08:55:30.285221 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-c4zcm"]
Nov 28 08:55:30 crc kubenswrapper[4946]: I1128 08:55:30.288101 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c4zcm"
Nov 28 08:55:30 crc kubenswrapper[4946]: I1128 08:55:30.295693 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c4zcm"]
Nov 28 08:55:30 crc kubenswrapper[4946]: I1128 08:55:30.408386 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e7a5b4c-8cd5-418d-b691-194af2ba7b0a-utilities\") pod \"certified-operators-c4zcm\" (UID: \"0e7a5b4c-8cd5-418d-b691-194af2ba7b0a\") " pod="openshift-marketplace/certified-operators-c4zcm"
Nov 28 08:55:30 crc kubenswrapper[4946]: I1128 08:55:30.408770 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr68j\" (UniqueName: \"kubernetes.io/projected/0e7a5b4c-8cd5-418d-b691-194af2ba7b0a-kube-api-access-nr68j\") pod \"certified-operators-c4zcm\" (UID: \"0e7a5b4c-8cd5-418d-b691-194af2ba7b0a\") " pod="openshift-marketplace/certified-operators-c4zcm"
Nov 28 08:55:30 crc kubenswrapper[4946]: I1128 08:55:30.408853 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e7a5b4c-8cd5-418d-b691-194af2ba7b0a-catalog-content\") pod \"certified-operators-c4zcm\" (UID: \"0e7a5b4c-8cd5-418d-b691-194af2ba7b0a\") " pod="openshift-marketplace/certified-operators-c4zcm"
Nov 28 08:55:30 crc kubenswrapper[4946]: I1128 08:55:30.510994 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e7a5b4c-8cd5-418d-b691-194af2ba7b0a-utilities\") pod \"certified-operators-c4zcm\" (UID: \"0e7a5b4c-8cd5-418d-b691-194af2ba7b0a\") " pod="openshift-marketplace/certified-operators-c4zcm"
Nov 28 08:55:30 crc kubenswrapper[4946]: I1128 08:55:30.511229 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nr68j\" (UniqueName: \"kubernetes.io/projected/0e7a5b4c-8cd5-418d-b691-194af2ba7b0a-kube-api-access-nr68j\") pod \"certified-operators-c4zcm\" (UID: \"0e7a5b4c-8cd5-418d-b691-194af2ba7b0a\") " pod="openshift-marketplace/certified-operators-c4zcm"
Nov 28 08:55:30 crc kubenswrapper[4946]: I1128 08:55:30.511282 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e7a5b4c-8cd5-418d-b691-194af2ba7b0a-catalog-content\") pod \"certified-operators-c4zcm\" (UID: \"0e7a5b4c-8cd5-418d-b691-194af2ba7b0a\") " pod="openshift-marketplace/certified-operators-c4zcm"
Nov 28 08:55:30 crc kubenswrapper[4946]: I1128 08:55:30.511666 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e7a5b4c-8cd5-418d-b691-194af2ba7b0a-utilities\") pod \"certified-operators-c4zcm\" (UID: \"0e7a5b4c-8cd5-418d-b691-194af2ba7b0a\") " pod="openshift-marketplace/certified-operators-c4zcm"
Nov 28 08:55:30 crc kubenswrapper[4946]: I1128 08:55:30.512056 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e7a5b4c-8cd5-418d-b691-194af2ba7b0a-catalog-content\") pod \"certified-operators-c4zcm\" (UID: \"0e7a5b4c-8cd5-418d-b691-194af2ba7b0a\") " pod="openshift-marketplace/certified-operators-c4zcm"
Nov 28 08:55:30 crc kubenswrapper[4946]: I1128 08:55:30.541726 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nr68j\" (UniqueName: \"kubernetes.io/projected/0e7a5b4c-8cd5-418d-b691-194af2ba7b0a-kube-api-access-nr68j\") pod \"certified-operators-c4zcm\" (UID: \"0e7a5b4c-8cd5-418d-b691-194af2ba7b0a\") " pod="openshift-marketplace/certified-operators-c4zcm"
Nov 28 08:55:30 crc kubenswrapper[4946]: I1128 08:55:30.640356 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c4zcm"
Nov 28 08:55:31 crc kubenswrapper[4946]: I1128 08:55:31.208657 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c4zcm"]
Nov 28 08:55:31 crc kubenswrapper[4946]: W1128 08:55:31.209403 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e7a5b4c_8cd5_418d_b691_194af2ba7b0a.slice/crio-aea0369fad1ed7b460f7685189cd6bc62db1cfd92b9cbe2766d311a25f760bbe WatchSource:0}: Error finding container aea0369fad1ed7b460f7685189cd6bc62db1cfd92b9cbe2766d311a25f760bbe: Status 404 returned error can't find the container with id aea0369fad1ed7b460f7685189cd6bc62db1cfd92b9cbe2766d311a25f760bbe
Nov 28 08:55:31 crc kubenswrapper[4946]: I1128 08:55:31.992951 4946 generic.go:334] "Generic (PLEG): container finished" podID="0e7a5b4c-8cd5-418d-b691-194af2ba7b0a" containerID="1b5c7d07128945403d229c8c49789bdcc43cccb8d113b3a01bd460c7c05ae3d4" exitCode=0
Nov 28 08:55:32 crc kubenswrapper[4946]: I1128 08:55:32.001334 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c4zcm" event={"ID":"0e7a5b4c-8cd5-418d-b691-194af2ba7b0a","Type":"ContainerDied","Data":"1b5c7d07128945403d229c8c49789bdcc43cccb8d113b3a01bd460c7c05ae3d4"}
Nov 28 08:55:32 crc kubenswrapper[4946]: I1128 08:55:32.001377 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c4zcm" event={"ID":"0e7a5b4c-8cd5-418d-b691-194af2ba7b0a","Type":"ContainerStarted","Data":"aea0369fad1ed7b460f7685189cd6bc62db1cfd92b9cbe2766d311a25f760bbe"}
Nov 28 08:55:32 crc kubenswrapper[4946]: I1128 08:55:32.872973 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-m85sv"
Nov 28 08:55:32 crc kubenswrapper[4946]: I1128 08:55:32.873030 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m85sv"
Nov 28 08:55:32 crc kubenswrapper[4946]: I1128 08:55:32.942121 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m85sv"
Nov 28 08:55:33 crc kubenswrapper[4946]: I1128 08:55:33.014261 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c4zcm" event={"ID":"0e7a5b4c-8cd5-418d-b691-194af2ba7b0a","Type":"ContainerStarted","Data":"cda9667f0743d79a86d0e2e2b798f411f72f200409b4c46fab79ef511534dffa"}
Nov 28 08:55:33 crc kubenswrapper[4946]: I1128 08:55:33.092870 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m85sv"
Nov 28 08:55:34 crc kubenswrapper[4946]: I1128 08:55:34.033966 4946 generic.go:334] "Generic (PLEG): container finished" podID="0e7a5b4c-8cd5-418d-b691-194af2ba7b0a" containerID="cda9667f0743d79a86d0e2e2b798f411f72f200409b4c46fab79ef511534dffa" exitCode=0
Nov 28 08:55:34 crc kubenswrapper[4946]: I1128 08:55:34.034059 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c4zcm" event={"ID":"0e7a5b4c-8cd5-418d-b691-194af2ba7b0a","Type":"ContainerDied","Data":"cda9667f0743d79a86d0e2e2b798f411f72f200409b4c46fab79ef511534dffa"}
Nov 28 08:55:34 crc kubenswrapper[4946]: I1128 08:55:34.814201 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Nov 28 08:55:35 crc kubenswrapper[4946]: I1128 08:55:35.316391 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m85sv"]
Nov 28 08:55:35 crc kubenswrapper[4946]: I1128 08:55:35.317029 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-m85sv" podUID="fa8dcfd6-ac4a-4c3a-9b0a-fa68230ba619" containerName="registry-server" containerID="cri-o://3c776a789240c6293ff5177b67c0ad74eae9274ba621b6dd7cbff1bb0908abf1" gracePeriod=2
Nov 28 08:55:35 crc kubenswrapper[4946]: E1128 08:55:35.548128 4946 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa8dcfd6_ac4a_4c3a_9b0a_fa68230ba619.slice/crio-3c776a789240c6293ff5177b67c0ad74eae9274ba621b6dd7cbff1bb0908abf1.scope\": RecentStats: unable to find data in memory cache]"
Nov 28 08:55:36 crc kubenswrapper[4946]: I1128 08:55:36.058278 4946 generic.go:334] "Generic (PLEG): container finished" podID="fa8dcfd6-ac4a-4c3a-9b0a-fa68230ba619" containerID="3c776a789240c6293ff5177b67c0ad74eae9274ba621b6dd7cbff1bb0908abf1" exitCode=0
Nov 28 08:55:36 crc kubenswrapper[4946]: I1128 08:55:36.058396 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m85sv" event={"ID":"fa8dcfd6-ac4a-4c3a-9b0a-fa68230ba619","Type":"ContainerDied","Data":"3c776a789240c6293ff5177b67c0ad74eae9274ba621b6dd7cbff1bb0908abf1"}
Nov 28 08:55:36 crc kubenswrapper[4946]: I1128 08:55:36.061864 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c4zcm" event={"ID":"0e7a5b4c-8cd5-418d-b691-194af2ba7b0a","Type":"ContainerStarted","Data":"1df7b017b54a775f572be134d875f497826003e73b336f2c66093a3858205e52"}
Nov 28 08:55:36 crc kubenswrapper[4946]: I1128 08:55:36.111304 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-c4zcm" podStartSLOduration=3.315419494 podStartE2EDuration="6.111278737s" podCreationTimestamp="2025-11-28 08:55:30 +0000 UTC" firstStartedPulling="2025-11-28 08:55:31.995362922 +0000 UTC m=+7386.373428033" lastFinishedPulling="2025-11-28 08:55:34.791222165 +0000 UTC m=+7389.169287276" observedRunningTime="2025-11-28 08:55:36.081281454 +0000 UTC m=+7390.459346565" watchObservedRunningTime="2025-11-28 08:55:36.111278737 +0000 UTC m=+7390.489343858"
Nov 28 08:55:36 crc kubenswrapper[4946]: I1128 08:55:36.361322 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m85sv"
Nov 28 08:55:36 crc kubenswrapper[4946]: I1128 08:55:36.437816 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa8dcfd6-ac4a-4c3a-9b0a-fa68230ba619-catalog-content\") pod \"fa8dcfd6-ac4a-4c3a-9b0a-fa68230ba619\" (UID: \"fa8dcfd6-ac4a-4c3a-9b0a-fa68230ba619\") "
Nov 28 08:55:36 crc kubenswrapper[4946]: I1128 08:55:36.438259 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqgs2\" (UniqueName: \"kubernetes.io/projected/fa8dcfd6-ac4a-4c3a-9b0a-fa68230ba619-kube-api-access-cqgs2\") pod \"fa8dcfd6-ac4a-4c3a-9b0a-fa68230ba619\" (UID: \"fa8dcfd6-ac4a-4c3a-9b0a-fa68230ba619\") "
Nov 28 08:55:36 crc kubenswrapper[4946]: I1128 08:55:36.438280 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa8dcfd6-ac4a-4c3a-9b0a-fa68230ba619-utilities\") pod \"fa8dcfd6-ac4a-4c3a-9b0a-fa68230ba619\" (UID: \"fa8dcfd6-ac4a-4c3a-9b0a-fa68230ba619\") "
Nov 28 08:55:36 crc kubenswrapper[4946]: I1128 08:55:36.439353 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa8dcfd6-ac4a-4c3a-9b0a-fa68230ba619-utilities" (OuterVolumeSpecName: "utilities") pod "fa8dcfd6-ac4a-4c3a-9b0a-fa68230ba619" (UID: "fa8dcfd6-ac4a-4c3a-9b0a-fa68230ba619"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 08:55:36 crc kubenswrapper[4946]: I1128 08:55:36.445000 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa8dcfd6-ac4a-4c3a-9b0a-fa68230ba619-kube-api-access-cqgs2" (OuterVolumeSpecName: "kube-api-access-cqgs2") pod "fa8dcfd6-ac4a-4c3a-9b0a-fa68230ba619" (UID: "fa8dcfd6-ac4a-4c3a-9b0a-fa68230ba619"). InnerVolumeSpecName "kube-api-access-cqgs2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 08:55:36 crc kubenswrapper[4946]: I1128 08:55:36.462441 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa8dcfd6-ac4a-4c3a-9b0a-fa68230ba619-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fa8dcfd6-ac4a-4c3a-9b0a-fa68230ba619" (UID: "fa8dcfd6-ac4a-4c3a-9b0a-fa68230ba619"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 08:55:36 crc kubenswrapper[4946]: I1128 08:55:36.540258 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqgs2\" (UniqueName: \"kubernetes.io/projected/fa8dcfd6-ac4a-4c3a-9b0a-fa68230ba619-kube-api-access-cqgs2\") on node \"crc\" DevicePath \"\""
Nov 28 08:55:36 crc kubenswrapper[4946]: I1128 08:55:36.540289 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa8dcfd6-ac4a-4c3a-9b0a-fa68230ba619-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 08:55:36 crc kubenswrapper[4946]: I1128 08:55:36.540298 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa8dcfd6-ac4a-4c3a-9b0a-fa68230ba619-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 08:55:37 crc kubenswrapper[4946]: I1128 08:55:37.075284 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m85sv"
Nov 28 08:55:37 crc kubenswrapper[4946]: I1128 08:55:37.075281 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m85sv" event={"ID":"fa8dcfd6-ac4a-4c3a-9b0a-fa68230ba619","Type":"ContainerDied","Data":"18d308d093dc3f2794879e2d8332cc5d76b1e2d3b2f10a4a4d0b73a0523a2d4a"}
Nov 28 08:55:37 crc kubenswrapper[4946]: I1128 08:55:37.075561 4946 scope.go:117] "RemoveContainer" containerID="3c776a789240c6293ff5177b67c0ad74eae9274ba621b6dd7cbff1bb0908abf1"
Nov 28 08:55:37 crc kubenswrapper[4946]: I1128 08:55:37.121980 4946 scope.go:117] "RemoveContainer" containerID="bfaac346690d7292b7b653c4b19d98fe2de27c912b5da552a07b475db3f38d44"
Nov 28 08:55:37 crc kubenswrapper[4946]: I1128 08:55:37.137416 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m85sv"]
Nov 28 08:55:37 crc kubenswrapper[4946]: I1128 08:55:37.146734 4946 scope.go:117] "RemoveContainer" containerID="dfb32396e955c2a23dc0465d5b8d5a21b3efe87a77982ec5df131d04f1a44215"
Nov 28 08:55:37 crc kubenswrapper[4946]: I1128 08:55:37.147216 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-m85sv"]
Nov 28 08:55:37 crc kubenswrapper[4946]: I1128 08:55:37.980288 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-b4swm"
Nov 28 08:55:38 crc kubenswrapper[4946]: I1128 08:55:38.006811 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa8dcfd6-ac4a-4c3a-9b0a-fa68230ba619" path="/var/lib/kubelet/pods/fa8dcfd6-ac4a-4c3a-9b0a-fa68230ba619/volumes"
Nov 28 08:55:38 crc kubenswrapper[4946]: I1128 08:55:38.062057 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-b4swm"
Nov 28 08:55:40 crc kubenswrapper[4946]: I1128 08:55:40.526771 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b4swm"]
Nov 28 08:55:40 crc kubenswrapper[4946]: I1128 08:55:40.527421 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-b4swm" podUID="419b30ad-96ca-4a84-9dbe-21048d48eaff" containerName="registry-server" containerID="cri-o://1f73cb2d5e1a31aef7b0b65b29569d94e6b02847d62722617553e6195a96a7c5" gracePeriod=2
Nov 28 08:55:40 crc kubenswrapper[4946]: I1128 08:55:40.641621 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-c4zcm"
Nov 28 08:55:40 crc kubenswrapper[4946]: I1128 08:55:40.641903 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-c4zcm"
Nov 28 08:55:40 crc kubenswrapper[4946]: I1128 08:55:40.690640 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-c4zcm"
Nov 28 08:55:40 crc kubenswrapper[4946]: I1128 08:55:40.965225 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b4swm"
Nov 28 08:55:41 crc kubenswrapper[4946]: I1128 08:55:41.038225 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/419b30ad-96ca-4a84-9dbe-21048d48eaff-utilities\") pod \"419b30ad-96ca-4a84-9dbe-21048d48eaff\" (UID: \"419b30ad-96ca-4a84-9dbe-21048d48eaff\") "
Nov 28 08:55:41 crc kubenswrapper[4946]: I1128 08:55:41.038526 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85gxz\" (UniqueName: \"kubernetes.io/projected/419b30ad-96ca-4a84-9dbe-21048d48eaff-kube-api-access-85gxz\") pod \"419b30ad-96ca-4a84-9dbe-21048d48eaff\" (UID: \"419b30ad-96ca-4a84-9dbe-21048d48eaff\") "
Nov 28 08:55:41 crc kubenswrapper[4946]: I1128 08:55:41.038617 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/419b30ad-96ca-4a84-9dbe-21048d48eaff-catalog-content\") pod \"419b30ad-96ca-4a84-9dbe-21048d48eaff\" (UID: \"419b30ad-96ca-4a84-9dbe-21048d48eaff\") "
Nov 28 08:55:41 crc kubenswrapper[4946]: I1128 08:55:41.040281 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/419b30ad-96ca-4a84-9dbe-21048d48eaff-utilities" (OuterVolumeSpecName: "utilities") pod "419b30ad-96ca-4a84-9dbe-21048d48eaff" (UID: "419b30ad-96ca-4a84-9dbe-21048d48eaff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 08:55:41 crc kubenswrapper[4946]: I1128 08:55:41.041133 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/419b30ad-96ca-4a84-9dbe-21048d48eaff-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 08:55:41 crc kubenswrapper[4946]: I1128 08:55:41.045795 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/419b30ad-96ca-4a84-9dbe-21048d48eaff-kube-api-access-85gxz" (OuterVolumeSpecName: "kube-api-access-85gxz") pod "419b30ad-96ca-4a84-9dbe-21048d48eaff" (UID: "419b30ad-96ca-4a84-9dbe-21048d48eaff"). InnerVolumeSpecName "kube-api-access-85gxz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 08:55:41 crc kubenswrapper[4946]: I1128 08:55:41.141248 4946 generic.go:334] "Generic (PLEG): container finished" podID="419b30ad-96ca-4a84-9dbe-21048d48eaff" containerID="1f73cb2d5e1a31aef7b0b65b29569d94e6b02847d62722617553e6195a96a7c5" exitCode=0
Nov 28 08:55:41 crc kubenswrapper[4946]: I1128 08:55:41.141330 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b4swm"
Nov 28 08:55:41 crc kubenswrapper[4946]: I1128 08:55:41.141360 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b4swm" event={"ID":"419b30ad-96ca-4a84-9dbe-21048d48eaff","Type":"ContainerDied","Data":"1f73cb2d5e1a31aef7b0b65b29569d94e6b02847d62722617553e6195a96a7c5"}
Nov 28 08:55:41 crc kubenswrapper[4946]: I1128 08:55:41.141433 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b4swm" event={"ID":"419b30ad-96ca-4a84-9dbe-21048d48eaff","Type":"ContainerDied","Data":"474e0aaee289c5ffb8d1e39a536f0546087117095ca9b88287a3bba9aff06777"}
Nov 28 08:55:41 crc kubenswrapper[4946]: I1128 08:55:41.141532 4946 scope.go:117] "RemoveContainer" containerID="1f73cb2d5e1a31aef7b0b65b29569d94e6b02847d62722617553e6195a96a7c5"
Nov 28 08:55:41 crc kubenswrapper[4946]: I1128 08:55:41.142698 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85gxz\" (UniqueName: \"kubernetes.io/projected/419b30ad-96ca-4a84-9dbe-21048d48eaff-kube-api-access-85gxz\") on node \"crc\" DevicePath \"\""
Nov 28 08:55:41 crc kubenswrapper[4946]: I1128 08:55:41.181830 4946 scope.go:117] "RemoveContainer" containerID="7340f1c94fa54c94b7315be39be09abfb18c23954e3784a3a78fe89a8b05e48a"
Nov 28 08:55:41 crc kubenswrapper[4946]: I1128 08:55:41.182374 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/419b30ad-96ca-4a84-9dbe-21048d48eaff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "419b30ad-96ca-4a84-9dbe-21048d48eaff" (UID: "419b30ad-96ca-4a84-9dbe-21048d48eaff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 08:55:41 crc kubenswrapper[4946]: I1128 08:55:41.212065 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-c4zcm"
Nov 28 08:55:41 crc kubenswrapper[4946]: I1128 08:55:41.214004 4946 scope.go:117] "RemoveContainer" containerID="45ceeff5c576b48a7e04f4176e9f90f7f9f2726e5c2c317298948d0c51fd2dd6"
Nov 28 08:55:41 crc kubenswrapper[4946]: I1128 08:55:41.245932 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/419b30ad-96ca-4a84-9dbe-21048d48eaff-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 08:55:41 crc kubenswrapper[4946]: I1128 08:55:41.257488 4946 scope.go:117] "RemoveContainer" containerID="1f73cb2d5e1a31aef7b0b65b29569d94e6b02847d62722617553e6195a96a7c5"
Nov 28 08:55:41 crc kubenswrapper[4946]: E1128 08:55:41.258004 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f73cb2d5e1a31aef7b0b65b29569d94e6b02847d62722617553e6195a96a7c5\": container with ID starting with 1f73cb2d5e1a31aef7b0b65b29569d94e6b02847d62722617553e6195a96a7c5 not found: ID does not exist" containerID="1f73cb2d5e1a31aef7b0b65b29569d94e6b02847d62722617553e6195a96a7c5"
Nov 28 08:55:41 crc kubenswrapper[4946]: I1128 08:55:41.258033 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f73cb2d5e1a31aef7b0b65b29569d94e6b02847d62722617553e6195a96a7c5"} err="failed to get container status \"1f73cb2d5e1a31aef7b0b65b29569d94e6b02847d62722617553e6195a96a7c5\": rpc error: code = NotFound desc = could not find container \"1f73cb2d5e1a31aef7b0b65b29569d94e6b02847d62722617553e6195a96a7c5\": container with ID starting with 1f73cb2d5e1a31aef7b0b65b29569d94e6b02847d62722617553e6195a96a7c5 not found: ID does not exist"
Nov 28 08:55:41 crc kubenswrapper[4946]: I1128 08:55:41.258052 4946 scope.go:117] "RemoveContainer" containerID="7340f1c94fa54c94b7315be39be09abfb18c23954e3784a3a78fe89a8b05e48a"
Nov 28 08:55:41 crc kubenswrapper[4946]: E1128 08:55:41.258603 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7340f1c94fa54c94b7315be39be09abfb18c23954e3784a3a78fe89a8b05e48a\": container with ID starting with 7340f1c94fa54c94b7315be39be09abfb18c23954e3784a3a78fe89a8b05e48a not found: ID does not exist" containerID="7340f1c94fa54c94b7315be39be09abfb18c23954e3784a3a78fe89a8b05e48a"
Nov 28 08:55:41 crc kubenswrapper[4946]: I1128 08:55:41.258628 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7340f1c94fa54c94b7315be39be09abfb18c23954e3784a3a78fe89a8b05e48a"} err="failed to get container status \"7340f1c94fa54c94b7315be39be09abfb18c23954e3784a3a78fe89a8b05e48a\": rpc error: code = NotFound desc = could not find container \"7340f1c94fa54c94b7315be39be09abfb18c23954e3784a3a78fe89a8b05e48a\": container with ID starting with 7340f1c94fa54c94b7315be39be09abfb18c23954e3784a3a78fe89a8b05e48a not found: ID does not exist"
Nov 28 08:55:41 crc kubenswrapper[4946]: I1128 08:55:41.258642 4946 scope.go:117] "RemoveContainer" containerID="45ceeff5c576b48a7e04f4176e9f90f7f9f2726e5c2c317298948d0c51fd2dd6"
Nov 28 08:55:41 crc kubenswrapper[4946]: E1128 08:55:41.259002 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45ceeff5c576b48a7e04f4176e9f90f7f9f2726e5c2c317298948d0c51fd2dd6\": container with ID starting with 45ceeff5c576b48a7e04f4176e9f90f7f9f2726e5c2c317298948d0c51fd2dd6 not found: ID does not exist" containerID="45ceeff5c576b48a7e04f4176e9f90f7f9f2726e5c2c317298948d0c51fd2dd6"
Nov 28 08:55:41 crc kubenswrapper[4946]: I1128 08:55:41.259026 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45ceeff5c576b48a7e04f4176e9f90f7f9f2726e5c2c317298948d0c51fd2dd6"} err="failed to get container status \"45ceeff5c576b48a7e04f4176e9f90f7f9f2726e5c2c317298948d0c51fd2dd6\": rpc error: code = NotFound desc = could not find container \"45ceeff5c576b48a7e04f4176e9f90f7f9f2726e5c2c317298948d0c51fd2dd6\": container with ID starting with 45ceeff5c576b48a7e04f4176e9f90f7f9f2726e5c2c317298948d0c51fd2dd6 not found: ID does not exist"
Nov 28 08:55:41 crc kubenswrapper[4946]: I1128 08:55:41.484880 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b4swm"]
Nov 28 08:55:41 crc kubenswrapper[4946]: I1128 08:55:41.494261 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-b4swm"]
Nov 28 08:55:42 crc kubenswrapper[4946]: I1128 08:55:42.010770 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="419b30ad-96ca-4a84-9dbe-21048d48eaff" path="/var/lib/kubelet/pods/419b30ad-96ca-4a84-9dbe-21048d48eaff/volumes"
Nov 28 08:55:43 crc kubenswrapper[4946]: I1128 08:55:43.132174 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c4zcm"]
Nov 28 08:55:43 crc kubenswrapper[4946]: I1128 08:55:43.170253 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-c4zcm" podUID="0e7a5b4c-8cd5-418d-b691-194af2ba7b0a" containerName="registry-server" containerID="cri-o://1df7b017b54a775f572be134d875f497826003e73b336f2c66093a3858205e52" gracePeriod=2
Nov 28 08:55:43 crc kubenswrapper[4946]: I1128 08:55:43.684215 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c4zcm"
Nov 28 08:55:43 crc kubenswrapper[4946]: I1128 08:55:43.798539 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nr68j\" (UniqueName: \"kubernetes.io/projected/0e7a5b4c-8cd5-418d-b691-194af2ba7b0a-kube-api-access-nr68j\") pod \"0e7a5b4c-8cd5-418d-b691-194af2ba7b0a\" (UID: \"0e7a5b4c-8cd5-418d-b691-194af2ba7b0a\") "
Nov 28 08:55:43 crc kubenswrapper[4946]: I1128 08:55:43.798604 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e7a5b4c-8cd5-418d-b691-194af2ba7b0a-catalog-content\") pod \"0e7a5b4c-8cd5-418d-b691-194af2ba7b0a\" (UID: \"0e7a5b4c-8cd5-418d-b691-194af2ba7b0a\") "
Nov 28 08:55:43 crc kubenswrapper[4946]: I1128 08:55:43.798693 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e7a5b4c-8cd5-418d-b691-194af2ba7b0a-utilities\") pod \"0e7a5b4c-8cd5-418d-b691-194af2ba7b0a\" (UID: \"0e7a5b4c-8cd5-418d-b691-194af2ba7b0a\") "
Nov 28 08:55:43 crc kubenswrapper[4946]: I1128 08:55:43.800024 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e7a5b4c-8cd5-418d-b691-194af2ba7b0a-utilities" (OuterVolumeSpecName: "utilities") pod "0e7a5b4c-8cd5-418d-b691-194af2ba7b0a" (UID: "0e7a5b4c-8cd5-418d-b691-194af2ba7b0a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 08:55:43 crc kubenswrapper[4946]: I1128 08:55:43.804617 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e7a5b4c-8cd5-418d-b691-194af2ba7b0a-kube-api-access-nr68j" (OuterVolumeSpecName: "kube-api-access-nr68j") pod "0e7a5b4c-8cd5-418d-b691-194af2ba7b0a" (UID: "0e7a5b4c-8cd5-418d-b691-194af2ba7b0a"). InnerVolumeSpecName "kube-api-access-nr68j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 08:55:43 crc kubenswrapper[4946]: I1128 08:55:43.871946 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e7a5b4c-8cd5-418d-b691-194af2ba7b0a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0e7a5b4c-8cd5-418d-b691-194af2ba7b0a" (UID: "0e7a5b4c-8cd5-418d-b691-194af2ba7b0a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 08:55:43 crc kubenswrapper[4946]: I1128 08:55:43.901196 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e7a5b4c-8cd5-418d-b691-194af2ba7b0a-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 08:55:43 crc kubenswrapper[4946]: I1128 08:55:43.901230 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nr68j\" (UniqueName: \"kubernetes.io/projected/0e7a5b4c-8cd5-418d-b691-194af2ba7b0a-kube-api-access-nr68j\") on node \"crc\" DevicePath \"\""
Nov 28 08:55:43 crc kubenswrapper[4946]: I1128 08:55:43.901242 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e7a5b4c-8cd5-418d-b691-194af2ba7b0a-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 08:55:44 crc kubenswrapper[4946]: I1128 08:55:44.181169 4946 generic.go:334] "Generic (PLEG): container finished" podID="0e7a5b4c-8cd5-418d-b691-194af2ba7b0a" containerID="1df7b017b54a775f572be134d875f497826003e73b336f2c66093a3858205e52" exitCode=0
Nov 28 08:55:44 crc kubenswrapper[4946]: I1128 08:55:44.181296 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c4zcm" event={"ID":"0e7a5b4c-8cd5-418d-b691-194af2ba7b0a","Type":"ContainerDied","Data":"1df7b017b54a775f572be134d875f497826003e73b336f2c66093a3858205e52"}
Nov 28 08:55:44 crc kubenswrapper[4946]: I1128 08:55:44.181422 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c4zcm" event={"ID":"0e7a5b4c-8cd5-418d-b691-194af2ba7b0a","Type":"ContainerDied","Data":"aea0369fad1ed7b460f7685189cd6bc62db1cfd92b9cbe2766d311a25f760bbe"}
Nov 28 08:55:44 crc kubenswrapper[4946]: I1128 08:55:44.181368 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c4zcm"
Nov 28 08:55:44 crc kubenswrapper[4946]: I1128 08:55:44.181439 4946 scope.go:117] "RemoveContainer" containerID="1df7b017b54a775f572be134d875f497826003e73b336f2c66093a3858205e52"
Nov 28 08:55:44 crc kubenswrapper[4946]: I1128 08:55:44.217500 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c4zcm"]
Nov 28 08:55:44 crc kubenswrapper[4946]: I1128 08:55:44.225163 4946 scope.go:117] "RemoveContainer" containerID="cda9667f0743d79a86d0e2e2b798f411f72f200409b4c46fab79ef511534dffa"
Nov 28 08:55:44 crc kubenswrapper[4946]: I1128 08:55:44.231338 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-c4zcm"]
Nov 28 08:55:44 crc kubenswrapper[4946]: I1128 08:55:44.246539 4946 scope.go:117] "RemoveContainer" containerID="1b5c7d07128945403d229c8c49789bdcc43cccb8d113b3a01bd460c7c05ae3d4"
Nov 28 08:55:44 crc kubenswrapper[4946]: I1128 08:55:44.284658 4946 scope.go:117] "RemoveContainer" containerID="1df7b017b54a775f572be134d875f497826003e73b336f2c66093a3858205e52"
Nov 28 08:55:44 crc kubenswrapper[4946]: E1128 08:55:44.285309 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1df7b017b54a775f572be134d875f497826003e73b336f2c66093a3858205e52\": container with ID starting with 1df7b017b54a775f572be134d875f497826003e73b336f2c66093a3858205e52 not found: ID does not exist" containerID="1df7b017b54a775f572be134d875f497826003e73b336f2c66093a3858205e52"
Nov 28 08:55:44 crc kubenswrapper[4946]: I1128 08:55:44.285348 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1df7b017b54a775f572be134d875f497826003e73b336f2c66093a3858205e52"} err="failed to get container status \"1df7b017b54a775f572be134d875f497826003e73b336f2c66093a3858205e52\": rpc error: code = NotFound desc = could not find container \"1df7b017b54a775f572be134d875f497826003e73b336f2c66093a3858205e52\": container with ID starting with 1df7b017b54a775f572be134d875f497826003e73b336f2c66093a3858205e52 not found: ID does not exist"
Nov 28 08:55:44 crc kubenswrapper[4946]: I1128 08:55:44.285373 4946 scope.go:117] "RemoveContainer" containerID="cda9667f0743d79a86d0e2e2b798f411f72f200409b4c46fab79ef511534dffa"
Nov 28 08:55:44 crc kubenswrapper[4946]: E1128 08:55:44.285589 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cda9667f0743d79a86d0e2e2b798f411f72f200409b4c46fab79ef511534dffa\": container with ID starting with cda9667f0743d79a86d0e2e2b798f411f72f200409b4c46fab79ef511534dffa not found: ID does not exist" containerID="cda9667f0743d79a86d0e2e2b798f411f72f200409b4c46fab79ef511534dffa"
Nov 28 08:55:44 crc kubenswrapper[4946]: I1128 08:55:44.285612 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cda9667f0743d79a86d0e2e2b798f411f72f200409b4c46fab79ef511534dffa"} err="failed to get container status \"cda9667f0743d79a86d0e2e2b798f411f72f200409b4c46fab79ef511534dffa\": rpc error: code = NotFound desc = could not find container \"cda9667f0743d79a86d0e2e2b798f411f72f200409b4c46fab79ef511534dffa\": container with ID starting with cda9667f0743d79a86d0e2e2b798f411f72f200409b4c46fab79ef511534dffa not found: ID does not exist"
Nov 28 08:55:44 crc kubenswrapper[4946]: I1128 08:55:44.285628 4946 scope.go:117] "RemoveContainer" containerID="1b5c7d07128945403d229c8c49789bdcc43cccb8d113b3a01bd460c7c05ae3d4"
Nov 28 08:55:44 crc kubenswrapper[4946]: E1128 08:55:44.285825 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b5c7d07128945403d229c8c49789bdcc43cccb8d113b3a01bd460c7c05ae3d4\": container with ID starting with 1b5c7d07128945403d229c8c49789bdcc43cccb8d113b3a01bd460c7c05ae3d4 not found: ID does not exist" containerID="1b5c7d07128945403d229c8c49789bdcc43cccb8d113b3a01bd460c7c05ae3d4"
Nov 28 08:55:44 crc kubenswrapper[4946]: I1128 08:55:44.285855 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b5c7d07128945403d229c8c49789bdcc43cccb8d113b3a01bd460c7c05ae3d4"} err="failed to get container status \"1b5c7d07128945403d229c8c49789bdcc43cccb8d113b3a01bd460c7c05ae3d4\": rpc error: code = NotFound desc = could not find container \"1b5c7d07128945403d229c8c49789bdcc43cccb8d113b3a01bd460c7c05ae3d4\": container with ID starting with 1b5c7d07128945403d229c8c49789bdcc43cccb8d113b3a01bd460c7c05ae3d4 not found: ID does not exist"
Nov 28 08:55:46 crc kubenswrapper[4946]: I1128 08:55:46.003238 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e7a5b4c-8cd5-418d-b691-194af2ba7b0a" path="/var/lib/kubelet/pods/0e7a5b4c-8cd5-418d-b691-194af2ba7b0a/volumes"
Nov 28 08:55:54 crc kubenswrapper[4946]: I1128 08:55:54.731095 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 08:55:54 crc kubenswrapper[4946]: I1128 08:55:54.731796 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 08:55:54 crc kubenswrapper[4946]: I1128 08:55:54.731850 4946 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr"
Nov 28 08:55:54 crc kubenswrapper[4946]: I1128 08:55:54.732510 4946 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"39cf365640a94edb527a038b876ca944a2f17e9a393ff02023ee1ab525b0e376"} pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 28 08:55:54 crc kubenswrapper[4946]: I1128 08:55:54.732571 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" containerID="cri-o://39cf365640a94edb527a038b876ca944a2f17e9a393ff02023ee1ab525b0e376" gracePeriod=600
Nov 28 08:55:54 crc kubenswrapper[4946]: E1128 08:55:54.860523 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
Nov 28 08:55:55 crc kubenswrapper[4946]: I1128 08:55:55.350973 4946 generic.go:334] "Generic (PLEG): container finished" podID="7450befc-262f-45d1-a5f4-f445e540185b" containerID="39cf365640a94edb527a038b876ca944a2f17e9a393ff02023ee1ab525b0e376" exitCode=0
Nov 28 08:55:55 crc kubenswrapper[4946]: I1128 08:55:55.351044 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerDied","Data":"39cf365640a94edb527a038b876ca944a2f17e9a393ff02023ee1ab525b0e376"}
Nov 28 08:55:55 crc kubenswrapper[4946]: I1128 08:55:55.351147 4946 scope.go:117] "RemoveContainer" containerID="5d101f00978c0d1403b405e85a9fdb8ca2bcf5d89e67e93ef9450e8af3a21554"
Nov 28 08:55:55 crc kubenswrapper[4946]: I1128 08:55:55.353563 4946 scope.go:117] "RemoveContainer" containerID="39cf365640a94edb527a038b876ca944a2f17e9a393ff02023ee1ab525b0e376"
Nov 28 08:55:55 crc kubenswrapper[4946]: E1128 08:55:55.354146 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
Nov 28 08:56:08 crc kubenswrapper[4946]: I1128 08:56:08.990980 4946 scope.go:117] "RemoveContainer" containerID="39cf365640a94edb527a038b876ca944a2f17e9a393ff02023ee1ab525b0e376"
Nov 28 08:56:08 crc kubenswrapper[4946]: E1128 08:56:08.993742 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
Nov 28 08:56:14 crc kubenswrapper[4946]: I1128 08:56:14.095799 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-6fb29"]
Nov 28 08:56:14 crc kubenswrapper[4946]: I1128 08:56:14.111435 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-b55e-account-create-update-lcv5z"]
Nov 28 08:56:14 crc kubenswrapper[4946]: I1128 08:56:14.121530 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-6fb29"]
Nov 28 08:56:14 crc kubenswrapper[4946]: I1128 08:56:14.128655 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-b55e-account-create-update-lcv5z"]
Nov 28 08:56:16 crc kubenswrapper[4946]: I1128 08:56:16.003315 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d0bf946-6b41-4e55-9840-9757ca834ad9" path="/var/lib/kubelet/pods/1d0bf946-6b41-4e55-9840-9757ca834ad9/volumes"
Nov 28 08:56:16 crc kubenswrapper[4946]: I1128 08:56:16.005576 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f1e52f2-1e13-44f6-b98e-293fc481aac4" path="/var/lib/kubelet/pods/9f1e52f2-1e13-44f6-b98e-293fc481aac4/volumes"
Nov 28 08:56:20 crc kubenswrapper[4946]: I1128 08:56:20.990600 4946 scope.go:117] "RemoveContainer"
containerID="39cf365640a94edb527a038b876ca944a2f17e9a393ff02023ee1ab525b0e376" Nov 28 08:56:20 crc kubenswrapper[4946]: E1128 08:56:20.991638 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:56:24 crc kubenswrapper[4946]: I1128 08:56:24.051360 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-fdw2z"] Nov 28 08:56:24 crc kubenswrapper[4946]: I1128 08:56:24.061273 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-fdw2z"] Nov 28 08:56:26 crc kubenswrapper[4946]: I1128 08:56:26.007039 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29b9dcab-32d0-4854-b903-c81f133031ec" path="/var/lib/kubelet/pods/29b9dcab-32d0-4854-b903-c81f133031ec/volumes" Nov 28 08:56:31 crc kubenswrapper[4946]: I1128 08:56:31.991348 4946 scope.go:117] "RemoveContainer" containerID="39cf365640a94edb527a038b876ca944a2f17e9a393ff02023ee1ab525b0e376" Nov 28 08:56:31 crc kubenswrapper[4946]: E1128 08:56:31.992420 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:56:39 crc kubenswrapper[4946]: I1128 08:56:39.033538 4946 scope.go:117] "RemoveContainer" containerID="9c7f8bdd9a6dc0da3cf813d0c3fcaa42c57e69d3b550326c129e30628f15116e" Nov 28 08:56:39 crc kubenswrapper[4946]: I1128 08:56:39.065774 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-dj9lp"] Nov 28 08:56:39 crc kubenswrapper[4946]: I1128 08:56:39.075697 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-dj9lp"] Nov 28 08:56:39 crc kubenswrapper[4946]: I1128 08:56:39.085182 4946 scope.go:117] "RemoveContainer" containerID="9fea45ecdbf06b48e01e977b867a6dae2c334519ba5b66a2a30971852c90c63c" Nov 28 08:56:39 crc kubenswrapper[4946]: I1128 08:56:39.132619 4946 scope.go:117] "RemoveContainer" containerID="aabed6754523ef4c0afb6dd0f1afb470741353e55042d71601d2e5a6ae491ede" Nov 28 08:56:40 crc kubenswrapper[4946]: I1128 08:56:40.010875 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="859d5115-6c6c-4452-b0f7-9a031bc502ee" path="/var/lib/kubelet/pods/859d5115-6c6c-4452-b0f7-9a031bc502ee/volumes" Nov 28 08:56:42 crc kubenswrapper[4946]: I1128 08:56:42.990791 4946 scope.go:117] "RemoveContainer" containerID="39cf365640a94edb527a038b876ca944a2f17e9a393ff02023ee1ab525b0e376" Nov 28 08:56:42 crc kubenswrapper[4946]: E1128 08:56:42.991332 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" 
podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:56:56 crc kubenswrapper[4946]: I1128 08:56:56.000826 4946 scope.go:117] "RemoveContainer" containerID="39cf365640a94edb527a038b876ca944a2f17e9a393ff02023ee1ab525b0e376" Nov 28 08:56:56 crc kubenswrapper[4946]: E1128 08:56:56.002044 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.035252 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-77bcbd946c-4jcpk"] Nov 28 08:57:08 crc kubenswrapper[4946]: E1128 08:57:08.036210 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa8dcfd6-ac4a-4c3a-9b0a-fa68230ba619" containerName="extract-utilities" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.036227 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa8dcfd6-ac4a-4c3a-9b0a-fa68230ba619" containerName="extract-utilities" Nov 28 08:57:08 crc kubenswrapper[4946]: E1128 08:57:08.036240 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa8dcfd6-ac4a-4c3a-9b0a-fa68230ba619" containerName="registry-server" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.036249 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa8dcfd6-ac4a-4c3a-9b0a-fa68230ba619" containerName="registry-server" Nov 28 08:57:08 crc kubenswrapper[4946]: E1128 08:57:08.036270 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="419b30ad-96ca-4a84-9dbe-21048d48eaff" containerName="extract-content" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.036277 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="419b30ad-96ca-4a84-9dbe-21048d48eaff" containerName="extract-content" Nov 28 08:57:08 crc kubenswrapper[4946]: E1128 08:57:08.036308 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e7a5b4c-8cd5-418d-b691-194af2ba7b0a" containerName="registry-server" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.036321 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e7a5b4c-8cd5-418d-b691-194af2ba7b0a" containerName="registry-server" Nov 28 08:57:08 crc kubenswrapper[4946]: E1128 08:57:08.036340 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e7a5b4c-8cd5-418d-b691-194af2ba7b0a" containerName="extract-content" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.036347 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e7a5b4c-8cd5-418d-b691-194af2ba7b0a" containerName="extract-content" Nov 28 08:57:08 crc kubenswrapper[4946]: E1128 08:57:08.036364 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="419b30ad-96ca-4a84-9dbe-21048d48eaff" containerName="registry-server" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.036371 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="419b30ad-96ca-4a84-9dbe-21048d48eaff" containerName="registry-server" Nov 28 08:57:08 crc kubenswrapper[4946]: E1128 08:57:08.036385 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e7a5b4c-8cd5-418d-b691-194af2ba7b0a" containerName="extract-utilities" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.036394 4946 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="0e7a5b4c-8cd5-418d-b691-194af2ba7b0a" containerName="extract-utilities" Nov 28 08:57:08 crc kubenswrapper[4946]: E1128 08:57:08.036412 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="419b30ad-96ca-4a84-9dbe-21048d48eaff" containerName="extract-utilities" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.036420 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="419b30ad-96ca-4a84-9dbe-21048d48eaff" containerName="extract-utilities" Nov 28 08:57:08 crc kubenswrapper[4946]: E1128 08:57:08.036437 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa8dcfd6-ac4a-4c3a-9b0a-fa68230ba619" containerName="extract-content" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.036446 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa8dcfd6-ac4a-4c3a-9b0a-fa68230ba619" containerName="extract-content" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.036684 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="419b30ad-96ca-4a84-9dbe-21048d48eaff" containerName="registry-server" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.036708 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e7a5b4c-8cd5-418d-b691-194af2ba7b0a" containerName="registry-server" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.036718 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa8dcfd6-ac4a-4c3a-9b0a-fa68230ba619" containerName="registry-server" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.038033 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-77bcbd946c-4jcpk" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.044679 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-2p8jv" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.045114 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.045326 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.047450 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.058084 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-77bcbd946c-4jcpk"] Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.077459 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.077756 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="896ed951-3552-439f-836f-245b73b84c91" containerName="glance-log" containerID="cri-o://39b1e173baf848e7f2d3eed8b494a2a7cdd12f726ffd780d4714c4716666d117" gracePeriod=30 Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.078209 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="896ed951-3552-439f-836f-245b73b84c91" containerName="glance-httpd" containerID="cri-o://3575c48b97a5fa07ec0dcbc91c1d074c45ad8e692b4e83b4731bd5fc878ab9e6" gracePeriod=30 Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.094639 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e54031ee-9971-4d4b-8e34-204b6053b2e2-scripts\") pod \"horizon-77bcbd946c-4jcpk\" (UID: \"e54031ee-9971-4d4b-8e34-204b6053b2e2\") " pod="openstack/horizon-77bcbd946c-4jcpk" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.094695 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e54031ee-9971-4d4b-8e34-204b6053b2e2-horizon-secret-key\") pod \"horizon-77bcbd946c-4jcpk\" (UID: \"e54031ee-9971-4d4b-8e34-204b6053b2e2\") " pod="openstack/horizon-77bcbd946c-4jcpk" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.094774 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e54031ee-9971-4d4b-8e34-204b6053b2e2-logs\") pod \"horizon-77bcbd946c-4jcpk\" (UID: \"e54031ee-9971-4d4b-8e34-204b6053b2e2\") " pod="openstack/horizon-77bcbd946c-4jcpk" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.094813 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg2gt\" (UniqueName: \"kubernetes.io/projected/e54031ee-9971-4d4b-8e34-204b6053b2e2-kube-api-access-gg2gt\") pod \"horizon-77bcbd946c-4jcpk\" (UID: \"e54031ee-9971-4d4b-8e34-204b6053b2e2\") " pod="openstack/horizon-77bcbd946c-4jcpk" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.094842 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e54031ee-9971-4d4b-8e34-204b6053b2e2-config-data\") pod \"horizon-77bcbd946c-4jcpk\" (UID: \"e54031ee-9971-4d4b-8e34-204b6053b2e2\") " pod="openstack/horizon-77bcbd946c-4jcpk" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.125361 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5db78f448f-xrgv2"] Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.127038 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5db78f448f-xrgv2" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.142930 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5db78f448f-xrgv2"] Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.156219 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.156507 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e32f1876-143a-4c22-b70f-19c9beaee9f5" containerName="glance-log" containerID="cri-o://31cf236dfe5b715766773673ba523d31b8564e0d30a333ed9e7c013532a72679" gracePeriod=30 Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.156680 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e32f1876-143a-4c22-b70f-19c9beaee9f5" containerName="glance-httpd" containerID="cri-o://086b37a34495e44d8f45fc6bdd971d9cf1bf95f0b2df9805cbcfaf057ca1e41f" gracePeriod=30 Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.197375 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e54031ee-9971-4d4b-8e34-204b6053b2e2-scripts\") pod \"horizon-77bcbd946c-4jcpk\" (UID: \"e54031ee-9971-4d4b-8e34-204b6053b2e2\") " pod="openstack/horizon-77bcbd946c-4jcpk" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.197421 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/10b51e57-70c4-43f4-8bf3-92261a7875eb-config-data\") pod \"horizon-5db78f448f-xrgv2\" (UID: \"10b51e57-70c4-43f4-8bf3-92261a7875eb\") " pod="openstack/horizon-5db78f448f-xrgv2" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.197445 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/10b51e57-70c4-43f4-8bf3-92261a7875eb-scripts\") pod \"horizon-5db78f448f-xrgv2\" (UID: \"10b51e57-70c4-43f4-8bf3-92261a7875eb\") " pod="openstack/horizon-5db78f448f-xrgv2" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.197475 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10b51e57-70c4-43f4-8bf3-92261a7875eb-logs\") pod \"horizon-5db78f448f-xrgv2\" (UID: \"10b51e57-70c4-43f4-8bf3-92261a7875eb\") " pod="openstack/horizon-5db78f448f-xrgv2" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.197502 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e54031ee-9971-4d4b-8e34-204b6053b2e2-horizon-secret-key\") pod \"horizon-77bcbd946c-4jcpk\" (UID: \"e54031ee-9971-4d4b-8e34-204b6053b2e2\") " pod="openstack/horizon-77bcbd946c-4jcpk" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.197541 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/10b51e57-70c4-43f4-8bf3-92261a7875eb-horizon-secret-key\") pod \"horizon-5db78f448f-xrgv2\" (UID: \"10b51e57-70c4-43f4-8bf3-92261a7875eb\") " pod="openstack/horizon-5db78f448f-xrgv2" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.197579 4946 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e54031ee-9971-4d4b-8e34-204b6053b2e2-logs\") pod \"horizon-77bcbd946c-4jcpk\" (UID: \"e54031ee-9971-4d4b-8e34-204b6053b2e2\") " pod="openstack/horizon-77bcbd946c-4jcpk" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.197607 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg2gt\" (UniqueName: \"kubernetes.io/projected/e54031ee-9971-4d4b-8e34-204b6053b2e2-kube-api-access-gg2gt\") pod \"horizon-77bcbd946c-4jcpk\" (UID: \"e54031ee-9971-4d4b-8e34-204b6053b2e2\") " pod="openstack/horizon-77bcbd946c-4jcpk" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.197626 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e54031ee-9971-4d4b-8e34-204b6053b2e2-config-data\") pod \"horizon-77bcbd946c-4jcpk\" (UID: \"e54031ee-9971-4d4b-8e34-204b6053b2e2\") " pod="openstack/horizon-77bcbd946c-4jcpk" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.197658 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67zjz\" (UniqueName: \"kubernetes.io/projected/10b51e57-70c4-43f4-8bf3-92261a7875eb-kube-api-access-67zjz\") pod \"horizon-5db78f448f-xrgv2\" (UID: \"10b51e57-70c4-43f4-8bf3-92261a7875eb\") " pod="openstack/horizon-5db78f448f-xrgv2" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.198303 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e54031ee-9971-4d4b-8e34-204b6053b2e2-scripts\") pod \"horizon-77bcbd946c-4jcpk\" (UID: \"e54031ee-9971-4d4b-8e34-204b6053b2e2\") " pod="openstack/horizon-77bcbd946c-4jcpk" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.198989 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e54031ee-9971-4d4b-8e34-204b6053b2e2-logs\") pod \"horizon-77bcbd946c-4jcpk\" (UID: \"e54031ee-9971-4d4b-8e34-204b6053b2e2\") " pod="openstack/horizon-77bcbd946c-4jcpk" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.199286 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e54031ee-9971-4d4b-8e34-204b6053b2e2-config-data\") pod \"horizon-77bcbd946c-4jcpk\" (UID: \"e54031ee-9971-4d4b-8e34-204b6053b2e2\") " pod="openstack/horizon-77bcbd946c-4jcpk" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.206071 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e54031ee-9971-4d4b-8e34-204b6053b2e2-horizon-secret-key\") pod \"horizon-77bcbd946c-4jcpk\" (UID: \"e54031ee-9971-4d4b-8e34-204b6053b2e2\") " pod="openstack/horizon-77bcbd946c-4jcpk" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.213436 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg2gt\" (UniqueName: \"kubernetes.io/projected/e54031ee-9971-4d4b-8e34-204b6053b2e2-kube-api-access-gg2gt\") pod \"horizon-77bcbd946c-4jcpk\" (UID: \"e54031ee-9971-4d4b-8e34-204b6053b2e2\") " pod="openstack/horizon-77bcbd946c-4jcpk" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.253135 4946 generic.go:334] "Generic (PLEG): container finished" podID="896ed951-3552-439f-836f-245b73b84c91" containerID="39b1e173baf848e7f2d3eed8b494a2a7cdd12f726ffd780d4714c4716666d117" exitCode=143 Nov 28 08:57:08 crc 
kubenswrapper[4946]: I1128 08:57:08.253207 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"896ed951-3552-439f-836f-245b73b84c91","Type":"ContainerDied","Data":"39b1e173baf848e7f2d3eed8b494a2a7cdd12f726ffd780d4714c4716666d117"} Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.299239 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/10b51e57-70c4-43f4-8bf3-92261a7875eb-horizon-secret-key\") pod \"horizon-5db78f448f-xrgv2\" (UID: \"10b51e57-70c4-43f4-8bf3-92261a7875eb\") " pod="openstack/horizon-5db78f448f-xrgv2" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.299344 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67zjz\" (UniqueName: \"kubernetes.io/projected/10b51e57-70c4-43f4-8bf3-92261a7875eb-kube-api-access-67zjz\") pod \"horizon-5db78f448f-xrgv2\" (UID: \"10b51e57-70c4-43f4-8bf3-92261a7875eb\") " pod="openstack/horizon-5db78f448f-xrgv2" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.299415 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/10b51e57-70c4-43f4-8bf3-92261a7875eb-config-data\") pod \"horizon-5db78f448f-xrgv2\" (UID: \"10b51e57-70c4-43f4-8bf3-92261a7875eb\") " pod="openstack/horizon-5db78f448f-xrgv2" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.299442 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/10b51e57-70c4-43f4-8bf3-92261a7875eb-scripts\") pod \"horizon-5db78f448f-xrgv2\" (UID: \"10b51e57-70c4-43f4-8bf3-92261a7875eb\") " pod="openstack/horizon-5db78f448f-xrgv2" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.299475 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10b51e57-70c4-43f4-8bf3-92261a7875eb-logs\") pod \"horizon-5db78f448f-xrgv2\" (UID: \"10b51e57-70c4-43f4-8bf3-92261a7875eb\") " pod="openstack/horizon-5db78f448f-xrgv2" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.299948 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10b51e57-70c4-43f4-8bf3-92261a7875eb-logs\") pod \"horizon-5db78f448f-xrgv2\" (UID: \"10b51e57-70c4-43f4-8bf3-92261a7875eb\") " pod="openstack/horizon-5db78f448f-xrgv2" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.300334 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/10b51e57-70c4-43f4-8bf3-92261a7875eb-scripts\") pod \"horizon-5db78f448f-xrgv2\" (UID: \"10b51e57-70c4-43f4-8bf3-92261a7875eb\") " pod="openstack/horizon-5db78f448f-xrgv2" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.300620 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/10b51e57-70c4-43f4-8bf3-92261a7875eb-config-data\") pod \"horizon-5db78f448f-xrgv2\" (UID: \"10b51e57-70c4-43f4-8bf3-92261a7875eb\") " pod="openstack/horizon-5db78f448f-xrgv2" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.302690 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/10b51e57-70c4-43f4-8bf3-92261a7875eb-horizon-secret-key\") pod \"horizon-5db78f448f-xrgv2\" (UID: 
\"10b51e57-70c4-43f4-8bf3-92261a7875eb\") " pod="openstack/horizon-5db78f448f-xrgv2" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.314001 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67zjz\" (UniqueName: \"kubernetes.io/projected/10b51e57-70c4-43f4-8bf3-92261a7875eb-kube-api-access-67zjz\") pod \"horizon-5db78f448f-xrgv2\" (UID: \"10b51e57-70c4-43f4-8bf3-92261a7875eb\") " pod="openstack/horizon-5db78f448f-xrgv2" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.368187 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-77bcbd946c-4jcpk" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.450453 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5db78f448f-xrgv2" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.568632 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-77bcbd946c-4jcpk"] Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.599003 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5bbf848689-ld46g"] Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.601415 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5bbf848689-ld46g" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.637388 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5bbf848689-ld46g"] Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.717284 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a4155e7d-1a52-46b9-9838-aa62c08bd7ad-horizon-secret-key\") pod \"horizon-5bbf848689-ld46g\" (UID: \"a4155e7d-1a52-46b9-9838-aa62c08bd7ad\") " pod="openstack/horizon-5bbf848689-ld46g" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.717374 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a4155e7d-1a52-46b9-9838-aa62c08bd7ad-config-data\") pod \"horizon-5bbf848689-ld46g\" (UID: \"a4155e7d-1a52-46b9-9838-aa62c08bd7ad\") " pod="openstack/horizon-5bbf848689-ld46g" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.717604 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4155e7d-1a52-46b9-9838-aa62c08bd7ad-logs\") pod \"horizon-5bbf848689-ld46g\" (UID: \"a4155e7d-1a52-46b9-9838-aa62c08bd7ad\") " pod="openstack/horizon-5bbf848689-ld46g" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.717772 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgr47\" (UniqueName: \"kubernetes.io/projected/a4155e7d-1a52-46b9-9838-aa62c08bd7ad-kube-api-access-bgr47\") pod \"horizon-5bbf848689-ld46g\" (UID: \"a4155e7d-1a52-46b9-9838-aa62c08bd7ad\") " pod="openstack/horizon-5bbf848689-ld46g" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.717895 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a4155e7d-1a52-46b9-9838-aa62c08bd7ad-scripts\") pod \"horizon-5bbf848689-ld46g\" (UID: \"a4155e7d-1a52-46b9-9838-aa62c08bd7ad\") " pod="openstack/horizon-5bbf848689-ld46g" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.822843 4946 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a4155e7d-1a52-46b9-9838-aa62c08bd7ad-horizon-secret-key\") pod \"horizon-5bbf848689-ld46g\" (UID: \"a4155e7d-1a52-46b9-9838-aa62c08bd7ad\") " pod="openstack/horizon-5bbf848689-ld46g" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.822924 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a4155e7d-1a52-46b9-9838-aa62c08bd7ad-config-data\") pod \"horizon-5bbf848689-ld46g\" (UID: \"a4155e7d-1a52-46b9-9838-aa62c08bd7ad\") " pod="openstack/horizon-5bbf848689-ld46g" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.822967 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4155e7d-1a52-46b9-9838-aa62c08bd7ad-logs\") pod \"horizon-5bbf848689-ld46g\" (UID: \"a4155e7d-1a52-46b9-9838-aa62c08bd7ad\") " pod="openstack/horizon-5bbf848689-ld46g" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.823014 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgr47\" (UniqueName: \"kubernetes.io/projected/a4155e7d-1a52-46b9-9838-aa62c08bd7ad-kube-api-access-bgr47\") pod \"horizon-5bbf848689-ld46g\" (UID: \"a4155e7d-1a52-46b9-9838-aa62c08bd7ad\") " pod="openstack/horizon-5bbf848689-ld46g" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.823057 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a4155e7d-1a52-46b9-9838-aa62c08bd7ad-scripts\") pod \"horizon-5bbf848689-ld46g\" (UID: \"a4155e7d-1a52-46b9-9838-aa62c08bd7ad\") " pod="openstack/horizon-5bbf848689-ld46g" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.823487 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4155e7d-1a52-46b9-9838-aa62c08bd7ad-logs\") pod \"horizon-5bbf848689-ld46g\" (UID: \"a4155e7d-1a52-46b9-9838-aa62c08bd7ad\") " pod="openstack/horizon-5bbf848689-ld46g" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.823877 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a4155e7d-1a52-46b9-9838-aa62c08bd7ad-scripts\") pod \"horizon-5bbf848689-ld46g\" (UID: \"a4155e7d-1a52-46b9-9838-aa62c08bd7ad\") " pod="openstack/horizon-5bbf848689-ld46g" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.824432 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a4155e7d-1a52-46b9-9838-aa62c08bd7ad-config-data\") pod \"horizon-5bbf848689-ld46g\" (UID: \"a4155e7d-1a52-46b9-9838-aa62c08bd7ad\") " pod="openstack/horizon-5bbf848689-ld46g" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.827850 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a4155e7d-1a52-46b9-9838-aa62c08bd7ad-horizon-secret-key\") pod \"horizon-5bbf848689-ld46g\" (UID: \"a4155e7d-1a52-46b9-9838-aa62c08bd7ad\") " pod="openstack/horizon-5bbf848689-ld46g" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.841312 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgr47\" (UniqueName: \"kubernetes.io/projected/a4155e7d-1a52-46b9-9838-aa62c08bd7ad-kube-api-access-bgr47\") pod 
\"horizon-5bbf848689-ld46g\" (UID: \"a4155e7d-1a52-46b9-9838-aa62c08bd7ad\") " pod="openstack/horizon-5bbf848689-ld46g" Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.931946 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-77bcbd946c-4jcpk"] Nov 28 08:57:08 crc kubenswrapper[4946]: I1128 08:57:08.973264 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5bbf848689-ld46g" Nov 28 08:57:09 crc kubenswrapper[4946]: I1128 08:57:09.058381 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5db78f448f-xrgv2"] Nov 28 08:57:09 crc kubenswrapper[4946]: I1128 08:57:09.272544 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77bcbd946c-4jcpk" event={"ID":"e54031ee-9971-4d4b-8e34-204b6053b2e2","Type":"ContainerStarted","Data":"319f09752399fbbfde0a2b1cf2c458e4939c67f552e3567efa7404076d6c8bc3"} Nov 28 08:57:09 crc kubenswrapper[4946]: I1128 08:57:09.275305 4946 generic.go:334] "Generic (PLEG): container finished" podID="e32f1876-143a-4c22-b70f-19c9beaee9f5" containerID="31cf236dfe5b715766773673ba523d31b8564e0d30a333ed9e7c013532a72679" exitCode=143 Nov 28 08:57:09 crc kubenswrapper[4946]: I1128 08:57:09.275357 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e32f1876-143a-4c22-b70f-19c9beaee9f5","Type":"ContainerDied","Data":"31cf236dfe5b715766773673ba523d31b8564e0d30a333ed9e7c013532a72679"} Nov 28 08:57:09 crc kubenswrapper[4946]: I1128 08:57:09.277024 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5db78f448f-xrgv2" event={"ID":"10b51e57-70c4-43f4-8bf3-92261a7875eb","Type":"ContainerStarted","Data":"580a7481339204be332861b3659f42cdf3616f170fdc4952e3c261792c0cca6c"} Nov 28 08:57:09 crc kubenswrapper[4946]: I1128 08:57:09.465698 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5bbf848689-ld46g"] Nov 28 08:57:10 crc kubenswrapper[4946]: I1128 08:57:10.289016 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5bbf848689-ld46g" event={"ID":"a4155e7d-1a52-46b9-9838-aa62c08bd7ad","Type":"ContainerStarted","Data":"c4363ffb57ea463f621fd8e50157880aa89b8106c4529e2fd2515470b6189aaf"} Nov 28 08:57:10 crc kubenswrapper[4946]: I1128 08:57:10.989848 4946 scope.go:117] "RemoveContainer" containerID="39cf365640a94edb527a038b876ca944a2f17e9a393ff02023ee1ab525b0e376" Nov 28 08:57:10 crc kubenswrapper[4946]: E1128 08:57:10.990172 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:57:11 crc kubenswrapper[4946]: I1128 08:57:11.767862 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 08:57:11 crc kubenswrapper[4946]: I1128 08:57:11.898466 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/896ed951-3552-439f-836f-245b73b84c91-ceph\") pod \"896ed951-3552-439f-836f-245b73b84c91\" (UID: \"896ed951-3552-439f-836f-245b73b84c91\") " Nov 28 08:57:11 crc kubenswrapper[4946]: I1128 08:57:11.898556 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/896ed951-3552-439f-836f-245b73b84c91-combined-ca-bundle\") pod \"896ed951-3552-439f-836f-245b73b84c91\" (UID: \"896ed951-3552-439f-836f-245b73b84c91\") " Nov 28 08:57:11 crc kubenswrapper[4946]: I1128 08:57:11.899417 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/896ed951-3552-439f-836f-245b73b84c91-scripts\") pod \"896ed951-3552-439f-836f-245b73b84c91\" (UID: \"896ed951-3552-439f-836f-245b73b84c91\") " Nov 28 08:57:11 crc kubenswrapper[4946]: I1128 08:57:11.899459 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cw67t\" (UniqueName: \"kubernetes.io/projected/896ed951-3552-439f-836f-245b73b84c91-kube-api-access-cw67t\") pod \"896ed951-3552-439f-836f-245b73b84c91\" (UID: \"896ed951-3552-439f-836f-245b73b84c91\") " Nov 28 08:57:11 crc kubenswrapper[4946]: I1128 08:57:11.899498 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/896ed951-3552-439f-836f-245b73b84c91-config-data\") pod \"896ed951-3552-439f-836f-245b73b84c91\" (UID: \"896ed951-3552-439f-836f-245b73b84c91\") " Nov 28 08:57:11 crc kubenswrapper[4946]: I1128 08:57:11.899555 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/896ed951-3552-439f-836f-245b73b84c91-logs\") pod \"896ed951-3552-439f-836f-245b73b84c91\" (UID: \"896ed951-3552-439f-836f-245b73b84c91\") " Nov 28 08:57:11 crc kubenswrapper[4946]: I1128 08:57:11.899627 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/896ed951-3552-439f-836f-245b73b84c91-httpd-run\") pod \"896ed951-3552-439f-836f-245b73b84c91\" (UID: \"896ed951-3552-439f-836f-245b73b84c91\") " Nov 28 08:57:11 crc kubenswrapper[4946]: I1128 08:57:11.900285 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/896ed951-3552-439f-836f-245b73b84c91-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "896ed951-3552-439f-836f-245b73b84c91" (UID: "896ed951-3552-439f-836f-245b73b84c91"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 08:57:11 crc kubenswrapper[4946]: I1128 08:57:11.903322 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/896ed951-3552-439f-836f-245b73b84c91-ceph" (OuterVolumeSpecName: "ceph") pod "896ed951-3552-439f-836f-245b73b84c91" (UID: "896ed951-3552-439f-836f-245b73b84c91"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:57:11 crc kubenswrapper[4946]: I1128 08:57:11.904558 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/896ed951-3552-439f-836f-245b73b84c91-logs" (OuterVolumeSpecName: "logs") pod "896ed951-3552-439f-836f-245b73b84c91" (UID: "896ed951-3552-439f-836f-245b73b84c91"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 08:57:11 crc kubenswrapper[4946]: I1128 08:57:11.905566 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/896ed951-3552-439f-836f-245b73b84c91-scripts" (OuterVolumeSpecName: "scripts") pod "896ed951-3552-439f-836f-245b73b84c91" (UID: "896ed951-3552-439f-836f-245b73b84c91"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:57:11 crc kubenswrapper[4946]: I1128 08:57:11.906780 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/896ed951-3552-439f-836f-245b73b84c91-kube-api-access-cw67t" (OuterVolumeSpecName: "kube-api-access-cw67t") pod "896ed951-3552-439f-836f-245b73b84c91" (UID: "896ed951-3552-439f-836f-245b73b84c91"). InnerVolumeSpecName "kube-api-access-cw67t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:57:11 crc kubenswrapper[4946]: I1128 08:57:11.926392 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 08:57:11 crc kubenswrapper[4946]: I1128 08:57:11.931613 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/896ed951-3552-439f-836f-245b73b84c91-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "896ed951-3552-439f-836f-245b73b84c91" (UID: "896ed951-3552-439f-836f-245b73b84c91"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:57:11 crc kubenswrapper[4946]: I1128 08:57:11.970012 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/896ed951-3552-439f-836f-245b73b84c91-config-data" (OuterVolumeSpecName: "config-data") pod "896ed951-3552-439f-836f-245b73b84c91" (UID: "896ed951-3552-439f-836f-245b73b84c91"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.000713 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e32f1876-143a-4c22-b70f-19c9beaee9f5-logs\") pod \"e32f1876-143a-4c22-b70f-19c9beaee9f5\" (UID: \"e32f1876-143a-4c22-b70f-19c9beaee9f5\") " Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.000826 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e32f1876-143a-4c22-b70f-19c9beaee9f5-httpd-run\") pod \"e32f1876-143a-4c22-b70f-19c9beaee9f5\" (UID: \"e32f1876-143a-4c22-b70f-19c9beaee9f5\") " Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.000881 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e32f1876-143a-4c22-b70f-19c9beaee9f5-config-data\") pod \"e32f1876-143a-4c22-b70f-19c9beaee9f5\" (UID: \"e32f1876-143a-4c22-b70f-19c9beaee9f5\") " Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.000900 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e32f1876-143a-4c22-b70f-19c9beaee9f5-scripts\") pod \"e32f1876-143a-4c22-b70f-19c9beaee9f5\" (UID: \"e32f1876-143a-4c22-b70f-19c9beaee9f5\") " Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.000968 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rqsv\" (UniqueName: \"kubernetes.io/projected/e32f1876-143a-4c22-b70f-19c9beaee9f5-kube-api-access-8rqsv\") pod \"e32f1876-143a-4c22-b70f-19c9beaee9f5\" (UID: \"e32f1876-143a-4c22-b70f-19c9beaee9f5\") " Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.000985 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e32f1876-143a-4c22-b70f-19c9beaee9f5-ceph\") pod \"e32f1876-143a-4c22-b70f-19c9beaee9f5\" (UID: \"e32f1876-143a-4c22-b70f-19c9beaee9f5\") " Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.001084 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e32f1876-143a-4c22-b70f-19c9beaee9f5-combined-ca-bundle\") pod \"e32f1876-143a-4c22-b70f-19c9beaee9f5\" (UID: \"e32f1876-143a-4c22-b70f-19c9beaee9f5\") " Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.001469 4946 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/896ed951-3552-439f-836f-245b73b84c91-logs\") on node \"crc\" DevicePath \"\"" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.001496 4946 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/896ed951-3552-439f-836f-245b73b84c91-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.001505 4946 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/896ed951-3552-439f-836f-245b73b84c91-ceph\") on node \"crc\" DevicePath \"\"" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.001514 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/896ed951-3552-439f-836f-245b73b84c91-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 
08:57:12.001522 4946 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/896ed951-3552-439f-836f-245b73b84c91-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.001531 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cw67t\" (UniqueName: \"kubernetes.io/projected/896ed951-3552-439f-836f-245b73b84c91-kube-api-access-cw67t\") on node \"crc\" DevicePath \"\"" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.001538 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/896ed951-3552-439f-836f-245b73b84c91-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.002732 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e32f1876-143a-4c22-b70f-19c9beaee9f5-logs" (OuterVolumeSpecName: "logs") pod "e32f1876-143a-4c22-b70f-19c9beaee9f5" (UID: "e32f1876-143a-4c22-b70f-19c9beaee9f5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.002975 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e32f1876-143a-4c22-b70f-19c9beaee9f5-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e32f1876-143a-4c22-b70f-19c9beaee9f5" (UID: "e32f1876-143a-4c22-b70f-19c9beaee9f5"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.006948 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e32f1876-143a-4c22-b70f-19c9beaee9f5-kube-api-access-8rqsv" (OuterVolumeSpecName: "kube-api-access-8rqsv") pod "e32f1876-143a-4c22-b70f-19c9beaee9f5" (UID: "e32f1876-143a-4c22-b70f-19c9beaee9f5"). InnerVolumeSpecName "kube-api-access-8rqsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.006993 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e32f1876-143a-4c22-b70f-19c9beaee9f5-scripts" (OuterVolumeSpecName: "scripts") pod "e32f1876-143a-4c22-b70f-19c9beaee9f5" (UID: "e32f1876-143a-4c22-b70f-19c9beaee9f5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.007382 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e32f1876-143a-4c22-b70f-19c9beaee9f5-ceph" (OuterVolumeSpecName: "ceph") pod "e32f1876-143a-4c22-b70f-19c9beaee9f5" (UID: "e32f1876-143a-4c22-b70f-19c9beaee9f5"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.025495 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e32f1876-143a-4c22-b70f-19c9beaee9f5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e32f1876-143a-4c22-b70f-19c9beaee9f5" (UID: "e32f1876-143a-4c22-b70f-19c9beaee9f5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.063215 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e32f1876-143a-4c22-b70f-19c9beaee9f5-config-data" (OuterVolumeSpecName: "config-data") pod "e32f1876-143a-4c22-b70f-19c9beaee9f5" (UID: "e32f1876-143a-4c22-b70f-19c9beaee9f5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.103591 4946 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e32f1876-143a-4c22-b70f-19c9beaee9f5-logs\") on node \"crc\" DevicePath \"\"" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.103625 4946 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e32f1876-143a-4c22-b70f-19c9beaee9f5-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.103636 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e32f1876-143a-4c22-b70f-19c9beaee9f5-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.103644 4946 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e32f1876-143a-4c22-b70f-19c9beaee9f5-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.103653 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rqsv\" (UniqueName: \"kubernetes.io/projected/e32f1876-143a-4c22-b70f-19c9beaee9f5-kube-api-access-8rqsv\") on node \"crc\" DevicePath \"\"" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.103663 4946 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e32f1876-143a-4c22-b70f-19c9beaee9f5-ceph\") on node \"crc\" DevicePath \"\"" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.103671 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e32f1876-143a-4c22-b70f-19c9beaee9f5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.336184 4946 generic.go:334] "Generic (PLEG): container finished" podID="896ed951-3552-439f-836f-245b73b84c91" containerID="3575c48b97a5fa07ec0dcbc91c1d074c45ad8e692b4e83b4731bd5fc878ab9e6" exitCode=0 Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.336273 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"896ed951-3552-439f-836f-245b73b84c91","Type":"ContainerDied","Data":"3575c48b97a5fa07ec0dcbc91c1d074c45ad8e692b4e83b4731bd5fc878ab9e6"} Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.336275 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.336308 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"896ed951-3552-439f-836f-245b73b84c91","Type":"ContainerDied","Data":"bae2944c8e4275496b1088ba927ca3b909a54bb6334e3cdc5643183c52cc059b"} Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.336329 4946 scope.go:117] "RemoveContainer" containerID="3575c48b97a5fa07ec0dcbc91c1d074c45ad8e692b4e83b4731bd5fc878ab9e6" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.340101 4946 generic.go:334] "Generic (PLEG): container finished" podID="e32f1876-143a-4c22-b70f-19c9beaee9f5" containerID="086b37a34495e44d8f45fc6bdd971d9cf1bf95f0b2df9805cbcfaf057ca1e41f" exitCode=0 Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.340144 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e32f1876-143a-4c22-b70f-19c9beaee9f5","Type":"ContainerDied","Data":"086b37a34495e44d8f45fc6bdd971d9cf1bf95f0b2df9805cbcfaf057ca1e41f"} Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.340175 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e32f1876-143a-4c22-b70f-19c9beaee9f5","Type":"ContainerDied","Data":"69ac4319b546b1d76b7207382b0ee2501ccc1b4e554f45fa30c0ce9dac86d731"} Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.340231 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.382036 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.399669 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.402003 4946 scope.go:117] "RemoveContainer" containerID="39b1e173baf848e7f2d3eed8b494a2a7cdd12f726ffd780d4714c4716666d117" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.431583 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.439403 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.451939 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 08:57:12 crc kubenswrapper[4946]: E1128 08:57:12.452363 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="896ed951-3552-439f-836f-245b73b84c91" containerName="glance-log" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.452379 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="896ed951-3552-439f-836f-245b73b84c91" containerName="glance-log" Nov 28 08:57:12 crc kubenswrapper[4946]: E1128 08:57:12.452402 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e32f1876-143a-4c22-b70f-19c9beaee9f5" containerName="glance-log" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.452410 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="e32f1876-143a-4c22-b70f-19c9beaee9f5" containerName="glance-log" Nov 28 08:57:12 crc kubenswrapper[4946]: E1128 08:57:12.452446 4946 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e32f1876-143a-4c22-b70f-19c9beaee9f5" containerName="glance-httpd" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.452453 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="e32f1876-143a-4c22-b70f-19c9beaee9f5" containerName="glance-httpd" Nov 28 08:57:12 crc kubenswrapper[4946]: E1128 08:57:12.452478 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="896ed951-3552-439f-836f-245b73b84c91" containerName="glance-httpd" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.452485 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="896ed951-3552-439f-836f-245b73b84c91" containerName="glance-httpd" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.452643 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="e32f1876-143a-4c22-b70f-19c9beaee9f5" containerName="glance-log" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.452664 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="896ed951-3552-439f-836f-245b73b84c91" containerName="glance-httpd" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.452674 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="e32f1876-143a-4c22-b70f-19c9beaee9f5" containerName="glance-httpd" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.452684 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="896ed951-3552-439f-836f-245b73b84c91" containerName="glance-log" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.457447 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.481431 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.483645 4946 scope.go:117] "RemoveContainer" containerID="3575c48b97a5fa07ec0dcbc91c1d074c45ad8e692b4e83b4731bd5fc878ab9e6" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.484567 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 28 08:57:12 crc kubenswrapper[4946]: E1128 08:57:12.484682 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3575c48b97a5fa07ec0dcbc91c1d074c45ad8e692b4e83b4731bd5fc878ab9e6\": container with ID starting with 3575c48b97a5fa07ec0dcbc91c1d074c45ad8e692b4e83b4731bd5fc878ab9e6 not found: ID does not exist" containerID="3575c48b97a5fa07ec0dcbc91c1d074c45ad8e692b4e83b4731bd5fc878ab9e6" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.484749 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3575c48b97a5fa07ec0dcbc91c1d074c45ad8e692b4e83b4731bd5fc878ab9e6"} err="failed to get container status \"3575c48b97a5fa07ec0dcbc91c1d074c45ad8e692b4e83b4731bd5fc878ab9e6\": rpc error: code = NotFound desc = could not find container \"3575c48b97a5fa07ec0dcbc91c1d074c45ad8e692b4e83b4731bd5fc878ab9e6\": container with ID starting with 3575c48b97a5fa07ec0dcbc91c1d074c45ad8e692b4e83b4731bd5fc878ab9e6 not found: ID does not exist" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.484777 4946 scope.go:117] "RemoveContainer" containerID="39b1e173baf848e7f2d3eed8b494a2a7cdd12f726ffd780d4714c4716666d117" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.484783 4946 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"glance-default-internal-config-data" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.485095 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-9spxc" Nov 28 08:57:12 crc kubenswrapper[4946]: E1128 08:57:12.494872 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39b1e173baf848e7f2d3eed8b494a2a7cdd12f726ffd780d4714c4716666d117\": container with ID starting with 39b1e173baf848e7f2d3eed8b494a2a7cdd12f726ffd780d4714c4716666d117 not found: ID does not exist" containerID="39b1e173baf848e7f2d3eed8b494a2a7cdd12f726ffd780d4714c4716666d117" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.494950 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39b1e173baf848e7f2d3eed8b494a2a7cdd12f726ffd780d4714c4716666d117"} err="failed to get container status \"39b1e173baf848e7f2d3eed8b494a2a7cdd12f726ffd780d4714c4716666d117\": rpc error: code = NotFound desc = could not find container \"39b1e173baf848e7f2d3eed8b494a2a7cdd12f726ffd780d4714c4716666d117\": container with ID starting with 39b1e173baf848e7f2d3eed8b494a2a7cdd12f726ffd780d4714c4716666d117 not found: ID does not exist" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.496634 4946 scope.go:117] "RemoveContainer" containerID="086b37a34495e44d8f45fc6bdd971d9cf1bf95f0b2df9805cbcfaf057ca1e41f" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.521104 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.523711 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.526886 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.534114 4946 scope.go:117] "RemoveContainer" containerID="31cf236dfe5b715766773673ba523d31b8564e0d30a333ed9e7c013532a72679" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.541780 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.568772 4946 scope.go:117] "RemoveContainer" containerID="086b37a34495e44d8f45fc6bdd971d9cf1bf95f0b2df9805cbcfaf057ca1e41f" Nov 28 08:57:12 crc kubenswrapper[4946]: E1128 08:57:12.570422 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"086b37a34495e44d8f45fc6bdd971d9cf1bf95f0b2df9805cbcfaf057ca1e41f\": container with ID starting with 086b37a34495e44d8f45fc6bdd971d9cf1bf95f0b2df9805cbcfaf057ca1e41f not found: ID does not exist" containerID="086b37a34495e44d8f45fc6bdd971d9cf1bf95f0b2df9805cbcfaf057ca1e41f" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.570456 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"086b37a34495e44d8f45fc6bdd971d9cf1bf95f0b2df9805cbcfaf057ca1e41f"} err="failed to get container status \"086b37a34495e44d8f45fc6bdd971d9cf1bf95f0b2df9805cbcfaf057ca1e41f\": rpc error: code = NotFound desc = could not find container \"086b37a34495e44d8f45fc6bdd971d9cf1bf95f0b2df9805cbcfaf057ca1e41f\": container with ID starting with 086b37a34495e44d8f45fc6bdd971d9cf1bf95f0b2df9805cbcfaf057ca1e41f not 
found: ID does not exist" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.570491 4946 scope.go:117] "RemoveContainer" containerID="31cf236dfe5b715766773673ba523d31b8564e0d30a333ed9e7c013532a72679" Nov 28 08:57:12 crc kubenswrapper[4946]: E1128 08:57:12.572143 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31cf236dfe5b715766773673ba523d31b8564e0d30a333ed9e7c013532a72679\": container with ID starting with 31cf236dfe5b715766773673ba523d31b8564e0d30a333ed9e7c013532a72679 not found: ID does not exist" containerID="31cf236dfe5b715766773673ba523d31b8564e0d30a333ed9e7c013532a72679" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.572174 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31cf236dfe5b715766773673ba523d31b8564e0d30a333ed9e7c013532a72679"} err="failed to get container status \"31cf236dfe5b715766773673ba523d31b8564e0d30a333ed9e7c013532a72679\": rpc error: code = NotFound desc = could not find container \"31cf236dfe5b715766773673ba523d31b8564e0d30a333ed9e7c013532a72679\": container with ID starting with 31cf236dfe5b715766773673ba523d31b8564e0d30a333ed9e7c013532a72679 not found: ID does not exist" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.614810 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbc7140a-4615-45a9-9f5d-6df5637a64cb-config-data\") pod \"glance-default-external-api-0\" (UID: \"cbc7140a-4615-45a9-9f5d-6df5637a64cb\") " pod="openstack/glance-default-external-api-0" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.614855 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ded096e4-45dd-4d70-adf7-4beff00d6662-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ded096e4-45dd-4d70-adf7-4beff00d6662\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.614913 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ded096e4-45dd-4d70-adf7-4beff00d6662-ceph\") pod \"glance-default-internal-api-0\" (UID: \"ded096e4-45dd-4d70-adf7-4beff00d6662\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.614936 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97hfb\" (UniqueName: \"kubernetes.io/projected/cbc7140a-4615-45a9-9f5d-6df5637a64cb-kube-api-access-97hfb\") pod \"glance-default-external-api-0\" (UID: \"cbc7140a-4615-45a9-9f5d-6df5637a64cb\") " pod="openstack/glance-default-external-api-0" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.615019 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbc7140a-4615-45a9-9f5d-6df5637a64cb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cbc7140a-4615-45a9-9f5d-6df5637a64cb\") " pod="openstack/glance-default-external-api-0" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.615190 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cbc7140a-4615-45a9-9f5d-6df5637a64cb-httpd-run\") 
pod \"glance-default-external-api-0\" (UID: \"cbc7140a-4615-45a9-9f5d-6df5637a64cb\") " pod="openstack/glance-default-external-api-0" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.615253 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ded096e4-45dd-4d70-adf7-4beff00d6662-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ded096e4-45dd-4d70-adf7-4beff00d6662\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.615370 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ded096e4-45dd-4d70-adf7-4beff00d6662-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ded096e4-45dd-4d70-adf7-4beff00d6662\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.615541 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/cbc7140a-4615-45a9-9f5d-6df5637a64cb-ceph\") pod \"glance-default-external-api-0\" (UID: \"cbc7140a-4615-45a9-9f5d-6df5637a64cb\") " pod="openstack/glance-default-external-api-0" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.615669 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ded096e4-45dd-4d70-adf7-4beff00d6662-logs\") pod \"glance-default-internal-api-0\" (UID: \"ded096e4-45dd-4d70-adf7-4beff00d6662\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.615708 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh7r6\" (UniqueName: \"kubernetes.io/projected/ded096e4-45dd-4d70-adf7-4beff00d6662-kube-api-access-mh7r6\") pod \"glance-default-internal-api-0\" (UID: \"ded096e4-45dd-4d70-adf7-4beff00d6662\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.615748 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbc7140a-4615-45a9-9f5d-6df5637a64cb-logs\") pod \"glance-default-external-api-0\" (UID: \"cbc7140a-4615-45a9-9f5d-6df5637a64cb\") " pod="openstack/glance-default-external-api-0" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.615799 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ded096e4-45dd-4d70-adf7-4beff00d6662-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ded096e4-45dd-4d70-adf7-4beff00d6662\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.615818 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbc7140a-4615-45a9-9f5d-6df5637a64cb-scripts\") pod \"glance-default-external-api-0\" (UID: \"cbc7140a-4615-45a9-9f5d-6df5637a64cb\") " pod="openstack/glance-default-external-api-0" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.718284 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ded096e4-45dd-4d70-adf7-4beff00d6662-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ded096e4-45dd-4d70-adf7-4beff00d6662\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.718327 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbc7140a-4615-45a9-9f5d-6df5637a64cb-scripts\") pod \"glance-default-external-api-0\" (UID: \"cbc7140a-4615-45a9-9f5d-6df5637a64cb\") " pod="openstack/glance-default-external-api-0" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.718349 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbc7140a-4615-45a9-9f5d-6df5637a64cb-config-data\") pod \"glance-default-external-api-0\" (UID: \"cbc7140a-4615-45a9-9f5d-6df5637a64cb\") " pod="openstack/glance-default-external-api-0" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.718370 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ded096e4-45dd-4d70-adf7-4beff00d6662-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ded096e4-45dd-4d70-adf7-4beff00d6662\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.722150 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ded096e4-45dd-4d70-adf7-4beff00d6662-ceph\") pod \"glance-default-internal-api-0\" (UID: \"ded096e4-45dd-4d70-adf7-4beff00d6662\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.722216 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97hfb\" (UniqueName: \"kubernetes.io/projected/cbc7140a-4615-45a9-9f5d-6df5637a64cb-kube-api-access-97hfb\") pod \"glance-default-external-api-0\" (UID: \"cbc7140a-4615-45a9-9f5d-6df5637a64cb\") " pod="openstack/glance-default-external-api-0" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.722247 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbc7140a-4615-45a9-9f5d-6df5637a64cb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cbc7140a-4615-45a9-9f5d-6df5637a64cb\") " pod="openstack/glance-default-external-api-0" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.722374 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cbc7140a-4615-45a9-9f5d-6df5637a64cb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cbc7140a-4615-45a9-9f5d-6df5637a64cb\") " pod="openstack/glance-default-external-api-0" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.722426 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ded096e4-45dd-4d70-adf7-4beff00d6662-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ded096e4-45dd-4d70-adf7-4beff00d6662\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.722510 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ded096e4-45dd-4d70-adf7-4beff00d6662-httpd-run\") pod 
\"glance-default-internal-api-0\" (UID: \"ded096e4-45dd-4d70-adf7-4beff00d6662\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.722618 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/cbc7140a-4615-45a9-9f5d-6df5637a64cb-ceph\") pod \"glance-default-external-api-0\" (UID: \"cbc7140a-4615-45a9-9f5d-6df5637a64cb\") " pod="openstack/glance-default-external-api-0" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.722697 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ded096e4-45dd-4d70-adf7-4beff00d6662-logs\") pod \"glance-default-internal-api-0\" (UID: \"ded096e4-45dd-4d70-adf7-4beff00d6662\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.722779 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh7r6\" (UniqueName: \"kubernetes.io/projected/ded096e4-45dd-4d70-adf7-4beff00d6662-kube-api-access-mh7r6\") pod \"glance-default-internal-api-0\" (UID: \"ded096e4-45dd-4d70-adf7-4beff00d6662\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.722814 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbc7140a-4615-45a9-9f5d-6df5637a64cb-logs\") pod \"glance-default-external-api-0\" (UID: \"cbc7140a-4615-45a9-9f5d-6df5637a64cb\") " pod="openstack/glance-default-external-api-0" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.723307 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbc7140a-4615-45a9-9f5d-6df5637a64cb-logs\") pod \"glance-default-external-api-0\" (UID: \"cbc7140a-4615-45a9-9f5d-6df5637a64cb\") " pod="openstack/glance-default-external-api-0" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.723663 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ded096e4-45dd-4d70-adf7-4beff00d6662-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ded096e4-45dd-4d70-adf7-4beff00d6662\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.724242 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ded096e4-45dd-4d70-adf7-4beff00d6662-logs\") pod \"glance-default-internal-api-0\" (UID: \"ded096e4-45dd-4d70-adf7-4beff00d6662\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.724615 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cbc7140a-4615-45a9-9f5d-6df5637a64cb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cbc7140a-4615-45a9-9f5d-6df5637a64cb\") " pod="openstack/glance-default-external-api-0" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.725291 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ded096e4-45dd-4d70-adf7-4beff00d6662-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ded096e4-45dd-4d70-adf7-4beff00d6662\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.726438 4946 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ded096e4-45dd-4d70-adf7-4beff00d6662-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ded096e4-45dd-4d70-adf7-4beff00d6662\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.727006 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbc7140a-4615-45a9-9f5d-6df5637a64cb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cbc7140a-4615-45a9-9f5d-6df5637a64cb\") " pod="openstack/glance-default-external-api-0" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.729334 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ded096e4-45dd-4d70-adf7-4beff00d6662-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ded096e4-45dd-4d70-adf7-4beff00d6662\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.730101 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ded096e4-45dd-4d70-adf7-4beff00d6662-ceph\") pod \"glance-default-internal-api-0\" (UID: \"ded096e4-45dd-4d70-adf7-4beff00d6662\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.730903 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbc7140a-4615-45a9-9f5d-6df5637a64cb-scripts\") pod \"glance-default-external-api-0\" (UID: \"cbc7140a-4615-45a9-9f5d-6df5637a64cb\") " pod="openstack/glance-default-external-api-0" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.731791 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbc7140a-4615-45a9-9f5d-6df5637a64cb-config-data\") pod \"glance-default-external-api-0\" (UID: \"cbc7140a-4615-45a9-9f5d-6df5637a64cb\") " pod="openstack/glance-default-external-api-0" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.735852 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/cbc7140a-4615-45a9-9f5d-6df5637a64cb-ceph\") pod \"glance-default-external-api-0\" (UID: \"cbc7140a-4615-45a9-9f5d-6df5637a64cb\") " pod="openstack/glance-default-external-api-0" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.741739 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97hfb\" (UniqueName: \"kubernetes.io/projected/cbc7140a-4615-45a9-9f5d-6df5637a64cb-kube-api-access-97hfb\") pod \"glance-default-external-api-0\" (UID: \"cbc7140a-4615-45a9-9f5d-6df5637a64cb\") " pod="openstack/glance-default-external-api-0" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.742139 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh7r6\" (UniqueName: \"kubernetes.io/projected/ded096e4-45dd-4d70-adf7-4beff00d6662-kube-api-access-mh7r6\") pod \"glance-default-internal-api-0\" (UID: \"ded096e4-45dd-4d70-adf7-4beff00d6662\") " pod="openstack/glance-default-internal-api-0" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.798519 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 08:57:12 crc kubenswrapper[4946]: I1128 08:57:12.849818 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 08:57:14 crc kubenswrapper[4946]: I1128 08:57:14.001867 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="896ed951-3552-439f-836f-245b73b84c91" path="/var/lib/kubelet/pods/896ed951-3552-439f-836f-245b73b84c91/volumes" Nov 28 08:57:14 crc kubenswrapper[4946]: I1128 08:57:14.004377 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e32f1876-143a-4c22-b70f-19c9beaee9f5" path="/var/lib/kubelet/pods/e32f1876-143a-4c22-b70f-19c9beaee9f5/volumes" Nov 28 08:57:17 crc kubenswrapper[4946]: I1128 08:57:17.976376 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 08:57:17 crc kubenswrapper[4946]: W1128 08:57:17.996648 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcbc7140a_4615_45a9_9f5d_6df5637a64cb.slice/crio-c6c567dcfc47002eae82027239794e068f4e5adc2a490b6b930e1a622b38f6a3 WatchSource:0}: Error finding container c6c567dcfc47002eae82027239794e068f4e5adc2a490b6b930e1a622b38f6a3: Status 404 returned error can't find the container with id c6c567dcfc47002eae82027239794e068f4e5adc2a490b6b930e1a622b38f6a3 Nov 28 08:57:18 crc kubenswrapper[4946]: I1128 08:57:18.084680 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 08:57:18 crc kubenswrapper[4946]: W1128 08:57:18.090787 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podded096e4_45dd_4d70_adf7_4beff00d6662.slice/crio-62015e8f0eebe456369ba091d25abab977d2825e13aad1511eeaacdc8b54d9eb WatchSource:0}: Error finding container 62015e8f0eebe456369ba091d25abab977d2825e13aad1511eeaacdc8b54d9eb: Status 404 returned error can't find the container with id 62015e8f0eebe456369ba091d25abab977d2825e13aad1511eeaacdc8b54d9eb Nov 28 08:57:18 crc kubenswrapper[4946]: I1128 08:57:18.437494 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77bcbd946c-4jcpk" event={"ID":"e54031ee-9971-4d4b-8e34-204b6053b2e2","Type":"ContainerStarted","Data":"3d30bc84cc75f4e4e9f3e9c5e2118ced06770fa2c67301bbd039a794eb781cba"} Nov 28 08:57:18 crc kubenswrapper[4946]: I1128 08:57:18.437762 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-77bcbd946c-4jcpk" podUID="e54031ee-9971-4d4b-8e34-204b6053b2e2" containerName="horizon" containerID="cri-o://3d30bc84cc75f4e4e9f3e9c5e2118ced06770fa2c67301bbd039a794eb781cba" gracePeriod=30 Nov 28 08:57:18 crc kubenswrapper[4946]: I1128 08:57:18.437781 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77bcbd946c-4jcpk" event={"ID":"e54031ee-9971-4d4b-8e34-204b6053b2e2","Type":"ContainerStarted","Data":"9cf158db9d89e87fc23ec0084050f06140a89769da47a0702f19437c7faecfea"} Nov 28 08:57:18 crc kubenswrapper[4946]: I1128 08:57:18.437745 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-77bcbd946c-4jcpk" podUID="e54031ee-9971-4d4b-8e34-204b6053b2e2" containerName="horizon-log" containerID="cri-o://9cf158db9d89e87fc23ec0084050f06140a89769da47a0702f19437c7faecfea" gracePeriod=30 Nov 28 08:57:18 crc kubenswrapper[4946]: I1128 
08:57:18.441355 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ded096e4-45dd-4d70-adf7-4beff00d6662","Type":"ContainerStarted","Data":"62015e8f0eebe456369ba091d25abab977d2825e13aad1511eeaacdc8b54d9eb"} Nov 28 08:57:18 crc kubenswrapper[4946]: I1128 08:57:18.445026 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5bbf848689-ld46g" event={"ID":"a4155e7d-1a52-46b9-9838-aa62c08bd7ad","Type":"ContainerStarted","Data":"dea563b7c08c9a9b005851d544f580aa601b6a7b489769b8fe41d1f0389abe00"} Nov 28 08:57:18 crc kubenswrapper[4946]: I1128 08:57:18.445075 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5bbf848689-ld46g" event={"ID":"a4155e7d-1a52-46b9-9838-aa62c08bd7ad","Type":"ContainerStarted","Data":"df8232c59465a89afd50b4e76d3957d06b45d0fd65568cdce416357145a56155"} Nov 28 08:57:18 crc kubenswrapper[4946]: I1128 08:57:18.447076 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cbc7140a-4615-45a9-9f5d-6df5637a64cb","Type":"ContainerStarted","Data":"c6c567dcfc47002eae82027239794e068f4e5adc2a490b6b930e1a622b38f6a3"} Nov 28 08:57:18 crc kubenswrapper[4946]: I1128 08:57:18.448956 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5db78f448f-xrgv2" event={"ID":"10b51e57-70c4-43f4-8bf3-92261a7875eb","Type":"ContainerStarted","Data":"60346e20f06f7a9e68228a717462a1199289d1730ad13ae3c36bcc100b4d76c8"} Nov 28 08:57:18 crc kubenswrapper[4946]: I1128 08:57:18.448980 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5db78f448f-xrgv2" event={"ID":"10b51e57-70c4-43f4-8bf3-92261a7875eb","Type":"ContainerStarted","Data":"dd06f4fb21ca6f3398e04fa9f4a94ca44a70fc205f5028ddf521a39b7ced8a41"} Nov 28 08:57:18 crc kubenswrapper[4946]: I1128 08:57:18.451381 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5db78f448f-xrgv2" Nov 28 08:57:18 crc kubenswrapper[4946]: I1128 08:57:18.451410 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5db78f448f-xrgv2" Nov 28 08:57:18 crc kubenswrapper[4946]: I1128 08:57:18.461006 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-77bcbd946c-4jcpk" podStartSLOduration=2.899013738 podStartE2EDuration="11.460987206s" podCreationTimestamp="2025-11-28 08:57:07 +0000 UTC" firstStartedPulling="2025-11-28 08:57:08.935581417 +0000 UTC m=+7483.313646528" lastFinishedPulling="2025-11-28 08:57:17.497554865 +0000 UTC m=+7491.875619996" observedRunningTime="2025-11-28 08:57:18.453594773 +0000 UTC m=+7492.831659904" watchObservedRunningTime="2025-11-28 08:57:18.460987206 +0000 UTC m=+7492.839052317" Nov 28 08:57:18 crc kubenswrapper[4946]: I1128 08:57:18.486109 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5db78f448f-xrgv2" podStartSLOduration=2.018436125 podStartE2EDuration="10.486092338s" podCreationTimestamp="2025-11-28 08:57:08 +0000 UTC" firstStartedPulling="2025-11-28 08:57:09.08068768 +0000 UTC m=+7483.458752791" lastFinishedPulling="2025-11-28 08:57:17.548343893 +0000 UTC m=+7491.926409004" observedRunningTime="2025-11-28 08:57:18.479652598 +0000 UTC m=+7492.857717709" watchObservedRunningTime="2025-11-28 08:57:18.486092338 +0000 UTC m=+7492.864157449" Nov 28 08:57:18 crc kubenswrapper[4946]: I1128 08:57:18.506991 4946 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/horizon-5bbf848689-ld46g" podStartSLOduration=2.380261896 podStartE2EDuration="10.506971115s" podCreationTimestamp="2025-11-28 08:57:08 +0000 UTC" firstStartedPulling="2025-11-28 08:57:09.477228861 +0000 UTC m=+7483.855293972" lastFinishedPulling="2025-11-28 08:57:17.60393808 +0000 UTC m=+7491.982003191" observedRunningTime="2025-11-28 08:57:18.499021168 +0000 UTC m=+7492.877086279" watchObservedRunningTime="2025-11-28 08:57:18.506971115 +0000 UTC m=+7492.885036226" Nov 28 08:57:18 crc kubenswrapper[4946]: I1128 08:57:18.973752 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5bbf848689-ld46g" Nov 28 08:57:18 crc kubenswrapper[4946]: I1128 08:57:18.974187 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5bbf848689-ld46g" Nov 28 08:57:19 crc kubenswrapper[4946]: I1128 08:57:19.466515 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ded096e4-45dd-4d70-adf7-4beff00d6662","Type":"ContainerStarted","Data":"fe399a0040b8d61ab855d2a42295a343d5e15429c64402b689c98da59472bb77"} Nov 28 08:57:19 crc kubenswrapper[4946]: I1128 08:57:19.467964 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ded096e4-45dd-4d70-adf7-4beff00d6662","Type":"ContainerStarted","Data":"19ab474360101942a22c9e631e390212ac46a6947e6f4ffe9ebab386f35a0f8a"} Nov 28 08:57:19 crc kubenswrapper[4946]: I1128 08:57:19.470368 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cbc7140a-4615-45a9-9f5d-6df5637a64cb","Type":"ContainerStarted","Data":"111bfdf1efe623e1c079c86e4e6b0ddc373786d8aed6aee2d19baf98cb108c58"} Nov 28 08:57:19 crc kubenswrapper[4946]: I1128 08:57:19.470424 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cbc7140a-4615-45a9-9f5d-6df5637a64cb","Type":"ContainerStarted","Data":"f24f31670299d0d155c646aa4e17bf42f00fa85ad1b758a8d640d1975852bbf9"} Nov 28 08:57:19 crc kubenswrapper[4946]: I1128 08:57:19.496015 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.49599621 podStartE2EDuration="7.49599621s" podCreationTimestamp="2025-11-28 08:57:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 08:57:19.486539076 +0000 UTC m=+7493.864604187" watchObservedRunningTime="2025-11-28 08:57:19.49599621 +0000 UTC m=+7493.874061321" Nov 28 08:57:19 crc kubenswrapper[4946]: I1128 08:57:19.526978 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.526960677 podStartE2EDuration="7.526960677s" podCreationTimestamp="2025-11-28 08:57:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 08:57:19.512040057 +0000 UTC m=+7493.890105208" watchObservedRunningTime="2025-11-28 08:57:19.526960677 +0000 UTC m=+7493.905025788" Nov 28 08:57:21 crc kubenswrapper[4946]: I1128 08:57:21.990288 4946 scope.go:117] "RemoveContainer" containerID="39cf365640a94edb527a038b876ca944a2f17e9a393ff02023ee1ab525b0e376" Nov 28 08:57:21 crc kubenswrapper[4946]: E1128 08:57:21.991055 4946 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:57:22 crc kubenswrapper[4946]: I1128 08:57:22.799565 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 28 08:57:22 crc kubenswrapper[4946]: I1128 08:57:22.799990 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 28 08:57:22 crc kubenswrapper[4946]: I1128 08:57:22.841251 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 28 08:57:22 crc kubenswrapper[4946]: I1128 08:57:22.850050 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 28 08:57:22 crc kubenswrapper[4946]: I1128 08:57:22.850644 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 28 08:57:22 crc kubenswrapper[4946]: I1128 08:57:22.870444 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 28 08:57:22 crc kubenswrapper[4946]: I1128 08:57:22.916650 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 28 08:57:22 crc kubenswrapper[4946]: I1128 08:57:22.918637 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 28 08:57:23 crc kubenswrapper[4946]: I1128 08:57:23.513918 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 28 08:57:23 crc kubenswrapper[4946]: I1128 08:57:23.514145 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 28 08:57:23 crc kubenswrapper[4946]: I1128 08:57:23.514157 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 28 08:57:23 crc kubenswrapper[4946]: I1128 08:57:23.514165 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 28 08:57:25 crc kubenswrapper[4946]: I1128 08:57:25.549768 4946 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 08:57:25 crc kubenswrapper[4946]: I1128 08:57:25.549797 4946 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 08:57:26 crc kubenswrapper[4946]: I1128 08:57:26.201199 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 28 08:57:26 crc kubenswrapper[4946]: I1128 08:57:26.269034 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 28 08:57:26 crc kubenswrapper[4946]: I1128 08:57:26.407532 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 28 08:57:26 crc kubenswrapper[4946]: I1128 08:57:26.564383 4946 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 08:57:26 crc 
kubenswrapper[4946]: I1128 08:57:26.576791 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 28 08:57:28 crc kubenswrapper[4946]: I1128 08:57:28.369593 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-77bcbd946c-4jcpk" Nov 28 08:57:28 crc kubenswrapper[4946]: I1128 08:57:28.454674 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5db78f448f-xrgv2" podUID="10b51e57-70c4-43f4-8bf3-92261a7875eb" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.100:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.100:8080: connect: connection refused" Nov 28 08:57:28 crc kubenswrapper[4946]: I1128 08:57:28.977145 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5bbf848689-ld46g" podUID="a4155e7d-1a52-46b9-9838-aa62c08bd7ad" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.101:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.101:8080: connect: connection refused" Nov 28 08:57:35 crc kubenswrapper[4946]: I1128 08:57:35.997146 4946 scope.go:117] "RemoveContainer" containerID="39cf365640a94edb527a038b876ca944a2f17e9a393ff02023ee1ab525b0e376" Nov 28 08:57:35 crc kubenswrapper[4946]: E1128 08:57:35.998150 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:57:39 crc kubenswrapper[4946]: I1128 08:57:39.297998 4946 scope.go:117] "RemoveContainer" containerID="64970413d4401a2be1c65e689f16be4bdc0265c800501eed282a2eade2737bfe" Nov 28 08:57:39 crc kubenswrapper[4946]: I1128 08:57:39.335427 4946 scope.go:117] "RemoveContainer" containerID="d0c8381e7025e9aad7e5c6c45bd710ae6e10dd7f4e242c0f196cb9ca4af925b7" Nov 28 08:57:39 crc kubenswrapper[4946]: I1128 08:57:39.417223 4946 scope.go:117] "RemoveContainer" containerID="c778bb7d442bce7c504805b7ed9c3630d155ec8ed86fa0f7fc5b4539f42ed3a2" Nov 28 08:57:40 crc kubenswrapper[4946]: I1128 08:57:40.277410 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5db78f448f-xrgv2" Nov 28 08:57:40 crc kubenswrapper[4946]: I1128 08:57:40.682152 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5bbf848689-ld46g" Nov 28 08:57:42 crc kubenswrapper[4946]: I1128 08:57:42.165058 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-5db78f448f-xrgv2" Nov 28 08:57:42 crc kubenswrapper[4946]: I1128 08:57:42.448458 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-5bbf848689-ld46g" Nov 28 08:57:42 crc kubenswrapper[4946]: I1128 08:57:42.514564 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5db78f448f-xrgv2"] Nov 28 08:57:42 crc kubenswrapper[4946]: I1128 08:57:42.770843 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5db78f448f-xrgv2" podUID="10b51e57-70c4-43f4-8bf3-92261a7875eb" containerName="horizon-log" containerID="cri-o://dd06f4fb21ca6f3398e04fa9f4a94ca44a70fc205f5028ddf521a39b7ced8a41" gracePeriod=30 Nov 
28 08:57:42 crc kubenswrapper[4946]: I1128 08:57:42.770892 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5db78f448f-xrgv2" podUID="10b51e57-70c4-43f4-8bf3-92261a7875eb" containerName="horizon" containerID="cri-o://60346e20f06f7a9e68228a717462a1199289d1730ad13ae3c36bcc100b4d76c8" gracePeriod=30 Nov 28 08:57:46 crc kubenswrapper[4946]: I1128 08:57:46.813835 4946 generic.go:334] "Generic (PLEG): container finished" podID="10b51e57-70c4-43f4-8bf3-92261a7875eb" containerID="60346e20f06f7a9e68228a717462a1199289d1730ad13ae3c36bcc100b4d76c8" exitCode=0 Nov 28 08:57:46 crc kubenswrapper[4946]: I1128 08:57:46.813884 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5db78f448f-xrgv2" event={"ID":"10b51e57-70c4-43f4-8bf3-92261a7875eb","Type":"ContainerDied","Data":"60346e20f06f7a9e68228a717462a1199289d1730ad13ae3c36bcc100b4d76c8"} Nov 28 08:57:47 crc kubenswrapper[4946]: I1128 08:57:47.990814 4946 scope.go:117] "RemoveContainer" containerID="39cf365640a94edb527a038b876ca944a2f17e9a393ff02023ee1ab525b0e376" Nov 28 08:57:47 crc kubenswrapper[4946]: E1128 08:57:47.991962 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:57:48 crc kubenswrapper[4946]: I1128 08:57:48.452364 4946 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5db78f448f-xrgv2" podUID="10b51e57-70c4-43f4-8bf3-92261a7875eb" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.100:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.100:8080: connect: connection refused" Nov 28 08:57:48 crc kubenswrapper[4946]: I1128 08:57:48.840261 4946 generic.go:334] "Generic (PLEG): container finished" podID="e54031ee-9971-4d4b-8e34-204b6053b2e2" containerID="3d30bc84cc75f4e4e9f3e9c5e2118ced06770fa2c67301bbd039a794eb781cba" exitCode=137 Nov 28 08:57:48 crc kubenswrapper[4946]: I1128 08:57:48.840569 4946 generic.go:334] "Generic (PLEG): container finished" podID="e54031ee-9971-4d4b-8e34-204b6053b2e2" containerID="9cf158db9d89e87fc23ec0084050f06140a89769da47a0702f19437c7faecfea" exitCode=137 Nov 28 08:57:48 crc kubenswrapper[4946]: I1128 08:57:48.840370 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77bcbd946c-4jcpk" event={"ID":"e54031ee-9971-4d4b-8e34-204b6053b2e2","Type":"ContainerDied","Data":"3d30bc84cc75f4e4e9f3e9c5e2118ced06770fa2c67301bbd039a794eb781cba"} Nov 28 08:57:48 crc kubenswrapper[4946]: I1128 08:57:48.840608 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77bcbd946c-4jcpk" event={"ID":"e54031ee-9971-4d4b-8e34-204b6053b2e2","Type":"ContainerDied","Data":"9cf158db9d89e87fc23ec0084050f06140a89769da47a0702f19437c7faecfea"} Nov 28 08:57:49 crc kubenswrapper[4946]: I1128 08:57:49.041260 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-77bcbd946c-4jcpk" Nov 28 08:57:49 crc kubenswrapper[4946]: I1128 08:57:49.201397 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e54031ee-9971-4d4b-8e34-204b6053b2e2-config-data\") pod \"e54031ee-9971-4d4b-8e34-204b6053b2e2\" (UID: \"e54031ee-9971-4d4b-8e34-204b6053b2e2\") " Nov 28 08:57:49 crc kubenswrapper[4946]: I1128 08:57:49.201621 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gg2gt\" (UniqueName: \"kubernetes.io/projected/e54031ee-9971-4d4b-8e34-204b6053b2e2-kube-api-access-gg2gt\") pod \"e54031ee-9971-4d4b-8e34-204b6053b2e2\" (UID: \"e54031ee-9971-4d4b-8e34-204b6053b2e2\") " Nov 28 08:57:49 crc kubenswrapper[4946]: I1128 08:57:49.201653 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e54031ee-9971-4d4b-8e34-204b6053b2e2-scripts\") pod \"e54031ee-9971-4d4b-8e34-204b6053b2e2\" (UID: \"e54031ee-9971-4d4b-8e34-204b6053b2e2\") " Nov 28 08:57:49 crc kubenswrapper[4946]: I1128 08:57:49.201678 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e54031ee-9971-4d4b-8e34-204b6053b2e2-horizon-secret-key\") pod \"e54031ee-9971-4d4b-8e34-204b6053b2e2\" (UID: \"e54031ee-9971-4d4b-8e34-204b6053b2e2\") " Nov 28 08:57:49 crc kubenswrapper[4946]: I1128 08:57:49.201718 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e54031ee-9971-4d4b-8e34-204b6053b2e2-logs\") pod \"e54031ee-9971-4d4b-8e34-204b6053b2e2\" (UID: \"e54031ee-9971-4d4b-8e34-204b6053b2e2\") " Nov 28 08:57:49 crc kubenswrapper[4946]: I1128 08:57:49.202411 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e54031ee-9971-4d4b-8e34-204b6053b2e2-logs" (OuterVolumeSpecName: "logs") pod "e54031ee-9971-4d4b-8e34-204b6053b2e2" (UID: "e54031ee-9971-4d4b-8e34-204b6053b2e2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 08:57:49 crc kubenswrapper[4946]: I1128 08:57:49.208208 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e54031ee-9971-4d4b-8e34-204b6053b2e2-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "e54031ee-9971-4d4b-8e34-204b6053b2e2" (UID: "e54031ee-9971-4d4b-8e34-204b6053b2e2"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:57:49 crc kubenswrapper[4946]: I1128 08:57:49.209391 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e54031ee-9971-4d4b-8e34-204b6053b2e2-kube-api-access-gg2gt" (OuterVolumeSpecName: "kube-api-access-gg2gt") pod "e54031ee-9971-4d4b-8e34-204b6053b2e2" (UID: "e54031ee-9971-4d4b-8e34-204b6053b2e2"). InnerVolumeSpecName "kube-api-access-gg2gt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:57:49 crc kubenswrapper[4946]: I1128 08:57:49.236361 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e54031ee-9971-4d4b-8e34-204b6053b2e2-config-data" (OuterVolumeSpecName: "config-data") pod "e54031ee-9971-4d4b-8e34-204b6053b2e2" (UID: "e54031ee-9971-4d4b-8e34-204b6053b2e2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:57:49 crc kubenswrapper[4946]: I1128 08:57:49.237658 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e54031ee-9971-4d4b-8e34-204b6053b2e2-scripts" (OuterVolumeSpecName: "scripts") pod "e54031ee-9971-4d4b-8e34-204b6053b2e2" (UID: "e54031ee-9971-4d4b-8e34-204b6053b2e2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:57:49 crc kubenswrapper[4946]: I1128 08:57:49.304842 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gg2gt\" (UniqueName: \"kubernetes.io/projected/e54031ee-9971-4d4b-8e34-204b6053b2e2-kube-api-access-gg2gt\") on node \"crc\" DevicePath \"\"" Nov 28 08:57:49 crc kubenswrapper[4946]: I1128 08:57:49.304893 4946 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e54031ee-9971-4d4b-8e34-204b6053b2e2-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 08:57:49 crc kubenswrapper[4946]: I1128 08:57:49.304913 4946 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e54031ee-9971-4d4b-8e34-204b6053b2e2-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 28 08:57:49 crc kubenswrapper[4946]: I1128 08:57:49.304932 4946 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e54031ee-9971-4d4b-8e34-204b6053b2e2-logs\") on node \"crc\" DevicePath \"\"" Nov 28 08:57:49 crc kubenswrapper[4946]: I1128 08:57:49.304951 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e54031ee-9971-4d4b-8e34-204b6053b2e2-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 08:57:49 crc kubenswrapper[4946]: I1128 08:57:49.851403 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77bcbd946c-4jcpk" event={"ID":"e54031ee-9971-4d4b-8e34-204b6053b2e2","Type":"ContainerDied","Data":"319f09752399fbbfde0a2b1cf2c458e4939c67f552e3567efa7404076d6c8bc3"} Nov 28 08:57:49 crc kubenswrapper[4946]: I1128 08:57:49.851452 4946 scope.go:117] "RemoveContainer" containerID="3d30bc84cc75f4e4e9f3e9c5e2118ced06770fa2c67301bbd039a794eb781cba" Nov 28 08:57:49 crc kubenswrapper[4946]: I1128 08:57:49.851612 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-77bcbd946c-4jcpk" Nov 28 08:57:49 crc kubenswrapper[4946]: I1128 08:57:49.959700 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-77bcbd946c-4jcpk"] Nov 28 08:57:50 crc kubenswrapper[4946]: I1128 08:57:50.057475 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-77bcbd946c-4jcpk"] Nov 28 08:57:50 crc kubenswrapper[4946]: I1128 08:57:50.087732 4946 scope.go:117] "RemoveContainer" containerID="9cf158db9d89e87fc23ec0084050f06140a89769da47a0702f19437c7faecfea" Nov 28 08:57:52 crc kubenswrapper[4946]: I1128 08:57:52.003524 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e54031ee-9971-4d4b-8e34-204b6053b2e2" path="/var/lib/kubelet/pods/e54031ee-9971-4d4b-8e34-204b6053b2e2/volumes" Nov 28 08:57:58 crc kubenswrapper[4946]: I1128 08:57:58.452432 4946 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5db78f448f-xrgv2" podUID="10b51e57-70c4-43f4-8bf3-92261a7875eb" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.100:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.100:8080: connect: connection refused" Nov 28 08:57:59 crc kubenswrapper[4946]: I1128 08:57:59.990224 4946 scope.go:117] "RemoveContainer" containerID="39cf365640a94edb527a038b876ca944a2f17e9a393ff02023ee1ab525b0e376" Nov 28 08:57:59 crc kubenswrapper[4946]: E1128 08:57:59.990789 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:58:08 crc kubenswrapper[4946]: I1128 08:58:08.453524 4946 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5db78f448f-xrgv2" podUID="10b51e57-70c4-43f4-8bf3-92261a7875eb" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.100:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.100:8080: connect: connection refused" Nov 28 08:58:08 crc kubenswrapper[4946]: I1128 08:58:08.454093 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5db78f448f-xrgv2" Nov 28 08:58:10 crc kubenswrapper[4946]: I1128 08:58:10.989743 4946 scope.go:117] "RemoveContainer" containerID="39cf365640a94edb527a038b876ca944a2f17e9a393ff02023ee1ab525b0e376" Nov 28 08:58:10 crc kubenswrapper[4946]: E1128 08:58:10.990255 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:58:13 crc kubenswrapper[4946]: I1128 08:58:13.109398 4946 generic.go:334] "Generic (PLEG): container finished" podID="10b51e57-70c4-43f4-8bf3-92261a7875eb" containerID="dd06f4fb21ca6f3398e04fa9f4a94ca44a70fc205f5028ddf521a39b7ced8a41" exitCode=137 Nov 28 08:58:13 crc kubenswrapper[4946]: I1128 08:58:13.109624 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5db78f448f-xrgv2" 
event={"ID":"10b51e57-70c4-43f4-8bf3-92261a7875eb","Type":"ContainerDied","Data":"dd06f4fb21ca6f3398e04fa9f4a94ca44a70fc205f5028ddf521a39b7ced8a41"} Nov 28 08:58:13 crc kubenswrapper[4946]: I1128 08:58:13.110699 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5db78f448f-xrgv2" event={"ID":"10b51e57-70c4-43f4-8bf3-92261a7875eb","Type":"ContainerDied","Data":"580a7481339204be332861b3659f42cdf3616f170fdc4952e3c261792c0cca6c"} Nov 28 08:58:13 crc kubenswrapper[4946]: I1128 08:58:13.110741 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="580a7481339204be332861b3659f42cdf3616f170fdc4952e3c261792c0cca6c" Nov 28 08:58:13 crc kubenswrapper[4946]: I1128 08:58:13.189605 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5db78f448f-xrgv2" Nov 28 08:58:13 crc kubenswrapper[4946]: I1128 08:58:13.326287 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/10b51e57-70c4-43f4-8bf3-92261a7875eb-config-data\") pod \"10b51e57-70c4-43f4-8bf3-92261a7875eb\" (UID: \"10b51e57-70c4-43f4-8bf3-92261a7875eb\") " Nov 28 08:58:13 crc kubenswrapper[4946]: I1128 08:58:13.326360 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10b51e57-70c4-43f4-8bf3-92261a7875eb-logs\") pod \"10b51e57-70c4-43f4-8bf3-92261a7875eb\" (UID: \"10b51e57-70c4-43f4-8bf3-92261a7875eb\") " Nov 28 08:58:13 crc kubenswrapper[4946]: I1128 08:58:13.326448 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/10b51e57-70c4-43f4-8bf3-92261a7875eb-scripts\") pod \"10b51e57-70c4-43f4-8bf3-92261a7875eb\" (UID: \"10b51e57-70c4-43f4-8bf3-92261a7875eb\") " Nov 28 08:58:13 crc kubenswrapper[4946]: I1128 08:58:13.326644 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67zjz\" (UniqueName: \"kubernetes.io/projected/10b51e57-70c4-43f4-8bf3-92261a7875eb-kube-api-access-67zjz\") pod \"10b51e57-70c4-43f4-8bf3-92261a7875eb\" (UID: \"10b51e57-70c4-43f4-8bf3-92261a7875eb\") " Nov 28 08:58:13 crc kubenswrapper[4946]: I1128 08:58:13.326690 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/10b51e57-70c4-43f4-8bf3-92261a7875eb-horizon-secret-key\") pod \"10b51e57-70c4-43f4-8bf3-92261a7875eb\" (UID: \"10b51e57-70c4-43f4-8bf3-92261a7875eb\") " Nov 28 08:58:13 crc kubenswrapper[4946]: I1128 08:58:13.327065 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10b51e57-70c4-43f4-8bf3-92261a7875eb-logs" (OuterVolumeSpecName: "logs") pod "10b51e57-70c4-43f4-8bf3-92261a7875eb" (UID: "10b51e57-70c4-43f4-8bf3-92261a7875eb"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 08:58:13 crc kubenswrapper[4946]: I1128 08:58:13.327592 4946 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10b51e57-70c4-43f4-8bf3-92261a7875eb-logs\") on node \"crc\" DevicePath \"\"" Nov 28 08:58:13 crc kubenswrapper[4946]: I1128 08:58:13.333861 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10b51e57-70c4-43f4-8bf3-92261a7875eb-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "10b51e57-70c4-43f4-8bf3-92261a7875eb" (UID: "10b51e57-70c4-43f4-8bf3-92261a7875eb"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:58:13 crc kubenswrapper[4946]: I1128 08:58:13.333890 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10b51e57-70c4-43f4-8bf3-92261a7875eb-kube-api-access-67zjz" (OuterVolumeSpecName: "kube-api-access-67zjz") pod "10b51e57-70c4-43f4-8bf3-92261a7875eb" (UID: "10b51e57-70c4-43f4-8bf3-92261a7875eb"). InnerVolumeSpecName "kube-api-access-67zjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:58:13 crc kubenswrapper[4946]: I1128 08:58:13.367504 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10b51e57-70c4-43f4-8bf3-92261a7875eb-config-data" (OuterVolumeSpecName: "config-data") pod "10b51e57-70c4-43f4-8bf3-92261a7875eb" (UID: "10b51e57-70c4-43f4-8bf3-92261a7875eb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:58:13 crc kubenswrapper[4946]: I1128 08:58:13.373779 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10b51e57-70c4-43f4-8bf3-92261a7875eb-scripts" (OuterVolumeSpecName: "scripts") pod "10b51e57-70c4-43f4-8bf3-92261a7875eb" (UID: "10b51e57-70c4-43f4-8bf3-92261a7875eb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:58:13 crc kubenswrapper[4946]: I1128 08:58:13.430070 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67zjz\" (UniqueName: \"kubernetes.io/projected/10b51e57-70c4-43f4-8bf3-92261a7875eb-kube-api-access-67zjz\") on node \"crc\" DevicePath \"\"" Nov 28 08:58:13 crc kubenswrapper[4946]: I1128 08:58:13.430246 4946 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/10b51e57-70c4-43f4-8bf3-92261a7875eb-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 28 08:58:13 crc kubenswrapper[4946]: I1128 08:58:13.430338 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/10b51e57-70c4-43f4-8bf3-92261a7875eb-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 08:58:13 crc kubenswrapper[4946]: I1128 08:58:13.430419 4946 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/10b51e57-70c4-43f4-8bf3-92261a7875eb-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 08:58:14 crc kubenswrapper[4946]: I1128 08:58:14.119065 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5db78f448f-xrgv2" Nov 28 08:58:14 crc kubenswrapper[4946]: I1128 08:58:14.140351 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5db78f448f-xrgv2"] Nov 28 08:58:14 crc kubenswrapper[4946]: I1128 08:58:14.146932 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5db78f448f-xrgv2"] Nov 28 08:58:16 crc kubenswrapper[4946]: I1128 08:58:16.001024 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10b51e57-70c4-43f4-8bf3-92261a7875eb" path="/var/lib/kubelet/pods/10b51e57-70c4-43f4-8bf3-92261a7875eb/volumes" Nov 28 08:58:25 crc kubenswrapper[4946]: I1128 08:58:25.530167 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-f6dfdcfff-hv9rg"] Nov 28 08:58:25 crc kubenswrapper[4946]: E1128 08:58:25.531209 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e54031ee-9971-4d4b-8e34-204b6053b2e2" containerName="horizon-log" Nov 28 08:58:25 crc kubenswrapper[4946]: I1128 08:58:25.531223 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="e54031ee-9971-4d4b-8e34-204b6053b2e2" containerName="horizon-log" Nov 28 08:58:25 crc kubenswrapper[4946]: E1128 08:58:25.531251 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10b51e57-70c4-43f4-8bf3-92261a7875eb" containerName="horizon" Nov 28 08:58:25 crc kubenswrapper[4946]: I1128 08:58:25.531257 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="10b51e57-70c4-43f4-8bf3-92261a7875eb" containerName="horizon" Nov 28 08:58:25 crc kubenswrapper[4946]: E1128 08:58:25.531270 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10b51e57-70c4-43f4-8bf3-92261a7875eb" containerName="horizon-log" Nov 28 08:58:25 crc kubenswrapper[4946]: I1128 08:58:25.531276 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="10b51e57-70c4-43f4-8bf3-92261a7875eb" containerName="horizon-log" Nov 28 08:58:25 crc kubenswrapper[4946]: E1128 08:58:25.531287 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e54031ee-9971-4d4b-8e34-204b6053b2e2" containerName="horizon" Nov 28 08:58:25 crc kubenswrapper[4946]: I1128 08:58:25.531293 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="e54031ee-9971-4d4b-8e34-204b6053b2e2" containerName="horizon" Nov 28 08:58:25 crc kubenswrapper[4946]: I1128 08:58:25.531515 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="10b51e57-70c4-43f4-8bf3-92261a7875eb" containerName="horizon-log" Nov 28 08:58:25 crc kubenswrapper[4946]: I1128 08:58:25.531534 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="e54031ee-9971-4d4b-8e34-204b6053b2e2" containerName="horizon-log" Nov 28 08:58:25 crc kubenswrapper[4946]: I1128 08:58:25.531554 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="e54031ee-9971-4d4b-8e34-204b6053b2e2" containerName="horizon" Nov 28 08:58:25 crc kubenswrapper[4946]: I1128 08:58:25.531567 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="10b51e57-70c4-43f4-8bf3-92261a7875eb" containerName="horizon" Nov 28 08:58:25 crc kubenswrapper[4946]: I1128 08:58:25.532713 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-f6dfdcfff-hv9rg" Nov 28 08:58:25 crc kubenswrapper[4946]: I1128 08:58:25.555167 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-f6dfdcfff-hv9rg"] Nov 28 08:58:25 crc kubenswrapper[4946]: I1128 08:58:25.657093 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/245cf95b-ff97-4971-bc3b-5bd178679d4a-scripts\") pod \"horizon-f6dfdcfff-hv9rg\" (UID: \"245cf95b-ff97-4971-bc3b-5bd178679d4a\") " pod="openstack/horizon-f6dfdcfff-hv9rg" Nov 28 08:58:25 crc kubenswrapper[4946]: I1128 08:58:25.657222 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crlcp\" (UniqueName: \"kubernetes.io/projected/245cf95b-ff97-4971-bc3b-5bd178679d4a-kube-api-access-crlcp\") pod \"horizon-f6dfdcfff-hv9rg\" (UID: \"245cf95b-ff97-4971-bc3b-5bd178679d4a\") " pod="openstack/horizon-f6dfdcfff-hv9rg" Nov 28 08:58:25 crc kubenswrapper[4946]: I1128 08:58:25.657262 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/245cf95b-ff97-4971-bc3b-5bd178679d4a-horizon-secret-key\") pod \"horizon-f6dfdcfff-hv9rg\" (UID: \"245cf95b-ff97-4971-bc3b-5bd178679d4a\") " pod="openstack/horizon-f6dfdcfff-hv9rg" Nov 28 08:58:25 crc kubenswrapper[4946]: I1128 08:58:25.657331 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/245cf95b-ff97-4971-bc3b-5bd178679d4a-config-data\") pod \"horizon-f6dfdcfff-hv9rg\" (UID: \"245cf95b-ff97-4971-bc3b-5bd178679d4a\") " pod="openstack/horizon-f6dfdcfff-hv9rg" Nov 28 08:58:25 crc kubenswrapper[4946]: I1128 08:58:25.657418 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/245cf95b-ff97-4971-bc3b-5bd178679d4a-logs\") pod \"horizon-f6dfdcfff-hv9rg\" (UID: \"245cf95b-ff97-4971-bc3b-5bd178679d4a\") " pod="openstack/horizon-f6dfdcfff-hv9rg" Nov 28 08:58:25 crc kubenswrapper[4946]: I1128 08:58:25.759243 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crlcp\" (UniqueName: \"kubernetes.io/projected/245cf95b-ff97-4971-bc3b-5bd178679d4a-kube-api-access-crlcp\") pod \"horizon-f6dfdcfff-hv9rg\" (UID: \"245cf95b-ff97-4971-bc3b-5bd178679d4a\") " pod="openstack/horizon-f6dfdcfff-hv9rg" Nov 28 08:58:25 crc kubenswrapper[4946]: I1128 08:58:25.759341 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/245cf95b-ff97-4971-bc3b-5bd178679d4a-horizon-secret-key\") pod \"horizon-f6dfdcfff-hv9rg\" (UID: \"245cf95b-ff97-4971-bc3b-5bd178679d4a\") " pod="openstack/horizon-f6dfdcfff-hv9rg" Nov 28 08:58:25 crc kubenswrapper[4946]: I1128 08:58:25.759441 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/245cf95b-ff97-4971-bc3b-5bd178679d4a-config-data\") pod \"horizon-f6dfdcfff-hv9rg\" (UID: \"245cf95b-ff97-4971-bc3b-5bd178679d4a\") " pod="openstack/horizon-f6dfdcfff-hv9rg" Nov 28 08:58:25 crc kubenswrapper[4946]: I1128 08:58:25.759547 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/245cf95b-ff97-4971-bc3b-5bd178679d4a-logs\") pod \"horizon-f6dfdcfff-hv9rg\" (UID: \"245cf95b-ff97-4971-bc3b-5bd178679d4a\") " pod="openstack/horizon-f6dfdcfff-hv9rg" Nov 28 08:58:25 crc kubenswrapper[4946]: I1128 08:58:25.759633 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/245cf95b-ff97-4971-bc3b-5bd178679d4a-scripts\") pod \"horizon-f6dfdcfff-hv9rg\" (UID: \"245cf95b-ff97-4971-bc3b-5bd178679d4a\") " pod="openstack/horizon-f6dfdcfff-hv9rg" Nov 28 08:58:25 crc kubenswrapper[4946]: I1128 08:58:25.760155 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/245cf95b-ff97-4971-bc3b-5bd178679d4a-logs\") pod \"horizon-f6dfdcfff-hv9rg\" (UID: \"245cf95b-ff97-4971-bc3b-5bd178679d4a\") " pod="openstack/horizon-f6dfdcfff-hv9rg" Nov 28 08:58:25 crc kubenswrapper[4946]: I1128 08:58:25.760918 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/245cf95b-ff97-4971-bc3b-5bd178679d4a-scripts\") pod \"horizon-f6dfdcfff-hv9rg\" (UID: \"245cf95b-ff97-4971-bc3b-5bd178679d4a\") " pod="openstack/horizon-f6dfdcfff-hv9rg" Nov 28 08:58:25 crc kubenswrapper[4946]: I1128 08:58:25.761091 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/245cf95b-ff97-4971-bc3b-5bd178679d4a-config-data\") pod \"horizon-f6dfdcfff-hv9rg\" (UID: \"245cf95b-ff97-4971-bc3b-5bd178679d4a\") " pod="openstack/horizon-f6dfdcfff-hv9rg" Nov 28 08:58:25 crc kubenswrapper[4946]: I1128 08:58:25.774092 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/245cf95b-ff97-4971-bc3b-5bd178679d4a-horizon-secret-key\") pod \"horizon-f6dfdcfff-hv9rg\" (UID: \"245cf95b-ff97-4971-bc3b-5bd178679d4a\") " pod="openstack/horizon-f6dfdcfff-hv9rg" Nov 28 08:58:25 crc kubenswrapper[4946]: I1128 08:58:25.790363 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crlcp\" (UniqueName: \"kubernetes.io/projected/245cf95b-ff97-4971-bc3b-5bd178679d4a-kube-api-access-crlcp\") pod \"horizon-f6dfdcfff-hv9rg\" (UID: \"245cf95b-ff97-4971-bc3b-5bd178679d4a\") " pod="openstack/horizon-f6dfdcfff-hv9rg" Nov 28 08:58:25 crc kubenswrapper[4946]: I1128 08:58:25.851511 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-f6dfdcfff-hv9rg" Nov 28 08:58:26 crc kubenswrapper[4946]: I1128 08:58:26.000644 4946 scope.go:117] "RemoveContainer" containerID="39cf365640a94edb527a038b876ca944a2f17e9a393ff02023ee1ab525b0e376" Nov 28 08:58:26 crc kubenswrapper[4946]: E1128 08:58:26.001287 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:58:26 crc kubenswrapper[4946]: I1128 08:58:26.314387 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-f6dfdcfff-hv9rg"] Nov 28 08:58:26 crc kubenswrapper[4946]: I1128 08:58:26.764540 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-nk2js"] Nov 28 08:58:26 crc kubenswrapper[4946]: I1128 08:58:26.765823 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-nk2js" Nov 28 08:58:26 crc kubenswrapper[4946]: I1128 08:58:26.771851 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-nk2js"] Nov 28 08:58:26 crc kubenswrapper[4946]: I1128 08:58:26.867271 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-1c1b-account-create-update-4s6kt"] Nov 28 08:58:26 crc kubenswrapper[4946]: I1128 08:58:26.868535 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-1c1b-account-create-update-4s6kt" Nov 28 08:58:26 crc kubenswrapper[4946]: I1128 08:58:26.870646 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Nov 28 08:58:26 crc kubenswrapper[4946]: I1128 08:58:26.883320 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rq9zd\" (UniqueName: \"kubernetes.io/projected/ae25b6b6-23c8-4e7b-bb5a-9efa36f9ca05-kube-api-access-rq9zd\") pod \"heat-db-create-nk2js\" (UID: \"ae25b6b6-23c8-4e7b-bb5a-9efa36f9ca05\") " pod="openstack/heat-db-create-nk2js" Nov 28 08:58:26 crc kubenswrapper[4946]: I1128 08:58:26.883386 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae25b6b6-23c8-4e7b-bb5a-9efa36f9ca05-operator-scripts\") pod \"heat-db-create-nk2js\" (UID: \"ae25b6b6-23c8-4e7b-bb5a-9efa36f9ca05\") " pod="openstack/heat-db-create-nk2js" Nov 28 08:58:26 crc kubenswrapper[4946]: I1128 08:58:26.887297 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-1c1b-account-create-update-4s6kt"] Nov 28 08:58:26 crc kubenswrapper[4946]: I1128 08:58:26.986542 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxdkb\" (UniqueName: \"kubernetes.io/projected/a776c3e3-091d-4963-9aa8-744e319c7193-kube-api-access-wxdkb\") pod \"heat-1c1b-account-create-update-4s6kt\" (UID: \"a776c3e3-091d-4963-9aa8-744e319c7193\") " pod="openstack/heat-1c1b-account-create-update-4s6kt" Nov 28 08:58:26 crc kubenswrapper[4946]: I1128 08:58:26.987025 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rq9zd\" (UniqueName: 
\"kubernetes.io/projected/ae25b6b6-23c8-4e7b-bb5a-9efa36f9ca05-kube-api-access-rq9zd\") pod \"heat-db-create-nk2js\" (UID: \"ae25b6b6-23c8-4e7b-bb5a-9efa36f9ca05\") " pod="openstack/heat-db-create-nk2js" Nov 28 08:58:26 crc kubenswrapper[4946]: I1128 08:58:26.987183 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a776c3e3-091d-4963-9aa8-744e319c7193-operator-scripts\") pod \"heat-1c1b-account-create-update-4s6kt\" (UID: \"a776c3e3-091d-4963-9aa8-744e319c7193\") " pod="openstack/heat-1c1b-account-create-update-4s6kt" Nov 28 08:58:26 crc kubenswrapper[4946]: I1128 08:58:26.987226 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae25b6b6-23c8-4e7b-bb5a-9efa36f9ca05-operator-scripts\") pod \"heat-db-create-nk2js\" (UID: \"ae25b6b6-23c8-4e7b-bb5a-9efa36f9ca05\") " pod="openstack/heat-db-create-nk2js" Nov 28 08:58:26 crc kubenswrapper[4946]: I1128 08:58:26.988096 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae25b6b6-23c8-4e7b-bb5a-9efa36f9ca05-operator-scripts\") pod \"heat-db-create-nk2js\" (UID: \"ae25b6b6-23c8-4e7b-bb5a-9efa36f9ca05\") " pod="openstack/heat-db-create-nk2js" Nov 28 08:58:27 crc kubenswrapper[4946]: I1128 08:58:27.004596 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rq9zd\" (UniqueName: \"kubernetes.io/projected/ae25b6b6-23c8-4e7b-bb5a-9efa36f9ca05-kube-api-access-rq9zd\") pod \"heat-db-create-nk2js\" (UID: \"ae25b6b6-23c8-4e7b-bb5a-9efa36f9ca05\") " pod="openstack/heat-db-create-nk2js" Nov 28 08:58:27 crc kubenswrapper[4946]: I1128 08:58:27.088753 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a776c3e3-091d-4963-9aa8-744e319c7193-operator-scripts\") pod \"heat-1c1b-account-create-update-4s6kt\" (UID: \"a776c3e3-091d-4963-9aa8-744e319c7193\") " pod="openstack/heat-1c1b-account-create-update-4s6kt" Nov 28 08:58:27 crc kubenswrapper[4946]: I1128 08:58:27.088950 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxdkb\" (UniqueName: \"kubernetes.io/projected/a776c3e3-091d-4963-9aa8-744e319c7193-kube-api-access-wxdkb\") pod \"heat-1c1b-account-create-update-4s6kt\" (UID: \"a776c3e3-091d-4963-9aa8-744e319c7193\") " pod="openstack/heat-1c1b-account-create-update-4s6kt" Nov 28 08:58:27 crc kubenswrapper[4946]: I1128 08:58:27.090635 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a776c3e3-091d-4963-9aa8-744e319c7193-operator-scripts\") pod \"heat-1c1b-account-create-update-4s6kt\" (UID: \"a776c3e3-091d-4963-9aa8-744e319c7193\") " pod="openstack/heat-1c1b-account-create-update-4s6kt" Nov 28 08:58:27 crc kubenswrapper[4946]: I1128 08:58:27.108553 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxdkb\" (UniqueName: \"kubernetes.io/projected/a776c3e3-091d-4963-9aa8-744e319c7193-kube-api-access-wxdkb\") pod \"heat-1c1b-account-create-update-4s6kt\" (UID: \"a776c3e3-091d-4963-9aa8-744e319c7193\") " pod="openstack/heat-1c1b-account-create-update-4s6kt" Nov 28 08:58:27 crc kubenswrapper[4946]: I1128 08:58:27.130783 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-nk2js" Nov 28 08:58:27 crc kubenswrapper[4946]: I1128 08:58:27.191770 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-1c1b-account-create-update-4s6kt" Nov 28 08:58:27 crc kubenswrapper[4946]: I1128 08:58:27.295985 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f6dfdcfff-hv9rg" event={"ID":"245cf95b-ff97-4971-bc3b-5bd178679d4a","Type":"ContainerStarted","Data":"3c11055d0bddd8614ae289c680c22b429c4e159bb08ef186389a9de12f5cd1d7"} Nov 28 08:58:27 crc kubenswrapper[4946]: I1128 08:58:27.297156 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f6dfdcfff-hv9rg" event={"ID":"245cf95b-ff97-4971-bc3b-5bd178679d4a","Type":"ContainerStarted","Data":"e8823f47269c7ddee81fde583fbf7db156ae10ee7c7cc4c30afbb7689c9c2351"} Nov 28 08:58:27 crc kubenswrapper[4946]: I1128 08:58:27.297175 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f6dfdcfff-hv9rg" event={"ID":"245cf95b-ff97-4971-bc3b-5bd178679d4a","Type":"ContainerStarted","Data":"ded2c419eff01e9ac5075b693d092f1fb4a628a4acdf532d0d11aa8624e70646"} Nov 28 08:58:27 crc kubenswrapper[4946]: I1128 08:58:27.329120 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-f6dfdcfff-hv9rg" podStartSLOduration=2.329097802 podStartE2EDuration="2.329097802s" podCreationTimestamp="2025-11-28 08:58:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 08:58:27.318434998 +0000 UTC m=+7561.696500109" watchObservedRunningTime="2025-11-28 08:58:27.329097802 +0000 UTC m=+7561.707162923" Nov 28 08:58:27 crc kubenswrapper[4946]: I1128 08:58:27.645403 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-nk2js"] Nov 28 08:58:27 crc kubenswrapper[4946]: W1128 08:58:27.647145 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae25b6b6_23c8_4e7b_bb5a_9efa36f9ca05.slice/crio-653e47ceb9906132a37289f88666d75f6832c5ccb62f0e51fe42a32ab841e53e WatchSource:0}: Error finding container 653e47ceb9906132a37289f88666d75f6832c5ccb62f0e51fe42a32ab841e53e: Status 404 returned error can't find the container with id 653e47ceb9906132a37289f88666d75f6832c5ccb62f0e51fe42a32ab841e53e Nov 28 08:58:27 crc kubenswrapper[4946]: I1128 08:58:27.726958 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-1c1b-account-create-update-4s6kt"] Nov 28 08:58:28 crc kubenswrapper[4946]: I1128 08:58:28.307495 4946 generic.go:334] "Generic (PLEG): container finished" podID="ae25b6b6-23c8-4e7b-bb5a-9efa36f9ca05" containerID="1248b3066e93cde578f2079e7d9b519d362908349eb3d5f9b29415f1370e3694" exitCode=0 Nov 28 08:58:28 crc kubenswrapper[4946]: I1128 08:58:28.307608 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-nk2js" event={"ID":"ae25b6b6-23c8-4e7b-bb5a-9efa36f9ca05","Type":"ContainerDied","Data":"1248b3066e93cde578f2079e7d9b519d362908349eb3d5f9b29415f1370e3694"} Nov 28 08:58:28 crc kubenswrapper[4946]: I1128 08:58:28.307637 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-nk2js" event={"ID":"ae25b6b6-23c8-4e7b-bb5a-9efa36f9ca05","Type":"ContainerStarted","Data":"653e47ceb9906132a37289f88666d75f6832c5ccb62f0e51fe42a32ab841e53e"} Nov 28 08:58:28 crc kubenswrapper[4946]: I1128 08:58:28.310817 4946 
generic.go:334] "Generic (PLEG): container finished" podID="a776c3e3-091d-4963-9aa8-744e319c7193" containerID="de302d81b64cda88fa16ace4909b818f00fed9b1acd965c2f7144987b8e64d21" exitCode=0 Nov 28 08:58:28 crc kubenswrapper[4946]: I1128 08:58:28.311338 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-1c1b-account-create-update-4s6kt" event={"ID":"a776c3e3-091d-4963-9aa8-744e319c7193","Type":"ContainerDied","Data":"de302d81b64cda88fa16ace4909b818f00fed9b1acd965c2f7144987b8e64d21"} Nov 28 08:58:28 crc kubenswrapper[4946]: I1128 08:58:28.311682 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-1c1b-account-create-update-4s6kt" event={"ID":"a776c3e3-091d-4963-9aa8-744e319c7193","Type":"ContainerStarted","Data":"70533f6cae0f9a799e6f4a1d2a780a89ed7a980d524bc5d4f654a9c6c3558221"} Nov 28 08:58:29 crc kubenswrapper[4946]: I1128 08:58:29.796197 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-nk2js" Nov 28 08:58:29 crc kubenswrapper[4946]: I1128 08:58:29.800451 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-1c1b-account-create-update-4s6kt" Nov 28 08:58:29 crc kubenswrapper[4946]: I1128 08:58:29.950228 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rq9zd\" (UniqueName: \"kubernetes.io/projected/ae25b6b6-23c8-4e7b-bb5a-9efa36f9ca05-kube-api-access-rq9zd\") pod \"ae25b6b6-23c8-4e7b-bb5a-9efa36f9ca05\" (UID: \"ae25b6b6-23c8-4e7b-bb5a-9efa36f9ca05\") " Nov 28 08:58:29 crc kubenswrapper[4946]: I1128 08:58:29.950392 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a776c3e3-091d-4963-9aa8-744e319c7193-operator-scripts\") pod \"a776c3e3-091d-4963-9aa8-744e319c7193\" (UID: \"a776c3e3-091d-4963-9aa8-744e319c7193\") " Nov 28 08:58:29 crc kubenswrapper[4946]: I1128 08:58:29.950446 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae25b6b6-23c8-4e7b-bb5a-9efa36f9ca05-operator-scripts\") pod \"ae25b6b6-23c8-4e7b-bb5a-9efa36f9ca05\" (UID: \"ae25b6b6-23c8-4e7b-bb5a-9efa36f9ca05\") " Nov 28 08:58:29 crc kubenswrapper[4946]: I1128 08:58:29.950531 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxdkb\" (UniqueName: \"kubernetes.io/projected/a776c3e3-091d-4963-9aa8-744e319c7193-kube-api-access-wxdkb\") pod \"a776c3e3-091d-4963-9aa8-744e319c7193\" (UID: \"a776c3e3-091d-4963-9aa8-744e319c7193\") " Nov 28 08:58:29 crc kubenswrapper[4946]: I1128 08:58:29.951336 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae25b6b6-23c8-4e7b-bb5a-9efa36f9ca05-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ae25b6b6-23c8-4e7b-bb5a-9efa36f9ca05" (UID: "ae25b6b6-23c8-4e7b-bb5a-9efa36f9ca05"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:58:29 crc kubenswrapper[4946]: I1128 08:58:29.951334 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a776c3e3-091d-4963-9aa8-744e319c7193-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a776c3e3-091d-4963-9aa8-744e319c7193" (UID: "a776c3e3-091d-4963-9aa8-744e319c7193"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 08:58:29 crc kubenswrapper[4946]: I1128 08:58:29.955702 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae25b6b6-23c8-4e7b-bb5a-9efa36f9ca05-kube-api-access-rq9zd" (OuterVolumeSpecName: "kube-api-access-rq9zd") pod "ae25b6b6-23c8-4e7b-bb5a-9efa36f9ca05" (UID: "ae25b6b6-23c8-4e7b-bb5a-9efa36f9ca05"). InnerVolumeSpecName "kube-api-access-rq9zd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:58:29 crc kubenswrapper[4946]: I1128 08:58:29.956531 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a776c3e3-091d-4963-9aa8-744e319c7193-kube-api-access-wxdkb" (OuterVolumeSpecName: "kube-api-access-wxdkb") pod "a776c3e3-091d-4963-9aa8-744e319c7193" (UID: "a776c3e3-091d-4963-9aa8-744e319c7193"). InnerVolumeSpecName "kube-api-access-wxdkb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:58:30 crc kubenswrapper[4946]: I1128 08:58:30.053417 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rq9zd\" (UniqueName: \"kubernetes.io/projected/ae25b6b6-23c8-4e7b-bb5a-9efa36f9ca05-kube-api-access-rq9zd\") on node \"crc\" DevicePath \"\"" Nov 28 08:58:30 crc kubenswrapper[4946]: I1128 08:58:30.053507 4946 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a776c3e3-091d-4963-9aa8-744e319c7193-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 08:58:30 crc kubenswrapper[4946]: I1128 08:58:30.053519 4946 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae25b6b6-23c8-4e7b-bb5a-9efa36f9ca05-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 08:58:30 crc kubenswrapper[4946]: I1128 08:58:30.053528 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxdkb\" (UniqueName: \"kubernetes.io/projected/a776c3e3-091d-4963-9aa8-744e319c7193-kube-api-access-wxdkb\") on node \"crc\" DevicePath \"\"" Nov 28 08:58:30 crc kubenswrapper[4946]: I1128 08:58:30.334754 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-nk2js" event={"ID":"ae25b6b6-23c8-4e7b-bb5a-9efa36f9ca05","Type":"ContainerDied","Data":"653e47ceb9906132a37289f88666d75f6832c5ccb62f0e51fe42a32ab841e53e"} Nov 28 08:58:30 crc kubenswrapper[4946]: I1128 08:58:30.334793 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-nk2js" Nov 28 08:58:30 crc kubenswrapper[4946]: I1128 08:58:30.334816 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="653e47ceb9906132a37289f88666d75f6832c5ccb62f0e51fe42a32ab841e53e" Nov 28 08:58:30 crc kubenswrapper[4946]: I1128 08:58:30.338139 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-1c1b-account-create-update-4s6kt" event={"ID":"a776c3e3-091d-4963-9aa8-744e319c7193","Type":"ContainerDied","Data":"70533f6cae0f9a799e6f4a1d2a780a89ed7a980d524bc5d4f654a9c6c3558221"} Nov 28 08:58:30 crc kubenswrapper[4946]: I1128 08:58:30.338186 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70533f6cae0f9a799e6f4a1d2a780a89ed7a980d524bc5d4f654a9c6c3558221" Nov 28 08:58:30 crc kubenswrapper[4946]: I1128 08:58:30.338254 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-1c1b-account-create-update-4s6kt" Nov 28 08:58:31 crc kubenswrapper[4946]: I1128 08:58:31.980347 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-54t9z"] Nov 28 08:58:31 crc kubenswrapper[4946]: E1128 08:58:31.980985 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a776c3e3-091d-4963-9aa8-744e319c7193" containerName="mariadb-account-create-update" Nov 28 08:58:31 crc kubenswrapper[4946]: I1128 08:58:31.980999 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="a776c3e3-091d-4963-9aa8-744e319c7193" containerName="mariadb-account-create-update" Nov 28 08:58:31 crc kubenswrapper[4946]: E1128 08:58:31.981026 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae25b6b6-23c8-4e7b-bb5a-9efa36f9ca05" containerName="mariadb-database-create" Nov 28 08:58:31 crc kubenswrapper[4946]: I1128 08:58:31.981032 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae25b6b6-23c8-4e7b-bb5a-9efa36f9ca05" containerName="mariadb-database-create" Nov 28 08:58:31 crc kubenswrapper[4946]: I1128 08:58:31.981191 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae25b6b6-23c8-4e7b-bb5a-9efa36f9ca05" containerName="mariadb-database-create" Nov 28 08:58:31 crc kubenswrapper[4946]: I1128 08:58:31.981208 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="a776c3e3-091d-4963-9aa8-744e319c7193" containerName="mariadb-account-create-update" Nov 28 08:58:31 crc kubenswrapper[4946]: I1128 08:58:31.981809 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-54t9z" Nov 28 08:58:31 crc kubenswrapper[4946]: I1128 08:58:31.983730 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-5947b" Nov 28 08:58:31 crc kubenswrapper[4946]: I1128 08:58:31.985708 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Nov 28 08:58:32 crc kubenswrapper[4946]: I1128 08:58:32.005122 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-54t9z"] Nov 28 08:58:32 crc kubenswrapper[4946]: I1128 08:58:32.097397 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b293abfb-6014-4e21-9e78-8feaf9fc1e9a-config-data\") pod \"heat-db-sync-54t9z\" (UID: \"b293abfb-6014-4e21-9e78-8feaf9fc1e9a\") " pod="openstack/heat-db-sync-54t9z" Nov 28 08:58:32 crc kubenswrapper[4946]: I1128 08:58:32.097678 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2pxn\" (UniqueName: \"kubernetes.io/projected/b293abfb-6014-4e21-9e78-8feaf9fc1e9a-kube-api-access-k2pxn\") pod \"heat-db-sync-54t9z\" (UID: \"b293abfb-6014-4e21-9e78-8feaf9fc1e9a\") " pod="openstack/heat-db-sync-54t9z" Nov 28 08:58:32 crc kubenswrapper[4946]: I1128 08:58:32.097780 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b293abfb-6014-4e21-9e78-8feaf9fc1e9a-combined-ca-bundle\") pod \"heat-db-sync-54t9z\" (UID: \"b293abfb-6014-4e21-9e78-8feaf9fc1e9a\") " pod="openstack/heat-db-sync-54t9z" Nov 28 08:58:32 crc kubenswrapper[4946]: I1128 08:58:32.200548 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b293abfb-6014-4e21-9e78-8feaf9fc1e9a-config-data\") pod \"heat-db-sync-54t9z\" (UID: \"b293abfb-6014-4e21-9e78-8feaf9fc1e9a\") " pod="openstack/heat-db-sync-54t9z" Nov 28 08:58:32 crc kubenswrapper[4946]: I1128 08:58:32.200646 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2pxn\" (UniqueName: \"kubernetes.io/projected/b293abfb-6014-4e21-9e78-8feaf9fc1e9a-kube-api-access-k2pxn\") pod \"heat-db-sync-54t9z\" (UID: \"b293abfb-6014-4e21-9e78-8feaf9fc1e9a\") " pod="openstack/heat-db-sync-54t9z" Nov 28 08:58:32 crc kubenswrapper[4946]: I1128 08:58:32.200689 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b293abfb-6014-4e21-9e78-8feaf9fc1e9a-combined-ca-bundle\") pod \"heat-db-sync-54t9z\" (UID: \"b293abfb-6014-4e21-9e78-8feaf9fc1e9a\") " pod="openstack/heat-db-sync-54t9z" Nov 28 08:58:32 crc kubenswrapper[4946]: I1128 08:58:32.217270 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b293abfb-6014-4e21-9e78-8feaf9fc1e9a-combined-ca-bundle\") pod \"heat-db-sync-54t9z\" (UID: \"b293abfb-6014-4e21-9e78-8feaf9fc1e9a\") " pod="openstack/heat-db-sync-54t9z" Nov 28 08:58:32 crc kubenswrapper[4946]: I1128 08:58:32.217737 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b293abfb-6014-4e21-9e78-8feaf9fc1e9a-config-data\") pod \"heat-db-sync-54t9z\" (UID: \"b293abfb-6014-4e21-9e78-8feaf9fc1e9a\") " pod="openstack/heat-db-sync-54t9z" Nov 28 08:58:32 crc kubenswrapper[4946]: I1128 08:58:32.233700 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2pxn\" (UniqueName: \"kubernetes.io/projected/b293abfb-6014-4e21-9e78-8feaf9fc1e9a-kube-api-access-k2pxn\") pod \"heat-db-sync-54t9z\" (UID: \"b293abfb-6014-4e21-9e78-8feaf9fc1e9a\") " pod="openstack/heat-db-sync-54t9z" Nov 28 08:58:32 crc kubenswrapper[4946]: I1128 08:58:32.324525 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-54t9z" Nov 28 08:58:32 crc kubenswrapper[4946]: I1128 08:58:32.814511 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-54t9z"] Nov 28 08:58:33 crc kubenswrapper[4946]: I1128 08:58:33.382234 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-54t9z" event={"ID":"b293abfb-6014-4e21-9e78-8feaf9fc1e9a","Type":"ContainerStarted","Data":"ab20f748744b4bb442a892d50e6aa12cea66a31b2142bb261c6367dedd8b6fd9"} Nov 28 08:58:35 crc kubenswrapper[4946]: I1128 08:58:35.852376 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-f6dfdcfff-hv9rg" Nov 28 08:58:35 crc kubenswrapper[4946]: I1128 08:58:35.853651 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-f6dfdcfff-hv9rg" Nov 28 08:58:38 crc kubenswrapper[4946]: I1128 08:58:38.990537 4946 scope.go:117] "RemoveContainer" containerID="39cf365640a94edb527a038b876ca944a2f17e9a393ff02023ee1ab525b0e376" Nov 28 08:58:38 crc kubenswrapper[4946]: E1128 08:58:38.990957 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:58:42 crc kubenswrapper[4946]: I1128 08:58:42.507792 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-54t9z" event={"ID":"b293abfb-6014-4e21-9e78-8feaf9fc1e9a","Type":"ContainerStarted","Data":"f15407ec2e722978c2af1347446c35e5edf1066bae5dd69fc797a8cee128483f"} Nov 28 08:58:42 crc kubenswrapper[4946]: I1128 08:58:42.532292 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-54t9z" podStartSLOduration=2.875817719 podStartE2EDuration="11.532266967s" podCreationTimestamp="2025-11-28 08:58:31 +0000 UTC" firstStartedPulling="2025-11-28 08:58:32.821398886 +0000 UTC m=+7567.199463997" lastFinishedPulling="2025-11-28 08:58:41.477848124 +0000 UTC m=+7575.855913245" observedRunningTime="2025-11-28 08:58:42.52669903 +0000 UTC m=+7576.904764171" watchObservedRunningTime="2025-11-28 08:58:42.532266967 +0000 UTC m=+7576.910332118" Nov 28 08:58:44 crc kubenswrapper[4946]: I1128 08:58:44.546538 4946 generic.go:334] "Generic (PLEG): container finished" podID="b293abfb-6014-4e21-9e78-8feaf9fc1e9a" containerID="f15407ec2e722978c2af1347446c35e5edf1066bae5dd69fc797a8cee128483f" exitCode=0 Nov 28 08:58:44 crc kubenswrapper[4946]: I1128 08:58:44.546654 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-54t9z" event={"ID":"b293abfb-6014-4e21-9e78-8feaf9fc1e9a","Type":"ContainerDied","Data":"f15407ec2e722978c2af1347446c35e5edf1066bae5dd69fc797a8cee128483f"} Nov 28 08:58:46 crc kubenswrapper[4946]: I1128 08:58:46.003349 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-54t9z" Nov 28 08:58:46 crc kubenswrapper[4946]: I1128 08:58:46.086018 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2pxn\" (UniqueName: \"kubernetes.io/projected/b293abfb-6014-4e21-9e78-8feaf9fc1e9a-kube-api-access-k2pxn\") pod \"b293abfb-6014-4e21-9e78-8feaf9fc1e9a\" (UID: \"b293abfb-6014-4e21-9e78-8feaf9fc1e9a\") " Nov 28 08:58:46 crc kubenswrapper[4946]: I1128 08:58:46.086082 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b293abfb-6014-4e21-9e78-8feaf9fc1e9a-combined-ca-bundle\") pod \"b293abfb-6014-4e21-9e78-8feaf9fc1e9a\" (UID: \"b293abfb-6014-4e21-9e78-8feaf9fc1e9a\") " Nov 28 08:58:46 crc kubenswrapper[4946]: I1128 08:58:46.086223 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b293abfb-6014-4e21-9e78-8feaf9fc1e9a-config-data\") pod \"b293abfb-6014-4e21-9e78-8feaf9fc1e9a\" (UID: \"b293abfb-6014-4e21-9e78-8feaf9fc1e9a\") " Nov 28 08:58:46 crc kubenswrapper[4946]: I1128 08:58:46.100004 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b293abfb-6014-4e21-9e78-8feaf9fc1e9a-kube-api-access-k2pxn" (OuterVolumeSpecName: "kube-api-access-k2pxn") pod "b293abfb-6014-4e21-9e78-8feaf9fc1e9a" (UID: "b293abfb-6014-4e21-9e78-8feaf9fc1e9a"). InnerVolumeSpecName "kube-api-access-k2pxn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 08:58:46 crc kubenswrapper[4946]: I1128 08:58:46.135185 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b293abfb-6014-4e21-9e78-8feaf9fc1e9a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b293abfb-6014-4e21-9e78-8feaf9fc1e9a" (UID: "b293abfb-6014-4e21-9e78-8feaf9fc1e9a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:58:46 crc kubenswrapper[4946]: I1128 08:58:46.187287 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b293abfb-6014-4e21-9e78-8feaf9fc1e9a-config-data" (OuterVolumeSpecName: "config-data") pod "b293abfb-6014-4e21-9e78-8feaf9fc1e9a" (UID: "b293abfb-6014-4e21-9e78-8feaf9fc1e9a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 08:58:46 crc kubenswrapper[4946]: I1128 08:58:46.187896 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2pxn\" (UniqueName: \"kubernetes.io/projected/b293abfb-6014-4e21-9e78-8feaf9fc1e9a-kube-api-access-k2pxn\") on node \"crc\" DevicePath \"\"" Nov 28 08:58:46 crc kubenswrapper[4946]: I1128 08:58:46.187922 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b293abfb-6014-4e21-9e78-8feaf9fc1e9a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 08:58:46 crc kubenswrapper[4946]: I1128 08:58:46.187934 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b293abfb-6014-4e21-9e78-8feaf9fc1e9a-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 08:58:46 crc kubenswrapper[4946]: I1128 08:58:46.571001 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-54t9z" event={"ID":"b293abfb-6014-4e21-9e78-8feaf9fc1e9a","Type":"ContainerDied","Data":"ab20f748744b4bb442a892d50e6aa12cea66a31b2142bb261c6367dedd8b6fd9"} Nov 28 08:58:46 crc kubenswrapper[4946]: I1128 08:58:46.571493 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab20f748744b4bb442a892d50e6aa12cea66a31b2142bb261c6367dedd8b6fd9" Nov 28 08:58:46 crc kubenswrapper[4946]: I1128 08:58:46.571098 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-54t9z" Nov 28 08:58:47 crc kubenswrapper[4946]: I1128 08:58:47.570373 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-f6dfdcfff-hv9rg" Nov 28 08:58:47 crc kubenswrapper[4946]: I1128 08:58:47.876369 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-5b8b49c7cc-tdcbd"] Nov 28 08:58:47 crc kubenswrapper[4946]: E1128 08:58:47.884250 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b293abfb-6014-4e21-9e78-8feaf9fc1e9a" containerName="heat-db-sync" Nov 28 08:58:47 crc kubenswrapper[4946]: I1128 08:58:47.884294 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="b293abfb-6014-4e21-9e78-8feaf9fc1e9a" containerName="heat-db-sync" Nov 28 08:58:47 crc kubenswrapper[4946]: I1128 08:58:47.884873 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="b293abfb-6014-4e21-9e78-8feaf9fc1e9a" containerName="heat-db-sync" Nov 28 08:58:47 crc kubenswrapper[4946]: I1128 08:58:47.886143 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-5b8b49c7cc-tdcbd" Nov 28 08:58:47 crc kubenswrapper[4946]: I1128 08:58:47.906489 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Nov 28 08:58:47 crc kubenswrapper[4946]: I1128 08:58:47.906905 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-5947b" Nov 28 08:58:47 crc kubenswrapper[4946]: I1128 08:58:47.911365 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Nov 28 08:58:47 crc kubenswrapper[4946]: I1128 08:58:47.942862 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5b8b49c7cc-tdcbd"] Nov 28 08:58:48 crc kubenswrapper[4946]: I1128 08:58:48.023893 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81f69514-2093-4cb7-a7aa-62ba6da948fa-config-data\") pod \"heat-engine-5b8b49c7cc-tdcbd\" (UID: \"81f69514-2093-4cb7-a7aa-62ba6da948fa\") " pod="openstack/heat-engine-5b8b49c7cc-tdcbd" Nov 28 08:58:48 crc kubenswrapper[4946]: I1128 08:58:48.024176 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81f69514-2093-4cb7-a7aa-62ba6da948fa-combined-ca-bundle\") pod \"heat-engine-5b8b49c7cc-tdcbd\" (UID: \"81f69514-2093-4cb7-a7aa-62ba6da948fa\") " pod="openstack/heat-engine-5b8b49c7cc-tdcbd" Nov 28 08:58:48 crc kubenswrapper[4946]: I1128 08:58:48.024276 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx9kc\" (UniqueName: \"kubernetes.io/projected/81f69514-2093-4cb7-a7aa-62ba6da948fa-kube-api-access-bx9kc\") pod \"heat-engine-5b8b49c7cc-tdcbd\" (UID: \"81f69514-2093-4cb7-a7aa-62ba6da948fa\") " pod="openstack/heat-engine-5b8b49c7cc-tdcbd" Nov 28 08:58:48 crc kubenswrapper[4946]: I1128 08:58:48.024399 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/81f69514-2093-4cb7-a7aa-62ba6da948fa-config-data-custom\") pod \"heat-engine-5b8b49c7cc-tdcbd\" (UID: \"81f69514-2093-4cb7-a7aa-62ba6da948fa\") " pod="openstack/heat-engine-5b8b49c7cc-tdcbd" Nov 28 08:58:48 crc kubenswrapper[4946]: I1128 08:58:48.102177 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6789f5c976-5lgn4"] Nov 28 08:58:48 crc kubenswrapper[4946]: I1128 08:58:48.103839 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6789f5c976-5lgn4" Nov 28 08:58:48 crc kubenswrapper[4946]: I1128 08:58:48.108762 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Nov 28 08:58:48 crc kubenswrapper[4946]: I1128 08:58:48.113223 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-646c654b76-5l7fn"] Nov 28 08:58:48 crc kubenswrapper[4946]: I1128 08:58:48.115115 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-646c654b76-5l7fn" Nov 28 08:58:48 crc kubenswrapper[4946]: I1128 08:58:48.117661 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Nov 28 08:58:48 crc kubenswrapper[4946]: I1128 08:58:48.128206 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81f69514-2093-4cb7-a7aa-62ba6da948fa-config-data\") pod \"heat-engine-5b8b49c7cc-tdcbd\" (UID: \"81f69514-2093-4cb7-a7aa-62ba6da948fa\") " pod="openstack/heat-engine-5b8b49c7cc-tdcbd" Nov 28 08:58:48 crc kubenswrapper[4946]: I1128 08:58:48.128299 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81f69514-2093-4cb7-a7aa-62ba6da948fa-combined-ca-bundle\") pod \"heat-engine-5b8b49c7cc-tdcbd\" (UID: \"81f69514-2093-4cb7-a7aa-62ba6da948fa\") " pod="openstack/heat-engine-5b8b49c7cc-tdcbd" Nov 28 08:58:48 crc kubenswrapper[4946]: I1128 08:58:48.128317 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bx9kc\" (UniqueName: \"kubernetes.io/projected/81f69514-2093-4cb7-a7aa-62ba6da948fa-kube-api-access-bx9kc\") pod \"heat-engine-5b8b49c7cc-tdcbd\" (UID: \"81f69514-2093-4cb7-a7aa-62ba6da948fa\") " pod="openstack/heat-engine-5b8b49c7cc-tdcbd" Nov 28 08:58:48 crc kubenswrapper[4946]: I1128 08:58:48.128355 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/81f69514-2093-4cb7-a7aa-62ba6da948fa-config-data-custom\") pod \"heat-engine-5b8b49c7cc-tdcbd\" (UID: \"81f69514-2093-4cb7-a7aa-62ba6da948fa\") " pod="openstack/heat-engine-5b8b49c7cc-tdcbd" Nov 28 08:58:48 crc kubenswrapper[4946]: I1128 08:58:48.128511 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6789f5c976-5lgn4"] Nov 28 08:58:48 crc kubenswrapper[4946]: I1128 08:58:48.139151 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81f69514-2093-4cb7-a7aa-62ba6da948fa-combined-ca-bundle\") pod \"heat-engine-5b8b49c7cc-tdcbd\" (UID: \"81f69514-2093-4cb7-a7aa-62ba6da948fa\") " pod="openstack/heat-engine-5b8b49c7cc-tdcbd" Nov 28 08:58:48 crc kubenswrapper[4946]: I1128 08:58:48.139528 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81f69514-2093-4cb7-a7aa-62ba6da948fa-config-data\") pod \"heat-engine-5b8b49c7cc-tdcbd\" (UID: \"81f69514-2093-4cb7-a7aa-62ba6da948fa\") " pod="openstack/heat-engine-5b8b49c7cc-tdcbd" Nov 28 08:58:48 crc kubenswrapper[4946]: I1128 08:58:48.141631 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-646c654b76-5l7fn"] Nov 28 08:58:48 crc kubenswrapper[4946]: I1128 08:58:48.145369 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/81f69514-2093-4cb7-a7aa-62ba6da948fa-config-data-custom\") pod \"heat-engine-5b8b49c7cc-tdcbd\" (UID: \"81f69514-2093-4cb7-a7aa-62ba6da948fa\") " pod="openstack/heat-engine-5b8b49c7cc-tdcbd" Nov 28 08:58:48 crc kubenswrapper[4946]: I1128 08:58:48.157182 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bx9kc\" (UniqueName: \"kubernetes.io/projected/81f69514-2093-4cb7-a7aa-62ba6da948fa-kube-api-access-bx9kc\") pod 
\"heat-engine-5b8b49c7cc-tdcbd\" (UID: \"81f69514-2093-4cb7-a7aa-62ba6da948fa\") " pod="openstack/heat-engine-5b8b49c7cc-tdcbd" Nov 28 08:58:48 crc kubenswrapper[4946]: I1128 08:58:48.230574 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3961bc3-826e-42e6-96cf-4d6c454dde1e-config-data\") pod \"heat-api-6789f5c976-5lgn4\" (UID: \"f3961bc3-826e-42e6-96cf-4d6c454dde1e\") " pod="openstack/heat-api-6789f5c976-5lgn4" Nov 28 08:58:48 crc kubenswrapper[4946]: I1128 08:58:48.230672 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc1c1a72-5fe7-467b-8309-61664e19d710-config-data\") pod \"heat-cfnapi-646c654b76-5l7fn\" (UID: \"dc1c1a72-5fe7-467b-8309-61664e19d710\") " pod="openstack/heat-cfnapi-646c654b76-5l7fn" Nov 28 08:58:48 crc kubenswrapper[4946]: I1128 08:58:48.230693 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3961bc3-826e-42e6-96cf-4d6c454dde1e-config-data-custom\") pod \"heat-api-6789f5c976-5lgn4\" (UID: \"f3961bc3-826e-42e6-96cf-4d6c454dde1e\") " pod="openstack/heat-api-6789f5c976-5lgn4" Nov 28 08:58:48 crc kubenswrapper[4946]: I1128 08:58:48.230714 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc1c1a72-5fe7-467b-8309-61664e19d710-combined-ca-bundle\") pod \"heat-cfnapi-646c654b76-5l7fn\" (UID: \"dc1c1a72-5fe7-467b-8309-61664e19d710\") " pod="openstack/heat-cfnapi-646c654b76-5l7fn" Nov 28 08:58:48 crc kubenswrapper[4946]: I1128 08:58:48.230737 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dc1c1a72-5fe7-467b-8309-61664e19d710-config-data-custom\") pod \"heat-cfnapi-646c654b76-5l7fn\" (UID: \"dc1c1a72-5fe7-467b-8309-61664e19d710\") " pod="openstack/heat-cfnapi-646c654b76-5l7fn" Nov 28 08:58:48 crc kubenswrapper[4946]: I1128 08:58:48.230752 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vttjb\" (UniqueName: \"kubernetes.io/projected/dc1c1a72-5fe7-467b-8309-61664e19d710-kube-api-access-vttjb\") pod \"heat-cfnapi-646c654b76-5l7fn\" (UID: \"dc1c1a72-5fe7-467b-8309-61664e19d710\") " pod="openstack/heat-cfnapi-646c654b76-5l7fn" Nov 28 08:58:48 crc kubenswrapper[4946]: I1128 08:58:48.230785 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62m62\" (UniqueName: \"kubernetes.io/projected/f3961bc3-826e-42e6-96cf-4d6c454dde1e-kube-api-access-62m62\") pod \"heat-api-6789f5c976-5lgn4\" (UID: \"f3961bc3-826e-42e6-96cf-4d6c454dde1e\") " pod="openstack/heat-api-6789f5c976-5lgn4" Nov 28 08:58:48 crc kubenswrapper[4946]: I1128 08:58:48.230803 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3961bc3-826e-42e6-96cf-4d6c454dde1e-combined-ca-bundle\") pod \"heat-api-6789f5c976-5lgn4\" (UID: \"f3961bc3-826e-42e6-96cf-4d6c454dde1e\") " pod="openstack/heat-api-6789f5c976-5lgn4" Nov 28 08:58:48 crc kubenswrapper[4946]: I1128 08:58:48.244192 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-5b8b49c7cc-tdcbd" Nov 28 08:58:48 crc kubenswrapper[4946]: I1128 08:58:48.332907 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc1c1a72-5fe7-467b-8309-61664e19d710-config-data\") pod \"heat-cfnapi-646c654b76-5l7fn\" (UID: \"dc1c1a72-5fe7-467b-8309-61664e19d710\") " pod="openstack/heat-cfnapi-646c654b76-5l7fn" Nov 28 08:58:48 crc kubenswrapper[4946]: I1128 08:58:48.332951 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3961bc3-826e-42e6-96cf-4d6c454dde1e-config-data-custom\") pod \"heat-api-6789f5c976-5lgn4\" (UID: \"f3961bc3-826e-42e6-96cf-4d6c454dde1e\") " pod="openstack/heat-api-6789f5c976-5lgn4" Nov 28 08:58:48 crc kubenswrapper[4946]: I1128 08:58:48.332971 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc1c1a72-5fe7-467b-8309-61664e19d710-combined-ca-bundle\") pod \"heat-cfnapi-646c654b76-5l7fn\" (UID: \"dc1c1a72-5fe7-467b-8309-61664e19d710\") " pod="openstack/heat-cfnapi-646c654b76-5l7fn" Nov 28 08:58:48 crc kubenswrapper[4946]: I1128 08:58:48.332999 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dc1c1a72-5fe7-467b-8309-61664e19d710-config-data-custom\") pod \"heat-cfnapi-646c654b76-5l7fn\" (UID: \"dc1c1a72-5fe7-467b-8309-61664e19d710\") " pod="openstack/heat-cfnapi-646c654b76-5l7fn" Nov 28 08:58:48 crc kubenswrapper[4946]: I1128 08:58:48.333014 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vttjb\" (UniqueName: \"kubernetes.io/projected/dc1c1a72-5fe7-467b-8309-61664e19d710-kube-api-access-vttjb\") pod \"heat-cfnapi-646c654b76-5l7fn\" (UID: \"dc1c1a72-5fe7-467b-8309-61664e19d710\") " pod="openstack/heat-cfnapi-646c654b76-5l7fn" Nov 28 08:58:48 crc kubenswrapper[4946]: I1128 08:58:48.333050 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62m62\" (UniqueName: \"kubernetes.io/projected/f3961bc3-826e-42e6-96cf-4d6c454dde1e-kube-api-access-62m62\") pod \"heat-api-6789f5c976-5lgn4\" (UID: \"f3961bc3-826e-42e6-96cf-4d6c454dde1e\") " pod="openstack/heat-api-6789f5c976-5lgn4" Nov 28 08:58:48 crc kubenswrapper[4946]: I1128 08:58:48.333069 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3961bc3-826e-42e6-96cf-4d6c454dde1e-combined-ca-bundle\") pod \"heat-api-6789f5c976-5lgn4\" (UID: \"f3961bc3-826e-42e6-96cf-4d6c454dde1e\") " pod="openstack/heat-api-6789f5c976-5lgn4" Nov 28 08:58:48 crc kubenswrapper[4946]: I1128 08:58:48.333136 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3961bc3-826e-42e6-96cf-4d6c454dde1e-config-data\") pod \"heat-api-6789f5c976-5lgn4\" (UID: \"f3961bc3-826e-42e6-96cf-4d6c454dde1e\") " pod="openstack/heat-api-6789f5c976-5lgn4" Nov 28 08:58:48 crc kubenswrapper[4946]: I1128 08:58:48.341948 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc1c1a72-5fe7-467b-8309-61664e19d710-combined-ca-bundle\") pod \"heat-cfnapi-646c654b76-5l7fn\" (UID: \"dc1c1a72-5fe7-467b-8309-61664e19d710\") " 
pod="openstack/heat-cfnapi-646c654b76-5l7fn" Nov 28 08:58:48 crc kubenswrapper[4946]: I1128 08:58:48.345588 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3961bc3-826e-42e6-96cf-4d6c454dde1e-combined-ca-bundle\") pod \"heat-api-6789f5c976-5lgn4\" (UID: \"f3961bc3-826e-42e6-96cf-4d6c454dde1e\") " pod="openstack/heat-api-6789f5c976-5lgn4" Nov 28 08:58:48 crc kubenswrapper[4946]: I1128 08:58:48.348218 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dc1c1a72-5fe7-467b-8309-61664e19d710-config-data-custom\") pod \"heat-cfnapi-646c654b76-5l7fn\" (UID: \"dc1c1a72-5fe7-467b-8309-61664e19d710\") " pod="openstack/heat-cfnapi-646c654b76-5l7fn" Nov 28 08:58:48 crc kubenswrapper[4946]: I1128 08:58:48.348795 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3961bc3-826e-42e6-96cf-4d6c454dde1e-config-data-custom\") pod \"heat-api-6789f5c976-5lgn4\" (UID: \"f3961bc3-826e-42e6-96cf-4d6c454dde1e\") " pod="openstack/heat-api-6789f5c976-5lgn4" Nov 28 08:58:48 crc kubenswrapper[4946]: I1128 08:58:48.350187 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3961bc3-826e-42e6-96cf-4d6c454dde1e-config-data\") pod \"heat-api-6789f5c976-5lgn4\" (UID: \"f3961bc3-826e-42e6-96cf-4d6c454dde1e\") " pod="openstack/heat-api-6789f5c976-5lgn4" Nov 28 08:58:48 crc kubenswrapper[4946]: I1128 08:58:48.364712 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc1c1a72-5fe7-467b-8309-61664e19d710-config-data\") pod \"heat-cfnapi-646c654b76-5l7fn\" (UID: \"dc1c1a72-5fe7-467b-8309-61664e19d710\") " pod="openstack/heat-cfnapi-646c654b76-5l7fn" Nov 28 08:58:48 crc kubenswrapper[4946]: I1128 08:58:48.375255 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62m62\" (UniqueName: \"kubernetes.io/projected/f3961bc3-826e-42e6-96cf-4d6c454dde1e-kube-api-access-62m62\") pod \"heat-api-6789f5c976-5lgn4\" (UID: \"f3961bc3-826e-42e6-96cf-4d6c454dde1e\") " pod="openstack/heat-api-6789f5c976-5lgn4" Nov 28 08:58:48 crc kubenswrapper[4946]: I1128 08:58:48.380120 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vttjb\" (UniqueName: \"kubernetes.io/projected/dc1c1a72-5fe7-467b-8309-61664e19d710-kube-api-access-vttjb\") pod \"heat-cfnapi-646c654b76-5l7fn\" (UID: \"dc1c1a72-5fe7-467b-8309-61664e19d710\") " pod="openstack/heat-cfnapi-646c654b76-5l7fn" Nov 28 08:58:48 crc kubenswrapper[4946]: I1128 08:58:48.429886 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6789f5c976-5lgn4" Nov 28 08:58:48 crc kubenswrapper[4946]: I1128 08:58:48.518949 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-646c654b76-5l7fn" Nov 28 08:58:48 crc kubenswrapper[4946]: I1128 08:58:48.875967 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5b8b49c7cc-tdcbd"] Nov 28 08:58:49 crc kubenswrapper[4946]: I1128 08:58:49.015190 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6789f5c976-5lgn4"] Nov 28 08:58:49 crc kubenswrapper[4946]: W1128 08:58:49.021635 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3961bc3_826e_42e6_96cf_4d6c454dde1e.slice/crio-59ab45900b489b9dbd8e381315d539902059a77793b330213e15a9e4ef065f28 WatchSource:0}: Error finding container 59ab45900b489b9dbd8e381315d539902059a77793b330213e15a9e4ef065f28: Status 404 returned error can't find the container with id 59ab45900b489b9dbd8e381315d539902059a77793b330213e15a9e4ef065f28 Nov 28 08:58:49 crc kubenswrapper[4946]: W1128 08:58:49.108086 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc1c1a72_5fe7_467b_8309_61664e19d710.slice/crio-5614de0a50e4f123bc37f10d198f1aa12318e5499e6c063f1be6b437675e9a0d WatchSource:0}: Error finding container 5614de0a50e4f123bc37f10d198f1aa12318e5499e6c063f1be6b437675e9a0d: Status 404 returned error can't find the container with id 5614de0a50e4f123bc37f10d198f1aa12318e5499e6c063f1be6b437675e9a0d Nov 28 08:58:49 crc kubenswrapper[4946]: I1128 08:58:49.114563 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-646c654b76-5l7fn"] Nov 28 08:58:49 crc kubenswrapper[4946]: I1128 08:58:49.603282 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-f6dfdcfff-hv9rg" Nov 28 08:58:49 crc kubenswrapper[4946]: I1128 08:58:49.618806 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-646c654b76-5l7fn" event={"ID":"dc1c1a72-5fe7-467b-8309-61664e19d710","Type":"ContainerStarted","Data":"5614de0a50e4f123bc37f10d198f1aa12318e5499e6c063f1be6b437675e9a0d"} Nov 28 08:58:49 crc kubenswrapper[4946]: I1128 08:58:49.629089 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5b8b49c7cc-tdcbd" event={"ID":"81f69514-2093-4cb7-a7aa-62ba6da948fa","Type":"ContainerStarted","Data":"f6731eb330e2791afa97dbbcee8b421e3ea71faf4d430f105ca641bf400e5446"} Nov 28 08:58:49 crc kubenswrapper[4946]: I1128 08:58:49.629143 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5b8b49c7cc-tdcbd" event={"ID":"81f69514-2093-4cb7-a7aa-62ba6da948fa","Type":"ContainerStarted","Data":"accffb8268f07ae447da23dcda93ef10c5cd599399371418072b07b8039d26e8"} Nov 28 08:58:49 crc kubenswrapper[4946]: I1128 08:58:49.629268 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-5b8b49c7cc-tdcbd" Nov 28 08:58:49 crc kubenswrapper[4946]: I1128 08:58:49.653388 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-5b8b49c7cc-tdcbd" podStartSLOduration=2.653366681 podStartE2EDuration="2.653366681s" podCreationTimestamp="2025-11-28 08:58:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 08:58:49.646694075 +0000 UTC m=+7584.024759186" watchObservedRunningTime="2025-11-28 08:58:49.653366681 +0000 UTC m=+7584.031431792" Nov 28 08:58:49 crc kubenswrapper[4946]: 
I1128 08:58:49.674211 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5bbf848689-ld46g"] Nov 28 08:58:49 crc kubenswrapper[4946]: I1128 08:58:49.674545 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5bbf848689-ld46g" podUID="a4155e7d-1a52-46b9-9838-aa62c08bd7ad" containerName="horizon-log" containerID="cri-o://df8232c59465a89afd50b4e76d3957d06b45d0fd65568cdce416357145a56155" gracePeriod=30 Nov 28 08:58:49 crc kubenswrapper[4946]: I1128 08:58:49.674867 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6789f5c976-5lgn4" event={"ID":"f3961bc3-826e-42e6-96cf-4d6c454dde1e","Type":"ContainerStarted","Data":"59ab45900b489b9dbd8e381315d539902059a77793b330213e15a9e4ef065f28"} Nov 28 08:58:49 crc kubenswrapper[4946]: I1128 08:58:49.674954 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5bbf848689-ld46g" podUID="a4155e7d-1a52-46b9-9838-aa62c08bd7ad" containerName="horizon" containerID="cri-o://dea563b7c08c9a9b005851d544f580aa601b6a7b489769b8fe41d1f0389abe00" gracePeriod=30 Nov 28 08:58:51 crc kubenswrapper[4946]: I1128 08:58:51.693564 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-646c654b76-5l7fn" event={"ID":"dc1c1a72-5fe7-467b-8309-61664e19d710","Type":"ContainerStarted","Data":"f3b5953cb27cf1b9edea96e326b9601477b183569d86d33028fc410cbaf7d634"} Nov 28 08:58:51 crc kubenswrapper[4946]: I1128 08:58:51.694143 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-646c654b76-5l7fn" Nov 28 08:58:51 crc kubenswrapper[4946]: I1128 08:58:51.695100 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6789f5c976-5lgn4" event={"ID":"f3961bc3-826e-42e6-96cf-4d6c454dde1e","Type":"ContainerStarted","Data":"d802a5207dd017e1e07bb7141473148cd7af30ba4d4239bb7f25c6a579d84b4e"} Nov 28 08:58:51 crc kubenswrapper[4946]: I1128 08:58:51.695516 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6789f5c976-5lgn4" Nov 28 08:58:51 crc kubenswrapper[4946]: I1128 08:58:51.734867 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-646c654b76-5l7fn" podStartSLOduration=2.273731027 podStartE2EDuration="3.734851041s" podCreationTimestamp="2025-11-28 08:58:48 +0000 UTC" firstStartedPulling="2025-11-28 08:58:49.110333713 +0000 UTC m=+7583.488398824" lastFinishedPulling="2025-11-28 08:58:50.571453727 +0000 UTC m=+7584.949518838" observedRunningTime="2025-11-28 08:58:51.718103276 +0000 UTC m=+7586.096168397" watchObservedRunningTime="2025-11-28 08:58:51.734851041 +0000 UTC m=+7586.112916152" Nov 28 08:58:51 crc kubenswrapper[4946]: I1128 08:58:51.736752 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-6789f5c976-5lgn4" podStartSLOduration=2.191454639 podStartE2EDuration="3.736743068s" podCreationTimestamp="2025-11-28 08:58:48 +0000 UTC" firstStartedPulling="2025-11-28 08:58:49.023485002 +0000 UTC m=+7583.401550113" lastFinishedPulling="2025-11-28 08:58:50.568773431 +0000 UTC m=+7584.946838542" observedRunningTime="2025-11-28 08:58:51.733966229 +0000 UTC m=+7586.112031340" watchObservedRunningTime="2025-11-28 08:58:51.736743068 +0000 UTC m=+7586.114808179" Nov 28 08:58:51 crc kubenswrapper[4946]: I1128 08:58:51.990329 4946 scope.go:117] "RemoveContainer" containerID="39cf365640a94edb527a038b876ca944a2f17e9a393ff02023ee1ab525b0e376" Nov 28 
08:58:51 crc kubenswrapper[4946]: E1128 08:58:51.990582 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:58:53 crc kubenswrapper[4946]: I1128 08:58:53.725495 4946 generic.go:334] "Generic (PLEG): container finished" podID="a4155e7d-1a52-46b9-9838-aa62c08bd7ad" containerID="dea563b7c08c9a9b005851d544f580aa601b6a7b489769b8fe41d1f0389abe00" exitCode=0 Nov 28 08:58:53 crc kubenswrapper[4946]: I1128 08:58:53.725614 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5bbf848689-ld46g" event={"ID":"a4155e7d-1a52-46b9-9838-aa62c08bd7ad","Type":"ContainerDied","Data":"dea563b7c08c9a9b005851d544f580aa601b6a7b489769b8fe41d1f0389abe00"} Nov 28 08:58:55 crc kubenswrapper[4946]: I1128 08:58:55.067185 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-6601-account-create-update-6gc7l"] Nov 28 08:58:55 crc kubenswrapper[4946]: I1128 08:58:55.098939 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-kswft"] Nov 28 08:58:55 crc kubenswrapper[4946]: I1128 08:58:55.098998 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-kswft"] Nov 28 08:58:55 crc kubenswrapper[4946]: I1128 08:58:55.110412 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-6601-account-create-update-6gc7l"] Nov 28 08:58:56 crc kubenswrapper[4946]: I1128 08:58:56.009381 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23efb7b3-ab4a-4127-9b80-2475de4c5c17" path="/var/lib/kubelet/pods/23efb7b3-ab4a-4127-9b80-2475de4c5c17/volumes" Nov 28 08:58:56 crc kubenswrapper[4946]: I1128 08:58:56.010743 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e99b54-40de-4764-a18b-ecccd01a9887" path="/var/lib/kubelet/pods/d7e99b54-40de-4764-a18b-ecccd01a9887/volumes" Nov 28 08:58:58 crc kubenswrapper[4946]: I1128 08:58:58.974259 4946 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5bbf848689-ld46g" podUID="a4155e7d-1a52-46b9-9838-aa62c08bd7ad" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.101:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.101:8080: connect: connection refused" Nov 28 08:58:59 crc kubenswrapper[4946]: I1128 08:58:59.726709 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-6789f5c976-5lgn4" Nov 28 08:58:59 crc kubenswrapper[4946]: I1128 08:58:59.805807 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-646c654b76-5l7fn" Nov 28 08:59:06 crc kubenswrapper[4946]: I1128 08:59:06.006279 4946 scope.go:117] "RemoveContainer" containerID="39cf365640a94edb527a038b876ca944a2f17e9a393ff02023ee1ab525b0e376" Nov 28 08:59:06 crc kubenswrapper[4946]: E1128 08:59:06.007637 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:59:07 crc kubenswrapper[4946]: I1128 08:59:07.051689 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-qhf5g"] Nov 28 08:59:07 crc kubenswrapper[4946]: I1128 08:59:07.072567 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-qhf5g"] Nov 28 08:59:08 crc kubenswrapper[4946]: I1128 08:59:08.010788 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecd158f2-cbc1-4965-bcbf-68d0ab35afaf" path="/var/lib/kubelet/pods/ecd158f2-cbc1-4965-bcbf-68d0ab35afaf/volumes" Nov 28 08:59:08 crc kubenswrapper[4946]: I1128 08:59:08.301668 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-5b8b49c7cc-tdcbd" Nov 28 08:59:08 crc kubenswrapper[4946]: I1128 08:59:08.974106 4946 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5bbf848689-ld46g" podUID="a4155e7d-1a52-46b9-9838-aa62c08bd7ad" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.101:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.101:8080: connect: connection refused" Nov 28 08:59:17 crc kubenswrapper[4946]: I1128 08:59:17.990431 4946 scope.go:117] "RemoveContainer" containerID="39cf365640a94edb527a038b876ca944a2f17e9a393ff02023ee1ab525b0e376" Nov 28 08:59:17 crc kubenswrapper[4946]: E1128 08:59:17.991122 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:59:18 crc kubenswrapper[4946]: I1128 08:59:18.759941 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210z28nq"] Nov 28 08:59:18 crc kubenswrapper[4946]: I1128 08:59:18.762294 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210z28nq" Nov 28 08:59:18 crc kubenswrapper[4946]: I1128 08:59:18.764583 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 28 08:59:18 crc kubenswrapper[4946]: I1128 08:59:18.792210 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210z28nq"] Nov 28 08:59:18 crc kubenswrapper[4946]: I1128 08:59:18.826617 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ab2ed536-9dcd-49fc-be72-0bc6232e2bdb-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210z28nq\" (UID: \"ab2ed536-9dcd-49fc-be72-0bc6232e2bdb\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210z28nq" Nov 28 08:59:18 crc kubenswrapper[4946]: I1128 08:59:18.826712 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ab2ed536-9dcd-49fc-be72-0bc6232e2bdb-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210z28nq\" (UID: \"ab2ed536-9dcd-49fc-be72-0bc6232e2bdb\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210z28nq" Nov 28 08:59:18 crc kubenswrapper[4946]: I1128 08:59:18.826774 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28rvt\" (UniqueName: \"kubernetes.io/projected/ab2ed536-9dcd-49fc-be72-0bc6232e2bdb-kube-api-access-28rvt\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210z28nq\" (UID: \"ab2ed536-9dcd-49fc-be72-0bc6232e2bdb\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210z28nq" Nov 28 08:59:18 crc kubenswrapper[4946]: I1128 08:59:18.928890 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ab2ed536-9dcd-49fc-be72-0bc6232e2bdb-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210z28nq\" (UID: \"ab2ed536-9dcd-49fc-be72-0bc6232e2bdb\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210z28nq" Nov 28 08:59:18 crc kubenswrapper[4946]: I1128 08:59:18.928932 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ab2ed536-9dcd-49fc-be72-0bc6232e2bdb-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210z28nq\" (UID: \"ab2ed536-9dcd-49fc-be72-0bc6232e2bdb\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210z28nq" Nov 28 08:59:18 crc kubenswrapper[4946]: I1128 08:59:18.928966 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28rvt\" (UniqueName: \"kubernetes.io/projected/ab2ed536-9dcd-49fc-be72-0bc6232e2bdb-kube-api-access-28rvt\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210z28nq\" (UID: \"ab2ed536-9dcd-49fc-be72-0bc6232e2bdb\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210z28nq" Nov 28 08:59:18 crc kubenswrapper[4946]: I1128 08:59:18.929808 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/ab2ed536-9dcd-49fc-be72-0bc6232e2bdb-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210z28nq\" (UID: \"ab2ed536-9dcd-49fc-be72-0bc6232e2bdb\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210z28nq" Nov 28 08:59:18 crc kubenswrapper[4946]: I1128 08:59:18.930040 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ab2ed536-9dcd-49fc-be72-0bc6232e2bdb-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210z28nq\" (UID: \"ab2ed536-9dcd-49fc-be72-0bc6232e2bdb\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210z28nq" Nov 28 08:59:18 crc kubenswrapper[4946]: I1128 08:59:18.953384 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28rvt\" (UniqueName: \"kubernetes.io/projected/ab2ed536-9dcd-49fc-be72-0bc6232e2bdb-kube-api-access-28rvt\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210z28nq\" (UID: \"ab2ed536-9dcd-49fc-be72-0bc6232e2bdb\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210z28nq" Nov 28 08:59:18 crc kubenswrapper[4946]: I1128 08:59:18.974159 4946 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5bbf848689-ld46g" podUID="a4155e7d-1a52-46b9-9838-aa62c08bd7ad" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.101:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.101:8080: connect: connection refused" Nov 28 08:59:18 crc kubenswrapper[4946]: I1128 08:59:18.974415 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5bbf848689-ld46g" Nov 28 08:59:19 crc kubenswrapper[4946]: I1128 08:59:19.102610 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210z28nq" Nov 28 08:59:19 crc kubenswrapper[4946]: I1128 08:59:19.641651 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210z28nq"] Nov 28 08:59:20 crc kubenswrapper[4946]: I1128 08:59:20.061520 4946 generic.go:334] "Generic (PLEG): container finished" podID="a4155e7d-1a52-46b9-9838-aa62c08bd7ad" containerID="df8232c59465a89afd50b4e76d3957d06b45d0fd65568cdce416357145a56155" exitCode=137 Nov 28 08:59:20 crc kubenswrapper[4946]: I1128 08:59:20.061587 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5bbf848689-ld46g" event={"ID":"a4155e7d-1a52-46b9-9838-aa62c08bd7ad","Type":"ContainerDied","Data":"df8232c59465a89afd50b4e76d3957d06b45d0fd65568cdce416357145a56155"} Nov 28 08:59:20 crc kubenswrapper[4946]: I1128 08:59:20.064099 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210z28nq" event={"ID":"ab2ed536-9dcd-49fc-be72-0bc6232e2bdb","Type":"ContainerStarted","Data":"ec682201806ab0fd9805a7bd77dbb0216817ad08f5ac1f496b1240978dbb28fd"} Nov 28 08:59:20 crc kubenswrapper[4946]: I1128 08:59:20.065407 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210z28nq" event={"ID":"ab2ed536-9dcd-49fc-be72-0bc6232e2bdb","Type":"ContainerStarted","Data":"923a13364c6c34707926aa1192b46a860a90d85aa7347db75c0ec338a1b0b22b"} Nov 28 08:59:20 crc kubenswrapper[4946]: I1128 08:59:20.165129 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5bbf848689-ld46g" Nov 28 08:59:20 crc kubenswrapper[4946]: I1128 08:59:20.256238 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a4155e7d-1a52-46b9-9838-aa62c08bd7ad-config-data\") pod \"a4155e7d-1a52-46b9-9838-aa62c08bd7ad\" (UID: \"a4155e7d-1a52-46b9-9838-aa62c08bd7ad\") " Nov 28 08:59:20 crc kubenswrapper[4946]: I1128 08:59:20.256418 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a4155e7d-1a52-46b9-9838-aa62c08bd7ad-horizon-secret-key\") pod \"a4155e7d-1a52-46b9-9838-aa62c08bd7ad\" (UID: \"a4155e7d-1a52-46b9-9838-aa62c08bd7ad\") " Nov 28 08:59:20 crc kubenswrapper[4946]: I1128 08:59:20.256463 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4155e7d-1a52-46b9-9838-aa62c08bd7ad-logs\") pod \"a4155e7d-1a52-46b9-9838-aa62c08bd7ad\" (UID: \"a4155e7d-1a52-46b9-9838-aa62c08bd7ad\") " Nov 28 08:59:20 crc kubenswrapper[4946]: I1128 08:59:20.256584 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a4155e7d-1a52-46b9-9838-aa62c08bd7ad-scripts\") pod \"a4155e7d-1a52-46b9-9838-aa62c08bd7ad\" (UID: \"a4155e7d-1a52-46b9-9838-aa62c08bd7ad\") " Nov 28 08:59:20 crc kubenswrapper[4946]: I1128 08:59:20.256648 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgr47\" (UniqueName: \"kubernetes.io/projected/a4155e7d-1a52-46b9-9838-aa62c08bd7ad-kube-api-access-bgr47\") pod \"a4155e7d-1a52-46b9-9838-aa62c08bd7ad\" (UID: \"a4155e7d-1a52-46b9-9838-aa62c08bd7ad\") " 
Nov 28 08:59:20 crc kubenswrapper[4946]: I1128 08:59:20.257196 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4155e7d-1a52-46b9-9838-aa62c08bd7ad-logs" (OuterVolumeSpecName: "logs") pod "a4155e7d-1a52-46b9-9838-aa62c08bd7ad" (UID: "a4155e7d-1a52-46b9-9838-aa62c08bd7ad"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 08:59:20 crc kubenswrapper[4946]: I1128 08:59:20.265638 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4155e7d-1a52-46b9-9838-aa62c08bd7ad-kube-api-access-bgr47" (OuterVolumeSpecName: "kube-api-access-bgr47") pod "a4155e7d-1a52-46b9-9838-aa62c08bd7ad" (UID: "a4155e7d-1a52-46b9-9838-aa62c08bd7ad"). InnerVolumeSpecName "kube-api-access-bgr47". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 08:59:20 crc kubenswrapper[4946]: I1128 08:59:20.267551 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4155e7d-1a52-46b9-9838-aa62c08bd7ad-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "a4155e7d-1a52-46b9-9838-aa62c08bd7ad" (UID: "a4155e7d-1a52-46b9-9838-aa62c08bd7ad"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 08:59:20 crc kubenswrapper[4946]: I1128 08:59:20.284046 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4155e7d-1a52-46b9-9838-aa62c08bd7ad-config-data" (OuterVolumeSpecName: "config-data") pod "a4155e7d-1a52-46b9-9838-aa62c08bd7ad" (UID: "a4155e7d-1a52-46b9-9838-aa62c08bd7ad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 08:59:20 crc kubenswrapper[4946]: I1128 08:59:20.292044 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4155e7d-1a52-46b9-9838-aa62c08bd7ad-scripts" (OuterVolumeSpecName: "scripts") pod "a4155e7d-1a52-46b9-9838-aa62c08bd7ad" (UID: "a4155e7d-1a52-46b9-9838-aa62c08bd7ad"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 08:59:20 crc kubenswrapper[4946]: I1128 08:59:20.360535 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a4155e7d-1a52-46b9-9838-aa62c08bd7ad-config-data\") on node \"crc\" DevicePath \"\""
Nov 28 08:59:20 crc kubenswrapper[4946]: I1128 08:59:20.360593 4946 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a4155e7d-1a52-46b9-9838-aa62c08bd7ad-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Nov 28 08:59:20 crc kubenswrapper[4946]: I1128 08:59:20.360609 4946 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4155e7d-1a52-46b9-9838-aa62c08bd7ad-logs\") on node \"crc\" DevicePath \"\""
Nov 28 08:59:20 crc kubenswrapper[4946]: I1128 08:59:20.360622 4946 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a4155e7d-1a52-46b9-9838-aa62c08bd7ad-scripts\") on node \"crc\" DevicePath \"\""
Nov 28 08:59:20 crc kubenswrapper[4946]: I1128 08:59:20.360638 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgr47\" (UniqueName: \"kubernetes.io/projected/a4155e7d-1a52-46b9-9838-aa62c08bd7ad-kube-api-access-bgr47\") on node \"crc\" DevicePath \"\""
Nov 28 08:59:21 crc kubenswrapper[4946]: I1128 08:59:21.080549 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5bbf848689-ld46g"
Nov 28 08:59:21 crc kubenswrapper[4946]: I1128 08:59:21.080571 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5bbf848689-ld46g" event={"ID":"a4155e7d-1a52-46b9-9838-aa62c08bd7ad","Type":"ContainerDied","Data":"c4363ffb57ea463f621fd8e50157880aa89b8106c4529e2fd2515470b6189aaf"}
Nov 28 08:59:21 crc kubenswrapper[4946]: I1128 08:59:21.080643 4946 scope.go:117] "RemoveContainer" containerID="dea563b7c08c9a9b005851d544f580aa601b6a7b489769b8fe41d1f0389abe00"
Nov 28 08:59:21 crc kubenswrapper[4946]: I1128 08:59:21.085783 4946 generic.go:334] "Generic (PLEG): container finished" podID="ab2ed536-9dcd-49fc-be72-0bc6232e2bdb" containerID="ec682201806ab0fd9805a7bd77dbb0216817ad08f5ac1f496b1240978dbb28fd" exitCode=0
Nov 28 08:59:21 crc kubenswrapper[4946]: I1128 08:59:21.085845 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210z28nq" event={"ID":"ab2ed536-9dcd-49fc-be72-0bc6232e2bdb","Type":"ContainerDied","Data":"ec682201806ab0fd9805a7bd77dbb0216817ad08f5ac1f496b1240978dbb28fd"}
Nov 28 08:59:21 crc kubenswrapper[4946]: I1128 08:59:21.154217 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5bbf848689-ld46g"]
Nov 28 08:59:21 crc kubenswrapper[4946]: I1128 08:59:21.168649 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5bbf848689-ld46g"]
Nov 28 08:59:21 crc kubenswrapper[4946]: I1128 08:59:21.324017 4946 scope.go:117] "RemoveContainer" containerID="df8232c59465a89afd50b4e76d3957d06b45d0fd65568cdce416357145a56155"
Nov 28 08:59:22 crc kubenswrapper[4946]: I1128 08:59:22.005409 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4155e7d-1a52-46b9-9838-aa62c08bd7ad" path="/var/lib/kubelet/pods/a4155e7d-1a52-46b9-9838-aa62c08bd7ad/volumes"
Nov 28 08:59:23 crc kubenswrapper[4946]: I1128 08:59:23.119422 4946 generic.go:334] "Generic (PLEG): container finished" podID="ab2ed536-9dcd-49fc-be72-0bc6232e2bdb" containerID="8a17b8adc7914512c962b49a3b5318b1912556710b1a82ce2e0c5733a0b3bb0b" exitCode=0
Nov 28 08:59:23 crc kubenswrapper[4946]: I1128 08:59:23.119524 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210z28nq" event={"ID":"ab2ed536-9dcd-49fc-be72-0bc6232e2bdb","Type":"ContainerDied","Data":"8a17b8adc7914512c962b49a3b5318b1912556710b1a82ce2e0c5733a0b3bb0b"}
Nov 28 08:59:24 crc kubenswrapper[4946]: I1128 08:59:24.135840 4946 generic.go:334] "Generic (PLEG): container finished" podID="ab2ed536-9dcd-49fc-be72-0bc6232e2bdb" containerID="0053da3c53394600f9fa9e80456344901da1c74188e3f5c174facc852da20356" exitCode=0
Nov 28 08:59:24 crc kubenswrapper[4946]: I1128 08:59:24.135903 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210z28nq" event={"ID":"ab2ed536-9dcd-49fc-be72-0bc6232e2bdb","Type":"ContainerDied","Data":"0053da3c53394600f9fa9e80456344901da1c74188e3f5c174facc852da20356"}
Nov 28 08:59:25 crc kubenswrapper[4946]: I1128 08:59:25.592760 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210z28nq"
Nov 28 08:59:25 crc kubenswrapper[4946]: I1128 08:59:25.679370 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28rvt\" (UniqueName: \"kubernetes.io/projected/ab2ed536-9dcd-49fc-be72-0bc6232e2bdb-kube-api-access-28rvt\") pod \"ab2ed536-9dcd-49fc-be72-0bc6232e2bdb\" (UID: \"ab2ed536-9dcd-49fc-be72-0bc6232e2bdb\") "
Nov 28 08:59:25 crc kubenswrapper[4946]: I1128 08:59:25.679558 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ab2ed536-9dcd-49fc-be72-0bc6232e2bdb-bundle\") pod \"ab2ed536-9dcd-49fc-be72-0bc6232e2bdb\" (UID: \"ab2ed536-9dcd-49fc-be72-0bc6232e2bdb\") "
Nov 28 08:59:25 crc kubenswrapper[4946]: I1128 08:59:25.679705 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ab2ed536-9dcd-49fc-be72-0bc6232e2bdb-util\") pod \"ab2ed536-9dcd-49fc-be72-0bc6232e2bdb\" (UID: \"ab2ed536-9dcd-49fc-be72-0bc6232e2bdb\") "
Nov 28 08:59:25 crc kubenswrapper[4946]: I1128 08:59:25.682595 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab2ed536-9dcd-49fc-be72-0bc6232e2bdb-bundle" (OuterVolumeSpecName: "bundle") pod "ab2ed536-9dcd-49fc-be72-0bc6232e2bdb" (UID: "ab2ed536-9dcd-49fc-be72-0bc6232e2bdb"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 08:59:25 crc kubenswrapper[4946]: I1128 08:59:25.688259 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab2ed536-9dcd-49fc-be72-0bc6232e2bdb-kube-api-access-28rvt" (OuterVolumeSpecName: "kube-api-access-28rvt") pod "ab2ed536-9dcd-49fc-be72-0bc6232e2bdb" (UID: "ab2ed536-9dcd-49fc-be72-0bc6232e2bdb"). InnerVolumeSpecName "kube-api-access-28rvt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 08:59:25 crc kubenswrapper[4946]: I1128 08:59:25.694268 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab2ed536-9dcd-49fc-be72-0bc6232e2bdb-util" (OuterVolumeSpecName: "util") pod "ab2ed536-9dcd-49fc-be72-0bc6232e2bdb" (UID: "ab2ed536-9dcd-49fc-be72-0bc6232e2bdb"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 08:59:25 crc kubenswrapper[4946]: I1128 08:59:25.782803 4946 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ab2ed536-9dcd-49fc-be72-0bc6232e2bdb-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 08:59:25 crc kubenswrapper[4946]: I1128 08:59:25.782866 4946 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ab2ed536-9dcd-49fc-be72-0bc6232e2bdb-util\") on node \"crc\" DevicePath \"\""
Nov 28 08:59:25 crc kubenswrapper[4946]: I1128 08:59:25.782888 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28rvt\" (UniqueName: \"kubernetes.io/projected/ab2ed536-9dcd-49fc-be72-0bc6232e2bdb-kube-api-access-28rvt\") on node \"crc\" DevicePath \"\""
Nov 28 08:59:26 crc kubenswrapper[4946]: I1128 08:59:26.160756 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210z28nq" event={"ID":"ab2ed536-9dcd-49fc-be72-0bc6232e2bdb","Type":"ContainerDied","Data":"923a13364c6c34707926aa1192b46a860a90d85aa7347db75c0ec338a1b0b22b"}
Nov 28 08:59:26 crc kubenswrapper[4946]: I1128 08:59:26.161162 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="923a13364c6c34707926aa1192b46a860a90d85aa7347db75c0ec338a1b0b22b"
Nov 28 08:59:26 crc kubenswrapper[4946]: I1128 08:59:26.160860 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210z28nq"
Nov 28 08:59:29 crc kubenswrapper[4946]: I1128 08:59:29.990565 4946 scope.go:117] "RemoveContainer" containerID="39cf365640a94edb527a038b876ca944a2f17e9a393ff02023ee1ab525b0e376"
Nov 28 08:59:29 crc kubenswrapper[4946]: E1128 08:59:29.991494 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
Nov 28 08:59:30 crc kubenswrapper[4946]: I1128 08:59:30.045009 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-zkc24"]
Nov 28 08:59:30 crc kubenswrapper[4946]: I1128 08:59:30.057983 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-1d3f-account-create-update-q8h5l"]
Nov 28 08:59:30 crc kubenswrapper[4946]: I1128 08:59:30.070187 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-zkc24"]
Nov 28 08:59:30 crc kubenswrapper[4946]: I1128 08:59:30.078904 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-1d3f-account-create-update-q8h5l"]
Nov 28 08:59:32 crc kubenswrapper[4946]: I1128 08:59:32.008835 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d4e2519-b955-43b3-9110-690e645f9d15" path="/var/lib/kubelet/pods/1d4e2519-b955-43b3-9110-690e645f9d15/volumes"
Nov 28 08:59:32 crc kubenswrapper[4946]: I1128 08:59:32.009795 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67c87597-b262-40ac-a349-9fd176f7e0da" path="/var/lib/kubelet/pods/67c87597-b262-40ac-a349-9fd176f7e0da/volumes"
Nov 28 08:59:32 crc kubenswrapper[4946]: E1128 08:59:32.016439 4946 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab2ed536_9dcd_49fc_be72_0bc6232e2bdb.slice/crio-923a13364c6c34707926aa1192b46a860a90d85aa7347db75c0ec338a1b0b22b\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab2ed536_9dcd_49fc_be72_0bc6232e2bdb.slice\": RecentStats: unable to find data in memory cache]"
Nov 28 08:59:37 crc kubenswrapper[4946]: I1128 08:59:37.639164 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-nwc4q"]
Nov 28 08:59:37 crc kubenswrapper[4946]: E1128 08:59:37.640027 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab2ed536-9dcd-49fc-be72-0bc6232e2bdb" containerName="util"
Nov 28 08:59:37 crc kubenswrapper[4946]: I1128 08:59:37.640040 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab2ed536-9dcd-49fc-be72-0bc6232e2bdb" containerName="util"
Nov 28 08:59:37 crc kubenswrapper[4946]: E1128 08:59:37.640075 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4155e7d-1a52-46b9-9838-aa62c08bd7ad" containerName="horizon"
Nov 28 08:59:37 crc kubenswrapper[4946]: I1128 08:59:37.640083 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4155e7d-1a52-46b9-9838-aa62c08bd7ad" containerName="horizon"
Nov 28 08:59:37 crc kubenswrapper[4946]: E1128 08:59:37.640099 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab2ed536-9dcd-49fc-be72-0bc6232e2bdb" containerName="pull"
Nov 28 08:59:37 crc kubenswrapper[4946]: I1128 08:59:37.640105 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab2ed536-9dcd-49fc-be72-0bc6232e2bdb" containerName="pull"
Nov 28 08:59:37 crc kubenswrapper[4946]: E1128 08:59:37.640121 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab2ed536-9dcd-49fc-be72-0bc6232e2bdb" containerName="extract"
Nov 28 08:59:37 crc kubenswrapper[4946]: I1128 08:59:37.640126 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab2ed536-9dcd-49fc-be72-0bc6232e2bdb" containerName="extract"
Nov 28 08:59:37 crc kubenswrapper[4946]: E1128 08:59:37.640138 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4155e7d-1a52-46b9-9838-aa62c08bd7ad" containerName="horizon-log"
Nov 28 08:59:37 crc kubenswrapper[4946]: I1128 08:59:37.640144 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4155e7d-1a52-46b9-9838-aa62c08bd7ad" containerName="horizon-log"
Nov 28 08:59:37 crc kubenswrapper[4946]: I1128 08:59:37.640323 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4155e7d-1a52-46b9-9838-aa62c08bd7ad" containerName="horizon-log"
Nov 28 08:59:37 crc kubenswrapper[4946]: I1128 08:59:37.640334 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4155e7d-1a52-46b9-9838-aa62c08bd7ad" containerName="horizon"
Nov 28 08:59:37 crc kubenswrapper[4946]: I1128 08:59:37.640350 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab2ed536-9dcd-49fc-be72-0bc6232e2bdb" containerName="extract"
Nov 28 08:59:37 crc kubenswrapper[4946]: I1128 08:59:37.641002 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-nwc4q"
Nov 28 08:59:37 crc kubenswrapper[4946]: I1128 08:59:37.643922 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt"
Nov 28 08:59:37 crc kubenswrapper[4946]: I1128 08:59:37.643934 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt"
Nov 28 08:59:37 crc kubenswrapper[4946]: I1128 08:59:37.644184 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-589dh"
Nov 28 08:59:37 crc kubenswrapper[4946]: I1128 08:59:37.661056 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-nwc4q"]
Nov 28 08:59:37 crc kubenswrapper[4946]: I1128 08:59:37.765344 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-66bb8447f-2vvdq"]
Nov 28 08:59:37 crc kubenswrapper[4946]: I1128 08:59:37.766538 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb8447f-2vvdq"
Nov 28 08:59:37 crc kubenswrapper[4946]: I1128 08:59:37.771789 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-q4bzw"
Nov 28 08:59:37 crc kubenswrapper[4946]: I1128 08:59:37.771791 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert"
Nov 28 08:59:37 crc kubenswrapper[4946]: I1128 08:59:37.793204 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-66bb8447f-b5sqs"]
Nov 28 08:59:37 crc kubenswrapper[4946]: I1128 08:59:37.794600 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb8447f-b5sqs"
Nov 28 08:59:37 crc kubenswrapper[4946]: I1128 08:59:37.803798 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-66bb8447f-2vvdq"]
Nov 28 08:59:37 crc kubenswrapper[4946]: I1128 08:59:37.821559 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-66bb8447f-b5sqs"]
Nov 28 08:59:37 crc kubenswrapper[4946]: I1128 08:59:37.832872 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b27rt\" (UniqueName: \"kubernetes.io/projected/e1430df3-899d-4393-8356-76efd1a21981-kube-api-access-b27rt\") pod \"obo-prometheus-operator-668cf9dfbb-nwc4q\" (UID: \"e1430df3-899d-4393-8356-76efd1a21981\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-nwc4q"
Nov 28 08:59:37 crc kubenswrapper[4946]: I1128 08:59:37.934850 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ea0a1b65-05a9-4a34-866a-5c22e1ee9235-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-66bb8447f-2vvdq\" (UID: \"ea0a1b65-05a9-4a34-866a-5c22e1ee9235\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb8447f-2vvdq"
Nov 28 08:59:37 crc kubenswrapper[4946]: I1128 08:59:37.934917 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cebbc948-553e-4951-a3bc-b5c83e87f9ca-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-66bb8447f-b5sqs\" (UID: \"cebbc948-553e-4951-a3bc-b5c83e87f9ca\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb8447f-b5sqs"
Nov 28 08:59:37 crc kubenswrapper[4946]: I1128 08:59:37.935014 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cebbc948-553e-4951-a3bc-b5c83e87f9ca-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-66bb8447f-b5sqs\" (UID: \"cebbc948-553e-4951-a3bc-b5c83e87f9ca\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb8447f-b5sqs"
Nov 28 08:59:37 crc kubenswrapper[4946]: I1128 08:59:37.935073 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b27rt\" (UniqueName: \"kubernetes.io/projected/e1430df3-899d-4393-8356-76efd1a21981-kube-api-access-b27rt\") pod \"obo-prometheus-operator-668cf9dfbb-nwc4q\" (UID: \"e1430df3-899d-4393-8356-76efd1a21981\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-nwc4q"
Nov 28 08:59:37 crc kubenswrapper[4946]: I1128 08:59:37.935105 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ea0a1b65-05a9-4a34-866a-5c22e1ee9235-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-66bb8447f-2vvdq\" (UID: \"ea0a1b65-05a9-4a34-866a-5c22e1ee9235\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb8447f-2vvdq"
Nov 28 08:59:37 crc kubenswrapper[4946]: I1128 08:59:37.953329 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b27rt\" (UniqueName: \"kubernetes.io/projected/e1430df3-899d-4393-8356-76efd1a21981-kube-api-access-b27rt\") pod \"obo-prometheus-operator-668cf9dfbb-nwc4q\" (UID: \"e1430df3-899d-4393-8356-76efd1a21981\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-nwc4q"
Nov 28 08:59:37 crc kubenswrapper[4946]: I1128 08:59:37.965310 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-v9829"]
Nov 28 08:59:37 crc kubenswrapper[4946]: I1128 08:59:37.966049 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-nwc4q"
Nov 28 08:59:37 crc kubenswrapper[4946]: I1128 08:59:37.966515 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-v9829"
Nov 28 08:59:37 crc kubenswrapper[4946]: I1128 08:59:37.969734 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-gmp4h"
Nov 28 08:59:37 crc kubenswrapper[4946]: I1128 08:59:37.969928 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls"
Nov 28 08:59:38 crc kubenswrapper[4946]: I1128 08:59:38.008635 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-v9829"]
Nov 28 08:59:38 crc kubenswrapper[4946]: I1128 08:59:38.036449 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ea0a1b65-05a9-4a34-866a-5c22e1ee9235-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-66bb8447f-2vvdq\" (UID: \"ea0a1b65-05a9-4a34-866a-5c22e1ee9235\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb8447f-2vvdq"
Nov 28 08:59:38 crc kubenswrapper[4946]: I1128 08:59:38.036581 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ea0a1b65-05a9-4a34-866a-5c22e1ee9235-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-66bb8447f-2vvdq\" (UID: \"ea0a1b65-05a9-4a34-866a-5c22e1ee9235\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb8447f-2vvdq"
Nov 28 08:59:38 crc kubenswrapper[4946]: I1128 08:59:38.036607 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cebbc948-553e-4951-a3bc-b5c83e87f9ca-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-66bb8447f-b5sqs\" (UID: \"cebbc948-553e-4951-a3bc-b5c83e87f9ca\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb8447f-b5sqs"
Nov 28 08:59:38 crc kubenswrapper[4946]: I1128 08:59:38.036653 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cebbc948-553e-4951-a3bc-b5c83e87f9ca-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-66bb8447f-b5sqs\" (UID: \"cebbc948-553e-4951-a3bc-b5c83e87f9ca\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb8447f-b5sqs"
Nov 28 08:59:38 crc kubenswrapper[4946]: I1128 08:59:38.040216 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ea0a1b65-05a9-4a34-866a-5c22e1ee9235-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-66bb8447f-2vvdq\" (UID: \"ea0a1b65-05a9-4a34-866a-5c22e1ee9235\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb8447f-2vvdq"
Nov 28 08:59:38 crc kubenswrapper[4946]: I1128 08:59:38.040580 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cebbc948-553e-4951-a3bc-b5c83e87f9ca-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-66bb8447f-b5sqs\" (UID: \"cebbc948-553e-4951-a3bc-b5c83e87f9ca\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb8447f-b5sqs"
Nov 28 08:59:38 crc kubenswrapper[4946]: I1128 08:59:38.041631 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cebbc948-553e-4951-a3bc-b5c83e87f9ca-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-66bb8447f-b5sqs\" (UID: \"cebbc948-553e-4951-a3bc-b5c83e87f9ca\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb8447f-b5sqs"
Nov 28 08:59:38 crc kubenswrapper[4946]: I1128 08:59:38.041800 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ea0a1b65-05a9-4a34-866a-5c22e1ee9235-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-66bb8447f-2vvdq\" (UID: \"ea0a1b65-05a9-4a34-866a-5c22e1ee9235\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb8447f-2vvdq"
Nov 28 08:59:38 crc kubenswrapper[4946]: I1128 08:59:38.083373 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb8447f-2vvdq"
Nov 28 08:59:38 crc kubenswrapper[4946]: I1128 08:59:38.106963 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5446b9c989-tplpq"]
Nov 28 08:59:38 crc kubenswrapper[4946]: I1128 08:59:38.108937 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-tplpq"
Nov 28 08:59:38 crc kubenswrapper[4946]: I1128 08:59:38.117219 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-xl97k"
Nov 28 08:59:38 crc kubenswrapper[4946]: I1128 08:59:38.131452 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb8447f-b5sqs"
Nov 28 08:59:38 crc kubenswrapper[4946]: I1128 08:59:38.134670 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-tplpq"]
Nov 28 08:59:38 crc kubenswrapper[4946]: I1128 08:59:38.140151 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgrwh\" (UniqueName: \"kubernetes.io/projected/38d84819-1411-47fa-92da-d212f932f6ed-kube-api-access-pgrwh\") pod \"observability-operator-d8bb48f5d-v9829\" (UID: \"38d84819-1411-47fa-92da-d212f932f6ed\") " pod="openshift-operators/observability-operator-d8bb48f5d-v9829"
Nov 28 08:59:38 crc kubenswrapper[4946]: I1128 08:59:38.140298 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/38d84819-1411-47fa-92da-d212f932f6ed-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-v9829\" (UID: \"38d84819-1411-47fa-92da-d212f932f6ed\") " pod="openshift-operators/observability-operator-d8bb48f5d-v9829"
Nov 28 08:59:38 crc kubenswrapper[4946]: I1128 08:59:38.242998 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgrwh\" (UniqueName: \"kubernetes.io/projected/38d84819-1411-47fa-92da-d212f932f6ed-kube-api-access-pgrwh\") pod \"observability-operator-d8bb48f5d-v9829\" (UID: \"38d84819-1411-47fa-92da-d212f932f6ed\") " pod="openshift-operators/observability-operator-d8bb48f5d-v9829"
Nov 28 08:59:38 crc kubenswrapper[4946]: I1128 08:59:38.243059 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/d50c9b34-ab7c-4f34-ab0a-04149dad3959-openshift-service-ca\") pod \"perses-operator-5446b9c989-tplpq\" (UID: \"d50c9b34-ab7c-4f34-ab0a-04149dad3959\") " pod="openshift-operators/perses-operator-5446b9c989-tplpq"
Nov 28 08:59:38 crc kubenswrapper[4946]: I1128 08:59:38.243084 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcbm5\" (UniqueName: \"kubernetes.io/projected/d50c9b34-ab7c-4f34-ab0a-04149dad3959-kube-api-access-rcbm5\") pod \"perses-operator-5446b9c989-tplpq\" (UID: \"d50c9b34-ab7c-4f34-ab0a-04149dad3959\") " pod="openshift-operators/perses-operator-5446b9c989-tplpq"
Nov 28 08:59:38 crc kubenswrapper[4946]: I1128 08:59:38.243145 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/38d84819-1411-47fa-92da-d212f932f6ed-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-v9829\" (UID: \"38d84819-1411-47fa-92da-d212f932f6ed\") " pod="openshift-operators/observability-operator-d8bb48f5d-v9829"
Nov 28 08:59:38 crc kubenswrapper[4946]: I1128 08:59:38.254157 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/38d84819-1411-47fa-92da-d212f932f6ed-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-v9829\" (UID: \"38d84819-1411-47fa-92da-d212f932f6ed\") " pod="openshift-operators/observability-operator-d8bb48f5d-v9829"
Nov 28 08:59:38 crc kubenswrapper[4946]: I1128 08:59:38.271773 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgrwh\" (UniqueName: \"kubernetes.io/projected/38d84819-1411-47fa-92da-d212f932f6ed-kube-api-access-pgrwh\") pod \"observability-operator-d8bb48f5d-v9829\" (UID: \"38d84819-1411-47fa-92da-d212f932f6ed\") " pod="openshift-operators/observability-operator-d8bb48f5d-v9829"
Nov 28 08:59:38 crc kubenswrapper[4946]: I1128 08:59:38.328719 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-v9829"
Nov 28 08:59:38 crc kubenswrapper[4946]: I1128 08:59:38.346415 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/d50c9b34-ab7c-4f34-ab0a-04149dad3959-openshift-service-ca\") pod \"perses-operator-5446b9c989-tplpq\" (UID: \"d50c9b34-ab7c-4f34-ab0a-04149dad3959\") " pod="openshift-operators/perses-operator-5446b9c989-tplpq"
Nov 28 08:59:38 crc kubenswrapper[4946]: I1128 08:59:38.346482 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcbm5\" (UniqueName: \"kubernetes.io/projected/d50c9b34-ab7c-4f34-ab0a-04149dad3959-kube-api-access-rcbm5\") pod \"perses-operator-5446b9c989-tplpq\" (UID: \"d50c9b34-ab7c-4f34-ab0a-04149dad3959\") " pod="openshift-operators/perses-operator-5446b9c989-tplpq"
Nov 28 08:59:38 crc kubenswrapper[4946]: I1128 08:59:38.348048 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/d50c9b34-ab7c-4f34-ab0a-04149dad3959-openshift-service-ca\") pod \"perses-operator-5446b9c989-tplpq\" (UID: \"d50c9b34-ab7c-4f34-ab0a-04149dad3959\") " pod="openshift-operators/perses-operator-5446b9c989-tplpq"
Nov 28 08:59:38 crc kubenswrapper[4946]: I1128 08:59:38.387252 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcbm5\" (UniqueName: \"kubernetes.io/projected/d50c9b34-ab7c-4f34-ab0a-04149dad3959-kube-api-access-rcbm5\") pod \"perses-operator-5446b9c989-tplpq\" (UID: \"d50c9b34-ab7c-4f34-ab0a-04149dad3959\") " pod="openshift-operators/perses-operator-5446b9c989-tplpq"
Nov 28 08:59:38 crc kubenswrapper[4946]: I1128 08:59:38.506359 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-tplpq"
Nov 28 08:59:38 crc kubenswrapper[4946]: I1128 08:59:38.719442 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-66bb8447f-2vvdq"]
Nov 28 08:59:38 crc kubenswrapper[4946]: I1128 08:59:38.728919 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-nwc4q"]
Nov 28 08:59:38 crc kubenswrapper[4946]: W1128 08:59:38.734497 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea0a1b65_05a9_4a34_866a_5c22e1ee9235.slice/crio-5df1a6541078c4a4f650d6b5fa76271a88249f483bb6d6462f7c0e0dc6089bc1 WatchSource:0}: Error finding container 5df1a6541078c4a4f650d6b5fa76271a88249f483bb6d6462f7c0e0dc6089bc1: Status 404 returned error can't find the container with id 5df1a6541078c4a4f650d6b5fa76271a88249f483bb6d6462f7c0e0dc6089bc1
Nov 28 08:59:38 crc kubenswrapper[4946]: I1128 08:59:38.928609 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-66bb8447f-b5sqs"]
Nov 28 08:59:38 crc kubenswrapper[4946]: I1128 08:59:38.953939 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-v9829"]
Nov 28 08:59:39 crc kubenswrapper[4946]: I1128 08:59:39.037117 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-srwds"]
Nov 28 08:59:39 crc kubenswrapper[4946]: I1128 08:59:39.046963 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-srwds"]
Nov 28 08:59:39 crc kubenswrapper[4946]: I1128 08:59:39.268291 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-tplpq"]
Nov 28 08:59:39 crc kubenswrapper[4946]: W1128 08:59:39.280489 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd50c9b34_ab7c_4f34_ab0a_04149dad3959.slice/crio-c5e59886551f2205a5789918a91b9440f415c826a3d9ae6e0f5e717eee36ff9c WatchSource:0}: Error finding container c5e59886551f2205a5789918a91b9440f415c826a3d9ae6e0f5e717eee36ff9c: Status 404 returned error can't find the container with id c5e59886551f2205a5789918a91b9440f415c826a3d9ae6e0f5e717eee36ff9c
Nov 28 08:59:39 crc kubenswrapper[4946]: I1128 08:59:39.313652 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb8447f-b5sqs" event={"ID":"cebbc948-553e-4951-a3bc-b5c83e87f9ca","Type":"ContainerStarted","Data":"86633869214ba19767c2419d2ada9f32538ca4ab97640579b8282be86910b619"}
Nov 28 08:59:39 crc kubenswrapper[4946]: I1128 08:59:39.315222 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-nwc4q" event={"ID":"e1430df3-899d-4393-8356-76efd1a21981","Type":"ContainerStarted","Data":"70229ecb1b4fc9847d30bef0fa24794315d030ec6a6dd06a05c8545f07b5629b"}
Nov 28 08:59:39 crc kubenswrapper[4946]: I1128 08:59:39.316565 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-v9829" event={"ID":"38d84819-1411-47fa-92da-d212f932f6ed","Type":"ContainerStarted","Data":"433774044b82d196d26dc3595e8f4e612c6c25792415f35b3f9ae265b49400cc"}
Nov 28 08:59:39 crc kubenswrapper[4946]: I1128 08:59:39.317613 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb8447f-2vvdq" event={"ID":"ea0a1b65-05a9-4a34-866a-5c22e1ee9235","Type":"ContainerStarted","Data":"5df1a6541078c4a4f650d6b5fa76271a88249f483bb6d6462f7c0e0dc6089bc1"}
Nov 28 08:59:39 crc kubenswrapper[4946]: I1128 08:59:39.319169 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-tplpq" event={"ID":"d50c9b34-ab7c-4f34-ab0a-04149dad3959","Type":"ContainerStarted","Data":"c5e59886551f2205a5789918a91b9440f415c826a3d9ae6e0f5e717eee36ff9c"}
Nov 28 08:59:40 crc kubenswrapper[4946]: I1128 08:59:40.007548 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bff22882-abd9-45cb-b73b-f71e66526523" path="/var/lib/kubelet/pods/bff22882-abd9-45cb-b73b-f71e66526523/volumes"
Nov 28 08:59:41 crc kubenswrapper[4946]: I1128 08:59:41.429309 4946 scope.go:117] "RemoveContainer" containerID="1548764c7bcf6b716ed869f5f9ddf284c9c3e888cfe8d038f8e9bedda9eb85d1"
Nov 28 08:59:42 crc kubenswrapper[4946]: E1128 08:59:42.289582 4946 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab2ed536_9dcd_49fc_be72_0bc6232e2bdb.slice/crio-923a13364c6c34707926aa1192b46a860a90d85aa7347db75c0ec338a1b0b22b\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab2ed536_9dcd_49fc_be72_0bc6232e2bdb.slice\": RecentStats: unable to find data in memory cache]"
Nov 28 08:59:43 crc kubenswrapper[4946]: I1128 08:59:43.990015 4946 scope.go:117] "RemoveContainer" containerID="39cf365640a94edb527a038b876ca944a2f17e9a393ff02023ee1ab525b0e376"
Nov 28 08:59:43 crc kubenswrapper[4946]: E1128 08:59:43.990720 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
Nov 28 08:59:48 crc kubenswrapper[4946]: I1128 08:59:48.528570 4946 scope.go:117] "RemoveContainer" containerID="2a679ecab020588299dbe05fec513d8b0793950c31cc5ebdf5348f7c91c65beb"
Nov 28 08:59:48 crc kubenswrapper[4946]: I1128 08:59:48.583936 4946 scope.go:117] "RemoveContainer" containerID="83ce25e78bc0d9d1d4d261d549c85653a9e589b62b53e87c7c2251d3f88453aa"
Nov 28 08:59:48 crc kubenswrapper[4946]: I1128 08:59:48.633098 4946 scope.go:117] "RemoveContainer" containerID="b568b9dfffd5e27aaaf2f400cb75ee51512dfc416c665867c9c086e46a354d2f"
Nov 28 08:59:48 crc kubenswrapper[4946]: I1128 08:59:48.699347 4946 scope.go:117] "RemoveContainer" containerID="d1e3230019f670958d19752551793badc7d7b5bace0fafa532f0f1ff77a392c0"
Nov 28 08:59:48 crc kubenswrapper[4946]: I1128 08:59:48.836540 4946 scope.go:117] "RemoveContainer" containerID="4923a975f389ceb2a20894cd7508fac09ae3ded9f07d16ff931787b058e7fbda"
Nov 28 08:59:48 crc kubenswrapper[4946]: I1128 08:59:48.904967 4946 scope.go:117] "RemoveContainer" containerID="936bb085b6e5bb5dad263a9c8cccde5f9e9da7f530a1d756aec854c71404c588"
Nov 28 08:59:48 crc kubenswrapper[4946]: I1128 08:59:48.930301 4946 scope.go:117] "RemoveContainer" containerID="bfc1a51101b4c552d61c7f5867477bb7aef0cb80c9a1b719d2163295d47914f3"
Nov 28 08:59:48 crc kubenswrapper[4946]:
I1128 08:59:48.958717 4946 scope.go:117] "RemoveContainer" containerID="8666d385cd752a53250dd22410d8fbb4741e93836b5b7bf2960a49f41b510058" Nov 28 08:59:48 crc kubenswrapper[4946]: I1128 08:59:48.995587 4946 scope.go:117] "RemoveContainer" containerID="10b925f316dd4a74af64efce73ab4dd436122184227ab7db1ba402b1f2eb3727" Nov 28 08:59:49 crc kubenswrapper[4946]: I1128 08:59:49.035882 4946 scope.go:117] "RemoveContainer" containerID="83bc162eac8bc64d5e2ddc5a5afb207bf352aa7239de745e33448cb364cfa2bf" Nov 28 08:59:49 crc kubenswrapper[4946]: I1128 08:59:49.464087 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb8447f-2vvdq" event={"ID":"ea0a1b65-05a9-4a34-866a-5c22e1ee9235","Type":"ContainerStarted","Data":"5c65d47f07ca8d5a9d864e80a79de020078dc4b44da1495868818131a6471bcd"} Nov 28 08:59:49 crc kubenswrapper[4946]: I1128 08:59:49.466439 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-tplpq" event={"ID":"d50c9b34-ab7c-4f34-ab0a-04149dad3959","Type":"ContainerStarted","Data":"7329e9c9a237de9b72a7df479c27eca59b68404089d628c35c8f20e4fb972f86"} Nov 28 08:59:49 crc kubenswrapper[4946]: I1128 08:59:49.466596 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5446b9c989-tplpq" Nov 28 08:59:49 crc kubenswrapper[4946]: I1128 08:59:49.469575 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb8447f-b5sqs" event={"ID":"cebbc948-553e-4951-a3bc-b5c83e87f9ca","Type":"ContainerStarted","Data":"03bf0a35b7483e065825d92e090eaf2e356c3b9b7f960addb22855583a3eae78"} Nov 28 08:59:49 crc kubenswrapper[4946]: I1128 08:59:49.471870 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-nwc4q" event={"ID":"e1430df3-899d-4393-8356-76efd1a21981","Type":"ContainerStarted","Data":"67d669c41db61c0316349abd22913a32a8802b9bcc7c38ab644f6cd00595151f"} Nov 28 08:59:49 crc kubenswrapper[4946]: I1128 08:59:49.473783 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-v9829" event={"ID":"38d84819-1411-47fa-92da-d212f932f6ed","Type":"ContainerStarted","Data":"1c6e190cfcf4018fc1dc9076e9986ab4e68fbcf57b74251a770869c57ecd32d0"} Nov 28 08:59:49 crc kubenswrapper[4946]: I1128 08:59:49.474063 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-d8bb48f5d-v9829" Nov 28 08:59:49 crc kubenswrapper[4946]: I1128 08:59:49.486025 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb8447f-2vvdq" podStartSLOduration=2.685341175 podStartE2EDuration="12.486011331s" podCreationTimestamp="2025-11-28 08:59:37 +0000 UTC" firstStartedPulling="2025-11-28 08:59:38.749963919 +0000 UTC m=+7633.128029030" lastFinishedPulling="2025-11-28 08:59:48.550634075 +0000 UTC m=+7642.928699186" observedRunningTime="2025-11-28 08:59:49.483256183 +0000 UTC m=+7643.861321294" watchObservedRunningTime="2025-11-28 08:59:49.486011331 +0000 UTC m=+7643.864076432" Nov 28 08:59:49 crc kubenswrapper[4946]: I1128 08:59:49.508158 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-d8bb48f5d-v9829" Nov 28 08:59:49 crc kubenswrapper[4946]: I1128 08:59:49.533321 4946 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb8447f-b5sqs" podStartSLOduration=2.855989122 podStartE2EDuration="12.533304773s" podCreationTimestamp="2025-11-28 08:59:37 +0000 UTC" firstStartedPulling="2025-11-28 08:59:38.915097469 +0000 UTC m=+7633.293162580" lastFinishedPulling="2025-11-28 08:59:48.59241312 +0000 UTC m=+7642.970478231" observedRunningTime="2025-11-28 08:59:49.526427992 +0000 UTC m=+7643.904493113" watchObservedRunningTime="2025-11-28 08:59:49.533304773 +0000 UTC m=+7643.911369894" Nov 28 08:59:49 crc kubenswrapper[4946]: I1128 08:59:49.574565 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-d8bb48f5d-v9829" podStartSLOduration=2.96128898 podStartE2EDuration="12.574544454s" podCreationTimestamp="2025-11-28 08:59:37 +0000 UTC" firstStartedPulling="2025-11-28 08:59:38.971978898 +0000 UTC m=+7633.350044009" lastFinishedPulling="2025-11-28 08:59:48.585234362 +0000 UTC m=+7642.963299483" observedRunningTime="2025-11-28 08:59:49.572286528 +0000 UTC m=+7643.950351639" watchObservedRunningTime="2025-11-28 08:59:49.574544454 +0000 UTC m=+7643.952609575" Nov 28 08:59:49 crc kubenswrapper[4946]: I1128 08:59:49.604279 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5446b9c989-tplpq" podStartSLOduration=2.303043064 podStartE2EDuration="11.60426118s" podCreationTimestamp="2025-11-28 08:59:38 +0000 UTC" firstStartedPulling="2025-11-28 08:59:39.283596196 +0000 UTC m=+7633.661661307" lastFinishedPulling="2025-11-28 08:59:48.584814292 +0000 UTC m=+7642.962879423" observedRunningTime="2025-11-28 08:59:49.59458682 +0000 UTC m=+7643.972651941" watchObservedRunningTime="2025-11-28 08:59:49.60426118 +0000 UTC m=+7643.982326291" Nov 28 08:59:49 crc kubenswrapper[4946]: I1128 08:59:49.621962 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-nwc4q" podStartSLOduration=2.826863571 podStartE2EDuration="12.621940628s" podCreationTimestamp="2025-11-28 08:59:37 +0000 UTC" firstStartedPulling="2025-11-28 08:59:38.790793981 +0000 UTC m=+7633.168859092" lastFinishedPulling="2025-11-28 08:59:48.585871038 +0000 UTC m=+7642.963936149" observedRunningTime="2025-11-28 08:59:49.610573726 +0000 UTC m=+7643.988638827" watchObservedRunningTime="2025-11-28 08:59:49.621940628 +0000 UTC m=+7644.000005739" Nov 28 08:59:52 crc kubenswrapper[4946]: E1128 08:59:52.538220 4946 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab2ed536_9dcd_49fc_be72_0bc6232e2bdb.slice/crio-923a13364c6c34707926aa1192b46a860a90d85aa7347db75c0ec338a1b0b22b\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab2ed536_9dcd_49fc_be72_0bc6232e2bdb.slice\": RecentStats: unable to find data in memory cache]" Nov 28 08:59:57 crc kubenswrapper[4946]: I1128 08:59:57.992045 4946 scope.go:117] "RemoveContainer" containerID="39cf365640a94edb527a038b876ca944a2f17e9a393ff02023ee1ab525b0e376" Nov 28 08:59:57 crc kubenswrapper[4946]: E1128 08:59:57.993035 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 08:59:58 crc kubenswrapper[4946]: I1128 08:59:58.510659 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5446b9c989-tplpq" Nov 28 09:00:00 crc kubenswrapper[4946]: I1128 09:00:00.148563 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405340-4qmvl"] Nov 28 09:00:00 crc kubenswrapper[4946]: I1128 09:00:00.149835 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405340-4qmvl" Nov 28 09:00:00 crc kubenswrapper[4946]: I1128 09:00:00.151806 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 28 09:00:00 crc kubenswrapper[4946]: I1128 09:00:00.152317 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 28 09:00:00 crc kubenswrapper[4946]: I1128 09:00:00.165377 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405340-4qmvl"] Nov 28 09:00:00 crc kubenswrapper[4946]: I1128 09:00:00.223719 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqrwf\" (UniqueName: \"kubernetes.io/projected/0282a26a-f451-4607-9fb7-35a20456c493-kube-api-access-kqrwf\") pod \"collect-profiles-29405340-4qmvl\" (UID: \"0282a26a-f451-4607-9fb7-35a20456c493\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405340-4qmvl" Nov 28 09:00:00 crc kubenswrapper[4946]: I1128 09:00:00.223779 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0282a26a-f451-4607-9fb7-35a20456c493-secret-volume\") pod \"collect-profiles-29405340-4qmvl\" (UID: \"0282a26a-f451-4607-9fb7-35a20456c493\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405340-4qmvl" Nov 28 09:00:00 crc kubenswrapper[4946]: I1128 09:00:00.223836 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0282a26a-f451-4607-9fb7-35a20456c493-config-volume\") pod \"collect-profiles-29405340-4qmvl\" (UID: \"0282a26a-f451-4607-9fb7-35a20456c493\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405340-4qmvl" Nov 28 09:00:00 crc kubenswrapper[4946]: I1128 09:00:00.325455 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0282a26a-f451-4607-9fb7-35a20456c493-secret-volume\") pod \"collect-profiles-29405340-4qmvl\" (UID: \"0282a26a-f451-4607-9fb7-35a20456c493\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405340-4qmvl" Nov 28 09:00:00 crc kubenswrapper[4946]: I1128 09:00:00.325888 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0282a26a-f451-4607-9fb7-35a20456c493-config-volume\") pod \"collect-profiles-29405340-4qmvl\" (UID: \"0282a26a-f451-4607-9fb7-35a20456c493\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29405340-4qmvl" Nov 28 09:00:00 crc kubenswrapper[4946]: I1128 09:00:00.326223 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqrwf\" (UniqueName: \"kubernetes.io/projected/0282a26a-f451-4607-9fb7-35a20456c493-kube-api-access-kqrwf\") pod \"collect-profiles-29405340-4qmvl\" (UID: \"0282a26a-f451-4607-9fb7-35a20456c493\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405340-4qmvl" Nov 28 09:00:00 crc kubenswrapper[4946]: I1128 09:00:00.326806 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0282a26a-f451-4607-9fb7-35a20456c493-config-volume\") pod \"collect-profiles-29405340-4qmvl\" (UID: \"0282a26a-f451-4607-9fb7-35a20456c493\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405340-4qmvl" Nov 28 09:00:00 crc kubenswrapper[4946]: I1128 09:00:00.346338 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0282a26a-f451-4607-9fb7-35a20456c493-secret-volume\") pod \"collect-profiles-29405340-4qmvl\" (UID: \"0282a26a-f451-4607-9fb7-35a20456c493\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405340-4qmvl" Nov 28 09:00:00 crc kubenswrapper[4946]: I1128 09:00:00.359489 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqrwf\" (UniqueName: \"kubernetes.io/projected/0282a26a-f451-4607-9fb7-35a20456c493-kube-api-access-kqrwf\") pod \"collect-profiles-29405340-4qmvl\" (UID: \"0282a26a-f451-4607-9fb7-35a20456c493\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405340-4qmvl" Nov 28 09:00:00 crc kubenswrapper[4946]: I1128 09:00:00.525873 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405340-4qmvl" Nov 28 09:00:01 crc kubenswrapper[4946]: I1128 09:00:01.155047 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Nov 28 09:00:01 crc kubenswrapper[4946]: I1128 09:00:01.155609 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="fca9a6bf-87fc-4862-bb8a-7aa72eeabc64" containerName="openstackclient" containerID="cri-o://bc7e8bd3178a163ac1d345a60a8e45120428b9eef9eb6bfebe77d056216515b0" gracePeriod=2 Nov 28 09:00:01 crc kubenswrapper[4946]: I1128 09:00:01.179368 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Nov 28 09:00:01 crc kubenswrapper[4946]: I1128 09:00:01.198524 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405340-4qmvl"] Nov 28 09:00:01 crc kubenswrapper[4946]: I1128 09:00:01.259525 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 28 09:00:01 crc kubenswrapper[4946]: E1128 09:00:01.260048 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fca9a6bf-87fc-4862-bb8a-7aa72eeabc64" containerName="openstackclient" Nov 28 09:00:01 crc kubenswrapper[4946]: I1128 09:00:01.260063 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="fca9a6bf-87fc-4862-bb8a-7aa72eeabc64" containerName="openstackclient" Nov 28 09:00:01 crc kubenswrapper[4946]: I1128 09:00:01.260290 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="fca9a6bf-87fc-4862-bb8a-7aa72eeabc64" containerName="openstackclient" Nov 28 09:00:01 crc kubenswrapper[4946]: I1128 09:00:01.261078 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 28 09:00:01 crc kubenswrapper[4946]: I1128 09:00:01.266962 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 28 09:00:01 crc kubenswrapper[4946]: I1128 09:00:01.267883 4946 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="fca9a6bf-87fc-4862-bb8a-7aa72eeabc64" podUID="771bf4b7-953b-4d1d-9afe-fc93ff4f2341" Nov 28 09:00:01 crc kubenswrapper[4946]: I1128 09:00:01.355554 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/771bf4b7-953b-4d1d-9afe-fc93ff4f2341-openstack-config-secret\") pod \"openstackclient\" (UID: \"771bf4b7-953b-4d1d-9afe-fc93ff4f2341\") " pod="openstack/openstackclient" Nov 28 09:00:01 crc kubenswrapper[4946]: I1128 09:00:01.355666 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq52q\" (UniqueName: \"kubernetes.io/projected/771bf4b7-953b-4d1d-9afe-fc93ff4f2341-kube-api-access-wq52q\") pod \"openstackclient\" (UID: \"771bf4b7-953b-4d1d-9afe-fc93ff4f2341\") " pod="openstack/openstackclient" Nov 28 09:00:01 crc kubenswrapper[4946]: I1128 09:00:01.355746 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/771bf4b7-953b-4d1d-9afe-fc93ff4f2341-openstack-config\") pod \"openstackclient\" (UID: \"771bf4b7-953b-4d1d-9afe-fc93ff4f2341\") " pod="openstack/openstackclient" Nov 28 09:00:01 crc kubenswrapper[4946]: I1128 09:00:01.457688 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/771bf4b7-953b-4d1d-9afe-fc93ff4f2341-openstack-config\") pod \"openstackclient\" (UID: \"771bf4b7-953b-4d1d-9afe-fc93ff4f2341\") " pod="openstack/openstackclient" Nov 28 09:00:01 crc kubenswrapper[4946]: I1128 09:00:01.458090 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/771bf4b7-953b-4d1d-9afe-fc93ff4f2341-openstack-config-secret\") pod \"openstackclient\" (UID: \"771bf4b7-953b-4d1d-9afe-fc93ff4f2341\") " pod="openstack/openstackclient" Nov 28 09:00:01 crc kubenswrapper[4946]: I1128 09:00:01.458198 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wq52q\" (UniqueName: \"kubernetes.io/projected/771bf4b7-953b-4d1d-9afe-fc93ff4f2341-kube-api-access-wq52q\") pod \"openstackclient\" (UID: \"771bf4b7-953b-4d1d-9afe-fc93ff4f2341\") " pod="openstack/openstackclient" Nov 28 09:00:01 crc kubenswrapper[4946]: I1128 09:00:01.459671 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/771bf4b7-953b-4d1d-9afe-fc93ff4f2341-openstack-config\") pod \"openstackclient\" (UID: \"771bf4b7-953b-4d1d-9afe-fc93ff4f2341\") " pod="openstack/openstackclient" Nov 28 09:00:01 crc kubenswrapper[4946]: I1128 09:00:01.485115 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 09:00:01 crc kubenswrapper[4946]: I1128 09:00:01.486215 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/771bf4b7-953b-4d1d-9afe-fc93ff4f2341-openstack-config-secret\") pod \"openstackclient\" (UID: \"771bf4b7-953b-4d1d-9afe-fc93ff4f2341\") " pod="openstack/openstackclient" Nov 28 09:00:01 crc kubenswrapper[4946]: I1128 09:00:01.486386 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 28 09:00:01 crc kubenswrapper[4946]: I1128 09:00:01.499656 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 09:00:01 crc kubenswrapper[4946]: I1128 09:00:01.507062 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-gj8dh" Nov 28 09:00:01 crc kubenswrapper[4946]: I1128 09:00:01.515239 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wq52q\" (UniqueName: \"kubernetes.io/projected/771bf4b7-953b-4d1d-9afe-fc93ff4f2341-kube-api-access-wq52q\") pod \"openstackclient\" (UID: \"771bf4b7-953b-4d1d-9afe-fc93ff4f2341\") " pod="openstack/openstackclient" Nov 28 09:00:01 crc kubenswrapper[4946]: I1128 09:00:01.622484 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405340-4qmvl" event={"ID":"0282a26a-f451-4607-9fb7-35a20456c493","Type":"ContainerStarted","Data":"e0da45573f4ec0620eafd2fcf0001a19b10f66248fdf076d579c7b1b4ecbc0dc"} Nov 28 09:00:01 crc kubenswrapper[4946]: I1128 09:00:01.672719 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhzp2\" (UniqueName: \"kubernetes.io/projected/6fa41e94-689e-4339-ad9c-43e4a91cddc2-kube-api-access-xhzp2\") pod \"kube-state-metrics-0\" (UID: \"6fa41e94-689e-4339-ad9c-43e4a91cddc2\") " pod="openstack/kube-state-metrics-0" Nov 28 09:00:01 crc kubenswrapper[4946]: I1128 09:00:01.695051 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 28 09:00:01 crc kubenswrapper[4946]: I1128 09:00:01.777718 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhzp2\" (UniqueName: \"kubernetes.io/projected/6fa41e94-689e-4339-ad9c-43e4a91cddc2-kube-api-access-xhzp2\") pod \"kube-state-metrics-0\" (UID: \"6fa41e94-689e-4339-ad9c-43e4a91cddc2\") " pod="openstack/kube-state-metrics-0" Nov 28 09:00:01 crc kubenswrapper[4946]: I1128 09:00:01.847871 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhzp2\" (UniqueName: \"kubernetes.io/projected/6fa41e94-689e-4339-ad9c-43e4a91cddc2-kube-api-access-xhzp2\") pod \"kube-state-metrics-0\" (UID: \"6fa41e94-689e-4339-ad9c-43e4a91cddc2\") " pod="openstack/kube-state-metrics-0" Nov 28 09:00:01 crc kubenswrapper[4946]: I1128 09:00:01.912076 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 28 09:00:02 crc kubenswrapper[4946]: I1128 09:00:02.299581 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Nov 28 09:00:02 crc kubenswrapper[4946]: I1128 09:00:02.312283 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Nov 28 09:00:02 crc kubenswrapper[4946]: I1128 09:00:02.319439 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Nov 28 09:00:02 crc kubenswrapper[4946]: I1128 09:00:02.319615 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Nov 28 09:00:02 crc kubenswrapper[4946]: I1128 09:00:02.319684 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Nov 28 09:00:02 crc kubenswrapper[4946]: I1128 09:00:02.319865 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Nov 28 09:00:02 crc kubenswrapper[4946]: I1128 09:00:02.320574 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-vq6p5" Nov 28 09:00:02 crc kubenswrapper[4946]: I1128 09:00:02.330694 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Nov 28 09:00:02 crc kubenswrapper[4946]: I1128 09:00:02.496130 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d418295b-f6d4-4ff4-9245-09df12057df7-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"d418295b-f6d4-4ff4-9245-09df12057df7\") " pod="openstack/alertmanager-metric-storage-0" Nov 28 09:00:02 crc kubenswrapper[4946]: I1128 09:00:02.496161 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/d418295b-f6d4-4ff4-9245-09df12057df7-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"d418295b-f6d4-4ff4-9245-09df12057df7\") " pod="openstack/alertmanager-metric-storage-0" Nov 28 09:00:02 crc kubenswrapper[4946]: I1128 09:00:02.496180 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d418295b-f6d4-4ff4-9245-09df12057df7-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"d418295b-f6d4-4ff4-9245-09df12057df7\") " pod="openstack/alertmanager-metric-storage-0" Nov 28 09:00:02 crc kubenswrapper[4946]: I1128 09:00:02.496206 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/d418295b-f6d4-4ff4-9245-09df12057df7-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"d418295b-f6d4-4ff4-9245-09df12057df7\") " pod="openstack/alertmanager-metric-storage-0" Nov 28 09:00:02 crc kubenswrapper[4946]: I1128 09:00:02.496237 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km2k6\" (UniqueName: \"kubernetes.io/projected/d418295b-f6d4-4ff4-9245-09df12057df7-kube-api-access-km2k6\") pod \"alertmanager-metric-storage-0\" (UID: \"d418295b-f6d4-4ff4-9245-09df12057df7\") " pod="openstack/alertmanager-metric-storage-0" Nov 28 09:00:02 crc kubenswrapper[4946]: I1128 09:00:02.496280 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d418295b-f6d4-4ff4-9245-09df12057df7-config-out\") pod \"alertmanager-metric-storage-0\" 
(UID: \"d418295b-f6d4-4ff4-9245-09df12057df7\") " pod="openstack/alertmanager-metric-storage-0" Nov 28 09:00:02 crc kubenswrapper[4946]: I1128 09:00:02.496314 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/d418295b-f6d4-4ff4-9245-09df12057df7-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"d418295b-f6d4-4ff4-9245-09df12057df7\") " pod="openstack/alertmanager-metric-storage-0" Nov 28 09:00:02 crc kubenswrapper[4946]: I1128 09:00:02.604105 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d418295b-f6d4-4ff4-9245-09df12057df7-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"d418295b-f6d4-4ff4-9245-09df12057df7\") " pod="openstack/alertmanager-metric-storage-0" Nov 28 09:00:02 crc kubenswrapper[4946]: I1128 09:00:02.604374 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/d418295b-f6d4-4ff4-9245-09df12057df7-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"d418295b-f6d4-4ff4-9245-09df12057df7\") " pod="openstack/alertmanager-metric-storage-0" Nov 28 09:00:02 crc kubenswrapper[4946]: I1128 09:00:02.604506 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d418295b-f6d4-4ff4-9245-09df12057df7-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"d418295b-f6d4-4ff4-9245-09df12057df7\") " pod="openstack/alertmanager-metric-storage-0" Nov 28 09:00:02 crc kubenswrapper[4946]: I1128 09:00:02.604642 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/d418295b-f6d4-4ff4-9245-09df12057df7-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"d418295b-f6d4-4ff4-9245-09df12057df7\") " pod="openstack/alertmanager-metric-storage-0" Nov 28 09:00:02 crc kubenswrapper[4946]: I1128 09:00:02.604815 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-km2k6\" (UniqueName: \"kubernetes.io/projected/d418295b-f6d4-4ff4-9245-09df12057df7-kube-api-access-km2k6\") pod \"alertmanager-metric-storage-0\" (UID: \"d418295b-f6d4-4ff4-9245-09df12057df7\") " pod="openstack/alertmanager-metric-storage-0" Nov 28 09:00:02 crc kubenswrapper[4946]: I1128 09:00:02.605006 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d418295b-f6d4-4ff4-9245-09df12057df7-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"d418295b-f6d4-4ff4-9245-09df12057df7\") " pod="openstack/alertmanager-metric-storage-0" Nov 28 09:00:02 crc kubenswrapper[4946]: I1128 09:00:02.605185 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/d418295b-f6d4-4ff4-9245-09df12057df7-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"d418295b-f6d4-4ff4-9245-09df12057df7\") " pod="openstack/alertmanager-metric-storage-0" Nov 28 09:00:02 crc kubenswrapper[4946]: I1128 09:00:02.605617 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/d418295b-f6d4-4ff4-9245-09df12057df7-alertmanager-metric-storage-db\") pod 
\"alertmanager-metric-storage-0\" (UID: \"d418295b-f6d4-4ff4-9245-09df12057df7\") " pod="openstack/alertmanager-metric-storage-0" Nov 28 09:00:02 crc kubenswrapper[4946]: I1128 09:00:02.631947 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d418295b-f6d4-4ff4-9245-09df12057df7-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"d418295b-f6d4-4ff4-9245-09df12057df7\") " pod="openstack/alertmanager-metric-storage-0" Nov 28 09:00:02 crc kubenswrapper[4946]: I1128 09:00:02.633139 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d418295b-f6d4-4ff4-9245-09df12057df7-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"d418295b-f6d4-4ff4-9245-09df12057df7\") " pod="openstack/alertmanager-metric-storage-0" Nov 28 09:00:02 crc kubenswrapper[4946]: I1128 09:00:02.633574 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/d418295b-f6d4-4ff4-9245-09df12057df7-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"d418295b-f6d4-4ff4-9245-09df12057df7\") " pod="openstack/alertmanager-metric-storage-0" Nov 28 09:00:02 crc kubenswrapper[4946]: I1128 09:00:02.645848 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-km2k6\" (UniqueName: \"kubernetes.io/projected/d418295b-f6d4-4ff4-9245-09df12057df7-kube-api-access-km2k6\") pod \"alertmanager-metric-storage-0\" (UID: \"d418295b-f6d4-4ff4-9245-09df12057df7\") " pod="openstack/alertmanager-metric-storage-0" Nov 28 09:00:02 crc kubenswrapper[4946]: I1128 09:00:02.645984 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d418295b-f6d4-4ff4-9245-09df12057df7-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"d418295b-f6d4-4ff4-9245-09df12057df7\") " pod="openstack/alertmanager-metric-storage-0" Nov 28 09:00:02 crc kubenswrapper[4946]: I1128 09:00:02.646265 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/d418295b-f6d4-4ff4-9245-09df12057df7-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"d418295b-f6d4-4ff4-9245-09df12057df7\") " pod="openstack/alertmanager-metric-storage-0" Nov 28 09:00:02 crc kubenswrapper[4946]: I1128 09:00:02.653605 4946 generic.go:334] "Generic (PLEG): container finished" podID="0282a26a-f451-4607-9fb7-35a20456c493" containerID="dccc546b1b091d76212c3293e5da353c92143e3a1d1d0b80f95be1072e8bfb30" exitCode=0 Nov 28 09:00:02 crc kubenswrapper[4946]: I1128 09:00:02.653637 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405340-4qmvl" event={"ID":"0282a26a-f451-4607-9fb7-35a20456c493","Type":"ContainerDied","Data":"dccc546b1b091d76212c3293e5da353c92143e3a1d1d0b80f95be1072e8bfb30"} Nov 28 09:00:02 crc kubenswrapper[4946]: I1128 09:00:02.718197 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 09:00:02 crc kubenswrapper[4946]: I1128 09:00:02.736404 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Nov 28 09:00:02 crc kubenswrapper[4946]: I1128 09:00:02.866426 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 28 09:00:02 crc kubenswrapper[4946]: I1128 09:00:02.930953 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 28 09:00:02 crc kubenswrapper[4946]: I1128 09:00:02.941347 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 28 09:00:02 crc kubenswrapper[4946]: I1128 09:00:02.944503 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Nov 28 09:00:02 crc kubenswrapper[4946]: I1128 09:00:02.944941 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Nov 28 09:00:02 crc kubenswrapper[4946]: I1128 09:00:02.945057 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Nov 28 09:00:02 crc kubenswrapper[4946]: I1128 09:00:02.946255 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 28 09:00:02 crc kubenswrapper[4946]: I1128 09:00:02.947749 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-4n9b8" Nov 28 09:00:02 crc kubenswrapper[4946]: I1128 09:00:02.950538 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Nov 28 09:00:02 crc kubenswrapper[4946]: I1128 09:00:02.951635 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Nov 28 09:00:03 crc kubenswrapper[4946]: E1128 09:00:03.120975 4946 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab2ed536_9dcd_49fc_be72_0bc6232e2bdb.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab2ed536_9dcd_49fc_be72_0bc6232e2bdb.slice/crio-923a13364c6c34707926aa1192b46a860a90d85aa7347db75c0ec338a1b0b22b\": RecentStats: unable to find data in memory cache]" Nov 28 09:00:03 crc kubenswrapper[4946]: I1128 09:00:03.121424 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwqxt\" (UniqueName: \"kubernetes.io/projected/a56c363b-0694-4f6b-882a-adb92c93e6e5-kube-api-access-pwqxt\") pod \"prometheus-metric-storage-0\" (UID: \"a56c363b-0694-4f6b-882a-adb92c93e6e5\") " pod="openstack/prometheus-metric-storage-0" Nov 28 09:00:03 crc kubenswrapper[4946]: I1128 09:00:03.121549 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/a56c363b-0694-4f6b-882a-adb92c93e6e5-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"a56c363b-0694-4f6b-882a-adb92c93e6e5\") " pod="openstack/prometheus-metric-storage-0" Nov 28 09:00:03 crc kubenswrapper[4946]: I1128 09:00:03.129609 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a56c363b-0694-4f6b-882a-adb92c93e6e5-tls-assets\") pod 
\"prometheus-metric-storage-0\" (UID: \"a56c363b-0694-4f6b-882a-adb92c93e6e5\") " pod="openstack/prometheus-metric-storage-0" Nov 28 09:00:03 crc kubenswrapper[4946]: I1128 09:00:03.129729 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/a56c363b-0694-4f6b-882a-adb92c93e6e5-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"a56c363b-0694-4f6b-882a-adb92c93e6e5\") " pod="openstack/prometheus-metric-storage-0" Nov 28 09:00:03 crc kubenswrapper[4946]: I1128 09:00:03.130035 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ddd0061f-4a8a-4119-979e-9a2fa97ae555\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ddd0061f-4a8a-4119-979e-9a2fa97ae555\") pod \"prometheus-metric-storage-0\" (UID: \"a56c363b-0694-4f6b-882a-adb92c93e6e5\") " pod="openstack/prometheus-metric-storage-0" Nov 28 09:00:03 crc kubenswrapper[4946]: I1128 09:00:03.130476 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a56c363b-0694-4f6b-882a-adb92c93e6e5-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"a56c363b-0694-4f6b-882a-adb92c93e6e5\") " pod="openstack/prometheus-metric-storage-0" Nov 28 09:00:03 crc kubenswrapper[4946]: I1128 09:00:03.130734 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a56c363b-0694-4f6b-882a-adb92c93e6e5-config\") pod \"prometheus-metric-storage-0\" (UID: \"a56c363b-0694-4f6b-882a-adb92c93e6e5\") " pod="openstack/prometheus-metric-storage-0" Nov 28 09:00:03 crc kubenswrapper[4946]: I1128 09:00:03.130795 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a56c363b-0694-4f6b-882a-adb92c93e6e5-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"a56c363b-0694-4f6b-882a-adb92c93e6e5\") " pod="openstack/prometheus-metric-storage-0" Nov 28 09:00:03 crc kubenswrapper[4946]: I1128 09:00:03.232629 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ddd0061f-4a8a-4119-979e-9a2fa97ae555\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ddd0061f-4a8a-4119-979e-9a2fa97ae555\") pod \"prometheus-metric-storage-0\" (UID: \"a56c363b-0694-4f6b-882a-adb92c93e6e5\") " pod="openstack/prometheus-metric-storage-0" Nov 28 09:00:03 crc kubenswrapper[4946]: I1128 09:00:03.232980 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a56c363b-0694-4f6b-882a-adb92c93e6e5-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"a56c363b-0694-4f6b-882a-adb92c93e6e5\") " pod="openstack/prometheus-metric-storage-0" Nov 28 09:00:03 crc kubenswrapper[4946]: I1128 09:00:03.233019 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a56c363b-0694-4f6b-882a-adb92c93e6e5-config\") pod \"prometheus-metric-storage-0\" (UID: \"a56c363b-0694-4f6b-882a-adb92c93e6e5\") " pod="openstack/prometheus-metric-storage-0" Nov 28 09:00:03 crc kubenswrapper[4946]: I1128 09:00:03.233040 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/a56c363b-0694-4f6b-882a-adb92c93e6e5-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"a56c363b-0694-4f6b-882a-adb92c93e6e5\") " pod="openstack/prometheus-metric-storage-0" Nov 28 09:00:03 crc kubenswrapper[4946]: I1128 09:00:03.233071 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwqxt\" (UniqueName: \"kubernetes.io/projected/a56c363b-0694-4f6b-882a-adb92c93e6e5-kube-api-access-pwqxt\") pod \"prometheus-metric-storage-0\" (UID: \"a56c363b-0694-4f6b-882a-adb92c93e6e5\") " pod="openstack/prometheus-metric-storage-0" Nov 28 09:00:03 crc kubenswrapper[4946]: I1128 09:00:03.233105 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/a56c363b-0694-4f6b-882a-adb92c93e6e5-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"a56c363b-0694-4f6b-882a-adb92c93e6e5\") " pod="openstack/prometheus-metric-storage-0" Nov 28 09:00:03 crc kubenswrapper[4946]: I1128 09:00:03.233142 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a56c363b-0694-4f6b-882a-adb92c93e6e5-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"a56c363b-0694-4f6b-882a-adb92c93e6e5\") " pod="openstack/prometheus-metric-storage-0" Nov 28 09:00:03 crc kubenswrapper[4946]: I1128 09:00:03.233163 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/a56c363b-0694-4f6b-882a-adb92c93e6e5-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"a56c363b-0694-4f6b-882a-adb92c93e6e5\") " pod="openstack/prometheus-metric-storage-0" Nov 28 09:00:03 crc kubenswrapper[4946]: I1128 09:00:03.234172 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/a56c363b-0694-4f6b-882a-adb92c93e6e5-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"a56c363b-0694-4f6b-882a-adb92c93e6e5\") " pod="openstack/prometheus-metric-storage-0" Nov 28 09:00:03 crc kubenswrapper[4946]: I1128 09:00:03.239134 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a56c363b-0694-4f6b-882a-adb92c93e6e5-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"a56c363b-0694-4f6b-882a-adb92c93e6e5\") " pod="openstack/prometheus-metric-storage-0" Nov 28 09:00:03 crc kubenswrapper[4946]: I1128 09:00:03.242498 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a56c363b-0694-4f6b-882a-adb92c93e6e5-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"a56c363b-0694-4f6b-882a-adb92c93e6e5\") " pod="openstack/prometheus-metric-storage-0" Nov 28 09:00:03 crc kubenswrapper[4946]: I1128 09:00:03.243367 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/a56c363b-0694-4f6b-882a-adb92c93e6e5-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"a56c363b-0694-4f6b-882a-adb92c93e6e5\") " pod="openstack/prometheus-metric-storage-0" Nov 28 09:00:03 crc kubenswrapper[4946]: I1128 09:00:03.248217 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/secret/a56c363b-0694-4f6b-882a-adb92c93e6e5-config\") pod \"prometheus-metric-storage-0\" (UID: \"a56c363b-0694-4f6b-882a-adb92c93e6e5\") " pod="openstack/prometheus-metric-storage-0" Nov 28 09:00:03 crc kubenswrapper[4946]: I1128 09:00:03.253046 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a56c363b-0694-4f6b-882a-adb92c93e6e5-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"a56c363b-0694-4f6b-882a-adb92c93e6e5\") " pod="openstack/prometheus-metric-storage-0" Nov 28 09:00:03 crc kubenswrapper[4946]: I1128 09:00:03.257283 4946 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 28 09:00:03 crc kubenswrapper[4946]: I1128 09:00:03.257323 4946 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ddd0061f-4a8a-4119-979e-9a2fa97ae555\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ddd0061f-4a8a-4119-979e-9a2fa97ae555\") pod \"prometheus-metric-storage-0\" (UID: \"a56c363b-0694-4f6b-882a-adb92c93e6e5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e5727735b5ba9cfbaa05491940dd43949e341185f301455afdf310147da2fdad/globalmount\"" pod="openstack/prometheus-metric-storage-0" Nov 28 09:00:03 crc kubenswrapper[4946]: I1128 09:00:03.269651 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwqxt\" (UniqueName: \"kubernetes.io/projected/a56c363b-0694-4f6b-882a-adb92c93e6e5-kube-api-access-pwqxt\") pod \"prometheus-metric-storage-0\" (UID: \"a56c363b-0694-4f6b-882a-adb92c93e6e5\") " pod="openstack/prometheus-metric-storage-0" Nov 28 09:00:03 crc kubenswrapper[4946]: I1128 09:00:03.364943 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ddd0061f-4a8a-4119-979e-9a2fa97ae555\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ddd0061f-4a8a-4119-979e-9a2fa97ae555\") pod \"prometheus-metric-storage-0\" (UID: \"a56c363b-0694-4f6b-882a-adb92c93e6e5\") " pod="openstack/prometheus-metric-storage-0" Nov 28 09:00:03 crc kubenswrapper[4946]: I1128 09:00:03.405268 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Nov 28 09:00:04 crc kubenswrapper[4946]: I1128 09:00:03.596154 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 28 09:00:04 crc kubenswrapper[4946]: I1128 09:00:03.673135 4946 generic.go:334] "Generic (PLEG): container finished" podID="fca9a6bf-87fc-4862-bb8a-7aa72eeabc64" containerID="bc7e8bd3178a163ac1d345a60a8e45120428b9eef9eb6bfebe77d056216515b0" exitCode=137 Nov 28 09:00:04 crc kubenswrapper[4946]: I1128 09:00:03.673201 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf8297856e4704c568b69800d1fc921fd6ff5b4f78e6083032ff01212912bfb5" Nov 28 09:00:04 crc kubenswrapper[4946]: I1128 09:00:03.677875 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"771bf4b7-953b-4d1d-9afe-fc93ff4f2341","Type":"ContainerStarted","Data":"8a7d620a49ef8efcfb2639df5c495dbe993056964398e7a667c3b8a10c4610d4"} Nov 28 09:00:04 crc kubenswrapper[4946]: I1128 09:00:03.677939 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"771bf4b7-953b-4d1d-9afe-fc93ff4f2341","Type":"ContainerStarted","Data":"3f46404b74b0121179a8ce663ae3621f727a9899c32311f6ed4e818f0a73a340"} Nov 28 09:00:04 crc kubenswrapper[4946]: I1128 09:00:03.678546 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 28 09:00:04 crc kubenswrapper[4946]: I1128 09:00:03.684545 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6fa41e94-689e-4339-ad9c-43e4a91cddc2","Type":"ContainerStarted","Data":"efb39363e844f38e8b0ceccb918bcbd47b166bce35fcf7414622a7b512b1481c"} Nov 28 09:00:04 crc kubenswrapper[4946]: I1128 09:00:03.704080 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"d418295b-f6d4-4ff4-9245-09df12057df7","Type":"ContainerStarted","Data":"1ba871344b6d37e485d01af3702e8cd9083e4f47bcc313d1641cbd3ef8bc7348"} Nov 28 09:00:04 crc kubenswrapper[4946]: I1128 09:00:03.742433 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.74241454 podStartE2EDuration="2.74241454s" podCreationTimestamp="2025-11-28 09:00:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 09:00:03.734119664 +0000 UTC m=+7658.112184785" watchObservedRunningTime="2025-11-28 09:00:03.74241454 +0000 UTC m=+7658.120479651" Nov 28 09:00:04 crc kubenswrapper[4946]: I1128 09:00:03.850264 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4m6nr\" (UniqueName: \"kubernetes.io/projected/fca9a6bf-87fc-4862-bb8a-7aa72eeabc64-kube-api-access-4m6nr\") pod \"fca9a6bf-87fc-4862-bb8a-7aa72eeabc64\" (UID: \"fca9a6bf-87fc-4862-bb8a-7aa72eeabc64\") " Nov 28 09:00:04 crc kubenswrapper[4946]: I1128 09:00:03.850773 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fca9a6bf-87fc-4862-bb8a-7aa72eeabc64-openstack-config-secret\") pod \"fca9a6bf-87fc-4862-bb8a-7aa72eeabc64\" (UID: \"fca9a6bf-87fc-4862-bb8a-7aa72eeabc64\") " Nov 28 09:00:04 crc kubenswrapper[4946]: I1128 09:00:03.850895 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fca9a6bf-87fc-4862-bb8a-7aa72eeabc64-openstack-config\") pod \"fca9a6bf-87fc-4862-bb8a-7aa72eeabc64\" (UID: 
\"fca9a6bf-87fc-4862-bb8a-7aa72eeabc64\") " Nov 28 09:00:04 crc kubenswrapper[4946]: I1128 09:00:03.868698 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fca9a6bf-87fc-4862-bb8a-7aa72eeabc64-kube-api-access-4m6nr" (OuterVolumeSpecName: "kube-api-access-4m6nr") pod "fca9a6bf-87fc-4862-bb8a-7aa72eeabc64" (UID: "fca9a6bf-87fc-4862-bb8a-7aa72eeabc64"). InnerVolumeSpecName "kube-api-access-4m6nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:00:04 crc kubenswrapper[4946]: I1128 09:00:03.934087 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fca9a6bf-87fc-4862-bb8a-7aa72eeabc64-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "fca9a6bf-87fc-4862-bb8a-7aa72eeabc64" (UID: "fca9a6bf-87fc-4862-bb8a-7aa72eeabc64"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 09:00:04 crc kubenswrapper[4946]: I1128 09:00:03.969353 4946 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fca9a6bf-87fc-4862-bb8a-7aa72eeabc64-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 28 09:00:04 crc kubenswrapper[4946]: I1128 09:00:03.969375 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4m6nr\" (UniqueName: \"kubernetes.io/projected/fca9a6bf-87fc-4862-bb8a-7aa72eeabc64-kube-api-access-4m6nr\") on node \"crc\" DevicePath \"\"" Nov 28 09:00:04 crc kubenswrapper[4946]: I1128 09:00:03.975387 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fca9a6bf-87fc-4862-bb8a-7aa72eeabc64-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "fca9a6bf-87fc-4862-bb8a-7aa72eeabc64" (UID: "fca9a6bf-87fc-4862-bb8a-7aa72eeabc64"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:00:04 crc kubenswrapper[4946]: I1128 09:00:04.023300 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fca9a6bf-87fc-4862-bb8a-7aa72eeabc64" path="/var/lib/kubelet/pods/fca9a6bf-87fc-4862-bb8a-7aa72eeabc64/volumes" Nov 28 09:00:04 crc kubenswrapper[4946]: I1128 09:00:04.070791 4946 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fca9a6bf-87fc-4862-bb8a-7aa72eeabc64-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 28 09:00:04 crc kubenswrapper[4946]: I1128 09:00:04.717326 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6fa41e94-689e-4339-ad9c-43e4a91cddc2","Type":"ContainerStarted","Data":"86b005dae2e85694e904b562729e91c177929123476466d99239bf4f0ce71d45"} Nov 28 09:00:04 crc kubenswrapper[4946]: I1128 09:00:04.718608 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 28 09:00:04 crc kubenswrapper[4946]: I1128 09:00:04.718699 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 28 09:00:04 crc kubenswrapper[4946]: I1128 09:00:04.750565 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.227185194 podStartE2EDuration="3.750547426s" podCreationTimestamp="2025-11-28 09:00:01 +0000 UTC" firstStartedPulling="2025-11-28 09:00:02.688717833 +0000 UTC m=+7657.066782944" lastFinishedPulling="2025-11-28 09:00:03.212080065 +0000 UTC m=+7657.590145176" observedRunningTime="2025-11-28 09:00:04.737810101 +0000 UTC m=+7659.115875212" watchObservedRunningTime="2025-11-28 09:00:04.750547426 +0000 UTC m=+7659.128612537" Nov 28 09:00:04 crc kubenswrapper[4946]: I1128 09:00:04.991546 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405340-4qmvl" Nov 28 09:00:05 crc kubenswrapper[4946]: I1128 09:00:05.034024 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 28 09:00:05 crc kubenswrapper[4946]: I1128 09:00:05.088637 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqrwf\" (UniqueName: \"kubernetes.io/projected/0282a26a-f451-4607-9fb7-35a20456c493-kube-api-access-kqrwf\") pod \"0282a26a-f451-4607-9fb7-35a20456c493\" (UID: \"0282a26a-f451-4607-9fb7-35a20456c493\") " Nov 28 09:00:05 crc kubenswrapper[4946]: I1128 09:00:05.088870 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0282a26a-f451-4607-9fb7-35a20456c493-config-volume\") pod \"0282a26a-f451-4607-9fb7-35a20456c493\" (UID: \"0282a26a-f451-4607-9fb7-35a20456c493\") " Nov 28 09:00:05 crc kubenswrapper[4946]: I1128 09:00:05.088907 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0282a26a-f451-4607-9fb7-35a20456c493-secret-volume\") pod \"0282a26a-f451-4607-9fb7-35a20456c493\" (UID: \"0282a26a-f451-4607-9fb7-35a20456c493\") " Nov 28 09:00:05 crc kubenswrapper[4946]: I1128 09:00:05.091329 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0282a26a-f451-4607-9fb7-35a20456c493-config-volume" (OuterVolumeSpecName: "config-volume") pod "0282a26a-f451-4607-9fb7-35a20456c493" (UID: "0282a26a-f451-4607-9fb7-35a20456c493"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 09:00:05 crc kubenswrapper[4946]: I1128 09:00:05.093324 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0282a26a-f451-4607-9fb7-35a20456c493-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0282a26a-f451-4607-9fb7-35a20456c493" (UID: "0282a26a-f451-4607-9fb7-35a20456c493"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:00:05 crc kubenswrapper[4946]: I1128 09:00:05.095861 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0282a26a-f451-4607-9fb7-35a20456c493-kube-api-access-kqrwf" (OuterVolumeSpecName: "kube-api-access-kqrwf") pod "0282a26a-f451-4607-9fb7-35a20456c493" (UID: "0282a26a-f451-4607-9fb7-35a20456c493"). InnerVolumeSpecName "kube-api-access-kqrwf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:00:05 crc kubenswrapper[4946]: I1128 09:00:05.191253 4946 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0282a26a-f451-4607-9fb7-35a20456c493-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 09:00:05 crc kubenswrapper[4946]: I1128 09:00:05.191416 4946 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0282a26a-f451-4607-9fb7-35a20456c493-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 28 09:00:05 crc kubenswrapper[4946]: I1128 09:00:05.191444 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqrwf\" (UniqueName: \"kubernetes.io/projected/0282a26a-f451-4607-9fb7-35a20456c493-kube-api-access-kqrwf\") on node \"crc\" DevicePath \"\"" Nov 28 09:00:05 crc kubenswrapper[4946]: I1128 09:00:05.599076 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gd96x"] Nov 28 09:00:05 crc kubenswrapper[4946]: E1128 09:00:05.599668 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0282a26a-f451-4607-9fb7-35a20456c493" containerName="collect-profiles" Nov 28 09:00:05 crc kubenswrapper[4946]: I1128 09:00:05.599684 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="0282a26a-f451-4607-9fb7-35a20456c493" containerName="collect-profiles" Nov 28 09:00:05 crc kubenswrapper[4946]: I1128 09:00:05.599917 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="0282a26a-f451-4607-9fb7-35a20456c493" containerName="collect-profiles" Nov 28 09:00:05 crc kubenswrapper[4946]: I1128 09:00:05.602035 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gd96x" Nov 28 09:00:05 crc kubenswrapper[4946]: I1128 09:00:05.610456 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gd96x"] Nov 28 09:00:05 crc kubenswrapper[4946]: I1128 09:00:05.702374 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08693c0a-842b-4dbe-8d56-41dbd3ff5422-catalog-content\") pod \"community-operators-gd96x\" (UID: \"08693c0a-842b-4dbe-8d56-41dbd3ff5422\") " pod="openshift-marketplace/community-operators-gd96x" Nov 28 09:00:05 crc kubenswrapper[4946]: I1128 09:00:05.702529 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d42vk\" (UniqueName: \"kubernetes.io/projected/08693c0a-842b-4dbe-8d56-41dbd3ff5422-kube-api-access-d42vk\") pod \"community-operators-gd96x\" (UID: \"08693c0a-842b-4dbe-8d56-41dbd3ff5422\") " pod="openshift-marketplace/community-operators-gd96x" Nov 28 09:00:05 crc kubenswrapper[4946]: I1128 09:00:05.702612 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08693c0a-842b-4dbe-8d56-41dbd3ff5422-utilities\") pod \"community-operators-gd96x\" (UID: \"08693c0a-842b-4dbe-8d56-41dbd3ff5422\") " pod="openshift-marketplace/community-operators-gd96x" Nov 28 09:00:05 crc kubenswrapper[4946]: I1128 09:00:05.727217 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405340-4qmvl" 
event={"ID":"0282a26a-f451-4607-9fb7-35a20456c493","Type":"ContainerDied","Data":"e0da45573f4ec0620eafd2fcf0001a19b10f66248fdf076d579c7b1b4ecbc0dc"} Nov 28 09:00:05 crc kubenswrapper[4946]: I1128 09:00:05.727261 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0da45573f4ec0620eafd2fcf0001a19b10f66248fdf076d579c7b1b4ecbc0dc" Nov 28 09:00:05 crc kubenswrapper[4946]: I1128 09:00:05.727280 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405340-4qmvl" Nov 28 09:00:05 crc kubenswrapper[4946]: I1128 09:00:05.730786 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a56c363b-0694-4f6b-882a-adb92c93e6e5","Type":"ContainerStarted","Data":"127fbfd6f8009abc0dc62326f31c20dd0b3af8d08f10a46ee8d4a099fae876c0"} Nov 28 09:00:05 crc kubenswrapper[4946]: I1128 09:00:05.804876 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08693c0a-842b-4dbe-8d56-41dbd3ff5422-catalog-content\") pod \"community-operators-gd96x\" (UID: \"08693c0a-842b-4dbe-8d56-41dbd3ff5422\") " pod="openshift-marketplace/community-operators-gd96x" Nov 28 09:00:05 crc kubenswrapper[4946]: I1128 09:00:05.804975 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d42vk\" (UniqueName: \"kubernetes.io/projected/08693c0a-842b-4dbe-8d56-41dbd3ff5422-kube-api-access-d42vk\") pod \"community-operators-gd96x\" (UID: \"08693c0a-842b-4dbe-8d56-41dbd3ff5422\") " pod="openshift-marketplace/community-operators-gd96x" Nov 28 09:00:05 crc kubenswrapper[4946]: I1128 09:00:05.805052 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08693c0a-842b-4dbe-8d56-41dbd3ff5422-utilities\") pod \"community-operators-gd96x\" (UID: \"08693c0a-842b-4dbe-8d56-41dbd3ff5422\") " pod="openshift-marketplace/community-operators-gd96x" Nov 28 09:00:05 crc kubenswrapper[4946]: I1128 09:00:05.805710 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08693c0a-842b-4dbe-8d56-41dbd3ff5422-catalog-content\") pod \"community-operators-gd96x\" (UID: \"08693c0a-842b-4dbe-8d56-41dbd3ff5422\") " pod="openshift-marketplace/community-operators-gd96x" Nov 28 09:00:05 crc kubenswrapper[4946]: I1128 09:00:05.805945 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08693c0a-842b-4dbe-8d56-41dbd3ff5422-utilities\") pod \"community-operators-gd96x\" (UID: \"08693c0a-842b-4dbe-8d56-41dbd3ff5422\") " pod="openshift-marketplace/community-operators-gd96x" Nov 28 09:00:05 crc kubenswrapper[4946]: I1128 09:00:05.823619 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d42vk\" (UniqueName: \"kubernetes.io/projected/08693c0a-842b-4dbe-8d56-41dbd3ff5422-kube-api-access-d42vk\") pod \"community-operators-gd96x\" (UID: \"08693c0a-842b-4dbe-8d56-41dbd3ff5422\") " pod="openshift-marketplace/community-operators-gd96x" Nov 28 09:00:05 crc kubenswrapper[4946]: I1128 09:00:05.928355 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gd96x" Nov 28 09:00:06 crc kubenswrapper[4946]: I1128 09:00:06.063964 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405295-zc2t8"] Nov 28 09:00:06 crc kubenswrapper[4946]: I1128 09:00:06.099955 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405295-zc2t8"] Nov 28 09:00:06 crc kubenswrapper[4946]: I1128 09:00:06.550702 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gd96x"] Nov 28 09:00:06 crc kubenswrapper[4946]: I1128 09:00:06.740599 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gd96x" event={"ID":"08693c0a-842b-4dbe-8d56-41dbd3ff5422","Type":"ContainerStarted","Data":"471a2d1781c06b6ca4abf583b087c3a60e58323991f4ccf326adfd69c3f1d888"} Nov 28 09:00:06 crc kubenswrapper[4946]: I1128 09:00:06.740670 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gd96x" event={"ID":"08693c0a-842b-4dbe-8d56-41dbd3ff5422","Type":"ContainerStarted","Data":"9c047f5a15d0abe974a0ed9ca331d70a523bb6136912c84d2644662c6b712ba7"} Nov 28 09:00:07 crc kubenswrapper[4946]: I1128 09:00:07.754064 4946 generic.go:334] "Generic (PLEG): container finished" podID="08693c0a-842b-4dbe-8d56-41dbd3ff5422" containerID="471a2d1781c06b6ca4abf583b087c3a60e58323991f4ccf326adfd69c3f1d888" exitCode=0 Nov 28 09:00:07 crc kubenswrapper[4946]: I1128 09:00:07.754174 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gd96x" event={"ID":"08693c0a-842b-4dbe-8d56-41dbd3ff5422","Type":"ContainerDied","Data":"471a2d1781c06b6ca4abf583b087c3a60e58323991f4ccf326adfd69c3f1d888"} Nov 28 09:00:08 crc kubenswrapper[4946]: I1128 09:00:08.008976 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c91401c-ef3e-4050-8373-929d7237425f" path="/var/lib/kubelet/pods/7c91401c-ef3e-4050-8373-929d7237425f/volumes" Nov 28 09:00:08 crc kubenswrapper[4946]: I1128 09:00:08.989697 4946 scope.go:117] "RemoveContainer" containerID="39cf365640a94edb527a038b876ca944a2f17e9a393ff02023ee1ab525b0e376" Nov 28 09:00:08 crc kubenswrapper[4946]: E1128 09:00:08.990070 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:00:09 crc kubenswrapper[4946]: I1128 09:00:09.778799 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gd96x" event={"ID":"08693c0a-842b-4dbe-8d56-41dbd3ff5422","Type":"ContainerStarted","Data":"748bea39b48099f31e9518baabb7bb8f5d2e0b6824bc39ebded2bc23ebbce707"} Nov 28 09:00:10 crc kubenswrapper[4946]: I1128 09:00:10.796834 4946 generic.go:334] "Generic (PLEG): container finished" podID="08693c0a-842b-4dbe-8d56-41dbd3ff5422" containerID="748bea39b48099f31e9518baabb7bb8f5d2e0b6824bc39ebded2bc23ebbce707" exitCode=0 Nov 28 09:00:10 crc kubenswrapper[4946]: I1128 09:00:10.796970 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gd96x" 
event={"ID":"08693c0a-842b-4dbe-8d56-41dbd3ff5422","Type":"ContainerDied","Data":"748bea39b48099f31e9518baabb7bb8f5d2e0b6824bc39ebded2bc23ebbce707"} Nov 28 09:00:11 crc kubenswrapper[4946]: I1128 09:00:11.814945 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"d418295b-f6d4-4ff4-9245-09df12057df7","Type":"ContainerStarted","Data":"aa5c75ae8ba354eb198acc9456c570d3060b0c540823aaaa73112b2b20d4682a"} Nov 28 09:00:11 crc kubenswrapper[4946]: I1128 09:00:11.820641 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a56c363b-0694-4f6b-882a-adb92c93e6e5","Type":"ContainerStarted","Data":"18f57682deb5db2344457f8fa0377a2abbc85b61db626373f760ef86f2011ab4"} Nov 28 09:00:11 crc kubenswrapper[4946]: I1128 09:00:11.927867 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 28 09:00:12 crc kubenswrapper[4946]: I1128 09:00:12.838624 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gd96x" event={"ID":"08693c0a-842b-4dbe-8d56-41dbd3ff5422","Type":"ContainerStarted","Data":"6852cbc8ed94b827eb84ede3a22920ae5f0e702d2f0272b60955d009f2ca405c"} Nov 28 09:00:12 crc kubenswrapper[4946]: I1128 09:00:12.875216 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gd96x" podStartSLOduration=3.889943705 podStartE2EDuration="7.875189003s" podCreationTimestamp="2025-11-28 09:00:05 +0000 UTC" firstStartedPulling="2025-11-28 09:00:07.756642257 +0000 UTC m=+7662.134707418" lastFinishedPulling="2025-11-28 09:00:11.741887615 +0000 UTC m=+7666.119952716" observedRunningTime="2025-11-28 09:00:12.866567939 +0000 UTC m=+7667.244633060" watchObservedRunningTime="2025-11-28 09:00:12.875189003 +0000 UTC m=+7667.253254164" Nov 28 09:00:13 crc kubenswrapper[4946]: E1128 09:00:13.444312 4946 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab2ed536_9dcd_49fc_be72_0bc6232e2bdb.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab2ed536_9dcd_49fc_be72_0bc6232e2bdb.slice/crio-923a13364c6c34707926aa1192b46a860a90d85aa7347db75c0ec338a1b0b22b\": RecentStats: unable to find data in memory cache]" Nov 28 09:00:15 crc kubenswrapper[4946]: I1128 09:00:15.929143 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gd96x" Nov 28 09:00:15 crc kubenswrapper[4946]: I1128 09:00:15.931771 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gd96x" Nov 28 09:00:16 crc kubenswrapper[4946]: I1128 09:00:16.005179 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gd96x" Nov 28 09:00:16 crc kubenswrapper[4946]: I1128 09:00:16.987250 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gd96x" Nov 28 09:00:17 crc kubenswrapper[4946]: I1128 09:00:17.056263 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gd96x"] Nov 28 09:00:18 crc kubenswrapper[4946]: I1128 09:00:18.911684 4946 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/community-operators-gd96x" podUID="08693c0a-842b-4dbe-8d56-41dbd3ff5422" containerName="registry-server" containerID="cri-o://6852cbc8ed94b827eb84ede3a22920ae5f0e702d2f0272b60955d009f2ca405c" gracePeriod=2 Nov 28 09:00:19 crc kubenswrapper[4946]: I1128 09:00:19.510302 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gd96x" Nov 28 09:00:19 crc kubenswrapper[4946]: I1128 09:00:19.611372 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08693c0a-842b-4dbe-8d56-41dbd3ff5422-utilities\") pod \"08693c0a-842b-4dbe-8d56-41dbd3ff5422\" (UID: \"08693c0a-842b-4dbe-8d56-41dbd3ff5422\") " Nov 28 09:00:19 crc kubenswrapper[4946]: I1128 09:00:19.611728 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d42vk\" (UniqueName: \"kubernetes.io/projected/08693c0a-842b-4dbe-8d56-41dbd3ff5422-kube-api-access-d42vk\") pod \"08693c0a-842b-4dbe-8d56-41dbd3ff5422\" (UID: \"08693c0a-842b-4dbe-8d56-41dbd3ff5422\") " Nov 28 09:00:19 crc kubenswrapper[4946]: I1128 09:00:19.611926 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08693c0a-842b-4dbe-8d56-41dbd3ff5422-catalog-content\") pod \"08693c0a-842b-4dbe-8d56-41dbd3ff5422\" (UID: \"08693c0a-842b-4dbe-8d56-41dbd3ff5422\") " Nov 28 09:00:19 crc kubenswrapper[4946]: I1128 09:00:19.612410 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08693c0a-842b-4dbe-8d56-41dbd3ff5422-utilities" (OuterVolumeSpecName: "utilities") pod "08693c0a-842b-4dbe-8d56-41dbd3ff5422" (UID: "08693c0a-842b-4dbe-8d56-41dbd3ff5422"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 09:00:19 crc kubenswrapper[4946]: I1128 09:00:19.612835 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08693c0a-842b-4dbe-8d56-41dbd3ff5422-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 09:00:19 crc kubenswrapper[4946]: I1128 09:00:19.624388 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08693c0a-842b-4dbe-8d56-41dbd3ff5422-kube-api-access-d42vk" (OuterVolumeSpecName: "kube-api-access-d42vk") pod "08693c0a-842b-4dbe-8d56-41dbd3ff5422" (UID: "08693c0a-842b-4dbe-8d56-41dbd3ff5422"). InnerVolumeSpecName "kube-api-access-d42vk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:00:19 crc kubenswrapper[4946]: I1128 09:00:19.683085 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08693c0a-842b-4dbe-8d56-41dbd3ff5422-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "08693c0a-842b-4dbe-8d56-41dbd3ff5422" (UID: "08693c0a-842b-4dbe-8d56-41dbd3ff5422"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 09:00:19 crc kubenswrapper[4946]: I1128 09:00:19.715153 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d42vk\" (UniqueName: \"kubernetes.io/projected/08693c0a-842b-4dbe-8d56-41dbd3ff5422-kube-api-access-d42vk\") on node \"crc\" DevicePath \"\"" Nov 28 09:00:19 crc kubenswrapper[4946]: I1128 09:00:19.715189 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08693c0a-842b-4dbe-8d56-41dbd3ff5422-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 09:00:19 crc kubenswrapper[4946]: I1128 09:00:19.929540 4946 generic.go:334] "Generic (PLEG): container finished" podID="08693c0a-842b-4dbe-8d56-41dbd3ff5422" containerID="6852cbc8ed94b827eb84ede3a22920ae5f0e702d2f0272b60955d009f2ca405c" exitCode=0 Nov 28 09:00:19 crc kubenswrapper[4946]: I1128 09:00:19.929657 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gd96x" Nov 28 09:00:19 crc kubenswrapper[4946]: I1128 09:00:19.929663 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gd96x" event={"ID":"08693c0a-842b-4dbe-8d56-41dbd3ff5422","Type":"ContainerDied","Data":"6852cbc8ed94b827eb84ede3a22920ae5f0e702d2f0272b60955d009f2ca405c"} Nov 28 09:00:19 crc kubenswrapper[4946]: I1128 09:00:19.929741 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gd96x" event={"ID":"08693c0a-842b-4dbe-8d56-41dbd3ff5422","Type":"ContainerDied","Data":"9c047f5a15d0abe974a0ed9ca331d70a523bb6136912c84d2644662c6b712ba7"} Nov 28 09:00:19 crc kubenswrapper[4946]: I1128 09:00:19.929766 4946 scope.go:117] "RemoveContainer" containerID="6852cbc8ed94b827eb84ede3a22920ae5f0e702d2f0272b60955d009f2ca405c" Nov 28 09:00:19 crc kubenswrapper[4946]: I1128 09:00:19.934790 4946 generic.go:334] "Generic (PLEG): container finished" podID="d418295b-f6d4-4ff4-9245-09df12057df7" containerID="aa5c75ae8ba354eb198acc9456c570d3060b0c540823aaaa73112b2b20d4682a" exitCode=0 Nov 28 09:00:19 crc kubenswrapper[4946]: I1128 09:00:19.934859 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"d418295b-f6d4-4ff4-9245-09df12057df7","Type":"ContainerDied","Data":"aa5c75ae8ba354eb198acc9456c570d3060b0c540823aaaa73112b2b20d4682a"} Nov 28 09:00:19 crc kubenswrapper[4946]: I1128 09:00:19.957865 4946 scope.go:117] "RemoveContainer" containerID="748bea39b48099f31e9518baabb7bb8f5d2e0b6824bc39ebded2bc23ebbce707" Nov 28 09:00:20 crc kubenswrapper[4946]: I1128 09:00:20.005828 4946 scope.go:117] "RemoveContainer" containerID="471a2d1781c06b6ca4abf583b087c3a60e58323991f4ccf326adfd69c3f1d888" Nov 28 09:00:20 crc kubenswrapper[4946]: I1128 09:00:20.013886 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gd96x"] Nov 28 09:00:20 crc kubenswrapper[4946]: I1128 09:00:20.027378 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gd96x"] Nov 28 09:00:20 crc kubenswrapper[4946]: I1128 09:00:20.046840 4946 scope.go:117] "RemoveContainer" containerID="6852cbc8ed94b827eb84ede3a22920ae5f0e702d2f0272b60955d009f2ca405c" Nov 28 09:00:20 crc kubenswrapper[4946]: E1128 09:00:20.047260 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"6852cbc8ed94b827eb84ede3a22920ae5f0e702d2f0272b60955d009f2ca405c\": container with ID starting with 6852cbc8ed94b827eb84ede3a22920ae5f0e702d2f0272b60955d009f2ca405c not found: ID does not exist" containerID="6852cbc8ed94b827eb84ede3a22920ae5f0e702d2f0272b60955d009f2ca405c" Nov 28 09:00:20 crc kubenswrapper[4946]: I1128 09:00:20.047401 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6852cbc8ed94b827eb84ede3a22920ae5f0e702d2f0272b60955d009f2ca405c"} err="failed to get container status \"6852cbc8ed94b827eb84ede3a22920ae5f0e702d2f0272b60955d009f2ca405c\": rpc error: code = NotFound desc = could not find container \"6852cbc8ed94b827eb84ede3a22920ae5f0e702d2f0272b60955d009f2ca405c\": container with ID starting with 6852cbc8ed94b827eb84ede3a22920ae5f0e702d2f0272b60955d009f2ca405c not found: ID does not exist" Nov 28 09:00:20 crc kubenswrapper[4946]: I1128 09:00:20.047588 4946 scope.go:117] "RemoveContainer" containerID="748bea39b48099f31e9518baabb7bb8f5d2e0b6824bc39ebded2bc23ebbce707" Nov 28 09:00:20 crc kubenswrapper[4946]: E1128 09:00:20.048172 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"748bea39b48099f31e9518baabb7bb8f5d2e0b6824bc39ebded2bc23ebbce707\": container with ID starting with 748bea39b48099f31e9518baabb7bb8f5d2e0b6824bc39ebded2bc23ebbce707 not found: ID does not exist" containerID="748bea39b48099f31e9518baabb7bb8f5d2e0b6824bc39ebded2bc23ebbce707" Nov 28 09:00:20 crc kubenswrapper[4946]: I1128 09:00:20.048211 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"748bea39b48099f31e9518baabb7bb8f5d2e0b6824bc39ebded2bc23ebbce707"} err="failed to get container status \"748bea39b48099f31e9518baabb7bb8f5d2e0b6824bc39ebded2bc23ebbce707\": rpc error: code = NotFound desc = could not find container \"748bea39b48099f31e9518baabb7bb8f5d2e0b6824bc39ebded2bc23ebbce707\": container with ID starting with 748bea39b48099f31e9518baabb7bb8f5d2e0b6824bc39ebded2bc23ebbce707 not found: ID does not exist" Nov 28 09:00:20 crc kubenswrapper[4946]: I1128 09:00:20.048237 4946 scope.go:117] "RemoveContainer" containerID="471a2d1781c06b6ca4abf583b087c3a60e58323991f4ccf326adfd69c3f1d888" Nov 28 09:00:20 crc kubenswrapper[4946]: E1128 09:00:20.048572 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"471a2d1781c06b6ca4abf583b087c3a60e58323991f4ccf326adfd69c3f1d888\": container with ID starting with 471a2d1781c06b6ca4abf583b087c3a60e58323991f4ccf326adfd69c3f1d888 not found: ID does not exist" containerID="471a2d1781c06b6ca4abf583b087c3a60e58323991f4ccf326adfd69c3f1d888" Nov 28 09:00:20 crc kubenswrapper[4946]: I1128 09:00:20.048627 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"471a2d1781c06b6ca4abf583b087c3a60e58323991f4ccf326adfd69c3f1d888"} err="failed to get container status \"471a2d1781c06b6ca4abf583b087c3a60e58323991f4ccf326adfd69c3f1d888\": rpc error: code = NotFound desc = could not find container \"471a2d1781c06b6ca4abf583b087c3a60e58323991f4ccf326adfd69c3f1d888\": container with ID starting with 471a2d1781c06b6ca4abf583b087c3a60e58323991f4ccf326adfd69c3f1d888 not found: ID does not exist" Nov 28 09:00:20 crc kubenswrapper[4946]: I1128 09:00:20.952542 4946 generic.go:334] "Generic (PLEG): container finished" podID="a56c363b-0694-4f6b-882a-adb92c93e6e5" 
containerID="18f57682deb5db2344457f8fa0377a2abbc85b61db626373f760ef86f2011ab4" exitCode=0 Nov 28 09:00:20 crc kubenswrapper[4946]: I1128 09:00:20.952609 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a56c363b-0694-4f6b-882a-adb92c93e6e5","Type":"ContainerDied","Data":"18f57682deb5db2344457f8fa0377a2abbc85b61db626373f760ef86f2011ab4"} Nov 28 09:00:20 crc kubenswrapper[4946]: I1128 09:00:20.991530 4946 scope.go:117] "RemoveContainer" containerID="39cf365640a94edb527a038b876ca944a2f17e9a393ff02023ee1ab525b0e376" Nov 28 09:00:20 crc kubenswrapper[4946]: E1128 09:00:20.992258 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:00:21 crc kubenswrapper[4946]: I1128 09:00:21.060314 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-q9d9x"] Nov 28 09:00:21 crc kubenswrapper[4946]: I1128 09:00:21.072904 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-283e-account-create-update-zkp2w"] Nov 28 09:00:21 crc kubenswrapper[4946]: I1128 09:00:21.084989 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-283e-account-create-update-zkp2w"] Nov 28 09:00:21 crc kubenswrapper[4946]: I1128 09:00:21.096031 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-q9d9x"] Nov 28 09:00:22 crc kubenswrapper[4946]: I1128 09:00:22.011593 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08693c0a-842b-4dbe-8d56-41dbd3ff5422" path="/var/lib/kubelet/pods/08693c0a-842b-4dbe-8d56-41dbd3ff5422/volumes" Nov 28 09:00:22 crc kubenswrapper[4946]: I1128 09:00:22.013967 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e4dd7c8-ff87-4edc-bf02-52eae38dc702" path="/var/lib/kubelet/pods/8e4dd7c8-ff87-4edc-bf02-52eae38dc702/volumes" Nov 28 09:00:22 crc kubenswrapper[4946]: I1128 09:00:22.015434 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4e4f3ce-b467-44bd-88ed-22ab591f9730" path="/var/lib/kubelet/pods/b4e4f3ce-b467-44bd-88ed-22ab591f9730/volumes" Nov 28 09:00:23 crc kubenswrapper[4946]: E1128 09:00:23.706436 4946 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab2ed536_9dcd_49fc_be72_0bc6232e2bdb.slice/crio-923a13364c6c34707926aa1192b46a860a90d85aa7347db75c0ec338a1b0b22b\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab2ed536_9dcd_49fc_be72_0bc6232e2bdb.slice\": RecentStats: unable to find data in memory cache]" Nov 28 09:00:24 crc kubenswrapper[4946]: I1128 09:00:24.000403 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"d418295b-f6d4-4ff4-9245-09df12057df7","Type":"ContainerStarted","Data":"23c4e55459168237b67a0e4aa7b160717550ee6f119fe1e0d8be40d8e8550ef7"} Nov 28 09:00:28 crc kubenswrapper[4946]: I1128 09:00:28.039530 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" 
event={"ID":"d418295b-f6d4-4ff4-9245-09df12057df7","Type":"ContainerStarted","Data":"c47cfd2713bbcb7c6622e3864e353990b125225a0350d32578a1bfeac4d51fb9"} Nov 28 09:00:28 crc kubenswrapper[4946]: I1128 09:00:28.041208 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0" Nov 28 09:00:28 crc kubenswrapper[4946]: I1128 09:00:28.044617 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0" Nov 28 09:00:28 crc kubenswrapper[4946]: I1128 09:00:28.074719 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=6.130411284 podStartE2EDuration="26.074696948s" podCreationTimestamp="2025-11-28 09:00:02 +0000 UTC" firstStartedPulling="2025-11-28 09:00:03.438286177 +0000 UTC m=+7657.816351278" lastFinishedPulling="2025-11-28 09:00:23.382571831 +0000 UTC m=+7677.760636942" observedRunningTime="2025-11-28 09:00:28.066206358 +0000 UTC m=+7682.444271549" watchObservedRunningTime="2025-11-28 09:00:28.074696948 +0000 UTC m=+7682.452762069" Nov 28 09:00:29 crc kubenswrapper[4946]: I1128 09:00:29.054734 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a56c363b-0694-4f6b-882a-adb92c93e6e5","Type":"ContainerStarted","Data":"4f0bc3b01f30729ad03e7962fc67810b4b108032332a026e7d801a43c9ef9351"} Nov 28 09:00:31 crc kubenswrapper[4946]: I1128 09:00:31.989512 4946 scope.go:117] "RemoveContainer" containerID="39cf365640a94edb527a038b876ca944a2f17e9a393ff02023ee1ab525b0e376" Nov 28 09:00:31 crc kubenswrapper[4946]: E1128 09:00:31.989965 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:00:32 crc kubenswrapper[4946]: I1128 09:00:32.535632 4946 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 09:00:33 crc kubenswrapper[4946]: I1128 09:00:33.118198 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a56c363b-0694-4f6b-882a-adb92c93e6e5","Type":"ContainerStarted","Data":"03bc0ed75daf6dc4d7c554fcbdac1a23dd98526d69a0d4cce61888cb918626b3"} Nov 28 09:00:36 crc kubenswrapper[4946]: I1128 09:00:36.152934 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a56c363b-0694-4f6b-882a-adb92c93e6e5","Type":"ContainerStarted","Data":"ad980a395c6f35bb094317f0cab9b4feede483806c9882106107670251331a07"} Nov 28 09:00:36 crc kubenswrapper[4946]: I1128 09:00:36.192305 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=4.646420264 podStartE2EDuration="35.192280971s" podCreationTimestamp="2025-11-28 09:00:01 +0000 UTC" firstStartedPulling="2025-11-28 09:00:05.034511559 +0000 UTC m=+7659.412576660" lastFinishedPulling="2025-11-28 09:00:35.580372256 +0000 UTC m=+7689.958437367" observedRunningTime="2025-11-28 09:00:36.186263562 +0000 UTC m=+7690.564328723" watchObservedRunningTime="2025-11-28 09:00:36.192280971 +0000 UTC m=+7690.570346122" Nov 28 
Nov 28 09:00:38 crc kubenswrapper[4946]: I1128 09:00:38.596966 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Nov 28 09:00:40 crc kubenswrapper[4946]: I1128 09:00:40.938701 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 09:00:40 crc kubenswrapper[4946]: E1128 09:00:40.939498 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08693c0a-842b-4dbe-8d56-41dbd3ff5422" containerName="registry-server" Nov 28 09:00:40 crc kubenswrapper[4946]: I1128 09:00:40.939514 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="08693c0a-842b-4dbe-8d56-41dbd3ff5422" containerName="registry-server" Nov 28 09:00:40 crc kubenswrapper[4946]: E1128 09:00:40.939527 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08693c0a-842b-4dbe-8d56-41dbd3ff5422" containerName="extract-utilities" Nov 28 09:00:40 crc kubenswrapper[4946]: I1128 09:00:40.939538 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="08693c0a-842b-4dbe-8d56-41dbd3ff5422" containerName="extract-utilities" Nov 28 09:00:40 crc kubenswrapper[4946]: E1128 09:00:40.939564 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08693c0a-842b-4dbe-8d56-41dbd3ff5422" containerName="extract-content" Nov 28 09:00:40 crc kubenswrapper[4946]: I1128 09:00:40.939572 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="08693c0a-842b-4dbe-8d56-41dbd3ff5422" containerName="extract-content" Nov 28 09:00:40 crc kubenswrapper[4946]: I1128 09:00:40.939945 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="08693c0a-842b-4dbe-8d56-41dbd3ff5422" containerName="registry-server" Nov 28 09:00:40 crc kubenswrapper[4946]: I1128 09:00:40.942364 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 09:00:40 crc kubenswrapper[4946]: I1128 09:00:40.945784 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 09:00:40 crc kubenswrapper[4946]: I1128 09:00:40.948197 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 09:00:40 crc kubenswrapper[4946]: I1128 09:00:40.948934 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 09:00:41 crc kubenswrapper[4946]: I1128 09:00:41.042751 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6-scripts\") pod \"ceilometer-0\" (UID: \"25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6\") " pod="openstack/ceilometer-0" Nov 28 09:00:41 crc kubenswrapper[4946]: I1128 09:00:41.042957 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6-log-httpd\") pod \"ceilometer-0\" (UID: \"25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6\") " pod="openstack/ceilometer-0" Nov 28 09:00:41 crc kubenswrapper[4946]: I1128 09:00:41.043053 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qq5g\" (UniqueName: \"kubernetes.io/projected/25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6-kube-api-access-8qq5g\") pod \"ceilometer-0\" (UID: \"25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6\") " pod="openstack/ceilometer-0" Nov 28 09:00:41 crc kubenswrapper[4946]: I1128 09:00:41.043119 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6\") " pod="openstack/ceilometer-0" Nov 28 09:00:41 crc kubenswrapper[4946]: I1128 09:00:41.043437 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6-run-httpd\") pod \"ceilometer-0\" (UID: \"25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6\") " pod="openstack/ceilometer-0" Nov 28 09:00:41 crc kubenswrapper[4946]: I1128 09:00:41.043605 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6\") " pod="openstack/ceilometer-0" Nov 28 09:00:41 crc kubenswrapper[4946]: I1128 09:00:41.043695 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6-config-data\") pod \"ceilometer-0\" (UID: \"25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6\") " pod="openstack/ceilometer-0" Nov 28 09:00:41 crc kubenswrapper[4946]: I1128 09:00:41.146359 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6-run-httpd\") pod \"ceilometer-0\" (UID: \"25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6\") " pod="openstack/ceilometer-0" Nov 28 09:00:41 crc kubenswrapper[4946]: I1128 09:00:41.146512 4946 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6\") " pod="openstack/ceilometer-0" Nov 28 09:00:41 crc kubenswrapper[4946]: I1128 09:00:41.146572 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6-config-data\") pod \"ceilometer-0\" (UID: \"25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6\") " pod="openstack/ceilometer-0" Nov 28 09:00:41 crc kubenswrapper[4946]: I1128 09:00:41.146617 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6-scripts\") pod \"ceilometer-0\" (UID: \"25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6\") " pod="openstack/ceilometer-0" Nov 28 09:00:41 crc kubenswrapper[4946]: I1128 09:00:41.146721 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6-log-httpd\") pod \"ceilometer-0\" (UID: \"25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6\") " pod="openstack/ceilometer-0" Nov 28 09:00:41 crc kubenswrapper[4946]: I1128 09:00:41.146784 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qq5g\" (UniqueName: \"kubernetes.io/projected/25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6-kube-api-access-8qq5g\") pod \"ceilometer-0\" (UID: \"25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6\") " pod="openstack/ceilometer-0" Nov 28 09:00:41 crc kubenswrapper[4946]: I1128 09:00:41.146841 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6\") " pod="openstack/ceilometer-0" Nov 28 09:00:41 crc kubenswrapper[4946]: I1128 09:00:41.147241 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6-run-httpd\") pod \"ceilometer-0\" (UID: \"25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6\") " pod="openstack/ceilometer-0" Nov 28 09:00:41 crc kubenswrapper[4946]: I1128 09:00:41.148506 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6-log-httpd\") pod \"ceilometer-0\" (UID: \"25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6\") " pod="openstack/ceilometer-0" Nov 28 09:00:41 crc kubenswrapper[4946]: I1128 09:00:41.153689 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6\") " pod="openstack/ceilometer-0" Nov 28 09:00:41 crc kubenswrapper[4946]: I1128 09:00:41.154492 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6\") " pod="openstack/ceilometer-0" Nov 28 09:00:41 crc kubenswrapper[4946]: I1128 09:00:41.154890 4946 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6-config-data\") pod \"ceilometer-0\" (UID: \"25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6\") " pod="openstack/ceilometer-0" Nov 28 09:00:41 crc kubenswrapper[4946]: I1128 09:00:41.155088 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6-scripts\") pod \"ceilometer-0\" (UID: \"25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6\") " pod="openstack/ceilometer-0" Nov 28 09:00:41 crc kubenswrapper[4946]: I1128 09:00:41.176153 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qq5g\" (UniqueName: \"kubernetes.io/projected/25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6-kube-api-access-8qq5g\") pod \"ceilometer-0\" (UID: \"25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6\") " pod="openstack/ceilometer-0" Nov 28 09:00:41 crc kubenswrapper[4946]: I1128 09:00:41.258293 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 09:00:41 crc kubenswrapper[4946]: I1128 09:00:41.776352 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 09:00:41 crc kubenswrapper[4946]: W1128 09:00:41.782211 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25b7a04b_18d3_41f1_b918_5b2fc6aaeaa6.slice/crio-b7c45ddea93129964269221997a6f0414604be46e3aa923b4403bc724cc15a47 WatchSource:0}: Error finding container b7c45ddea93129964269221997a6f0414604be46e3aa923b4403bc724cc15a47: Status 404 returned error can't find the container with id b7c45ddea93129964269221997a6f0414604be46e3aa923b4403bc724cc15a47 Nov 28 09:00:42 crc kubenswrapper[4946]: I1128 09:00:42.225453 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6","Type":"ContainerStarted","Data":"b7c45ddea93129964269221997a6f0414604be46e3aa923b4403bc724cc15a47"} Nov 28 09:00:43 crc kubenswrapper[4946]: I1128 09:00:43.990369 4946 scope.go:117] "RemoveContainer" containerID="39cf365640a94edb527a038b876ca944a2f17e9a393ff02023ee1ab525b0e376" Nov 28 09:00:43 crc kubenswrapper[4946]: E1128 09:00:43.991007 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:00:44 crc kubenswrapper[4946]: I1128 09:00:44.053022 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-rxtds"] Nov 28 09:00:44 crc kubenswrapper[4946]: I1128 09:00:44.062988 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-rxtds"] Nov 28 09:00:46 crc kubenswrapper[4946]: I1128 09:00:46.002107 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="870b12ba-c2ed-4359-b100-b68529bda2a0" path="/var/lib/kubelet/pods/870b12ba-c2ed-4359-b100-b68529bda2a0/volumes" Nov 28 09:00:47 crc kubenswrapper[4946]: I1128 09:00:47.281137 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6","Type":"ContainerStarted","Data":"18e8d280a89351278dd929520135dcbc610887430decfc9e6f338d658ecbd160"} Nov 28 09:00:48 crc kubenswrapper[4946]: I1128 09:00:48.290417 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6","Type":"ContainerStarted","Data":"923b854ec4ddfbc56ac35d113397714b9f285d953ee288c1e504428ddf9dd51b"} Nov 28 09:00:48 crc kubenswrapper[4946]: I1128 09:00:48.597344 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Nov 28 09:00:48 crc kubenswrapper[4946]: I1128 09:00:48.602633 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Nov 28 09:00:49 crc kubenswrapper[4946]: I1128 09:00:49.264750 4946 scope.go:117] "RemoveContainer" containerID="bad7c0788e8564179929cfafe33edaa496b34c3f4e55c8ace27e8eea8bb1ad14" Nov 28 09:00:49 crc kubenswrapper[4946]: I1128 09:00:49.308079 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6","Type":"ContainerStarted","Data":"18b921299ada359b90576611785179204489eab026cb3360f237443c0cb34110"} Nov 28 09:00:49 crc kubenswrapper[4946]: I1128 09:00:49.311196 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Nov 28 09:00:49 crc kubenswrapper[4946]: I1128 09:00:49.375253 4946 scope.go:117] "RemoveContainer" containerID="bc7e8bd3178a163ac1d345a60a8e45120428b9eef9eb6bfebe77d056216515b0" Nov 28 09:00:49 crc kubenswrapper[4946]: I1128 09:00:49.415964 4946 scope.go:117] "RemoveContainer" containerID="2ba82a0c1d0004ee4b24689508e2228492dcda52e90819cf398c2119b21367aa" Nov 28 09:00:49 crc kubenswrapper[4946]: I1128 09:00:49.480930 4946 scope.go:117] "RemoveContainer" containerID="d86d4ba36e919dfcdbf4c57e794f76aa6951025eb3be951f3ce4c323076eed6e" Nov 28 09:00:49 crc kubenswrapper[4946]: I1128 09:00:49.516145 4946 scope.go:117] "RemoveContainer" containerID="f7a85fb7551b1a7a5e7856cfa42c838c7895b03a2df14c820ebaa2a531213159" Nov 28 09:00:51 crc kubenswrapper[4946]: I1128 09:00:51.339952 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6","Type":"ContainerStarted","Data":"781c3ea37cd70624c57185541a1ab8c04d203e29125ec47a3842a0071682ae09"} Nov 28 09:00:51 crc kubenswrapper[4946]: I1128 09:00:51.340519 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 28 09:00:51 crc kubenswrapper[4946]: I1128 09:00:51.370753 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.736259039 podStartE2EDuration="11.370327573s" podCreationTimestamp="2025-11-28 09:00:40 +0000 UTC" firstStartedPulling="2025-11-28 09:00:41.78456993 +0000 UTC m=+7696.162635041" lastFinishedPulling="2025-11-28 09:00:50.418638464 +0000 UTC m=+7704.796703575" observedRunningTime="2025-11-28 09:00:51.360961711 +0000 UTC m=+7705.739026852" watchObservedRunningTime="2025-11-28 09:00:51.370327573 +0000 UTC m=+7705.748392694" Nov 28 09:00:56 crc kubenswrapper[4946]: I1128 09:00:56.810183 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="a5bdfa1c-6408-40f5-a9db-4b991fd2b022" containerName="galera" probeResult="failure" output="command timed out" Nov 28 09:00:56 
Nov 28 09:00:56 crc kubenswrapper[4946]: I1128 09:00:56.810960 4946 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="a5bdfa1c-6408-40f5-a9db-4b991fd2b022" containerName="galera" probeResult="failure" output="command timed out" Nov 28 09:00:57 crc kubenswrapper[4946]: I1128 09:00:57.992418 4946 scope.go:117] "RemoveContainer" containerID="39cf365640a94edb527a038b876ca944a2f17e9a393ff02023ee1ab525b0e376" Nov 28 09:00:58 crc kubenswrapper[4946]: I1128 09:00:58.415328 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerStarted","Data":"754338137d28a0c99ab6cd99ffe748df8dc8aeb4903b18cddaca660262b3d025"} Nov 28 09:00:59 crc kubenswrapper[4946]: I1128 09:00:59.654585 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-zgv8k"] Nov 28 09:00:59 crc kubenswrapper[4946]: I1128 09:00:59.656675 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-zgv8k" Nov 28 09:00:59 crc kubenswrapper[4946]: I1128 09:00:59.679551 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-zgv8k"] Nov 28 09:00:59 crc kubenswrapper[4946]: I1128 09:00:59.731477 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5039fbd0-b0df-4b98-9aa5-61ce01921c0d-operator-scripts\") pod \"aodh-db-create-zgv8k\" (UID: \"5039fbd0-b0df-4b98-9aa5-61ce01921c0d\") " pod="openstack/aodh-db-create-zgv8k" Nov 28 09:00:59 crc kubenswrapper[4946]: I1128 09:00:59.731537 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkqkx\" (UniqueName: \"kubernetes.io/projected/5039fbd0-b0df-4b98-9aa5-61ce01921c0d-kube-api-access-tkqkx\") pod \"aodh-db-create-zgv8k\" (UID: \"5039fbd0-b0df-4b98-9aa5-61ce01921c0d\") " pod="openstack/aodh-db-create-zgv8k" Nov 28 09:00:59 crc kubenswrapper[4946]: I1128 09:00:59.764136 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-4a11-account-create-update-f6t9x"] Nov 28 09:00:59 crc kubenswrapper[4946]: I1128 09:00:59.765530 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-4a11-account-create-update-f6t9x" Nov 28 09:00:59 crc kubenswrapper[4946]: I1128 09:00:59.767425 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Nov 28 09:00:59 crc kubenswrapper[4946]: I1128 09:00:59.780671 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-4a11-account-create-update-f6t9x"] Nov 28 09:00:59 crc kubenswrapper[4946]: I1128 09:00:59.833484 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5039fbd0-b0df-4b98-9aa5-61ce01921c0d-operator-scripts\") pod \"aodh-db-create-zgv8k\" (UID: \"5039fbd0-b0df-4b98-9aa5-61ce01921c0d\") " pod="openstack/aodh-db-create-zgv8k" Nov 28 09:00:59 crc kubenswrapper[4946]: I1128 09:00:59.833541 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkqkx\" (UniqueName: \"kubernetes.io/projected/5039fbd0-b0df-4b98-9aa5-61ce01921c0d-kube-api-access-tkqkx\") pod \"aodh-db-create-zgv8k\" (UID: \"5039fbd0-b0df-4b98-9aa5-61ce01921c0d\") " pod="openstack/aodh-db-create-zgv8k" Nov 28 09:00:59 crc kubenswrapper[4946]: I1128 09:00:59.833622 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7bf5623c-f83d-4071-a915-684b7e01a729-operator-scripts\") pod \"aodh-4a11-account-create-update-f6t9x\" (UID: \"7bf5623c-f83d-4071-a915-684b7e01a729\") " pod="openstack/aodh-4a11-account-create-update-f6t9x" Nov 28 09:00:59 crc kubenswrapper[4946]: I1128 09:00:59.833658 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgm48\" (UniqueName: \"kubernetes.io/projected/7bf5623c-f83d-4071-a915-684b7e01a729-kube-api-access-mgm48\") pod \"aodh-4a11-account-create-update-f6t9x\" (UID: \"7bf5623c-f83d-4071-a915-684b7e01a729\") " pod="openstack/aodh-4a11-account-create-update-f6t9x" Nov 28 09:00:59 crc kubenswrapper[4946]: I1128 09:00:59.834262 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5039fbd0-b0df-4b98-9aa5-61ce01921c0d-operator-scripts\") pod \"aodh-db-create-zgv8k\" (UID: \"5039fbd0-b0df-4b98-9aa5-61ce01921c0d\") " pod="openstack/aodh-db-create-zgv8k" Nov 28 09:00:59 crc kubenswrapper[4946]: I1128 09:00:59.864974 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkqkx\" (UniqueName: \"kubernetes.io/projected/5039fbd0-b0df-4b98-9aa5-61ce01921c0d-kube-api-access-tkqkx\") pod \"aodh-db-create-zgv8k\" (UID: \"5039fbd0-b0df-4b98-9aa5-61ce01921c0d\") " pod="openstack/aodh-db-create-zgv8k" Nov 28 09:00:59 crc kubenswrapper[4946]: I1128 09:00:59.935557 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7bf5623c-f83d-4071-a915-684b7e01a729-operator-scripts\") pod \"aodh-4a11-account-create-update-f6t9x\" (UID: \"7bf5623c-f83d-4071-a915-684b7e01a729\") " pod="openstack/aodh-4a11-account-create-update-f6t9x" Nov 28 09:00:59 crc kubenswrapper[4946]: I1128 09:00:59.935610 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgm48\" (UniqueName: \"kubernetes.io/projected/7bf5623c-f83d-4071-a915-684b7e01a729-kube-api-access-mgm48\") pod \"aodh-4a11-account-create-update-f6t9x\" (UID: 
\"7bf5623c-f83d-4071-a915-684b7e01a729\") " pod="openstack/aodh-4a11-account-create-update-f6t9x" Nov 28 09:00:59 crc kubenswrapper[4946]: I1128 09:00:59.936622 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7bf5623c-f83d-4071-a915-684b7e01a729-operator-scripts\") pod \"aodh-4a11-account-create-update-f6t9x\" (UID: \"7bf5623c-f83d-4071-a915-684b7e01a729\") " pod="openstack/aodh-4a11-account-create-update-f6t9x" Nov 28 09:00:59 crc kubenswrapper[4946]: I1128 09:00:59.953142 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgm48\" (UniqueName: \"kubernetes.io/projected/7bf5623c-f83d-4071-a915-684b7e01a729-kube-api-access-mgm48\") pod \"aodh-4a11-account-create-update-f6t9x\" (UID: \"7bf5623c-f83d-4071-a915-684b7e01a729\") " pod="openstack/aodh-4a11-account-create-update-f6t9x" Nov 28 09:00:59 crc kubenswrapper[4946]: I1128 09:00:59.974170 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-zgv8k" Nov 28 09:01:00 crc kubenswrapper[4946]: I1128 09:01:00.083264 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-4a11-account-create-update-f6t9x" Nov 28 09:01:00 crc kubenswrapper[4946]: I1128 09:01:00.152931 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29405341-p65t2"] Nov 28 09:01:00 crc kubenswrapper[4946]: I1128 09:01:00.154260 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29405341-p65t2" Nov 28 09:01:00 crc kubenswrapper[4946]: I1128 09:01:00.165592 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29405341-p65t2"] Nov 28 09:01:00 crc kubenswrapper[4946]: I1128 09:01:00.243568 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95-combined-ca-bundle\") pod \"keystone-cron-29405341-p65t2\" (UID: \"ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95\") " pod="openstack/keystone-cron-29405341-p65t2" Nov 28 09:01:00 crc kubenswrapper[4946]: I1128 09:01:00.244024 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95-fernet-keys\") pod \"keystone-cron-29405341-p65t2\" (UID: \"ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95\") " pod="openstack/keystone-cron-29405341-p65t2" Nov 28 09:01:00 crc kubenswrapper[4946]: I1128 09:01:00.244066 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vbdn\" (UniqueName: \"kubernetes.io/projected/ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95-kube-api-access-6vbdn\") pod \"keystone-cron-29405341-p65t2\" (UID: \"ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95\") " pod="openstack/keystone-cron-29405341-p65t2" Nov 28 09:01:00 crc kubenswrapper[4946]: I1128 09:01:00.244224 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95-config-data\") pod \"keystone-cron-29405341-p65t2\" (UID: \"ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95\") " pod="openstack/keystone-cron-29405341-p65t2" Nov 28 09:01:00 crc kubenswrapper[4946]: I1128 09:01:00.345980 4946 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95-combined-ca-bundle\") pod \"keystone-cron-29405341-p65t2\" (UID: \"ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95\") " pod="openstack/keystone-cron-29405341-p65t2" Nov 28 09:01:00 crc kubenswrapper[4946]: I1128 09:01:00.346320 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95-fernet-keys\") pod \"keystone-cron-29405341-p65t2\" (UID: \"ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95\") " pod="openstack/keystone-cron-29405341-p65t2" Nov 28 09:01:00 crc kubenswrapper[4946]: I1128 09:01:00.346341 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vbdn\" (UniqueName: \"kubernetes.io/projected/ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95-kube-api-access-6vbdn\") pod \"keystone-cron-29405341-p65t2\" (UID: \"ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95\") " pod="openstack/keystone-cron-29405341-p65t2" Nov 28 09:01:00 crc kubenswrapper[4946]: I1128 09:01:00.346365 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95-config-data\") pod \"keystone-cron-29405341-p65t2\" (UID: \"ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95\") " pod="openstack/keystone-cron-29405341-p65t2" Nov 28 09:01:00 crc kubenswrapper[4946]: I1128 09:01:00.351137 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95-combined-ca-bundle\") pod \"keystone-cron-29405341-p65t2\" (UID: \"ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95\") " pod="openstack/keystone-cron-29405341-p65t2" Nov 28 09:01:00 crc kubenswrapper[4946]: I1128 09:01:00.352529 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95-fernet-keys\") pod \"keystone-cron-29405341-p65t2\" (UID: \"ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95\") " pod="openstack/keystone-cron-29405341-p65t2" Nov 28 09:01:00 crc kubenswrapper[4946]: I1128 09:01:00.358192 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95-config-data\") pod \"keystone-cron-29405341-p65t2\" (UID: \"ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95\") " pod="openstack/keystone-cron-29405341-p65t2" Nov 28 09:01:00 crc kubenswrapper[4946]: I1128 09:01:00.368846 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vbdn\" (UniqueName: \"kubernetes.io/projected/ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95-kube-api-access-6vbdn\") pod \"keystone-cron-29405341-p65t2\" (UID: \"ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95\") " pod="openstack/keystone-cron-29405341-p65t2" Nov 28 09:01:00 crc kubenswrapper[4946]: I1128 09:01:00.483819 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-zgv8k"] Nov 28 09:01:00 crc kubenswrapper[4946]: W1128 09:01:00.486409 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5039fbd0_b0df_4b98_9aa5_61ce01921c0d.slice/crio-3dcab82622bcbf9942bc7e0487cdea594c77dd6389c42f4fc39492637e295d8a WatchSource:0}: Error finding container 3dcab82622bcbf9942bc7e0487cdea594c77dd6389c42f4fc39492637e295d8a: Status 404 returned 
error can't find the container with id 3dcab82622bcbf9942bc7e0487cdea594c77dd6389c42f4fc39492637e295d8a Nov 28 09:01:00 crc kubenswrapper[4946]: I1128 09:01:00.537234 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29405341-p65t2" Nov 28 09:01:00 crc kubenswrapper[4946]: I1128 09:01:00.673953 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-4a11-account-create-update-f6t9x"] Nov 28 09:01:01 crc kubenswrapper[4946]: I1128 09:01:01.057576 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29405341-p65t2"] Nov 28 09:01:01 crc kubenswrapper[4946]: I1128 09:01:01.464010 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29405341-p65t2" event={"ID":"ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95","Type":"ContainerStarted","Data":"94c32132874e7dd68dd7f26034e59839d37435d976c123331aff59c96d88b97a"} Nov 28 09:01:01 crc kubenswrapper[4946]: I1128 09:01:01.464373 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29405341-p65t2" event={"ID":"ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95","Type":"ContainerStarted","Data":"06c39abce33cbd2b1f1cb605499f73af587c30e6bca7aa92ab3749d7dd9386a9"} Nov 28 09:01:01 crc kubenswrapper[4946]: I1128 09:01:01.471785 4946 generic.go:334] "Generic (PLEG): container finished" podID="5039fbd0-b0df-4b98-9aa5-61ce01921c0d" containerID="7ed634b7edb7bb97dd1ffd7612e1ef36ca6529c02491f79a875804d1b82de7bb" exitCode=0 Nov 28 09:01:01 crc kubenswrapper[4946]: I1128 09:01:01.471963 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-zgv8k" event={"ID":"5039fbd0-b0df-4b98-9aa5-61ce01921c0d","Type":"ContainerDied","Data":"7ed634b7edb7bb97dd1ffd7612e1ef36ca6529c02491f79a875804d1b82de7bb"} Nov 28 09:01:01 crc kubenswrapper[4946]: I1128 09:01:01.471995 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-zgv8k" event={"ID":"5039fbd0-b0df-4b98-9aa5-61ce01921c0d","Type":"ContainerStarted","Data":"3dcab82622bcbf9942bc7e0487cdea594c77dd6389c42f4fc39492637e295d8a"} Nov 28 09:01:01 crc kubenswrapper[4946]: I1128 09:01:01.473929 4946 generic.go:334] "Generic (PLEG): container finished" podID="7bf5623c-f83d-4071-a915-684b7e01a729" containerID="38c74511aed168b9c1cc726534260007815626cf55cd2960b3777306de1b1e3b" exitCode=0 Nov 28 09:01:01 crc kubenswrapper[4946]: I1128 09:01:01.473982 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-4a11-account-create-update-f6t9x" event={"ID":"7bf5623c-f83d-4071-a915-684b7e01a729","Type":"ContainerDied","Data":"38c74511aed168b9c1cc726534260007815626cf55cd2960b3777306de1b1e3b"} Nov 28 09:01:01 crc kubenswrapper[4946]: I1128 09:01:01.474010 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-4a11-account-create-update-f6t9x" event={"ID":"7bf5623c-f83d-4071-a915-684b7e01a729","Type":"ContainerStarted","Data":"3cadf29f37d9ba859355b220733685aae0bfc3eb736f72afc86989f94d6faffc"} Nov 28 09:01:01 crc kubenswrapper[4946]: I1128 09:01:01.511432 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29405341-p65t2" podStartSLOduration=1.5114110219999999 podStartE2EDuration="1.511411022s" podCreationTimestamp="2025-11-28 09:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 09:01:01.487081559 +0000 UTC m=+7715.865146710" watchObservedRunningTime="2025-11-28 
09:01:01.511411022 +0000 UTC m=+7715.889476153" Nov 28 09:01:03 crc kubenswrapper[4946]: I1128 09:01:03.026574 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-zgv8k" Nov 28 09:01:03 crc kubenswrapper[4946]: I1128 09:01:03.034079 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-4a11-account-create-update-f6t9x" Nov 28 09:01:03 crc kubenswrapper[4946]: I1128 09:01:03.110065 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkqkx\" (UniqueName: \"kubernetes.io/projected/5039fbd0-b0df-4b98-9aa5-61ce01921c0d-kube-api-access-tkqkx\") pod \"5039fbd0-b0df-4b98-9aa5-61ce01921c0d\" (UID: \"5039fbd0-b0df-4b98-9aa5-61ce01921c0d\") " Nov 28 09:01:03 crc kubenswrapper[4946]: I1128 09:01:03.110124 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5039fbd0-b0df-4b98-9aa5-61ce01921c0d-operator-scripts\") pod \"5039fbd0-b0df-4b98-9aa5-61ce01921c0d\" (UID: \"5039fbd0-b0df-4b98-9aa5-61ce01921c0d\") " Nov 28 09:01:03 crc kubenswrapper[4946]: I1128 09:01:03.110192 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgm48\" (UniqueName: \"kubernetes.io/projected/7bf5623c-f83d-4071-a915-684b7e01a729-kube-api-access-mgm48\") pod \"7bf5623c-f83d-4071-a915-684b7e01a729\" (UID: \"7bf5623c-f83d-4071-a915-684b7e01a729\") " Nov 28 09:01:03 crc kubenswrapper[4946]: I1128 09:01:03.110315 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7bf5623c-f83d-4071-a915-684b7e01a729-operator-scripts\") pod \"7bf5623c-f83d-4071-a915-684b7e01a729\" (UID: \"7bf5623c-f83d-4071-a915-684b7e01a729\") " Nov 28 09:01:03 crc kubenswrapper[4946]: I1128 09:01:03.111054 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5039fbd0-b0df-4b98-9aa5-61ce01921c0d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5039fbd0-b0df-4b98-9aa5-61ce01921c0d" (UID: "5039fbd0-b0df-4b98-9aa5-61ce01921c0d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 09:01:03 crc kubenswrapper[4946]: I1128 09:01:03.111137 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bf5623c-f83d-4071-a915-684b7e01a729-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7bf5623c-f83d-4071-a915-684b7e01a729" (UID: "7bf5623c-f83d-4071-a915-684b7e01a729"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 09:01:03 crc kubenswrapper[4946]: I1128 09:01:03.117100 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5039fbd0-b0df-4b98-9aa5-61ce01921c0d-kube-api-access-tkqkx" (OuterVolumeSpecName: "kube-api-access-tkqkx") pod "5039fbd0-b0df-4b98-9aa5-61ce01921c0d" (UID: "5039fbd0-b0df-4b98-9aa5-61ce01921c0d"). InnerVolumeSpecName "kube-api-access-tkqkx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:01:03 crc kubenswrapper[4946]: I1128 09:01:03.123341 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bf5623c-f83d-4071-a915-684b7e01a729-kube-api-access-mgm48" (OuterVolumeSpecName: "kube-api-access-mgm48") pod "7bf5623c-f83d-4071-a915-684b7e01a729" (UID: "7bf5623c-f83d-4071-a915-684b7e01a729"). InnerVolumeSpecName "kube-api-access-mgm48". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:01:03 crc kubenswrapper[4946]: I1128 09:01:03.212844 4946 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7bf5623c-f83d-4071-a915-684b7e01a729-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 09:01:03 crc kubenswrapper[4946]: I1128 09:01:03.212876 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkqkx\" (UniqueName: \"kubernetes.io/projected/5039fbd0-b0df-4b98-9aa5-61ce01921c0d-kube-api-access-tkqkx\") on node \"crc\" DevicePath \"\"" Nov 28 09:01:03 crc kubenswrapper[4946]: I1128 09:01:03.212889 4946 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5039fbd0-b0df-4b98-9aa5-61ce01921c0d-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 09:01:03 crc kubenswrapper[4946]: I1128 09:01:03.212899 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgm48\" (UniqueName: \"kubernetes.io/projected/7bf5623c-f83d-4071-a915-684b7e01a729-kube-api-access-mgm48\") on node \"crc\" DevicePath \"\"" Nov 28 09:01:03 crc kubenswrapper[4946]: I1128 09:01:03.497875 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-4a11-account-create-update-f6t9x" event={"ID":"7bf5623c-f83d-4071-a915-684b7e01a729","Type":"ContainerDied","Data":"3cadf29f37d9ba859355b220733685aae0bfc3eb736f72afc86989f94d6faffc"} Nov 28 09:01:03 crc kubenswrapper[4946]: I1128 09:01:03.498274 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3cadf29f37d9ba859355b220733685aae0bfc3eb736f72afc86989f94d6faffc" Nov 28 09:01:03 crc kubenswrapper[4946]: I1128 09:01:03.498953 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-4a11-account-create-update-f6t9x" Nov 28 09:01:03 crc kubenswrapper[4946]: I1128 09:01:03.500594 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-zgv8k" event={"ID":"5039fbd0-b0df-4b98-9aa5-61ce01921c0d","Type":"ContainerDied","Data":"3dcab82622bcbf9942bc7e0487cdea594c77dd6389c42f4fc39492637e295d8a"} Nov 28 09:01:03 crc kubenswrapper[4946]: I1128 09:01:03.500629 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3dcab82622bcbf9942bc7e0487cdea594c77dd6389c42f4fc39492637e295d8a" Nov 28 09:01:03 crc kubenswrapper[4946]: I1128 09:01:03.500700 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-zgv8k" Nov 28 09:01:04 crc kubenswrapper[4946]: I1128 09:01:04.515224 4946 generic.go:334] "Generic (PLEG): container finished" podID="ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95" containerID="94c32132874e7dd68dd7f26034e59839d37435d976c123331aff59c96d88b97a" exitCode=0 Nov 28 09:01:04 crc kubenswrapper[4946]: I1128 09:01:04.515297 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29405341-p65t2" event={"ID":"ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95","Type":"ContainerDied","Data":"94c32132874e7dd68dd7f26034e59839d37435d976c123331aff59c96d88b97a"} Nov 28 09:01:05 crc kubenswrapper[4946]: I1128 09:01:05.041229 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-qb9nc"] Nov 28 09:01:05 crc kubenswrapper[4946]: E1128 09:01:05.042361 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bf5623c-f83d-4071-a915-684b7e01a729" containerName="mariadb-account-create-update" Nov 28 09:01:05 crc kubenswrapper[4946]: I1128 09:01:05.042384 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bf5623c-f83d-4071-a915-684b7e01a729" containerName="mariadb-account-create-update" Nov 28 09:01:05 crc kubenswrapper[4946]: E1128 09:01:05.042408 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5039fbd0-b0df-4b98-9aa5-61ce01921c0d" containerName="mariadb-database-create" Nov 28 09:01:05 crc kubenswrapper[4946]: I1128 09:01:05.042415 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="5039fbd0-b0df-4b98-9aa5-61ce01921c0d" containerName="mariadb-database-create" Nov 28 09:01:05 crc kubenswrapper[4946]: I1128 09:01:05.042640 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="5039fbd0-b0df-4b98-9aa5-61ce01921c0d" containerName="mariadb-database-create" Nov 28 09:01:05 crc kubenswrapper[4946]: I1128 09:01:05.042662 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bf5623c-f83d-4071-a915-684b7e01a729" containerName="mariadb-account-create-update" Nov 28 09:01:05 crc kubenswrapper[4946]: I1128 09:01:05.043394 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-qb9nc" Nov 28 09:01:05 crc kubenswrapper[4946]: I1128 09:01:05.045623 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 28 09:01:05 crc kubenswrapper[4946]: I1128 09:01:05.046080 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 28 09:01:05 crc kubenswrapper[4946]: I1128 09:01:05.046292 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 28 09:01:05 crc kubenswrapper[4946]: I1128 09:01:05.051605 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-qb9nc"] Nov 28 09:01:05 crc kubenswrapper[4946]: I1128 09:01:05.053222 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-4n9j5" Nov 28 09:01:05 crc kubenswrapper[4946]: I1128 09:01:05.153561 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0da2f081-ae1d-49cb-9f50-b0498fd21c92-config-data\") pod \"aodh-db-sync-qb9nc\" (UID: \"0da2f081-ae1d-49cb-9f50-b0498fd21c92\") " pod="openstack/aodh-db-sync-qb9nc" Nov 28 09:01:05 crc kubenswrapper[4946]: I1128 09:01:05.153646 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0da2f081-ae1d-49cb-9f50-b0498fd21c92-combined-ca-bundle\") pod \"aodh-db-sync-qb9nc\" (UID: \"0da2f081-ae1d-49cb-9f50-b0498fd21c92\") " pod="openstack/aodh-db-sync-qb9nc" Nov 28 09:01:05 crc kubenswrapper[4946]: I1128 09:01:05.153950 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww7jk\" (UniqueName: \"kubernetes.io/projected/0da2f081-ae1d-49cb-9f50-b0498fd21c92-kube-api-access-ww7jk\") pod \"aodh-db-sync-qb9nc\" (UID: \"0da2f081-ae1d-49cb-9f50-b0498fd21c92\") " pod="openstack/aodh-db-sync-qb9nc" Nov 28 09:01:05 crc kubenswrapper[4946]: I1128 09:01:05.154366 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0da2f081-ae1d-49cb-9f50-b0498fd21c92-scripts\") pod \"aodh-db-sync-qb9nc\" (UID: \"0da2f081-ae1d-49cb-9f50-b0498fd21c92\") " pod="openstack/aodh-db-sync-qb9nc" Nov 28 09:01:05 crc kubenswrapper[4946]: I1128 09:01:05.257543 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0da2f081-ae1d-49cb-9f50-b0498fd21c92-scripts\") pod \"aodh-db-sync-qb9nc\" (UID: \"0da2f081-ae1d-49cb-9f50-b0498fd21c92\") " pod="openstack/aodh-db-sync-qb9nc" Nov 28 09:01:05 crc kubenswrapper[4946]: I1128 09:01:05.257695 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0da2f081-ae1d-49cb-9f50-b0498fd21c92-config-data\") pod \"aodh-db-sync-qb9nc\" (UID: \"0da2f081-ae1d-49cb-9f50-b0498fd21c92\") " pod="openstack/aodh-db-sync-qb9nc" Nov 28 09:01:05 crc kubenswrapper[4946]: I1128 09:01:05.257732 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0da2f081-ae1d-49cb-9f50-b0498fd21c92-combined-ca-bundle\") pod \"aodh-db-sync-qb9nc\" (UID: \"0da2f081-ae1d-49cb-9f50-b0498fd21c92\") " pod="openstack/aodh-db-sync-qb9nc" Nov 28 09:01:05 crc kubenswrapper[4946]: I1128 
09:01:05.257799 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ww7jk\" (UniqueName: \"kubernetes.io/projected/0da2f081-ae1d-49cb-9f50-b0498fd21c92-kube-api-access-ww7jk\") pod \"aodh-db-sync-qb9nc\" (UID: \"0da2f081-ae1d-49cb-9f50-b0498fd21c92\") " pod="openstack/aodh-db-sync-qb9nc" Nov 28 09:01:05 crc kubenswrapper[4946]: I1128 09:01:05.267211 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0da2f081-ae1d-49cb-9f50-b0498fd21c92-scripts\") pod \"aodh-db-sync-qb9nc\" (UID: \"0da2f081-ae1d-49cb-9f50-b0498fd21c92\") " pod="openstack/aodh-db-sync-qb9nc" Nov 28 09:01:05 crc kubenswrapper[4946]: I1128 09:01:05.267437 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0da2f081-ae1d-49cb-9f50-b0498fd21c92-combined-ca-bundle\") pod \"aodh-db-sync-qb9nc\" (UID: \"0da2f081-ae1d-49cb-9f50-b0498fd21c92\") " pod="openstack/aodh-db-sync-qb9nc" Nov 28 09:01:05 crc kubenswrapper[4946]: I1128 09:01:05.268067 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0da2f081-ae1d-49cb-9f50-b0498fd21c92-config-data\") pod \"aodh-db-sync-qb9nc\" (UID: \"0da2f081-ae1d-49cb-9f50-b0498fd21c92\") " pod="openstack/aodh-db-sync-qb9nc" Nov 28 09:01:05 crc kubenswrapper[4946]: I1128 09:01:05.274665 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww7jk\" (UniqueName: \"kubernetes.io/projected/0da2f081-ae1d-49cb-9f50-b0498fd21c92-kube-api-access-ww7jk\") pod \"aodh-db-sync-qb9nc\" (UID: \"0da2f081-ae1d-49cb-9f50-b0498fd21c92\") " pod="openstack/aodh-db-sync-qb9nc" Nov 28 09:01:05 crc kubenswrapper[4946]: I1128 09:01:05.369313 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-qb9nc" Nov 28 09:01:05 crc kubenswrapper[4946]: I1128 09:01:05.905861 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-qb9nc"] Nov 28 09:01:05 crc kubenswrapper[4946]: I1128 09:01:05.955319 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29405341-p65t2" Nov 28 09:01:05 crc kubenswrapper[4946]: I1128 09:01:05.971451 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95-config-data\") pod \"ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95\" (UID: \"ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95\") " Nov 28 09:01:05 crc kubenswrapper[4946]: I1128 09:01:05.971569 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95-combined-ca-bundle\") pod \"ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95\" (UID: \"ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95\") " Nov 28 09:01:05 crc kubenswrapper[4946]: I1128 09:01:05.971707 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vbdn\" (UniqueName: \"kubernetes.io/projected/ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95-kube-api-access-6vbdn\") pod \"ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95\" (UID: \"ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95\") " Nov 28 09:01:05 crc kubenswrapper[4946]: I1128 09:01:05.971760 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95-fernet-keys\") pod \"ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95\" (UID: \"ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95\") " Nov 28 09:01:06 crc kubenswrapper[4946]: I1128 09:01:06.004754 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95" (UID: "ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:01:06 crc kubenswrapper[4946]: I1128 09:01:06.005078 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95-kube-api-access-6vbdn" (OuterVolumeSpecName: "kube-api-access-6vbdn") pod "ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95" (UID: "ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95"). InnerVolumeSpecName "kube-api-access-6vbdn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:01:06 crc kubenswrapper[4946]: I1128 09:01:06.010606 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95" (UID: "ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:01:06 crc kubenswrapper[4946]: I1128 09:01:06.042609 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95-config-data" (OuterVolumeSpecName: "config-data") pod "ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95" (UID: "ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:01:06 crc kubenswrapper[4946]: I1128 09:01:06.074243 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vbdn\" (UniqueName: \"kubernetes.io/projected/ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95-kube-api-access-6vbdn\") on node \"crc\" DevicePath \"\"" Nov 28 09:01:06 crc kubenswrapper[4946]: I1128 09:01:06.074286 4946 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 28 09:01:06 crc kubenswrapper[4946]: I1128 09:01:06.074301 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 09:01:06 crc kubenswrapper[4946]: I1128 09:01:06.074312 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 09:01:06 crc kubenswrapper[4946]: I1128 09:01:06.542108 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-qb9nc" event={"ID":"0da2f081-ae1d-49cb-9f50-b0498fd21c92","Type":"ContainerStarted","Data":"65d75f292ed4aa75f6ea1e2b59e4492c21e0d47dab867d3f7d3095cd89188fa9"} Nov 28 09:01:06 crc kubenswrapper[4946]: I1128 09:01:06.544743 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29405341-p65t2" event={"ID":"ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95","Type":"ContainerDied","Data":"06c39abce33cbd2b1f1cb605499f73af587c30e6bca7aa92ab3749d7dd9386a9"} Nov 28 09:01:06 crc kubenswrapper[4946]: I1128 09:01:06.544772 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06c39abce33cbd2b1f1cb605499f73af587c30e6bca7aa92ab3749d7dd9386a9" Nov 28 09:01:06 crc kubenswrapper[4946]: I1128 09:01:06.544826 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29405341-p65t2" Nov 28 09:01:11 crc kubenswrapper[4946]: I1128 09:01:11.263093 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 28 09:01:11 crc kubenswrapper[4946]: I1128 09:01:11.603074 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-qb9nc" event={"ID":"0da2f081-ae1d-49cb-9f50-b0498fd21c92","Type":"ContainerStarted","Data":"c0975f687317e3f17f976294dedb58d6554e37fb2b6357b921b2d73a05972957"} Nov 28 09:01:11 crc kubenswrapper[4946]: I1128 09:01:11.630965 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-qb9nc" podStartSLOduration=1.860524629 podStartE2EDuration="6.630940495s" podCreationTimestamp="2025-11-28 09:01:05 +0000 UTC" firstStartedPulling="2025-11-28 09:01:05.906678075 +0000 UTC m=+7720.284743186" lastFinishedPulling="2025-11-28 09:01:10.677093951 +0000 UTC m=+7725.055159052" observedRunningTime="2025-11-28 09:01:11.620898876 +0000 UTC m=+7725.998963997" watchObservedRunningTime="2025-11-28 09:01:11.630940495 +0000 UTC m=+7726.009005646" Nov 28 09:01:13 crc kubenswrapper[4946]: I1128 09:01:13.635902 4946 generic.go:334] "Generic (PLEG): container finished" podID="0da2f081-ae1d-49cb-9f50-b0498fd21c92" containerID="c0975f687317e3f17f976294dedb58d6554e37fb2b6357b921b2d73a05972957" exitCode=0 Nov 28 09:01:13 crc kubenswrapper[4946]: I1128 09:01:13.636021 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-qb9nc" event={"ID":"0da2f081-ae1d-49cb-9f50-b0498fd21c92","Type":"ContainerDied","Data":"c0975f687317e3f17f976294dedb58d6554e37fb2b6357b921b2d73a05972957"} Nov 28 09:01:15 crc kubenswrapper[4946]: I1128 09:01:15.107930 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-qb9nc" Nov 28 09:01:15 crc kubenswrapper[4946]: I1128 09:01:15.276116 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0da2f081-ae1d-49cb-9f50-b0498fd21c92-combined-ca-bundle\") pod \"0da2f081-ae1d-49cb-9f50-b0498fd21c92\" (UID: \"0da2f081-ae1d-49cb-9f50-b0498fd21c92\") " Nov 28 09:01:15 crc kubenswrapper[4946]: I1128 09:01:15.276275 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0da2f081-ae1d-49cb-9f50-b0498fd21c92-scripts\") pod \"0da2f081-ae1d-49cb-9f50-b0498fd21c92\" (UID: \"0da2f081-ae1d-49cb-9f50-b0498fd21c92\") " Nov 28 09:01:15 crc kubenswrapper[4946]: I1128 09:01:15.276344 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ww7jk\" (UniqueName: \"kubernetes.io/projected/0da2f081-ae1d-49cb-9f50-b0498fd21c92-kube-api-access-ww7jk\") pod \"0da2f081-ae1d-49cb-9f50-b0498fd21c92\" (UID: \"0da2f081-ae1d-49cb-9f50-b0498fd21c92\") " Nov 28 09:01:15 crc kubenswrapper[4946]: I1128 09:01:15.276419 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0da2f081-ae1d-49cb-9f50-b0498fd21c92-config-data\") pod \"0da2f081-ae1d-49cb-9f50-b0498fd21c92\" (UID: \"0da2f081-ae1d-49cb-9f50-b0498fd21c92\") " Nov 28 09:01:15 crc kubenswrapper[4946]: I1128 09:01:15.281102 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0da2f081-ae1d-49cb-9f50-b0498fd21c92-scripts" (OuterVolumeSpecName: "scripts") pod "0da2f081-ae1d-49cb-9f50-b0498fd21c92" (UID: "0da2f081-ae1d-49cb-9f50-b0498fd21c92"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:01:15 crc kubenswrapper[4946]: I1128 09:01:15.281622 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0da2f081-ae1d-49cb-9f50-b0498fd21c92-kube-api-access-ww7jk" (OuterVolumeSpecName: "kube-api-access-ww7jk") pod "0da2f081-ae1d-49cb-9f50-b0498fd21c92" (UID: "0da2f081-ae1d-49cb-9f50-b0498fd21c92"). InnerVolumeSpecName "kube-api-access-ww7jk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:01:15 crc kubenswrapper[4946]: I1128 09:01:15.302667 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0da2f081-ae1d-49cb-9f50-b0498fd21c92-config-data" (OuterVolumeSpecName: "config-data") pod "0da2f081-ae1d-49cb-9f50-b0498fd21c92" (UID: "0da2f081-ae1d-49cb-9f50-b0498fd21c92"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:01:15 crc kubenswrapper[4946]: I1128 09:01:15.303695 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0da2f081-ae1d-49cb-9f50-b0498fd21c92-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0da2f081-ae1d-49cb-9f50-b0498fd21c92" (UID: "0da2f081-ae1d-49cb-9f50-b0498fd21c92"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:01:15 crc kubenswrapper[4946]: I1128 09:01:15.378895 4946 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0da2f081-ae1d-49cb-9f50-b0498fd21c92-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 09:01:15 crc kubenswrapper[4946]: I1128 09:01:15.379117 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ww7jk\" (UniqueName: \"kubernetes.io/projected/0da2f081-ae1d-49cb-9f50-b0498fd21c92-kube-api-access-ww7jk\") on node \"crc\" DevicePath \"\"" Nov 28 09:01:15 crc kubenswrapper[4946]: I1128 09:01:15.379133 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0da2f081-ae1d-49cb-9f50-b0498fd21c92-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 09:01:15 crc kubenswrapper[4946]: I1128 09:01:15.379146 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0da2f081-ae1d-49cb-9f50-b0498fd21c92-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 09:01:15 crc kubenswrapper[4946]: I1128 09:01:15.662604 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-qb9nc" event={"ID":"0da2f081-ae1d-49cb-9f50-b0498fd21c92","Type":"ContainerDied","Data":"65d75f292ed4aa75f6ea1e2b59e4492c21e0d47dab867d3f7d3095cd89188fa9"} Nov 28 09:01:15 crc kubenswrapper[4946]: I1128 09:01:15.662665 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65d75f292ed4aa75f6ea1e2b59e4492c21e0d47dab867d3f7d3095cd89188fa9" Nov 28 09:01:15 crc kubenswrapper[4946]: I1128 09:01:15.662664 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-qb9nc" Nov 28 09:01:17 crc kubenswrapper[4946]: I1128 09:01:17.065652 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-c9h9f"] Nov 28 09:01:17 crc kubenswrapper[4946]: I1128 09:01:17.083425 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-c9h9f"] Nov 28 09:01:18 crc kubenswrapper[4946]: I1128 09:01:18.013948 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85e917fc-a566-491c-abc5-515295f5b38e" path="/var/lib/kubelet/pods/85e917fc-a566-491c-abc5-515295f5b38e/volumes" Nov 28 09:01:18 crc kubenswrapper[4946]: I1128 09:01:18.050270 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-bc0c-account-create-update-lq5dr"] Nov 28 09:01:18 crc kubenswrapper[4946]: I1128 09:01:18.061163 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-bc0c-account-create-update-lq5dr"] Nov 28 09:01:20 crc kubenswrapper[4946]: I1128 09:01:20.010833 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f0e1c79-d58d-4aeb-99b0-4ee1ecedf96e" path="/var/lib/kubelet/pods/0f0e1c79-d58d-4aeb-99b0-4ee1ecedf96e/volumes" Nov 28 09:01:20 crc kubenswrapper[4946]: I1128 09:01:20.160094 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Nov 28 09:01:20 crc kubenswrapper[4946]: E1128 09:01:20.160798 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0da2f081-ae1d-49cb-9f50-b0498fd21c92" containerName="aodh-db-sync" Nov 28 09:01:20 crc kubenswrapper[4946]: I1128 09:01:20.160839 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="0da2f081-ae1d-49cb-9f50-b0498fd21c92" containerName="aodh-db-sync" Nov 28 09:01:20 crc 
kubenswrapper[4946]: E1128 09:01:20.160940 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95" containerName="keystone-cron" Nov 28 09:01:20 crc kubenswrapper[4946]: I1128 09:01:20.160969 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95" containerName="keystone-cron" Nov 28 09:01:20 crc kubenswrapper[4946]: I1128 09:01:20.161498 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="0da2f081-ae1d-49cb-9f50-b0498fd21c92" containerName="aodh-db-sync" Nov 28 09:01:20 crc kubenswrapper[4946]: I1128 09:01:20.161553 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95" containerName="keystone-cron" Nov 28 09:01:20 crc kubenswrapper[4946]: I1128 09:01:20.164642 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 28 09:01:20 crc kubenswrapper[4946]: I1128 09:01:20.167166 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 28 09:01:20 crc kubenswrapper[4946]: I1128 09:01:20.167931 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 28 09:01:20 crc kubenswrapper[4946]: I1128 09:01:20.168976 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-4n9j5" Nov 28 09:01:20 crc kubenswrapper[4946]: I1128 09:01:20.181900 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 28 09:01:20 crc kubenswrapper[4946]: I1128 09:01:20.304680 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4fafb80-f462-4000-a1c1-ba8405b342cc-config-data\") pod \"aodh-0\" (UID: \"c4fafb80-f462-4000-a1c1-ba8405b342cc\") " pod="openstack/aodh-0" Nov 28 09:01:20 crc kubenswrapper[4946]: I1128 09:01:20.304726 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g46v\" (UniqueName: \"kubernetes.io/projected/c4fafb80-f462-4000-a1c1-ba8405b342cc-kube-api-access-6g46v\") pod \"aodh-0\" (UID: \"c4fafb80-f462-4000-a1c1-ba8405b342cc\") " pod="openstack/aodh-0" Nov 28 09:01:20 crc kubenswrapper[4946]: I1128 09:01:20.304756 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4fafb80-f462-4000-a1c1-ba8405b342cc-combined-ca-bundle\") pod \"aodh-0\" (UID: \"c4fafb80-f462-4000-a1c1-ba8405b342cc\") " pod="openstack/aodh-0" Nov 28 09:01:20 crc kubenswrapper[4946]: I1128 09:01:20.304795 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4fafb80-f462-4000-a1c1-ba8405b342cc-scripts\") pod \"aodh-0\" (UID: \"c4fafb80-f462-4000-a1c1-ba8405b342cc\") " pod="openstack/aodh-0" Nov 28 09:01:20 crc kubenswrapper[4946]: I1128 09:01:20.406828 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4fafb80-f462-4000-a1c1-ba8405b342cc-config-data\") pod \"aodh-0\" (UID: \"c4fafb80-f462-4000-a1c1-ba8405b342cc\") " pod="openstack/aodh-0" Nov 28 09:01:20 crc kubenswrapper[4946]: I1128 09:01:20.407066 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6g46v\" (UniqueName: 
\"kubernetes.io/projected/c4fafb80-f462-4000-a1c1-ba8405b342cc-kube-api-access-6g46v\") pod \"aodh-0\" (UID: \"c4fafb80-f462-4000-a1c1-ba8405b342cc\") " pod="openstack/aodh-0" Nov 28 09:01:20 crc kubenswrapper[4946]: I1128 09:01:20.407228 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4fafb80-f462-4000-a1c1-ba8405b342cc-combined-ca-bundle\") pod \"aodh-0\" (UID: \"c4fafb80-f462-4000-a1c1-ba8405b342cc\") " pod="openstack/aodh-0" Nov 28 09:01:20 crc kubenswrapper[4946]: I1128 09:01:20.407361 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4fafb80-f462-4000-a1c1-ba8405b342cc-scripts\") pod \"aodh-0\" (UID: \"c4fafb80-f462-4000-a1c1-ba8405b342cc\") " pod="openstack/aodh-0" Nov 28 09:01:20 crc kubenswrapper[4946]: I1128 09:01:20.415599 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4fafb80-f462-4000-a1c1-ba8405b342cc-config-data\") pod \"aodh-0\" (UID: \"c4fafb80-f462-4000-a1c1-ba8405b342cc\") " pod="openstack/aodh-0" Nov 28 09:01:20 crc kubenswrapper[4946]: I1128 09:01:20.416557 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4fafb80-f462-4000-a1c1-ba8405b342cc-combined-ca-bundle\") pod \"aodh-0\" (UID: \"c4fafb80-f462-4000-a1c1-ba8405b342cc\") " pod="openstack/aodh-0" Nov 28 09:01:20 crc kubenswrapper[4946]: I1128 09:01:20.427843 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4fafb80-f462-4000-a1c1-ba8405b342cc-scripts\") pod \"aodh-0\" (UID: \"c4fafb80-f462-4000-a1c1-ba8405b342cc\") " pod="openstack/aodh-0" Nov 28 09:01:20 crc kubenswrapper[4946]: I1128 09:01:20.431082 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6g46v\" (UniqueName: \"kubernetes.io/projected/c4fafb80-f462-4000-a1c1-ba8405b342cc-kube-api-access-6g46v\") pod \"aodh-0\" (UID: \"c4fafb80-f462-4000-a1c1-ba8405b342cc\") " pod="openstack/aodh-0" Nov 28 09:01:20 crc kubenswrapper[4946]: I1128 09:01:20.502335 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 28 09:01:20 crc kubenswrapper[4946]: I1128 09:01:20.995797 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 28 09:01:21 crc kubenswrapper[4946]: I1128 09:01:21.770760 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c4fafb80-f462-4000-a1c1-ba8405b342cc","Type":"ContainerStarted","Data":"b2817d2b32128b9d04e6eb1d73f34eb766014a37a2d5c6ed4fc789b43e9edd21"} Nov 28 09:01:21 crc kubenswrapper[4946]: I1128 09:01:21.771178 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c4fafb80-f462-4000-a1c1-ba8405b342cc","Type":"ContainerStarted","Data":"d0b52a9c2007d47feddb5900c84e271d4be8f08d3a370620b7dcd242d3c38808"} Nov 28 09:01:22 crc kubenswrapper[4946]: I1128 09:01:22.080565 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 09:01:22 crc kubenswrapper[4946]: I1128 09:01:22.080835 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6" containerName="ceilometer-central-agent" containerID="cri-o://18e8d280a89351278dd929520135dcbc610887430decfc9e6f338d658ecbd160" gracePeriod=30 Nov 28 09:01:22 crc kubenswrapper[4946]: I1128 09:01:22.080970 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6" containerName="proxy-httpd" containerID="cri-o://781c3ea37cd70624c57185541a1ab8c04d203e29125ec47a3842a0071682ae09" gracePeriod=30 Nov 28 09:01:22 crc kubenswrapper[4946]: I1128 09:01:22.081015 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6" containerName="sg-core" containerID="cri-o://18b921299ada359b90576611785179204489eab026cb3360f237443c0cb34110" gracePeriod=30 Nov 28 09:01:22 crc kubenswrapper[4946]: I1128 09:01:22.081047 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6" containerName="ceilometer-notification-agent" containerID="cri-o://923b854ec4ddfbc56ac35d113397714b9f285d953ee288c1e504428ddf9dd51b" gracePeriod=30 Nov 28 09:01:22 crc kubenswrapper[4946]: I1128 09:01:22.781174 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c4fafb80-f462-4000-a1c1-ba8405b342cc","Type":"ContainerStarted","Data":"ab021d4a12244ef0f2a575d22c7cffb3db4d072a1fcc15e592d5a0041d2bf65a"} Nov 28 09:01:22 crc kubenswrapper[4946]: I1128 09:01:22.785703 4946 generic.go:334] "Generic (PLEG): container finished" podID="25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6" containerID="781c3ea37cd70624c57185541a1ab8c04d203e29125ec47a3842a0071682ae09" exitCode=0 Nov 28 09:01:22 crc kubenswrapper[4946]: I1128 09:01:22.785721 4946 generic.go:334] "Generic (PLEG): container finished" podID="25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6" containerID="18b921299ada359b90576611785179204489eab026cb3360f237443c0cb34110" exitCode=2 Nov 28 09:01:22 crc kubenswrapper[4946]: I1128 09:01:22.785728 4946 generic.go:334] "Generic (PLEG): container finished" podID="25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6" containerID="18e8d280a89351278dd929520135dcbc610887430decfc9e6f338d658ecbd160" exitCode=0 Nov 28 09:01:22 crc kubenswrapper[4946]: I1128 09:01:22.785741 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6","Type":"ContainerDied","Data":"781c3ea37cd70624c57185541a1ab8c04d203e29125ec47a3842a0071682ae09"} Nov 28 09:01:22 crc kubenswrapper[4946]: I1128 09:01:22.785774 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6","Type":"ContainerDied","Data":"18b921299ada359b90576611785179204489eab026cb3360f237443c0cb34110"} Nov 28 09:01:22 crc kubenswrapper[4946]: I1128 09:01:22.785784 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6","Type":"ContainerDied","Data":"18e8d280a89351278dd929520135dcbc610887430decfc9e6f338d658ecbd160"} Nov 28 09:01:23 crc kubenswrapper[4946]: I1128 09:01:23.806667 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c4fafb80-f462-4000-a1c1-ba8405b342cc","Type":"ContainerStarted","Data":"98f25064eed87a41bc19c6e1841537c26d97829e12faedba8f8722b5f858d012"} Nov 28 09:01:25 crc kubenswrapper[4946]: I1128 09:01:25.832986 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c4fafb80-f462-4000-a1c1-ba8405b342cc","Type":"ContainerStarted","Data":"295a79fe444d8381e7106aab50da45bc580acfb92f336096d51b177145baef9d"} Nov 28 09:01:25 crc kubenswrapper[4946]: I1128 09:01:25.877002 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.27667786 podStartE2EDuration="5.876974666s" podCreationTimestamp="2025-11-28 09:01:20 +0000 UTC" firstStartedPulling="2025-11-28 09:01:21.002583435 +0000 UTC m=+7735.380648546" lastFinishedPulling="2025-11-28 09:01:24.602880241 +0000 UTC m=+7738.980945352" observedRunningTime="2025-11-28 09:01:25.865286876 +0000 UTC m=+7740.243352017" watchObservedRunningTime="2025-11-28 09:01:25.876974666 +0000 UTC m=+7740.255039807" Nov 28 09:01:27 crc kubenswrapper[4946]: I1128 09:01:27.034779 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-j2fl5"] Nov 28 09:01:27 crc kubenswrapper[4946]: I1128 09:01:27.047220 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-j2fl5"] Nov 28 09:01:28 crc kubenswrapper[4946]: I1128 09:01:28.003524 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47b7d54b-d04e-4c47-86dc-cc283143f3d7" path="/var/lib/kubelet/pods/47b7d54b-d04e-4c47-86dc-cc283143f3d7/volumes" Nov 28 09:01:30 crc kubenswrapper[4946]: I1128 09:01:30.338815 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-ztsg8"] Nov 28 09:01:30 crc kubenswrapper[4946]: I1128 09:01:30.340283 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-ztsg8" Nov 28 09:01:30 crc kubenswrapper[4946]: I1128 09:01:30.348626 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-ztsg8"] Nov 28 09:01:30 crc kubenswrapper[4946]: I1128 09:01:30.436772 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37faecbc-f2ad-4943-8252-d5a279e45d76-operator-scripts\") pod \"manila-db-create-ztsg8\" (UID: \"37faecbc-f2ad-4943-8252-d5a279e45d76\") " pod="openstack/manila-db-create-ztsg8" Nov 28 09:01:30 crc kubenswrapper[4946]: I1128 09:01:30.436849 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfg5g\" (UniqueName: \"kubernetes.io/projected/37faecbc-f2ad-4943-8252-d5a279e45d76-kube-api-access-pfg5g\") pod \"manila-db-create-ztsg8\" (UID: \"37faecbc-f2ad-4943-8252-d5a279e45d76\") " pod="openstack/manila-db-create-ztsg8" Nov 28 09:01:30 crc kubenswrapper[4946]: I1128 09:01:30.467661 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-8ab4-account-create-update-x766l"] Nov 28 09:01:30 crc kubenswrapper[4946]: I1128 09:01:30.469136 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-8ab4-account-create-update-x766l" Nov 28 09:01:30 crc kubenswrapper[4946]: I1128 09:01:30.471373 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret" Nov 28 09:01:30 crc kubenswrapper[4946]: I1128 09:01:30.476797 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-8ab4-account-create-update-x766l"] Nov 28 09:01:30 crc kubenswrapper[4946]: I1128 09:01:30.539699 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37faecbc-f2ad-4943-8252-d5a279e45d76-operator-scripts\") pod \"manila-db-create-ztsg8\" (UID: \"37faecbc-f2ad-4943-8252-d5a279e45d76\") " pod="openstack/manila-db-create-ztsg8" Nov 28 09:01:30 crc kubenswrapper[4946]: I1128 09:01:30.540079 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f226da0-54fe-4f7d-97cc-b32d781386d4-operator-scripts\") pod \"manila-8ab4-account-create-update-x766l\" (UID: \"8f226da0-54fe-4f7d-97cc-b32d781386d4\") " pod="openstack/manila-8ab4-account-create-update-x766l" Nov 28 09:01:30 crc kubenswrapper[4946]: I1128 09:01:30.540116 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfg5g\" (UniqueName: \"kubernetes.io/projected/37faecbc-f2ad-4943-8252-d5a279e45d76-kube-api-access-pfg5g\") pod \"manila-db-create-ztsg8\" (UID: \"37faecbc-f2ad-4943-8252-d5a279e45d76\") " pod="openstack/manila-db-create-ztsg8" Nov 28 09:01:30 crc kubenswrapper[4946]: I1128 09:01:30.540187 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4hh5\" (UniqueName: \"kubernetes.io/projected/8f226da0-54fe-4f7d-97cc-b32d781386d4-kube-api-access-f4hh5\") pod \"manila-8ab4-account-create-update-x766l\" (UID: \"8f226da0-54fe-4f7d-97cc-b32d781386d4\") " pod="openstack/manila-8ab4-account-create-update-x766l" Nov 28 09:01:30 crc kubenswrapper[4946]: I1128 09:01:30.541034 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/37faecbc-f2ad-4943-8252-d5a279e45d76-operator-scripts\") pod \"manila-db-create-ztsg8\" (UID: \"37faecbc-f2ad-4943-8252-d5a279e45d76\") " pod="openstack/manila-db-create-ztsg8" Nov 28 09:01:30 crc kubenswrapper[4946]: I1128 09:01:30.557753 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfg5g\" (UniqueName: \"kubernetes.io/projected/37faecbc-f2ad-4943-8252-d5a279e45d76-kube-api-access-pfg5g\") pod \"manila-db-create-ztsg8\" (UID: \"37faecbc-f2ad-4943-8252-d5a279e45d76\") " pod="openstack/manila-db-create-ztsg8" Nov 28 09:01:30 crc kubenswrapper[4946]: I1128 09:01:30.639732 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 09:01:30 crc kubenswrapper[4946]: I1128 09:01:30.641946 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f226da0-54fe-4f7d-97cc-b32d781386d4-operator-scripts\") pod \"manila-8ab4-account-create-update-x766l\" (UID: \"8f226da0-54fe-4f7d-97cc-b32d781386d4\") " pod="openstack/manila-8ab4-account-create-update-x766l" Nov 28 09:01:30 crc kubenswrapper[4946]: I1128 09:01:30.642030 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4hh5\" (UniqueName: \"kubernetes.io/projected/8f226da0-54fe-4f7d-97cc-b32d781386d4-kube-api-access-f4hh5\") pod \"manila-8ab4-account-create-update-x766l\" (UID: \"8f226da0-54fe-4f7d-97cc-b32d781386d4\") " pod="openstack/manila-8ab4-account-create-update-x766l" Nov 28 09:01:30 crc kubenswrapper[4946]: I1128 09:01:30.642986 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f226da0-54fe-4f7d-97cc-b32d781386d4-operator-scripts\") pod \"manila-8ab4-account-create-update-x766l\" (UID: \"8f226da0-54fe-4f7d-97cc-b32d781386d4\") " pod="openstack/manila-8ab4-account-create-update-x766l" Nov 28 09:01:30 crc kubenswrapper[4946]: I1128 09:01:30.656889 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4hh5\" (UniqueName: \"kubernetes.io/projected/8f226da0-54fe-4f7d-97cc-b32d781386d4-kube-api-access-f4hh5\") pod \"manila-8ab4-account-create-update-x766l\" (UID: \"8f226da0-54fe-4f7d-97cc-b32d781386d4\") " pod="openstack/manila-8ab4-account-create-update-x766l" Nov 28 09:01:30 crc kubenswrapper[4946]: I1128 09:01:30.667303 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-ztsg8" Nov 28 09:01:30 crc kubenswrapper[4946]: I1128 09:01:30.742753 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6-sg-core-conf-yaml\") pod \"25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6\" (UID: \"25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6\") " Nov 28 09:01:30 crc kubenswrapper[4946]: I1128 09:01:30.742796 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6-log-httpd\") pod \"25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6\" (UID: \"25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6\") " Nov 28 09:01:30 crc kubenswrapper[4946]: I1128 09:01:30.742819 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6-config-data\") pod \"25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6\" (UID: \"25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6\") " Nov 28 09:01:30 crc kubenswrapper[4946]: I1128 09:01:30.742840 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6-run-httpd\") pod \"25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6\" (UID: \"25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6\") " Nov 28 09:01:30 crc kubenswrapper[4946]: I1128 09:01:30.742890 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qq5g\" (UniqueName: \"kubernetes.io/projected/25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6-kube-api-access-8qq5g\") pod \"25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6\" (UID: \"25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6\") " Nov 28 09:01:30 crc kubenswrapper[4946]: I1128 09:01:30.743532 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6-scripts\") pod \"25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6\" (UID: \"25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6\") " Nov 28 09:01:30 crc kubenswrapper[4946]: I1128 09:01:30.743585 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6-combined-ca-bundle\") pod \"25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6\" (UID: \"25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6\") " Nov 28 09:01:30 crc kubenswrapper[4946]: I1128 09:01:30.752866 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6-kube-api-access-8qq5g" (OuterVolumeSpecName: "kube-api-access-8qq5g") pod "25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6" (UID: "25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6"). InnerVolumeSpecName "kube-api-access-8qq5g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:01:30 crc kubenswrapper[4946]: I1128 09:01:30.754735 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6" (UID: "25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 09:01:30 crc kubenswrapper[4946]: I1128 09:01:30.754854 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6" (UID: "25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 09:01:30 crc kubenswrapper[4946]: I1128 09:01:30.760327 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6-scripts" (OuterVolumeSpecName: "scripts") pod "25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6" (UID: "25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:01:30 crc kubenswrapper[4946]: I1128 09:01:30.781948 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6" (UID: "25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:01:30 crc kubenswrapper[4946]: I1128 09:01:30.790009 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-8ab4-account-create-update-x766l" Nov 28 09:01:30 crc kubenswrapper[4946]: I1128 09:01:30.820819 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6" (UID: "25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:01:30 crc kubenswrapper[4946]: I1128 09:01:30.845909 4946 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 28 09:01:30 crc kubenswrapper[4946]: I1128 09:01:30.845932 4946 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 09:01:30 crc kubenswrapper[4946]: I1128 09:01:30.845941 4946 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 09:01:30 crc kubenswrapper[4946]: I1128 09:01:30.845952 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qq5g\" (UniqueName: \"kubernetes.io/projected/25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6-kube-api-access-8qq5g\") on node \"crc\" DevicePath \"\"" Nov 28 09:01:30 crc kubenswrapper[4946]: I1128 09:01:30.845962 4946 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 09:01:30 crc kubenswrapper[4946]: I1128 09:01:30.845972 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 09:01:30 crc kubenswrapper[4946]: I1128 09:01:30.894753 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6-config-data" (OuterVolumeSpecName: "config-data") pod "25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6" (UID: "25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:01:30 crc kubenswrapper[4946]: I1128 09:01:30.945746 4946 generic.go:334] "Generic (PLEG): container finished" podID="25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6" containerID="923b854ec4ddfbc56ac35d113397714b9f285d953ee288c1e504428ddf9dd51b" exitCode=0 Nov 28 09:01:30 crc kubenswrapper[4946]: I1128 09:01:30.945784 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6","Type":"ContainerDied","Data":"923b854ec4ddfbc56ac35d113397714b9f285d953ee288c1e504428ddf9dd51b"} Nov 28 09:01:30 crc kubenswrapper[4946]: I1128 09:01:30.945811 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6","Type":"ContainerDied","Data":"b7c45ddea93129964269221997a6f0414604be46e3aa923b4403bc724cc15a47"} Nov 28 09:01:30 crc kubenswrapper[4946]: I1128 09:01:30.945828 4946 scope.go:117] "RemoveContainer" containerID="781c3ea37cd70624c57185541a1ab8c04d203e29125ec47a3842a0071682ae09" Nov 28 09:01:30 crc kubenswrapper[4946]: I1128 09:01:30.945939 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 09:01:30 crc kubenswrapper[4946]: I1128 09:01:30.952230 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 09:01:30 crc kubenswrapper[4946]: I1128 09:01:30.989089 4946 scope.go:117] "RemoveContainer" containerID="18b921299ada359b90576611785179204489eab026cb3360f237443c0cb34110" Nov 28 09:01:31 crc kubenswrapper[4946]: I1128 09:01:31.003325 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 09:01:31 crc kubenswrapper[4946]: I1128 09:01:31.018319 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 28 09:01:31 crc kubenswrapper[4946]: I1128 09:01:31.027465 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 09:01:31 crc kubenswrapper[4946]: E1128 09:01:31.028007 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6" containerName="ceilometer-central-agent" Nov 28 09:01:31 crc kubenswrapper[4946]: I1128 09:01:31.028023 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6" containerName="ceilometer-central-agent" Nov 28 09:01:31 crc kubenswrapper[4946]: E1128 09:01:31.028038 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6" containerName="sg-core" Nov 28 09:01:31 crc kubenswrapper[4946]: I1128 09:01:31.028046 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6" containerName="sg-core" Nov 28 09:01:31 crc kubenswrapper[4946]: E1128 09:01:31.028076 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6" containerName="ceilometer-notification-agent" Nov 28 09:01:31 crc kubenswrapper[4946]: I1128 09:01:31.028086 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6" containerName="ceilometer-notification-agent" Nov 28 09:01:31 crc kubenswrapper[4946]: E1128 09:01:31.028110 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6" containerName="proxy-httpd" Nov 28 09:01:31 crc kubenswrapper[4946]: I1128 09:01:31.028118 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6" containerName="proxy-httpd" Nov 28 09:01:31 crc kubenswrapper[4946]: I1128 09:01:31.028360 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6" containerName="ceilometer-notification-agent" Nov 28 09:01:31 crc kubenswrapper[4946]: I1128 09:01:31.028382 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6" containerName="ceilometer-central-agent" Nov 28 09:01:31 crc kubenswrapper[4946]: I1128 09:01:31.028402 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6" containerName="proxy-httpd" Nov 28 09:01:31 crc kubenswrapper[4946]: I1128 09:01:31.028420 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6" containerName="sg-core" Nov 28 09:01:31 crc kubenswrapper[4946]: I1128 09:01:31.030722 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 09:01:31 crc kubenswrapper[4946]: I1128 09:01:31.033246 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 09:01:31 crc kubenswrapper[4946]: I1128 09:01:31.033425 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 09:01:31 crc kubenswrapper[4946]: I1128 09:01:31.046149 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 09:01:31 crc kubenswrapper[4946]: I1128 09:01:31.055597 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0061281-50a6-4f9b-ad83-3043544dc56b-log-httpd\") pod \"ceilometer-0\" (UID: \"d0061281-50a6-4f9b-ad83-3043544dc56b\") " pod="openstack/ceilometer-0" Nov 28 09:01:31 crc kubenswrapper[4946]: I1128 09:01:31.055848 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7ll5\" (UniqueName: \"kubernetes.io/projected/d0061281-50a6-4f9b-ad83-3043544dc56b-kube-api-access-h7ll5\") pod \"ceilometer-0\" (UID: \"d0061281-50a6-4f9b-ad83-3043544dc56b\") " pod="openstack/ceilometer-0" Nov 28 09:01:31 crc kubenswrapper[4946]: I1128 09:01:31.055877 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0061281-50a6-4f9b-ad83-3043544dc56b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d0061281-50a6-4f9b-ad83-3043544dc56b\") " pod="openstack/ceilometer-0" Nov 28 09:01:31 crc kubenswrapper[4946]: I1128 09:01:31.055898 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0061281-50a6-4f9b-ad83-3043544dc56b-run-httpd\") pod \"ceilometer-0\" (UID: \"d0061281-50a6-4f9b-ad83-3043544dc56b\") " pod="openstack/ceilometer-0" Nov 28 09:01:31 crc kubenswrapper[4946]: I1128 09:01:31.055919 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d0061281-50a6-4f9b-ad83-3043544dc56b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d0061281-50a6-4f9b-ad83-3043544dc56b\") " pod="openstack/ceilometer-0" Nov 28 09:01:31 crc kubenswrapper[4946]: I1128 09:01:31.055978 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0061281-50a6-4f9b-ad83-3043544dc56b-scripts\") pod \"ceilometer-0\" (UID: \"d0061281-50a6-4f9b-ad83-3043544dc56b\") " pod="openstack/ceilometer-0" Nov 28 09:01:31 crc kubenswrapper[4946]: I1128 09:01:31.056012 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0061281-50a6-4f9b-ad83-3043544dc56b-config-data\") pod \"ceilometer-0\" (UID: \"d0061281-50a6-4f9b-ad83-3043544dc56b\") " pod="openstack/ceilometer-0" Nov 28 09:01:31 crc kubenswrapper[4946]: I1128 09:01:31.060632 4946 scope.go:117] "RemoveContainer" containerID="923b854ec4ddfbc56ac35d113397714b9f285d953ee288c1e504428ddf9dd51b" Nov 28 09:01:31 crc kubenswrapper[4946]: I1128 09:01:31.086653 4946 scope.go:117] "RemoveContainer" containerID="18e8d280a89351278dd929520135dcbc610887430decfc9e6f338d658ecbd160" Nov 28 09:01:31 crc kubenswrapper[4946]: I1128 
Nov 28 09:01:31 crc kubenswrapper[4946]: W1128 09:01:31.134147 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37faecbc_f2ad_4943_8252_d5a279e45d76.slice/crio-c1f5b9e373bd38f37b0e0c37ef0d25bac6a4e0e893ad1487fcbb525530b42098 WatchSource:0}: Error finding container c1f5b9e373bd38f37b0e0c37ef0d25bac6a4e0e893ad1487fcbb525530b42098: Status 404 returned error can't find the container with id c1f5b9e373bd38f37b0e0c37ef0d25bac6a4e0e893ad1487fcbb525530b42098
Nov 28 09:01:31 crc kubenswrapper[4946]: I1128 09:01:31.157282 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0061281-50a6-4f9b-ad83-3043544dc56b-scripts\") pod \"ceilometer-0\" (UID: \"d0061281-50a6-4f9b-ad83-3043544dc56b\") " pod="openstack/ceilometer-0"
Nov 28 09:01:31 crc kubenswrapper[4946]: I1128 09:01:31.157327 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0061281-50a6-4f9b-ad83-3043544dc56b-config-data\") pod \"ceilometer-0\" (UID: \"d0061281-50a6-4f9b-ad83-3043544dc56b\") " pod="openstack/ceilometer-0"
Nov 28 09:01:31 crc kubenswrapper[4946]: I1128 09:01:31.157412 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0061281-50a6-4f9b-ad83-3043544dc56b-log-httpd\") pod \"ceilometer-0\" (UID: \"d0061281-50a6-4f9b-ad83-3043544dc56b\") " pod="openstack/ceilometer-0"
Nov 28 09:01:31 crc kubenswrapper[4946]: I1128 09:01:31.157442 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7ll5\" (UniqueName: \"kubernetes.io/projected/d0061281-50a6-4f9b-ad83-3043544dc56b-kube-api-access-h7ll5\") pod \"ceilometer-0\" (UID: \"d0061281-50a6-4f9b-ad83-3043544dc56b\") " pod="openstack/ceilometer-0"
Nov 28 09:01:31 crc kubenswrapper[4946]: I1128 09:01:31.157465 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0061281-50a6-4f9b-ad83-3043544dc56b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d0061281-50a6-4f9b-ad83-3043544dc56b\") " pod="openstack/ceilometer-0"
Nov 28 09:01:31 crc kubenswrapper[4946]: I1128 09:01:31.157495 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0061281-50a6-4f9b-ad83-3043544dc56b-run-httpd\") pod \"ceilometer-0\" (UID: \"d0061281-50a6-4f9b-ad83-3043544dc56b\") " pod="openstack/ceilometer-0"
Nov 28 09:01:31 crc kubenswrapper[4946]: I1128 09:01:31.157516 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d0061281-50a6-4f9b-ad83-3043544dc56b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d0061281-50a6-4f9b-ad83-3043544dc56b\") " pod="openstack/ceilometer-0"
Nov 28 09:01:31 crc kubenswrapper[4946]: I1128 09:01:31.161788 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0061281-50a6-4f9b-ad83-3043544dc56b-log-httpd\") pod \"ceilometer-0\" (UID: \"d0061281-50a6-4f9b-ad83-3043544dc56b\") " pod="openstack/ceilometer-0"
Nov 28 09:01:31 crc kubenswrapper[4946]: I1128 09:01:31.184948 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d0061281-50a6-4f9b-ad83-3043544dc56b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d0061281-50a6-4f9b-ad83-3043544dc56b\") " pod="openstack/ceilometer-0"
Nov 28 09:01:31 crc kubenswrapper[4946]: I1128 09:01:31.185172 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0061281-50a6-4f9b-ad83-3043544dc56b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d0061281-50a6-4f9b-ad83-3043544dc56b\") " pod="openstack/ceilometer-0"
Nov 28 09:01:31 crc kubenswrapper[4946]: I1128 09:01:31.185535 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0061281-50a6-4f9b-ad83-3043544dc56b-run-httpd\") pod \"ceilometer-0\" (UID: \"d0061281-50a6-4f9b-ad83-3043544dc56b\") " pod="openstack/ceilometer-0"
Nov 28 09:01:31 crc kubenswrapper[4946]: I1128 09:01:31.186136 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0061281-50a6-4f9b-ad83-3043544dc56b-scripts\") pod \"ceilometer-0\" (UID: \"d0061281-50a6-4f9b-ad83-3043544dc56b\") " pod="openstack/ceilometer-0"
Nov 28 09:01:31 crc kubenswrapper[4946]: I1128 09:01:31.198498 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0061281-50a6-4f9b-ad83-3043544dc56b-config-data\") pod \"ceilometer-0\" (UID: \"d0061281-50a6-4f9b-ad83-3043544dc56b\") " pod="openstack/ceilometer-0"
Nov 28 09:01:31 crc kubenswrapper[4946]: I1128 09:01:31.206206 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7ll5\" (UniqueName: \"kubernetes.io/projected/d0061281-50a6-4f9b-ad83-3043544dc56b-kube-api-access-h7ll5\") pod \"ceilometer-0\" (UID: \"d0061281-50a6-4f9b-ad83-3043544dc56b\") " pod="openstack/ceilometer-0"
Nov 28 09:01:31 crc kubenswrapper[4946]: I1128 09:01:31.304622 4946 scope.go:117] "RemoveContainer" containerID="781c3ea37cd70624c57185541a1ab8c04d203e29125ec47a3842a0071682ae09"
Nov 28 09:01:31 crc kubenswrapper[4946]: E1128 09:01:31.313610 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"781c3ea37cd70624c57185541a1ab8c04d203e29125ec47a3842a0071682ae09\": container with ID starting with 781c3ea37cd70624c57185541a1ab8c04d203e29125ec47a3842a0071682ae09 not found: ID does not exist" containerID="781c3ea37cd70624c57185541a1ab8c04d203e29125ec47a3842a0071682ae09"
Nov 28 09:01:31 crc kubenswrapper[4946]: I1128 09:01:31.313655 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"781c3ea37cd70624c57185541a1ab8c04d203e29125ec47a3842a0071682ae09"} err="failed to get container status \"781c3ea37cd70624c57185541a1ab8c04d203e29125ec47a3842a0071682ae09\": rpc error: code = NotFound desc = could not find container \"781c3ea37cd70624c57185541a1ab8c04d203e29125ec47a3842a0071682ae09\": container with ID starting with 781c3ea37cd70624c57185541a1ab8c04d203e29125ec47a3842a0071682ae09 not found: ID does not exist"
Nov 28 09:01:31 crc kubenswrapper[4946]: I1128 09:01:31.313678 4946 scope.go:117] "RemoveContainer" containerID="18b921299ada359b90576611785179204489eab026cb3360f237443c0cb34110"
Nov 28 09:01:31 crc kubenswrapper[4946]: E1128 09:01:31.318155 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18b921299ada359b90576611785179204489eab026cb3360f237443c0cb34110\": container with ID starting with 18b921299ada359b90576611785179204489eab026cb3360f237443c0cb34110 not found: ID does not exist" containerID="18b921299ada359b90576611785179204489eab026cb3360f237443c0cb34110"
err="rpc error: code = NotFound desc = could not find container \"18b921299ada359b90576611785179204489eab026cb3360f237443c0cb34110\": container with ID starting with 18b921299ada359b90576611785179204489eab026cb3360f237443c0cb34110 not found: ID does not exist" containerID="18b921299ada359b90576611785179204489eab026cb3360f237443c0cb34110" Nov 28 09:01:31 crc kubenswrapper[4946]: I1128 09:01:31.318186 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18b921299ada359b90576611785179204489eab026cb3360f237443c0cb34110"} err="failed to get container status \"18b921299ada359b90576611785179204489eab026cb3360f237443c0cb34110\": rpc error: code = NotFound desc = could not find container \"18b921299ada359b90576611785179204489eab026cb3360f237443c0cb34110\": container with ID starting with 18b921299ada359b90576611785179204489eab026cb3360f237443c0cb34110 not found: ID does not exist" Nov 28 09:01:31 crc kubenswrapper[4946]: I1128 09:01:31.318203 4946 scope.go:117] "RemoveContainer" containerID="923b854ec4ddfbc56ac35d113397714b9f285d953ee288c1e504428ddf9dd51b" Nov 28 09:01:31 crc kubenswrapper[4946]: E1128 09:01:31.321545 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"923b854ec4ddfbc56ac35d113397714b9f285d953ee288c1e504428ddf9dd51b\": container with ID starting with 923b854ec4ddfbc56ac35d113397714b9f285d953ee288c1e504428ddf9dd51b not found: ID does not exist" containerID="923b854ec4ddfbc56ac35d113397714b9f285d953ee288c1e504428ddf9dd51b" Nov 28 09:01:31 crc kubenswrapper[4946]: I1128 09:01:31.321569 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"923b854ec4ddfbc56ac35d113397714b9f285d953ee288c1e504428ddf9dd51b"} err="failed to get container status \"923b854ec4ddfbc56ac35d113397714b9f285d953ee288c1e504428ddf9dd51b\": rpc error: code = NotFound desc = could not find container \"923b854ec4ddfbc56ac35d113397714b9f285d953ee288c1e504428ddf9dd51b\": container with ID starting with 923b854ec4ddfbc56ac35d113397714b9f285d953ee288c1e504428ddf9dd51b not found: ID does not exist" Nov 28 09:01:31 crc kubenswrapper[4946]: I1128 09:01:31.321585 4946 scope.go:117] "RemoveContainer" containerID="18e8d280a89351278dd929520135dcbc610887430decfc9e6f338d658ecbd160" Nov 28 09:01:31 crc kubenswrapper[4946]: E1128 09:01:31.325544 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18e8d280a89351278dd929520135dcbc610887430decfc9e6f338d658ecbd160\": container with ID starting with 18e8d280a89351278dd929520135dcbc610887430decfc9e6f338d658ecbd160 not found: ID does not exist" containerID="18e8d280a89351278dd929520135dcbc610887430decfc9e6f338d658ecbd160" Nov 28 09:01:31 crc kubenswrapper[4946]: I1128 09:01:31.325604 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18e8d280a89351278dd929520135dcbc610887430decfc9e6f338d658ecbd160"} err="failed to get container status \"18e8d280a89351278dd929520135dcbc610887430decfc9e6f338d658ecbd160\": rpc error: code = NotFound desc = could not find container \"18e8d280a89351278dd929520135dcbc610887430decfc9e6f338d658ecbd160\": container with ID starting with 18e8d280a89351278dd929520135dcbc610887430decfc9e6f338d658ecbd160 not found: ID does not exist" Nov 28 09:01:31 crc kubenswrapper[4946]: I1128 09:01:31.351966 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 09:01:31 crc kubenswrapper[4946]: I1128 09:01:31.384514 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-8ab4-account-create-update-x766l"] Nov 28 09:01:31 crc kubenswrapper[4946]: W1128 09:01:31.888026 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0061281_50a6_4f9b_ad83_3043544dc56b.slice/crio-1a06b202817b01a24c76417136988fec2429c18ee2a6d200f9387fc72260a9f2 WatchSource:0}: Error finding container 1a06b202817b01a24c76417136988fec2429c18ee2a6d200f9387fc72260a9f2: Status 404 returned error can't find the container with id 1a06b202817b01a24c76417136988fec2429c18ee2a6d200f9387fc72260a9f2 Nov 28 09:01:31 crc kubenswrapper[4946]: I1128 09:01:31.888171 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 09:01:31 crc kubenswrapper[4946]: I1128 09:01:31.956147 4946 generic.go:334] "Generic (PLEG): container finished" podID="8f226da0-54fe-4f7d-97cc-b32d781386d4" containerID="5b2e45c1bebd6474dea14cae50eb9c6f99b9db78356859a073260ac8e7c0e137" exitCode=0 Nov 28 09:01:31 crc kubenswrapper[4946]: I1128 09:01:31.956200 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-8ab4-account-create-update-x766l" event={"ID":"8f226da0-54fe-4f7d-97cc-b32d781386d4","Type":"ContainerDied","Data":"5b2e45c1bebd6474dea14cae50eb9c6f99b9db78356859a073260ac8e7c0e137"} Nov 28 09:01:31 crc kubenswrapper[4946]: I1128 09:01:31.956256 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-8ab4-account-create-update-x766l" event={"ID":"8f226da0-54fe-4f7d-97cc-b32d781386d4","Type":"ContainerStarted","Data":"9e9fd14dba65feb590d6e19fd94ced15e708d1bf37fbd8319d390922bfe7a863"} Nov 28 09:01:31 crc kubenswrapper[4946]: I1128 09:01:31.959899 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0061281-50a6-4f9b-ad83-3043544dc56b","Type":"ContainerStarted","Data":"1a06b202817b01a24c76417136988fec2429c18ee2a6d200f9387fc72260a9f2"} Nov 28 09:01:31 crc kubenswrapper[4946]: I1128 09:01:31.961935 4946 generic.go:334] "Generic (PLEG): container finished" podID="37faecbc-f2ad-4943-8252-d5a279e45d76" containerID="67d0d2442fd292e5b0af49a19acd55838757defb575601893b48c0f893ddffe9" exitCode=0 Nov 28 09:01:31 crc kubenswrapper[4946]: I1128 09:01:31.961978 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-ztsg8" event={"ID":"37faecbc-f2ad-4943-8252-d5a279e45d76","Type":"ContainerDied","Data":"67d0d2442fd292e5b0af49a19acd55838757defb575601893b48c0f893ddffe9"} Nov 28 09:01:31 crc kubenswrapper[4946]: I1128 09:01:31.962005 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-ztsg8" event={"ID":"37faecbc-f2ad-4943-8252-d5a279e45d76","Type":"ContainerStarted","Data":"c1f5b9e373bd38f37b0e0c37ef0d25bac6a4e0e893ad1487fcbb525530b42098"} Nov 28 09:01:32 crc kubenswrapper[4946]: I1128 09:01:32.004122 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6" path="/var/lib/kubelet/pods/25b7a04b-18d3-41f1-b918-5b2fc6aaeaa6/volumes" Nov 28 09:01:32 crc kubenswrapper[4946]: I1128 09:01:32.976646 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0061281-50a6-4f9b-ad83-3043544dc56b","Type":"ContainerStarted","Data":"c816ef4f6b0abc8f8dee8a68118275e1ed346c57429f1d23efbd7aa21bf99a73"} Nov 28 
Nov 28 09:01:33 crc kubenswrapper[4946]: I1128 09:01:33.469700 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-ztsg8"
Nov 28 09:01:33 crc kubenswrapper[4946]: I1128 09:01:33.483728 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-8ab4-account-create-update-x766l"
Nov 28 09:01:33 crc kubenswrapper[4946]: I1128 09:01:33.508324 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfg5g\" (UniqueName: \"kubernetes.io/projected/37faecbc-f2ad-4943-8252-d5a279e45d76-kube-api-access-pfg5g\") pod \"37faecbc-f2ad-4943-8252-d5a279e45d76\" (UID: \"37faecbc-f2ad-4943-8252-d5a279e45d76\") "
Nov 28 09:01:33 crc kubenswrapper[4946]: I1128 09:01:33.508383 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37faecbc-f2ad-4943-8252-d5a279e45d76-operator-scripts\") pod \"37faecbc-f2ad-4943-8252-d5a279e45d76\" (UID: \"37faecbc-f2ad-4943-8252-d5a279e45d76\") "
Nov 28 09:01:33 crc kubenswrapper[4946]: I1128 09:01:33.508582 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4hh5\" (UniqueName: \"kubernetes.io/projected/8f226da0-54fe-4f7d-97cc-b32d781386d4-kube-api-access-f4hh5\") pod \"8f226da0-54fe-4f7d-97cc-b32d781386d4\" (UID: \"8f226da0-54fe-4f7d-97cc-b32d781386d4\") "
Nov 28 09:01:33 crc kubenswrapper[4946]: I1128 09:01:33.508658 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f226da0-54fe-4f7d-97cc-b32d781386d4-operator-scripts\") pod \"8f226da0-54fe-4f7d-97cc-b32d781386d4\" (UID: \"8f226da0-54fe-4f7d-97cc-b32d781386d4\") "
Nov 28 09:01:33 crc kubenswrapper[4946]: I1128 09:01:33.509105 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37faecbc-f2ad-4943-8252-d5a279e45d76-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "37faecbc-f2ad-4943-8252-d5a279e45d76" (UID: "37faecbc-f2ad-4943-8252-d5a279e45d76"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 09:01:33 crc kubenswrapper[4946]: I1128 09:01:33.513453 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f226da0-54fe-4f7d-97cc-b32d781386d4-kube-api-access-f4hh5" (OuterVolumeSpecName: "kube-api-access-f4hh5") pod "8f226da0-54fe-4f7d-97cc-b32d781386d4" (UID: "8f226da0-54fe-4f7d-97cc-b32d781386d4"). InnerVolumeSpecName "kube-api-access-f4hh5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 09:01:33 crc kubenswrapper[4946]: I1128 09:01:33.516231 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37faecbc-f2ad-4943-8252-d5a279e45d76-kube-api-access-pfg5g" (OuterVolumeSpecName: "kube-api-access-pfg5g") pod "37faecbc-f2ad-4943-8252-d5a279e45d76" (UID: "37faecbc-f2ad-4943-8252-d5a279e45d76"). InnerVolumeSpecName "kube-api-access-pfg5g". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:01:33 crc kubenswrapper[4946]: I1128 09:01:33.516665 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f226da0-54fe-4f7d-97cc-b32d781386d4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8f226da0-54fe-4f7d-97cc-b32d781386d4" (UID: "8f226da0-54fe-4f7d-97cc-b32d781386d4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 09:01:33 crc kubenswrapper[4946]: I1128 09:01:33.611193 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfg5g\" (UniqueName: \"kubernetes.io/projected/37faecbc-f2ad-4943-8252-d5a279e45d76-kube-api-access-pfg5g\") on node \"crc\" DevicePath \"\"" Nov 28 09:01:33 crc kubenswrapper[4946]: I1128 09:01:33.611224 4946 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37faecbc-f2ad-4943-8252-d5a279e45d76-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 09:01:33 crc kubenswrapper[4946]: I1128 09:01:33.611233 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4hh5\" (UniqueName: \"kubernetes.io/projected/8f226da0-54fe-4f7d-97cc-b32d781386d4-kube-api-access-f4hh5\") on node \"crc\" DevicePath \"\"" Nov 28 09:01:33 crc kubenswrapper[4946]: I1128 09:01:33.611242 4946 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f226da0-54fe-4f7d-97cc-b32d781386d4-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 09:01:33 crc kubenswrapper[4946]: I1128 09:01:33.988903 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-ztsg8" Nov 28 09:01:33 crc kubenswrapper[4946]: I1128 09:01:33.992231 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-8ab4-account-create-update-x766l" Nov 28 09:01:34 crc kubenswrapper[4946]: I1128 09:01:34.005336 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-ztsg8" event={"ID":"37faecbc-f2ad-4943-8252-d5a279e45d76","Type":"ContainerDied","Data":"c1f5b9e373bd38f37b0e0c37ef0d25bac6a4e0e893ad1487fcbb525530b42098"} Nov 28 09:01:34 crc kubenswrapper[4946]: I1128 09:01:34.005382 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1f5b9e373bd38f37b0e0c37ef0d25bac6a4e0e893ad1487fcbb525530b42098" Nov 28 09:01:34 crc kubenswrapper[4946]: I1128 09:01:34.005397 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-8ab4-account-create-update-x766l" event={"ID":"8f226da0-54fe-4f7d-97cc-b32d781386d4","Type":"ContainerDied","Data":"9e9fd14dba65feb590d6e19fd94ced15e708d1bf37fbd8319d390922bfe7a863"} Nov 28 09:01:34 crc kubenswrapper[4946]: I1128 09:01:34.005408 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e9fd14dba65feb590d6e19fd94ced15e708d1bf37fbd8319d390922bfe7a863" Nov 28 09:01:34 crc kubenswrapper[4946]: I1128 09:01:34.005419 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0061281-50a6-4f9b-ad83-3043544dc56b","Type":"ContainerStarted","Data":"122e6c5622a7a34ef8cf0ebb001240116f79a69ab0b666898274529627413e6d"} Nov 28 09:01:35 crc kubenswrapper[4946]: I1128 09:01:35.010641 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0061281-50a6-4f9b-ad83-3043544dc56b","Type":"ContainerStarted","Data":"7f25abe1cb9a796c102163c0e570819bf8750b345f688c59e01316df4f868393"} Nov 28 09:01:35 crc kubenswrapper[4946]: I1128 09:01:35.011005 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 28 09:01:35 crc kubenswrapper[4946]: I1128 09:01:35.037596 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.193251767 podStartE2EDuration="5.037568899s" podCreationTimestamp="2025-11-28 09:01:30 +0000 UTC" firstStartedPulling="2025-11-28 09:01:31.891598166 +0000 UTC m=+7746.269663297" lastFinishedPulling="2025-11-28 09:01:34.735915278 +0000 UTC m=+7749.113980429" observedRunningTime="2025-11-28 09:01:35.03195948 +0000 UTC m=+7749.410024591" watchObservedRunningTime="2025-11-28 09:01:35.037568899 +0000 UTC m=+7749.415634040" Nov 28 09:01:35 crc kubenswrapper[4946]: I1128 09:01:35.729193 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-t686d"] Nov 28 09:01:35 crc kubenswrapper[4946]: E1128 09:01:35.730123 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f226da0-54fe-4f7d-97cc-b32d781386d4" containerName="mariadb-account-create-update" Nov 28 09:01:35 crc kubenswrapper[4946]: I1128 09:01:35.730185 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f226da0-54fe-4f7d-97cc-b32d781386d4" containerName="mariadb-account-create-update" Nov 28 09:01:35 crc kubenswrapper[4946]: E1128 09:01:35.730268 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37faecbc-f2ad-4943-8252-d5a279e45d76" containerName="mariadb-database-create" Nov 28 09:01:35 crc kubenswrapper[4946]: I1128 09:01:35.730318 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="37faecbc-f2ad-4943-8252-d5a279e45d76" containerName="mariadb-database-create" Nov 28 09:01:35 crc kubenswrapper[4946]: I1128 
Nov 28 09:01:35 crc kubenswrapper[4946]: I1128 09:01:35.730671 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="37faecbc-f2ad-4943-8252-d5a279e45d76" containerName="mariadb-database-create"
Nov 28 09:01:35 crc kubenswrapper[4946]: I1128 09:01:35.731354 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-t686d"
Nov 28 09:01:35 crc kubenswrapper[4946]: I1128 09:01:35.734700 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data"
Nov 28 09:01:35 crc kubenswrapper[4946]: I1128 09:01:35.734754 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-stm6d"
Nov 28 09:01:35 crc kubenswrapper[4946]: I1128 09:01:35.743607 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-t686d"]
Nov 28 09:01:35 crc kubenswrapper[4946]: I1128 09:01:35.852970 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlm64\" (UniqueName: \"kubernetes.io/projected/e21e1758-0210-4e49-8266-c27f5c919379-kube-api-access-dlm64\") pod \"manila-db-sync-t686d\" (UID: \"e21e1758-0210-4e49-8266-c27f5c919379\") " pod="openstack/manila-db-sync-t686d"
Nov 28 09:01:35 crc kubenswrapper[4946]: I1128 09:01:35.853048 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e21e1758-0210-4e49-8266-c27f5c919379-combined-ca-bundle\") pod \"manila-db-sync-t686d\" (UID: \"e21e1758-0210-4e49-8266-c27f5c919379\") " pod="openstack/manila-db-sync-t686d"
Nov 28 09:01:35 crc kubenswrapper[4946]: I1128 09:01:35.853111 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/e21e1758-0210-4e49-8266-c27f5c919379-job-config-data\") pod \"manila-db-sync-t686d\" (UID: \"e21e1758-0210-4e49-8266-c27f5c919379\") " pod="openstack/manila-db-sync-t686d"
Nov 28 09:01:35 crc kubenswrapper[4946]: I1128 09:01:35.853272 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e21e1758-0210-4e49-8266-c27f5c919379-config-data\") pod \"manila-db-sync-t686d\" (UID: \"e21e1758-0210-4e49-8266-c27f5c919379\") " pod="openstack/manila-db-sync-t686d"
Nov 28 09:01:35 crc kubenswrapper[4946]: I1128 09:01:35.955601 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlm64\" (UniqueName: \"kubernetes.io/projected/e21e1758-0210-4e49-8266-c27f5c919379-kube-api-access-dlm64\") pod \"manila-db-sync-t686d\" (UID: \"e21e1758-0210-4e49-8266-c27f5c919379\") " pod="openstack/manila-db-sync-t686d"
Nov 28 09:01:35 crc kubenswrapper[4946]: I1128 09:01:35.955670 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e21e1758-0210-4e49-8266-c27f5c919379-combined-ca-bundle\") pod \"manila-db-sync-t686d\" (UID: \"e21e1758-0210-4e49-8266-c27f5c919379\") " pod="openstack/manila-db-sync-t686d"
Nov 28 09:01:35 crc kubenswrapper[4946]: I1128 09:01:35.955724 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/e21e1758-0210-4e49-8266-c27f5c919379-job-config-data\") pod \"manila-db-sync-t686d\" (UID: \"e21e1758-0210-4e49-8266-c27f5c919379\") " pod="openstack/manila-db-sync-t686d"
\"job-config-data\" (UniqueName: \"kubernetes.io/secret/e21e1758-0210-4e49-8266-c27f5c919379-job-config-data\") pod \"manila-db-sync-t686d\" (UID: \"e21e1758-0210-4e49-8266-c27f5c919379\") " pod="openstack/manila-db-sync-t686d" Nov 28 09:01:35 crc kubenswrapper[4946]: I1128 09:01:35.955885 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e21e1758-0210-4e49-8266-c27f5c919379-config-data\") pod \"manila-db-sync-t686d\" (UID: \"e21e1758-0210-4e49-8266-c27f5c919379\") " pod="openstack/manila-db-sync-t686d" Nov 28 09:01:35 crc kubenswrapper[4946]: I1128 09:01:35.963072 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/e21e1758-0210-4e49-8266-c27f5c919379-job-config-data\") pod \"manila-db-sync-t686d\" (UID: \"e21e1758-0210-4e49-8266-c27f5c919379\") " pod="openstack/manila-db-sync-t686d" Nov 28 09:01:35 crc kubenswrapper[4946]: I1128 09:01:35.974402 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e21e1758-0210-4e49-8266-c27f5c919379-combined-ca-bundle\") pod \"manila-db-sync-t686d\" (UID: \"e21e1758-0210-4e49-8266-c27f5c919379\") " pod="openstack/manila-db-sync-t686d" Nov 28 09:01:35 crc kubenswrapper[4946]: I1128 09:01:35.979083 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlm64\" (UniqueName: \"kubernetes.io/projected/e21e1758-0210-4e49-8266-c27f5c919379-kube-api-access-dlm64\") pod \"manila-db-sync-t686d\" (UID: \"e21e1758-0210-4e49-8266-c27f5c919379\") " pod="openstack/manila-db-sync-t686d" Nov 28 09:01:35 crc kubenswrapper[4946]: I1128 09:01:35.979619 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e21e1758-0210-4e49-8266-c27f5c919379-config-data\") pod \"manila-db-sync-t686d\" (UID: \"e21e1758-0210-4e49-8266-c27f5c919379\") " pod="openstack/manila-db-sync-t686d" Nov 28 09:01:36 crc kubenswrapper[4946]: I1128 09:01:36.055323 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-t686d" Nov 28 09:01:36 crc kubenswrapper[4946]: I1128 09:01:36.740161 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-t686d"] Nov 28 09:01:36 crc kubenswrapper[4946]: W1128 09:01:36.750796 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode21e1758_0210_4e49_8266_c27f5c919379.slice/crio-bee949399fe6d92a136c1c9aae36dd9eae818375e8ae24cd8a09c3dd19556988 WatchSource:0}: Error finding container bee949399fe6d92a136c1c9aae36dd9eae818375e8ae24cd8a09c3dd19556988: Status 404 returned error can't find the container with id bee949399fe6d92a136c1c9aae36dd9eae818375e8ae24cd8a09c3dd19556988 Nov 28 09:01:37 crc kubenswrapper[4946]: I1128 09:01:37.037656 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-t686d" event={"ID":"e21e1758-0210-4e49-8266-c27f5c919379","Type":"ContainerStarted","Data":"bee949399fe6d92a136c1c9aae36dd9eae818375e8ae24cd8a09c3dd19556988"} Nov 28 09:01:43 crc kubenswrapper[4946]: I1128 09:01:43.133290 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-t686d" event={"ID":"e21e1758-0210-4e49-8266-c27f5c919379","Type":"ContainerStarted","Data":"6db333d218206bf727f4e24bd695017b567df324f32c092892cfda52bd85cf51"} Nov 28 09:01:43 crc kubenswrapper[4946]: I1128 09:01:43.155129 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-t686d" podStartSLOduration=2.948078852 podStartE2EDuration="8.155112181s" podCreationTimestamp="2025-11-28 09:01:35 +0000 UTC" firstStartedPulling="2025-11-28 09:01:36.753152128 +0000 UTC m=+7751.131217239" lastFinishedPulling="2025-11-28 09:01:41.960185457 +0000 UTC m=+7756.338250568" observedRunningTime="2025-11-28 09:01:43.153207734 +0000 UTC m=+7757.531272845" watchObservedRunningTime="2025-11-28 09:01:43.155112181 +0000 UTC m=+7757.533177292" Nov 28 09:01:45 crc kubenswrapper[4946]: I1128 09:01:45.158325 4946 generic.go:334] "Generic (PLEG): container finished" podID="e21e1758-0210-4e49-8266-c27f5c919379" containerID="6db333d218206bf727f4e24bd695017b567df324f32c092892cfda52bd85cf51" exitCode=0 Nov 28 09:01:45 crc kubenswrapper[4946]: I1128 09:01:45.158809 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-t686d" event={"ID":"e21e1758-0210-4e49-8266-c27f5c919379","Type":"ContainerDied","Data":"6db333d218206bf727f4e24bd695017b567df324f32c092892cfda52bd85cf51"} Nov 28 09:01:46 crc kubenswrapper[4946]: I1128 09:01:46.710156 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-t686d" Nov 28 09:01:46 crc kubenswrapper[4946]: I1128 09:01:46.792140 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e21e1758-0210-4e49-8266-c27f5c919379-combined-ca-bundle\") pod \"e21e1758-0210-4e49-8266-c27f5c919379\" (UID: \"e21e1758-0210-4e49-8266-c27f5c919379\") " Nov 28 09:01:46 crc kubenswrapper[4946]: I1128 09:01:46.792273 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/e21e1758-0210-4e49-8266-c27f5c919379-job-config-data\") pod \"e21e1758-0210-4e49-8266-c27f5c919379\" (UID: \"e21e1758-0210-4e49-8266-c27f5c919379\") " Nov 28 09:01:46 crc kubenswrapper[4946]: I1128 09:01:46.792302 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e21e1758-0210-4e49-8266-c27f5c919379-config-data\") pod \"e21e1758-0210-4e49-8266-c27f5c919379\" (UID: \"e21e1758-0210-4e49-8266-c27f5c919379\") " Nov 28 09:01:46 crc kubenswrapper[4946]: I1128 09:01:46.792460 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlm64\" (UniqueName: \"kubernetes.io/projected/e21e1758-0210-4e49-8266-c27f5c919379-kube-api-access-dlm64\") pod \"e21e1758-0210-4e49-8266-c27f5c919379\" (UID: \"e21e1758-0210-4e49-8266-c27f5c919379\") " Nov 28 09:01:46 crc kubenswrapper[4946]: I1128 09:01:46.797730 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e21e1758-0210-4e49-8266-c27f5c919379-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "e21e1758-0210-4e49-8266-c27f5c919379" (UID: "e21e1758-0210-4e49-8266-c27f5c919379"). InnerVolumeSpecName "job-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:01:46 crc kubenswrapper[4946]: I1128 09:01:46.797839 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e21e1758-0210-4e49-8266-c27f5c919379-kube-api-access-dlm64" (OuterVolumeSpecName: "kube-api-access-dlm64") pod "e21e1758-0210-4e49-8266-c27f5c919379" (UID: "e21e1758-0210-4e49-8266-c27f5c919379"). InnerVolumeSpecName "kube-api-access-dlm64". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:01:46 crc kubenswrapper[4946]: I1128 09:01:46.800643 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e21e1758-0210-4e49-8266-c27f5c919379-config-data" (OuterVolumeSpecName: "config-data") pod "e21e1758-0210-4e49-8266-c27f5c919379" (UID: "e21e1758-0210-4e49-8266-c27f5c919379"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:01:46 crc kubenswrapper[4946]: I1128 09:01:46.824274 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e21e1758-0210-4e49-8266-c27f5c919379-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e21e1758-0210-4e49-8266-c27f5c919379" (UID: "e21e1758-0210-4e49-8266-c27f5c919379"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:01:46 crc kubenswrapper[4946]: I1128 09:01:46.895093 4946 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/e21e1758-0210-4e49-8266-c27f5c919379-job-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 09:01:46 crc kubenswrapper[4946]: I1128 09:01:46.895121 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e21e1758-0210-4e49-8266-c27f5c919379-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 09:01:46 crc kubenswrapper[4946]: I1128 09:01:46.895135 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlm64\" (UniqueName: \"kubernetes.io/projected/e21e1758-0210-4e49-8266-c27f5c919379-kube-api-access-dlm64\") on node \"crc\" DevicePath \"\"" Nov 28 09:01:46 crc kubenswrapper[4946]: I1128 09:01:46.895144 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e21e1758-0210-4e49-8266-c27f5c919379-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.180670 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-t686d" event={"ID":"e21e1758-0210-4e49-8266-c27f5c919379","Type":"ContainerDied","Data":"bee949399fe6d92a136c1c9aae36dd9eae818375e8ae24cd8a09c3dd19556988"} Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.181071 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bee949399fe6d92a136c1c9aae36dd9eae818375e8ae24cd8a09c3dd19556988" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.180732 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-t686d" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.580926 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Nov 28 09:01:47 crc kubenswrapper[4946]: E1128 09:01:47.581537 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e21e1758-0210-4e49-8266-c27f5c919379" containerName="manila-db-sync" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.581564 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="e21e1758-0210-4e49-8266-c27f5c919379" containerName="manila-db-sync" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.581865 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="e21e1758-0210-4e49-8266-c27f5c919379" containerName="manila-db-sync" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.583158 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.589395 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.589662 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.589781 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-stm6d" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.589899 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.609252 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.621553 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.624091 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.631853 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.636900 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.706924 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8bd8679dc-g9r8k"] Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.708667 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8bd8679dc-g9r8k" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.712383 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c8ecc468-f0db-41b1-a847-7ac2fdf26b37-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"c8ecc468-f0db-41b1-a847-7ac2fdf26b37\") " pod="openstack/manila-share-share1-0" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.712432 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9b648b9-5f57-40e5-a36f-3a4e23a21dd1-config-data\") pod \"manila-scheduler-0\" (UID: \"a9b648b9-5f57-40e5-a36f-3a4e23a21dd1\") " pod="openstack/manila-scheduler-0" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.712561 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8ecc468-f0db-41b1-a847-7ac2fdf26b37-scripts\") pod \"manila-share-share1-0\" (UID: \"c8ecc468-f0db-41b1-a847-7ac2fdf26b37\") " pod="openstack/manila-share-share1-0" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.712587 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8ecc468-f0db-41b1-a847-7ac2fdf26b37-config-data\") pod \"manila-share-share1-0\" (UID: \"c8ecc468-f0db-41b1-a847-7ac2fdf26b37\") " pod="openstack/manila-share-share1-0" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.712626 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9b648b9-5f57-40e5-a36f-3a4e23a21dd1-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"a9b648b9-5f57-40e5-a36f-3a4e23a21dd1\") " pod="openstack/manila-scheduler-0" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.712643 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a9b648b9-5f57-40e5-a36f-3a4e23a21dd1-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"a9b648b9-5f57-40e5-a36f-3a4e23a21dd1\") " pod="openstack/manila-scheduler-0" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.712662 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f64ms\" (UniqueName: \"kubernetes.io/projected/c8ecc468-f0db-41b1-a847-7ac2fdf26b37-kube-api-access-f64ms\") pod \"manila-share-share1-0\" (UID: \"c8ecc468-f0db-41b1-a847-7ac2fdf26b37\") " pod="openstack/manila-share-share1-0" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.712681 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a9b648b9-5f57-40e5-a36f-3a4e23a21dd1-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"a9b648b9-5f57-40e5-a36f-3a4e23a21dd1\") " pod="openstack/manila-scheduler-0" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.712695 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/c8ecc468-f0db-41b1-a847-7ac2fdf26b37-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"c8ecc468-f0db-41b1-a847-7ac2fdf26b37\") " 
pod="openstack/manila-share-share1-0" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.712740 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8ecc468-f0db-41b1-a847-7ac2fdf26b37-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"c8ecc468-f0db-41b1-a847-7ac2fdf26b37\") " pod="openstack/manila-share-share1-0" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.712798 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c8ecc468-f0db-41b1-a847-7ac2fdf26b37-ceph\") pod \"manila-share-share1-0\" (UID: \"c8ecc468-f0db-41b1-a847-7ac2fdf26b37\") " pod="openstack/manila-share-share1-0" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.712846 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtkkb\" (UniqueName: \"kubernetes.io/projected/a9b648b9-5f57-40e5-a36f-3a4e23a21dd1-kube-api-access-wtkkb\") pod \"manila-scheduler-0\" (UID: \"a9b648b9-5f57-40e5-a36f-3a4e23a21dd1\") " pod="openstack/manila-scheduler-0" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.712864 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8ecc468-f0db-41b1-a847-7ac2fdf26b37-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"c8ecc468-f0db-41b1-a847-7ac2fdf26b37\") " pod="openstack/manila-share-share1-0" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.712877 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9b648b9-5f57-40e5-a36f-3a4e23a21dd1-scripts\") pod \"manila-scheduler-0\" (UID: \"a9b648b9-5f57-40e5-a36f-3a4e23a21dd1\") " pod="openstack/manila-scheduler-0" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.715108 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8bd8679dc-g9r8k"] Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.814708 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtkkb\" (UniqueName: \"kubernetes.io/projected/a9b648b9-5f57-40e5-a36f-3a4e23a21dd1-kube-api-access-wtkkb\") pod \"manila-scheduler-0\" (UID: \"a9b648b9-5f57-40e5-a36f-3a4e23a21dd1\") " pod="openstack/manila-scheduler-0" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.814770 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9b648b9-5f57-40e5-a36f-3a4e23a21dd1-scripts\") pod \"manila-scheduler-0\" (UID: \"a9b648b9-5f57-40e5-a36f-3a4e23a21dd1\") " pod="openstack/manila-scheduler-0" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.814794 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8ecc468-f0db-41b1-a847-7ac2fdf26b37-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"c8ecc468-f0db-41b1-a847-7ac2fdf26b37\") " pod="openstack/manila-share-share1-0" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.814827 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c8ecc468-f0db-41b1-a847-7ac2fdf26b37-etc-machine-id\") pod \"manila-share-share1-0\" (UID: 
\"c8ecc468-f0db-41b1-a847-7ac2fdf26b37\") " pod="openstack/manila-share-share1-0" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.814874 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9b648b9-5f57-40e5-a36f-3a4e23a21dd1-config-data\") pod \"manila-scheduler-0\" (UID: \"a9b648b9-5f57-40e5-a36f-3a4e23a21dd1\") " pod="openstack/manila-scheduler-0" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.814929 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8ecc468-f0db-41b1-a847-7ac2fdf26b37-scripts\") pod \"manila-share-share1-0\" (UID: \"c8ecc468-f0db-41b1-a847-7ac2fdf26b37\") " pod="openstack/manila-share-share1-0" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.814959 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8ecc468-f0db-41b1-a847-7ac2fdf26b37-config-data\") pod \"manila-share-share1-0\" (UID: \"c8ecc468-f0db-41b1-a847-7ac2fdf26b37\") " pod="openstack/manila-share-share1-0" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.815017 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9b648b9-5f57-40e5-a36f-3a4e23a21dd1-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"a9b648b9-5f57-40e5-a36f-3a4e23a21dd1\") " pod="openstack/manila-scheduler-0" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.815041 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a9b648b9-5f57-40e5-a36f-3a4e23a21dd1-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"a9b648b9-5f57-40e5-a36f-3a4e23a21dd1\") " pod="openstack/manila-scheduler-0" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.815075 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f64ms\" (UniqueName: \"kubernetes.io/projected/c8ecc468-f0db-41b1-a847-7ac2fdf26b37-kube-api-access-f64ms\") pod \"manila-share-share1-0\" (UID: \"c8ecc468-f0db-41b1-a847-7ac2fdf26b37\") " pod="openstack/manila-share-share1-0" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.815106 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a9b648b9-5f57-40e5-a36f-3a4e23a21dd1-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"a9b648b9-5f57-40e5-a36f-3a4e23a21dd1\") " pod="openstack/manila-scheduler-0" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.815129 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/c8ecc468-f0db-41b1-a847-7ac2fdf26b37-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"c8ecc468-f0db-41b1-a847-7ac2fdf26b37\") " pod="openstack/manila-share-share1-0" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.815185 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk2sr\" (UniqueName: \"kubernetes.io/projected/9735f8fd-4a68-461b-9029-3a55c266570b-kube-api-access-kk2sr\") pod \"dnsmasq-dns-8bd8679dc-g9r8k\" (UID: \"9735f8fd-4a68-461b-9029-3a55c266570b\") " pod="openstack/dnsmasq-dns-8bd8679dc-g9r8k" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.815220 4946 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8ecc468-f0db-41b1-a847-7ac2fdf26b37-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"c8ecc468-f0db-41b1-a847-7ac2fdf26b37\") " pod="openstack/manila-share-share1-0" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.815256 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9735f8fd-4a68-461b-9029-3a55c266570b-dns-svc\") pod \"dnsmasq-dns-8bd8679dc-g9r8k\" (UID: \"9735f8fd-4a68-461b-9029-3a55c266570b\") " pod="openstack/dnsmasq-dns-8bd8679dc-g9r8k" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.815298 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9735f8fd-4a68-461b-9029-3a55c266570b-config\") pod \"dnsmasq-dns-8bd8679dc-g9r8k\" (UID: \"9735f8fd-4a68-461b-9029-3a55c266570b\") " pod="openstack/dnsmasq-dns-8bd8679dc-g9r8k" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.816591 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a9b648b9-5f57-40e5-a36f-3a4e23a21dd1-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"a9b648b9-5f57-40e5-a36f-3a4e23a21dd1\") " pod="openstack/manila-scheduler-0" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.816726 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/c8ecc468-f0db-41b1-a847-7ac2fdf26b37-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"c8ecc468-f0db-41b1-a847-7ac2fdf26b37\") " pod="openstack/manila-share-share1-0" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.817317 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9735f8fd-4a68-461b-9029-3a55c266570b-ovsdbserver-nb\") pod \"dnsmasq-dns-8bd8679dc-g9r8k\" (UID: \"9735f8fd-4a68-461b-9029-3a55c266570b\") " pod="openstack/dnsmasq-dns-8bd8679dc-g9r8k" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.817376 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9735f8fd-4a68-461b-9029-3a55c266570b-ovsdbserver-sb\") pod \"dnsmasq-dns-8bd8679dc-g9r8k\" (UID: \"9735f8fd-4a68-461b-9029-3a55c266570b\") " pod="openstack/dnsmasq-dns-8bd8679dc-g9r8k" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.817450 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c8ecc468-f0db-41b1-a847-7ac2fdf26b37-ceph\") pod \"manila-share-share1-0\" (UID: \"c8ecc468-f0db-41b1-a847-7ac2fdf26b37\") " pod="openstack/manila-share-share1-0" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.821530 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c8ecc468-f0db-41b1-a847-7ac2fdf26b37-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"c8ecc468-f0db-41b1-a847-7ac2fdf26b37\") " pod="openstack/manila-share-share1-0" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.825809 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a9b648b9-5f57-40e5-a36f-3a4e23a21dd1-config-data\") pod \"manila-scheduler-0\" (UID: \"a9b648b9-5f57-40e5-a36f-3a4e23a21dd1\") " pod="openstack/manila-scheduler-0" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.828424 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8ecc468-f0db-41b1-a847-7ac2fdf26b37-config-data\") pod \"manila-share-share1-0\" (UID: \"c8ecc468-f0db-41b1-a847-7ac2fdf26b37\") " pod="openstack/manila-share-share1-0" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.828700 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8ecc468-f0db-41b1-a847-7ac2fdf26b37-scripts\") pod \"manila-share-share1-0\" (UID: \"c8ecc468-f0db-41b1-a847-7ac2fdf26b37\") " pod="openstack/manila-share-share1-0" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.829025 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9b648b9-5f57-40e5-a36f-3a4e23a21dd1-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"a9b648b9-5f57-40e5-a36f-3a4e23a21dd1\") " pod="openstack/manila-scheduler-0" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.829512 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c8ecc468-f0db-41b1-a847-7ac2fdf26b37-ceph\") pod \"manila-share-share1-0\" (UID: \"c8ecc468-f0db-41b1-a847-7ac2fdf26b37\") " pod="openstack/manila-share-share1-0" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.829679 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9b648b9-5f57-40e5-a36f-3a4e23a21dd1-scripts\") pod \"manila-scheduler-0\" (UID: \"a9b648b9-5f57-40e5-a36f-3a4e23a21dd1\") " pod="openstack/manila-scheduler-0" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.831483 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8ecc468-f0db-41b1-a847-7ac2fdf26b37-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"c8ecc468-f0db-41b1-a847-7ac2fdf26b37\") " pod="openstack/manila-share-share1-0" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.847945 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8ecc468-f0db-41b1-a847-7ac2fdf26b37-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"c8ecc468-f0db-41b1-a847-7ac2fdf26b37\") " pod="openstack/manila-share-share1-0" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.852039 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a9b648b9-5f57-40e5-a36f-3a4e23a21dd1-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"a9b648b9-5f57-40e5-a36f-3a4e23a21dd1\") " pod="openstack/manila-scheduler-0" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.853444 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtkkb\" (UniqueName: \"kubernetes.io/projected/a9b648b9-5f57-40e5-a36f-3a4e23a21dd1-kube-api-access-wtkkb\") pod \"manila-scheduler-0\" (UID: \"a9b648b9-5f57-40e5-a36f-3a4e23a21dd1\") " pod="openstack/manila-scheduler-0" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.853993 4946 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-f64ms\" (UniqueName: \"kubernetes.io/projected/c8ecc468-f0db-41b1-a847-7ac2fdf26b37-kube-api-access-f64ms\") pod \"manila-share-share1-0\" (UID: \"c8ecc468-f0db-41b1-a847-7ac2fdf26b37\") " pod="openstack/manila-share-share1-0" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.915052 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.919914 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kk2sr\" (UniqueName: \"kubernetes.io/projected/9735f8fd-4a68-461b-9029-3a55c266570b-kube-api-access-kk2sr\") pod \"dnsmasq-dns-8bd8679dc-g9r8k\" (UID: \"9735f8fd-4a68-461b-9029-3a55c266570b\") " pod="openstack/dnsmasq-dns-8bd8679dc-g9r8k" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.919975 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9735f8fd-4a68-461b-9029-3a55c266570b-dns-svc\") pod \"dnsmasq-dns-8bd8679dc-g9r8k\" (UID: \"9735f8fd-4a68-461b-9029-3a55c266570b\") " pod="openstack/dnsmasq-dns-8bd8679dc-g9r8k" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.920012 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9735f8fd-4a68-461b-9029-3a55c266570b-config\") pod \"dnsmasq-dns-8bd8679dc-g9r8k\" (UID: \"9735f8fd-4a68-461b-9029-3a55c266570b\") " pod="openstack/dnsmasq-dns-8bd8679dc-g9r8k" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.920039 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9735f8fd-4a68-461b-9029-3a55c266570b-ovsdbserver-nb\") pod \"dnsmasq-dns-8bd8679dc-g9r8k\" (UID: \"9735f8fd-4a68-461b-9029-3a55c266570b\") " pod="openstack/dnsmasq-dns-8bd8679dc-g9r8k" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.920075 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9735f8fd-4a68-461b-9029-3a55c266570b-ovsdbserver-sb\") pod \"dnsmasq-dns-8bd8679dc-g9r8k\" (UID: \"9735f8fd-4a68-461b-9029-3a55c266570b\") " pod="openstack/dnsmasq-dns-8bd8679dc-g9r8k" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.921187 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9735f8fd-4a68-461b-9029-3a55c266570b-ovsdbserver-sb\") pod \"dnsmasq-dns-8bd8679dc-g9r8k\" (UID: \"9735f8fd-4a68-461b-9029-3a55c266570b\") " pod="openstack/dnsmasq-dns-8bd8679dc-g9r8k" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.922158 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9735f8fd-4a68-461b-9029-3a55c266570b-dns-svc\") pod \"dnsmasq-dns-8bd8679dc-g9r8k\" (UID: \"9735f8fd-4a68-461b-9029-3a55c266570b\") " pod="openstack/dnsmasq-dns-8bd8679dc-g9r8k" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.922763 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9735f8fd-4a68-461b-9029-3a55c266570b-config\") pod \"dnsmasq-dns-8bd8679dc-g9r8k\" (UID: \"9735f8fd-4a68-461b-9029-3a55c266570b\") " pod="openstack/dnsmasq-dns-8bd8679dc-g9r8k" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.923329 4946 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9735f8fd-4a68-461b-9029-3a55c266570b-ovsdbserver-nb\") pod \"dnsmasq-dns-8bd8679dc-g9r8k\" (UID: \"9735f8fd-4a68-461b-9029-3a55c266570b\") " pod="openstack/dnsmasq-dns-8bd8679dc-g9r8k" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.948088 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Nov 28 09:01:47 crc kubenswrapper[4946]: I1128 09:01:47.978845 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kk2sr\" (UniqueName: \"kubernetes.io/projected/9735f8fd-4a68-461b-9029-3a55c266570b-kube-api-access-kk2sr\") pod \"dnsmasq-dns-8bd8679dc-g9r8k\" (UID: \"9735f8fd-4a68-461b-9029-3a55c266570b\") " pod="openstack/dnsmasq-dns-8bd8679dc-g9r8k" Nov 28 09:01:48 crc kubenswrapper[4946]: I1128 09:01:48.032974 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Nov 28 09:01:48 crc kubenswrapper[4946]: I1128 09:01:48.071149 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Nov 28 09:01:48 crc kubenswrapper[4946]: I1128 09:01:48.082778 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8bd8679dc-g9r8k" Nov 28 09:01:48 crc kubenswrapper[4946]: I1128 09:01:48.083700 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Nov 28 09:01:48 crc kubenswrapper[4946]: I1128 09:01:48.084952 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Nov 28 09:01:48 crc kubenswrapper[4946]: I1128 09:01:48.235859 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59e5bd7b-a965-4219-8ae9-b5a4c48f2c0b-scripts\") pod \"manila-api-0\" (UID: \"59e5bd7b-a965-4219-8ae9-b5a4c48f2c0b\") " pod="openstack/manila-api-0" Nov 28 09:01:48 crc kubenswrapper[4946]: I1128 09:01:48.236185 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9hgr\" (UniqueName: \"kubernetes.io/projected/59e5bd7b-a965-4219-8ae9-b5a4c48f2c0b-kube-api-access-j9hgr\") pod \"manila-api-0\" (UID: \"59e5bd7b-a965-4219-8ae9-b5a4c48f2c0b\") " pod="openstack/manila-api-0" Nov 28 09:01:48 crc kubenswrapper[4946]: I1128 09:01:48.236251 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59e5bd7b-a965-4219-8ae9-b5a4c48f2c0b-logs\") pod \"manila-api-0\" (UID: \"59e5bd7b-a965-4219-8ae9-b5a4c48f2c0b\") " pod="openstack/manila-api-0" Nov 28 09:01:48 crc kubenswrapper[4946]: I1128 09:01:48.236296 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/59e5bd7b-a965-4219-8ae9-b5a4c48f2c0b-etc-machine-id\") pod \"manila-api-0\" (UID: \"59e5bd7b-a965-4219-8ae9-b5a4c48f2c0b\") " pod="openstack/manila-api-0" Nov 28 09:01:48 crc kubenswrapper[4946]: I1128 09:01:48.236316 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/59e5bd7b-a965-4219-8ae9-b5a4c48f2c0b-config-data-custom\") pod \"manila-api-0\" (UID: \"59e5bd7b-a965-4219-8ae9-b5a4c48f2c0b\") " pod="openstack/manila-api-0" 
Nov 28 09:01:48 crc kubenswrapper[4946]: I1128 09:01:48.236351 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59e5bd7b-a965-4219-8ae9-b5a4c48f2c0b-config-data\") pod \"manila-api-0\" (UID: \"59e5bd7b-a965-4219-8ae9-b5a4c48f2c0b\") " pod="openstack/manila-api-0" Nov 28 09:01:48 crc kubenswrapper[4946]: I1128 09:01:48.236383 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59e5bd7b-a965-4219-8ae9-b5a4c48f2c0b-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"59e5bd7b-a965-4219-8ae9-b5a4c48f2c0b\") " pod="openstack/manila-api-0" Nov 28 09:01:48 crc kubenswrapper[4946]: I1128 09:01:48.338604 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59e5bd7b-a965-4219-8ae9-b5a4c48f2c0b-logs\") pod \"manila-api-0\" (UID: \"59e5bd7b-a965-4219-8ae9-b5a4c48f2c0b\") " pod="openstack/manila-api-0" Nov 28 09:01:48 crc kubenswrapper[4946]: I1128 09:01:48.338661 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/59e5bd7b-a965-4219-8ae9-b5a4c48f2c0b-etc-machine-id\") pod \"manila-api-0\" (UID: \"59e5bd7b-a965-4219-8ae9-b5a4c48f2c0b\") " pod="openstack/manila-api-0" Nov 28 09:01:48 crc kubenswrapper[4946]: I1128 09:01:48.338685 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/59e5bd7b-a965-4219-8ae9-b5a4c48f2c0b-config-data-custom\") pod \"manila-api-0\" (UID: \"59e5bd7b-a965-4219-8ae9-b5a4c48f2c0b\") " pod="openstack/manila-api-0" Nov 28 09:01:48 crc kubenswrapper[4946]: I1128 09:01:48.338720 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59e5bd7b-a965-4219-8ae9-b5a4c48f2c0b-config-data\") pod \"manila-api-0\" (UID: \"59e5bd7b-a965-4219-8ae9-b5a4c48f2c0b\") " pod="openstack/manila-api-0" Nov 28 09:01:48 crc kubenswrapper[4946]: I1128 09:01:48.338743 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59e5bd7b-a965-4219-8ae9-b5a4c48f2c0b-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"59e5bd7b-a965-4219-8ae9-b5a4c48f2c0b\") " pod="openstack/manila-api-0" Nov 28 09:01:48 crc kubenswrapper[4946]: I1128 09:01:48.338807 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59e5bd7b-a965-4219-8ae9-b5a4c48f2c0b-scripts\") pod \"manila-api-0\" (UID: \"59e5bd7b-a965-4219-8ae9-b5a4c48f2c0b\") " pod="openstack/manila-api-0" Nov 28 09:01:48 crc kubenswrapper[4946]: I1128 09:01:48.338846 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9hgr\" (UniqueName: \"kubernetes.io/projected/59e5bd7b-a965-4219-8ae9-b5a4c48f2c0b-kube-api-access-j9hgr\") pod \"manila-api-0\" (UID: \"59e5bd7b-a965-4219-8ae9-b5a4c48f2c0b\") " pod="openstack/manila-api-0" Nov 28 09:01:48 crc kubenswrapper[4946]: I1128 09:01:48.339345 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/59e5bd7b-a965-4219-8ae9-b5a4c48f2c0b-etc-machine-id\") pod \"manila-api-0\" (UID: \"59e5bd7b-a965-4219-8ae9-b5a4c48f2c0b\") " 
pod="openstack/manila-api-0" Nov 28 09:01:48 crc kubenswrapper[4946]: I1128 09:01:48.343212 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59e5bd7b-a965-4219-8ae9-b5a4c48f2c0b-logs\") pod \"manila-api-0\" (UID: \"59e5bd7b-a965-4219-8ae9-b5a4c48f2c0b\") " pod="openstack/manila-api-0" Nov 28 09:01:48 crc kubenswrapper[4946]: I1128 09:01:48.348039 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59e5bd7b-a965-4219-8ae9-b5a4c48f2c0b-config-data\") pod \"manila-api-0\" (UID: \"59e5bd7b-a965-4219-8ae9-b5a4c48f2c0b\") " pod="openstack/manila-api-0" Nov 28 09:01:48 crc kubenswrapper[4946]: I1128 09:01:48.349220 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59e5bd7b-a965-4219-8ae9-b5a4c48f2c0b-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"59e5bd7b-a965-4219-8ae9-b5a4c48f2c0b\") " pod="openstack/manila-api-0" Nov 28 09:01:48 crc kubenswrapper[4946]: I1128 09:01:48.349356 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/59e5bd7b-a965-4219-8ae9-b5a4c48f2c0b-config-data-custom\") pod \"manila-api-0\" (UID: \"59e5bd7b-a965-4219-8ae9-b5a4c48f2c0b\") " pod="openstack/manila-api-0" Nov 28 09:01:48 crc kubenswrapper[4946]: I1128 09:01:48.354172 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59e5bd7b-a965-4219-8ae9-b5a4c48f2c0b-scripts\") pod \"manila-api-0\" (UID: \"59e5bd7b-a965-4219-8ae9-b5a4c48f2c0b\") " pod="openstack/manila-api-0" Nov 28 09:01:48 crc kubenswrapper[4946]: I1128 09:01:48.358352 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9hgr\" (UniqueName: \"kubernetes.io/projected/59e5bd7b-a965-4219-8ae9-b5a4c48f2c0b-kube-api-access-j9hgr\") pod \"manila-api-0\" (UID: \"59e5bd7b-a965-4219-8ae9-b5a4c48f2c0b\") " pod="openstack/manila-api-0" Nov 28 09:01:48 crc kubenswrapper[4946]: I1128 09:01:48.453952 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Nov 28 09:01:48 crc kubenswrapper[4946]: I1128 09:01:48.801747 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Nov 28 09:01:48 crc kubenswrapper[4946]: I1128 09:01:48.847051 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8bd8679dc-g9r8k"] Nov 28 09:01:48 crc kubenswrapper[4946]: I1128 09:01:48.878252 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Nov 28 09:01:48 crc kubenswrapper[4946]: W1128 09:01:48.897150 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9b648b9_5f57_40e5_a36f_3a4e23a21dd1.slice/crio-352bc1a9063395288b55b85d479e62fcff2375046d512414d8e360fcd6845938 WatchSource:0}: Error finding container 352bc1a9063395288b55b85d479e62fcff2375046d512414d8e360fcd6845938: Status 404 returned error can't find the container with id 352bc1a9063395288b55b85d479e62fcff2375046d512414d8e360fcd6845938 Nov 28 09:01:49 crc kubenswrapper[4946]: I1128 09:01:49.226680 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"a9b648b9-5f57-40e5-a36f-3a4e23a21dd1","Type":"ContainerStarted","Data":"352bc1a9063395288b55b85d479e62fcff2375046d512414d8e360fcd6845938"} Nov 28 09:01:49 crc kubenswrapper[4946]: I1128 09:01:49.229423 4946 generic.go:334] "Generic (PLEG): container finished" podID="9735f8fd-4a68-461b-9029-3a55c266570b" containerID="5e21c504e53caa9408aa74d829d729709649efccea57b364cb6cedbcb864533c" exitCode=0 Nov 28 09:01:49 crc kubenswrapper[4946]: I1128 09:01:49.229503 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8bd8679dc-g9r8k" event={"ID":"9735f8fd-4a68-461b-9029-3a55c266570b","Type":"ContainerDied","Data":"5e21c504e53caa9408aa74d829d729709649efccea57b364cb6cedbcb864533c"} Nov 28 09:01:49 crc kubenswrapper[4946]: I1128 09:01:49.229523 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8bd8679dc-g9r8k" event={"ID":"9735f8fd-4a68-461b-9029-3a55c266570b","Type":"ContainerStarted","Data":"c1bfaf3f2c14578e424dc6639cf05a2c33641bfb97250866ab23d193000cdf60"} Nov 28 09:01:49 crc kubenswrapper[4946]: I1128 09:01:49.233620 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"c8ecc468-f0db-41b1-a847-7ac2fdf26b37","Type":"ContainerStarted","Data":"da123ce452f6b8e0f42ff2cc7a282b88c31592c5fa7505be1c18edb1320fde20"} Nov 28 09:01:49 crc kubenswrapper[4946]: I1128 09:01:49.265648 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Nov 28 09:01:49 crc kubenswrapper[4946]: I1128 09:01:49.671962 4946 scope.go:117] "RemoveContainer" containerID="322ddc7bc8022960bc74b4e656c68b690b89f9a6d42fc613213f26a4515a78a1" Nov 28 09:01:49 crc kubenswrapper[4946]: I1128 09:01:49.722593 4946 scope.go:117] "RemoveContainer" containerID="f976139ca0f06d5441ab8fd016b08f4706bed58c9dd81ce2d69b1c6c2a8a026a" Nov 28 09:01:49 crc kubenswrapper[4946]: I1128 09:01:49.752025 4946 scope.go:117] "RemoveContainer" containerID="94ee65fed67d23a723e2eaf61a3f2a4c20b09e4c9fe5166404d7ff0b39c66c77" Nov 28 09:01:50 crc kubenswrapper[4946]: I1128 09:01:50.264936 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8bd8679dc-g9r8k" 
event={"ID":"9735f8fd-4a68-461b-9029-3a55c266570b","Type":"ContainerStarted","Data":"0caa6f7026691fcc3a35ef5fc62686ec26afb2f4ee35b0e822f14dc0bbc356eb"} Nov 28 09:01:50 crc kubenswrapper[4946]: I1128 09:01:50.265283 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8bd8679dc-g9r8k" Nov 28 09:01:50 crc kubenswrapper[4946]: I1128 09:01:50.284118 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"a9b648b9-5f57-40e5-a36f-3a4e23a21dd1","Type":"ContainerStarted","Data":"669c6eecb3b4f7d0c6b2e3c85e86e70485675d8a0699696866d7afe93263d6fe"} Nov 28 09:01:50 crc kubenswrapper[4946]: I1128 09:01:50.284739 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8bd8679dc-g9r8k" podStartSLOduration=3.284715945 podStartE2EDuration="3.284715945s" podCreationTimestamp="2025-11-28 09:01:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 09:01:50.281438844 +0000 UTC m=+7764.659503955" watchObservedRunningTime="2025-11-28 09:01:50.284715945 +0000 UTC m=+7764.662781056" Nov 28 09:01:50 crc kubenswrapper[4946]: I1128 09:01:50.292721 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"59e5bd7b-a965-4219-8ae9-b5a4c48f2c0b","Type":"ContainerStarted","Data":"ea57a2886dc5fd9662dad412fb9046c5dccfbb17628fa1b2a4252c9484620516"} Nov 28 09:01:50 crc kubenswrapper[4946]: I1128 09:01:50.292760 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"59e5bd7b-a965-4219-8ae9-b5a4c48f2c0b","Type":"ContainerStarted","Data":"8602311e3812a2641e8c6df605f7513f539b0dffefe32cdc1fde3adf0492fc33"} Nov 28 09:01:51 crc kubenswrapper[4946]: I1128 09:01:51.321157 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"59e5bd7b-a965-4219-8ae9-b5a4c48f2c0b","Type":"ContainerStarted","Data":"da89d7a2a4a18af159392e69ed8146e3a6dd61a7c9213ce715918e5571b56feb"} Nov 28 09:01:51 crc kubenswrapper[4946]: I1128 09:01:51.322088 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Nov 28 09:01:51 crc kubenswrapper[4946]: I1128 09:01:51.327984 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"a9b648b9-5f57-40e5-a36f-3a4e23a21dd1","Type":"ContainerStarted","Data":"e00230f756432ec1c483268b81bd5444634915cdb0f47e4d7780a68d02b366f2"} Nov 28 09:01:51 crc kubenswrapper[4946]: I1128 09:01:51.363914 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=4.363885212 podStartE2EDuration="4.363885212s" podCreationTimestamp="2025-11-28 09:01:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 09:01:51.351085185 +0000 UTC m=+7765.729150296" watchObservedRunningTime="2025-11-28 09:01:51.363885212 +0000 UTC m=+7765.741950323" Nov 28 09:01:51 crc kubenswrapper[4946]: I1128 09:01:51.371701 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=3.958940623 podStartE2EDuration="4.371680335s" podCreationTimestamp="2025-11-28 09:01:47 +0000 UTC" firstStartedPulling="2025-11-28 09:01:48.908509031 +0000 UTC m=+7763.286574142" lastFinishedPulling="2025-11-28 09:01:49.321248743 +0000 UTC 
m=+7763.699313854" observedRunningTime="2025-11-28 09:01:51.367910082 +0000 UTC m=+7765.745975193" watchObservedRunningTime="2025-11-28 09:01:51.371680335 +0000 UTC m=+7765.749745446" Nov 28 09:01:52 crc kubenswrapper[4946]: I1128 09:01:52.124409 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 09:01:52 crc kubenswrapper[4946]: I1128 09:01:52.124898 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d0061281-50a6-4f9b-ad83-3043544dc56b" containerName="ceilometer-central-agent" containerID="cri-o://f3c933f80301e36beeaecf6a52c10a5f819aae02451cf2ec72eae8dade275c1a" gracePeriod=30 Nov 28 09:01:52 crc kubenswrapper[4946]: I1128 09:01:52.125099 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d0061281-50a6-4f9b-ad83-3043544dc56b" containerName="sg-core" containerID="cri-o://122e6c5622a7a34ef8cf0ebb001240116f79a69ab0b666898274529627413e6d" gracePeriod=30 Nov 28 09:01:52 crc kubenswrapper[4946]: I1128 09:01:52.125269 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d0061281-50a6-4f9b-ad83-3043544dc56b" containerName="proxy-httpd" containerID="cri-o://7f25abe1cb9a796c102163c0e570819bf8750b345f688c59e01316df4f868393" gracePeriod=30 Nov 28 09:01:52 crc kubenswrapper[4946]: I1128 09:01:52.125317 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d0061281-50a6-4f9b-ad83-3043544dc56b" containerName="ceilometer-notification-agent" containerID="cri-o://c816ef4f6b0abc8f8dee8a68118275e1ed346c57429f1d23efbd7aa21bf99a73" gracePeriod=30 Nov 28 09:01:52 crc kubenswrapper[4946]: I1128 09:01:52.129676 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 28 09:01:52 crc kubenswrapper[4946]: I1128 09:01:52.341436 4946 generic.go:334] "Generic (PLEG): container finished" podID="d0061281-50a6-4f9b-ad83-3043544dc56b" containerID="7f25abe1cb9a796c102163c0e570819bf8750b345f688c59e01316df4f868393" exitCode=0 Nov 28 09:01:52 crc kubenswrapper[4946]: I1128 09:01:52.342660 4946 generic.go:334] "Generic (PLEG): container finished" podID="d0061281-50a6-4f9b-ad83-3043544dc56b" containerID="122e6c5622a7a34ef8cf0ebb001240116f79a69ab0b666898274529627413e6d" exitCode=2 Nov 28 09:01:52 crc kubenswrapper[4946]: I1128 09:01:52.341488 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0061281-50a6-4f9b-ad83-3043544dc56b","Type":"ContainerDied","Data":"7f25abe1cb9a796c102163c0e570819bf8750b345f688c59e01316df4f868393"} Nov 28 09:01:52 crc kubenswrapper[4946]: I1128 09:01:52.343547 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0061281-50a6-4f9b-ad83-3043544dc56b","Type":"ContainerDied","Data":"122e6c5622a7a34ef8cf0ebb001240116f79a69ab0b666898274529627413e6d"} Nov 28 09:01:53 crc kubenswrapper[4946]: I1128 09:01:53.365262 4946 generic.go:334] "Generic (PLEG): container finished" podID="d0061281-50a6-4f9b-ad83-3043544dc56b" containerID="f3c933f80301e36beeaecf6a52c10a5f819aae02451cf2ec72eae8dade275c1a" exitCode=0 Nov 28 09:01:53 crc kubenswrapper[4946]: I1128 09:01:53.365672 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0061281-50a6-4f9b-ad83-3043544dc56b","Type":"ContainerDied","Data":"f3c933f80301e36beeaecf6a52c10a5f819aae02451cf2ec72eae8dade275c1a"} 
Nov 28 09:01:54 crc kubenswrapper[4946]: I1128 09:01:54.391949 4946 generic.go:334] "Generic (PLEG): container finished" podID="d0061281-50a6-4f9b-ad83-3043544dc56b" containerID="c816ef4f6b0abc8f8dee8a68118275e1ed346c57429f1d23efbd7aa21bf99a73" exitCode=0 Nov 28 09:01:54 crc kubenswrapper[4946]: I1128 09:01:54.392254 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0061281-50a6-4f9b-ad83-3043544dc56b","Type":"ContainerDied","Data":"c816ef4f6b0abc8f8dee8a68118275e1ed346c57429f1d23efbd7aa21bf99a73"} Nov 28 09:01:56 crc kubenswrapper[4946]: I1128 09:01:56.869979 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 09:01:56 crc kubenswrapper[4946]: I1128 09:01:56.993085 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0061281-50a6-4f9b-ad83-3043544dc56b-config-data\") pod \"d0061281-50a6-4f9b-ad83-3043544dc56b\" (UID: \"d0061281-50a6-4f9b-ad83-3043544dc56b\") " Nov 28 09:01:56 crc kubenswrapper[4946]: I1128 09:01:56.993124 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0061281-50a6-4f9b-ad83-3043544dc56b-log-httpd\") pod \"d0061281-50a6-4f9b-ad83-3043544dc56b\" (UID: \"d0061281-50a6-4f9b-ad83-3043544dc56b\") " Nov 28 09:01:56 crc kubenswrapper[4946]: I1128 09:01:56.993167 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0061281-50a6-4f9b-ad83-3043544dc56b-scripts\") pod \"d0061281-50a6-4f9b-ad83-3043544dc56b\" (UID: \"d0061281-50a6-4f9b-ad83-3043544dc56b\") " Nov 28 09:01:56 crc kubenswrapper[4946]: I1128 09:01:56.993238 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0061281-50a6-4f9b-ad83-3043544dc56b-combined-ca-bundle\") pod \"d0061281-50a6-4f9b-ad83-3043544dc56b\" (UID: \"d0061281-50a6-4f9b-ad83-3043544dc56b\") " Nov 28 09:01:56 crc kubenswrapper[4946]: I1128 09:01:56.993293 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0061281-50a6-4f9b-ad83-3043544dc56b-run-httpd\") pod \"d0061281-50a6-4f9b-ad83-3043544dc56b\" (UID: \"d0061281-50a6-4f9b-ad83-3043544dc56b\") " Nov 28 09:01:56 crc kubenswrapper[4946]: I1128 09:01:56.993343 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7ll5\" (UniqueName: \"kubernetes.io/projected/d0061281-50a6-4f9b-ad83-3043544dc56b-kube-api-access-h7ll5\") pod \"d0061281-50a6-4f9b-ad83-3043544dc56b\" (UID: \"d0061281-50a6-4f9b-ad83-3043544dc56b\") " Nov 28 09:01:56 crc kubenswrapper[4946]: I1128 09:01:56.993375 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d0061281-50a6-4f9b-ad83-3043544dc56b-sg-core-conf-yaml\") pod \"d0061281-50a6-4f9b-ad83-3043544dc56b\" (UID: \"d0061281-50a6-4f9b-ad83-3043544dc56b\") " Nov 28 09:01:56 crc kubenswrapper[4946]: I1128 09:01:56.993796 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0061281-50a6-4f9b-ad83-3043544dc56b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d0061281-50a6-4f9b-ad83-3043544dc56b" (UID: "d0061281-50a6-4f9b-ad83-3043544dc56b"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 09:01:56 crc kubenswrapper[4946]: I1128 09:01:56.994495 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0061281-50a6-4f9b-ad83-3043544dc56b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d0061281-50a6-4f9b-ad83-3043544dc56b" (UID: "d0061281-50a6-4f9b-ad83-3043544dc56b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 09:01:57 crc kubenswrapper[4946]: I1128 09:01:57.000799 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0061281-50a6-4f9b-ad83-3043544dc56b-kube-api-access-h7ll5" (OuterVolumeSpecName: "kube-api-access-h7ll5") pod "d0061281-50a6-4f9b-ad83-3043544dc56b" (UID: "d0061281-50a6-4f9b-ad83-3043544dc56b"). InnerVolumeSpecName "kube-api-access-h7ll5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:01:57 crc kubenswrapper[4946]: I1128 09:01:57.000808 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0061281-50a6-4f9b-ad83-3043544dc56b-scripts" (OuterVolumeSpecName: "scripts") pod "d0061281-50a6-4f9b-ad83-3043544dc56b" (UID: "d0061281-50a6-4f9b-ad83-3043544dc56b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:01:57 crc kubenswrapper[4946]: I1128 09:01:57.026766 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0061281-50a6-4f9b-ad83-3043544dc56b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d0061281-50a6-4f9b-ad83-3043544dc56b" (UID: "d0061281-50a6-4f9b-ad83-3043544dc56b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:01:57 crc kubenswrapper[4946]: I1128 09:01:57.090438 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0061281-50a6-4f9b-ad83-3043544dc56b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d0061281-50a6-4f9b-ad83-3043544dc56b" (UID: "d0061281-50a6-4f9b-ad83-3043544dc56b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:01:57 crc kubenswrapper[4946]: I1128 09:01:57.095971 4946 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0061281-50a6-4f9b-ad83-3043544dc56b-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 09:01:57 crc kubenswrapper[4946]: I1128 09:01:57.096063 4946 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0061281-50a6-4f9b-ad83-3043544dc56b-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 09:01:57 crc kubenswrapper[4946]: I1128 09:01:57.096132 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0061281-50a6-4f9b-ad83-3043544dc56b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 09:01:57 crc kubenswrapper[4946]: I1128 09:01:57.096187 4946 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0061281-50a6-4f9b-ad83-3043544dc56b-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 09:01:57 crc kubenswrapper[4946]: I1128 09:01:57.096243 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7ll5\" (UniqueName: \"kubernetes.io/projected/d0061281-50a6-4f9b-ad83-3043544dc56b-kube-api-access-h7ll5\") on node \"crc\" DevicePath \"\"" Nov 28 09:01:57 crc kubenswrapper[4946]: I1128 09:01:57.096297 4946 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d0061281-50a6-4f9b-ad83-3043544dc56b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 28 09:01:57 crc kubenswrapper[4946]: I1128 09:01:57.111253 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0061281-50a6-4f9b-ad83-3043544dc56b-config-data" (OuterVolumeSpecName: "config-data") pod "d0061281-50a6-4f9b-ad83-3043544dc56b" (UID: "d0061281-50a6-4f9b-ad83-3043544dc56b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:01:57 crc kubenswrapper[4946]: I1128 09:01:57.198569 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0061281-50a6-4f9b-ad83-3043544dc56b-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 09:01:57 crc kubenswrapper[4946]: I1128 09:01:57.442436 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"c8ecc468-f0db-41b1-a847-7ac2fdf26b37","Type":"ContainerStarted","Data":"b29b66a59b5927ad14ff7b89de9cff14c063180c2d6b9419686e5e1ba217e3f2"} Nov 28 09:01:57 crc kubenswrapper[4946]: I1128 09:01:57.447201 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0061281-50a6-4f9b-ad83-3043544dc56b","Type":"ContainerDied","Data":"1a06b202817b01a24c76417136988fec2429c18ee2a6d200f9387fc72260a9f2"} Nov 28 09:01:57 crc kubenswrapper[4946]: I1128 09:01:57.447273 4946 scope.go:117] "RemoveContainer" containerID="7f25abe1cb9a796c102163c0e570819bf8750b345f688c59e01316df4f868393" Nov 28 09:01:57 crc kubenswrapper[4946]: I1128 09:01:57.447299 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 09:01:57 crc kubenswrapper[4946]: I1128 09:01:57.515678 4946 scope.go:117] "RemoveContainer" containerID="122e6c5622a7a34ef8cf0ebb001240116f79a69ab0b666898274529627413e6d" Nov 28 09:01:57 crc kubenswrapper[4946]: I1128 09:01:57.547820 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 09:01:57 crc kubenswrapper[4946]: I1128 09:01:57.567348 4946 scope.go:117] "RemoveContainer" containerID="c816ef4f6b0abc8f8dee8a68118275e1ed346c57429f1d23efbd7aa21bf99a73" Nov 28 09:01:57 crc kubenswrapper[4946]: I1128 09:01:57.571332 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 28 09:01:57 crc kubenswrapper[4946]: I1128 09:01:57.597557 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 09:01:57 crc kubenswrapper[4946]: E1128 09:01:57.598055 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0061281-50a6-4f9b-ad83-3043544dc56b" containerName="sg-core" Nov 28 09:01:57 crc kubenswrapper[4946]: I1128 09:01:57.598067 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0061281-50a6-4f9b-ad83-3043544dc56b" containerName="sg-core" Nov 28 09:01:57 crc kubenswrapper[4946]: E1128 09:01:57.598097 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0061281-50a6-4f9b-ad83-3043544dc56b" containerName="ceilometer-central-agent" Nov 28 09:01:57 crc kubenswrapper[4946]: I1128 09:01:57.598103 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0061281-50a6-4f9b-ad83-3043544dc56b" containerName="ceilometer-central-agent" Nov 28 09:01:57 crc kubenswrapper[4946]: E1128 09:01:57.598113 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0061281-50a6-4f9b-ad83-3043544dc56b" containerName="ceilometer-notification-agent" Nov 28 09:01:57 crc kubenswrapper[4946]: I1128 09:01:57.598119 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0061281-50a6-4f9b-ad83-3043544dc56b" containerName="ceilometer-notification-agent" Nov 28 09:01:57 crc kubenswrapper[4946]: E1128 09:01:57.598134 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0061281-50a6-4f9b-ad83-3043544dc56b" containerName="proxy-httpd" Nov 28 09:01:57 crc kubenswrapper[4946]: I1128 09:01:57.598139 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0061281-50a6-4f9b-ad83-3043544dc56b" containerName="proxy-httpd" Nov 28 09:01:57 crc kubenswrapper[4946]: I1128 09:01:57.598320 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0061281-50a6-4f9b-ad83-3043544dc56b" containerName="proxy-httpd" Nov 28 09:01:57 crc kubenswrapper[4946]: I1128 09:01:57.598338 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0061281-50a6-4f9b-ad83-3043544dc56b" containerName="sg-core" Nov 28 09:01:57 crc kubenswrapper[4946]: I1128 09:01:57.598353 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0061281-50a6-4f9b-ad83-3043544dc56b" containerName="ceilometer-notification-agent" Nov 28 09:01:57 crc kubenswrapper[4946]: I1128 09:01:57.598372 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0061281-50a6-4f9b-ad83-3043544dc56b" containerName="ceilometer-central-agent" Nov 28 09:01:57 crc kubenswrapper[4946]: I1128 09:01:57.600135 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 09:01:57 crc kubenswrapper[4946]: I1128 09:01:57.603884 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 09:01:57 crc kubenswrapper[4946]: I1128 09:01:57.604038 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 09:01:57 crc kubenswrapper[4946]: I1128 09:01:57.613334 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 09:01:57 crc kubenswrapper[4946]: I1128 09:01:57.614174 4946 scope.go:117] "RemoveContainer" containerID="f3c933f80301e36beeaecf6a52c10a5f819aae02451cf2ec72eae8dade275c1a" Nov 28 09:01:57 crc kubenswrapper[4946]: I1128 09:01:57.713745 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87d9a427-9315-4a98-8a43-a204ef1c9ea4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"87d9a427-9315-4a98-8a43-a204ef1c9ea4\") " pod="openstack/ceilometer-0" Nov 28 09:01:57 crc kubenswrapper[4946]: I1128 09:01:57.714292 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsx79\" (UniqueName: \"kubernetes.io/projected/87d9a427-9315-4a98-8a43-a204ef1c9ea4-kube-api-access-qsx79\") pod \"ceilometer-0\" (UID: \"87d9a427-9315-4a98-8a43-a204ef1c9ea4\") " pod="openstack/ceilometer-0" Nov 28 09:01:57 crc kubenswrapper[4946]: I1128 09:01:57.714386 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87d9a427-9315-4a98-8a43-a204ef1c9ea4-log-httpd\") pod \"ceilometer-0\" (UID: \"87d9a427-9315-4a98-8a43-a204ef1c9ea4\") " pod="openstack/ceilometer-0" Nov 28 09:01:57 crc kubenswrapper[4946]: I1128 09:01:57.714648 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87d9a427-9315-4a98-8a43-a204ef1c9ea4-scripts\") pod \"ceilometer-0\" (UID: \"87d9a427-9315-4a98-8a43-a204ef1c9ea4\") " pod="openstack/ceilometer-0" Nov 28 09:01:57 crc kubenswrapper[4946]: I1128 09:01:57.714839 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87d9a427-9315-4a98-8a43-a204ef1c9ea4-run-httpd\") pod \"ceilometer-0\" (UID: \"87d9a427-9315-4a98-8a43-a204ef1c9ea4\") " pod="openstack/ceilometer-0" Nov 28 09:01:57 crc kubenswrapper[4946]: I1128 09:01:57.715106 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87d9a427-9315-4a98-8a43-a204ef1c9ea4-config-data\") pod \"ceilometer-0\" (UID: \"87d9a427-9315-4a98-8a43-a204ef1c9ea4\") " pod="openstack/ceilometer-0" Nov 28 09:01:57 crc kubenswrapper[4946]: I1128 09:01:57.715217 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/87d9a427-9315-4a98-8a43-a204ef1c9ea4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"87d9a427-9315-4a98-8a43-a204ef1c9ea4\") " pod="openstack/ceilometer-0" Nov 28 09:01:57 crc kubenswrapper[4946]: I1128 09:01:57.817164 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/87d9a427-9315-4a98-8a43-a204ef1c9ea4-run-httpd\") pod \"ceilometer-0\" (UID: \"87d9a427-9315-4a98-8a43-a204ef1c9ea4\") " pod="openstack/ceilometer-0" Nov 28 09:01:57 crc kubenswrapper[4946]: I1128 09:01:57.817297 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87d9a427-9315-4a98-8a43-a204ef1c9ea4-config-data\") pod \"ceilometer-0\" (UID: \"87d9a427-9315-4a98-8a43-a204ef1c9ea4\") " pod="openstack/ceilometer-0" Nov 28 09:01:57 crc kubenswrapper[4946]: I1128 09:01:57.817327 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/87d9a427-9315-4a98-8a43-a204ef1c9ea4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"87d9a427-9315-4a98-8a43-a204ef1c9ea4\") " pod="openstack/ceilometer-0" Nov 28 09:01:57 crc kubenswrapper[4946]: I1128 09:01:57.817394 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87d9a427-9315-4a98-8a43-a204ef1c9ea4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"87d9a427-9315-4a98-8a43-a204ef1c9ea4\") " pod="openstack/ceilometer-0" Nov 28 09:01:57 crc kubenswrapper[4946]: I1128 09:01:57.817434 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsx79\" (UniqueName: \"kubernetes.io/projected/87d9a427-9315-4a98-8a43-a204ef1c9ea4-kube-api-access-qsx79\") pod \"ceilometer-0\" (UID: \"87d9a427-9315-4a98-8a43-a204ef1c9ea4\") " pod="openstack/ceilometer-0" Nov 28 09:01:57 crc kubenswrapper[4946]: I1128 09:01:57.818914 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87d9a427-9315-4a98-8a43-a204ef1c9ea4-run-httpd\") pod \"ceilometer-0\" (UID: \"87d9a427-9315-4a98-8a43-a204ef1c9ea4\") " pod="openstack/ceilometer-0" Nov 28 09:01:57 crc kubenswrapper[4946]: I1128 09:01:57.817459 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87d9a427-9315-4a98-8a43-a204ef1c9ea4-log-httpd\") pod \"ceilometer-0\" (UID: \"87d9a427-9315-4a98-8a43-a204ef1c9ea4\") " pod="openstack/ceilometer-0" Nov 28 09:01:57 crc kubenswrapper[4946]: I1128 09:01:57.819410 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87d9a427-9315-4a98-8a43-a204ef1c9ea4-scripts\") pod \"ceilometer-0\" (UID: \"87d9a427-9315-4a98-8a43-a204ef1c9ea4\") " pod="openstack/ceilometer-0" Nov 28 09:01:57 crc kubenswrapper[4946]: I1128 09:01:57.820086 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87d9a427-9315-4a98-8a43-a204ef1c9ea4-log-httpd\") pod \"ceilometer-0\" (UID: \"87d9a427-9315-4a98-8a43-a204ef1c9ea4\") " pod="openstack/ceilometer-0" Nov 28 09:01:57 crc kubenswrapper[4946]: I1128 09:01:57.822728 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87d9a427-9315-4a98-8a43-a204ef1c9ea4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"87d9a427-9315-4a98-8a43-a204ef1c9ea4\") " pod="openstack/ceilometer-0" Nov 28 09:01:57 crc kubenswrapper[4946]: I1128 09:01:57.822814 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/87d9a427-9315-4a98-8a43-a204ef1c9ea4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"87d9a427-9315-4a98-8a43-a204ef1c9ea4\") " pod="openstack/ceilometer-0" Nov 28 09:01:57 crc kubenswrapper[4946]: I1128 09:01:57.829229 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87d9a427-9315-4a98-8a43-a204ef1c9ea4-scripts\") pod \"ceilometer-0\" (UID: \"87d9a427-9315-4a98-8a43-a204ef1c9ea4\") " pod="openstack/ceilometer-0" Nov 28 09:01:57 crc kubenswrapper[4946]: I1128 09:01:57.837207 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87d9a427-9315-4a98-8a43-a204ef1c9ea4-config-data\") pod \"ceilometer-0\" (UID: \"87d9a427-9315-4a98-8a43-a204ef1c9ea4\") " pod="openstack/ceilometer-0" Nov 28 09:01:57 crc kubenswrapper[4946]: I1128 09:01:57.837312 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsx79\" (UniqueName: \"kubernetes.io/projected/87d9a427-9315-4a98-8a43-a204ef1c9ea4-kube-api-access-qsx79\") pod \"ceilometer-0\" (UID: \"87d9a427-9315-4a98-8a43-a204ef1c9ea4\") " pod="openstack/ceilometer-0" Nov 28 09:01:57 crc kubenswrapper[4946]: I1128 09:01:57.915687 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Nov 28 09:01:57 crc kubenswrapper[4946]: I1128 09:01:57.920941 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 09:01:58 crc kubenswrapper[4946]: I1128 09:01:58.004378 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0061281-50a6-4f9b-ad83-3043544dc56b" path="/var/lib/kubelet/pods/d0061281-50a6-4f9b-ad83-3043544dc56b/volumes" Nov 28 09:01:58 crc kubenswrapper[4946]: I1128 09:01:58.093591 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8bd8679dc-g9r8k" Nov 28 09:01:58 crc kubenswrapper[4946]: I1128 09:01:58.168075 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-657f9b46c5-6chv7"] Nov 28 09:01:58 crc kubenswrapper[4946]: I1128 09:01:58.168313 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-657f9b46c5-6chv7" podUID="ea2a6b0f-c72f-4623-acf1-efe087189a34" containerName="dnsmasq-dns" containerID="cri-o://12ff29d2a05951d419d263c8b96f417547a4bf7d4d3751c37cbc6853c7784901" gracePeriod=10 Nov 28 09:01:58 crc kubenswrapper[4946]: I1128 09:01:58.475024 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"c8ecc468-f0db-41b1-a847-7ac2fdf26b37","Type":"ContainerStarted","Data":"186603dfbe02098a33f42f81062ed0617b734860e5cd90ec2f9a196c9669064e"} Nov 28 09:01:58 crc kubenswrapper[4946]: I1128 09:01:58.484508 4946 generic.go:334] "Generic (PLEG): container finished" podID="ea2a6b0f-c72f-4623-acf1-efe087189a34" containerID="12ff29d2a05951d419d263c8b96f417547a4bf7d4d3751c37cbc6853c7784901" exitCode=0 Nov 28 09:01:58 crc kubenswrapper[4946]: I1128 09:01:58.484549 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-657f9b46c5-6chv7" event={"ID":"ea2a6b0f-c72f-4623-acf1-efe087189a34","Type":"ContainerDied","Data":"12ff29d2a05951d419d263c8b96f417547a4bf7d4d3751c37cbc6853c7784901"} Nov 28 09:01:58 crc kubenswrapper[4946]: I1128 09:01:58.511707 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 09:01:58 
crc kubenswrapper[4946]: I1128 09:01:58.519514 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=3.797363141 podStartE2EDuration="11.51949438s" podCreationTimestamp="2025-11-28 09:01:47 +0000 UTC" firstStartedPulling="2025-11-28 09:01:48.817581089 +0000 UTC m=+7763.195646200" lastFinishedPulling="2025-11-28 09:01:56.539712328 +0000 UTC m=+7770.917777439" observedRunningTime="2025-11-28 09:01:58.505911834 +0000 UTC m=+7772.883976945" watchObservedRunningTime="2025-11-28 09:01:58.51949438 +0000 UTC m=+7772.897559491"
Nov 28 09:01:58 crc kubenswrapper[4946]: I1128 09:01:58.903275 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-657f9b46c5-6chv7"
Nov 28 09:01:59 crc kubenswrapper[4946]: I1128 09:01:59.053359 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea2a6b0f-c72f-4623-acf1-efe087189a34-dns-svc\") pod \"ea2a6b0f-c72f-4623-acf1-efe087189a34\" (UID: \"ea2a6b0f-c72f-4623-acf1-efe087189a34\") "
Nov 28 09:01:59 crc kubenswrapper[4946]: I1128 09:01:59.053900 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea2a6b0f-c72f-4623-acf1-efe087189a34-ovsdbserver-nb\") pod \"ea2a6b0f-c72f-4623-acf1-efe087189a34\" (UID: \"ea2a6b0f-c72f-4623-acf1-efe087189a34\") "
Nov 28 09:01:59 crc kubenswrapper[4946]: I1128 09:01:59.053931 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hpz5\" (UniqueName: \"kubernetes.io/projected/ea2a6b0f-c72f-4623-acf1-efe087189a34-kube-api-access-7hpz5\") pod \"ea2a6b0f-c72f-4623-acf1-efe087189a34\" (UID: \"ea2a6b0f-c72f-4623-acf1-efe087189a34\") "
Nov 28 09:01:59 crc kubenswrapper[4946]: I1128 09:01:59.054027 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea2a6b0f-c72f-4623-acf1-efe087189a34-config\") pod \"ea2a6b0f-c72f-4623-acf1-efe087189a34\" (UID: \"ea2a6b0f-c72f-4623-acf1-efe087189a34\") "
Nov 28 09:01:59 crc kubenswrapper[4946]: I1128 09:01:59.054071 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea2a6b0f-c72f-4623-acf1-efe087189a34-ovsdbserver-sb\") pod \"ea2a6b0f-c72f-4623-acf1-efe087189a34\" (UID: \"ea2a6b0f-c72f-4623-acf1-efe087189a34\") "
Nov 28 09:01:59 crc kubenswrapper[4946]: I1128 09:01:59.070182 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea2a6b0f-c72f-4623-acf1-efe087189a34-kube-api-access-7hpz5" (OuterVolumeSpecName: "kube-api-access-7hpz5") pod "ea2a6b0f-c72f-4623-acf1-efe087189a34" (UID: "ea2a6b0f-c72f-4623-acf1-efe087189a34"). InnerVolumeSpecName "kube-api-access-7hpz5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 09:01:59 crc kubenswrapper[4946]: I1128 09:01:59.114252 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea2a6b0f-c72f-4623-acf1-efe087189a34-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ea2a6b0f-c72f-4623-acf1-efe087189a34" (UID: "ea2a6b0f-c72f-4623-acf1-efe087189a34"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 09:01:59 crc kubenswrapper[4946]: I1128 09:01:59.118593 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea2a6b0f-c72f-4623-acf1-efe087189a34-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ea2a6b0f-c72f-4623-acf1-efe087189a34" (UID: "ea2a6b0f-c72f-4623-acf1-efe087189a34"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 09:01:59 crc kubenswrapper[4946]: I1128 09:01:59.123675 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea2a6b0f-c72f-4623-acf1-efe087189a34-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ea2a6b0f-c72f-4623-acf1-efe087189a34" (UID: "ea2a6b0f-c72f-4623-acf1-efe087189a34"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 09:01:59 crc kubenswrapper[4946]: I1128 09:01:59.134131 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea2a6b0f-c72f-4623-acf1-efe087189a34-config" (OuterVolumeSpecName: "config") pod "ea2a6b0f-c72f-4623-acf1-efe087189a34" (UID: "ea2a6b0f-c72f-4623-acf1-efe087189a34"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 09:01:59 crc kubenswrapper[4946]: I1128 09:01:59.156023 4946 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea2a6b0f-c72f-4623-acf1-efe087189a34-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 28 09:01:59 crc kubenswrapper[4946]: I1128 09:01:59.156051 4946 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea2a6b0f-c72f-4623-acf1-efe087189a34-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 28 09:01:59 crc kubenswrapper[4946]: I1128 09:01:59.156064 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hpz5\" (UniqueName: \"kubernetes.io/projected/ea2a6b0f-c72f-4623-acf1-efe087189a34-kube-api-access-7hpz5\") on node \"crc\" DevicePath \"\""
Nov 28 09:01:59 crc kubenswrapper[4946]: I1128 09:01:59.156073 4946 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea2a6b0f-c72f-4623-acf1-efe087189a34-config\") on node \"crc\" DevicePath \"\""
Nov 28 09:01:59 crc kubenswrapper[4946]: I1128 09:01:59.156081 4946 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea2a6b0f-c72f-4623-acf1-efe087189a34-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 28 09:01:59 crc kubenswrapper[4946]: I1128 09:01:59.501204 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-657f9b46c5-6chv7" event={"ID":"ea2a6b0f-c72f-4623-acf1-efe087189a34","Type":"ContainerDied","Data":"c3e1a06ed4f3851c01f450e43a9b14334b650a0b9040d5adc4e0ffe395d7a47a"}
Nov 28 09:01:59 crc kubenswrapper[4946]: I1128 09:01:59.501536 4946 scope.go:117] "RemoveContainer" containerID="12ff29d2a05951d419d263c8b96f417547a4bf7d4d3751c37cbc6853c7784901"
Nov 28 09:01:59 crc kubenswrapper[4946]: I1128 09:01:59.501261 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-657f9b46c5-6chv7"
Nov 28 09:01:59 crc kubenswrapper[4946]: I1128 09:01:59.504494 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87d9a427-9315-4a98-8a43-a204ef1c9ea4","Type":"ContainerStarted","Data":"2700e4b47f97107ad18af77b3a498462c2373cf8c7057fd37e3c840de9e922dd"}
Nov 28 09:01:59 crc kubenswrapper[4946]: I1128 09:01:59.504530 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87d9a427-9315-4a98-8a43-a204ef1c9ea4","Type":"ContainerStarted","Data":"60524d7294875c3469beb2399d3a2373f56ce2c6f76b580fee476918f40131fe"}
Nov 28 09:01:59 crc kubenswrapper[4946]: I1128 09:01:59.504543 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87d9a427-9315-4a98-8a43-a204ef1c9ea4","Type":"ContainerStarted","Data":"a2a5c50e3e79e0a18762029bb9ad6d1812ef0308519f671ff6a0b7132c946440"}
Nov 28 09:01:59 crc kubenswrapper[4946]: I1128 09:01:59.536435 4946 scope.go:117] "RemoveContainer" containerID="71e9d96d4edbbff80a970adfcecb93a3ce3550238a5d888722b79b8c7a281894"
Nov 28 09:01:59 crc kubenswrapper[4946]: I1128 09:01:59.537908 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-657f9b46c5-6chv7"]
Nov 28 09:01:59 crc kubenswrapper[4946]: I1128 09:01:59.573998 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-657f9b46c5-6chv7"]
Nov 28 09:02:00 crc kubenswrapper[4946]: I1128 09:02:00.008816 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea2a6b0f-c72f-4623-acf1-efe087189a34" path="/var/lib/kubelet/pods/ea2a6b0f-c72f-4623-acf1-efe087189a34/volumes"
Nov 28 09:02:00 crc kubenswrapper[4946]: I1128 09:02:00.516768 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87d9a427-9315-4a98-8a43-a204ef1c9ea4","Type":"ContainerStarted","Data":"042371e87b3537c7ae1e624d26dbd2adbbbab31db293e16d6fcbb639b6efabbc"}
Nov 28 09:02:02 crc kubenswrapper[4946]: I1128 09:02:02.007218 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 28 09:02:02 crc kubenswrapper[4946]: I1128 09:02:02.539424 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87d9a427-9315-4a98-8a43-a204ef1c9ea4","Type":"ContainerStarted","Data":"2b8043f241b011e60413288f86953c9a8b73558b86119f8eb0c5d0d9fa53c05d"}
Nov 28 09:02:02 crc kubenswrapper[4946]: I1128 09:02:02.539610 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Nov 28 09:02:02 crc kubenswrapper[4946]: I1128 09:02:02.564892 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.577261897 podStartE2EDuration="5.564850958s" podCreationTimestamp="2025-11-28 09:01:57 +0000 UTC" firstStartedPulling="2025-11-28 09:01:58.531507168 +0000 UTC m=+7772.909572279" lastFinishedPulling="2025-11-28 09:02:01.519096229 +0000 UTC m=+7775.897161340" observedRunningTime="2025-11-28 09:02:02.559091396 +0000 UTC m=+7776.937156517" watchObservedRunningTime="2025-11-28 09:02:02.564850958 +0000 UTC m=+7776.942916089"
Nov 28 09:02:03 crc kubenswrapper[4946]: I1128 09:02:03.553558 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="87d9a427-9315-4a98-8a43-a204ef1c9ea4" containerName="ceilometer-central-agent" containerID="cri-o://60524d7294875c3469beb2399d3a2373f56ce2c6f76b580fee476918f40131fe" gracePeriod=30
Nov 28 09:02:03 crc kubenswrapper[4946]: I1128 09:02:03.553589 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="87d9a427-9315-4a98-8a43-a204ef1c9ea4" containerName="sg-core" containerID="cri-o://042371e87b3537c7ae1e624d26dbd2adbbbab31db293e16d6fcbb639b6efabbc" gracePeriod=30
Nov 28 09:02:03 crc kubenswrapper[4946]: I1128 09:02:03.553586 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="87d9a427-9315-4a98-8a43-a204ef1c9ea4" containerName="proxy-httpd" containerID="cri-o://2b8043f241b011e60413288f86953c9a8b73558b86119f8eb0c5d0d9fa53c05d" gracePeriod=30
Nov 28 09:02:03 crc kubenswrapper[4946]: I1128 09:02:03.553665 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="87d9a427-9315-4a98-8a43-a204ef1c9ea4" containerName="ceilometer-notification-agent" containerID="cri-o://2700e4b47f97107ad18af77b3a498462c2373cf8c7057fd37e3c840de9e922dd" gracePeriod=30
Nov 28 09:02:04 crc kubenswrapper[4946]: I1128 09:02:04.566371 4946 generic.go:334] "Generic (PLEG): container finished" podID="87d9a427-9315-4a98-8a43-a204ef1c9ea4" containerID="2b8043f241b011e60413288f86953c9a8b73558b86119f8eb0c5d0d9fa53c05d" exitCode=0
Nov 28 09:02:04 crc kubenswrapper[4946]: I1128 09:02:04.566663 4946 generic.go:334] "Generic (PLEG): container finished" podID="87d9a427-9315-4a98-8a43-a204ef1c9ea4" containerID="042371e87b3537c7ae1e624d26dbd2adbbbab31db293e16d6fcbb639b6efabbc" exitCode=2
Nov 28 09:02:04 crc kubenswrapper[4946]: I1128 09:02:04.566676 4946 generic.go:334] "Generic (PLEG): container finished" podID="87d9a427-9315-4a98-8a43-a204ef1c9ea4" containerID="2700e4b47f97107ad18af77b3a498462c2373cf8c7057fd37e3c840de9e922dd" exitCode=0
Nov 28 09:02:04 crc kubenswrapper[4946]: I1128 09:02:04.566686 4946 generic.go:334] "Generic (PLEG): container finished" podID="87d9a427-9315-4a98-8a43-a204ef1c9ea4" containerID="60524d7294875c3469beb2399d3a2373f56ce2c6f76b580fee476918f40131fe" exitCode=0
Nov 28 09:02:04 crc kubenswrapper[4946]: I1128 09:02:04.566482 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87d9a427-9315-4a98-8a43-a204ef1c9ea4","Type":"ContainerDied","Data":"2b8043f241b011e60413288f86953c9a8b73558b86119f8eb0c5d0d9fa53c05d"}
Nov 28 09:02:04 crc kubenswrapper[4946]: I1128 09:02:04.566723 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87d9a427-9315-4a98-8a43-a204ef1c9ea4","Type":"ContainerDied","Data":"042371e87b3537c7ae1e624d26dbd2adbbbab31db293e16d6fcbb639b6efabbc"}
Nov 28 09:02:04 crc kubenswrapper[4946]: I1128 09:02:04.566737 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87d9a427-9315-4a98-8a43-a204ef1c9ea4","Type":"ContainerDied","Data":"2700e4b47f97107ad18af77b3a498462c2373cf8c7057fd37e3c840de9e922dd"}
Nov 28 09:02:04 crc kubenswrapper[4946]: I1128 09:02:04.566753 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87d9a427-9315-4a98-8a43-a204ef1c9ea4","Type":"ContainerDied","Data":"60524d7294875c3469beb2399d3a2373f56ce2c6f76b580fee476918f40131fe"}
Nov 28 09:02:04 crc kubenswrapper[4946]: I1128 09:02:04.566764 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87d9a427-9315-4a98-8a43-a204ef1c9ea4","Type":"ContainerDied","Data":"a2a5c50e3e79e0a18762029bb9ad6d1812ef0308519f671ff6a0b7132c946440"}
Nov 28 09:02:04 crc kubenswrapper[4946]: I1128 09:02:04.566791 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2a5c50e3e79e0a18762029bb9ad6d1812ef0308519f671ff6a0b7132c946440"
Nov 28 09:02:04 crc kubenswrapper[4946]: I1128 09:02:04.576975 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 28 09:02:04 crc kubenswrapper[4946]: I1128 09:02:04.679985 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/87d9a427-9315-4a98-8a43-a204ef1c9ea4-sg-core-conf-yaml\") pod \"87d9a427-9315-4a98-8a43-a204ef1c9ea4\" (UID: \"87d9a427-9315-4a98-8a43-a204ef1c9ea4\") "
Nov 28 09:02:04 crc kubenswrapper[4946]: I1128 09:02:04.680236 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87d9a427-9315-4a98-8a43-a204ef1c9ea4-run-httpd\") pod \"87d9a427-9315-4a98-8a43-a204ef1c9ea4\" (UID: \"87d9a427-9315-4a98-8a43-a204ef1c9ea4\") "
Nov 28 09:02:04 crc kubenswrapper[4946]: I1128 09:02:04.680342 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsx79\" (UniqueName: \"kubernetes.io/projected/87d9a427-9315-4a98-8a43-a204ef1c9ea4-kube-api-access-qsx79\") pod \"87d9a427-9315-4a98-8a43-a204ef1c9ea4\" (UID: \"87d9a427-9315-4a98-8a43-a204ef1c9ea4\") "
Nov 28 09:02:04 crc kubenswrapper[4946]: I1128 09:02:04.680373 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87d9a427-9315-4a98-8a43-a204ef1c9ea4-log-httpd\") pod \"87d9a427-9315-4a98-8a43-a204ef1c9ea4\" (UID: \"87d9a427-9315-4a98-8a43-a204ef1c9ea4\") "
Nov 28 09:02:04 crc kubenswrapper[4946]: I1128 09:02:04.680398 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87d9a427-9315-4a98-8a43-a204ef1c9ea4-config-data\") pod \"87d9a427-9315-4a98-8a43-a204ef1c9ea4\" (UID: \"87d9a427-9315-4a98-8a43-a204ef1c9ea4\") "
Nov 28 09:02:04 crc kubenswrapper[4946]: I1128 09:02:04.680427 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87d9a427-9315-4a98-8a43-a204ef1c9ea4-combined-ca-bundle\") pod \"87d9a427-9315-4a98-8a43-a204ef1c9ea4\" (UID: \"87d9a427-9315-4a98-8a43-a204ef1c9ea4\") "
Nov 28 09:02:04 crc kubenswrapper[4946]: I1128 09:02:04.680513 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87d9a427-9315-4a98-8a43-a204ef1c9ea4-scripts\") pod \"87d9a427-9315-4a98-8a43-a204ef1c9ea4\" (UID: \"87d9a427-9315-4a98-8a43-a204ef1c9ea4\") "
Nov 28 09:02:04 crc kubenswrapper[4946]: I1128 09:02:04.681059 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87d9a427-9315-4a98-8a43-a204ef1c9ea4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "87d9a427-9315-4a98-8a43-a204ef1c9ea4" (UID: "87d9a427-9315-4a98-8a43-a204ef1c9ea4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 09:02:04 crc kubenswrapper[4946]: I1128 09:02:04.681245 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87d9a427-9315-4a98-8a43-a204ef1c9ea4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "87d9a427-9315-4a98-8a43-a204ef1c9ea4" (UID: "87d9a427-9315-4a98-8a43-a204ef1c9ea4"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 09:02:04 crc kubenswrapper[4946]: I1128 09:02:04.682458 4946 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87d9a427-9315-4a98-8a43-a204ef1c9ea4-log-httpd\") on node \"crc\" DevicePath \"\""
Nov 28 09:02:04 crc kubenswrapper[4946]: I1128 09:02:04.682493 4946 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87d9a427-9315-4a98-8a43-a204ef1c9ea4-run-httpd\") on node \"crc\" DevicePath \"\""
Nov 28 09:02:04 crc kubenswrapper[4946]: I1128 09:02:04.686782 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87d9a427-9315-4a98-8a43-a204ef1c9ea4-kube-api-access-qsx79" (OuterVolumeSpecName: "kube-api-access-qsx79") pod "87d9a427-9315-4a98-8a43-a204ef1c9ea4" (UID: "87d9a427-9315-4a98-8a43-a204ef1c9ea4"). InnerVolumeSpecName "kube-api-access-qsx79". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 09:02:04 crc kubenswrapper[4946]: I1128 09:02:04.687352 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87d9a427-9315-4a98-8a43-a204ef1c9ea4-scripts" (OuterVolumeSpecName: "scripts") pod "87d9a427-9315-4a98-8a43-a204ef1c9ea4" (UID: "87d9a427-9315-4a98-8a43-a204ef1c9ea4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 09:02:04 crc kubenswrapper[4946]: I1128 09:02:04.718063 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87d9a427-9315-4a98-8a43-a204ef1c9ea4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "87d9a427-9315-4a98-8a43-a204ef1c9ea4" (UID: "87d9a427-9315-4a98-8a43-a204ef1c9ea4"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 09:02:04 crc kubenswrapper[4946]: I1128 09:02:04.780683 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87d9a427-9315-4a98-8a43-a204ef1c9ea4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "87d9a427-9315-4a98-8a43-a204ef1c9ea4" (UID: "87d9a427-9315-4a98-8a43-a204ef1c9ea4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 09:02:04 crc kubenswrapper[4946]: I1128 09:02:04.784072 4946 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/87d9a427-9315-4a98-8a43-a204ef1c9ea4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Nov 28 09:02:04 crc kubenswrapper[4946]: I1128 09:02:04.784111 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qsx79\" (UniqueName: \"kubernetes.io/projected/87d9a427-9315-4a98-8a43-a204ef1c9ea4-kube-api-access-qsx79\") on node \"crc\" DevicePath \"\""
Nov 28 09:02:04 crc kubenswrapper[4946]: I1128 09:02:04.784126 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87d9a427-9315-4a98-8a43-a204ef1c9ea4-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 09:02:04 crc kubenswrapper[4946]: I1128 09:02:04.784138 4946 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87d9a427-9315-4a98-8a43-a204ef1c9ea4-scripts\") on node \"crc\" DevicePath \"\""
Nov 28 09:02:04 crc kubenswrapper[4946]: I1128 09:02:04.803536 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87d9a427-9315-4a98-8a43-a204ef1c9ea4-config-data" (OuterVolumeSpecName: "config-data") pod "87d9a427-9315-4a98-8a43-a204ef1c9ea4" (UID: "87d9a427-9315-4a98-8a43-a204ef1c9ea4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 09:02:04 crc kubenswrapper[4946]: I1128 09:02:04.886379 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87d9a427-9315-4a98-8a43-a204ef1c9ea4-config-data\") on node \"crc\" DevicePath \"\""
Nov 28 09:02:05 crc kubenswrapper[4946]: I1128 09:02:05.579456 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 28 09:02:05 crc kubenswrapper[4946]: I1128 09:02:05.637449 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 28 09:02:05 crc kubenswrapper[4946]: I1128 09:02:05.645997 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Nov 28 09:02:05 crc kubenswrapper[4946]: I1128 09:02:05.671379 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Nov 28 09:02:05 crc kubenswrapper[4946]: E1128 09:02:05.671988 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea2a6b0f-c72f-4623-acf1-efe087189a34" containerName="dnsmasq-dns"
Nov 28 09:02:05 crc kubenswrapper[4946]: I1128 09:02:05.672007 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea2a6b0f-c72f-4623-acf1-efe087189a34" containerName="dnsmasq-dns"
Nov 28 09:02:05 crc kubenswrapper[4946]: E1128 09:02:05.672030 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87d9a427-9315-4a98-8a43-a204ef1c9ea4" containerName="proxy-httpd"
Nov 28 09:02:05 crc kubenswrapper[4946]: I1128 09:02:05.672036 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="87d9a427-9315-4a98-8a43-a204ef1c9ea4" containerName="proxy-httpd"
Nov 28 09:02:05 crc kubenswrapper[4946]: E1128 09:02:05.672054 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87d9a427-9315-4a98-8a43-a204ef1c9ea4" containerName="sg-core"
Nov 28 09:02:05 crc kubenswrapper[4946]: I1128 09:02:05.672060 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="87d9a427-9315-4a98-8a43-a204ef1c9ea4" containerName="sg-core"
Nov 28 09:02:05 crc kubenswrapper[4946]: E1128 09:02:05.672073 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87d9a427-9315-4a98-8a43-a204ef1c9ea4" containerName="ceilometer-central-agent"
Nov 28 09:02:05 crc kubenswrapper[4946]: I1128 09:02:05.672079 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="87d9a427-9315-4a98-8a43-a204ef1c9ea4" containerName="ceilometer-central-agent"
Nov 28 09:02:05 crc kubenswrapper[4946]: E1128 09:02:05.672098 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea2a6b0f-c72f-4623-acf1-efe087189a34" containerName="init"
Nov 28 09:02:05 crc kubenswrapper[4946]: I1128 09:02:05.672104 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea2a6b0f-c72f-4623-acf1-efe087189a34" containerName="init"
Nov 28 09:02:05 crc kubenswrapper[4946]: E1128 09:02:05.672113 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87d9a427-9315-4a98-8a43-a204ef1c9ea4" containerName="ceilometer-notification-agent"
Nov 28 09:02:05 crc kubenswrapper[4946]: I1128 09:02:05.672119 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="87d9a427-9315-4a98-8a43-a204ef1c9ea4" containerName="ceilometer-notification-agent"
Nov 28 09:02:05 crc kubenswrapper[4946]: I1128 09:02:05.672309 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea2a6b0f-c72f-4623-acf1-efe087189a34" containerName="dnsmasq-dns"
Nov 28 09:02:05 crc kubenswrapper[4946]: I1128 09:02:05.672330 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="87d9a427-9315-4a98-8a43-a204ef1c9ea4" containerName="sg-core"
Nov 28 09:02:05 crc kubenswrapper[4946]: I1128 09:02:05.672341 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="87d9a427-9315-4a98-8a43-a204ef1c9ea4" containerName="proxy-httpd"
Nov 28 09:02:05 crc kubenswrapper[4946]: I1128 09:02:05.672349 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="87d9a427-9315-4a98-8a43-a204ef1c9ea4" containerName="ceilometer-notification-agent"
Nov 28 09:02:05 crc kubenswrapper[4946]: I1128 09:02:05.672360 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="87d9a427-9315-4a98-8a43-a204ef1c9ea4" containerName="ceilometer-central-agent"
Nov 28 09:02:05 crc kubenswrapper[4946]: I1128 09:02:05.674225 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 28 09:02:05 crc kubenswrapper[4946]: I1128 09:02:05.677096 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Nov 28 09:02:05 crc kubenswrapper[4946]: I1128 09:02:05.677401 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Nov 28 09:02:05 crc kubenswrapper[4946]: I1128 09:02:05.693569 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 28 09:02:05 crc kubenswrapper[4946]: I1128 09:02:05.803703 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7353f743-3daf-4cfd-bbbd-c3ac503c1161-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7353f743-3daf-4cfd-bbbd-c3ac503c1161\") " pod="openstack/ceilometer-0"
Nov 28 09:02:05 crc kubenswrapper[4946]: I1128 09:02:05.803765 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7353f743-3daf-4cfd-bbbd-c3ac503c1161-scripts\") pod \"ceilometer-0\" (UID: \"7353f743-3daf-4cfd-bbbd-c3ac503c1161\") " pod="openstack/ceilometer-0"
Nov 28 09:02:05 crc kubenswrapper[4946]: I1128 09:02:05.804070 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7353f743-3daf-4cfd-bbbd-c3ac503c1161-run-httpd\") pod \"ceilometer-0\" (UID: \"7353f743-3daf-4cfd-bbbd-c3ac503c1161\") " pod="openstack/ceilometer-0"
Nov 28 09:02:05 crc kubenswrapper[4946]: I1128 09:02:05.804192 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7353f743-3daf-4cfd-bbbd-c3ac503c1161-log-httpd\") pod \"ceilometer-0\" (UID: \"7353f743-3daf-4cfd-bbbd-c3ac503c1161\") " pod="openstack/ceilometer-0"
Nov 28 09:02:05 crc kubenswrapper[4946]: I1128 09:02:05.804234 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7353f743-3daf-4cfd-bbbd-c3ac503c1161-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7353f743-3daf-4cfd-bbbd-c3ac503c1161\") " pod="openstack/ceilometer-0"
Nov 28 09:02:05 crc kubenswrapper[4946]: I1128 09:02:05.804251 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7353f743-3daf-4cfd-bbbd-c3ac503c1161-config-data\") pod \"ceilometer-0\" (UID: \"7353f743-3daf-4cfd-bbbd-c3ac503c1161\") " pod="openstack/ceilometer-0"
Nov 28 09:02:05 crc kubenswrapper[4946]: I1128 09:02:05.804310 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nbjj\" (UniqueName: \"kubernetes.io/projected/7353f743-3daf-4cfd-bbbd-c3ac503c1161-kube-api-access-4nbjj\") pod \"ceilometer-0\" (UID: \"7353f743-3daf-4cfd-bbbd-c3ac503c1161\") " pod="openstack/ceilometer-0"
Nov 28 09:02:05 crc kubenswrapper[4946]: I1128 09:02:05.906339 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7353f743-3daf-4cfd-bbbd-c3ac503c1161-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7353f743-3daf-4cfd-bbbd-c3ac503c1161\") " pod="openstack/ceilometer-0"
Nov 28 09:02:05 crc kubenswrapper[4946]: I1128 09:02:05.906406 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7353f743-3daf-4cfd-bbbd-c3ac503c1161-scripts\") pod \"ceilometer-0\" (UID: \"7353f743-3daf-4cfd-bbbd-c3ac503c1161\") " pod="openstack/ceilometer-0"
Nov 28 09:02:05 crc kubenswrapper[4946]: I1128 09:02:05.906508 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7353f743-3daf-4cfd-bbbd-c3ac503c1161-run-httpd\") pod \"ceilometer-0\" (UID: \"7353f743-3daf-4cfd-bbbd-c3ac503c1161\") " pod="openstack/ceilometer-0"
Nov 28 09:02:05 crc kubenswrapper[4946]: I1128 09:02:05.906540 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7353f743-3daf-4cfd-bbbd-c3ac503c1161-log-httpd\") pod \"ceilometer-0\" (UID: \"7353f743-3daf-4cfd-bbbd-c3ac503c1161\") " pod="openstack/ceilometer-0"
Nov 28 09:02:05 crc kubenswrapper[4946]: I1128 09:02:05.906561 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7353f743-3daf-4cfd-bbbd-c3ac503c1161-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7353f743-3daf-4cfd-bbbd-c3ac503c1161\") " pod="openstack/ceilometer-0"
Nov 28 09:02:05 crc kubenswrapper[4946]: I1128 09:02:05.906577 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7353f743-3daf-4cfd-bbbd-c3ac503c1161-config-data\") pod \"ceilometer-0\" (UID: \"7353f743-3daf-4cfd-bbbd-c3ac503c1161\") " pod="openstack/ceilometer-0"
Nov 28 09:02:05 crc kubenswrapper[4946]: I1128 09:02:05.906597 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nbjj\" (UniqueName: \"kubernetes.io/projected/7353f743-3daf-4cfd-bbbd-c3ac503c1161-kube-api-access-4nbjj\") pod \"ceilometer-0\" (UID: \"7353f743-3daf-4cfd-bbbd-c3ac503c1161\") " pod="openstack/ceilometer-0"
Nov 28 09:02:05 crc kubenswrapper[4946]: I1128 09:02:05.915235 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7353f743-3daf-4cfd-bbbd-c3ac503c1161-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7353f743-3daf-4cfd-bbbd-c3ac503c1161\") " pod="openstack/ceilometer-0"
Nov 28 09:02:05 crc kubenswrapper[4946]: I1128 09:02:05.915507 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7353f743-3daf-4cfd-bbbd-c3ac503c1161-run-httpd\") pod \"ceilometer-0\" (UID: \"7353f743-3daf-4cfd-bbbd-c3ac503c1161\") " pod="openstack/ceilometer-0"
Nov 28 09:02:05 crc kubenswrapper[4946]: I1128 09:02:05.923243 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7353f743-3daf-4cfd-bbbd-c3ac503c1161-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7353f743-3daf-4cfd-bbbd-c3ac503c1161\") " pod="openstack/ceilometer-0"
Nov 28 09:02:05 crc kubenswrapper[4946]: I1128 09:02:05.924163 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7353f743-3daf-4cfd-bbbd-c3ac503c1161-scripts\") pod \"ceilometer-0\" (UID: \"7353f743-3daf-4cfd-bbbd-c3ac503c1161\") " pod="openstack/ceilometer-0"
Nov 28 09:02:05 crc kubenswrapper[4946]: I1128 09:02:05.926774 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7353f743-3daf-4cfd-bbbd-c3ac503c1161-log-httpd\") pod \"ceilometer-0\" (UID: \"7353f743-3daf-4cfd-bbbd-c3ac503c1161\") " pod="openstack/ceilometer-0"
Nov 28 09:02:05 crc kubenswrapper[4946]: I1128 09:02:05.931290 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7353f743-3daf-4cfd-bbbd-c3ac503c1161-config-data\") pod \"ceilometer-0\" (UID: \"7353f743-3daf-4cfd-bbbd-c3ac503c1161\") " pod="openstack/ceilometer-0"
Nov 28 09:02:05 crc kubenswrapper[4946]: I1128 09:02:05.945875 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nbjj\" (UniqueName: \"kubernetes.io/projected/7353f743-3daf-4cfd-bbbd-c3ac503c1161-kube-api-access-4nbjj\") pod \"ceilometer-0\" (UID: \"7353f743-3daf-4cfd-bbbd-c3ac503c1161\") " pod="openstack/ceilometer-0"
Nov 28 09:02:05 crc kubenswrapper[4946]: I1128 09:02:05.994171 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 28 09:02:06 crc kubenswrapper[4946]: I1128 09:02:06.000256 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87d9a427-9315-4a98-8a43-a204ef1c9ea4" path="/var/lib/kubelet/pods/87d9a427-9315-4a98-8a43-a204ef1c9ea4/volumes"
Nov 28 09:02:06 crc kubenswrapper[4946]: I1128 09:02:06.442823 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 28 09:02:06 crc kubenswrapper[4946]: I1128 09:02:06.607914 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7353f743-3daf-4cfd-bbbd-c3ac503c1161","Type":"ContainerStarted","Data":"ab7f1847df2ad4204b7752b546ce56c8db63a65f632ab34d349ff90b215cd683"}
Nov 28 09:02:07 crc kubenswrapper[4946]: I1128 09:02:07.620131 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7353f743-3daf-4cfd-bbbd-c3ac503c1161","Type":"ContainerStarted","Data":"8c3daaecbeda55f163130b67878abf3af829d1a327437c30bb16e541428fbab9"}
Nov 28 09:02:07 crc kubenswrapper[4946]: I1128 09:02:07.620475 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7353f743-3daf-4cfd-bbbd-c3ac503c1161","Type":"ContainerStarted","Data":"38278a40c0c78b516521e8aca3e236a11916d8bb4b0d2104753cdd16d846dc9b"}
Nov 28 09:02:07 crc kubenswrapper[4946]: I1128 09:02:07.956778 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0"
Nov 28 09:02:08 crc kubenswrapper[4946]: I1128 09:02:08.634913 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7353f743-3daf-4cfd-bbbd-c3ac503c1161","Type":"ContainerStarted","Data":"5092a15a5186e8bce7625bb9dfc4ac2df873130c3af50c44107505f22f7eb6cc"}
Nov 28 09:02:09 crc kubenswrapper[4946]: I1128 09:02:09.468071 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0"
Nov 28 09:02:09 crc kubenswrapper[4946]: I1128 09:02:09.574593 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0"
Nov 28 09:02:09 crc kubenswrapper[4946]: I1128 09:02:09.646151 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7353f743-3daf-4cfd-bbbd-c3ac503c1161","Type":"ContainerStarted","Data":"e7be8c57d04bf392641b8b53a9e48c6a62bb3e83dd03e3c2c412468166070f8c"}
Nov 28 09:02:09 crc kubenswrapper[4946]: I1128 09:02:09.647479 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Nov 28 09:02:09 crc kubenswrapper[4946]: I1128 09:02:09.805406 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/manila-api-0"
Nov 28 09:02:09 crc kubenswrapper[4946]: I1128 09:02:09.822678 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.297200361 podStartE2EDuration="4.822658387s" podCreationTimestamp="2025-11-28 09:02:05 +0000 UTC" firstStartedPulling="2025-11-28 09:02:06.446642096 +0000 UTC m=+7780.824707227" lastFinishedPulling="2025-11-28 09:02:08.972100142 +0000 UTC m=+7783.350165253" observedRunningTime="2025-11-28 09:02:09.66972948 +0000 UTC m=+7784.047794591" watchObservedRunningTime="2025-11-28 09:02:09.822658387 +0000 UTC m=+7784.200723498"
Nov 28 09:02:27 crc kubenswrapper[4946]: I1128 09:02:27.073884 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-rzjfc"]
Nov 28 09:02:27 crc kubenswrapper[4946]: I1128 09:02:27.089041 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-tz944"]
Nov 28 09:02:27 crc kubenswrapper[4946]: I1128 09:02:27.100227 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-tz944"]
Nov 28 09:02:27 crc kubenswrapper[4946]: I1128 09:02:27.108739 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-rzjfc"]
Nov 28 09:02:28 crc kubenswrapper[4946]: I1128 09:02:28.015868 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60ea7f40-c928-4e4d-afe5-ab0b4a76e918" path="/var/lib/kubelet/pods/60ea7f40-c928-4e4d-afe5-ab0b4a76e918/volumes"
Nov 28 09:02:28 crc kubenswrapper[4946]: I1128 09:02:28.018652 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d510bda-fc5e-412b-be7a-e7f6154da6ff" path="/var/lib/kubelet/pods/9d510bda-fc5e-412b-be7a-e7f6154da6ff/volumes"
Nov 28 09:02:28 crc kubenswrapper[4946]: I1128 09:02:28.041608 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-263b-account-create-update-lr5fj"]
Nov 28 09:02:28 crc kubenswrapper[4946]: I1128 09:02:28.056205 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-263b-account-create-update-lr5fj"]
Nov 28 09:02:28 crc kubenswrapper[4946]: I1128 09:02:28.065806 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-ztrmk"]
Nov 28 09:02:28 crc kubenswrapper[4946]: I1128 09:02:28.074001 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-ztrmk"]
Nov 28 09:02:29 crc kubenswrapper[4946]: I1128 09:02:29.033536 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-66e9-account-create-update-bjp9j"]
Nov 28 09:02:29 crc kubenswrapper[4946]: I1128 09:02:29.044592 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-66e9-account-create-update-bjp9j"]
Nov 28 09:02:29 crc kubenswrapper[4946]: I1128 09:02:29.059168 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-5268-account-create-update-nkrh2"]
Nov 28 09:02:29 crc kubenswrapper[4946]: I1128 09:02:29.071943 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-5268-account-create-update-nkrh2"]
Nov 28 09:02:30 crc kubenswrapper[4946]: I1128 09:02:30.012924 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="574aecae-6184-4f4c-abed-81ec30f03ac0" path="/var/lib/kubelet/pods/574aecae-6184-4f4c-abed-81ec30f03ac0/volumes"
Nov 28 09:02:30 crc kubenswrapper[4946]: I1128 09:02:30.014459 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d6ecba1-ee43-4308-ba77-ea66427ac798" path="/var/lib/kubelet/pods/7d6ecba1-ee43-4308-ba77-ea66427ac798/volumes"
Nov 28 09:02:30 crc kubenswrapper[4946]: I1128 09:02:30.015821 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9dbfdfa4-2fc0-42de-bee6-5657463640c6" path="/var/lib/kubelet/pods/9dbfdfa4-2fc0-42de-bee6-5657463640c6/volumes"
Nov 28 09:02:30 crc kubenswrapper[4946]: I1128 09:02:30.017021 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df17e46e-2f32-4c8d-9aa7-cba30670c85f" path="/var/lib/kubelet/pods/df17e46e-2f32-4c8d-9aa7-cba30670c85f/volumes"
Nov 28 09:02:36 crc kubenswrapper[4946]: I1128 09:02:36.003942 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Nov 28 09:02:49 crc kubenswrapper[4946]: I1128 09:02:49.949831 4946 scope.go:117] "RemoveContainer" containerID="c9e18741ca662196f97cc7cadf00628471e1f36f0c701a8c5526ce71a63523b2"
Nov 28 09:02:50 crc kubenswrapper[4946]: I1128 09:02:50.010862 4946 scope.go:117] "RemoveContainer" containerID="06270bef99c40dec19eab6dc72c4ad9ad5a33f12cbd902dd086ce31d00d4f23b"
Nov 28 09:02:50 crc kubenswrapper[4946]: I1128 09:02:50.072252 4946 scope.go:117] "RemoveContainer" containerID="cf02778d1f39a71494c7dc1c914b706e50d6c137d07c843c16caf662ecdfdeb7"
Nov 28 09:02:50 crc kubenswrapper[4946]: I1128 09:02:50.143218 4946 scope.go:117] "RemoveContainer" containerID="2120d25158b240e41e34e6642365a529363c3785e0137bdf59590a41e526f192"
Nov 28 09:02:50 crc kubenswrapper[4946]: I1128 09:02:50.167088 4946 scope.go:117] "RemoveContainer" containerID="d71f454e72d4601e82dbfb46fbbdb717a54eb183f4d5d8fc4f330c6324b0ba2a"
Nov 28 09:02:50 crc kubenswrapper[4946]: I1128 09:02:50.215133 4946 scope.go:117] "RemoveContainer" containerID="b25e6a0e37465a270b99cc767412f5d71891fc08a3ed17e43ce822fa6e38699c"
Nov 28 09:02:51 crc kubenswrapper[4946]: I1128 09:02:51.042354 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-m8sbg"]
Nov 28 09:02:51 crc kubenswrapper[4946]: I1128 09:02:51.051363 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-m8sbg"]
Nov 28 09:02:52 crc kubenswrapper[4946]: I1128 09:02:52.003005 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be2290f7-78ba-4e50-9e73-8732b2e2639c" path="/var/lib/kubelet/pods/be2290f7-78ba-4e50-9e73-8732b2e2639c/volumes"
Nov 28 09:02:55 crc kubenswrapper[4946]: I1128 09:02:55.942713 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76c856f7f9-64gg5"]
Nov 28 09:02:55 crc kubenswrapper[4946]: I1128 09:02:55.948349 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76c856f7f9-64gg5"
Nov 28 09:02:55 crc kubenswrapper[4946]: I1128 09:02:55.950776 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1"
Nov 28 09:02:55 crc kubenswrapper[4946]: I1128 09:02:55.978107 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76c856f7f9-64gg5"]
Nov 28 09:02:56 crc kubenswrapper[4946]: I1128 09:02:56.143354 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/b5a17ba4-ab8f-4369-893c-645128a36024-openstack-cell1\") pod \"dnsmasq-dns-76c856f7f9-64gg5\" (UID: \"b5a17ba4-ab8f-4369-893c-645128a36024\") " pod="openstack/dnsmasq-dns-76c856f7f9-64gg5"
Nov 28 09:02:56 crc kubenswrapper[4946]: I1128 09:02:56.143427 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkbwb\" (UniqueName: \"kubernetes.io/projected/b5a17ba4-ab8f-4369-893c-645128a36024-kube-api-access-vkbwb\") pod \"dnsmasq-dns-76c856f7f9-64gg5\" (UID: \"b5a17ba4-ab8f-4369-893c-645128a36024\") " pod="openstack/dnsmasq-dns-76c856f7f9-64gg5"
Nov 28 09:02:56 crc kubenswrapper[4946]: I1128 09:02:56.143519 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5a17ba4-ab8f-4369-893c-645128a36024-config\") pod \"dnsmasq-dns-76c856f7f9-64gg5\" (UID: \"b5a17ba4-ab8f-4369-893c-645128a36024\") " pod="openstack/dnsmasq-dns-76c856f7f9-64gg5"
Nov 28 09:02:56 crc kubenswrapper[4946]: I1128 09:02:56.143593 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b5a17ba4-ab8f-4369-893c-645128a36024-ovsdbserver-sb\") pod \"dnsmasq-dns-76c856f7f9-64gg5\" (UID: \"b5a17ba4-ab8f-4369-893c-645128a36024\") " pod="openstack/dnsmasq-dns-76c856f7f9-64gg5"
Nov 28 09:02:56 crc kubenswrapper[4946]: I1128 09:02:56.143659 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5a17ba4-ab8f-4369-893c-645128a36024-dns-svc\") pod \"dnsmasq-dns-76c856f7f9-64gg5\" (UID: \"b5a17ba4-ab8f-4369-893c-645128a36024\") " pod="openstack/dnsmasq-dns-76c856f7f9-64gg5"
Nov 28 09:02:56 crc kubenswrapper[4946]: I1128 09:02:56.143700 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b5a17ba4-ab8f-4369-893c-645128a36024-ovsdbserver-nb\") pod \"dnsmasq-dns-76c856f7f9-64gg5\" (UID: \"b5a17ba4-ab8f-4369-893c-645128a36024\") " pod="openstack/dnsmasq-dns-76c856f7f9-64gg5"
Nov 28 09:02:56 crc kubenswrapper[4946]: I1128 09:02:56.246235 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5a17ba4-ab8f-4369-893c-645128a36024-dns-svc\") pod \"dnsmasq-dns-76c856f7f9-64gg5\" (UID: \"b5a17ba4-ab8f-4369-893c-645128a36024\") " pod="openstack/dnsmasq-dns-76c856f7f9-64gg5"
Nov 28 09:02:56 crc kubenswrapper[4946]: I1128 09:02:56.246348 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b5a17ba4-ab8f-4369-893c-645128a36024-ovsdbserver-nb\") pod \"dnsmasq-dns-76c856f7f9-64gg5\" (UID: \"b5a17ba4-ab8f-4369-893c-645128a36024\") " pod="openstack/dnsmasq-dns-76c856f7f9-64gg5"
Nov 28 09:02:56 crc kubenswrapper[4946]: I1128 09:02:56.247591 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b5a17ba4-ab8f-4369-893c-645128a36024-ovsdbserver-nb\") pod \"dnsmasq-dns-76c856f7f9-64gg5\" (UID: \"b5a17ba4-ab8f-4369-893c-645128a36024\") " pod="openstack/dnsmasq-dns-76c856f7f9-64gg5"
Nov 28 09:02:56 crc kubenswrapper[4946]: I1128 09:02:56.247756 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/b5a17ba4-ab8f-4369-893c-645128a36024-openstack-cell1\") pod \"dnsmasq-dns-76c856f7f9-64gg5\" (UID: \"b5a17ba4-ab8f-4369-893c-645128a36024\") " pod="openstack/dnsmasq-dns-76c856f7f9-64gg5"
Nov 28 09:02:56 crc kubenswrapper[4946]: I1128 09:02:56.247789 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkbwb\" (UniqueName: \"kubernetes.io/projected/b5a17ba4-ab8f-4369-893c-645128a36024-kube-api-access-vkbwb\") pod \"dnsmasq-dns-76c856f7f9-64gg5\" (UID: \"b5a17ba4-ab8f-4369-893c-645128a36024\") " pod="openstack/dnsmasq-dns-76c856f7f9-64gg5"
Nov 28 09:02:56 crc kubenswrapper[4946]: I1128 09:02:56.247839 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5a17ba4-ab8f-4369-893c-645128a36024-dns-svc\") pod \"dnsmasq-dns-76c856f7f9-64gg5\" (UID: \"b5a17ba4-ab8f-4369-893c-645128a36024\") " pod="openstack/dnsmasq-dns-76c856f7f9-64gg5"
Nov 28 09:02:56 crc kubenswrapper[4946]: I1128 09:02:56.247862 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5a17ba4-ab8f-4369-893c-645128a36024-config\") pod \"dnsmasq-dns-76c856f7f9-64gg5\" (UID: \"b5a17ba4-ab8f-4369-893c-645128a36024\") " pod="openstack/dnsmasq-dns-76c856f7f9-64gg5"
Nov 28 09:02:56 crc kubenswrapper[4946]: I1128 09:02:56.248015 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b5a17ba4-ab8f-4369-893c-645128a36024-ovsdbserver-sb\") pod \"dnsmasq-dns-76c856f7f9-64gg5\" (UID: \"b5a17ba4-ab8f-4369-893c-645128a36024\") " pod="openstack/dnsmasq-dns-76c856f7f9-64gg5"
Nov 28 09:02:56 crc kubenswrapper[4946]: I1128 09:02:56.248426 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5a17ba4-ab8f-4369-893c-645128a36024-config\") pod \"dnsmasq-dns-76c856f7f9-64gg5\" (UID: \"b5a17ba4-ab8f-4369-893c-645128a36024\") " pod="openstack/dnsmasq-dns-76c856f7f9-64gg5"
Nov 28 09:02:56 crc kubenswrapper[4946]: I1128 09:02:56.250098 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b5a17ba4-ab8f-4369-893c-645128a36024-ovsdbserver-sb\") pod \"dnsmasq-dns-76c856f7f9-64gg5\" (UID: \"b5a17ba4-ab8f-4369-893c-645128a36024\") " pod="openstack/dnsmasq-dns-76c856f7f9-64gg5"
Nov 28 09:02:56 crc kubenswrapper[4946]: I1128 09:02:56.250171 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/b5a17ba4-ab8f-4369-893c-645128a36024-openstack-cell1\") pod \"dnsmasq-dns-76c856f7f9-64gg5\" (UID: \"b5a17ba4-ab8f-4369-893c-645128a36024\") " pod="openstack/dnsmasq-dns-76c856f7f9-64gg5"
Nov 28 09:02:56 crc kubenswrapper[4946]: I1128 09:02:56.274233 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkbwb\" (UniqueName: \"kubernetes.io/projected/b5a17ba4-ab8f-4369-893c-645128a36024-kube-api-access-vkbwb\") pod \"dnsmasq-dns-76c856f7f9-64gg5\" (UID: \"b5a17ba4-ab8f-4369-893c-645128a36024\") " pod="openstack/dnsmasq-dns-76c856f7f9-64gg5"
Nov 28 09:02:56 crc kubenswrapper[4946]: I1128 09:02:56.276409 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76c856f7f9-64gg5"
Nov 28 09:02:56 crc kubenswrapper[4946]: I1128 09:02:56.744264 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76c856f7f9-64gg5"]
Nov 28 09:02:57 crc kubenswrapper[4946]: I1128 09:02:57.381359 4946 generic.go:334] "Generic (PLEG): container finished" podID="b5a17ba4-ab8f-4369-893c-645128a36024" containerID="3fb710f207bef39cad6d32551df25899fccc9206e0b99e63159b09093767dcc0" exitCode=0
Nov 28 09:02:57 crc kubenswrapper[4946]: I1128 09:02:57.381416 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76c856f7f9-64gg5" event={"ID":"b5a17ba4-ab8f-4369-893c-645128a36024","Type":"ContainerDied","Data":"3fb710f207bef39cad6d32551df25899fccc9206e0b99e63159b09093767dcc0"}
Nov 28 09:02:57 crc kubenswrapper[4946]: I1128 09:02:57.382810 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76c856f7f9-64gg5" event={"ID":"b5a17ba4-ab8f-4369-893c-645128a36024","Type":"ContainerStarted","Data":"0df6298a58e990bc2315d005ea4ee724c9684fc36adb2160fe2eb13c36c6626c"}
Nov 28 09:02:58 crc kubenswrapper[4946]: I1128 09:02:58.398218 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76c856f7f9-64gg5" event={"ID":"b5a17ba4-ab8f-4369-893c-645128a36024","Type":"ContainerStarted","Data":"a88cf115d824055c27c7f7bcdcb51cd896bc951e349aba28a20b1023a96b3ab9"}
Nov 28 09:02:58 crc kubenswrapper[4946]: I1128 09:02:58.444972 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-76c856f7f9-64gg5" podStartSLOduration=3.44495185 podStartE2EDuration="3.44495185s" podCreationTimestamp="2025-11-28 09:02:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 09:02:58.427918638 +0000 UTC m=+7832.805983779" watchObservedRunningTime="2025-11-28 09:02:58.44495185 +0000 UTC m=+7832.823016971"
Nov 28 09:02:59 crc kubenswrapper[4946]: I1128 09:02:59.408946 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-76c856f7f9-64gg5"
Nov 28 09:03:05 crc kubenswrapper[4946]: I1128 09:03:05.042012 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-cwv2r"]
Nov 28 09:03:05 crc kubenswrapper[4946]: I1128 09:03:05.079087 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-qjtsc"]
Nov 28 09:03:05 crc kubenswrapper[4946]: I1128 09:03:05.090152 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-cwv2r"]
Nov 28 09:03:05 crc kubenswrapper[4946]: I1128 09:03:05.104493 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-qjtsc"]
Nov 28 09:03:06 crc kubenswrapper[4946]: I1128 09:03:06.001727 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="630068c6-dca8-4f28-b680-9c730c477328" path="/var/lib/kubelet/pods/630068c6-dca8-4f28-b680-9c730c477328/volumes"
Nov 28 09:03:06 crc kubenswrapper[4946]: I1128 09:03:06.002295 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7724247-b3f0-4422-9120-b4ab79b4e613" path="/var/lib/kubelet/pods/a7724247-b3f0-4422-9120-b4ab79b4e613/volumes"
Nov 28 09:03:06 crc kubenswrapper[4946]: I1128 09:03:06.278814 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-76c856f7f9-64gg5"
Nov 28 09:03:06 crc kubenswrapper[4946]: I1128 09:03:06.354775 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8bd8679dc-g9r8k"]
Nov 28 09:03:06 crc kubenswrapper[4946]: I1128 09:03:06.355311 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8bd8679dc-g9r8k" podUID="9735f8fd-4a68-461b-9029-3a55c266570b" containerName="dnsmasq-dns" containerID="cri-o://0caa6f7026691fcc3a35ef5fc62686ec26afb2f4ee35b0e822f14dc0bbc356eb" gracePeriod=10
Nov 28 09:03:06 crc kubenswrapper[4946]: I1128 09:03:06.530271 4946 generic.go:334] "Generic (PLEG): container finished" podID="9735f8fd-4a68-461b-9029-3a55c266570b" containerID="0caa6f7026691fcc3a35ef5fc62686ec26afb2f4ee35b0e822f14dc0bbc356eb" exitCode=0
Nov 28 09:03:06 crc kubenswrapper[4946]: I1128 09:03:06.530310 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8bd8679dc-g9r8k" event={"ID":"9735f8fd-4a68-461b-9029-3a55c266570b","Type":"ContainerDied","Data":"0caa6f7026691fcc3a35ef5fc62686ec26afb2f4ee35b0e822f14dc0bbc356eb"}
Nov 28 09:03:06 crc kubenswrapper[4946]: I1128 09:03:06.533941 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68596bf757-wmlsg"]
Nov 28 09:03:06 crc kubenswrapper[4946]: I1128 09:03:06.536480 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68596bf757-wmlsg"
Nov 28 09:03:06 crc kubenswrapper[4946]: I1128 09:03:06.543088 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-networker"
Nov 28 09:03:06 crc kubenswrapper[4946]: I1128 09:03:06.561934 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68596bf757-wmlsg"]
Nov 28 09:03:06 crc kubenswrapper[4946]: I1128 09:03:06.637590 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68596bf757-wmlsg"]
Nov 28 09:03:06 crc kubenswrapper[4946]: E1128 09:03:06.638480 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc kube-api-access-f4khm openstack-cell1 openstack-networker ovsdbserver-nb ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-68596bf757-wmlsg" podUID="2c3657d9-0640-4776-981c-cd8cefb489cc"
Nov 28 09:03:06 crc kubenswrapper[4946]: I1128 09:03:06.687874 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675bd7f5bc-6xj4d"]
Nov 28 09:03:06 crc kubenswrapper[4946]: I1128 09:03:06.689744 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675bd7f5bc-6xj4d"
Nov 28 09:03:06 crc kubenswrapper[4946]: I1128 09:03:06.698420 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675bd7f5bc-6xj4d"]
Nov 28 09:03:06 crc kubenswrapper[4946]: I1128 09:03:06.699531 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c3657d9-0640-4776-981c-cd8cefb489cc-dns-svc\") pod \"dnsmasq-dns-68596bf757-wmlsg\" (UID: \"2c3657d9-0640-4776-981c-cd8cefb489cc\") " pod="openstack/dnsmasq-dns-68596bf757-wmlsg"
Nov 28 09:03:06 crc kubenswrapper[4946]: I1128 09:03:06.699563 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4khm\" (UniqueName: \"kubernetes.io/projected/2c3657d9-0640-4776-981c-cd8cefb489cc-kube-api-access-f4khm\") pod \"dnsmasq-dns-68596bf757-wmlsg\" (UID: \"2c3657d9-0640-4776-981c-cd8cefb489cc\") " pod="openstack/dnsmasq-dns-68596bf757-wmlsg"
Nov 28 09:03:06 crc kubenswrapper[4946]: I1128 09:03:06.699673 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c3657d9-0640-4776-981c-cd8cefb489cc-config\") pod \"dnsmasq-dns-68596bf757-wmlsg\" (UID: \"2c3657d9-0640-4776-981c-cd8cefb489cc\") " pod="openstack/dnsmasq-dns-68596bf757-wmlsg"
Nov 28 09:03:06 crc kubenswrapper[4946]: I1128 09:03:06.699748 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c3657d9-0640-4776-981c-cd8cefb489cc-ovsdbserver-sb\") pod \"dnsmasq-dns-68596bf757-wmlsg\" (UID: \"2c3657d9-0640-4776-981c-cd8cefb489cc\") " pod="openstack/dnsmasq-dns-68596bf757-wmlsg"
Nov 28 09:03:06 crc kubenswrapper[4946]: I1128 09:03:06.699777 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-networker\" (UniqueName: \"kubernetes.io/configmap/2c3657d9-0640-4776-981c-cd8cefb489cc-openstack-networker\") pod \"dnsmasq-dns-68596bf757-wmlsg\" (UID: \"2c3657d9-0640-4776-981c-cd8cefb489cc\") " pod="openstack/dnsmasq-dns-68596bf757-wmlsg"
Nov 28 09:03:06 crc kubenswrapper[4946]: I1128 09:03:06.699815 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c3657d9-0640-4776-981c-cd8cefb489cc-ovsdbserver-nb\") pod \"dnsmasq-dns-68596bf757-wmlsg\" (UID: \"2c3657d9-0640-4776-981c-cd8cefb489cc\") " pod="openstack/dnsmasq-dns-68596bf757-wmlsg"
Nov 28 09:03:06 crc kubenswrapper[4946]: I1128 09:03:06.699840 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/2c3657d9-0640-4776-981c-cd8cefb489cc-openstack-cell1\") pod \"dnsmasq-dns-68596bf757-wmlsg\" (UID: \"2c3657d9-0640-4776-981c-cd8cefb489cc\") " pod="openstack/dnsmasq-dns-68596bf757-wmlsg"
Nov 28 09:03:06 crc kubenswrapper[4946]: I1128 09:03:06.801777 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/2c3657d9-0640-4776-981c-cd8cefb489cc-openstack-cell1\") pod \"dnsmasq-dns-68596bf757-wmlsg\" (UID: \"2c3657d9-0640-4776-981c-cd8cefb489cc\") " pod="openstack/dnsmasq-dns-68596bf757-wmlsg"
Nov 28 09:03:06 crc kubenswrapper[4946]: I1128 09:03:06.802102 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af25ba3e-ed08-4184-baee-857db122755c-config\") pod \"dnsmasq-dns-675bd7f5bc-6xj4d\" (UID: \"af25ba3e-ed08-4184-baee-857db122755c\") " pod="openstack/dnsmasq-dns-675bd7f5bc-6xj4d"
Nov 28 09:03:06 crc kubenswrapper[4946]: I1128 09:03:06.802170 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af25ba3e-ed08-4184-baee-857db122755c-dns-svc\") pod \"dnsmasq-dns-675bd7f5bc-6xj4d\" (UID: \"af25ba3e-ed08-4184-baee-857db122755c\") " pod="openstack/dnsmasq-dns-675bd7f5bc-6xj4d"
Nov 28 09:03:06 crc kubenswrapper[4946]: I1128 09:03:06.802186 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/af25ba3e-ed08-4184-baee-857db122755c-ovsdbserver-sb\") pod \"dnsmasq-dns-675bd7f5bc-6xj4d\" (UID: \"af25ba3e-ed08-4184-baee-857db122755c\") " pod="openstack/dnsmasq-dns-675bd7f5bc-6xj4d"
Nov 28 09:03:06 crc kubenswrapper[4946]: I1128 09:03:06.802220 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-448rd\" (UniqueName: \"kubernetes.io/projected/af25ba3e-ed08-4184-baee-857db122755c-kube-api-access-448rd\") pod \"dnsmasq-dns-675bd7f5bc-6xj4d\" (UID: \"af25ba3e-ed08-4184-baee-857db122755c\") " pod="openstack/dnsmasq-dns-675bd7f5bc-6xj4d"
Nov 28 09:03:06 crc kubenswrapper[4946]: I1128 09:03:06.802250 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c3657d9-0640-4776-981c-cd8cefb489cc-dns-svc\") pod \"dnsmasq-dns-68596bf757-wmlsg\" (UID: \"2c3657d9-0640-4776-981c-cd8cefb489cc\") " pod="openstack/dnsmasq-dns-68596bf757-wmlsg"
Nov 28 09:03:06 crc kubenswrapper[4946]: I1128 09:03:06.802274 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4khm\" (UniqueName: \"kubernetes.io/projected/2c3657d9-0640-4776-981c-cd8cefb489cc-kube-api-access-f4khm\") pod \"dnsmasq-dns-68596bf757-wmlsg\" (UID: \"2c3657d9-0640-4776-981c-cd8cefb489cc\") " pod="openstack/dnsmasq-dns-68596bf757-wmlsg"
Nov 28 09:03:06 crc kubenswrapper[4946]: I1128 09:03:06.802314 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/af25ba3e-ed08-4184-baee-857db122755c-openstack-cell1\") pod \"dnsmasq-dns-675bd7f5bc-6xj4d\" (UID: \"af25ba3e-ed08-4184-baee-857db122755c\") " pod="openstack/dnsmasq-dns-675bd7f5bc-6xj4d"
Nov 28 09:03:06 crc kubenswrapper[4946]: I1128 09:03:06.802350 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af25ba3e-ed08-4184-baee-857db122755c-ovsdbserver-nb\") pod \"dnsmasq-dns-675bd7f5bc-6xj4d\" (UID: \"af25ba3e-ed08-4184-baee-857db122755c\") " pod="openstack/dnsmasq-dns-675bd7f5bc-6xj4d"
Nov 28 09:03:06 crc kubenswrapper[4946]: I1128 09:03:06.802384 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-networker\" (UniqueName: \"kubernetes.io/configmap/af25ba3e-ed08-4184-baee-857db122755c-openstack-networker\") pod \"dnsmasq-dns-675bd7f5bc-6xj4d\" (UID: \"af25ba3e-ed08-4184-baee-857db122755c\") " pod="openstack/dnsmasq-dns-675bd7f5bc-6xj4d"
Nov 28 09:03:06 crc kubenswrapper[4946]: I1128 09:03:06.802417 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c3657d9-0640-4776-981c-cd8cefb489cc-config\") pod \"dnsmasq-dns-68596bf757-wmlsg\" (UID: \"2c3657d9-0640-4776-981c-cd8cefb489cc\") " pod="openstack/dnsmasq-dns-68596bf757-wmlsg"
Nov 28 09:03:06 crc kubenswrapper[4946]: I1128 09:03:06.802488 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c3657d9-0640-4776-981c-cd8cefb489cc-ovsdbserver-sb\") pod \"dnsmasq-dns-68596bf757-wmlsg\" (UID: \"2c3657d9-0640-4776-981c-cd8cefb489cc\") " pod="openstack/dnsmasq-dns-68596bf757-wmlsg"
Nov 28 09:03:06 crc kubenswrapper[4946]: I1128 09:03:06.802506 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-networker\" (UniqueName: \"kubernetes.io/configmap/2c3657d9-0640-4776-981c-cd8cefb489cc-openstack-networker\") pod \"dnsmasq-dns-68596bf757-wmlsg\" (UID: \"2c3657d9-0640-4776-981c-cd8cefb489cc\") " pod="openstack/dnsmasq-dns-68596bf757-wmlsg"
Nov 28 09:03:06 crc kubenswrapper[4946]: I1128 09:03:06.802534 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c3657d9-0640-4776-981c-cd8cefb489cc-ovsdbserver-nb\") pod \"dnsmasq-dns-68596bf757-wmlsg\" (UID: \"2c3657d9-0640-4776-981c-cd8cefb489cc\") " pod="openstack/dnsmasq-dns-68596bf757-wmlsg"
Nov 28 09:03:06 crc kubenswrapper[4946]: I1128 09:03:06.802747 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/2c3657d9-0640-4776-981c-cd8cefb489cc-openstack-cell1\") pod \"dnsmasq-dns-68596bf757-wmlsg\" (UID: \"2c3657d9-0640-4776-981c-cd8cefb489cc\") " pod="openstack/dnsmasq-dns-68596bf757-wmlsg"
Nov 28 09:03:06 crc kubenswrapper[4946]: I1128 09:03:06.803301 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c3657d9-0640-4776-981c-cd8cefb489cc-ovsdbserver-nb\") pod \"dnsmasq-dns-68596bf757-wmlsg\" (UID: \"2c3657d9-0640-4776-981c-cd8cefb489cc\") " pod="openstack/dnsmasq-dns-68596bf757-wmlsg"
Nov 28 09:03:06 crc kubenswrapper[4946]: I1128 09:03:06.803895 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c3657d9-0640-4776-981c-cd8cefb489cc-config\") pod \"dnsmasq-dns-68596bf757-wmlsg\" (UID: \"2c3657d9-0640-4776-981c-cd8cefb489cc\") " pod="openstack/dnsmasq-dns-68596bf757-wmlsg"
Nov 28 09:03:06 crc kubenswrapper[4946]: I1128 09:03:06.804054 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c3657d9-0640-4776-981c-cd8cefb489cc-ovsdbserver-sb\") pod \"dnsmasq-dns-68596bf757-wmlsg\" (UID: \"2c3657d9-0640-4776-981c-cd8cefb489cc\") " pod="openstack/dnsmasq-dns-68596bf757-wmlsg"
Nov 28 09:03:06 crc kubenswrapper[4946]: I1128 09:03:06.804304 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c3657d9-0640-4776-981c-cd8cefb489cc-dns-svc\") pod \"dnsmasq-dns-68596bf757-wmlsg\" (UID: \"2c3657d9-0640-4776-981c-cd8cefb489cc\") " pod="openstack/dnsmasq-dns-68596bf757-wmlsg"
Nov 28 09:03:06 crc kubenswrapper[4946]: I1128 09:03:06.804419 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-networker\" (UniqueName: \"kubernetes.io/configmap/2c3657d9-0640-4776-981c-cd8cefb489cc-openstack-networker\") pod \"dnsmasq-dns-68596bf757-wmlsg\" (UID: \"2c3657d9-0640-4776-981c-cd8cefb489cc\") " pod="openstack/dnsmasq-dns-68596bf757-wmlsg"
Nov 28 09:03:06 crc kubenswrapper[4946]: I1128 09:03:06.824247 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4khm\" (UniqueName: \"kubernetes.io/projected/2c3657d9-0640-4776-981c-cd8cefb489cc-kube-api-access-f4khm\") pod \"dnsmasq-dns-68596bf757-wmlsg\" (UID: \"2c3657d9-0640-4776-981c-cd8cefb489cc\") " pod="openstack/dnsmasq-dns-68596bf757-wmlsg"
Nov 28 09:03:06 crc kubenswrapper[4946]: I1128 09:03:06.922531 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af25ba3e-ed08-4184-baee-857db122755c-config\") pod \"dnsmasq-dns-675bd7f5bc-6xj4d\" (UID: \"af25ba3e-ed08-4184-baee-857db122755c\") " pod="openstack/dnsmasq-dns-675bd7f5bc-6xj4d"
Nov 28 09:03:06 crc kubenswrapper[4946]: I1128 09:03:06.922655 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af25ba3e-ed08-4184-baee-857db122755c-dns-svc\") pod \"dnsmasq-dns-675bd7f5bc-6xj4d\" (UID: \"af25ba3e-ed08-4184-baee-857db122755c\") " pod="openstack/dnsmasq-dns-675bd7f5bc-6xj4d"
Nov 28 09:03:06 crc kubenswrapper[4946]: I1128 09:03:06.922674 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/af25ba3e-ed08-4184-baee-857db122755c-ovsdbserver-sb\") pod \"dnsmasq-dns-675bd7f5bc-6xj4d\" (UID: \"af25ba3e-ed08-4184-baee-857db122755c\") " pod="openstack/dnsmasq-dns-675bd7f5bc-6xj4d"
Nov 28 09:03:06 crc kubenswrapper[4946]: I1128 09:03:06.922722 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-448rd\" (UniqueName: \"kubernetes.io/projected/af25ba3e-ed08-4184-baee-857db122755c-kube-api-access-448rd\") pod \"dnsmasq-dns-675bd7f5bc-6xj4d\" (UID: \"af25ba3e-ed08-4184-baee-857db122755c\") " pod="openstack/dnsmasq-dns-675bd7f5bc-6xj4d"
Nov 28 09:03:06 crc kubenswrapper[4946]: I1128 09:03:06.922782 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/af25ba3e-ed08-4184-baee-857db122755c-openstack-cell1\") pod \"dnsmasq-dns-675bd7f5bc-6xj4d\" (UID: \"af25ba3e-ed08-4184-baee-857db122755c\") " pod="openstack/dnsmasq-dns-675bd7f5bc-6xj4d"
Nov 28 09:03:06 crc kubenswrapper[4946]: I1128 09:03:06.922811 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af25ba3e-ed08-4184-baee-857db122755c-ovsdbserver-nb\") pod \"dnsmasq-dns-675bd7f5bc-6xj4d\" (UID: \"af25ba3e-ed08-4184-baee-857db122755c\") " pod="openstack/dnsmasq-dns-675bd7f5bc-6xj4d"
Nov 28 09:03:06 crc kubenswrapper[4946]: I1128 09:03:06.922842 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-networker\" (UniqueName: \"kubernetes.io/configmap/af25ba3e-ed08-4184-baee-857db122755c-openstack-networker\") pod \"dnsmasq-dns-675bd7f5bc-6xj4d\" (UID: \"af25ba3e-ed08-4184-baee-857db122755c\") " pod="openstack/dnsmasq-dns-675bd7f5bc-6xj4d"
Nov 28 09:03:06 crc kubenswrapper[4946]: I1128 09:03:06.923737 4946 operation_generator.go:637] "MountVolume.SetUp
succeeded for volume \"openstack-networker\" (UniqueName: \"kubernetes.io/configmap/af25ba3e-ed08-4184-baee-857db122755c-openstack-networker\") pod \"dnsmasq-dns-675bd7f5bc-6xj4d\" (UID: \"af25ba3e-ed08-4184-baee-857db122755c\") " pod="openstack/dnsmasq-dns-675bd7f5bc-6xj4d" Nov 28 09:03:06 crc kubenswrapper[4946]: I1128 09:03:06.926045 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af25ba3e-ed08-4184-baee-857db122755c-dns-svc\") pod \"dnsmasq-dns-675bd7f5bc-6xj4d\" (UID: \"af25ba3e-ed08-4184-baee-857db122755c\") " pod="openstack/dnsmasq-dns-675bd7f5bc-6xj4d" Nov 28 09:03:06 crc kubenswrapper[4946]: I1128 09:03:06.926346 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af25ba3e-ed08-4184-baee-857db122755c-config\") pod \"dnsmasq-dns-675bd7f5bc-6xj4d\" (UID: \"af25ba3e-ed08-4184-baee-857db122755c\") " pod="openstack/dnsmasq-dns-675bd7f5bc-6xj4d" Nov 28 09:03:06 crc kubenswrapper[4946]: I1128 09:03:06.926653 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af25ba3e-ed08-4184-baee-857db122755c-ovsdbserver-nb\") pod \"dnsmasq-dns-675bd7f5bc-6xj4d\" (UID: \"af25ba3e-ed08-4184-baee-857db122755c\") " pod="openstack/dnsmasq-dns-675bd7f5bc-6xj4d" Nov 28 09:03:06 crc kubenswrapper[4946]: I1128 09:03:06.929795 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/af25ba3e-ed08-4184-baee-857db122755c-openstack-cell1\") pod \"dnsmasq-dns-675bd7f5bc-6xj4d\" (UID: \"af25ba3e-ed08-4184-baee-857db122755c\") " pod="openstack/dnsmasq-dns-675bd7f5bc-6xj4d" Nov 28 09:03:06 crc kubenswrapper[4946]: I1128 09:03:06.929984 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/af25ba3e-ed08-4184-baee-857db122755c-ovsdbserver-sb\") pod \"dnsmasq-dns-675bd7f5bc-6xj4d\" (UID: \"af25ba3e-ed08-4184-baee-857db122755c\") " pod="openstack/dnsmasq-dns-675bd7f5bc-6xj4d" Nov 28 09:03:06 crc kubenswrapper[4946]: I1128 09:03:06.941222 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-448rd\" (UniqueName: \"kubernetes.io/projected/af25ba3e-ed08-4184-baee-857db122755c-kube-api-access-448rd\") pod \"dnsmasq-dns-675bd7f5bc-6xj4d\" (UID: \"af25ba3e-ed08-4184-baee-857db122755c\") " pod="openstack/dnsmasq-dns-675bd7f5bc-6xj4d" Nov 28 09:03:06 crc kubenswrapper[4946]: I1128 09:03:06.996500 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8bd8679dc-g9r8k" Nov 28 09:03:07 crc kubenswrapper[4946]: I1128 09:03:07.023420 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675bd7f5bc-6xj4d" Nov 28 09:03:07 crc kubenswrapper[4946]: I1128 09:03:07.137669 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9735f8fd-4a68-461b-9029-3a55c266570b-ovsdbserver-sb\") pod \"9735f8fd-4a68-461b-9029-3a55c266570b\" (UID: \"9735f8fd-4a68-461b-9029-3a55c266570b\") " Nov 28 09:03:07 crc kubenswrapper[4946]: I1128 09:03:07.138039 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kk2sr\" (UniqueName: \"kubernetes.io/projected/9735f8fd-4a68-461b-9029-3a55c266570b-kube-api-access-kk2sr\") pod \"9735f8fd-4a68-461b-9029-3a55c266570b\" (UID: \"9735f8fd-4a68-461b-9029-3a55c266570b\") " Nov 28 09:03:07 crc kubenswrapper[4946]: I1128 09:03:07.138243 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9735f8fd-4a68-461b-9029-3a55c266570b-ovsdbserver-nb\") pod \"9735f8fd-4a68-461b-9029-3a55c266570b\" (UID: \"9735f8fd-4a68-461b-9029-3a55c266570b\") " Nov 28 09:03:07 crc kubenswrapper[4946]: I1128 09:03:07.138286 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9735f8fd-4a68-461b-9029-3a55c266570b-config\") pod \"9735f8fd-4a68-461b-9029-3a55c266570b\" (UID: \"9735f8fd-4a68-461b-9029-3a55c266570b\") " Nov 28 09:03:07 crc kubenswrapper[4946]: I1128 09:03:07.138437 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9735f8fd-4a68-461b-9029-3a55c266570b-dns-svc\") pod \"9735f8fd-4a68-461b-9029-3a55c266570b\" (UID: \"9735f8fd-4a68-461b-9029-3a55c266570b\") " Nov 28 09:03:07 crc kubenswrapper[4946]: I1128 09:03:07.143125 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9735f8fd-4a68-461b-9029-3a55c266570b-kube-api-access-kk2sr" (OuterVolumeSpecName: "kube-api-access-kk2sr") pod "9735f8fd-4a68-461b-9029-3a55c266570b" (UID: "9735f8fd-4a68-461b-9029-3a55c266570b"). InnerVolumeSpecName "kube-api-access-kk2sr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:03:07 crc kubenswrapper[4946]: I1128 09:03:07.205743 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9735f8fd-4a68-461b-9029-3a55c266570b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9735f8fd-4a68-461b-9029-3a55c266570b" (UID: "9735f8fd-4a68-461b-9029-3a55c266570b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 09:03:07 crc kubenswrapper[4946]: I1128 09:03:07.220695 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9735f8fd-4a68-461b-9029-3a55c266570b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9735f8fd-4a68-461b-9029-3a55c266570b" (UID: "9735f8fd-4a68-461b-9029-3a55c266570b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 09:03:07 crc kubenswrapper[4946]: I1128 09:03:07.220759 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9735f8fd-4a68-461b-9029-3a55c266570b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9735f8fd-4a68-461b-9029-3a55c266570b" (UID: "9735f8fd-4a68-461b-9029-3a55c266570b"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 09:03:07 crc kubenswrapper[4946]: I1128 09:03:07.229071 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9735f8fd-4a68-461b-9029-3a55c266570b-config" (OuterVolumeSpecName: "config") pod "9735f8fd-4a68-461b-9029-3a55c266570b" (UID: "9735f8fd-4a68-461b-9029-3a55c266570b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 09:03:07 crc kubenswrapper[4946]: I1128 09:03:07.240434 4946 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9735f8fd-4a68-461b-9029-3a55c266570b-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 09:03:07 crc kubenswrapper[4946]: I1128 09:03:07.240474 4946 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9735f8fd-4a68-461b-9029-3a55c266570b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 09:03:07 crc kubenswrapper[4946]: I1128 09:03:07.240487 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kk2sr\" (UniqueName: \"kubernetes.io/projected/9735f8fd-4a68-461b-9029-3a55c266570b-kube-api-access-kk2sr\") on node \"crc\" DevicePath \"\"" Nov 28 09:03:07 crc kubenswrapper[4946]: I1128 09:03:07.240495 4946 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9735f8fd-4a68-461b-9029-3a55c266570b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 09:03:07 crc kubenswrapper[4946]: I1128 09:03:07.240504 4946 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9735f8fd-4a68-461b-9029-3a55c266570b-config\") on node \"crc\" DevicePath \"\"" Nov 28 09:03:07 crc kubenswrapper[4946]: I1128 09:03:07.529873 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675bd7f5bc-6xj4d"] Nov 28 09:03:07 crc kubenswrapper[4946]: I1128 09:03:07.541285 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68596bf757-wmlsg" Nov 28 09:03:07 crc kubenswrapper[4946]: I1128 09:03:07.542127 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8bd8679dc-g9r8k" Nov 28 09:03:07 crc kubenswrapper[4946]: I1128 09:03:07.542299 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8bd8679dc-g9r8k" event={"ID":"9735f8fd-4a68-461b-9029-3a55c266570b","Type":"ContainerDied","Data":"c1bfaf3f2c14578e424dc6639cf05a2c33641bfb97250866ab23d193000cdf60"} Nov 28 09:03:07 crc kubenswrapper[4946]: I1128 09:03:07.542341 4946 scope.go:117] "RemoveContainer" containerID="0caa6f7026691fcc3a35ef5fc62686ec26afb2f4ee35b0e822f14dc0bbc356eb" Nov 28 09:03:07 crc kubenswrapper[4946]: W1128 09:03:07.542489 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf25ba3e_ed08_4184_baee_857db122755c.slice/crio-95547593bc9d2763051e6155b4fcdd700c402c865a4aff6a9274ab8955b6411b WatchSource:0}: Error finding container 95547593bc9d2763051e6155b4fcdd700c402c865a4aff6a9274ab8955b6411b: Status 404 returned error can't find the container with id 95547593bc9d2763051e6155b4fcdd700c402c865a4aff6a9274ab8955b6411b Nov 28 09:03:07 crc kubenswrapper[4946]: I1128 09:03:07.711655 4946 scope.go:117] "RemoveContainer" containerID="5e21c504e53caa9408aa74d829d729709649efccea57b364cb6cedbcb864533c" Nov 28 09:03:07 crc kubenswrapper[4946]: I1128 09:03:07.725337 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68596bf757-wmlsg" Nov 28 09:03:07 crc kubenswrapper[4946]: I1128 09:03:07.741350 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8bd8679dc-g9r8k"] Nov 28 09:03:07 crc kubenswrapper[4946]: I1128 09:03:07.747788 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-networker\" (UniqueName: \"kubernetes.io/configmap/2c3657d9-0640-4776-981c-cd8cefb489cc-openstack-networker\") pod \"2c3657d9-0640-4776-981c-cd8cefb489cc\" (UID: \"2c3657d9-0640-4776-981c-cd8cefb489cc\") " Nov 28 09:03:07 crc kubenswrapper[4946]: I1128 09:03:07.747852 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c3657d9-0640-4776-981c-cd8cefb489cc-ovsdbserver-sb\") pod \"2c3657d9-0640-4776-981c-cd8cefb489cc\" (UID: \"2c3657d9-0640-4776-981c-cd8cefb489cc\") " Nov 28 09:03:07 crc kubenswrapper[4946]: I1128 09:03:07.747879 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/2c3657d9-0640-4776-981c-cd8cefb489cc-openstack-cell1\") pod \"2c3657d9-0640-4776-981c-cd8cefb489cc\" (UID: \"2c3657d9-0640-4776-981c-cd8cefb489cc\") " Nov 28 09:03:07 crc kubenswrapper[4946]: I1128 09:03:07.747917 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c3657d9-0640-4776-981c-cd8cefb489cc-dns-svc\") pod \"2c3657d9-0640-4776-981c-cd8cefb489cc\" (UID: \"2c3657d9-0640-4776-981c-cd8cefb489cc\") " Nov 28 09:03:07 crc kubenswrapper[4946]: I1128 09:03:07.747977 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4khm\" (UniqueName: \"kubernetes.io/projected/2c3657d9-0640-4776-981c-cd8cefb489cc-kube-api-access-f4khm\") pod \"2c3657d9-0640-4776-981c-cd8cefb489cc\" (UID: \"2c3657d9-0640-4776-981c-cd8cefb489cc\") " Nov 28 09:03:07 crc kubenswrapper[4946]: I1128 09:03:07.747999 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/2c3657d9-0640-4776-981c-cd8cefb489cc-config\") pod \"2c3657d9-0640-4776-981c-cd8cefb489cc\" (UID: \"2c3657d9-0640-4776-981c-cd8cefb489cc\") " Nov 28 09:03:07 crc kubenswrapper[4946]: I1128 09:03:07.748023 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c3657d9-0640-4776-981c-cd8cefb489cc-ovsdbserver-nb\") pod \"2c3657d9-0640-4776-981c-cd8cefb489cc\" (UID: \"2c3657d9-0640-4776-981c-cd8cefb489cc\") " Nov 28 09:03:07 crc kubenswrapper[4946]: I1128 09:03:07.748755 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c3657d9-0640-4776-981c-cd8cefb489cc-openstack-cell1" (OuterVolumeSpecName: "openstack-cell1") pod "2c3657d9-0640-4776-981c-cd8cefb489cc" (UID: "2c3657d9-0640-4776-981c-cd8cefb489cc"). InnerVolumeSpecName "openstack-cell1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 09:03:07 crc kubenswrapper[4946]: I1128 09:03:07.748763 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c3657d9-0640-4776-981c-cd8cefb489cc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2c3657d9-0640-4776-981c-cd8cefb489cc" (UID: "2c3657d9-0640-4776-981c-cd8cefb489cc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 09:03:07 crc kubenswrapper[4946]: I1128 09:03:07.749125 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c3657d9-0640-4776-981c-cd8cefb489cc-config" (OuterVolumeSpecName: "config") pod "2c3657d9-0640-4776-981c-cd8cefb489cc" (UID: "2c3657d9-0640-4776-981c-cd8cefb489cc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 09:03:07 crc kubenswrapper[4946]: I1128 09:03:07.749237 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c3657d9-0640-4776-981c-cd8cefb489cc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2c3657d9-0640-4776-981c-cd8cefb489cc" (UID: "2c3657d9-0640-4776-981c-cd8cefb489cc"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 09:03:07 crc kubenswrapper[4946]: I1128 09:03:07.749341 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c3657d9-0640-4776-981c-cd8cefb489cc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2c3657d9-0640-4776-981c-cd8cefb489cc" (UID: "2c3657d9-0640-4776-981c-cd8cefb489cc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 09:03:07 crc kubenswrapper[4946]: I1128 09:03:07.750844 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c3657d9-0640-4776-981c-cd8cefb489cc-openstack-networker" (OuterVolumeSpecName: "openstack-networker") pod "2c3657d9-0640-4776-981c-cd8cefb489cc" (UID: "2c3657d9-0640-4776-981c-cd8cefb489cc"). InnerVolumeSpecName "openstack-networker". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 09:03:07 crc kubenswrapper[4946]: I1128 09:03:07.752241 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c3657d9-0640-4776-981c-cd8cefb489cc-kube-api-access-f4khm" (OuterVolumeSpecName: "kube-api-access-f4khm") pod "2c3657d9-0640-4776-981c-cd8cefb489cc" (UID: "2c3657d9-0640-4776-981c-cd8cefb489cc"). 
InnerVolumeSpecName "kube-api-access-f4khm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:03:07 crc kubenswrapper[4946]: I1128 09:03:07.752956 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8bd8679dc-g9r8k"] Nov 28 09:03:07 crc kubenswrapper[4946]: I1128 09:03:07.850431 4946 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c3657d9-0640-4776-981c-cd8cefb489cc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 09:03:07 crc kubenswrapper[4946]: I1128 09:03:07.850491 4946 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/2c3657d9-0640-4776-981c-cd8cefb489cc-openstack-cell1\") on node \"crc\" DevicePath \"\"" Nov 28 09:03:07 crc kubenswrapper[4946]: I1128 09:03:07.850509 4946 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c3657d9-0640-4776-981c-cd8cefb489cc-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 09:03:07 crc kubenswrapper[4946]: I1128 09:03:07.850522 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4khm\" (UniqueName: \"kubernetes.io/projected/2c3657d9-0640-4776-981c-cd8cefb489cc-kube-api-access-f4khm\") on node \"crc\" DevicePath \"\"" Nov 28 09:03:07 crc kubenswrapper[4946]: I1128 09:03:07.850535 4946 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c3657d9-0640-4776-981c-cd8cefb489cc-config\") on node \"crc\" DevicePath \"\"" Nov 28 09:03:07 crc kubenswrapper[4946]: I1128 09:03:07.850545 4946 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c3657d9-0640-4776-981c-cd8cefb489cc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 09:03:07 crc kubenswrapper[4946]: I1128 09:03:07.850556 4946 reconciler_common.go:293] "Volume detached for volume \"openstack-networker\" (UniqueName: \"kubernetes.io/configmap/2c3657d9-0640-4776-981c-cd8cefb489cc-openstack-networker\") on node \"crc\" DevicePath \"\"" Nov 28 09:03:08 crc kubenswrapper[4946]: I1128 09:03:08.001956 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9735f8fd-4a68-461b-9029-3a55c266570b" path="/var/lib/kubelet/pods/9735f8fd-4a68-461b-9029-3a55c266570b/volumes" Nov 28 09:03:08 crc kubenswrapper[4946]: I1128 09:03:08.574679 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675bd7f5bc-6xj4d" event={"ID":"af25ba3e-ed08-4184-baee-857db122755c","Type":"ContainerDied","Data":"b2eb573cc47da2d2f35cf85d01c8138971269af30c5649ca57ff62b4cf332368"} Nov 28 09:03:08 crc kubenswrapper[4946]: I1128 09:03:08.574574 4946 generic.go:334] "Generic (PLEG): container finished" podID="af25ba3e-ed08-4184-baee-857db122755c" containerID="b2eb573cc47da2d2f35cf85d01c8138971269af30c5649ca57ff62b4cf332368" exitCode=0 Nov 28 09:03:08 crc kubenswrapper[4946]: I1128 09:03:08.576762 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675bd7f5bc-6xj4d" event={"ID":"af25ba3e-ed08-4184-baee-857db122755c","Type":"ContainerStarted","Data":"95547593bc9d2763051e6155b4fcdd700c402c865a4aff6a9274ab8955b6411b"} Nov 28 09:03:08 crc kubenswrapper[4946]: I1128 09:03:08.576774 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68596bf757-wmlsg" Nov 28 09:03:08 crc kubenswrapper[4946]: I1128 09:03:08.675158 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68596bf757-wmlsg"] Nov 28 09:03:08 crc kubenswrapper[4946]: I1128 09:03:08.691240 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68596bf757-wmlsg"] Nov 28 09:03:09 crc kubenswrapper[4946]: I1128 09:03:09.594981 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675bd7f5bc-6xj4d" event={"ID":"af25ba3e-ed08-4184-baee-857db122755c","Type":"ContainerStarted","Data":"5fed77e0c929d5540436d75073d00c80c151deb60f7161b3f80a4a251c1fc141"} Nov 28 09:03:09 crc kubenswrapper[4946]: I1128 09:03:09.595325 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-675bd7f5bc-6xj4d" Nov 28 09:03:09 crc kubenswrapper[4946]: I1128 09:03:09.628245 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-675bd7f5bc-6xj4d" podStartSLOduration=3.628223058 podStartE2EDuration="3.628223058s" podCreationTimestamp="2025-11-28 09:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 09:03:09.625526511 +0000 UTC m=+7844.003591652" watchObservedRunningTime="2025-11-28 09:03:09.628223058 +0000 UTC m=+7844.006288179" Nov 28 09:03:10 crc kubenswrapper[4946]: I1128 09:03:10.006395 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c3657d9-0640-4776-981c-cd8cefb489cc" path="/var/lib/kubelet/pods/2c3657d9-0640-4776-981c-cd8cefb489cc/volumes" Nov 28 09:03:17 crc kubenswrapper[4946]: I1128 09:03:17.025679 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-675bd7f5bc-6xj4d" Nov 28 09:03:17 crc kubenswrapper[4946]: I1128 09:03:17.094874 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76c856f7f9-64gg5"] Nov 28 09:03:17 crc kubenswrapper[4946]: I1128 09:03:17.095209 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-76c856f7f9-64gg5" podUID="b5a17ba4-ab8f-4369-893c-645128a36024" containerName="dnsmasq-dns" containerID="cri-o://a88cf115d824055c27c7f7bcdcb51cd896bc951e349aba28a20b1023a96b3ab9" gracePeriod=10 Nov 28 09:03:17 crc kubenswrapper[4946]: I1128 09:03:17.628755 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76c856f7f9-64gg5" Nov 28 09:03:17 crc kubenswrapper[4946]: I1128 09:03:17.704437 4946 generic.go:334] "Generic (PLEG): container finished" podID="b5a17ba4-ab8f-4369-893c-645128a36024" containerID="a88cf115d824055c27c7f7bcdcb51cd896bc951e349aba28a20b1023a96b3ab9" exitCode=0 Nov 28 09:03:17 crc kubenswrapper[4946]: I1128 09:03:17.704504 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76c856f7f9-64gg5" event={"ID":"b5a17ba4-ab8f-4369-893c-645128a36024","Type":"ContainerDied","Data":"a88cf115d824055c27c7f7bcdcb51cd896bc951e349aba28a20b1023a96b3ab9"} Nov 28 09:03:17 crc kubenswrapper[4946]: I1128 09:03:17.704536 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76c856f7f9-64gg5" event={"ID":"b5a17ba4-ab8f-4369-893c-645128a36024","Type":"ContainerDied","Data":"0df6298a58e990bc2315d005ea4ee724c9684fc36adb2160fe2eb13c36c6626c"} Nov 28 09:03:17 crc kubenswrapper[4946]: I1128 09:03:17.704559 4946 scope.go:117] "RemoveContainer" containerID="a88cf115d824055c27c7f7bcdcb51cd896bc951e349aba28a20b1023a96b3ab9" Nov 28 09:03:17 crc kubenswrapper[4946]: I1128 09:03:17.704746 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76c856f7f9-64gg5" Nov 28 09:03:17 crc kubenswrapper[4946]: I1128 09:03:17.737689 4946 scope.go:117] "RemoveContainer" containerID="3fb710f207bef39cad6d32551df25899fccc9206e0b99e63159b09093767dcc0" Nov 28 09:03:17 crc kubenswrapper[4946]: I1128 09:03:17.745476 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/b5a17ba4-ab8f-4369-893c-645128a36024-openstack-cell1\") pod \"b5a17ba4-ab8f-4369-893c-645128a36024\" (UID: \"b5a17ba4-ab8f-4369-893c-645128a36024\") " Nov 28 09:03:17 crc kubenswrapper[4946]: I1128 09:03:17.745572 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b5a17ba4-ab8f-4369-893c-645128a36024-ovsdbserver-nb\") pod \"b5a17ba4-ab8f-4369-893c-645128a36024\" (UID: \"b5a17ba4-ab8f-4369-893c-645128a36024\") " Nov 28 09:03:17 crc kubenswrapper[4946]: I1128 09:03:17.745604 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkbwb\" (UniqueName: \"kubernetes.io/projected/b5a17ba4-ab8f-4369-893c-645128a36024-kube-api-access-vkbwb\") pod \"b5a17ba4-ab8f-4369-893c-645128a36024\" (UID: \"b5a17ba4-ab8f-4369-893c-645128a36024\") " Nov 28 09:03:17 crc kubenswrapper[4946]: I1128 09:03:17.745654 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5a17ba4-ab8f-4369-893c-645128a36024-dns-svc\") pod \"b5a17ba4-ab8f-4369-893c-645128a36024\" (UID: \"b5a17ba4-ab8f-4369-893c-645128a36024\") " Nov 28 09:03:17 crc kubenswrapper[4946]: I1128 09:03:17.745759 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b5a17ba4-ab8f-4369-893c-645128a36024-ovsdbserver-sb\") pod \"b5a17ba4-ab8f-4369-893c-645128a36024\" (UID: \"b5a17ba4-ab8f-4369-893c-645128a36024\") " Nov 28 09:03:17 crc kubenswrapper[4946]: I1128 09:03:17.745809 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5a17ba4-ab8f-4369-893c-645128a36024-config\") pod 
\"b5a17ba4-ab8f-4369-893c-645128a36024\" (UID: \"b5a17ba4-ab8f-4369-893c-645128a36024\") " Nov 28 09:03:17 crc kubenswrapper[4946]: I1128 09:03:17.758074 4946 scope.go:117] "RemoveContainer" containerID="a88cf115d824055c27c7f7bcdcb51cd896bc951e349aba28a20b1023a96b3ab9" Nov 28 09:03:17 crc kubenswrapper[4946]: I1128 09:03:17.758099 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5a17ba4-ab8f-4369-893c-645128a36024-kube-api-access-vkbwb" (OuterVolumeSpecName: "kube-api-access-vkbwb") pod "b5a17ba4-ab8f-4369-893c-645128a36024" (UID: "b5a17ba4-ab8f-4369-893c-645128a36024"). InnerVolumeSpecName "kube-api-access-vkbwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:03:17 crc kubenswrapper[4946]: E1128 09:03:17.758357 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a88cf115d824055c27c7f7bcdcb51cd896bc951e349aba28a20b1023a96b3ab9\": container with ID starting with a88cf115d824055c27c7f7bcdcb51cd896bc951e349aba28a20b1023a96b3ab9 not found: ID does not exist" containerID="a88cf115d824055c27c7f7bcdcb51cd896bc951e349aba28a20b1023a96b3ab9" Nov 28 09:03:17 crc kubenswrapper[4946]: I1128 09:03:17.758391 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a88cf115d824055c27c7f7bcdcb51cd896bc951e349aba28a20b1023a96b3ab9"} err="failed to get container status \"a88cf115d824055c27c7f7bcdcb51cd896bc951e349aba28a20b1023a96b3ab9\": rpc error: code = NotFound desc = could not find container \"a88cf115d824055c27c7f7bcdcb51cd896bc951e349aba28a20b1023a96b3ab9\": container with ID starting with a88cf115d824055c27c7f7bcdcb51cd896bc951e349aba28a20b1023a96b3ab9 not found: ID does not exist" Nov 28 09:03:17 crc kubenswrapper[4946]: I1128 09:03:17.758408 4946 scope.go:117] "RemoveContainer" containerID="3fb710f207bef39cad6d32551df25899fccc9206e0b99e63159b09093767dcc0" Nov 28 09:03:17 crc kubenswrapper[4946]: E1128 09:03:17.758584 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fb710f207bef39cad6d32551df25899fccc9206e0b99e63159b09093767dcc0\": container with ID starting with 3fb710f207bef39cad6d32551df25899fccc9206e0b99e63159b09093767dcc0 not found: ID does not exist" containerID="3fb710f207bef39cad6d32551df25899fccc9206e0b99e63159b09093767dcc0" Nov 28 09:03:17 crc kubenswrapper[4946]: I1128 09:03:17.758605 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fb710f207bef39cad6d32551df25899fccc9206e0b99e63159b09093767dcc0"} err="failed to get container status \"3fb710f207bef39cad6d32551df25899fccc9206e0b99e63159b09093767dcc0\": rpc error: code = NotFound desc = could not find container \"3fb710f207bef39cad6d32551df25899fccc9206e0b99e63159b09093767dcc0\": container with ID starting with 3fb710f207bef39cad6d32551df25899fccc9206e0b99e63159b09093767dcc0 not found: ID does not exist" Nov 28 09:03:17 crc kubenswrapper[4946]: I1128 09:03:17.799284 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5a17ba4-ab8f-4369-893c-645128a36024-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b5a17ba4-ab8f-4369-893c-645128a36024" (UID: "b5a17ba4-ab8f-4369-893c-645128a36024"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 09:03:17 crc kubenswrapper[4946]: I1128 09:03:17.799302 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5a17ba4-ab8f-4369-893c-645128a36024-config" (OuterVolumeSpecName: "config") pod "b5a17ba4-ab8f-4369-893c-645128a36024" (UID: "b5a17ba4-ab8f-4369-893c-645128a36024"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 09:03:17 crc kubenswrapper[4946]: I1128 09:03:17.804842 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5a17ba4-ab8f-4369-893c-645128a36024-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b5a17ba4-ab8f-4369-893c-645128a36024" (UID: "b5a17ba4-ab8f-4369-893c-645128a36024"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 09:03:17 crc kubenswrapper[4946]: I1128 09:03:17.810001 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5a17ba4-ab8f-4369-893c-645128a36024-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b5a17ba4-ab8f-4369-893c-645128a36024" (UID: "b5a17ba4-ab8f-4369-893c-645128a36024"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 09:03:17 crc kubenswrapper[4946]: I1128 09:03:17.816853 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5a17ba4-ab8f-4369-893c-645128a36024-openstack-cell1" (OuterVolumeSpecName: "openstack-cell1") pod "b5a17ba4-ab8f-4369-893c-645128a36024" (UID: "b5a17ba4-ab8f-4369-893c-645128a36024"). InnerVolumeSpecName "openstack-cell1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 09:03:17 crc kubenswrapper[4946]: I1128 09:03:17.847937 4946 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b5a17ba4-ab8f-4369-893c-645128a36024-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 09:03:17 crc kubenswrapper[4946]: I1128 09:03:17.848154 4946 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5a17ba4-ab8f-4369-893c-645128a36024-config\") on node \"crc\" DevicePath \"\"" Nov 28 09:03:17 crc kubenswrapper[4946]: I1128 09:03:17.848164 4946 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/b5a17ba4-ab8f-4369-893c-645128a36024-openstack-cell1\") on node \"crc\" DevicePath \"\"" Nov 28 09:03:17 crc kubenswrapper[4946]: I1128 09:03:17.848174 4946 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b5a17ba4-ab8f-4369-893c-645128a36024-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 09:03:17 crc kubenswrapper[4946]: I1128 09:03:17.848185 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkbwb\" (UniqueName: \"kubernetes.io/projected/b5a17ba4-ab8f-4369-893c-645128a36024-kube-api-access-vkbwb\") on node \"crc\" DevicePath \"\"" Nov 28 09:03:17 crc kubenswrapper[4946]: I1128 09:03:17.848196 4946 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5a17ba4-ab8f-4369-893c-645128a36024-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 09:03:18 crc kubenswrapper[4946]: I1128 09:03:18.037665 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76c856f7f9-64gg5"] Nov 28 
09:03:18 crc kubenswrapper[4946]: I1128 09:03:18.048330 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-76c856f7f9-64gg5"] Nov 28 09:03:20 crc kubenswrapper[4946]: I1128 09:03:20.007275 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5a17ba4-ab8f-4369-893c-645128a36024" path="/var/lib/kubelet/pods/b5a17ba4-ab8f-4369-893c-645128a36024/volumes" Nov 28 09:03:24 crc kubenswrapper[4946]: I1128 09:03:24.730599 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 09:03:24 crc kubenswrapper[4946]: I1128 09:03:24.731408 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 09:03:25 crc kubenswrapper[4946]: I1128 09:03:25.086335 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-62q66"] Nov 28 09:03:25 crc kubenswrapper[4946]: I1128 09:03:25.097340 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-62q66"] Nov 28 09:03:26 crc kubenswrapper[4946]: I1128 09:03:26.001061 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ece55576-ba44-491f-8be3-84adf5c7f1c4" path="/var/lib/kubelet/pods/ece55576-ba44-491f-8be3-84adf5c7f1c4/volumes" Nov 28 09:03:28 crc kubenswrapper[4946]: I1128 09:03:28.195975 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c4lsdg"] Nov 28 09:03:28 crc kubenswrapper[4946]: E1128 09:03:28.198593 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9735f8fd-4a68-461b-9029-3a55c266570b" containerName="dnsmasq-dns" Nov 28 09:03:28 crc kubenswrapper[4946]: I1128 09:03:28.198759 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="9735f8fd-4a68-461b-9029-3a55c266570b" containerName="dnsmasq-dns" Nov 28 09:03:28 crc kubenswrapper[4946]: E1128 09:03:28.198905 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9735f8fd-4a68-461b-9029-3a55c266570b" containerName="init" Nov 28 09:03:28 crc kubenswrapper[4946]: I1128 09:03:28.199093 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="9735f8fd-4a68-461b-9029-3a55c266570b" containerName="init" Nov 28 09:03:28 crc kubenswrapper[4946]: E1128 09:03:28.199221 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5a17ba4-ab8f-4369-893c-645128a36024" containerName="dnsmasq-dns" Nov 28 09:03:28 crc kubenswrapper[4946]: I1128 09:03:28.199330 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5a17ba4-ab8f-4369-893c-645128a36024" containerName="dnsmasq-dns" Nov 28 09:03:28 crc kubenswrapper[4946]: E1128 09:03:28.199500 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5a17ba4-ab8f-4369-893c-645128a36024" containerName="init" Nov 28 09:03:28 crc kubenswrapper[4946]: I1128 09:03:28.199634 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5a17ba4-ab8f-4369-893c-645128a36024" containerName="init"
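
The patch_prober/prober entries earlier in this block show a failing HTTP liveness probe: GET http://127.0.0.1:8798/health is refused outright, so the probe reports failure with the transport error as its output. Functionally, an HTTP probe is a GET with a timeout where any transport error, or a status outside 200-399, counts as failure; a minimal sketch under that assumption (illustrative, not the kubelet's prober code):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // probe performs a GET the way an HTTP liveness probe does: any transport
    // error (e.g. "connect: connection refused", as in the log above) or a
    // status outside 200-399 is reported as a failure.
    func probe(url string, timeout time.Duration) error {
        client := &http.Client{Timeout: timeout}
        resp, err := client.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode < 200 || resp.StatusCode >= 400 {
            return fmt.Errorf("probe failed with status %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        if err := probe("http://127.0.0.1:8798/health", time.Second); err != nil {
            fmt.Println("Liveness probe failed:", err)
        }
    }

Nov 28 09:03:28 crc kubenswrapper[4946]: I1128 09:03:28.200186 4946 memory_manager.go:354] 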
"RemoveStaleState removing state" podUID="b5a17ba4-ab8f-4369-893c-645128a36024" containerName="dnsmasq-dns" Nov 28 09:03:28 crc kubenswrapper[4946]: I1128 09:03:28.200359 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="9735f8fd-4a68-461b-9029-3a55c266570b" containerName="dnsmasq-dns" Nov 28 09:03:28 crc kubenswrapper[4946]: I1128 09:03:28.201674 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c4lsdg" Nov 28 09:03:28 crc kubenswrapper[4946]: I1128 09:03:28.205105 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 09:03:28 crc kubenswrapper[4946]: I1128 09:03:28.205315 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-psw2j" Nov 28 09:03:28 crc kubenswrapper[4946]: I1128 09:03:28.207823 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 28 09:03:28 crc kubenswrapper[4946]: I1128 09:03:28.212065 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-n6ggmp"] Nov 28 09:03:28 crc kubenswrapper[4946]: I1128 09:03:28.215322 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 28 09:03:28 crc kubenswrapper[4946]: I1128 09:03:28.216430 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-n6ggmp" Nov 28 09:03:28 crc kubenswrapper[4946]: I1128 09:03:28.224895 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c4lsdg"] Nov 28 09:03:28 crc kubenswrapper[4946]: I1128 09:03:28.225141 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-networker-dockercfg-7cq8d" Nov 28 09:03:28 crc kubenswrapper[4946]: I1128 09:03:28.225348 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-networker" Nov 28 09:03:28 crc kubenswrapper[4946]: I1128 09:03:28.232490 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-n6ggmp"] Nov 28 09:03:28 crc kubenswrapper[4946]: I1128 09:03:28.397946 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d9fbbfa5-fb4b-4302-9d24-909f0eda1732-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c4lsdg\" (UID: \"d9fbbfa5-fb4b-4302-9d24-909f0eda1732\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c4lsdg" Nov 28 09:03:28 crc kubenswrapper[4946]: I1128 09:03:28.398099 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f293b1f4-ce1d-4962-8528-cc59f1a70093-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-n6ggmp\" (UID: \"f293b1f4-ce1d-4962-8528-cc59f1a70093\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-n6ggmp" Nov 28 09:03:28 crc kubenswrapper[4946]: I1128 09:03:28.398157 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5mn7\" (UniqueName: 
\"kubernetes.io/projected/f293b1f4-ce1d-4962-8528-cc59f1a70093-kube-api-access-t5mn7\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-n6ggmp\" (UID: \"f293b1f4-ce1d-4962-8528-cc59f1a70093\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-n6ggmp" Nov 28 09:03:28 crc kubenswrapper[4946]: I1128 09:03:28.398219 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f293b1f4-ce1d-4962-8528-cc59f1a70093-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-n6ggmp\" (UID: \"f293b1f4-ce1d-4962-8528-cc59f1a70093\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-n6ggmp" Nov 28 09:03:28 crc kubenswrapper[4946]: I1128 09:03:28.398253 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9fbbfa5-fb4b-4302-9d24-909f0eda1732-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c4lsdg\" (UID: \"d9fbbfa5-fb4b-4302-9d24-909f0eda1732\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c4lsdg" Nov 28 09:03:28 crc kubenswrapper[4946]: I1128 09:03:28.398298 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f293b1f4-ce1d-4962-8528-cc59f1a70093-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-n6ggmp\" (UID: \"f293b1f4-ce1d-4962-8528-cc59f1a70093\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-n6ggmp" Nov 28 09:03:28 crc kubenswrapper[4946]: I1128 09:03:28.398339 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqsx5\" (UniqueName: \"kubernetes.io/projected/d9fbbfa5-fb4b-4302-9d24-909f0eda1732-kube-api-access-mqsx5\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c4lsdg\" (UID: \"d9fbbfa5-fb4b-4302-9d24-909f0eda1732\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c4lsdg" Nov 28 09:03:28 crc kubenswrapper[4946]: I1128 09:03:28.398386 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d9fbbfa5-fb4b-4302-9d24-909f0eda1732-ceph\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c4lsdg\" (UID: \"d9fbbfa5-fb4b-4302-9d24-909f0eda1732\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c4lsdg" Nov 28 09:03:28 crc kubenswrapper[4946]: I1128 09:03:28.398417 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9fbbfa5-fb4b-4302-9d24-909f0eda1732-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c4lsdg\" (UID: \"d9fbbfa5-fb4b-4302-9d24-909f0eda1732\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c4lsdg" Nov 28 09:03:28 crc kubenswrapper[4946]: I1128 09:03:28.499813 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f293b1f4-ce1d-4962-8528-cc59f1a70093-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-n6ggmp\" (UID: 
\"f293b1f4-ce1d-4962-8528-cc59f1a70093\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-n6ggmp" Nov 28 09:03:28 crc kubenswrapper[4946]: I1128 09:03:28.499907 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5mn7\" (UniqueName: \"kubernetes.io/projected/f293b1f4-ce1d-4962-8528-cc59f1a70093-kube-api-access-t5mn7\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-n6ggmp\" (UID: \"f293b1f4-ce1d-4962-8528-cc59f1a70093\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-n6ggmp" Nov 28 09:03:28 crc kubenswrapper[4946]: I1128 09:03:28.499972 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f293b1f4-ce1d-4962-8528-cc59f1a70093-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-n6ggmp\" (UID: \"f293b1f4-ce1d-4962-8528-cc59f1a70093\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-n6ggmp" Nov 28 09:03:28 crc kubenswrapper[4946]: I1128 09:03:28.499997 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9fbbfa5-fb4b-4302-9d24-909f0eda1732-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c4lsdg\" (UID: \"d9fbbfa5-fb4b-4302-9d24-909f0eda1732\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c4lsdg" Nov 28 09:03:28 crc kubenswrapper[4946]: I1128 09:03:28.500033 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f293b1f4-ce1d-4962-8528-cc59f1a70093-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-n6ggmp\" (UID: \"f293b1f4-ce1d-4962-8528-cc59f1a70093\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-n6ggmp" Nov 28 09:03:28 crc kubenswrapper[4946]: I1128 09:03:28.500062 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqsx5\" (UniqueName: \"kubernetes.io/projected/d9fbbfa5-fb4b-4302-9d24-909f0eda1732-kube-api-access-mqsx5\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c4lsdg\" (UID: \"d9fbbfa5-fb4b-4302-9d24-909f0eda1732\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c4lsdg" Nov 28 09:03:28 crc kubenswrapper[4946]: I1128 09:03:28.500094 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d9fbbfa5-fb4b-4302-9d24-909f0eda1732-ceph\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c4lsdg\" (UID: \"d9fbbfa5-fb4b-4302-9d24-909f0eda1732\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c4lsdg" Nov 28 09:03:28 crc kubenswrapper[4946]: I1128 09:03:28.500123 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9fbbfa5-fb4b-4302-9d24-909f0eda1732-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c4lsdg\" (UID: \"d9fbbfa5-fb4b-4302-9d24-909f0eda1732\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c4lsdg" Nov 28 09:03:28 crc kubenswrapper[4946]: I1128 09:03:28.500165 4946 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d9fbbfa5-fb4b-4302-9d24-909f0eda1732-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c4lsdg\" (UID: \"d9fbbfa5-fb4b-4302-9d24-909f0eda1732\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c4lsdg" Nov 28 09:03:28 crc kubenswrapper[4946]: I1128 09:03:28.506825 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d9fbbfa5-fb4b-4302-9d24-909f0eda1732-ceph\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c4lsdg\" (UID: \"d9fbbfa5-fb4b-4302-9d24-909f0eda1732\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c4lsdg" Nov 28 09:03:28 crc kubenswrapper[4946]: I1128 09:03:28.508485 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d9fbbfa5-fb4b-4302-9d24-909f0eda1732-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c4lsdg\" (UID: \"d9fbbfa5-fb4b-4302-9d24-909f0eda1732\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c4lsdg" Nov 28 09:03:28 crc kubenswrapper[4946]: I1128 09:03:28.508699 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f293b1f4-ce1d-4962-8528-cc59f1a70093-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-n6ggmp\" (UID: \"f293b1f4-ce1d-4962-8528-cc59f1a70093\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-n6ggmp" Nov 28 09:03:28 crc kubenswrapper[4946]: I1128 09:03:28.509060 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f293b1f4-ce1d-4962-8528-cc59f1a70093-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-n6ggmp\" (UID: \"f293b1f4-ce1d-4962-8528-cc59f1a70093\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-n6ggmp" Nov 28 09:03:28 crc kubenswrapper[4946]: I1128 09:03:28.517492 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9fbbfa5-fb4b-4302-9d24-909f0eda1732-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c4lsdg\" (UID: \"d9fbbfa5-fb4b-4302-9d24-909f0eda1732\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c4lsdg" Nov 28 09:03:28 crc kubenswrapper[4946]: I1128 09:03:28.521270 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9fbbfa5-fb4b-4302-9d24-909f0eda1732-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c4lsdg\" (UID: \"d9fbbfa5-fb4b-4302-9d24-909f0eda1732\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c4lsdg" Nov 28 09:03:28 crc kubenswrapper[4946]: I1128 09:03:28.524848 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f293b1f4-ce1d-4962-8528-cc59f1a70093-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-n6ggmp\" (UID: \"f293b1f4-ce1d-4962-8528-cc59f1a70093\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-n6ggmp" Nov 28 09:03:28 crc kubenswrapper[4946]: I1128 09:03:28.527663 
4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5mn7\" (UniqueName: \"kubernetes.io/projected/f293b1f4-ce1d-4962-8528-cc59f1a70093-kube-api-access-t5mn7\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-n6ggmp\" (UID: \"f293b1f4-ce1d-4962-8528-cc59f1a70093\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-n6ggmp" Nov 28 09:03:28 crc kubenswrapper[4946]: I1128 09:03:28.532134 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqsx5\" (UniqueName: \"kubernetes.io/projected/d9fbbfa5-fb4b-4302-9d24-909f0eda1732-kube-api-access-mqsx5\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c4lsdg\" (UID: \"d9fbbfa5-fb4b-4302-9d24-909f0eda1732\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c4lsdg" Nov 28 09:03:28 crc kubenswrapper[4946]: I1128 09:03:28.543123 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-n6ggmp" Nov 28 09:03:28 crc kubenswrapper[4946]: I1128 09:03:28.825064 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c4lsdg" Nov 28 09:03:29 crc kubenswrapper[4946]: I1128 09:03:29.139516 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-n6ggmp"] Nov 28 09:03:29 crc kubenswrapper[4946]: I1128 09:03:29.449069 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c4lsdg"] Nov 28 09:03:29 crc kubenswrapper[4946]: W1128 09:03:29.457481 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd9fbbfa5_fb4b_4302_9d24_909f0eda1732.slice/crio-0f2543f2229d428b3324ed32ce0517f8519fd2ad650b8018f5d66b1c8a3c6f68 WatchSource:0}: Error finding container 0f2543f2229d428b3324ed32ce0517f8519fd2ad650b8018f5d66b1c8a3c6f68: Status 404 returned error can't find the container with id 0f2543f2229d428b3324ed32ce0517f8519fd2ad650b8018f5d66b1c8a3c6f68 Nov 28 09:03:29 crc kubenswrapper[4946]: I1128 09:03:29.869265 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-n6ggmp" event={"ID":"f293b1f4-ce1d-4962-8528-cc59f1a70093","Type":"ContainerStarted","Data":"9f61d3ebb95896c20cbfe34ea29e813857b130f1f66c3515eec12ec0d8d040e4"} Nov 28 09:03:29 crc kubenswrapper[4946]: I1128 09:03:29.871153 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c4lsdg" event={"ID":"d9fbbfa5-fb4b-4302-9d24-909f0eda1732","Type":"ContainerStarted","Data":"0f2543f2229d428b3324ed32ce0517f8519fd2ad650b8018f5d66b1c8a3c6f68"} Nov 28 09:03:38 crc kubenswrapper[4946]: I1128 09:03:38.974635 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-n6ggmp" event={"ID":"f293b1f4-ce1d-4962-8528-cc59f1a70093","Type":"ContainerStarted","Data":"83a9acab06648ca0916a90494d5aa180e8c7c9e782cd280a2b046ba721c0178d"} Nov 28 09:03:38 crc kubenswrapper[4946]: I1128 09:03:38.976146 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c4lsdg" 
event={"ID":"d9fbbfa5-fb4b-4302-9d24-909f0eda1732","Type":"ContainerStarted","Data":"f7adc0ac46543f200c2aba42d8ccf2a05f1a457a31efd3e3a9d6d677a296e59f"} Nov 28 09:03:38 crc kubenswrapper[4946]: I1128 09:03:38.995493 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-n6ggmp" podStartSLOduration=1.607687944 podStartE2EDuration="10.995452374s" podCreationTimestamp="2025-11-28 09:03:28 +0000 UTC" firstStartedPulling="2025-11-28 09:03:29.164094188 +0000 UTC m=+7863.542159299" lastFinishedPulling="2025-11-28 09:03:38.551858608 +0000 UTC m=+7872.929923729" observedRunningTime="2025-11-28 09:03:38.992189993 +0000 UTC m=+7873.370255104" watchObservedRunningTime="2025-11-28 09:03:38.995452374 +0000 UTC m=+7873.373517485" Nov 28 09:03:39 crc kubenswrapper[4946]: I1128 09:03:39.012726 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c4lsdg" podStartSLOduration=1.958083233 podStartE2EDuration="11.012707271s" podCreationTimestamp="2025-11-28 09:03:28 +0000 UTC" firstStartedPulling="2025-11-28 09:03:29.47020412 +0000 UTC m=+7863.848269241" lastFinishedPulling="2025-11-28 09:03:38.524828158 +0000 UTC m=+7872.902893279" observedRunningTime="2025-11-28 09:03:39.007149104 +0000 UTC m=+7873.385214215" watchObservedRunningTime="2025-11-28 09:03:39.012707271 +0000 UTC m=+7873.390772382" Nov 28 09:03:49 crc kubenswrapper[4946]: I1128 09:03:49.104574 4946 generic.go:334] "Generic (PLEG): container finished" podID="f293b1f4-ce1d-4962-8528-cc59f1a70093" containerID="83a9acab06648ca0916a90494d5aa180e8c7c9e782cd280a2b046ba721c0178d" exitCode=0 Nov 28 09:03:49 crc kubenswrapper[4946]: I1128 09:03:49.104855 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-n6ggmp" event={"ID":"f293b1f4-ce1d-4962-8528-cc59f1a70093","Type":"ContainerDied","Data":"83a9acab06648ca0916a90494d5aa180e8c7c9e782cd280a2b046ba721c0178d"} Nov 28 09:03:50 crc kubenswrapper[4946]: I1128 09:03:50.120777 4946 generic.go:334] "Generic (PLEG): container finished" podID="d9fbbfa5-fb4b-4302-9d24-909f0eda1732" containerID="f7adc0ac46543f200c2aba42d8ccf2a05f1a457a31efd3e3a9d6d677a296e59f" exitCode=0 Nov 28 09:03:50 crc kubenswrapper[4946]: I1128 09:03:50.120898 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c4lsdg" event={"ID":"d9fbbfa5-fb4b-4302-9d24-909f0eda1732","Type":"ContainerDied","Data":"f7adc0ac46543f200c2aba42d8ccf2a05f1a457a31efd3e3a9d6d677a296e59f"} Nov 28 09:03:50 crc kubenswrapper[4946]: I1128 09:03:50.460374 4946 scope.go:117] "RemoveContainer" containerID="dd06f4fb21ca6f3398e04fa9f4a94ca44a70fc205f5028ddf521a39b7ced8a41" Nov 28 09:03:50 crc kubenswrapper[4946]: I1128 09:03:50.488350 4946 scope.go:117] "RemoveContainer" containerID="f65e759f88c4f12322679858a51b2efb8e0875588667933b40e551ad478f7e26" Nov 28 09:03:50 crc kubenswrapper[4946]: I1128 09:03:50.521547 4946 scope.go:117] "RemoveContainer" containerID="9b96b891ac92b8dbe02be64f74d1319b150ab3f2328c605b8b6b3765faa93a88" Nov 28 09:03:50 crc kubenswrapper[4946]: I1128 09:03:50.628014 4946 scope.go:117] "RemoveContainer" containerID="4f8e173b34d1506a6bed0691341522603cad87d30ac6502d8ff321cf68fcd7a5" Nov 28 09:03:50 crc kubenswrapper[4946]: I1128 09:03:50.633259 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-n6ggmp" Nov 28 09:03:50 crc kubenswrapper[4946]: I1128 09:03:50.681736 4946 scope.go:117] "RemoveContainer" containerID="60346e20f06f7a9e68228a717462a1199289d1730ad13ae3c36bcc100b4d76c8" Nov 28 09:03:50 crc kubenswrapper[4946]: I1128 09:03:50.785314 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f293b1f4-ce1d-4962-8528-cc59f1a70093-pre-adoption-validation-combined-ca-bundle\") pod \"f293b1f4-ce1d-4962-8528-cc59f1a70093\" (UID: \"f293b1f4-ce1d-4962-8528-cc59f1a70093\") " Nov 28 09:03:50 crc kubenswrapper[4946]: I1128 09:03:50.785412 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f293b1f4-ce1d-4962-8528-cc59f1a70093-ssh-key\") pod \"f293b1f4-ce1d-4962-8528-cc59f1a70093\" (UID: \"f293b1f4-ce1d-4962-8528-cc59f1a70093\") " Nov 28 09:03:50 crc kubenswrapper[4946]: I1128 09:03:50.785533 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f293b1f4-ce1d-4962-8528-cc59f1a70093-inventory\") pod \"f293b1f4-ce1d-4962-8528-cc59f1a70093\" (UID: \"f293b1f4-ce1d-4962-8528-cc59f1a70093\") " Nov 28 09:03:50 crc kubenswrapper[4946]: I1128 09:03:50.785600 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5mn7\" (UniqueName: \"kubernetes.io/projected/f293b1f4-ce1d-4962-8528-cc59f1a70093-kube-api-access-t5mn7\") pod \"f293b1f4-ce1d-4962-8528-cc59f1a70093\" (UID: \"f293b1f4-ce1d-4962-8528-cc59f1a70093\") " Nov 28 09:03:50 crc kubenswrapper[4946]: I1128 09:03:50.794609 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f293b1f4-ce1d-4962-8528-cc59f1a70093-kube-api-access-t5mn7" (OuterVolumeSpecName: "kube-api-access-t5mn7") pod "f293b1f4-ce1d-4962-8528-cc59f1a70093" (UID: "f293b1f4-ce1d-4962-8528-cc59f1a70093"). InnerVolumeSpecName "kube-api-access-t5mn7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:03:50 crc kubenswrapper[4946]: I1128 09:03:50.794924 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f293b1f4-ce1d-4962-8528-cc59f1a70093-pre-adoption-validation-combined-ca-bundle" (OuterVolumeSpecName: "pre-adoption-validation-combined-ca-bundle") pod "f293b1f4-ce1d-4962-8528-cc59f1a70093" (UID: "f293b1f4-ce1d-4962-8528-cc59f1a70093"). InnerVolumeSpecName "pre-adoption-validation-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:03:50 crc kubenswrapper[4946]: I1128 09:03:50.828647 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f293b1f4-ce1d-4962-8528-cc59f1a70093-inventory" (OuterVolumeSpecName: "inventory") pod "f293b1f4-ce1d-4962-8528-cc59f1a70093" (UID: "f293b1f4-ce1d-4962-8528-cc59f1a70093"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:03:50 crc kubenswrapper[4946]: I1128 09:03:50.842683 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f293b1f4-ce1d-4962-8528-cc59f1a70093-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f293b1f4-ce1d-4962-8528-cc59f1a70093" (UID: "f293b1f4-ce1d-4962-8528-cc59f1a70093"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:03:50 crc kubenswrapper[4946]: I1128 09:03:50.890413 4946 reconciler_common.go:293] "Volume detached for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f293b1f4-ce1d-4962-8528-cc59f1a70093-pre-adoption-validation-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 09:03:50 crc kubenswrapper[4946]: I1128 09:03:50.890453 4946 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f293b1f4-ce1d-4962-8528-cc59f1a70093-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 09:03:50 crc kubenswrapper[4946]: I1128 09:03:50.890483 4946 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f293b1f4-ce1d-4962-8528-cc59f1a70093-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 09:03:50 crc kubenswrapper[4946]: I1128 09:03:50.890493 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5mn7\" (UniqueName: \"kubernetes.io/projected/f293b1f4-ce1d-4962-8528-cc59f1a70093-kube-api-access-t5mn7\") on node \"crc\" DevicePath \"\"" Nov 28 09:03:50 crc kubenswrapper[4946]: I1128 09:03:50.934935 4946 scope.go:117] "RemoveContainer" containerID="461239738a7a06da605c8471f4b12b183a6dc41331b73b3e3f3c177572bcb55b" Nov 28 09:03:51 crc kubenswrapper[4946]: I1128 09:03:51.140938 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-n6ggmp" Nov 28 09:03:51 crc kubenswrapper[4946]: I1128 09:03:51.141050 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-n6ggmp" event={"ID":"f293b1f4-ce1d-4962-8528-cc59f1a70093","Type":"ContainerDied","Data":"9f61d3ebb95896c20cbfe34ea29e813857b130f1f66c3515eec12ec0d8d040e4"} Nov 28 09:03:51 crc kubenswrapper[4946]: I1128 09:03:51.141098 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f61d3ebb95896c20cbfe34ea29e813857b130f1f66c3515eec12ec0d8d040e4" Nov 28 09:03:51 crc kubenswrapper[4946]: I1128 09:03:51.626495 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c4lsdg" Nov 28 09:03:51 crc kubenswrapper[4946]: I1128 09:03:51.706952 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqsx5\" (UniqueName: \"kubernetes.io/projected/d9fbbfa5-fb4b-4302-9d24-909f0eda1732-kube-api-access-mqsx5\") pod \"d9fbbfa5-fb4b-4302-9d24-909f0eda1732\" (UID: \"d9fbbfa5-fb4b-4302-9d24-909f0eda1732\") " Nov 28 09:03:51 crc kubenswrapper[4946]: I1128 09:03:51.707023 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9fbbfa5-fb4b-4302-9d24-909f0eda1732-pre-adoption-validation-combined-ca-bundle\") pod \"d9fbbfa5-fb4b-4302-9d24-909f0eda1732\" (UID: \"d9fbbfa5-fb4b-4302-9d24-909f0eda1732\") " Nov 28 09:03:51 crc kubenswrapper[4946]: I1128 09:03:51.707184 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d9fbbfa5-fb4b-4302-9d24-909f0eda1732-ssh-key\") pod \"d9fbbfa5-fb4b-4302-9d24-909f0eda1732\" (UID: \"d9fbbfa5-fb4b-4302-9d24-909f0eda1732\") " Nov 28 09:03:51 crc kubenswrapper[4946]: I1128 09:03:51.707324 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9fbbfa5-fb4b-4302-9d24-909f0eda1732-inventory\") pod \"d9fbbfa5-fb4b-4302-9d24-909f0eda1732\" (UID: \"d9fbbfa5-fb4b-4302-9d24-909f0eda1732\") " Nov 28 09:03:51 crc kubenswrapper[4946]: I1128 09:03:51.707491 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d9fbbfa5-fb4b-4302-9d24-909f0eda1732-ceph\") pod \"d9fbbfa5-fb4b-4302-9d24-909f0eda1732\" (UID: \"d9fbbfa5-fb4b-4302-9d24-909f0eda1732\") " Nov 28 09:03:51 crc kubenswrapper[4946]: I1128 09:03:51.712112 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9fbbfa5-fb4b-4302-9d24-909f0eda1732-pre-adoption-validation-combined-ca-bundle" (OuterVolumeSpecName: "pre-adoption-validation-combined-ca-bundle") pod "d9fbbfa5-fb4b-4302-9d24-909f0eda1732" (UID: "d9fbbfa5-fb4b-4302-9d24-909f0eda1732"). InnerVolumeSpecName "pre-adoption-validation-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:03:51 crc kubenswrapper[4946]: I1128 09:03:51.712762 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9fbbfa5-fb4b-4302-9d24-909f0eda1732-kube-api-access-mqsx5" (OuterVolumeSpecName: "kube-api-access-mqsx5") pod "d9fbbfa5-fb4b-4302-9d24-909f0eda1732" (UID: "d9fbbfa5-fb4b-4302-9d24-909f0eda1732"). InnerVolumeSpecName "kube-api-access-mqsx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:03:51 crc kubenswrapper[4946]: I1128 09:03:51.715612 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9fbbfa5-fb4b-4302-9d24-909f0eda1732-ceph" (OuterVolumeSpecName: "ceph") pod "d9fbbfa5-fb4b-4302-9d24-909f0eda1732" (UID: "d9fbbfa5-fb4b-4302-9d24-909f0eda1732"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:03:51 crc kubenswrapper[4946]: I1128 09:03:51.753306 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9fbbfa5-fb4b-4302-9d24-909f0eda1732-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d9fbbfa5-fb4b-4302-9d24-909f0eda1732" (UID: "d9fbbfa5-fb4b-4302-9d24-909f0eda1732"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:03:51 crc kubenswrapper[4946]: I1128 09:03:51.755174 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9fbbfa5-fb4b-4302-9d24-909f0eda1732-inventory" (OuterVolumeSpecName: "inventory") pod "d9fbbfa5-fb4b-4302-9d24-909f0eda1732" (UID: "d9fbbfa5-fb4b-4302-9d24-909f0eda1732"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:03:51 crc kubenswrapper[4946]: I1128 09:03:51.810329 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqsx5\" (UniqueName: \"kubernetes.io/projected/d9fbbfa5-fb4b-4302-9d24-909f0eda1732-kube-api-access-mqsx5\") on node \"crc\" DevicePath \"\"" Nov 28 09:03:51 crc kubenswrapper[4946]: I1128 09:03:51.810360 4946 reconciler_common.go:293] "Volume detached for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9fbbfa5-fb4b-4302-9d24-909f0eda1732-pre-adoption-validation-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 09:03:51 crc kubenswrapper[4946]: I1128 09:03:51.810370 4946 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d9fbbfa5-fb4b-4302-9d24-909f0eda1732-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 09:03:51 crc kubenswrapper[4946]: I1128 09:03:51.810378 4946 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9fbbfa5-fb4b-4302-9d24-909f0eda1732-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 09:03:51 crc kubenswrapper[4946]: I1128 09:03:51.810386 4946 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d9fbbfa5-fb4b-4302-9d24-909f0eda1732-ceph\") on node \"crc\" DevicePath \"\"" Nov 28 09:03:52 crc kubenswrapper[4946]: I1128 09:03:52.151237 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c4lsdg" event={"ID":"d9fbbfa5-fb4b-4302-9d24-909f0eda1732","Type":"ContainerDied","Data":"0f2543f2229d428b3324ed32ce0517f8519fd2ad650b8018f5d66b1c8a3c6f68"} Nov 28 09:03:52 crc kubenswrapper[4946]: I1128 09:03:52.151274 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f2543f2229d428b3324ed32ce0517f8519fd2ad650b8018f5d66b1c8a3c6f68" Nov 28 09:03:52 crc kubenswrapper[4946]: I1128 09:03:52.151318 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c4lsdg" Nov 28 09:03:54 crc kubenswrapper[4946]: I1128 09:03:54.731146 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 09:03:54 crc kubenswrapper[4946]: I1128 09:03:54.732108 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 09:04:01 crc kubenswrapper[4946]: I1128 09:04:01.506427 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-6tcp2"] Nov 28 09:04:01 crc kubenswrapper[4946]: E1128 09:04:01.507760 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9fbbfa5-fb4b-4302-9d24-909f0eda1732" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Nov 28 09:04:01 crc kubenswrapper[4946]: I1128 09:04:01.507785 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9fbbfa5-fb4b-4302-9d24-909f0eda1732" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Nov 28 09:04:01 crc kubenswrapper[4946]: E1128 09:04:01.507872 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f293b1f4-ce1d-4962-8528-cc59f1a70093" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-networ" Nov 28 09:04:01 crc kubenswrapper[4946]: I1128 09:04:01.507889 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="f293b1f4-ce1d-4962-8528-cc59f1a70093" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-networ" Nov 28 09:04:01 crc kubenswrapper[4946]: I1128 09:04:01.508243 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="f293b1f4-ce1d-4962-8528-cc59f1a70093" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-networ" Nov 28 09:04:01 crc kubenswrapper[4946]: I1128 09:04:01.508285 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9fbbfa5-fb4b-4302-9d24-909f0eda1732" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Nov 28 09:04:01 crc kubenswrapper[4946]: I1128 09:04:01.515538 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-6tcp2" Nov 28 09:04:01 crc kubenswrapper[4946]: I1128 09:04:01.516362 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-tq28p"] Nov 28 09:04:01 crc kubenswrapper[4946]: I1128 09:04:01.518215 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-tq28p" Nov 28 09:04:01 crc kubenswrapper[4946]: I1128 09:04:01.518315 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/041f147a-ac40-4d08-8953-4ba399c7159c-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-6tcp2\" (UID: \"041f147a-ac40-4d08-8953-4ba399c7159c\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-6tcp2" Nov 28 09:04:01 crc kubenswrapper[4946]: I1128 09:04:01.518437 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/041f147a-ac40-4d08-8953-4ba399c7159c-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-6tcp2\" (UID: \"041f147a-ac40-4d08-8953-4ba399c7159c\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-6tcp2" Nov 28 09:04:01 crc kubenswrapper[4946]: I1128 09:04:01.518628 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/041f147a-ac40-4d08-8953-4ba399c7159c-ceph\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-6tcp2\" (UID: \"041f147a-ac40-4d08-8953-4ba399c7159c\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-6tcp2" Nov 28 09:04:01 crc kubenswrapper[4946]: I1128 09:04:01.518713 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnsvr\" (UniqueName: \"kubernetes.io/projected/041f147a-ac40-4d08-8953-4ba399c7159c-kube-api-access-fnsvr\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-6tcp2\" (UID: \"041f147a-ac40-4d08-8953-4ba399c7159c\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-6tcp2" Nov 28 09:04:01 crc kubenswrapper[4946]: I1128 09:04:01.518793 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/041f147a-ac40-4d08-8953-4ba399c7159c-ssh-key\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-6tcp2\" (UID: \"041f147a-ac40-4d08-8953-4ba399c7159c\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-6tcp2" Nov 28 09:04:01 crc kubenswrapper[4946]: I1128 09:04:01.520339 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-networker" Nov 28 09:04:01 crc kubenswrapper[4946]: I1128 09:04:01.521140 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 28 09:04:01 crc kubenswrapper[4946]: I1128 09:04:01.521192 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-networker-dockercfg-7cq8d" Nov 28 09:04:01 crc kubenswrapper[4946]: I1128 09:04:01.521573 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-psw2j" Nov 28 09:04:01 crc kubenswrapper[4946]: I1128 09:04:01.521661 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 28 09:04:01 crc kubenswrapper[4946]: I1128 09:04:01.521698 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 09:04:01 crc kubenswrapper[4946]: I1128 09:04:01.542823 4946 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-tq28p"] Nov 28 09:04:01 crc kubenswrapper[4946]: I1128 09:04:01.561549 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-6tcp2"] Nov 28 09:04:01 crc kubenswrapper[4946]: I1128 09:04:01.620792 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7-ssh-key\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-networker-tq28p\" (UID: \"e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-tq28p" Nov 28 09:04:01 crc kubenswrapper[4946]: I1128 09:04:01.620921 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-networker-tq28p\" (UID: \"e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-tq28p" Nov 28 09:04:01 crc kubenswrapper[4946]: I1128 09:04:01.620957 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/041f147a-ac40-4d08-8953-4ba399c7159c-ceph\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-6tcp2\" (UID: \"041f147a-ac40-4d08-8953-4ba399c7159c\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-6tcp2" Nov 28 09:04:01 crc kubenswrapper[4946]: I1128 09:04:01.620994 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p654\" (UniqueName: \"kubernetes.io/projected/e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7-kube-api-access-9p654\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-networker-tq28p\" (UID: \"e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-tq28p" Nov 28 09:04:01 crc kubenswrapper[4946]: I1128 09:04:01.621062 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnsvr\" (UniqueName: \"kubernetes.io/projected/041f147a-ac40-4d08-8953-4ba399c7159c-kube-api-access-fnsvr\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-6tcp2\" (UID: \"041f147a-ac40-4d08-8953-4ba399c7159c\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-6tcp2" Nov 28 09:04:01 crc kubenswrapper[4946]: I1128 09:04:01.621138 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/041f147a-ac40-4d08-8953-4ba399c7159c-ssh-key\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-6tcp2\" (UID: \"041f147a-ac40-4d08-8953-4ba399c7159c\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-6tcp2" Nov 28 09:04:01 crc kubenswrapper[4946]: I1128 09:04:01.621172 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-networker-tq28p\" (UID: \"e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-tq28p" Nov 28 09:04:01 crc kubenswrapper[4946]: I1128 09:04:01.621219 4946 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/041f147a-ac40-4d08-8953-4ba399c7159c-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-6tcp2\" (UID: \"041f147a-ac40-4d08-8953-4ba399c7159c\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-6tcp2" Nov 28 09:04:01 crc kubenswrapper[4946]: I1128 09:04:01.621276 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/041f147a-ac40-4d08-8953-4ba399c7159c-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-6tcp2\" (UID: \"041f147a-ac40-4d08-8953-4ba399c7159c\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-6tcp2" Nov 28 09:04:01 crc kubenswrapper[4946]: I1128 09:04:01.627330 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/041f147a-ac40-4d08-8953-4ba399c7159c-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-6tcp2\" (UID: \"041f147a-ac40-4d08-8953-4ba399c7159c\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-6tcp2" Nov 28 09:04:01 crc kubenswrapper[4946]: I1128 09:04:01.628440 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/041f147a-ac40-4d08-8953-4ba399c7159c-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-6tcp2\" (UID: \"041f147a-ac40-4d08-8953-4ba399c7159c\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-6tcp2" Nov 28 09:04:01 crc kubenswrapper[4946]: I1128 09:04:01.630250 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/041f147a-ac40-4d08-8953-4ba399c7159c-ceph\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-6tcp2\" (UID: \"041f147a-ac40-4d08-8953-4ba399c7159c\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-6tcp2" Nov 28 09:04:01 crc kubenswrapper[4946]: I1128 09:04:01.635051 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/041f147a-ac40-4d08-8953-4ba399c7159c-ssh-key\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-6tcp2\" (UID: \"041f147a-ac40-4d08-8953-4ba399c7159c\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-6tcp2" Nov 28 09:04:01 crc kubenswrapper[4946]: I1128 09:04:01.643295 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnsvr\" (UniqueName: \"kubernetes.io/projected/041f147a-ac40-4d08-8953-4ba399c7159c-kube-api-access-fnsvr\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-6tcp2\" (UID: \"041f147a-ac40-4d08-8953-4ba399c7159c\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-6tcp2" Nov 28 09:04:01 crc kubenswrapper[4946]: I1128 09:04:01.722038 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-networker-tq28p\" (UID: \"e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-tq28p" Nov 28 09:04:01 crc kubenswrapper[4946]: I1128 09:04:01.722100 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-9p654\" (UniqueName: \"kubernetes.io/projected/e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7-kube-api-access-9p654\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-networker-tq28p\" (UID: \"e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-tq28p" Nov 28 09:04:01 crc kubenswrapper[4946]: I1128 09:04:01.722209 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-networker-tq28p\" (UID: \"e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-tq28p" Nov 28 09:04:01 crc kubenswrapper[4946]: I1128 09:04:01.722295 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7-ssh-key\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-networker-tq28p\" (UID: \"e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-tq28p" Nov 28 09:04:01 crc kubenswrapper[4946]: I1128 09:04:01.726365 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-networker-tq28p\" (UID: \"e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-tq28p" Nov 28 09:04:01 crc kubenswrapper[4946]: I1128 09:04:01.727485 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-networker-tq28p\" (UID: \"e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-tq28p" Nov 28 09:04:01 crc kubenswrapper[4946]: I1128 09:04:01.731011 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7-ssh-key\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-networker-tq28p\" (UID: \"e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-tq28p" Nov 28 09:04:01 crc kubenswrapper[4946]: I1128 09:04:01.757044 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9p654\" (UniqueName: \"kubernetes.io/projected/e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7-kube-api-access-9p654\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-networker-tq28p\" (UID: \"e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-tq28p" Nov 28 09:04:01 crc kubenswrapper[4946]: I1128 09:04:01.861535 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-6tcp2" Nov 28 09:04:01 crc kubenswrapper[4946]: I1128 09:04:01.872473 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-tq28p" Nov 28 09:04:02 crc kubenswrapper[4946]: I1128 09:04:02.493394 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-6tcp2"] Nov 28 09:04:02 crc kubenswrapper[4946]: I1128 09:04:02.587157 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-tq28p"] Nov 28 09:04:03 crc kubenswrapper[4946]: I1128 09:04:03.285038 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-tq28p" event={"ID":"e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7","Type":"ContainerStarted","Data":"5a3c764886489dbbd25f1727fc3cac2046db2f8aab5e0f4869b057917069d5f5"} Nov 28 09:04:03 crc kubenswrapper[4946]: I1128 09:04:03.287532 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-6tcp2" event={"ID":"041f147a-ac40-4d08-8953-4ba399c7159c","Type":"ContainerStarted","Data":"42ee473bb0e8151d3c961bbd7745b3239e282d2f7041ffecd0a778cce2ca25ae"} Nov 28 09:04:04 crc kubenswrapper[4946]: I1128 09:04:04.311165 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-6tcp2" event={"ID":"041f147a-ac40-4d08-8953-4ba399c7159c","Type":"ContainerStarted","Data":"e7d726b5322ab17d7e8021c2055a4f05cdb82669d1aa9575d4a94f5f867b86a8"} Nov 28 09:04:04 crc kubenswrapper[4946]: I1128 09:04:04.316061 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-tq28p" event={"ID":"e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7","Type":"ContainerStarted","Data":"b7c11b91610189fb94a86f68b52674d6f48847858d223bc3f9af6c672935f9ca"} Nov 28 09:04:04 crc kubenswrapper[4946]: I1128 09:04:04.351104 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-6tcp2" podStartSLOduration=2.571365467 podStartE2EDuration="3.351032107s" podCreationTimestamp="2025-11-28 09:04:01 +0000 UTC" firstStartedPulling="2025-11-28 09:04:02.502559987 +0000 UTC m=+7896.880625138" lastFinishedPulling="2025-11-28 09:04:03.282226667 +0000 UTC m=+7897.660291778" observedRunningTime="2025-11-28 09:04:04.346044783 +0000 UTC m=+7898.724109934" watchObservedRunningTime="2025-11-28 09:04:04.351032107 +0000 UTC m=+7898.729097258" Nov 28 09:04:04 crc kubenswrapper[4946]: I1128 09:04:04.376993 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-tq28p" podStartSLOduration=2.6168991950000002 podStartE2EDuration="3.376967129s" podCreationTimestamp="2025-11-28 09:04:01 +0000 UTC" firstStartedPulling="2025-11-28 09:04:02.590089295 +0000 UTC m=+7896.968154416" lastFinishedPulling="2025-11-28 09:04:03.350157229 +0000 UTC m=+7897.728222350" observedRunningTime="2025-11-28 09:04:04.371686678 +0000 UTC m=+7898.749751819" watchObservedRunningTime="2025-11-28 09:04:04.376967129 +0000 UTC m=+7898.755032270" Nov 28 09:04:08 crc kubenswrapper[4946]: I1128 09:04:08.056095 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-qw9dr"] Nov 28 09:04:08 crc kubenswrapper[4946]: I1128 09:04:08.074005 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-9652-account-create-update-xdcdt"] Nov 28 09:04:08 crc kubenswrapper[4946]: I1128 
09:04:08.087723 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-qw9dr"] Nov 28 09:04:08 crc kubenswrapper[4946]: I1128 09:04:08.102672 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-9652-account-create-update-xdcdt"] Nov 28 09:04:10 crc kubenswrapper[4946]: I1128 09:04:10.010954 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd203623-2c1b-4be7-baf7-cf2238097ed1" path="/var/lib/kubelet/pods/cd203623-2c1b-4be7-baf7-cf2238097ed1/volumes" Nov 28 09:04:10 crc kubenswrapper[4946]: I1128 09:04:10.012422 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0cb24b0-63f6-4b88-bf38-35f389e32715" path="/var/lib/kubelet/pods/e0cb24b0-63f6-4b88-bf38-35f389e32715/volumes" Nov 28 09:04:24 crc kubenswrapper[4946]: I1128 09:04:24.730609 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 09:04:24 crc kubenswrapper[4946]: I1128 09:04:24.731576 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 09:04:24 crc kubenswrapper[4946]: I1128 09:04:24.731707 4946 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" Nov 28 09:04:24 crc kubenswrapper[4946]: I1128 09:04:24.733180 4946 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"754338137d28a0c99ab6cd99ffe748df8dc8aeb4903b18cddaca660262b3d025"} pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 09:04:24 crc kubenswrapper[4946]: I1128 09:04:24.733312 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" containerID="cri-o://754338137d28a0c99ab6cd99ffe748df8dc8aeb4903b18cddaca660262b3d025" gracePeriod=600 Nov 28 09:04:25 crc kubenswrapper[4946]: I1128 09:04:25.616931 4946 generic.go:334] "Generic (PLEG): container finished" podID="7450befc-262f-45d1-a5f4-f445e540185b" containerID="754338137d28a0c99ab6cd99ffe748df8dc8aeb4903b18cddaca660262b3d025" exitCode=0 Nov 28 09:04:25 crc kubenswrapper[4946]: I1128 09:04:25.617057 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerDied","Data":"754338137d28a0c99ab6cd99ffe748df8dc8aeb4903b18cddaca660262b3d025"} Nov 28 09:04:25 crc kubenswrapper[4946]: I1128 09:04:25.617658 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerStarted","Data":"ab4ee89bbdc7af78a19c7399e676257a6fc30a42a7abd619fc68ec52c84a5880"} Nov 28 09:04:25 crc kubenswrapper[4946]: I1128 
09:04:25.617695 4946 scope.go:117] "RemoveContainer" containerID="39cf365640a94edb527a038b876ca944a2f17e9a393ff02023ee1ab525b0e376" Nov 28 09:04:36 crc kubenswrapper[4946]: I1128 09:04:36.063504 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-4jgss"] Nov 28 09:04:36 crc kubenswrapper[4946]: I1128 09:04:36.073590 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-4jgss"] Nov 28 09:04:38 crc kubenswrapper[4946]: I1128 09:04:38.008385 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75f84d54-d949-4047-becb-0d08af1d91b5" path="/var/lib/kubelet/pods/75f84d54-d949-4047-becb-0d08af1d91b5/volumes" Nov 28 09:04:51 crc kubenswrapper[4946]: I1128 09:04:51.119846 4946 scope.go:117] "RemoveContainer" containerID="a612814cb9da1360fac311a79a14baf6bad243c6d53ea79be2931a0e660794ec" Nov 28 09:04:51 crc kubenswrapper[4946]: I1128 09:04:51.162406 4946 scope.go:117] "RemoveContainer" containerID="342b9a426b093662706492b35854dec560b3a681a5cfa698c6bf87229c603473" Nov 28 09:04:51 crc kubenswrapper[4946]: I1128 09:04:51.255516 4946 scope.go:117] "RemoveContainer" containerID="f0f1801f8e4c0a7eec8dd6fc1bf01473999ebdec4b25ac1b518d836278a3232f" Nov 28 09:06:35 crc kubenswrapper[4946]: I1128 09:06:35.197298 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rx8sd"] Nov 28 09:06:35 crc kubenswrapper[4946]: I1128 09:06:35.207752 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rx8sd" Nov 28 09:06:35 crc kubenswrapper[4946]: I1128 09:06:35.228610 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rx8sd"] Nov 28 09:06:35 crc kubenswrapper[4946]: I1128 09:06:35.351736 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/296f2f04-739f-4ba5-be15-902ec62a6d38-catalog-content\") pod \"certified-operators-rx8sd\" (UID: \"296f2f04-739f-4ba5-be15-902ec62a6d38\") " pod="openshift-marketplace/certified-operators-rx8sd" Nov 28 09:06:35 crc kubenswrapper[4946]: I1128 09:06:35.351841 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g88ch\" (UniqueName: \"kubernetes.io/projected/296f2f04-739f-4ba5-be15-902ec62a6d38-kube-api-access-g88ch\") pod \"certified-operators-rx8sd\" (UID: \"296f2f04-739f-4ba5-be15-902ec62a6d38\") " pod="openshift-marketplace/certified-operators-rx8sd" Nov 28 09:06:35 crc kubenswrapper[4946]: I1128 09:06:35.351914 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/296f2f04-739f-4ba5-be15-902ec62a6d38-utilities\") pod \"certified-operators-rx8sd\" (UID: \"296f2f04-739f-4ba5-be15-902ec62a6d38\") " pod="openshift-marketplace/certified-operators-rx8sd" Nov 28 09:06:35 crc kubenswrapper[4946]: I1128 09:06:35.454565 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/296f2f04-739f-4ba5-be15-902ec62a6d38-catalog-content\") pod \"certified-operators-rx8sd\" (UID: \"296f2f04-739f-4ba5-be15-902ec62a6d38\") " pod="openshift-marketplace/certified-operators-rx8sd" Nov 28 09:06:35 crc kubenswrapper[4946]: I1128 09:06:35.455145 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/296f2f04-739f-4ba5-be15-902ec62a6d38-catalog-content\") pod \"certified-operators-rx8sd\" (UID: \"296f2f04-739f-4ba5-be15-902ec62a6d38\") " pod="openshift-marketplace/certified-operators-rx8sd" Nov 28 09:06:35 crc kubenswrapper[4946]: I1128 09:06:35.455778 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g88ch\" (UniqueName: \"kubernetes.io/projected/296f2f04-739f-4ba5-be15-902ec62a6d38-kube-api-access-g88ch\") pod \"certified-operators-rx8sd\" (UID: \"296f2f04-739f-4ba5-be15-902ec62a6d38\") " pod="openshift-marketplace/certified-operators-rx8sd" Nov 28 09:06:35 crc kubenswrapper[4946]: I1128 09:06:35.455861 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/296f2f04-739f-4ba5-be15-902ec62a6d38-utilities\") pod \"certified-operators-rx8sd\" (UID: \"296f2f04-739f-4ba5-be15-902ec62a6d38\") " pod="openshift-marketplace/certified-operators-rx8sd" Nov 28 09:06:35 crc kubenswrapper[4946]: I1128 09:06:35.456363 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/296f2f04-739f-4ba5-be15-902ec62a6d38-utilities\") pod \"certified-operators-rx8sd\" (UID: \"296f2f04-739f-4ba5-be15-902ec62a6d38\") " pod="openshift-marketplace/certified-operators-rx8sd" Nov 28 09:06:35 crc kubenswrapper[4946]: I1128 09:06:35.478116 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g88ch\" (UniqueName: \"kubernetes.io/projected/296f2f04-739f-4ba5-be15-902ec62a6d38-kube-api-access-g88ch\") pod \"certified-operators-rx8sd\" (UID: \"296f2f04-739f-4ba5-be15-902ec62a6d38\") " pod="openshift-marketplace/certified-operators-rx8sd" Nov 28 09:06:35 crc kubenswrapper[4946]: I1128 09:06:35.583449 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rx8sd" Nov 28 09:06:36 crc kubenswrapper[4946]: I1128 09:06:36.103941 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rx8sd"] Nov 28 09:06:36 crc kubenswrapper[4946]: I1128 09:06:36.487754 4946 generic.go:334] "Generic (PLEG): container finished" podID="296f2f04-739f-4ba5-be15-902ec62a6d38" containerID="b56f0770967dec818faa8a3dfe71291e51c632dde27f09173a451498fab7ad24" exitCode=0 Nov 28 09:06:36 crc kubenswrapper[4946]: I1128 09:06:36.487805 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rx8sd" event={"ID":"296f2f04-739f-4ba5-be15-902ec62a6d38","Type":"ContainerDied","Data":"b56f0770967dec818faa8a3dfe71291e51c632dde27f09173a451498fab7ad24"} Nov 28 09:06:36 crc kubenswrapper[4946]: I1128 09:06:36.488123 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rx8sd" event={"ID":"296f2f04-739f-4ba5-be15-902ec62a6d38","Type":"ContainerStarted","Data":"6ce0e92d1a3c9ff20715b4f1fd30c8416c8ea7e600e72429d69e6ba37c810cfd"} Nov 28 09:06:36 crc kubenswrapper[4946]: I1128 09:06:36.490276 4946 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 09:06:42 crc kubenswrapper[4946]: I1128 09:06:42.550168 4946 generic.go:334] "Generic (PLEG): container finished" podID="296f2f04-739f-4ba5-be15-902ec62a6d38" containerID="a245a8ef6be2d2dd49f819fda9e18369e77ec2b4460fdaf61fa41aa2ccfebab7" exitCode=0 Nov 28 09:06:42 crc kubenswrapper[4946]: I1128 09:06:42.550342 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rx8sd" event={"ID":"296f2f04-739f-4ba5-be15-902ec62a6d38","Type":"ContainerDied","Data":"a245a8ef6be2d2dd49f819fda9e18369e77ec2b4460fdaf61fa41aa2ccfebab7"} Nov 28 09:06:44 crc kubenswrapper[4946]: I1128 09:06:44.600066 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rx8sd" event={"ID":"296f2f04-739f-4ba5-be15-902ec62a6d38","Type":"ContainerStarted","Data":"334d9b10b20484faddc3d275085d5a4355eea2328763cf2ab2678ea8433411d6"} Nov 28 09:06:44 crc kubenswrapper[4946]: I1128 09:06:44.621842 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rx8sd" podStartSLOduration=2.768712376 podStartE2EDuration="9.621804932s" podCreationTimestamp="2025-11-28 09:06:35 +0000 UTC" firstStartedPulling="2025-11-28 09:06:36.490040978 +0000 UTC m=+8050.868106089" lastFinishedPulling="2025-11-28 09:06:43.343133524 +0000 UTC m=+8057.721198645" observedRunningTime="2025-11-28 09:06:44.617230349 +0000 UTC m=+8058.995295490" watchObservedRunningTime="2025-11-28 09:06:44.621804932 +0000 UTC m=+8058.999870063" Nov 28 09:06:45 crc kubenswrapper[4946]: I1128 09:06:45.583972 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rx8sd" Nov 28 09:06:45 crc kubenswrapper[4946]: I1128 09:06:45.584328 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rx8sd" Nov 28 09:06:45 crc kubenswrapper[4946]: I1128 09:06:45.645450 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rx8sd" Nov 28 09:06:54 crc kubenswrapper[4946]: I1128 09:06:54.730871 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 09:06:54 crc kubenswrapper[4946]: I1128 09:06:54.731494 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 09:06:55 crc kubenswrapper[4946]: I1128 09:06:55.674705 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rx8sd" Nov 28 09:06:55 crc kubenswrapper[4946]: I1128 09:06:55.762383 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rx8sd"] Nov 28 09:06:55 crc kubenswrapper[4946]: I1128 09:06:55.825039 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5rk47"] Nov 28 09:06:55 crc kubenswrapper[4946]: I1128 09:06:55.825272 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5rk47" podUID="affebde7-d83f-4478-85fa-2dcba3ec2499" containerName="registry-server" containerID="cri-o://9c3e47b930804795d4e9b89145622e35b9c34eee790bff2fe27e6bfcf7e83b31" gracePeriod=2 Nov 28 09:06:56 crc kubenswrapper[4946]: I1128 09:06:56.301375 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5rk47" Nov 28 09:06:56 crc kubenswrapper[4946]: I1128 09:06:56.464103 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/affebde7-d83f-4478-85fa-2dcba3ec2499-catalog-content\") pod \"affebde7-d83f-4478-85fa-2dcba3ec2499\" (UID: \"affebde7-d83f-4478-85fa-2dcba3ec2499\") " Nov 28 09:06:56 crc kubenswrapper[4946]: I1128 09:06:56.464166 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/affebde7-d83f-4478-85fa-2dcba3ec2499-utilities\") pod \"affebde7-d83f-4478-85fa-2dcba3ec2499\" (UID: \"affebde7-d83f-4478-85fa-2dcba3ec2499\") " Nov 28 09:06:56 crc kubenswrapper[4946]: I1128 09:06:56.464432 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhlfk\" (UniqueName: \"kubernetes.io/projected/affebde7-d83f-4478-85fa-2dcba3ec2499-kube-api-access-vhlfk\") pod \"affebde7-d83f-4478-85fa-2dcba3ec2499\" (UID: \"affebde7-d83f-4478-85fa-2dcba3ec2499\") " Nov 28 09:06:56 crc kubenswrapper[4946]: I1128 09:06:56.466447 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/affebde7-d83f-4478-85fa-2dcba3ec2499-utilities" (OuterVolumeSpecName: "utilities") pod "affebde7-d83f-4478-85fa-2dcba3ec2499" (UID: "affebde7-d83f-4478-85fa-2dcba3ec2499"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 09:06:56 crc kubenswrapper[4946]: I1128 09:06:56.470874 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/affebde7-d83f-4478-85fa-2dcba3ec2499-kube-api-access-vhlfk" (OuterVolumeSpecName: "kube-api-access-vhlfk") pod "affebde7-d83f-4478-85fa-2dcba3ec2499" (UID: "affebde7-d83f-4478-85fa-2dcba3ec2499"). InnerVolumeSpecName "kube-api-access-vhlfk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:06:56 crc kubenswrapper[4946]: I1128 09:06:56.512293 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/affebde7-d83f-4478-85fa-2dcba3ec2499-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "affebde7-d83f-4478-85fa-2dcba3ec2499" (UID: "affebde7-d83f-4478-85fa-2dcba3ec2499"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 09:06:56 crc kubenswrapper[4946]: I1128 09:06:56.566822 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhlfk\" (UniqueName: \"kubernetes.io/projected/affebde7-d83f-4478-85fa-2dcba3ec2499-kube-api-access-vhlfk\") on node \"crc\" DevicePath \"\"" Nov 28 09:06:56 crc kubenswrapper[4946]: I1128 09:06:56.566859 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/affebde7-d83f-4478-85fa-2dcba3ec2499-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 09:06:56 crc kubenswrapper[4946]: I1128 09:06:56.566872 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/affebde7-d83f-4478-85fa-2dcba3ec2499-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 09:06:56 crc kubenswrapper[4946]: I1128 09:06:56.747539 4946 generic.go:334] "Generic (PLEG): container finished" podID="affebde7-d83f-4478-85fa-2dcba3ec2499" containerID="9c3e47b930804795d4e9b89145622e35b9c34eee790bff2fe27e6bfcf7e83b31" exitCode=0 Nov 28 09:06:56 crc kubenswrapper[4946]: I1128 09:06:56.747597 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5rk47" event={"ID":"affebde7-d83f-4478-85fa-2dcba3ec2499","Type":"ContainerDied","Data":"9c3e47b930804795d4e9b89145622e35b9c34eee790bff2fe27e6bfcf7e83b31"} Nov 28 09:06:56 crc kubenswrapper[4946]: I1128 09:06:56.747618 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5rk47" Nov 28 09:06:56 crc kubenswrapper[4946]: I1128 09:06:56.747635 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5rk47" event={"ID":"affebde7-d83f-4478-85fa-2dcba3ec2499","Type":"ContainerDied","Data":"3f6c157240b95f9093427a2409d9c381a8a3939b6f288cadac5512e3f052e825"} Nov 28 09:06:56 crc kubenswrapper[4946]: I1128 09:06:56.747657 4946 scope.go:117] "RemoveContainer" containerID="9c3e47b930804795d4e9b89145622e35b9c34eee790bff2fe27e6bfcf7e83b31" Nov 28 09:06:56 crc kubenswrapper[4946]: I1128 09:06:56.780227 4946 scope.go:117] "RemoveContainer" containerID="4efe97155707deb6860ae9515826ad72caad61f259ecd515ef7536baad8a5ade" Nov 28 09:06:56 crc kubenswrapper[4946]: I1128 09:06:56.785849 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5rk47"] Nov 28 09:06:56 crc kubenswrapper[4946]: I1128 09:06:56.795639 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5rk47"] Nov 28 09:06:56 crc kubenswrapper[4946]: I1128 09:06:56.804573 4946 scope.go:117] "RemoveContainer" containerID="528837ebaa02f1c228381924dbb77c8ce056319330899045fc68f1cd0f5263b4" Nov 28 09:06:56 crc kubenswrapper[4946]: I1128 09:06:56.857690 4946 scope.go:117] "RemoveContainer" containerID="9c3e47b930804795d4e9b89145622e35b9c34eee790bff2fe27e6bfcf7e83b31" Nov 28 09:06:56 crc kubenswrapper[4946]: E1128 09:06:56.858182 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c3e47b930804795d4e9b89145622e35b9c34eee790bff2fe27e6bfcf7e83b31\": container with ID starting with 9c3e47b930804795d4e9b89145622e35b9c34eee790bff2fe27e6bfcf7e83b31 not found: ID does not exist" containerID="9c3e47b930804795d4e9b89145622e35b9c34eee790bff2fe27e6bfcf7e83b31" Nov 28 09:06:56 crc kubenswrapper[4946]: I1128 09:06:56.858224 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c3e47b930804795d4e9b89145622e35b9c34eee790bff2fe27e6bfcf7e83b31"} err="failed to get container status \"9c3e47b930804795d4e9b89145622e35b9c34eee790bff2fe27e6bfcf7e83b31\": rpc error: code = NotFound desc = could not find container \"9c3e47b930804795d4e9b89145622e35b9c34eee790bff2fe27e6bfcf7e83b31\": container with ID starting with 9c3e47b930804795d4e9b89145622e35b9c34eee790bff2fe27e6bfcf7e83b31 not found: ID does not exist" Nov 28 09:06:56 crc kubenswrapper[4946]: I1128 09:06:56.858249 4946 scope.go:117] "RemoveContainer" containerID="4efe97155707deb6860ae9515826ad72caad61f259ecd515ef7536baad8a5ade" Nov 28 09:06:56 crc kubenswrapper[4946]: E1128 09:06:56.858619 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4efe97155707deb6860ae9515826ad72caad61f259ecd515ef7536baad8a5ade\": container with ID starting with 4efe97155707deb6860ae9515826ad72caad61f259ecd515ef7536baad8a5ade not found: ID does not exist" containerID="4efe97155707deb6860ae9515826ad72caad61f259ecd515ef7536baad8a5ade" Nov 28 09:06:56 crc kubenswrapper[4946]: I1128 09:06:56.858642 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4efe97155707deb6860ae9515826ad72caad61f259ecd515ef7536baad8a5ade"} err="failed to get container status \"4efe97155707deb6860ae9515826ad72caad61f259ecd515ef7536baad8a5ade\": rpc error: code = NotFound desc = could not find 
container \"4efe97155707deb6860ae9515826ad72caad61f259ecd515ef7536baad8a5ade\": container with ID starting with 4efe97155707deb6860ae9515826ad72caad61f259ecd515ef7536baad8a5ade not found: ID does not exist" Nov 28 09:06:56 crc kubenswrapper[4946]: I1128 09:06:56.858656 4946 scope.go:117] "RemoveContainer" containerID="528837ebaa02f1c228381924dbb77c8ce056319330899045fc68f1cd0f5263b4" Nov 28 09:06:56 crc kubenswrapper[4946]: E1128 09:06:56.858906 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"528837ebaa02f1c228381924dbb77c8ce056319330899045fc68f1cd0f5263b4\": container with ID starting with 528837ebaa02f1c228381924dbb77c8ce056319330899045fc68f1cd0f5263b4 not found: ID does not exist" containerID="528837ebaa02f1c228381924dbb77c8ce056319330899045fc68f1cd0f5263b4" Nov 28 09:06:56 crc kubenswrapper[4946]: I1128 09:06:56.858949 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"528837ebaa02f1c228381924dbb77c8ce056319330899045fc68f1cd0f5263b4"} err="failed to get container status \"528837ebaa02f1c228381924dbb77c8ce056319330899045fc68f1cd0f5263b4\": rpc error: code = NotFound desc = could not find container \"528837ebaa02f1c228381924dbb77c8ce056319330899045fc68f1cd0f5263b4\": container with ID starting with 528837ebaa02f1c228381924dbb77c8ce056319330899045fc68f1cd0f5263b4 not found: ID does not exist" Nov 28 09:06:58 crc kubenswrapper[4946]: I1128 09:06:58.003441 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="affebde7-d83f-4478-85fa-2dcba3ec2499" path="/var/lib/kubelet/pods/affebde7-d83f-4478-85fa-2dcba3ec2499/volumes" Nov 28 09:07:24 crc kubenswrapper[4946]: I1128 09:07:24.731222 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 09:07:24 crc kubenswrapper[4946]: I1128 09:07:24.732643 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 09:07:28 crc kubenswrapper[4946]: I1128 09:07:28.179175 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jbvgh"] Nov 28 09:07:28 crc kubenswrapper[4946]: E1128 09:07:28.180005 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="affebde7-d83f-4478-85fa-2dcba3ec2499" containerName="registry-server" Nov 28 09:07:28 crc kubenswrapper[4946]: I1128 09:07:28.180016 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="affebde7-d83f-4478-85fa-2dcba3ec2499" containerName="registry-server" Nov 28 09:07:28 crc kubenswrapper[4946]: E1128 09:07:28.180032 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="affebde7-d83f-4478-85fa-2dcba3ec2499" containerName="extract-content" Nov 28 09:07:28 crc kubenswrapper[4946]: I1128 09:07:28.180049 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="affebde7-d83f-4478-85fa-2dcba3ec2499" containerName="extract-content" Nov 28 09:07:28 crc kubenswrapper[4946]: E1128 09:07:28.180067 4946 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="affebde7-d83f-4478-85fa-2dcba3ec2499" containerName="extract-utilities" Nov 28 09:07:28 crc kubenswrapper[4946]: I1128 09:07:28.180073 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="affebde7-d83f-4478-85fa-2dcba3ec2499" containerName="extract-utilities" Nov 28 09:07:28 crc kubenswrapper[4946]: I1128 09:07:28.182036 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="affebde7-d83f-4478-85fa-2dcba3ec2499" containerName="registry-server" Nov 28 09:07:28 crc kubenswrapper[4946]: I1128 09:07:28.183382 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jbvgh" Nov 28 09:07:28 crc kubenswrapper[4946]: I1128 09:07:28.195823 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jbvgh"] Nov 28 09:07:28 crc kubenswrapper[4946]: I1128 09:07:28.325016 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f110733-aff4-4839-aca5-d1f8781020ff-catalog-content\") pod \"redhat-operators-jbvgh\" (UID: \"1f110733-aff4-4839-aca5-d1f8781020ff\") " pod="openshift-marketplace/redhat-operators-jbvgh" Nov 28 09:07:28 crc kubenswrapper[4946]: I1128 09:07:28.325141 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f110733-aff4-4839-aca5-d1f8781020ff-utilities\") pod \"redhat-operators-jbvgh\" (UID: \"1f110733-aff4-4839-aca5-d1f8781020ff\") " pod="openshift-marketplace/redhat-operators-jbvgh" Nov 28 09:07:28 crc kubenswrapper[4946]: I1128 09:07:28.325204 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvr77\" (UniqueName: \"kubernetes.io/projected/1f110733-aff4-4839-aca5-d1f8781020ff-kube-api-access-bvr77\") pod \"redhat-operators-jbvgh\" (UID: \"1f110733-aff4-4839-aca5-d1f8781020ff\") " pod="openshift-marketplace/redhat-operators-jbvgh" Nov 28 09:07:28 crc kubenswrapper[4946]: I1128 09:07:28.427388 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f110733-aff4-4839-aca5-d1f8781020ff-catalog-content\") pod \"redhat-operators-jbvgh\" (UID: \"1f110733-aff4-4839-aca5-d1f8781020ff\") " pod="openshift-marketplace/redhat-operators-jbvgh" Nov 28 09:07:28 crc kubenswrapper[4946]: I1128 09:07:28.427484 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f110733-aff4-4839-aca5-d1f8781020ff-utilities\") pod \"redhat-operators-jbvgh\" (UID: \"1f110733-aff4-4839-aca5-d1f8781020ff\") " pod="openshift-marketplace/redhat-operators-jbvgh" Nov 28 09:07:28 crc kubenswrapper[4946]: I1128 09:07:28.427528 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvr77\" (UniqueName: \"kubernetes.io/projected/1f110733-aff4-4839-aca5-d1f8781020ff-kube-api-access-bvr77\") pod \"redhat-operators-jbvgh\" (UID: \"1f110733-aff4-4839-aca5-d1f8781020ff\") " pod="openshift-marketplace/redhat-operators-jbvgh" Nov 28 09:07:28 crc kubenswrapper[4946]: I1128 09:07:28.428007 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f110733-aff4-4839-aca5-d1f8781020ff-utilities\") pod \"redhat-operators-jbvgh\" (UID: 
\"1f110733-aff4-4839-aca5-d1f8781020ff\") " pod="openshift-marketplace/redhat-operators-jbvgh" Nov 28 09:07:28 crc kubenswrapper[4946]: I1128 09:07:28.428278 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f110733-aff4-4839-aca5-d1f8781020ff-catalog-content\") pod \"redhat-operators-jbvgh\" (UID: \"1f110733-aff4-4839-aca5-d1f8781020ff\") " pod="openshift-marketplace/redhat-operators-jbvgh" Nov 28 09:07:28 crc kubenswrapper[4946]: I1128 09:07:28.449366 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvr77\" (UniqueName: \"kubernetes.io/projected/1f110733-aff4-4839-aca5-d1f8781020ff-kube-api-access-bvr77\") pod \"redhat-operators-jbvgh\" (UID: \"1f110733-aff4-4839-aca5-d1f8781020ff\") " pod="openshift-marketplace/redhat-operators-jbvgh" Nov 28 09:07:28 crc kubenswrapper[4946]: I1128 09:07:28.519539 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jbvgh" Nov 28 09:07:29 crc kubenswrapper[4946]: I1128 09:07:29.055260 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jbvgh"] Nov 28 09:07:29 crc kubenswrapper[4946]: I1128 09:07:29.174909 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jbvgh" event={"ID":"1f110733-aff4-4839-aca5-d1f8781020ff","Type":"ContainerStarted","Data":"5536f0daf6121586ff2c852d7ddf6aa1f83a4bb226ab4292bcc947c9fbb3c6bc"} Nov 28 09:07:30 crc kubenswrapper[4946]: I1128 09:07:30.186668 4946 generic.go:334] "Generic (PLEG): container finished" podID="1f110733-aff4-4839-aca5-d1f8781020ff" containerID="e558072b6d48126ed4480a319800eaf797498cb7de0845b82dd09cd74ffb75b4" exitCode=0 Nov 28 09:07:30 crc kubenswrapper[4946]: I1128 09:07:30.187022 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jbvgh" event={"ID":"1f110733-aff4-4839-aca5-d1f8781020ff","Type":"ContainerDied","Data":"e558072b6d48126ed4480a319800eaf797498cb7de0845b82dd09cd74ffb75b4"} Nov 28 09:07:30 crc kubenswrapper[4946]: I1128 09:07:30.776687 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cfvzj"] Nov 28 09:07:30 crc kubenswrapper[4946]: I1128 09:07:30.778954 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cfvzj" Nov 28 09:07:30 crc kubenswrapper[4946]: I1128 09:07:30.785736 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cfvzj"] Nov 28 09:07:30 crc kubenswrapper[4946]: I1128 09:07:30.877963 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af4aab3f-c7c1-4d95-84eb-2562db990f24-catalog-content\") pod \"redhat-marketplace-cfvzj\" (UID: \"af4aab3f-c7c1-4d95-84eb-2562db990f24\") " pod="openshift-marketplace/redhat-marketplace-cfvzj" Nov 28 09:07:30 crc kubenswrapper[4946]: I1128 09:07:30.878048 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af4aab3f-c7c1-4d95-84eb-2562db990f24-utilities\") pod \"redhat-marketplace-cfvzj\" (UID: \"af4aab3f-c7c1-4d95-84eb-2562db990f24\") " pod="openshift-marketplace/redhat-marketplace-cfvzj" Nov 28 09:07:30 crc kubenswrapper[4946]: I1128 09:07:30.878146 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8vqz\" (UniqueName: \"kubernetes.io/projected/af4aab3f-c7c1-4d95-84eb-2562db990f24-kube-api-access-s8vqz\") pod \"redhat-marketplace-cfvzj\" (UID: \"af4aab3f-c7c1-4d95-84eb-2562db990f24\") " pod="openshift-marketplace/redhat-marketplace-cfvzj" Nov 28 09:07:30 crc kubenswrapper[4946]: I1128 09:07:30.981157 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af4aab3f-c7c1-4d95-84eb-2562db990f24-catalog-content\") pod \"redhat-marketplace-cfvzj\" (UID: \"af4aab3f-c7c1-4d95-84eb-2562db990f24\") " pod="openshift-marketplace/redhat-marketplace-cfvzj" Nov 28 09:07:30 crc kubenswrapper[4946]: I1128 09:07:30.981227 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af4aab3f-c7c1-4d95-84eb-2562db990f24-utilities\") pod \"redhat-marketplace-cfvzj\" (UID: \"af4aab3f-c7c1-4d95-84eb-2562db990f24\") " pod="openshift-marketplace/redhat-marketplace-cfvzj" Nov 28 09:07:30 crc kubenswrapper[4946]: I1128 09:07:30.981253 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8vqz\" (UniqueName: \"kubernetes.io/projected/af4aab3f-c7c1-4d95-84eb-2562db990f24-kube-api-access-s8vqz\") pod \"redhat-marketplace-cfvzj\" (UID: \"af4aab3f-c7c1-4d95-84eb-2562db990f24\") " pod="openshift-marketplace/redhat-marketplace-cfvzj" Nov 28 09:07:30 crc kubenswrapper[4946]: I1128 09:07:30.981602 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af4aab3f-c7c1-4d95-84eb-2562db990f24-catalog-content\") pod \"redhat-marketplace-cfvzj\" (UID: \"af4aab3f-c7c1-4d95-84eb-2562db990f24\") " pod="openshift-marketplace/redhat-marketplace-cfvzj" Nov 28 09:07:30 crc kubenswrapper[4946]: I1128 09:07:30.981692 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af4aab3f-c7c1-4d95-84eb-2562db990f24-utilities\") pod \"redhat-marketplace-cfvzj\" (UID: \"af4aab3f-c7c1-4d95-84eb-2562db990f24\") " pod="openshift-marketplace/redhat-marketplace-cfvzj" Nov 28 09:07:31 crc kubenswrapper[4946]: I1128 09:07:31.001230 4946 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-s8vqz\" (UniqueName: \"kubernetes.io/projected/af4aab3f-c7c1-4d95-84eb-2562db990f24-kube-api-access-s8vqz\") pod \"redhat-marketplace-cfvzj\" (UID: \"af4aab3f-c7c1-4d95-84eb-2562db990f24\") " pod="openshift-marketplace/redhat-marketplace-cfvzj" Nov 28 09:07:31 crc kubenswrapper[4946]: I1128 09:07:31.105934 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cfvzj" Nov 28 09:07:31 crc kubenswrapper[4946]: I1128 09:07:31.203155 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jbvgh" event={"ID":"1f110733-aff4-4839-aca5-d1f8781020ff","Type":"ContainerStarted","Data":"03845cefd307794eb4d6018d5abc3190827b06453dd398603bcc14db5173c309"} Nov 28 09:07:31 crc kubenswrapper[4946]: I1128 09:07:31.676240 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cfvzj"] Nov 28 09:07:31 crc kubenswrapper[4946]: W1128 09:07:31.678620 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf4aab3f_c7c1_4d95_84eb_2562db990f24.slice/crio-3736d86b18d7ae2b015a5b300bcd92a18c4d798f741d8bc7fa76098c24b679f9 WatchSource:0}: Error finding container 3736d86b18d7ae2b015a5b300bcd92a18c4d798f741d8bc7fa76098c24b679f9: Status 404 returned error can't find the container with id 3736d86b18d7ae2b015a5b300bcd92a18c4d798f741d8bc7fa76098c24b679f9 Nov 28 09:07:32 crc kubenswrapper[4946]: I1128 09:07:32.250902 4946 generic.go:334] "Generic (PLEG): container finished" podID="af4aab3f-c7c1-4d95-84eb-2562db990f24" containerID="4a0eabf4cdb098593a37f8ac1275fe3afc68ff91875ef9380a095b07f2b82bc6" exitCode=0 Nov 28 09:07:32 crc kubenswrapper[4946]: I1128 09:07:32.253517 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cfvzj" event={"ID":"af4aab3f-c7c1-4d95-84eb-2562db990f24","Type":"ContainerDied","Data":"4a0eabf4cdb098593a37f8ac1275fe3afc68ff91875ef9380a095b07f2b82bc6"} Nov 28 09:07:32 crc kubenswrapper[4946]: I1128 09:07:32.253786 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cfvzj" event={"ID":"af4aab3f-c7c1-4d95-84eb-2562db990f24","Type":"ContainerStarted","Data":"3736d86b18d7ae2b015a5b300bcd92a18c4d798f741d8bc7fa76098c24b679f9"} Nov 28 09:07:34 crc kubenswrapper[4946]: I1128 09:07:34.270778 4946 generic.go:334] "Generic (PLEG): container finished" podID="af4aab3f-c7c1-4d95-84eb-2562db990f24" containerID="e52ae45ff669ceb25df715122ea700142029d73a9538ecd9afa480c58ec5699d" exitCode=0 Nov 28 09:07:34 crc kubenswrapper[4946]: I1128 09:07:34.270928 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cfvzj" event={"ID":"af4aab3f-c7c1-4d95-84eb-2562db990f24","Type":"ContainerDied","Data":"e52ae45ff669ceb25df715122ea700142029d73a9538ecd9afa480c58ec5699d"} Nov 28 09:07:34 crc kubenswrapper[4946]: I1128 09:07:34.273444 4946 generic.go:334] "Generic (PLEG): container finished" podID="1f110733-aff4-4839-aca5-d1f8781020ff" containerID="03845cefd307794eb4d6018d5abc3190827b06453dd398603bcc14db5173c309" exitCode=0 Nov 28 09:07:34 crc kubenswrapper[4946]: I1128 09:07:34.273485 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jbvgh" 
event={"ID":"1f110733-aff4-4839-aca5-d1f8781020ff","Type":"ContainerDied","Data":"03845cefd307794eb4d6018d5abc3190827b06453dd398603bcc14db5173c309"} Nov 28 09:07:35 crc kubenswrapper[4946]: I1128 09:07:35.287034 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cfvzj" event={"ID":"af4aab3f-c7c1-4d95-84eb-2562db990f24","Type":"ContainerStarted","Data":"c363a9ccf67b9f2709e77eb0cc33ea8d413fc6f1584c6c28106254917a842c72"} Nov 28 09:07:35 crc kubenswrapper[4946]: I1128 09:07:35.289954 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jbvgh" event={"ID":"1f110733-aff4-4839-aca5-d1f8781020ff","Type":"ContainerStarted","Data":"7a59e56c143ed2abb701164cdc7b0e9c881c31f0c7e51add8cccad4520538344"} Nov 28 09:07:35 crc kubenswrapper[4946]: I1128 09:07:35.317164 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cfvzj" podStartSLOduration=2.793261362 podStartE2EDuration="5.317149018s" podCreationTimestamp="2025-11-28 09:07:30 +0000 UTC" firstStartedPulling="2025-11-28 09:07:32.263642691 +0000 UTC m=+8106.641707802" lastFinishedPulling="2025-11-28 09:07:34.787530347 +0000 UTC m=+8109.165595458" observedRunningTime="2025-11-28 09:07:35.31439934 +0000 UTC m=+8109.692464461" watchObservedRunningTime="2025-11-28 09:07:35.317149018 +0000 UTC m=+8109.695214119" Nov 28 09:07:35 crc kubenswrapper[4946]: I1128 09:07:35.334585 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jbvgh" podStartSLOduration=2.553046611 podStartE2EDuration="7.33456775s" podCreationTimestamp="2025-11-28 09:07:28 +0000 UTC" firstStartedPulling="2025-11-28 09:07:30.189647551 +0000 UTC m=+8104.567712672" lastFinishedPulling="2025-11-28 09:07:34.9711687 +0000 UTC m=+8109.349233811" observedRunningTime="2025-11-28 09:07:35.332981181 +0000 UTC m=+8109.711046302" watchObservedRunningTime="2025-11-28 09:07:35.33456775 +0000 UTC m=+8109.712632861" Nov 28 09:07:38 crc kubenswrapper[4946]: I1128 09:07:38.520999 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jbvgh" Nov 28 09:07:38 crc kubenswrapper[4946]: I1128 09:07:38.521486 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jbvgh" Nov 28 09:07:39 crc kubenswrapper[4946]: I1128 09:07:39.574301 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jbvgh" podUID="1f110733-aff4-4839-aca5-d1f8781020ff" containerName="registry-server" probeResult="failure" output=< Nov 28 09:07:39 crc kubenswrapper[4946]: timeout: failed to connect service ":50051" within 1s Nov 28 09:07:39 crc kubenswrapper[4946]: > Nov 28 09:07:41 crc kubenswrapper[4946]: I1128 09:07:41.106197 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cfvzj" Nov 28 09:07:41 crc kubenswrapper[4946]: I1128 09:07:41.106286 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cfvzj" Nov 28 09:07:41 crc kubenswrapper[4946]: I1128 09:07:41.157387 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cfvzj" Nov 28 09:07:41 crc kubenswrapper[4946]: I1128 09:07:41.405260 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-cfvzj" Nov 28 09:07:44 crc kubenswrapper[4946]: I1128 09:07:44.980665 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cfvzj"] Nov 28 09:07:44 crc kubenswrapper[4946]: I1128 09:07:44.981761 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cfvzj" podUID="af4aab3f-c7c1-4d95-84eb-2562db990f24" containerName="registry-server" containerID="cri-o://c363a9ccf67b9f2709e77eb0cc33ea8d413fc6f1584c6c28106254917a842c72" gracePeriod=2 Nov 28 09:07:45 crc kubenswrapper[4946]: I1128 09:07:45.385221 4946 generic.go:334] "Generic (PLEG): container finished" podID="af4aab3f-c7c1-4d95-84eb-2562db990f24" containerID="c363a9ccf67b9f2709e77eb0cc33ea8d413fc6f1584c6c28106254917a842c72" exitCode=0 Nov 28 09:07:45 crc kubenswrapper[4946]: I1128 09:07:45.385568 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cfvzj" event={"ID":"af4aab3f-c7c1-4d95-84eb-2562db990f24","Type":"ContainerDied","Data":"c363a9ccf67b9f2709e77eb0cc33ea8d413fc6f1584c6c28106254917a842c72"} Nov 28 09:07:45 crc kubenswrapper[4946]: I1128 09:07:45.531206 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cfvzj" Nov 28 09:07:45 crc kubenswrapper[4946]: I1128 09:07:45.631092 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8vqz\" (UniqueName: \"kubernetes.io/projected/af4aab3f-c7c1-4d95-84eb-2562db990f24-kube-api-access-s8vqz\") pod \"af4aab3f-c7c1-4d95-84eb-2562db990f24\" (UID: \"af4aab3f-c7c1-4d95-84eb-2562db990f24\") " Nov 28 09:07:45 crc kubenswrapper[4946]: I1128 09:07:45.631290 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af4aab3f-c7c1-4d95-84eb-2562db990f24-catalog-content\") pod \"af4aab3f-c7c1-4d95-84eb-2562db990f24\" (UID: \"af4aab3f-c7c1-4d95-84eb-2562db990f24\") " Nov 28 09:07:45 crc kubenswrapper[4946]: I1128 09:07:45.631446 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af4aab3f-c7c1-4d95-84eb-2562db990f24-utilities\") pod \"af4aab3f-c7c1-4d95-84eb-2562db990f24\" (UID: \"af4aab3f-c7c1-4d95-84eb-2562db990f24\") " Nov 28 09:07:45 crc kubenswrapper[4946]: I1128 09:07:45.632427 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af4aab3f-c7c1-4d95-84eb-2562db990f24-utilities" (OuterVolumeSpecName: "utilities") pod "af4aab3f-c7c1-4d95-84eb-2562db990f24" (UID: "af4aab3f-c7c1-4d95-84eb-2562db990f24"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 09:07:45 crc kubenswrapper[4946]: I1128 09:07:45.637435 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af4aab3f-c7c1-4d95-84eb-2562db990f24-kube-api-access-s8vqz" (OuterVolumeSpecName: "kube-api-access-s8vqz") pod "af4aab3f-c7c1-4d95-84eb-2562db990f24" (UID: "af4aab3f-c7c1-4d95-84eb-2562db990f24"). InnerVolumeSpecName "kube-api-access-s8vqz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:07:45 crc kubenswrapper[4946]: I1128 09:07:45.647155 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af4aab3f-c7c1-4d95-84eb-2562db990f24-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "af4aab3f-c7c1-4d95-84eb-2562db990f24" (UID: "af4aab3f-c7c1-4d95-84eb-2562db990f24"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 09:07:45 crc kubenswrapper[4946]: I1128 09:07:45.734053 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af4aab3f-c7c1-4d95-84eb-2562db990f24-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 09:07:45 crc kubenswrapper[4946]: I1128 09:07:45.734111 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8vqz\" (UniqueName: \"kubernetes.io/projected/af4aab3f-c7c1-4d95-84eb-2562db990f24-kube-api-access-s8vqz\") on node \"crc\" DevicePath \"\"" Nov 28 09:07:45 crc kubenswrapper[4946]: I1128 09:07:45.734130 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af4aab3f-c7c1-4d95-84eb-2562db990f24-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 09:07:46 crc kubenswrapper[4946]: I1128 09:07:46.398054 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cfvzj" event={"ID":"af4aab3f-c7c1-4d95-84eb-2562db990f24","Type":"ContainerDied","Data":"3736d86b18d7ae2b015a5b300bcd92a18c4d798f741d8bc7fa76098c24b679f9"} Nov 28 09:07:46 crc kubenswrapper[4946]: I1128 09:07:46.398108 4946 scope.go:117] "RemoveContainer" containerID="c363a9ccf67b9f2709e77eb0cc33ea8d413fc6f1584c6c28106254917a842c72" Nov 28 09:07:46 crc kubenswrapper[4946]: I1128 09:07:46.398161 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cfvzj" Nov 28 09:07:46 crc kubenswrapper[4946]: I1128 09:07:46.425238 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cfvzj"] Nov 28 09:07:46 crc kubenswrapper[4946]: I1128 09:07:46.434838 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cfvzj"] Nov 28 09:07:46 crc kubenswrapper[4946]: I1128 09:07:46.444680 4946 scope.go:117] "RemoveContainer" containerID="e52ae45ff669ceb25df715122ea700142029d73a9538ecd9afa480c58ec5699d" Nov 28 09:07:46 crc kubenswrapper[4946]: I1128 09:07:46.478901 4946 scope.go:117] "RemoveContainer" containerID="4a0eabf4cdb098593a37f8ac1275fe3afc68ff91875ef9380a095b07f2b82bc6" Nov 28 09:07:48 crc kubenswrapper[4946]: I1128 09:07:48.002583 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af4aab3f-c7c1-4d95-84eb-2562db990f24" path="/var/lib/kubelet/pods/af4aab3f-c7c1-4d95-84eb-2562db990f24/volumes" Nov 28 09:07:49 crc kubenswrapper[4946]: I1128 09:07:49.575605 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jbvgh" podUID="1f110733-aff4-4839-aca5-d1f8781020ff" containerName="registry-server" probeResult="failure" output=< Nov 28 09:07:49 crc kubenswrapper[4946]: timeout: failed to connect service ":50051" within 1s Nov 28 09:07:49 crc kubenswrapper[4946]: > Nov 28 09:07:54 crc kubenswrapper[4946]: I1128 09:07:54.731191 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 09:07:54 crc kubenswrapper[4946]: I1128 09:07:54.735337 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 09:07:54 crc kubenswrapper[4946]: I1128 09:07:54.735743 4946 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" Nov 28 09:07:54 crc kubenswrapper[4946]: I1128 09:07:54.739074 4946 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ab4ee89bbdc7af78a19c7399e676257a6fc30a42a7abd619fc68ec52c84a5880"} pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 09:07:54 crc kubenswrapper[4946]: I1128 09:07:54.739704 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" containerID="cri-o://ab4ee89bbdc7af78a19c7399e676257a6fc30a42a7abd619fc68ec52c84a5880" gracePeriod=600 Nov 28 09:07:54 crc kubenswrapper[4946]: E1128 09:07:54.860939 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:07:55 crc kubenswrapper[4946]: I1128 09:07:55.504089 4946 generic.go:334] "Generic (PLEG): container finished" podID="7450befc-262f-45d1-a5f4-f445e540185b" containerID="ab4ee89bbdc7af78a19c7399e676257a6fc30a42a7abd619fc68ec52c84a5880" exitCode=0 Nov 28 09:07:55 crc kubenswrapper[4946]: I1128 09:07:55.504176 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerDied","Data":"ab4ee89bbdc7af78a19c7399e676257a6fc30a42a7abd619fc68ec52c84a5880"} Nov 28 09:07:55 crc kubenswrapper[4946]: I1128 09:07:55.504377 4946 scope.go:117] "RemoveContainer" containerID="754338137d28a0c99ab6cd99ffe748df8dc8aeb4903b18cddaca660262b3d025" Nov 28 09:07:55 crc kubenswrapper[4946]: I1128 09:07:55.505216 4946 scope.go:117] "RemoveContainer" containerID="ab4ee89bbdc7af78a19c7399e676257a6fc30a42a7abd619fc68ec52c84a5880" Nov 28 09:07:55 crc kubenswrapper[4946]: E1128 09:07:55.505788 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:07:58 crc kubenswrapper[4946]: I1128 09:07:58.573471 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jbvgh" Nov 28 09:07:58 crc kubenswrapper[4946]: I1128 09:07:58.631673 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jbvgh" Nov 28 09:07:59 crc kubenswrapper[4946]: I1128 09:07:59.386897 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jbvgh"] Nov 28 09:08:00 crc kubenswrapper[4946]: I1128 09:08:00.568366 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jbvgh" podUID="1f110733-aff4-4839-aca5-d1f8781020ff" containerName="registry-server" containerID="cri-o://7a59e56c143ed2abb701164cdc7b0e9c881c31f0c7e51add8cccad4520538344" gracePeriod=2 Nov 28 09:08:01 crc kubenswrapper[4946]: I1128 09:08:01.080944 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jbvgh" Nov 28 09:08:01 crc kubenswrapper[4946]: I1128 09:08:01.170495 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f110733-aff4-4839-aca5-d1f8781020ff-utilities\") pod \"1f110733-aff4-4839-aca5-d1f8781020ff\" (UID: \"1f110733-aff4-4839-aca5-d1f8781020ff\") " Nov 28 09:08:01 crc kubenswrapper[4946]: I1128 09:08:01.170577 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f110733-aff4-4839-aca5-d1f8781020ff-catalog-content\") pod \"1f110733-aff4-4839-aca5-d1f8781020ff\" (UID: \"1f110733-aff4-4839-aca5-d1f8781020ff\") " Nov 28 09:08:01 crc kubenswrapper[4946]: I1128 09:08:01.170601 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvr77\" (UniqueName: \"kubernetes.io/projected/1f110733-aff4-4839-aca5-d1f8781020ff-kube-api-access-bvr77\") pod \"1f110733-aff4-4839-aca5-d1f8781020ff\" (UID: \"1f110733-aff4-4839-aca5-d1f8781020ff\") " Nov 28 09:08:01 crc kubenswrapper[4946]: I1128 09:08:01.171983 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f110733-aff4-4839-aca5-d1f8781020ff-utilities" (OuterVolumeSpecName: "utilities") pod "1f110733-aff4-4839-aca5-d1f8781020ff" (UID: "1f110733-aff4-4839-aca5-d1f8781020ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 09:08:01 crc kubenswrapper[4946]: I1128 09:08:01.178687 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f110733-aff4-4839-aca5-d1f8781020ff-kube-api-access-bvr77" (OuterVolumeSpecName: "kube-api-access-bvr77") pod "1f110733-aff4-4839-aca5-d1f8781020ff" (UID: "1f110733-aff4-4839-aca5-d1f8781020ff"). InnerVolumeSpecName "kube-api-access-bvr77". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:08:01 crc kubenswrapper[4946]: I1128 09:08:01.267306 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f110733-aff4-4839-aca5-d1f8781020ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f110733-aff4-4839-aca5-d1f8781020ff" (UID: "1f110733-aff4-4839-aca5-d1f8781020ff"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 09:08:01 crc kubenswrapper[4946]: I1128 09:08:01.272359 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f110733-aff4-4839-aca5-d1f8781020ff-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 09:08:01 crc kubenswrapper[4946]: I1128 09:08:01.272385 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f110733-aff4-4839-aca5-d1f8781020ff-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 09:08:01 crc kubenswrapper[4946]: I1128 09:08:01.272401 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvr77\" (UniqueName: \"kubernetes.io/projected/1f110733-aff4-4839-aca5-d1f8781020ff-kube-api-access-bvr77\") on node \"crc\" DevicePath \"\"" Nov 28 09:08:01 crc kubenswrapper[4946]: I1128 09:08:01.582961 4946 generic.go:334] "Generic (PLEG): container finished" podID="1f110733-aff4-4839-aca5-d1f8781020ff" containerID="7a59e56c143ed2abb701164cdc7b0e9c881c31f0c7e51add8cccad4520538344" exitCode=0 Nov 28 09:08:01 crc kubenswrapper[4946]: I1128 09:08:01.583014 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jbvgh" event={"ID":"1f110733-aff4-4839-aca5-d1f8781020ff","Type":"ContainerDied","Data":"7a59e56c143ed2abb701164cdc7b0e9c881c31f0c7e51add8cccad4520538344"} Nov 28 09:08:01 crc kubenswrapper[4946]: I1128 09:08:01.583046 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jbvgh" Nov 28 09:08:01 crc kubenswrapper[4946]: I1128 09:08:01.583074 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jbvgh" event={"ID":"1f110733-aff4-4839-aca5-d1f8781020ff","Type":"ContainerDied","Data":"5536f0daf6121586ff2c852d7ddf6aa1f83a4bb226ab4292bcc947c9fbb3c6bc"} Nov 28 09:08:01 crc kubenswrapper[4946]: I1128 09:08:01.583100 4946 scope.go:117] "RemoveContainer" containerID="7a59e56c143ed2abb701164cdc7b0e9c881c31f0c7e51add8cccad4520538344" Nov 28 09:08:01 crc kubenswrapper[4946]: I1128 09:08:01.618905 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jbvgh"] Nov 28 09:08:01 crc kubenswrapper[4946]: I1128 09:08:01.626602 4946 scope.go:117] "RemoveContainer" containerID="03845cefd307794eb4d6018d5abc3190827b06453dd398603bcc14db5173c309" Nov 28 09:08:01 crc kubenswrapper[4946]: I1128 09:08:01.630710 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jbvgh"] Nov 28 09:08:01 crc kubenswrapper[4946]: I1128 09:08:01.655520 4946 scope.go:117] "RemoveContainer" containerID="e558072b6d48126ed4480a319800eaf797498cb7de0845b82dd09cd74ffb75b4" Nov 28 09:08:01 crc kubenswrapper[4946]: I1128 09:08:01.696448 4946 scope.go:117] "RemoveContainer" containerID="7a59e56c143ed2abb701164cdc7b0e9c881c31f0c7e51add8cccad4520538344" Nov 28 09:08:01 crc kubenswrapper[4946]: E1128 09:08:01.696883 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a59e56c143ed2abb701164cdc7b0e9c881c31f0c7e51add8cccad4520538344\": container with ID starting with 7a59e56c143ed2abb701164cdc7b0e9c881c31f0c7e51add8cccad4520538344 not found: ID does not exist" containerID="7a59e56c143ed2abb701164cdc7b0e9c881c31f0c7e51add8cccad4520538344" Nov 28 09:08:01 crc kubenswrapper[4946]: I1128 09:08:01.696916 4946 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a59e56c143ed2abb701164cdc7b0e9c881c31f0c7e51add8cccad4520538344"} err="failed to get container status \"7a59e56c143ed2abb701164cdc7b0e9c881c31f0c7e51add8cccad4520538344\": rpc error: code = NotFound desc = could not find container \"7a59e56c143ed2abb701164cdc7b0e9c881c31f0c7e51add8cccad4520538344\": container with ID starting with 7a59e56c143ed2abb701164cdc7b0e9c881c31f0c7e51add8cccad4520538344 not found: ID does not exist" Nov 28 09:08:01 crc kubenswrapper[4946]: I1128 09:08:01.696936 4946 scope.go:117] "RemoveContainer" containerID="03845cefd307794eb4d6018d5abc3190827b06453dd398603bcc14db5173c309" Nov 28 09:08:01 crc kubenswrapper[4946]: E1128 09:08:01.697165 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03845cefd307794eb4d6018d5abc3190827b06453dd398603bcc14db5173c309\": container with ID starting with 03845cefd307794eb4d6018d5abc3190827b06453dd398603bcc14db5173c309 not found: ID does not exist" containerID="03845cefd307794eb4d6018d5abc3190827b06453dd398603bcc14db5173c309" Nov 28 09:08:01 crc kubenswrapper[4946]: I1128 09:08:01.697182 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03845cefd307794eb4d6018d5abc3190827b06453dd398603bcc14db5173c309"} err="failed to get container status \"03845cefd307794eb4d6018d5abc3190827b06453dd398603bcc14db5173c309\": rpc error: code = NotFound desc = could not find container \"03845cefd307794eb4d6018d5abc3190827b06453dd398603bcc14db5173c309\": container with ID starting with 03845cefd307794eb4d6018d5abc3190827b06453dd398603bcc14db5173c309 not found: ID does not exist" Nov 28 09:08:01 crc kubenswrapper[4946]: I1128 09:08:01.697198 4946 scope.go:117] "RemoveContainer" containerID="e558072b6d48126ed4480a319800eaf797498cb7de0845b82dd09cd74ffb75b4" Nov 28 09:08:01 crc kubenswrapper[4946]: E1128 09:08:01.697519 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e558072b6d48126ed4480a319800eaf797498cb7de0845b82dd09cd74ffb75b4\": container with ID starting with e558072b6d48126ed4480a319800eaf797498cb7de0845b82dd09cd74ffb75b4 not found: ID does not exist" containerID="e558072b6d48126ed4480a319800eaf797498cb7de0845b82dd09cd74ffb75b4" Nov 28 09:08:01 crc kubenswrapper[4946]: I1128 09:08:01.697535 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e558072b6d48126ed4480a319800eaf797498cb7de0845b82dd09cd74ffb75b4"} err="failed to get container status \"e558072b6d48126ed4480a319800eaf797498cb7de0845b82dd09cd74ffb75b4\": rpc error: code = NotFound desc = could not find container \"e558072b6d48126ed4480a319800eaf797498cb7de0845b82dd09cd74ffb75b4\": container with ID starting with e558072b6d48126ed4480a319800eaf797498cb7de0845b82dd09cd74ffb75b4 not found: ID does not exist" Nov 28 09:08:02 crc kubenswrapper[4946]: I1128 09:08:02.003064 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f110733-aff4-4839-aca5-d1f8781020ff" path="/var/lib/kubelet/pods/1f110733-aff4-4839-aca5-d1f8781020ff/volumes" Nov 28 09:08:06 crc kubenswrapper[4946]: I1128 09:08:06.990195 4946 scope.go:117] "RemoveContainer" containerID="ab4ee89bbdc7af78a19c7399e676257a6fc30a42a7abd619fc68ec52c84a5880" Nov 28 09:08:06 crc kubenswrapper[4946]: E1128 09:08:06.990885 4946 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:08:19 crc kubenswrapper[4946]: I1128 09:08:19.990192 4946 scope.go:117] "RemoveContainer" containerID="ab4ee89bbdc7af78a19c7399e676257a6fc30a42a7abd619fc68ec52c84a5880" Nov 28 09:08:19 crc kubenswrapper[4946]: E1128 09:08:19.991380 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:08:30 crc kubenswrapper[4946]: I1128 09:08:30.039251 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-nk2js"] Nov 28 09:08:30 crc kubenswrapper[4946]: I1128 09:08:30.054009 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-1c1b-account-create-update-4s6kt"] Nov 28 09:08:30 crc kubenswrapper[4946]: I1128 09:08:30.062076 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-nk2js"] Nov 28 09:08:30 crc kubenswrapper[4946]: I1128 09:08:30.068854 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-1c1b-account-create-update-4s6kt"] Nov 28 09:08:30 crc kubenswrapper[4946]: I1128 09:08:30.945658 4946 generic.go:334] "Generic (PLEG): container finished" podID="e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7" containerID="b7c11b91610189fb94a86f68b52674d6f48847858d223bc3f9af6c672935f9ca" exitCode=0 Nov 28 09:08:30 crc kubenswrapper[4946]: I1128 09:08:30.945739 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-tq28p" event={"ID":"e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7","Type":"ContainerDied","Data":"b7c11b91610189fb94a86f68b52674d6f48847858d223bc3f9af6c672935f9ca"} Nov 28 09:08:32 crc kubenswrapper[4946]: I1128 09:08:32.006814 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a776c3e3-091d-4963-9aa8-744e319c7193" path="/var/lib/kubelet/pods/a776c3e3-091d-4963-9aa8-744e319c7193/volumes" Nov 28 09:08:32 crc kubenswrapper[4946]: I1128 09:08:32.008864 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae25b6b6-23c8-4e7b-bb5a-9efa36f9ca05" path="/var/lib/kubelet/pods/ae25b6b6-23c8-4e7b-bb5a-9efa36f9ca05/volumes" Nov 28 09:08:32 crc kubenswrapper[4946]: I1128 09:08:32.532335 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-tq28p" Nov 28 09:08:32 crc kubenswrapper[4946]: I1128 09:08:32.606885 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7-inventory\") pod \"e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7\" (UID: \"e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7\") " Nov 28 09:08:32 crc kubenswrapper[4946]: I1128 09:08:32.607148 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7-ssh-key\") pod \"e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7\" (UID: \"e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7\") " Nov 28 09:08:32 crc kubenswrapper[4946]: I1128 09:08:32.607203 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9p654\" (UniqueName: \"kubernetes.io/projected/e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7-kube-api-access-9p654\") pod \"e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7\" (UID: \"e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7\") " Nov 28 09:08:32 crc kubenswrapper[4946]: I1128 09:08:32.607333 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7-tripleo-cleanup-combined-ca-bundle\") pod \"e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7\" (UID: \"e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7\") " Nov 28 09:08:32 crc kubenswrapper[4946]: I1128 09:08:32.614015 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7-kube-api-access-9p654" (OuterVolumeSpecName: "kube-api-access-9p654") pod "e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7" (UID: "e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7"). InnerVolumeSpecName "kube-api-access-9p654". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:08:32 crc kubenswrapper[4946]: I1128 09:08:32.614179 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7-tripleo-cleanup-combined-ca-bundle" (OuterVolumeSpecName: "tripleo-cleanup-combined-ca-bundle") pod "e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7" (UID: "e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7"). InnerVolumeSpecName "tripleo-cleanup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:08:32 crc kubenswrapper[4946]: E1128 09:08:32.647852 4946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7-ssh-key podName:e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7 nodeName:}" failed. No retries permitted until 2025-11-28 09:08:33.147795396 +0000 UTC m=+8167.525860547 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "ssh-key" (UniqueName: "kubernetes.io/secret/e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7-ssh-key") pod "e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7" (UID: "e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7") : error deleting /var/lib/kubelet/pods/e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7/volume-subpaths: remove /var/lib/kubelet/pods/e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7/volume-subpaths: no such file or directory
Nov 28 09:08:32 crc kubenswrapper[4946]: I1128 09:08:32.651046 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7-inventory" (OuterVolumeSpecName: "inventory") pod "e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7" (UID: "e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 09:08:32 crc kubenswrapper[4946]: I1128 09:08:32.709736 4946 reconciler_common.go:293] "Volume detached for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7-tripleo-cleanup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 09:08:32 crc kubenswrapper[4946]: I1128 09:08:32.709779 4946 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7-inventory\") on node \"crc\" DevicePath \"\""
Nov 28 09:08:32 crc kubenswrapper[4946]: I1128 09:08:32.709789 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9p654\" (UniqueName: \"kubernetes.io/projected/e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7-kube-api-access-9p654\") on node \"crc\" DevicePath \"\""
Nov 28 09:08:32 crc kubenswrapper[4946]: I1128 09:08:32.976069 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-tq28p" event={"ID":"e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7","Type":"ContainerDied","Data":"5a3c764886489dbbd25f1727fc3cac2046db2f8aab5e0f4869b057917069d5f5"}
Nov 28 09:08:32 crc kubenswrapper[4946]: I1128 09:08:32.976138 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a3c764886489dbbd25f1727fc3cac2046db2f8aab5e0f4869b057917069d5f5"
Nov 28 09:08:32 crc kubenswrapper[4946]: I1128 09:08:32.976227 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-tq28p"
Nov 28 09:08:33 crc kubenswrapper[4946]: I1128 09:08:33.219814 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7-ssh-key\") pod \"e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7\" (UID: \"e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7\") "
Nov 28 09:08:33 crc kubenswrapper[4946]: I1128 09:08:33.227760 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7" (UID: "e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 09:08:33 crc kubenswrapper[4946]: I1128 09:08:33.324162 4946 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 28 09:08:33 crc kubenswrapper[4946]: I1128 09:08:33.990714 4946 scope.go:117] "RemoveContainer" containerID="ab4ee89bbdc7af78a19c7399e676257a6fc30a42a7abd619fc68ec52c84a5880"
Nov 28 09:08:33 crc kubenswrapper[4946]: E1128 09:08:33.991664 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
Nov 28 09:08:45 crc kubenswrapper[4946]: I1128 09:08:45.998305 4946 scope.go:117] "RemoveContainer" containerID="ab4ee89bbdc7af78a19c7399e676257a6fc30a42a7abd619fc68ec52c84a5880"
Nov 28 09:08:46 crc kubenswrapper[4946]: E1128 09:08:45.999293 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
Nov 28 09:08:46 crc kubenswrapper[4946]: I1128 09:08:46.051553 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-54t9z"]
Nov 28 09:08:46 crc kubenswrapper[4946]: I1128 09:08:46.064769 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-54t9z"]
Nov 28 09:08:48 crc kubenswrapper[4946]: I1128 09:08:48.006698 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b293abfb-6014-4e21-9e78-8feaf9fc1e9a" path="/var/lib/kubelet/pods/b293abfb-6014-4e21-9e78-8feaf9fc1e9a/volumes"
Nov 28 09:08:51 crc kubenswrapper[4946]: I1128 09:08:51.495972 4946 scope.go:117] "RemoveContainer" containerID="2700e4b47f97107ad18af77b3a498462c2373cf8c7057fd37e3c840de9e922dd"
Nov 28 09:08:51 crc kubenswrapper[4946]: I1128 09:08:51.532904 4946 scope.go:117] "RemoveContainer" containerID="2b8043f241b011e60413288f86953c9a8b73558b86119f8eb0c5d0d9fa53c05d"
Nov 28 09:08:51 crc kubenswrapper[4946]: I1128 09:08:51.559872 4946 scope.go:117] "RemoveContainer" containerID="60524d7294875c3469beb2399d3a2373f56ce2c6f76b580fee476918f40131fe"
Nov 28 09:08:51 crc kubenswrapper[4946]: I1128 09:08:51.587028 4946 scope.go:117] "RemoveContainer" containerID="1248b3066e93cde578f2079e7d9b519d362908349eb3d5f9b29415f1370e3694"
Nov 28 09:08:51 crc kubenswrapper[4946]: I1128 09:08:51.618091 4946 scope.go:117] "RemoveContainer" containerID="042371e87b3537c7ae1e624d26dbd2adbbbab31db293e16d6fcbb639b6efabbc"
Nov 28 09:08:51 crc kubenswrapper[4946]: I1128 09:08:51.652430 4946 scope.go:117] "RemoveContainer" containerID="de302d81b64cda88fa16ace4909b818f00fed9b1acd965c2f7144987b8e64d21"
Nov 28 09:08:51 crc kubenswrapper[4946]: I1128 09:08:51.674828 4946 scope.go:117] "RemoveContainer" containerID="f15407ec2e722978c2af1347446c35e5edf1066bae5dd69fc797a8cee128483f"
Nov 28 09:08:58 crc kubenswrapper[4946]: I1128 09:08:58.989613 4946 scope.go:117] "RemoveContainer" containerID="ab4ee89bbdc7af78a19c7399e676257a6fc30a42a7abd619fc68ec52c84a5880"
Nov 28 09:08:58 crc kubenswrapper[4946]: E1128 09:08:58.990422 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
Nov 28 09:09:12 crc kubenswrapper[4946]: I1128 09:09:12.990732 4946 scope.go:117] "RemoveContainer" containerID="ab4ee89bbdc7af78a19c7399e676257a6fc30a42a7abd619fc68ec52c84a5880"
Nov 28 09:09:12 crc kubenswrapper[4946]: E1128 09:09:12.991937 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
Nov 28 09:09:26 crc kubenswrapper[4946]: I1128 09:09:26.004136 4946 scope.go:117] "RemoveContainer" containerID="ab4ee89bbdc7af78a19c7399e676257a6fc30a42a7abd619fc68ec52c84a5880"
Nov 28 09:09:26 crc kubenswrapper[4946]: E1128 09:09:26.005238 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
Nov 28 09:09:36 crc kubenswrapper[4946]: I1128 09:09:36.990378 4946 scope.go:117] "RemoveContainer" containerID="ab4ee89bbdc7af78a19c7399e676257a6fc30a42a7abd619fc68ec52c84a5880"
Nov 28 09:09:36 crc kubenswrapper[4946]: E1128 09:09:36.991487 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
Nov 28 09:09:49 crc kubenswrapper[4946]: I1128 09:09:49.990916 4946 scope.go:117] "RemoveContainer" containerID="ab4ee89bbdc7af78a19c7399e676257a6fc30a42a7abd619fc68ec52c84a5880"
Nov 28 09:09:49 crc kubenswrapper[4946]: E1128 09:09:49.991828 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
Nov 28 09:10:04 crc kubenswrapper[4946]: I1128 09:10:04.990425 4946 scope.go:117] "RemoveContainer" containerID="ab4ee89bbdc7af78a19c7399e676257a6fc30a42a7abd619fc68ec52c84a5880"
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:10:19 crc kubenswrapper[4946]: I1128 09:10:19.991698 4946 scope.go:117] "RemoveContainer" containerID="ab4ee89bbdc7af78a19c7399e676257a6fc30a42a7abd619fc68ec52c84a5880" Nov 28 09:10:19 crc kubenswrapper[4946]: E1128 09:10:19.994882 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:10:32 crc kubenswrapper[4946]: I1128 09:10:32.989745 4946 scope.go:117] "RemoveContainer" containerID="ab4ee89bbdc7af78a19c7399e676257a6fc30a42a7abd619fc68ec52c84a5880" Nov 28 09:10:32 crc kubenswrapper[4946]: E1128 09:10:32.991070 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:10:45 crc kubenswrapper[4946]: I1128 09:10:45.998694 4946 scope.go:117] "RemoveContainer" containerID="ab4ee89bbdc7af78a19c7399e676257a6fc30a42a7abd619fc68ec52c84a5880" Nov 28 09:10:46 crc kubenswrapper[4946]: E1128 09:10:45.999933 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:10:48 crc kubenswrapper[4946]: I1128 09:10:48.214192 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-r4224"] Nov 28 09:10:48 crc kubenswrapper[4946]: E1128 09:10:48.215519 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af4aab3f-c7c1-4d95-84eb-2562db990f24" containerName="extract-utilities" Nov 28 09:10:48 crc kubenswrapper[4946]: I1128 09:10:48.215541 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="af4aab3f-c7c1-4d95-84eb-2562db990f24" containerName="extract-utilities" Nov 28 09:10:48 crc kubenswrapper[4946]: E1128 09:10:48.215566 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af4aab3f-c7c1-4d95-84eb-2562db990f24" containerName="extract-content" Nov 28 09:10:48 crc kubenswrapper[4946]: I1128 09:10:48.215576 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="af4aab3f-c7c1-4d95-84eb-2562db990f24" containerName="extract-content" Nov 28 09:10:48 crc kubenswrapper[4946]: E1128 09:10:48.215602 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7" 
containerName="tripleo-cleanup-tripleo-cleanup-openstack-networker" Nov 28 09:10:48 crc kubenswrapper[4946]: I1128 09:10:48.215615 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7" containerName="tripleo-cleanup-tripleo-cleanup-openstack-networker" Nov 28 09:10:48 crc kubenswrapper[4946]: E1128 09:10:48.215637 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f110733-aff4-4839-aca5-d1f8781020ff" containerName="extract-content" Nov 28 09:10:48 crc kubenswrapper[4946]: I1128 09:10:48.215650 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f110733-aff4-4839-aca5-d1f8781020ff" containerName="extract-content" Nov 28 09:10:48 crc kubenswrapper[4946]: E1128 09:10:48.215680 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f110733-aff4-4839-aca5-d1f8781020ff" containerName="extract-utilities" Nov 28 09:10:48 crc kubenswrapper[4946]: I1128 09:10:48.215691 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f110733-aff4-4839-aca5-d1f8781020ff" containerName="extract-utilities" Nov 28 09:10:48 crc kubenswrapper[4946]: E1128 09:10:48.215710 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af4aab3f-c7c1-4d95-84eb-2562db990f24" containerName="registry-server" Nov 28 09:10:48 crc kubenswrapper[4946]: I1128 09:10:48.215720 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="af4aab3f-c7c1-4d95-84eb-2562db990f24" containerName="registry-server" Nov 28 09:10:48 crc kubenswrapper[4946]: E1128 09:10:48.215747 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f110733-aff4-4839-aca5-d1f8781020ff" containerName="registry-server" Nov 28 09:10:48 crc kubenswrapper[4946]: I1128 09:10:48.215758 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f110733-aff4-4839-aca5-d1f8781020ff" containerName="registry-server" Nov 28 09:10:48 crc kubenswrapper[4946]: I1128 09:10:48.216044 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f110733-aff4-4839-aca5-d1f8781020ff" containerName="registry-server" Nov 28 09:10:48 crc kubenswrapper[4946]: I1128 09:10:48.216104 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7" containerName="tripleo-cleanup-tripleo-cleanup-openstack-networker" Nov 28 09:10:48 crc kubenswrapper[4946]: I1128 09:10:48.216124 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="af4aab3f-c7c1-4d95-84eb-2562db990f24" containerName="registry-server" Nov 28 09:10:48 crc kubenswrapper[4946]: I1128 09:10:48.218629 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-r4224" Nov 28 09:10:48 crc kubenswrapper[4946]: I1128 09:10:48.228794 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r4224"] Nov 28 09:10:48 crc kubenswrapper[4946]: I1128 09:10:48.339777 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blm5p\" (UniqueName: \"kubernetes.io/projected/4205737e-b016-4ea3-8e9e-883cf22d96e2-kube-api-access-blm5p\") pod \"community-operators-r4224\" (UID: \"4205737e-b016-4ea3-8e9e-883cf22d96e2\") " pod="openshift-marketplace/community-operators-r4224" Nov 28 09:10:48 crc kubenswrapper[4946]: I1128 09:10:48.339850 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4205737e-b016-4ea3-8e9e-883cf22d96e2-catalog-content\") pod \"community-operators-r4224\" (UID: \"4205737e-b016-4ea3-8e9e-883cf22d96e2\") " pod="openshift-marketplace/community-operators-r4224" Nov 28 09:10:48 crc kubenswrapper[4946]: I1128 09:10:48.340321 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4205737e-b016-4ea3-8e9e-883cf22d96e2-utilities\") pod \"community-operators-r4224\" (UID: \"4205737e-b016-4ea3-8e9e-883cf22d96e2\") " pod="openshift-marketplace/community-operators-r4224" Nov 28 09:10:48 crc kubenswrapper[4946]: I1128 09:10:48.442296 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4205737e-b016-4ea3-8e9e-883cf22d96e2-utilities\") pod \"community-operators-r4224\" (UID: \"4205737e-b016-4ea3-8e9e-883cf22d96e2\") " pod="openshift-marketplace/community-operators-r4224" Nov 28 09:10:48 crc kubenswrapper[4946]: I1128 09:10:48.442429 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blm5p\" (UniqueName: \"kubernetes.io/projected/4205737e-b016-4ea3-8e9e-883cf22d96e2-kube-api-access-blm5p\") pod \"community-operators-r4224\" (UID: \"4205737e-b016-4ea3-8e9e-883cf22d96e2\") " pod="openshift-marketplace/community-operators-r4224" Nov 28 09:10:48 crc kubenswrapper[4946]: I1128 09:10:48.442593 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4205737e-b016-4ea3-8e9e-883cf22d96e2-catalog-content\") pod \"community-operators-r4224\" (UID: \"4205737e-b016-4ea3-8e9e-883cf22d96e2\") " pod="openshift-marketplace/community-operators-r4224" Nov 28 09:10:48 crc kubenswrapper[4946]: I1128 09:10:48.442936 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4205737e-b016-4ea3-8e9e-883cf22d96e2-utilities\") pod \"community-operators-r4224\" (UID: \"4205737e-b016-4ea3-8e9e-883cf22d96e2\") " pod="openshift-marketplace/community-operators-r4224" Nov 28 09:10:48 crc kubenswrapper[4946]: I1128 09:10:48.442944 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4205737e-b016-4ea3-8e9e-883cf22d96e2-catalog-content\") pod \"community-operators-r4224\" (UID: \"4205737e-b016-4ea3-8e9e-883cf22d96e2\") " pod="openshift-marketplace/community-operators-r4224" Nov 28 09:10:48 crc kubenswrapper[4946]: I1128 09:10:48.465474 4946 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-blm5p\" (UniqueName: \"kubernetes.io/projected/4205737e-b016-4ea3-8e9e-883cf22d96e2-kube-api-access-blm5p\") pod \"community-operators-r4224\" (UID: \"4205737e-b016-4ea3-8e9e-883cf22d96e2\") " pod="openshift-marketplace/community-operators-r4224" Nov 28 09:10:48 crc kubenswrapper[4946]: I1128 09:10:48.560186 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r4224" Nov 28 09:10:49 crc kubenswrapper[4946]: I1128 09:10:49.123584 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r4224"] Nov 28 09:10:49 crc kubenswrapper[4946]: I1128 09:10:49.657178 4946 generic.go:334] "Generic (PLEG): container finished" podID="4205737e-b016-4ea3-8e9e-883cf22d96e2" containerID="27e09de53eee663f41a8673024b6b248a1976e2576f8cb660802a73b9bf923b5" exitCode=0 Nov 28 09:10:49 crc kubenswrapper[4946]: I1128 09:10:49.657409 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r4224" event={"ID":"4205737e-b016-4ea3-8e9e-883cf22d96e2","Type":"ContainerDied","Data":"27e09de53eee663f41a8673024b6b248a1976e2576f8cb660802a73b9bf923b5"} Nov 28 09:10:49 crc kubenswrapper[4946]: I1128 09:10:49.657586 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r4224" event={"ID":"4205737e-b016-4ea3-8e9e-883cf22d96e2","Type":"ContainerStarted","Data":"dc87402c6744c59d18028a541b84dad03d489c69d15d2e97dcf5293d2e552245"} Nov 28 09:10:50 crc kubenswrapper[4946]: I1128 09:10:50.673135 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r4224" event={"ID":"4205737e-b016-4ea3-8e9e-883cf22d96e2","Type":"ContainerStarted","Data":"c05344c4e5931403c20a2983817f78deb417b2a637b6b70c9aa5de2a94e9e2c8"} Nov 28 09:10:51 crc kubenswrapper[4946]: I1128 09:10:51.689657 4946 generic.go:334] "Generic (PLEG): container finished" podID="4205737e-b016-4ea3-8e9e-883cf22d96e2" containerID="c05344c4e5931403c20a2983817f78deb417b2a637b6b70c9aa5de2a94e9e2c8" exitCode=0 Nov 28 09:10:51 crc kubenswrapper[4946]: I1128 09:10:51.689787 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r4224" event={"ID":"4205737e-b016-4ea3-8e9e-883cf22d96e2","Type":"ContainerDied","Data":"c05344c4e5931403c20a2983817f78deb417b2a637b6b70c9aa5de2a94e9e2c8"} Nov 28 09:10:52 crc kubenswrapper[4946]: I1128 09:10:52.704697 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r4224" event={"ID":"4205737e-b016-4ea3-8e9e-883cf22d96e2","Type":"ContainerStarted","Data":"0c8287fcd31a71b062a0dc507c4244d2d944593480a107cd86caaeae47cc501f"} Nov 28 09:10:52 crc kubenswrapper[4946]: I1128 09:10:52.752631 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-r4224" podStartSLOduration=2.251102833 podStartE2EDuration="4.752597324s" podCreationTimestamp="2025-11-28 09:10:48 +0000 UTC" firstStartedPulling="2025-11-28 09:10:49.660204104 +0000 UTC m=+8304.038269225" lastFinishedPulling="2025-11-28 09:10:52.161698555 +0000 UTC m=+8306.539763716" observedRunningTime="2025-11-28 09:10:52.730851885 +0000 UTC m=+8307.108917066" watchObservedRunningTime="2025-11-28 09:10:52.752597324 +0000 UTC m=+8307.130662475" Nov 28 09:10:56 crc kubenswrapper[4946]: I1128 09:10:56.990531 4946 scope.go:117] "RemoveContainer" 
containerID="ab4ee89bbdc7af78a19c7399e676257a6fc30a42a7abd619fc68ec52c84a5880" Nov 28 09:10:56 crc kubenswrapper[4946]: E1128 09:10:56.991438 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:10:58 crc kubenswrapper[4946]: I1128 09:10:58.560557 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-r4224" Nov 28 09:10:58 crc kubenswrapper[4946]: I1128 09:10:58.561142 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-r4224" Nov 28 09:10:58 crc kubenswrapper[4946]: I1128 09:10:58.651612 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-r4224" Nov 28 09:10:58 crc kubenswrapper[4946]: I1128 09:10:58.875429 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-r4224" Nov 28 09:10:58 crc kubenswrapper[4946]: I1128 09:10:58.964824 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r4224"] Nov 28 09:11:00 crc kubenswrapper[4946]: I1128 09:11:00.814377 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-r4224" podUID="4205737e-b016-4ea3-8e9e-883cf22d96e2" containerName="registry-server" containerID="cri-o://0c8287fcd31a71b062a0dc507c4244d2d944593480a107cd86caaeae47cc501f" gracePeriod=2 Nov 28 09:11:01 crc kubenswrapper[4946]: I1128 09:11:01.827715 4946 generic.go:334] "Generic (PLEG): container finished" podID="4205737e-b016-4ea3-8e9e-883cf22d96e2" containerID="0c8287fcd31a71b062a0dc507c4244d2d944593480a107cd86caaeae47cc501f" exitCode=0 Nov 28 09:11:01 crc kubenswrapper[4946]: I1128 09:11:01.827936 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r4224" event={"ID":"4205737e-b016-4ea3-8e9e-883cf22d96e2","Type":"ContainerDied","Data":"0c8287fcd31a71b062a0dc507c4244d2d944593480a107cd86caaeae47cc501f"} Nov 28 09:11:01 crc kubenswrapper[4946]: I1128 09:11:01.828249 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r4224" event={"ID":"4205737e-b016-4ea3-8e9e-883cf22d96e2","Type":"ContainerDied","Data":"dc87402c6744c59d18028a541b84dad03d489c69d15d2e97dcf5293d2e552245"} Nov 28 09:11:01 crc kubenswrapper[4946]: I1128 09:11:01.828268 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc87402c6744c59d18028a541b84dad03d489c69d15d2e97dcf5293d2e552245" Nov 28 09:11:01 crc kubenswrapper[4946]: I1128 09:11:01.903869 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-r4224" Nov 28 09:11:02 crc kubenswrapper[4946]: I1128 09:11:02.000549 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4205737e-b016-4ea3-8e9e-883cf22d96e2-catalog-content\") pod \"4205737e-b016-4ea3-8e9e-883cf22d96e2\" (UID: \"4205737e-b016-4ea3-8e9e-883cf22d96e2\") " Nov 28 09:11:02 crc kubenswrapper[4946]: I1128 09:11:02.000629 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4205737e-b016-4ea3-8e9e-883cf22d96e2-utilities\") pod \"4205737e-b016-4ea3-8e9e-883cf22d96e2\" (UID: \"4205737e-b016-4ea3-8e9e-883cf22d96e2\") " Nov 28 09:11:02 crc kubenswrapper[4946]: I1128 09:11:02.000882 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blm5p\" (UniqueName: \"kubernetes.io/projected/4205737e-b016-4ea3-8e9e-883cf22d96e2-kube-api-access-blm5p\") pod \"4205737e-b016-4ea3-8e9e-883cf22d96e2\" (UID: \"4205737e-b016-4ea3-8e9e-883cf22d96e2\") " Nov 28 09:11:02 crc kubenswrapper[4946]: I1128 09:11:02.002317 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4205737e-b016-4ea3-8e9e-883cf22d96e2-utilities" (OuterVolumeSpecName: "utilities") pod "4205737e-b016-4ea3-8e9e-883cf22d96e2" (UID: "4205737e-b016-4ea3-8e9e-883cf22d96e2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 09:11:02 crc kubenswrapper[4946]: I1128 09:11:02.002778 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4205737e-b016-4ea3-8e9e-883cf22d96e2-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 09:11:02 crc kubenswrapper[4946]: I1128 09:11:02.009706 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4205737e-b016-4ea3-8e9e-883cf22d96e2-kube-api-access-blm5p" (OuterVolumeSpecName: "kube-api-access-blm5p") pod "4205737e-b016-4ea3-8e9e-883cf22d96e2" (UID: "4205737e-b016-4ea3-8e9e-883cf22d96e2"). InnerVolumeSpecName "kube-api-access-blm5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:11:02 crc kubenswrapper[4946]: I1128 09:11:02.060799 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4205737e-b016-4ea3-8e9e-883cf22d96e2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4205737e-b016-4ea3-8e9e-883cf22d96e2" (UID: "4205737e-b016-4ea3-8e9e-883cf22d96e2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 09:11:02 crc kubenswrapper[4946]: I1128 09:11:02.105335 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4205737e-b016-4ea3-8e9e-883cf22d96e2-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 09:11:02 crc kubenswrapper[4946]: I1128 09:11:02.105389 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blm5p\" (UniqueName: \"kubernetes.io/projected/4205737e-b016-4ea3-8e9e-883cf22d96e2-kube-api-access-blm5p\") on node \"crc\" DevicePath \"\"" Nov 28 09:11:02 crc kubenswrapper[4946]: I1128 09:11:02.838012 4946 util.go:48] "No ready sandbox for pod can be found. 
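
Volume teardown for the deleted catalog pod above follows the same three-step pattern per volume that recurs throughout this log: reconciler_common logs "operationExecutor.UnmountVolume started", operation_generator confirms "UnmountVolume.TearDown succeeded", and reconciler_common finally reports "Volume detached ... DevicePath \"\"" (the DevicePath is empty because empty-dir, projected, and secret volumes have no block device to detach). Only after every volume is detached does a later "Cleaned up orphaned pod volumes dir" sweep remove /var/lib/kubelet/pods/<uid>/volumes. A sketch that groups the three phases by volume name, assuming one journal entry per line as reflowed here:

    import re
    from collections import defaultdict

    # Phase markers as they appear in the kubelet entries above; the
    # UnmountVolume/detached messages carry escaped quotes in the journal.
    PHASES = [
        ("started",   re.compile(r'UnmountVolume started for volume \\"([^\\"]+)\\"')),
        ("torn_down", re.compile(r'UnmountVolume\.TearDown succeeded for volume .*OuterVolumeSpecName: "([^"]+)"')),
        ("detached",  re.compile(r'Volume detached for volume \\"([^\\"]+)\\"')),
    ]

    def teardown_timeline(lines):
        order = defaultdict(list)  # volume name -> phases in log order
        for line in lines:
            for phase, pat in PHASES:
                m = pat.search(line)
                if m:
                    order[m.group(1)].append(phase)
        return dict(order)
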
Need to start a new one" pod="openshift-marketplace/community-operators-r4224" Nov 28 09:11:02 crc kubenswrapper[4946]: I1128 09:11:02.884598 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r4224"] Nov 28 09:11:02 crc kubenswrapper[4946]: I1128 09:11:02.897223 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-r4224"] Nov 28 09:11:04 crc kubenswrapper[4946]: I1128 09:11:04.011059 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4205737e-b016-4ea3-8e9e-883cf22d96e2" path="/var/lib/kubelet/pods/4205737e-b016-4ea3-8e9e-883cf22d96e2/volumes" Nov 28 09:11:04 crc kubenswrapper[4946]: I1128 09:11:04.072064 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-zgv8k"] Nov 28 09:11:04 crc kubenswrapper[4946]: I1128 09:11:04.084883 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-4a11-account-create-update-f6t9x"] Nov 28 09:11:04 crc kubenswrapper[4946]: I1128 09:11:04.096031 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-4a11-account-create-update-f6t9x"] Nov 28 09:11:04 crc kubenswrapper[4946]: I1128 09:11:04.106369 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-zgv8k"] Nov 28 09:11:06 crc kubenswrapper[4946]: I1128 09:11:06.010483 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5039fbd0-b0df-4b98-9aa5-61ce01921c0d" path="/var/lib/kubelet/pods/5039fbd0-b0df-4b98-9aa5-61ce01921c0d/volumes" Nov 28 09:11:06 crc kubenswrapper[4946]: I1128 09:11:06.011433 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bf5623c-f83d-4071-a915-684b7e01a729" path="/var/lib/kubelet/pods/7bf5623c-f83d-4071-a915-684b7e01a729/volumes" Nov 28 09:11:07 crc kubenswrapper[4946]: I1128 09:11:07.990371 4946 scope.go:117] "RemoveContainer" containerID="ab4ee89bbdc7af78a19c7399e676257a6fc30a42a7abd619fc68ec52c84a5880" Nov 28 09:11:07 crc kubenswrapper[4946]: E1128 09:11:07.991997 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:11:15 crc kubenswrapper[4946]: I1128 09:11:15.058330 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-qb9nc"] Nov 28 09:11:15 crc kubenswrapper[4946]: I1128 09:11:15.076148 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-qb9nc"] Nov 28 09:11:16 crc kubenswrapper[4946]: I1128 09:11:16.007325 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0da2f081-ae1d-49cb-9f50-b0498fd21c92" path="/var/lib/kubelet/pods/0da2f081-ae1d-49cb-9f50-b0498fd21c92/volumes" Nov 28 09:11:22 crc kubenswrapper[4946]: I1128 09:11:22.990997 4946 scope.go:117] "RemoveContainer" containerID="ab4ee89bbdc7af78a19c7399e676257a6fc30a42a7abd619fc68ec52c84a5880" Nov 28 09:11:22 crc kubenswrapper[4946]: E1128 09:11:22.991786 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:11:34 crc kubenswrapper[4946]: I1128 09:11:34.050825 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-ztsg8"] Nov 28 09:11:34 crc kubenswrapper[4946]: I1128 09:11:34.071884 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-ztsg8"] Nov 28 09:11:34 crc kubenswrapper[4946]: I1128 09:11:34.071936 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-8ab4-account-create-update-x766l"] Nov 28 09:11:34 crc kubenswrapper[4946]: I1128 09:11:34.071949 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-8ab4-account-create-update-x766l"] Nov 28 09:11:34 crc kubenswrapper[4946]: I1128 09:11:34.990136 4946 scope.go:117] "RemoveContainer" containerID="ab4ee89bbdc7af78a19c7399e676257a6fc30a42a7abd619fc68ec52c84a5880" Nov 28 09:11:34 crc kubenswrapper[4946]: E1128 09:11:34.990645 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:11:36 crc kubenswrapper[4946]: I1128 09:11:36.011329 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37faecbc-f2ad-4943-8252-d5a279e45d76" path="/var/lib/kubelet/pods/37faecbc-f2ad-4943-8252-d5a279e45d76/volumes" Nov 28 09:11:36 crc kubenswrapper[4946]: I1128 09:11:36.013207 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f226da0-54fe-4f7d-97cc-b32d781386d4" path="/var/lib/kubelet/pods/8f226da0-54fe-4f7d-97cc-b32d781386d4/volumes" Nov 28 09:11:47 crc kubenswrapper[4946]: I1128 09:11:47.056640 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-t686d"] Nov 28 09:11:47 crc kubenswrapper[4946]: I1128 09:11:47.067284 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-sync-t686d"] Nov 28 09:11:48 crc kubenswrapper[4946]: I1128 09:11:48.001005 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e21e1758-0210-4e49-8266-c27f5c919379" path="/var/lib/kubelet/pods/e21e1758-0210-4e49-8266-c27f5c919379/volumes" Nov 28 09:11:49 crc kubenswrapper[4946]: I1128 09:11:49.989984 4946 scope.go:117] "RemoveContainer" containerID="ab4ee89bbdc7af78a19c7399e676257a6fc30a42a7abd619fc68ec52c84a5880" Nov 28 09:11:49 crc kubenswrapper[4946]: E1128 09:11:49.990584 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:11:51 crc kubenswrapper[4946]: I1128 09:11:51.865602 4946 scope.go:117] "RemoveContainer" containerID="5b2e45c1bebd6474dea14cae50eb9c6f99b9db78356859a073260ac8e7c0e137" Nov 28 09:11:51 crc kubenswrapper[4946]: I1128 09:11:51.903376 4946 scope.go:117] 
"RemoveContainer" containerID="c0975f687317e3f17f976294dedb58d6554e37fb2b6357b921b2d73a05972957" Nov 28 09:11:51 crc kubenswrapper[4946]: I1128 09:11:51.957433 4946 scope.go:117] "RemoveContainer" containerID="7ed634b7edb7bb97dd1ffd7612e1ef36ca6529c02491f79a875804d1b82de7bb" Nov 28 09:11:51 crc kubenswrapper[4946]: I1128 09:11:51.998757 4946 scope.go:117] "RemoveContainer" containerID="67d0d2442fd292e5b0af49a19acd55838757defb575601893b48c0f893ddffe9" Nov 28 09:11:52 crc kubenswrapper[4946]: I1128 09:11:52.047032 4946 scope.go:117] "RemoveContainer" containerID="6db333d218206bf727f4e24bd695017b567df324f32c092892cfda52bd85cf51" Nov 28 09:11:52 crc kubenswrapper[4946]: I1128 09:11:52.097049 4946 scope.go:117] "RemoveContainer" containerID="38c74511aed168b9c1cc726534260007815626cf55cd2960b3777306de1b1e3b" Nov 28 09:11:54 crc kubenswrapper[4946]: I1128 09:11:54.375669 4946 generic.go:334] "Generic (PLEG): container finished" podID="041f147a-ac40-4d08-8953-4ba399c7159c" containerID="e7d726b5322ab17d7e8021c2055a4f05cdb82669d1aa9575d4a94f5f867b86a8" exitCode=0 Nov 28 09:11:54 crc kubenswrapper[4946]: I1128 09:11:54.375762 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-6tcp2" event={"ID":"041f147a-ac40-4d08-8953-4ba399c7159c","Type":"ContainerDied","Data":"e7d726b5322ab17d7e8021c2055a4f05cdb82669d1aa9575d4a94f5f867b86a8"} Nov 28 09:11:56 crc kubenswrapper[4946]: I1128 09:11:56.132571 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-6tcp2" Nov 28 09:11:56 crc kubenswrapper[4946]: I1128 09:11:56.212831 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnsvr\" (UniqueName: \"kubernetes.io/projected/041f147a-ac40-4d08-8953-4ba399c7159c-kube-api-access-fnsvr\") pod \"041f147a-ac40-4d08-8953-4ba399c7159c\" (UID: \"041f147a-ac40-4d08-8953-4ba399c7159c\") " Nov 28 09:11:56 crc kubenswrapper[4946]: I1128 09:11:56.212941 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/041f147a-ac40-4d08-8953-4ba399c7159c-ssh-key\") pod \"041f147a-ac40-4d08-8953-4ba399c7159c\" (UID: \"041f147a-ac40-4d08-8953-4ba399c7159c\") " Nov 28 09:11:56 crc kubenswrapper[4946]: I1128 09:11:56.213071 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/041f147a-ac40-4d08-8953-4ba399c7159c-ceph\") pod \"041f147a-ac40-4d08-8953-4ba399c7159c\" (UID: \"041f147a-ac40-4d08-8953-4ba399c7159c\") " Nov 28 09:11:56 crc kubenswrapper[4946]: I1128 09:11:56.213127 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/041f147a-ac40-4d08-8953-4ba399c7159c-inventory\") pod \"041f147a-ac40-4d08-8953-4ba399c7159c\" (UID: \"041f147a-ac40-4d08-8953-4ba399c7159c\") " Nov 28 09:11:56 crc kubenswrapper[4946]: I1128 09:11:56.213174 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/041f147a-ac40-4d08-8953-4ba399c7159c-tripleo-cleanup-combined-ca-bundle\") pod \"041f147a-ac40-4d08-8953-4ba399c7159c\" (UID: \"041f147a-ac40-4d08-8953-4ba399c7159c\") " Nov 28 09:11:56 crc kubenswrapper[4946]: I1128 09:11:56.219306 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/041f147a-ac40-4d08-8953-4ba399c7159c-kube-api-access-fnsvr" (OuterVolumeSpecName: "kube-api-access-fnsvr") pod "041f147a-ac40-4d08-8953-4ba399c7159c" (UID: "041f147a-ac40-4d08-8953-4ba399c7159c"). InnerVolumeSpecName "kube-api-access-fnsvr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:11:56 crc kubenswrapper[4946]: I1128 09:11:56.221623 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/041f147a-ac40-4d08-8953-4ba399c7159c-tripleo-cleanup-combined-ca-bundle" (OuterVolumeSpecName: "tripleo-cleanup-combined-ca-bundle") pod "041f147a-ac40-4d08-8953-4ba399c7159c" (UID: "041f147a-ac40-4d08-8953-4ba399c7159c"). InnerVolumeSpecName "tripleo-cleanup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:11:56 crc kubenswrapper[4946]: I1128 09:11:56.222571 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/041f147a-ac40-4d08-8953-4ba399c7159c-ceph" (OuterVolumeSpecName: "ceph") pod "041f147a-ac40-4d08-8953-4ba399c7159c" (UID: "041f147a-ac40-4d08-8953-4ba399c7159c"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:11:56 crc kubenswrapper[4946]: I1128 09:11:56.250903 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/041f147a-ac40-4d08-8953-4ba399c7159c-inventory" (OuterVolumeSpecName: "inventory") pod "041f147a-ac40-4d08-8953-4ba399c7159c" (UID: "041f147a-ac40-4d08-8953-4ba399c7159c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:11:56 crc kubenswrapper[4946]: I1128 09:11:56.255600 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/041f147a-ac40-4d08-8953-4ba399c7159c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "041f147a-ac40-4d08-8953-4ba399c7159c" (UID: "041f147a-ac40-4d08-8953-4ba399c7159c"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:11:56 crc kubenswrapper[4946]: I1128 09:11:56.315146 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnsvr\" (UniqueName: \"kubernetes.io/projected/041f147a-ac40-4d08-8953-4ba399c7159c-kube-api-access-fnsvr\") on node \"crc\" DevicePath \"\"" Nov 28 09:11:56 crc kubenswrapper[4946]: I1128 09:11:56.315181 4946 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/041f147a-ac40-4d08-8953-4ba399c7159c-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 09:11:56 crc kubenswrapper[4946]: I1128 09:11:56.315193 4946 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/041f147a-ac40-4d08-8953-4ba399c7159c-ceph\") on node \"crc\" DevicePath \"\"" Nov 28 09:11:56 crc kubenswrapper[4946]: I1128 09:11:56.315202 4946 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/041f147a-ac40-4d08-8953-4ba399c7159c-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 09:11:56 crc kubenswrapper[4946]: I1128 09:11:56.315213 4946 reconciler_common.go:293] "Volume detached for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/041f147a-ac40-4d08-8953-4ba399c7159c-tripleo-cleanup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 09:11:56 crc kubenswrapper[4946]: I1128 09:11:56.701815 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-6tcp2" event={"ID":"041f147a-ac40-4d08-8953-4ba399c7159c","Type":"ContainerDied","Data":"42ee473bb0e8151d3c961bbd7745b3239e282d2f7041ffecd0a778cce2ca25ae"} Nov 28 09:11:56 crc kubenswrapper[4946]: I1128 09:11:56.701866 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42ee473bb0e8151d3c961bbd7745b3239e282d2f7041ffecd0a778cce2ca25ae" Nov 28 09:11:56 crc kubenswrapper[4946]: I1128 09:11:56.701998 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-6tcp2" Nov 28 09:12:02 crc kubenswrapper[4946]: I1128 09:12:02.991591 4946 scope.go:117] "RemoveContainer" containerID="ab4ee89bbdc7af78a19c7399e676257a6fc30a42a7abd619fc68ec52c84a5880" Nov 28 09:12:02 crc kubenswrapper[4946]: E1128 09:12:02.992854 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:12:07 crc kubenswrapper[4946]: I1128 09:12:07.244815 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-openstack-openstack-networker-vf8r5"] Nov 28 09:12:07 crc kubenswrapper[4946]: E1128 09:12:07.246480 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4205737e-b016-4ea3-8e9e-883cf22d96e2" containerName="extract-content" Nov 28 09:12:07 crc kubenswrapper[4946]: I1128 09:12:07.246499 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="4205737e-b016-4ea3-8e9e-883cf22d96e2" containerName="extract-content" Nov 28 09:12:07 crc kubenswrapper[4946]: E1128 09:12:07.246515 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="041f147a-ac40-4d08-8953-4ba399c7159c" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Nov 28 09:12:07 crc kubenswrapper[4946]: I1128 09:12:07.246525 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="041f147a-ac40-4d08-8953-4ba399c7159c" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Nov 28 09:12:07 crc kubenswrapper[4946]: E1128 09:12:07.246542 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4205737e-b016-4ea3-8e9e-883cf22d96e2" containerName="registry-server" Nov 28 09:12:07 crc kubenswrapper[4946]: I1128 09:12:07.246549 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="4205737e-b016-4ea3-8e9e-883cf22d96e2" containerName="registry-server" Nov 28 09:12:07 crc kubenswrapper[4946]: E1128 09:12:07.246571 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4205737e-b016-4ea3-8e9e-883cf22d96e2" containerName="extract-utilities" Nov 28 09:12:07 crc kubenswrapper[4946]: I1128 09:12:07.246579 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="4205737e-b016-4ea3-8e9e-883cf22d96e2" containerName="extract-utilities" Nov 28 09:12:07 crc kubenswrapper[4946]: I1128 09:12:07.246892 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="4205737e-b016-4ea3-8e9e-883cf22d96e2" containerName="registry-server" Nov 28 09:12:07 crc kubenswrapper[4946]: I1128 09:12:07.246923 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="041f147a-ac40-4d08-8953-4ba399c7159c" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Nov 28 09:12:07 crc kubenswrapper[4946]: I1128 09:12:07.248081 4946 util.go:30] "No sandbox for pod can be found. 
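
The cpu_manager/state_mem/memory_manager burst above is housekeeping that runs when a new pod is admitted: RemoveStaleState drops resource-manager state still held for containers of pods that no longer exist (here the finished catalog pod and the tripleo-cleanup job). Despite the E severity on the cpu_manager lines, the surrounding I entries suggest this is routine cleanup rather than a failure. A small sketch that tallies how many stale container entries each departed pod UID left behind, assuming one journal entry per line as reflowed here:

    import re
    from collections import Counter

    STALE = re.compile(
        r'RemoveStaleState: removing container" '
        r'podUID="([0-9a-f-]+)" containerName="([^"]+)"'
    )

    def stale_state_summary(lines):
        # pod UID -> number of stale container entries the managers purged
        counts = Counter()
        for line in lines:
            m = STALE.search(line)
            if m:
                counts[m.group(1)] += 1
        return counts
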
Need to start a new one" pod="openstack/bootstrap-openstack-openstack-networker-vf8r5" Nov 28 09:12:07 crc kubenswrapper[4946]: I1128 09:12:07.252303 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-networker-dockercfg-7cq8d" Nov 28 09:12:07 crc kubenswrapper[4946]: I1128 09:12:07.255238 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 09:12:07 crc kubenswrapper[4946]: I1128 09:12:07.255979 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 28 09:12:07 crc kubenswrapper[4946]: I1128 09:12:07.256606 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-networker" Nov 28 09:12:07 crc kubenswrapper[4946]: I1128 09:12:07.272371 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-hrvm8"] Nov 28 09:12:07 crc kubenswrapper[4946]: I1128 09:12:07.277047 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-hrvm8" Nov 28 09:12:07 crc kubenswrapper[4946]: I1128 09:12:07.283954 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-psw2j" Nov 28 09:12:07 crc kubenswrapper[4946]: I1128 09:12:07.283970 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 28 09:12:07 crc kubenswrapper[4946]: I1128 09:12:07.292274 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-hrvm8"] Nov 28 09:12:07 crc kubenswrapper[4946]: I1128 09:12:07.302532 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-networker-vf8r5"] Nov 28 09:12:07 crc kubenswrapper[4946]: I1128 09:12:07.389580 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-networker-vf8r5\" (UID: \"bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74\") " pod="openstack/bootstrap-openstack-openstack-networker-vf8r5" Nov 28 09:12:07 crc kubenswrapper[4946]: I1128 09:12:07.389690 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74-ssh-key\") pod \"bootstrap-openstack-openstack-networker-vf8r5\" (UID: \"bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74\") " pod="openstack/bootstrap-openstack-openstack-networker-vf8r5" Nov 28 09:12:07 crc kubenswrapper[4946]: I1128 09:12:07.389795 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74-inventory\") pod \"bootstrap-openstack-openstack-networker-vf8r5\" (UID: \"bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74\") " pod="openstack/bootstrap-openstack-openstack-networker-vf8r5" Nov 28 09:12:07 crc kubenswrapper[4946]: I1128 09:12:07.389922 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jjd2\" (UniqueName: \"kubernetes.io/projected/bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74-kube-api-access-6jjd2\") pod \"bootstrap-openstack-openstack-networker-vf8r5\" (UID: 
\"bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74\") " pod="openstack/bootstrap-openstack-openstack-networker-vf8r5" Nov 28 09:12:07 crc kubenswrapper[4946]: I1128 09:12:07.390170 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d67930c7-a5cb-4b90-a00f-f2494590d922-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-hrvm8\" (UID: \"d67930c7-a5cb-4b90-a00f-f2494590d922\") " pod="openstack/bootstrap-openstack-openstack-cell1-hrvm8" Nov 28 09:12:07 crc kubenswrapper[4946]: I1128 09:12:07.390263 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpxf5\" (UniqueName: \"kubernetes.io/projected/d67930c7-a5cb-4b90-a00f-f2494590d922-kube-api-access-zpxf5\") pod \"bootstrap-openstack-openstack-cell1-hrvm8\" (UID: \"d67930c7-a5cb-4b90-a00f-f2494590d922\") " pod="openstack/bootstrap-openstack-openstack-cell1-hrvm8" Nov 28 09:12:07 crc kubenswrapper[4946]: I1128 09:12:07.390346 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d67930c7-a5cb-4b90-a00f-f2494590d922-inventory\") pod \"bootstrap-openstack-openstack-cell1-hrvm8\" (UID: \"d67930c7-a5cb-4b90-a00f-f2494590d922\") " pod="openstack/bootstrap-openstack-openstack-cell1-hrvm8" Nov 28 09:12:07 crc kubenswrapper[4946]: I1128 09:12:07.390415 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d67930c7-a5cb-4b90-a00f-f2494590d922-ceph\") pod \"bootstrap-openstack-openstack-cell1-hrvm8\" (UID: \"d67930c7-a5cb-4b90-a00f-f2494590d922\") " pod="openstack/bootstrap-openstack-openstack-cell1-hrvm8" Nov 28 09:12:07 crc kubenswrapper[4946]: I1128 09:12:07.390535 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d67930c7-a5cb-4b90-a00f-f2494590d922-ssh-key\") pod \"bootstrap-openstack-openstack-cell1-hrvm8\" (UID: \"d67930c7-a5cb-4b90-a00f-f2494590d922\") " pod="openstack/bootstrap-openstack-openstack-cell1-hrvm8" Nov 28 09:12:07 crc kubenswrapper[4946]: I1128 09:12:07.493606 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74-inventory\") pod \"bootstrap-openstack-openstack-networker-vf8r5\" (UID: \"bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74\") " pod="openstack/bootstrap-openstack-openstack-networker-vf8r5" Nov 28 09:12:07 crc kubenswrapper[4946]: I1128 09:12:07.493789 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jjd2\" (UniqueName: \"kubernetes.io/projected/bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74-kube-api-access-6jjd2\") pod \"bootstrap-openstack-openstack-networker-vf8r5\" (UID: \"bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74\") " pod="openstack/bootstrap-openstack-openstack-networker-vf8r5" Nov 28 09:12:07 crc kubenswrapper[4946]: I1128 09:12:07.494457 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d67930c7-a5cb-4b90-a00f-f2494590d922-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-hrvm8\" (UID: \"d67930c7-a5cb-4b90-a00f-f2494590d922\") " pod="openstack/bootstrap-openstack-openstack-cell1-hrvm8" Nov 
28 09:12:07 crc kubenswrapper[4946]: I1128 09:12:07.495234 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpxf5\" (UniqueName: \"kubernetes.io/projected/d67930c7-a5cb-4b90-a00f-f2494590d922-kube-api-access-zpxf5\") pod \"bootstrap-openstack-openstack-cell1-hrvm8\" (UID: \"d67930c7-a5cb-4b90-a00f-f2494590d922\") " pod="openstack/bootstrap-openstack-openstack-cell1-hrvm8" Nov 28 09:12:07 crc kubenswrapper[4946]: I1128 09:12:07.495280 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d67930c7-a5cb-4b90-a00f-f2494590d922-inventory\") pod \"bootstrap-openstack-openstack-cell1-hrvm8\" (UID: \"d67930c7-a5cb-4b90-a00f-f2494590d922\") " pod="openstack/bootstrap-openstack-openstack-cell1-hrvm8" Nov 28 09:12:07 crc kubenswrapper[4946]: I1128 09:12:07.495329 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d67930c7-a5cb-4b90-a00f-f2494590d922-ceph\") pod \"bootstrap-openstack-openstack-cell1-hrvm8\" (UID: \"d67930c7-a5cb-4b90-a00f-f2494590d922\") " pod="openstack/bootstrap-openstack-openstack-cell1-hrvm8" Nov 28 09:12:07 crc kubenswrapper[4946]: I1128 09:12:07.495399 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d67930c7-a5cb-4b90-a00f-f2494590d922-ssh-key\") pod \"bootstrap-openstack-openstack-cell1-hrvm8\" (UID: \"d67930c7-a5cb-4b90-a00f-f2494590d922\") " pod="openstack/bootstrap-openstack-openstack-cell1-hrvm8" Nov 28 09:12:07 crc kubenswrapper[4946]: I1128 09:12:07.495553 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-networker-vf8r5\" (UID: \"bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74\") " pod="openstack/bootstrap-openstack-openstack-networker-vf8r5" Nov 28 09:12:07 crc kubenswrapper[4946]: I1128 09:12:07.495609 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74-ssh-key\") pod \"bootstrap-openstack-openstack-networker-vf8r5\" (UID: \"bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74\") " pod="openstack/bootstrap-openstack-openstack-networker-vf8r5" Nov 28 09:12:07 crc kubenswrapper[4946]: I1128 09:12:07.511790 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d67930c7-a5cb-4b90-a00f-f2494590d922-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-hrvm8\" (UID: \"d67930c7-a5cb-4b90-a00f-f2494590d922\") " pod="openstack/bootstrap-openstack-openstack-cell1-hrvm8" Nov 28 09:12:07 crc kubenswrapper[4946]: I1128 09:12:07.511887 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d67930c7-a5cb-4b90-a00f-f2494590d922-ssh-key\") pod \"bootstrap-openstack-openstack-cell1-hrvm8\" (UID: \"d67930c7-a5cb-4b90-a00f-f2494590d922\") " pod="openstack/bootstrap-openstack-openstack-cell1-hrvm8" Nov 28 09:12:07 crc kubenswrapper[4946]: I1128 09:12:07.513866 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-networker-vf8r5\" (UID: \"bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74\") " pod="openstack/bootstrap-openstack-openstack-networker-vf8r5" Nov 28 09:12:07 crc kubenswrapper[4946]: I1128 09:12:07.515542 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74-ssh-key\") pod \"bootstrap-openstack-openstack-networker-vf8r5\" (UID: \"bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74\") " pod="openstack/bootstrap-openstack-openstack-networker-vf8r5" Nov 28 09:12:07 crc kubenswrapper[4946]: I1128 09:12:07.516073 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jjd2\" (UniqueName: \"kubernetes.io/projected/bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74-kube-api-access-6jjd2\") pod \"bootstrap-openstack-openstack-networker-vf8r5\" (UID: \"bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74\") " pod="openstack/bootstrap-openstack-openstack-networker-vf8r5" Nov 28 09:12:07 crc kubenswrapper[4946]: I1128 09:12:07.516647 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d67930c7-a5cb-4b90-a00f-f2494590d922-ceph\") pod \"bootstrap-openstack-openstack-cell1-hrvm8\" (UID: \"d67930c7-a5cb-4b90-a00f-f2494590d922\") " pod="openstack/bootstrap-openstack-openstack-cell1-hrvm8" Nov 28 09:12:07 crc kubenswrapper[4946]: I1128 09:12:07.516717 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74-inventory\") pod \"bootstrap-openstack-openstack-networker-vf8r5\" (UID: \"bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74\") " pod="openstack/bootstrap-openstack-openstack-networker-vf8r5" Nov 28 09:12:07 crc kubenswrapper[4946]: I1128 09:12:07.517433 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d67930c7-a5cb-4b90-a00f-f2494590d922-inventory\") pod \"bootstrap-openstack-openstack-cell1-hrvm8\" (UID: \"d67930c7-a5cb-4b90-a00f-f2494590d922\") " pod="openstack/bootstrap-openstack-openstack-cell1-hrvm8" Nov 28 09:12:07 crc kubenswrapper[4946]: I1128 09:12:07.525710 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpxf5\" (UniqueName: \"kubernetes.io/projected/d67930c7-a5cb-4b90-a00f-f2494590d922-kube-api-access-zpxf5\") pod \"bootstrap-openstack-openstack-cell1-hrvm8\" (UID: \"d67930c7-a5cb-4b90-a00f-f2494590d922\") " pod="openstack/bootstrap-openstack-openstack-cell1-hrvm8" Nov 28 09:12:07 crc kubenswrapper[4946]: I1128 09:12:07.579246 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-networker-vf8r5" Nov 28 09:12:07 crc kubenswrapper[4946]: I1128 09:12:07.604441 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-hrvm8" Nov 28 09:12:08 crc kubenswrapper[4946]: I1128 09:12:08.275456 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-networker-vf8r5"] Nov 28 09:12:08 crc kubenswrapper[4946]: I1128 09:12:08.356936 4946 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 09:12:08 crc kubenswrapper[4946]: I1128 09:12:08.408566 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-hrvm8"] Nov 28 09:12:08 crc kubenswrapper[4946]: W1128 09:12:08.419232 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd67930c7_a5cb_4b90_a00f_f2494590d922.slice/crio-c35c712ced21e8e786cf049f322d8c77cd9c38b4f2e0718442498476ec85976d WatchSource:0}: Error finding container c35c712ced21e8e786cf049f322d8c77cd9c38b4f2e0718442498476ec85976d: Status 404 returned error can't find the container with id c35c712ced21e8e786cf049f322d8c77cd9c38b4f2e0718442498476ec85976d Nov 28 09:12:08 crc kubenswrapper[4946]: I1128 09:12:08.829853 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-networker-vf8r5" event={"ID":"bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74","Type":"ContainerStarted","Data":"330a2ae63bead6aeb19b522903c8263fa1a3e3aae8624178377e55c23019a07f"} Nov 28 09:12:08 crc kubenswrapper[4946]: I1128 09:12:08.831746 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-hrvm8" event={"ID":"d67930c7-a5cb-4b90-a00f-f2494590d922","Type":"ContainerStarted","Data":"c35c712ced21e8e786cf049f322d8c77cd9c38b4f2e0718442498476ec85976d"} Nov 28 09:12:09 crc kubenswrapper[4946]: I1128 09:12:09.841036 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-networker-vf8r5" event={"ID":"bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74","Type":"ContainerStarted","Data":"39ddea80e90d32a2f9b8e9df3f30061a9352125947d19ac0f31e87303754f76a"} Nov 28 09:12:09 crc kubenswrapper[4946]: I1128 09:12:09.843311 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-hrvm8" event={"ID":"d67930c7-a5cb-4b90-a00f-f2494590d922","Type":"ContainerStarted","Data":"c8b975d94dd86e72b2dcbd34f071e30dd2d477186b829cfab7cf8bd019ede4eb"} Nov 28 09:12:09 crc kubenswrapper[4946]: I1128 09:12:09.875648 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-openstack-openstack-networker-vf8r5" podStartSLOduration=2.166675961 podStartE2EDuration="2.875621918s" podCreationTimestamp="2025-11-28 09:12:07 +0000 UTC" firstStartedPulling="2025-11-28 09:12:08.356656158 +0000 UTC m=+8382.734721269" lastFinishedPulling="2025-11-28 09:12:09.065602105 +0000 UTC m=+8383.443667226" observedRunningTime="2025-11-28 09:12:09.872508341 +0000 UTC m=+8384.250573482" watchObservedRunningTime="2025-11-28 09:12:09.875621918 +0000 UTC m=+8384.253687059" Nov 28 09:12:09 crc kubenswrapper[4946]: I1128 09:12:09.893573 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-openstack-openstack-cell1-hrvm8" podStartSLOduration=2.353282967 podStartE2EDuration="2.893549462s" podCreationTimestamp="2025-11-28 09:12:07 +0000 UTC" firstStartedPulling="2025-11-28 09:12:08.423746391 +0000 UTC m=+8382.801811502" lastFinishedPulling="2025-11-28 09:12:08.964012836 
+0000 UTC m=+8383.342077997" observedRunningTime="2025-11-28 09:12:09.889768669 +0000 UTC m=+8384.267833800" watchObservedRunningTime="2025-11-28 09:12:09.893549462 +0000 UTC m=+8384.271614603" Nov 28 09:12:16 crc kubenswrapper[4946]: I1128 09:12:16.990913 4946 scope.go:117] "RemoveContainer" containerID="ab4ee89bbdc7af78a19c7399e676257a6fc30a42a7abd619fc68ec52c84a5880" Nov 28 09:12:16 crc kubenswrapper[4946]: E1128 09:12:16.991745 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:12:27 crc kubenswrapper[4946]: I1128 09:12:27.994654 4946 scope.go:117] "RemoveContainer" containerID="ab4ee89bbdc7af78a19c7399e676257a6fc30a42a7abd619fc68ec52c84a5880" Nov 28 09:12:27 crc kubenswrapper[4946]: E1128 09:12:27.995454 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:12:42 crc kubenswrapper[4946]: I1128 09:12:42.990645 4946 scope.go:117] "RemoveContainer" containerID="ab4ee89bbdc7af78a19c7399e676257a6fc30a42a7abd619fc68ec52c84a5880" Nov 28 09:12:42 crc kubenswrapper[4946]: E1128 09:12:42.991888 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:12:56 crc kubenswrapper[4946]: I1128 09:12:56.990252 4946 scope.go:117] "RemoveContainer" containerID="ab4ee89bbdc7af78a19c7399e676257a6fc30a42a7abd619fc68ec52c84a5880" Nov 28 09:12:57 crc kubenswrapper[4946]: I1128 09:12:57.401213 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerStarted","Data":"d8b45121214190074c519e34154db0426b7e8853a8fe0a16ee20d03a0a14cd84"} Nov 28 09:15:00 crc kubenswrapper[4946]: I1128 09:15:00.188404 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405355-l7czx"]
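
The "Observed pod startup duration" entries above carry the whole pull-and-start timeline in key="value" fields: podCreationTimestamp, firstStartedPulling, lastFinishedPulling, and observedRunningTime. A minimal Python sketch for measuring the image-pull window, assuming the exact field layout shown in these entries (the sample line is abridged from the log; the helper is ours, not kubelet code):

    # pull_window.py - measure the image-pull window in an
    # "Observed pod startup duration" kubelet entry.
    import re
    from datetime import datetime, timedelta

    SAMPLE = ('pod_startup_latency_tracker.go:104] "Observed pod startup duration" '
              'pod="openstack/bootstrap-openstack-openstack-cell1-hrvm8" '
              'firstStartedPulling="2025-11-28 09:12:08.423746391 +0000 UTC m=+8382.801811502" '
              'lastFinishedPulling="2025-11-28 09:12:08.964012836 +0000 UTC m=+8383.342077997"')

    def wall_clock(field: str, line: str) -> datetime:
        # Take the wall-clock timestamp and fold the 9-digit nanosecond
        # part in via timedelta (strptime's %f only accepts 6 digits).
        m = re.search(field + r'="(\d{4}-\d\d-\d\d \d\d:\d\d:\d\d)\.(\d+)', line)
        return (datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S")
                + timedelta(seconds=float("0." + m.group(2))))

    pull = wall_clock("lastFinishedPulling", SAMPLE) - wall_clock("firstStartedPulling", SAMPLE)
    print(f"image pull window: {pull.total_seconds():.3f}s")  # ~0.540s for this sample

Nov 28 09:15:00 crc kubenswrapper[4946]: I1128 09:15:00.191590 4946 util.go:30] "No sandbox for pod can be found.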
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405355-l7czx" Nov 28 09:15:00 crc kubenswrapper[4946]: I1128 09:15:00.199407 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 28 09:15:00 crc kubenswrapper[4946]: I1128 09:15:00.199676 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 28 09:15:00 crc kubenswrapper[4946]: I1128 09:15:00.215402 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405355-l7czx"] Nov 28 09:15:00 crc kubenswrapper[4946]: I1128 09:15:00.346289 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xtx2\" (UniqueName: \"kubernetes.io/projected/fe928ed1-b263-4791-b82c-4f64a33ae019-kube-api-access-8xtx2\") pod \"collect-profiles-29405355-l7czx\" (UID: \"fe928ed1-b263-4791-b82c-4f64a33ae019\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405355-l7czx" Nov 28 09:15:00 crc kubenswrapper[4946]: I1128 09:15:00.346638 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fe928ed1-b263-4791-b82c-4f64a33ae019-secret-volume\") pod \"collect-profiles-29405355-l7czx\" (UID: \"fe928ed1-b263-4791-b82c-4f64a33ae019\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405355-l7czx" Nov 28 09:15:00 crc kubenswrapper[4946]: I1128 09:15:00.346766 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe928ed1-b263-4791-b82c-4f64a33ae019-config-volume\") pod \"collect-profiles-29405355-l7czx\" (UID: \"fe928ed1-b263-4791-b82c-4f64a33ae019\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405355-l7czx" Nov 28 09:15:00 crc kubenswrapper[4946]: I1128 09:15:00.448887 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xtx2\" (UniqueName: \"kubernetes.io/projected/fe928ed1-b263-4791-b82c-4f64a33ae019-kube-api-access-8xtx2\") pod \"collect-profiles-29405355-l7czx\" (UID: \"fe928ed1-b263-4791-b82c-4f64a33ae019\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405355-l7czx" Nov 28 09:15:00 crc kubenswrapper[4946]: I1128 09:15:00.449007 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fe928ed1-b263-4791-b82c-4f64a33ae019-secret-volume\") pod \"collect-profiles-29405355-l7czx\" (UID: \"fe928ed1-b263-4791-b82c-4f64a33ae019\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405355-l7czx" Nov 28 09:15:00 crc kubenswrapper[4946]: I1128 09:15:00.449063 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe928ed1-b263-4791-b82c-4f64a33ae019-config-volume\") pod \"collect-profiles-29405355-l7czx\" (UID: \"fe928ed1-b263-4791-b82c-4f64a33ae019\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405355-l7czx" Nov 28 09:15:00 crc kubenswrapper[4946]: I1128 09:15:00.450222 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe928ed1-b263-4791-b82c-4f64a33ae019-config-volume\") pod 
\"collect-profiles-29405355-l7czx\" (UID: \"fe928ed1-b263-4791-b82c-4f64a33ae019\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405355-l7czx" Nov 28 09:15:00 crc kubenswrapper[4946]: I1128 09:15:00.463282 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fe928ed1-b263-4791-b82c-4f64a33ae019-secret-volume\") pod \"collect-profiles-29405355-l7czx\" (UID: \"fe928ed1-b263-4791-b82c-4f64a33ae019\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405355-l7czx" Nov 28 09:15:00 crc kubenswrapper[4946]: I1128 09:15:00.476051 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xtx2\" (UniqueName: \"kubernetes.io/projected/fe928ed1-b263-4791-b82c-4f64a33ae019-kube-api-access-8xtx2\") pod \"collect-profiles-29405355-l7czx\" (UID: \"fe928ed1-b263-4791-b82c-4f64a33ae019\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405355-l7czx" Nov 28 09:15:00 crc kubenswrapper[4946]: I1128 09:15:00.533919 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405355-l7czx" Nov 28 09:15:01 crc kubenswrapper[4946]: I1128 09:15:01.049328 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405355-l7czx"] Nov 28 09:15:01 crc kubenswrapper[4946]: I1128 09:15:01.792101 4946 generic.go:334] "Generic (PLEG): container finished" podID="fe928ed1-b263-4791-b82c-4f64a33ae019" containerID="529c3e529d9511872e5a58898f9f735421fb51b3513de54545f6333f6e097b6b" exitCode=0 Nov 28 09:15:01 crc kubenswrapper[4946]: I1128 09:15:01.792166 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405355-l7czx" event={"ID":"fe928ed1-b263-4791-b82c-4f64a33ae019","Type":"ContainerDied","Data":"529c3e529d9511872e5a58898f9f735421fb51b3513de54545f6333f6e097b6b"} Nov 28 09:15:01 crc kubenswrapper[4946]: I1128 09:15:01.792448 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405355-l7czx" event={"ID":"fe928ed1-b263-4791-b82c-4f64a33ae019","Type":"ContainerStarted","Data":"b7dd7d8891c048c3f3ca0fad1efa84b823393ea7ba8c00d9e4ccc97380fe8643"} Nov 28 09:15:03 crc kubenswrapper[4946]: I1128 09:15:03.185277 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405355-l7czx" Nov 28 09:15:03 crc kubenswrapper[4946]: I1128 09:15:03.319855 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fe928ed1-b263-4791-b82c-4f64a33ae019-secret-volume\") pod \"fe928ed1-b263-4791-b82c-4f64a33ae019\" (UID: \"fe928ed1-b263-4791-b82c-4f64a33ae019\") " Nov 28 09:15:03 crc kubenswrapper[4946]: I1128 09:15:03.319917 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xtx2\" (UniqueName: \"kubernetes.io/projected/fe928ed1-b263-4791-b82c-4f64a33ae019-kube-api-access-8xtx2\") pod \"fe928ed1-b263-4791-b82c-4f64a33ae019\" (UID: \"fe928ed1-b263-4791-b82c-4f64a33ae019\") " Nov 28 09:15:03 crc kubenswrapper[4946]: I1128 09:15:03.320195 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe928ed1-b263-4791-b82c-4f64a33ae019-config-volume\") pod \"fe928ed1-b263-4791-b82c-4f64a33ae019\" (UID: \"fe928ed1-b263-4791-b82c-4f64a33ae019\") " Nov 28 09:15:03 crc kubenswrapper[4946]: I1128 09:15:03.321026 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe928ed1-b263-4791-b82c-4f64a33ae019-config-volume" (OuterVolumeSpecName: "config-volume") pod "fe928ed1-b263-4791-b82c-4f64a33ae019" (UID: "fe928ed1-b263-4791-b82c-4f64a33ae019"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 09:15:03 crc kubenswrapper[4946]: I1128 09:15:03.326502 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe928ed1-b263-4791-b82c-4f64a33ae019-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fe928ed1-b263-4791-b82c-4f64a33ae019" (UID: "fe928ed1-b263-4791-b82c-4f64a33ae019"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:15:03 crc kubenswrapper[4946]: I1128 09:15:03.326756 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe928ed1-b263-4791-b82c-4f64a33ae019-kube-api-access-8xtx2" (OuterVolumeSpecName: "kube-api-access-8xtx2") pod "fe928ed1-b263-4791-b82c-4f64a33ae019" (UID: "fe928ed1-b263-4791-b82c-4f64a33ae019"). InnerVolumeSpecName "kube-api-access-8xtx2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:15:03 crc kubenswrapper[4946]: I1128 09:15:03.422234 4946 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe928ed1-b263-4791-b82c-4f64a33ae019-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 09:15:03 crc kubenswrapper[4946]: I1128 09:15:03.422269 4946 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fe928ed1-b263-4791-b82c-4f64a33ae019-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 28 09:15:03 crc kubenswrapper[4946]: I1128 09:15:03.422279 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xtx2\" (UniqueName: \"kubernetes.io/projected/fe928ed1-b263-4791-b82c-4f64a33ae019-kube-api-access-8xtx2\") on node \"crc\" DevicePath \"\"" Nov 28 09:15:03 crc kubenswrapper[4946]: I1128 09:15:03.813729 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405355-l7czx" event={"ID":"fe928ed1-b263-4791-b82c-4f64a33ae019","Type":"ContainerDied","Data":"b7dd7d8891c048c3f3ca0fad1efa84b823393ea7ba8c00d9e4ccc97380fe8643"} Nov 28 09:15:03 crc kubenswrapper[4946]: I1128 09:15:03.813779 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7dd7d8891c048c3f3ca0fad1efa84b823393ea7ba8c00d9e4ccc97380fe8643" Nov 28 09:15:03 crc kubenswrapper[4946]: I1128 09:15:03.813780 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405355-l7czx" Nov 28 09:15:04 crc kubenswrapper[4946]: I1128 09:15:04.255639 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405310-tcq6s"] Nov 28 09:15:04 crc kubenswrapper[4946]: I1128 09:15:04.268166 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405310-tcq6s"] Nov 28 09:15:06 crc kubenswrapper[4946]: I1128 09:15:06.007606 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a581cbb-503c-4af0-91ce-747373199185" path="/var/lib/kubelet/pods/1a581cbb-503c-4af0-91ce-747373199185/volumes" Nov 28 09:15:12 crc kubenswrapper[4946]: I1128 09:15:12.902700 4946 generic.go:334] "Generic (PLEG): container finished" podID="bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74" containerID="39ddea80e90d32a2f9b8e9df3f30061a9352125947d19ac0f31e87303754f76a" exitCode=0 Nov 28 09:15:12 crc kubenswrapper[4946]: I1128 09:15:12.902817 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-networker-vf8r5" event={"ID":"bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74","Type":"ContainerDied","Data":"39ddea80e90d32a2f9b8e9df3f30061a9352125947d19ac0f31e87303754f76a"}
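
Taken together, the entries above trace the full life of the collect-profiles job pod: container exit with code 0, volume unmount and detach, the PLEG ContainerDied event for its sandbox, then SyncLoop DELETE/REMOVE for the previous run and orphaned-volume cleanup. A sketch for rebuilding that timeline per pod UID (Python; assumes one journal entry per line, as journald emits them, and the UID/timestamp layouts shown here):

    # pod_timeline.py - group kubelet journal entries by pod UID.
    import re
    import sys
    from collections import defaultdict

    UID = re.compile(r'[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}')
    STAMP = re.compile(r'^(\w{3} [ \d]?\d \d\d:\d\d:\d\d)')

    def timelines(lines):
        events = defaultdict(list)
        for line in lines:
            stamp, uid = STAMP.match(line), UID.search(line)
            if stamp and uid:
                # Use the first quoted string as a short event tag.
                tag = re.search(r'"([^"]+)"', line)
                events[uid.group(0)].append(
                    (stamp.group(1), tag.group(1) if tag else line[:60]))
        return events

    if __name__ == "__main__":
        for uid, evs in timelines(sys.stdin).items():
            print(uid)
            for when, what in evs:
                print(f"  {when}  {what}")

Nov 28 09:15:14 crc kubenswrapper[4946]: I1128 09:15:14.508638 4946 util.go:48] "No ready sandbox for pod can be found.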
Need to start a new one" pod="openstack/bootstrap-openstack-openstack-networker-vf8r5" Nov 28 09:15:14 crc kubenswrapper[4946]: I1128 09:15:14.695041 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74-ssh-key\") pod \"bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74\" (UID: \"bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74\") " Nov 28 09:15:14 crc kubenswrapper[4946]: I1128 09:15:14.695231 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jjd2\" (UniqueName: \"kubernetes.io/projected/bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74-kube-api-access-6jjd2\") pod \"bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74\" (UID: \"bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74\") " Nov 28 09:15:14 crc kubenswrapper[4946]: I1128 09:15:14.695315 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74-bootstrap-combined-ca-bundle\") pod \"bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74\" (UID: \"bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74\") " Nov 28 09:15:14 crc kubenswrapper[4946]: I1128 09:15:14.695366 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74-inventory\") pod \"bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74\" (UID: \"bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74\") " Nov 28 09:15:14 crc kubenswrapper[4946]: I1128 09:15:14.704819 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74" (UID: "bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:15:14 crc kubenswrapper[4946]: I1128 09:15:14.711588 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74-kube-api-access-6jjd2" (OuterVolumeSpecName: "kube-api-access-6jjd2") pod "bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74" (UID: "bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74"). InnerVolumeSpecName "kube-api-access-6jjd2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:15:14 crc kubenswrapper[4946]: I1128 09:15:14.728789 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74" (UID: "bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:15:14 crc kubenswrapper[4946]: I1128 09:15:14.752823 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74-inventory" (OuterVolumeSpecName: "inventory") pod "bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74" (UID: "bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:15:14 crc kubenswrapper[4946]: I1128 09:15:14.798383 4946 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 09:15:14 crc kubenswrapper[4946]: I1128 09:15:14.798419 4946 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 09:15:14 crc kubenswrapper[4946]: I1128 09:15:14.798431 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jjd2\" (UniqueName: \"kubernetes.io/projected/bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74-kube-api-access-6jjd2\") on node \"crc\" DevicePath \"\"" Nov 28 09:15:14 crc kubenswrapper[4946]: I1128 09:15:14.798442 4946 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 09:15:14 crc kubenswrapper[4946]: I1128 09:15:14.932777 4946 generic.go:334] "Generic (PLEG): container finished" podID="d67930c7-a5cb-4b90-a00f-f2494590d922" containerID="c8b975d94dd86e72b2dcbd34f071e30dd2d477186b829cfab7cf8bd019ede4eb" exitCode=0 Nov 28 09:15:14 crc kubenswrapper[4946]: I1128 09:15:14.932863 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-hrvm8" event={"ID":"d67930c7-a5cb-4b90-a00f-f2494590d922","Type":"ContainerDied","Data":"c8b975d94dd86e72b2dcbd34f071e30dd2d477186b829cfab7cf8bd019ede4eb"} Nov 28 09:15:14 crc kubenswrapper[4946]: I1128 09:15:14.935333 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-networker-vf8r5" event={"ID":"bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74","Type":"ContainerDied","Data":"330a2ae63bead6aeb19b522903c8263fa1a3e3aae8624178377e55c23019a07f"} Nov 28 09:15:14 crc kubenswrapper[4946]: I1128 09:15:14.935608 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="330a2ae63bead6aeb19b522903c8263fa1a3e3aae8624178377e55c23019a07f" Nov 28 09:15:14 crc kubenswrapper[4946]: I1128 09:15:14.935394 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-openstack-openstack-networker-vf8r5" Nov 28 09:15:15 crc kubenswrapper[4946]: I1128 09:15:15.047621 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-openstack-openstack-networker-b5nt7"] Nov 28 09:15:15 crc kubenswrapper[4946]: E1128 09:15:15.047999 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74" containerName="bootstrap-openstack-openstack-networker" Nov 28 09:15:15 crc kubenswrapper[4946]: I1128 09:15:15.048014 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74" containerName="bootstrap-openstack-openstack-networker" Nov 28 09:15:15 crc kubenswrapper[4946]: E1128 09:15:15.048042 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe928ed1-b263-4791-b82c-4f64a33ae019" containerName="collect-profiles" Nov 28 09:15:15 crc kubenswrapper[4946]: I1128 09:15:15.048049 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe928ed1-b263-4791-b82c-4f64a33ae019" containerName="collect-profiles" Nov 28 09:15:15 crc kubenswrapper[4946]: I1128 09:15:15.048239 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe928ed1-b263-4791-b82c-4f64a33ae019" containerName="collect-profiles" Nov 28 09:15:15 crc kubenswrapper[4946]: I1128 09:15:15.048258 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74" containerName="bootstrap-openstack-openstack-networker" Nov 28 09:15:15 crc kubenswrapper[4946]: I1128 09:15:15.048959 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-networker-b5nt7" Nov 28 09:15:15 crc kubenswrapper[4946]: I1128 09:15:15.054069 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-networker" Nov 28 09:15:15 crc kubenswrapper[4946]: I1128 09:15:15.054414 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-networker-dockercfg-7cq8d" Nov 28 09:15:15 crc kubenswrapper[4946]: I1128 09:15:15.080620 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-networker-b5nt7"] Nov 28 09:15:15 crc kubenswrapper[4946]: I1128 09:15:15.215547 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bed38fc8-f019-4416-b82d-f86696d869be-ssh-key\") pod \"download-cache-openstack-openstack-networker-b5nt7\" (UID: \"bed38fc8-f019-4416-b82d-f86696d869be\") " pod="openstack/download-cache-openstack-openstack-networker-b5nt7" Nov 28 09:15:15 crc kubenswrapper[4946]: I1128 09:15:15.216223 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bed38fc8-f019-4416-b82d-f86696d869be-inventory\") pod \"download-cache-openstack-openstack-networker-b5nt7\" (UID: \"bed38fc8-f019-4416-b82d-f86696d869be\") " pod="openstack/download-cache-openstack-openstack-networker-b5nt7" Nov 28 09:15:15 crc kubenswrapper[4946]: I1128 09:15:15.216438 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx27b\" (UniqueName: \"kubernetes.io/projected/bed38fc8-f019-4416-b82d-f86696d869be-kube-api-access-hx27b\") pod \"download-cache-openstack-openstack-networker-b5nt7\" (UID: 
\"bed38fc8-f019-4416-b82d-f86696d869be\") " pod="openstack/download-cache-openstack-openstack-networker-b5nt7" Nov 28 09:15:15 crc kubenswrapper[4946]: I1128 09:15:15.318693 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bed38fc8-f019-4416-b82d-f86696d869be-ssh-key\") pod \"download-cache-openstack-openstack-networker-b5nt7\" (UID: \"bed38fc8-f019-4416-b82d-f86696d869be\") " pod="openstack/download-cache-openstack-openstack-networker-b5nt7" Nov 28 09:15:15 crc kubenswrapper[4946]: I1128 09:15:15.318813 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bed38fc8-f019-4416-b82d-f86696d869be-inventory\") pod \"download-cache-openstack-openstack-networker-b5nt7\" (UID: \"bed38fc8-f019-4416-b82d-f86696d869be\") " pod="openstack/download-cache-openstack-openstack-networker-b5nt7" Nov 28 09:15:15 crc kubenswrapper[4946]: I1128 09:15:15.318887 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hx27b\" (UniqueName: \"kubernetes.io/projected/bed38fc8-f019-4416-b82d-f86696d869be-kube-api-access-hx27b\") pod \"download-cache-openstack-openstack-networker-b5nt7\" (UID: \"bed38fc8-f019-4416-b82d-f86696d869be\") " pod="openstack/download-cache-openstack-openstack-networker-b5nt7" Nov 28 09:15:15 crc kubenswrapper[4946]: I1128 09:15:15.324507 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bed38fc8-f019-4416-b82d-f86696d869be-inventory\") pod \"download-cache-openstack-openstack-networker-b5nt7\" (UID: \"bed38fc8-f019-4416-b82d-f86696d869be\") " pod="openstack/download-cache-openstack-openstack-networker-b5nt7" Nov 28 09:15:15 crc kubenswrapper[4946]: I1128 09:15:15.324960 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bed38fc8-f019-4416-b82d-f86696d869be-ssh-key\") pod \"download-cache-openstack-openstack-networker-b5nt7\" (UID: \"bed38fc8-f019-4416-b82d-f86696d869be\") " pod="openstack/download-cache-openstack-openstack-networker-b5nt7" Nov 28 09:15:15 crc kubenswrapper[4946]: I1128 09:15:15.339121 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hx27b\" (UniqueName: \"kubernetes.io/projected/bed38fc8-f019-4416-b82d-f86696d869be-kube-api-access-hx27b\") pod \"download-cache-openstack-openstack-networker-b5nt7\" (UID: \"bed38fc8-f019-4416-b82d-f86696d869be\") " pod="openstack/download-cache-openstack-openstack-networker-b5nt7" Nov 28 09:15:15 crc kubenswrapper[4946]: I1128 09:15:15.377811 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-networker-b5nt7" Nov 28 09:15:15 crc kubenswrapper[4946]: I1128 09:15:15.724449 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-networker-b5nt7"] Nov 28 09:15:15 crc kubenswrapper[4946]: I1128 09:15:15.951232 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-networker-b5nt7" event={"ID":"bed38fc8-f019-4416-b82d-f86696d869be","Type":"ContainerStarted","Data":"4705425e6f777ee2144832c0d60ee422ab6ce289821dc67be3c144e7840b1725"} Nov 28 09:15:16 crc kubenswrapper[4946]: I1128 09:15:16.381324 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-hrvm8" Nov 28 09:15:16 crc kubenswrapper[4946]: I1128 09:15:16.549883 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d67930c7-a5cb-4b90-a00f-f2494590d922-inventory\") pod \"d67930c7-a5cb-4b90-a00f-f2494590d922\" (UID: \"d67930c7-a5cb-4b90-a00f-f2494590d922\") " Nov 28 09:15:16 crc kubenswrapper[4946]: I1128 09:15:16.550004 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d67930c7-a5cb-4b90-a00f-f2494590d922-ceph\") pod \"d67930c7-a5cb-4b90-a00f-f2494590d922\" (UID: \"d67930c7-a5cb-4b90-a00f-f2494590d922\") " Nov 28 09:15:16 crc kubenswrapper[4946]: I1128 09:15:16.550032 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d67930c7-a5cb-4b90-a00f-f2494590d922-bootstrap-combined-ca-bundle\") pod \"d67930c7-a5cb-4b90-a00f-f2494590d922\" (UID: \"d67930c7-a5cb-4b90-a00f-f2494590d922\") " Nov 28 09:15:16 crc kubenswrapper[4946]: I1128 09:15:16.550056 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpxf5\" (UniqueName: \"kubernetes.io/projected/d67930c7-a5cb-4b90-a00f-f2494590d922-kube-api-access-zpxf5\") pod \"d67930c7-a5cb-4b90-a00f-f2494590d922\" (UID: \"d67930c7-a5cb-4b90-a00f-f2494590d922\") " Nov 28 09:15:16 crc kubenswrapper[4946]: I1128 09:15:16.550081 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d67930c7-a5cb-4b90-a00f-f2494590d922-ssh-key\") pod \"d67930c7-a5cb-4b90-a00f-f2494590d922\" (UID: \"d67930c7-a5cb-4b90-a00f-f2494590d922\") " Nov 28 09:15:16 crc kubenswrapper[4946]: I1128 09:15:16.553799 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d67930c7-a5cb-4b90-a00f-f2494590d922-kube-api-access-zpxf5" (OuterVolumeSpecName: "kube-api-access-zpxf5") pod "d67930c7-a5cb-4b90-a00f-f2494590d922" (UID: "d67930c7-a5cb-4b90-a00f-f2494590d922"). InnerVolumeSpecName "kube-api-access-zpxf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:15:16 crc kubenswrapper[4946]: I1128 09:15:16.554263 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d67930c7-a5cb-4b90-a00f-f2494590d922-ceph" (OuterVolumeSpecName: "ceph") pod "d67930c7-a5cb-4b90-a00f-f2494590d922" (UID: "d67930c7-a5cb-4b90-a00f-f2494590d922"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:15:16 crc kubenswrapper[4946]: I1128 09:15:16.554581 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d67930c7-a5cb-4b90-a00f-f2494590d922-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "d67930c7-a5cb-4b90-a00f-f2494590d922" (UID: "d67930c7-a5cb-4b90-a00f-f2494590d922"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:15:16 crc kubenswrapper[4946]: I1128 09:15:16.575595 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d67930c7-a5cb-4b90-a00f-f2494590d922-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d67930c7-a5cb-4b90-a00f-f2494590d922" (UID: "d67930c7-a5cb-4b90-a00f-f2494590d922"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:15:16 crc kubenswrapper[4946]: I1128 09:15:16.589928 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d67930c7-a5cb-4b90-a00f-f2494590d922-inventory" (OuterVolumeSpecName: "inventory") pod "d67930c7-a5cb-4b90-a00f-f2494590d922" (UID: "d67930c7-a5cb-4b90-a00f-f2494590d922"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:15:16 crc kubenswrapper[4946]: I1128 09:15:16.652657 4946 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d67930c7-a5cb-4b90-a00f-f2494590d922-ceph\") on node \"crc\" DevicePath \"\"" Nov 28 09:15:16 crc kubenswrapper[4946]: I1128 09:15:16.652706 4946 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d67930c7-a5cb-4b90-a00f-f2494590d922-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 09:15:16 crc kubenswrapper[4946]: I1128 09:15:16.652728 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpxf5\" (UniqueName: \"kubernetes.io/projected/d67930c7-a5cb-4b90-a00f-f2494590d922-kube-api-access-zpxf5\") on node \"crc\" DevicePath \"\"" Nov 28 09:15:16 crc kubenswrapper[4946]: I1128 09:15:16.652745 4946 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d67930c7-a5cb-4b90-a00f-f2494590d922-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 09:15:16 crc kubenswrapper[4946]: I1128 09:15:16.652765 4946 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d67930c7-a5cb-4b90-a00f-f2494590d922-inventory\") on node \"crc\" DevicePath \"\""
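
Every secret and projected volume that was set up for the bootstrap pod is unmounted and reported detached above. A quick balance check, pairing MountVolume.SetUp against UnmountVolume.TearDown by the volume's UniqueName (a Python sketch; it assumes one entry per line and the quoting style shown in this log, where SetUp entries escape their quotes and TearDown entries do not):

    # volume_balance.py - report volumes that were set up but never torn down.
    import re

    # SetUp entries write the UniqueName as \"...\", TearDown entries as "...".
    SETUP = re.compile(r'MountVolume\.SetUp succeeded for volume .*?UniqueName: \\?"([^"\\]+)')
    TEARDOWN = re.compile(r'UnmountVolume\.TearDown succeeded for volume "([^"]+)"')

    def leftover(lines):
        mounted, unmounted = set(), set()
        for line in lines:
            m = SETUP.search(line)
            if m:
                mounted.add(m.group(1))
                continue
            m = TEARDOWN.search(line)
            if m:
                unmounted.add(m.group(1))
        return mounted - unmounted  # set up, never torn down

    # Example (hypothetical file name): print(leftover(open("kubelet.log")))

Nov 28 09:15:16 crc kubenswrapper[4946]: I1128 09:15:16.963902 4946 util.go:48] "No ready sandbox for pod can be found.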
Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-hrvm8" Nov 28 09:15:16 crc kubenswrapper[4946]: I1128 09:15:16.963902 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-hrvm8" event={"ID":"d67930c7-a5cb-4b90-a00f-f2494590d922","Type":"ContainerDied","Data":"c35c712ced21e8e786cf049f322d8c77cd9c38b4f2e0718442498476ec85976d"} Nov 28 09:15:16 crc kubenswrapper[4946]: I1128 09:15:16.964395 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c35c712ced21e8e786cf049f322d8c77cd9c38b4f2e0718442498476ec85976d" Nov 28 09:15:16 crc kubenswrapper[4946]: I1128 09:15:16.986257 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-networker-b5nt7" event={"ID":"bed38fc8-f019-4416-b82d-f86696d869be","Type":"ContainerStarted","Data":"fc9c2f7432dfcb627ea7c63c2408a17813f71d60802bcfd626b3a65bddb399fd"} Nov 28 09:15:17 crc kubenswrapper[4946]: I1128 09:15:17.051074 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-vdqgc"] Nov 28 09:15:17 crc kubenswrapper[4946]: E1128 09:15:17.051744 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d67930c7-a5cb-4b90-a00f-f2494590d922" containerName="bootstrap-openstack-openstack-cell1" Nov 28 09:15:17 crc kubenswrapper[4946]: I1128 09:15:17.051782 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="d67930c7-a5cb-4b90-a00f-f2494590d922" containerName="bootstrap-openstack-openstack-cell1" Nov 28 09:15:17 crc kubenswrapper[4946]: I1128 09:15:17.052033 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="d67930c7-a5cb-4b90-a00f-f2494590d922" containerName="bootstrap-openstack-openstack-cell1" Nov 28 09:15:17 crc kubenswrapper[4946]: I1128 09:15:17.053017 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-vdqgc" Nov 28 09:15:17 crc kubenswrapper[4946]: I1128 09:15:17.055133 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-psw2j" Nov 28 09:15:17 crc kubenswrapper[4946]: I1128 09:15:17.059958 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 28 09:15:17 crc kubenswrapper[4946]: I1128 09:15:17.060793 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-openstack-openstack-networker-b5nt7" podStartSLOduration=1.532023948 podStartE2EDuration="2.060732407s" podCreationTimestamp="2025-11-28 09:15:15 +0000 UTC" firstStartedPulling="2025-11-28 09:15:15.724061126 +0000 UTC m=+8570.102126237" lastFinishedPulling="2025-11-28 09:15:16.252769585 +0000 UTC m=+8570.630834696" observedRunningTime="2025-11-28 09:15:17.039531431 +0000 UTC m=+8571.417596542" watchObservedRunningTime="2025-11-28 09:15:17.060732407 +0000 UTC m=+8571.438797538" Nov 28 09:15:17 crc kubenswrapper[4946]: I1128 09:15:17.084196 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-vdqgc"] Nov 28 09:15:17 crc kubenswrapper[4946]: I1128 09:15:17.164048 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/82f88c64-bdae-4b41-a21a-e9bcae8aac99-inventory\") pod \"download-cache-openstack-openstack-cell1-vdqgc\" (UID: \"82f88c64-bdae-4b41-a21a-e9bcae8aac99\") " pod="openstack/download-cache-openstack-openstack-cell1-vdqgc" Nov 28 09:15:17 crc kubenswrapper[4946]: I1128 09:15:17.164382 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptmcr\" (UniqueName: \"kubernetes.io/projected/82f88c64-bdae-4b41-a21a-e9bcae8aac99-kube-api-access-ptmcr\") pod \"download-cache-openstack-openstack-cell1-vdqgc\" (UID: \"82f88c64-bdae-4b41-a21a-e9bcae8aac99\") " pod="openstack/download-cache-openstack-openstack-cell1-vdqgc" Nov 28 09:15:17 crc kubenswrapper[4946]: I1128 09:15:17.164541 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/82f88c64-bdae-4b41-a21a-e9bcae8aac99-ssh-key\") pod \"download-cache-openstack-openstack-cell1-vdqgc\" (UID: \"82f88c64-bdae-4b41-a21a-e9bcae8aac99\") " pod="openstack/download-cache-openstack-openstack-cell1-vdqgc" Nov 28 09:15:17 crc kubenswrapper[4946]: I1128 09:15:17.164886 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/82f88c64-bdae-4b41-a21a-e9bcae8aac99-ceph\") pod \"download-cache-openstack-openstack-cell1-vdqgc\" (UID: \"82f88c64-bdae-4b41-a21a-e9bcae8aac99\") " pod="openstack/download-cache-openstack-openstack-cell1-vdqgc" Nov 28 09:15:17 crc kubenswrapper[4946]: I1128 09:15:17.266938 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/82f88c64-bdae-4b41-a21a-e9bcae8aac99-ceph\") pod \"download-cache-openstack-openstack-cell1-vdqgc\" (UID: \"82f88c64-bdae-4b41-a21a-e9bcae8aac99\") " pod="openstack/download-cache-openstack-openstack-cell1-vdqgc" Nov 28 09:15:17 crc kubenswrapper[4946]: I1128 09:15:17.266997 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"inventory\" (UniqueName: \"kubernetes.io/secret/82f88c64-bdae-4b41-a21a-e9bcae8aac99-inventory\") pod \"download-cache-openstack-openstack-cell1-vdqgc\" (UID: \"82f88c64-bdae-4b41-a21a-e9bcae8aac99\") " pod="openstack/download-cache-openstack-openstack-cell1-vdqgc" Nov 28 09:15:17 crc kubenswrapper[4946]: I1128 09:15:17.267061 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptmcr\" (UniqueName: \"kubernetes.io/projected/82f88c64-bdae-4b41-a21a-e9bcae8aac99-kube-api-access-ptmcr\") pod \"download-cache-openstack-openstack-cell1-vdqgc\" (UID: \"82f88c64-bdae-4b41-a21a-e9bcae8aac99\") " pod="openstack/download-cache-openstack-openstack-cell1-vdqgc" Nov 28 09:15:17 crc kubenswrapper[4946]: I1128 09:15:17.267101 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/82f88c64-bdae-4b41-a21a-e9bcae8aac99-ssh-key\") pod \"download-cache-openstack-openstack-cell1-vdqgc\" (UID: \"82f88c64-bdae-4b41-a21a-e9bcae8aac99\") " pod="openstack/download-cache-openstack-openstack-cell1-vdqgc" Nov 28 09:15:17 crc kubenswrapper[4946]: I1128 09:15:17.274168 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/82f88c64-bdae-4b41-a21a-e9bcae8aac99-ceph\") pod \"download-cache-openstack-openstack-cell1-vdqgc\" (UID: \"82f88c64-bdae-4b41-a21a-e9bcae8aac99\") " pod="openstack/download-cache-openstack-openstack-cell1-vdqgc" Nov 28 09:15:17 crc kubenswrapper[4946]: I1128 09:15:17.275208 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/82f88c64-bdae-4b41-a21a-e9bcae8aac99-inventory\") pod \"download-cache-openstack-openstack-cell1-vdqgc\" (UID: \"82f88c64-bdae-4b41-a21a-e9bcae8aac99\") " pod="openstack/download-cache-openstack-openstack-cell1-vdqgc" Nov 28 09:15:17 crc kubenswrapper[4946]: I1128 09:15:17.283959 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/82f88c64-bdae-4b41-a21a-e9bcae8aac99-ssh-key\") pod \"download-cache-openstack-openstack-cell1-vdqgc\" (UID: \"82f88c64-bdae-4b41-a21a-e9bcae8aac99\") " pod="openstack/download-cache-openstack-openstack-cell1-vdqgc" Nov 28 09:15:17 crc kubenswrapper[4946]: I1128 09:15:17.289088 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptmcr\" (UniqueName: \"kubernetes.io/projected/82f88c64-bdae-4b41-a21a-e9bcae8aac99-kube-api-access-ptmcr\") pod \"download-cache-openstack-openstack-cell1-vdqgc\" (UID: \"82f88c64-bdae-4b41-a21a-e9bcae8aac99\") " pod="openstack/download-cache-openstack-openstack-cell1-vdqgc" Nov 28 09:15:17 crc kubenswrapper[4946]: I1128 09:15:17.386450 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-vdqgc" Nov 28 09:15:17 crc kubenswrapper[4946]: I1128 09:15:17.943827 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-vdqgc"] Nov 28 09:15:17 crc kubenswrapper[4946]: W1128 09:15:17.947670 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82f88c64_bdae_4b41_a21a_e9bcae8aac99.slice/crio-a8d6a4455f8c6ebb8aaaadb7eb58d0a502a6386495a43daae785eb27a70562a6 WatchSource:0}: Error finding container a8d6a4455f8c6ebb8aaaadb7eb58d0a502a6386495a43daae785eb27a70562a6: Status 404 returned error can't find the container with id a8d6a4455f8c6ebb8aaaadb7eb58d0a502a6386495a43daae785eb27a70562a6 Nov 28 09:15:18 crc kubenswrapper[4946]: I1128 09:15:18.025248 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-vdqgc" event={"ID":"82f88c64-bdae-4b41-a21a-e9bcae8aac99","Type":"ContainerStarted","Data":"a8d6a4455f8c6ebb8aaaadb7eb58d0a502a6386495a43daae785eb27a70562a6"} Nov 28 09:15:19 crc kubenswrapper[4946]: I1128 09:15:19.017876 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-vdqgc" event={"ID":"82f88c64-bdae-4b41-a21a-e9bcae8aac99","Type":"ContainerStarted","Data":"442cf22272d7d893409384a76efc66d8afd477c7cdce9da6f154493488e194b8"} Nov 28 09:15:19 crc kubenswrapper[4946]: I1128 09:15:19.049196 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-openstack-openstack-cell1-vdqgc" podStartSLOduration=1.4490503270000001 podStartE2EDuration="2.049176146s" podCreationTimestamp="2025-11-28 09:15:17 +0000 UTC" firstStartedPulling="2025-11-28 09:15:17.951404509 +0000 UTC m=+8572.329469660" lastFinishedPulling="2025-11-28 09:15:18.551530348 +0000 UTC m=+8572.929595479" observedRunningTime="2025-11-28 09:15:19.037919897 +0000 UTC m=+8573.415985028" watchObservedRunningTime="2025-11-28 09:15:19.049176146 +0000 UTC m=+8573.427241247" Nov 28 09:15:24 crc kubenswrapper[4946]: I1128 09:15:24.730999 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 09:15:24 crc kubenswrapper[4946]: I1128 09:15:24.731809 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 09:15:52 crc kubenswrapper[4946]: I1128 09:15:52.359911 4946 scope.go:117] "RemoveContainer" containerID="b1904cb93cef921a5741f9346108c53ea15e41a0cbdd179f760cb1a53f1c5fb3" Nov 28 09:15:54 crc kubenswrapper[4946]: I1128 09:15:54.730533 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
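
The liveness failures above arrive thirty seconds apart (09:15:24, 09:15:54); together with the third failure and the container kill logged further down, the pattern is consistent with a probe period of 30s and the Kubernetes default failureThreshold of 3. That is an assumption: the probe spec itself is not visible in this log. A sketch that flags a container once its consecutive failures reach the threshold (Python; one entry per line):

    # liveness_streaks.py - flag pods whose consecutive liveness probe
    # failures reach the restart threshold (3 is the Kubernetes default;
    # the real value for a given pod lives in its probe spec, not here).
    import re
    from collections import defaultdict

    FAILED = re.compile(r'"Probe failed" probeType="Liveness" pod="([^"]+)"')
    # Success entries only appear at higher kubelet verbosity.
    SUCCEEDED = re.compile(r'"Probe succeeded" probeType="Liveness" pod="([^"]+)"')

    def expect_restarts(lines, threshold=3):
        streak = defaultdict(int)
        for line in lines:
            m = FAILED.search(line)
            if m:
                streak[m.group(1)] += 1
                if streak[m.group(1)] == threshold:
                    yield m.group(1)  # kubelet should restart this container
            else:
                m = SUCCEEDED.search(line)
                if m:
                    streak[m.group(1)] = 0  # a success resets the streak

Nov 28 09:15:54 crc kubenswrapper[4946]: I1128 09:15:54.730944 4946 prober.go:107] "Probe failed" probeType="Liveness"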
pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 09:16:22 crc kubenswrapper[4946]: I1128 09:16:22.807416 4946 generic.go:334] "Generic (PLEG): container finished" podID="bed38fc8-f019-4416-b82d-f86696d869be" containerID="fc9c2f7432dfcb627ea7c63c2408a17813f71d60802bcfd626b3a65bddb399fd" exitCode=0 Nov 28 09:16:22 crc kubenswrapper[4946]: I1128 09:16:22.807545 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-networker-b5nt7" event={"ID":"bed38fc8-f019-4416-b82d-f86696d869be","Type":"ContainerDied","Data":"fc9c2f7432dfcb627ea7c63c2408a17813f71d60802bcfd626b3a65bddb399fd"} Nov 28 09:16:24 crc kubenswrapper[4946]: I1128 09:16:24.415955 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-networker-b5nt7" Nov 28 09:16:24 crc kubenswrapper[4946]: I1128 09:16:24.561377 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bed38fc8-f019-4416-b82d-f86696d869be-inventory\") pod \"bed38fc8-f019-4416-b82d-f86696d869be\" (UID: \"bed38fc8-f019-4416-b82d-f86696d869be\") " Nov 28 09:16:24 crc kubenswrapper[4946]: I1128 09:16:24.561488 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bed38fc8-f019-4416-b82d-f86696d869be-ssh-key\") pod \"bed38fc8-f019-4416-b82d-f86696d869be\" (UID: \"bed38fc8-f019-4416-b82d-f86696d869be\") " Nov 28 09:16:24 crc kubenswrapper[4946]: I1128 09:16:24.561559 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hx27b\" (UniqueName: \"kubernetes.io/projected/bed38fc8-f019-4416-b82d-f86696d869be-kube-api-access-hx27b\") pod \"bed38fc8-f019-4416-b82d-f86696d869be\" (UID: \"bed38fc8-f019-4416-b82d-f86696d869be\") " Nov 28 09:16:24 crc kubenswrapper[4946]: I1128 09:16:24.571678 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bed38fc8-f019-4416-b82d-f86696d869be-kube-api-access-hx27b" (OuterVolumeSpecName: "kube-api-access-hx27b") pod "bed38fc8-f019-4416-b82d-f86696d869be" (UID: "bed38fc8-f019-4416-b82d-f86696d869be"). InnerVolumeSpecName "kube-api-access-hx27b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:16:24 crc kubenswrapper[4946]: I1128 09:16:24.590697 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bed38fc8-f019-4416-b82d-f86696d869be-inventory" (OuterVolumeSpecName: "inventory") pod "bed38fc8-f019-4416-b82d-f86696d869be" (UID: "bed38fc8-f019-4416-b82d-f86696d869be"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:16:24 crc kubenswrapper[4946]: I1128 09:16:24.604003 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bed38fc8-f019-4416-b82d-f86696d869be-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "bed38fc8-f019-4416-b82d-f86696d869be" (UID: "bed38fc8-f019-4416-b82d-f86696d869be"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:16:24 crc kubenswrapper[4946]: I1128 09:16:24.664086 4946 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bed38fc8-f019-4416-b82d-f86696d869be-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 09:16:24 crc kubenswrapper[4946]: I1128 09:16:24.664114 4946 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bed38fc8-f019-4416-b82d-f86696d869be-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 09:16:24 crc kubenswrapper[4946]: I1128 09:16:24.664124 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hx27b\" (UniqueName: \"kubernetes.io/projected/bed38fc8-f019-4416-b82d-f86696d869be-kube-api-access-hx27b\") on node \"crc\" DevicePath \"\"" Nov 28 09:16:24 crc kubenswrapper[4946]: I1128 09:16:24.730528 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 09:16:24 crc kubenswrapper[4946]: I1128 09:16:24.730593 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 09:16:24 crc kubenswrapper[4946]: I1128 09:16:24.730650 4946 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" Nov 28 09:16:24 crc kubenswrapper[4946]: I1128 09:16:24.731546 4946 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d8b45121214190074c519e34154db0426b7e8853a8fe0a16ee20d03a0a14cd84"} pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 09:16:24 crc kubenswrapper[4946]: I1128 09:16:24.731627 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" containerID="cri-o://d8b45121214190074c519e34154db0426b7e8853a8fe0a16ee20d03a0a14cd84" gracePeriod=600 Nov 28 09:16:24 crc kubenswrapper[4946]: I1128 09:16:24.835922 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-networker-b5nt7" event={"ID":"bed38fc8-f019-4416-b82d-f86696d869be","Type":"ContainerDied","Data":"4705425e6f777ee2144832c0d60ee422ab6ce289821dc67be3c144e7840b1725"} Nov 28 09:16:24 crc kubenswrapper[4946]: I1128 09:16:24.835976 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4705425e6f777ee2144832c0d60ee422ab6ce289821dc67be3c144e7840b1725" Nov 28 09:16:24 crc kubenswrapper[4946]: I1128 09:16:24.836034 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-openstack-openstack-networker-b5nt7" Nov 28 09:16:24 crc kubenswrapper[4946]: I1128 09:16:24.950170 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-openstack-openstack-networker-h7dbw"] Nov 28 09:16:24 crc kubenswrapper[4946]: E1128 09:16:24.950641 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bed38fc8-f019-4416-b82d-f86696d869be" containerName="download-cache-openstack-openstack-networker" Nov 28 09:16:24 crc kubenswrapper[4946]: I1128 09:16:24.950663 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="bed38fc8-f019-4416-b82d-f86696d869be" containerName="download-cache-openstack-openstack-networker" Nov 28 09:16:24 crc kubenswrapper[4946]: I1128 09:16:24.950944 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="bed38fc8-f019-4416-b82d-f86696d869be" containerName="download-cache-openstack-openstack-networker" Nov 28 09:16:24 crc kubenswrapper[4946]: I1128 09:16:24.951791 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-networker-h7dbw" Nov 28 09:16:24 crc kubenswrapper[4946]: I1128 09:16:24.955557 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-networker-dockercfg-7cq8d" Nov 28 09:16:24 crc kubenswrapper[4946]: I1128 09:16:24.955895 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-networker" Nov 28 09:16:24 crc kubenswrapper[4946]: I1128 09:16:24.981303 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-networker-h7dbw"] Nov 28 09:16:25 crc kubenswrapper[4946]: I1128 09:16:25.076271 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a8f17537-bfd9-4581-a146-8582ca24195c-ssh-key\") pod \"configure-network-openstack-openstack-networker-h7dbw\" (UID: \"a8f17537-bfd9-4581-a146-8582ca24195c\") " pod="openstack/configure-network-openstack-openstack-networker-h7dbw" Nov 28 09:16:25 crc kubenswrapper[4946]: I1128 09:16:25.077067 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqx26\" (UniqueName: \"kubernetes.io/projected/a8f17537-bfd9-4581-a146-8582ca24195c-kube-api-access-tqx26\") pod \"configure-network-openstack-openstack-networker-h7dbw\" (UID: \"a8f17537-bfd9-4581-a146-8582ca24195c\") " pod="openstack/configure-network-openstack-openstack-networker-h7dbw" Nov 28 09:16:25 crc kubenswrapper[4946]: I1128 09:16:25.077151 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8f17537-bfd9-4581-a146-8582ca24195c-inventory\") pod \"configure-network-openstack-openstack-networker-h7dbw\" (UID: \"a8f17537-bfd9-4581-a146-8582ca24195c\") " pod="openstack/configure-network-openstack-openstack-networker-h7dbw" Nov 28 09:16:25 crc kubenswrapper[4946]: I1128 09:16:25.178918 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a8f17537-bfd9-4581-a146-8582ca24195c-ssh-key\") pod \"configure-network-openstack-openstack-networker-h7dbw\" (UID: \"a8f17537-bfd9-4581-a146-8582ca24195c\") " pod="openstack/configure-network-openstack-openstack-networker-h7dbw" Nov 28 09:16:25 crc kubenswrapper[4946]: I1128 
09:16:25.179166 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqx26\" (UniqueName: \"kubernetes.io/projected/a8f17537-bfd9-4581-a146-8582ca24195c-kube-api-access-tqx26\") pod \"configure-network-openstack-openstack-networker-h7dbw\" (UID: \"a8f17537-bfd9-4581-a146-8582ca24195c\") " pod="openstack/configure-network-openstack-openstack-networker-h7dbw" Nov 28 09:16:25 crc kubenswrapper[4946]: I1128 09:16:25.179276 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8f17537-bfd9-4581-a146-8582ca24195c-inventory\") pod \"configure-network-openstack-openstack-networker-h7dbw\" (UID: \"a8f17537-bfd9-4581-a146-8582ca24195c\") " pod="openstack/configure-network-openstack-openstack-networker-h7dbw" Nov 28 09:16:25 crc kubenswrapper[4946]: I1128 09:16:25.187967 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a8f17537-bfd9-4581-a146-8582ca24195c-ssh-key\") pod \"configure-network-openstack-openstack-networker-h7dbw\" (UID: \"a8f17537-bfd9-4581-a146-8582ca24195c\") " pod="openstack/configure-network-openstack-openstack-networker-h7dbw" Nov 28 09:16:25 crc kubenswrapper[4946]: I1128 09:16:25.192001 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8f17537-bfd9-4581-a146-8582ca24195c-inventory\") pod \"configure-network-openstack-openstack-networker-h7dbw\" (UID: \"a8f17537-bfd9-4581-a146-8582ca24195c\") " pod="openstack/configure-network-openstack-openstack-networker-h7dbw" Nov 28 09:16:25 crc kubenswrapper[4946]: I1128 09:16:25.211619 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqx26\" (UniqueName: \"kubernetes.io/projected/a8f17537-bfd9-4581-a146-8582ca24195c-kube-api-access-tqx26\") pod \"configure-network-openstack-openstack-networker-h7dbw\" (UID: \"a8f17537-bfd9-4581-a146-8582ca24195c\") " pod="openstack/configure-network-openstack-openstack-networker-h7dbw" Nov 28 09:16:25 crc kubenswrapper[4946]: I1128 09:16:25.271668 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-networker-h7dbw" Nov 28 09:16:25 crc kubenswrapper[4946]: I1128 09:16:25.867666 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-networker-h7dbw"] Nov 28 09:16:25 crc kubenswrapper[4946]: I1128 09:16:25.869453 4946 generic.go:334] "Generic (PLEG): container finished" podID="7450befc-262f-45d1-a5f4-f445e540185b" containerID="d8b45121214190074c519e34154db0426b7e8853a8fe0a16ee20d03a0a14cd84" exitCode=0 Nov 28 09:16:25 crc kubenswrapper[4946]: I1128 09:16:25.869541 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerDied","Data":"d8b45121214190074c519e34154db0426b7e8853a8fe0a16ee20d03a0a14cd84"} Nov 28 09:16:25 crc kubenswrapper[4946]: I1128 09:16:25.869595 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerStarted","Data":"fe24f1531f160069fdd7ae3397845365c158536dbada8db9bc15e63bc09c3d80"} Nov 28 09:16:25 crc kubenswrapper[4946]: I1128 09:16:25.869623 4946 scope.go:117] "RemoveContainer" containerID="ab4ee89bbdc7af78a19c7399e676257a6fc30a42a7abd619fc68ec52c84a5880" Nov 28 09:16:25 crc kubenswrapper[4946]: W1128 09:16:25.872991 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8f17537_bfd9_4581_a146_8582ca24195c.slice/crio-567a4ef87e58e4b303713a455d30004a60e4c4eca1a189cfc6f4a99b14e1271d WatchSource:0}: Error finding container 567a4ef87e58e4b303713a455d30004a60e4c4eca1a189cfc6f4a99b14e1271d: Status 404 returned error can't find the container with id 567a4ef87e58e4b303713a455d30004a60e4c4eca1a189cfc6f4a99b14e1271d Nov 28 09:16:26 crc kubenswrapper[4946]: I1128 09:16:26.883773 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-networker-h7dbw" event={"ID":"a8f17537-bfd9-4581-a146-8582ca24195c","Type":"ContainerStarted","Data":"4c8680f8988ed629dac413594a33ffdf0f8afb769e4bc2c5f3c778fa2d80c7cd"} Nov 28 09:16:26 crc kubenswrapper[4946]: I1128 09:16:26.884351 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-networker-h7dbw" event={"ID":"a8f17537-bfd9-4581-a146-8582ca24195c","Type":"ContainerStarted","Data":"567a4ef87e58e4b303713a455d30004a60e4c4eca1a189cfc6f4a99b14e1271d"} Nov 28 09:16:26 crc kubenswrapper[4946]: I1128 09:16:26.907020 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-openstack-openstack-networker-h7dbw" podStartSLOduration=2.246648324 podStartE2EDuration="2.907000586s" podCreationTimestamp="2025-11-28 09:16:24 +0000 UTC" firstStartedPulling="2025-11-28 09:16:25.881840509 +0000 UTC m=+8640.259905670" lastFinishedPulling="2025-11-28 09:16:26.542192811 +0000 UTC m=+8640.920257932" observedRunningTime="2025-11-28 09:16:26.898865974 +0000 UTC m=+8641.276931085" watchObservedRunningTime="2025-11-28 09:16:26.907000586 +0000 UTC m=+8641.285065697" Nov 28 09:16:43 crc kubenswrapper[4946]: I1128 09:16:43.303541 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-b7tnm"] Nov 28 09:16:43 crc kubenswrapper[4946]: I1128 09:16:43.307334 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b7tnm" Nov 28 09:16:43 crc kubenswrapper[4946]: I1128 09:16:43.331136 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b7tnm"] Nov 28 09:16:43 crc kubenswrapper[4946]: I1128 09:16:43.445022 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6pzm\" (UniqueName: \"kubernetes.io/projected/45c881e1-1946-459c-bd4c-f7790c2dfba1-kube-api-access-c6pzm\") pod \"certified-operators-b7tnm\" (UID: \"45c881e1-1946-459c-bd4c-f7790c2dfba1\") " pod="openshift-marketplace/certified-operators-b7tnm" Nov 28 09:16:43 crc kubenswrapper[4946]: I1128 09:16:43.445355 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45c881e1-1946-459c-bd4c-f7790c2dfba1-catalog-content\") pod \"certified-operators-b7tnm\" (UID: \"45c881e1-1946-459c-bd4c-f7790c2dfba1\") " pod="openshift-marketplace/certified-operators-b7tnm" Nov 28 09:16:43 crc kubenswrapper[4946]: I1128 09:16:43.445416 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45c881e1-1946-459c-bd4c-f7790c2dfba1-utilities\") pod \"certified-operators-b7tnm\" (UID: \"45c881e1-1946-459c-bd4c-f7790c2dfba1\") " pod="openshift-marketplace/certified-operators-b7tnm" Nov 28 09:16:43 crc kubenswrapper[4946]: I1128 09:16:43.546982 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45c881e1-1946-459c-bd4c-f7790c2dfba1-catalog-content\") pod \"certified-operators-b7tnm\" (UID: \"45c881e1-1946-459c-bd4c-f7790c2dfba1\") " pod="openshift-marketplace/certified-operators-b7tnm" Nov 28 09:16:43 crc kubenswrapper[4946]: I1128 09:16:43.547050 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45c881e1-1946-459c-bd4c-f7790c2dfba1-utilities\") pod \"certified-operators-b7tnm\" (UID: \"45c881e1-1946-459c-bd4c-f7790c2dfba1\") " pod="openshift-marketplace/certified-operators-b7tnm" Nov 28 09:16:43 crc kubenswrapper[4946]: I1128 09:16:43.547174 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6pzm\" (UniqueName: \"kubernetes.io/projected/45c881e1-1946-459c-bd4c-f7790c2dfba1-kube-api-access-c6pzm\") pod \"certified-operators-b7tnm\" (UID: \"45c881e1-1946-459c-bd4c-f7790c2dfba1\") " pod="openshift-marketplace/certified-operators-b7tnm" Nov 28 09:16:43 crc kubenswrapper[4946]: I1128 09:16:43.547654 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45c881e1-1946-459c-bd4c-f7790c2dfba1-catalog-content\") pod \"certified-operators-b7tnm\" (UID: \"45c881e1-1946-459c-bd4c-f7790c2dfba1\") " pod="openshift-marketplace/certified-operators-b7tnm" Nov 28 09:16:43 crc kubenswrapper[4946]: I1128 09:16:43.547718 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45c881e1-1946-459c-bd4c-f7790c2dfba1-utilities\") pod \"certified-operators-b7tnm\" (UID: \"45c881e1-1946-459c-bd4c-f7790c2dfba1\") " pod="openshift-marketplace/certified-operators-b7tnm" Nov 28 09:16:43 crc kubenswrapper[4946]: I1128 09:16:43.571476 4946 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-c6pzm\" (UniqueName: \"kubernetes.io/projected/45c881e1-1946-459c-bd4c-f7790c2dfba1-kube-api-access-c6pzm\") pod \"certified-operators-b7tnm\" (UID: \"45c881e1-1946-459c-bd4c-f7790c2dfba1\") " pod="openshift-marketplace/certified-operators-b7tnm" Nov 28 09:16:43 crc kubenswrapper[4946]: I1128 09:16:43.632491 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b7tnm" Nov 28 09:16:44 crc kubenswrapper[4946]: I1128 09:16:44.139809 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b7tnm"] Nov 28 09:16:45 crc kubenswrapper[4946]: I1128 09:16:45.066045 4946 generic.go:334] "Generic (PLEG): container finished" podID="45c881e1-1946-459c-bd4c-f7790c2dfba1" containerID="d96a98cbeca82ac13e1a6adeca6b0315850c91bcb8924e5c2b91bb51837dd3bc" exitCode=0 Nov 28 09:16:45 crc kubenswrapper[4946]: I1128 09:16:45.066143 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b7tnm" event={"ID":"45c881e1-1946-459c-bd4c-f7790c2dfba1","Type":"ContainerDied","Data":"d96a98cbeca82ac13e1a6adeca6b0315850c91bcb8924e5c2b91bb51837dd3bc"} Nov 28 09:16:45 crc kubenswrapper[4946]: I1128 09:16:45.066394 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b7tnm" event={"ID":"45c881e1-1946-459c-bd4c-f7790c2dfba1","Type":"ContainerStarted","Data":"dc9fabeda98cef2951451173395cc14bddf8f2255d1d485f7060fdca6e12cd1f"} Nov 28 09:16:46 crc kubenswrapper[4946]: I1128 09:16:46.077911 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b7tnm" event={"ID":"45c881e1-1946-459c-bd4c-f7790c2dfba1","Type":"ContainerStarted","Data":"bf9c58e9d3ad8c2d5d0340f1063b8a50949d64944899500d3307ab2f006e68f3"} Nov 28 09:16:47 crc kubenswrapper[4946]: I1128 09:16:47.092072 4946 generic.go:334] "Generic (PLEG): container finished" podID="45c881e1-1946-459c-bd4c-f7790c2dfba1" containerID="bf9c58e9d3ad8c2d5d0340f1063b8a50949d64944899500d3307ab2f006e68f3" exitCode=0 Nov 28 09:16:47 crc kubenswrapper[4946]: I1128 09:16:47.092147 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b7tnm" event={"ID":"45c881e1-1946-459c-bd4c-f7790c2dfba1","Type":"ContainerDied","Data":"bf9c58e9d3ad8c2d5d0340f1063b8a50949d64944899500d3307ab2f006e68f3"} Nov 28 09:16:49 crc kubenswrapper[4946]: I1128 09:16:49.115303 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b7tnm" event={"ID":"45c881e1-1946-459c-bd4c-f7790c2dfba1","Type":"ContainerStarted","Data":"983aea18e97b59add5dfde8df406dde26ac25c7f1de2dd653f90c89caa7de755"} Nov 28 09:16:52 crc kubenswrapper[4946]: I1128 09:16:52.142730 4946 generic.go:334] "Generic (PLEG): container finished" podID="82f88c64-bdae-4b41-a21a-e9bcae8aac99" containerID="442cf22272d7d893409384a76efc66d8afd477c7cdce9da6f154493488e194b8" exitCode=0 Nov 28 09:16:52 crc kubenswrapper[4946]: I1128 09:16:52.142815 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-vdqgc" event={"ID":"82f88c64-bdae-4b41-a21a-e9bcae8aac99","Type":"ContainerDied","Data":"442cf22272d7d893409384a76efc66d8afd477c7cdce9da6f154493488e194b8"} Nov 28 09:16:52 crc kubenswrapper[4946]: I1128 09:16:52.164993 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-b7tnm" podStartSLOduration=6.154532933 podStartE2EDuration="9.164974611s" podCreationTimestamp="2025-11-28 09:16:43 +0000 UTC" firstStartedPulling="2025-11-28 09:16:45.068062996 +0000 UTC m=+8659.446128107" lastFinishedPulling="2025-11-28 09:16:48.078504674 +0000 UTC m=+8662.456569785" observedRunningTime="2025-11-28 09:16:49.14917875 +0000 UTC m=+8663.527243871" watchObservedRunningTime="2025-11-28 09:16:52.164974611 +0000 UTC m=+8666.543039722" Nov 28 09:16:52 crc kubenswrapper[4946]: I1128 09:16:52.424049 4946 scope.go:117] "RemoveContainer" containerID="c05344c4e5931403c20a2983817f78deb417b2a637b6b70c9aa5de2a94e9e2c8" Nov 28 09:16:52 crc kubenswrapper[4946]: I1128 09:16:52.466197 4946 scope.go:117] "RemoveContainer" containerID="27e09de53eee663f41a8673024b6b248a1976e2576f8cb660802a73b9bf923b5" Nov 28 09:16:52 crc kubenswrapper[4946]: I1128 09:16:52.512051 4946 scope.go:117] "RemoveContainer" containerID="0c8287fcd31a71b062a0dc507c4244d2d944593480a107cd86caaeae47cc501f" Nov 28 09:16:53 crc kubenswrapper[4946]: I1128 09:16:53.633652 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-b7tnm" Nov 28 09:16:53 crc kubenswrapper[4946]: I1128 09:16:53.634354 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-b7tnm" Nov 28 09:16:53 crc kubenswrapper[4946]: I1128 09:16:53.694700 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-b7tnm" Nov 28 09:16:53 crc kubenswrapper[4946]: I1128 09:16:53.715920 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-vdqgc" Nov 28 09:16:53 crc kubenswrapper[4946]: I1128 09:16:53.887705 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptmcr\" (UniqueName: \"kubernetes.io/projected/82f88c64-bdae-4b41-a21a-e9bcae8aac99-kube-api-access-ptmcr\") pod \"82f88c64-bdae-4b41-a21a-e9bcae8aac99\" (UID: \"82f88c64-bdae-4b41-a21a-e9bcae8aac99\") " Nov 28 09:16:53 crc kubenswrapper[4946]: I1128 09:16:53.887775 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/82f88c64-bdae-4b41-a21a-e9bcae8aac99-inventory\") pod \"82f88c64-bdae-4b41-a21a-e9bcae8aac99\" (UID: \"82f88c64-bdae-4b41-a21a-e9bcae8aac99\") " Nov 28 09:16:53 crc kubenswrapper[4946]: I1128 09:16:53.887839 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/82f88c64-bdae-4b41-a21a-e9bcae8aac99-ssh-key\") pod \"82f88c64-bdae-4b41-a21a-e9bcae8aac99\" (UID: \"82f88c64-bdae-4b41-a21a-e9bcae8aac99\") " Nov 28 09:16:53 crc kubenswrapper[4946]: I1128 09:16:53.887897 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/82f88c64-bdae-4b41-a21a-e9bcae8aac99-ceph\") pod \"82f88c64-bdae-4b41-a21a-e9bcae8aac99\" (UID: \"82f88c64-bdae-4b41-a21a-e9bcae8aac99\") " Nov 28 09:16:53 crc kubenswrapper[4946]: I1128 09:16:53.896033 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82f88c64-bdae-4b41-a21a-e9bcae8aac99-kube-api-access-ptmcr" (OuterVolumeSpecName: "kube-api-access-ptmcr") pod "82f88c64-bdae-4b41-a21a-e9bcae8aac99" (UID: "82f88c64-bdae-4b41-a21a-e9bcae8aac99"). 
InnerVolumeSpecName "kube-api-access-ptmcr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:16:53 crc kubenswrapper[4946]: I1128 09:16:53.896425 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82f88c64-bdae-4b41-a21a-e9bcae8aac99-ceph" (OuterVolumeSpecName: "ceph") pod "82f88c64-bdae-4b41-a21a-e9bcae8aac99" (UID: "82f88c64-bdae-4b41-a21a-e9bcae8aac99"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:16:53 crc kubenswrapper[4946]: I1128 09:16:53.930626 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82f88c64-bdae-4b41-a21a-e9bcae8aac99-inventory" (OuterVolumeSpecName: "inventory") pod "82f88c64-bdae-4b41-a21a-e9bcae8aac99" (UID: "82f88c64-bdae-4b41-a21a-e9bcae8aac99"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:16:53 crc kubenswrapper[4946]: I1128 09:16:53.956792 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82f88c64-bdae-4b41-a21a-e9bcae8aac99-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "82f88c64-bdae-4b41-a21a-e9bcae8aac99" (UID: "82f88c64-bdae-4b41-a21a-e9bcae8aac99"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:16:53 crc kubenswrapper[4946]: I1128 09:16:53.991831 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptmcr\" (UniqueName: \"kubernetes.io/projected/82f88c64-bdae-4b41-a21a-e9bcae8aac99-kube-api-access-ptmcr\") on node \"crc\" DevicePath \"\"" Nov 28 09:16:53 crc kubenswrapper[4946]: I1128 09:16:53.991861 4946 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/82f88c64-bdae-4b41-a21a-e9bcae8aac99-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 09:16:53 crc kubenswrapper[4946]: I1128 09:16:53.991871 4946 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/82f88c64-bdae-4b41-a21a-e9bcae8aac99-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 09:16:53 crc kubenswrapper[4946]: I1128 09:16:53.991880 4946 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/82f88c64-bdae-4b41-a21a-e9bcae8aac99-ceph\") on node \"crc\" DevicePath \"\"" Nov 28 09:16:54 crc kubenswrapper[4946]: I1128 09:16:54.174311 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-vdqgc" event={"ID":"82f88c64-bdae-4b41-a21a-e9bcae8aac99","Type":"ContainerDied","Data":"a8d6a4455f8c6ebb8aaaadb7eb58d0a502a6386495a43daae785eb27a70562a6"} Nov 28 09:16:54 crc kubenswrapper[4946]: I1128 09:16:54.174345 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-vdqgc" Nov 28 09:16:54 crc kubenswrapper[4946]: I1128 09:16:54.174365 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8d6a4455f8c6ebb8aaaadb7eb58d0a502a6386495a43daae785eb27a70562a6" Nov 28 09:16:54 crc kubenswrapper[4946]: I1128 09:16:54.315523 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-hp2dx"] Nov 28 09:16:54 crc kubenswrapper[4946]: E1128 09:16:54.315991 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82f88c64-bdae-4b41-a21a-e9bcae8aac99" containerName="download-cache-openstack-openstack-cell1" Nov 28 09:16:54 crc kubenswrapper[4946]: I1128 09:16:54.316006 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="82f88c64-bdae-4b41-a21a-e9bcae8aac99" containerName="download-cache-openstack-openstack-cell1" Nov 28 09:16:54 crc kubenswrapper[4946]: I1128 09:16:54.316204 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="82f88c64-bdae-4b41-a21a-e9bcae8aac99" containerName="download-cache-openstack-openstack-cell1" Nov 28 09:16:54 crc kubenswrapper[4946]: I1128 09:16:54.316958 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-hp2dx" Nov 28 09:16:54 crc kubenswrapper[4946]: I1128 09:16:54.323157 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 28 09:16:54 crc kubenswrapper[4946]: I1128 09:16:54.328097 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-psw2j" Nov 28 09:16:54 crc kubenswrapper[4946]: I1128 09:16:54.347133 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-hp2dx"] Nov 28 09:16:54 crc kubenswrapper[4946]: I1128 09:16:54.400445 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gn4c\" (UniqueName: \"kubernetes.io/projected/18ddb878-91b4-42ad-b516-fb436b5ecf2a-kube-api-access-8gn4c\") pod \"configure-network-openstack-openstack-cell1-hp2dx\" (UID: \"18ddb878-91b4-42ad-b516-fb436b5ecf2a\") " pod="openstack/configure-network-openstack-openstack-cell1-hp2dx" Nov 28 09:16:54 crc kubenswrapper[4946]: I1128 09:16:54.400519 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/18ddb878-91b4-42ad-b516-fb436b5ecf2a-ceph\") pod \"configure-network-openstack-openstack-cell1-hp2dx\" (UID: \"18ddb878-91b4-42ad-b516-fb436b5ecf2a\") " pod="openstack/configure-network-openstack-openstack-cell1-hp2dx" Nov 28 09:16:54 crc kubenswrapper[4946]: I1128 09:16:54.400763 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/18ddb878-91b4-42ad-b516-fb436b5ecf2a-ssh-key\") pod \"configure-network-openstack-openstack-cell1-hp2dx\" (UID: \"18ddb878-91b4-42ad-b516-fb436b5ecf2a\") " pod="openstack/configure-network-openstack-openstack-cell1-hp2dx" Nov 28 09:16:54 crc kubenswrapper[4946]: I1128 09:16:54.400913 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/18ddb878-91b4-42ad-b516-fb436b5ecf2a-inventory\") pod \"configure-network-openstack-openstack-cell1-hp2dx\" 
(UID: \"18ddb878-91b4-42ad-b516-fb436b5ecf2a\") " pod="openstack/configure-network-openstack-openstack-cell1-hp2dx" Nov 28 09:16:54 crc kubenswrapper[4946]: I1128 09:16:54.452516 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-b7tnm" Nov 28 09:16:54 crc kubenswrapper[4946]: I1128 09:16:54.503274 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/18ddb878-91b4-42ad-b516-fb436b5ecf2a-inventory\") pod \"configure-network-openstack-openstack-cell1-hp2dx\" (UID: \"18ddb878-91b4-42ad-b516-fb436b5ecf2a\") " pod="openstack/configure-network-openstack-openstack-cell1-hp2dx" Nov 28 09:16:54 crc kubenswrapper[4946]: I1128 09:16:54.503369 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gn4c\" (UniqueName: \"kubernetes.io/projected/18ddb878-91b4-42ad-b516-fb436b5ecf2a-kube-api-access-8gn4c\") pod \"configure-network-openstack-openstack-cell1-hp2dx\" (UID: \"18ddb878-91b4-42ad-b516-fb436b5ecf2a\") " pod="openstack/configure-network-openstack-openstack-cell1-hp2dx" Nov 28 09:16:54 crc kubenswrapper[4946]: I1128 09:16:54.503405 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/18ddb878-91b4-42ad-b516-fb436b5ecf2a-ceph\") pod \"configure-network-openstack-openstack-cell1-hp2dx\" (UID: \"18ddb878-91b4-42ad-b516-fb436b5ecf2a\") " pod="openstack/configure-network-openstack-openstack-cell1-hp2dx" Nov 28 09:16:54 crc kubenswrapper[4946]: I1128 09:16:54.503523 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/18ddb878-91b4-42ad-b516-fb436b5ecf2a-ssh-key\") pod \"configure-network-openstack-openstack-cell1-hp2dx\" (UID: \"18ddb878-91b4-42ad-b516-fb436b5ecf2a\") " pod="openstack/configure-network-openstack-openstack-cell1-hp2dx" Nov 28 09:16:54 crc kubenswrapper[4946]: I1128 09:16:54.508036 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/18ddb878-91b4-42ad-b516-fb436b5ecf2a-inventory\") pod \"configure-network-openstack-openstack-cell1-hp2dx\" (UID: \"18ddb878-91b4-42ad-b516-fb436b5ecf2a\") " pod="openstack/configure-network-openstack-openstack-cell1-hp2dx" Nov 28 09:16:54 crc kubenswrapper[4946]: I1128 09:16:54.508169 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/18ddb878-91b4-42ad-b516-fb436b5ecf2a-ssh-key\") pod \"configure-network-openstack-openstack-cell1-hp2dx\" (UID: \"18ddb878-91b4-42ad-b516-fb436b5ecf2a\") " pod="openstack/configure-network-openstack-openstack-cell1-hp2dx" Nov 28 09:16:54 crc kubenswrapper[4946]: I1128 09:16:54.509155 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/18ddb878-91b4-42ad-b516-fb436b5ecf2a-ceph\") pod \"configure-network-openstack-openstack-cell1-hp2dx\" (UID: \"18ddb878-91b4-42ad-b516-fb436b5ecf2a\") " pod="openstack/configure-network-openstack-openstack-cell1-hp2dx" Nov 28 09:16:54 crc kubenswrapper[4946]: I1128 09:16:54.511156 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b7tnm"] Nov 28 09:16:54 crc kubenswrapper[4946]: I1128 09:16:54.517928 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gn4c\" 
(UniqueName: \"kubernetes.io/projected/18ddb878-91b4-42ad-b516-fb436b5ecf2a-kube-api-access-8gn4c\") pod \"configure-network-openstack-openstack-cell1-hp2dx\" (UID: \"18ddb878-91b4-42ad-b516-fb436b5ecf2a\") " pod="openstack/configure-network-openstack-openstack-cell1-hp2dx" Nov 28 09:16:54 crc kubenswrapper[4946]: I1128 09:16:54.645197 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-hp2dx" Nov 28 09:16:55 crc kubenswrapper[4946]: I1128 09:16:55.019866 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-hp2dx"] Nov 28 09:16:55 crc kubenswrapper[4946]: I1128 09:16:55.183143 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-hp2dx" event={"ID":"18ddb878-91b4-42ad-b516-fb436b5ecf2a","Type":"ContainerStarted","Data":"5d642a9d220db59d0e40bf96b46e9ff5a9b958a9844f0d71c265263fa75aa008"} Nov 28 09:16:56 crc kubenswrapper[4946]: I1128 09:16:56.194173 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-hp2dx" event={"ID":"18ddb878-91b4-42ad-b516-fb436b5ecf2a","Type":"ContainerStarted","Data":"ff24f988fb953d9a576248851a9af90fb1afb1474e705c5135bb0e6140bf623a"} Nov 28 09:16:56 crc kubenswrapper[4946]: I1128 09:16:56.194304 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-b7tnm" podUID="45c881e1-1946-459c-bd4c-f7790c2dfba1" containerName="registry-server" containerID="cri-o://983aea18e97b59add5dfde8df406dde26ac25c7f1de2dd653f90c89caa7de755" gracePeriod=2 Nov 28 09:16:56 crc kubenswrapper[4946]: I1128 09:16:56.237825 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-openstack-openstack-cell1-hp2dx" podStartSLOduration=1.615237063 podStartE2EDuration="2.237801589s" podCreationTimestamp="2025-11-28 09:16:54 +0000 UTC" firstStartedPulling="2025-11-28 09:16:55.028514807 +0000 UTC m=+8669.406579918" lastFinishedPulling="2025-11-28 09:16:55.651079292 +0000 UTC m=+8670.029144444" observedRunningTime="2025-11-28 09:16:56.221340821 +0000 UTC m=+8670.599405972" watchObservedRunningTime="2025-11-28 09:16:56.237801589 +0000 UTC m=+8670.615866710" Nov 28 09:16:56 crc kubenswrapper[4946]: I1128 09:16:56.654014 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b7tnm" Nov 28 09:16:56 crc kubenswrapper[4946]: I1128 09:16:56.850286 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6pzm\" (UniqueName: \"kubernetes.io/projected/45c881e1-1946-459c-bd4c-f7790c2dfba1-kube-api-access-c6pzm\") pod \"45c881e1-1946-459c-bd4c-f7790c2dfba1\" (UID: \"45c881e1-1946-459c-bd4c-f7790c2dfba1\") " Nov 28 09:16:56 crc kubenswrapper[4946]: I1128 09:16:56.851080 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45c881e1-1946-459c-bd4c-f7790c2dfba1-catalog-content\") pod \"45c881e1-1946-459c-bd4c-f7790c2dfba1\" (UID: \"45c881e1-1946-459c-bd4c-f7790c2dfba1\") " Nov 28 09:16:56 crc kubenswrapper[4946]: I1128 09:16:56.851147 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45c881e1-1946-459c-bd4c-f7790c2dfba1-utilities\") pod \"45c881e1-1946-459c-bd4c-f7790c2dfba1\" (UID: \"45c881e1-1946-459c-bd4c-f7790c2dfba1\") " Nov 28 09:16:56 crc kubenswrapper[4946]: I1128 09:16:56.852154 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45c881e1-1946-459c-bd4c-f7790c2dfba1-utilities" (OuterVolumeSpecName: "utilities") pod "45c881e1-1946-459c-bd4c-f7790c2dfba1" (UID: "45c881e1-1946-459c-bd4c-f7790c2dfba1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 09:16:56 crc kubenswrapper[4946]: I1128 09:16:56.858113 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45c881e1-1946-459c-bd4c-f7790c2dfba1-kube-api-access-c6pzm" (OuterVolumeSpecName: "kube-api-access-c6pzm") pod "45c881e1-1946-459c-bd4c-f7790c2dfba1" (UID: "45c881e1-1946-459c-bd4c-f7790c2dfba1"). InnerVolumeSpecName "kube-api-access-c6pzm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:16:56 crc kubenswrapper[4946]: I1128 09:16:56.953647 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45c881e1-1946-459c-bd4c-f7790c2dfba1-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 09:16:56 crc kubenswrapper[4946]: I1128 09:16:56.953677 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6pzm\" (UniqueName: \"kubernetes.io/projected/45c881e1-1946-459c-bd4c-f7790c2dfba1-kube-api-access-c6pzm\") on node \"crc\" DevicePath \"\"" Nov 28 09:16:57 crc kubenswrapper[4946]: I1128 09:16:57.081673 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45c881e1-1946-459c-bd4c-f7790c2dfba1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "45c881e1-1946-459c-bd4c-f7790c2dfba1" (UID: "45c881e1-1946-459c-bd4c-f7790c2dfba1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 09:16:57 crc kubenswrapper[4946]: I1128 09:16:57.158991 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45c881e1-1946-459c-bd4c-f7790c2dfba1-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 09:16:57 crc kubenswrapper[4946]: I1128 09:16:57.205951 4946 generic.go:334] "Generic (PLEG): container finished" podID="45c881e1-1946-459c-bd4c-f7790c2dfba1" containerID="983aea18e97b59add5dfde8df406dde26ac25c7f1de2dd653f90c89caa7de755" exitCode=0 Nov 28 09:16:57 crc kubenswrapper[4946]: I1128 09:16:57.206047 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b7tnm" event={"ID":"45c881e1-1946-459c-bd4c-f7790c2dfba1","Type":"ContainerDied","Data":"983aea18e97b59add5dfde8df406dde26ac25c7f1de2dd653f90c89caa7de755"} Nov 28 09:16:57 crc kubenswrapper[4946]: I1128 09:16:57.207098 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b7tnm" event={"ID":"45c881e1-1946-459c-bd4c-f7790c2dfba1","Type":"ContainerDied","Data":"dc9fabeda98cef2951451173395cc14bddf8f2255d1d485f7060fdca6e12cd1f"} Nov 28 09:16:57 crc kubenswrapper[4946]: I1128 09:16:57.206072 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b7tnm" Nov 28 09:16:57 crc kubenswrapper[4946]: I1128 09:16:57.207150 4946 scope.go:117] "RemoveContainer" containerID="983aea18e97b59add5dfde8df406dde26ac25c7f1de2dd653f90c89caa7de755" Nov 28 09:16:57 crc kubenswrapper[4946]: I1128 09:16:57.254551 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b7tnm"] Nov 28 09:16:57 crc kubenswrapper[4946]: I1128 09:16:57.261348 4946 scope.go:117] "RemoveContainer" containerID="bf9c58e9d3ad8c2d5d0340f1063b8a50949d64944899500d3307ab2f006e68f3" Nov 28 09:16:57 crc kubenswrapper[4946]: I1128 09:16:57.264685 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-b7tnm"] Nov 28 09:16:57 crc kubenswrapper[4946]: I1128 09:16:57.291071 4946 scope.go:117] "RemoveContainer" containerID="d96a98cbeca82ac13e1a6adeca6b0315850c91bcb8924e5c2b91bb51837dd3bc" Nov 28 09:16:57 crc kubenswrapper[4946]: I1128 09:16:57.355821 4946 scope.go:117] "RemoveContainer" containerID="983aea18e97b59add5dfde8df406dde26ac25c7f1de2dd653f90c89caa7de755" Nov 28 09:16:57 crc kubenswrapper[4946]: E1128 09:16:57.356277 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"983aea18e97b59add5dfde8df406dde26ac25c7f1de2dd653f90c89caa7de755\": container with ID starting with 983aea18e97b59add5dfde8df406dde26ac25c7f1de2dd653f90c89caa7de755 not found: ID does not exist" containerID="983aea18e97b59add5dfde8df406dde26ac25c7f1de2dd653f90c89caa7de755" Nov 28 09:16:57 crc kubenswrapper[4946]: I1128 09:16:57.356354 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"983aea18e97b59add5dfde8df406dde26ac25c7f1de2dd653f90c89caa7de755"} err="failed to get container status \"983aea18e97b59add5dfde8df406dde26ac25c7f1de2dd653f90c89caa7de755\": rpc error: code = NotFound desc = could not find container \"983aea18e97b59add5dfde8df406dde26ac25c7f1de2dd653f90c89caa7de755\": container with ID starting with 983aea18e97b59add5dfde8df406dde26ac25c7f1de2dd653f90c89caa7de755 not found: ID does not exist" Nov 28 
09:16:57 crc kubenswrapper[4946]: I1128 09:16:57.356397 4946 scope.go:117] "RemoveContainer" containerID="bf9c58e9d3ad8c2d5d0340f1063b8a50949d64944899500d3307ab2f006e68f3" Nov 28 09:16:57 crc kubenswrapper[4946]: E1128 09:16:57.357001 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf9c58e9d3ad8c2d5d0340f1063b8a50949d64944899500d3307ab2f006e68f3\": container with ID starting with bf9c58e9d3ad8c2d5d0340f1063b8a50949d64944899500d3307ab2f006e68f3 not found: ID does not exist" containerID="bf9c58e9d3ad8c2d5d0340f1063b8a50949d64944899500d3307ab2f006e68f3" Nov 28 09:16:57 crc kubenswrapper[4946]: I1128 09:16:57.357049 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf9c58e9d3ad8c2d5d0340f1063b8a50949d64944899500d3307ab2f006e68f3"} err="failed to get container status \"bf9c58e9d3ad8c2d5d0340f1063b8a50949d64944899500d3307ab2f006e68f3\": rpc error: code = NotFound desc = could not find container \"bf9c58e9d3ad8c2d5d0340f1063b8a50949d64944899500d3307ab2f006e68f3\": container with ID starting with bf9c58e9d3ad8c2d5d0340f1063b8a50949d64944899500d3307ab2f006e68f3 not found: ID does not exist" Nov 28 09:16:57 crc kubenswrapper[4946]: I1128 09:16:57.357087 4946 scope.go:117] "RemoveContainer" containerID="d96a98cbeca82ac13e1a6adeca6b0315850c91bcb8924e5c2b91bb51837dd3bc" Nov 28 09:16:57 crc kubenswrapper[4946]: E1128 09:16:57.357407 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d96a98cbeca82ac13e1a6adeca6b0315850c91bcb8924e5c2b91bb51837dd3bc\": container with ID starting with d96a98cbeca82ac13e1a6adeca6b0315850c91bcb8924e5c2b91bb51837dd3bc not found: ID does not exist" containerID="d96a98cbeca82ac13e1a6adeca6b0315850c91bcb8924e5c2b91bb51837dd3bc" Nov 28 09:16:57 crc kubenswrapper[4946]: I1128 09:16:57.357451 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d96a98cbeca82ac13e1a6adeca6b0315850c91bcb8924e5c2b91bb51837dd3bc"} err="failed to get container status \"d96a98cbeca82ac13e1a6adeca6b0315850c91bcb8924e5c2b91bb51837dd3bc\": rpc error: code = NotFound desc = could not find container \"d96a98cbeca82ac13e1a6adeca6b0315850c91bcb8924e5c2b91bb51837dd3bc\": container with ID starting with d96a98cbeca82ac13e1a6adeca6b0315850c91bcb8924e5c2b91bb51837dd3bc not found: ID does not exist" Nov 28 09:16:58 crc kubenswrapper[4946]: I1128 09:16:58.030710 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45c881e1-1946-459c-bd4c-f7790c2dfba1" path="/var/lib/kubelet/pods/45c881e1-1946-459c-bd4c-f7790c2dfba1/volumes" Nov 28 09:17:27 crc kubenswrapper[4946]: I1128 09:17:27.549455 4946 generic.go:334] "Generic (PLEG): container finished" podID="a8f17537-bfd9-4581-a146-8582ca24195c" containerID="4c8680f8988ed629dac413594a33ffdf0f8afb769e4bc2c5f3c778fa2d80c7cd" exitCode=0 Nov 28 09:17:27 crc kubenswrapper[4946]: I1128 09:17:27.549510 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-networker-h7dbw" event={"ID":"a8f17537-bfd9-4581-a146-8582ca24195c","Type":"ContainerDied","Data":"4c8680f8988ed629dac413594a33ffdf0f8afb769e4bc2c5f3c778fa2d80c7cd"} Nov 28 09:17:29 crc kubenswrapper[4946]: I1128 09:17:29.034761 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-networker-h7dbw" Nov 28 09:17:29 crc kubenswrapper[4946]: I1128 09:17:29.096802 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqx26\" (UniqueName: \"kubernetes.io/projected/a8f17537-bfd9-4581-a146-8582ca24195c-kube-api-access-tqx26\") pod \"a8f17537-bfd9-4581-a146-8582ca24195c\" (UID: \"a8f17537-bfd9-4581-a146-8582ca24195c\") " Nov 28 09:17:29 crc kubenswrapper[4946]: I1128 09:17:29.097098 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a8f17537-bfd9-4581-a146-8582ca24195c-ssh-key\") pod \"a8f17537-bfd9-4581-a146-8582ca24195c\" (UID: \"a8f17537-bfd9-4581-a146-8582ca24195c\") " Nov 28 09:17:29 crc kubenswrapper[4946]: I1128 09:17:29.097218 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8f17537-bfd9-4581-a146-8582ca24195c-inventory\") pod \"a8f17537-bfd9-4581-a146-8582ca24195c\" (UID: \"a8f17537-bfd9-4581-a146-8582ca24195c\") " Nov 28 09:17:29 crc kubenswrapper[4946]: I1128 09:17:29.103882 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8f17537-bfd9-4581-a146-8582ca24195c-kube-api-access-tqx26" (OuterVolumeSpecName: "kube-api-access-tqx26") pod "a8f17537-bfd9-4581-a146-8582ca24195c" (UID: "a8f17537-bfd9-4581-a146-8582ca24195c"). InnerVolumeSpecName "kube-api-access-tqx26". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:17:29 crc kubenswrapper[4946]: I1128 09:17:29.129758 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8f17537-bfd9-4581-a146-8582ca24195c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a8f17537-bfd9-4581-a146-8582ca24195c" (UID: "a8f17537-bfd9-4581-a146-8582ca24195c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:17:29 crc kubenswrapper[4946]: I1128 09:17:29.136417 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8f17537-bfd9-4581-a146-8582ca24195c-inventory" (OuterVolumeSpecName: "inventory") pod "a8f17537-bfd9-4581-a146-8582ca24195c" (UID: "a8f17537-bfd9-4581-a146-8582ca24195c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:17:29 crc kubenswrapper[4946]: I1128 09:17:29.201721 4946 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a8f17537-bfd9-4581-a146-8582ca24195c-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 09:17:29 crc kubenswrapper[4946]: I1128 09:17:29.201757 4946 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8f17537-bfd9-4581-a146-8582ca24195c-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 09:17:29 crc kubenswrapper[4946]: I1128 09:17:29.201808 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqx26\" (UniqueName: \"kubernetes.io/projected/a8f17537-bfd9-4581-a146-8582ca24195c-kube-api-access-tqx26\") on node \"crc\" DevicePath \"\"" Nov 28 09:17:29 crc kubenswrapper[4946]: I1128 09:17:29.573881 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-networker-h7dbw" event={"ID":"a8f17537-bfd9-4581-a146-8582ca24195c","Type":"ContainerDied","Data":"567a4ef87e58e4b303713a455d30004a60e4c4eca1a189cfc6f4a99b14e1271d"} Nov 28 09:17:29 crc kubenswrapper[4946]: I1128 09:17:29.574341 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="567a4ef87e58e4b303713a455d30004a60e4c4eca1a189cfc6f4a99b14e1271d" Nov 28 09:17:29 crc kubenswrapper[4946]: I1128 09:17:29.574113 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-networker-h7dbw" Nov 28 09:17:29 crc kubenswrapper[4946]: I1128 09:17:29.647841 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-openstack-openstack-networker-pdfg5"] Nov 28 09:17:29 crc kubenswrapper[4946]: E1128 09:17:29.648197 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8f17537-bfd9-4581-a146-8582ca24195c" containerName="configure-network-openstack-openstack-networker" Nov 28 09:17:29 crc kubenswrapper[4946]: I1128 09:17:29.648214 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8f17537-bfd9-4581-a146-8582ca24195c" containerName="configure-network-openstack-openstack-networker" Nov 28 09:17:29 crc kubenswrapper[4946]: E1128 09:17:29.648226 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45c881e1-1946-459c-bd4c-f7790c2dfba1" containerName="extract-utilities" Nov 28 09:17:29 crc kubenswrapper[4946]: I1128 09:17:29.648234 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="45c881e1-1946-459c-bd4c-f7790c2dfba1" containerName="extract-utilities" Nov 28 09:17:29 crc kubenswrapper[4946]: E1128 09:17:29.648252 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45c881e1-1946-459c-bd4c-f7790c2dfba1" containerName="registry-server" Nov 28 09:17:29 crc kubenswrapper[4946]: I1128 09:17:29.648259 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="45c881e1-1946-459c-bd4c-f7790c2dfba1" containerName="registry-server" Nov 28 09:17:29 crc kubenswrapper[4946]: E1128 09:17:29.648284 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45c881e1-1946-459c-bd4c-f7790c2dfba1" containerName="extract-content" Nov 28 09:17:29 crc kubenswrapper[4946]: I1128 09:17:29.648290 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="45c881e1-1946-459c-bd4c-f7790c2dfba1" containerName="extract-content" Nov 28 09:17:29 crc kubenswrapper[4946]: I1128 09:17:29.648484 4946 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="45c881e1-1946-459c-bd4c-f7790c2dfba1" containerName="registry-server" Nov 28 09:17:29 crc kubenswrapper[4946]: I1128 09:17:29.648499 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8f17537-bfd9-4581-a146-8582ca24195c" containerName="configure-network-openstack-openstack-networker" Nov 28 09:17:29 crc kubenswrapper[4946]: I1128 09:17:29.649153 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-networker-pdfg5" Nov 28 09:17:29 crc kubenswrapper[4946]: I1128 09:17:29.650918 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-networker-dockercfg-7cq8d" Nov 28 09:17:29 crc kubenswrapper[4946]: I1128 09:17:29.651055 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-networker" Nov 28 09:17:29 crc kubenswrapper[4946]: I1128 09:17:29.661559 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-networker-pdfg5"] Nov 28 09:17:29 crc kubenswrapper[4946]: I1128 09:17:29.713869 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfmtc\" (UniqueName: \"kubernetes.io/projected/03b3ceb1-7fc7-4224-8dd4-cff5c557f005-kube-api-access-kfmtc\") pod \"validate-network-openstack-openstack-networker-pdfg5\" (UID: \"03b3ceb1-7fc7-4224-8dd4-cff5c557f005\") " pod="openstack/validate-network-openstack-openstack-networker-pdfg5" Nov 28 09:17:29 crc kubenswrapper[4946]: I1128 09:17:29.713907 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/03b3ceb1-7fc7-4224-8dd4-cff5c557f005-ssh-key\") pod \"validate-network-openstack-openstack-networker-pdfg5\" (UID: \"03b3ceb1-7fc7-4224-8dd4-cff5c557f005\") " pod="openstack/validate-network-openstack-openstack-networker-pdfg5" Nov 28 09:17:29 crc kubenswrapper[4946]: I1128 09:17:29.713936 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03b3ceb1-7fc7-4224-8dd4-cff5c557f005-inventory\") pod \"validate-network-openstack-openstack-networker-pdfg5\" (UID: \"03b3ceb1-7fc7-4224-8dd4-cff5c557f005\") " pod="openstack/validate-network-openstack-openstack-networker-pdfg5" Nov 28 09:17:29 crc kubenswrapper[4946]: I1128 09:17:29.816489 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfmtc\" (UniqueName: \"kubernetes.io/projected/03b3ceb1-7fc7-4224-8dd4-cff5c557f005-kube-api-access-kfmtc\") pod \"validate-network-openstack-openstack-networker-pdfg5\" (UID: \"03b3ceb1-7fc7-4224-8dd4-cff5c557f005\") " pod="openstack/validate-network-openstack-openstack-networker-pdfg5" Nov 28 09:17:29 crc kubenswrapper[4946]: I1128 09:17:29.816560 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/03b3ceb1-7fc7-4224-8dd4-cff5c557f005-ssh-key\") pod \"validate-network-openstack-openstack-networker-pdfg5\" (UID: \"03b3ceb1-7fc7-4224-8dd4-cff5c557f005\") " pod="openstack/validate-network-openstack-openstack-networker-pdfg5" Nov 28 09:17:29 crc kubenswrapper[4946]: I1128 09:17:29.816625 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/03b3ceb1-7fc7-4224-8dd4-cff5c557f005-inventory\") pod \"validate-network-openstack-openstack-networker-pdfg5\" (UID: \"03b3ceb1-7fc7-4224-8dd4-cff5c557f005\") " pod="openstack/validate-network-openstack-openstack-networker-pdfg5" Nov 28 09:17:29 crc kubenswrapper[4946]: I1128 09:17:29.822345 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/03b3ceb1-7fc7-4224-8dd4-cff5c557f005-ssh-key\") pod \"validate-network-openstack-openstack-networker-pdfg5\" (UID: \"03b3ceb1-7fc7-4224-8dd4-cff5c557f005\") " pod="openstack/validate-network-openstack-openstack-networker-pdfg5" Nov 28 09:17:29 crc kubenswrapper[4946]: I1128 09:17:29.822339 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03b3ceb1-7fc7-4224-8dd4-cff5c557f005-inventory\") pod \"validate-network-openstack-openstack-networker-pdfg5\" (UID: \"03b3ceb1-7fc7-4224-8dd4-cff5c557f005\") " pod="openstack/validate-network-openstack-openstack-networker-pdfg5" Nov 28 09:17:29 crc kubenswrapper[4946]: I1128 09:17:29.836011 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfmtc\" (UniqueName: \"kubernetes.io/projected/03b3ceb1-7fc7-4224-8dd4-cff5c557f005-kube-api-access-kfmtc\") pod \"validate-network-openstack-openstack-networker-pdfg5\" (UID: \"03b3ceb1-7fc7-4224-8dd4-cff5c557f005\") " pod="openstack/validate-network-openstack-openstack-networker-pdfg5" Nov 28 09:17:29 crc kubenswrapper[4946]: I1128 09:17:29.965309 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-networker-pdfg5" Nov 28 09:17:30 crc kubenswrapper[4946]: I1128 09:17:30.562166 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-networker-pdfg5"] Nov 28 09:17:30 crc kubenswrapper[4946]: I1128 09:17:30.571304 4946 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 09:17:30 crc kubenswrapper[4946]: I1128 09:17:30.585681 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-networker-pdfg5" event={"ID":"03b3ceb1-7fc7-4224-8dd4-cff5c557f005","Type":"ContainerStarted","Data":"48b2a4535a3112096020cea3349276b951a45e2a4485ea6eab4f2f63142e6399"} Nov 28 09:17:31 crc kubenswrapper[4946]: I1128 09:17:31.595044 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-networker-pdfg5" event={"ID":"03b3ceb1-7fc7-4224-8dd4-cff5c557f005","Type":"ContainerStarted","Data":"c3fcfe56d1796d4a7491f10e535ca8d9259a0c35f963c9e033753d3027cebf72"} Nov 28 09:17:31 crc kubenswrapper[4946]: I1128 09:17:31.613213 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-openstack-openstack-networker-pdfg5" podStartSLOduration=1.976812598 podStartE2EDuration="2.613187186s" podCreationTimestamp="2025-11-28 09:17:29 +0000 UTC" firstStartedPulling="2025-11-28 09:17:30.571017787 +0000 UTC m=+8704.949082908" lastFinishedPulling="2025-11-28 09:17:31.207392345 +0000 UTC m=+8705.585457496" observedRunningTime="2025-11-28 09:17:31.608525591 +0000 UTC m=+8705.986590712" watchObservedRunningTime="2025-11-28 09:17:31.613187186 +0000 UTC m=+8705.991252307" Nov 28 09:17:35 crc kubenswrapper[4946]: I1128 09:17:35.667622 4946 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-mtbkl"] Nov 28 09:17:35 crc kubenswrapper[4946]: I1128 09:17:35.676911 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mtbkl" Nov 28 09:17:35 crc kubenswrapper[4946]: I1128 09:17:35.689072 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mtbkl"] Nov 28 09:17:35 crc kubenswrapper[4946]: I1128 09:17:35.747508 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a18f3eee-5d76-4b78-8e6c-ffddc09e1d72-utilities\") pod \"redhat-marketplace-mtbkl\" (UID: \"a18f3eee-5d76-4b78-8e6c-ffddc09e1d72\") " pod="openshift-marketplace/redhat-marketplace-mtbkl" Nov 28 09:17:35 crc kubenswrapper[4946]: I1128 09:17:35.747582 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rww8n\" (UniqueName: \"kubernetes.io/projected/a18f3eee-5d76-4b78-8e6c-ffddc09e1d72-kube-api-access-rww8n\") pod \"redhat-marketplace-mtbkl\" (UID: \"a18f3eee-5d76-4b78-8e6c-ffddc09e1d72\") " pod="openshift-marketplace/redhat-marketplace-mtbkl" Nov 28 09:17:35 crc kubenswrapper[4946]: I1128 09:17:35.747710 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a18f3eee-5d76-4b78-8e6c-ffddc09e1d72-catalog-content\") pod \"redhat-marketplace-mtbkl\" (UID: \"a18f3eee-5d76-4b78-8e6c-ffddc09e1d72\") " pod="openshift-marketplace/redhat-marketplace-mtbkl" Nov 28 09:17:35 crc kubenswrapper[4946]: I1128 09:17:35.849308 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a18f3eee-5d76-4b78-8e6c-ffddc09e1d72-utilities\") pod \"redhat-marketplace-mtbkl\" (UID: \"a18f3eee-5d76-4b78-8e6c-ffddc09e1d72\") " pod="openshift-marketplace/redhat-marketplace-mtbkl" Nov 28 09:17:35 crc kubenswrapper[4946]: I1128 09:17:35.849413 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rww8n\" (UniqueName: \"kubernetes.io/projected/a18f3eee-5d76-4b78-8e6c-ffddc09e1d72-kube-api-access-rww8n\") pod \"redhat-marketplace-mtbkl\" (UID: \"a18f3eee-5d76-4b78-8e6c-ffddc09e1d72\") " pod="openshift-marketplace/redhat-marketplace-mtbkl" Nov 28 09:17:35 crc kubenswrapper[4946]: I1128 09:17:35.849483 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a18f3eee-5d76-4b78-8e6c-ffddc09e1d72-catalog-content\") pod \"redhat-marketplace-mtbkl\" (UID: \"a18f3eee-5d76-4b78-8e6c-ffddc09e1d72\") " pod="openshift-marketplace/redhat-marketplace-mtbkl" Nov 28 09:17:35 crc kubenswrapper[4946]: I1128 09:17:35.850292 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a18f3eee-5d76-4b78-8e6c-ffddc09e1d72-catalog-content\") pod \"redhat-marketplace-mtbkl\" (UID: \"a18f3eee-5d76-4b78-8e6c-ffddc09e1d72\") " pod="openshift-marketplace/redhat-marketplace-mtbkl" Nov 28 09:17:35 crc kubenswrapper[4946]: I1128 09:17:35.850293 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a18f3eee-5d76-4b78-8e6c-ffddc09e1d72-utilities\") pod \"redhat-marketplace-mtbkl\" (UID: \"a18f3eee-5d76-4b78-8e6c-ffddc09e1d72\") 
" pod="openshift-marketplace/redhat-marketplace-mtbkl" Nov 28 09:17:35 crc kubenswrapper[4946]: I1128 09:17:35.870378 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rww8n\" (UniqueName: \"kubernetes.io/projected/a18f3eee-5d76-4b78-8e6c-ffddc09e1d72-kube-api-access-rww8n\") pod \"redhat-marketplace-mtbkl\" (UID: \"a18f3eee-5d76-4b78-8e6c-ffddc09e1d72\") " pod="openshift-marketplace/redhat-marketplace-mtbkl" Nov 28 09:17:36 crc kubenswrapper[4946]: I1128 09:17:36.003467 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mtbkl" Nov 28 09:17:36 crc kubenswrapper[4946]: I1128 09:17:36.485457 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mtbkl"] Nov 28 09:17:36 crc kubenswrapper[4946]: I1128 09:17:36.655778 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mtbkl" event={"ID":"a18f3eee-5d76-4b78-8e6c-ffddc09e1d72","Type":"ContainerStarted","Data":"dd9e785a644d809299085a41ef1d06fd6fcbac7f14238f817384e8a2884df6f6"} Nov 28 09:17:36 crc kubenswrapper[4946]: I1128 09:17:36.658368 4946 generic.go:334] "Generic (PLEG): container finished" podID="03b3ceb1-7fc7-4224-8dd4-cff5c557f005" containerID="c3fcfe56d1796d4a7491f10e535ca8d9259a0c35f963c9e033753d3027cebf72" exitCode=0 Nov 28 09:17:36 crc kubenswrapper[4946]: I1128 09:17:36.658435 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-networker-pdfg5" event={"ID":"03b3ceb1-7fc7-4224-8dd4-cff5c557f005","Type":"ContainerDied","Data":"c3fcfe56d1796d4a7491f10e535ca8d9259a0c35f963c9e033753d3027cebf72"} Nov 28 09:17:37 crc kubenswrapper[4946]: I1128 09:17:37.670834 4946 generic.go:334] "Generic (PLEG): container finished" podID="a18f3eee-5d76-4b78-8e6c-ffddc09e1d72" containerID="6dc804582a90a5fdca60fb95370114a3f05ddb8a2ada9b5c99f41ebbff8a738e" exitCode=0 Nov 28 09:17:37 crc kubenswrapper[4946]: I1128 09:17:37.670921 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mtbkl" event={"ID":"a18f3eee-5d76-4b78-8e6c-ffddc09e1d72","Type":"ContainerDied","Data":"6dc804582a90a5fdca60fb95370114a3f05ddb8a2ada9b5c99f41ebbff8a738e"} Nov 28 09:17:38 crc kubenswrapper[4946]: I1128 09:17:38.148965 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-openstack-openstack-networker-pdfg5" Nov 28 09:17:38 crc kubenswrapper[4946]: I1128 09:17:38.302851 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfmtc\" (UniqueName: \"kubernetes.io/projected/03b3ceb1-7fc7-4224-8dd4-cff5c557f005-kube-api-access-kfmtc\") pod \"03b3ceb1-7fc7-4224-8dd4-cff5c557f005\" (UID: \"03b3ceb1-7fc7-4224-8dd4-cff5c557f005\") " Nov 28 09:17:38 crc kubenswrapper[4946]: I1128 09:17:38.303055 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/03b3ceb1-7fc7-4224-8dd4-cff5c557f005-ssh-key\") pod \"03b3ceb1-7fc7-4224-8dd4-cff5c557f005\" (UID: \"03b3ceb1-7fc7-4224-8dd4-cff5c557f005\") " Nov 28 09:17:38 crc kubenswrapper[4946]: I1128 09:17:38.303131 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03b3ceb1-7fc7-4224-8dd4-cff5c557f005-inventory\") pod \"03b3ceb1-7fc7-4224-8dd4-cff5c557f005\" (UID: \"03b3ceb1-7fc7-4224-8dd4-cff5c557f005\") " Nov 28 09:17:38 crc kubenswrapper[4946]: I1128 09:17:38.308450 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03b3ceb1-7fc7-4224-8dd4-cff5c557f005-kube-api-access-kfmtc" (OuterVolumeSpecName: "kube-api-access-kfmtc") pod "03b3ceb1-7fc7-4224-8dd4-cff5c557f005" (UID: "03b3ceb1-7fc7-4224-8dd4-cff5c557f005"). InnerVolumeSpecName "kube-api-access-kfmtc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:17:38 crc kubenswrapper[4946]: I1128 09:17:38.334721 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03b3ceb1-7fc7-4224-8dd4-cff5c557f005-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "03b3ceb1-7fc7-4224-8dd4-cff5c557f005" (UID: "03b3ceb1-7fc7-4224-8dd4-cff5c557f005"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:17:38 crc kubenswrapper[4946]: I1128 09:17:38.336923 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03b3ceb1-7fc7-4224-8dd4-cff5c557f005-inventory" (OuterVolumeSpecName: "inventory") pod "03b3ceb1-7fc7-4224-8dd4-cff5c557f005" (UID: "03b3ceb1-7fc7-4224-8dd4-cff5c557f005"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:17:38 crc kubenswrapper[4946]: I1128 09:17:38.406297 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfmtc\" (UniqueName: \"kubernetes.io/projected/03b3ceb1-7fc7-4224-8dd4-cff5c557f005-kube-api-access-kfmtc\") on node \"crc\" DevicePath \"\"" Nov 28 09:17:38 crc kubenswrapper[4946]: I1128 09:17:38.406356 4946 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/03b3ceb1-7fc7-4224-8dd4-cff5c557f005-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 09:17:38 crc kubenswrapper[4946]: I1128 09:17:38.406371 4946 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03b3ceb1-7fc7-4224-8dd4-cff5c557f005-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 09:17:38 crc kubenswrapper[4946]: I1128 09:17:38.681171 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-networker-pdfg5" event={"ID":"03b3ceb1-7fc7-4224-8dd4-cff5c557f005","Type":"ContainerDied","Data":"48b2a4535a3112096020cea3349276b951a45e2a4485ea6eab4f2f63142e6399"} Nov 28 09:17:38 crc kubenswrapper[4946]: I1128 09:17:38.681453 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48b2a4535a3112096020cea3349276b951a45e2a4485ea6eab4f2f63142e6399" Nov 28 09:17:38 crc kubenswrapper[4946]: I1128 09:17:38.681233 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-networker-pdfg5" Nov 28 09:17:38 crc kubenswrapper[4946]: I1128 09:17:38.760459 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-openstack-openstack-networker-jkjpm"] Nov 28 09:17:38 crc kubenswrapper[4946]: E1128 09:17:38.760950 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03b3ceb1-7fc7-4224-8dd4-cff5c557f005" containerName="validate-network-openstack-openstack-networker" Nov 28 09:17:38 crc kubenswrapper[4946]: I1128 09:17:38.760965 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="03b3ceb1-7fc7-4224-8dd4-cff5c557f005" containerName="validate-network-openstack-openstack-networker" Nov 28 09:17:38 crc kubenswrapper[4946]: I1128 09:17:38.761144 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="03b3ceb1-7fc7-4224-8dd4-cff5c557f005" containerName="validate-network-openstack-openstack-networker" Nov 28 09:17:38 crc kubenswrapper[4946]: I1128 09:17:38.761883 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-openstack-openstack-networker-jkjpm" Nov 28 09:17:38 crc kubenswrapper[4946]: I1128 09:17:38.764438 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-networker-dockercfg-7cq8d" Nov 28 09:17:38 crc kubenswrapper[4946]: I1128 09:17:38.764794 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-networker" Nov 28 09:17:38 crc kubenswrapper[4946]: I1128 09:17:38.770590 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-networker-jkjpm"] Nov 28 09:17:38 crc kubenswrapper[4946]: I1128 09:17:38.817831 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/01c5528b-23a0-42cb-b0d8-37daec0b4ccf-ssh-key\") pod \"install-os-openstack-openstack-networker-jkjpm\" (UID: \"01c5528b-23a0-42cb-b0d8-37daec0b4ccf\") " pod="openstack/install-os-openstack-openstack-networker-jkjpm" Nov 28 09:17:38 crc kubenswrapper[4946]: I1128 09:17:38.817907 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4rvf\" (UniqueName: \"kubernetes.io/projected/01c5528b-23a0-42cb-b0d8-37daec0b4ccf-kube-api-access-t4rvf\") pod \"install-os-openstack-openstack-networker-jkjpm\" (UID: \"01c5528b-23a0-42cb-b0d8-37daec0b4ccf\") " pod="openstack/install-os-openstack-openstack-networker-jkjpm" Nov 28 09:17:38 crc kubenswrapper[4946]: I1128 09:17:38.818267 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01c5528b-23a0-42cb-b0d8-37daec0b4ccf-inventory\") pod \"install-os-openstack-openstack-networker-jkjpm\" (UID: \"01c5528b-23a0-42cb-b0d8-37daec0b4ccf\") " pod="openstack/install-os-openstack-openstack-networker-jkjpm" Nov 28 09:17:38 crc kubenswrapper[4946]: I1128 09:17:38.920173 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4rvf\" (UniqueName: \"kubernetes.io/projected/01c5528b-23a0-42cb-b0d8-37daec0b4ccf-kube-api-access-t4rvf\") pod \"install-os-openstack-openstack-networker-jkjpm\" (UID: \"01c5528b-23a0-42cb-b0d8-37daec0b4ccf\") " pod="openstack/install-os-openstack-openstack-networker-jkjpm" Nov 28 09:17:38 crc kubenswrapper[4946]: I1128 09:17:38.920330 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01c5528b-23a0-42cb-b0d8-37daec0b4ccf-inventory\") pod \"install-os-openstack-openstack-networker-jkjpm\" (UID: \"01c5528b-23a0-42cb-b0d8-37daec0b4ccf\") " pod="openstack/install-os-openstack-openstack-networker-jkjpm" Nov 28 09:17:38 crc kubenswrapper[4946]: I1128 09:17:38.920413 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/01c5528b-23a0-42cb-b0d8-37daec0b4ccf-ssh-key\") pod \"install-os-openstack-openstack-networker-jkjpm\" (UID: \"01c5528b-23a0-42cb-b0d8-37daec0b4ccf\") " pod="openstack/install-os-openstack-openstack-networker-jkjpm" Nov 28 09:17:38 crc kubenswrapper[4946]: I1128 09:17:38.924956 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/01c5528b-23a0-42cb-b0d8-37daec0b4ccf-ssh-key\") pod \"install-os-openstack-openstack-networker-jkjpm\" (UID: \"01c5528b-23a0-42cb-b0d8-37daec0b4ccf\") " 
pod="openstack/install-os-openstack-openstack-networker-jkjpm" Nov 28 09:17:38 crc kubenswrapper[4946]: I1128 09:17:38.930332 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01c5528b-23a0-42cb-b0d8-37daec0b4ccf-inventory\") pod \"install-os-openstack-openstack-networker-jkjpm\" (UID: \"01c5528b-23a0-42cb-b0d8-37daec0b4ccf\") " pod="openstack/install-os-openstack-openstack-networker-jkjpm" Nov 28 09:17:38 crc kubenswrapper[4946]: I1128 09:17:38.937150 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4rvf\" (UniqueName: \"kubernetes.io/projected/01c5528b-23a0-42cb-b0d8-37daec0b4ccf-kube-api-access-t4rvf\") pod \"install-os-openstack-openstack-networker-jkjpm\" (UID: \"01c5528b-23a0-42cb-b0d8-37daec0b4ccf\") " pod="openstack/install-os-openstack-openstack-networker-jkjpm" Nov 28 09:17:39 crc kubenswrapper[4946]: I1128 09:17:39.145887 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-networker-jkjpm" Nov 28 09:17:39 crc kubenswrapper[4946]: I1128 09:17:39.679925 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-networker-jkjpm"] Nov 28 09:17:39 crc kubenswrapper[4946]: W1128 09:17:39.682597 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01c5528b_23a0_42cb_b0d8_37daec0b4ccf.slice/crio-49afcf126813ab3e5e54eea34763676db1a3f7275a8a376e810f26be7d0909bf WatchSource:0}: Error finding container 49afcf126813ab3e5e54eea34763676db1a3f7275a8a376e810f26be7d0909bf: Status 404 returned error can't find the container with id 49afcf126813ab3e5e54eea34763676db1a3f7275a8a376e810f26be7d0909bf Nov 28 09:17:39 crc kubenswrapper[4946]: I1128 09:17:39.693372 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-networker-jkjpm" event={"ID":"01c5528b-23a0-42cb-b0d8-37daec0b4ccf","Type":"ContainerStarted","Data":"49afcf126813ab3e5e54eea34763676db1a3f7275a8a376e810f26be7d0909bf"} Nov 28 09:17:39 crc kubenswrapper[4946]: I1128 09:17:39.696545 4946 generic.go:334] "Generic (PLEG): container finished" podID="a18f3eee-5d76-4b78-8e6c-ffddc09e1d72" containerID="f7f446f9b2be013612faefd41e34d9604bdfe9743a8e2310c2d48d65d0294680" exitCode=0 Nov 28 09:17:39 crc kubenswrapper[4946]: I1128 09:17:39.696571 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mtbkl" event={"ID":"a18f3eee-5d76-4b78-8e6c-ffddc09e1d72","Type":"ContainerDied","Data":"f7f446f9b2be013612faefd41e34d9604bdfe9743a8e2310c2d48d65d0294680"} Nov 28 09:17:40 crc kubenswrapper[4946]: I1128 09:17:40.711740 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mtbkl" event={"ID":"a18f3eee-5d76-4b78-8e6c-ffddc09e1d72","Type":"ContainerStarted","Data":"613135288ca8fcf48fc8dbb3f19090231020df9c3bd7e6c9897c5fdef936c71e"} Nov 28 09:17:40 crc kubenswrapper[4946]: I1128 09:17:40.728036 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mtbkl" podStartSLOduration=3.084067859 podStartE2EDuration="5.728017321s" podCreationTimestamp="2025-11-28 09:17:35 +0000 UTC" firstStartedPulling="2025-11-28 09:17:37.673430089 +0000 UTC m=+8712.051495200" lastFinishedPulling="2025-11-28 09:17:40.317379541 +0000 UTC m=+8714.695444662" 
observedRunningTime="2025-11-28 09:17:40.725436077 +0000 UTC m=+8715.103501208" watchObservedRunningTime="2025-11-28 09:17:40.728017321 +0000 UTC m=+8715.106082442" Nov 28 09:17:41 crc kubenswrapper[4946]: I1128 09:17:41.722620 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-networker-jkjpm" event={"ID":"01c5528b-23a0-42cb-b0d8-37daec0b4ccf","Type":"ContainerStarted","Data":"730170d9d7595f08fdddb6dc844ddb1f9b06522214c608255b4f057a65cdc6b2"} Nov 28 09:17:41 crc kubenswrapper[4946]: I1128 09:17:41.740778 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-openstack-openstack-networker-jkjpm" podStartSLOduration=2.916524826 podStartE2EDuration="3.740764501s" podCreationTimestamp="2025-11-28 09:17:38 +0000 UTC" firstStartedPulling="2025-11-28 09:17:39.684169272 +0000 UTC m=+8714.062234373" lastFinishedPulling="2025-11-28 09:17:40.508408937 +0000 UTC m=+8714.886474048" observedRunningTime="2025-11-28 09:17:41.737084149 +0000 UTC m=+8716.115149260" watchObservedRunningTime="2025-11-28 09:17:41.740764501 +0000 UTC m=+8716.118829602" Nov 28 09:17:46 crc kubenswrapper[4946]: I1128 09:17:46.004340 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mtbkl" Nov 28 09:17:46 crc kubenswrapper[4946]: I1128 09:17:46.004862 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mtbkl" Nov 28 09:17:46 crc kubenswrapper[4946]: I1128 09:17:46.072766 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mtbkl" Nov 28 09:17:46 crc kubenswrapper[4946]: I1128 09:17:46.838139 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mtbkl" Nov 28 09:17:46 crc kubenswrapper[4946]: I1128 09:17:46.892938 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mtbkl"] Nov 28 09:17:48 crc kubenswrapper[4946]: I1128 09:17:48.788615 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mtbkl" podUID="a18f3eee-5d76-4b78-8e6c-ffddc09e1d72" containerName="registry-server" containerID="cri-o://613135288ca8fcf48fc8dbb3f19090231020df9c3bd7e6c9897c5fdef936c71e" gracePeriod=2 Nov 28 09:17:49 crc kubenswrapper[4946]: I1128 09:17:49.323291 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mtbkl" Nov 28 09:17:49 crc kubenswrapper[4946]: I1128 09:17:49.464365 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a18f3eee-5d76-4b78-8e6c-ffddc09e1d72-utilities\") pod \"a18f3eee-5d76-4b78-8e6c-ffddc09e1d72\" (UID: \"a18f3eee-5d76-4b78-8e6c-ffddc09e1d72\") " Nov 28 09:17:49 crc kubenswrapper[4946]: I1128 09:17:49.464547 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a18f3eee-5d76-4b78-8e6c-ffddc09e1d72-catalog-content\") pod \"a18f3eee-5d76-4b78-8e6c-ffddc09e1d72\" (UID: \"a18f3eee-5d76-4b78-8e6c-ffddc09e1d72\") " Nov 28 09:17:49 crc kubenswrapper[4946]: I1128 09:17:49.464658 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rww8n\" (UniqueName: \"kubernetes.io/projected/a18f3eee-5d76-4b78-8e6c-ffddc09e1d72-kube-api-access-rww8n\") pod \"a18f3eee-5d76-4b78-8e6c-ffddc09e1d72\" (UID: \"a18f3eee-5d76-4b78-8e6c-ffddc09e1d72\") " Nov 28 09:17:49 crc kubenswrapper[4946]: I1128 09:17:49.465370 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a18f3eee-5d76-4b78-8e6c-ffddc09e1d72-utilities" (OuterVolumeSpecName: "utilities") pod "a18f3eee-5d76-4b78-8e6c-ffddc09e1d72" (UID: "a18f3eee-5d76-4b78-8e6c-ffddc09e1d72"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 09:17:49 crc kubenswrapper[4946]: I1128 09:17:49.471198 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a18f3eee-5d76-4b78-8e6c-ffddc09e1d72-kube-api-access-rww8n" (OuterVolumeSpecName: "kube-api-access-rww8n") pod "a18f3eee-5d76-4b78-8e6c-ffddc09e1d72" (UID: "a18f3eee-5d76-4b78-8e6c-ffddc09e1d72"). InnerVolumeSpecName "kube-api-access-rww8n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:17:49 crc kubenswrapper[4946]: I1128 09:17:49.484215 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a18f3eee-5d76-4b78-8e6c-ffddc09e1d72-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a18f3eee-5d76-4b78-8e6c-ffddc09e1d72" (UID: "a18f3eee-5d76-4b78-8e6c-ffddc09e1d72"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 09:17:49 crc kubenswrapper[4946]: I1128 09:17:49.566949 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a18f3eee-5d76-4b78-8e6c-ffddc09e1d72-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 09:17:49 crc kubenswrapper[4946]: I1128 09:17:49.566987 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a18f3eee-5d76-4b78-8e6c-ffddc09e1d72-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 09:17:49 crc kubenswrapper[4946]: I1128 09:17:49.566998 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rww8n\" (UniqueName: \"kubernetes.io/projected/a18f3eee-5d76-4b78-8e6c-ffddc09e1d72-kube-api-access-rww8n\") on node \"crc\" DevicePath \"\"" Nov 28 09:17:49 crc kubenswrapper[4946]: I1128 09:17:49.804693 4946 generic.go:334] "Generic (PLEG): container finished" podID="a18f3eee-5d76-4b78-8e6c-ffddc09e1d72" containerID="613135288ca8fcf48fc8dbb3f19090231020df9c3bd7e6c9897c5fdef936c71e" exitCode=0 Nov 28 09:17:49 crc kubenswrapper[4946]: I1128 09:17:49.804773 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mtbkl" event={"ID":"a18f3eee-5d76-4b78-8e6c-ffddc09e1d72","Type":"ContainerDied","Data":"613135288ca8fcf48fc8dbb3f19090231020df9c3bd7e6c9897c5fdef936c71e"} Nov 28 09:17:49 crc kubenswrapper[4946]: I1128 09:17:49.804815 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mtbkl" Nov 28 09:17:49 crc kubenswrapper[4946]: I1128 09:17:49.804843 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mtbkl" event={"ID":"a18f3eee-5d76-4b78-8e6c-ffddc09e1d72","Type":"ContainerDied","Data":"dd9e785a644d809299085a41ef1d06fd6fcbac7f14238f817384e8a2884df6f6"} Nov 28 09:17:49 crc kubenswrapper[4946]: I1128 09:17:49.804866 4946 scope.go:117] "RemoveContainer" containerID="613135288ca8fcf48fc8dbb3f19090231020df9c3bd7e6c9897c5fdef936c71e" Nov 28 09:17:49 crc kubenswrapper[4946]: I1128 09:17:49.852253 4946 scope.go:117] "RemoveContainer" containerID="f7f446f9b2be013612faefd41e34d9604bdfe9743a8e2310c2d48d65d0294680" Nov 28 09:17:49 crc kubenswrapper[4946]: I1128 09:17:49.869816 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mtbkl"] Nov 28 09:17:49 crc kubenswrapper[4946]: I1128 09:17:49.882882 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mtbkl"] Nov 28 09:17:49 crc kubenswrapper[4946]: I1128 09:17:49.888413 4946 scope.go:117] "RemoveContainer" containerID="6dc804582a90a5fdca60fb95370114a3f05ddb8a2ada9b5c99f41ebbff8a738e" Nov 28 09:17:49 crc kubenswrapper[4946]: I1128 09:17:49.939905 4946 scope.go:117] "RemoveContainer" containerID="613135288ca8fcf48fc8dbb3f19090231020df9c3bd7e6c9897c5fdef936c71e" Nov 28 09:17:49 crc kubenswrapper[4946]: E1128 09:17:49.940451 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"613135288ca8fcf48fc8dbb3f19090231020df9c3bd7e6c9897c5fdef936c71e\": container with ID starting with 613135288ca8fcf48fc8dbb3f19090231020df9c3bd7e6c9897c5fdef936c71e not found: ID does not exist" containerID="613135288ca8fcf48fc8dbb3f19090231020df9c3bd7e6c9897c5fdef936c71e" Nov 28 09:17:49 crc kubenswrapper[4946]: I1128 09:17:49.940567 4946 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"613135288ca8fcf48fc8dbb3f19090231020df9c3bd7e6c9897c5fdef936c71e"} err="failed to get container status \"613135288ca8fcf48fc8dbb3f19090231020df9c3bd7e6c9897c5fdef936c71e\": rpc error: code = NotFound desc = could not find container \"613135288ca8fcf48fc8dbb3f19090231020df9c3bd7e6c9897c5fdef936c71e\": container with ID starting with 613135288ca8fcf48fc8dbb3f19090231020df9c3bd7e6c9897c5fdef936c71e not found: ID does not exist" Nov 28 09:17:49 crc kubenswrapper[4946]: I1128 09:17:49.940653 4946 scope.go:117] "RemoveContainer" containerID="f7f446f9b2be013612faefd41e34d9604bdfe9743a8e2310c2d48d65d0294680" Nov 28 09:17:49 crc kubenswrapper[4946]: E1128 09:17:49.940946 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7f446f9b2be013612faefd41e34d9604bdfe9743a8e2310c2d48d65d0294680\": container with ID starting with f7f446f9b2be013612faefd41e34d9604bdfe9743a8e2310c2d48d65d0294680 not found: ID does not exist" containerID="f7f446f9b2be013612faefd41e34d9604bdfe9743a8e2310c2d48d65d0294680" Nov 28 09:17:49 crc kubenswrapper[4946]: I1128 09:17:49.941052 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7f446f9b2be013612faefd41e34d9604bdfe9743a8e2310c2d48d65d0294680"} err="failed to get container status \"f7f446f9b2be013612faefd41e34d9604bdfe9743a8e2310c2d48d65d0294680\": rpc error: code = NotFound desc = could not find container \"f7f446f9b2be013612faefd41e34d9604bdfe9743a8e2310c2d48d65d0294680\": container with ID starting with f7f446f9b2be013612faefd41e34d9604bdfe9743a8e2310c2d48d65d0294680 not found: ID does not exist" Nov 28 09:17:49 crc kubenswrapper[4946]: I1128 09:17:49.941158 4946 scope.go:117] "RemoveContainer" containerID="6dc804582a90a5fdca60fb95370114a3f05ddb8a2ada9b5c99f41ebbff8a738e" Nov 28 09:17:49 crc kubenswrapper[4946]: E1128 09:17:49.941538 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6dc804582a90a5fdca60fb95370114a3f05ddb8a2ada9b5c99f41ebbff8a738e\": container with ID starting with 6dc804582a90a5fdca60fb95370114a3f05ddb8a2ada9b5c99f41ebbff8a738e not found: ID does not exist" containerID="6dc804582a90a5fdca60fb95370114a3f05ddb8a2ada9b5c99f41ebbff8a738e" Nov 28 09:17:49 crc kubenswrapper[4946]: I1128 09:17:49.941618 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6dc804582a90a5fdca60fb95370114a3f05ddb8a2ada9b5c99f41ebbff8a738e"} err="failed to get container status \"6dc804582a90a5fdca60fb95370114a3f05ddb8a2ada9b5c99f41ebbff8a738e\": rpc error: code = NotFound desc = could not find container \"6dc804582a90a5fdca60fb95370114a3f05ddb8a2ada9b5c99f41ebbff8a738e\": container with ID starting with 6dc804582a90a5fdca60fb95370114a3f05ddb8a2ada9b5c99f41ebbff8a738e not found: ID does not exist" Nov 28 09:17:50 crc kubenswrapper[4946]: I1128 09:17:50.001446 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a18f3eee-5d76-4b78-8e6c-ffddc09e1d72" path="/var/lib/kubelet/pods/a18f3eee-5d76-4b78-8e6c-ffddc09e1d72/volumes" Nov 28 09:17:57 crc kubenswrapper[4946]: I1128 09:17:57.908402 4946 generic.go:334] "Generic (PLEG): container finished" podID="18ddb878-91b4-42ad-b516-fb436b5ecf2a" containerID="ff24f988fb953d9a576248851a9af90fb1afb1474e705c5135bb0e6140bf623a" exitCode=0 Nov 28 09:17:57 crc kubenswrapper[4946]: I1128 
09:17:57.908584 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-hp2dx" event={"ID":"18ddb878-91b4-42ad-b516-fb436b5ecf2a","Type":"ContainerDied","Data":"ff24f988fb953d9a576248851a9af90fb1afb1474e705c5135bb0e6140bf623a"} Nov 28 09:17:59 crc kubenswrapper[4946]: I1128 09:17:59.413550 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-hp2dx" Nov 28 09:17:59 crc kubenswrapper[4946]: I1128 09:17:59.598415 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/18ddb878-91b4-42ad-b516-fb436b5ecf2a-ceph\") pod \"18ddb878-91b4-42ad-b516-fb436b5ecf2a\" (UID: \"18ddb878-91b4-42ad-b516-fb436b5ecf2a\") " Nov 28 09:17:59 crc kubenswrapper[4946]: I1128 09:17:59.598572 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gn4c\" (UniqueName: \"kubernetes.io/projected/18ddb878-91b4-42ad-b516-fb436b5ecf2a-kube-api-access-8gn4c\") pod \"18ddb878-91b4-42ad-b516-fb436b5ecf2a\" (UID: \"18ddb878-91b4-42ad-b516-fb436b5ecf2a\") " Nov 28 09:17:59 crc kubenswrapper[4946]: I1128 09:17:59.598622 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/18ddb878-91b4-42ad-b516-fb436b5ecf2a-ssh-key\") pod \"18ddb878-91b4-42ad-b516-fb436b5ecf2a\" (UID: \"18ddb878-91b4-42ad-b516-fb436b5ecf2a\") " Nov 28 09:17:59 crc kubenswrapper[4946]: I1128 09:17:59.598662 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/18ddb878-91b4-42ad-b516-fb436b5ecf2a-inventory\") pod \"18ddb878-91b4-42ad-b516-fb436b5ecf2a\" (UID: \"18ddb878-91b4-42ad-b516-fb436b5ecf2a\") " Nov 28 09:17:59 crc kubenswrapper[4946]: I1128 09:17:59.604081 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18ddb878-91b4-42ad-b516-fb436b5ecf2a-ceph" (OuterVolumeSpecName: "ceph") pod "18ddb878-91b4-42ad-b516-fb436b5ecf2a" (UID: "18ddb878-91b4-42ad-b516-fb436b5ecf2a"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:17:59 crc kubenswrapper[4946]: I1128 09:17:59.606735 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18ddb878-91b4-42ad-b516-fb436b5ecf2a-kube-api-access-8gn4c" (OuterVolumeSpecName: "kube-api-access-8gn4c") pod "18ddb878-91b4-42ad-b516-fb436b5ecf2a" (UID: "18ddb878-91b4-42ad-b516-fb436b5ecf2a"). InnerVolumeSpecName "kube-api-access-8gn4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:17:59 crc kubenswrapper[4946]: I1128 09:17:59.640770 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18ddb878-91b4-42ad-b516-fb436b5ecf2a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "18ddb878-91b4-42ad-b516-fb436b5ecf2a" (UID: "18ddb878-91b4-42ad-b516-fb436b5ecf2a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:17:59 crc kubenswrapper[4946]: I1128 09:17:59.651589 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18ddb878-91b4-42ad-b516-fb436b5ecf2a-inventory" (OuterVolumeSpecName: "inventory") pod "18ddb878-91b4-42ad-b516-fb436b5ecf2a" (UID: "18ddb878-91b4-42ad-b516-fb436b5ecf2a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:17:59 crc kubenswrapper[4946]: I1128 09:17:59.700876 4946 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/18ddb878-91b4-42ad-b516-fb436b5ecf2a-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 09:17:59 crc kubenswrapper[4946]: I1128 09:17:59.700905 4946 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/18ddb878-91b4-42ad-b516-fb436b5ecf2a-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 09:17:59 crc kubenswrapper[4946]: I1128 09:17:59.700915 4946 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/18ddb878-91b4-42ad-b516-fb436b5ecf2a-ceph\") on node \"crc\" DevicePath \"\"" Nov 28 09:17:59 crc kubenswrapper[4946]: I1128 09:17:59.700945 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gn4c\" (UniqueName: \"kubernetes.io/projected/18ddb878-91b4-42ad-b516-fb436b5ecf2a-kube-api-access-8gn4c\") on node \"crc\" DevicePath \"\"" Nov 28 09:17:59 crc kubenswrapper[4946]: I1128 09:17:59.931516 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-hp2dx" event={"ID":"18ddb878-91b4-42ad-b516-fb436b5ecf2a","Type":"ContainerDied","Data":"5d642a9d220db59d0e40bf96b46e9ff5a9b958a9844f0d71c265263fa75aa008"} Nov 28 09:17:59 crc kubenswrapper[4946]: I1128 09:17:59.931595 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d642a9d220db59d0e40bf96b46e9ff5a9b958a9844f0d71c265263fa75aa008" Nov 28 09:17:59 crc kubenswrapper[4946]: I1128 09:17:59.931550 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-hp2dx" Nov 28 09:18:00 crc kubenswrapper[4946]: I1128 09:18:00.007810 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-jx5qh"] Nov 28 09:18:00 crc kubenswrapper[4946]: E1128 09:18:00.008248 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a18f3eee-5d76-4b78-8e6c-ffddc09e1d72" containerName="registry-server" Nov 28 09:18:00 crc kubenswrapper[4946]: I1128 09:18:00.008266 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="a18f3eee-5d76-4b78-8e6c-ffddc09e1d72" containerName="registry-server" Nov 28 09:18:00 crc kubenswrapper[4946]: E1128 09:18:00.008295 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a18f3eee-5d76-4b78-8e6c-ffddc09e1d72" containerName="extract-utilities" Nov 28 09:18:00 crc kubenswrapper[4946]: I1128 09:18:00.008304 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="a18f3eee-5d76-4b78-8e6c-ffddc09e1d72" containerName="extract-utilities" Nov 28 09:18:00 crc kubenswrapper[4946]: E1128 09:18:00.008333 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a18f3eee-5d76-4b78-8e6c-ffddc09e1d72" containerName="extract-content" Nov 28 09:18:00 crc kubenswrapper[4946]: I1128 09:18:00.008341 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="a18f3eee-5d76-4b78-8e6c-ffddc09e1d72" containerName="extract-content" Nov 28 09:18:00 crc kubenswrapper[4946]: E1128 09:18:00.008354 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18ddb878-91b4-42ad-b516-fb436b5ecf2a" containerName="configure-network-openstack-openstack-cell1" Nov 28 09:18:00 crc kubenswrapper[4946]: I1128 09:18:00.008360 4946 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="18ddb878-91b4-42ad-b516-fb436b5ecf2a" containerName="configure-network-openstack-openstack-cell1" Nov 28 09:18:00 crc kubenswrapper[4946]: I1128 09:18:00.062121 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="a18f3eee-5d76-4b78-8e6c-ffddc09e1d72" containerName="registry-server" Nov 28 09:18:00 crc kubenswrapper[4946]: I1128 09:18:00.062217 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="18ddb878-91b4-42ad-b516-fb436b5ecf2a" containerName="configure-network-openstack-openstack-cell1" Nov 28 09:18:00 crc kubenswrapper[4946]: I1128 09:18:00.063255 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-jx5qh"] Nov 28 09:18:00 crc kubenswrapper[4946]: I1128 09:18:00.063375 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-jx5qh" Nov 28 09:18:00 crc kubenswrapper[4946]: I1128 09:18:00.065601 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 28 09:18:00 crc kubenswrapper[4946]: I1128 09:18:00.067738 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-psw2j" Nov 28 09:18:00 crc kubenswrapper[4946]: I1128 09:18:00.214231 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/81d6f1ef-6551-4711-b21f-6af3aabc8d83-ssh-key\") pod \"validate-network-openstack-openstack-cell1-jx5qh\" (UID: \"81d6f1ef-6551-4711-b21f-6af3aabc8d83\") " pod="openstack/validate-network-openstack-openstack-cell1-jx5qh" Nov 28 09:18:00 crc kubenswrapper[4946]: I1128 09:18:00.214274 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81d6f1ef-6551-4711-b21f-6af3aabc8d83-inventory\") pod \"validate-network-openstack-openstack-cell1-jx5qh\" (UID: \"81d6f1ef-6551-4711-b21f-6af3aabc8d83\") " pod="openstack/validate-network-openstack-openstack-cell1-jx5qh" Nov 28 09:18:00 crc kubenswrapper[4946]: I1128 09:18:00.214322 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/81d6f1ef-6551-4711-b21f-6af3aabc8d83-ceph\") pod \"validate-network-openstack-openstack-cell1-jx5qh\" (UID: \"81d6f1ef-6551-4711-b21f-6af3aabc8d83\") " pod="openstack/validate-network-openstack-openstack-cell1-jx5qh" Nov 28 09:18:00 crc kubenswrapper[4946]: I1128 09:18:00.214386 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqfpl\" (UniqueName: \"kubernetes.io/projected/81d6f1ef-6551-4711-b21f-6af3aabc8d83-kube-api-access-bqfpl\") pod \"validate-network-openstack-openstack-cell1-jx5qh\" (UID: \"81d6f1ef-6551-4711-b21f-6af3aabc8d83\") " pod="openstack/validate-network-openstack-openstack-cell1-jx5qh" Nov 28 09:18:00 crc kubenswrapper[4946]: I1128 09:18:00.316118 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/81d6f1ef-6551-4711-b21f-6af3aabc8d83-ssh-key\") pod \"validate-network-openstack-openstack-cell1-jx5qh\" (UID: \"81d6f1ef-6551-4711-b21f-6af3aabc8d83\") " pod="openstack/validate-network-openstack-openstack-cell1-jx5qh" Nov 28 09:18:00 crc kubenswrapper[4946]: I1128 09:18:00.316186 4946 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81d6f1ef-6551-4711-b21f-6af3aabc8d83-inventory\") pod \"validate-network-openstack-openstack-cell1-jx5qh\" (UID: \"81d6f1ef-6551-4711-b21f-6af3aabc8d83\") " pod="openstack/validate-network-openstack-openstack-cell1-jx5qh" Nov 28 09:18:00 crc kubenswrapper[4946]: I1128 09:18:00.316287 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/81d6f1ef-6551-4711-b21f-6af3aabc8d83-ceph\") pod \"validate-network-openstack-openstack-cell1-jx5qh\" (UID: \"81d6f1ef-6551-4711-b21f-6af3aabc8d83\") " pod="openstack/validate-network-openstack-openstack-cell1-jx5qh" Nov 28 09:18:00 crc kubenswrapper[4946]: I1128 09:18:00.316353 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqfpl\" (UniqueName: \"kubernetes.io/projected/81d6f1ef-6551-4711-b21f-6af3aabc8d83-kube-api-access-bqfpl\") pod \"validate-network-openstack-openstack-cell1-jx5qh\" (UID: \"81d6f1ef-6551-4711-b21f-6af3aabc8d83\") " pod="openstack/validate-network-openstack-openstack-cell1-jx5qh" Nov 28 09:18:00 crc kubenswrapper[4946]: I1128 09:18:00.320142 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81d6f1ef-6551-4711-b21f-6af3aabc8d83-inventory\") pod \"validate-network-openstack-openstack-cell1-jx5qh\" (UID: \"81d6f1ef-6551-4711-b21f-6af3aabc8d83\") " pod="openstack/validate-network-openstack-openstack-cell1-jx5qh" Nov 28 09:18:00 crc kubenswrapper[4946]: I1128 09:18:00.323224 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/81d6f1ef-6551-4711-b21f-6af3aabc8d83-ssh-key\") pod \"validate-network-openstack-openstack-cell1-jx5qh\" (UID: \"81d6f1ef-6551-4711-b21f-6af3aabc8d83\") " pod="openstack/validate-network-openstack-openstack-cell1-jx5qh" Nov 28 09:18:00 crc kubenswrapper[4946]: I1128 09:18:00.328973 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/81d6f1ef-6551-4711-b21f-6af3aabc8d83-ceph\") pod \"validate-network-openstack-openstack-cell1-jx5qh\" (UID: \"81d6f1ef-6551-4711-b21f-6af3aabc8d83\") " pod="openstack/validate-network-openstack-openstack-cell1-jx5qh" Nov 28 09:18:00 crc kubenswrapper[4946]: I1128 09:18:00.335203 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqfpl\" (UniqueName: \"kubernetes.io/projected/81d6f1ef-6551-4711-b21f-6af3aabc8d83-kube-api-access-bqfpl\") pod \"validate-network-openstack-openstack-cell1-jx5qh\" (UID: \"81d6f1ef-6551-4711-b21f-6af3aabc8d83\") " pod="openstack/validate-network-openstack-openstack-cell1-jx5qh" Nov 28 09:18:00 crc kubenswrapper[4946]: I1128 09:18:00.385364 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-jx5qh" Nov 28 09:18:01 crc kubenswrapper[4946]: I1128 09:18:01.029025 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-jx5qh"] Nov 28 09:18:01 crc kubenswrapper[4946]: I1128 09:18:01.957682 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-jx5qh" event={"ID":"81d6f1ef-6551-4711-b21f-6af3aabc8d83","Type":"ContainerStarted","Data":"9c0ad5600508faa99c882b197b357a91fe354ad2f2b2bbcb9c78a5967ef3e41d"} Nov 28 09:18:01 crc kubenswrapper[4946]: I1128 09:18:01.958074 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-jx5qh" event={"ID":"81d6f1ef-6551-4711-b21f-6af3aabc8d83","Type":"ContainerStarted","Data":"8aaadc1c5c2126b6bac8c6d564d0af53dee8d3114f961503841cf25ca3a965c9"} Nov 28 09:18:01 crc kubenswrapper[4946]: I1128 09:18:01.984546 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-openstack-openstack-cell1-jx5qh" podStartSLOduration=2.547957132 podStartE2EDuration="2.984527747s" podCreationTimestamp="2025-11-28 09:17:59 +0000 UTC" firstStartedPulling="2025-11-28 09:18:01.013157574 +0000 UTC m=+8735.391222685" lastFinishedPulling="2025-11-28 09:18:01.449728149 +0000 UTC m=+8735.827793300" observedRunningTime="2025-11-28 09:18:01.979523273 +0000 UTC m=+8736.357588404" watchObservedRunningTime="2025-11-28 09:18:01.984527747 +0000 UTC m=+8736.362592868" Nov 28 09:18:07 crc kubenswrapper[4946]: I1128 09:18:07.016909 4946 generic.go:334] "Generic (PLEG): container finished" podID="81d6f1ef-6551-4711-b21f-6af3aabc8d83" containerID="9c0ad5600508faa99c882b197b357a91fe354ad2f2b2bbcb9c78a5967ef3e41d" exitCode=0 Nov 28 09:18:07 crc kubenswrapper[4946]: I1128 09:18:07.017080 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-jx5qh" event={"ID":"81d6f1ef-6551-4711-b21f-6af3aabc8d83","Type":"ContainerDied","Data":"9c0ad5600508faa99c882b197b357a91fe354ad2f2b2bbcb9c78a5967ef3e41d"} Nov 28 09:18:08 crc kubenswrapper[4946]: I1128 09:18:08.595896 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-jx5qh" Nov 28 09:18:08 crc kubenswrapper[4946]: I1128 09:18:08.712912 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81d6f1ef-6551-4711-b21f-6af3aabc8d83-inventory\") pod \"81d6f1ef-6551-4711-b21f-6af3aabc8d83\" (UID: \"81d6f1ef-6551-4711-b21f-6af3aabc8d83\") " Nov 28 09:18:08 crc kubenswrapper[4946]: I1128 09:18:08.713446 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/81d6f1ef-6551-4711-b21f-6af3aabc8d83-ssh-key\") pod \"81d6f1ef-6551-4711-b21f-6af3aabc8d83\" (UID: \"81d6f1ef-6551-4711-b21f-6af3aabc8d83\") " Nov 28 09:18:08 crc kubenswrapper[4946]: I1128 09:18:08.713661 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqfpl\" (UniqueName: \"kubernetes.io/projected/81d6f1ef-6551-4711-b21f-6af3aabc8d83-kube-api-access-bqfpl\") pod \"81d6f1ef-6551-4711-b21f-6af3aabc8d83\" (UID: \"81d6f1ef-6551-4711-b21f-6af3aabc8d83\") " Nov 28 09:18:08 crc kubenswrapper[4946]: I1128 09:18:08.713975 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/81d6f1ef-6551-4711-b21f-6af3aabc8d83-ceph\") pod \"81d6f1ef-6551-4711-b21f-6af3aabc8d83\" (UID: \"81d6f1ef-6551-4711-b21f-6af3aabc8d83\") " Nov 28 09:18:08 crc kubenswrapper[4946]: I1128 09:18:08.723256 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81d6f1ef-6551-4711-b21f-6af3aabc8d83-ceph" (OuterVolumeSpecName: "ceph") pod "81d6f1ef-6551-4711-b21f-6af3aabc8d83" (UID: "81d6f1ef-6551-4711-b21f-6af3aabc8d83"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:18:08 crc kubenswrapper[4946]: I1128 09:18:08.724303 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81d6f1ef-6551-4711-b21f-6af3aabc8d83-kube-api-access-bqfpl" (OuterVolumeSpecName: "kube-api-access-bqfpl") pod "81d6f1ef-6551-4711-b21f-6af3aabc8d83" (UID: "81d6f1ef-6551-4711-b21f-6af3aabc8d83"). InnerVolumeSpecName "kube-api-access-bqfpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:18:08 crc kubenswrapper[4946]: I1128 09:18:08.752126 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81d6f1ef-6551-4711-b21f-6af3aabc8d83-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "81d6f1ef-6551-4711-b21f-6af3aabc8d83" (UID: "81d6f1ef-6551-4711-b21f-6af3aabc8d83"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:18:08 crc kubenswrapper[4946]: I1128 09:18:08.757867 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81d6f1ef-6551-4711-b21f-6af3aabc8d83-inventory" (OuterVolumeSpecName: "inventory") pod "81d6f1ef-6551-4711-b21f-6af3aabc8d83" (UID: "81d6f1ef-6551-4711-b21f-6af3aabc8d83"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:18:08 crc kubenswrapper[4946]: I1128 09:18:08.821042 4946 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81d6f1ef-6551-4711-b21f-6af3aabc8d83-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 09:18:08 crc kubenswrapper[4946]: I1128 09:18:08.821097 4946 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/81d6f1ef-6551-4711-b21f-6af3aabc8d83-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 09:18:08 crc kubenswrapper[4946]: I1128 09:18:08.821117 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqfpl\" (UniqueName: \"kubernetes.io/projected/81d6f1ef-6551-4711-b21f-6af3aabc8d83-kube-api-access-bqfpl\") on node \"crc\" DevicePath \"\"" Nov 28 09:18:08 crc kubenswrapper[4946]: I1128 09:18:08.821136 4946 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/81d6f1ef-6551-4711-b21f-6af3aabc8d83-ceph\") on node \"crc\" DevicePath \"\"" Nov 28 09:18:09 crc kubenswrapper[4946]: I1128 09:18:09.047917 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-jx5qh" event={"ID":"81d6f1ef-6551-4711-b21f-6af3aabc8d83","Type":"ContainerDied","Data":"8aaadc1c5c2126b6bac8c6d564d0af53dee8d3114f961503841cf25ca3a965c9"} Nov 28 09:18:09 crc kubenswrapper[4946]: I1128 09:18:09.048267 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8aaadc1c5c2126b6bac8c6d564d0af53dee8d3114f961503841cf25ca3a965c9" Nov 28 09:18:09 crc kubenswrapper[4946]: I1128 09:18:09.048185 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-jx5qh" Nov 28 09:18:09 crc kubenswrapper[4946]: I1128 09:18:09.168238 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-openstack-openstack-cell1-5vm8c"] Nov 28 09:18:09 crc kubenswrapper[4946]: E1128 09:18:09.168734 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81d6f1ef-6551-4711-b21f-6af3aabc8d83" containerName="validate-network-openstack-openstack-cell1" Nov 28 09:18:09 crc kubenswrapper[4946]: I1128 09:18:09.168758 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="81d6f1ef-6551-4711-b21f-6af3aabc8d83" containerName="validate-network-openstack-openstack-cell1" Nov 28 09:18:09 crc kubenswrapper[4946]: I1128 09:18:09.169060 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="81d6f1ef-6551-4711-b21f-6af3aabc8d83" containerName="validate-network-openstack-openstack-cell1" Nov 28 09:18:09 crc kubenswrapper[4946]: I1128 09:18:09.169969 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-5vm8c" Nov 28 09:18:09 crc kubenswrapper[4946]: I1128 09:18:09.174627 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-psw2j" Nov 28 09:18:09 crc kubenswrapper[4946]: I1128 09:18:09.174810 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 28 09:18:09 crc kubenswrapper[4946]: I1128 09:18:09.183837 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-cell1-5vm8c"] Nov 28 09:18:09 crc kubenswrapper[4946]: I1128 09:18:09.230600 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/78f78de8-2e26-47fe-b56f-4c4b8ee76058-inventory\") pod \"install-os-openstack-openstack-cell1-5vm8c\" (UID: \"78f78de8-2e26-47fe-b56f-4c4b8ee76058\") " pod="openstack/install-os-openstack-openstack-cell1-5vm8c" Nov 28 09:18:09 crc kubenswrapper[4946]: I1128 09:18:09.230690 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnqfq\" (UniqueName: \"kubernetes.io/projected/78f78de8-2e26-47fe-b56f-4c4b8ee76058-kube-api-access-vnqfq\") pod \"install-os-openstack-openstack-cell1-5vm8c\" (UID: \"78f78de8-2e26-47fe-b56f-4c4b8ee76058\") " pod="openstack/install-os-openstack-openstack-cell1-5vm8c" Nov 28 09:18:09 crc kubenswrapper[4946]: I1128 09:18:09.230885 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/78f78de8-2e26-47fe-b56f-4c4b8ee76058-ceph\") pod \"install-os-openstack-openstack-cell1-5vm8c\" (UID: \"78f78de8-2e26-47fe-b56f-4c4b8ee76058\") " pod="openstack/install-os-openstack-openstack-cell1-5vm8c" Nov 28 09:18:09 crc kubenswrapper[4946]: I1128 09:18:09.231021 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/78f78de8-2e26-47fe-b56f-4c4b8ee76058-ssh-key\") pod \"install-os-openstack-openstack-cell1-5vm8c\" (UID: \"78f78de8-2e26-47fe-b56f-4c4b8ee76058\") " pod="openstack/install-os-openstack-openstack-cell1-5vm8c" Nov 28 09:18:09 crc kubenswrapper[4946]: I1128 09:18:09.333138 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/78f78de8-2e26-47fe-b56f-4c4b8ee76058-ceph\") pod \"install-os-openstack-openstack-cell1-5vm8c\" (UID: \"78f78de8-2e26-47fe-b56f-4c4b8ee76058\") " pod="openstack/install-os-openstack-openstack-cell1-5vm8c" Nov 28 09:18:09 crc kubenswrapper[4946]: I1128 09:18:09.333678 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/78f78de8-2e26-47fe-b56f-4c4b8ee76058-ssh-key\") pod \"install-os-openstack-openstack-cell1-5vm8c\" (UID: \"78f78de8-2e26-47fe-b56f-4c4b8ee76058\") " pod="openstack/install-os-openstack-openstack-cell1-5vm8c" Nov 28 09:18:09 crc kubenswrapper[4946]: I1128 09:18:09.333722 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/78f78de8-2e26-47fe-b56f-4c4b8ee76058-inventory\") pod \"install-os-openstack-openstack-cell1-5vm8c\" (UID: \"78f78de8-2e26-47fe-b56f-4c4b8ee76058\") " pod="openstack/install-os-openstack-openstack-cell1-5vm8c" Nov 28 09:18:09 crc 
kubenswrapper[4946]: I1128 09:18:09.333784 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnqfq\" (UniqueName: \"kubernetes.io/projected/78f78de8-2e26-47fe-b56f-4c4b8ee76058-kube-api-access-vnqfq\") pod \"install-os-openstack-openstack-cell1-5vm8c\" (UID: \"78f78de8-2e26-47fe-b56f-4c4b8ee76058\") " pod="openstack/install-os-openstack-openstack-cell1-5vm8c" Nov 28 09:18:09 crc kubenswrapper[4946]: I1128 09:18:09.338259 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/78f78de8-2e26-47fe-b56f-4c4b8ee76058-ceph\") pod \"install-os-openstack-openstack-cell1-5vm8c\" (UID: \"78f78de8-2e26-47fe-b56f-4c4b8ee76058\") " pod="openstack/install-os-openstack-openstack-cell1-5vm8c" Nov 28 09:18:09 crc kubenswrapper[4946]: I1128 09:18:09.338494 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/78f78de8-2e26-47fe-b56f-4c4b8ee76058-ssh-key\") pod \"install-os-openstack-openstack-cell1-5vm8c\" (UID: \"78f78de8-2e26-47fe-b56f-4c4b8ee76058\") " pod="openstack/install-os-openstack-openstack-cell1-5vm8c" Nov 28 09:18:09 crc kubenswrapper[4946]: I1128 09:18:09.345156 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/78f78de8-2e26-47fe-b56f-4c4b8ee76058-inventory\") pod \"install-os-openstack-openstack-cell1-5vm8c\" (UID: \"78f78de8-2e26-47fe-b56f-4c4b8ee76058\") " pod="openstack/install-os-openstack-openstack-cell1-5vm8c" Nov 28 09:18:09 crc kubenswrapper[4946]: I1128 09:18:09.351058 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnqfq\" (UniqueName: \"kubernetes.io/projected/78f78de8-2e26-47fe-b56f-4c4b8ee76058-kube-api-access-vnqfq\") pod \"install-os-openstack-openstack-cell1-5vm8c\" (UID: \"78f78de8-2e26-47fe-b56f-4c4b8ee76058\") " pod="openstack/install-os-openstack-openstack-cell1-5vm8c" Nov 28 09:18:09 crc kubenswrapper[4946]: I1128 09:18:09.503153 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-5vm8c" Nov 28 09:18:10 crc kubenswrapper[4946]: I1128 09:18:10.175065 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-cell1-5vm8c"] Nov 28 09:18:11 crc kubenswrapper[4946]: I1128 09:18:11.072240 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-5vm8c" event={"ID":"78f78de8-2e26-47fe-b56f-4c4b8ee76058","Type":"ContainerStarted","Data":"ca891369fddd9447f2b1b370e2e46783b797908567055b5227b7d9e4849abae1"} Nov 28 09:18:11 crc kubenswrapper[4946]: I1128 09:18:11.072514 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-5vm8c" event={"ID":"78f78de8-2e26-47fe-b56f-4c4b8ee76058","Type":"ContainerStarted","Data":"7707d903e24bb80e9c0d9d379d142d26c4d17e9f418d7bbc555141cc06b889c3"} Nov 28 09:18:11 crc kubenswrapper[4946]: I1128 09:18:11.124878 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-openstack-openstack-cell1-5vm8c" podStartSLOduration=1.6761575199999998 podStartE2EDuration="2.124860935s" podCreationTimestamp="2025-11-28 09:18:09 +0000 UTC" firstStartedPulling="2025-11-28 09:18:10.170807441 +0000 UTC m=+8744.548872562" lastFinishedPulling="2025-11-28 09:18:10.619510826 +0000 UTC m=+8744.997575977" observedRunningTime="2025-11-28 09:18:11.107748791 +0000 UTC m=+8745.485813912" watchObservedRunningTime="2025-11-28 09:18:11.124860935 +0000 UTC m=+8745.502926046" Nov 28 09:18:29 crc kubenswrapper[4946]: I1128 09:18:29.295617 4946 generic.go:334] "Generic (PLEG): container finished" podID="01c5528b-23a0-42cb-b0d8-37daec0b4ccf" containerID="730170d9d7595f08fdddb6dc844ddb1f9b06522214c608255b4f057a65cdc6b2" exitCode=0 Nov 28 09:18:29 crc kubenswrapper[4946]: I1128 09:18:29.295725 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-networker-jkjpm" event={"ID":"01c5528b-23a0-42cb-b0d8-37daec0b4ccf","Type":"ContainerDied","Data":"730170d9d7595f08fdddb6dc844ddb1f9b06522214c608255b4f057a65cdc6b2"} Nov 28 09:18:30 crc kubenswrapper[4946]: I1128 09:18:30.920540 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-openstack-openstack-networker-jkjpm" Nov 28 09:18:31 crc kubenswrapper[4946]: I1128 09:18:31.069339 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/01c5528b-23a0-42cb-b0d8-37daec0b4ccf-ssh-key\") pod \"01c5528b-23a0-42cb-b0d8-37daec0b4ccf\" (UID: \"01c5528b-23a0-42cb-b0d8-37daec0b4ccf\") " Nov 28 09:18:31 crc kubenswrapper[4946]: I1128 09:18:31.069802 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4rvf\" (UniqueName: \"kubernetes.io/projected/01c5528b-23a0-42cb-b0d8-37daec0b4ccf-kube-api-access-t4rvf\") pod \"01c5528b-23a0-42cb-b0d8-37daec0b4ccf\" (UID: \"01c5528b-23a0-42cb-b0d8-37daec0b4ccf\") " Nov 28 09:18:31 crc kubenswrapper[4946]: I1128 09:18:31.070127 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01c5528b-23a0-42cb-b0d8-37daec0b4ccf-inventory\") pod \"01c5528b-23a0-42cb-b0d8-37daec0b4ccf\" (UID: \"01c5528b-23a0-42cb-b0d8-37daec0b4ccf\") " Nov 28 09:18:31 crc kubenswrapper[4946]: I1128 09:18:31.075977 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01c5528b-23a0-42cb-b0d8-37daec0b4ccf-kube-api-access-t4rvf" (OuterVolumeSpecName: "kube-api-access-t4rvf") pod "01c5528b-23a0-42cb-b0d8-37daec0b4ccf" (UID: "01c5528b-23a0-42cb-b0d8-37daec0b4ccf"). InnerVolumeSpecName "kube-api-access-t4rvf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:18:31 crc kubenswrapper[4946]: I1128 09:18:31.097556 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01c5528b-23a0-42cb-b0d8-37daec0b4ccf-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "01c5528b-23a0-42cb-b0d8-37daec0b4ccf" (UID: "01c5528b-23a0-42cb-b0d8-37daec0b4ccf"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:18:31 crc kubenswrapper[4946]: I1128 09:18:31.122512 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01c5528b-23a0-42cb-b0d8-37daec0b4ccf-inventory" (OuterVolumeSpecName: "inventory") pod "01c5528b-23a0-42cb-b0d8-37daec0b4ccf" (UID: "01c5528b-23a0-42cb-b0d8-37daec0b4ccf"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:18:31 crc kubenswrapper[4946]: I1128 09:18:31.173418 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4rvf\" (UniqueName: \"kubernetes.io/projected/01c5528b-23a0-42cb-b0d8-37daec0b4ccf-kube-api-access-t4rvf\") on node \"crc\" DevicePath \"\"" Nov 28 09:18:31 crc kubenswrapper[4946]: I1128 09:18:31.173504 4946 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01c5528b-23a0-42cb-b0d8-37daec0b4ccf-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 09:18:31 crc kubenswrapper[4946]: I1128 09:18:31.173587 4946 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/01c5528b-23a0-42cb-b0d8-37daec0b4ccf-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 09:18:31 crc kubenswrapper[4946]: I1128 09:18:31.325809 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-networker-jkjpm" event={"ID":"01c5528b-23a0-42cb-b0d8-37daec0b4ccf","Type":"ContainerDied","Data":"49afcf126813ab3e5e54eea34763676db1a3f7275a8a376e810f26be7d0909bf"} Nov 28 09:18:31 crc kubenswrapper[4946]: I1128 09:18:31.325874 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49afcf126813ab3e5e54eea34763676db1a3f7275a8a376e810f26be7d0909bf" Nov 28 09:18:31 crc kubenswrapper[4946]: I1128 09:18:31.326323 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-networker-jkjpm" Nov 28 09:18:31 crc kubenswrapper[4946]: I1128 09:18:31.436164 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-openstack-openstack-networker-kmdc6"] Nov 28 09:18:31 crc kubenswrapper[4946]: E1128 09:18:31.436945 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01c5528b-23a0-42cb-b0d8-37daec0b4ccf" containerName="install-os-openstack-openstack-networker" Nov 28 09:18:31 crc kubenswrapper[4946]: I1128 09:18:31.436972 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="01c5528b-23a0-42cb-b0d8-37daec0b4ccf" containerName="install-os-openstack-openstack-networker" Nov 28 09:18:31 crc kubenswrapper[4946]: I1128 09:18:31.437319 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="01c5528b-23a0-42cb-b0d8-37daec0b4ccf" containerName="install-os-openstack-openstack-networker" Nov 28 09:18:31 crc kubenswrapper[4946]: I1128 09:18:31.438572 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-openstack-openstack-networker-kmdc6" Nov 28 09:18:31 crc kubenswrapper[4946]: I1128 09:18:31.444254 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-networker-dockercfg-7cq8d" Nov 28 09:18:31 crc kubenswrapper[4946]: I1128 09:18:31.445861 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-networker" Nov 28 09:18:31 crc kubenswrapper[4946]: I1128 09:18:31.470109 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-networker-kmdc6"] Nov 28 09:18:31 crc kubenswrapper[4946]: I1128 09:18:31.583365 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bde3f71a-f3d1-4dff-80fd-844623944482-ssh-key\") pod \"configure-os-openstack-openstack-networker-kmdc6\" (UID: \"bde3f71a-f3d1-4dff-80fd-844623944482\") " pod="openstack/configure-os-openstack-openstack-networker-kmdc6" Nov 28 09:18:31 crc kubenswrapper[4946]: I1128 09:18:31.584616 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn87v\" (UniqueName: \"kubernetes.io/projected/bde3f71a-f3d1-4dff-80fd-844623944482-kube-api-access-tn87v\") pod \"configure-os-openstack-openstack-networker-kmdc6\" (UID: \"bde3f71a-f3d1-4dff-80fd-844623944482\") " pod="openstack/configure-os-openstack-openstack-networker-kmdc6" Nov 28 09:18:31 crc kubenswrapper[4946]: I1128 09:18:31.584733 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bde3f71a-f3d1-4dff-80fd-844623944482-inventory\") pod \"configure-os-openstack-openstack-networker-kmdc6\" (UID: \"bde3f71a-f3d1-4dff-80fd-844623944482\") " pod="openstack/configure-os-openstack-openstack-networker-kmdc6" Nov 28 09:18:31 crc kubenswrapper[4946]: I1128 09:18:31.687007 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bde3f71a-f3d1-4dff-80fd-844623944482-ssh-key\") pod \"configure-os-openstack-openstack-networker-kmdc6\" (UID: \"bde3f71a-f3d1-4dff-80fd-844623944482\") " pod="openstack/configure-os-openstack-openstack-networker-kmdc6" Nov 28 09:18:31 crc kubenswrapper[4946]: I1128 09:18:31.687257 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tn87v\" (UniqueName: \"kubernetes.io/projected/bde3f71a-f3d1-4dff-80fd-844623944482-kube-api-access-tn87v\") pod \"configure-os-openstack-openstack-networker-kmdc6\" (UID: \"bde3f71a-f3d1-4dff-80fd-844623944482\") " pod="openstack/configure-os-openstack-openstack-networker-kmdc6" Nov 28 09:18:31 crc kubenswrapper[4946]: I1128 09:18:31.687302 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bde3f71a-f3d1-4dff-80fd-844623944482-inventory\") pod \"configure-os-openstack-openstack-networker-kmdc6\" (UID: \"bde3f71a-f3d1-4dff-80fd-844623944482\") " pod="openstack/configure-os-openstack-openstack-networker-kmdc6" Nov 28 09:18:31 crc kubenswrapper[4946]: I1128 09:18:31.694311 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bde3f71a-f3d1-4dff-80fd-844623944482-inventory\") pod \"configure-os-openstack-openstack-networker-kmdc6\" (UID: 
\"bde3f71a-f3d1-4dff-80fd-844623944482\") " pod="openstack/configure-os-openstack-openstack-networker-kmdc6" Nov 28 09:18:31 crc kubenswrapper[4946]: I1128 09:18:31.695572 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bde3f71a-f3d1-4dff-80fd-844623944482-ssh-key\") pod \"configure-os-openstack-openstack-networker-kmdc6\" (UID: \"bde3f71a-f3d1-4dff-80fd-844623944482\") " pod="openstack/configure-os-openstack-openstack-networker-kmdc6" Nov 28 09:18:31 crc kubenswrapper[4946]: I1128 09:18:31.712107 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tn87v\" (UniqueName: \"kubernetes.io/projected/bde3f71a-f3d1-4dff-80fd-844623944482-kube-api-access-tn87v\") pod \"configure-os-openstack-openstack-networker-kmdc6\" (UID: \"bde3f71a-f3d1-4dff-80fd-844623944482\") " pod="openstack/configure-os-openstack-openstack-networker-kmdc6" Nov 28 09:18:31 crc kubenswrapper[4946]: I1128 09:18:31.761680 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-networker-kmdc6" Nov 28 09:18:32 crc kubenswrapper[4946]: I1128 09:18:32.406504 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-networker-kmdc6"] Nov 28 09:18:32 crc kubenswrapper[4946]: W1128 09:18:32.918307 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbde3f71a_f3d1_4dff_80fd_844623944482.slice/crio-1eb2f474746cd5ba35ed83552e95e6fc839fa9889199badbb39f1dbeac00464f WatchSource:0}: Error finding container 1eb2f474746cd5ba35ed83552e95e6fc839fa9889199badbb39f1dbeac00464f: Status 404 returned error can't find the container with id 1eb2f474746cd5ba35ed83552e95e6fc839fa9889199badbb39f1dbeac00464f Nov 28 09:18:33 crc kubenswrapper[4946]: I1128 09:18:33.354566 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-networker-kmdc6" event={"ID":"bde3f71a-f3d1-4dff-80fd-844623944482","Type":"ContainerStarted","Data":"1eb2f474746cd5ba35ed83552e95e6fc839fa9889199badbb39f1dbeac00464f"} Nov 28 09:18:34 crc kubenswrapper[4946]: I1128 09:18:34.369760 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-networker-kmdc6" event={"ID":"bde3f71a-f3d1-4dff-80fd-844623944482","Type":"ContainerStarted","Data":"58e89dc467826ac8375f4f0c8dc8939b62eaf668e408c326d49961d2ba21e939"} Nov 28 09:18:34 crc kubenswrapper[4946]: I1128 09:18:34.401649 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-openstack-openstack-networker-kmdc6" podStartSLOduration=2.604581188 podStartE2EDuration="3.401622639s" podCreationTimestamp="2025-11-28 09:18:31 +0000 UTC" firstStartedPulling="2025-11-28 09:18:32.923241686 +0000 UTC m=+8767.301306837" lastFinishedPulling="2025-11-28 09:18:33.720283127 +0000 UTC m=+8768.098348288" observedRunningTime="2025-11-28 09:18:34.393419246 +0000 UTC m=+8768.771484357" watchObservedRunningTime="2025-11-28 09:18:34.401622639 +0000 UTC m=+8768.779687790" Nov 28 09:18:54 crc kubenswrapper[4946]: I1128 09:18:54.730535 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 
Nov 28 09:18:54 crc kubenswrapper[4946]: I1128 09:18:54.731200 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 09:19:02 crc kubenswrapper[4946]: I1128 09:19:02.698753 4946 generic.go:334] "Generic (PLEG): container finished" podID="78f78de8-2e26-47fe-b56f-4c4b8ee76058" containerID="ca891369fddd9447f2b1b370e2e46783b797908567055b5227b7d9e4849abae1" exitCode=0
Nov 28 09:19:02 crc kubenswrapper[4946]: I1128 09:19:02.698863 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-5vm8c" event={"ID":"78f78de8-2e26-47fe-b56f-4c4b8ee76058","Type":"ContainerDied","Data":"ca891369fddd9447f2b1b370e2e46783b797908567055b5227b7d9e4849abae1"}
Nov 28 09:19:04 crc kubenswrapper[4946]: I1128 09:19:04.211873 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-5vm8c"
Nov 28 09:19:04 crc kubenswrapper[4946]: I1128 09:19:04.342163 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/78f78de8-2e26-47fe-b56f-4c4b8ee76058-inventory\") pod \"78f78de8-2e26-47fe-b56f-4c4b8ee76058\" (UID: \"78f78de8-2e26-47fe-b56f-4c4b8ee76058\") "
Nov 28 09:19:04 crc kubenswrapper[4946]: I1128 09:19:04.342518 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/78f78de8-2e26-47fe-b56f-4c4b8ee76058-ssh-key\") pod \"78f78de8-2e26-47fe-b56f-4c4b8ee76058\" (UID: \"78f78de8-2e26-47fe-b56f-4c4b8ee76058\") "
Nov 28 09:19:04 crc kubenswrapper[4946]: I1128 09:19:04.342562 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/78f78de8-2e26-47fe-b56f-4c4b8ee76058-ceph\") pod \"78f78de8-2e26-47fe-b56f-4c4b8ee76058\" (UID: \"78f78de8-2e26-47fe-b56f-4c4b8ee76058\") "
Nov 28 09:19:04 crc kubenswrapper[4946]: I1128 09:19:04.342617 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnqfq\" (UniqueName: \"kubernetes.io/projected/78f78de8-2e26-47fe-b56f-4c4b8ee76058-kube-api-access-vnqfq\") pod \"78f78de8-2e26-47fe-b56f-4c4b8ee76058\" (UID: \"78f78de8-2e26-47fe-b56f-4c4b8ee76058\") "
Nov 28 09:19:04 crc kubenswrapper[4946]: I1128 09:19:04.352655 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78f78de8-2e26-47fe-b56f-4c4b8ee76058-kube-api-access-vnqfq" (OuterVolumeSpecName: "kube-api-access-vnqfq") pod "78f78de8-2e26-47fe-b56f-4c4b8ee76058" (UID: "78f78de8-2e26-47fe-b56f-4c4b8ee76058"). InnerVolumeSpecName "kube-api-access-vnqfq". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:19:04 crc kubenswrapper[4946]: I1128 09:19:04.378443 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78f78de8-2e26-47fe-b56f-4c4b8ee76058-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "78f78de8-2e26-47fe-b56f-4c4b8ee76058" (UID: "78f78de8-2e26-47fe-b56f-4c4b8ee76058"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:19:04 crc kubenswrapper[4946]: I1128 09:19:04.380632 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78f78de8-2e26-47fe-b56f-4c4b8ee76058-inventory" (OuterVolumeSpecName: "inventory") pod "78f78de8-2e26-47fe-b56f-4c4b8ee76058" (UID: "78f78de8-2e26-47fe-b56f-4c4b8ee76058"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:19:04 crc kubenswrapper[4946]: I1128 09:19:04.445924 4946 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/78f78de8-2e26-47fe-b56f-4c4b8ee76058-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 09:19:04 crc kubenswrapper[4946]: I1128 09:19:04.445973 4946 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/78f78de8-2e26-47fe-b56f-4c4b8ee76058-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 09:19:04 crc kubenswrapper[4946]: I1128 09:19:04.445991 4946 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/78f78de8-2e26-47fe-b56f-4c4b8ee76058-ceph\") on node \"crc\" DevicePath \"\"" Nov 28 09:19:04 crc kubenswrapper[4946]: I1128 09:19:04.446012 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnqfq\" (UniqueName: \"kubernetes.io/projected/78f78de8-2e26-47fe-b56f-4c4b8ee76058-kube-api-access-vnqfq\") on node \"crc\" DevicePath \"\"" Nov 28 09:19:04 crc kubenswrapper[4946]: I1128 09:19:04.720441 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-5vm8c" event={"ID":"78f78de8-2e26-47fe-b56f-4c4b8ee76058","Type":"ContainerDied","Data":"7707d903e24bb80e9c0d9d379d142d26c4d17e9f418d7bbc555141cc06b889c3"} Nov 28 09:19:04 crc kubenswrapper[4946]: I1128 09:19:04.720550 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7707d903e24bb80e9c0d9d379d142d26c4d17e9f418d7bbc555141cc06b889c3" Nov 28 09:19:04 crc kubenswrapper[4946]: I1128 09:19:04.720547 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-5vm8c" Nov 28 09:19:04 crc kubenswrapper[4946]: I1128 09:19:04.837649 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-gcsc9"] Nov 28 09:19:04 crc kubenswrapper[4946]: E1128 09:19:04.838383 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78f78de8-2e26-47fe-b56f-4c4b8ee76058" containerName="install-os-openstack-openstack-cell1" Nov 28 09:19:04 crc kubenswrapper[4946]: I1128 09:19:04.838402 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="78f78de8-2e26-47fe-b56f-4c4b8ee76058" containerName="install-os-openstack-openstack-cell1" Nov 28 09:19:04 crc kubenswrapper[4946]: I1128 09:19:04.838654 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="78f78de8-2e26-47fe-b56f-4c4b8ee76058" containerName="install-os-openstack-openstack-cell1" Nov 28 09:19:04 crc kubenswrapper[4946]: I1128 09:19:04.839671 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-gcsc9" Nov 28 09:19:04 crc kubenswrapper[4946]: I1128 09:19:04.842355 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-psw2j" Nov 28 09:19:04 crc kubenswrapper[4946]: I1128 09:19:04.842975 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 28 09:19:04 crc kubenswrapper[4946]: I1128 09:19:04.854771 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-gcsc9"] Nov 28 09:19:04 crc kubenswrapper[4946]: I1128 09:19:04.956141 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ec131430-238f-4a5c-8c02-f05d2b07e60e-ceph\") pod \"configure-os-openstack-openstack-cell1-gcsc9\" (UID: \"ec131430-238f-4a5c-8c02-f05d2b07e60e\") " pod="openstack/configure-os-openstack-openstack-cell1-gcsc9" Nov 28 09:19:04 crc kubenswrapper[4946]: I1128 09:19:04.956183 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec131430-238f-4a5c-8c02-f05d2b07e60e-inventory\") pod \"configure-os-openstack-openstack-cell1-gcsc9\" (UID: \"ec131430-238f-4a5c-8c02-f05d2b07e60e\") " pod="openstack/configure-os-openstack-openstack-cell1-gcsc9" Nov 28 09:19:04 crc kubenswrapper[4946]: I1128 09:19:04.956382 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkn2q\" (UniqueName: \"kubernetes.io/projected/ec131430-238f-4a5c-8c02-f05d2b07e60e-kube-api-access-nkn2q\") pod \"configure-os-openstack-openstack-cell1-gcsc9\" (UID: \"ec131430-238f-4a5c-8c02-f05d2b07e60e\") " pod="openstack/configure-os-openstack-openstack-cell1-gcsc9" Nov 28 09:19:04 crc kubenswrapper[4946]: I1128 09:19:04.956865 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ec131430-238f-4a5c-8c02-f05d2b07e60e-ssh-key\") pod \"configure-os-openstack-openstack-cell1-gcsc9\" (UID: \"ec131430-238f-4a5c-8c02-f05d2b07e60e\") " pod="openstack/configure-os-openstack-openstack-cell1-gcsc9" Nov 28 09:19:05 crc kubenswrapper[4946]: I1128 09:19:05.059693 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/ec131430-238f-4a5c-8c02-f05d2b07e60e-ssh-key\") pod \"configure-os-openstack-openstack-cell1-gcsc9\" (UID: \"ec131430-238f-4a5c-8c02-f05d2b07e60e\") " pod="openstack/configure-os-openstack-openstack-cell1-gcsc9" Nov 28 09:19:05 crc kubenswrapper[4946]: I1128 09:19:05.060052 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ec131430-238f-4a5c-8c02-f05d2b07e60e-ceph\") pod \"configure-os-openstack-openstack-cell1-gcsc9\" (UID: \"ec131430-238f-4a5c-8c02-f05d2b07e60e\") " pod="openstack/configure-os-openstack-openstack-cell1-gcsc9" Nov 28 09:19:05 crc kubenswrapper[4946]: I1128 09:19:05.060130 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec131430-238f-4a5c-8c02-f05d2b07e60e-inventory\") pod \"configure-os-openstack-openstack-cell1-gcsc9\" (UID: \"ec131430-238f-4a5c-8c02-f05d2b07e60e\") " pod="openstack/configure-os-openstack-openstack-cell1-gcsc9" Nov 28 09:19:05 crc kubenswrapper[4946]: I1128 09:19:05.060210 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkn2q\" (UniqueName: \"kubernetes.io/projected/ec131430-238f-4a5c-8c02-f05d2b07e60e-kube-api-access-nkn2q\") pod \"configure-os-openstack-openstack-cell1-gcsc9\" (UID: \"ec131430-238f-4a5c-8c02-f05d2b07e60e\") " pod="openstack/configure-os-openstack-openstack-cell1-gcsc9" Nov 28 09:19:05 crc kubenswrapper[4946]: I1128 09:19:05.067730 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec131430-238f-4a5c-8c02-f05d2b07e60e-inventory\") pod \"configure-os-openstack-openstack-cell1-gcsc9\" (UID: \"ec131430-238f-4a5c-8c02-f05d2b07e60e\") " pod="openstack/configure-os-openstack-openstack-cell1-gcsc9" Nov 28 09:19:05 crc kubenswrapper[4946]: I1128 09:19:05.068758 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ec131430-238f-4a5c-8c02-f05d2b07e60e-ceph\") pod \"configure-os-openstack-openstack-cell1-gcsc9\" (UID: \"ec131430-238f-4a5c-8c02-f05d2b07e60e\") " pod="openstack/configure-os-openstack-openstack-cell1-gcsc9" Nov 28 09:19:05 crc kubenswrapper[4946]: I1128 09:19:05.070207 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ec131430-238f-4a5c-8c02-f05d2b07e60e-ssh-key\") pod \"configure-os-openstack-openstack-cell1-gcsc9\" (UID: \"ec131430-238f-4a5c-8c02-f05d2b07e60e\") " pod="openstack/configure-os-openstack-openstack-cell1-gcsc9" Nov 28 09:19:05 crc kubenswrapper[4946]: I1128 09:19:05.079744 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkn2q\" (UniqueName: \"kubernetes.io/projected/ec131430-238f-4a5c-8c02-f05d2b07e60e-kube-api-access-nkn2q\") pod \"configure-os-openstack-openstack-cell1-gcsc9\" (UID: \"ec131430-238f-4a5c-8c02-f05d2b07e60e\") " pod="openstack/configure-os-openstack-openstack-cell1-gcsc9" Nov 28 09:19:05 crc kubenswrapper[4946]: I1128 09:19:05.158687 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-gcsc9" Nov 28 09:19:05 crc kubenswrapper[4946]: I1128 09:19:05.776337 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-gcsc9"] Nov 28 09:19:06 crc kubenswrapper[4946]: I1128 09:19:06.743073 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-gcsc9" event={"ID":"ec131430-238f-4a5c-8c02-f05d2b07e60e","Type":"ContainerStarted","Data":"ed8883f314d887e97ee699581e3134e33af840fbdaeaad8e8d5b0b149c22a646"} Nov 28 09:19:06 crc kubenswrapper[4946]: I1128 09:19:06.743995 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-gcsc9" event={"ID":"ec131430-238f-4a5c-8c02-f05d2b07e60e","Type":"ContainerStarted","Data":"4dd73e170482afb107403454061f750ffe0547770baaadab4d24161ba23447af"} Nov 28 09:19:06 crc kubenswrapper[4946]: I1128 09:19:06.765366 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-openstack-openstack-cell1-gcsc9" podStartSLOduration=2.08940444 podStartE2EDuration="2.765340518s" podCreationTimestamp="2025-11-28 09:19:04 +0000 UTC" firstStartedPulling="2025-11-28 09:19:05.78128598 +0000 UTC m=+8800.159351091" lastFinishedPulling="2025-11-28 09:19:06.457222058 +0000 UTC m=+8800.835287169" observedRunningTime="2025-11-28 09:19:06.757539674 +0000 UTC m=+8801.135604795" watchObservedRunningTime="2025-11-28 09:19:06.765340518 +0000 UTC m=+8801.143405669" Nov 28 09:19:12 crc kubenswrapper[4946]: I1128 09:19:12.836366 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-f4vqj"] Nov 28 09:19:12 crc kubenswrapper[4946]: I1128 09:19:12.839635 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f4vqj" Nov 28 09:19:12 crc kubenswrapper[4946]: I1128 09:19:12.865811 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f4vqj"] Nov 28 09:19:12 crc kubenswrapper[4946]: I1128 09:19:12.946093 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d1e635a-d107-4138-9386-bc276b454375-catalog-content\") pod \"redhat-operators-f4vqj\" (UID: \"4d1e635a-d107-4138-9386-bc276b454375\") " pod="openshift-marketplace/redhat-operators-f4vqj" Nov 28 09:19:12 crc kubenswrapper[4946]: I1128 09:19:12.946178 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d1e635a-d107-4138-9386-bc276b454375-utilities\") pod \"redhat-operators-f4vqj\" (UID: \"4d1e635a-d107-4138-9386-bc276b454375\") " pod="openshift-marketplace/redhat-operators-f4vqj" Nov 28 09:19:12 crc kubenswrapper[4946]: I1128 09:19:12.946211 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqvxd\" (UniqueName: \"kubernetes.io/projected/4d1e635a-d107-4138-9386-bc276b454375-kube-api-access-lqvxd\") pod \"redhat-operators-f4vqj\" (UID: \"4d1e635a-d107-4138-9386-bc276b454375\") " pod="openshift-marketplace/redhat-operators-f4vqj" Nov 28 09:19:13 crc kubenswrapper[4946]: I1128 09:19:13.048072 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d1e635a-d107-4138-9386-bc276b454375-catalog-content\") pod \"redhat-operators-f4vqj\" (UID: \"4d1e635a-d107-4138-9386-bc276b454375\") " pod="openshift-marketplace/redhat-operators-f4vqj" Nov 28 09:19:13 crc kubenswrapper[4946]: I1128 09:19:13.048163 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d1e635a-d107-4138-9386-bc276b454375-utilities\") pod \"redhat-operators-f4vqj\" (UID: \"4d1e635a-d107-4138-9386-bc276b454375\") " pod="openshift-marketplace/redhat-operators-f4vqj" Nov 28 09:19:13 crc kubenswrapper[4946]: I1128 09:19:13.048204 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqvxd\" (UniqueName: \"kubernetes.io/projected/4d1e635a-d107-4138-9386-bc276b454375-kube-api-access-lqvxd\") pod \"redhat-operators-f4vqj\" (UID: \"4d1e635a-d107-4138-9386-bc276b454375\") " pod="openshift-marketplace/redhat-operators-f4vqj" Nov 28 09:19:13 crc kubenswrapper[4946]: I1128 09:19:13.048790 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d1e635a-d107-4138-9386-bc276b454375-catalog-content\") pod \"redhat-operators-f4vqj\" (UID: \"4d1e635a-d107-4138-9386-bc276b454375\") " pod="openshift-marketplace/redhat-operators-f4vqj" Nov 28 09:19:13 crc kubenswrapper[4946]: I1128 09:19:13.048796 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d1e635a-d107-4138-9386-bc276b454375-utilities\") pod \"redhat-operators-f4vqj\" (UID: \"4d1e635a-d107-4138-9386-bc276b454375\") " pod="openshift-marketplace/redhat-operators-f4vqj" Nov 28 09:19:13 crc kubenswrapper[4946]: I1128 09:19:13.078846 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-lqvxd\" (UniqueName: \"kubernetes.io/projected/4d1e635a-d107-4138-9386-bc276b454375-kube-api-access-lqvxd\") pod \"redhat-operators-f4vqj\" (UID: \"4d1e635a-d107-4138-9386-bc276b454375\") " pod="openshift-marketplace/redhat-operators-f4vqj" Nov 28 09:19:13 crc kubenswrapper[4946]: I1128 09:19:13.217223 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4vqj" Nov 28 09:19:13 crc kubenswrapper[4946]: I1128 09:19:13.699626 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f4vqj"] Nov 28 09:19:13 crc kubenswrapper[4946]: I1128 09:19:13.817504 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4vqj" event={"ID":"4d1e635a-d107-4138-9386-bc276b454375","Type":"ContainerStarted","Data":"84ac4d0369f2c05524e555271cad657f957d7df5c1e80bce89109dc8c0cef99b"} Nov 28 09:19:14 crc kubenswrapper[4946]: I1128 09:19:14.830372 4946 generic.go:334] "Generic (PLEG): container finished" podID="4d1e635a-d107-4138-9386-bc276b454375" containerID="a362c21ded4434180956198973fd835fcda67e9f1aa6a4857acbb38ee4b93123" exitCode=0 Nov 28 09:19:14 crc kubenswrapper[4946]: I1128 09:19:14.830453 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4vqj" event={"ID":"4d1e635a-d107-4138-9386-bc276b454375","Type":"ContainerDied","Data":"a362c21ded4434180956198973fd835fcda67e9f1aa6a4857acbb38ee4b93123"} Nov 28 09:19:16 crc kubenswrapper[4946]: I1128 09:19:16.885940 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4vqj" event={"ID":"4d1e635a-d107-4138-9386-bc276b454375","Type":"ContainerStarted","Data":"86adf7c12896cc7fe10548eac215244205c6bfc57ba8f4f6efd913a5f0505ebc"} Nov 28 09:19:19 crc kubenswrapper[4946]: I1128 09:19:19.921447 4946 generic.go:334] "Generic (PLEG): container finished" podID="4d1e635a-d107-4138-9386-bc276b454375" containerID="86adf7c12896cc7fe10548eac215244205c6bfc57ba8f4f6efd913a5f0505ebc" exitCode=0 Nov 28 09:19:19 crc kubenswrapper[4946]: I1128 09:19:19.921539 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4vqj" event={"ID":"4d1e635a-d107-4138-9386-bc276b454375","Type":"ContainerDied","Data":"86adf7c12896cc7fe10548eac215244205c6bfc57ba8f4f6efd913a5f0505ebc"} Nov 28 09:19:20 crc kubenswrapper[4946]: I1128 09:19:20.933481 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4vqj" event={"ID":"4d1e635a-d107-4138-9386-bc276b454375","Type":"ContainerStarted","Data":"75a5ec5ddf3df5384aeb9b68d4d3037b44bb47e60c84bd091890af72e66a4024"} Nov 28 09:19:20 crc kubenswrapper[4946]: I1128 09:19:20.966442 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-f4vqj" podStartSLOduration=3.30370734 podStartE2EDuration="8.966416527s" podCreationTimestamp="2025-11-28 09:19:12 +0000 UTC" firstStartedPulling="2025-11-28 09:19:14.833729479 +0000 UTC m=+8809.211794590" lastFinishedPulling="2025-11-28 09:19:20.496438656 +0000 UTC m=+8814.874503777" observedRunningTime="2025-11-28 09:19:20.959096756 +0000 UTC m=+8815.337161867" watchObservedRunningTime="2025-11-28 09:19:20.966416527 +0000 UTC m=+8815.344481638" Nov 28 09:19:23 crc kubenswrapper[4946]: I1128 09:19:23.219774 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-f4vqj" Nov 
Nov 28 09:19:23 crc kubenswrapper[4946]: I1128 09:19:23.220929 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-f4vqj"
Nov 28 09:19:24 crc kubenswrapper[4946]: I1128 09:19:24.278874 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-f4vqj" podUID="4d1e635a-d107-4138-9386-bc276b454375" containerName="registry-server" probeResult="failure" output=<
Nov 28 09:19:24 crc kubenswrapper[4946]: timeout: failed to connect service ":50051" within 1s
Nov 28 09:19:24 crc kubenswrapper[4946]: >
Nov 28 09:19:24 crc kubenswrapper[4946]: I1128 09:19:24.730704 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 09:19:24 crc kubenswrapper[4946]: I1128 09:19:24.730770 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 09:19:24 crc kubenswrapper[4946]: I1128 09:19:24.976475 4946 generic.go:334] "Generic (PLEG): container finished" podID="bde3f71a-f3d1-4dff-80fd-844623944482" containerID="58e89dc467826ac8375f4f0c8dc8939b62eaf668e408c326d49961d2ba21e939" exitCode=0
Nov 28 09:19:24 crc kubenswrapper[4946]: I1128 09:19:24.976530 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-networker-kmdc6" event={"ID":"bde3f71a-f3d1-4dff-80fd-844623944482","Type":"ContainerDied","Data":"58e89dc467826ac8375f4f0c8dc8939b62eaf668e408c326d49961d2ba21e939"}
Nov 28 09:19:26 crc kubenswrapper[4946]: I1128 09:19:26.480834 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-networker-kmdc6"
Nov 28 09:19:26 crc kubenswrapper[4946]: I1128 09:19:26.561402 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bde3f71a-f3d1-4dff-80fd-844623944482-inventory\") pod \"bde3f71a-f3d1-4dff-80fd-844623944482\" (UID: \"bde3f71a-f3d1-4dff-80fd-844623944482\") "
Nov 28 09:19:26 crc kubenswrapper[4946]: I1128 09:19:26.561670 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tn87v\" (UniqueName: \"kubernetes.io/projected/bde3f71a-f3d1-4dff-80fd-844623944482-kube-api-access-tn87v\") pod \"bde3f71a-f3d1-4dff-80fd-844623944482\" (UID: \"bde3f71a-f3d1-4dff-80fd-844623944482\") "
Nov 28 09:19:26 crc kubenswrapper[4946]: I1128 09:19:26.562056 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bde3f71a-f3d1-4dff-80fd-844623944482-ssh-key\") pod \"bde3f71a-f3d1-4dff-80fd-844623944482\" (UID: \"bde3f71a-f3d1-4dff-80fd-844623944482\") "
Nov 28 09:19:26 crc kubenswrapper[4946]: I1128 09:19:26.567927 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bde3f71a-f3d1-4dff-80fd-844623944482-kube-api-access-tn87v" (OuterVolumeSpecName: "kube-api-access-tn87v") pod "bde3f71a-f3d1-4dff-80fd-844623944482" (UID: "bde3f71a-f3d1-4dff-80fd-844623944482").
InnerVolumeSpecName "kube-api-access-tn87v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:19:26 crc kubenswrapper[4946]: I1128 09:19:26.590310 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bde3f71a-f3d1-4dff-80fd-844623944482-inventory" (OuterVolumeSpecName: "inventory") pod "bde3f71a-f3d1-4dff-80fd-844623944482" (UID: "bde3f71a-f3d1-4dff-80fd-844623944482"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:19:26 crc kubenswrapper[4946]: I1128 09:19:26.596061 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bde3f71a-f3d1-4dff-80fd-844623944482-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "bde3f71a-f3d1-4dff-80fd-844623944482" (UID: "bde3f71a-f3d1-4dff-80fd-844623944482"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:19:26 crc kubenswrapper[4946]: I1128 09:19:26.665222 4946 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bde3f71a-f3d1-4dff-80fd-844623944482-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 09:19:26 crc kubenswrapper[4946]: I1128 09:19:26.665530 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tn87v\" (UniqueName: \"kubernetes.io/projected/bde3f71a-f3d1-4dff-80fd-844623944482-kube-api-access-tn87v\") on node \"crc\" DevicePath \"\"" Nov 28 09:19:26 crc kubenswrapper[4946]: I1128 09:19:26.665670 4946 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bde3f71a-f3d1-4dff-80fd-844623944482-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 09:19:27 crc kubenswrapper[4946]: I1128 09:19:26.999850 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-networker-kmdc6" event={"ID":"bde3f71a-f3d1-4dff-80fd-844623944482","Type":"ContainerDied","Data":"1eb2f474746cd5ba35ed83552e95e6fc839fa9889199badbb39f1dbeac00464f"} Nov 28 09:19:27 crc kubenswrapper[4946]: I1128 09:19:27.000593 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1eb2f474746cd5ba35ed83552e95e6fc839fa9889199badbb39f1dbeac00464f" Nov 28 09:19:27 crc kubenswrapper[4946]: I1128 09:19:26.999907 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-networker-kmdc6" Nov 28 09:19:27 crc kubenswrapper[4946]: I1128 09:19:27.097845 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-openstack-openstack-networker-qtbl9"] Nov 28 09:19:27 crc kubenswrapper[4946]: E1128 09:19:27.098392 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bde3f71a-f3d1-4dff-80fd-844623944482" containerName="configure-os-openstack-openstack-networker" Nov 28 09:19:27 crc kubenswrapper[4946]: I1128 09:19:27.098413 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="bde3f71a-f3d1-4dff-80fd-844623944482" containerName="configure-os-openstack-openstack-networker" Nov 28 09:19:27 crc kubenswrapper[4946]: I1128 09:19:27.098779 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="bde3f71a-f3d1-4dff-80fd-844623944482" containerName="configure-os-openstack-openstack-networker" Nov 28 09:19:27 crc kubenswrapper[4946]: I1128 09:19:27.099926 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-openstack-openstack-networker-qtbl9" Nov 28 09:19:27 crc kubenswrapper[4946]: I1128 09:19:27.102445 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-networker-dockercfg-7cq8d" Nov 28 09:19:27 crc kubenswrapper[4946]: I1128 09:19:27.102635 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-networker" Nov 28 09:19:27 crc kubenswrapper[4946]: I1128 09:19:27.114991 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-openstack-openstack-networker-qtbl9"] Nov 28 09:19:27 crc kubenswrapper[4946]: E1128 09:19:27.176400 4946 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbde3f71a_f3d1_4dff_80fd_844623944482.slice/crio-1eb2f474746cd5ba35ed83552e95e6fc839fa9889199badbb39f1dbeac00464f\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbde3f71a_f3d1_4dff_80fd_844623944482.slice\": RecentStats: unable to find data in memory cache]" Nov 28 09:19:27 crc kubenswrapper[4946]: I1128 09:19:27.277980 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e967dc40-00bf-470c-9d3d-3d57af669882-inventory\") pod \"run-os-openstack-openstack-networker-qtbl9\" (UID: \"e967dc40-00bf-470c-9d3d-3d57af669882\") " pod="openstack/run-os-openstack-openstack-networker-qtbl9" Nov 28 09:19:27 crc kubenswrapper[4946]: I1128 09:19:27.278077 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e967dc40-00bf-470c-9d3d-3d57af669882-ssh-key\") pod \"run-os-openstack-openstack-networker-qtbl9\" (UID: \"e967dc40-00bf-470c-9d3d-3d57af669882\") " pod="openstack/run-os-openstack-openstack-networker-qtbl9" Nov 28 09:19:27 crc kubenswrapper[4946]: I1128 09:19:27.278105 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgjqh\" (UniqueName: \"kubernetes.io/projected/e967dc40-00bf-470c-9d3d-3d57af669882-kube-api-access-wgjqh\") pod \"run-os-openstack-openstack-networker-qtbl9\" (UID: \"e967dc40-00bf-470c-9d3d-3d57af669882\") " pod="openstack/run-os-openstack-openstack-networker-qtbl9" Nov 28 09:19:27 crc kubenswrapper[4946]: I1128 09:19:27.379440 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgjqh\" (UniqueName: \"kubernetes.io/projected/e967dc40-00bf-470c-9d3d-3d57af669882-kube-api-access-wgjqh\") pod \"run-os-openstack-openstack-networker-qtbl9\" (UID: \"e967dc40-00bf-470c-9d3d-3d57af669882\") " pod="openstack/run-os-openstack-openstack-networker-qtbl9" Nov 28 09:19:27 crc kubenswrapper[4946]: I1128 09:19:27.379669 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e967dc40-00bf-470c-9d3d-3d57af669882-inventory\") pod \"run-os-openstack-openstack-networker-qtbl9\" (UID: \"e967dc40-00bf-470c-9d3d-3d57af669882\") " pod="openstack/run-os-openstack-openstack-networker-qtbl9" Nov 28 09:19:27 crc kubenswrapper[4946]: I1128 09:19:27.379741 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/e967dc40-00bf-470c-9d3d-3d57af669882-ssh-key\") pod \"run-os-openstack-openstack-networker-qtbl9\" (UID: \"e967dc40-00bf-470c-9d3d-3d57af669882\") " pod="openstack/run-os-openstack-openstack-networker-qtbl9" Nov 28 09:19:27 crc kubenswrapper[4946]: I1128 09:19:27.386062 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e967dc40-00bf-470c-9d3d-3d57af669882-ssh-key\") pod \"run-os-openstack-openstack-networker-qtbl9\" (UID: \"e967dc40-00bf-470c-9d3d-3d57af669882\") " pod="openstack/run-os-openstack-openstack-networker-qtbl9" Nov 28 09:19:27 crc kubenswrapper[4946]: I1128 09:19:27.386540 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e967dc40-00bf-470c-9d3d-3d57af669882-inventory\") pod \"run-os-openstack-openstack-networker-qtbl9\" (UID: \"e967dc40-00bf-470c-9d3d-3d57af669882\") " pod="openstack/run-os-openstack-openstack-networker-qtbl9" Nov 28 09:19:27 crc kubenswrapper[4946]: I1128 09:19:27.398547 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgjqh\" (UniqueName: \"kubernetes.io/projected/e967dc40-00bf-470c-9d3d-3d57af669882-kube-api-access-wgjqh\") pod \"run-os-openstack-openstack-networker-qtbl9\" (UID: \"e967dc40-00bf-470c-9d3d-3d57af669882\") " pod="openstack/run-os-openstack-openstack-networker-qtbl9" Nov 28 09:19:27 crc kubenswrapper[4946]: I1128 09:19:27.476260 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-networker-qtbl9" Nov 28 09:19:28 crc kubenswrapper[4946]: I1128 09:19:28.172579 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-openstack-openstack-networker-qtbl9"] Nov 28 09:19:28 crc kubenswrapper[4946]: W1128 09:19:28.176479 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode967dc40_00bf_470c_9d3d_3d57af669882.slice/crio-1356dff320050aabf6efbf24fb1082faee192684d401dd1c6e0cdbd4e27f0d7e WatchSource:0}: Error finding container 1356dff320050aabf6efbf24fb1082faee192684d401dd1c6e0cdbd4e27f0d7e: Status 404 returned error can't find the container with id 1356dff320050aabf6efbf24fb1082faee192684d401dd1c6e0cdbd4e27f0d7e Nov 28 09:19:29 crc kubenswrapper[4946]: I1128 09:19:29.017559 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-networker-qtbl9" event={"ID":"e967dc40-00bf-470c-9d3d-3d57af669882","Type":"ContainerStarted","Data":"1356dff320050aabf6efbf24fb1082faee192684d401dd1c6e0cdbd4e27f0d7e"} Nov 28 09:19:30 crc kubenswrapper[4946]: I1128 09:19:30.031121 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-networker-qtbl9" event={"ID":"e967dc40-00bf-470c-9d3d-3d57af669882","Type":"ContainerStarted","Data":"efd4208bfd44e41956c8ad263cb783ed5331239a6278607177a90be6cba2492b"} Nov 28 09:19:30 crc kubenswrapper[4946]: I1128 09:19:30.054425 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-openstack-openstack-networker-qtbl9" podStartSLOduration=2.066403402 podStartE2EDuration="3.054406318s" podCreationTimestamp="2025-11-28 09:19:27 +0000 UTC" firstStartedPulling="2025-11-28 09:19:28.178581339 +0000 UTC m=+8822.556646450" lastFinishedPulling="2025-11-28 09:19:29.166584255 +0000 UTC m=+8823.544649366" observedRunningTime="2025-11-28 09:19:30.049855885 +0000 UTC 
m=+8824.427920996" watchObservedRunningTime="2025-11-28 09:19:30.054406318 +0000 UTC m=+8824.432471429" Nov 28 09:19:33 crc kubenswrapper[4946]: I1128 09:19:33.302872 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-f4vqj" Nov 28 09:19:33 crc kubenswrapper[4946]: I1128 09:19:33.355858 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-f4vqj" Nov 28 09:19:33 crc kubenswrapper[4946]: I1128 09:19:33.546431 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f4vqj"] Nov 28 09:19:35 crc kubenswrapper[4946]: I1128 09:19:35.101917 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-f4vqj" podUID="4d1e635a-d107-4138-9386-bc276b454375" containerName="registry-server" containerID="cri-o://75a5ec5ddf3df5384aeb9b68d4d3037b44bb47e60c84bd091890af72e66a4024" gracePeriod=2 Nov 28 09:19:35 crc kubenswrapper[4946]: I1128 09:19:35.622320 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4vqj" Nov 28 09:19:35 crc kubenswrapper[4946]: I1128 09:19:35.782102 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqvxd\" (UniqueName: \"kubernetes.io/projected/4d1e635a-d107-4138-9386-bc276b454375-kube-api-access-lqvxd\") pod \"4d1e635a-d107-4138-9386-bc276b454375\" (UID: \"4d1e635a-d107-4138-9386-bc276b454375\") " Nov 28 09:19:35 crc kubenswrapper[4946]: I1128 09:19:35.782326 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d1e635a-d107-4138-9386-bc276b454375-utilities\") pod \"4d1e635a-d107-4138-9386-bc276b454375\" (UID: \"4d1e635a-d107-4138-9386-bc276b454375\") " Nov 28 09:19:35 crc kubenswrapper[4946]: I1128 09:19:35.782438 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d1e635a-d107-4138-9386-bc276b454375-catalog-content\") pod \"4d1e635a-d107-4138-9386-bc276b454375\" (UID: \"4d1e635a-d107-4138-9386-bc276b454375\") " Nov 28 09:19:35 crc kubenswrapper[4946]: I1128 09:19:35.783651 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d1e635a-d107-4138-9386-bc276b454375-utilities" (OuterVolumeSpecName: "utilities") pod "4d1e635a-d107-4138-9386-bc276b454375" (UID: "4d1e635a-d107-4138-9386-bc276b454375"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 09:19:35 crc kubenswrapper[4946]: I1128 09:19:35.788458 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d1e635a-d107-4138-9386-bc276b454375-kube-api-access-lqvxd" (OuterVolumeSpecName: "kube-api-access-lqvxd") pod "4d1e635a-d107-4138-9386-bc276b454375" (UID: "4d1e635a-d107-4138-9386-bc276b454375"). InnerVolumeSpecName "kube-api-access-lqvxd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:19:35 crc kubenswrapper[4946]: I1128 09:19:35.885699 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d1e635a-d107-4138-9386-bc276b454375-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 09:19:35 crc kubenswrapper[4946]: I1128 09:19:35.885740 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqvxd\" (UniqueName: \"kubernetes.io/projected/4d1e635a-d107-4138-9386-bc276b454375-kube-api-access-lqvxd\") on node \"crc\" DevicePath \"\"" Nov 28 09:19:35 crc kubenswrapper[4946]: I1128 09:19:35.890967 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d1e635a-d107-4138-9386-bc276b454375-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4d1e635a-d107-4138-9386-bc276b454375" (UID: "4d1e635a-d107-4138-9386-bc276b454375"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 09:19:35 crc kubenswrapper[4946]: I1128 09:19:35.987474 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d1e635a-d107-4138-9386-bc276b454375-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 09:19:36 crc kubenswrapper[4946]: I1128 09:19:36.112201 4946 generic.go:334] "Generic (PLEG): container finished" podID="4d1e635a-d107-4138-9386-bc276b454375" containerID="75a5ec5ddf3df5384aeb9b68d4d3037b44bb47e60c84bd091890af72e66a4024" exitCode=0 Nov 28 09:19:36 crc kubenswrapper[4946]: I1128 09:19:36.112263 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4vqj" event={"ID":"4d1e635a-d107-4138-9386-bc276b454375","Type":"ContainerDied","Data":"75a5ec5ddf3df5384aeb9b68d4d3037b44bb47e60c84bd091890af72e66a4024"} Nov 28 09:19:36 crc kubenswrapper[4946]: I1128 09:19:36.112328 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4vqj" event={"ID":"4d1e635a-d107-4138-9386-bc276b454375","Type":"ContainerDied","Data":"84ac4d0369f2c05524e555271cad657f957d7df5c1e80bce89109dc8c0cef99b"} Nov 28 09:19:36 crc kubenswrapper[4946]: I1128 09:19:36.112361 4946 scope.go:117] "RemoveContainer" containerID="75a5ec5ddf3df5384aeb9b68d4d3037b44bb47e60c84bd091890af72e66a4024" Nov 28 09:19:36 crc kubenswrapper[4946]: I1128 09:19:36.113502 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f4vqj" Nov 28 09:19:36 crc kubenswrapper[4946]: I1128 09:19:36.139668 4946 scope.go:117] "RemoveContainer" containerID="86adf7c12896cc7fe10548eac215244205c6bfc57ba8f4f6efd913a5f0505ebc" Nov 28 09:19:36 crc kubenswrapper[4946]: I1128 09:19:36.145516 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f4vqj"] Nov 28 09:19:36 crc kubenswrapper[4946]: I1128 09:19:36.166995 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-f4vqj"] Nov 28 09:19:36 crc kubenswrapper[4946]: I1128 09:19:36.168703 4946 scope.go:117] "RemoveContainer" containerID="a362c21ded4434180956198973fd835fcda67e9f1aa6a4857acbb38ee4b93123" Nov 28 09:19:36 crc kubenswrapper[4946]: I1128 09:19:36.234803 4946 scope.go:117] "RemoveContainer" containerID="75a5ec5ddf3df5384aeb9b68d4d3037b44bb47e60c84bd091890af72e66a4024" Nov 28 09:19:36 crc kubenswrapper[4946]: E1128 09:19:36.235270 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75a5ec5ddf3df5384aeb9b68d4d3037b44bb47e60c84bd091890af72e66a4024\": container with ID starting with 75a5ec5ddf3df5384aeb9b68d4d3037b44bb47e60c84bd091890af72e66a4024 not found: ID does not exist" containerID="75a5ec5ddf3df5384aeb9b68d4d3037b44bb47e60c84bd091890af72e66a4024" Nov 28 09:19:36 crc kubenswrapper[4946]: I1128 09:19:36.235317 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75a5ec5ddf3df5384aeb9b68d4d3037b44bb47e60c84bd091890af72e66a4024"} err="failed to get container status \"75a5ec5ddf3df5384aeb9b68d4d3037b44bb47e60c84bd091890af72e66a4024\": rpc error: code = NotFound desc = could not find container \"75a5ec5ddf3df5384aeb9b68d4d3037b44bb47e60c84bd091890af72e66a4024\": container with ID starting with 75a5ec5ddf3df5384aeb9b68d4d3037b44bb47e60c84bd091890af72e66a4024 not found: ID does not exist" Nov 28 09:19:36 crc kubenswrapper[4946]: I1128 09:19:36.235341 4946 scope.go:117] "RemoveContainer" containerID="86adf7c12896cc7fe10548eac215244205c6bfc57ba8f4f6efd913a5f0505ebc" Nov 28 09:19:36 crc kubenswrapper[4946]: E1128 09:19:36.235804 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86adf7c12896cc7fe10548eac215244205c6bfc57ba8f4f6efd913a5f0505ebc\": container with ID starting with 86adf7c12896cc7fe10548eac215244205c6bfc57ba8f4f6efd913a5f0505ebc not found: ID does not exist" containerID="86adf7c12896cc7fe10548eac215244205c6bfc57ba8f4f6efd913a5f0505ebc" Nov 28 09:19:36 crc kubenswrapper[4946]: I1128 09:19:36.235842 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86adf7c12896cc7fe10548eac215244205c6bfc57ba8f4f6efd913a5f0505ebc"} err="failed to get container status \"86adf7c12896cc7fe10548eac215244205c6bfc57ba8f4f6efd913a5f0505ebc\": rpc error: code = NotFound desc = could not find container \"86adf7c12896cc7fe10548eac215244205c6bfc57ba8f4f6efd913a5f0505ebc\": container with ID starting with 86adf7c12896cc7fe10548eac215244205c6bfc57ba8f4f6efd913a5f0505ebc not found: ID does not exist" Nov 28 09:19:36 crc kubenswrapper[4946]: I1128 09:19:36.235867 4946 scope.go:117] "RemoveContainer" containerID="a362c21ded4434180956198973fd835fcda67e9f1aa6a4857acbb38ee4b93123" Nov 28 09:19:36 crc kubenswrapper[4946]: E1128 09:19:36.236168 4946 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"a362c21ded4434180956198973fd835fcda67e9f1aa6a4857acbb38ee4b93123\": container with ID starting with a362c21ded4434180956198973fd835fcda67e9f1aa6a4857acbb38ee4b93123 not found: ID does not exist" containerID="a362c21ded4434180956198973fd835fcda67e9f1aa6a4857acbb38ee4b93123" Nov 28 09:19:36 crc kubenswrapper[4946]: I1128 09:19:36.236196 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a362c21ded4434180956198973fd835fcda67e9f1aa6a4857acbb38ee4b93123"} err="failed to get container status \"a362c21ded4434180956198973fd835fcda67e9f1aa6a4857acbb38ee4b93123\": rpc error: code = NotFound desc = could not find container \"a362c21ded4434180956198973fd835fcda67e9f1aa6a4857acbb38ee4b93123\": container with ID starting with a362c21ded4434180956198973fd835fcda67e9f1aa6a4857acbb38ee4b93123 not found: ID does not exist" Nov 28 09:19:38 crc kubenswrapper[4946]: I1128 09:19:38.003565 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d1e635a-d107-4138-9386-bc276b454375" path="/var/lib/kubelet/pods/4d1e635a-d107-4138-9386-bc276b454375/volumes" Nov 28 09:19:39 crc kubenswrapper[4946]: I1128 09:19:39.146156 4946 generic.go:334] "Generic (PLEG): container finished" podID="e967dc40-00bf-470c-9d3d-3d57af669882" containerID="efd4208bfd44e41956c8ad263cb783ed5331239a6278607177a90be6cba2492b" exitCode=0 Nov 28 09:19:39 crc kubenswrapper[4946]: I1128 09:19:39.146242 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-networker-qtbl9" event={"ID":"e967dc40-00bf-470c-9d3d-3d57af669882","Type":"ContainerDied","Data":"efd4208bfd44e41956c8ad263cb783ed5331239a6278607177a90be6cba2492b"} Nov 28 09:19:40 crc kubenswrapper[4946]: I1128 09:19:40.675774 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-networker-qtbl9" Nov 28 09:19:40 crc kubenswrapper[4946]: I1128 09:19:40.788273 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e967dc40-00bf-470c-9d3d-3d57af669882-ssh-key\") pod \"e967dc40-00bf-470c-9d3d-3d57af669882\" (UID: \"e967dc40-00bf-470c-9d3d-3d57af669882\") " Nov 28 09:19:40 crc kubenswrapper[4946]: I1128 09:19:40.788399 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e967dc40-00bf-470c-9d3d-3d57af669882-inventory\") pod \"e967dc40-00bf-470c-9d3d-3d57af669882\" (UID: \"e967dc40-00bf-470c-9d3d-3d57af669882\") " Nov 28 09:19:40 crc kubenswrapper[4946]: I1128 09:19:40.788558 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgjqh\" (UniqueName: \"kubernetes.io/projected/e967dc40-00bf-470c-9d3d-3d57af669882-kube-api-access-wgjqh\") pod \"e967dc40-00bf-470c-9d3d-3d57af669882\" (UID: \"e967dc40-00bf-470c-9d3d-3d57af669882\") " Nov 28 09:19:40 crc kubenswrapper[4946]: I1128 09:19:40.793957 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e967dc40-00bf-470c-9d3d-3d57af669882-kube-api-access-wgjqh" (OuterVolumeSpecName: "kube-api-access-wgjqh") pod "e967dc40-00bf-470c-9d3d-3d57af669882" (UID: "e967dc40-00bf-470c-9d3d-3d57af669882"). InnerVolumeSpecName "kube-api-access-wgjqh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:19:40 crc kubenswrapper[4946]: I1128 09:19:40.822171 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e967dc40-00bf-470c-9d3d-3d57af669882-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e967dc40-00bf-470c-9d3d-3d57af669882" (UID: "e967dc40-00bf-470c-9d3d-3d57af669882"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:19:40 crc kubenswrapper[4946]: I1128 09:19:40.854615 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e967dc40-00bf-470c-9d3d-3d57af669882-inventory" (OuterVolumeSpecName: "inventory") pod "e967dc40-00bf-470c-9d3d-3d57af669882" (UID: "e967dc40-00bf-470c-9d3d-3d57af669882"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:19:40 crc kubenswrapper[4946]: I1128 09:19:40.891740 4946 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e967dc40-00bf-470c-9d3d-3d57af669882-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 09:19:40 crc kubenswrapper[4946]: I1128 09:19:40.891777 4946 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e967dc40-00bf-470c-9d3d-3d57af669882-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 09:19:40 crc kubenswrapper[4946]: I1128 09:19:40.891793 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgjqh\" (UniqueName: \"kubernetes.io/projected/e967dc40-00bf-470c-9d3d-3d57af669882-kube-api-access-wgjqh\") on node \"crc\" DevicePath \"\"" Nov 28 09:19:41 crc kubenswrapper[4946]: I1128 09:19:41.169148 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-networker-qtbl9" event={"ID":"e967dc40-00bf-470c-9d3d-3d57af669882","Type":"ContainerDied","Data":"1356dff320050aabf6efbf24fb1082faee192684d401dd1c6e0cdbd4e27f0d7e"} Nov 28 09:19:41 crc kubenswrapper[4946]: I1128 09:19:41.169193 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1356dff320050aabf6efbf24fb1082faee192684d401dd1c6e0cdbd4e27f0d7e" Nov 28 09:19:41 crc kubenswrapper[4946]: I1128 09:19:41.169236 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-openstack-openstack-networker-qtbl9" Nov 28 09:19:41 crc kubenswrapper[4946]: I1128 09:19:41.277403 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-openstack-openstack-networker-7k778"] Nov 28 09:19:41 crc kubenswrapper[4946]: E1128 09:19:41.277965 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e967dc40-00bf-470c-9d3d-3d57af669882" containerName="run-os-openstack-openstack-networker" Nov 28 09:19:41 crc kubenswrapper[4946]: I1128 09:19:41.277988 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="e967dc40-00bf-470c-9d3d-3d57af669882" containerName="run-os-openstack-openstack-networker" Nov 28 09:19:41 crc kubenswrapper[4946]: E1128 09:19:41.278011 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d1e635a-d107-4138-9386-bc276b454375" containerName="extract-utilities" Nov 28 09:19:41 crc kubenswrapper[4946]: I1128 09:19:41.278022 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d1e635a-d107-4138-9386-bc276b454375" containerName="extract-utilities" Nov 28 09:19:41 crc kubenswrapper[4946]: E1128 09:19:41.278043 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d1e635a-d107-4138-9386-bc276b454375" containerName="registry-server" Nov 28 09:19:41 crc kubenswrapper[4946]: I1128 09:19:41.278052 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d1e635a-d107-4138-9386-bc276b454375" containerName="registry-server" Nov 28 09:19:41 crc kubenswrapper[4946]: E1128 09:19:41.278070 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d1e635a-d107-4138-9386-bc276b454375" containerName="extract-content" Nov 28 09:19:41 crc kubenswrapper[4946]: I1128 09:19:41.278078 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d1e635a-d107-4138-9386-bc276b454375" containerName="extract-content" Nov 28 09:19:41 crc kubenswrapper[4946]: I1128 09:19:41.278336 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d1e635a-d107-4138-9386-bc276b454375" containerName="registry-server" Nov 28 09:19:41 crc kubenswrapper[4946]: I1128 09:19:41.278390 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="e967dc40-00bf-470c-9d3d-3d57af669882" containerName="run-os-openstack-openstack-networker" Nov 28 09:19:41 crc kubenswrapper[4946]: I1128 09:19:41.279348 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-openstack-openstack-networker-7k778" Nov 28 09:19:41 crc kubenswrapper[4946]: I1128 09:19:41.281640 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-networker" Nov 28 09:19:41 crc kubenswrapper[4946]: I1128 09:19:41.281648 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-networker-dockercfg-7cq8d" Nov 28 09:19:41 crc kubenswrapper[4946]: I1128 09:19:41.291899 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-openstack-openstack-networker-7k778"] Nov 28 09:19:41 crc kubenswrapper[4946]: I1128 09:19:41.402018 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/febda310-f4a5-44c9-99b3-d7c31f0496a3-ssh-key\") pod \"reboot-os-openstack-openstack-networker-7k778\" (UID: \"febda310-f4a5-44c9-99b3-d7c31f0496a3\") " pod="openstack/reboot-os-openstack-openstack-networker-7k778" Nov 28 09:19:41 crc kubenswrapper[4946]: I1128 09:19:41.402583 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45sxg\" (UniqueName: \"kubernetes.io/projected/febda310-f4a5-44c9-99b3-d7c31f0496a3-kube-api-access-45sxg\") pod \"reboot-os-openstack-openstack-networker-7k778\" (UID: \"febda310-f4a5-44c9-99b3-d7c31f0496a3\") " pod="openstack/reboot-os-openstack-openstack-networker-7k778" Nov 28 09:19:41 crc kubenswrapper[4946]: I1128 09:19:41.402617 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/febda310-f4a5-44c9-99b3-d7c31f0496a3-inventory\") pod \"reboot-os-openstack-openstack-networker-7k778\" (UID: \"febda310-f4a5-44c9-99b3-d7c31f0496a3\") " pod="openstack/reboot-os-openstack-openstack-networker-7k778" Nov 28 09:19:41 crc kubenswrapper[4946]: I1128 09:19:41.504634 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45sxg\" (UniqueName: \"kubernetes.io/projected/febda310-f4a5-44c9-99b3-d7c31f0496a3-kube-api-access-45sxg\") pod \"reboot-os-openstack-openstack-networker-7k778\" (UID: \"febda310-f4a5-44c9-99b3-d7c31f0496a3\") " pod="openstack/reboot-os-openstack-openstack-networker-7k778" Nov 28 09:19:41 crc kubenswrapper[4946]: I1128 09:19:41.504678 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/febda310-f4a5-44c9-99b3-d7c31f0496a3-inventory\") pod \"reboot-os-openstack-openstack-networker-7k778\" (UID: \"febda310-f4a5-44c9-99b3-d7c31f0496a3\") " pod="openstack/reboot-os-openstack-openstack-networker-7k778" Nov 28 09:19:41 crc kubenswrapper[4946]: I1128 09:19:41.504721 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/febda310-f4a5-44c9-99b3-d7c31f0496a3-ssh-key\") pod \"reboot-os-openstack-openstack-networker-7k778\" (UID: \"febda310-f4a5-44c9-99b3-d7c31f0496a3\") " pod="openstack/reboot-os-openstack-openstack-networker-7k778" Nov 28 09:19:41 crc kubenswrapper[4946]: I1128 09:19:41.511107 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/febda310-f4a5-44c9-99b3-d7c31f0496a3-inventory\") pod \"reboot-os-openstack-openstack-networker-7k778\" (UID: \"febda310-f4a5-44c9-99b3-d7c31f0496a3\") " 
pod="openstack/reboot-os-openstack-openstack-networker-7k778" Nov 28 09:19:41 crc kubenswrapper[4946]: I1128 09:19:41.517949 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/febda310-f4a5-44c9-99b3-d7c31f0496a3-ssh-key\") pod \"reboot-os-openstack-openstack-networker-7k778\" (UID: \"febda310-f4a5-44c9-99b3-d7c31f0496a3\") " pod="openstack/reboot-os-openstack-openstack-networker-7k778" Nov 28 09:19:41 crc kubenswrapper[4946]: I1128 09:19:41.521416 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45sxg\" (UniqueName: \"kubernetes.io/projected/febda310-f4a5-44c9-99b3-d7c31f0496a3-kube-api-access-45sxg\") pod \"reboot-os-openstack-openstack-networker-7k778\" (UID: \"febda310-f4a5-44c9-99b3-d7c31f0496a3\") " pod="openstack/reboot-os-openstack-openstack-networker-7k778" Nov 28 09:19:41 crc kubenswrapper[4946]: I1128 09:19:41.599450 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-openstack-openstack-networker-7k778" Nov 28 09:19:42 crc kubenswrapper[4946]: I1128 09:19:42.253876 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-openstack-openstack-networker-7k778"] Nov 28 09:19:43 crc kubenswrapper[4946]: I1128 09:19:43.199205 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-networker-7k778" event={"ID":"febda310-f4a5-44c9-99b3-d7c31f0496a3","Type":"ContainerStarted","Data":"b622c9bea35ba6f305a599da2ab8c2feb45e42eeb5854cade21432c2bd78fa7e"} Nov 28 09:19:43 crc kubenswrapper[4946]: I1128 09:19:43.199985 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-networker-7k778" event={"ID":"febda310-f4a5-44c9-99b3-d7c31f0496a3","Type":"ContainerStarted","Data":"08634a66880cb0223fdb20f496a0e0c3d735d646f8ef05d389fd166e653ff7e5"} Nov 28 09:19:43 crc kubenswrapper[4946]: I1128 09:19:43.225181 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-openstack-openstack-networker-7k778" podStartSLOduration=1.577043983 podStartE2EDuration="2.225152772s" podCreationTimestamp="2025-11-28 09:19:41 +0000 UTC" firstStartedPulling="2025-11-28 09:19:42.257711605 +0000 UTC m=+8836.635776726" lastFinishedPulling="2025-11-28 09:19:42.905820374 +0000 UTC m=+8837.283885515" observedRunningTime="2025-11-28 09:19:43.219925652 +0000 UTC m=+8837.597990783" watchObservedRunningTime="2025-11-28 09:19:43.225152772 +0000 UTC m=+8837.603217923" Nov 28 09:19:54 crc kubenswrapper[4946]: I1128 09:19:54.730800 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 09:19:54 crc kubenswrapper[4946]: I1128 09:19:54.731610 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 09:19:54 crc kubenswrapper[4946]: I1128 09:19:54.731677 4946 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" Nov 28 09:19:54 crc 
Nov 28 09:19:54 crc kubenswrapper[4946]: I1128 09:19:54.730800 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 09:19:54 crc kubenswrapper[4946]: I1128 09:19:54.731610 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 09:19:54 crc kubenswrapper[4946]: I1128 09:19:54.731677 4946 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr"
Nov 28 09:19:54 crc kubenswrapper[4946]: I1128 09:19:54.732838 4946 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fe24f1531f160069fdd7ae3397845365c158536dbada8db9bc15e63bc09c3d80"} pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 28 09:19:54 crc kubenswrapper[4946]: I1128 09:19:54.732925 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" containerID="cri-o://fe24f1531f160069fdd7ae3397845365c158536dbada8db9bc15e63bc09c3d80" gracePeriod=600
Nov 28 09:19:54 crc kubenswrapper[4946]: E1128 09:19:54.862052 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
Nov 28 09:19:55 crc kubenswrapper[4946]: I1128 09:19:55.325746 4946 generic.go:334] "Generic (PLEG): container finished" podID="7450befc-262f-45d1-a5f4-f445e540185b" containerID="fe24f1531f160069fdd7ae3397845365c158536dbada8db9bc15e63bc09c3d80" exitCode=0
Nov 28 09:19:55 crc kubenswrapper[4946]: I1128 09:19:55.325812 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerDied","Data":"fe24f1531f160069fdd7ae3397845365c158536dbada8db9bc15e63bc09c3d80"}
Nov 28 09:19:55 crc kubenswrapper[4946]: I1128 09:19:55.325864 4946 scope.go:117] "RemoveContainer" containerID="d8b45121214190074c519e34154db0426b7e8853a8fe0a16ee20d03a0a14cd84"
Nov 28 09:19:55 crc kubenswrapper[4946]: I1128 09:19:55.326630 4946 scope.go:117] "RemoveContainer" containerID="fe24f1531f160069fdd7ae3397845365c158536dbada8db9bc15e63bc09c3d80"
Nov 28 09:19:55 crc kubenswrapper[4946]: E1128 09:19:55.326990 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
Nov 28 09:19:57 crc kubenswrapper[4946]: I1128 09:19:57.362327 4946 generic.go:334] "Generic (PLEG): container finished" podID="ec131430-238f-4a5c-8c02-f05d2b07e60e" containerID="ed8883f314d887e97ee699581e3134e33af840fbdaeaad8e8d5b0b149c22a646" exitCode=0
Nov 28 09:19:57 crc kubenswrapper[4946]: I1128 09:19:57.362433 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-gcsc9" event={"ID":"ec131430-238f-4a5c-8c02-f05d2b07e60e","Type":"ContainerDied","Data":"ed8883f314d887e97ee699581e3134e33af840fbdaeaad8e8d5b0b149c22a646"}
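The liveness failure above is an ordinary HTTP probe getting connection-refused on 127.0.0.1:8798. A minimal sketch of an equivalent check, plain Python stdlib with the URL taken from the entries above; the kubelet's prober is Go, so this only mirrors the observable behaviour (2xx/3xx success, anything else failure), not its implementation:

```python
import urllib.request, urllib.error

# Endpoint the kubelet probed in the entries above.
URL = "http://127.0.0.1:8798/health"

def probe(url: str, timeout: float = 1.0) -> tuple[bool, str]:
    """Return (healthy, detail): HTTP 2xx/3xx counts as success,
    connection errors (like the refusal logged above) as failure."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400, f"HTTP {resp.status}"
    except urllib.error.URLError as err:
        # err.reason is e.g. ConnectionRefusedError, the Python analogue
        # of the "connect: connection refused" output in the log.
        return False, f'Get "{url}": {err.reason}'

healthy, detail = probe(URL)
print("success" if healthy else f"failure output={detail}")
```

Three consecutive failures (the default failureThreshold) produce exactly the sequence logged here: probe failure, "will be restarted", then a kill with the pod's termination grace period (gracePeriod=600).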
containerID="b622c9bea35ba6f305a599da2ab8c2feb45e42eeb5854cade21432c2bd78fa7e" exitCode=0 Nov 28 09:19:58 crc kubenswrapper[4946]: I1128 09:19:58.376822 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-networker-7k778" event={"ID":"febda310-f4a5-44c9-99b3-d7c31f0496a3","Type":"ContainerDied","Data":"b622c9bea35ba6f305a599da2ab8c2feb45e42eeb5854cade21432c2bd78fa7e"} Nov 28 09:19:58 crc kubenswrapper[4946]: I1128 09:19:58.845856 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-gcsc9" Nov 28 09:19:59 crc kubenswrapper[4946]: I1128 09:19:59.035555 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ec131430-238f-4a5c-8c02-f05d2b07e60e-ceph\") pod \"ec131430-238f-4a5c-8c02-f05d2b07e60e\" (UID: \"ec131430-238f-4a5c-8c02-f05d2b07e60e\") " Nov 28 09:19:59 crc kubenswrapper[4946]: I1128 09:19:59.035954 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec131430-238f-4a5c-8c02-f05d2b07e60e-inventory\") pod \"ec131430-238f-4a5c-8c02-f05d2b07e60e\" (UID: \"ec131430-238f-4a5c-8c02-f05d2b07e60e\") " Nov 28 09:19:59 crc kubenswrapper[4946]: I1128 09:19:59.035989 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ec131430-238f-4a5c-8c02-f05d2b07e60e-ssh-key\") pod \"ec131430-238f-4a5c-8c02-f05d2b07e60e\" (UID: \"ec131430-238f-4a5c-8c02-f05d2b07e60e\") " Nov 28 09:19:59 crc kubenswrapper[4946]: I1128 09:19:59.036010 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkn2q\" (UniqueName: \"kubernetes.io/projected/ec131430-238f-4a5c-8c02-f05d2b07e60e-kube-api-access-nkn2q\") pod \"ec131430-238f-4a5c-8c02-f05d2b07e60e\" (UID: \"ec131430-238f-4a5c-8c02-f05d2b07e60e\") " Nov 28 09:19:59 crc kubenswrapper[4946]: I1128 09:19:59.041018 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec131430-238f-4a5c-8c02-f05d2b07e60e-kube-api-access-nkn2q" (OuterVolumeSpecName: "kube-api-access-nkn2q") pod "ec131430-238f-4a5c-8c02-f05d2b07e60e" (UID: "ec131430-238f-4a5c-8c02-f05d2b07e60e"). InnerVolumeSpecName "kube-api-access-nkn2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:19:59 crc kubenswrapper[4946]: I1128 09:19:59.041750 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec131430-238f-4a5c-8c02-f05d2b07e60e-ceph" (OuterVolumeSpecName: "ceph") pod "ec131430-238f-4a5c-8c02-f05d2b07e60e" (UID: "ec131430-238f-4a5c-8c02-f05d2b07e60e"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:19:59 crc kubenswrapper[4946]: I1128 09:19:59.062794 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec131430-238f-4a5c-8c02-f05d2b07e60e-inventory" (OuterVolumeSpecName: "inventory") pod "ec131430-238f-4a5c-8c02-f05d2b07e60e" (UID: "ec131430-238f-4a5c-8c02-f05d2b07e60e"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:19:59 crc kubenswrapper[4946]: I1128 09:19:59.072554 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec131430-238f-4a5c-8c02-f05d2b07e60e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ec131430-238f-4a5c-8c02-f05d2b07e60e" (UID: "ec131430-238f-4a5c-8c02-f05d2b07e60e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:19:59 crc kubenswrapper[4946]: I1128 09:19:59.139883 4946 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ec131430-238f-4a5c-8c02-f05d2b07e60e-ceph\") on node \"crc\" DevicePath \"\"" Nov 28 09:19:59 crc kubenswrapper[4946]: I1128 09:19:59.139914 4946 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec131430-238f-4a5c-8c02-f05d2b07e60e-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 09:19:59 crc kubenswrapper[4946]: I1128 09:19:59.139926 4946 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ec131430-238f-4a5c-8c02-f05d2b07e60e-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 09:19:59 crc kubenswrapper[4946]: I1128 09:19:59.139939 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkn2q\" (UniqueName: \"kubernetes.io/projected/ec131430-238f-4a5c-8c02-f05d2b07e60e-kube-api-access-nkn2q\") on node \"crc\" DevicePath \"\"" Nov 28 09:19:59 crc kubenswrapper[4946]: I1128 09:19:59.391245 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-gcsc9" Nov 28 09:19:59 crc kubenswrapper[4946]: I1128 09:19:59.391238 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-gcsc9" event={"ID":"ec131430-238f-4a5c-8c02-f05d2b07e60e","Type":"ContainerDied","Data":"4dd73e170482afb107403454061f750ffe0547770baaadab4d24161ba23447af"} Nov 28 09:19:59 crc kubenswrapper[4946]: I1128 09:19:59.391535 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4dd73e170482afb107403454061f750ffe0547770baaadab4d24161ba23447af" Nov 28 09:19:59 crc kubenswrapper[4946]: I1128 09:19:59.548592 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-openstack-2jvc9"] Nov 28 09:19:59 crc kubenswrapper[4946]: E1128 09:19:59.549209 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec131430-238f-4a5c-8c02-f05d2b07e60e" containerName="configure-os-openstack-openstack-cell1" Nov 28 09:19:59 crc kubenswrapper[4946]: I1128 09:19:59.549226 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec131430-238f-4a5c-8c02-f05d2b07e60e" containerName="configure-os-openstack-openstack-cell1" Nov 28 09:19:59 crc kubenswrapper[4946]: I1128 09:19:59.549488 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec131430-238f-4a5c-8c02-f05d2b07e60e" containerName="configure-os-openstack-openstack-cell1" Nov 28 09:19:59 crc kubenswrapper[4946]: I1128 09:19:59.550414 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-openstack-2jvc9" Nov 28 09:19:59 crc kubenswrapper[4946]: I1128 09:19:59.556593 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 28 09:19:59 crc kubenswrapper[4946]: I1128 09:19:59.556645 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-psw2j" Nov 28 09:19:59 crc kubenswrapper[4946]: I1128 09:19:59.558705 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-openstack-2jvc9"] Nov 28 09:19:59 crc kubenswrapper[4946]: I1128 09:19:59.754604 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-networker\" (UniqueName: \"kubernetes.io/secret/49f9db9c-f1e9-48d1-80fe-e77b98c6271b-ssh-key-openstack-networker\") pod \"ssh-known-hosts-openstack-2jvc9\" (UID: \"49f9db9c-f1e9-48d1-80fe-e77b98c6271b\") " pod="openstack/ssh-known-hosts-openstack-2jvc9" Nov 28 09:19:59 crc kubenswrapper[4946]: I1128 09:19:59.754664 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/49f9db9c-f1e9-48d1-80fe-e77b98c6271b-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-2jvc9\" (UID: \"49f9db9c-f1e9-48d1-80fe-e77b98c6271b\") " pod="openstack/ssh-known-hosts-openstack-2jvc9" Nov 28 09:19:59 crc kubenswrapper[4946]: I1128 09:19:59.754698 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-1\" (UniqueName: \"kubernetes.io/secret/49f9db9c-f1e9-48d1-80fe-e77b98c6271b-inventory-1\") pod \"ssh-known-hosts-openstack-2jvc9\" (UID: \"49f9db9c-f1e9-48d1-80fe-e77b98c6271b\") " pod="openstack/ssh-known-hosts-openstack-2jvc9" Nov 28 09:19:59 crc kubenswrapper[4946]: I1128 09:19:59.754796 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ft94\" (UniqueName: \"kubernetes.io/projected/49f9db9c-f1e9-48d1-80fe-e77b98c6271b-kube-api-access-5ft94\") pod \"ssh-known-hosts-openstack-2jvc9\" (UID: \"49f9db9c-f1e9-48d1-80fe-e77b98c6271b\") " pod="openstack/ssh-known-hosts-openstack-2jvc9" Nov 28 09:19:59 crc kubenswrapper[4946]: I1128 09:19:59.754864 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/49f9db9c-f1e9-48d1-80fe-e77b98c6271b-inventory-0\") pod \"ssh-known-hosts-openstack-2jvc9\" (UID: \"49f9db9c-f1e9-48d1-80fe-e77b98c6271b\") " pod="openstack/ssh-known-hosts-openstack-2jvc9" Nov 28 09:19:59 crc kubenswrapper[4946]: I1128 09:19:59.754913 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/49f9db9c-f1e9-48d1-80fe-e77b98c6271b-ceph\") pod \"ssh-known-hosts-openstack-2jvc9\" (UID: \"49f9db9c-f1e9-48d1-80fe-e77b98c6271b\") " pod="openstack/ssh-known-hosts-openstack-2jvc9" Nov 28 09:19:59 crc kubenswrapper[4946]: I1128 09:19:59.856760 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ft94\" (UniqueName: \"kubernetes.io/projected/49f9db9c-f1e9-48d1-80fe-e77b98c6271b-kube-api-access-5ft94\") pod \"ssh-known-hosts-openstack-2jvc9\" (UID: \"49f9db9c-f1e9-48d1-80fe-e77b98c6271b\") " pod="openstack/ssh-known-hosts-openstack-2jvc9" Nov 28 09:19:59 crc kubenswrapper[4946]: I1128 
09:19:59.856840 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/49f9db9c-f1e9-48d1-80fe-e77b98c6271b-inventory-0\") pod \"ssh-known-hosts-openstack-2jvc9\" (UID: \"49f9db9c-f1e9-48d1-80fe-e77b98c6271b\") " pod="openstack/ssh-known-hosts-openstack-2jvc9" Nov 28 09:19:59 crc kubenswrapper[4946]: I1128 09:19:59.856896 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/49f9db9c-f1e9-48d1-80fe-e77b98c6271b-ceph\") pod \"ssh-known-hosts-openstack-2jvc9\" (UID: \"49f9db9c-f1e9-48d1-80fe-e77b98c6271b\") " pod="openstack/ssh-known-hosts-openstack-2jvc9" Nov 28 09:19:59 crc kubenswrapper[4946]: I1128 09:19:59.856974 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-networker\" (UniqueName: \"kubernetes.io/secret/49f9db9c-f1e9-48d1-80fe-e77b98c6271b-ssh-key-openstack-networker\") pod \"ssh-known-hosts-openstack-2jvc9\" (UID: \"49f9db9c-f1e9-48d1-80fe-e77b98c6271b\") " pod="openstack/ssh-known-hosts-openstack-2jvc9" Nov 28 09:19:59 crc kubenswrapper[4946]: I1128 09:19:59.857038 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/49f9db9c-f1e9-48d1-80fe-e77b98c6271b-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-2jvc9\" (UID: \"49f9db9c-f1e9-48d1-80fe-e77b98c6271b\") " pod="openstack/ssh-known-hosts-openstack-2jvc9" Nov 28 09:19:59 crc kubenswrapper[4946]: I1128 09:19:59.857902 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-1\" (UniqueName: \"kubernetes.io/secret/49f9db9c-f1e9-48d1-80fe-e77b98c6271b-inventory-1\") pod \"ssh-known-hosts-openstack-2jvc9\" (UID: \"49f9db9c-f1e9-48d1-80fe-e77b98c6271b\") " pod="openstack/ssh-known-hosts-openstack-2jvc9" Nov 28 09:19:59 crc kubenswrapper[4946]: I1128 09:19:59.862321 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-1\" (UniqueName: \"kubernetes.io/secret/49f9db9c-f1e9-48d1-80fe-e77b98c6271b-inventory-1\") pod \"ssh-known-hosts-openstack-2jvc9\" (UID: \"49f9db9c-f1e9-48d1-80fe-e77b98c6271b\") " pod="openstack/ssh-known-hosts-openstack-2jvc9" Nov 28 09:19:59 crc kubenswrapper[4946]: I1128 09:19:59.862915 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/49f9db9c-f1e9-48d1-80fe-e77b98c6271b-inventory-0\") pod \"ssh-known-hosts-openstack-2jvc9\" (UID: \"49f9db9c-f1e9-48d1-80fe-e77b98c6271b\") " pod="openstack/ssh-known-hosts-openstack-2jvc9" Nov 28 09:19:59 crc kubenswrapper[4946]: I1128 09:19:59.863033 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/49f9db9c-f1e9-48d1-80fe-e77b98c6271b-ceph\") pod \"ssh-known-hosts-openstack-2jvc9\" (UID: \"49f9db9c-f1e9-48d1-80fe-e77b98c6271b\") " pod="openstack/ssh-known-hosts-openstack-2jvc9" Nov 28 09:19:59 crc kubenswrapper[4946]: I1128 09:19:59.863211 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/49f9db9c-f1e9-48d1-80fe-e77b98c6271b-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-2jvc9\" (UID: \"49f9db9c-f1e9-48d1-80fe-e77b98c6271b\") " pod="openstack/ssh-known-hosts-openstack-2jvc9" Nov 28 09:19:59 crc kubenswrapper[4946]: I1128 09:19:59.865605 4946 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ssh-key-openstack-networker\" (UniqueName: \"kubernetes.io/secret/49f9db9c-f1e9-48d1-80fe-e77b98c6271b-ssh-key-openstack-networker\") pod \"ssh-known-hosts-openstack-2jvc9\" (UID: \"49f9db9c-f1e9-48d1-80fe-e77b98c6271b\") " pod="openstack/ssh-known-hosts-openstack-2jvc9" Nov 28 09:19:59 crc kubenswrapper[4946]: I1128 09:19:59.879287 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ft94\" (UniqueName: \"kubernetes.io/projected/49f9db9c-f1e9-48d1-80fe-e77b98c6271b-kube-api-access-5ft94\") pod \"ssh-known-hosts-openstack-2jvc9\" (UID: \"49f9db9c-f1e9-48d1-80fe-e77b98c6271b\") " pod="openstack/ssh-known-hosts-openstack-2jvc9" Nov 28 09:19:59 crc kubenswrapper[4946]: I1128 09:19:59.897191 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-openstack-2jvc9" Nov 28 09:20:00 crc kubenswrapper[4946]: I1128 09:20:00.011260 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-openstack-openstack-networker-7k778" Nov 28 09:20:00 crc kubenswrapper[4946]: I1128 09:20:00.164832 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/febda310-f4a5-44c9-99b3-d7c31f0496a3-inventory\") pod \"febda310-f4a5-44c9-99b3-d7c31f0496a3\" (UID: \"febda310-f4a5-44c9-99b3-d7c31f0496a3\") " Nov 28 09:20:00 crc kubenswrapper[4946]: I1128 09:20:00.164889 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/febda310-f4a5-44c9-99b3-d7c31f0496a3-ssh-key\") pod \"febda310-f4a5-44c9-99b3-d7c31f0496a3\" (UID: \"febda310-f4a5-44c9-99b3-d7c31f0496a3\") " Nov 28 09:20:00 crc kubenswrapper[4946]: I1128 09:20:00.164948 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45sxg\" (UniqueName: \"kubernetes.io/projected/febda310-f4a5-44c9-99b3-d7c31f0496a3-kube-api-access-45sxg\") pod \"febda310-f4a5-44c9-99b3-d7c31f0496a3\" (UID: \"febda310-f4a5-44c9-99b3-d7c31f0496a3\") " Nov 28 09:20:00 crc kubenswrapper[4946]: I1128 09:20:00.169515 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/febda310-f4a5-44c9-99b3-d7c31f0496a3-kube-api-access-45sxg" (OuterVolumeSpecName: "kube-api-access-45sxg") pod "febda310-f4a5-44c9-99b3-d7c31f0496a3" (UID: "febda310-f4a5-44c9-99b3-d7c31f0496a3"). InnerVolumeSpecName "kube-api-access-45sxg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:20:00 crc kubenswrapper[4946]: I1128 09:20:00.199655 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/febda310-f4a5-44c9-99b3-d7c31f0496a3-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "febda310-f4a5-44c9-99b3-d7c31f0496a3" (UID: "febda310-f4a5-44c9-99b3-d7c31f0496a3"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:20:00 crc kubenswrapper[4946]: I1128 09:20:00.209046 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/febda310-f4a5-44c9-99b3-d7c31f0496a3-inventory" (OuterVolumeSpecName: "inventory") pod "febda310-f4a5-44c9-99b3-d7c31f0496a3" (UID: "febda310-f4a5-44c9-99b3-d7c31f0496a3"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:20:00 crc kubenswrapper[4946]: I1128 09:20:00.268613 4946 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/febda310-f4a5-44c9-99b3-d7c31f0496a3-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 09:20:00 crc kubenswrapper[4946]: I1128 09:20:00.268655 4946 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/febda310-f4a5-44c9-99b3-d7c31f0496a3-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 09:20:00 crc kubenswrapper[4946]: I1128 09:20:00.268668 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45sxg\" (UniqueName: \"kubernetes.io/projected/febda310-f4a5-44c9-99b3-d7c31f0496a3-kube-api-access-45sxg\") on node \"crc\" DevicePath \"\"" Nov 28 09:20:00 crc kubenswrapper[4946]: I1128 09:20:00.405345 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-networker-7k778" event={"ID":"febda310-f4a5-44c9-99b3-d7c31f0496a3","Type":"ContainerDied","Data":"08634a66880cb0223fdb20f496a0e0c3d735d646f8ef05d389fd166e653ff7e5"} Nov 28 09:20:00 crc kubenswrapper[4946]: I1128 09:20:00.405820 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08634a66880cb0223fdb20f496a0e0c3d735d646f8ef05d389fd166e653ff7e5" Nov 28 09:20:00 crc kubenswrapper[4946]: I1128 09:20:00.405406 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-openstack-openstack-networker-7k778" Nov 28 09:20:00 crc kubenswrapper[4946]: I1128 09:20:00.497150 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-openstack-openstack-networker-b8hjj"] Nov 28 09:20:00 crc kubenswrapper[4946]: E1128 09:20:00.497812 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="febda310-f4a5-44c9-99b3-d7c31f0496a3" containerName="reboot-os-openstack-openstack-networker" Nov 28 09:20:00 crc kubenswrapper[4946]: I1128 09:20:00.497892 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="febda310-f4a5-44c9-99b3-d7c31f0496a3" containerName="reboot-os-openstack-openstack-networker" Nov 28 09:20:00 crc kubenswrapper[4946]: I1128 09:20:00.498132 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="febda310-f4a5-44c9-99b3-d7c31f0496a3" containerName="reboot-os-openstack-openstack-networker" Nov 28 09:20:00 crc kubenswrapper[4946]: I1128 09:20:00.498955 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-openstack-openstack-networker-b8hjj" Nov 28 09:20:00 crc kubenswrapper[4946]: I1128 09:20:00.510934 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-networker-dockercfg-7cq8d" Nov 28 09:20:00 crc kubenswrapper[4946]: I1128 09:20:00.518040 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-openstack-openstack-networker-b8hjj"] Nov 28 09:20:00 crc kubenswrapper[4946]: W1128 09:20:00.562269 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49f9db9c_f1e9_48d1_80fe_e77b98c6271b.slice/crio-12f30ee77f759c3bbd1e6ac8e68420ebeee806e0c5f3d2970baa040f8275a547 WatchSource:0}: Error finding container 12f30ee77f759c3bbd1e6ac8e68420ebeee806e0c5f3d2970baa040f8275a547: Status 404 returned error can't find the container with id 12f30ee77f759c3bbd1e6ac8e68420ebeee806e0c5f3d2970baa040f8275a547 Nov 28 09:20:00 crc kubenswrapper[4946]: I1128 09:20:00.563569 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-openstack-2jvc9"] Nov 28 09:20:00 crc kubenswrapper[4946]: I1128 09:20:00.575338 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca68a9c2-7810-4c5c-9614-3aa70b0dff7f-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-networker-b8hjj\" (UID: \"ca68a9c2-7810-4c5c-9614-3aa70b0dff7f\") " pod="openstack/install-certs-openstack-openstack-networker-b8hjj" Nov 28 09:20:00 crc kubenswrapper[4946]: I1128 09:20:00.576547 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca68a9c2-7810-4c5c-9614-3aa70b0dff7f-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-networker-b8hjj\" (UID: \"ca68a9c2-7810-4c5c-9614-3aa70b0dff7f\") " pod="openstack/install-certs-openstack-openstack-networker-b8hjj" Nov 28 09:20:00 crc kubenswrapper[4946]: I1128 09:20:00.576660 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ca68a9c2-7810-4c5c-9614-3aa70b0dff7f-ssh-key\") pod \"install-certs-openstack-openstack-networker-b8hjj\" (UID: \"ca68a9c2-7810-4c5c-9614-3aa70b0dff7f\") " pod="openstack/install-certs-openstack-openstack-networker-b8hjj" Nov 28 09:20:00 crc kubenswrapper[4946]: I1128 09:20:00.576779 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9746k\" (UniqueName: \"kubernetes.io/projected/ca68a9c2-7810-4c5c-9614-3aa70b0dff7f-kube-api-access-9746k\") pod \"install-certs-openstack-openstack-networker-b8hjj\" (UID: \"ca68a9c2-7810-4c5c-9614-3aa70b0dff7f\") " pod="openstack/install-certs-openstack-openstack-networker-b8hjj" Nov 28 09:20:00 crc kubenswrapper[4946]: I1128 09:20:00.577259 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca68a9c2-7810-4c5c-9614-3aa70b0dff7f-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-networker-b8hjj\" (UID: \"ca68a9c2-7810-4c5c-9614-3aa70b0dff7f\") " pod="openstack/install-certs-openstack-openstack-networker-b8hjj" Nov 28 09:20:00 crc kubenswrapper[4946]: I1128 09:20:00.577310 
Nov 28 09:20:00 crc kubenswrapper[4946]: I1128 09:20:00.577310 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca68a9c2-7810-4c5c-9614-3aa70b0dff7f-inventory\") pod \"install-certs-openstack-openstack-networker-b8hjj\" (UID: \"ca68a9c2-7810-4c5c-9614-3aa70b0dff7f\") " pod="openstack/install-certs-openstack-openstack-networker-b8hjj"
Nov 28 09:20:00 crc kubenswrapper[4946]: I1128 09:20:00.679367 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca68a9c2-7810-4c5c-9614-3aa70b0dff7f-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-networker-b8hjj\" (UID: \"ca68a9c2-7810-4c5c-9614-3aa70b0dff7f\") " pod="openstack/install-certs-openstack-openstack-networker-b8hjj"
Nov 28 09:20:00 crc kubenswrapper[4946]: I1128 09:20:00.679409 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca68a9c2-7810-4c5c-9614-3aa70b0dff7f-inventory\") pod \"install-certs-openstack-openstack-networker-b8hjj\" (UID: \"ca68a9c2-7810-4c5c-9614-3aa70b0dff7f\") " pod="openstack/install-certs-openstack-openstack-networker-b8hjj"
Nov 28 09:20:00 crc kubenswrapper[4946]: I1128 09:20:00.679446 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca68a9c2-7810-4c5c-9614-3aa70b0dff7f-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-networker-b8hjj\" (UID: \"ca68a9c2-7810-4c5c-9614-3aa70b0dff7f\") " pod="openstack/install-certs-openstack-openstack-networker-b8hjj"
Nov 28 09:20:00 crc kubenswrapper[4946]: I1128 09:20:00.679490 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca68a9c2-7810-4c5c-9614-3aa70b0dff7f-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-networker-b8hjj\" (UID: \"ca68a9c2-7810-4c5c-9614-3aa70b0dff7f\") " pod="openstack/install-certs-openstack-openstack-networker-b8hjj"
Nov 28 09:20:00 crc kubenswrapper[4946]: I1128 09:20:00.679508 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ca68a9c2-7810-4c5c-9614-3aa70b0dff7f-ssh-key\") pod \"install-certs-openstack-openstack-networker-b8hjj\" (UID: \"ca68a9c2-7810-4c5c-9614-3aa70b0dff7f\") " pod="openstack/install-certs-openstack-openstack-networker-b8hjj"
Nov 28 09:20:00 crc kubenswrapper[4946]: I1128 09:20:00.679542 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9746k\" (UniqueName: \"kubernetes.io/projected/ca68a9c2-7810-4c5c-9614-3aa70b0dff7f-kube-api-access-9746k\") pod \"install-certs-openstack-openstack-networker-b8hjj\" (UID: \"ca68a9c2-7810-4c5c-9614-3aa70b0dff7f\") " pod="openstack/install-certs-openstack-openstack-networker-b8hjj"
Nov 28 09:20:00 crc kubenswrapper[4946]: I1128 09:20:00.683593 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca68a9c2-7810-4c5c-9614-3aa70b0dff7f-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-networker-b8hjj\" (UID: \"ca68a9c2-7810-4c5c-9614-3aa70b0dff7f\") " pod="openstack/install-certs-openstack-openstack-networker-b8hjj"
Nov 28 09:20:00 crc kubenswrapper[4946]: I1128 09:20:00.683602 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ca68a9c2-7810-4c5c-9614-3aa70b0dff7f-ssh-key\") pod \"install-certs-openstack-openstack-networker-b8hjj\" (UID: \"ca68a9c2-7810-4c5c-9614-3aa70b0dff7f\") " pod="openstack/install-certs-openstack-openstack-networker-b8hjj"
Nov 28 09:20:00 crc kubenswrapper[4946]: I1128 09:20:00.683828 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca68a9c2-7810-4c5c-9614-3aa70b0dff7f-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-networker-b8hjj\" (UID: \"ca68a9c2-7810-4c5c-9614-3aa70b0dff7f\") " pod="openstack/install-certs-openstack-openstack-networker-b8hjj"
Nov 28 09:20:00 crc kubenswrapper[4946]: I1128 09:20:00.684031 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca68a9c2-7810-4c5c-9614-3aa70b0dff7f-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-networker-b8hjj\" (UID: \"ca68a9c2-7810-4c5c-9614-3aa70b0dff7f\") " pod="openstack/install-certs-openstack-openstack-networker-b8hjj"
Nov 28 09:20:00 crc kubenswrapper[4946]: I1128 09:20:00.684543 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca68a9c2-7810-4c5c-9614-3aa70b0dff7f-inventory\") pod \"install-certs-openstack-openstack-networker-b8hjj\" (UID: \"ca68a9c2-7810-4c5c-9614-3aa70b0dff7f\") " pod="openstack/install-certs-openstack-openstack-networker-b8hjj"
Nov 28 09:20:00 crc kubenswrapper[4946]: I1128 09:20:00.699655 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9746k\" (UniqueName: \"kubernetes.io/projected/ca68a9c2-7810-4c5c-9614-3aa70b0dff7f-kube-api-access-9746k\") pod \"install-certs-openstack-openstack-networker-b8hjj\" (UID: \"ca68a9c2-7810-4c5c-9614-3aa70b0dff7f\") " pod="openstack/install-certs-openstack-openstack-networker-b8hjj"
Need to start a new one" pod="openstack/install-certs-openstack-openstack-networker-b8hjj" Nov 28 09:20:01 crc kubenswrapper[4946]: I1128 09:20:01.417665 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-2jvc9" event={"ID":"49f9db9c-f1e9-48d1-80fe-e77b98c6271b","Type":"ContainerStarted","Data":"12f30ee77f759c3bbd1e6ac8e68420ebeee806e0c5f3d2970baa040f8275a547"} Nov 28 09:20:01 crc kubenswrapper[4946]: I1128 09:20:01.468448 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-openstack-openstack-networker-b8hjj"] Nov 28 09:20:01 crc kubenswrapper[4946]: W1128 09:20:01.476378 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca68a9c2_7810_4c5c_9614_3aa70b0dff7f.slice/crio-c58dc0c49bccf7530531764ad1e4c7f745ede150cdc081226a2c345d25e3fd04 WatchSource:0}: Error finding container c58dc0c49bccf7530531764ad1e4c7f745ede150cdc081226a2c345d25e3fd04: Status 404 returned error can't find the container with id c58dc0c49bccf7530531764ad1e4c7f745ede150cdc081226a2c345d25e3fd04 Nov 28 09:20:02 crc kubenswrapper[4946]: I1128 09:20:02.437060 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-2jvc9" event={"ID":"49f9db9c-f1e9-48d1-80fe-e77b98c6271b","Type":"ContainerStarted","Data":"2df4cdda58a34cba6d093eb515880013391ca434aef5d905721844f05a1bce25"} Nov 28 09:20:02 crc kubenswrapper[4946]: I1128 09:20:02.441519 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-networker-b8hjj" event={"ID":"ca68a9c2-7810-4c5c-9614-3aa70b0dff7f","Type":"ContainerStarted","Data":"8ddf2e6de6c1d5efed8691bf305e962e3acebd32d31dcc31fda25e2be9e4f065"} Nov 28 09:20:02 crc kubenswrapper[4946]: I1128 09:20:02.441555 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-networker-b8hjj" event={"ID":"ca68a9c2-7810-4c5c-9614-3aa70b0dff7f","Type":"ContainerStarted","Data":"c58dc0c49bccf7530531764ad1e4c7f745ede150cdc081226a2c345d25e3fd04"} Nov 28 09:20:02 crc kubenswrapper[4946]: I1128 09:20:02.469401 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-openstack-2jvc9" podStartSLOduration=2.79390509 podStartE2EDuration="3.469379837s" podCreationTimestamp="2025-11-28 09:19:59 +0000 UTC" firstStartedPulling="2025-11-28 09:20:00.564678793 +0000 UTC m=+8854.942743904" lastFinishedPulling="2025-11-28 09:20:01.24015354 +0000 UTC m=+8855.618218651" observedRunningTime="2025-11-28 09:20:02.454753644 +0000 UTC m=+8856.832818755" watchObservedRunningTime="2025-11-28 09:20:02.469379837 +0000 UTC m=+8856.847444938" Nov 28 09:20:02 crc kubenswrapper[4946]: I1128 09:20:02.482421 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-openstack-openstack-networker-b8hjj" podStartSLOduration=1.804692117 podStartE2EDuration="2.48239845s" podCreationTimestamp="2025-11-28 09:20:00 +0000 UTC" firstStartedPulling="2025-11-28 09:20:01.480320475 +0000 UTC m=+8855.858385586" lastFinishedPulling="2025-11-28 09:20:02.158026808 +0000 UTC m=+8856.536091919" observedRunningTime="2025-11-28 09:20:02.476549035 +0000 UTC m=+8856.854614146" watchObservedRunningTime="2025-11-28 09:20:02.48239845 +0000 UTC m=+8856.860463571" Nov 28 09:20:07 crc kubenswrapper[4946]: I1128 09:20:07.990032 4946 scope.go:117] "RemoveContainer" 
containerID="fe24f1531f160069fdd7ae3397845365c158536dbada8db9bc15e63bc09c3d80" Nov 28 09:20:07 crc kubenswrapper[4946]: E1128 09:20:07.991730 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:20:13 crc kubenswrapper[4946]: I1128 09:20:13.563124 4946 generic.go:334] "Generic (PLEG): container finished" podID="ca68a9c2-7810-4c5c-9614-3aa70b0dff7f" containerID="8ddf2e6de6c1d5efed8691bf305e962e3acebd32d31dcc31fda25e2be9e4f065" exitCode=0 Nov 28 09:20:13 crc kubenswrapper[4946]: I1128 09:20:13.563210 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-networker-b8hjj" event={"ID":"ca68a9c2-7810-4c5c-9614-3aa70b0dff7f","Type":"ContainerDied","Data":"8ddf2e6de6c1d5efed8691bf305e962e3acebd32d31dcc31fda25e2be9e4f065"} Nov 28 09:20:15 crc kubenswrapper[4946]: I1128 09:20:15.186519 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-openstack-openstack-networker-b8hjj" Nov 28 09:20:15 crc kubenswrapper[4946]: I1128 09:20:15.358881 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ca68a9c2-7810-4c5c-9614-3aa70b0dff7f-ssh-key\") pod \"ca68a9c2-7810-4c5c-9614-3aa70b0dff7f\" (UID: \"ca68a9c2-7810-4c5c-9614-3aa70b0dff7f\") " Nov 28 09:20:15 crc kubenswrapper[4946]: I1128 09:20:15.358957 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca68a9c2-7810-4c5c-9614-3aa70b0dff7f-ovn-combined-ca-bundle\") pod \"ca68a9c2-7810-4c5c-9614-3aa70b0dff7f\" (UID: \"ca68a9c2-7810-4c5c-9614-3aa70b0dff7f\") " Nov 28 09:20:15 crc kubenswrapper[4946]: I1128 09:20:15.358987 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca68a9c2-7810-4c5c-9614-3aa70b0dff7f-inventory\") pod \"ca68a9c2-7810-4c5c-9614-3aa70b0dff7f\" (UID: \"ca68a9c2-7810-4c5c-9614-3aa70b0dff7f\") " Nov 28 09:20:15 crc kubenswrapper[4946]: I1128 09:20:15.359133 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca68a9c2-7810-4c5c-9614-3aa70b0dff7f-bootstrap-combined-ca-bundle\") pod \"ca68a9c2-7810-4c5c-9614-3aa70b0dff7f\" (UID: \"ca68a9c2-7810-4c5c-9614-3aa70b0dff7f\") " Nov 28 09:20:15 crc kubenswrapper[4946]: I1128 09:20:15.359174 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9746k\" (UniqueName: \"kubernetes.io/projected/ca68a9c2-7810-4c5c-9614-3aa70b0dff7f-kube-api-access-9746k\") pod \"ca68a9c2-7810-4c5c-9614-3aa70b0dff7f\" (UID: \"ca68a9c2-7810-4c5c-9614-3aa70b0dff7f\") " Nov 28 09:20:15 crc kubenswrapper[4946]: I1128 09:20:15.359210 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca68a9c2-7810-4c5c-9614-3aa70b0dff7f-neutron-metadata-combined-ca-bundle\") pod \"ca68a9c2-7810-4c5c-9614-3aa70b0dff7f\" (UID: \"ca68a9c2-7810-4c5c-9614-3aa70b0dff7f\") 
" Nov 28 09:20:15 crc kubenswrapper[4946]: I1128 09:20:15.365726 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca68a9c2-7810-4c5c-9614-3aa70b0dff7f-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "ca68a9c2-7810-4c5c-9614-3aa70b0dff7f" (UID: "ca68a9c2-7810-4c5c-9614-3aa70b0dff7f"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:20:15 crc kubenswrapper[4946]: I1128 09:20:15.366236 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca68a9c2-7810-4c5c-9614-3aa70b0dff7f-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "ca68a9c2-7810-4c5c-9614-3aa70b0dff7f" (UID: "ca68a9c2-7810-4c5c-9614-3aa70b0dff7f"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:20:15 crc kubenswrapper[4946]: I1128 09:20:15.368240 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca68a9c2-7810-4c5c-9614-3aa70b0dff7f-kube-api-access-9746k" (OuterVolumeSpecName: "kube-api-access-9746k") pod "ca68a9c2-7810-4c5c-9614-3aa70b0dff7f" (UID: "ca68a9c2-7810-4c5c-9614-3aa70b0dff7f"). InnerVolumeSpecName "kube-api-access-9746k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:20:15 crc kubenswrapper[4946]: I1128 09:20:15.379148 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca68a9c2-7810-4c5c-9614-3aa70b0dff7f-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "ca68a9c2-7810-4c5c-9614-3aa70b0dff7f" (UID: "ca68a9c2-7810-4c5c-9614-3aa70b0dff7f"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:20:15 crc kubenswrapper[4946]: I1128 09:20:15.394803 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca68a9c2-7810-4c5c-9614-3aa70b0dff7f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ca68a9c2-7810-4c5c-9614-3aa70b0dff7f" (UID: "ca68a9c2-7810-4c5c-9614-3aa70b0dff7f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:20:15 crc kubenswrapper[4946]: I1128 09:20:15.397836 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca68a9c2-7810-4c5c-9614-3aa70b0dff7f-inventory" (OuterVolumeSpecName: "inventory") pod "ca68a9c2-7810-4c5c-9614-3aa70b0dff7f" (UID: "ca68a9c2-7810-4c5c-9614-3aa70b0dff7f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:20:15 crc kubenswrapper[4946]: I1128 09:20:15.461301 4946 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ca68a9c2-7810-4c5c-9614-3aa70b0dff7f-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 09:20:15 crc kubenswrapper[4946]: I1128 09:20:15.461337 4946 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca68a9c2-7810-4c5c-9614-3aa70b0dff7f-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 09:20:15 crc kubenswrapper[4946]: I1128 09:20:15.461351 4946 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca68a9c2-7810-4c5c-9614-3aa70b0dff7f-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 09:20:15 crc kubenswrapper[4946]: I1128 09:20:15.461363 4946 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca68a9c2-7810-4c5c-9614-3aa70b0dff7f-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 09:20:15 crc kubenswrapper[4946]: I1128 09:20:15.461376 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9746k\" (UniqueName: \"kubernetes.io/projected/ca68a9c2-7810-4c5c-9614-3aa70b0dff7f-kube-api-access-9746k\") on node \"crc\" DevicePath \"\"" Nov 28 09:20:15 crc kubenswrapper[4946]: I1128 09:20:15.461388 4946 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca68a9c2-7810-4c5c-9614-3aa70b0dff7f-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 09:20:15 crc kubenswrapper[4946]: I1128 09:20:15.594665 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-networker-b8hjj" event={"ID":"ca68a9c2-7810-4c5c-9614-3aa70b0dff7f","Type":"ContainerDied","Data":"c58dc0c49bccf7530531764ad1e4c7f745ede150cdc081226a2c345d25e3fd04"} Nov 28 09:20:15 crc kubenswrapper[4946]: I1128 09:20:15.594705 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c58dc0c49bccf7530531764ad1e4c7f745ede150cdc081226a2c345d25e3fd04" Nov 28 09:20:15 crc kubenswrapper[4946]: I1128 09:20:15.594771 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-openstack-openstack-networker-b8hjj" Nov 28 09:20:15 crc kubenswrapper[4946]: I1128 09:20:15.774767 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-openstack-openstack-networker-s2jsm"] Nov 28 09:20:15 crc kubenswrapper[4946]: E1128 09:20:15.775553 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca68a9c2-7810-4c5c-9614-3aa70b0dff7f" containerName="install-certs-openstack-openstack-networker" Nov 28 09:20:15 crc kubenswrapper[4946]: I1128 09:20:15.775583 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca68a9c2-7810-4c5c-9614-3aa70b0dff7f" containerName="install-certs-openstack-openstack-networker" Nov 28 09:20:15 crc kubenswrapper[4946]: I1128 09:20:15.775984 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca68a9c2-7810-4c5c-9614-3aa70b0dff7f" containerName="install-certs-openstack-openstack-networker" Nov 28 09:20:15 crc kubenswrapper[4946]: I1128 09:20:15.777351 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-openstack-openstack-networker-s2jsm" Nov 28 09:20:15 crc kubenswrapper[4946]: I1128 09:20:15.779580 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Nov 28 09:20:15 crc kubenswrapper[4946]: I1128 09:20:15.780690 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-networker-dockercfg-7cq8d" Nov 28 09:20:15 crc kubenswrapper[4946]: I1128 09:20:15.785401 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-openstack-openstack-networker-s2jsm"] Nov 28 09:20:15 crc kubenswrapper[4946]: I1128 09:20:15.972256 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/507d2a47-1976-44de-b9d7-ba27223d3441-inventory\") pod \"ovn-openstack-openstack-networker-s2jsm\" (UID: \"507d2a47-1976-44de-b9d7-ba27223d3441\") " pod="openstack/ovn-openstack-openstack-networker-s2jsm" Nov 28 09:20:15 crc kubenswrapper[4946]: I1128 09:20:15.972697 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/507d2a47-1976-44de-b9d7-ba27223d3441-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-networker-s2jsm\" (UID: \"507d2a47-1976-44de-b9d7-ba27223d3441\") " pod="openstack/ovn-openstack-openstack-networker-s2jsm" Nov 28 09:20:15 crc kubenswrapper[4946]: I1128 09:20:15.972763 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/507d2a47-1976-44de-b9d7-ba27223d3441-ssh-key\") pod \"ovn-openstack-openstack-networker-s2jsm\" (UID: \"507d2a47-1976-44de-b9d7-ba27223d3441\") " pod="openstack/ovn-openstack-openstack-networker-s2jsm" Nov 28 09:20:15 crc kubenswrapper[4946]: I1128 09:20:15.972810 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/507d2a47-1976-44de-b9d7-ba27223d3441-ovncontroller-config-0\") pod \"ovn-openstack-openstack-networker-s2jsm\" (UID: \"507d2a47-1976-44de-b9d7-ba27223d3441\") " pod="openstack/ovn-openstack-openstack-networker-s2jsm" Nov 28 09:20:15 crc kubenswrapper[4946]: I1128 09:20:15.972907 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8s4f\" (UniqueName: \"kubernetes.io/projected/507d2a47-1976-44de-b9d7-ba27223d3441-kube-api-access-b8s4f\") pod \"ovn-openstack-openstack-networker-s2jsm\" (UID: \"507d2a47-1976-44de-b9d7-ba27223d3441\") " pod="openstack/ovn-openstack-openstack-networker-s2jsm" Nov 28 09:20:16 crc kubenswrapper[4946]: I1128 09:20:16.075019 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8s4f\" (UniqueName: \"kubernetes.io/projected/507d2a47-1976-44de-b9d7-ba27223d3441-kube-api-access-b8s4f\") pod \"ovn-openstack-openstack-networker-s2jsm\" (UID: \"507d2a47-1976-44de-b9d7-ba27223d3441\") " pod="openstack/ovn-openstack-openstack-networker-s2jsm" Nov 28 09:20:16 crc kubenswrapper[4946]: I1128 09:20:16.075114 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/507d2a47-1976-44de-b9d7-ba27223d3441-inventory\") pod \"ovn-openstack-openstack-networker-s2jsm\" (UID: \"507d2a47-1976-44de-b9d7-ba27223d3441\") " 
pod="openstack/ovn-openstack-openstack-networker-s2jsm" Nov 28 09:20:16 crc kubenswrapper[4946]: I1128 09:20:16.075178 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/507d2a47-1976-44de-b9d7-ba27223d3441-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-networker-s2jsm\" (UID: \"507d2a47-1976-44de-b9d7-ba27223d3441\") " pod="openstack/ovn-openstack-openstack-networker-s2jsm" Nov 28 09:20:16 crc kubenswrapper[4946]: I1128 09:20:16.075218 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/507d2a47-1976-44de-b9d7-ba27223d3441-ssh-key\") pod \"ovn-openstack-openstack-networker-s2jsm\" (UID: \"507d2a47-1976-44de-b9d7-ba27223d3441\") " pod="openstack/ovn-openstack-openstack-networker-s2jsm" Nov 28 09:20:16 crc kubenswrapper[4946]: I1128 09:20:16.075249 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/507d2a47-1976-44de-b9d7-ba27223d3441-ovncontroller-config-0\") pod \"ovn-openstack-openstack-networker-s2jsm\" (UID: \"507d2a47-1976-44de-b9d7-ba27223d3441\") " pod="openstack/ovn-openstack-openstack-networker-s2jsm" Nov 28 09:20:16 crc kubenswrapper[4946]: I1128 09:20:16.076095 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/507d2a47-1976-44de-b9d7-ba27223d3441-ovncontroller-config-0\") pod \"ovn-openstack-openstack-networker-s2jsm\" (UID: \"507d2a47-1976-44de-b9d7-ba27223d3441\") " pod="openstack/ovn-openstack-openstack-networker-s2jsm" Nov 28 09:20:16 crc kubenswrapper[4946]: I1128 09:20:16.080691 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/507d2a47-1976-44de-b9d7-ba27223d3441-inventory\") pod \"ovn-openstack-openstack-networker-s2jsm\" (UID: \"507d2a47-1976-44de-b9d7-ba27223d3441\") " pod="openstack/ovn-openstack-openstack-networker-s2jsm" Nov 28 09:20:16 crc kubenswrapper[4946]: I1128 09:20:16.082135 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/507d2a47-1976-44de-b9d7-ba27223d3441-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-networker-s2jsm\" (UID: \"507d2a47-1976-44de-b9d7-ba27223d3441\") " pod="openstack/ovn-openstack-openstack-networker-s2jsm" Nov 28 09:20:16 crc kubenswrapper[4946]: I1128 09:20:16.091443 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/507d2a47-1976-44de-b9d7-ba27223d3441-ssh-key\") pod \"ovn-openstack-openstack-networker-s2jsm\" (UID: \"507d2a47-1976-44de-b9d7-ba27223d3441\") " pod="openstack/ovn-openstack-openstack-networker-s2jsm" Nov 28 09:20:16 crc kubenswrapper[4946]: I1128 09:20:16.098132 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8s4f\" (UniqueName: \"kubernetes.io/projected/507d2a47-1976-44de-b9d7-ba27223d3441-kube-api-access-b8s4f\") pod \"ovn-openstack-openstack-networker-s2jsm\" (UID: \"507d2a47-1976-44de-b9d7-ba27223d3441\") " pod="openstack/ovn-openstack-openstack-networker-s2jsm" Nov 28 09:20:16 crc kubenswrapper[4946]: I1128 09:20:16.397334 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-openstack-openstack-networker-s2jsm" Nov 28 09:20:17 crc kubenswrapper[4946]: I1128 09:20:17.418239 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-openstack-openstack-networker-s2jsm"] Nov 28 09:20:17 crc kubenswrapper[4946]: I1128 09:20:17.632791 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-networker-s2jsm" event={"ID":"507d2a47-1976-44de-b9d7-ba27223d3441","Type":"ContainerStarted","Data":"c6819d13eabef62dfcb7d8aa8304bcaa660530e40bf1289d0ec860a095b6f4b6"} Nov 28 09:20:18 crc kubenswrapper[4946]: I1128 09:20:18.647720 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-networker-s2jsm" event={"ID":"507d2a47-1976-44de-b9d7-ba27223d3441","Type":"ContainerStarted","Data":"3ccfe698e0ca4f3d67b4a96ff8b92eda1771c8ca705262fc12b1efcefc9424a9"} Nov 28 09:20:18 crc kubenswrapper[4946]: I1128 09:20:18.651676 4946 generic.go:334] "Generic (PLEG): container finished" podID="49f9db9c-f1e9-48d1-80fe-e77b98c6271b" containerID="2df4cdda58a34cba6d093eb515880013391ca434aef5d905721844f05a1bce25" exitCode=0 Nov 28 09:20:18 crc kubenswrapper[4946]: I1128 09:20:18.651720 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-2jvc9" event={"ID":"49f9db9c-f1e9-48d1-80fe-e77b98c6271b","Type":"ContainerDied","Data":"2df4cdda58a34cba6d093eb515880013391ca434aef5d905721844f05a1bce25"} Nov 28 09:20:18 crc kubenswrapper[4946]: I1128 09:20:18.673483 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-openstack-openstack-networker-s2jsm" podStartSLOduration=3.186813242 podStartE2EDuration="3.673440716s" podCreationTimestamp="2025-11-28 09:20:15 +0000 UTC" firstStartedPulling="2025-11-28 09:20:17.425459866 +0000 UTC m=+8871.803524997" lastFinishedPulling="2025-11-28 09:20:17.91208736 +0000 UTC m=+8872.290152471" observedRunningTime="2025-11-28 09:20:18.667638163 +0000 UTC m=+8873.045703304" watchObservedRunningTime="2025-11-28 09:20:18.673440716 +0000 UTC m=+8873.051505867" Nov 28 09:20:20 crc kubenswrapper[4946]: I1128 09:20:20.833076 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-openstack-2jvc9" Nov 28 09:20:20 crc kubenswrapper[4946]: I1128 09:20:20.936173 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-1\" (UniqueName: \"kubernetes.io/secret/49f9db9c-f1e9-48d1-80fe-e77b98c6271b-inventory-1\") pod \"49f9db9c-f1e9-48d1-80fe-e77b98c6271b\" (UID: \"49f9db9c-f1e9-48d1-80fe-e77b98c6271b\") " Nov 28 09:20:20 crc kubenswrapper[4946]: I1128 09:20:20.936353 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/49f9db9c-f1e9-48d1-80fe-e77b98c6271b-ssh-key-openstack-cell1\") pod \"49f9db9c-f1e9-48d1-80fe-e77b98c6271b\" (UID: \"49f9db9c-f1e9-48d1-80fe-e77b98c6271b\") " Nov 28 09:20:20 crc kubenswrapper[4946]: I1128 09:20:20.936400 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-networker\" (UniqueName: \"kubernetes.io/secret/49f9db9c-f1e9-48d1-80fe-e77b98c6271b-ssh-key-openstack-networker\") pod \"49f9db9c-f1e9-48d1-80fe-e77b98c6271b\" (UID: \"49f9db9c-f1e9-48d1-80fe-e77b98c6271b\") " Nov 28 09:20:20 crc kubenswrapper[4946]: I1128 09:20:20.936444 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ft94\" (UniqueName: \"kubernetes.io/projected/49f9db9c-f1e9-48d1-80fe-e77b98c6271b-kube-api-access-5ft94\") pod \"49f9db9c-f1e9-48d1-80fe-e77b98c6271b\" (UID: \"49f9db9c-f1e9-48d1-80fe-e77b98c6271b\") " Nov 28 09:20:20 crc kubenswrapper[4946]: I1128 09:20:20.936499 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/49f9db9c-f1e9-48d1-80fe-e77b98c6271b-ceph\") pod \"49f9db9c-f1e9-48d1-80fe-e77b98c6271b\" (UID: \"49f9db9c-f1e9-48d1-80fe-e77b98c6271b\") " Nov 28 09:20:20 crc kubenswrapper[4946]: I1128 09:20:20.936515 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/49f9db9c-f1e9-48d1-80fe-e77b98c6271b-inventory-0\") pod \"49f9db9c-f1e9-48d1-80fe-e77b98c6271b\" (UID: \"49f9db9c-f1e9-48d1-80fe-e77b98c6271b\") " Nov 28 09:20:20 crc kubenswrapper[4946]: I1128 09:20:20.970487 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49f9db9c-f1e9-48d1-80fe-e77b98c6271b-ceph" (OuterVolumeSpecName: "ceph") pod "49f9db9c-f1e9-48d1-80fe-e77b98c6271b" (UID: "49f9db9c-f1e9-48d1-80fe-e77b98c6271b"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:20:20 crc kubenswrapper[4946]: I1128 09:20:20.985120 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49f9db9c-f1e9-48d1-80fe-e77b98c6271b-kube-api-access-5ft94" (OuterVolumeSpecName: "kube-api-access-5ft94") pod "49f9db9c-f1e9-48d1-80fe-e77b98c6271b" (UID: "49f9db9c-f1e9-48d1-80fe-e77b98c6271b"). InnerVolumeSpecName "kube-api-access-5ft94". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:20:21 crc kubenswrapper[4946]: I1128 09:20:21.041149 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ft94\" (UniqueName: \"kubernetes.io/projected/49f9db9c-f1e9-48d1-80fe-e77b98c6271b-kube-api-access-5ft94\") on node \"crc\" DevicePath \"\"" Nov 28 09:20:21 crc kubenswrapper[4946]: I1128 09:20:21.041190 4946 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/49f9db9c-f1e9-48d1-80fe-e77b98c6271b-ceph\") on node \"crc\" DevicePath \"\"" Nov 28 09:20:21 crc kubenswrapper[4946]: I1128 09:20:21.059385 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49f9db9c-f1e9-48d1-80fe-e77b98c6271b-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "49f9db9c-f1e9-48d1-80fe-e77b98c6271b" (UID: "49f9db9c-f1e9-48d1-80fe-e77b98c6271b"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:20:21 crc kubenswrapper[4946]: I1128 09:20:21.068569 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49f9db9c-f1e9-48d1-80fe-e77b98c6271b-inventory-1" (OuterVolumeSpecName: "inventory-1") pod "49f9db9c-f1e9-48d1-80fe-e77b98c6271b" (UID: "49f9db9c-f1e9-48d1-80fe-e77b98c6271b"). InnerVolumeSpecName "inventory-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:20:21 crc kubenswrapper[4946]: I1128 09:20:21.069074 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49f9db9c-f1e9-48d1-80fe-e77b98c6271b-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "49f9db9c-f1e9-48d1-80fe-e77b98c6271b" (UID: "49f9db9c-f1e9-48d1-80fe-e77b98c6271b"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:20:21 crc kubenswrapper[4946]: I1128 09:20:21.125674 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49f9db9c-f1e9-48d1-80fe-e77b98c6271b-ssh-key-openstack-networker" (OuterVolumeSpecName: "ssh-key-openstack-networker") pod "49f9db9c-f1e9-48d1-80fe-e77b98c6271b" (UID: "49f9db9c-f1e9-48d1-80fe-e77b98c6271b"). InnerVolumeSpecName "ssh-key-openstack-networker". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:20:21 crc kubenswrapper[4946]: I1128 09:20:21.145628 4946 reconciler_common.go:293] "Volume detached for volume \"inventory-1\" (UniqueName: \"kubernetes.io/secret/49f9db9c-f1e9-48d1-80fe-e77b98c6271b-inventory-1\") on node \"crc\" DevicePath \"\"" Nov 28 09:20:21 crc kubenswrapper[4946]: I1128 09:20:21.145799 4946 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/49f9db9c-f1e9-48d1-80fe-e77b98c6271b-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Nov 28 09:20:21 crc kubenswrapper[4946]: I1128 09:20:21.145854 4946 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-networker\" (UniqueName: \"kubernetes.io/secret/49f9db9c-f1e9-48d1-80fe-e77b98c6271b-ssh-key-openstack-networker\") on node \"crc\" DevicePath \"\"" Nov 28 09:20:21 crc kubenswrapper[4946]: I1128 09:20:21.145902 4946 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/49f9db9c-f1e9-48d1-80fe-e77b98c6271b-inventory-0\") on node \"crc\" DevicePath \"\"" Nov 28 09:20:21 crc kubenswrapper[4946]: I1128 09:20:21.685765 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-2jvc9" event={"ID":"49f9db9c-f1e9-48d1-80fe-e77b98c6271b","Type":"ContainerDied","Data":"12f30ee77f759c3bbd1e6ac8e68420ebeee806e0c5f3d2970baa040f8275a547"} Nov 28 09:20:21 crc kubenswrapper[4946]: I1128 09:20:21.685807 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12f30ee77f759c3bbd1e6ac8e68420ebeee806e0c5f3d2970baa040f8275a547" Nov 28 09:20:21 crc kubenswrapper[4946]: I1128 09:20:21.685836 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-openstack-2jvc9" Nov 28 09:20:21 crc kubenswrapper[4946]: I1128 09:20:21.971348 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-openstack-openstack-cell1-tv77g"] Nov 28 09:20:21 crc kubenswrapper[4946]: E1128 09:20:21.972562 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49f9db9c-f1e9-48d1-80fe-e77b98c6271b" containerName="ssh-known-hosts-openstack" Nov 28 09:20:21 crc kubenswrapper[4946]: I1128 09:20:21.972586 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="49f9db9c-f1e9-48d1-80fe-e77b98c6271b" containerName="ssh-known-hosts-openstack" Nov 28 09:20:21 crc kubenswrapper[4946]: I1128 09:20:21.973104 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="49f9db9c-f1e9-48d1-80fe-e77b98c6271b" containerName="ssh-known-hosts-openstack" Nov 28 09:20:21 crc kubenswrapper[4946]: I1128 09:20:21.975162 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-tv77g" Nov 28 09:20:21 crc kubenswrapper[4946]: I1128 09:20:21.979564 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-psw2j" Nov 28 09:20:21 crc kubenswrapper[4946]: I1128 09:20:21.979733 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 28 09:20:22 crc kubenswrapper[4946]: I1128 09:20:22.009293 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-openstack-openstack-cell1-tv77g"] Nov 28 09:20:22 crc kubenswrapper[4946]: I1128 09:20:22.063673 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xvh2\" (UniqueName: \"kubernetes.io/projected/4d4e1e94-5489-464d-9a89-38c5931beaa6-kube-api-access-7xvh2\") pod \"run-os-openstack-openstack-cell1-tv77g\" (UID: \"4d4e1e94-5489-464d-9a89-38c5931beaa6\") " pod="openstack/run-os-openstack-openstack-cell1-tv77g" Nov 28 09:20:22 crc kubenswrapper[4946]: I1128 09:20:22.063747 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4d4e1e94-5489-464d-9a89-38c5931beaa6-inventory\") pod \"run-os-openstack-openstack-cell1-tv77g\" (UID: \"4d4e1e94-5489-464d-9a89-38c5931beaa6\") " pod="openstack/run-os-openstack-openstack-cell1-tv77g" Nov 28 09:20:22 crc kubenswrapper[4946]: I1128 09:20:22.063904 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4d4e1e94-5489-464d-9a89-38c5931beaa6-ceph\") pod \"run-os-openstack-openstack-cell1-tv77g\" (UID: \"4d4e1e94-5489-464d-9a89-38c5931beaa6\") " pod="openstack/run-os-openstack-openstack-cell1-tv77g" Nov 28 09:20:22 crc kubenswrapper[4946]: I1128 09:20:22.063936 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4d4e1e94-5489-464d-9a89-38c5931beaa6-ssh-key\") pod \"run-os-openstack-openstack-cell1-tv77g\" (UID: \"4d4e1e94-5489-464d-9a89-38c5931beaa6\") " pod="openstack/run-os-openstack-openstack-cell1-tv77g" Nov 28 09:20:22 crc kubenswrapper[4946]: I1128 09:20:22.165765 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4d4e1e94-5489-464d-9a89-38c5931beaa6-ceph\") pod \"run-os-openstack-openstack-cell1-tv77g\" (UID: \"4d4e1e94-5489-464d-9a89-38c5931beaa6\") " pod="openstack/run-os-openstack-openstack-cell1-tv77g" Nov 28 09:20:22 crc kubenswrapper[4946]: I1128 09:20:22.166089 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4d4e1e94-5489-464d-9a89-38c5931beaa6-ssh-key\") pod \"run-os-openstack-openstack-cell1-tv77g\" (UID: \"4d4e1e94-5489-464d-9a89-38c5931beaa6\") " pod="openstack/run-os-openstack-openstack-cell1-tv77g" Nov 28 09:20:22 crc kubenswrapper[4946]: I1128 09:20:22.166218 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xvh2\" (UniqueName: \"kubernetes.io/projected/4d4e1e94-5489-464d-9a89-38c5931beaa6-kube-api-access-7xvh2\") pod \"run-os-openstack-openstack-cell1-tv77g\" (UID: \"4d4e1e94-5489-464d-9a89-38c5931beaa6\") " pod="openstack/run-os-openstack-openstack-cell1-tv77g" Nov 28 09:20:22 crc kubenswrapper[4946]: I1128 09:20:22.166255 4946 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4d4e1e94-5489-464d-9a89-38c5931beaa6-inventory\") pod \"run-os-openstack-openstack-cell1-tv77g\" (UID: \"4d4e1e94-5489-464d-9a89-38c5931beaa6\") " pod="openstack/run-os-openstack-openstack-cell1-tv77g" Nov 28 09:20:22 crc kubenswrapper[4946]: I1128 09:20:22.171153 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4d4e1e94-5489-464d-9a89-38c5931beaa6-ssh-key\") pod \"run-os-openstack-openstack-cell1-tv77g\" (UID: \"4d4e1e94-5489-464d-9a89-38c5931beaa6\") " pod="openstack/run-os-openstack-openstack-cell1-tv77g" Nov 28 09:20:22 crc kubenswrapper[4946]: I1128 09:20:22.171642 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4d4e1e94-5489-464d-9a89-38c5931beaa6-inventory\") pod \"run-os-openstack-openstack-cell1-tv77g\" (UID: \"4d4e1e94-5489-464d-9a89-38c5931beaa6\") " pod="openstack/run-os-openstack-openstack-cell1-tv77g" Nov 28 09:20:22 crc kubenswrapper[4946]: I1128 09:20:22.176362 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4d4e1e94-5489-464d-9a89-38c5931beaa6-ceph\") pod \"run-os-openstack-openstack-cell1-tv77g\" (UID: \"4d4e1e94-5489-464d-9a89-38c5931beaa6\") " pod="openstack/run-os-openstack-openstack-cell1-tv77g" Nov 28 09:20:22 crc kubenswrapper[4946]: I1128 09:20:22.182226 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xvh2\" (UniqueName: \"kubernetes.io/projected/4d4e1e94-5489-464d-9a89-38c5931beaa6-kube-api-access-7xvh2\") pod \"run-os-openstack-openstack-cell1-tv77g\" (UID: \"4d4e1e94-5489-464d-9a89-38c5931beaa6\") " pod="openstack/run-os-openstack-openstack-cell1-tv77g" Nov 28 09:20:22 crc kubenswrapper[4946]: I1128 09:20:22.313097 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-tv77g" Nov 28 09:20:22 crc kubenswrapper[4946]: I1128 09:20:22.892167 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-openstack-openstack-cell1-tv77g"] Nov 28 09:20:22 crc kubenswrapper[4946]: W1128 09:20:22.893555 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d4e1e94_5489_464d_9a89_38c5931beaa6.slice/crio-76e8ecdab896cb88925f1348992be0976e0e19f5c51d6acc6b7bc42b6a2a8c85 WatchSource:0}: Error finding container 76e8ecdab896cb88925f1348992be0976e0e19f5c51d6acc6b7bc42b6a2a8c85: Status 404 returned error can't find the container with id 76e8ecdab896cb88925f1348992be0976e0e19f5c51d6acc6b7bc42b6a2a8c85 Nov 28 09:20:22 crc kubenswrapper[4946]: I1128 09:20:22.992745 4946 scope.go:117] "RemoveContainer" containerID="fe24f1531f160069fdd7ae3397845365c158536dbada8db9bc15e63bc09c3d80" Nov 28 09:20:22 crc kubenswrapper[4946]: E1128 09:20:22.993388 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:20:23 crc kubenswrapper[4946]: I1128 09:20:23.711818 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-tv77g" event={"ID":"4d4e1e94-5489-464d-9a89-38c5931beaa6","Type":"ContainerStarted","Data":"76e8ecdab896cb88925f1348992be0976e0e19f5c51d6acc6b7bc42b6a2a8c85"} Nov 28 09:20:24 crc kubenswrapper[4946]: I1128 09:20:24.725797 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-tv77g" event={"ID":"4d4e1e94-5489-464d-9a89-38c5931beaa6","Type":"ContainerStarted","Data":"644b5a52c5c45e97d9fc2e2b58a43f5bbdaf0b37cbc71bd523d2554f0d57ac2b"} Nov 28 09:20:24 crc kubenswrapper[4946]: I1128 09:20:24.746933 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-openstack-openstack-cell1-tv77g" podStartSLOduration=3.180669259 podStartE2EDuration="3.746910658s" podCreationTimestamp="2025-11-28 09:20:21 +0000 UTC" firstStartedPulling="2025-11-28 09:20:22.898767226 +0000 UTC m=+8877.276832327" lastFinishedPulling="2025-11-28 09:20:23.465008605 +0000 UTC m=+8877.843073726" observedRunningTime="2025-11-28 09:20:24.74135145 +0000 UTC m=+8879.119416571" watchObservedRunningTime="2025-11-28 09:20:24.746910658 +0000 UTC m=+8879.124975769" Nov 28 09:20:33 crc kubenswrapper[4946]: I1128 09:20:33.815899 4946 generic.go:334] "Generic (PLEG): container finished" podID="4d4e1e94-5489-464d-9a89-38c5931beaa6" containerID="644b5a52c5c45e97d9fc2e2b58a43f5bbdaf0b37cbc71bd523d2554f0d57ac2b" exitCode=0 Nov 28 09:20:33 crc kubenswrapper[4946]: I1128 09:20:33.815962 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-tv77g" event={"ID":"4d4e1e94-5489-464d-9a89-38c5931beaa6","Type":"ContainerDied","Data":"644b5a52c5c45e97d9fc2e2b58a43f5bbdaf0b37cbc71bd523d2554f0d57ac2b"} Nov 28 09:20:35 crc kubenswrapper[4946]: I1128 09:20:35.385225 4946 util.go:48] "No ready sandbox for pod can be found. 
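
The machine-config-daemon lines recurring through this stretch are the kubelet declining to restart a crash-looping container until its back-off window expires; "back-off 5m0s" indicates the delay has reached the cap. The manager.go "Failed to process watch event ... Status 404" warning just before it is usually benign: cAdvisor notices the new crio-... cgroup before CRI-O has finished registering the container. A small sketch of the commonly documented back-off schedule; the 10s base and 5m cap are kubelet defaults assumed here, not values this log states directly:

```python
# CrashLoopBackOff delay schedule as commonly documented for the kubelet:
# start at 10s, double per restart, cap at 5m (hence "back-off 5m0s" above).
def backoff_delays(base=10, cap=300):
    d = base
    while d < cap:
        yield d
        d *= 2
    yield cap

print(list(backoff_delays()))  # [10, 20, 40, 80, 160, 300]
```
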
Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-tv77g" Nov 28 09:20:35 crc kubenswrapper[4946]: I1128 09:20:35.451421 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4d4e1e94-5489-464d-9a89-38c5931beaa6-ssh-key\") pod \"4d4e1e94-5489-464d-9a89-38c5931beaa6\" (UID: \"4d4e1e94-5489-464d-9a89-38c5931beaa6\") " Nov 28 09:20:35 crc kubenswrapper[4946]: I1128 09:20:35.451671 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4d4e1e94-5489-464d-9a89-38c5931beaa6-ceph\") pod \"4d4e1e94-5489-464d-9a89-38c5931beaa6\" (UID: \"4d4e1e94-5489-464d-9a89-38c5931beaa6\") " Nov 28 09:20:35 crc kubenswrapper[4946]: I1128 09:20:35.451984 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4d4e1e94-5489-464d-9a89-38c5931beaa6-inventory\") pod \"4d4e1e94-5489-464d-9a89-38c5931beaa6\" (UID: \"4d4e1e94-5489-464d-9a89-38c5931beaa6\") " Nov 28 09:20:35 crc kubenswrapper[4946]: I1128 09:20:35.452078 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xvh2\" (UniqueName: \"kubernetes.io/projected/4d4e1e94-5489-464d-9a89-38c5931beaa6-kube-api-access-7xvh2\") pod \"4d4e1e94-5489-464d-9a89-38c5931beaa6\" (UID: \"4d4e1e94-5489-464d-9a89-38c5931beaa6\") " Nov 28 09:20:35 crc kubenswrapper[4946]: I1128 09:20:35.461141 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d4e1e94-5489-464d-9a89-38c5931beaa6-ceph" (OuterVolumeSpecName: "ceph") pod "4d4e1e94-5489-464d-9a89-38c5931beaa6" (UID: "4d4e1e94-5489-464d-9a89-38c5931beaa6"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:20:35 crc kubenswrapper[4946]: I1128 09:20:35.472408 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d4e1e94-5489-464d-9a89-38c5931beaa6-kube-api-access-7xvh2" (OuterVolumeSpecName: "kube-api-access-7xvh2") pod "4d4e1e94-5489-464d-9a89-38c5931beaa6" (UID: "4d4e1e94-5489-464d-9a89-38c5931beaa6"). InnerVolumeSpecName "kube-api-access-7xvh2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:20:35 crc kubenswrapper[4946]: I1128 09:20:35.502569 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d4e1e94-5489-464d-9a89-38c5931beaa6-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4d4e1e94-5489-464d-9a89-38c5931beaa6" (UID: "4d4e1e94-5489-464d-9a89-38c5931beaa6"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:20:35 crc kubenswrapper[4946]: I1128 09:20:35.511022 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d4e1e94-5489-464d-9a89-38c5931beaa6-inventory" (OuterVolumeSpecName: "inventory") pod "4d4e1e94-5489-464d-9a89-38c5931beaa6" (UID: "4d4e1e94-5489-464d-9a89-38c5931beaa6"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:20:35 crc kubenswrapper[4946]: I1128 09:20:35.554530 4946 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4d4e1e94-5489-464d-9a89-38c5931beaa6-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 09:20:35 crc kubenswrapper[4946]: I1128 09:20:35.554657 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xvh2\" (UniqueName: \"kubernetes.io/projected/4d4e1e94-5489-464d-9a89-38c5931beaa6-kube-api-access-7xvh2\") on node \"crc\" DevicePath \"\"" Nov 28 09:20:35 crc kubenswrapper[4946]: I1128 09:20:35.554748 4946 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4d4e1e94-5489-464d-9a89-38c5931beaa6-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 09:20:35 crc kubenswrapper[4946]: I1128 09:20:35.554821 4946 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4d4e1e94-5489-464d-9a89-38c5931beaa6-ceph\") on node \"crc\" DevicePath \"\"" Nov 28 09:20:35 crc kubenswrapper[4946]: I1128 09:20:35.840507 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-tv77g" event={"ID":"4d4e1e94-5489-464d-9a89-38c5931beaa6","Type":"ContainerDied","Data":"76e8ecdab896cb88925f1348992be0976e0e19f5c51d6acc6b7bc42b6a2a8c85"} Nov 28 09:20:35 crc kubenswrapper[4946]: I1128 09:20:35.840566 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-tv77g" Nov 28 09:20:35 crc kubenswrapper[4946]: I1128 09:20:35.840598 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76e8ecdab896cb88925f1348992be0976e0e19f5c51d6acc6b7bc42b6a2a8c85" Nov 28 09:20:35 crc kubenswrapper[4946]: I1128 09:20:35.941372 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-bwtxd"] Nov 28 09:20:35 crc kubenswrapper[4946]: E1128 09:20:35.942105 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d4e1e94-5489-464d-9a89-38c5931beaa6" containerName="run-os-openstack-openstack-cell1" Nov 28 09:20:35 crc kubenswrapper[4946]: I1128 09:20:35.942125 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d4e1e94-5489-464d-9a89-38c5931beaa6" containerName="run-os-openstack-openstack-cell1" Nov 28 09:20:35 crc kubenswrapper[4946]: I1128 09:20:35.942384 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d4e1e94-5489-464d-9a89-38c5931beaa6" containerName="run-os-openstack-openstack-cell1" Nov 28 09:20:35 crc kubenswrapper[4946]: I1128 09:20:35.943326 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-bwtxd" Nov 28 09:20:35 crc kubenswrapper[4946]: I1128 09:20:35.949943 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 28 09:20:35 crc kubenswrapper[4946]: I1128 09:20:35.950076 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-psw2j" Nov 28 09:20:35 crc kubenswrapper[4946]: I1128 09:20:35.953898 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-bwtxd"] Nov 28 09:20:36 crc kubenswrapper[4946]: I1128 09:20:36.064120 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518-ceph\") pod \"reboot-os-openstack-openstack-cell1-bwtxd\" (UID: \"4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518\") " pod="openstack/reboot-os-openstack-openstack-cell1-bwtxd" Nov 28 09:20:36 crc kubenswrapper[4946]: I1128 09:20:36.064339 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518-ssh-key\") pod \"reboot-os-openstack-openstack-cell1-bwtxd\" (UID: \"4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518\") " pod="openstack/reboot-os-openstack-openstack-cell1-bwtxd" Nov 28 09:20:36 crc kubenswrapper[4946]: I1128 09:20:36.064452 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518-inventory\") pod \"reboot-os-openstack-openstack-cell1-bwtxd\" (UID: \"4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518\") " pod="openstack/reboot-os-openstack-openstack-cell1-bwtxd" Nov 28 09:20:36 crc kubenswrapper[4946]: I1128 09:20:36.064554 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54bzt\" (UniqueName: \"kubernetes.io/projected/4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518-kube-api-access-54bzt\") pod \"reboot-os-openstack-openstack-cell1-bwtxd\" (UID: \"4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518\") " pod="openstack/reboot-os-openstack-openstack-cell1-bwtxd" Nov 28 09:20:36 crc kubenswrapper[4946]: I1128 09:20:36.166625 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518-inventory\") pod \"reboot-os-openstack-openstack-cell1-bwtxd\" (UID: \"4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518\") " pod="openstack/reboot-os-openstack-openstack-cell1-bwtxd" Nov 28 09:20:36 crc kubenswrapper[4946]: I1128 09:20:36.166725 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54bzt\" (UniqueName: \"kubernetes.io/projected/4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518-kube-api-access-54bzt\") pod \"reboot-os-openstack-openstack-cell1-bwtxd\" (UID: \"4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518\") " pod="openstack/reboot-os-openstack-openstack-cell1-bwtxd" Nov 28 09:20:36 crc kubenswrapper[4946]: I1128 09:20:36.166754 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518-ceph\") pod \"reboot-os-openstack-openstack-cell1-bwtxd\" (UID: \"4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518\") " pod="openstack/reboot-os-openstack-openstack-cell1-bwtxd" Nov 28 09:20:36 crc 
kubenswrapper[4946]: I1128 09:20:36.166888 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518-ssh-key\") pod \"reboot-os-openstack-openstack-cell1-bwtxd\" (UID: \"4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518\") " pod="openstack/reboot-os-openstack-openstack-cell1-bwtxd" Nov 28 09:20:36 crc kubenswrapper[4946]: I1128 09:20:36.172998 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518-ceph\") pod \"reboot-os-openstack-openstack-cell1-bwtxd\" (UID: \"4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518\") " pod="openstack/reboot-os-openstack-openstack-cell1-bwtxd" Nov 28 09:20:36 crc kubenswrapper[4946]: I1128 09:20:36.173127 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518-ssh-key\") pod \"reboot-os-openstack-openstack-cell1-bwtxd\" (UID: \"4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518\") " pod="openstack/reboot-os-openstack-openstack-cell1-bwtxd" Nov 28 09:20:36 crc kubenswrapper[4946]: I1128 09:20:36.188752 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518-inventory\") pod \"reboot-os-openstack-openstack-cell1-bwtxd\" (UID: \"4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518\") " pod="openstack/reboot-os-openstack-openstack-cell1-bwtxd" Nov 28 09:20:36 crc kubenswrapper[4946]: I1128 09:20:36.198707 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54bzt\" (UniqueName: \"kubernetes.io/projected/4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518-kube-api-access-54bzt\") pod \"reboot-os-openstack-openstack-cell1-bwtxd\" (UID: \"4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518\") " pod="openstack/reboot-os-openstack-openstack-cell1-bwtxd" Nov 28 09:20:36 crc kubenswrapper[4946]: I1128 09:20:36.270716 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-bwtxd" Nov 28 09:20:36 crc kubenswrapper[4946]: I1128 09:20:36.838051 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-bwtxd"] Nov 28 09:20:36 crc kubenswrapper[4946]: I1128 09:20:36.854770 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-bwtxd" event={"ID":"4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518","Type":"ContainerStarted","Data":"9972b7bd0fd522169cb6b6cfa73eae3a635dbc662a4033bd6c298256fb88558e"} Nov 28 09:20:36 crc kubenswrapper[4946]: I1128 09:20:36.989623 4946 scope.go:117] "RemoveContainer" containerID="fe24f1531f160069fdd7ae3397845365c158536dbada8db9bc15e63bc09c3d80" Nov 28 09:20:36 crc kubenswrapper[4946]: E1128 09:20:36.989905 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:20:37 crc kubenswrapper[4946]: I1128 09:20:37.868814 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-bwtxd" event={"ID":"4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518","Type":"ContainerStarted","Data":"4aa64cad9ba3d3c764969a290d84b8e666b6ac9c0574bd2f79995e9c6d952ce3"} Nov 28 09:20:37 crc kubenswrapper[4946]: I1128 09:20:37.890954 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-openstack-openstack-cell1-bwtxd" podStartSLOduration=2.299352672 podStartE2EDuration="2.890931989s" podCreationTimestamp="2025-11-28 09:20:35 +0000 UTC" firstStartedPulling="2025-11-28 09:20:36.840492125 +0000 UTC m=+8891.218557236" lastFinishedPulling="2025-11-28 09:20:37.432071402 +0000 UTC m=+8891.810136553" observedRunningTime="2025-11-28 09:20:37.889970145 +0000 UTC m=+8892.268035276" watchObservedRunningTime="2025-11-28 09:20:37.890931989 +0000 UTC m=+8892.268997110" Nov 28 09:20:50 crc kubenswrapper[4946]: I1128 09:20:50.989387 4946 scope.go:117] "RemoveContainer" containerID="fe24f1531f160069fdd7ae3397845365c158536dbada8db9bc15e63bc09c3d80" Nov 28 09:20:50 crc kubenswrapper[4946]: E1128 09:20:50.990207 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:20:52 crc kubenswrapper[4946]: I1128 09:20:52.032081 4946 generic.go:334] "Generic (PLEG): container finished" podID="4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518" containerID="4aa64cad9ba3d3c764969a290d84b8e666b6ac9c0574bd2f79995e9c6d952ce3" exitCode=0 Nov 28 09:20:52 crc kubenswrapper[4946]: I1128 09:20:52.032128 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-bwtxd" event={"ID":"4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518","Type":"ContainerDied","Data":"4aa64cad9ba3d3c764969a290d84b8e666b6ac9c0574bd2f79995e9c6d952ce3"} Nov 28 09:20:53 crc kubenswrapper[4946]: I1128 09:20:53.474258 4946 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-bwtxd" Nov 28 09:20:53 crc kubenswrapper[4946]: I1128 09:20:53.551450 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54bzt\" (UniqueName: \"kubernetes.io/projected/4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518-kube-api-access-54bzt\") pod \"4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518\" (UID: \"4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518\") " Nov 28 09:20:53 crc kubenswrapper[4946]: I1128 09:20:53.551670 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518-ssh-key\") pod \"4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518\" (UID: \"4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518\") " Nov 28 09:20:53 crc kubenswrapper[4946]: I1128 09:20:53.551761 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518-inventory\") pod \"4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518\" (UID: \"4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518\") " Nov 28 09:20:53 crc kubenswrapper[4946]: I1128 09:20:53.552037 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518-ceph\") pod \"4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518\" (UID: \"4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518\") " Nov 28 09:20:53 crc kubenswrapper[4946]: I1128 09:20:53.560617 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518-ceph" (OuterVolumeSpecName: "ceph") pod "4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518" (UID: "4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:20:53 crc kubenswrapper[4946]: I1128 09:20:53.560710 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518-kube-api-access-54bzt" (OuterVolumeSpecName: "kube-api-access-54bzt") pod "4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518" (UID: "4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518"). InnerVolumeSpecName "kube-api-access-54bzt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:20:53 crc kubenswrapper[4946]: I1128 09:20:53.581879 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518" (UID: "4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:20:53 crc kubenswrapper[4946]: I1128 09:20:53.584571 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518-inventory" (OuterVolumeSpecName: "inventory") pod "4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518" (UID: "4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:20:53 crc kubenswrapper[4946]: I1128 09:20:53.654522 4946 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518-ceph\") on node \"crc\" DevicePath \"\"" Nov 28 09:20:53 crc kubenswrapper[4946]: I1128 09:20:53.654559 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54bzt\" (UniqueName: \"kubernetes.io/projected/4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518-kube-api-access-54bzt\") on node \"crc\" DevicePath \"\"" Nov 28 09:20:53 crc kubenswrapper[4946]: I1128 09:20:53.654569 4946 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 09:20:53 crc kubenswrapper[4946]: I1128 09:20:53.654577 4946 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 09:20:54 crc kubenswrapper[4946]: I1128 09:20:54.058706 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-bwtxd" event={"ID":"4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518","Type":"ContainerDied","Data":"9972b7bd0fd522169cb6b6cfa73eae3a635dbc662a4033bd6c298256fb88558e"} Nov 28 09:20:54 crc kubenswrapper[4946]: I1128 09:20:54.058756 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9972b7bd0fd522169cb6b6cfa73eae3a635dbc662a4033bd6c298256fb88558e" Nov 28 09:20:54 crc kubenswrapper[4946]: I1128 09:20:54.058789 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-bwtxd" Nov 28 09:20:54 crc kubenswrapper[4946]: I1128 09:20:54.143747 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-qkmnt"] Nov 28 09:20:54 crc kubenswrapper[4946]: E1128 09:20:54.144294 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518" containerName="reboot-os-openstack-openstack-cell1" Nov 28 09:20:54 crc kubenswrapper[4946]: I1128 09:20:54.144315 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518" containerName="reboot-os-openstack-openstack-cell1" Nov 28 09:20:54 crc kubenswrapper[4946]: I1128 09:20:54.144627 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518" containerName="reboot-os-openstack-openstack-cell1" Nov 28 09:20:54 crc kubenswrapper[4946]: I1128 09:20:54.145627 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-qkmnt" Nov 28 09:20:54 crc kubenswrapper[4946]: I1128 09:20:54.148404 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-psw2j" Nov 28 09:20:54 crc kubenswrapper[4946]: I1128 09:20:54.149292 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 28 09:20:54 crc kubenswrapper[4946]: I1128 09:20:54.161420 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-qkmnt"] Nov 28 09:20:54 crc kubenswrapper[4946]: I1128 09:20:54.166902 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-qkmnt\" (UID: \"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\") " pod="openstack/install-certs-openstack-openstack-cell1-qkmnt" Nov 28 09:20:54 crc kubenswrapper[4946]: I1128 09:20:54.166986 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-qkmnt\" (UID: \"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\") " pod="openstack/install-certs-openstack-openstack-cell1-qkmnt" Nov 28 09:20:54 crc kubenswrapper[4946]: I1128 09:20:54.167047 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-qkmnt\" (UID: \"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\") " pod="openstack/install-certs-openstack-openstack-cell1-qkmnt" Nov 28 09:20:54 crc kubenswrapper[4946]: I1128 09:20:54.167077 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-qkmnt\" (UID: \"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\") " pod="openstack/install-certs-openstack-openstack-cell1-qkmnt" Nov 28 09:20:54 crc kubenswrapper[4946]: I1128 09:20:54.167104 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db566\" (UniqueName: \"kubernetes.io/projected/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-kube-api-access-db566\") pod \"install-certs-openstack-openstack-cell1-qkmnt\" (UID: \"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\") " pod="openstack/install-certs-openstack-openstack-cell1-qkmnt" Nov 28 09:20:54 crc kubenswrapper[4946]: I1128 09:20:54.167145 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-qkmnt\" (UID: \"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\") " pod="openstack/install-certs-openstack-openstack-cell1-qkmnt" Nov 28 09:20:54 crc kubenswrapper[4946]: I1128 09:20:54.167181 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-inventory\") pod \"install-certs-openstack-openstack-cell1-qkmnt\" (UID: \"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\") " pod="openstack/install-certs-openstack-openstack-cell1-qkmnt" Nov 28 09:20:54 crc kubenswrapper[4946]: I1128 09:20:54.167224 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-ssh-key\") pod \"install-certs-openstack-openstack-cell1-qkmnt\" (UID: \"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\") " pod="openstack/install-certs-openstack-openstack-cell1-qkmnt" Nov 28 09:20:54 crc kubenswrapper[4946]: I1128 09:20:54.167265 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-qkmnt\" (UID: \"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\") " pod="openstack/install-certs-openstack-openstack-cell1-qkmnt" Nov 28 09:20:54 crc kubenswrapper[4946]: I1128 09:20:54.167294 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-ceph\") pod \"install-certs-openstack-openstack-cell1-qkmnt\" (UID: \"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\") " pod="openstack/install-certs-openstack-openstack-cell1-qkmnt" Nov 28 09:20:54 crc kubenswrapper[4946]: I1128 09:20:54.167335 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-qkmnt\" (UID: \"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\") " pod="openstack/install-certs-openstack-openstack-cell1-qkmnt" Nov 28 09:20:54 crc kubenswrapper[4946]: I1128 09:20:54.167368 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-qkmnt\" (UID: \"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\") " pod="openstack/install-certs-openstack-openstack-cell1-qkmnt" Nov 28 09:20:54 crc kubenswrapper[4946]: I1128 09:20:54.269081 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-qkmnt\" (UID: \"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\") " pod="openstack/install-certs-openstack-openstack-cell1-qkmnt" Nov 28 09:20:54 crc kubenswrapper[4946]: I1128 09:20:54.269147 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-ceph\") pod \"install-certs-openstack-openstack-cell1-qkmnt\" (UID: \"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\") " pod="openstack/install-certs-openstack-openstack-cell1-qkmnt" Nov 28 09:20:54 crc kubenswrapper[4946]: I1128 09:20:54.269188 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-qkmnt\" (UID: \"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\") " pod="openstack/install-certs-openstack-openstack-cell1-qkmnt" Nov 28 09:20:54 crc kubenswrapper[4946]: I1128 09:20:54.269230 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-qkmnt\" (UID: \"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\") " pod="openstack/install-certs-openstack-openstack-cell1-qkmnt" Nov 28 09:20:54 crc kubenswrapper[4946]: I1128 09:20:54.269366 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-qkmnt\" (UID: \"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\") " pod="openstack/install-certs-openstack-openstack-cell1-qkmnt" Nov 28 09:20:54 crc kubenswrapper[4946]: I1128 09:20:54.269402 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-qkmnt\" (UID: \"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\") " pod="openstack/install-certs-openstack-openstack-cell1-qkmnt" Nov 28 09:20:54 crc kubenswrapper[4946]: I1128 09:20:54.269447 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-qkmnt\" (UID: \"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\") " pod="openstack/install-certs-openstack-openstack-cell1-qkmnt" Nov 28 09:20:54 crc kubenswrapper[4946]: I1128 09:20:54.269489 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-qkmnt\" (UID: \"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\") " pod="openstack/install-certs-openstack-openstack-cell1-qkmnt" Nov 28 09:20:54 crc kubenswrapper[4946]: I1128 09:20:54.269516 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-db566\" (UniqueName: \"kubernetes.io/projected/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-kube-api-access-db566\") pod \"install-certs-openstack-openstack-cell1-qkmnt\" (UID: \"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\") " pod="openstack/install-certs-openstack-openstack-cell1-qkmnt" Nov 28 09:20:54 crc kubenswrapper[4946]: I1128 09:20:54.269556 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-qkmnt\" (UID: \"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\") " pod="openstack/install-certs-openstack-openstack-cell1-qkmnt" Nov 28 09:20:54 crc kubenswrapper[4946]: I1128 09:20:54.269583 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-inventory\") pod \"install-certs-openstack-openstack-cell1-qkmnt\" (UID: \"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\") " pod="openstack/install-certs-openstack-openstack-cell1-qkmnt" Nov 28 09:20:54 crc kubenswrapper[4946]: I1128 09:20:54.269627 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-ssh-key\") pod \"install-certs-openstack-openstack-cell1-qkmnt\" (UID: \"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\") " pod="openstack/install-certs-openstack-openstack-cell1-qkmnt" Nov 28 09:20:54 crc kubenswrapper[4946]: I1128 09:20:54.275814 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-qkmnt\" (UID: \"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\") " pod="openstack/install-certs-openstack-openstack-cell1-qkmnt" Nov 28 09:20:54 crc kubenswrapper[4946]: I1128 09:20:54.275978 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-ceph\") pod \"install-certs-openstack-openstack-cell1-qkmnt\" (UID: \"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\") " pod="openstack/install-certs-openstack-openstack-cell1-qkmnt" Nov 28 09:20:54 crc kubenswrapper[4946]: I1128 09:20:54.276446 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-qkmnt\" (UID: \"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\") " pod="openstack/install-certs-openstack-openstack-cell1-qkmnt" Nov 28 09:20:54 crc kubenswrapper[4946]: I1128 09:20:54.276497 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-qkmnt\" (UID: \"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\") " pod="openstack/install-certs-openstack-openstack-cell1-qkmnt" Nov 28 09:20:54 crc kubenswrapper[4946]: I1128 09:20:54.276626 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-ssh-key\") pod \"install-certs-openstack-openstack-cell1-qkmnt\" (UID: \"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\") " pod="openstack/install-certs-openstack-openstack-cell1-qkmnt" Nov 28 09:20:54 crc kubenswrapper[4946]: I1128 09:20:54.276726 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-qkmnt\" (UID: \"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\") " pod="openstack/install-certs-openstack-openstack-cell1-qkmnt" Nov 28 09:20:54 crc kubenswrapper[4946]: I1128 09:20:54.278063 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-ovn-combined-ca-bundle\") pod 
\"install-certs-openstack-openstack-cell1-qkmnt\" (UID: \"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\") " pod="openstack/install-certs-openstack-openstack-cell1-qkmnt" Nov 28 09:20:54 crc kubenswrapper[4946]: I1128 09:20:54.278244 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-qkmnt\" (UID: \"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\") " pod="openstack/install-certs-openstack-openstack-cell1-qkmnt" Nov 28 09:20:54 crc kubenswrapper[4946]: I1128 09:20:54.279195 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-inventory\") pod \"install-certs-openstack-openstack-cell1-qkmnt\" (UID: \"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\") " pod="openstack/install-certs-openstack-openstack-cell1-qkmnt" Nov 28 09:20:54 crc kubenswrapper[4946]: I1128 09:20:54.283157 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-qkmnt\" (UID: \"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\") " pod="openstack/install-certs-openstack-openstack-cell1-qkmnt" Nov 28 09:20:54 crc kubenswrapper[4946]: I1128 09:20:54.290377 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-qkmnt\" (UID: \"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\") " pod="openstack/install-certs-openstack-openstack-cell1-qkmnt" Nov 28 09:20:54 crc kubenswrapper[4946]: I1128 09:20:54.292985 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-db566\" (UniqueName: \"kubernetes.io/projected/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-kube-api-access-db566\") pod \"install-certs-openstack-openstack-cell1-qkmnt\" (UID: \"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\") " pod="openstack/install-certs-openstack-openstack-cell1-qkmnt" Nov 28 09:20:54 crc kubenswrapper[4946]: I1128 09:20:54.465684 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-qkmnt" Nov 28 09:20:55 crc kubenswrapper[4946]: I1128 09:20:55.096080 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-qkmnt"] Nov 28 09:20:56 crc kubenswrapper[4946]: I1128 09:20:56.081099 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-qkmnt" event={"ID":"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68","Type":"ContainerStarted","Data":"75cb2a8f01d978c72ce6610f123c4d1d81cfd455f37f97b44fe383ff4c19373c"} Nov 28 09:20:56 crc kubenswrapper[4946]: I1128 09:20:56.081602 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-qkmnt" event={"ID":"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68","Type":"ContainerStarted","Data":"4e474d79b4050ee0eb76e9e6aba79384b4a79447ec5d37e3ed4d936f9b2da331"} Nov 28 09:20:56 crc kubenswrapper[4946]: I1128 09:20:56.109679 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-openstack-openstack-cell1-qkmnt" podStartSLOduration=1.481633568 podStartE2EDuration="2.109649069s" podCreationTimestamp="2025-11-28 09:20:54 +0000 UTC" firstStartedPulling="2025-11-28 09:20:55.100259152 +0000 UTC m=+8909.478324263" lastFinishedPulling="2025-11-28 09:20:55.728274653 +0000 UTC m=+8910.106339764" observedRunningTime="2025-11-28 09:20:56.103743062 +0000 UTC m=+8910.481808183" watchObservedRunningTime="2025-11-28 09:20:56.109649069 +0000 UTC m=+8910.487714190" Nov 28 09:21:03 crc kubenswrapper[4946]: I1128 09:21:03.990326 4946 scope.go:117] "RemoveContainer" containerID="fe24f1531f160069fdd7ae3397845365c158536dbada8db9bc15e63bc09c3d80" Nov 28 09:21:03 crc kubenswrapper[4946]: E1128 09:21:03.991237 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:21:10 crc kubenswrapper[4946]: I1128 09:21:10.410203 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8ktgg"] Nov 28 09:21:10 crc kubenswrapper[4946]: I1128 09:21:10.413924 4946 util.go:30] "No sandbox for pod can be found. 
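
Several "Observed pod startup duration" entries appear in this section, and by the identity checked earlier the image-pull window is simply podStartE2EDuration minus podStartSLOduration. A rough summarizer under that assumption; the regex is fitted to the field order in these lines and may need adjusting for other kubelet versions:

```python
import re
import sys

LAT = re.compile(
    r'pod="([^"]+)".*podStartSLOduration=([\d.]+).*podStartE2EDuration="([\d.]+)s"')

for line in sys.stdin:
    if (m := LAT.search(line)):
        pod, slo, e2e = m.group(1), float(m.group(2)), float(m.group(3))
        pull = e2e - slo
        print(f"{pod}: e2e={e2e:.3f}s pull={pull:.3f}s ({100 * pull / e2e:.0f}%)")

# e.g. the reboot-os entry above: e2e=2.891s, pull=0.592s, about 20% of startup.
```
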
Nov 28 09:21:10 crc kubenswrapper[4946]: I1128 09:21:10.430518 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8ktgg"]
Nov 28 09:21:10 crc kubenswrapper[4946]: I1128 09:21:10.476521 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b346138c-a278-493a-831c-6538fc81e1b6-utilities\") pod \"community-operators-8ktgg\" (UID: \"b346138c-a278-493a-831c-6538fc81e1b6\") " pod="openshift-marketplace/community-operators-8ktgg"
Nov 28 09:21:10 crc kubenswrapper[4946]: I1128 09:21:10.476653 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gfhh\" (UniqueName: \"kubernetes.io/projected/b346138c-a278-493a-831c-6538fc81e1b6-kube-api-access-4gfhh\") pod \"community-operators-8ktgg\" (UID: \"b346138c-a278-493a-831c-6538fc81e1b6\") " pod="openshift-marketplace/community-operators-8ktgg"
Nov 28 09:21:10 crc kubenswrapper[4946]: I1128 09:21:10.476699 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b346138c-a278-493a-831c-6538fc81e1b6-catalog-content\") pod \"community-operators-8ktgg\" (UID: \"b346138c-a278-493a-831c-6538fc81e1b6\") " pod="openshift-marketplace/community-operators-8ktgg"
Nov 28 09:21:10 crc kubenswrapper[4946]: I1128 09:21:10.578503 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b346138c-a278-493a-831c-6538fc81e1b6-utilities\") pod \"community-operators-8ktgg\" (UID: \"b346138c-a278-493a-831c-6538fc81e1b6\") " pod="openshift-marketplace/community-operators-8ktgg"
Nov 28 09:21:10 crc kubenswrapper[4946]: I1128 09:21:10.578864 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gfhh\" (UniqueName: \"kubernetes.io/projected/b346138c-a278-493a-831c-6538fc81e1b6-kube-api-access-4gfhh\") pod \"community-operators-8ktgg\" (UID: \"b346138c-a278-493a-831c-6538fc81e1b6\") " pod="openshift-marketplace/community-operators-8ktgg"
Nov 28 09:21:10 crc kubenswrapper[4946]: I1128 09:21:10.578977 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b346138c-a278-493a-831c-6538fc81e1b6-catalog-content\") pod \"community-operators-8ktgg\" (UID: \"b346138c-a278-493a-831c-6538fc81e1b6\") " pod="openshift-marketplace/community-operators-8ktgg"
Nov 28 09:21:10 crc kubenswrapper[4946]: I1128 09:21:10.579561 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b346138c-a278-493a-831c-6538fc81e1b6-catalog-content\") pod \"community-operators-8ktgg\" (UID: \"b346138c-a278-493a-831c-6538fc81e1b6\") " pod="openshift-marketplace/community-operators-8ktgg"
Nov 28 09:21:10 crc kubenswrapper[4946]: I1128 09:21:10.579881 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b346138c-a278-493a-831c-6538fc81e1b6-utilities\") pod \"community-operators-8ktgg\" (UID: \"b346138c-a278-493a-831c-6538fc81e1b6\") " pod="openshift-marketplace/community-operators-8ktgg"
Nov 28 09:21:10 crc kubenswrapper[4946]: I1128 09:21:10.601698 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gfhh\" (UniqueName: \"kubernetes.io/projected/b346138c-a278-493a-831c-6538fc81e1b6-kube-api-access-4gfhh\") pod \"community-operators-8ktgg\" (UID: \"b346138c-a278-493a-831c-6538fc81e1b6\") " pod="openshift-marketplace/community-operators-8ktgg"
"MountVolume.SetUp succeeded for volume \"kube-api-access-4gfhh\" (UniqueName: \"kubernetes.io/projected/b346138c-a278-493a-831c-6538fc81e1b6-kube-api-access-4gfhh\") pod \"community-operators-8ktgg\" (UID: \"b346138c-a278-493a-831c-6538fc81e1b6\") " pod="openshift-marketplace/community-operators-8ktgg" Nov 28 09:21:10 crc kubenswrapper[4946]: I1128 09:21:10.750866 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8ktgg" Nov 28 09:21:11 crc kubenswrapper[4946]: I1128 09:21:11.311299 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8ktgg"] Nov 28 09:21:12 crc kubenswrapper[4946]: I1128 09:21:12.248143 4946 generic.go:334] "Generic (PLEG): container finished" podID="b346138c-a278-493a-831c-6538fc81e1b6" containerID="caee0f6988b02b674b204ffbb602a114751afab6a71ee344f715992c6e9b5de7" exitCode=0 Nov 28 09:21:12 crc kubenswrapper[4946]: I1128 09:21:12.248804 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8ktgg" event={"ID":"b346138c-a278-493a-831c-6538fc81e1b6","Type":"ContainerDied","Data":"caee0f6988b02b674b204ffbb602a114751afab6a71ee344f715992c6e9b5de7"} Nov 28 09:21:12 crc kubenswrapper[4946]: I1128 09:21:12.248839 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8ktgg" event={"ID":"b346138c-a278-493a-831c-6538fc81e1b6","Type":"ContainerStarted","Data":"a72045ba5d65b1fb69319e1991372860321a0f3f9cc63c0a69c0cbd05daa584f"} Nov 28 09:21:14 crc kubenswrapper[4946]: I1128 09:21:14.273675 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8ktgg" event={"ID":"b346138c-a278-493a-831c-6538fc81e1b6","Type":"ContainerStarted","Data":"c5e3a01c966378a65ce8f2726d6476de1e79627b57e1a91dc2d51c3b5494c8e2"} Nov 28 09:21:15 crc kubenswrapper[4946]: I1128 09:21:15.286603 4946 generic.go:334] "Generic (PLEG): container finished" podID="b346138c-a278-493a-831c-6538fc81e1b6" containerID="c5e3a01c966378a65ce8f2726d6476de1e79627b57e1a91dc2d51c3b5494c8e2" exitCode=0 Nov 28 09:21:15 crc kubenswrapper[4946]: I1128 09:21:15.286641 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8ktgg" event={"ID":"b346138c-a278-493a-831c-6538fc81e1b6","Type":"ContainerDied","Data":"c5e3a01c966378a65ce8f2726d6476de1e79627b57e1a91dc2d51c3b5494c8e2"} Nov 28 09:21:16 crc kubenswrapper[4946]: I1128 09:21:16.299381 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8ktgg" event={"ID":"b346138c-a278-493a-831c-6538fc81e1b6","Type":"ContainerStarted","Data":"600a0e0624e6c30082340c2baf709b75b35adea066c1645bd01700ee85ca5880"} Nov 28 09:21:16 crc kubenswrapper[4946]: I1128 09:21:16.324108 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8ktgg" podStartSLOduration=2.806964426 podStartE2EDuration="6.324085307s" podCreationTimestamp="2025-11-28 09:21:10 +0000 UTC" firstStartedPulling="2025-11-28 09:21:12.251990917 +0000 UTC m=+8926.630056028" lastFinishedPulling="2025-11-28 09:21:15.769111798 +0000 UTC m=+8930.147176909" observedRunningTime="2025-11-28 09:21:16.322626331 +0000 UTC m=+8930.700691452" watchObservedRunningTime="2025-11-28 09:21:16.324085307 +0000 UTC m=+8930.702150428" Nov 28 09:21:17 crc kubenswrapper[4946]: I1128 09:21:17.990347 4946 scope.go:117] "RemoveContainer" 
containerID="fe24f1531f160069fdd7ae3397845365c158536dbada8db9bc15e63bc09c3d80" Nov 28 09:21:17 crc kubenswrapper[4946]: E1128 09:21:17.991923 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:21:18 crc kubenswrapper[4946]: I1128 09:21:18.320793 4946 generic.go:334] "Generic (PLEG): container finished" podID="72059c4b-03bd-4ec9-ba8a-5bca8ee58f68" containerID="75cb2a8f01d978c72ce6610f123c4d1d81cfd455f37f97b44fe383ff4c19373c" exitCode=0 Nov 28 09:21:18 crc kubenswrapper[4946]: I1128 09:21:18.320871 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-qkmnt" event={"ID":"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68","Type":"ContainerDied","Data":"75cb2a8f01d978c72ce6610f123c4d1d81cfd455f37f97b44fe383ff4c19373c"} Nov 28 09:21:19 crc kubenswrapper[4946]: I1128 09:21:19.957704 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-qkmnt" Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.018018 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-ssh-key\") pod \"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\" (UID: \"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\") " Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.018056 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-neutron-dhcp-combined-ca-bundle\") pod \"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\" (UID: \"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\") " Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.018116 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-libvirt-combined-ca-bundle\") pod \"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\" (UID: \"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\") " Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.018182 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-neutron-metadata-combined-ca-bundle\") pod \"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\" (UID: \"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\") " Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.018227 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-telemetry-combined-ca-bundle\") pod \"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\" (UID: \"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\") " Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.018324 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-inventory\") pod \"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\" (UID: 
\"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\") " Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.018357 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-nova-combined-ca-bundle\") pod \"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\" (UID: \"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\") " Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.018397 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-db566\" (UniqueName: \"kubernetes.io/projected/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-kube-api-access-db566\") pod \"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\" (UID: \"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\") " Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.018415 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-ceph\") pod \"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\" (UID: \"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\") " Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.018448 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-bootstrap-combined-ca-bundle\") pod \"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\" (UID: \"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\") " Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.018516 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-neutron-sriov-combined-ca-bundle\") pod \"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\" (UID: \"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\") " Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.018573 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-ovn-combined-ca-bundle\") pod \"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\" (UID: \"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68\") " Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.026934 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "72059c4b-03bd-4ec9-ba8a-5bca8ee58f68" (UID: "72059c4b-03bd-4ec9-ba8a-5bca8ee58f68"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.026993 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "72059c4b-03bd-4ec9-ba8a-5bca8ee58f68" (UID: "72059c4b-03bd-4ec9-ba8a-5bca8ee58f68"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.027238 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "72059c4b-03bd-4ec9-ba8a-5bca8ee58f68" (UID: "72059c4b-03bd-4ec9-ba8a-5bca8ee58f68"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.027292 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "72059c4b-03bd-4ec9-ba8a-5bca8ee58f68" (UID: "72059c4b-03bd-4ec9-ba8a-5bca8ee58f68"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.027775 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-kube-api-access-db566" (OuterVolumeSpecName: "kube-api-access-db566") pod "72059c4b-03bd-4ec9-ba8a-5bca8ee58f68" (UID: "72059c4b-03bd-4ec9-ba8a-5bca8ee58f68"). InnerVolumeSpecName "kube-api-access-db566". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.028686 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-neutron-dhcp-combined-ca-bundle" (OuterVolumeSpecName: "neutron-dhcp-combined-ca-bundle") pod "72059c4b-03bd-4ec9-ba8a-5bca8ee58f68" (UID: "72059c4b-03bd-4ec9-ba8a-5bca8ee58f68"). InnerVolumeSpecName "neutron-dhcp-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.039422 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "72059c4b-03bd-4ec9-ba8a-5bca8ee58f68" (UID: "72059c4b-03bd-4ec9-ba8a-5bca8ee58f68"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.041446 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-neutron-sriov-combined-ca-bundle" (OuterVolumeSpecName: "neutron-sriov-combined-ca-bundle") pod "72059c4b-03bd-4ec9-ba8a-5bca8ee58f68" (UID: "72059c4b-03bd-4ec9-ba8a-5bca8ee58f68"). InnerVolumeSpecName "neutron-sriov-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.041625 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-ceph" (OuterVolumeSpecName: "ceph") pod "72059c4b-03bd-4ec9-ba8a-5bca8ee58f68" (UID: "72059c4b-03bd-4ec9-ba8a-5bca8ee58f68"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.042666 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "72059c4b-03bd-4ec9-ba8a-5bca8ee58f68" (UID: "72059c4b-03bd-4ec9-ba8a-5bca8ee58f68"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.057669 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-inventory" (OuterVolumeSpecName: "inventory") pod "72059c4b-03bd-4ec9-ba8a-5bca8ee58f68" (UID: "72059c4b-03bd-4ec9-ba8a-5bca8ee58f68"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.060061 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "72059c4b-03bd-4ec9-ba8a-5bca8ee58f68" (UID: "72059c4b-03bd-4ec9-ba8a-5bca8ee58f68"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.120425 4946 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.120454 4946 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.120583 4946 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.120592 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-db566\" (UniqueName: \"kubernetes.io/projected/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-kube-api-access-db566\") on node \"crc\" DevicePath \"\"" Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.120600 4946 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-ceph\") on node \"crc\" DevicePath \"\"" Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.120610 4946 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.120618 4946 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-neutron-sriov-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.120627 4946 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-ovn-combined-ca-bundle\") on 
node \"crc\" DevicePath \"\"" Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.120635 4946 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.120644 4946 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-neutron-dhcp-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.120654 4946 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.120663 4946 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72059c4b-03bd-4ec9-ba8a-5bca8ee58f68-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.345637 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-qkmnt" event={"ID":"72059c4b-03bd-4ec9-ba8a-5bca8ee58f68","Type":"ContainerDied","Data":"4e474d79b4050ee0eb76e9e6aba79384b4a79447ec5d37e3ed4d936f9b2da331"} Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.345953 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-qkmnt" Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.346604 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e474d79b4050ee0eb76e9e6aba79384b4a79447ec5d37e3ed4d936f9b2da331" Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.458532 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-client-openstack-openstack-cell1-4sfrp"] Nov 28 09:21:20 crc kubenswrapper[4946]: E1128 09:21:20.459161 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72059c4b-03bd-4ec9-ba8a-5bca8ee58f68" containerName="install-certs-openstack-openstack-cell1" Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.459184 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="72059c4b-03bd-4ec9-ba8a-5bca8ee58f68" containerName="install-certs-openstack-openstack-cell1" Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.459510 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="72059c4b-03bd-4ec9-ba8a-5bca8ee58f68" containerName="install-certs-openstack-openstack-cell1" Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.460577 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-openstack-openstack-cell1-4sfrp" Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.464559 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-openstack-openstack-cell1-4sfrp"] Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.464949 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-psw2j" Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.465095 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.534972 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b27ef5d-f120-4a55-85ab-1299d601e069-inventory\") pod \"ceph-client-openstack-openstack-cell1-4sfrp\" (UID: \"8b27ef5d-f120-4a55-85ab-1299d601e069\") " pod="openstack/ceph-client-openstack-openstack-cell1-4sfrp" Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.535211 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8b27ef5d-f120-4a55-85ab-1299d601e069-ssh-key\") pod \"ceph-client-openstack-openstack-cell1-4sfrp\" (UID: \"8b27ef5d-f120-4a55-85ab-1299d601e069\") " pod="openstack/ceph-client-openstack-openstack-cell1-4sfrp" Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.535340 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlsw8\" (UniqueName: \"kubernetes.io/projected/8b27ef5d-f120-4a55-85ab-1299d601e069-kube-api-access-wlsw8\") pod \"ceph-client-openstack-openstack-cell1-4sfrp\" (UID: \"8b27ef5d-f120-4a55-85ab-1299d601e069\") " pod="openstack/ceph-client-openstack-openstack-cell1-4sfrp" Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.535449 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8b27ef5d-f120-4a55-85ab-1299d601e069-ceph\") pod \"ceph-client-openstack-openstack-cell1-4sfrp\" (UID: \"8b27ef5d-f120-4a55-85ab-1299d601e069\") " pod="openstack/ceph-client-openstack-openstack-cell1-4sfrp" Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.637039 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlsw8\" (UniqueName: \"kubernetes.io/projected/8b27ef5d-f120-4a55-85ab-1299d601e069-kube-api-access-wlsw8\") pod \"ceph-client-openstack-openstack-cell1-4sfrp\" (UID: \"8b27ef5d-f120-4a55-85ab-1299d601e069\") " pod="openstack/ceph-client-openstack-openstack-cell1-4sfrp" Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.637181 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8b27ef5d-f120-4a55-85ab-1299d601e069-ceph\") pod \"ceph-client-openstack-openstack-cell1-4sfrp\" (UID: \"8b27ef5d-f120-4a55-85ab-1299d601e069\") " pod="openstack/ceph-client-openstack-openstack-cell1-4sfrp" Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.637359 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b27ef5d-f120-4a55-85ab-1299d601e069-inventory\") pod \"ceph-client-openstack-openstack-cell1-4sfrp\" (UID: \"8b27ef5d-f120-4a55-85ab-1299d601e069\") " 
pod="openstack/ceph-client-openstack-openstack-cell1-4sfrp" Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.637389 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8b27ef5d-f120-4a55-85ab-1299d601e069-ssh-key\") pod \"ceph-client-openstack-openstack-cell1-4sfrp\" (UID: \"8b27ef5d-f120-4a55-85ab-1299d601e069\") " pod="openstack/ceph-client-openstack-openstack-cell1-4sfrp" Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.641877 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8b27ef5d-f120-4a55-85ab-1299d601e069-ssh-key\") pod \"ceph-client-openstack-openstack-cell1-4sfrp\" (UID: \"8b27ef5d-f120-4a55-85ab-1299d601e069\") " pod="openstack/ceph-client-openstack-openstack-cell1-4sfrp" Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.642036 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b27ef5d-f120-4a55-85ab-1299d601e069-inventory\") pod \"ceph-client-openstack-openstack-cell1-4sfrp\" (UID: \"8b27ef5d-f120-4a55-85ab-1299d601e069\") " pod="openstack/ceph-client-openstack-openstack-cell1-4sfrp" Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.647513 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8b27ef5d-f120-4a55-85ab-1299d601e069-ceph\") pod \"ceph-client-openstack-openstack-cell1-4sfrp\" (UID: \"8b27ef5d-f120-4a55-85ab-1299d601e069\") " pod="openstack/ceph-client-openstack-openstack-cell1-4sfrp" Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.658401 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlsw8\" (UniqueName: \"kubernetes.io/projected/8b27ef5d-f120-4a55-85ab-1299d601e069-kube-api-access-wlsw8\") pod \"ceph-client-openstack-openstack-cell1-4sfrp\" (UID: \"8b27ef5d-f120-4a55-85ab-1299d601e069\") " pod="openstack/ceph-client-openstack-openstack-cell1-4sfrp" Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.751671 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8ktgg" Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.751731 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8ktgg" Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.783807 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-openstack-openstack-cell1-4sfrp" Nov 28 09:21:20 crc kubenswrapper[4946]: I1128 09:21:20.839951 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8ktgg" Nov 28 09:21:21 crc kubenswrapper[4946]: I1128 09:21:21.419184 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8ktgg" Nov 28 09:21:21 crc kubenswrapper[4946]: W1128 09:21:21.458956 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b27ef5d_f120_4a55_85ab_1299d601e069.slice/crio-71545dde586e7cb449c8fba5b3a46314cd971aee39d32e81d931b085d6fd61e3 WatchSource:0}: Error finding container 71545dde586e7cb449c8fba5b3a46314cd971aee39d32e81d931b085d6fd61e3: Status 404 returned error can't find the container with id 71545dde586e7cb449c8fba5b3a46314cd971aee39d32e81d931b085d6fd61e3 Nov 28 09:21:21 crc kubenswrapper[4946]: I1128 09:21:21.469299 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-openstack-openstack-cell1-4sfrp"] Nov 28 09:21:21 crc kubenswrapper[4946]: I1128 09:21:21.480773 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8ktgg"] Nov 28 09:21:22 crc kubenswrapper[4946]: I1128 09:21:22.369417 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-openstack-openstack-cell1-4sfrp" event={"ID":"8b27ef5d-f120-4a55-85ab-1299d601e069","Type":"ContainerStarted","Data":"1132b73ff9fd4311df5654152812b22780ea79f96fa5283ab09b29dbb104c4a1"} Nov 28 09:21:22 crc kubenswrapper[4946]: I1128 09:21:22.369857 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-openstack-openstack-cell1-4sfrp" event={"ID":"8b27ef5d-f120-4a55-85ab-1299d601e069","Type":"ContainerStarted","Data":"71545dde586e7cb449c8fba5b3a46314cd971aee39d32e81d931b085d6fd61e3"} Nov 28 09:21:22 crc kubenswrapper[4946]: I1128 09:21:22.407067 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-client-openstack-openstack-cell1-4sfrp" podStartSLOduration=1.95521424 podStartE2EDuration="2.407043823s" podCreationTimestamp="2025-11-28 09:21:20 +0000 UTC" firstStartedPulling="2025-11-28 09:21:21.461373567 +0000 UTC m=+8935.839438678" lastFinishedPulling="2025-11-28 09:21:21.91320315 +0000 UTC m=+8936.291268261" observedRunningTime="2025-11-28 09:21:22.391558569 +0000 UTC m=+8936.769623690" watchObservedRunningTime="2025-11-28 09:21:22.407043823 +0000 UTC m=+8936.785108934" Nov 28 09:21:23 crc kubenswrapper[4946]: I1128 09:21:23.378841 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8ktgg" podUID="b346138c-a278-493a-831c-6538fc81e1b6" containerName="registry-server" containerID="cri-o://600a0e0624e6c30082340c2baf709b75b35adea066c1645bd01700ee85ca5880" gracePeriod=2 Nov 28 09:21:24 crc kubenswrapper[4946]: I1128 09:21:24.948186 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8ktgg" Nov 28 09:21:25 crc kubenswrapper[4946]: I1128 09:21:25.045272 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b346138c-a278-493a-831c-6538fc81e1b6-utilities\") pod \"b346138c-a278-493a-831c-6538fc81e1b6\" (UID: \"b346138c-a278-493a-831c-6538fc81e1b6\") " Nov 28 09:21:25 crc kubenswrapper[4946]: I1128 09:21:25.045429 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gfhh\" (UniqueName: \"kubernetes.io/projected/b346138c-a278-493a-831c-6538fc81e1b6-kube-api-access-4gfhh\") pod \"b346138c-a278-493a-831c-6538fc81e1b6\" (UID: \"b346138c-a278-493a-831c-6538fc81e1b6\") " Nov 28 09:21:25 crc kubenswrapper[4946]: I1128 09:21:25.045563 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b346138c-a278-493a-831c-6538fc81e1b6-catalog-content\") pod \"b346138c-a278-493a-831c-6538fc81e1b6\" (UID: \"b346138c-a278-493a-831c-6538fc81e1b6\") " Nov 28 09:21:25 crc kubenswrapper[4946]: I1128 09:21:25.051122 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b346138c-a278-493a-831c-6538fc81e1b6-utilities" (OuterVolumeSpecName: "utilities") pod "b346138c-a278-493a-831c-6538fc81e1b6" (UID: "b346138c-a278-493a-831c-6538fc81e1b6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 09:21:25 crc kubenswrapper[4946]: I1128 09:21:25.055075 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b346138c-a278-493a-831c-6538fc81e1b6-kube-api-access-4gfhh" (OuterVolumeSpecName: "kube-api-access-4gfhh") pod "b346138c-a278-493a-831c-6538fc81e1b6" (UID: "b346138c-a278-493a-831c-6538fc81e1b6"). InnerVolumeSpecName "kube-api-access-4gfhh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:21:25 crc kubenswrapper[4946]: I1128 09:21:25.118741 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b346138c-a278-493a-831c-6538fc81e1b6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b346138c-a278-493a-831c-6538fc81e1b6" (UID: "b346138c-a278-493a-831c-6538fc81e1b6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 09:21:25 crc kubenswrapper[4946]: I1128 09:21:25.148434 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b346138c-a278-493a-831c-6538fc81e1b6-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 09:21:25 crc kubenswrapper[4946]: I1128 09:21:25.148516 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b346138c-a278-493a-831c-6538fc81e1b6-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 09:21:25 crc kubenswrapper[4946]: I1128 09:21:25.148538 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gfhh\" (UniqueName: \"kubernetes.io/projected/b346138c-a278-493a-831c-6538fc81e1b6-kube-api-access-4gfhh\") on node \"crc\" DevicePath \"\"" Nov 28 09:21:25 crc kubenswrapper[4946]: I1128 09:21:25.405422 4946 generic.go:334] "Generic (PLEG): container finished" podID="b346138c-a278-493a-831c-6538fc81e1b6" containerID="600a0e0624e6c30082340c2baf709b75b35adea066c1645bd01700ee85ca5880" exitCode=0 Nov 28 09:21:25 crc kubenswrapper[4946]: I1128 09:21:25.405529 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8ktgg" event={"ID":"b346138c-a278-493a-831c-6538fc81e1b6","Type":"ContainerDied","Data":"600a0e0624e6c30082340c2baf709b75b35adea066c1645bd01700ee85ca5880"} Nov 28 09:21:25 crc kubenswrapper[4946]: I1128 09:21:25.405897 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8ktgg" event={"ID":"b346138c-a278-493a-831c-6538fc81e1b6","Type":"ContainerDied","Data":"a72045ba5d65b1fb69319e1991372860321a0f3f9cc63c0a69c0cbd05daa584f"} Nov 28 09:21:25 crc kubenswrapper[4946]: I1128 09:21:25.405917 4946 scope.go:117] "RemoveContainer" containerID="600a0e0624e6c30082340c2baf709b75b35adea066c1645bd01700ee85ca5880" Nov 28 09:21:25 crc kubenswrapper[4946]: I1128 09:21:25.405558 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8ktgg" Nov 28 09:21:25 crc kubenswrapper[4946]: I1128 09:21:25.467687 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8ktgg"] Nov 28 09:21:25 crc kubenswrapper[4946]: I1128 09:21:25.469654 4946 scope.go:117] "RemoveContainer" containerID="c5e3a01c966378a65ce8f2726d6476de1e79627b57e1a91dc2d51c3b5494c8e2" Nov 28 09:21:25 crc kubenswrapper[4946]: I1128 09:21:25.481263 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8ktgg"] Nov 28 09:21:25 crc kubenswrapper[4946]: I1128 09:21:25.523522 4946 scope.go:117] "RemoveContainer" containerID="caee0f6988b02b674b204ffbb602a114751afab6a71ee344f715992c6e9b5de7" Nov 28 09:21:26 crc kubenswrapper[4946]: I1128 09:21:26.004391 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b346138c-a278-493a-831c-6538fc81e1b6" path="/var/lib/kubelet/pods/b346138c-a278-493a-831c-6538fc81e1b6/volumes" Nov 28 09:21:26 crc kubenswrapper[4946]: I1128 09:21:26.170921 4946 scope.go:117] "RemoveContainer" containerID="600a0e0624e6c30082340c2baf709b75b35adea066c1645bd01700ee85ca5880" Nov 28 09:21:26 crc kubenswrapper[4946]: E1128 09:21:26.173903 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"600a0e0624e6c30082340c2baf709b75b35adea066c1645bd01700ee85ca5880\": container with ID starting with 600a0e0624e6c30082340c2baf709b75b35adea066c1645bd01700ee85ca5880 not found: ID does not exist" containerID="600a0e0624e6c30082340c2baf709b75b35adea066c1645bd01700ee85ca5880" Nov 28 09:21:26 crc kubenswrapper[4946]: I1128 09:21:26.173950 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"600a0e0624e6c30082340c2baf709b75b35adea066c1645bd01700ee85ca5880"} err="failed to get container status \"600a0e0624e6c30082340c2baf709b75b35adea066c1645bd01700ee85ca5880\": rpc error: code = NotFound desc = could not find container \"600a0e0624e6c30082340c2baf709b75b35adea066c1645bd01700ee85ca5880\": container with ID starting with 600a0e0624e6c30082340c2baf709b75b35adea066c1645bd01700ee85ca5880 not found: ID does not exist" Nov 28 09:21:26 crc kubenswrapper[4946]: I1128 09:21:26.173984 4946 scope.go:117] "RemoveContainer" containerID="c5e3a01c966378a65ce8f2726d6476de1e79627b57e1a91dc2d51c3b5494c8e2" Nov 28 09:21:26 crc kubenswrapper[4946]: E1128 09:21:26.174387 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5e3a01c966378a65ce8f2726d6476de1e79627b57e1a91dc2d51c3b5494c8e2\": container with ID starting with c5e3a01c966378a65ce8f2726d6476de1e79627b57e1a91dc2d51c3b5494c8e2 not found: ID does not exist" containerID="c5e3a01c966378a65ce8f2726d6476de1e79627b57e1a91dc2d51c3b5494c8e2" Nov 28 09:21:26 crc kubenswrapper[4946]: I1128 09:21:26.174433 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5e3a01c966378a65ce8f2726d6476de1e79627b57e1a91dc2d51c3b5494c8e2"} err="failed to get container status \"c5e3a01c966378a65ce8f2726d6476de1e79627b57e1a91dc2d51c3b5494c8e2\": rpc error: code = NotFound desc = could not find container \"c5e3a01c966378a65ce8f2726d6476de1e79627b57e1a91dc2d51c3b5494c8e2\": container with ID starting with c5e3a01c966378a65ce8f2726d6476de1e79627b57e1a91dc2d51c3b5494c8e2 not found: ID does not exist" Nov 28 09:21:26 crc kubenswrapper[4946]: I1128 
Nov 28 09:21:26 crc kubenswrapper[4946]: E1128 09:21:26.174955 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"caee0f6988b02b674b204ffbb602a114751afab6a71ee344f715992c6e9b5de7\": container with ID starting with caee0f6988b02b674b204ffbb602a114751afab6a71ee344f715992c6e9b5de7 not found: ID does not exist" containerID="caee0f6988b02b674b204ffbb602a114751afab6a71ee344f715992c6e9b5de7"
Nov 28 09:21:26 crc kubenswrapper[4946]: I1128 09:21:26.175041 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"caee0f6988b02b674b204ffbb602a114751afab6a71ee344f715992c6e9b5de7"} err="failed to get container status \"caee0f6988b02b674b204ffbb602a114751afab6a71ee344f715992c6e9b5de7\": rpc error: code = NotFound desc = could not find container \"caee0f6988b02b674b204ffbb602a114751afab6a71ee344f715992c6e9b5de7\": container with ID starting with caee0f6988b02b674b204ffbb602a114751afab6a71ee344f715992c6e9b5de7 not found: ID does not exist"
Nov 28 09:21:28 crc kubenswrapper[4946]: I1128 09:21:28.990237 4946 scope.go:117] "RemoveContainer" containerID="fe24f1531f160069fdd7ae3397845365c158536dbada8db9bc15e63bc09c3d80"
Nov 28 09:21:28 crc kubenswrapper[4946]: E1128 09:21:28.991088 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
Nov 28 09:21:29 crc kubenswrapper[4946]: I1128 09:21:29.472981 4946 generic.go:334] "Generic (PLEG): container finished" podID="8b27ef5d-f120-4a55-85ab-1299d601e069" containerID="1132b73ff9fd4311df5654152812b22780ea79f96fa5283ab09b29dbb104c4a1" exitCode=0
Nov 28 09:21:29 crc kubenswrapper[4946]: I1128 09:21:29.473080 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-openstack-openstack-cell1-4sfrp" event={"ID":"8b27ef5d-f120-4a55-85ab-1299d601e069","Type":"ContainerDied","Data":"1132b73ff9fd4311df5654152812b22780ea79f96fa5283ab09b29dbb104c4a1"}
Nov 28 09:21:31 crc kubenswrapper[4946]: I1128 09:21:31.010251 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-openstack-openstack-cell1-4sfrp"
Need to start a new one" pod="openstack/ceph-client-openstack-openstack-cell1-4sfrp" Nov 28 09:21:31 crc kubenswrapper[4946]: I1128 09:21:31.084768 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8b27ef5d-f120-4a55-85ab-1299d601e069-ssh-key\") pod \"8b27ef5d-f120-4a55-85ab-1299d601e069\" (UID: \"8b27ef5d-f120-4a55-85ab-1299d601e069\") " Nov 28 09:21:31 crc kubenswrapper[4946]: I1128 09:21:31.084851 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlsw8\" (UniqueName: \"kubernetes.io/projected/8b27ef5d-f120-4a55-85ab-1299d601e069-kube-api-access-wlsw8\") pod \"8b27ef5d-f120-4a55-85ab-1299d601e069\" (UID: \"8b27ef5d-f120-4a55-85ab-1299d601e069\") " Nov 28 09:21:31 crc kubenswrapper[4946]: I1128 09:21:31.085006 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b27ef5d-f120-4a55-85ab-1299d601e069-inventory\") pod \"8b27ef5d-f120-4a55-85ab-1299d601e069\" (UID: \"8b27ef5d-f120-4a55-85ab-1299d601e069\") " Nov 28 09:21:31 crc kubenswrapper[4946]: I1128 09:21:31.085179 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8b27ef5d-f120-4a55-85ab-1299d601e069-ceph\") pod \"8b27ef5d-f120-4a55-85ab-1299d601e069\" (UID: \"8b27ef5d-f120-4a55-85ab-1299d601e069\") " Nov 28 09:21:31 crc kubenswrapper[4946]: I1128 09:21:31.103778 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b27ef5d-f120-4a55-85ab-1299d601e069-kube-api-access-wlsw8" (OuterVolumeSpecName: "kube-api-access-wlsw8") pod "8b27ef5d-f120-4a55-85ab-1299d601e069" (UID: "8b27ef5d-f120-4a55-85ab-1299d601e069"). InnerVolumeSpecName "kube-api-access-wlsw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:21:31 crc kubenswrapper[4946]: I1128 09:21:31.104930 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b27ef5d-f120-4a55-85ab-1299d601e069-ceph" (OuterVolumeSpecName: "ceph") pod "8b27ef5d-f120-4a55-85ab-1299d601e069" (UID: "8b27ef5d-f120-4a55-85ab-1299d601e069"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:21:31 crc kubenswrapper[4946]: I1128 09:21:31.124604 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b27ef5d-f120-4a55-85ab-1299d601e069-inventory" (OuterVolumeSpecName: "inventory") pod "8b27ef5d-f120-4a55-85ab-1299d601e069" (UID: "8b27ef5d-f120-4a55-85ab-1299d601e069"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:21:31 crc kubenswrapper[4946]: I1128 09:21:31.149689 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b27ef5d-f120-4a55-85ab-1299d601e069-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "8b27ef5d-f120-4a55-85ab-1299d601e069" (UID: "8b27ef5d-f120-4a55-85ab-1299d601e069"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:21:31 crc kubenswrapper[4946]: I1128 09:21:31.188326 4946 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8b27ef5d-f120-4a55-85ab-1299d601e069-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 09:21:31 crc kubenswrapper[4946]: I1128 09:21:31.188359 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlsw8\" (UniqueName: \"kubernetes.io/projected/8b27ef5d-f120-4a55-85ab-1299d601e069-kube-api-access-wlsw8\") on node \"crc\" DevicePath \"\"" Nov 28 09:21:31 crc kubenswrapper[4946]: I1128 09:21:31.188370 4946 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b27ef5d-f120-4a55-85ab-1299d601e069-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 09:21:31 crc kubenswrapper[4946]: I1128 09:21:31.188378 4946 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8b27ef5d-f120-4a55-85ab-1299d601e069-ceph\") on node \"crc\" DevicePath \"\"" Nov 28 09:21:31 crc kubenswrapper[4946]: I1128 09:21:31.495059 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-openstack-openstack-cell1-4sfrp" event={"ID":"8b27ef5d-f120-4a55-85ab-1299d601e069","Type":"ContainerDied","Data":"71545dde586e7cb449c8fba5b3a46314cd971aee39d32e81d931b085d6fd61e3"} Nov 28 09:21:31 crc kubenswrapper[4946]: I1128 09:21:31.495331 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71545dde586e7cb449c8fba5b3a46314cd971aee39d32e81d931b085d6fd61e3" Nov 28 09:21:31 crc kubenswrapper[4946]: I1128 09:21:31.495119 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-openstack-openstack-cell1-4sfrp" Nov 28 09:21:31 crc kubenswrapper[4946]: I1128 09:21:31.578989 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-openstack-openstack-cell1-79jwx"] Nov 28 09:21:31 crc kubenswrapper[4946]: E1128 09:21:31.579437 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b346138c-a278-493a-831c-6538fc81e1b6" containerName="registry-server" Nov 28 09:21:31 crc kubenswrapper[4946]: I1128 09:21:31.579456 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="b346138c-a278-493a-831c-6538fc81e1b6" containerName="registry-server" Nov 28 09:21:31 crc kubenswrapper[4946]: E1128 09:21:31.579499 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b346138c-a278-493a-831c-6538fc81e1b6" containerName="extract-content" Nov 28 09:21:31 crc kubenswrapper[4946]: I1128 09:21:31.579510 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="b346138c-a278-493a-831c-6538fc81e1b6" containerName="extract-content" Nov 28 09:21:31 crc kubenswrapper[4946]: E1128 09:21:31.579590 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b346138c-a278-493a-831c-6538fc81e1b6" containerName="extract-utilities" Nov 28 09:21:31 crc kubenswrapper[4946]: I1128 09:21:31.579600 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="b346138c-a278-493a-831c-6538fc81e1b6" containerName="extract-utilities" Nov 28 09:21:31 crc kubenswrapper[4946]: E1128 09:21:31.579636 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b27ef5d-f120-4a55-85ab-1299d601e069" containerName="ceph-client-openstack-openstack-cell1" Nov 28 09:21:31 crc kubenswrapper[4946]: I1128 09:21:31.579645 4946 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8b27ef5d-f120-4a55-85ab-1299d601e069" containerName="ceph-client-openstack-openstack-cell1" Nov 28 09:21:31 crc kubenswrapper[4946]: I1128 09:21:31.579879 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b27ef5d-f120-4a55-85ab-1299d601e069" containerName="ceph-client-openstack-openstack-cell1" Nov 28 09:21:31 crc kubenswrapper[4946]: I1128 09:21:31.579918 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="b346138c-a278-493a-831c-6538fc81e1b6" containerName="registry-server" Nov 28 09:21:31 crc kubenswrapper[4946]: I1128 09:21:31.582946 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-79jwx" Nov 28 09:21:31 crc kubenswrapper[4946]: I1128 09:21:31.585554 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-psw2j" Nov 28 09:21:31 crc kubenswrapper[4946]: I1128 09:21:31.590111 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 28 09:21:31 crc kubenswrapper[4946]: I1128 09:21:31.601323 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-openstack-openstack-cell1-79jwx"] Nov 28 09:21:31 crc kubenswrapper[4946]: I1128 09:21:31.704648 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/86c84ce0-94fa-40be-8bcc-e6408a9e4411-ceph\") pod \"ovn-openstack-openstack-cell1-79jwx\" (UID: \"86c84ce0-94fa-40be-8bcc-e6408a9e4411\") " pod="openstack/ovn-openstack-openstack-cell1-79jwx" Nov 28 09:21:31 crc kubenswrapper[4946]: I1128 09:21:31.704718 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/86c84ce0-94fa-40be-8bcc-e6408a9e4411-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-79jwx\" (UID: \"86c84ce0-94fa-40be-8bcc-e6408a9e4411\") " pod="openstack/ovn-openstack-openstack-cell1-79jwx" Nov 28 09:21:31 crc kubenswrapper[4946]: I1128 09:21:31.704753 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86c84ce0-94fa-40be-8bcc-e6408a9e4411-inventory\") pod \"ovn-openstack-openstack-cell1-79jwx\" (UID: \"86c84ce0-94fa-40be-8bcc-e6408a9e4411\") " pod="openstack/ovn-openstack-openstack-cell1-79jwx" Nov 28 09:21:31 crc kubenswrapper[4946]: I1128 09:21:31.704788 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/86c84ce0-94fa-40be-8bcc-e6408a9e4411-ssh-key\") pod \"ovn-openstack-openstack-cell1-79jwx\" (UID: \"86c84ce0-94fa-40be-8bcc-e6408a9e4411\") " pod="openstack/ovn-openstack-openstack-cell1-79jwx" Nov 28 09:21:31 crc kubenswrapper[4946]: I1128 09:21:31.704810 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw494\" (UniqueName: \"kubernetes.io/projected/86c84ce0-94fa-40be-8bcc-e6408a9e4411-kube-api-access-zw494\") pod \"ovn-openstack-openstack-cell1-79jwx\" (UID: \"86c84ce0-94fa-40be-8bcc-e6408a9e4411\") " pod="openstack/ovn-openstack-openstack-cell1-79jwx" Nov 28 09:21:31 crc kubenswrapper[4946]: I1128 09:21:31.704915 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/86c84ce0-94fa-40be-8bcc-e6408a9e4411-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-79jwx\" (UID: \"86c84ce0-94fa-40be-8bcc-e6408a9e4411\") " pod="openstack/ovn-openstack-openstack-cell1-79jwx" Nov 28 09:21:31 crc kubenswrapper[4946]: I1128 09:21:31.807008 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/86c84ce0-94fa-40be-8bcc-e6408a9e4411-ssh-key\") pod \"ovn-openstack-openstack-cell1-79jwx\" (UID: \"86c84ce0-94fa-40be-8bcc-e6408a9e4411\") " pod="openstack/ovn-openstack-openstack-cell1-79jwx" Nov 28 09:21:31 crc kubenswrapper[4946]: I1128 09:21:31.807079 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zw494\" (UniqueName: \"kubernetes.io/projected/86c84ce0-94fa-40be-8bcc-e6408a9e4411-kube-api-access-zw494\") pod \"ovn-openstack-openstack-cell1-79jwx\" (UID: \"86c84ce0-94fa-40be-8bcc-e6408a9e4411\") " pod="openstack/ovn-openstack-openstack-cell1-79jwx" Nov 28 09:21:31 crc kubenswrapper[4946]: I1128 09:21:31.807142 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86c84ce0-94fa-40be-8bcc-e6408a9e4411-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-79jwx\" (UID: \"86c84ce0-94fa-40be-8bcc-e6408a9e4411\") " pod="openstack/ovn-openstack-openstack-cell1-79jwx" Nov 28 09:21:31 crc kubenswrapper[4946]: I1128 09:21:31.807359 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/86c84ce0-94fa-40be-8bcc-e6408a9e4411-ceph\") pod \"ovn-openstack-openstack-cell1-79jwx\" (UID: \"86c84ce0-94fa-40be-8bcc-e6408a9e4411\") " pod="openstack/ovn-openstack-openstack-cell1-79jwx" Nov 28 09:21:31 crc kubenswrapper[4946]: I1128 09:21:31.807424 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/86c84ce0-94fa-40be-8bcc-e6408a9e4411-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-79jwx\" (UID: \"86c84ce0-94fa-40be-8bcc-e6408a9e4411\") " pod="openstack/ovn-openstack-openstack-cell1-79jwx" Nov 28 09:21:31 crc kubenswrapper[4946]: I1128 09:21:31.807505 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86c84ce0-94fa-40be-8bcc-e6408a9e4411-inventory\") pod \"ovn-openstack-openstack-cell1-79jwx\" (UID: \"86c84ce0-94fa-40be-8bcc-e6408a9e4411\") " pod="openstack/ovn-openstack-openstack-cell1-79jwx" Nov 28 09:21:31 crc kubenswrapper[4946]: I1128 09:21:31.808925 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/86c84ce0-94fa-40be-8bcc-e6408a9e4411-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-79jwx\" (UID: \"86c84ce0-94fa-40be-8bcc-e6408a9e4411\") " pod="openstack/ovn-openstack-openstack-cell1-79jwx" Nov 28 09:21:31 crc kubenswrapper[4946]: I1128 09:21:31.813968 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/86c84ce0-94fa-40be-8bcc-e6408a9e4411-ssh-key\") pod \"ovn-openstack-openstack-cell1-79jwx\" (UID: \"86c84ce0-94fa-40be-8bcc-e6408a9e4411\") " pod="openstack/ovn-openstack-openstack-cell1-79jwx" Nov 28 09:21:31 crc kubenswrapper[4946]: I1128 09:21:31.814262 4946 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86c84ce0-94fa-40be-8bcc-e6408a9e4411-inventory\") pod \"ovn-openstack-openstack-cell1-79jwx\" (UID: \"86c84ce0-94fa-40be-8bcc-e6408a9e4411\") " pod="openstack/ovn-openstack-openstack-cell1-79jwx" Nov 28 09:21:31 crc kubenswrapper[4946]: I1128 09:21:31.815924 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86c84ce0-94fa-40be-8bcc-e6408a9e4411-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-79jwx\" (UID: \"86c84ce0-94fa-40be-8bcc-e6408a9e4411\") " pod="openstack/ovn-openstack-openstack-cell1-79jwx" Nov 28 09:21:31 crc kubenswrapper[4946]: I1128 09:21:31.818261 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/86c84ce0-94fa-40be-8bcc-e6408a9e4411-ceph\") pod \"ovn-openstack-openstack-cell1-79jwx\" (UID: \"86c84ce0-94fa-40be-8bcc-e6408a9e4411\") " pod="openstack/ovn-openstack-openstack-cell1-79jwx" Nov 28 09:21:31 crc kubenswrapper[4946]: I1128 09:21:31.833346 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zw494\" (UniqueName: \"kubernetes.io/projected/86c84ce0-94fa-40be-8bcc-e6408a9e4411-kube-api-access-zw494\") pod \"ovn-openstack-openstack-cell1-79jwx\" (UID: \"86c84ce0-94fa-40be-8bcc-e6408a9e4411\") " pod="openstack/ovn-openstack-openstack-cell1-79jwx" Nov 28 09:21:31 crc kubenswrapper[4946]: I1128 09:21:31.905954 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-79jwx" Nov 28 09:21:32 crc kubenswrapper[4946]: I1128 09:21:32.503154 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-openstack-openstack-cell1-79jwx"] Nov 28 09:21:32 crc kubenswrapper[4946]: W1128 09:21:32.506132 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod86c84ce0_94fa_40be_8bcc_e6408a9e4411.slice/crio-b8898d9ceff7f51256f0aee354c9bdde41877480530c497ac9e6155ed13d9667 WatchSource:0}: Error finding container b8898d9ceff7f51256f0aee354c9bdde41877480530c497ac9e6155ed13d9667: Status 404 returned error can't find the container with id b8898d9ceff7f51256f0aee354c9bdde41877480530c497ac9e6155ed13d9667 Nov 28 09:21:33 crc kubenswrapper[4946]: I1128 09:21:33.517984 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-79jwx" event={"ID":"86c84ce0-94fa-40be-8bcc-e6408a9e4411","Type":"ContainerStarted","Data":"c47d56bf23f04b56f075e863e6f95c95e4e6d1398a2cbd7e1bca266922572940"} Nov 28 09:21:33 crc kubenswrapper[4946]: I1128 09:21:33.518694 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-79jwx" event={"ID":"86c84ce0-94fa-40be-8bcc-e6408a9e4411","Type":"ContainerStarted","Data":"b8898d9ceff7f51256f0aee354c9bdde41877480530c497ac9e6155ed13d9667"} Nov 28 09:21:33 crc kubenswrapper[4946]: I1128 09:21:33.541488 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-openstack-openstack-cell1-79jwx" podStartSLOduration=2.097586025 podStartE2EDuration="2.54144663s" podCreationTimestamp="2025-11-28 09:21:31 +0000 UTC" firstStartedPulling="2025-11-28 09:21:32.509229559 +0000 UTC m=+8946.887294670" lastFinishedPulling="2025-11-28 09:21:32.953090164 +0000 UTC m=+8947.331155275" observedRunningTime="2025-11-28 09:21:33.533963355 +0000 UTC m=+8947.912028476" 
watchObservedRunningTime="2025-11-28 09:21:33.54144663 +0000 UTC m=+8947.919511741" Nov 28 09:21:39 crc kubenswrapper[4946]: I1128 09:21:39.989968 4946 scope.go:117] "RemoveContainer" containerID="fe24f1531f160069fdd7ae3397845365c158536dbada8db9bc15e63bc09c3d80" Nov 28 09:21:39 crc kubenswrapper[4946]: E1128 09:21:39.990702 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:21:40 crc kubenswrapper[4946]: I1128 09:21:40.635946 4946 generic.go:334] "Generic (PLEG): container finished" podID="507d2a47-1976-44de-b9d7-ba27223d3441" containerID="3ccfe698e0ca4f3d67b4a96ff8b92eda1771c8ca705262fc12b1efcefc9424a9" exitCode=0 Nov 28 09:21:40 crc kubenswrapper[4946]: I1128 09:21:40.636100 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-networker-s2jsm" event={"ID":"507d2a47-1976-44de-b9d7-ba27223d3441","Type":"ContainerDied","Data":"3ccfe698e0ca4f3d67b4a96ff8b92eda1771c8ca705262fc12b1efcefc9424a9"} Nov 28 09:21:42 crc kubenswrapper[4946]: I1128 09:21:42.161819 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-networker-s2jsm" Nov 28 09:21:42 crc kubenswrapper[4946]: I1128 09:21:42.264434 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/507d2a47-1976-44de-b9d7-ba27223d3441-ssh-key\") pod \"507d2a47-1976-44de-b9d7-ba27223d3441\" (UID: \"507d2a47-1976-44de-b9d7-ba27223d3441\") " Nov 28 09:21:42 crc kubenswrapper[4946]: I1128 09:21:42.264506 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/507d2a47-1976-44de-b9d7-ba27223d3441-inventory\") pod \"507d2a47-1976-44de-b9d7-ba27223d3441\" (UID: \"507d2a47-1976-44de-b9d7-ba27223d3441\") " Nov 28 09:21:42 crc kubenswrapper[4946]: I1128 09:21:42.264684 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8s4f\" (UniqueName: \"kubernetes.io/projected/507d2a47-1976-44de-b9d7-ba27223d3441-kube-api-access-b8s4f\") pod \"507d2a47-1976-44de-b9d7-ba27223d3441\" (UID: \"507d2a47-1976-44de-b9d7-ba27223d3441\") " Nov 28 09:21:42 crc kubenswrapper[4946]: I1128 09:21:42.264716 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/507d2a47-1976-44de-b9d7-ba27223d3441-ovncontroller-config-0\") pod \"507d2a47-1976-44de-b9d7-ba27223d3441\" (UID: \"507d2a47-1976-44de-b9d7-ba27223d3441\") " Nov 28 09:21:42 crc kubenswrapper[4946]: I1128 09:21:42.264812 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/507d2a47-1976-44de-b9d7-ba27223d3441-ovn-combined-ca-bundle\") pod \"507d2a47-1976-44de-b9d7-ba27223d3441\" (UID: \"507d2a47-1976-44de-b9d7-ba27223d3441\") " Nov 28 09:21:42 crc kubenswrapper[4946]: I1128 09:21:42.270033 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/507d2a47-1976-44de-b9d7-ba27223d3441-kube-api-access-b8s4f" 
(OuterVolumeSpecName: "kube-api-access-b8s4f") pod "507d2a47-1976-44de-b9d7-ba27223d3441" (UID: "507d2a47-1976-44de-b9d7-ba27223d3441"). InnerVolumeSpecName "kube-api-access-b8s4f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:21:42 crc kubenswrapper[4946]: I1128 09:21:42.270262 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/507d2a47-1976-44de-b9d7-ba27223d3441-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "507d2a47-1976-44de-b9d7-ba27223d3441" (UID: "507d2a47-1976-44de-b9d7-ba27223d3441"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:21:42 crc kubenswrapper[4946]: I1128 09:21:42.293310 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/507d2a47-1976-44de-b9d7-ba27223d3441-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "507d2a47-1976-44de-b9d7-ba27223d3441" (UID: "507d2a47-1976-44de-b9d7-ba27223d3441"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 09:21:42 crc kubenswrapper[4946]: I1128 09:21:42.294752 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/507d2a47-1976-44de-b9d7-ba27223d3441-inventory" (OuterVolumeSpecName: "inventory") pod "507d2a47-1976-44de-b9d7-ba27223d3441" (UID: "507d2a47-1976-44de-b9d7-ba27223d3441"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:21:42 crc kubenswrapper[4946]: I1128 09:21:42.311838 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/507d2a47-1976-44de-b9d7-ba27223d3441-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "507d2a47-1976-44de-b9d7-ba27223d3441" (UID: "507d2a47-1976-44de-b9d7-ba27223d3441"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:21:42 crc kubenswrapper[4946]: I1128 09:21:42.367663 4946 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/507d2a47-1976-44de-b9d7-ba27223d3441-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 09:21:42 crc kubenswrapper[4946]: I1128 09:21:42.367695 4946 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/507d2a47-1976-44de-b9d7-ba27223d3441-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 09:21:42 crc kubenswrapper[4946]: I1128 09:21:42.367709 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8s4f\" (UniqueName: \"kubernetes.io/projected/507d2a47-1976-44de-b9d7-ba27223d3441-kube-api-access-b8s4f\") on node \"crc\" DevicePath \"\"" Nov 28 09:21:42 crc kubenswrapper[4946]: I1128 09:21:42.367722 4946 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/507d2a47-1976-44de-b9d7-ba27223d3441-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Nov 28 09:21:42 crc kubenswrapper[4946]: I1128 09:21:42.367733 4946 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/507d2a47-1976-44de-b9d7-ba27223d3441-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 09:21:42 crc kubenswrapper[4946]: I1128 09:21:42.658393 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-networker-s2jsm" event={"ID":"507d2a47-1976-44de-b9d7-ba27223d3441","Type":"ContainerDied","Data":"c6819d13eabef62dfcb7d8aa8304bcaa660530e40bf1289d0ec860a095b6f4b6"} Nov 28 09:21:42 crc kubenswrapper[4946]: I1128 09:21:42.658451 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6819d13eabef62dfcb7d8aa8304bcaa660530e40bf1289d0ec860a095b6f4b6" Nov 28 09:21:42 crc kubenswrapper[4946]: I1128 09:21:42.658569 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-networker-s2jsm" Nov 28 09:21:42 crc kubenswrapper[4946]: I1128 09:21:42.763235 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-openstack-openstack-networker-qjh49"] Nov 28 09:21:42 crc kubenswrapper[4946]: E1128 09:21:42.763810 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="507d2a47-1976-44de-b9d7-ba27223d3441" containerName="ovn-openstack-openstack-networker" Nov 28 09:21:42 crc kubenswrapper[4946]: I1128 09:21:42.763832 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="507d2a47-1976-44de-b9d7-ba27223d3441" containerName="ovn-openstack-openstack-networker" Nov 28 09:21:42 crc kubenswrapper[4946]: I1128 09:21:42.764067 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="507d2a47-1976-44de-b9d7-ba27223d3441" containerName="ovn-openstack-openstack-networker" Nov 28 09:21:42 crc kubenswrapper[4946]: I1128 09:21:42.764990 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-networker-qjh49" Nov 28 09:21:42 crc kubenswrapper[4946]: I1128 09:21:42.767546 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Nov 28 09:21:42 crc kubenswrapper[4946]: I1128 09:21:42.768031 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-networker" Nov 28 09:21:42 crc kubenswrapper[4946]: I1128 09:21:42.768378 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-networker-dockercfg-7cq8d" Nov 28 09:21:42 crc kubenswrapper[4946]: I1128 09:21:42.768660 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Nov 28 09:21:42 crc kubenswrapper[4946]: I1128 09:21:42.779171 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-openstack-openstack-networker-qjh49"] Nov 28 09:21:42 crc kubenswrapper[4946]: I1128 09:21:42.879682 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9cc5f29c-b756-4566-b796-a63b368f7578-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-networker-qjh49\" (UID: \"9cc5f29c-b756-4566-b796-a63b368f7578\") " pod="openstack/neutron-metadata-openstack-openstack-networker-qjh49" Nov 28 09:21:42 crc kubenswrapper[4946]: I1128 09:21:42.879802 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9cc5f29c-b756-4566-b796-a63b368f7578-inventory\") pod \"neutron-metadata-openstack-openstack-networker-qjh49\" (UID: \"9cc5f29c-b756-4566-b796-a63b368f7578\") " pod="openstack/neutron-metadata-openstack-openstack-networker-qjh49" Nov 28 09:21:42 crc kubenswrapper[4946]: I1128 09:21:42.879956 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cc5f29c-b756-4566-b796-a63b368f7578-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-networker-qjh49\" (UID: \"9cc5f29c-b756-4566-b796-a63b368f7578\") " pod="openstack/neutron-metadata-openstack-openstack-networker-qjh49" Nov 28 09:21:42 crc kubenswrapper[4946]: I1128 09:21:42.880029 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9cc5f29c-b756-4566-b796-a63b368f7578-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-networker-qjh49\" (UID: \"9cc5f29c-b756-4566-b796-a63b368f7578\") " pod="openstack/neutron-metadata-openstack-openstack-networker-qjh49" Nov 28 09:21:42 crc kubenswrapper[4946]: I1128 09:21:42.880065 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8r5r\" (UniqueName: \"kubernetes.io/projected/9cc5f29c-b756-4566-b796-a63b368f7578-kube-api-access-q8r5r\") pod \"neutron-metadata-openstack-openstack-networker-qjh49\" (UID: \"9cc5f29c-b756-4566-b796-a63b368f7578\") " pod="openstack/neutron-metadata-openstack-openstack-networker-qjh49" Nov 28 09:21:42 crc kubenswrapper[4946]: I1128 09:21:42.880261 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9cc5f29c-b756-4566-b796-a63b368f7578-ssh-key\") pod \"neutron-metadata-openstack-openstack-networker-qjh49\" (UID: \"9cc5f29c-b756-4566-b796-a63b368f7578\") " pod="openstack/neutron-metadata-openstack-openstack-networker-qjh49" Nov 28 09:21:42 crc kubenswrapper[4946]: I1128 09:21:42.981717 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9cc5f29c-b756-4566-b796-a63b368f7578-ssh-key\") pod \"neutron-metadata-openstack-openstack-networker-qjh49\" (UID: \"9cc5f29c-b756-4566-b796-a63b368f7578\") " pod="openstack/neutron-metadata-openstack-openstack-networker-qjh49" Nov 28 09:21:42 crc kubenswrapper[4946]: I1128 09:21:42.981797 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9cc5f29c-b756-4566-b796-a63b368f7578-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-networker-qjh49\" (UID: \"9cc5f29c-b756-4566-b796-a63b368f7578\") " pod="openstack/neutron-metadata-openstack-openstack-networker-qjh49" Nov 28 09:21:42 crc kubenswrapper[4946]: I1128 09:21:42.981858 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9cc5f29c-b756-4566-b796-a63b368f7578-inventory\") pod \"neutron-metadata-openstack-openstack-networker-qjh49\" (UID: \"9cc5f29c-b756-4566-b796-a63b368f7578\") " pod="openstack/neutron-metadata-openstack-openstack-networker-qjh49" Nov 28 09:21:42 crc kubenswrapper[4946]: I1128 09:21:42.981946 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cc5f29c-b756-4566-b796-a63b368f7578-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-networker-qjh49\" (UID: \"9cc5f29c-b756-4566-b796-a63b368f7578\") " pod="openstack/neutron-metadata-openstack-openstack-networker-qjh49" Nov 28 09:21:42 crc kubenswrapper[4946]: I1128 09:21:42.981989 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9cc5f29c-b756-4566-b796-a63b368f7578-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-networker-qjh49\" (UID: \"9cc5f29c-b756-4566-b796-a63b368f7578\") " pod="openstack/neutron-metadata-openstack-openstack-networker-qjh49" Nov 28 09:21:42 crc kubenswrapper[4946]: I1128 09:21:42.982038 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8r5r\" (UniqueName: \"kubernetes.io/projected/9cc5f29c-b756-4566-b796-a63b368f7578-kube-api-access-q8r5r\") pod \"neutron-metadata-openstack-openstack-networker-qjh49\" (UID: \"9cc5f29c-b756-4566-b796-a63b368f7578\") " pod="openstack/neutron-metadata-openstack-openstack-networker-qjh49" Nov 28 09:21:42 crc kubenswrapper[4946]: I1128 09:21:42.989380 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9cc5f29c-b756-4566-b796-a63b368f7578-ssh-key\") pod \"neutron-metadata-openstack-openstack-networker-qjh49\" (UID: \"9cc5f29c-b756-4566-b796-a63b368f7578\") " pod="openstack/neutron-metadata-openstack-openstack-networker-qjh49" Nov 28 09:21:42 crc kubenswrapper[4946]: I1128 09:21:42.989377 4946 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9cc5f29c-b756-4566-b796-a63b368f7578-inventory\") pod \"neutron-metadata-openstack-openstack-networker-qjh49\" (UID: \"9cc5f29c-b756-4566-b796-a63b368f7578\") " pod="openstack/neutron-metadata-openstack-openstack-networker-qjh49" Nov 28 09:21:42 crc kubenswrapper[4946]: I1128 09:21:42.989872 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9cc5f29c-b756-4566-b796-a63b368f7578-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-networker-qjh49\" (UID: \"9cc5f29c-b756-4566-b796-a63b368f7578\") " pod="openstack/neutron-metadata-openstack-openstack-networker-qjh49" Nov 28 09:21:42 crc kubenswrapper[4946]: I1128 09:21:42.990265 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cc5f29c-b756-4566-b796-a63b368f7578-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-networker-qjh49\" (UID: \"9cc5f29c-b756-4566-b796-a63b368f7578\") " pod="openstack/neutron-metadata-openstack-openstack-networker-qjh49" Nov 28 09:21:42 crc kubenswrapper[4946]: I1128 09:21:42.994146 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9cc5f29c-b756-4566-b796-a63b368f7578-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-networker-qjh49\" (UID: \"9cc5f29c-b756-4566-b796-a63b368f7578\") " pod="openstack/neutron-metadata-openstack-openstack-networker-qjh49" Nov 28 09:21:43 crc kubenswrapper[4946]: I1128 09:21:43.009560 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8r5r\" (UniqueName: \"kubernetes.io/projected/9cc5f29c-b756-4566-b796-a63b368f7578-kube-api-access-q8r5r\") pod \"neutron-metadata-openstack-openstack-networker-qjh49\" (UID: \"9cc5f29c-b756-4566-b796-a63b368f7578\") " pod="openstack/neutron-metadata-openstack-openstack-networker-qjh49" Nov 28 09:21:43 crc kubenswrapper[4946]: I1128 09:21:43.090911 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-networker-qjh49" Nov 28 09:21:43 crc kubenswrapper[4946]: I1128 09:21:43.640402 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-openstack-openstack-networker-qjh49"] Nov 28 09:21:43 crc kubenswrapper[4946]: I1128 09:21:43.674427 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-networker-qjh49" event={"ID":"9cc5f29c-b756-4566-b796-a63b368f7578","Type":"ContainerStarted","Data":"1af19365fc666695fb262c585f6eccd72a8e2c13ad8663c63b440f3b82adbbbd"} Nov 28 09:21:44 crc kubenswrapper[4946]: I1128 09:21:44.699531 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-networker-qjh49" event={"ID":"9cc5f29c-b756-4566-b796-a63b368f7578","Type":"ContainerStarted","Data":"56179b16f1b51668b77f31d0ac57aa7cfc2f71e04cefeb83686567dda667db85"} Nov 28 09:21:44 crc kubenswrapper[4946]: I1128 09:21:44.728946 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-openstack-openstack-networker-qjh49" podStartSLOduration=2.1793717790000002 podStartE2EDuration="2.728923433s" podCreationTimestamp="2025-11-28 09:21:42 +0000 UTC" firstStartedPulling="2025-11-28 09:21:43.650884376 +0000 UTC m=+8958.028949497" lastFinishedPulling="2025-11-28 09:21:44.200436 +0000 UTC m=+8958.578501151" observedRunningTime="2025-11-28 09:21:44.724522544 +0000 UTC m=+8959.102587655" watchObservedRunningTime="2025-11-28 09:21:44.728923433 +0000 UTC m=+8959.106988554" Nov 28 09:21:51 crc kubenswrapper[4946]: I1128 09:21:51.990417 4946 scope.go:117] "RemoveContainer" containerID="fe24f1531f160069fdd7ae3397845365c158536dbada8db9bc15e63bc09c3d80" Nov 28 09:21:51 crc kubenswrapper[4946]: E1128 09:21:51.991579 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:22:05 crc kubenswrapper[4946]: I1128 09:22:05.996745 4946 scope.go:117] "RemoveContainer" containerID="fe24f1531f160069fdd7ae3397845365c158536dbada8db9bc15e63bc09c3d80" Nov 28 09:22:05 crc kubenswrapper[4946]: E1128 09:22:05.997509 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:22:20 crc kubenswrapper[4946]: I1128 09:22:20.990654 4946 scope.go:117] "RemoveContainer" containerID="fe24f1531f160069fdd7ae3397845365c158536dbada8db9bc15e63bc09c3d80" Nov 28 09:22:20 crc kubenswrapper[4946]: E1128 09:22:20.991596 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:22:35 crc kubenswrapper[4946]: I1128 09:22:35.997070 4946 scope.go:117] "RemoveContainer" containerID="fe24f1531f160069fdd7ae3397845365c158536dbada8db9bc15e63bc09c3d80" Nov 28 09:22:35 crc kubenswrapper[4946]: E1128 09:22:35.997909 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:22:48 crc kubenswrapper[4946]: I1128 09:22:48.990243 4946 scope.go:117] "RemoveContainer" containerID="fe24f1531f160069fdd7ae3397845365c158536dbada8db9bc15e63bc09c3d80" Nov 28 09:22:48 crc kubenswrapper[4946]: E1128 09:22:48.991142 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:22:49 crc kubenswrapper[4946]: I1128 09:22:49.456083 4946 generic.go:334] "Generic (PLEG): container finished" podID="86c84ce0-94fa-40be-8bcc-e6408a9e4411" containerID="c47d56bf23f04b56f075e863e6f95c95e4e6d1398a2cbd7e1bca266922572940" exitCode=0 Nov 28 09:22:49 crc kubenswrapper[4946]: I1128 09:22:49.456154 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-79jwx" event={"ID":"86c84ce0-94fa-40be-8bcc-e6408a9e4411","Type":"ContainerDied","Data":"c47d56bf23f04b56f075e863e6f95c95e4e6d1398a2cbd7e1bca266922572940"} Nov 28 09:22:51 crc kubenswrapper[4946]: I1128 09:22:51.010932 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-79jwx" Nov 28 09:22:51 crc kubenswrapper[4946]: I1128 09:22:51.116757 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/86c84ce0-94fa-40be-8bcc-e6408a9e4411-ssh-key\") pod \"86c84ce0-94fa-40be-8bcc-e6408a9e4411\" (UID: \"86c84ce0-94fa-40be-8bcc-e6408a9e4411\") " Nov 28 09:22:51 crc kubenswrapper[4946]: I1128 09:22:51.116811 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86c84ce0-94fa-40be-8bcc-e6408a9e4411-inventory\") pod \"86c84ce0-94fa-40be-8bcc-e6408a9e4411\" (UID: \"86c84ce0-94fa-40be-8bcc-e6408a9e4411\") " Nov 28 09:22:51 crc kubenswrapper[4946]: I1128 09:22:51.116880 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/86c84ce0-94fa-40be-8bcc-e6408a9e4411-ceph\") pod \"86c84ce0-94fa-40be-8bcc-e6408a9e4411\" (UID: \"86c84ce0-94fa-40be-8bcc-e6408a9e4411\") " Nov 28 09:22:51 crc kubenswrapper[4946]: I1128 09:22:51.117848 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zw494\" (UniqueName: \"kubernetes.io/projected/86c84ce0-94fa-40be-8bcc-e6408a9e4411-kube-api-access-zw494\") pod \"86c84ce0-94fa-40be-8bcc-e6408a9e4411\" (UID: \"86c84ce0-94fa-40be-8bcc-e6408a9e4411\") " Nov 28 09:22:51 crc kubenswrapper[4946]: I1128 09:22:51.117956 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/86c84ce0-94fa-40be-8bcc-e6408a9e4411-ovncontroller-config-0\") pod \"86c84ce0-94fa-40be-8bcc-e6408a9e4411\" (UID: \"86c84ce0-94fa-40be-8bcc-e6408a9e4411\") " Nov 28 09:22:51 crc kubenswrapper[4946]: I1128 09:22:51.118377 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86c84ce0-94fa-40be-8bcc-e6408a9e4411-ovn-combined-ca-bundle\") pod \"86c84ce0-94fa-40be-8bcc-e6408a9e4411\" (UID: \"86c84ce0-94fa-40be-8bcc-e6408a9e4411\") " Nov 28 09:22:51 crc kubenswrapper[4946]: I1128 09:22:51.129966 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86c84ce0-94fa-40be-8bcc-e6408a9e4411-kube-api-access-zw494" (OuterVolumeSpecName: "kube-api-access-zw494") pod "86c84ce0-94fa-40be-8bcc-e6408a9e4411" (UID: "86c84ce0-94fa-40be-8bcc-e6408a9e4411"). InnerVolumeSpecName "kube-api-access-zw494". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:22:51 crc kubenswrapper[4946]: I1128 09:22:51.130070 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86c84ce0-94fa-40be-8bcc-e6408a9e4411-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "86c84ce0-94fa-40be-8bcc-e6408a9e4411" (UID: "86c84ce0-94fa-40be-8bcc-e6408a9e4411"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:22:51 crc kubenswrapper[4946]: I1128 09:22:51.134650 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86c84ce0-94fa-40be-8bcc-e6408a9e4411-ceph" (OuterVolumeSpecName: "ceph") pod "86c84ce0-94fa-40be-8bcc-e6408a9e4411" (UID: "86c84ce0-94fa-40be-8bcc-e6408a9e4411"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:22:51 crc kubenswrapper[4946]: I1128 09:22:51.145494 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86c84ce0-94fa-40be-8bcc-e6408a9e4411-inventory" (OuterVolumeSpecName: "inventory") pod "86c84ce0-94fa-40be-8bcc-e6408a9e4411" (UID: "86c84ce0-94fa-40be-8bcc-e6408a9e4411"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:22:51 crc kubenswrapper[4946]: I1128 09:22:51.150392 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86c84ce0-94fa-40be-8bcc-e6408a9e4411-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "86c84ce0-94fa-40be-8bcc-e6408a9e4411" (UID: "86c84ce0-94fa-40be-8bcc-e6408a9e4411"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 09:22:51 crc kubenswrapper[4946]: I1128 09:22:51.155732 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86c84ce0-94fa-40be-8bcc-e6408a9e4411-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "86c84ce0-94fa-40be-8bcc-e6408a9e4411" (UID: "86c84ce0-94fa-40be-8bcc-e6408a9e4411"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:22:51 crc kubenswrapper[4946]: I1128 09:22:51.222773 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zw494\" (UniqueName: \"kubernetes.io/projected/86c84ce0-94fa-40be-8bcc-e6408a9e4411-kube-api-access-zw494\") on node \"crc\" DevicePath \"\"" Nov 28 09:22:51 crc kubenswrapper[4946]: I1128 09:22:51.222801 4946 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/86c84ce0-94fa-40be-8bcc-e6408a9e4411-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Nov 28 09:22:51 crc kubenswrapper[4946]: I1128 09:22:51.222811 4946 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86c84ce0-94fa-40be-8bcc-e6408a9e4411-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 09:22:51 crc kubenswrapper[4946]: I1128 09:22:51.222819 4946 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/86c84ce0-94fa-40be-8bcc-e6408a9e4411-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 09:22:51 crc kubenswrapper[4946]: I1128 09:22:51.222830 4946 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86c84ce0-94fa-40be-8bcc-e6408a9e4411-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 09:22:51 crc kubenswrapper[4946]: I1128 09:22:51.222839 4946 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/86c84ce0-94fa-40be-8bcc-e6408a9e4411-ceph\") on node \"crc\" DevicePath \"\"" Nov 28 09:22:51 crc kubenswrapper[4946]: I1128 09:22:51.487683 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-79jwx" event={"ID":"86c84ce0-94fa-40be-8bcc-e6408a9e4411","Type":"ContainerDied","Data":"b8898d9ceff7f51256f0aee354c9bdde41877480530c497ac9e6155ed13d9667"} Nov 28 09:22:51 crc kubenswrapper[4946]: I1128 09:22:51.487968 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8898d9ceff7f51256f0aee354c9bdde41877480530c497ac9e6155ed13d9667" Nov 28 09:22:51 crc kubenswrapper[4946]: I1128 09:22:51.487750 
4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-79jwx" Nov 28 09:22:51 crc kubenswrapper[4946]: I1128 09:22:51.683057 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-npk87"] Nov 28 09:22:51 crc kubenswrapper[4946]: E1128 09:22:51.684350 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86c84ce0-94fa-40be-8bcc-e6408a9e4411" containerName="ovn-openstack-openstack-cell1" Nov 28 09:22:51 crc kubenswrapper[4946]: I1128 09:22:51.684547 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="86c84ce0-94fa-40be-8bcc-e6408a9e4411" containerName="ovn-openstack-openstack-cell1" Nov 28 09:22:51 crc kubenswrapper[4946]: I1128 09:22:51.685060 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="86c84ce0-94fa-40be-8bcc-e6408a9e4411" containerName="ovn-openstack-openstack-cell1" Nov 28 09:22:51 crc kubenswrapper[4946]: I1128 09:22:51.686919 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-npk87" Nov 28 09:22:51 crc kubenswrapper[4946]: I1128 09:22:51.690257 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-psw2j" Nov 28 09:22:51 crc kubenswrapper[4946]: I1128 09:22:51.691641 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 28 09:22:51 crc kubenswrapper[4946]: I1128 09:22:51.694065 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-npk87"] Nov 28 09:22:51 crc kubenswrapper[4946]: I1128 09:22:51.833412 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e2ad6585-a804-4048-b736-57d443b5d6cc-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-npk87\" (UID: \"e2ad6585-a804-4048-b736-57d443b5d6cc\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-npk87" Nov 28 09:22:51 crc kubenswrapper[4946]: I1128 09:22:51.833799 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2ad6585-a804-4048-b736-57d443b5d6cc-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-npk87\" (UID: \"e2ad6585-a804-4048-b736-57d443b5d6cc\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-npk87" Nov 28 09:22:51 crc kubenswrapper[4946]: I1128 09:22:51.833902 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e2ad6585-a804-4048-b736-57d443b5d6cc-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-npk87\" (UID: \"e2ad6585-a804-4048-b736-57d443b5d6cc\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-npk87" Nov 28 09:22:51 crc kubenswrapper[4946]: I1128 09:22:51.834062 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e2ad6585-a804-4048-b736-57d443b5d6cc-ssh-key\") pod \"neutron-metadata-openstack-openstack-cell1-npk87\" (UID: \"e2ad6585-a804-4048-b736-57d443b5d6cc\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-npk87" Nov 28 09:22:51 crc 
kubenswrapper[4946]: I1128 09:22:51.834137 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e2ad6585-a804-4048-b736-57d443b5d6cc-ceph\") pod \"neutron-metadata-openstack-openstack-cell1-npk87\" (UID: \"e2ad6585-a804-4048-b736-57d443b5d6cc\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-npk87" Nov 28 09:22:51 crc kubenswrapper[4946]: I1128 09:22:51.834245 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e2ad6585-a804-4048-b736-57d443b5d6cc-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-npk87\" (UID: \"e2ad6585-a804-4048-b736-57d443b5d6cc\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-npk87" Nov 28 09:22:51 crc kubenswrapper[4946]: I1128 09:22:51.834403 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx4f4\" (UniqueName: \"kubernetes.io/projected/e2ad6585-a804-4048-b736-57d443b5d6cc-kube-api-access-rx4f4\") pod \"neutron-metadata-openstack-openstack-cell1-npk87\" (UID: \"e2ad6585-a804-4048-b736-57d443b5d6cc\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-npk87" Nov 28 09:22:51 crc kubenswrapper[4946]: I1128 09:22:51.936749 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2ad6585-a804-4048-b736-57d443b5d6cc-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-npk87\" (UID: \"e2ad6585-a804-4048-b736-57d443b5d6cc\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-npk87" Nov 28 09:22:51 crc kubenswrapper[4946]: I1128 09:22:51.937089 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e2ad6585-a804-4048-b736-57d443b5d6cc-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-npk87\" (UID: \"e2ad6585-a804-4048-b736-57d443b5d6cc\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-npk87" Nov 28 09:22:51 crc kubenswrapper[4946]: I1128 09:22:51.937191 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e2ad6585-a804-4048-b736-57d443b5d6cc-ssh-key\") pod \"neutron-metadata-openstack-openstack-cell1-npk87\" (UID: \"e2ad6585-a804-4048-b736-57d443b5d6cc\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-npk87" Nov 28 09:22:51 crc kubenswrapper[4946]: I1128 09:22:51.937227 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e2ad6585-a804-4048-b736-57d443b5d6cc-ceph\") pod \"neutron-metadata-openstack-openstack-cell1-npk87\" (UID: \"e2ad6585-a804-4048-b736-57d443b5d6cc\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-npk87" Nov 28 09:22:51 crc kubenswrapper[4946]: I1128 09:22:51.937253 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e2ad6585-a804-4048-b736-57d443b5d6cc-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-npk87\" (UID: \"e2ad6585-a804-4048-b736-57d443b5d6cc\") " 
pod="openstack/neutron-metadata-openstack-openstack-cell1-npk87" Nov 28 09:22:51 crc kubenswrapper[4946]: I1128 09:22:51.937353 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rx4f4\" (UniqueName: \"kubernetes.io/projected/e2ad6585-a804-4048-b736-57d443b5d6cc-kube-api-access-rx4f4\") pod \"neutron-metadata-openstack-openstack-cell1-npk87\" (UID: \"e2ad6585-a804-4048-b736-57d443b5d6cc\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-npk87" Nov 28 09:22:51 crc kubenswrapper[4946]: I1128 09:22:51.937389 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e2ad6585-a804-4048-b736-57d443b5d6cc-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-npk87\" (UID: \"e2ad6585-a804-4048-b736-57d443b5d6cc\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-npk87" Nov 28 09:22:51 crc kubenswrapper[4946]: I1128 09:22:51.942098 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e2ad6585-a804-4048-b736-57d443b5d6cc-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-npk87\" (UID: \"e2ad6585-a804-4048-b736-57d443b5d6cc\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-npk87" Nov 28 09:22:51 crc kubenswrapper[4946]: I1128 09:22:51.942239 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e2ad6585-a804-4048-b736-57d443b5d6cc-ssh-key\") pod \"neutron-metadata-openstack-openstack-cell1-npk87\" (UID: \"e2ad6585-a804-4048-b736-57d443b5d6cc\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-npk87" Nov 28 09:22:51 crc kubenswrapper[4946]: I1128 09:22:51.943452 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e2ad6585-a804-4048-b736-57d443b5d6cc-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-npk87\" (UID: \"e2ad6585-a804-4048-b736-57d443b5d6cc\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-npk87" Nov 28 09:22:51 crc kubenswrapper[4946]: I1128 09:22:51.943934 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2ad6585-a804-4048-b736-57d443b5d6cc-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-npk87\" (UID: \"e2ad6585-a804-4048-b736-57d443b5d6cc\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-npk87" Nov 28 09:22:51 crc kubenswrapper[4946]: I1128 09:22:51.943956 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e2ad6585-a804-4048-b736-57d443b5d6cc-ceph\") pod \"neutron-metadata-openstack-openstack-cell1-npk87\" (UID: \"e2ad6585-a804-4048-b736-57d443b5d6cc\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-npk87" Nov 28 09:22:51 crc kubenswrapper[4946]: I1128 09:22:51.945122 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e2ad6585-a804-4048-b736-57d443b5d6cc-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-npk87\" (UID: \"e2ad6585-a804-4048-b736-57d443b5d6cc\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-npk87" Nov 28 09:22:51 crc 
kubenswrapper[4946]: I1128 09:22:51.960195 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rx4f4\" (UniqueName: \"kubernetes.io/projected/e2ad6585-a804-4048-b736-57d443b5d6cc-kube-api-access-rx4f4\") pod \"neutron-metadata-openstack-openstack-cell1-npk87\" (UID: \"e2ad6585-a804-4048-b736-57d443b5d6cc\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-npk87" Nov 28 09:22:52 crc kubenswrapper[4946]: I1128 09:22:52.017297 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-npk87" Nov 28 09:22:52 crc kubenswrapper[4946]: I1128 09:22:52.501731 4946 generic.go:334] "Generic (PLEG): container finished" podID="9cc5f29c-b756-4566-b796-a63b368f7578" containerID="56179b16f1b51668b77f31d0ac57aa7cfc2f71e04cefeb83686567dda667db85" exitCode=0 Nov 28 09:22:52 crc kubenswrapper[4946]: I1128 09:22:52.501839 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-networker-qjh49" event={"ID":"9cc5f29c-b756-4566-b796-a63b368f7578","Type":"ContainerDied","Data":"56179b16f1b51668b77f31d0ac57aa7cfc2f71e04cefeb83686567dda667db85"} Nov 28 09:22:52 crc kubenswrapper[4946]: I1128 09:22:52.578841 4946 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 09:22:52 crc kubenswrapper[4946]: I1128 09:22:52.590255 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-npk87"] Nov 28 09:22:53 crc kubenswrapper[4946]: I1128 09:22:53.519410 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-npk87" event={"ID":"e2ad6585-a804-4048-b736-57d443b5d6cc","Type":"ContainerStarted","Data":"b7f3c1a8b81b6a8f140468861092b8807ab83228935e75f2fac52a55eb347acd"} Nov 28 09:22:54 crc kubenswrapper[4946]: I1128 09:22:54.070963 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-networker-qjh49" Nov 28 09:22:54 crc kubenswrapper[4946]: I1128 09:22:54.201918 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8r5r\" (UniqueName: \"kubernetes.io/projected/9cc5f29c-b756-4566-b796-a63b368f7578-kube-api-access-q8r5r\") pod \"9cc5f29c-b756-4566-b796-a63b368f7578\" (UID: \"9cc5f29c-b756-4566-b796-a63b368f7578\") " Nov 28 09:22:54 crc kubenswrapper[4946]: I1128 09:22:54.202049 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9cc5f29c-b756-4566-b796-a63b368f7578-nova-metadata-neutron-config-0\") pod \"9cc5f29c-b756-4566-b796-a63b368f7578\" (UID: \"9cc5f29c-b756-4566-b796-a63b368f7578\") " Nov 28 09:22:54 crc kubenswrapper[4946]: I1128 09:22:54.202109 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cc5f29c-b756-4566-b796-a63b368f7578-neutron-metadata-combined-ca-bundle\") pod \"9cc5f29c-b756-4566-b796-a63b368f7578\" (UID: \"9cc5f29c-b756-4566-b796-a63b368f7578\") " Nov 28 09:22:54 crc kubenswrapper[4946]: I1128 09:22:54.202216 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9cc5f29c-b756-4566-b796-a63b368f7578-ssh-key\") pod \"9cc5f29c-b756-4566-b796-a63b368f7578\" (UID: \"9cc5f29c-b756-4566-b796-a63b368f7578\") " Nov 28 09:22:54 crc kubenswrapper[4946]: I1128 09:22:54.202269 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9cc5f29c-b756-4566-b796-a63b368f7578-neutron-ovn-metadata-agent-neutron-config-0\") pod \"9cc5f29c-b756-4566-b796-a63b368f7578\" (UID: \"9cc5f29c-b756-4566-b796-a63b368f7578\") " Nov 28 09:22:54 crc kubenswrapper[4946]: I1128 09:22:54.202305 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9cc5f29c-b756-4566-b796-a63b368f7578-inventory\") pod \"9cc5f29c-b756-4566-b796-a63b368f7578\" (UID: \"9cc5f29c-b756-4566-b796-a63b368f7578\") " Nov 28 09:22:54 crc kubenswrapper[4946]: I1128 09:22:54.207109 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cc5f29c-b756-4566-b796-a63b368f7578-kube-api-access-q8r5r" (OuterVolumeSpecName: "kube-api-access-q8r5r") pod "9cc5f29c-b756-4566-b796-a63b368f7578" (UID: "9cc5f29c-b756-4566-b796-a63b368f7578"). InnerVolumeSpecName "kube-api-access-q8r5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:22:54 crc kubenswrapper[4946]: I1128 09:22:54.207710 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cc5f29c-b756-4566-b796-a63b368f7578-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "9cc5f29c-b756-4566-b796-a63b368f7578" (UID: "9cc5f29c-b756-4566-b796-a63b368f7578"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:22:54 crc kubenswrapper[4946]: I1128 09:22:54.227661 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cc5f29c-b756-4566-b796-a63b368f7578-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "9cc5f29c-b756-4566-b796-a63b368f7578" (UID: "9cc5f29c-b756-4566-b796-a63b368f7578"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:22:54 crc kubenswrapper[4946]: I1128 09:22:54.231505 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cc5f29c-b756-4566-b796-a63b368f7578-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9cc5f29c-b756-4566-b796-a63b368f7578" (UID: "9cc5f29c-b756-4566-b796-a63b368f7578"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:22:54 crc kubenswrapper[4946]: I1128 09:22:54.243143 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cc5f29c-b756-4566-b796-a63b368f7578-inventory" (OuterVolumeSpecName: "inventory") pod "9cc5f29c-b756-4566-b796-a63b368f7578" (UID: "9cc5f29c-b756-4566-b796-a63b368f7578"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:22:54 crc kubenswrapper[4946]: I1128 09:22:54.248860 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cc5f29c-b756-4566-b796-a63b368f7578-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "9cc5f29c-b756-4566-b796-a63b368f7578" (UID: "9cc5f29c-b756-4566-b796-a63b368f7578"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:22:54 crc kubenswrapper[4946]: I1128 09:22:54.304903 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8r5r\" (UniqueName: \"kubernetes.io/projected/9cc5f29c-b756-4566-b796-a63b368f7578-kube-api-access-q8r5r\") on node \"crc\" DevicePath \"\"" Nov 28 09:22:54 crc kubenswrapper[4946]: I1128 09:22:54.304953 4946 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9cc5f29c-b756-4566-b796-a63b368f7578-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 28 09:22:54 crc kubenswrapper[4946]: I1128 09:22:54.304974 4946 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cc5f29c-b756-4566-b796-a63b368f7578-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 09:22:54 crc kubenswrapper[4946]: I1128 09:22:54.304990 4946 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9cc5f29c-b756-4566-b796-a63b368f7578-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 09:22:54 crc kubenswrapper[4946]: I1128 09:22:54.305003 4946 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9cc5f29c-b756-4566-b796-a63b368f7578-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 28 09:22:54 crc kubenswrapper[4946]: I1128 09:22:54.305019 4946 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9cc5f29c-b756-4566-b796-a63b368f7578-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 09:22:54 crc kubenswrapper[4946]: I1128 09:22:54.545688 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-npk87" event={"ID":"e2ad6585-a804-4048-b736-57d443b5d6cc","Type":"ContainerStarted","Data":"9bcf10eaed91eb23a950c3d5caf31fd346b7dd5ceb96529669cfc64f3a2932d2"} Nov 28 09:22:54 crc kubenswrapper[4946]: I1128 09:22:54.548760 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-networker-qjh49" event={"ID":"9cc5f29c-b756-4566-b796-a63b368f7578","Type":"ContainerDied","Data":"1af19365fc666695fb262c585f6eccd72a8e2c13ad8663c63b440f3b82adbbbd"} Nov 28 09:22:54 crc kubenswrapper[4946]: I1128 09:22:54.548780 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1af19365fc666695fb262c585f6eccd72a8e2c13ad8663c63b440f3b82adbbbd" Nov 28 09:22:54 crc kubenswrapper[4946]: I1128 09:22:54.548832 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-networker-qjh49" Nov 28 09:22:54 crc kubenswrapper[4946]: I1128 09:22:54.586703 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-openstack-openstack-cell1-npk87" podStartSLOduration=2.691318129 podStartE2EDuration="3.586686068s" podCreationTimestamp="2025-11-28 09:22:51 +0000 UTC" firstStartedPulling="2025-11-28 09:22:52.57856766 +0000 UTC m=+9026.956632791" lastFinishedPulling="2025-11-28 09:22:53.473935619 +0000 UTC m=+9027.852000730" observedRunningTime="2025-11-28 09:22:54.581670594 +0000 UTC m=+9028.959735735" watchObservedRunningTime="2025-11-28 09:22:54.586686068 +0000 UTC m=+9028.964751179" Nov 28 09:23:03 crc kubenswrapper[4946]: I1128 09:23:03.989735 4946 scope.go:117] "RemoveContainer" containerID="fe24f1531f160069fdd7ae3397845365c158536dbada8db9bc15e63bc09c3d80" Nov 28 09:23:03 crc kubenswrapper[4946]: E1128 09:23:03.992363 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:23:18 crc kubenswrapper[4946]: I1128 09:23:18.990727 4946 scope.go:117] "RemoveContainer" containerID="fe24f1531f160069fdd7ae3397845365c158536dbada8db9bc15e63bc09c3d80" Nov 28 09:23:18 crc kubenswrapper[4946]: E1128 09:23:18.991867 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:23:29 crc kubenswrapper[4946]: I1128 09:23:29.989683 4946 scope.go:117] "RemoveContainer" containerID="fe24f1531f160069fdd7ae3397845365c158536dbada8db9bc15e63bc09c3d80" Nov 28 09:23:29 crc kubenswrapper[4946]: E1128 09:23:29.990526 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:23:40 crc kubenswrapper[4946]: I1128 09:23:40.991533 4946 scope.go:117] "RemoveContainer" containerID="fe24f1531f160069fdd7ae3397845365c158536dbada8db9bc15e63bc09c3d80" Nov 28 09:23:40 crc kubenswrapper[4946]: E1128 09:23:40.992627 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:23:51 crc kubenswrapper[4946]: I1128 09:23:51.213918 4946 generic.go:334] "Generic (PLEG): container finished" 
podID="e2ad6585-a804-4048-b736-57d443b5d6cc" containerID="9bcf10eaed91eb23a950c3d5caf31fd346b7dd5ceb96529669cfc64f3a2932d2" exitCode=0 Nov 28 09:23:51 crc kubenswrapper[4946]: I1128 09:23:51.214011 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-npk87" event={"ID":"e2ad6585-a804-4048-b736-57d443b5d6cc","Type":"ContainerDied","Data":"9bcf10eaed91eb23a950c3d5caf31fd346b7dd5ceb96529669cfc64f3a2932d2"} Nov 28 09:23:51 crc kubenswrapper[4946]: I1128 09:23:51.990264 4946 scope.go:117] "RemoveContainer" containerID="fe24f1531f160069fdd7ae3397845365c158536dbada8db9bc15e63bc09c3d80" Nov 28 09:23:51 crc kubenswrapper[4946]: E1128 09:23:51.990785 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:23:52 crc kubenswrapper[4946]: I1128 09:23:52.868353 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-npk87" Nov 28 09:23:52 crc kubenswrapper[4946]: I1128 09:23:52.979142 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e2ad6585-a804-4048-b736-57d443b5d6cc-neutron-ovn-metadata-agent-neutron-config-0\") pod \"e2ad6585-a804-4048-b736-57d443b5d6cc\" (UID: \"e2ad6585-a804-4048-b736-57d443b5d6cc\") " Nov 28 09:23:52 crc kubenswrapper[4946]: I1128 09:23:52.979235 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e2ad6585-a804-4048-b736-57d443b5d6cc-ceph\") pod \"e2ad6585-a804-4048-b736-57d443b5d6cc\" (UID: \"e2ad6585-a804-4048-b736-57d443b5d6cc\") " Nov 28 09:23:52 crc kubenswrapper[4946]: I1128 09:23:52.979293 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e2ad6585-a804-4048-b736-57d443b5d6cc-nova-metadata-neutron-config-0\") pod \"e2ad6585-a804-4048-b736-57d443b5d6cc\" (UID: \"e2ad6585-a804-4048-b736-57d443b5d6cc\") " Nov 28 09:23:52 crc kubenswrapper[4946]: I1128 09:23:52.979332 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rx4f4\" (UniqueName: \"kubernetes.io/projected/e2ad6585-a804-4048-b736-57d443b5d6cc-kube-api-access-rx4f4\") pod \"e2ad6585-a804-4048-b736-57d443b5d6cc\" (UID: \"e2ad6585-a804-4048-b736-57d443b5d6cc\") " Nov 28 09:23:52 crc kubenswrapper[4946]: I1128 09:23:52.979427 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2ad6585-a804-4048-b736-57d443b5d6cc-neutron-metadata-combined-ca-bundle\") pod \"e2ad6585-a804-4048-b736-57d443b5d6cc\" (UID: \"e2ad6585-a804-4048-b736-57d443b5d6cc\") " Nov 28 09:23:52 crc kubenswrapper[4946]: I1128 09:23:52.979525 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e2ad6585-a804-4048-b736-57d443b5d6cc-inventory\") pod \"e2ad6585-a804-4048-b736-57d443b5d6cc\" (UID: 
\"e2ad6585-a804-4048-b736-57d443b5d6cc\") " Nov 28 09:23:52 crc kubenswrapper[4946]: I1128 09:23:52.979546 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e2ad6585-a804-4048-b736-57d443b5d6cc-ssh-key\") pod \"e2ad6585-a804-4048-b736-57d443b5d6cc\" (UID: \"e2ad6585-a804-4048-b736-57d443b5d6cc\") " Nov 28 09:23:52 crc kubenswrapper[4946]: I1128 09:23:52.986368 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2ad6585-a804-4048-b736-57d443b5d6cc-ceph" (OuterVolumeSpecName: "ceph") pod "e2ad6585-a804-4048-b736-57d443b5d6cc" (UID: "e2ad6585-a804-4048-b736-57d443b5d6cc"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:23:52 crc kubenswrapper[4946]: I1128 09:23:52.986422 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2ad6585-a804-4048-b736-57d443b5d6cc-kube-api-access-rx4f4" (OuterVolumeSpecName: "kube-api-access-rx4f4") pod "e2ad6585-a804-4048-b736-57d443b5d6cc" (UID: "e2ad6585-a804-4048-b736-57d443b5d6cc"). InnerVolumeSpecName "kube-api-access-rx4f4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:23:52 crc kubenswrapper[4946]: I1128 09:23:52.986417 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2ad6585-a804-4048-b736-57d443b5d6cc-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "e2ad6585-a804-4048-b736-57d443b5d6cc" (UID: "e2ad6585-a804-4048-b736-57d443b5d6cc"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:23:53 crc kubenswrapper[4946]: I1128 09:23:53.011813 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2ad6585-a804-4048-b736-57d443b5d6cc-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "e2ad6585-a804-4048-b736-57d443b5d6cc" (UID: "e2ad6585-a804-4048-b736-57d443b5d6cc"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:23:53 crc kubenswrapper[4946]: I1128 09:23:53.023708 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2ad6585-a804-4048-b736-57d443b5d6cc-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "e2ad6585-a804-4048-b736-57d443b5d6cc" (UID: "e2ad6585-a804-4048-b736-57d443b5d6cc"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:23:53 crc kubenswrapper[4946]: I1128 09:23:53.026682 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2ad6585-a804-4048-b736-57d443b5d6cc-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e2ad6585-a804-4048-b736-57d443b5d6cc" (UID: "e2ad6585-a804-4048-b736-57d443b5d6cc"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:23:53 crc kubenswrapper[4946]: I1128 09:23:53.026984 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2ad6585-a804-4048-b736-57d443b5d6cc-inventory" (OuterVolumeSpecName: "inventory") pod "e2ad6585-a804-4048-b736-57d443b5d6cc" (UID: "e2ad6585-a804-4048-b736-57d443b5d6cc"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:23:53 crc kubenswrapper[4946]: I1128 09:23:53.084949 4946 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e2ad6585-a804-4048-b736-57d443b5d6cc-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 09:23:53 crc kubenswrapper[4946]: I1128 09:23:53.084983 4946 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e2ad6585-a804-4048-b736-57d443b5d6cc-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 09:23:53 crc kubenswrapper[4946]: I1128 09:23:53.084994 4946 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e2ad6585-a804-4048-b736-57d443b5d6cc-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 28 09:23:53 crc kubenswrapper[4946]: I1128 09:23:53.085003 4946 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e2ad6585-a804-4048-b736-57d443b5d6cc-ceph\") on node \"crc\" DevicePath \"\"" Nov 28 09:23:53 crc kubenswrapper[4946]: I1128 09:23:53.085015 4946 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e2ad6585-a804-4048-b736-57d443b5d6cc-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 28 09:23:53 crc kubenswrapper[4946]: I1128 09:23:53.085024 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rx4f4\" (UniqueName: \"kubernetes.io/projected/e2ad6585-a804-4048-b736-57d443b5d6cc-kube-api-access-rx4f4\") on node \"crc\" DevicePath \"\"" Nov 28 09:23:53 crc kubenswrapper[4946]: I1128 09:23:53.085033 4946 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2ad6585-a804-4048-b736-57d443b5d6cc-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 09:23:53 crc kubenswrapper[4946]: I1128 09:23:53.241623 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-npk87" event={"ID":"e2ad6585-a804-4048-b736-57d443b5d6cc","Type":"ContainerDied","Data":"b7f3c1a8b81b6a8f140468861092b8807ab83228935e75f2fac52a55eb347acd"} Nov 28 09:23:53 crc kubenswrapper[4946]: I1128 09:23:53.241735 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7f3c1a8b81b6a8f140468861092b8807ab83228935e75f2fac52a55eb347acd" Nov 28 09:23:53 crc kubenswrapper[4946]: I1128 09:23:53.241686 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-npk87" Nov 28 09:23:53 crc kubenswrapper[4946]: I1128 09:23:53.359522 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-5qp5n"] Nov 28 09:23:53 crc kubenswrapper[4946]: E1128 09:23:53.360266 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cc5f29c-b756-4566-b796-a63b368f7578" containerName="neutron-metadata-openstack-openstack-networker" Nov 28 09:23:53 crc kubenswrapper[4946]: I1128 09:23:53.360297 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cc5f29c-b756-4566-b796-a63b368f7578" containerName="neutron-metadata-openstack-openstack-networker" Nov 28 09:23:53 crc kubenswrapper[4946]: E1128 09:23:53.360337 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2ad6585-a804-4048-b736-57d443b5d6cc" containerName="neutron-metadata-openstack-openstack-cell1" Nov 28 09:23:53 crc kubenswrapper[4946]: I1128 09:23:53.360352 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2ad6585-a804-4048-b736-57d443b5d6cc" containerName="neutron-metadata-openstack-openstack-cell1" Nov 28 09:23:53 crc kubenswrapper[4946]: I1128 09:23:53.360724 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cc5f29c-b756-4566-b796-a63b368f7578" containerName="neutron-metadata-openstack-openstack-networker" Nov 28 09:23:53 crc kubenswrapper[4946]: I1128 09:23:53.360774 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2ad6585-a804-4048-b736-57d443b5d6cc" containerName="neutron-metadata-openstack-openstack-cell1" Nov 28 09:23:53 crc kubenswrapper[4946]: I1128 09:23:53.362554 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-5qp5n" Nov 28 09:23:53 crc kubenswrapper[4946]: I1128 09:23:53.366256 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-psw2j" Nov 28 09:23:53 crc kubenswrapper[4946]: I1128 09:23:53.366359 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Nov 28 09:23:53 crc kubenswrapper[4946]: I1128 09:23:53.366487 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 28 09:23:53 crc kubenswrapper[4946]: I1128 09:23:53.366451 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 09:23:53 crc kubenswrapper[4946]: I1128 09:23:53.366701 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 28 09:23:53 crc kubenswrapper[4946]: I1128 09:23:53.380325 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-5qp5n"] Nov 28 09:23:53 crc kubenswrapper[4946]: I1128 09:23:53.492644 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3820a072-910f-4f3c-a69c-7c178e101ed3-ssh-key\") pod \"libvirt-openstack-openstack-cell1-5qp5n\" (UID: \"3820a072-910f-4f3c-a69c-7c178e101ed3\") " pod="openstack/libvirt-openstack-openstack-cell1-5qp5n" Nov 28 09:23:53 crc kubenswrapper[4946]: I1128 09:23:53.492757 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/3820a072-910f-4f3c-a69c-7c178e101ed3-libvirt-secret-0\") pod 
\"libvirt-openstack-openstack-cell1-5qp5n\" (UID: \"3820a072-910f-4f3c-a69c-7c178e101ed3\") " pod="openstack/libvirt-openstack-openstack-cell1-5qp5n" Nov 28 09:23:53 crc kubenswrapper[4946]: I1128 09:23:53.492841 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnsjz\" (UniqueName: \"kubernetes.io/projected/3820a072-910f-4f3c-a69c-7c178e101ed3-kube-api-access-lnsjz\") pod \"libvirt-openstack-openstack-cell1-5qp5n\" (UID: \"3820a072-910f-4f3c-a69c-7c178e101ed3\") " pod="openstack/libvirt-openstack-openstack-cell1-5qp5n" Nov 28 09:23:53 crc kubenswrapper[4946]: I1128 09:23:53.492935 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3820a072-910f-4f3c-a69c-7c178e101ed3-ceph\") pod \"libvirt-openstack-openstack-cell1-5qp5n\" (UID: \"3820a072-910f-4f3c-a69c-7c178e101ed3\") " pod="openstack/libvirt-openstack-openstack-cell1-5qp5n" Nov 28 09:23:53 crc kubenswrapper[4946]: I1128 09:23:53.493005 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3820a072-910f-4f3c-a69c-7c178e101ed3-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-5qp5n\" (UID: \"3820a072-910f-4f3c-a69c-7c178e101ed3\") " pod="openstack/libvirt-openstack-openstack-cell1-5qp5n" Nov 28 09:23:53 crc kubenswrapper[4946]: I1128 09:23:53.493057 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3820a072-910f-4f3c-a69c-7c178e101ed3-inventory\") pod \"libvirt-openstack-openstack-cell1-5qp5n\" (UID: \"3820a072-910f-4f3c-a69c-7c178e101ed3\") " pod="openstack/libvirt-openstack-openstack-cell1-5qp5n" Nov 28 09:23:53 crc kubenswrapper[4946]: I1128 09:23:53.594703 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/3820a072-910f-4f3c-a69c-7c178e101ed3-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-5qp5n\" (UID: \"3820a072-910f-4f3c-a69c-7c178e101ed3\") " pod="openstack/libvirt-openstack-openstack-cell1-5qp5n" Nov 28 09:23:53 crc kubenswrapper[4946]: I1128 09:23:53.594792 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnsjz\" (UniqueName: \"kubernetes.io/projected/3820a072-910f-4f3c-a69c-7c178e101ed3-kube-api-access-lnsjz\") pod \"libvirt-openstack-openstack-cell1-5qp5n\" (UID: \"3820a072-910f-4f3c-a69c-7c178e101ed3\") " pod="openstack/libvirt-openstack-openstack-cell1-5qp5n" Nov 28 09:23:53 crc kubenswrapper[4946]: I1128 09:23:53.594846 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3820a072-910f-4f3c-a69c-7c178e101ed3-ceph\") pod \"libvirt-openstack-openstack-cell1-5qp5n\" (UID: \"3820a072-910f-4f3c-a69c-7c178e101ed3\") " pod="openstack/libvirt-openstack-openstack-cell1-5qp5n" Nov 28 09:23:53 crc kubenswrapper[4946]: I1128 09:23:53.594908 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3820a072-910f-4f3c-a69c-7c178e101ed3-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-5qp5n\" (UID: \"3820a072-910f-4f3c-a69c-7c178e101ed3\") " pod="openstack/libvirt-openstack-openstack-cell1-5qp5n" Nov 28 09:23:53 crc 
kubenswrapper[4946]: I1128 09:23:53.594939 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3820a072-910f-4f3c-a69c-7c178e101ed3-inventory\") pod \"libvirt-openstack-openstack-cell1-5qp5n\" (UID: \"3820a072-910f-4f3c-a69c-7c178e101ed3\") " pod="openstack/libvirt-openstack-openstack-cell1-5qp5n" Nov 28 09:23:53 crc kubenswrapper[4946]: I1128 09:23:53.594957 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3820a072-910f-4f3c-a69c-7c178e101ed3-ssh-key\") pod \"libvirt-openstack-openstack-cell1-5qp5n\" (UID: \"3820a072-910f-4f3c-a69c-7c178e101ed3\") " pod="openstack/libvirt-openstack-openstack-cell1-5qp5n" Nov 28 09:23:53 crc kubenswrapper[4946]: I1128 09:23:53.599080 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3820a072-910f-4f3c-a69c-7c178e101ed3-inventory\") pod \"libvirt-openstack-openstack-cell1-5qp5n\" (UID: \"3820a072-910f-4f3c-a69c-7c178e101ed3\") " pod="openstack/libvirt-openstack-openstack-cell1-5qp5n" Nov 28 09:23:53 crc kubenswrapper[4946]: I1128 09:23:53.599375 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3820a072-910f-4f3c-a69c-7c178e101ed3-ceph\") pod \"libvirt-openstack-openstack-cell1-5qp5n\" (UID: \"3820a072-910f-4f3c-a69c-7c178e101ed3\") " pod="openstack/libvirt-openstack-openstack-cell1-5qp5n" Nov 28 09:23:53 crc kubenswrapper[4946]: I1128 09:23:53.601520 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3820a072-910f-4f3c-a69c-7c178e101ed3-ssh-key\") pod \"libvirt-openstack-openstack-cell1-5qp5n\" (UID: \"3820a072-910f-4f3c-a69c-7c178e101ed3\") " pod="openstack/libvirt-openstack-openstack-cell1-5qp5n" Nov 28 09:23:53 crc kubenswrapper[4946]: I1128 09:23:53.601891 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/3820a072-910f-4f3c-a69c-7c178e101ed3-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-5qp5n\" (UID: \"3820a072-910f-4f3c-a69c-7c178e101ed3\") " pod="openstack/libvirt-openstack-openstack-cell1-5qp5n" Nov 28 09:23:53 crc kubenswrapper[4946]: I1128 09:23:53.605737 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3820a072-910f-4f3c-a69c-7c178e101ed3-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-5qp5n\" (UID: \"3820a072-910f-4f3c-a69c-7c178e101ed3\") " pod="openstack/libvirt-openstack-openstack-cell1-5qp5n" Nov 28 09:23:53 crc kubenswrapper[4946]: I1128 09:23:53.617793 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnsjz\" (UniqueName: \"kubernetes.io/projected/3820a072-910f-4f3c-a69c-7c178e101ed3-kube-api-access-lnsjz\") pod \"libvirt-openstack-openstack-cell1-5qp5n\" (UID: \"3820a072-910f-4f3c-a69c-7c178e101ed3\") " pod="openstack/libvirt-openstack-openstack-cell1-5qp5n" Nov 28 09:23:53 crc kubenswrapper[4946]: I1128 09:23:53.685167 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-5qp5n" Nov 28 09:23:54 crc kubenswrapper[4946]: I1128 09:23:54.259219 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-5qp5n"] Nov 28 09:23:55 crc kubenswrapper[4946]: I1128 09:23:55.264702 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-5qp5n" event={"ID":"3820a072-910f-4f3c-a69c-7c178e101ed3","Type":"ContainerStarted","Data":"30573c60891cf3cf2f7702d2611c7c42f6e346a3d99c8400b4e48548fa1c47df"} Nov 28 09:23:55 crc kubenswrapper[4946]: I1128 09:23:55.265067 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-5qp5n" event={"ID":"3820a072-910f-4f3c-a69c-7c178e101ed3","Type":"ContainerStarted","Data":"31bb125c9e4ef0a7861374f27c1896bc02fe84aff2e2d14bec7b7e2f0ce00e51"} Nov 28 09:23:55 crc kubenswrapper[4946]: I1128 09:23:55.293109 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-openstack-openstack-cell1-5qp5n" podStartSLOduration=1.6541837419999998 podStartE2EDuration="2.293091082s" podCreationTimestamp="2025-11-28 09:23:53 +0000 UTC" firstStartedPulling="2025-11-28 09:23:54.259032574 +0000 UTC m=+9088.637097685" lastFinishedPulling="2025-11-28 09:23:54.897939914 +0000 UTC m=+9089.276005025" observedRunningTime="2025-11-28 09:23:55.286539519 +0000 UTC m=+9089.664604650" watchObservedRunningTime="2025-11-28 09:23:55.293091082 +0000 UTC m=+9089.671156193" Nov 28 09:24:04 crc kubenswrapper[4946]: I1128 09:24:04.990742 4946 scope.go:117] "RemoveContainer" containerID="fe24f1531f160069fdd7ae3397845365c158536dbada8db9bc15e63bc09c3d80" Nov 28 09:24:04 crc kubenswrapper[4946]: E1128 09:24:04.991664 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:24:19 crc kubenswrapper[4946]: I1128 09:24:19.989827 4946 scope.go:117] "RemoveContainer" containerID="fe24f1531f160069fdd7ae3397845365c158536dbada8db9bc15e63bc09c3d80" Nov 28 09:24:19 crc kubenswrapper[4946]: E1128 09:24:19.990875 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:24:30 crc kubenswrapper[4946]: I1128 09:24:30.991089 4946 scope.go:117] "RemoveContainer" containerID="fe24f1531f160069fdd7ae3397845365c158536dbada8db9bc15e63bc09c3d80" Nov 28 09:24:30 crc kubenswrapper[4946]: E1128 09:24:30.991814 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" 
podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:24:42 crc kubenswrapper[4946]: I1128 09:24:42.990605 4946 scope.go:117] "RemoveContainer" containerID="fe24f1531f160069fdd7ae3397845365c158536dbada8db9bc15e63bc09c3d80" Nov 28 09:24:42 crc kubenswrapper[4946]: E1128 09:24:42.991439 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:24:54 crc kubenswrapper[4946]: I1128 09:24:54.990701 4946 scope.go:117] "RemoveContainer" containerID="fe24f1531f160069fdd7ae3397845365c158536dbada8db9bc15e63bc09c3d80" Nov 28 09:24:56 crc kubenswrapper[4946]: I1128 09:24:56.042147 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerStarted","Data":"135f3d68684897152e9b71c56021d71c909a6dea0f6f5ae66f99a5bc37e30f4d"} Nov 28 09:27:24 crc kubenswrapper[4946]: I1128 09:27:24.731110 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 09:27:24 crc kubenswrapper[4946]: I1128 09:27:24.732071 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 09:27:54 crc kubenswrapper[4946]: I1128 09:27:54.730895 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 09:27:54 crc kubenswrapper[4946]: I1128 09:27:54.731765 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 09:28:03 crc kubenswrapper[4946]: I1128 09:28:03.565364 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4r6xx"] Nov 28 09:28:03 crc kubenswrapper[4946]: I1128 09:28:03.570153 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4r6xx" Nov 28 09:28:03 crc kubenswrapper[4946]: I1128 09:28:03.605847 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4r6xx"] Nov 28 09:28:03 crc kubenswrapper[4946]: I1128 09:28:03.710557 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dfd0ede-73d6-4db1-a370-2fa036ef7df5-catalog-content\") pod \"redhat-marketplace-4r6xx\" (UID: \"2dfd0ede-73d6-4db1-a370-2fa036ef7df5\") " pod="openshift-marketplace/redhat-marketplace-4r6xx" Nov 28 09:28:03 crc kubenswrapper[4946]: I1128 09:28:03.710616 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jvnq\" (UniqueName: \"kubernetes.io/projected/2dfd0ede-73d6-4db1-a370-2fa036ef7df5-kube-api-access-4jvnq\") pod \"redhat-marketplace-4r6xx\" (UID: \"2dfd0ede-73d6-4db1-a370-2fa036ef7df5\") " pod="openshift-marketplace/redhat-marketplace-4r6xx" Nov 28 09:28:03 crc kubenswrapper[4946]: I1128 09:28:03.710810 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dfd0ede-73d6-4db1-a370-2fa036ef7df5-utilities\") pod \"redhat-marketplace-4r6xx\" (UID: \"2dfd0ede-73d6-4db1-a370-2fa036ef7df5\") " pod="openshift-marketplace/redhat-marketplace-4r6xx" Nov 28 09:28:03 crc kubenswrapper[4946]: I1128 09:28:03.815400 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dfd0ede-73d6-4db1-a370-2fa036ef7df5-catalog-content\") pod \"redhat-marketplace-4r6xx\" (UID: \"2dfd0ede-73d6-4db1-a370-2fa036ef7df5\") " pod="openshift-marketplace/redhat-marketplace-4r6xx" Nov 28 09:28:03 crc kubenswrapper[4946]: I1128 09:28:03.815718 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jvnq\" (UniqueName: \"kubernetes.io/projected/2dfd0ede-73d6-4db1-a370-2fa036ef7df5-kube-api-access-4jvnq\") pod \"redhat-marketplace-4r6xx\" (UID: \"2dfd0ede-73d6-4db1-a370-2fa036ef7df5\") " pod="openshift-marketplace/redhat-marketplace-4r6xx" Nov 28 09:28:03 crc kubenswrapper[4946]: I1128 09:28:03.815928 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dfd0ede-73d6-4db1-a370-2fa036ef7df5-catalog-content\") pod \"redhat-marketplace-4r6xx\" (UID: \"2dfd0ede-73d6-4db1-a370-2fa036ef7df5\") " pod="openshift-marketplace/redhat-marketplace-4r6xx" Nov 28 09:28:03 crc kubenswrapper[4946]: I1128 09:28:03.816032 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dfd0ede-73d6-4db1-a370-2fa036ef7df5-utilities\") pod \"redhat-marketplace-4r6xx\" (UID: \"2dfd0ede-73d6-4db1-a370-2fa036ef7df5\") " pod="openshift-marketplace/redhat-marketplace-4r6xx" Nov 28 09:28:03 crc kubenswrapper[4946]: I1128 09:28:03.816375 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dfd0ede-73d6-4db1-a370-2fa036ef7df5-utilities\") pod \"redhat-marketplace-4r6xx\" (UID: \"2dfd0ede-73d6-4db1-a370-2fa036ef7df5\") " pod="openshift-marketplace/redhat-marketplace-4r6xx" Nov 28 09:28:03 crc kubenswrapper[4946]: I1128 09:28:03.836864 4946 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-4jvnq\" (UniqueName: \"kubernetes.io/projected/2dfd0ede-73d6-4db1-a370-2fa036ef7df5-kube-api-access-4jvnq\") pod \"redhat-marketplace-4r6xx\" (UID: \"2dfd0ede-73d6-4db1-a370-2fa036ef7df5\") " pod="openshift-marketplace/redhat-marketplace-4r6xx" Nov 28 09:28:03 crc kubenswrapper[4946]: I1128 09:28:03.903675 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4r6xx" Nov 28 09:28:04 crc kubenswrapper[4946]: I1128 09:28:04.425105 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4r6xx"] Nov 28 09:28:05 crc kubenswrapper[4946]: I1128 09:28:05.446852 4946 generic.go:334] "Generic (PLEG): container finished" podID="2dfd0ede-73d6-4db1-a370-2fa036ef7df5" containerID="5245db722db5dd5818794d30eddd1f50c104e8ef41dca1363f3663bca785a34b" exitCode=0 Nov 28 09:28:05 crc kubenswrapper[4946]: I1128 09:28:05.447439 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4r6xx" event={"ID":"2dfd0ede-73d6-4db1-a370-2fa036ef7df5","Type":"ContainerDied","Data":"5245db722db5dd5818794d30eddd1f50c104e8ef41dca1363f3663bca785a34b"} Nov 28 09:28:05 crc kubenswrapper[4946]: I1128 09:28:05.447498 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4r6xx" event={"ID":"2dfd0ede-73d6-4db1-a370-2fa036ef7df5","Type":"ContainerStarted","Data":"4896edf8a8f7c51b9d7379af67f748fb2d5a6555e55d917ae8199ce33db093b3"} Nov 28 09:28:05 crc kubenswrapper[4946]: I1128 09:28:05.449788 4946 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 09:28:07 crc kubenswrapper[4946]: I1128 09:28:07.494909 4946 generic.go:334] "Generic (PLEG): container finished" podID="2dfd0ede-73d6-4db1-a370-2fa036ef7df5" containerID="0ac0eb49525e4236c347b1bec82a1cb38cee6525cd281ab07488e0fbe3301f5d" exitCode=0 Nov 28 09:28:07 crc kubenswrapper[4946]: I1128 09:28:07.495014 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4r6xx" event={"ID":"2dfd0ede-73d6-4db1-a370-2fa036ef7df5","Type":"ContainerDied","Data":"0ac0eb49525e4236c347b1bec82a1cb38cee6525cd281ab07488e0fbe3301f5d"} Nov 28 09:28:09 crc kubenswrapper[4946]: I1128 09:28:09.519088 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4r6xx" event={"ID":"2dfd0ede-73d6-4db1-a370-2fa036ef7df5","Type":"ContainerStarted","Data":"8287075c4ff6a7b2c225ae4b1c9e0b0849b014d9ed5712cfcdf800decaf010d4"} Nov 28 09:28:09 crc kubenswrapper[4946]: I1128 09:28:09.549577 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4r6xx" podStartSLOduration=3.779877344 podStartE2EDuration="6.549561296s" podCreationTimestamp="2025-11-28 09:28:03 +0000 UTC" firstStartedPulling="2025-11-28 09:28:05.448981877 +0000 UTC m=+9339.827046988" lastFinishedPulling="2025-11-28 09:28:08.218665799 +0000 UTC m=+9342.596730940" observedRunningTime="2025-11-28 09:28:09.544649394 +0000 UTC m=+9343.922714515" watchObservedRunningTime="2025-11-28 09:28:09.549561296 +0000 UTC m=+9343.927626407" Nov 28 09:28:13 crc kubenswrapper[4946]: I1128 09:28:13.904693 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4r6xx" Nov 28 09:28:13 crc kubenswrapper[4946]: I1128 09:28:13.905187 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4r6xx" Nov 28 09:28:13 crc kubenswrapper[4946]: I1128 09:28:13.976064 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4r6xx" Nov 28 09:28:14 crc kubenswrapper[4946]: I1128 09:28:14.701017 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4r6xx" Nov 28 09:28:14 crc kubenswrapper[4946]: I1128 09:28:14.765144 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4r6xx"] Nov 28 09:28:16 crc kubenswrapper[4946]: I1128 09:28:16.637249 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4r6xx" podUID="2dfd0ede-73d6-4db1-a370-2fa036ef7df5" containerName="registry-server" containerID="cri-o://8287075c4ff6a7b2c225ae4b1c9e0b0849b014d9ed5712cfcdf800decaf010d4" gracePeriod=2 Nov 28 09:28:17 crc kubenswrapper[4946]: I1128 09:28:17.267395 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4r6xx" Nov 28 09:28:17 crc kubenswrapper[4946]: I1128 09:28:17.430616 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dfd0ede-73d6-4db1-a370-2fa036ef7df5-catalog-content\") pod \"2dfd0ede-73d6-4db1-a370-2fa036ef7df5\" (UID: \"2dfd0ede-73d6-4db1-a370-2fa036ef7df5\") " Nov 28 09:28:17 crc kubenswrapper[4946]: I1128 09:28:17.430687 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dfd0ede-73d6-4db1-a370-2fa036ef7df5-utilities\") pod \"2dfd0ede-73d6-4db1-a370-2fa036ef7df5\" (UID: \"2dfd0ede-73d6-4db1-a370-2fa036ef7df5\") " Nov 28 09:28:17 crc kubenswrapper[4946]: I1128 09:28:17.430837 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jvnq\" (UniqueName: \"kubernetes.io/projected/2dfd0ede-73d6-4db1-a370-2fa036ef7df5-kube-api-access-4jvnq\") pod \"2dfd0ede-73d6-4db1-a370-2fa036ef7df5\" (UID: \"2dfd0ede-73d6-4db1-a370-2fa036ef7df5\") " Nov 28 09:28:17 crc kubenswrapper[4946]: I1128 09:28:17.433824 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2dfd0ede-73d6-4db1-a370-2fa036ef7df5-utilities" (OuterVolumeSpecName: "utilities") pod "2dfd0ede-73d6-4db1-a370-2fa036ef7df5" (UID: "2dfd0ede-73d6-4db1-a370-2fa036ef7df5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 09:28:17 crc kubenswrapper[4946]: I1128 09:28:17.440950 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dfd0ede-73d6-4db1-a370-2fa036ef7df5-kube-api-access-4jvnq" (OuterVolumeSpecName: "kube-api-access-4jvnq") pod "2dfd0ede-73d6-4db1-a370-2fa036ef7df5" (UID: "2dfd0ede-73d6-4db1-a370-2fa036ef7df5"). InnerVolumeSpecName "kube-api-access-4jvnq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:28:17 crc kubenswrapper[4946]: I1128 09:28:17.456155 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2dfd0ede-73d6-4db1-a370-2fa036ef7df5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2dfd0ede-73d6-4db1-a370-2fa036ef7df5" (UID: "2dfd0ede-73d6-4db1-a370-2fa036ef7df5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 09:28:17 crc kubenswrapper[4946]: I1128 09:28:17.535717 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jvnq\" (UniqueName: \"kubernetes.io/projected/2dfd0ede-73d6-4db1-a370-2fa036ef7df5-kube-api-access-4jvnq\") on node \"crc\" DevicePath \"\"" Nov 28 09:28:17 crc kubenswrapper[4946]: I1128 09:28:17.535794 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dfd0ede-73d6-4db1-a370-2fa036ef7df5-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 09:28:17 crc kubenswrapper[4946]: I1128 09:28:17.535812 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dfd0ede-73d6-4db1-a370-2fa036ef7df5-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 09:28:17 crc kubenswrapper[4946]: I1128 09:28:17.655784 4946 generic.go:334] "Generic (PLEG): container finished" podID="2dfd0ede-73d6-4db1-a370-2fa036ef7df5" containerID="8287075c4ff6a7b2c225ae4b1c9e0b0849b014d9ed5712cfcdf800decaf010d4" exitCode=0 Nov 28 09:28:17 crc kubenswrapper[4946]: I1128 09:28:17.655862 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4r6xx" event={"ID":"2dfd0ede-73d6-4db1-a370-2fa036ef7df5","Type":"ContainerDied","Data":"8287075c4ff6a7b2c225ae4b1c9e0b0849b014d9ed5712cfcdf800decaf010d4"} Nov 28 09:28:17 crc kubenswrapper[4946]: I1128 09:28:17.655918 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4r6xx" event={"ID":"2dfd0ede-73d6-4db1-a370-2fa036ef7df5","Type":"ContainerDied","Data":"4896edf8a8f7c51b9d7379af67f748fb2d5a6555e55d917ae8199ce33db093b3"} Nov 28 09:28:17 crc kubenswrapper[4946]: I1128 09:28:17.655907 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4r6xx" Nov 28 09:28:17 crc kubenswrapper[4946]: I1128 09:28:17.655944 4946 scope.go:117] "RemoveContainer" containerID="8287075c4ff6a7b2c225ae4b1c9e0b0849b014d9ed5712cfcdf800decaf010d4" Nov 28 09:28:17 crc kubenswrapper[4946]: I1128 09:28:17.687779 4946 scope.go:117] "RemoveContainer" containerID="0ac0eb49525e4236c347b1bec82a1cb38cee6525cd281ab07488e0fbe3301f5d" Nov 28 09:28:17 crc kubenswrapper[4946]: I1128 09:28:17.718567 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4r6xx"] Nov 28 09:28:17 crc kubenswrapper[4946]: I1128 09:28:17.725633 4946 scope.go:117] "RemoveContainer" containerID="5245db722db5dd5818794d30eddd1f50c104e8ef41dca1363f3663bca785a34b" Nov 28 09:28:17 crc kubenswrapper[4946]: I1128 09:28:17.739620 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4r6xx"] Nov 28 09:28:17 crc kubenswrapper[4946]: I1128 09:28:17.775712 4946 scope.go:117] "RemoveContainer" containerID="8287075c4ff6a7b2c225ae4b1c9e0b0849b014d9ed5712cfcdf800decaf010d4" Nov 28 09:28:17 crc kubenswrapper[4946]: E1128 09:28:17.776228 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8287075c4ff6a7b2c225ae4b1c9e0b0849b014d9ed5712cfcdf800decaf010d4\": container with ID starting with 8287075c4ff6a7b2c225ae4b1c9e0b0849b014d9ed5712cfcdf800decaf010d4 not found: ID does not exist" containerID="8287075c4ff6a7b2c225ae4b1c9e0b0849b014d9ed5712cfcdf800decaf010d4" Nov 28 09:28:17 crc kubenswrapper[4946]: I1128 09:28:17.776272 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8287075c4ff6a7b2c225ae4b1c9e0b0849b014d9ed5712cfcdf800decaf010d4"} err="failed to get container status \"8287075c4ff6a7b2c225ae4b1c9e0b0849b014d9ed5712cfcdf800decaf010d4\": rpc error: code = NotFound desc = could not find container \"8287075c4ff6a7b2c225ae4b1c9e0b0849b014d9ed5712cfcdf800decaf010d4\": container with ID starting with 8287075c4ff6a7b2c225ae4b1c9e0b0849b014d9ed5712cfcdf800decaf010d4 not found: ID does not exist" Nov 28 09:28:17 crc kubenswrapper[4946]: I1128 09:28:17.776297 4946 scope.go:117] "RemoveContainer" containerID="0ac0eb49525e4236c347b1bec82a1cb38cee6525cd281ab07488e0fbe3301f5d" Nov 28 09:28:17 crc kubenswrapper[4946]: E1128 09:28:17.777076 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ac0eb49525e4236c347b1bec82a1cb38cee6525cd281ab07488e0fbe3301f5d\": container with ID starting with 0ac0eb49525e4236c347b1bec82a1cb38cee6525cd281ab07488e0fbe3301f5d not found: ID does not exist" containerID="0ac0eb49525e4236c347b1bec82a1cb38cee6525cd281ab07488e0fbe3301f5d" Nov 28 09:28:17 crc kubenswrapper[4946]: I1128 09:28:17.777120 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ac0eb49525e4236c347b1bec82a1cb38cee6525cd281ab07488e0fbe3301f5d"} err="failed to get container status \"0ac0eb49525e4236c347b1bec82a1cb38cee6525cd281ab07488e0fbe3301f5d\": rpc error: code = NotFound desc = could not find container \"0ac0eb49525e4236c347b1bec82a1cb38cee6525cd281ab07488e0fbe3301f5d\": container with ID starting with 0ac0eb49525e4236c347b1bec82a1cb38cee6525cd281ab07488e0fbe3301f5d not found: ID does not exist" Nov 28 09:28:17 crc kubenswrapper[4946]: I1128 09:28:17.777147 4946 scope.go:117] "RemoveContainer" 
containerID="5245db722db5dd5818794d30eddd1f50c104e8ef41dca1363f3663bca785a34b" Nov 28 09:28:17 crc kubenswrapper[4946]: E1128 09:28:17.777668 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5245db722db5dd5818794d30eddd1f50c104e8ef41dca1363f3663bca785a34b\": container with ID starting with 5245db722db5dd5818794d30eddd1f50c104e8ef41dca1363f3663bca785a34b not found: ID does not exist" containerID="5245db722db5dd5818794d30eddd1f50c104e8ef41dca1363f3663bca785a34b" Nov 28 09:28:17 crc kubenswrapper[4946]: I1128 09:28:17.777696 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5245db722db5dd5818794d30eddd1f50c104e8ef41dca1363f3663bca785a34b"} err="failed to get container status \"5245db722db5dd5818794d30eddd1f50c104e8ef41dca1363f3663bca785a34b\": rpc error: code = NotFound desc = could not find container \"5245db722db5dd5818794d30eddd1f50c104e8ef41dca1363f3663bca785a34b\": container with ID starting with 5245db722db5dd5818794d30eddd1f50c104e8ef41dca1363f3663bca785a34b not found: ID does not exist" Nov 28 09:28:18 crc kubenswrapper[4946]: I1128 09:28:18.006853 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dfd0ede-73d6-4db1-a370-2fa036ef7df5" path="/var/lib/kubelet/pods/2dfd0ede-73d6-4db1-a370-2fa036ef7df5/volumes" Nov 28 09:28:24 crc kubenswrapper[4946]: I1128 09:28:24.730379 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 09:28:24 crc kubenswrapper[4946]: I1128 09:28:24.730837 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 09:28:24 crc kubenswrapper[4946]: I1128 09:28:24.730893 4946 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" Nov 28 09:28:24 crc kubenswrapper[4946]: I1128 09:28:24.731842 4946 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"135f3d68684897152e9b71c56021d71c909a6dea0f6f5ae66f99a5bc37e30f4d"} pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 09:28:24 crc kubenswrapper[4946]: I1128 09:28:24.731911 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" containerID="cri-o://135f3d68684897152e9b71c56021d71c909a6dea0f6f5ae66f99a5bc37e30f4d" gracePeriod=600 Nov 28 09:28:25 crc kubenswrapper[4946]: I1128 09:28:25.755499 4946 generic.go:334] "Generic (PLEG): container finished" podID="7450befc-262f-45d1-a5f4-f445e540185b" containerID="135f3d68684897152e9b71c56021d71c909a6dea0f6f5ae66f99a5bc37e30f4d" exitCode=0 Nov 28 09:28:25 crc kubenswrapper[4946]: I1128 09:28:25.755553 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerDied","Data":"135f3d68684897152e9b71c56021d71c909a6dea0f6f5ae66f99a5bc37e30f4d"} Nov 28 09:28:25 crc kubenswrapper[4946]: I1128 09:28:25.755936 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerStarted","Data":"1a226ca4ef8fd70e0b3d4414936e85e3e1ca8212b259a322cd671dd24c7ac54d"} Nov 28 09:28:25 crc kubenswrapper[4946]: I1128 09:28:25.755988 4946 scope.go:117] "RemoveContainer" containerID="fe24f1531f160069fdd7ae3397845365c158536dbada8db9bc15e63bc09c3d80" Nov 28 09:29:01 crc kubenswrapper[4946]: I1128 09:29:01.861164 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tbjf9"] Nov 28 09:29:01 crc kubenswrapper[4946]: E1128 09:29:01.862580 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dfd0ede-73d6-4db1-a370-2fa036ef7df5" containerName="registry-server" Nov 28 09:29:01 crc kubenswrapper[4946]: I1128 09:29:01.862601 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dfd0ede-73d6-4db1-a370-2fa036ef7df5" containerName="registry-server" Nov 28 09:29:01 crc kubenswrapper[4946]: E1128 09:29:01.862655 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dfd0ede-73d6-4db1-a370-2fa036ef7df5" containerName="extract-utilities" Nov 28 09:29:01 crc kubenswrapper[4946]: I1128 09:29:01.862666 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dfd0ede-73d6-4db1-a370-2fa036ef7df5" containerName="extract-utilities" Nov 28 09:29:01 crc kubenswrapper[4946]: E1128 09:29:01.862690 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dfd0ede-73d6-4db1-a370-2fa036ef7df5" containerName="extract-content" Nov 28 09:29:01 crc kubenswrapper[4946]: I1128 09:29:01.862703 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dfd0ede-73d6-4db1-a370-2fa036ef7df5" containerName="extract-content" Nov 28 09:29:01 crc kubenswrapper[4946]: I1128 09:29:01.863025 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dfd0ede-73d6-4db1-a370-2fa036ef7df5" containerName="registry-server" Nov 28 09:29:01 crc kubenswrapper[4946]: I1128 09:29:01.865643 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tbjf9" Nov 28 09:29:01 crc kubenswrapper[4946]: I1128 09:29:01.877791 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tbjf9"] Nov 28 09:29:02 crc kubenswrapper[4946]: I1128 09:29:02.013309 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4dc1376a-004b-436f-9cd5-4cb5cacb8639-catalog-content\") pod \"certified-operators-tbjf9\" (UID: \"4dc1376a-004b-436f-9cd5-4cb5cacb8639\") " pod="openshift-marketplace/certified-operators-tbjf9" Nov 28 09:29:02 crc kubenswrapper[4946]: I1128 09:29:02.014051 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4dc1376a-004b-436f-9cd5-4cb5cacb8639-utilities\") pod \"certified-operators-tbjf9\" (UID: \"4dc1376a-004b-436f-9cd5-4cb5cacb8639\") " pod="openshift-marketplace/certified-operators-tbjf9" Nov 28 09:29:02 crc kubenswrapper[4946]: I1128 09:29:02.014223 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prdd7\" (UniqueName: \"kubernetes.io/projected/4dc1376a-004b-436f-9cd5-4cb5cacb8639-kube-api-access-prdd7\") pod \"certified-operators-tbjf9\" (UID: \"4dc1376a-004b-436f-9cd5-4cb5cacb8639\") " pod="openshift-marketplace/certified-operators-tbjf9" Nov 28 09:29:02 crc kubenswrapper[4946]: I1128 09:29:02.116494 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4dc1376a-004b-436f-9cd5-4cb5cacb8639-catalog-content\") pod \"certified-operators-tbjf9\" (UID: \"4dc1376a-004b-436f-9cd5-4cb5cacb8639\") " pod="openshift-marketplace/certified-operators-tbjf9" Nov 28 09:29:02 crc kubenswrapper[4946]: I1128 09:29:02.116698 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4dc1376a-004b-436f-9cd5-4cb5cacb8639-utilities\") pod \"certified-operators-tbjf9\" (UID: \"4dc1376a-004b-436f-9cd5-4cb5cacb8639\") " pod="openshift-marketplace/certified-operators-tbjf9" Nov 28 09:29:02 crc kubenswrapper[4946]: I1128 09:29:02.116792 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prdd7\" (UniqueName: \"kubernetes.io/projected/4dc1376a-004b-436f-9cd5-4cb5cacb8639-kube-api-access-prdd7\") pod \"certified-operators-tbjf9\" (UID: \"4dc1376a-004b-436f-9cd5-4cb5cacb8639\") " pod="openshift-marketplace/certified-operators-tbjf9" Nov 28 09:29:02 crc kubenswrapper[4946]: I1128 09:29:02.117317 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4dc1376a-004b-436f-9cd5-4cb5cacb8639-catalog-content\") pod \"certified-operators-tbjf9\" (UID: \"4dc1376a-004b-436f-9cd5-4cb5cacb8639\") " pod="openshift-marketplace/certified-operators-tbjf9" Nov 28 09:29:02 crc kubenswrapper[4946]: I1128 09:29:02.117764 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4dc1376a-004b-436f-9cd5-4cb5cacb8639-utilities\") pod \"certified-operators-tbjf9\" (UID: \"4dc1376a-004b-436f-9cd5-4cb5cacb8639\") " pod="openshift-marketplace/certified-operators-tbjf9" Nov 28 09:29:02 crc kubenswrapper[4946]: I1128 09:29:02.156120 4946 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-prdd7\" (UniqueName: \"kubernetes.io/projected/4dc1376a-004b-436f-9cd5-4cb5cacb8639-kube-api-access-prdd7\") pod \"certified-operators-tbjf9\" (UID: \"4dc1376a-004b-436f-9cd5-4cb5cacb8639\") " pod="openshift-marketplace/certified-operators-tbjf9" Nov 28 09:29:02 crc kubenswrapper[4946]: I1128 09:29:02.206082 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tbjf9" Nov 28 09:29:02 crc kubenswrapper[4946]: I1128 09:29:02.808263 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tbjf9"] Nov 28 09:29:02 crc kubenswrapper[4946]: W1128 09:29:02.811899 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4dc1376a_004b_436f_9cd5_4cb5cacb8639.slice/crio-49f77296d2d171c8111226071c210df4e4d8399af42532fde62d6890100758c3 WatchSource:0}: Error finding container 49f77296d2d171c8111226071c210df4e4d8399af42532fde62d6890100758c3: Status 404 returned error can't find the container with id 49f77296d2d171c8111226071c210df4e4d8399af42532fde62d6890100758c3 Nov 28 09:29:03 crc kubenswrapper[4946]: I1128 09:29:03.233168 4946 generic.go:334] "Generic (PLEG): container finished" podID="4dc1376a-004b-436f-9cd5-4cb5cacb8639" containerID="e22d018e99e909afce358422703bb1c81e05c4088acf7b7d77f951518e7d1dd4" exitCode=0 Nov 28 09:29:03 crc kubenswrapper[4946]: I1128 09:29:03.233236 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tbjf9" event={"ID":"4dc1376a-004b-436f-9cd5-4cb5cacb8639","Type":"ContainerDied","Data":"e22d018e99e909afce358422703bb1c81e05c4088acf7b7d77f951518e7d1dd4"} Nov 28 09:29:03 crc kubenswrapper[4946]: I1128 09:29:03.233570 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tbjf9" event={"ID":"4dc1376a-004b-436f-9cd5-4cb5cacb8639","Type":"ContainerStarted","Data":"49f77296d2d171c8111226071c210df4e4d8399af42532fde62d6890100758c3"} Nov 28 09:29:03 crc kubenswrapper[4946]: I1128 09:29:03.235634 4946 generic.go:334] "Generic (PLEG): container finished" podID="3820a072-910f-4f3c-a69c-7c178e101ed3" containerID="30573c60891cf3cf2f7702d2611c7c42f6e346a3d99c8400b4e48548fa1c47df" exitCode=0 Nov 28 09:29:03 crc kubenswrapper[4946]: I1128 09:29:03.235679 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-5qp5n" event={"ID":"3820a072-910f-4f3c-a69c-7c178e101ed3","Type":"ContainerDied","Data":"30573c60891cf3cf2f7702d2611c7c42f6e346a3d99c8400b4e48548fa1c47df"} Nov 28 09:29:04 crc kubenswrapper[4946]: I1128 09:29:04.251121 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tbjf9" event={"ID":"4dc1376a-004b-436f-9cd5-4cb5cacb8639","Type":"ContainerStarted","Data":"5df7c38f464cf6baf7ecb8cb2ab379602a9930ad21c2e624b0cbf6beb275d361"} Nov 28 09:29:04 crc kubenswrapper[4946]: I1128 09:29:04.831219 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-5qp5n" Nov 28 09:29:04 crc kubenswrapper[4946]: I1128 09:29:04.986680 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3820a072-910f-4f3c-a69c-7c178e101ed3-libvirt-combined-ca-bundle\") pod \"3820a072-910f-4f3c-a69c-7c178e101ed3\" (UID: \"3820a072-910f-4f3c-a69c-7c178e101ed3\") " Nov 28 09:29:04 crc kubenswrapper[4946]: I1128 09:29:04.986760 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnsjz\" (UniqueName: \"kubernetes.io/projected/3820a072-910f-4f3c-a69c-7c178e101ed3-kube-api-access-lnsjz\") pod \"3820a072-910f-4f3c-a69c-7c178e101ed3\" (UID: \"3820a072-910f-4f3c-a69c-7c178e101ed3\") " Nov 28 09:29:04 crc kubenswrapper[4946]: I1128 09:29:04.986801 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/3820a072-910f-4f3c-a69c-7c178e101ed3-libvirt-secret-0\") pod \"3820a072-910f-4f3c-a69c-7c178e101ed3\" (UID: \"3820a072-910f-4f3c-a69c-7c178e101ed3\") " Nov 28 09:29:04 crc kubenswrapper[4946]: I1128 09:29:04.986829 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3820a072-910f-4f3c-a69c-7c178e101ed3-ssh-key\") pod \"3820a072-910f-4f3c-a69c-7c178e101ed3\" (UID: \"3820a072-910f-4f3c-a69c-7c178e101ed3\") " Nov 28 09:29:04 crc kubenswrapper[4946]: I1128 09:29:04.986847 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3820a072-910f-4f3c-a69c-7c178e101ed3-inventory\") pod \"3820a072-910f-4f3c-a69c-7c178e101ed3\" (UID: \"3820a072-910f-4f3c-a69c-7c178e101ed3\") " Nov 28 09:29:04 crc kubenswrapper[4946]: I1128 09:29:04.987144 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3820a072-910f-4f3c-a69c-7c178e101ed3-ceph\") pod \"3820a072-910f-4f3c-a69c-7c178e101ed3\" (UID: \"3820a072-910f-4f3c-a69c-7c178e101ed3\") " Nov 28 09:29:04 crc kubenswrapper[4946]: I1128 09:29:04.992741 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3820a072-910f-4f3c-a69c-7c178e101ed3-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "3820a072-910f-4f3c-a69c-7c178e101ed3" (UID: "3820a072-910f-4f3c-a69c-7c178e101ed3"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:29:04 crc kubenswrapper[4946]: I1128 09:29:04.994108 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3820a072-910f-4f3c-a69c-7c178e101ed3-kube-api-access-lnsjz" (OuterVolumeSpecName: "kube-api-access-lnsjz") pod "3820a072-910f-4f3c-a69c-7c178e101ed3" (UID: "3820a072-910f-4f3c-a69c-7c178e101ed3"). InnerVolumeSpecName "kube-api-access-lnsjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:29:04 crc kubenswrapper[4946]: I1128 09:29:04.995635 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3820a072-910f-4f3c-a69c-7c178e101ed3-ceph" (OuterVolumeSpecName: "ceph") pod "3820a072-910f-4f3c-a69c-7c178e101ed3" (UID: "3820a072-910f-4f3c-a69c-7c178e101ed3"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.020903 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3820a072-910f-4f3c-a69c-7c178e101ed3-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "3820a072-910f-4f3c-a69c-7c178e101ed3" (UID: "3820a072-910f-4f3c-a69c-7c178e101ed3"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.030271 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3820a072-910f-4f3c-a69c-7c178e101ed3-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "3820a072-910f-4f3c-a69c-7c178e101ed3" (UID: "3820a072-910f-4f3c-a69c-7c178e101ed3"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.049720 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3820a072-910f-4f3c-a69c-7c178e101ed3-inventory" (OuterVolumeSpecName: "inventory") pod "3820a072-910f-4f3c-a69c-7c178e101ed3" (UID: "3820a072-910f-4f3c-a69c-7c178e101ed3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.089937 4946 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3820a072-910f-4f3c-a69c-7c178e101ed3-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.089967 4946 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3820a072-910f-4f3c-a69c-7c178e101ed3-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.089978 4946 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3820a072-910f-4f3c-a69c-7c178e101ed3-ceph\") on node \"crc\" DevicePath \"\"" Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.089988 4946 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3820a072-910f-4f3c-a69c-7c178e101ed3-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.089999 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnsjz\" (UniqueName: \"kubernetes.io/projected/3820a072-910f-4f3c-a69c-7c178e101ed3-kube-api-access-lnsjz\") on node \"crc\" DevicePath \"\"" Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.090008 4946 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/3820a072-910f-4f3c-a69c-7c178e101ed3-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.276237 4946 generic.go:334] "Generic (PLEG): container finished" podID="4dc1376a-004b-436f-9cd5-4cb5cacb8639" containerID="5df7c38f464cf6baf7ecb8cb2ab379602a9930ad21c2e624b0cbf6beb275d361" exitCode=0 Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.276317 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tbjf9" event={"ID":"4dc1376a-004b-436f-9cd5-4cb5cacb8639","Type":"ContainerDied","Data":"5df7c38f464cf6baf7ecb8cb2ab379602a9930ad21c2e624b0cbf6beb275d361"} Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 
Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.279946 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-5qp5n" event={"ID":"3820a072-910f-4f3c-a69c-7c178e101ed3","Type":"ContainerDied","Data":"31bb125c9e4ef0a7861374f27c1896bc02fe84aff2e2d14bec7b7e2f0ce00e51"}
Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.279986 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31bb125c9e4ef0a7861374f27c1896bc02fe84aff2e2d14bec7b7e2f0ce00e51"
Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.280053 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-5qp5n"
Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.493632 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-tdfft"]
Nov 28 09:29:05 crc kubenswrapper[4946]: E1128 09:29:05.494141 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3820a072-910f-4f3c-a69c-7c178e101ed3" containerName="libvirt-openstack-openstack-cell1"
Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.494157 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="3820a072-910f-4f3c-a69c-7c178e101ed3" containerName="libvirt-openstack-openstack-cell1"
Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.494435 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="3820a072-910f-4f3c-a69c-7c178e101ed3" containerName="libvirt-openstack-openstack-cell1"
Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.495659 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-tdfft"
Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.508713 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-tdfft"]
Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.517777 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.518106 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config"
Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.518208 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key"
Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.518254 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-cells-global-config"
Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.518357 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.518518 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-psw2j"
Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.518697 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret"
Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.607381 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6b73ea39-1838-4f8b-b183-041d61c8c457-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-tdfft\" (UID: \"6b73ea39-1838-4f8b-b183-041d61c8c457\") " pod="openstack/nova-cell1-openstack-openstack-cell1-tdfft"
Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.607452 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6b73ea39-1838-4f8b-b183-041d61c8c457-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-tdfft\" (UID: \"6b73ea39-1838-4f8b-b183-041d61c8c457\") " pod="openstack/nova-cell1-openstack-openstack-cell1-tdfft"
Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.607532 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6b73ea39-1838-4f8b-b183-041d61c8c457-ceph\") pod \"nova-cell1-openstack-openstack-cell1-tdfft\" (UID: \"6b73ea39-1838-4f8b-b183-041d61c8c457\") " pod="openstack/nova-cell1-openstack-openstack-cell1-tdfft"
Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.607580 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/6b73ea39-1838-4f8b-b183-041d61c8c457-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-tdfft\" (UID: \"6b73ea39-1838-4f8b-b183-041d61c8c457\") " pod="openstack/nova-cell1-openstack-openstack-cell1-tdfft"
Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.607626 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6b73ea39-1838-4f8b-b183-041d61c8c457-ssh-key\") pod \"nova-cell1-openstack-openstack-cell1-tdfft\" (UID: \"6b73ea39-1838-4f8b-b183-041d61c8c457\") " pod="openstack/nova-cell1-openstack-openstack-cell1-tdfft"
Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.607649 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/6b73ea39-1838-4f8b-b183-041d61c8c457-nova-cells-global-config-1\") pod \"nova-cell1-openstack-openstack-cell1-tdfft\" (UID: \"6b73ea39-1838-4f8b-b183-041d61c8c457\") " pod="openstack/nova-cell1-openstack-openstack-cell1-tdfft"
Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.607672 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6b73ea39-1838-4f8b-b183-041d61c8c457-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-tdfft\" (UID: \"6b73ea39-1838-4f8b-b183-041d61c8c457\") " pod="openstack/nova-cell1-openstack-openstack-cell1-tdfft"
Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.607690 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b73ea39-1838-4f8b-b183-041d61c8c457-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-tdfft\" (UID: \"6b73ea39-1838-4f8b-b183-041d61c8c457\") " pod="openstack/nova-cell1-openstack-openstack-cell1-tdfft"
Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.608018 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6b73ea39-1838-4f8b-b183-041d61c8c457-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-tdfft\" (UID: \"6b73ea39-1838-4f8b-b183-041d61c8c457\") " pod="openstack/nova-cell1-openstack-openstack-cell1-tdfft"
Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.608079 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6b73ea39-1838-4f8b-b183-041d61c8c457-inventory\") pod \"nova-cell1-openstack-openstack-cell1-tdfft\" (UID: \"6b73ea39-1838-4f8b-b183-041d61c8c457\") " pod="openstack/nova-cell1-openstack-openstack-cell1-tdfft"
Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.608155 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wg27\" (UniqueName: \"kubernetes.io/projected/6b73ea39-1838-4f8b-b183-041d61c8c457-kube-api-access-8wg27\") pod \"nova-cell1-openstack-openstack-cell1-tdfft\" (UID: \"6b73ea39-1838-4f8b-b183-041d61c8c457\") " pod="openstack/nova-cell1-openstack-openstack-cell1-tdfft"
Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.709647 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6b73ea39-1838-4f8b-b183-041d61c8c457-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-tdfft\" (UID: \"6b73ea39-1838-4f8b-b183-041d61c8c457\") " pod="openstack/nova-cell1-openstack-openstack-cell1-tdfft"
Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.709948 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6b73ea39-1838-4f8b-b183-041d61c8c457-ceph\") pod \"nova-cell1-openstack-openstack-cell1-tdfft\" (UID: \"6b73ea39-1838-4f8b-b183-041d61c8c457\") " pod="openstack/nova-cell1-openstack-openstack-cell1-tdfft"
Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.709998 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/6b73ea39-1838-4f8b-b183-041d61c8c457-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-tdfft\" (UID: \"6b73ea39-1838-4f8b-b183-041d61c8c457\") " pod="openstack/nova-cell1-openstack-openstack-cell1-tdfft"
Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.710051 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6b73ea39-1838-4f8b-b183-041d61c8c457-ssh-key\") pod \"nova-cell1-openstack-openstack-cell1-tdfft\" (UID: \"6b73ea39-1838-4f8b-b183-041d61c8c457\") " pod="openstack/nova-cell1-openstack-openstack-cell1-tdfft"
Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.710075 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/6b73ea39-1838-4f8b-b183-041d61c8c457-nova-cells-global-config-1\") pod \"nova-cell1-openstack-openstack-cell1-tdfft\" (UID: \"6b73ea39-1838-4f8b-b183-041d61c8c457\") " pod="openstack/nova-cell1-openstack-openstack-cell1-tdfft"
Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.710099 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6b73ea39-1838-4f8b-b183-041d61c8c457-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-tdfft\" (UID: \"6b73ea39-1838-4f8b-b183-041d61c8c457\") " pod="openstack/nova-cell1-openstack-openstack-cell1-tdfft"
Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.710116 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b73ea39-1838-4f8b-b183-041d61c8c457-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-tdfft\" (UID: \"6b73ea39-1838-4f8b-b183-041d61c8c457\") " pod="openstack/nova-cell1-openstack-openstack-cell1-tdfft"
Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.710141 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6b73ea39-1838-4f8b-b183-041d61c8c457-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-tdfft\" (UID: \"6b73ea39-1838-4f8b-b183-041d61c8c457\") " pod="openstack/nova-cell1-openstack-openstack-cell1-tdfft"
Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.710157 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6b73ea39-1838-4f8b-b183-041d61c8c457-inventory\") pod \"nova-cell1-openstack-openstack-cell1-tdfft\" (UID: \"6b73ea39-1838-4f8b-b183-041d61c8c457\") " pod="openstack/nova-cell1-openstack-openstack-cell1-tdfft"
Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.710200 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wg27\" (UniqueName: \"kubernetes.io/projected/6b73ea39-1838-4f8b-b183-041d61c8c457-kube-api-access-8wg27\") pod \"nova-cell1-openstack-openstack-cell1-tdfft\" (UID: \"6b73ea39-1838-4f8b-b183-041d61c8c457\") " pod="openstack/nova-cell1-openstack-openstack-cell1-tdfft"
Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.710778 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6b73ea39-1838-4f8b-b183-041d61c8c457-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-tdfft\" (UID: \"6b73ea39-1838-4f8b-b183-041d61c8c457\") " pod="openstack/nova-cell1-openstack-openstack-cell1-tdfft"
Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.711147 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/6b73ea39-1838-4f8b-b183-041d61c8c457-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-tdfft\" (UID: \"6b73ea39-1838-4f8b-b183-041d61c8c457\") " pod="openstack/nova-cell1-openstack-openstack-cell1-tdfft"
Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.711328 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/6b73ea39-1838-4f8b-b183-041d61c8c457-nova-cells-global-config-1\") pod \"nova-cell1-openstack-openstack-cell1-tdfft\" (UID: \"6b73ea39-1838-4f8b-b183-041d61c8c457\") " pod="openstack/nova-cell1-openstack-openstack-cell1-tdfft"
Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.714488 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b73ea39-1838-4f8b-b183-041d61c8c457-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-tdfft\" (UID: \"6b73ea39-1838-4f8b-b183-041d61c8c457\") " pod="openstack/nova-cell1-openstack-openstack-cell1-tdfft"
Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.714562 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6b73ea39-1838-4f8b-b183-041d61c8c457-ssh-key\") pod \"nova-cell1-openstack-openstack-cell1-tdfft\" (UID: \"6b73ea39-1838-4f8b-b183-041d61c8c457\") " pod="openstack/nova-cell1-openstack-openstack-cell1-tdfft"
Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.714660 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6b73ea39-1838-4f8b-b183-041d61c8c457-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-tdfft\" (UID: \"6b73ea39-1838-4f8b-b183-041d61c8c457\") " pod="openstack/nova-cell1-openstack-openstack-cell1-tdfft"
Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.715022 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6b73ea39-1838-4f8b-b183-041d61c8c457-ceph\") pod \"nova-cell1-openstack-openstack-cell1-tdfft\" (UID: \"6b73ea39-1838-4f8b-b183-041d61c8c457\") " pod="openstack/nova-cell1-openstack-openstack-cell1-tdfft"
Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.715082 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6b73ea39-1838-4f8b-b183-041d61c8c457-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-tdfft\" (UID: \"6b73ea39-1838-4f8b-b183-041d61c8c457\") " pod="openstack/nova-cell1-openstack-openstack-cell1-tdfft"
Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.715107 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6b73ea39-1838-4f8b-b183-041d61c8c457-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-tdfft\" (UID: \"6b73ea39-1838-4f8b-b183-041d61c8c457\") " pod="openstack/nova-cell1-openstack-openstack-cell1-tdfft"
Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.715799 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6b73ea39-1838-4f8b-b183-041d61c8c457-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-tdfft\" (UID: \"6b73ea39-1838-4f8b-b183-041d61c8c457\") " pod="openstack/nova-cell1-openstack-openstack-cell1-tdfft"
Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.717022 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6b73ea39-1838-4f8b-b183-041d61c8c457-inventory\") pod \"nova-cell1-openstack-openstack-cell1-tdfft\" (UID: \"6b73ea39-1838-4f8b-b183-041d61c8c457\") " pod="openstack/nova-cell1-openstack-openstack-cell1-tdfft"
Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.731623 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wg27\" (UniqueName: \"kubernetes.io/projected/6b73ea39-1838-4f8b-b183-041d61c8c457-kube-api-access-8wg27\") pod \"nova-cell1-openstack-openstack-cell1-tdfft\" (UID: \"6b73ea39-1838-4f8b-b183-041d61c8c457\") " pod="openstack/nova-cell1-openstack-openstack-cell1-tdfft"
Nov 28 09:29:05 crc kubenswrapper[4946]: I1128 09:29:05.834143 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-tdfft"
Nov 28 09:29:06 crc kubenswrapper[4946]: I1128 09:29:06.291535 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tbjf9" event={"ID":"4dc1376a-004b-436f-9cd5-4cb5cacb8639","Type":"ContainerStarted","Data":"87d7944d5322b32ee4108cae85d676f9f04dea04e8587423ca8ea3872d611b1d"}
Nov 28 09:29:06 crc kubenswrapper[4946]: I1128 09:29:06.315168 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tbjf9" podStartSLOduration=2.632719857 podStartE2EDuration="5.315150628s" podCreationTimestamp="2025-11-28 09:29:01 +0000 UTC" firstStartedPulling="2025-11-28 09:29:03.23486726 +0000 UTC m=+9397.612932371" lastFinishedPulling="2025-11-28 09:29:05.917298031 +0000 UTC m=+9400.295363142" observedRunningTime="2025-11-28 09:29:06.307060828 +0000 UTC m=+9400.685125939" watchObservedRunningTime="2025-11-28 09:29:06.315150628 +0000 UTC m=+9400.693215739"
Nov 28 09:29:06 crc kubenswrapper[4946]: I1128 09:29:06.414273 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-tdfft"]
Nov 28 09:29:07 crc kubenswrapper[4946]: I1128 09:29:07.306918 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-tdfft" event={"ID":"6b73ea39-1838-4f8b-b183-041d61c8c457","Type":"ContainerStarted","Data":"c9e560eed0ad5cc8242e65cf380419f286cf16010d0d23cd0d1341bc15bcaf7a"}
Nov 28 09:29:07 crc kubenswrapper[4946]: I1128 09:29:07.307328 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-tdfft" event={"ID":"6b73ea39-1838-4f8b-b183-041d61c8c457","Type":"ContainerStarted","Data":"3f514cfcb57c42b3d145c924a7506068294965acf6a23ab05f36f863218a1fd1"}
Nov 28 09:29:07 crc kubenswrapper[4946]: I1128 09:29:07.362355 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-openstack-openstack-cell1-tdfft" podStartSLOduration=1.884492092 podStartE2EDuration="2.362334861s" podCreationTimestamp="2025-11-28 09:29:05 +0000 UTC" firstStartedPulling="2025-11-28 09:29:06.419375267 +0000 UTC m=+9400.797440378" lastFinishedPulling="2025-11-28 09:29:06.897218036 +0000 UTC m=+9401.275283147" observedRunningTime="2025-11-28 09:29:07.334245518 +0000 UTC m=+9401.712310659" watchObservedRunningTime="2025-11-28 09:29:07.362334861 +0000 UTC m=+9401.740399982"
Nov 28 09:29:12 crc kubenswrapper[4946]: I1128 09:29:12.207607 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tbjf9"
Nov 28 09:29:12 crc kubenswrapper[4946]: I1128 09:29:12.208260 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tbjf9"
Nov 28 09:29:12 crc kubenswrapper[4946]: I1128 09:29:12.286071 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tbjf9"
Nov 28 09:29:12 crc kubenswrapper[4946]: I1128 09:29:12.418758 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tbjf9"
Nov 28 09:29:12 crc kubenswrapper[4946]: I1128 09:29:12.532549 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tbjf9"]
"Killing container with a grace period" pod="openshift-marketplace/certified-operators-tbjf9" podUID="4dc1376a-004b-436f-9cd5-4cb5cacb8639" containerName="registry-server" containerID="cri-o://87d7944d5322b32ee4108cae85d676f9f04dea04e8587423ca8ea3872d611b1d" gracePeriod=2 Nov 28 09:29:15 crc kubenswrapper[4946]: I1128 09:29:15.024748 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tbjf9" Nov 28 09:29:15 crc kubenswrapper[4946]: I1128 09:29:15.116195 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4dc1376a-004b-436f-9cd5-4cb5cacb8639-utilities\") pod \"4dc1376a-004b-436f-9cd5-4cb5cacb8639\" (UID: \"4dc1376a-004b-436f-9cd5-4cb5cacb8639\") " Nov 28 09:29:15 crc kubenswrapper[4946]: I1128 09:29:15.116344 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prdd7\" (UniqueName: \"kubernetes.io/projected/4dc1376a-004b-436f-9cd5-4cb5cacb8639-kube-api-access-prdd7\") pod \"4dc1376a-004b-436f-9cd5-4cb5cacb8639\" (UID: \"4dc1376a-004b-436f-9cd5-4cb5cacb8639\") " Nov 28 09:29:15 crc kubenswrapper[4946]: I1128 09:29:15.116543 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4dc1376a-004b-436f-9cd5-4cb5cacb8639-catalog-content\") pod \"4dc1376a-004b-436f-9cd5-4cb5cacb8639\" (UID: \"4dc1376a-004b-436f-9cd5-4cb5cacb8639\") " Nov 28 09:29:15 crc kubenswrapper[4946]: I1128 09:29:15.117236 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4dc1376a-004b-436f-9cd5-4cb5cacb8639-utilities" (OuterVolumeSpecName: "utilities") pod "4dc1376a-004b-436f-9cd5-4cb5cacb8639" (UID: "4dc1376a-004b-436f-9cd5-4cb5cacb8639"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 09:29:15 crc kubenswrapper[4946]: I1128 09:29:15.117419 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4dc1376a-004b-436f-9cd5-4cb5cacb8639-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 09:29:15 crc kubenswrapper[4946]: I1128 09:29:15.121278 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dc1376a-004b-436f-9cd5-4cb5cacb8639-kube-api-access-prdd7" (OuterVolumeSpecName: "kube-api-access-prdd7") pod "4dc1376a-004b-436f-9cd5-4cb5cacb8639" (UID: "4dc1376a-004b-436f-9cd5-4cb5cacb8639"). InnerVolumeSpecName "kube-api-access-prdd7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:29:15 crc kubenswrapper[4946]: I1128 09:29:15.163034 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4dc1376a-004b-436f-9cd5-4cb5cacb8639-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4dc1376a-004b-436f-9cd5-4cb5cacb8639" (UID: "4dc1376a-004b-436f-9cd5-4cb5cacb8639"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 09:29:15 crc kubenswrapper[4946]: I1128 09:29:15.219786 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prdd7\" (UniqueName: \"kubernetes.io/projected/4dc1376a-004b-436f-9cd5-4cb5cacb8639-kube-api-access-prdd7\") on node \"crc\" DevicePath \"\"" Nov 28 09:29:15 crc kubenswrapper[4946]: I1128 09:29:15.219820 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4dc1376a-004b-436f-9cd5-4cb5cacb8639-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 09:29:15 crc kubenswrapper[4946]: I1128 09:29:15.392520 4946 generic.go:334] "Generic (PLEG): container finished" podID="4dc1376a-004b-436f-9cd5-4cb5cacb8639" containerID="87d7944d5322b32ee4108cae85d676f9f04dea04e8587423ca8ea3872d611b1d" exitCode=0 Nov 28 09:29:15 crc kubenswrapper[4946]: I1128 09:29:15.392583 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tbjf9" event={"ID":"4dc1376a-004b-436f-9cd5-4cb5cacb8639","Type":"ContainerDied","Data":"87d7944d5322b32ee4108cae85d676f9f04dea04e8587423ca8ea3872d611b1d"} Nov 28 09:29:15 crc kubenswrapper[4946]: I1128 09:29:15.392664 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tbjf9" event={"ID":"4dc1376a-004b-436f-9cd5-4cb5cacb8639","Type":"ContainerDied","Data":"49f77296d2d171c8111226071c210df4e4d8399af42532fde62d6890100758c3"} Nov 28 09:29:15 crc kubenswrapper[4946]: I1128 09:29:15.392668 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tbjf9" Nov 28 09:29:15 crc kubenswrapper[4946]: I1128 09:29:15.392696 4946 scope.go:117] "RemoveContainer" containerID="87d7944d5322b32ee4108cae85d676f9f04dea04e8587423ca8ea3872d611b1d" Nov 28 09:29:15 crc kubenswrapper[4946]: I1128 09:29:15.432859 4946 scope.go:117] "RemoveContainer" containerID="5df7c38f464cf6baf7ecb8cb2ab379602a9930ad21c2e624b0cbf6beb275d361" Nov 28 09:29:15 crc kubenswrapper[4946]: I1128 09:29:15.461162 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tbjf9"] Nov 28 09:29:15 crc kubenswrapper[4946]: I1128 09:29:15.470679 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tbjf9"] Nov 28 09:29:15 crc kubenswrapper[4946]: I1128 09:29:15.476183 4946 scope.go:117] "RemoveContainer" containerID="e22d018e99e909afce358422703bb1c81e05c4088acf7b7d77f951518e7d1dd4" Nov 28 09:29:15 crc kubenswrapper[4946]: I1128 09:29:15.520917 4946 scope.go:117] "RemoveContainer" containerID="87d7944d5322b32ee4108cae85d676f9f04dea04e8587423ca8ea3872d611b1d" Nov 28 09:29:15 crc kubenswrapper[4946]: E1128 09:29:15.521341 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87d7944d5322b32ee4108cae85d676f9f04dea04e8587423ca8ea3872d611b1d\": container with ID starting with 87d7944d5322b32ee4108cae85d676f9f04dea04e8587423ca8ea3872d611b1d not found: ID does not exist" containerID="87d7944d5322b32ee4108cae85d676f9f04dea04e8587423ca8ea3872d611b1d" Nov 28 09:29:15 crc kubenswrapper[4946]: I1128 09:29:15.521396 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87d7944d5322b32ee4108cae85d676f9f04dea04e8587423ca8ea3872d611b1d"} err="failed to get container status 
\"87d7944d5322b32ee4108cae85d676f9f04dea04e8587423ca8ea3872d611b1d\": rpc error: code = NotFound desc = could not find container \"87d7944d5322b32ee4108cae85d676f9f04dea04e8587423ca8ea3872d611b1d\": container with ID starting with 87d7944d5322b32ee4108cae85d676f9f04dea04e8587423ca8ea3872d611b1d not found: ID does not exist" Nov 28 09:29:15 crc kubenswrapper[4946]: I1128 09:29:15.521434 4946 scope.go:117] "RemoveContainer" containerID="5df7c38f464cf6baf7ecb8cb2ab379602a9930ad21c2e624b0cbf6beb275d361" Nov 28 09:29:15 crc kubenswrapper[4946]: E1128 09:29:15.522024 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5df7c38f464cf6baf7ecb8cb2ab379602a9930ad21c2e624b0cbf6beb275d361\": container with ID starting with 5df7c38f464cf6baf7ecb8cb2ab379602a9930ad21c2e624b0cbf6beb275d361 not found: ID does not exist" containerID="5df7c38f464cf6baf7ecb8cb2ab379602a9930ad21c2e624b0cbf6beb275d361" Nov 28 09:29:15 crc kubenswrapper[4946]: I1128 09:29:15.522072 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5df7c38f464cf6baf7ecb8cb2ab379602a9930ad21c2e624b0cbf6beb275d361"} err="failed to get container status \"5df7c38f464cf6baf7ecb8cb2ab379602a9930ad21c2e624b0cbf6beb275d361\": rpc error: code = NotFound desc = could not find container \"5df7c38f464cf6baf7ecb8cb2ab379602a9930ad21c2e624b0cbf6beb275d361\": container with ID starting with 5df7c38f464cf6baf7ecb8cb2ab379602a9930ad21c2e624b0cbf6beb275d361 not found: ID does not exist" Nov 28 09:29:15 crc kubenswrapper[4946]: I1128 09:29:15.522102 4946 scope.go:117] "RemoveContainer" containerID="e22d018e99e909afce358422703bb1c81e05c4088acf7b7d77f951518e7d1dd4" Nov 28 09:29:15 crc kubenswrapper[4946]: E1128 09:29:15.522680 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e22d018e99e909afce358422703bb1c81e05c4088acf7b7d77f951518e7d1dd4\": container with ID starting with e22d018e99e909afce358422703bb1c81e05c4088acf7b7d77f951518e7d1dd4 not found: ID does not exist" containerID="e22d018e99e909afce358422703bb1c81e05c4088acf7b7d77f951518e7d1dd4" Nov 28 09:29:15 crc kubenswrapper[4946]: I1128 09:29:15.522723 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e22d018e99e909afce358422703bb1c81e05c4088acf7b7d77f951518e7d1dd4"} err="failed to get container status \"e22d018e99e909afce358422703bb1c81e05c4088acf7b7d77f951518e7d1dd4\": rpc error: code = NotFound desc = could not find container \"e22d018e99e909afce358422703bb1c81e05c4088acf7b7d77f951518e7d1dd4\": container with ID starting with e22d018e99e909afce358422703bb1c81e05c4088acf7b7d77f951518e7d1dd4 not found: ID does not exist" Nov 28 09:29:16 crc kubenswrapper[4946]: I1128 09:29:16.010177 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4dc1376a-004b-436f-9cd5-4cb5cacb8639" path="/var/lib/kubelet/pods/4dc1376a-004b-436f-9cd5-4cb5cacb8639/volumes" Nov 28 09:29:20 crc kubenswrapper[4946]: I1128 09:29:20.979980 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4dvkh"] Nov 28 09:29:20 crc kubenswrapper[4946]: E1128 09:29:20.981255 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dc1376a-004b-436f-9cd5-4cb5cacb8639" containerName="extract-content" Nov 28 09:29:20 crc kubenswrapper[4946]: I1128 09:29:20.981275 4946 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4dc1376a-004b-436f-9cd5-4cb5cacb8639" containerName="extract-content" Nov 28 09:29:20 crc kubenswrapper[4946]: E1128 09:29:20.981302 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dc1376a-004b-436f-9cd5-4cb5cacb8639" containerName="extract-utilities" Nov 28 09:29:20 crc kubenswrapper[4946]: I1128 09:29:20.981311 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dc1376a-004b-436f-9cd5-4cb5cacb8639" containerName="extract-utilities" Nov 28 09:29:20 crc kubenswrapper[4946]: E1128 09:29:20.981339 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dc1376a-004b-436f-9cd5-4cb5cacb8639" containerName="registry-server" Nov 28 09:29:20 crc kubenswrapper[4946]: I1128 09:29:20.981347 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dc1376a-004b-436f-9cd5-4cb5cacb8639" containerName="registry-server" Nov 28 09:29:20 crc kubenswrapper[4946]: I1128 09:29:20.981621 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dc1376a-004b-436f-9cd5-4cb5cacb8639" containerName="registry-server" Nov 28 09:29:20 crc kubenswrapper[4946]: I1128 09:29:20.983738 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4dvkh" Nov 28 09:29:20 crc kubenswrapper[4946]: I1128 09:29:20.994869 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4dvkh"] Nov 28 09:29:21 crc kubenswrapper[4946]: I1128 09:29:21.056960 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4845c9be-ba2f-4bb0-b0b6-5478c26c8fda-utilities\") pod \"redhat-operators-4dvkh\" (UID: \"4845c9be-ba2f-4bb0-b0b6-5478c26c8fda\") " pod="openshift-marketplace/redhat-operators-4dvkh" Nov 28 09:29:21 crc kubenswrapper[4946]: I1128 09:29:21.057062 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4845c9be-ba2f-4bb0-b0b6-5478c26c8fda-catalog-content\") pod \"redhat-operators-4dvkh\" (UID: \"4845c9be-ba2f-4bb0-b0b6-5478c26c8fda\") " pod="openshift-marketplace/redhat-operators-4dvkh" Nov 28 09:29:21 crc kubenswrapper[4946]: I1128 09:29:21.057085 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmjc2\" (UniqueName: \"kubernetes.io/projected/4845c9be-ba2f-4bb0-b0b6-5478c26c8fda-kube-api-access-dmjc2\") pod \"redhat-operators-4dvkh\" (UID: \"4845c9be-ba2f-4bb0-b0b6-5478c26c8fda\") " pod="openshift-marketplace/redhat-operators-4dvkh" Nov 28 09:29:21 crc kubenswrapper[4946]: I1128 09:29:21.158538 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4845c9be-ba2f-4bb0-b0b6-5478c26c8fda-utilities\") pod \"redhat-operators-4dvkh\" (UID: \"4845c9be-ba2f-4bb0-b0b6-5478c26c8fda\") " pod="openshift-marketplace/redhat-operators-4dvkh" Nov 28 09:29:21 crc kubenswrapper[4946]: I1128 09:29:21.158649 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4845c9be-ba2f-4bb0-b0b6-5478c26c8fda-catalog-content\") pod \"redhat-operators-4dvkh\" (UID: \"4845c9be-ba2f-4bb0-b0b6-5478c26c8fda\") " pod="openshift-marketplace/redhat-operators-4dvkh" Nov 28 09:29:21 crc kubenswrapper[4946]: I1128 09:29:21.158682 4946 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-dmjc2\" (UniqueName: \"kubernetes.io/projected/4845c9be-ba2f-4bb0-b0b6-5478c26c8fda-kube-api-access-dmjc2\") pod \"redhat-operators-4dvkh\" (UID: \"4845c9be-ba2f-4bb0-b0b6-5478c26c8fda\") " pod="openshift-marketplace/redhat-operators-4dvkh" Nov 28 09:29:21 crc kubenswrapper[4946]: I1128 09:29:21.159073 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4845c9be-ba2f-4bb0-b0b6-5478c26c8fda-utilities\") pod \"redhat-operators-4dvkh\" (UID: \"4845c9be-ba2f-4bb0-b0b6-5478c26c8fda\") " pod="openshift-marketplace/redhat-operators-4dvkh" Nov 28 09:29:21 crc kubenswrapper[4946]: I1128 09:29:21.159115 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4845c9be-ba2f-4bb0-b0b6-5478c26c8fda-catalog-content\") pod \"redhat-operators-4dvkh\" (UID: \"4845c9be-ba2f-4bb0-b0b6-5478c26c8fda\") " pod="openshift-marketplace/redhat-operators-4dvkh" Nov 28 09:29:21 crc kubenswrapper[4946]: I1128 09:29:21.185402 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmjc2\" (UniqueName: \"kubernetes.io/projected/4845c9be-ba2f-4bb0-b0b6-5478c26c8fda-kube-api-access-dmjc2\") pod \"redhat-operators-4dvkh\" (UID: \"4845c9be-ba2f-4bb0-b0b6-5478c26c8fda\") " pod="openshift-marketplace/redhat-operators-4dvkh" Nov 28 09:29:21 crc kubenswrapper[4946]: I1128 09:29:21.326394 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4dvkh" Nov 28 09:29:21 crc kubenswrapper[4946]: I1128 09:29:21.797354 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4dvkh"] Nov 28 09:29:21 crc kubenswrapper[4946]: W1128 09:29:21.803627 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4845c9be_ba2f_4bb0_b0b6_5478c26c8fda.slice/crio-ea9de8cd868a57495d7b72773364803d378d3c702851b2ab8cc741d864573d0e WatchSource:0}: Error finding container ea9de8cd868a57495d7b72773364803d378d3c702851b2ab8cc741d864573d0e: Status 404 returned error can't find the container with id ea9de8cd868a57495d7b72773364803d378d3c702851b2ab8cc741d864573d0e Nov 28 09:29:22 crc kubenswrapper[4946]: I1128 09:29:22.478883 4946 generic.go:334] "Generic (PLEG): container finished" podID="4845c9be-ba2f-4bb0-b0b6-5478c26c8fda" containerID="5e4155fb06838f938ade1b74aeae75edc3f284d87631b884f22fbf8f371d6b51" exitCode=0 Nov 28 09:29:22 crc kubenswrapper[4946]: I1128 09:29:22.478965 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dvkh" event={"ID":"4845c9be-ba2f-4bb0-b0b6-5478c26c8fda","Type":"ContainerDied","Data":"5e4155fb06838f938ade1b74aeae75edc3f284d87631b884f22fbf8f371d6b51"} Nov 28 09:29:22 crc kubenswrapper[4946]: I1128 09:29:22.479192 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dvkh" event={"ID":"4845c9be-ba2f-4bb0-b0b6-5478c26c8fda","Type":"ContainerStarted","Data":"ea9de8cd868a57495d7b72773364803d378d3c702851b2ab8cc741d864573d0e"} Nov 28 09:29:23 crc kubenswrapper[4946]: I1128 09:29:23.496772 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dvkh" 
event={"ID":"4845c9be-ba2f-4bb0-b0b6-5478c26c8fda","Type":"ContainerStarted","Data":"49f53ca1d178c4c12d6e9084a7e4dbceb5443e021746e1eef30f6ab56cba8a3d"} Nov 28 09:29:26 crc kubenswrapper[4946]: I1128 09:29:26.539263 4946 generic.go:334] "Generic (PLEG): container finished" podID="4845c9be-ba2f-4bb0-b0b6-5478c26c8fda" containerID="49f53ca1d178c4c12d6e9084a7e4dbceb5443e021746e1eef30f6ab56cba8a3d" exitCode=0 Nov 28 09:29:26 crc kubenswrapper[4946]: I1128 09:29:26.539385 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dvkh" event={"ID":"4845c9be-ba2f-4bb0-b0b6-5478c26c8fda","Type":"ContainerDied","Data":"49f53ca1d178c4c12d6e9084a7e4dbceb5443e021746e1eef30f6ab56cba8a3d"} Nov 28 09:29:27 crc kubenswrapper[4946]: I1128 09:29:27.555756 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dvkh" event={"ID":"4845c9be-ba2f-4bb0-b0b6-5478c26c8fda","Type":"ContainerStarted","Data":"ed9bc4a9166baeac4d803d50e06b007e7c8929f39354177a2b00f57bc7becab3"} Nov 28 09:29:27 crc kubenswrapper[4946]: I1128 09:29:27.592802 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4dvkh" podStartSLOduration=2.999542184 podStartE2EDuration="7.592776667s" podCreationTimestamp="2025-11-28 09:29:20 +0000 UTC" firstStartedPulling="2025-11-28 09:29:22.481389102 +0000 UTC m=+9416.859454213" lastFinishedPulling="2025-11-28 09:29:27.074623545 +0000 UTC m=+9421.452688696" observedRunningTime="2025-11-28 09:29:27.581624723 +0000 UTC m=+9421.959689834" watchObservedRunningTime="2025-11-28 09:29:27.592776667 +0000 UTC m=+9421.970841788" Nov 28 09:29:31 crc kubenswrapper[4946]: I1128 09:29:31.326768 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4dvkh" Nov 28 09:29:31 crc kubenswrapper[4946]: I1128 09:29:31.328179 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4dvkh" Nov 28 09:29:32 crc kubenswrapper[4946]: I1128 09:29:32.423814 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4dvkh" podUID="4845c9be-ba2f-4bb0-b0b6-5478c26c8fda" containerName="registry-server" probeResult="failure" output=< Nov 28 09:29:32 crc kubenswrapper[4946]: timeout: failed to connect service ":50051" within 1s Nov 28 09:29:32 crc kubenswrapper[4946]: > Nov 28 09:29:41 crc kubenswrapper[4946]: I1128 09:29:41.384621 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4dvkh" Nov 28 09:29:41 crc kubenswrapper[4946]: I1128 09:29:41.443863 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4dvkh" Nov 28 09:29:41 crc kubenswrapper[4946]: I1128 09:29:41.634781 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4dvkh"] Nov 28 09:29:42 crc kubenswrapper[4946]: I1128 09:29:42.757544 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4dvkh" podUID="4845c9be-ba2f-4bb0-b0b6-5478c26c8fda" containerName="registry-server" containerID="cri-o://ed9bc4a9166baeac4d803d50e06b007e7c8929f39354177a2b00f57bc7becab3" gracePeriod=2 Nov 28 09:29:43 crc kubenswrapper[4946]: I1128 09:29:43.272497 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4dvkh" Nov 28 09:29:43 crc kubenswrapper[4946]: I1128 09:29:43.368148 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmjc2\" (UniqueName: \"kubernetes.io/projected/4845c9be-ba2f-4bb0-b0b6-5478c26c8fda-kube-api-access-dmjc2\") pod \"4845c9be-ba2f-4bb0-b0b6-5478c26c8fda\" (UID: \"4845c9be-ba2f-4bb0-b0b6-5478c26c8fda\") " Nov 28 09:29:43 crc kubenswrapper[4946]: I1128 09:29:43.368290 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4845c9be-ba2f-4bb0-b0b6-5478c26c8fda-catalog-content\") pod \"4845c9be-ba2f-4bb0-b0b6-5478c26c8fda\" (UID: \"4845c9be-ba2f-4bb0-b0b6-5478c26c8fda\") " Nov 28 09:29:43 crc kubenswrapper[4946]: I1128 09:29:43.368615 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4845c9be-ba2f-4bb0-b0b6-5478c26c8fda-utilities\") pod \"4845c9be-ba2f-4bb0-b0b6-5478c26c8fda\" (UID: \"4845c9be-ba2f-4bb0-b0b6-5478c26c8fda\") " Nov 28 09:29:43 crc kubenswrapper[4946]: I1128 09:29:43.371789 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4845c9be-ba2f-4bb0-b0b6-5478c26c8fda-utilities" (OuterVolumeSpecName: "utilities") pod "4845c9be-ba2f-4bb0-b0b6-5478c26c8fda" (UID: "4845c9be-ba2f-4bb0-b0b6-5478c26c8fda"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 09:29:43 crc kubenswrapper[4946]: I1128 09:29:43.377852 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4845c9be-ba2f-4bb0-b0b6-5478c26c8fda-kube-api-access-dmjc2" (OuterVolumeSpecName: "kube-api-access-dmjc2") pod "4845c9be-ba2f-4bb0-b0b6-5478c26c8fda" (UID: "4845c9be-ba2f-4bb0-b0b6-5478c26c8fda"). InnerVolumeSpecName "kube-api-access-dmjc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:29:43 crc kubenswrapper[4946]: I1128 09:29:43.470709 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmjc2\" (UniqueName: \"kubernetes.io/projected/4845c9be-ba2f-4bb0-b0b6-5478c26c8fda-kube-api-access-dmjc2\") on node \"crc\" DevicePath \"\"" Nov 28 09:29:43 crc kubenswrapper[4946]: I1128 09:29:43.471003 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4845c9be-ba2f-4bb0-b0b6-5478c26c8fda-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 09:29:43 crc kubenswrapper[4946]: I1128 09:29:43.527658 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4845c9be-ba2f-4bb0-b0b6-5478c26c8fda-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4845c9be-ba2f-4bb0-b0b6-5478c26c8fda" (UID: "4845c9be-ba2f-4bb0-b0b6-5478c26c8fda"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 09:29:43 crc kubenswrapper[4946]: I1128 09:29:43.572539 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4845c9be-ba2f-4bb0-b0b6-5478c26c8fda-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 09:29:43 crc kubenswrapper[4946]: I1128 09:29:43.776375 4946 generic.go:334] "Generic (PLEG): container finished" podID="4845c9be-ba2f-4bb0-b0b6-5478c26c8fda" containerID="ed9bc4a9166baeac4d803d50e06b007e7c8929f39354177a2b00f57bc7becab3" exitCode=0 Nov 28 09:29:43 crc kubenswrapper[4946]: I1128 09:29:43.776418 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dvkh" event={"ID":"4845c9be-ba2f-4bb0-b0b6-5478c26c8fda","Type":"ContainerDied","Data":"ed9bc4a9166baeac4d803d50e06b007e7c8929f39354177a2b00f57bc7becab3"} Nov 28 09:29:43 crc kubenswrapper[4946]: I1128 09:29:43.776442 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dvkh" event={"ID":"4845c9be-ba2f-4bb0-b0b6-5478c26c8fda","Type":"ContainerDied","Data":"ea9de8cd868a57495d7b72773364803d378d3c702851b2ab8cc741d864573d0e"} Nov 28 09:29:43 crc kubenswrapper[4946]: I1128 09:29:43.776474 4946 scope.go:117] "RemoveContainer" containerID="ed9bc4a9166baeac4d803d50e06b007e7c8929f39354177a2b00f57bc7becab3" Nov 28 09:29:43 crc kubenswrapper[4946]: I1128 09:29:43.776589 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4dvkh" Nov 28 09:29:43 crc kubenswrapper[4946]: I1128 09:29:43.814390 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4dvkh"] Nov 28 09:29:43 crc kubenswrapper[4946]: I1128 09:29:43.826359 4946 scope.go:117] "RemoveContainer" containerID="49f53ca1d178c4c12d6e9084a7e4dbceb5443e021746e1eef30f6ab56cba8a3d" Nov 28 09:29:43 crc kubenswrapper[4946]: I1128 09:29:43.828834 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4dvkh"] Nov 28 09:29:43 crc kubenswrapper[4946]: I1128 09:29:43.862091 4946 scope.go:117] "RemoveContainer" containerID="5e4155fb06838f938ade1b74aeae75edc3f284d87631b884f22fbf8f371d6b51" Nov 28 09:29:43 crc kubenswrapper[4946]: I1128 09:29:43.937297 4946 scope.go:117] "RemoveContainer" containerID="ed9bc4a9166baeac4d803d50e06b007e7c8929f39354177a2b00f57bc7becab3" Nov 28 09:29:43 crc kubenswrapper[4946]: E1128 09:29:43.937897 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed9bc4a9166baeac4d803d50e06b007e7c8929f39354177a2b00f57bc7becab3\": container with ID starting with ed9bc4a9166baeac4d803d50e06b007e7c8929f39354177a2b00f57bc7becab3 not found: ID does not exist" containerID="ed9bc4a9166baeac4d803d50e06b007e7c8929f39354177a2b00f57bc7becab3" Nov 28 09:29:43 crc kubenswrapper[4946]: I1128 09:29:43.938021 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed9bc4a9166baeac4d803d50e06b007e7c8929f39354177a2b00f57bc7becab3"} err="failed to get container status \"ed9bc4a9166baeac4d803d50e06b007e7c8929f39354177a2b00f57bc7becab3\": rpc error: code = NotFound desc = could not find container \"ed9bc4a9166baeac4d803d50e06b007e7c8929f39354177a2b00f57bc7becab3\": container with ID starting with ed9bc4a9166baeac4d803d50e06b007e7c8929f39354177a2b00f57bc7becab3 not found: ID does not exist" Nov 28 09:29:43 crc 
kubenswrapper[4946]: I1128 09:29:43.938113 4946 scope.go:117] "RemoveContainer" containerID="49f53ca1d178c4c12d6e9084a7e4dbceb5443e021746e1eef30f6ab56cba8a3d" Nov 28 09:29:43 crc kubenswrapper[4946]: E1128 09:29:43.938611 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49f53ca1d178c4c12d6e9084a7e4dbceb5443e021746e1eef30f6ab56cba8a3d\": container with ID starting with 49f53ca1d178c4c12d6e9084a7e4dbceb5443e021746e1eef30f6ab56cba8a3d not found: ID does not exist" containerID="49f53ca1d178c4c12d6e9084a7e4dbceb5443e021746e1eef30f6ab56cba8a3d" Nov 28 09:29:43 crc kubenswrapper[4946]: I1128 09:29:43.938687 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49f53ca1d178c4c12d6e9084a7e4dbceb5443e021746e1eef30f6ab56cba8a3d"} err="failed to get container status \"49f53ca1d178c4c12d6e9084a7e4dbceb5443e021746e1eef30f6ab56cba8a3d\": rpc error: code = NotFound desc = could not find container \"49f53ca1d178c4c12d6e9084a7e4dbceb5443e021746e1eef30f6ab56cba8a3d\": container with ID starting with 49f53ca1d178c4c12d6e9084a7e4dbceb5443e021746e1eef30f6ab56cba8a3d not found: ID does not exist" Nov 28 09:29:43 crc kubenswrapper[4946]: I1128 09:29:43.938720 4946 scope.go:117] "RemoveContainer" containerID="5e4155fb06838f938ade1b74aeae75edc3f284d87631b884f22fbf8f371d6b51" Nov 28 09:29:43 crc kubenswrapper[4946]: E1128 09:29:43.939153 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e4155fb06838f938ade1b74aeae75edc3f284d87631b884f22fbf8f371d6b51\": container with ID starting with 5e4155fb06838f938ade1b74aeae75edc3f284d87631b884f22fbf8f371d6b51 not found: ID does not exist" containerID="5e4155fb06838f938ade1b74aeae75edc3f284d87631b884f22fbf8f371d6b51" Nov 28 09:29:43 crc kubenswrapper[4946]: I1128 09:29:43.939180 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e4155fb06838f938ade1b74aeae75edc3f284d87631b884f22fbf8f371d6b51"} err="failed to get container status \"5e4155fb06838f938ade1b74aeae75edc3f284d87631b884f22fbf8f371d6b51\": rpc error: code = NotFound desc = could not find container \"5e4155fb06838f938ade1b74aeae75edc3f284d87631b884f22fbf8f371d6b51\": container with ID starting with 5e4155fb06838f938ade1b74aeae75edc3f284d87631b884f22fbf8f371d6b51 not found: ID does not exist" Nov 28 09:29:44 crc kubenswrapper[4946]: I1128 09:29:44.003133 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4845c9be-ba2f-4bb0-b0b6-5478c26c8fda" path="/var/lib/kubelet/pods/4845c9be-ba2f-4bb0-b0b6-5478c26c8fda/volumes" Nov 28 09:30:00 crc kubenswrapper[4946]: I1128 09:30:00.175383 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405370-tmw2v"] Nov 28 09:30:00 crc kubenswrapper[4946]: E1128 09:30:00.176670 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4845c9be-ba2f-4bb0-b0b6-5478c26c8fda" containerName="extract-content" Nov 28 09:30:00 crc kubenswrapper[4946]: I1128 09:30:00.176842 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="4845c9be-ba2f-4bb0-b0b6-5478c26c8fda" containerName="extract-content" Nov 28 09:30:00 crc kubenswrapper[4946]: E1128 09:30:00.176888 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4845c9be-ba2f-4bb0-b0b6-5478c26c8fda" containerName="registry-server" Nov 28 09:30:00 crc kubenswrapper[4946]: I1128 
Nov 28 09:30:00 crc kubenswrapper[4946]: I1128 09:30:00.176902 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="4845c9be-ba2f-4bb0-b0b6-5478c26c8fda" containerName="registry-server"
Nov 28 09:30:00 crc kubenswrapper[4946]: E1128 09:30:00.176935 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4845c9be-ba2f-4bb0-b0b6-5478c26c8fda" containerName="extract-utilities"
Nov 28 09:30:00 crc kubenswrapper[4946]: I1128 09:30:00.176950 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="4845c9be-ba2f-4bb0-b0b6-5478c26c8fda" containerName="extract-utilities"
Nov 28 09:30:00 crc kubenswrapper[4946]: I1128 09:30:00.177362 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="4845c9be-ba2f-4bb0-b0b6-5478c26c8fda" containerName="registry-server"
Nov 28 09:30:00 crc kubenswrapper[4946]: I1128 09:30:00.179143 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405370-tmw2v"
Nov 28 09:30:00 crc kubenswrapper[4946]: I1128 09:30:00.186515 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 28 09:30:00 crc kubenswrapper[4946]: I1128 09:30:00.188899 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 28 09:30:00 crc kubenswrapper[4946]: I1128 09:30:00.212878 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405370-tmw2v"]
Nov 28 09:30:00 crc kubenswrapper[4946]: I1128 09:30:00.285547 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1a37f46c-f843-41b7-a9e1-6fc7f7201e45-secret-volume\") pod \"collect-profiles-29405370-tmw2v\" (UID: \"1a37f46c-f843-41b7-a9e1-6fc7f7201e45\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405370-tmw2v"
Nov 28 09:30:00 crc kubenswrapper[4946]: I1128 09:30:00.285621 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a37f46c-f843-41b7-a9e1-6fc7f7201e45-config-volume\") pod \"collect-profiles-29405370-tmw2v\" (UID: \"1a37f46c-f843-41b7-a9e1-6fc7f7201e45\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405370-tmw2v"
Nov 28 09:30:00 crc kubenswrapper[4946]: I1128 09:30:00.285692 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqt5t\" (UniqueName: \"kubernetes.io/projected/1a37f46c-f843-41b7-a9e1-6fc7f7201e45-kube-api-access-zqt5t\") pod \"collect-profiles-29405370-tmw2v\" (UID: \"1a37f46c-f843-41b7-a9e1-6fc7f7201e45\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405370-tmw2v"
Nov 28 09:30:00 crc kubenswrapper[4946]: I1128 09:30:00.387680 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqt5t\" (UniqueName: \"kubernetes.io/projected/1a37f46c-f843-41b7-a9e1-6fc7f7201e45-kube-api-access-zqt5t\") pod \"collect-profiles-29405370-tmw2v\" (UID: \"1a37f46c-f843-41b7-a9e1-6fc7f7201e45\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405370-tmw2v"
Nov 28 09:30:00 crc kubenswrapper[4946]: I1128 09:30:00.387830 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1a37f46c-f843-41b7-a9e1-6fc7f7201e45-secret-volume\") pod \"collect-profiles-29405370-tmw2v\" (UID: \"1a37f46c-f843-41b7-a9e1-6fc7f7201e45\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405370-tmw2v"
Nov 28 09:30:00 crc kubenswrapper[4946]: I1128 09:30:00.387912 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a37f46c-f843-41b7-a9e1-6fc7f7201e45-config-volume\") pod \"collect-profiles-29405370-tmw2v\" (UID: \"1a37f46c-f843-41b7-a9e1-6fc7f7201e45\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405370-tmw2v"
Nov 28 09:30:00 crc kubenswrapper[4946]: I1128 09:30:00.389028 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a37f46c-f843-41b7-a9e1-6fc7f7201e45-config-volume\") pod \"collect-profiles-29405370-tmw2v\" (UID: \"1a37f46c-f843-41b7-a9e1-6fc7f7201e45\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405370-tmw2v"
Nov 28 09:30:00 crc kubenswrapper[4946]: I1128 09:30:00.394818 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1a37f46c-f843-41b7-a9e1-6fc7f7201e45-secret-volume\") pod \"collect-profiles-29405370-tmw2v\" (UID: \"1a37f46c-f843-41b7-a9e1-6fc7f7201e45\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405370-tmw2v"
Nov 28 09:30:00 crc kubenswrapper[4946]: I1128 09:30:00.412719 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqt5t\" (UniqueName: \"kubernetes.io/projected/1a37f46c-f843-41b7-a9e1-6fc7f7201e45-kube-api-access-zqt5t\") pod \"collect-profiles-29405370-tmw2v\" (UID: \"1a37f46c-f843-41b7-a9e1-6fc7f7201e45\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405370-tmw2v"
Nov 28 09:30:00 crc kubenswrapper[4946]: I1128 09:30:00.513170 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405370-tmw2v"
Nov 28 09:30:00 crc kubenswrapper[4946]: I1128 09:30:00.988935 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405370-tmw2v"]
Nov 28 09:30:01 crc kubenswrapper[4946]: I1128 09:30:01.058192 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405370-tmw2v" event={"ID":"1a37f46c-f843-41b7-a9e1-6fc7f7201e45","Type":"ContainerStarted","Data":"d09a3486e8cdcb4700cf3b2f15c0ca6ddc2c39c7041f358cec999c370f8c9bc7"}
Nov 28 09:30:02 crc kubenswrapper[4946]: I1128 09:30:02.099089 4946 generic.go:334] "Generic (PLEG): container finished" podID="1a37f46c-f843-41b7-a9e1-6fc7f7201e45" containerID="d176177fb65d51e29eb0e3e84bf8f10e3a8e36dea7ff15f153a6ee7f2ceebdd3" exitCode=0
Nov 28 09:30:02 crc kubenswrapper[4946]: I1128 09:30:02.099178 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405370-tmw2v" event={"ID":"1a37f46c-f843-41b7-a9e1-6fc7f7201e45","Type":"ContainerDied","Data":"d176177fb65d51e29eb0e3e84bf8f10e3a8e36dea7ff15f153a6ee7f2ceebdd3"}
Nov 28 09:30:03 crc kubenswrapper[4946]: I1128 09:30:03.505710 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405370-tmw2v"
Nov 28 09:30:03 crc kubenswrapper[4946]: I1128 09:30:03.661910 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1a37f46c-f843-41b7-a9e1-6fc7f7201e45-secret-volume\") pod \"1a37f46c-f843-41b7-a9e1-6fc7f7201e45\" (UID: \"1a37f46c-f843-41b7-a9e1-6fc7f7201e45\") "
Nov 28 09:30:03 crc kubenswrapper[4946]: I1128 09:30:03.662015 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a37f46c-f843-41b7-a9e1-6fc7f7201e45-config-volume\") pod \"1a37f46c-f843-41b7-a9e1-6fc7f7201e45\" (UID: \"1a37f46c-f843-41b7-a9e1-6fc7f7201e45\") "
Nov 28 09:30:03 crc kubenswrapper[4946]: I1128 09:30:03.662092 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqt5t\" (UniqueName: \"kubernetes.io/projected/1a37f46c-f843-41b7-a9e1-6fc7f7201e45-kube-api-access-zqt5t\") pod \"1a37f46c-f843-41b7-a9e1-6fc7f7201e45\" (UID: \"1a37f46c-f843-41b7-a9e1-6fc7f7201e45\") "
Nov 28 09:30:03 crc kubenswrapper[4946]: I1128 09:30:03.663184 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a37f46c-f843-41b7-a9e1-6fc7f7201e45-config-volume" (OuterVolumeSpecName: "config-volume") pod "1a37f46c-f843-41b7-a9e1-6fc7f7201e45" (UID: "1a37f46c-f843-41b7-a9e1-6fc7f7201e45"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 09:30:03 crc kubenswrapper[4946]: I1128 09:30:03.667773 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a37f46c-f843-41b7-a9e1-6fc7f7201e45-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1a37f46c-f843-41b7-a9e1-6fc7f7201e45" (UID: "1a37f46c-f843-41b7-a9e1-6fc7f7201e45"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 09:30:03 crc kubenswrapper[4946]: I1128 09:30:03.668524 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a37f46c-f843-41b7-a9e1-6fc7f7201e45-kube-api-access-zqt5t" (OuterVolumeSpecName: "kube-api-access-zqt5t") pod "1a37f46c-f843-41b7-a9e1-6fc7f7201e45" (UID: "1a37f46c-f843-41b7-a9e1-6fc7f7201e45"). InnerVolumeSpecName "kube-api-access-zqt5t".
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:30:03 crc kubenswrapper[4946]: I1128 09:30:03.764690 4946 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a37f46c-f843-41b7-a9e1-6fc7f7201e45-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 09:30:03 crc kubenswrapper[4946]: I1128 09:30:03.764870 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqt5t\" (UniqueName: \"kubernetes.io/projected/1a37f46c-f843-41b7-a9e1-6fc7f7201e45-kube-api-access-zqt5t\") on node \"crc\" DevicePath \"\"" Nov 28 09:30:03 crc kubenswrapper[4946]: I1128 09:30:03.764931 4946 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1a37f46c-f843-41b7-a9e1-6fc7f7201e45-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 28 09:30:04 crc kubenswrapper[4946]: I1128 09:30:04.119743 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405370-tmw2v" event={"ID":"1a37f46c-f843-41b7-a9e1-6fc7f7201e45","Type":"ContainerDied","Data":"d09a3486e8cdcb4700cf3b2f15c0ca6ddc2c39c7041f358cec999c370f8c9bc7"} Nov 28 09:30:04 crc kubenswrapper[4946]: I1128 09:30:04.119779 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d09a3486e8cdcb4700cf3b2f15c0ca6ddc2c39c7041f358cec999c370f8c9bc7" Nov 28 09:30:04 crc kubenswrapper[4946]: I1128 09:30:04.119833 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405370-tmw2v" Nov 28 09:30:04 crc kubenswrapper[4946]: I1128 09:30:04.596107 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405325-75gv6"] Nov 28 09:30:04 crc kubenswrapper[4946]: I1128 09:30:04.604771 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405325-75gv6"] Nov 28 09:30:06 crc kubenswrapper[4946]: I1128 09:30:06.024448 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58bd43b6-936f-4ce9-8f28-d90ddfef9086" path="/var/lib/kubelet/pods/58bd43b6-936f-4ce9-8f28-d90ddfef9086/volumes" Nov 28 09:30:53 crc kubenswrapper[4946]: I1128 09:30:53.032681 4946 scope.go:117] "RemoveContainer" containerID="aba5b3e284429745e165b145ef557f3b47e946e4b5136b26231008f0e2619785" Nov 28 09:30:54 crc kubenswrapper[4946]: I1128 09:30:54.731041 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 09:30:54 crc kubenswrapper[4946]: I1128 09:30:54.731781 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 09:31:23 crc kubenswrapper[4946]: I1128 09:31:23.796184 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kwhch"] Nov 28 09:31:23 crc kubenswrapper[4946]: E1128 09:31:23.797733 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a37f46c-f843-41b7-a9e1-6fc7f7201e45" 
containerName="collect-profiles" Nov 28 09:31:23 crc kubenswrapper[4946]: I1128 09:31:23.797751 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a37f46c-f843-41b7-a9e1-6fc7f7201e45" containerName="collect-profiles" Nov 28 09:31:23 crc kubenswrapper[4946]: I1128 09:31:23.798029 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a37f46c-f843-41b7-a9e1-6fc7f7201e45" containerName="collect-profiles" Nov 28 09:31:23 crc kubenswrapper[4946]: I1128 09:31:23.799924 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kwhch" Nov 28 09:31:23 crc kubenswrapper[4946]: I1128 09:31:23.813278 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kwhch"] Nov 28 09:31:23 crc kubenswrapper[4946]: I1128 09:31:23.815856 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffbc3445-930d-49c7-a148-dca85748632f-utilities\") pod \"community-operators-kwhch\" (UID: \"ffbc3445-930d-49c7-a148-dca85748632f\") " pod="openshift-marketplace/community-operators-kwhch" Nov 28 09:31:23 crc kubenswrapper[4946]: I1128 09:31:23.815907 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffbc3445-930d-49c7-a148-dca85748632f-catalog-content\") pod \"community-operators-kwhch\" (UID: \"ffbc3445-930d-49c7-a148-dca85748632f\") " pod="openshift-marketplace/community-operators-kwhch" Nov 28 09:31:23 crc kubenswrapper[4946]: I1128 09:31:23.815944 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7528z\" (UniqueName: \"kubernetes.io/projected/ffbc3445-930d-49c7-a148-dca85748632f-kube-api-access-7528z\") pod \"community-operators-kwhch\" (UID: \"ffbc3445-930d-49c7-a148-dca85748632f\") " pod="openshift-marketplace/community-operators-kwhch" Nov 28 09:31:23 crc kubenswrapper[4946]: I1128 09:31:23.918865 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffbc3445-930d-49c7-a148-dca85748632f-utilities\") pod \"community-operators-kwhch\" (UID: \"ffbc3445-930d-49c7-a148-dca85748632f\") " pod="openshift-marketplace/community-operators-kwhch" Nov 28 09:31:23 crc kubenswrapper[4946]: I1128 09:31:23.918925 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffbc3445-930d-49c7-a148-dca85748632f-catalog-content\") pod \"community-operators-kwhch\" (UID: \"ffbc3445-930d-49c7-a148-dca85748632f\") " pod="openshift-marketplace/community-operators-kwhch" Nov 28 09:31:23 crc kubenswrapper[4946]: I1128 09:31:23.918960 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7528z\" (UniqueName: \"kubernetes.io/projected/ffbc3445-930d-49c7-a148-dca85748632f-kube-api-access-7528z\") pod \"community-operators-kwhch\" (UID: \"ffbc3445-930d-49c7-a148-dca85748632f\") " pod="openshift-marketplace/community-operators-kwhch" Nov 28 09:31:23 crc kubenswrapper[4946]: I1128 09:31:23.919863 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffbc3445-930d-49c7-a148-dca85748632f-utilities\") pod \"community-operators-kwhch\" (UID: 
\"ffbc3445-930d-49c7-a148-dca85748632f\") " pod="openshift-marketplace/community-operators-kwhch" Nov 28 09:31:23 crc kubenswrapper[4946]: I1128 09:31:23.919884 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffbc3445-930d-49c7-a148-dca85748632f-catalog-content\") pod \"community-operators-kwhch\" (UID: \"ffbc3445-930d-49c7-a148-dca85748632f\") " pod="openshift-marketplace/community-operators-kwhch" Nov 28 09:31:23 crc kubenswrapper[4946]: I1128 09:31:23.942197 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7528z\" (UniqueName: \"kubernetes.io/projected/ffbc3445-930d-49c7-a148-dca85748632f-kube-api-access-7528z\") pod \"community-operators-kwhch\" (UID: \"ffbc3445-930d-49c7-a148-dca85748632f\") " pod="openshift-marketplace/community-operators-kwhch" Nov 28 09:31:24 crc kubenswrapper[4946]: I1128 09:31:24.136631 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kwhch" Nov 28 09:31:24 crc kubenswrapper[4946]: I1128 09:31:24.671688 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kwhch"] Nov 28 09:31:24 crc kubenswrapper[4946]: I1128 09:31:24.731274 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 09:31:24 crc kubenswrapper[4946]: I1128 09:31:24.731333 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 09:31:25 crc kubenswrapper[4946]: I1128 09:31:25.074221 4946 generic.go:334] "Generic (PLEG): container finished" podID="ffbc3445-930d-49c7-a148-dca85748632f" containerID="b4f5b62bd4dc8e47b82c51dfe5a1ea8cf5d0bc0861ee7810240bf526ed3c34b3" exitCode=0 Nov 28 09:31:25 crc kubenswrapper[4946]: I1128 09:31:25.074265 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kwhch" event={"ID":"ffbc3445-930d-49c7-a148-dca85748632f","Type":"ContainerDied","Data":"b4f5b62bd4dc8e47b82c51dfe5a1ea8cf5d0bc0861ee7810240bf526ed3c34b3"} Nov 28 09:31:25 crc kubenswrapper[4946]: I1128 09:31:25.074293 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kwhch" event={"ID":"ffbc3445-930d-49c7-a148-dca85748632f","Type":"ContainerStarted","Data":"e3d341e7f6f59e4601027aa4fb165b5606e401a6a88cc99b30286bc3f76012a6"} Nov 28 09:31:26 crc kubenswrapper[4946]: I1128 09:31:26.086544 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kwhch" event={"ID":"ffbc3445-930d-49c7-a148-dca85748632f","Type":"ContainerStarted","Data":"24aac1bab2b1b2149ffb7a6f879d8f2b3a3a2113221fbbe8c6b4ad68a4565d8e"} Nov 28 09:31:27 crc kubenswrapper[4946]: I1128 09:31:27.102971 4946 generic.go:334] "Generic (PLEG): container finished" podID="ffbc3445-930d-49c7-a148-dca85748632f" containerID="24aac1bab2b1b2149ffb7a6f879d8f2b3a3a2113221fbbe8c6b4ad68a4565d8e" exitCode=0 Nov 28 09:31:27 crc kubenswrapper[4946]: I1128 09:31:27.103035 4946 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kwhch" event={"ID":"ffbc3445-930d-49c7-a148-dca85748632f","Type":"ContainerDied","Data":"24aac1bab2b1b2149ffb7a6f879d8f2b3a3a2113221fbbe8c6b4ad68a4565d8e"} Nov 28 09:31:28 crc kubenswrapper[4946]: I1128 09:31:28.118935 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kwhch" event={"ID":"ffbc3445-930d-49c7-a148-dca85748632f","Type":"ContainerStarted","Data":"5830ac4a4fcddb530fccbe5dd3b5f8ac638244ddd7d0fb484d7424a1cf58eb12"} Nov 28 09:31:28 crc kubenswrapper[4946]: I1128 09:31:28.163456 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kwhch" podStartSLOduration=2.462183949 podStartE2EDuration="5.163426554s" podCreationTimestamp="2025-11-28 09:31:23 +0000 UTC" firstStartedPulling="2025-11-28 09:31:25.07643984 +0000 UTC m=+9539.454504951" lastFinishedPulling="2025-11-28 09:31:27.777682405 +0000 UTC m=+9542.155747556" observedRunningTime="2025-11-28 09:31:28.14869172 +0000 UTC m=+9542.526756841" watchObservedRunningTime="2025-11-28 09:31:28.163426554 +0000 UTC m=+9542.541491685" Nov 28 09:31:34 crc kubenswrapper[4946]: I1128 09:31:34.137575 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kwhch" Nov 28 09:31:34 crc kubenswrapper[4946]: I1128 09:31:34.138147 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kwhch" Nov 28 09:31:35 crc kubenswrapper[4946]: I1128 09:31:35.174300 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kwhch" Nov 28 09:31:35 crc kubenswrapper[4946]: I1128 09:31:35.249732 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kwhch" Nov 28 09:31:35 crc kubenswrapper[4946]: I1128 09:31:35.412712 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kwhch"] Nov 28 09:31:36 crc kubenswrapper[4946]: I1128 09:31:36.214211 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kwhch" podUID="ffbc3445-930d-49c7-a148-dca85748632f" containerName="registry-server" containerID="cri-o://5830ac4a4fcddb530fccbe5dd3b5f8ac638244ddd7d0fb484d7424a1cf58eb12" gracePeriod=2 Nov 28 09:31:37 crc kubenswrapper[4946]: I1128 09:31:37.121046 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kwhch" Nov 28 09:31:37 crc kubenswrapper[4946]: I1128 09:31:37.143094 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffbc3445-930d-49c7-a148-dca85748632f-catalog-content\") pod \"ffbc3445-930d-49c7-a148-dca85748632f\" (UID: \"ffbc3445-930d-49c7-a148-dca85748632f\") " Nov 28 09:31:37 crc kubenswrapper[4946]: I1128 09:31:37.143301 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7528z\" (UniqueName: \"kubernetes.io/projected/ffbc3445-930d-49c7-a148-dca85748632f-kube-api-access-7528z\") pod \"ffbc3445-930d-49c7-a148-dca85748632f\" (UID: \"ffbc3445-930d-49c7-a148-dca85748632f\") " Nov 28 09:31:37 crc kubenswrapper[4946]: I1128 09:31:37.143396 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffbc3445-930d-49c7-a148-dca85748632f-utilities\") pod \"ffbc3445-930d-49c7-a148-dca85748632f\" (UID: \"ffbc3445-930d-49c7-a148-dca85748632f\") " Nov 28 09:31:37 crc kubenswrapper[4946]: I1128 09:31:37.144131 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffbc3445-930d-49c7-a148-dca85748632f-utilities" (OuterVolumeSpecName: "utilities") pod "ffbc3445-930d-49c7-a148-dca85748632f" (UID: "ffbc3445-930d-49c7-a148-dca85748632f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 09:31:37 crc kubenswrapper[4946]: I1128 09:31:37.209634 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffbc3445-930d-49c7-a148-dca85748632f-kube-api-access-7528z" (OuterVolumeSpecName: "kube-api-access-7528z") pod "ffbc3445-930d-49c7-a148-dca85748632f" (UID: "ffbc3445-930d-49c7-a148-dca85748632f"). InnerVolumeSpecName "kube-api-access-7528z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:31:37 crc kubenswrapper[4946]: I1128 09:31:37.210270 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffbc3445-930d-49c7-a148-dca85748632f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ffbc3445-930d-49c7-a148-dca85748632f" (UID: "ffbc3445-930d-49c7-a148-dca85748632f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 09:31:37 crc kubenswrapper[4946]: I1128 09:31:37.225326 4946 generic.go:334] "Generic (PLEG): container finished" podID="ffbc3445-930d-49c7-a148-dca85748632f" containerID="5830ac4a4fcddb530fccbe5dd3b5f8ac638244ddd7d0fb484d7424a1cf58eb12" exitCode=0 Nov 28 09:31:37 crc kubenswrapper[4946]: I1128 09:31:37.225364 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kwhch" event={"ID":"ffbc3445-930d-49c7-a148-dca85748632f","Type":"ContainerDied","Data":"5830ac4a4fcddb530fccbe5dd3b5f8ac638244ddd7d0fb484d7424a1cf58eb12"} Nov 28 09:31:37 crc kubenswrapper[4946]: I1128 09:31:37.225388 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kwhch" event={"ID":"ffbc3445-930d-49c7-a148-dca85748632f","Type":"ContainerDied","Data":"e3d341e7f6f59e4601027aa4fb165b5606e401a6a88cc99b30286bc3f76012a6"} Nov 28 09:31:37 crc kubenswrapper[4946]: I1128 09:31:37.225405 4946 scope.go:117] "RemoveContainer" containerID="5830ac4a4fcddb530fccbe5dd3b5f8ac638244ddd7d0fb484d7424a1cf58eb12" Nov 28 09:31:37 crc kubenswrapper[4946]: I1128 09:31:37.225553 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kwhch" Nov 28 09:31:37 crc kubenswrapper[4946]: I1128 09:31:37.245429 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffbc3445-930d-49c7-a148-dca85748632f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 09:31:37 crc kubenswrapper[4946]: I1128 09:31:37.245896 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7528z\" (UniqueName: \"kubernetes.io/projected/ffbc3445-930d-49c7-a148-dca85748632f-kube-api-access-7528z\") on node \"crc\" DevicePath \"\"" Nov 28 09:31:37 crc kubenswrapper[4946]: I1128 09:31:37.245910 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffbc3445-930d-49c7-a148-dca85748632f-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 09:31:37 crc kubenswrapper[4946]: I1128 09:31:37.259710 4946 scope.go:117] "RemoveContainer" containerID="24aac1bab2b1b2149ffb7a6f879d8f2b3a3a2113221fbbe8c6b4ad68a4565d8e" Nov 28 09:31:37 crc kubenswrapper[4946]: I1128 09:31:37.274036 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kwhch"] Nov 28 09:31:37 crc kubenswrapper[4946]: I1128 09:31:37.283670 4946 scope.go:117] "RemoveContainer" containerID="b4f5b62bd4dc8e47b82c51dfe5a1ea8cf5d0bc0861ee7810240bf526ed3c34b3" Nov 28 09:31:37 crc kubenswrapper[4946]: I1128 09:31:37.285978 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kwhch"] Nov 28 09:31:37 crc kubenswrapper[4946]: I1128 09:31:37.328813 4946 scope.go:117] "RemoveContainer" containerID="5830ac4a4fcddb530fccbe5dd3b5f8ac638244ddd7d0fb484d7424a1cf58eb12" Nov 28 09:31:37 crc kubenswrapper[4946]: E1128 09:31:37.330094 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5830ac4a4fcddb530fccbe5dd3b5f8ac638244ddd7d0fb484d7424a1cf58eb12\": container with ID starting with 5830ac4a4fcddb530fccbe5dd3b5f8ac638244ddd7d0fb484d7424a1cf58eb12 not found: ID does not exist" containerID="5830ac4a4fcddb530fccbe5dd3b5f8ac638244ddd7d0fb484d7424a1cf58eb12" Nov 28 09:31:37 crc kubenswrapper[4946]: I1128 09:31:37.330124 
4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5830ac4a4fcddb530fccbe5dd3b5f8ac638244ddd7d0fb484d7424a1cf58eb12"} err="failed to get container status \"5830ac4a4fcddb530fccbe5dd3b5f8ac638244ddd7d0fb484d7424a1cf58eb12\": rpc error: code = NotFound desc = could not find container \"5830ac4a4fcddb530fccbe5dd3b5f8ac638244ddd7d0fb484d7424a1cf58eb12\": container with ID starting with 5830ac4a4fcddb530fccbe5dd3b5f8ac638244ddd7d0fb484d7424a1cf58eb12 not found: ID does not exist" Nov 28 09:31:37 crc kubenswrapper[4946]: I1128 09:31:37.330162 4946 scope.go:117] "RemoveContainer" containerID="24aac1bab2b1b2149ffb7a6f879d8f2b3a3a2113221fbbe8c6b4ad68a4565d8e" Nov 28 09:31:37 crc kubenswrapper[4946]: E1128 09:31:37.330472 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24aac1bab2b1b2149ffb7a6f879d8f2b3a3a2113221fbbe8c6b4ad68a4565d8e\": container with ID starting with 24aac1bab2b1b2149ffb7a6f879d8f2b3a3a2113221fbbe8c6b4ad68a4565d8e not found: ID does not exist" containerID="24aac1bab2b1b2149ffb7a6f879d8f2b3a3a2113221fbbe8c6b4ad68a4565d8e" Nov 28 09:31:37 crc kubenswrapper[4946]: I1128 09:31:37.330495 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24aac1bab2b1b2149ffb7a6f879d8f2b3a3a2113221fbbe8c6b4ad68a4565d8e"} err="failed to get container status \"24aac1bab2b1b2149ffb7a6f879d8f2b3a3a2113221fbbe8c6b4ad68a4565d8e\": rpc error: code = NotFound desc = could not find container \"24aac1bab2b1b2149ffb7a6f879d8f2b3a3a2113221fbbe8c6b4ad68a4565d8e\": container with ID starting with 24aac1bab2b1b2149ffb7a6f879d8f2b3a3a2113221fbbe8c6b4ad68a4565d8e not found: ID does not exist" Nov 28 09:31:37 crc kubenswrapper[4946]: I1128 09:31:37.330506 4946 scope.go:117] "RemoveContainer" containerID="b4f5b62bd4dc8e47b82c51dfe5a1ea8cf5d0bc0861ee7810240bf526ed3c34b3" Nov 28 09:31:37 crc kubenswrapper[4946]: E1128 09:31:37.330935 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4f5b62bd4dc8e47b82c51dfe5a1ea8cf5d0bc0861ee7810240bf526ed3c34b3\": container with ID starting with b4f5b62bd4dc8e47b82c51dfe5a1ea8cf5d0bc0861ee7810240bf526ed3c34b3 not found: ID does not exist" containerID="b4f5b62bd4dc8e47b82c51dfe5a1ea8cf5d0bc0861ee7810240bf526ed3c34b3" Nov 28 09:31:37 crc kubenswrapper[4946]: I1128 09:31:37.331044 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4f5b62bd4dc8e47b82c51dfe5a1ea8cf5d0bc0861ee7810240bf526ed3c34b3"} err="failed to get container status \"b4f5b62bd4dc8e47b82c51dfe5a1ea8cf5d0bc0861ee7810240bf526ed3c34b3\": rpc error: code = NotFound desc = could not find container \"b4f5b62bd4dc8e47b82c51dfe5a1ea8cf5d0bc0861ee7810240bf526ed3c34b3\": container with ID starting with b4f5b62bd4dc8e47b82c51dfe5a1ea8cf5d0bc0861ee7810240bf526ed3c34b3 not found: ID does not exist" Nov 28 09:31:38 crc kubenswrapper[4946]: I1128 09:31:38.008784 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffbc3445-930d-49c7-a148-dca85748632f" path="/var/lib/kubelet/pods/ffbc3445-930d-49c7-a148-dca85748632f/volumes" Nov 28 09:31:54 crc kubenswrapper[4946]: I1128 09:31:54.731151 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 09:31:54 crc kubenswrapper[4946]: I1128 09:31:54.731858 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 09:31:54 crc kubenswrapper[4946]: I1128 09:31:54.731915 4946 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" Nov 28 09:31:54 crc kubenswrapper[4946]: I1128 09:31:54.733070 4946 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1a226ca4ef8fd70e0b3d4414936e85e3e1ca8212b259a322cd671dd24c7ac54d"} pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 09:31:54 crc kubenswrapper[4946]: I1128 09:31:54.733174 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" containerID="cri-o://1a226ca4ef8fd70e0b3d4414936e85e3e1ca8212b259a322cd671dd24c7ac54d" gracePeriod=600 Nov 28 09:31:54 crc kubenswrapper[4946]: E1128 09:31:54.864544 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:31:55 crc kubenswrapper[4946]: I1128 09:31:55.445246 4946 generic.go:334] "Generic (PLEG): container finished" podID="7450befc-262f-45d1-a5f4-f445e540185b" containerID="1a226ca4ef8fd70e0b3d4414936e85e3e1ca8212b259a322cd671dd24c7ac54d" exitCode=0 Nov 28 09:31:55 crc kubenswrapper[4946]: I1128 09:31:55.445311 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerDied","Data":"1a226ca4ef8fd70e0b3d4414936e85e3e1ca8212b259a322cd671dd24c7ac54d"} Nov 28 09:31:55 crc kubenswrapper[4946]: I1128 09:31:55.445349 4946 scope.go:117] "RemoveContainer" containerID="135f3d68684897152e9b71c56021d71c909a6dea0f6f5ae66f99a5bc37e30f4d" Nov 28 09:31:55 crc kubenswrapper[4946]: I1128 09:31:55.446224 4946 scope.go:117] "RemoveContainer" containerID="1a226ca4ef8fd70e0b3d4414936e85e3e1ca8212b259a322cd671dd24c7ac54d" Nov 28 09:31:55 crc kubenswrapper[4946]: E1128 09:31:55.446769 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:32:07 crc kubenswrapper[4946]: I1128 09:32:07.990220 4946 scope.go:117] "RemoveContainer" 
containerID="1a226ca4ef8fd70e0b3d4414936e85e3e1ca8212b259a322cd671dd24c7ac54d" Nov 28 09:32:07 crc kubenswrapper[4946]: E1128 09:32:07.991374 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:32:21 crc kubenswrapper[4946]: I1128 09:32:21.990581 4946 scope.go:117] "RemoveContainer" containerID="1a226ca4ef8fd70e0b3d4414936e85e3e1ca8212b259a322cd671dd24c7ac54d" Nov 28 09:32:21 crc kubenswrapper[4946]: E1128 09:32:21.991763 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:32:33 crc kubenswrapper[4946]: I1128 09:32:33.991068 4946 scope.go:117] "RemoveContainer" containerID="1a226ca4ef8fd70e0b3d4414936e85e3e1ca8212b259a322cd671dd24c7ac54d" Nov 28 09:32:33 crc kubenswrapper[4946]: E1128 09:32:33.992164 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:32:44 crc kubenswrapper[4946]: I1128 09:32:44.990407 4946 scope.go:117] "RemoveContainer" containerID="1a226ca4ef8fd70e0b3d4414936e85e3e1ca8212b259a322cd671dd24c7ac54d" Nov 28 09:32:44 crc kubenswrapper[4946]: E1128 09:32:44.991408 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:32:51 crc kubenswrapper[4946]: I1128 09:32:51.148755 4946 generic.go:334] "Generic (PLEG): container finished" podID="6b73ea39-1838-4f8b-b183-041d61c8c457" containerID="c9e560eed0ad5cc8242e65cf380419f286cf16010d0d23cd0d1341bc15bcaf7a" exitCode=0 Nov 28 09:32:51 crc kubenswrapper[4946]: I1128 09:32:51.148775 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-tdfft" event={"ID":"6b73ea39-1838-4f8b-b183-041d61c8c457","Type":"ContainerDied","Data":"c9e560eed0ad5cc8242e65cf380419f286cf16010d0d23cd0d1341bc15bcaf7a"} Nov 28 09:32:52 crc kubenswrapper[4946]: I1128 09:32:52.693672 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-tdfft" Nov 28 09:32:52 crc kubenswrapper[4946]: I1128 09:32:52.766733 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6b73ea39-1838-4f8b-b183-041d61c8c457-ceph\") pod \"6b73ea39-1838-4f8b-b183-041d61c8c457\" (UID: \"6b73ea39-1838-4f8b-b183-041d61c8c457\") " Nov 28 09:32:52 crc kubenswrapper[4946]: I1128 09:32:52.766823 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b73ea39-1838-4f8b-b183-041d61c8c457-nova-cell1-combined-ca-bundle\") pod \"6b73ea39-1838-4f8b-b183-041d61c8c457\" (UID: \"6b73ea39-1838-4f8b-b183-041d61c8c457\") " Nov 28 09:32:52 crc kubenswrapper[4946]: I1128 09:32:52.766870 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6b73ea39-1838-4f8b-b183-041d61c8c457-ssh-key\") pod \"6b73ea39-1838-4f8b-b183-041d61c8c457\" (UID: \"6b73ea39-1838-4f8b-b183-041d61c8c457\") " Nov 28 09:32:52 crc kubenswrapper[4946]: I1128 09:32:52.766908 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6b73ea39-1838-4f8b-b183-041d61c8c457-nova-cell1-compute-config-1\") pod \"6b73ea39-1838-4f8b-b183-041d61c8c457\" (UID: \"6b73ea39-1838-4f8b-b183-041d61c8c457\") " Nov 28 09:32:52 crc kubenswrapper[4946]: I1128 09:32:52.766949 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6b73ea39-1838-4f8b-b183-041d61c8c457-nova-migration-ssh-key-0\") pod \"6b73ea39-1838-4f8b-b183-041d61c8c457\" (UID: \"6b73ea39-1838-4f8b-b183-041d61c8c457\") " Nov 28 09:32:52 crc kubenswrapper[4946]: I1128 09:32:52.766979 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6b73ea39-1838-4f8b-b183-041d61c8c457-nova-cell1-compute-config-0\") pod \"6b73ea39-1838-4f8b-b183-041d61c8c457\" (UID: \"6b73ea39-1838-4f8b-b183-041d61c8c457\") " Nov 28 09:32:52 crc kubenswrapper[4946]: I1128 09:32:52.767060 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/6b73ea39-1838-4f8b-b183-041d61c8c457-nova-cells-global-config-0\") pod \"6b73ea39-1838-4f8b-b183-041d61c8c457\" (UID: \"6b73ea39-1838-4f8b-b183-041d61c8c457\") " Nov 28 09:32:52 crc kubenswrapper[4946]: I1128 09:32:52.767098 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wg27\" (UniqueName: \"kubernetes.io/projected/6b73ea39-1838-4f8b-b183-041d61c8c457-kube-api-access-8wg27\") pod \"6b73ea39-1838-4f8b-b183-041d61c8c457\" (UID: \"6b73ea39-1838-4f8b-b183-041d61c8c457\") " Nov 28 09:32:52 crc kubenswrapper[4946]: I1128 09:32:52.767145 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6b73ea39-1838-4f8b-b183-041d61c8c457-nova-migration-ssh-key-1\") pod \"6b73ea39-1838-4f8b-b183-041d61c8c457\" (UID: \"6b73ea39-1838-4f8b-b183-041d61c8c457\") " Nov 28 09:32:52 crc kubenswrapper[4946]: I1128 09:32:52.767291 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/6b73ea39-1838-4f8b-b183-041d61c8c457-inventory\") pod \"6b73ea39-1838-4f8b-b183-041d61c8c457\" (UID: \"6b73ea39-1838-4f8b-b183-041d61c8c457\") " Nov 28 09:32:52 crc kubenswrapper[4946]: I1128 09:32:52.767359 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/6b73ea39-1838-4f8b-b183-041d61c8c457-nova-cells-global-config-1\") pod \"6b73ea39-1838-4f8b-b183-041d61c8c457\" (UID: \"6b73ea39-1838-4f8b-b183-041d61c8c457\") " Nov 28 09:32:52 crc kubenswrapper[4946]: I1128 09:32:52.773779 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b73ea39-1838-4f8b-b183-041d61c8c457-ceph" (OuterVolumeSpecName: "ceph") pod "6b73ea39-1838-4f8b-b183-041d61c8c457" (UID: "6b73ea39-1838-4f8b-b183-041d61c8c457"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:32:52 crc kubenswrapper[4946]: I1128 09:32:52.778420 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b73ea39-1838-4f8b-b183-041d61c8c457-kube-api-access-8wg27" (OuterVolumeSpecName: "kube-api-access-8wg27") pod "6b73ea39-1838-4f8b-b183-041d61c8c457" (UID: "6b73ea39-1838-4f8b-b183-041d61c8c457"). InnerVolumeSpecName "kube-api-access-8wg27". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:32:52 crc kubenswrapper[4946]: I1128 09:32:52.781850 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b73ea39-1838-4f8b-b183-041d61c8c457-nova-cell1-combined-ca-bundle" (OuterVolumeSpecName: "nova-cell1-combined-ca-bundle") pod "6b73ea39-1838-4f8b-b183-041d61c8c457" (UID: "6b73ea39-1838-4f8b-b183-041d61c8c457"). InnerVolumeSpecName "nova-cell1-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:32:52 crc kubenswrapper[4946]: I1128 09:32:52.799016 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b73ea39-1838-4f8b-b183-041d61c8c457-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "6b73ea39-1838-4f8b-b183-041d61c8c457" (UID: "6b73ea39-1838-4f8b-b183-041d61c8c457"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:32:52 crc kubenswrapper[4946]: I1128 09:32:52.810765 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b73ea39-1838-4f8b-b183-041d61c8c457-inventory" (OuterVolumeSpecName: "inventory") pod "6b73ea39-1838-4f8b-b183-041d61c8c457" (UID: "6b73ea39-1838-4f8b-b183-041d61c8c457"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:32:52 crc kubenswrapper[4946]: I1128 09:32:52.812399 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b73ea39-1838-4f8b-b183-041d61c8c457-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "6b73ea39-1838-4f8b-b183-041d61c8c457" (UID: "6b73ea39-1838-4f8b-b183-041d61c8c457"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:32:52 crc kubenswrapper[4946]: I1128 09:32:52.817577 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b73ea39-1838-4f8b-b183-041d61c8c457-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "6b73ea39-1838-4f8b-b183-041d61c8c457" (UID: "6b73ea39-1838-4f8b-b183-041d61c8c457"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:32:52 crc kubenswrapper[4946]: I1128 09:32:52.818556 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b73ea39-1838-4f8b-b183-041d61c8c457-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "6b73ea39-1838-4f8b-b183-041d61c8c457" (UID: "6b73ea39-1838-4f8b-b183-041d61c8c457"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:32:52 crc kubenswrapper[4946]: I1128 09:32:52.824546 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b73ea39-1838-4f8b-b183-041d61c8c457-nova-cells-global-config-1" (OuterVolumeSpecName: "nova-cells-global-config-1") pod "6b73ea39-1838-4f8b-b183-041d61c8c457" (UID: "6b73ea39-1838-4f8b-b183-041d61c8c457"). InnerVolumeSpecName "nova-cells-global-config-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 09:32:52 crc kubenswrapper[4946]: I1128 09:32:52.828338 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b73ea39-1838-4f8b-b183-041d61c8c457-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6b73ea39-1838-4f8b-b183-041d61c8c457" (UID: "6b73ea39-1838-4f8b-b183-041d61c8c457"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:32:52 crc kubenswrapper[4946]: I1128 09:32:52.829542 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b73ea39-1838-4f8b-b183-041d61c8c457-nova-cells-global-config-0" (OuterVolumeSpecName: "nova-cells-global-config-0") pod "6b73ea39-1838-4f8b-b183-041d61c8c457" (UID: "6b73ea39-1838-4f8b-b183-041d61c8c457"). InnerVolumeSpecName "nova-cells-global-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 09:32:52 crc kubenswrapper[4946]: I1128 09:32:52.869956 4946 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/6b73ea39-1838-4f8b-b183-041d61c8c457-nova-cells-global-config-1\") on node \"crc\" DevicePath \"\"" Nov 28 09:32:52 crc kubenswrapper[4946]: I1128 09:32:52.870007 4946 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6b73ea39-1838-4f8b-b183-041d61c8c457-ceph\") on node \"crc\" DevicePath \"\"" Nov 28 09:32:52 crc kubenswrapper[4946]: I1128 09:32:52.870028 4946 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b73ea39-1838-4f8b-b183-041d61c8c457-nova-cell1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 09:32:52 crc kubenswrapper[4946]: I1128 09:32:52.870048 4946 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6b73ea39-1838-4f8b-b183-041d61c8c457-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 09:32:52 crc kubenswrapper[4946]: I1128 09:32:52.870070 4946 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6b73ea39-1838-4f8b-b183-041d61c8c457-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Nov 28 09:32:52 crc kubenswrapper[4946]: I1128 09:32:52.870092 4946 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6b73ea39-1838-4f8b-b183-041d61c8c457-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Nov 28 09:32:52 crc kubenswrapper[4946]: I1128 09:32:52.870110 4946 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6b73ea39-1838-4f8b-b183-041d61c8c457-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Nov 28 09:32:52 crc kubenswrapper[4946]: I1128 09:32:52.870128 4946 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/6b73ea39-1838-4f8b-b183-041d61c8c457-nova-cells-global-config-0\") on node \"crc\" DevicePath \"\"" Nov 28 09:32:52 crc kubenswrapper[4946]: I1128 09:32:52.870146 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wg27\" (UniqueName: \"kubernetes.io/projected/6b73ea39-1838-4f8b-b183-041d61c8c457-kube-api-access-8wg27\") on node \"crc\" DevicePath \"\"" Nov 28 09:32:52 crc kubenswrapper[4946]: I1128 09:32:52.870165 4946 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6b73ea39-1838-4f8b-b183-041d61c8c457-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Nov 28 09:32:52 crc kubenswrapper[4946]: I1128 09:32:52.870183 4946 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6b73ea39-1838-4f8b-b183-041d61c8c457-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 09:32:53 crc kubenswrapper[4946]: I1128 09:32:53.180902 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-tdfft" event={"ID":"6b73ea39-1838-4f8b-b183-041d61c8c457","Type":"ContainerDied","Data":"3f514cfcb57c42b3d145c924a7506068294965acf6a23ab05f36f863218a1fd1"} Nov 28 09:32:53 crc kubenswrapper[4946]: I1128 09:32:53.180962 4946 pod_container_deletor.go:80] "Container 
not found in pod's containers" containerID="3f514cfcb57c42b3d145c924a7506068294965acf6a23ab05f36f863218a1fd1" Nov 28 09:32:53 crc kubenswrapper[4946]: I1128 09:32:53.181042 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-tdfft" Nov 28 09:32:53 crc kubenswrapper[4946]: I1128 09:32:53.418223 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-8pscp"] Nov 28 09:32:53 crc kubenswrapper[4946]: E1128 09:32:53.418745 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffbc3445-930d-49c7-a148-dca85748632f" containerName="registry-server" Nov 28 09:32:53 crc kubenswrapper[4946]: I1128 09:32:53.418764 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffbc3445-930d-49c7-a148-dca85748632f" containerName="registry-server" Nov 28 09:32:53 crc kubenswrapper[4946]: E1128 09:32:53.418784 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffbc3445-930d-49c7-a148-dca85748632f" containerName="extract-content" Nov 28 09:32:53 crc kubenswrapper[4946]: I1128 09:32:53.418793 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffbc3445-930d-49c7-a148-dca85748632f" containerName="extract-content" Nov 28 09:32:53 crc kubenswrapper[4946]: E1128 09:32:53.418811 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffbc3445-930d-49c7-a148-dca85748632f" containerName="extract-utilities" Nov 28 09:32:53 crc kubenswrapper[4946]: I1128 09:32:53.418819 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffbc3445-930d-49c7-a148-dca85748632f" containerName="extract-utilities" Nov 28 09:32:53 crc kubenswrapper[4946]: E1128 09:32:53.418847 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b73ea39-1838-4f8b-b183-041d61c8c457" containerName="nova-cell1-openstack-openstack-cell1" Nov 28 09:32:53 crc kubenswrapper[4946]: I1128 09:32:53.418856 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b73ea39-1838-4f8b-b183-041d61c8c457" containerName="nova-cell1-openstack-openstack-cell1" Nov 28 09:32:53 crc kubenswrapper[4946]: I1128 09:32:53.419091 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b73ea39-1838-4f8b-b183-041d61c8c457" containerName="nova-cell1-openstack-openstack-cell1" Nov 28 09:32:53 crc kubenswrapper[4946]: I1128 09:32:53.419121 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffbc3445-930d-49c7-a148-dca85748632f" containerName="registry-server" Nov 28 09:32:53 crc kubenswrapper[4946]: I1128 09:32:53.420046 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-8pscp"
Nov 28 09:32:53 crc kubenswrapper[4946]: I1128 09:32:53.423282 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 28 09:32:53 crc kubenswrapper[4946]: I1128 09:32:53.423630 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret"
Nov 28 09:32:53 crc kubenswrapper[4946]: I1128 09:32:53.424702 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data"
Nov 28 09:32:53 crc kubenswrapper[4946]: I1128 09:32:53.424918 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Nov 28 09:32:53 crc kubenswrapper[4946]: I1128 09:32:53.433435 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-8pscp"]
Nov 28 09:32:53 crc kubenswrapper[4946]: I1128 09:32:53.438988 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-psw2j"
Nov 28 09:32:53 crc kubenswrapper[4946]: I1128 09:32:53.487537 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4819513-a41f-4a93-b8b5-9587c67a0832-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-8pscp\" (UID: \"c4819513-a41f-4a93-b8b5-9587c67a0832\") " pod="openstack/telemetry-openstack-openstack-cell1-8pscp"
Nov 28 09:32:53 crc kubenswrapper[4946]: I1128 09:32:53.487618 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm2v4\" (UniqueName: \"kubernetes.io/projected/c4819513-a41f-4a93-b8b5-9587c67a0832-kube-api-access-nm2v4\") pod \"telemetry-openstack-openstack-cell1-8pscp\" (UID: \"c4819513-a41f-4a93-b8b5-9587c67a0832\") " pod="openstack/telemetry-openstack-openstack-cell1-8pscp"
Nov 28 09:32:53 crc kubenswrapper[4946]: I1128 09:32:53.487867 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/c4819513-a41f-4a93-b8b5-9587c67a0832-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-8pscp\" (UID: \"c4819513-a41f-4a93-b8b5-9587c67a0832\") " pod="openstack/telemetry-openstack-openstack-cell1-8pscp"
Nov 28 09:32:53 crc kubenswrapper[4946]: I1128 09:32:53.488003 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c4819513-a41f-4a93-b8b5-9587c67a0832-ssh-key\") pod \"telemetry-openstack-openstack-cell1-8pscp\" (UID: \"c4819513-a41f-4a93-b8b5-9587c67a0832\") " pod="openstack/telemetry-openstack-openstack-cell1-8pscp"
Nov 28 09:32:53 crc kubenswrapper[4946]: I1128 09:32:53.488111 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4819513-a41f-4a93-b8b5-9587c67a0832-inventory\") pod \"telemetry-openstack-openstack-cell1-8pscp\" (UID: \"c4819513-a41f-4a93-b8b5-9587c67a0832\") " pod="openstack/telemetry-openstack-openstack-cell1-8pscp"
Nov 28 09:32:53 crc kubenswrapper[4946]: I1128 09:32:53.488296 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/c4819513-a41f-4a93-b8b5-9587c67a0832-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-8pscp\" (UID: \"c4819513-a41f-4a93-b8b5-9587c67a0832\") " pod="openstack/telemetry-openstack-openstack-cell1-8pscp"
Nov 28 09:32:53 crc kubenswrapper[4946]: I1128 09:32:53.488589 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/c4819513-a41f-4a93-b8b5-9587c67a0832-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-8pscp\" (UID: \"c4819513-a41f-4a93-b8b5-9587c67a0832\") " pod="openstack/telemetry-openstack-openstack-cell1-8pscp"
Nov 28 09:32:53 crc kubenswrapper[4946]: I1128 09:32:53.488803 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c4819513-a41f-4a93-b8b5-9587c67a0832-ceph\") pod \"telemetry-openstack-openstack-cell1-8pscp\" (UID: \"c4819513-a41f-4a93-b8b5-9587c67a0832\") " pod="openstack/telemetry-openstack-openstack-cell1-8pscp"
Nov 28 09:32:53 crc kubenswrapper[4946]: I1128 09:32:53.591009 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c4819513-a41f-4a93-b8b5-9587c67a0832-ceph\") pod \"telemetry-openstack-openstack-cell1-8pscp\" (UID: \"c4819513-a41f-4a93-b8b5-9587c67a0832\") " pod="openstack/telemetry-openstack-openstack-cell1-8pscp"
Nov 28 09:32:53 crc kubenswrapper[4946]: I1128 09:32:53.591127 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4819513-a41f-4a93-b8b5-9587c67a0832-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-8pscp\" (UID: \"c4819513-a41f-4a93-b8b5-9587c67a0832\") " pod="openstack/telemetry-openstack-openstack-cell1-8pscp"
Nov 28 09:32:53 crc kubenswrapper[4946]: I1128 09:32:53.591207 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nm2v4\" (UniqueName: \"kubernetes.io/projected/c4819513-a41f-4a93-b8b5-9587c67a0832-kube-api-access-nm2v4\") pod \"telemetry-openstack-openstack-cell1-8pscp\" (UID: \"c4819513-a41f-4a93-b8b5-9587c67a0832\") " pod="openstack/telemetry-openstack-openstack-cell1-8pscp"
Nov 28 09:32:53 crc kubenswrapper[4946]: I1128 09:32:53.591283 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/c4819513-a41f-4a93-b8b5-9587c67a0832-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-8pscp\" (UID: \"c4819513-a41f-4a93-b8b5-9587c67a0832\") " pod="openstack/telemetry-openstack-openstack-cell1-8pscp"
Nov 28 09:32:53 crc kubenswrapper[4946]: I1128 09:32:53.591345 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c4819513-a41f-4a93-b8b5-9587c67a0832-ssh-key\") pod \"telemetry-openstack-openstack-cell1-8pscp\" (UID: \"c4819513-a41f-4a93-b8b5-9587c67a0832\") " pod="openstack/telemetry-openstack-openstack-cell1-8pscp"
Nov 28 09:32:53 crc kubenswrapper[4946]: I1128 09:32:53.591411 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4819513-a41f-4a93-b8b5-9587c67a0832-inventory\") pod \"telemetry-openstack-openstack-cell1-8pscp\" (UID: \"c4819513-a41f-4a93-b8b5-9587c67a0832\") " pod="openstack/telemetry-openstack-openstack-cell1-8pscp"
Nov 28 09:32:53 crc kubenswrapper[4946]: I1128 09:32:53.591529 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/c4819513-a41f-4a93-b8b5-9587c67a0832-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-8pscp\" (UID: \"c4819513-a41f-4a93-b8b5-9587c67a0832\") " pod="openstack/telemetry-openstack-openstack-cell1-8pscp"
Nov 28 09:32:53 crc kubenswrapper[4946]: I1128 09:32:53.591653 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/c4819513-a41f-4a93-b8b5-9587c67a0832-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-8pscp\" (UID: \"c4819513-a41f-4a93-b8b5-9587c67a0832\") " pod="openstack/telemetry-openstack-openstack-cell1-8pscp"
Nov 28 09:32:53 crc kubenswrapper[4946]: I1128 09:32:53.596675 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/c4819513-a41f-4a93-b8b5-9587c67a0832-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-8pscp\" (UID: \"c4819513-a41f-4a93-b8b5-9587c67a0832\") " pod="openstack/telemetry-openstack-openstack-cell1-8pscp"
Nov 28 09:32:53 crc kubenswrapper[4946]: I1128 09:32:53.597060 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/c4819513-a41f-4a93-b8b5-9587c67a0832-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-8pscp\" (UID: \"c4819513-a41f-4a93-b8b5-9587c67a0832\") " pod="openstack/telemetry-openstack-openstack-cell1-8pscp"
Nov 28 09:32:53 crc kubenswrapper[4946]: I1128 09:32:53.597261 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c4819513-a41f-4a93-b8b5-9587c67a0832-ceph\") pod \"telemetry-openstack-openstack-cell1-8pscp\" (UID: \"c4819513-a41f-4a93-b8b5-9587c67a0832\") " pod="openstack/telemetry-openstack-openstack-cell1-8pscp"
Nov 28 09:32:53 crc kubenswrapper[4946]: I1128 09:32:53.598272 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4819513-a41f-4a93-b8b5-9587c67a0832-inventory\") pod \"telemetry-openstack-openstack-cell1-8pscp\" (UID: \"c4819513-a41f-4a93-b8b5-9587c67a0832\") " pod="openstack/telemetry-openstack-openstack-cell1-8pscp"
Nov 28 09:32:53 crc kubenswrapper[4946]: I1128 09:32:53.599044 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c4819513-a41f-4a93-b8b5-9587c67a0832-ssh-key\") pod \"telemetry-openstack-openstack-cell1-8pscp\" (UID: \"c4819513-a41f-4a93-b8b5-9587c67a0832\") " pod="openstack/telemetry-openstack-openstack-cell1-8pscp"
Nov 28 09:32:53 crc kubenswrapper[4946]: I1128 09:32:53.599302 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/c4819513-a41f-4a93-b8b5-9587c67a0832-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-8pscp\" (UID: \"c4819513-a41f-4a93-b8b5-9587c67a0832\") " pod="openstack/telemetry-openstack-openstack-cell1-8pscp"
Nov 28 09:32:53 crc kubenswrapper[4946]: I1128 09:32:53.600137 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4819513-a41f-4a93-b8b5-9587c67a0832-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-8pscp\" (UID: \"c4819513-a41f-4a93-b8b5-9587c67a0832\") " pod="openstack/telemetry-openstack-openstack-cell1-8pscp"
Nov 28 09:32:53 crc kubenswrapper[4946]: I1128 09:32:53.612974 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm2v4\" (UniqueName: \"kubernetes.io/projected/c4819513-a41f-4a93-b8b5-9587c67a0832-kube-api-access-nm2v4\") pod \"telemetry-openstack-openstack-cell1-8pscp\" (UID: \"c4819513-a41f-4a93-b8b5-9587c67a0832\") " pod="openstack/telemetry-openstack-openstack-cell1-8pscp"
Nov 28 09:32:53 crc kubenswrapper[4946]: I1128 09:32:53.765092 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-8pscp"
Nov 28 09:32:54 crc kubenswrapper[4946]: I1128 09:32:54.435979 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-8pscp"]
Nov 28 09:32:55 crc kubenswrapper[4946]: I1128 09:32:55.205714 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-8pscp" event={"ID":"c4819513-a41f-4a93-b8b5-9587c67a0832","Type":"ContainerStarted","Data":"75663784e683c650ad665759412d08212262741c0ed37d793dd5b76b5635609a"}
Nov 28 09:32:56 crc kubenswrapper[4946]: I1128 09:32:56.220021 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-8pscp" event={"ID":"c4819513-a41f-4a93-b8b5-9587c67a0832","Type":"ContainerStarted","Data":"640b10d50aab1c5c43a20fca86e29d197e7699ab436c75c70720e5e27c067369"}
Nov 28 09:32:56 crc kubenswrapper[4946]: I1128 09:32:56.258918 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-openstack-openstack-cell1-8pscp" podStartSLOduration=2.658271106 podStartE2EDuration="3.258890372s" podCreationTimestamp="2025-11-28 09:32:53 +0000 UTC" firstStartedPulling="2025-11-28 09:32:54.440882038 +0000 UTC m=+9628.818947159" lastFinishedPulling="2025-11-28 09:32:55.041501314 +0000 UTC m=+9629.419566425" observedRunningTime="2025-11-28 09:32:56.241299939 +0000 UTC m=+9630.619365050" watchObservedRunningTime="2025-11-28 09:32:56.258890372 +0000 UTC m=+9630.636955513"
Nov 28 09:32:58 crc kubenswrapper[4946]: I1128 09:32:58.990453 4946 scope.go:117] "RemoveContainer" containerID="1a226ca4ef8fd70e0b3d4414936e85e3e1ca8212b259a322cd671dd24c7ac54d"
Nov 28 09:32:58 crc kubenswrapper[4946]: E1128 09:32:58.991702 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
Nov 28 09:33:09 crc kubenswrapper[4946]: I1128 09:33:09.993421 4946 scope.go:117] "RemoveContainer" containerID="1a226ca4ef8fd70e0b3d4414936e85e3e1ca8212b259a322cd671dd24c7ac54d"
Nov 28 09:33:09 crc kubenswrapper[4946]: E1128 09:33:09.994785 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
Nov 28 09:33:20 crc kubenswrapper[4946]: I1128 09:33:20.990637 4946 scope.go:117] "RemoveContainer" containerID="1a226ca4ef8fd70e0b3d4414936e85e3e1ca8212b259a322cd671dd24c7ac54d"
Nov 28 09:33:20 crc kubenswrapper[4946]: E1128 09:33:20.992086 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
Nov 28 09:33:31 crc kubenswrapper[4946]: I1128 09:33:31.989909 4946 scope.go:117] "RemoveContainer" containerID="1a226ca4ef8fd70e0b3d4414936e85e3e1ca8212b259a322cd671dd24c7ac54d"
Nov 28 09:33:31 crc kubenswrapper[4946]: E1128 09:33:31.990661 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
Nov 28 09:33:46 crc kubenswrapper[4946]: I1128 09:33:46.990216 4946 scope.go:117] "RemoveContainer" containerID="1a226ca4ef8fd70e0b3d4414936e85e3e1ca8212b259a322cd671dd24c7ac54d"
Nov 28 09:33:46 crc kubenswrapper[4946]: E1128 09:33:46.991374 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
Nov 28 09:33:57 crc kubenswrapper[4946]: I1128 09:33:57.990647 4946 scope.go:117] "RemoveContainer" containerID="1a226ca4ef8fd70e0b3d4414936e85e3e1ca8212b259a322cd671dd24c7ac54d"
Nov 28 09:33:57 crc kubenswrapper[4946]: E1128 09:33:57.991790 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
Nov 28 09:34:12 crc kubenswrapper[4946]: I1128 09:34:12.990346 4946 scope.go:117] "RemoveContainer" containerID="1a226ca4ef8fd70e0b3d4414936e85e3e1ca8212b259a322cd671dd24c7ac54d"
Nov 28 09:34:12 crc kubenswrapper[4946]: E1128 09:34:12.991217 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
Nov 28 09:34:24 crc kubenswrapper[4946]: I1128 09:34:24.990085 4946 scope.go:117] "RemoveContainer" containerID="1a226ca4ef8fd70e0b3d4414936e85e3e1ca8212b259a322cd671dd24c7ac54d"
Nov 28 09:34:24 crc kubenswrapper[4946]: E1128 09:34:24.990827 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
Nov 28 09:34:39 crc kubenswrapper[4946]: I1128 09:34:39.990124 4946 scope.go:117] "RemoveContainer" containerID="1a226ca4ef8fd70e0b3d4414936e85e3e1ca8212b259a322cd671dd24c7ac54d"
Nov 28 09:34:39 crc kubenswrapper[4946]: E1128 09:34:39.993071 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
Nov 28 09:34:54 crc kubenswrapper[4946]: I1128 09:34:54.990609 4946 scope.go:117] "RemoveContainer" containerID="1a226ca4ef8fd70e0b3d4414936e85e3e1ca8212b259a322cd671dd24c7ac54d"
Nov 28 09:34:54 crc kubenswrapper[4946]: E1128 09:34:54.991649 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
Nov 28 09:35:06 crc kubenswrapper[4946]: I1128 09:35:06.000288 4946 scope.go:117] "RemoveContainer" containerID="1a226ca4ef8fd70e0b3d4414936e85e3e1ca8212b259a322cd671dd24c7ac54d"
Nov 28 09:35:06 crc kubenswrapper[4946]: E1128 09:35:06.002843 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
Nov 28 09:35:17 crc kubenswrapper[4946]: I1128 09:35:17.991332 4946 scope.go:117] "RemoveContainer" containerID="1a226ca4ef8fd70e0b3d4414936e85e3e1ca8212b259a322cd671dd24c7ac54d"
Nov 28 09:35:17 crc kubenswrapper[4946]: E1128 09:35:17.992353 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
Nov 28 09:35:29 crc kubenswrapper[4946]: I1128 09:35:29.991037 4946 scope.go:117] "RemoveContainer" containerID="1a226ca4ef8fd70e0b3d4414936e85e3e1ca8212b259a322cd671dd24c7ac54d"
Nov 28 09:35:29 crc kubenswrapper[4946]: E1128 09:35:29.992395 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
Nov 28 09:35:41 crc kubenswrapper[4946]: I1128 09:35:41.990656 4946 scope.go:117] "RemoveContainer" containerID="1a226ca4ef8fd70e0b3d4414936e85e3e1ca8212b259a322cd671dd24c7ac54d"
Nov 28 09:35:41 crc kubenswrapper[4946]: E1128 09:35:41.991709 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
Nov 28 09:35:52 crc kubenswrapper[4946]: I1128 09:35:52.991644 4946 scope.go:117] "RemoveContainer" containerID="1a226ca4ef8fd70e0b3d4414936e85e3e1ca8212b259a322cd671dd24c7ac54d"
Nov 28 09:35:52 crc kubenswrapper[4946]: E1128 09:35:52.992799 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
Nov 28 09:36:03 crc kubenswrapper[4946]: I1128 09:36:03.990501 4946 scope.go:117] "RemoveContainer" containerID="1a226ca4ef8fd70e0b3d4414936e85e3e1ca8212b259a322cd671dd24c7ac54d"
Nov 28 09:36:03 crc kubenswrapper[4946]: E1128 09:36:03.991420 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
Nov 28 09:36:16 crc kubenswrapper[4946]: I1128 09:36:16.990400 4946 scope.go:117] "RemoveContainer" containerID="1a226ca4ef8fd70e0b3d4414936e85e3e1ca8212b259a322cd671dd24c7ac54d"
Nov 28 09:36:16 crc kubenswrapper[4946]: E1128 09:36:16.991538 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
Nov 28 09:36:30 crc kubenswrapper[4946]: I1128 09:36:30.990447 4946 scope.go:117] "RemoveContainer" containerID="1a226ca4ef8fd70e0b3d4414936e85e3e1ca8212b259a322cd671dd24c7ac54d"
Nov 28 09:36:30 crc kubenswrapper[4946]: E1128 09:36:30.991366 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
Nov 28 09:36:43 crc kubenswrapper[4946]: I1128 09:36:43.990205 4946 scope.go:117] "RemoveContainer" containerID="1a226ca4ef8fd70e0b3d4414936e85e3e1ca8212b259a322cd671dd24c7ac54d"
Nov 28 09:36:43 crc kubenswrapper[4946]: E1128 09:36:43.991974 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
Nov 28 09:36:54 crc kubenswrapper[4946]: I1128 09:36:54.991223 4946 scope.go:117] "RemoveContainer" containerID="1a226ca4ef8fd70e0b3d4414936e85e3e1ca8212b259a322cd671dd24c7ac54d"
Nov 28 09:36:55 crc kubenswrapper[4946]: I1128 09:36:55.200924 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerStarted","Data":"0ec5e3eb54e35f954dba98a8c8522fadf3fba9bfecbc9e8dcd7fcd5944c599d0"}
Nov 28 09:38:58 crc kubenswrapper[4946]: I1128 09:38:58.409767 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-b2cch"]
Nov 28 09:38:58 crc kubenswrapper[4946]: I1128 09:38:58.414536 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b2cch"
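The long run of paired "RemoveContainer" / "Error syncing pod, skipping" messages above is the kubelet sync loop repeatedly hitting the crash-loop back-off gate for machine-config-daemon: each periodic sync asks to restart the container and pod_workers rejects it until the 5m0s window expires, after which the container finally restarts at 09:36:55. A self-contained Go sketch of that back-off schedule; the 5m cap is taken from the "back-off 5m0s" message in the log, while the 10s base and doubling factor are standard kubelet defaults assumed here:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Kubelet-style crash back-off: start at 10s, double after each
    	// failed restart, cap at 5m (the "back-off 5m0s" seen in the log).
    	backoff := 10 * time.Second
    	const maxBackoff = 5 * time.Minute
    	for attempt := 1; attempt <= 8; attempt++ {
    		fmt.Printf("restart attempt %d: wait %v\n", attempt, backoff)
    		backoff *= 2
    		if backoff > maxBackoff {
    			backoff = maxBackoff
    		}
    	}
    }

By the sixth failed restart the wait saturates at 5m, which is why every rejected sync in this stretch of the log reports the same "back-off 5m0s" message.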
Nov 28 09:38:58 crc kubenswrapper[4946]: I1128 09:38:58.430178 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b2cch"]
Nov 28 09:38:58 crc kubenswrapper[4946]: I1128 09:38:58.534049 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51fa5742-b227-42ee-b5bf-6346c4440a7f-catalog-content\") pod \"redhat-marketplace-b2cch\" (UID: \"51fa5742-b227-42ee-b5bf-6346c4440a7f\") " pod="openshift-marketplace/redhat-marketplace-b2cch"
Nov 28 09:38:58 crc kubenswrapper[4946]: I1128 09:38:58.534187 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2cb4\" (UniqueName: \"kubernetes.io/projected/51fa5742-b227-42ee-b5bf-6346c4440a7f-kube-api-access-k2cb4\") pod \"redhat-marketplace-b2cch\" (UID: \"51fa5742-b227-42ee-b5bf-6346c4440a7f\") " pod="openshift-marketplace/redhat-marketplace-b2cch"
Nov 28 09:38:58 crc kubenswrapper[4946]: I1128 09:38:58.534541 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51fa5742-b227-42ee-b5bf-6346c4440a7f-utilities\") pod \"redhat-marketplace-b2cch\" (UID: \"51fa5742-b227-42ee-b5bf-6346c4440a7f\") " pod="openshift-marketplace/redhat-marketplace-b2cch"
Nov 28 09:38:58 crc kubenswrapper[4946]: I1128 09:38:58.636974 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2cb4\" (UniqueName: \"kubernetes.io/projected/51fa5742-b227-42ee-b5bf-6346c4440a7f-kube-api-access-k2cb4\") pod \"redhat-marketplace-b2cch\" (UID: \"51fa5742-b227-42ee-b5bf-6346c4440a7f\") " pod="openshift-marketplace/redhat-marketplace-b2cch"
Nov 28 09:38:58 crc kubenswrapper[4946]: I1128 09:38:58.637266 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51fa5742-b227-42ee-b5bf-6346c4440a7f-utilities\") pod \"redhat-marketplace-b2cch\" (UID: \"51fa5742-b227-42ee-b5bf-6346c4440a7f\") " pod="openshift-marketplace/redhat-marketplace-b2cch"
Nov 28 09:38:58 crc kubenswrapper[4946]: I1128 09:38:58.637451 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51fa5742-b227-42ee-b5bf-6346c4440a7f-catalog-content\") pod \"redhat-marketplace-b2cch\" (UID: \"51fa5742-b227-42ee-b5bf-6346c4440a7f\") " pod="openshift-marketplace/redhat-marketplace-b2cch"
Nov 28 09:38:58 crc kubenswrapper[4946]: I1128 09:38:58.638180 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51fa5742-b227-42ee-b5bf-6346c4440a7f-utilities\") pod \"redhat-marketplace-b2cch\" (UID: \"51fa5742-b227-42ee-b5bf-6346c4440a7f\") " pod="openshift-marketplace/redhat-marketplace-b2cch"
Nov 28 09:38:58 crc kubenswrapper[4946]: I1128 09:38:58.638204 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51fa5742-b227-42ee-b5bf-6346c4440a7f-catalog-content\") pod \"redhat-marketplace-b2cch\" (UID: \"51fa5742-b227-42ee-b5bf-6346c4440a7f\") " pod="openshift-marketplace/redhat-marketplace-b2cch"
Nov 28 09:38:58 crc kubenswrapper[4946]: I1128 09:38:58.679423 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2cb4\" (UniqueName: \"kubernetes.io/projected/51fa5742-b227-42ee-b5bf-6346c4440a7f-kube-api-access-k2cb4\") pod \"redhat-marketplace-b2cch\" (UID: \"51fa5742-b227-42ee-b5bf-6346c4440a7f\") " pod="openshift-marketplace/redhat-marketplace-b2cch"
Nov 28 09:38:58 crc kubenswrapper[4946]: I1128 09:38:58.758926 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b2cch"
Nov 28 09:38:59 crc kubenswrapper[4946]: I1128 09:38:59.249905 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b2cch"]
Nov 28 09:38:59 crc kubenswrapper[4946]: I1128 09:38:59.824866 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b2cch" event={"ID":"51fa5742-b227-42ee-b5bf-6346c4440a7f","Type":"ContainerStarted","Data":"90caea7259b26e0264f6a8f22c087ffe1c641d52bf5a0199e146450ec7197f36"}
Nov 28 09:38:59 crc kubenswrapper[4946]: I1128 09:38:59.825256 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b2cch" event={"ID":"51fa5742-b227-42ee-b5bf-6346c4440a7f","Type":"ContainerStarted","Data":"31a69989a86a10238a029ceef10d7ce3b98c8ef68046a4e5241eaf36f04bb9c1"}
Nov 28 09:39:00 crc kubenswrapper[4946]: E1128 09:39:00.641198 4946 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51fa5742_b227_42ee_b5bf_6346c4440a7f.slice/crio-90caea7259b26e0264f6a8f22c087ffe1c641d52bf5a0199e146450ec7197f36.scope\": RecentStats: unable to find data in memory cache]"
Nov 28 09:39:00 crc kubenswrapper[4946]: I1128 09:39:00.836069 4946 generic.go:334] "Generic (PLEG): container finished" podID="51fa5742-b227-42ee-b5bf-6346c4440a7f" containerID="90caea7259b26e0264f6a8f22c087ffe1c641d52bf5a0199e146450ec7197f36" exitCode=0
Nov 28 09:39:00 crc kubenswrapper[4946]: I1128 09:39:00.836198 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b2cch" event={"ID":"51fa5742-b227-42ee-b5bf-6346c4440a7f","Type":"ContainerDied","Data":"90caea7259b26e0264f6a8f22c087ffe1c641d52bf5a0199e146450ec7197f36"}
Nov 28 09:39:00 crc kubenswrapper[4946]: I1128 09:39:00.839780 4946 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 28 09:39:02 crc kubenswrapper[4946]: I1128 09:39:02.862174 4946 generic.go:334] "Generic (PLEG): container finished" podID="51fa5742-b227-42ee-b5bf-6346c4440a7f" containerID="de7cde0688e174bfcdfe77aeb13d35fe83eafe6000f67d2e92868a8624bdd496" exitCode=0
Nov 28 09:39:02 crc kubenswrapper[4946]: I1128 09:39:02.862341 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b2cch" event={"ID":"51fa5742-b227-42ee-b5bf-6346c4440a7f","Type":"ContainerDied","Data":"de7cde0688e174bfcdfe77aeb13d35fe83eafe6000f67d2e92868a8624bdd496"}
Nov 28 09:39:05 crc kubenswrapper[4946]: I1128 09:39:05.907972 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b2cch" event={"ID":"51fa5742-b227-42ee-b5bf-6346c4440a7f","Type":"ContainerStarted","Data":"3894e2f663f7166dd03fef17cd0401145a52bbf4c5633746b2252d06272f217c"}
Nov 28 09:39:05 crc kubenswrapper[4946]: I1128 09:39:05.931691 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-b2cch" podStartSLOduration=4.043628326 podStartE2EDuration="7.931668695s" podCreationTimestamp="2025-11-28 09:38:58 +0000 UTC" firstStartedPulling="2025-11-28 09:39:00.839561166 +0000 UTC m=+9995.217626277" lastFinishedPulling="2025-11-28 09:39:04.727601495 +0000 UTC m=+9999.105666646" observedRunningTime="2025-11-28 09:39:05.929633105 +0000 UTC m=+10000.307698226" watchObservedRunningTime="2025-11-28 09:39:05.931668695 +0000 UTC m=+10000.309733806"
Nov 28 09:39:08 crc kubenswrapper[4946]: I1128 09:39:08.759570 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-b2cch"
Nov 28 09:39:08 crc kubenswrapper[4946]: I1128 09:39:08.760092 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-b2cch"
Nov 28 09:39:08 crc kubenswrapper[4946]: I1128 09:39:08.830822 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-b2cch"
Nov 28 09:39:10 crc kubenswrapper[4946]: I1128 09:39:10.045943 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-b2cch"
Nov 28 09:39:10 crc kubenswrapper[4946]: I1128 09:39:10.115180 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b2cch"]
Nov 28 09:39:11 crc kubenswrapper[4946]: I1128 09:39:11.986882 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-b2cch" podUID="51fa5742-b227-42ee-b5bf-6346c4440a7f" containerName="registry-server" containerID="cri-o://3894e2f663f7166dd03fef17cd0401145a52bbf4c5633746b2252d06272f217c" gracePeriod=2
Nov 28 09:39:12 crc kubenswrapper[4946]: I1128 09:39:12.545862 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b2cch"
Nov 28 09:39:12 crc kubenswrapper[4946]: I1128 09:39:12.659128 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51fa5742-b227-42ee-b5bf-6346c4440a7f-utilities\") pod \"51fa5742-b227-42ee-b5bf-6346c4440a7f\" (UID: \"51fa5742-b227-42ee-b5bf-6346c4440a7f\") "
Nov 28 09:39:12 crc kubenswrapper[4946]: I1128 09:39:12.659289 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51fa5742-b227-42ee-b5bf-6346c4440a7f-catalog-content\") pod \"51fa5742-b227-42ee-b5bf-6346c4440a7f\" (UID: \"51fa5742-b227-42ee-b5bf-6346c4440a7f\") "
Nov 28 09:39:12 crc kubenswrapper[4946]: I1128 09:39:12.659360 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2cb4\" (UniqueName: \"kubernetes.io/projected/51fa5742-b227-42ee-b5bf-6346c4440a7f-kube-api-access-k2cb4\") pod \"51fa5742-b227-42ee-b5bf-6346c4440a7f\" (UID: \"51fa5742-b227-42ee-b5bf-6346c4440a7f\") "
Nov 28 09:39:12 crc kubenswrapper[4946]: I1128 09:39:12.660255 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51fa5742-b227-42ee-b5bf-6346c4440a7f-utilities" (OuterVolumeSpecName: "utilities") pod "51fa5742-b227-42ee-b5bf-6346c4440a7f" (UID: "51fa5742-b227-42ee-b5bf-6346c4440a7f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 09:39:12 crc kubenswrapper[4946]: I1128 09:39:12.666087 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51fa5742-b227-42ee-b5bf-6346c4440a7f-kube-api-access-k2cb4" (OuterVolumeSpecName: "kube-api-access-k2cb4") pod "51fa5742-b227-42ee-b5bf-6346c4440a7f" (UID: "51fa5742-b227-42ee-b5bf-6346c4440a7f"). InnerVolumeSpecName "kube-api-access-k2cb4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 09:39:12 crc kubenswrapper[4946]: I1128 09:39:12.677164 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51fa5742-b227-42ee-b5bf-6346c4440a7f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "51fa5742-b227-42ee-b5bf-6346c4440a7f" (UID: "51fa5742-b227-42ee-b5bf-6346c4440a7f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 09:39:12 crc kubenswrapper[4946]: I1128 09:39:12.762092 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2cb4\" (UniqueName: \"kubernetes.io/projected/51fa5742-b227-42ee-b5bf-6346c4440a7f-kube-api-access-k2cb4\") on node \"crc\" DevicePath \"\""
Nov 28 09:39:12 crc kubenswrapper[4946]: I1128 09:39:12.762135 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51fa5742-b227-42ee-b5bf-6346c4440a7f-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 09:39:12 crc kubenswrapper[4946]: I1128 09:39:12.762150 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51fa5742-b227-42ee-b5bf-6346c4440a7f-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 09:39:13 crc kubenswrapper[4946]: I1128 09:39:13.001659 4946 generic.go:334] "Generic (PLEG): container finished" podID="51fa5742-b227-42ee-b5bf-6346c4440a7f" containerID="3894e2f663f7166dd03fef17cd0401145a52bbf4c5633746b2252d06272f217c" exitCode=0
Nov 28 09:39:13 crc kubenswrapper[4946]: I1128 09:39:13.001719 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b2cch" event={"ID":"51fa5742-b227-42ee-b5bf-6346c4440a7f","Type":"ContainerDied","Data":"3894e2f663f7166dd03fef17cd0401145a52bbf4c5633746b2252d06272f217c"}
Nov 28 09:39:13 crc kubenswrapper[4946]: I1128 09:39:13.002172 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b2cch" event={"ID":"51fa5742-b227-42ee-b5bf-6346c4440a7f","Type":"ContainerDied","Data":"31a69989a86a10238a029ceef10d7ce3b98c8ef68046a4e5241eaf36f04bb9c1"}
Nov 28 09:39:13 crc kubenswrapper[4946]: I1128 09:39:13.002197 4946 scope.go:117] "RemoveContainer" containerID="3894e2f663f7166dd03fef17cd0401145a52bbf4c5633746b2252d06272f217c"
Nov 28 09:39:13 crc kubenswrapper[4946]: I1128 09:39:13.001766 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b2cch"
Nov 28 09:39:13 crc kubenswrapper[4946]: I1128 09:39:13.035177 4946 scope.go:117] "RemoveContainer" containerID="de7cde0688e174bfcdfe77aeb13d35fe83eafe6000f67d2e92868a8624bdd496"
Nov 28 09:39:13 crc kubenswrapper[4946]: I1128 09:39:13.056990 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b2cch"]
Nov 28 09:39:13 crc kubenswrapper[4946]: I1128 09:39:13.086280 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-b2cch"]
Nov 28 09:39:13 crc kubenswrapper[4946]: I1128 09:39:13.090793 4946 scope.go:117] "RemoveContainer" containerID="90caea7259b26e0264f6a8f22c087ffe1c641d52bf5a0199e146450ec7197f36"
Nov 28 09:39:13 crc kubenswrapper[4946]: I1128 09:39:13.117397 4946 scope.go:117] "RemoveContainer" containerID="3894e2f663f7166dd03fef17cd0401145a52bbf4c5633746b2252d06272f217c"
Nov 28 09:39:13 crc kubenswrapper[4946]: E1128 09:39:13.118503 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3894e2f663f7166dd03fef17cd0401145a52bbf4c5633746b2252d06272f217c\": container with ID starting with 3894e2f663f7166dd03fef17cd0401145a52bbf4c5633746b2252d06272f217c not found: ID does not exist" containerID="3894e2f663f7166dd03fef17cd0401145a52bbf4c5633746b2252d06272f217c"
Nov 28 09:39:13 crc kubenswrapper[4946]: I1128 09:39:13.118612 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3894e2f663f7166dd03fef17cd0401145a52bbf4c5633746b2252d06272f217c"} err="failed to get container status \"3894e2f663f7166dd03fef17cd0401145a52bbf4c5633746b2252d06272f217c\": rpc error: code = NotFound desc = could not find container \"3894e2f663f7166dd03fef17cd0401145a52bbf4c5633746b2252d06272f217c\": container with ID starting with 3894e2f663f7166dd03fef17cd0401145a52bbf4c5633746b2252d06272f217c not found: ID does not exist"
Nov 28 09:39:13 crc kubenswrapper[4946]: I1128 09:39:13.118697 4946 scope.go:117] "RemoveContainer" containerID="de7cde0688e174bfcdfe77aeb13d35fe83eafe6000f67d2e92868a8624bdd496"
Nov 28 09:39:13 crc kubenswrapper[4946]: E1128 09:39:13.119321 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de7cde0688e174bfcdfe77aeb13d35fe83eafe6000f67d2e92868a8624bdd496\": container with ID starting with de7cde0688e174bfcdfe77aeb13d35fe83eafe6000f67d2e92868a8624bdd496 not found: ID does not exist" containerID="de7cde0688e174bfcdfe77aeb13d35fe83eafe6000f67d2e92868a8624bdd496"
Nov 28 09:39:13 crc kubenswrapper[4946]: I1128 09:39:13.119367 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de7cde0688e174bfcdfe77aeb13d35fe83eafe6000f67d2e92868a8624bdd496"} err="failed to get container status \"de7cde0688e174bfcdfe77aeb13d35fe83eafe6000f67d2e92868a8624bdd496\": rpc error: code = NotFound desc = could not find container \"de7cde0688e174bfcdfe77aeb13d35fe83eafe6000f67d2e92868a8624bdd496\": container with ID starting with de7cde0688e174bfcdfe77aeb13d35fe83eafe6000f67d2e92868a8624bdd496 not found: ID does not exist"
Nov 28 09:39:13 crc kubenswrapper[4946]: I1128 09:39:13.119393 4946 scope.go:117] "RemoveContainer" containerID="90caea7259b26e0264f6a8f22c087ffe1c641d52bf5a0199e146450ec7197f36"
Nov 28 09:39:13 crc kubenswrapper[4946]: E1128 09:39:13.119777 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90caea7259b26e0264f6a8f22c087ffe1c641d52bf5a0199e146450ec7197f36\": container with ID starting with 90caea7259b26e0264f6a8f22c087ffe1c641d52bf5a0199e146450ec7197f36 not found: ID does not exist" containerID="90caea7259b26e0264f6a8f22c087ffe1c641d52bf5a0199e146450ec7197f36"
Nov 28 09:39:13 crc kubenswrapper[4946]: I1128 09:39:13.119865 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90caea7259b26e0264f6a8f22c087ffe1c641d52bf5a0199e146450ec7197f36"} err="failed to get container status \"90caea7259b26e0264f6a8f22c087ffe1c641d52bf5a0199e146450ec7197f36\": rpc error: code = NotFound desc = could not find container \"90caea7259b26e0264f6a8f22c087ffe1c641d52bf5a0199e146450ec7197f36\": container with ID starting with 90caea7259b26e0264f6a8f22c087ffe1c641d52bf5a0199e146450ec7197f36 not found: ID does not exist"
Nov 28 09:39:14 crc kubenswrapper[4946]: I1128 09:39:14.030520 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51fa5742-b227-42ee-b5bf-6346c4440a7f" path="/var/lib/kubelet/pods/51fa5742-b227-42ee-b5bf-6346c4440a7f/volumes"
Nov 28 09:39:24 crc kubenswrapper[4946]: I1128 09:39:24.731026 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 09:39:24 crc kubenswrapper[4946]: I1128 09:39:24.731777 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 09:39:49 crc kubenswrapper[4946]: I1128 09:39:49.376916 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-24v84"]
Nov 28 09:39:49 crc kubenswrapper[4946]: E1128 09:39:49.378163 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51fa5742-b227-42ee-b5bf-6346c4440a7f" containerName="extract-content"
Nov 28 09:39:49 crc kubenswrapper[4946]: I1128 09:39:49.378294 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="51fa5742-b227-42ee-b5bf-6346c4440a7f" containerName="extract-content"
Nov 28 09:39:49 crc kubenswrapper[4946]: E1128 09:39:49.378315 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51fa5742-b227-42ee-b5bf-6346c4440a7f" containerName="extract-utilities"
Nov 28 09:39:49 crc kubenswrapper[4946]: I1128 09:39:49.378326 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="51fa5742-b227-42ee-b5bf-6346c4440a7f" containerName="extract-utilities"
Nov 28 09:39:49 crc kubenswrapper[4946]: E1128 09:39:49.378353 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51fa5742-b227-42ee-b5bf-6346c4440a7f" containerName="registry-server"
Nov 28 09:39:49 crc kubenswrapper[4946]: I1128 09:39:49.378364 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="51fa5742-b227-42ee-b5bf-6346c4440a7f" containerName="registry-server"
Nov 28 09:39:49 crc kubenswrapper[4946]: I1128 09:39:49.379038 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="51fa5742-b227-42ee-b5bf-6346c4440a7f" containerName="registry-server"
Nov 28 09:39:49 crc kubenswrapper[4946]: I1128 09:39:49.382676 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-24v84"
Nov 28 09:39:49 crc kubenswrapper[4946]: I1128 09:39:49.393837 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-24v84"]
Nov 28 09:39:49 crc kubenswrapper[4946]: I1128 09:39:49.544993 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sftj6\" (UniqueName: \"kubernetes.io/projected/f23f46be-d84e-4136-9573-1b543adac7f8-kube-api-access-sftj6\") pod \"redhat-operators-24v84\" (UID: \"f23f46be-d84e-4136-9573-1b543adac7f8\") " pod="openshift-marketplace/redhat-operators-24v84"
Nov 28 09:39:49 crc kubenswrapper[4946]: I1128 09:39:49.545213 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f23f46be-d84e-4136-9573-1b543adac7f8-catalog-content\") pod \"redhat-operators-24v84\" (UID: \"f23f46be-d84e-4136-9573-1b543adac7f8\") " pod="openshift-marketplace/redhat-operators-24v84"
Nov 28 09:39:49 crc kubenswrapper[4946]: I1128 09:39:49.545271 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f23f46be-d84e-4136-9573-1b543adac7f8-utilities\") pod \"redhat-operators-24v84\" (UID: \"f23f46be-d84e-4136-9573-1b543adac7f8\") " pod="openshift-marketplace/redhat-operators-24v84"
Nov 28 09:39:49 crc kubenswrapper[4946]: I1128 09:39:49.647537 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f23f46be-d84e-4136-9573-1b543adac7f8-catalog-content\") pod \"redhat-operators-24v84\" (UID: \"f23f46be-d84e-4136-9573-1b543adac7f8\") " pod="openshift-marketplace/redhat-operators-24v84"
Nov 28 09:39:49 crc kubenswrapper[4946]: I1128 09:39:49.647590 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f23f46be-d84e-4136-9573-1b543adac7f8-utilities\") pod \"redhat-operators-24v84\" (UID: \"f23f46be-d84e-4136-9573-1b543adac7f8\") " pod="openshift-marketplace/redhat-operators-24v84"
Nov 28 09:39:49 crc kubenswrapper[4946]: I1128 09:39:49.647708 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sftj6\" (UniqueName: \"kubernetes.io/projected/f23f46be-d84e-4136-9573-1b543adac7f8-kube-api-access-sftj6\") pod \"redhat-operators-24v84\" (UID: \"f23f46be-d84e-4136-9573-1b543adac7f8\") " pod="openshift-marketplace/redhat-operators-24v84"
Nov 28 09:39:49 crc kubenswrapper[4946]: I1128 09:39:49.648142 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f23f46be-d84e-4136-9573-1b543adac7f8-catalog-content\") pod \"redhat-operators-24v84\" (UID: \"f23f46be-d84e-4136-9573-1b543adac7f8\") " pod="openshift-marketplace/redhat-operators-24v84"
Nov 28 09:39:49 crc kubenswrapper[4946]: I1128 09:39:49.648178 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f23f46be-d84e-4136-9573-1b543adac7f8-utilities\") pod \"redhat-operators-24v84\" (UID: \"f23f46be-d84e-4136-9573-1b543adac7f8\") " pod="openshift-marketplace/redhat-operators-24v84"
Nov 28 09:39:49 crc kubenswrapper[4946]: I1128 09:39:49.666476 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sftj6\" (UniqueName: \"kubernetes.io/projected/f23f46be-d84e-4136-9573-1b543adac7f8-kube-api-access-sftj6\") pod \"redhat-operators-24v84\" (UID: \"f23f46be-d84e-4136-9573-1b543adac7f8\") " pod="openshift-marketplace/redhat-operators-24v84"
Nov 28 09:39:49 crc kubenswrapper[4946]: I1128 09:39:49.706008 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-24v84"
Nov 28 09:39:50 crc kubenswrapper[4946]: I1128 09:39:50.202190 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-24v84"]
Nov 28 09:39:50 crc kubenswrapper[4946]: W1128 09:39:50.203677 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf23f46be_d84e_4136_9573_1b543adac7f8.slice/crio-7e35384adca8d164e9ffc574b1dfe1f76381906b68ad45b41d8d197503890b5a WatchSource:0}: Error finding container 7e35384adca8d164e9ffc574b1dfe1f76381906b68ad45b41d8d197503890b5a: Status 404 returned error can't find the container with id 7e35384adca8d164e9ffc574b1dfe1f76381906b68ad45b41d8d197503890b5a
Nov 28 09:39:50 crc kubenswrapper[4946]: I1128 09:39:50.482683 4946 generic.go:334] "Generic (PLEG): container finished" podID="f23f46be-d84e-4136-9573-1b543adac7f8" containerID="a0300eaf1df49fa6b4691747ad3be91554aad36fb3987bc4e097569cb74e917b" exitCode=0
Nov 28 09:39:50 crc kubenswrapper[4946]: I1128 09:39:50.482728 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-24v84" event={"ID":"f23f46be-d84e-4136-9573-1b543adac7f8","Type":"ContainerDied","Data":"a0300eaf1df49fa6b4691747ad3be91554aad36fb3987bc4e097569cb74e917b"}
Nov 28 09:39:50 crc kubenswrapper[4946]: I1128 09:39:50.482751 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-24v84" event={"ID":"f23f46be-d84e-4136-9573-1b543adac7f8","Type":"ContainerStarted","Data":"7e35384adca8d164e9ffc574b1dfe1f76381906b68ad45b41d8d197503890b5a"}
Nov 28 09:39:51 crc kubenswrapper[4946]: I1128 09:39:51.498793 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-24v84" event={"ID":"f23f46be-d84e-4136-9573-1b543adac7f8","Type":"ContainerStarted","Data":"b8cc73e9c0b7ffa56e303aee798e474e0610c791e9c6580fa6b0c7299ccd80ec"}
Nov 28 09:39:52 crc kubenswrapper[4946]: I1128 09:39:52.515211 4946 generic.go:334] "Generic (PLEG): container finished" podID="f23f46be-d84e-4136-9573-1b543adac7f8" containerID="b8cc73e9c0b7ffa56e303aee798e474e0610c791e9c6580fa6b0c7299ccd80ec" exitCode=0
Nov 28 09:39:52 crc kubenswrapper[4946]: I1128 09:39:52.515261 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-24v84" event={"ID":"f23f46be-d84e-4136-9573-1b543adac7f8","Type":"ContainerDied","Data":"b8cc73e9c0b7ffa56e303aee798e474e0610c791e9c6580fa6b0c7299ccd80ec"}
Nov 28 09:39:53 crc kubenswrapper[4946]: I1128 09:39:53.526311 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-24v84" event={"ID":"f23f46be-d84e-4136-9573-1b543adac7f8","Type":"ContainerStarted","Data":"35093dbdd638b8485d39746d46270040d9ef06dbeee79fe113238a0d889ff4ec"}
Nov 28 09:39:53 crc kubenswrapper[4946]: I1128 09:39:53.543729 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-24v84" podStartSLOduration=2.072394405 podStartE2EDuration="4.543696822s" podCreationTimestamp="2025-11-28 09:39:49 +0000 UTC" firstStartedPulling="2025-11-28 09:39:50.484652907 +0000 UTC m=+10044.862718018" lastFinishedPulling="2025-11-28 09:39:52.955955324 +0000 UTC m=+10047.334020435" observedRunningTime="2025-11-28 09:39:53.541073397 +0000 UTC m=+10047.919138548" watchObservedRunningTime="2025-11-28 09:39:53.543696822 +0000 UTC m=+10047.921761933"
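Both "Observed pod startup duration" entries in this section satisfy podStartSLOduration = podStartE2EDuration - (lastFinishedPulling - firstStartedPulling), i.e. the SLO figure excludes image-pull time: for redhat-operators-24v84 that is 4.543696822s - 2.471302417s = 2.072394405s, exactly the logged value. A short Go check of that arithmetic (the relationship is inferred from the logged values, not quoted from kubelet source):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	parse := func(s string) time.Time {
    		// Layout matching Go's default time.Time string form used in the log.
    		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
    		if err != nil {
    			panic(err)
    		}
    		return t
    	}
    	// Values copied from the redhat-operators-24v84 startup-latency entry.
    	firstPull := parse("2025-11-28 09:39:50.484652907 +0000 UTC")
    	lastPull := parse("2025-11-28 09:39:52.955955324 +0000 UTC")
    	e2e := 4543696822 * time.Nanosecond // podStartE2EDuration="4.543696822s"
    	slo := e2e - lastPull.Sub(firstPull)
    	fmt.Println(slo) // 2.072394405s, matching podStartSLOduration
    }

The same identity holds for the earlier pods: the marketplace pod's 7.931668695s E2E minus its 3.888040329s pull window gives its logged 4.043628326s SLO duration (up to the tracker's rounding).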
Nov 28 09:39:54 crc kubenswrapper[4946]: I1128 09:39:54.730356 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 09:39:54 crc kubenswrapper[4946]: I1128 09:39:54.730411 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 09:39:56 crc kubenswrapper[4946]: I1128 09:39:56.562800 4946 generic.go:334] "Generic (PLEG): container finished" podID="c4819513-a41f-4a93-b8b5-9587c67a0832" containerID="640b10d50aab1c5c43a20fca86e29d197e7699ab436c75c70720e5e27c067369" exitCode=0
Nov 28 09:39:56 crc kubenswrapper[4946]: I1128 09:39:56.562872 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-8pscp" event={"ID":"c4819513-a41f-4a93-b8b5-9587c67a0832","Type":"ContainerDied","Data":"640b10d50aab1c5c43a20fca86e29d197e7699ab436c75c70720e5e27c067369"}
Nov 28 09:39:58 crc kubenswrapper[4946]: I1128 09:39:58.051601 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-8pscp"
Nov 28 09:39:58 crc kubenswrapper[4946]: I1128 09:39:58.120615 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4819513-a41f-4a93-b8b5-9587c67a0832-inventory\") pod \"c4819513-a41f-4a93-b8b5-9587c67a0832\" (UID: \"c4819513-a41f-4a93-b8b5-9587c67a0832\") "
Nov 28 09:39:58 crc kubenswrapper[4946]: I1128 09:39:58.120977 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nm2v4\" (UniqueName: \"kubernetes.io/projected/c4819513-a41f-4a93-b8b5-9587c67a0832-kube-api-access-nm2v4\") pod \"c4819513-a41f-4a93-b8b5-9587c67a0832\" (UID: \"c4819513-a41f-4a93-b8b5-9587c67a0832\") "
Nov 28 09:39:58 crc kubenswrapper[4946]: I1128 09:39:58.121115 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c4819513-a41f-4a93-b8b5-9587c67a0832-ssh-key\") pod \"c4819513-a41f-4a93-b8b5-9587c67a0832\" (UID: \"c4819513-a41f-4a93-b8b5-9587c67a0832\") "
Nov 28 09:39:58 crc kubenswrapper[4946]: I1128 09:39:58.121326 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4819513-a41f-4a93-b8b5-9587c67a0832-telemetry-combined-ca-bundle\") pod \"c4819513-a41f-4a93-b8b5-9587c67a0832\" (UID: \"c4819513-a41f-4a93-b8b5-9587c67a0832\") "
Nov 28 09:39:58 crc kubenswrapper[4946]: I1128 09:39:58.121450 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/c4819513-a41f-4a93-b8b5-9587c67a0832-ceilometer-compute-config-data-2\") pod \"c4819513-a41f-4a93-b8b5-9587c67a0832\" (UID: \"c4819513-a41f-4a93-b8b5-9587c67a0832\") "
Nov 28 09:39:58 crc kubenswrapper[4946]: I1128 09:39:58.121753 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/c4819513-a41f-4a93-b8b5-9587c67a0832-ceilometer-compute-config-data-1\") pod \"c4819513-a41f-4a93-b8b5-9587c67a0832\" (UID: \"c4819513-a41f-4a93-b8b5-9587c67a0832\") "
Nov 28 09:39:58 crc kubenswrapper[4946]: I1128 09:39:58.121870 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/c4819513-a41f-4a93-b8b5-9587c67a0832-ceilometer-compute-config-data-0\") pod \"c4819513-a41f-4a93-b8b5-9587c67a0832\" (UID: \"c4819513-a41f-4a93-b8b5-9587c67a0832\") "
Nov 28 09:39:58 crc kubenswrapper[4946]: I1128 09:39:58.121988 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c4819513-a41f-4a93-b8b5-9587c67a0832-ceph\") pod \"c4819513-a41f-4a93-b8b5-9587c67a0832\" (UID: \"c4819513-a41f-4a93-b8b5-9587c67a0832\") "
Nov 28 09:39:58 crc kubenswrapper[4946]: I1128 09:39:58.128322 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4819513-a41f-4a93-b8b5-9587c67a0832-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "c4819513-a41f-4a93-b8b5-9587c67a0832" (UID: "c4819513-a41f-4a93-b8b5-9587c67a0832"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 09:39:58 crc kubenswrapper[4946]: I1128 09:39:58.129970 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4819513-a41f-4a93-b8b5-9587c67a0832-kube-api-access-nm2v4" (OuterVolumeSpecName: "kube-api-access-nm2v4") pod "c4819513-a41f-4a93-b8b5-9587c67a0832" (UID: "c4819513-a41f-4a93-b8b5-9587c67a0832"). InnerVolumeSpecName "kube-api-access-nm2v4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 09:39:58 crc kubenswrapper[4946]: I1128 09:39:58.140956 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4819513-a41f-4a93-b8b5-9587c67a0832-ceph" (OuterVolumeSpecName: "ceph") pod "c4819513-a41f-4a93-b8b5-9587c67a0832" (UID: "c4819513-a41f-4a93-b8b5-9587c67a0832"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 09:39:58 crc kubenswrapper[4946]: I1128 09:39:58.158314 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4819513-a41f-4a93-b8b5-9587c67a0832-inventory" (OuterVolumeSpecName: "inventory") pod "c4819513-a41f-4a93-b8b5-9587c67a0832" (UID: "c4819513-a41f-4a93-b8b5-9587c67a0832"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 09:39:58 crc kubenswrapper[4946]: I1128 09:39:58.163149 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4819513-a41f-4a93-b8b5-9587c67a0832-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "c4819513-a41f-4a93-b8b5-9587c67a0832" (UID: "c4819513-a41f-4a93-b8b5-9587c67a0832"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 09:39:58 crc kubenswrapper[4946]: I1128 09:39:58.172361 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4819513-a41f-4a93-b8b5-9587c67a0832-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "c4819513-a41f-4a93-b8b5-9587c67a0832" (UID: "c4819513-a41f-4a93-b8b5-9587c67a0832"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 09:39:58 crc kubenswrapper[4946]: I1128 09:39:58.178563 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4819513-a41f-4a93-b8b5-9587c67a0832-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "c4819513-a41f-4a93-b8b5-9587c67a0832" (UID: "c4819513-a41f-4a93-b8b5-9587c67a0832"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 09:39:58 crc kubenswrapper[4946]: I1128 09:39:58.198827 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4819513-a41f-4a93-b8b5-9587c67a0832-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c4819513-a41f-4a93-b8b5-9587c67a0832" (UID: "c4819513-a41f-4a93-b8b5-9587c67a0832"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 09:39:58 crc kubenswrapper[4946]: I1128 09:39:58.224581 4946 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/c4819513-a41f-4a93-b8b5-9587c67a0832-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\""
Nov 28 09:39:58 crc kubenswrapper[4946]: I1128 09:39:58.224616 4946 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/c4819513-a41f-4a93-b8b5-9587c67a0832-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\""
Nov 28 09:39:58 crc kubenswrapper[4946]: I1128 09:39:58.224629 4946 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c4819513-a41f-4a93-b8b5-9587c67a0832-ceph\") on node \"crc\" DevicePath \"\""
Nov 28 09:39:58 crc kubenswrapper[4946]: I1128 09:39:58.224642 4946 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4819513-a41f-4a93-b8b5-9587c67a0832-inventory\") on node \"crc\" DevicePath \"\""
Nov 28 09:39:58 crc kubenswrapper[4946]: I1128 09:39:58.224655 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nm2v4\" (UniqueName: \"kubernetes.io/projected/c4819513-a41f-4a93-b8b5-9587c67a0832-kube-api-access-nm2v4\") on node \"crc\" DevicePath \"\""
Nov 28 09:39:58 crc kubenswrapper[4946]: I1128 09:39:58.224666 4946 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c4819513-a41f-4a93-b8b5-9587c67a0832-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 28 09:39:58 crc kubenswrapper[4946]: I1128 09:39:58.224675 4946 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4819513-a41f-4a93-b8b5-9587c67a0832-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 09:39:58 crc kubenswrapper[4946]: I1128 09:39:58.224685 4946 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/c4819513-a41f-4a93-b8b5-9587c67a0832-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\""
Nov 28 09:39:58 crc kubenswrapper[4946]: I1128 09:39:58.583254 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-8pscp" event={"ID":"c4819513-a41f-4a93-b8b5-9587c67a0832","Type":"ContainerDied","Data":"75663784e683c650ad665759412d08212262741c0ed37d793dd5b76b5635609a"}
Nov 28 09:39:58 crc kubenswrapper[4946]: I1128 09:39:58.583297 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75663784e683c650ad665759412d08212262741c0ed37d793dd5b76b5635609a"
Nov 28 09:39:58 crc kubenswrapper[4946]: I1128 09:39:58.583346 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-8pscp"
Nov 28 09:39:58 crc kubenswrapper[4946]: I1128 09:39:58.718455 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-wgh6r"]
Nov 28 09:39:58 crc kubenswrapper[4946]: E1128 09:39:58.718977 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4819513-a41f-4a93-b8b5-9587c67a0832" containerName="telemetry-openstack-openstack-cell1"
Nov 28 09:39:58 crc kubenswrapper[4946]: I1128 09:39:58.718998 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4819513-a41f-4a93-b8b5-9587c67a0832" containerName="telemetry-openstack-openstack-cell1"
Nov 28 09:39:58 crc kubenswrapper[4946]: I1128 09:39:58.719222 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4819513-a41f-4a93-b8b5-9587c67a0832" containerName="telemetry-openstack-openstack-cell1"
Nov 28 09:39:58 crc kubenswrapper[4946]: I1128 09:39:58.720009 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-wgh6r"
Nov 28 09:39:58 crc kubenswrapper[4946]: I1128 09:39:58.722158 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 28 09:39:58 crc kubenswrapper[4946]: I1128 09:39:58.722458 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Nov 28 09:39:58 crc kubenswrapper[4946]: I1128 09:39:58.722827 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-sriov-agent-neutron-config"
Nov 28 09:39:58 crc kubenswrapper[4946]: I1128 09:39:58.722895 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-psw2j"
Nov 28 09:39:58 crc kubenswrapper[4946]: I1128 09:39:58.723998 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret"
Nov 28 09:39:58 crc kubenswrapper[4946]: I1128 09:39:58.736120 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-wgh6r"]
Nov 28 09:39:58 crc kubenswrapper[4946]: I1128 09:39:58.835772 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6swcq\" (UniqueName: \"kubernetes.io/projected/9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5-kube-api-access-6swcq\") pod \"neutron-sriov-openstack-openstack-cell1-wgh6r\" (UID: \"9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-wgh6r"
Nov 28 09:39:58 crc kubenswrapper[4946]: I1128 09:39:58.835951 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5-ssh-key\") pod \"neutron-sriov-openstack-openstack-cell1-wgh6r\" (UID: \"9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-wgh6r"
Nov 28 09:39:58 crc kubenswrapper[4946]: I1128 09:39:58.836031 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-wgh6r\" (UID: \"9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-wgh6r"
Nov 28 09:39:58 crc
Nov 28 09:39:58 crc kubenswrapper[4946]: I1128 09:39:58.836074 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-wgh6r\" (UID: \"9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-wgh6r"
Nov 28 09:39:58 crc kubenswrapper[4946]: I1128 09:39:58.836120 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-wgh6r\" (UID: \"9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-wgh6r"
Nov 28 09:39:58 crc kubenswrapper[4946]: I1128 09:39:58.836145 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5-ceph\") pod \"neutron-sriov-openstack-openstack-cell1-wgh6r\" (UID: \"9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-wgh6r"
Nov 28 09:39:58 crc kubenswrapper[4946]: I1128 09:39:58.938197 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6swcq\" (UniqueName: \"kubernetes.io/projected/9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5-kube-api-access-6swcq\") pod \"neutron-sriov-openstack-openstack-cell1-wgh6r\" (UID: \"9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-wgh6r"
Nov 28 09:39:58 crc kubenswrapper[4946]: I1128 09:39:58.938269 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5-ssh-key\") pod \"neutron-sriov-openstack-openstack-cell1-wgh6r\" (UID: \"9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-wgh6r"
Nov 28 09:39:58 crc kubenswrapper[4946]: I1128 09:39:58.938295 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-wgh6r\" (UID: \"9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-wgh6r"
Nov 28 09:39:58 crc kubenswrapper[4946]: I1128 09:39:58.938319 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-wgh6r\" (UID: \"9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-wgh6r"
Nov 28 09:39:58 crc kubenswrapper[4946]: I1128 09:39:58.938339 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-wgh6r\" (UID: \"9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-wgh6r"
Nov 28 09:39:58 crc kubenswrapper[4946]: I1128 09:39:58.938356 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5-ceph\") pod \"neutron-sriov-openstack-openstack-cell1-wgh6r\" (UID: \"9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-wgh6r"
Nov 28 09:39:58 crc kubenswrapper[4946]: I1128 09:39:58.950146 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5-ssh-key\") pod \"neutron-sriov-openstack-openstack-cell1-wgh6r\" (UID: \"9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-wgh6r"
Nov 28 09:39:58 crc kubenswrapper[4946]: I1128 09:39:58.950177 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-wgh6r\" (UID: \"9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-wgh6r"
Nov 28 09:39:58 crc kubenswrapper[4946]: I1128 09:39:58.950875 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-wgh6r\" (UID: \"9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-wgh6r"
Nov 28 09:39:58 crc kubenswrapper[4946]: I1128 09:39:58.952919 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5-ceph\") pod \"neutron-sriov-openstack-openstack-cell1-wgh6r\" (UID: \"9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-wgh6r"
Nov 28 09:39:58 crc kubenswrapper[4946]: I1128 09:39:58.954077 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-wgh6r\" (UID: \"9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-wgh6r"
Nov 28 09:39:58 crc kubenswrapper[4946]: I1128 09:39:58.969266 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6swcq\" (UniqueName: \"kubernetes.io/projected/9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5-kube-api-access-6swcq\") pod \"neutron-sriov-openstack-openstack-cell1-wgh6r\" (UID: \"9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-wgh6r"
Nov 28 09:39:59 crc kubenswrapper[4946]: I1128 09:39:59.037838 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-wgh6r"
Nov 28 09:39:59 crc kubenswrapper[4946]: I1128 09:39:59.567505 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-wgh6r"]
Nov 28 09:39:59 crc kubenswrapper[4946]: I1128 09:39:59.597433 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-wgh6r" event={"ID":"9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5","Type":"ContainerStarted","Data":"f19076c85c091d1cb33d36a56a7a9a3fda450c655c680e7b949db9c5d7aa8528"}
Nov 28 09:39:59 crc kubenswrapper[4946]: I1128 09:39:59.706824 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-24v84"
Nov 28 09:39:59 crc kubenswrapper[4946]: I1128 09:39:59.706918 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-24v84"
Nov 28 09:40:00 crc kubenswrapper[4946]: I1128 09:40:00.513858 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-24v84"
Nov 28 09:40:00 crc kubenswrapper[4946]: I1128 09:40:00.659800 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-24v84"
Nov 28 09:40:00 crc kubenswrapper[4946]: I1128 09:40:00.756955 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-24v84"]
Nov 28 09:40:01 crc kubenswrapper[4946]: I1128 09:40:01.625421 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-wgh6r" event={"ID":"9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5","Type":"ContainerStarted","Data":"ef96d55c2887ce83ffe0469f25faff0dc2e77371f2ff85d3fad25ee9da059606"}
Nov 28 09:40:01 crc kubenswrapper[4946]: I1128 09:40:01.674345 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-sriov-openstack-openstack-cell1-wgh6r" podStartSLOduration=3.212869236 podStartE2EDuration="3.67432397s" podCreationTimestamp="2025-11-28 09:39:58 +0000 UTC" firstStartedPulling="2025-11-28 09:39:59.565635411 +0000 UTC m=+10053.943700522" lastFinishedPulling="2025-11-28 09:40:00.027090105 +0000 UTC m=+10054.405155256" observedRunningTime="2025-11-28 09:40:01.662442687 +0000 UTC m=+10056.040507808" watchObservedRunningTime="2025-11-28 09:40:01.67432397 +0000 UTC m=+10056.052389091"
Nov 28 09:40:02 crc kubenswrapper[4946]: I1128 09:40:02.634058 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-24v84" podUID="f23f46be-d84e-4136-9573-1b543adac7f8" containerName="registry-server" containerID="cri-o://35093dbdd638b8485d39746d46270040d9ef06dbeee79fe113238a0d889ff4ec" gracePeriod=2
Nov 28 09:40:03 crc kubenswrapper[4946]: I1128 09:40:03.644845 4946 generic.go:334] "Generic (PLEG): container finished" podID="f23f46be-d84e-4136-9573-1b543adac7f8" containerID="35093dbdd638b8485d39746d46270040d9ef06dbeee79fe113238a0d889ff4ec" exitCode=0
Nov 28 09:40:03 crc kubenswrapper[4946]: I1128 09:40:03.645339 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-24v84" event={"ID":"f23f46be-d84e-4136-9573-1b543adac7f8","Type":"ContainerDied","Data":"35093dbdd638b8485d39746d46270040d9ef06dbeee79fe113238a0d889ff4ec"}
Nov 28 09:40:03 crc kubenswrapper[4946]: I1128 09:40:03.748160 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-24v84"
Nov 28 09:40:03 crc kubenswrapper[4946]: I1128 09:40:03.866628 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sftj6\" (UniqueName: \"kubernetes.io/projected/f23f46be-d84e-4136-9573-1b543adac7f8-kube-api-access-sftj6\") pod \"f23f46be-d84e-4136-9573-1b543adac7f8\" (UID: \"f23f46be-d84e-4136-9573-1b543adac7f8\") "
Nov 28 09:40:03 crc kubenswrapper[4946]: I1128 09:40:03.866797 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f23f46be-d84e-4136-9573-1b543adac7f8-catalog-content\") pod \"f23f46be-d84e-4136-9573-1b543adac7f8\" (UID: \"f23f46be-d84e-4136-9573-1b543adac7f8\") "
Nov 28 09:40:03 crc kubenswrapper[4946]: I1128 09:40:03.866911 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f23f46be-d84e-4136-9573-1b543adac7f8-utilities\") pod \"f23f46be-d84e-4136-9573-1b543adac7f8\" (UID: \"f23f46be-d84e-4136-9573-1b543adac7f8\") "
Nov 28 09:40:03 crc kubenswrapper[4946]: I1128 09:40:03.867915 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f23f46be-d84e-4136-9573-1b543adac7f8-utilities" (OuterVolumeSpecName: "utilities") pod "f23f46be-d84e-4136-9573-1b543adac7f8" (UID: "f23f46be-d84e-4136-9573-1b543adac7f8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 09:40:03 crc kubenswrapper[4946]: I1128 09:40:03.873032 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f23f46be-d84e-4136-9573-1b543adac7f8-kube-api-access-sftj6" (OuterVolumeSpecName: "kube-api-access-sftj6") pod "f23f46be-d84e-4136-9573-1b543adac7f8" (UID: "f23f46be-d84e-4136-9573-1b543adac7f8"). InnerVolumeSpecName "kube-api-access-sftj6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 09:40:03 crc kubenswrapper[4946]: I1128 09:40:03.968987 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sftj6\" (UniqueName: \"kubernetes.io/projected/f23f46be-d84e-4136-9573-1b543adac7f8-kube-api-access-sftj6\") on node \"crc\" DevicePath \"\""
Nov 28 09:40:03 crc kubenswrapper[4946]: I1128 09:40:03.969025 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f23f46be-d84e-4136-9573-1b543adac7f8-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 09:40:04 crc kubenswrapper[4946]: I1128 09:40:04.003336 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f23f46be-d84e-4136-9573-1b543adac7f8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f23f46be-d84e-4136-9573-1b543adac7f8" (UID: "f23f46be-d84e-4136-9573-1b543adac7f8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 09:40:04 crc kubenswrapper[4946]: I1128 09:40:04.072242 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f23f46be-d84e-4136-9573-1b543adac7f8-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 09:40:04 crc kubenswrapper[4946]: I1128 09:40:04.658671 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-24v84" event={"ID":"f23f46be-d84e-4136-9573-1b543adac7f8","Type":"ContainerDied","Data":"7e35384adca8d164e9ffc574b1dfe1f76381906b68ad45b41d8d197503890b5a"}
Nov 28 09:40:04 crc kubenswrapper[4946]: I1128 09:40:04.658730 4946 scope.go:117] "RemoveContainer" containerID="35093dbdd638b8485d39746d46270040d9ef06dbeee79fe113238a0d889ff4ec"
Nov 28 09:40:04 crc kubenswrapper[4946]: I1128 09:40:04.658759 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-24v84"
Nov 28 09:40:04 crc kubenswrapper[4946]: I1128 09:40:04.684326 4946 scope.go:117] "RemoveContainer" containerID="b8cc73e9c0b7ffa56e303aee798e474e0610c791e9c6580fa6b0c7299ccd80ec"
Nov 28 09:40:04 crc kubenswrapper[4946]: I1128 09:40:04.709144 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-24v84"]
Nov 28 09:40:04 crc kubenswrapper[4946]: I1128 09:40:04.722771 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-24v84"]
Nov 28 09:40:04 crc kubenswrapper[4946]: I1128 09:40:04.730862 4946 scope.go:117] "RemoveContainer" containerID="a0300eaf1df49fa6b4691747ad3be91554aad36fb3987bc4e097569cb74e917b"
Nov 28 09:40:06 crc kubenswrapper[4946]: I1128 09:40:06.003298 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f23f46be-d84e-4136-9573-1b543adac7f8" path="/var/lib/kubelet/pods/f23f46be-d84e-4136-9573-1b543adac7f8/volumes"
Nov 28 09:40:24 crc kubenswrapper[4946]: I1128 09:40:24.731360 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 09:40:24 crc kubenswrapper[4946]: I1128 09:40:24.732091 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 09:40:24 crc kubenswrapper[4946]: I1128 09:40:24.732178 4946 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr"
Nov 28 09:40:24 crc kubenswrapper[4946]: I1128 09:40:24.733296 4946 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0ec5e3eb54e35f954dba98a8c8522fadf3fba9bfecbc9e8dcd7fcd5944c599d0"} pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 28 09:40:24 crc kubenswrapper[4946]: I1128 09:40:24.733392 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" containerID="cri-o://0ec5e3eb54e35f954dba98a8c8522fadf3fba9bfecbc9e8dcd7fcd5944c599d0" gracePeriod=600
Nov 28 09:40:24 crc kubenswrapper[4946]: I1128 09:40:24.922255 4946 generic.go:334] "Generic (PLEG): container finished" podID="7450befc-262f-45d1-a5f4-f445e540185b" containerID="0ec5e3eb54e35f954dba98a8c8522fadf3fba9bfecbc9e8dcd7fcd5944c599d0" exitCode=0
Nov 28 09:40:24 crc kubenswrapper[4946]: I1128 09:40:24.922326 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerDied","Data":"0ec5e3eb54e35f954dba98a8c8522fadf3fba9bfecbc9e8dcd7fcd5944c599d0"}
Nov 28 09:40:24 crc kubenswrapper[4946]: I1128 09:40:24.922641 4946 scope.go:117] "RemoveContainer" containerID="1a226ca4ef8fd70e0b3d4414936e85e3e1ca8212b259a322cd671dd24c7ac54d"
Nov 28 09:40:25 crc kubenswrapper[4946]: I1128 09:40:25.942017 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerStarted","Data":"8f83fc9ef9651ce9a741597d9dbec67f4360270c35ac4dbbeeea2e39c064bf29"}
Nov 28 09:40:46 crc kubenswrapper[4946]: I1128 09:40:46.163963 4946 generic.go:334] "Generic (PLEG): container finished" podID="9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5" containerID="ef96d55c2887ce83ffe0469f25faff0dc2e77371f2ff85d3fad25ee9da059606" exitCode=0
Nov 28 09:40:46 crc kubenswrapper[4946]: I1128 09:40:46.164066 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-wgh6r" event={"ID":"9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5","Type":"ContainerDied","Data":"ef96d55c2887ce83ffe0469f25faff0dc2e77371f2ff85d3fad25ee9da059606"}
Nov 28 09:40:47 crc kubenswrapper[4946]: I1128 09:40:47.631661 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-wgh6r"
Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-wgh6r" Nov 28 09:40:47 crc kubenswrapper[4946]: I1128 09:40:47.801158 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5-inventory\") pod \"9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5\" (UID: \"9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5\") " Nov 28 09:40:47 crc kubenswrapper[4946]: I1128 09:40:47.801290 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6swcq\" (UniqueName: \"kubernetes.io/projected/9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5-kube-api-access-6swcq\") pod \"9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5\" (UID: \"9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5\") " Nov 28 09:40:47 crc kubenswrapper[4946]: I1128 09:40:47.801393 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5-neutron-sriov-combined-ca-bundle\") pod \"9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5\" (UID: \"9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5\") " Nov 28 09:40:47 crc kubenswrapper[4946]: I1128 09:40:47.801459 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5-neutron-sriov-agent-neutron-config-0\") pod \"9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5\" (UID: \"9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5\") " Nov 28 09:40:47 crc kubenswrapper[4946]: I1128 09:40:47.801532 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5-ssh-key\") pod \"9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5\" (UID: \"9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5\") " Nov 28 09:40:47 crc kubenswrapper[4946]: I1128 09:40:47.801596 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5-ceph\") pod \"9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5\" (UID: \"9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5\") " Nov 28 09:40:47 crc kubenswrapper[4946]: I1128 09:40:47.813926 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5-ceph" (OuterVolumeSpecName: "ceph") pod "9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5" (UID: "9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:40:47 crc kubenswrapper[4946]: I1128 09:40:47.816112 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5-neutron-sriov-combined-ca-bundle" (OuterVolumeSpecName: "neutron-sriov-combined-ca-bundle") pod "9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5" (UID: "9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5"). InnerVolumeSpecName "neutron-sriov-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:40:47 crc kubenswrapper[4946]: I1128 09:40:47.819941 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5-kube-api-access-6swcq" (OuterVolumeSpecName: "kube-api-access-6swcq") pod "9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5" (UID: "9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5"). InnerVolumeSpecName "kube-api-access-6swcq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:40:47 crc kubenswrapper[4946]: I1128 09:40:47.839635 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5-inventory" (OuterVolumeSpecName: "inventory") pod "9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5" (UID: "9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:40:47 crc kubenswrapper[4946]: I1128 09:40:47.854254 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5" (UID: "9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:40:47 crc kubenswrapper[4946]: I1128 09:40:47.857887 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5-neutron-sriov-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-sriov-agent-neutron-config-0") pod "9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5" (UID: "9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5"). InnerVolumeSpecName "neutron-sriov-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:40:47 crc kubenswrapper[4946]: I1128 09:40:47.904194 4946 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5-neutron-sriov-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 09:40:47 crc kubenswrapper[4946]: I1128 09:40:47.904800 4946 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5-neutron-sriov-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 28 09:40:47 crc kubenswrapper[4946]: I1128 09:40:47.904904 4946 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 09:40:47 crc kubenswrapper[4946]: I1128 09:40:47.904987 4946 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5-ceph\") on node \"crc\" DevicePath \"\"" Nov 28 09:40:47 crc kubenswrapper[4946]: I1128 09:40:47.905068 4946 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 09:40:47 crc kubenswrapper[4946]: I1128 09:40:47.905216 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6swcq\" (UniqueName: \"kubernetes.io/projected/9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5-kube-api-access-6swcq\") on node \"crc\" DevicePath \"\"" Nov 28 09:40:48 crc kubenswrapper[4946]: I1128 09:40:48.189795 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-wgh6r" event={"ID":"9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5","Type":"ContainerDied","Data":"f19076c85c091d1cb33d36a56a7a9a3fda450c655c680e7b949db9c5d7aa8528"} Nov 28 09:40:48 crc kubenswrapper[4946]: I1128 09:40:48.189873 4946 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="f19076c85c091d1cb33d36a56a7a9a3fda450c655c680e7b949db9c5d7aa8528" Nov 28 09:40:48 crc kubenswrapper[4946]: I1128 09:40:48.189913 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-wgh6r" Nov 28 09:40:48 crc kubenswrapper[4946]: I1128 09:40:48.377631 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-8m7gr"] Nov 28 09:40:48 crc kubenswrapper[4946]: E1128 09:40:48.378121 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5" containerName="neutron-sriov-openstack-openstack-cell1" Nov 28 09:40:48 crc kubenswrapper[4946]: I1128 09:40:48.378142 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5" containerName="neutron-sriov-openstack-openstack-cell1" Nov 28 09:40:48 crc kubenswrapper[4946]: E1128 09:40:48.378158 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f23f46be-d84e-4136-9573-1b543adac7f8" containerName="registry-server" Nov 28 09:40:48 crc kubenswrapper[4946]: I1128 09:40:48.378166 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="f23f46be-d84e-4136-9573-1b543adac7f8" containerName="registry-server" Nov 28 09:40:48 crc kubenswrapper[4946]: E1128 09:40:48.378223 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f23f46be-d84e-4136-9573-1b543adac7f8" containerName="extract-content" Nov 28 09:40:48 crc kubenswrapper[4946]: I1128 09:40:48.378233 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="f23f46be-d84e-4136-9573-1b543adac7f8" containerName="extract-content" Nov 28 09:40:48 crc kubenswrapper[4946]: E1128 09:40:48.378251 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f23f46be-d84e-4136-9573-1b543adac7f8" containerName="extract-utilities" Nov 28 09:40:48 crc kubenswrapper[4946]: I1128 09:40:48.378260 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="f23f46be-d84e-4136-9573-1b543adac7f8" containerName="extract-utilities" Nov 28 09:40:48 crc kubenswrapper[4946]: I1128 09:40:48.378516 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5" containerName="neutron-sriov-openstack-openstack-cell1" Nov 28 09:40:48 crc kubenswrapper[4946]: I1128 09:40:48.378542 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="f23f46be-d84e-4136-9573-1b543adac7f8" containerName="registry-server" Nov 28 09:40:48 crc kubenswrapper[4946]: I1128 09:40:48.379444 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-8m7gr" Nov 28 09:40:48 crc kubenswrapper[4946]: I1128 09:40:48.382594 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 28 09:40:48 crc kubenswrapper[4946]: I1128 09:40:48.383285 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-dhcp-agent-neutron-config" Nov 28 09:40:48 crc kubenswrapper[4946]: I1128 09:40:48.383679 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 28 09:40:48 crc kubenswrapper[4946]: I1128 09:40:48.383865 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-psw2j" Nov 28 09:40:48 crc kubenswrapper[4946]: I1128 09:40:48.384603 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 09:40:48 crc kubenswrapper[4946]: I1128 09:40:48.407130 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-8m7gr"] Nov 28 09:40:48 crc kubenswrapper[4946]: I1128 09:40:48.516366 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a9c2a77c-92e6-4a14-886a-feaeaa5db35f-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-8m7gr\" (UID: \"a9c2a77c-92e6-4a14-886a-feaeaa5db35f\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-8m7gr" Nov 28 09:40:48 crc kubenswrapper[4946]: I1128 09:40:48.516425 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6knx\" (UniqueName: \"kubernetes.io/projected/a9c2a77c-92e6-4a14-886a-feaeaa5db35f-kube-api-access-h6knx\") pod \"neutron-dhcp-openstack-openstack-cell1-8m7gr\" (UID: \"a9c2a77c-92e6-4a14-886a-feaeaa5db35f\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-8m7gr" Nov 28 09:40:48 crc kubenswrapper[4946]: I1128 09:40:48.516500 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a9c2a77c-92e6-4a14-886a-feaeaa5db35f-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-8m7gr\" (UID: \"a9c2a77c-92e6-4a14-886a-feaeaa5db35f\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-8m7gr" Nov 28 09:40:48 crc kubenswrapper[4946]: I1128 09:40:48.516791 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a9c2a77c-92e6-4a14-886a-feaeaa5db35f-ssh-key\") pod \"neutron-dhcp-openstack-openstack-cell1-8m7gr\" (UID: \"a9c2a77c-92e6-4a14-886a-feaeaa5db35f\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-8m7gr" Nov 28 09:40:48 crc kubenswrapper[4946]: I1128 09:40:48.517010 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a9c2a77c-92e6-4a14-886a-feaeaa5db35f-ceph\") pod \"neutron-dhcp-openstack-openstack-cell1-8m7gr\" (UID: \"a9c2a77c-92e6-4a14-886a-feaeaa5db35f\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-8m7gr" Nov 28 09:40:48 crc kubenswrapper[4946]: I1128 09:40:48.517331 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a9c2a77c-92e6-4a14-886a-feaeaa5db35f-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-8m7gr\" (UID: \"a9c2a77c-92e6-4a14-886a-feaeaa5db35f\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-8m7gr" Nov 28 09:40:48 crc kubenswrapper[4946]: I1128 09:40:48.620152 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9c2a77c-92e6-4a14-886a-feaeaa5db35f-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-8m7gr\" (UID: \"a9c2a77c-92e6-4a14-886a-feaeaa5db35f\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-8m7gr" Nov 28 09:40:48 crc kubenswrapper[4946]: I1128 09:40:48.620324 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a9c2a77c-92e6-4a14-886a-feaeaa5db35f-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-8m7gr\" (UID: \"a9c2a77c-92e6-4a14-886a-feaeaa5db35f\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-8m7gr" Nov 28 09:40:48 crc kubenswrapper[4946]: I1128 09:40:48.620372 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6knx\" (UniqueName: \"kubernetes.io/projected/a9c2a77c-92e6-4a14-886a-feaeaa5db35f-kube-api-access-h6knx\") pod \"neutron-dhcp-openstack-openstack-cell1-8m7gr\" (UID: \"a9c2a77c-92e6-4a14-886a-feaeaa5db35f\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-8m7gr" Nov 28 09:40:48 crc kubenswrapper[4946]: I1128 09:40:48.620446 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a9c2a77c-92e6-4a14-886a-feaeaa5db35f-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-8m7gr\" (UID: \"a9c2a77c-92e6-4a14-886a-feaeaa5db35f\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-8m7gr" Nov 28 09:40:48 crc kubenswrapper[4946]: I1128 09:40:48.620595 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a9c2a77c-92e6-4a14-886a-feaeaa5db35f-ssh-key\") pod \"neutron-dhcp-openstack-openstack-cell1-8m7gr\" (UID: \"a9c2a77c-92e6-4a14-886a-feaeaa5db35f\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-8m7gr" Nov 28 09:40:48 crc kubenswrapper[4946]: I1128 09:40:48.620699 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a9c2a77c-92e6-4a14-886a-feaeaa5db35f-ceph\") pod \"neutron-dhcp-openstack-openstack-cell1-8m7gr\" (UID: \"a9c2a77c-92e6-4a14-886a-feaeaa5db35f\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-8m7gr" Nov 28 09:40:48 crc kubenswrapper[4946]: I1128 09:40:48.711294 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9c2a77c-92e6-4a14-886a-feaeaa5db35f-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-8m7gr\" (UID: \"a9c2a77c-92e6-4a14-886a-feaeaa5db35f\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-8m7gr" Nov 28 09:40:48 crc kubenswrapper[4946]: I1128 09:40:48.711545 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a9c2a77c-92e6-4a14-886a-feaeaa5db35f-ssh-key\") pod \"neutron-dhcp-openstack-openstack-cell1-8m7gr\" (UID: 
\"a9c2a77c-92e6-4a14-886a-feaeaa5db35f\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-8m7gr" Nov 28 09:40:48 crc kubenswrapper[4946]: I1128 09:40:48.711497 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a9c2a77c-92e6-4a14-886a-feaeaa5db35f-ceph\") pod \"neutron-dhcp-openstack-openstack-cell1-8m7gr\" (UID: \"a9c2a77c-92e6-4a14-886a-feaeaa5db35f\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-8m7gr" Nov 28 09:40:48 crc kubenswrapper[4946]: I1128 09:40:48.712147 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a9c2a77c-92e6-4a14-886a-feaeaa5db35f-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-8m7gr\" (UID: \"a9c2a77c-92e6-4a14-886a-feaeaa5db35f\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-8m7gr" Nov 28 09:40:48 crc kubenswrapper[4946]: I1128 09:40:48.712418 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a9c2a77c-92e6-4a14-886a-feaeaa5db35f-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-8m7gr\" (UID: \"a9c2a77c-92e6-4a14-886a-feaeaa5db35f\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-8m7gr" Nov 28 09:40:48 crc kubenswrapper[4946]: I1128 09:40:48.714879 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6knx\" (UniqueName: \"kubernetes.io/projected/a9c2a77c-92e6-4a14-886a-feaeaa5db35f-kube-api-access-h6knx\") pod \"neutron-dhcp-openstack-openstack-cell1-8m7gr\" (UID: \"a9c2a77c-92e6-4a14-886a-feaeaa5db35f\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-8m7gr" Nov 28 09:40:49 crc kubenswrapper[4946]: I1128 09:40:49.009081 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-8m7gr" Nov 28 09:40:49 crc kubenswrapper[4946]: I1128 09:40:49.618096 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-8m7gr"] Nov 28 09:40:50 crc kubenswrapper[4946]: I1128 09:40:50.229148 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-8m7gr" event={"ID":"a9c2a77c-92e6-4a14-886a-feaeaa5db35f","Type":"ContainerStarted","Data":"5d9eb9b04fcd52255c4df24d024ff2ebd7825967056b97a6eab564ea506327df"} Nov 28 09:40:51 crc kubenswrapper[4946]: I1128 09:40:51.246989 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-8m7gr" event={"ID":"a9c2a77c-92e6-4a14-886a-feaeaa5db35f","Type":"ContainerStarted","Data":"a4b2548f0e1b0e42d1d46418f9ea1b53b6dc527bd8aa484c736d5486e886e298"} Nov 28 09:40:51 crc kubenswrapper[4946]: I1128 09:40:51.292048 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-dhcp-openstack-openstack-cell1-8m7gr" podStartSLOduration=2.583357867 podStartE2EDuration="3.292017056s" podCreationTimestamp="2025-11-28 09:40:48 +0000 UTC" firstStartedPulling="2025-11-28 09:40:49.618994026 +0000 UTC m=+10103.997059177" lastFinishedPulling="2025-11-28 09:40:50.327653255 +0000 UTC m=+10104.705718366" observedRunningTime="2025-11-28 09:40:51.270711481 +0000 UTC m=+10105.648776622" watchObservedRunningTime="2025-11-28 09:40:51.292017056 +0000 UTC m=+10105.670082197" Nov 28 09:41:52 crc kubenswrapper[4946]: I1128 09:41:52.030513 4946 generic.go:334] "Generic (PLEG): container finished" podID="a9c2a77c-92e6-4a14-886a-feaeaa5db35f" containerID="a4b2548f0e1b0e42d1d46418f9ea1b53b6dc527bd8aa484c736d5486e886e298" exitCode=0 Nov 28 09:41:52 crc kubenswrapper[4946]: I1128 09:41:52.030613 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-8m7gr" event={"ID":"a9c2a77c-92e6-4a14-886a-feaeaa5db35f","Type":"ContainerDied","Data":"a4b2548f0e1b0e42d1d46418f9ea1b53b6dc527bd8aa484c736d5486e886e298"} Nov 28 09:41:53 crc kubenswrapper[4946]: I1128 09:41:53.535600 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-8m7gr" Nov 28 09:41:53 crc kubenswrapper[4946]: I1128 09:41:53.687665 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6knx\" (UniqueName: \"kubernetes.io/projected/a9c2a77c-92e6-4a14-886a-feaeaa5db35f-kube-api-access-h6knx\") pod \"a9c2a77c-92e6-4a14-886a-feaeaa5db35f\" (UID: \"a9c2a77c-92e6-4a14-886a-feaeaa5db35f\") " Nov 28 09:41:53 crc kubenswrapper[4946]: I1128 09:41:53.687788 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a9c2a77c-92e6-4a14-886a-feaeaa5db35f-ssh-key\") pod \"a9c2a77c-92e6-4a14-886a-feaeaa5db35f\" (UID: \"a9c2a77c-92e6-4a14-886a-feaeaa5db35f\") " Nov 28 09:41:53 crc kubenswrapper[4946]: I1128 09:41:53.687894 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a9c2a77c-92e6-4a14-886a-feaeaa5db35f-ceph\") pod \"a9c2a77c-92e6-4a14-886a-feaeaa5db35f\" (UID: \"a9c2a77c-92e6-4a14-886a-feaeaa5db35f\") " Nov 28 09:41:53 crc kubenswrapper[4946]: I1128 09:41:53.687946 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9c2a77c-92e6-4a14-886a-feaeaa5db35f-neutron-dhcp-combined-ca-bundle\") pod \"a9c2a77c-92e6-4a14-886a-feaeaa5db35f\" (UID: \"a9c2a77c-92e6-4a14-886a-feaeaa5db35f\") " Nov 28 09:41:53 crc kubenswrapper[4946]: I1128 09:41:53.688113 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a9c2a77c-92e6-4a14-886a-feaeaa5db35f-neutron-dhcp-agent-neutron-config-0\") pod \"a9c2a77c-92e6-4a14-886a-feaeaa5db35f\" (UID: \"a9c2a77c-92e6-4a14-886a-feaeaa5db35f\") " Nov 28 09:41:53 crc kubenswrapper[4946]: I1128 09:41:53.688203 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a9c2a77c-92e6-4a14-886a-feaeaa5db35f-inventory\") pod \"a9c2a77c-92e6-4a14-886a-feaeaa5db35f\" (UID: \"a9c2a77c-92e6-4a14-886a-feaeaa5db35f\") " Nov 28 09:41:53 crc kubenswrapper[4946]: I1128 09:41:53.693909 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9c2a77c-92e6-4a14-886a-feaeaa5db35f-ceph" (OuterVolumeSpecName: "ceph") pod "a9c2a77c-92e6-4a14-886a-feaeaa5db35f" (UID: "a9c2a77c-92e6-4a14-886a-feaeaa5db35f"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:41:53 crc kubenswrapper[4946]: I1128 09:41:53.694255 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9c2a77c-92e6-4a14-886a-feaeaa5db35f-neutron-dhcp-combined-ca-bundle" (OuterVolumeSpecName: "neutron-dhcp-combined-ca-bundle") pod "a9c2a77c-92e6-4a14-886a-feaeaa5db35f" (UID: "a9c2a77c-92e6-4a14-886a-feaeaa5db35f"). InnerVolumeSpecName "neutron-dhcp-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:41:53 crc kubenswrapper[4946]: I1128 09:41:53.695885 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9c2a77c-92e6-4a14-886a-feaeaa5db35f-kube-api-access-h6knx" (OuterVolumeSpecName: "kube-api-access-h6knx") pod "a9c2a77c-92e6-4a14-886a-feaeaa5db35f" (UID: "a9c2a77c-92e6-4a14-886a-feaeaa5db35f"). InnerVolumeSpecName "kube-api-access-h6knx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:41:53 crc kubenswrapper[4946]: I1128 09:41:53.717872 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9c2a77c-92e6-4a14-886a-feaeaa5db35f-inventory" (OuterVolumeSpecName: "inventory") pod "a9c2a77c-92e6-4a14-886a-feaeaa5db35f" (UID: "a9c2a77c-92e6-4a14-886a-feaeaa5db35f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:41:53 crc kubenswrapper[4946]: I1128 09:41:53.724313 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9c2a77c-92e6-4a14-886a-feaeaa5db35f-neutron-dhcp-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-dhcp-agent-neutron-config-0") pod "a9c2a77c-92e6-4a14-886a-feaeaa5db35f" (UID: "a9c2a77c-92e6-4a14-886a-feaeaa5db35f"). InnerVolumeSpecName "neutron-dhcp-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:41:53 crc kubenswrapper[4946]: I1128 09:41:53.735619 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9c2a77c-92e6-4a14-886a-feaeaa5db35f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a9c2a77c-92e6-4a14-886a-feaeaa5db35f" (UID: "a9c2a77c-92e6-4a14-886a-feaeaa5db35f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:41:53 crc kubenswrapper[4946]: I1128 09:41:53.790241 4946 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a9c2a77c-92e6-4a14-886a-feaeaa5db35f-neutron-dhcp-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 28 09:41:53 crc kubenswrapper[4946]: I1128 09:41:53.790278 4946 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a9c2a77c-92e6-4a14-886a-feaeaa5db35f-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 09:41:53 crc kubenswrapper[4946]: I1128 09:41:53.790291 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6knx\" (UniqueName: \"kubernetes.io/projected/a9c2a77c-92e6-4a14-886a-feaeaa5db35f-kube-api-access-h6knx\") on node \"crc\" DevicePath \"\"" Nov 28 09:41:53 crc kubenswrapper[4946]: I1128 09:41:53.790301 4946 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a9c2a77c-92e6-4a14-886a-feaeaa5db35f-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 09:41:53 crc kubenswrapper[4946]: I1128 09:41:53.790310 4946 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a9c2a77c-92e6-4a14-886a-feaeaa5db35f-ceph\") on node \"crc\" DevicePath \"\"" Nov 28 09:41:53 crc kubenswrapper[4946]: I1128 09:41:53.790318 4946 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9c2a77c-92e6-4a14-886a-feaeaa5db35f-neutron-dhcp-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 09:41:54 crc kubenswrapper[4946]: I1128 09:41:54.052901 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-8m7gr" event={"ID":"a9c2a77c-92e6-4a14-886a-feaeaa5db35f","Type":"ContainerDied","Data":"5d9eb9b04fcd52255c4df24d024ff2ebd7825967056b97a6eab564ea506327df"} Nov 28 09:41:54 crc kubenswrapper[4946]: I1128 09:41:54.053506 4946 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="5d9eb9b04fcd52255c4df24d024ff2ebd7825967056b97a6eab564ea506327df" Nov 28 09:41:54 crc kubenswrapper[4946]: I1128 09:41:54.052947 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-8m7gr" Nov 28 09:42:01 crc kubenswrapper[4946]: I1128 09:42:01.876789 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 28 09:42:01 crc kubenswrapper[4946]: I1128 09:42:01.877975 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="e5a57460-cf67-4cef-80a0-bb873598b9ed" containerName="nova-cell0-conductor-conductor" containerID="cri-o://d91b1d82ecac143e4eed0fe464accd8eaa0a9e7ff4df07993bbdbc1616612709" gracePeriod=30 Nov 28 09:42:02 crc kubenswrapper[4946]: I1128 09:42:02.379373 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 28 09:42:02 crc kubenswrapper[4946]: I1128 09:42:02.379693 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="37c39660-02af-4065-8612-86ef02649a2f" containerName="nova-cell1-conductor-conductor" containerID="cri-o://8b7fa9697d8b4068b043297b7d17a4fa27cafffb08f872df98cd9831b62fdbb6" gracePeriod=30 Nov 28 09:42:02 crc kubenswrapper[4946]: I1128 09:42:02.544612 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 28 09:42:02 crc kubenswrapper[4946]: I1128 09:42:02.545169 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c0710305-34b3-4a15-841d-a90a2ff20c6a" containerName="nova-api-log" containerID="cri-o://26f93af648d01ce75fda4864d9831891ef89ac59f10173075768a4125def6e9d" gracePeriod=30 Nov 28 09:42:02 crc kubenswrapper[4946]: I1128 09:42:02.545297 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c0710305-34b3-4a15-841d-a90a2ff20c6a" containerName="nova-api-api" containerID="cri-o://d1211a4b11855f425498bc68db67776efd44f9c78e0813df5902e7e59207f617" gracePeriod=30 Nov 28 09:42:02 crc kubenswrapper[4946]: I1128 09:42:02.566521 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 09:42:02 crc kubenswrapper[4946]: I1128 09:42:02.568739 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="6aad4350-9266-48ae-b3ae-b4ec6046fbb1" containerName="nova-scheduler-scheduler" containerID="cri-o://b9e3cc0b7dacace2f336a7508f89e0180d48d6bf43b2581788402aa77efcbf39" gracePeriod=30 Nov 28 09:42:02 crc kubenswrapper[4946]: I1128 09:42:02.606693 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 09:42:02 crc kubenswrapper[4946]: I1128 09:42:02.606924 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a" containerName="nova-metadata-log" containerID="cri-o://30580d9a89c94dc908f75e10a22137d36a1aa6bc4319f9b5776f47e91685ce68" gracePeriod=30 Nov 28 09:42:02 crc kubenswrapper[4946]: I1128 09:42:02.607416 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a" containerName="nova-metadata-metadata" containerID="cri-o://461d9f2607cac19a1f5c39e9388f04cd5c3a00c8d330d862c1f6b0f314304e8d" gracePeriod=30 Nov 28 
09:42:02 crc kubenswrapper[4946]: E1128 09:42:02.902306 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d91b1d82ecac143e4eed0fe464accd8eaa0a9e7ff4df07993bbdbc1616612709" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 28 09:42:02 crc kubenswrapper[4946]: E1128 09:42:02.908230 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d91b1d82ecac143e4eed0fe464accd8eaa0a9e7ff4df07993bbdbc1616612709" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 28 09:42:02 crc kubenswrapper[4946]: E1128 09:42:02.910128 4946 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d91b1d82ecac143e4eed0fe464accd8eaa0a9e7ff4df07993bbdbc1616612709" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 28 09:42:02 crc kubenswrapper[4946]: E1128 09:42:02.910171 4946 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="e5a57460-cf67-4cef-80a0-bb873598b9ed" containerName="nova-cell0-conductor-conductor" Nov 28 09:42:03 crc kubenswrapper[4946]: I1128 09:42:03.161824 4946 generic.go:334] "Generic (PLEG): container finished" podID="c0710305-34b3-4a15-841d-a90a2ff20c6a" containerID="26f93af648d01ce75fda4864d9831891ef89ac59f10173075768a4125def6e9d" exitCode=143 Nov 28 09:42:03 crc kubenswrapper[4946]: I1128 09:42:03.161903 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c0710305-34b3-4a15-841d-a90a2ff20c6a","Type":"ContainerDied","Data":"26f93af648d01ce75fda4864d9831891ef89ac59f10173075768a4125def6e9d"} Nov 28 09:42:03 crc kubenswrapper[4946]: I1128 09:42:03.164353 4946 generic.go:334] "Generic (PLEG): container finished" podID="0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a" containerID="30580d9a89c94dc908f75e10a22137d36a1aa6bc4319f9b5776f47e91685ce68" exitCode=143 Nov 28 09:42:03 crc kubenswrapper[4946]: I1128 09:42:03.164401 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a","Type":"ContainerDied","Data":"30580d9a89c94dc908f75e10a22137d36a1aa6bc4319f9b5776f47e91685ce68"} Nov 28 09:42:03 crc kubenswrapper[4946]: I1128 09:42:03.723215 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 28 09:42:03 crc kubenswrapper[4946]: I1128 09:42:03.803413 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37c39660-02af-4065-8612-86ef02649a2f-combined-ca-bundle\") pod \"37c39660-02af-4065-8612-86ef02649a2f\" (UID: \"37c39660-02af-4065-8612-86ef02649a2f\") " Nov 28 09:42:03 crc kubenswrapper[4946]: I1128 09:42:03.803682 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37c39660-02af-4065-8612-86ef02649a2f-config-data\") pod \"37c39660-02af-4065-8612-86ef02649a2f\" (UID: \"37c39660-02af-4065-8612-86ef02649a2f\") " Nov 28 09:42:03 crc kubenswrapper[4946]: I1128 09:42:03.803782 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nxrx\" (UniqueName: \"kubernetes.io/projected/37c39660-02af-4065-8612-86ef02649a2f-kube-api-access-7nxrx\") pod \"37c39660-02af-4065-8612-86ef02649a2f\" (UID: \"37c39660-02af-4065-8612-86ef02649a2f\") " Nov 28 09:42:03 crc kubenswrapper[4946]: I1128 09:42:03.808612 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37c39660-02af-4065-8612-86ef02649a2f-kube-api-access-7nxrx" (OuterVolumeSpecName: "kube-api-access-7nxrx") pod "37c39660-02af-4065-8612-86ef02649a2f" (UID: "37c39660-02af-4065-8612-86ef02649a2f"). InnerVolumeSpecName "kube-api-access-7nxrx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:42:03 crc kubenswrapper[4946]: I1128 09:42:03.831438 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37c39660-02af-4065-8612-86ef02649a2f-config-data" (OuterVolumeSpecName: "config-data") pod "37c39660-02af-4065-8612-86ef02649a2f" (UID: "37c39660-02af-4065-8612-86ef02649a2f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:42:03 crc kubenswrapper[4946]: I1128 09:42:03.859217 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37c39660-02af-4065-8612-86ef02649a2f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "37c39660-02af-4065-8612-86ef02649a2f" (UID: "37c39660-02af-4065-8612-86ef02649a2f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:42:03 crc kubenswrapper[4946]: I1128 09:42:03.906203 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37c39660-02af-4065-8612-86ef02649a2f-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 09:42:03 crc kubenswrapper[4946]: I1128 09:42:03.906245 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7nxrx\" (UniqueName: \"kubernetes.io/projected/37c39660-02af-4065-8612-86ef02649a2f-kube-api-access-7nxrx\") on node \"crc\" DevicePath \"\"" Nov 28 09:42:03 crc kubenswrapper[4946]: I1128 09:42:03.906261 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37c39660-02af-4065-8612-86ef02649a2f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 09:42:04 crc kubenswrapper[4946]: I1128 09:42:04.178349 4946 generic.go:334] "Generic (PLEG): container finished" podID="37c39660-02af-4065-8612-86ef02649a2f" containerID="8b7fa9697d8b4068b043297b7d17a4fa27cafffb08f872df98cd9831b62fdbb6" exitCode=0 Nov 28 09:42:04 crc kubenswrapper[4946]: I1128 09:42:04.178404 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"37c39660-02af-4065-8612-86ef02649a2f","Type":"ContainerDied","Data":"8b7fa9697d8b4068b043297b7d17a4fa27cafffb08f872df98cd9831b62fdbb6"} Nov 28 09:42:04 crc kubenswrapper[4946]: I1128 09:42:04.178719 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"37c39660-02af-4065-8612-86ef02649a2f","Type":"ContainerDied","Data":"4f0258368ebfcc395917ae6ce55052b5889576f8ce88932ac9d931ee9ca9843f"} Nov 28 09:42:04 crc kubenswrapper[4946]: I1128 09:42:04.178751 4946 scope.go:117] "RemoveContainer" containerID="8b7fa9697d8b4068b043297b7d17a4fa27cafffb08f872df98cd9831b62fdbb6" Nov 28 09:42:04 crc kubenswrapper[4946]: I1128 09:42:04.178424 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 28 09:42:04 crc kubenswrapper[4946]: I1128 09:42:04.203103 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 28 09:42:04 crc kubenswrapper[4946]: I1128 09:42:04.213330 4946 scope.go:117] "RemoveContainer" containerID="8b7fa9697d8b4068b043297b7d17a4fa27cafffb08f872df98cd9831b62fdbb6" Nov 28 09:42:04 crc kubenswrapper[4946]: E1128 09:42:04.214099 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b7fa9697d8b4068b043297b7d17a4fa27cafffb08f872df98cd9831b62fdbb6\": container with ID starting with 8b7fa9697d8b4068b043297b7d17a4fa27cafffb08f872df98cd9831b62fdbb6 not found: ID does not exist" containerID="8b7fa9697d8b4068b043297b7d17a4fa27cafffb08f872df98cd9831b62fdbb6" Nov 28 09:42:04 crc kubenswrapper[4946]: I1128 09:42:04.214168 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b7fa9697d8b4068b043297b7d17a4fa27cafffb08f872df98cd9831b62fdbb6"} err="failed to get container status \"8b7fa9697d8b4068b043297b7d17a4fa27cafffb08f872df98cd9831b62fdbb6\": rpc error: code = NotFound desc = could not find container \"8b7fa9697d8b4068b043297b7d17a4fa27cafffb08f872df98cd9831b62fdbb6\": container with ID starting with 8b7fa9697d8b4068b043297b7d17a4fa27cafffb08f872df98cd9831b62fdbb6 not found: ID does not exist" Nov 28 09:42:04 crc kubenswrapper[4946]: I1128 09:42:04.221449 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 28 09:42:04 crc kubenswrapper[4946]: I1128 09:42:04.238728 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 28 09:42:04 crc kubenswrapper[4946]: E1128 09:42:04.239224 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9c2a77c-92e6-4a14-886a-feaeaa5db35f" containerName="neutron-dhcp-openstack-openstack-cell1" Nov 28 09:42:04 crc kubenswrapper[4946]: I1128 09:42:04.239246 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9c2a77c-92e6-4a14-886a-feaeaa5db35f" containerName="neutron-dhcp-openstack-openstack-cell1" Nov 28 09:42:04 crc kubenswrapper[4946]: E1128 09:42:04.239267 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37c39660-02af-4065-8612-86ef02649a2f" containerName="nova-cell1-conductor-conductor" Nov 28 09:42:04 crc kubenswrapper[4946]: I1128 09:42:04.239274 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="37c39660-02af-4065-8612-86ef02649a2f" containerName="nova-cell1-conductor-conductor" Nov 28 09:42:04 crc kubenswrapper[4946]: I1128 09:42:04.239717 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9c2a77c-92e6-4a14-886a-feaeaa5db35f" containerName="neutron-dhcp-openstack-openstack-cell1" Nov 28 09:42:04 crc kubenswrapper[4946]: I1128 09:42:04.239733 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="37c39660-02af-4065-8612-86ef02649a2f" containerName="nova-cell1-conductor-conductor" Nov 28 09:42:04 crc kubenswrapper[4946]: I1128 09:42:04.240537 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 28 09:42:04 crc kubenswrapper[4946]: I1128 09:42:04.244308 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 28 09:42:04 crc kubenswrapper[4946]: I1128 09:42:04.251272 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 28 09:42:04 crc kubenswrapper[4946]: I1128 09:42:04.314761 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86aef0b3-1873-47bf-aab0-257c77b2bbe6-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"86aef0b3-1873-47bf-aab0-257c77b2bbe6\") " pod="openstack/nova-cell1-conductor-0" Nov 28 09:42:04 crc kubenswrapper[4946]: I1128 09:42:04.314888 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86aef0b3-1873-47bf-aab0-257c77b2bbe6-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"86aef0b3-1873-47bf-aab0-257c77b2bbe6\") " pod="openstack/nova-cell1-conductor-0" Nov 28 09:42:04 crc kubenswrapper[4946]: I1128 09:42:04.314986 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl75l\" (UniqueName: \"kubernetes.io/projected/86aef0b3-1873-47bf-aab0-257c77b2bbe6-kube-api-access-cl75l\") pod \"nova-cell1-conductor-0\" (UID: \"86aef0b3-1873-47bf-aab0-257c77b2bbe6\") " pod="openstack/nova-cell1-conductor-0" Nov 28 09:42:04 crc kubenswrapper[4946]: I1128 09:42:04.417288 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86aef0b3-1873-47bf-aab0-257c77b2bbe6-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"86aef0b3-1873-47bf-aab0-257c77b2bbe6\") " pod="openstack/nova-cell1-conductor-0" Nov 28 09:42:04 crc kubenswrapper[4946]: I1128 09:42:04.417503 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86aef0b3-1873-47bf-aab0-257c77b2bbe6-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"86aef0b3-1873-47bf-aab0-257c77b2bbe6\") " pod="openstack/nova-cell1-conductor-0" Nov 28 09:42:04 crc kubenswrapper[4946]: I1128 09:42:04.417719 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cl75l\" (UniqueName: \"kubernetes.io/projected/86aef0b3-1873-47bf-aab0-257c77b2bbe6-kube-api-access-cl75l\") pod \"nova-cell1-conductor-0\" (UID: \"86aef0b3-1873-47bf-aab0-257c77b2bbe6\") " pod="openstack/nova-cell1-conductor-0" Nov 28 09:42:04 crc kubenswrapper[4946]: I1128 09:42:04.424887 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86aef0b3-1873-47bf-aab0-257c77b2bbe6-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"86aef0b3-1873-47bf-aab0-257c77b2bbe6\") " pod="openstack/nova-cell1-conductor-0" Nov 28 09:42:04 crc kubenswrapper[4946]: I1128 09:42:04.425330 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86aef0b3-1873-47bf-aab0-257c77b2bbe6-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"86aef0b3-1873-47bf-aab0-257c77b2bbe6\") " pod="openstack/nova-cell1-conductor-0" Nov 28 09:42:04 crc kubenswrapper[4946]: I1128 09:42:04.437963 4946 
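
Note: the reconciler_common.go / operation_generator.go entries above trace kubelet's volume reconciler swapping nova-cell1-conductor-0: every volume of the old pod (UID 37c39660-02af-4065-8612-86ef02649a2f) goes through UnmountVolume.TearDown and "Volume detached" before the same three logical volumes (config-data, combined-ca-bundle, and a fresh kube-api-access token) are verified as attached and mounted for the new UID 86aef0b3-1873-47bf-aab0-257c77b2bbe6. A minimal toy model of that desired-state vs. actual-state diff, in Go; this is an illustration of the pattern, emphatically not kubelet's real implementation:

```go
// Toy model of the kubelet volume reconciler whose log lines appear above.
// Volumes mounted but no longer desired are torn down first; volumes desired
// but not yet mounted are then attached and set up, which is exactly the
// Unmount -> detach -> VerifyControllerAttachedVolume -> MountVolume.SetUp
// ordering visible in the log.
package main

import "fmt"

type volumeKey struct{ pod, volume string }

func reconcile(desired, actual map[volumeKey]bool) {
	for k := range actual {
		if !desired[k] {
			fmt.Printf("UnmountVolume.TearDown for %q (pod %s)\n", k.volume, k.pod)
			delete(actual, k)
			fmt.Printf("Volume detached for %q\n", k.volume)
		}
	}
	for k := range desired {
		if !actual[k] {
			fmt.Printf("VerifyControllerAttachedVolume for %q (pod %s)\n", k.volume, k.pod)
			fmt.Printf("MountVolume.SetUp succeeded for %q (pod %s)\n", k.volume, k.pod)
			actual[k] = true
		}
	}
}

func main() {
	oldUID := "37c39660-02af-4065-8612-86ef02649a2f" // outgoing pod, from the log
	newUID := "86aef0b3-1873-47bf-aab0-257c77b2bbe6" // replacement pod, from the log
	actual := map[volumeKey]bool{
		{oldUID, "config-data"}:           true,
		{oldUID, "combined-ca-bundle"}:    true,
		{oldUID, "kube-api-access-7nxrx"}: true,
	}
	desired := map[volumeKey]bool{
		{newUID, "config-data"}:           true,
		{newUID, "combined-ca-bundle"}:    true,
		{newUID, "kube-api-access-cl75l"}: true,
	}
	reconcile(desired, actual)
}
```

The interleaved cpu_manager/memory_manager "RemoveStaleState" entries are the same idea applied to resource-manager bookkeeping: state keyed by the old pod UID is dropped before the replacement pod is admitted.
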
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cl75l\" (UniqueName: \"kubernetes.io/projected/86aef0b3-1873-47bf-aab0-257c77b2bbe6-kube-api-access-cl75l\") pod \"nova-cell1-conductor-0\" (UID: \"86aef0b3-1873-47bf-aab0-257c77b2bbe6\") " pod="openstack/nova-cell1-conductor-0" Nov 28 09:42:04 crc kubenswrapper[4946]: I1128 09:42:04.569056 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 28 09:42:05 crc kubenswrapper[4946]: I1128 09:42:05.071257 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 28 09:42:05 crc kubenswrapper[4946]: W1128 09:42:05.078041 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod86aef0b3_1873_47bf_aab0_257c77b2bbe6.slice/crio-f159d478f41529fd380abd255d8bed81d2a2db2517ba149623a345a4e8e4e4c6 WatchSource:0}: Error finding container f159d478f41529fd380abd255d8bed81d2a2db2517ba149623a345a4e8e4e4c6: Status 404 returned error can't find the container with id f159d478f41529fd380abd255d8bed81d2a2db2517ba149623a345a4e8e4e4c6 Nov 28 09:42:05 crc kubenswrapper[4946]: I1128 09:42:05.199193 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"86aef0b3-1873-47bf-aab0-257c77b2bbe6","Type":"ContainerStarted","Data":"f159d478f41529fd380abd255d8bed81d2a2db2517ba149623a345a4e8e4e4c6"} Nov 28 09:42:05 crc kubenswrapper[4946]: I1128 09:42:05.755500 4946 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.89:8775/\": read tcp 10.217.0.2:58122->10.217.1.89:8775: read: connection reset by peer" Nov 28 09:42:05 crc kubenswrapper[4946]: I1128 09:42:05.755504 4946 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.89:8775/\": read tcp 10.217.0.2:58136->10.217.1.89:8775: read: connection reset by peer" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.020973 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37c39660-02af-4065-8612-86ef02649a2f" path="/var/lib/kubelet/pods/37c39660-02af-4065-8612-86ef02649a2f/volumes" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.220010 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"86aef0b3-1873-47bf-aab0-257c77b2bbe6","Type":"ContainerStarted","Data":"507d9d70e10534ecd9090ca0acc0e383a37f9b7f782acfcac80b6deac7e99ef8"} Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.221255 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.242855 4946 generic.go:334] "Generic (PLEG): container finished" podID="0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a" containerID="461d9f2607cac19a1f5c39e9388f04cd5c3a00c8d330d862c1f6b0f314304e8d" exitCode=0 Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.242940 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a","Type":"ContainerDied","Data":"461d9f2607cac19a1f5c39e9388f04cd5c3a00c8d330d862c1f6b0f314304e8d"} Nov 28 09:42:06 
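
Note on the probe traffic in this window: the ExecSync errors at 09:42:02 show the conductor readiness probe (/usr/bin/pgrep -r DRST nova-conductor) failing with "cannot register an exec PID: container is stopping", i.e. the probe raced the container's termination rather than the service being unhealthy. The nova-api and nova-metadata containers then exit with code 143 (128 + 15, killed by SIGTERM), and at 09:42:05 the nova-metadata-0 HTTP readiness probes against 10.217.1.89:8775 get "connection reset by peer" as the server closes its listener. A hedged reconstruction of the two probe shapes using the Kubernetes API types; the exec command and the HTTP endpoint are taken verbatim from the log, but the real specs are rendered by the operator and the timing values below are assumptions:

```go
// Sketch of the two readiness probes exercised above, built from
// k8s.io/api/core/v1 types. Thresholds/periods are illustrative only.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// Exec probe: command copied verbatim from the ExecSync errors at 09:42:02.
	conductor := corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			Exec: &corev1.ExecAction{
				Command: []string{"/usr/bin/pgrep", "-r", "DRST", "nova-conductor"},
			},
		},
		PeriodSeconds:    10, // assumed; not visible in the log
		FailureThreshold: 3,  // assumed
	}

	// HTTP probe: endpoint from the probe-failure output
	// ("Get http://10.217.1.89:8775/"); timing again assumed.
	metadata := corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{Path: "/", Port: intstr.FromInt(8775)},
		},
		PeriodSeconds:    10, // assumed
		FailureThreshold: 3,  // assumed
	}

	fmt.Printf("conductor: %+v\nmetadata: %+v\n", conductor, metadata)
}
```
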
crc kubenswrapper[4946]: I1128 09:42:06.246733 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.250848 4946 generic.go:334] "Generic (PLEG): container finished" podID="6aad4350-9266-48ae-b3ae-b4ec6046fbb1" containerID="b9e3cc0b7dacace2f336a7508f89e0180d48d6bf43b2581788402aa77efcbf39" exitCode=0 Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.250932 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6aad4350-9266-48ae-b3ae-b4ec6046fbb1","Type":"ContainerDied","Data":"b9e3cc0b7dacace2f336a7508f89e0180d48d6bf43b2581788402aa77efcbf39"} Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.254733 4946 generic.go:334] "Generic (PLEG): container finished" podID="c0710305-34b3-4a15-841d-a90a2ff20c6a" containerID="d1211a4b11855f425498bc68db67776efd44f9c78e0813df5902e7e59207f617" exitCode=0 Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.254774 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.254777 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c0710305-34b3-4a15-841d-a90a2ff20c6a","Type":"ContainerDied","Data":"d1211a4b11855f425498bc68db67776efd44f9c78e0813df5902e7e59207f617"} Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.255483 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c0710305-34b3-4a15-841d-a90a2ff20c6a","Type":"ContainerDied","Data":"d69c8c3fd699e4fd72a54117765d39d73454767f32349d4605eec8db039c28fc"} Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.255506 4946 scope.go:117] "RemoveContainer" containerID="d1211a4b11855f425498bc68db67776efd44f9c78e0813df5902e7e59207f617" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.256556 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.256532457 podStartE2EDuration="2.256532457s" podCreationTimestamp="2025-11-28 09:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 09:42:06.249939224 +0000 UTC m=+10180.628004335" watchObservedRunningTime="2025-11-28 09:42:06.256532457 +0000 UTC m=+10180.634597568" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.311305 4946 scope.go:117] "RemoveContainer" containerID="26f93af648d01ce75fda4864d9831891ef89ac59f10173075768a4125def6e9d" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.344378 4946 scope.go:117] "RemoveContainer" containerID="d1211a4b11855f425498bc68db67776efd44f9c78e0813df5902e7e59207f617" Nov 28 09:42:06 crc kubenswrapper[4946]: E1128 09:42:06.345006 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1211a4b11855f425498bc68db67776efd44f9c78e0813df5902e7e59207f617\": container with ID starting with d1211a4b11855f425498bc68db67776efd44f9c78e0813df5902e7e59207f617 not found: ID does not exist" containerID="d1211a4b11855f425498bc68db67776efd44f9c78e0813df5902e7e59207f617" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.345043 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1211a4b11855f425498bc68db67776efd44f9c78e0813df5902e7e59207f617"} err="failed to get 
container status \"d1211a4b11855f425498bc68db67776efd44f9c78e0813df5902e7e59207f617\": rpc error: code = NotFound desc = could not find container \"d1211a4b11855f425498bc68db67776efd44f9c78e0813df5902e7e59207f617\": container with ID starting with d1211a4b11855f425498bc68db67776efd44f9c78e0813df5902e7e59207f617 not found: ID does not exist" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.345080 4946 scope.go:117] "RemoveContainer" containerID="26f93af648d01ce75fda4864d9831891ef89ac59f10173075768a4125def6e9d" Nov 28 09:42:06 crc kubenswrapper[4946]: E1128 09:42:06.345612 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26f93af648d01ce75fda4864d9831891ef89ac59f10173075768a4125def6e9d\": container with ID starting with 26f93af648d01ce75fda4864d9831891ef89ac59f10173075768a4125def6e9d not found: ID does not exist" containerID="26f93af648d01ce75fda4864d9831891ef89ac59f10173075768a4125def6e9d" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.345642 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26f93af648d01ce75fda4864d9831891ef89ac59f10173075768a4125def6e9d"} err="failed to get container status \"26f93af648d01ce75fda4864d9831891ef89ac59f10173075768a4125def6e9d\": rpc error: code = NotFound desc = could not find container \"26f93af648d01ce75fda4864d9831891ef89ac59f10173075768a4125def6e9d\": container with ID starting with 26f93af648d01ce75fda4864d9831891ef89ac59f10173075768a4125def6e9d not found: ID does not exist" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.369199 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.382435 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.383543 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0710305-34b3-4a15-841d-a90a2ff20c6a-logs\") pod \"c0710305-34b3-4a15-841d-a90a2ff20c6a\" (UID: \"c0710305-34b3-4a15-841d-a90a2ff20c6a\") " Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.383708 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79w28\" (UniqueName: \"kubernetes.io/projected/c0710305-34b3-4a15-841d-a90a2ff20c6a-kube-api-access-79w28\") pod \"c0710305-34b3-4a15-841d-a90a2ff20c6a\" (UID: \"c0710305-34b3-4a15-841d-a90a2ff20c6a\") " Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.383769 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0710305-34b3-4a15-841d-a90a2ff20c6a-combined-ca-bundle\") pod \"c0710305-34b3-4a15-841d-a90a2ff20c6a\" (UID: \"c0710305-34b3-4a15-841d-a90a2ff20c6a\") " Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.383800 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0710305-34b3-4a15-841d-a90a2ff20c6a-config-data\") pod \"c0710305-34b3-4a15-841d-a90a2ff20c6a\" (UID: \"c0710305-34b3-4a15-841d-a90a2ff20c6a\") " Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.388110 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0710305-34b3-4a15-841d-a90a2ff20c6a-logs" (OuterVolumeSpecName: "logs") pod "c0710305-34b3-4a15-841d-a90a2ff20c6a" (UID: "c0710305-34b3-4a15-841d-a90a2ff20c6a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.399786 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0710305-34b3-4a15-841d-a90a2ff20c6a-kube-api-access-79w28" (OuterVolumeSpecName: "kube-api-access-79w28") pod "c0710305-34b3-4a15-841d-a90a2ff20c6a" (UID: "c0710305-34b3-4a15-841d-a90a2ff20c6a"). InnerVolumeSpecName "kube-api-access-79w28". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.437967 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0710305-34b3-4a15-841d-a90a2ff20c6a-config-data" (OuterVolumeSpecName: "config-data") pod "c0710305-34b3-4a15-841d-a90a2ff20c6a" (UID: "c0710305-34b3-4a15-841d-a90a2ff20c6a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.445346 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0710305-34b3-4a15-841d-a90a2ff20c6a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c0710305-34b3-4a15-841d-a90a2ff20c6a" (UID: "c0710305-34b3-4a15-841d-a90a2ff20c6a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.485101 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpfrb\" (UniqueName: \"kubernetes.io/projected/6aad4350-9266-48ae-b3ae-b4ec6046fbb1-kube-api-access-hpfrb\") pod \"6aad4350-9266-48ae-b3ae-b4ec6046fbb1\" (UID: \"6aad4350-9266-48ae-b3ae-b4ec6046fbb1\") " Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.485143 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a-logs\") pod \"0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a\" (UID: \"0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a\") " Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.485177 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxg6r\" (UniqueName: \"kubernetes.io/projected/0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a-kube-api-access-sxg6r\") pod \"0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a\" (UID: \"0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a\") " Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.485263 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a-config-data\") pod \"0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a\" (UID: \"0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a\") " Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.485293 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a-combined-ca-bundle\") pod \"0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a\" (UID: \"0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a\") " Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.485429 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aad4350-9266-48ae-b3ae-b4ec6046fbb1-combined-ca-bundle\") pod \"6aad4350-9266-48ae-b3ae-b4ec6046fbb1\" (UID: \"6aad4350-9266-48ae-b3ae-b4ec6046fbb1\") " Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.485454 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6aad4350-9266-48ae-b3ae-b4ec6046fbb1-config-data\") pod \"6aad4350-9266-48ae-b3ae-b4ec6046fbb1\" (UID: \"6aad4350-9266-48ae-b3ae-b4ec6046fbb1\") " Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.485867 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0710305-34b3-4a15-841d-a90a2ff20c6a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.485886 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0710305-34b3-4a15-841d-a90a2ff20c6a-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.485895 4946 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0710305-34b3-4a15-841d-a90a2ff20c6a-logs\") on node \"crc\" DevicePath \"\"" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.485905 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79w28\" (UniqueName: 
\"kubernetes.io/projected/c0710305-34b3-4a15-841d-a90a2ff20c6a-kube-api-access-79w28\") on node \"crc\" DevicePath \"\"" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.489389 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a-logs" (OuterVolumeSpecName: "logs") pod "0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a" (UID: "0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.496085 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a-kube-api-access-sxg6r" (OuterVolumeSpecName: "kube-api-access-sxg6r") pod "0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a" (UID: "0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a"). InnerVolumeSpecName "kube-api-access-sxg6r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.500713 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6aad4350-9266-48ae-b3ae-b4ec6046fbb1-kube-api-access-hpfrb" (OuterVolumeSpecName: "kube-api-access-hpfrb") pod "6aad4350-9266-48ae-b3ae-b4ec6046fbb1" (UID: "6aad4350-9266-48ae-b3ae-b4ec6046fbb1"). InnerVolumeSpecName "kube-api-access-hpfrb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.529687 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6aad4350-9266-48ae-b3ae-b4ec6046fbb1-config-data" (OuterVolumeSpecName: "config-data") pod "6aad4350-9266-48ae-b3ae-b4ec6046fbb1" (UID: "6aad4350-9266-48ae-b3ae-b4ec6046fbb1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.533077 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a" (UID: "0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.542089 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6aad4350-9266-48ae-b3ae-b4ec6046fbb1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6aad4350-9266-48ae-b3ae-b4ec6046fbb1" (UID: "6aad4350-9266-48ae-b3ae-b4ec6046fbb1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.545648 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a-config-data" (OuterVolumeSpecName: "config-data") pod "0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a" (UID: "0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.589055 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpfrb\" (UniqueName: \"kubernetes.io/projected/6aad4350-9266-48ae-b3ae-b4ec6046fbb1-kube-api-access-hpfrb\") on node \"crc\" DevicePath \"\"" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.589082 4946 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a-logs\") on node \"crc\" DevicePath \"\"" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.589093 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxg6r\" (UniqueName: \"kubernetes.io/projected/0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a-kube-api-access-sxg6r\") on node \"crc\" DevicePath \"\"" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.589102 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.589110 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.589122 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aad4350-9266-48ae-b3ae-b4ec6046fbb1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.589131 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6aad4350-9266-48ae-b3ae-b4ec6046fbb1-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.608881 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.630426 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.648377 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 28 09:42:06 crc kubenswrapper[4946]: E1128 09:42:06.648863 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0710305-34b3-4a15-841d-a90a2ff20c6a" containerName="nova-api-log" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.648880 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0710305-34b3-4a15-841d-a90a2ff20c6a" containerName="nova-api-log" Nov 28 09:42:06 crc kubenswrapper[4946]: E1128 09:42:06.648888 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aad4350-9266-48ae-b3ae-b4ec6046fbb1" containerName="nova-scheduler-scheduler" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.648895 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aad4350-9266-48ae-b3ae-b4ec6046fbb1" containerName="nova-scheduler-scheduler" Nov 28 09:42:06 crc kubenswrapper[4946]: E1128 09:42:06.648913 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a" containerName="nova-metadata-metadata" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.648922 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a" 
containerName="nova-metadata-metadata" Nov 28 09:42:06 crc kubenswrapper[4946]: E1128 09:42:06.648935 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a" containerName="nova-metadata-log" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.648941 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a" containerName="nova-metadata-log" Nov 28 09:42:06 crc kubenswrapper[4946]: E1128 09:42:06.648971 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0710305-34b3-4a15-841d-a90a2ff20c6a" containerName="nova-api-api" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.648978 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0710305-34b3-4a15-841d-a90a2ff20c6a" containerName="nova-api-api" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.649153 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a" containerName="nova-metadata-log" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.649168 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0710305-34b3-4a15-841d-a90a2ff20c6a" containerName="nova-api-api" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.649184 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0710305-34b3-4a15-841d-a90a2ff20c6a" containerName="nova-api-log" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.649194 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a" containerName="nova-metadata-metadata" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.649203 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aad4350-9266-48ae-b3ae-b4ec6046fbb1" containerName="nova-scheduler-scheduler" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.650290 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.653746 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.658603 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.792678 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/713eb85e-5541-4b11-aa8f-c0eb39a3596c-config-data\") pod \"nova-api-0\" (UID: \"713eb85e-5541-4b11-aa8f-c0eb39a3596c\") " pod="openstack/nova-api-0" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.792717 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/713eb85e-5541-4b11-aa8f-c0eb39a3596c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"713eb85e-5541-4b11-aa8f-c0eb39a3596c\") " pod="openstack/nova-api-0" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.792749 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/713eb85e-5541-4b11-aa8f-c0eb39a3596c-logs\") pod \"nova-api-0\" (UID: \"713eb85e-5541-4b11-aa8f-c0eb39a3596c\") " pod="openstack/nova-api-0" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.792784 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r69q4\" (UniqueName: \"kubernetes.io/projected/713eb85e-5541-4b11-aa8f-c0eb39a3596c-kube-api-access-r69q4\") pod \"nova-api-0\" (UID: \"713eb85e-5541-4b11-aa8f-c0eb39a3596c\") " pod="openstack/nova-api-0" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.894082 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/713eb85e-5541-4b11-aa8f-c0eb39a3596c-config-data\") pod \"nova-api-0\" (UID: \"713eb85e-5541-4b11-aa8f-c0eb39a3596c\") " pod="openstack/nova-api-0" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.894113 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/713eb85e-5541-4b11-aa8f-c0eb39a3596c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"713eb85e-5541-4b11-aa8f-c0eb39a3596c\") " pod="openstack/nova-api-0" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.894141 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/713eb85e-5541-4b11-aa8f-c0eb39a3596c-logs\") pod \"nova-api-0\" (UID: \"713eb85e-5541-4b11-aa8f-c0eb39a3596c\") " pod="openstack/nova-api-0" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.894178 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r69q4\" (UniqueName: \"kubernetes.io/projected/713eb85e-5541-4b11-aa8f-c0eb39a3596c-kube-api-access-r69q4\") pod \"nova-api-0\" (UID: \"713eb85e-5541-4b11-aa8f-c0eb39a3596c\") " pod="openstack/nova-api-0" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.894850 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/713eb85e-5541-4b11-aa8f-c0eb39a3596c-logs\") pod \"nova-api-0\" (UID: \"713eb85e-5541-4b11-aa8f-c0eb39a3596c\") " 
pod="openstack/nova-api-0" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.898502 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/713eb85e-5541-4b11-aa8f-c0eb39a3596c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"713eb85e-5541-4b11-aa8f-c0eb39a3596c\") " pod="openstack/nova-api-0" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.899741 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/713eb85e-5541-4b11-aa8f-c0eb39a3596c-config-data\") pod \"nova-api-0\" (UID: \"713eb85e-5541-4b11-aa8f-c0eb39a3596c\") " pod="openstack/nova-api-0" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.918006 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r69q4\" (UniqueName: \"kubernetes.io/projected/713eb85e-5541-4b11-aa8f-c0eb39a3596c-kube-api-access-r69q4\") pod \"nova-api-0\" (UID: \"713eb85e-5541-4b11-aa8f-c0eb39a3596c\") " pod="openstack/nova-api-0" Nov 28 09:42:06 crc kubenswrapper[4946]: I1128 09:42:06.966899 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.100040 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.200697 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5a57460-cf67-4cef-80a0-bb873598b9ed-combined-ca-bundle\") pod \"e5a57460-cf67-4cef-80a0-bb873598b9ed\" (UID: \"e5a57460-cf67-4cef-80a0-bb873598b9ed\") " Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.201145 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7q8c\" (UniqueName: \"kubernetes.io/projected/e5a57460-cf67-4cef-80a0-bb873598b9ed-kube-api-access-w7q8c\") pod \"e5a57460-cf67-4cef-80a0-bb873598b9ed\" (UID: \"e5a57460-cf67-4cef-80a0-bb873598b9ed\") " Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.201275 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5a57460-cf67-4cef-80a0-bb873598b9ed-config-data\") pod \"e5a57460-cf67-4cef-80a0-bb873598b9ed\" (UID: \"e5a57460-cf67-4cef-80a0-bb873598b9ed\") " Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.208727 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5a57460-cf67-4cef-80a0-bb873598b9ed-kube-api-access-w7q8c" (OuterVolumeSpecName: "kube-api-access-w7q8c") pod "e5a57460-cf67-4cef-80a0-bb873598b9ed" (UID: "e5a57460-cf67-4cef-80a0-bb873598b9ed"). InnerVolumeSpecName "kube-api-access-w7q8c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.284962 4946 generic.go:334] "Generic (PLEG): container finished" podID="e5a57460-cf67-4cef-80a0-bb873598b9ed" containerID="d91b1d82ecac143e4eed0fe464accd8eaa0a9e7ff4df07993bbdbc1616612709" exitCode=0 Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.285015 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"e5a57460-cf67-4cef-80a0-bb873598b9ed","Type":"ContainerDied","Data":"d91b1d82ecac143e4eed0fe464accd8eaa0a9e7ff4df07993bbdbc1616612709"} Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.285055 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"e5a57460-cf67-4cef-80a0-bb873598b9ed","Type":"ContainerDied","Data":"5af1303474ed15dd81ca29b95812329e79acaccc59300d086d567af144c3072b"} Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.285726 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.286586 4946 scope.go:117] "RemoveContainer" containerID="d91b1d82ecac143e4eed0fe464accd8eaa0a9e7ff4df07993bbdbc1616612709" Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.293823 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a","Type":"ContainerDied","Data":"7b2e33729b7e57013ceeef0c12c2268fff77a528cf19c54cbfff8b269edbf34e"} Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.293892 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.296571 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.296657 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6aad4350-9266-48ae-b3ae-b4ec6046fbb1","Type":"ContainerDied","Data":"034964ce32bcc7df81c867544ecbe9c3ba082f658252858409bc86d6073fce86"} Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.304553 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7q8c\" (UniqueName: \"kubernetes.io/projected/e5a57460-cf67-4cef-80a0-bb873598b9ed-kube-api-access-w7q8c\") on node \"crc\" DevicePath \"\"" Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.317103 4946 scope.go:117] "RemoveContainer" containerID="d91b1d82ecac143e4eed0fe464accd8eaa0a9e7ff4df07993bbdbc1616612709" Nov 28 09:42:07 crc kubenswrapper[4946]: E1128 09:42:07.321908 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d91b1d82ecac143e4eed0fe464accd8eaa0a9e7ff4df07993bbdbc1616612709\": container with ID starting with d91b1d82ecac143e4eed0fe464accd8eaa0a9e7ff4df07993bbdbc1616612709 not found: ID does not exist" containerID="d91b1d82ecac143e4eed0fe464accd8eaa0a9e7ff4df07993bbdbc1616612709" Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.321950 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d91b1d82ecac143e4eed0fe464accd8eaa0a9e7ff4df07993bbdbc1616612709"} err="failed to get container status \"d91b1d82ecac143e4eed0fe464accd8eaa0a9e7ff4df07993bbdbc1616612709\": rpc error: code = NotFound desc = could not find container \"d91b1d82ecac143e4eed0fe464accd8eaa0a9e7ff4df07993bbdbc1616612709\": container with ID starting with d91b1d82ecac143e4eed0fe464accd8eaa0a9e7ff4df07993bbdbc1616612709 not found: ID does not exist" Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.321972 4946 scope.go:117] "RemoveContainer" containerID="461d9f2607cac19a1f5c39e9388f04cd5c3a00c8d330d862c1f6b0f314304e8d" Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.336626 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.370630 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.371053 4946 scope.go:117] "RemoveContainer" containerID="30580d9a89c94dc908f75e10a22137d36a1aa6bc4319f9b5776f47e91685ce68" Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.384567 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.399150 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.409419 4946 scope.go:117] "RemoveContainer" containerID="b9e3cc0b7dacace2f336a7508f89e0180d48d6bf43b2581788402aa77efcbf39" Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.412839 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 28 09:42:07 crc kubenswrapper[4946]: E1128 09:42:07.414192 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5a57460-cf67-4cef-80a0-bb873598b9ed" containerName="nova-cell0-conductor-conductor" Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.414502 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5a57460-cf67-4cef-80a0-bb873598b9ed" 
containerName="nova-cell0-conductor-conductor" Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.414845 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5a57460-cf67-4cef-80a0-bb873598b9ed" containerName="nova-cell0-conductor-conductor" Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.416012 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.417999 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.425350 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.439258 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.440858 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.443277 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.447953 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.507823 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcdfa2ec-3907-4ddd-be31-e87f640bf0d1-logs\") pod \"nova-metadata-0\" (UID: \"fcdfa2ec-3907-4ddd-be31-e87f640bf0d1\") " pod="openstack/nova-metadata-0" Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.507878 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc39fcf1-2b4b-4630-9bf1-7482ffc4a262-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fc39fcf1-2b4b-4630-9bf1-7482ffc4a262\") " pod="openstack/nova-scheduler-0" Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.507903 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6652\" (UniqueName: \"kubernetes.io/projected/fcdfa2ec-3907-4ddd-be31-e87f640bf0d1-kube-api-access-w6652\") pod \"nova-metadata-0\" (UID: \"fcdfa2ec-3907-4ddd-be31-e87f640bf0d1\") " pod="openstack/nova-metadata-0" Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.507979 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcdfa2ec-3907-4ddd-be31-e87f640bf0d1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fcdfa2ec-3907-4ddd-be31-e87f640bf0d1\") " pod="openstack/nova-metadata-0" Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.508018 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcdfa2ec-3907-4ddd-be31-e87f640bf0d1-config-data\") pod \"nova-metadata-0\" (UID: \"fcdfa2ec-3907-4ddd-be31-e87f640bf0d1\") " pod="openstack/nova-metadata-0" Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.508047 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmxbd\" (UniqueName: 
\"kubernetes.io/projected/fc39fcf1-2b4b-4630-9bf1-7482ffc4a262-kube-api-access-lmxbd\") pod \"nova-scheduler-0\" (UID: \"fc39fcf1-2b4b-4630-9bf1-7482ffc4a262\") " pod="openstack/nova-scheduler-0" Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.508119 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc39fcf1-2b4b-4630-9bf1-7482ffc4a262-config-data\") pod \"nova-scheduler-0\" (UID: \"fc39fcf1-2b4b-4630-9bf1-7482ffc4a262\") " pod="openstack/nova-scheduler-0" Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.610447 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcdfa2ec-3907-4ddd-be31-e87f640bf0d1-logs\") pod \"nova-metadata-0\" (UID: \"fcdfa2ec-3907-4ddd-be31-e87f640bf0d1\") " pod="openstack/nova-metadata-0" Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.610520 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc39fcf1-2b4b-4630-9bf1-7482ffc4a262-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fc39fcf1-2b4b-4630-9bf1-7482ffc4a262\") " pod="openstack/nova-scheduler-0" Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.610545 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6652\" (UniqueName: \"kubernetes.io/projected/fcdfa2ec-3907-4ddd-be31-e87f640bf0d1-kube-api-access-w6652\") pod \"nova-metadata-0\" (UID: \"fcdfa2ec-3907-4ddd-be31-e87f640bf0d1\") " pod="openstack/nova-metadata-0" Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.610605 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcdfa2ec-3907-4ddd-be31-e87f640bf0d1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fcdfa2ec-3907-4ddd-be31-e87f640bf0d1\") " pod="openstack/nova-metadata-0" Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.610642 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcdfa2ec-3907-4ddd-be31-e87f640bf0d1-config-data\") pod \"nova-metadata-0\" (UID: \"fcdfa2ec-3907-4ddd-be31-e87f640bf0d1\") " pod="openstack/nova-metadata-0" Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.610669 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmxbd\" (UniqueName: \"kubernetes.io/projected/fc39fcf1-2b4b-4630-9bf1-7482ffc4a262-kube-api-access-lmxbd\") pod \"nova-scheduler-0\" (UID: \"fc39fcf1-2b4b-4630-9bf1-7482ffc4a262\") " pod="openstack/nova-scheduler-0" Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.610701 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc39fcf1-2b4b-4630-9bf1-7482ffc4a262-config-data\") pod \"nova-scheduler-0\" (UID: \"fc39fcf1-2b4b-4630-9bf1-7482ffc4a262\") " pod="openstack/nova-scheduler-0" Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.610915 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcdfa2ec-3907-4ddd-be31-e87f640bf0d1-logs\") pod \"nova-metadata-0\" (UID: \"fcdfa2ec-3907-4ddd-be31-e87f640bf0d1\") " pod="openstack/nova-metadata-0" Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.614745 4946 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcdfa2ec-3907-4ddd-be31-e87f640bf0d1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fcdfa2ec-3907-4ddd-be31-e87f640bf0d1\") " pod="openstack/nova-metadata-0" Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.615248 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc39fcf1-2b4b-4630-9bf1-7482ffc4a262-config-data\") pod \"nova-scheduler-0\" (UID: \"fc39fcf1-2b4b-4630-9bf1-7482ffc4a262\") " pod="openstack/nova-scheduler-0" Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.619146 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc39fcf1-2b4b-4630-9bf1-7482ffc4a262-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fc39fcf1-2b4b-4630-9bf1-7482ffc4a262\") " pod="openstack/nova-scheduler-0" Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.620420 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcdfa2ec-3907-4ddd-be31-e87f640bf0d1-config-data\") pod \"nova-metadata-0\" (UID: \"fcdfa2ec-3907-4ddd-be31-e87f640bf0d1\") " pod="openstack/nova-metadata-0" Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.634336 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6652\" (UniqueName: \"kubernetes.io/projected/fcdfa2ec-3907-4ddd-be31-e87f640bf0d1-kube-api-access-w6652\") pod \"nova-metadata-0\" (UID: \"fcdfa2ec-3907-4ddd-be31-e87f640bf0d1\") " pod="openstack/nova-metadata-0" Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.637171 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmxbd\" (UniqueName: \"kubernetes.io/projected/fc39fcf1-2b4b-4630-9bf1-7482ffc4a262-kube-api-access-lmxbd\") pod \"nova-scheduler-0\" (UID: \"fc39fcf1-2b4b-4630-9bf1-7482ffc4a262\") " pod="openstack/nova-scheduler-0" Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.640794 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5a57460-cf67-4cef-80a0-bb873598b9ed-config-data" (OuterVolumeSpecName: "config-data") pod "e5a57460-cf67-4cef-80a0-bb873598b9ed" (UID: "e5a57460-cf67-4cef-80a0-bb873598b9ed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.645134 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5a57460-cf67-4cef-80a0-bb873598b9ed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5a57460-cf67-4cef-80a0-bb873598b9ed" (UID: "e5a57460-cf67-4cef-80a0-bb873598b9ed"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.685067 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.714378 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5a57460-cf67-4cef-80a0-bb873598b9ed-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.714427 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5a57460-cf67-4cef-80a0-bb873598b9ed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.742179 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.760385 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 09:42:07 crc kubenswrapper[4946]: I1128 09:42:07.978340 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 28 09:42:08 crc kubenswrapper[4946]: I1128 09:42:08.005400 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a" path="/var/lib/kubelet/pods/0d5dc3d3-c8b2-4a20-9345-242cb8a3e12a/volumes" Nov 28 09:42:08 crc kubenswrapper[4946]: I1128 09:42:08.006178 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6aad4350-9266-48ae-b3ae-b4ec6046fbb1" path="/var/lib/kubelet/pods/6aad4350-9266-48ae-b3ae-b4ec6046fbb1/volumes" Nov 28 09:42:08 crc kubenswrapper[4946]: I1128 09:42:08.006895 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0710305-34b3-4a15-841d-a90a2ff20c6a" path="/var/lib/kubelet/pods/c0710305-34b3-4a15-841d-a90a2ff20c6a/volumes" Nov 28 09:42:08 crc kubenswrapper[4946]: I1128 09:42:08.008287 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 28 09:42:08 crc kubenswrapper[4946]: I1128 09:42:08.012671 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 28 09:42:08 crc kubenswrapper[4946]: I1128 09:42:08.014072 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 28 09:42:08 crc kubenswrapper[4946]: I1128 09:42:08.017164 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 28 09:42:08 crc kubenswrapper[4946]: I1128 09:42:08.029711 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 28 09:42:08 crc kubenswrapper[4946]: I1128 09:42:08.125632 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ef4e01f-b481-4b66-9032-794f0179ce67-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"2ef4e01f-b481-4b66-9032-794f0179ce67\") " pod="openstack/nova-cell0-conductor-0" Nov 28 09:42:08 crc kubenswrapper[4946]: I1128 09:42:08.125807 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ef4e01f-b481-4b66-9032-794f0179ce67-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"2ef4e01f-b481-4b66-9032-794f0179ce67\") " pod="openstack/nova-cell0-conductor-0" Nov 28 09:42:08 crc kubenswrapper[4946]: I1128 09:42:08.125876 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfp9z\" (UniqueName: \"kubernetes.io/projected/2ef4e01f-b481-4b66-9032-794f0179ce67-kube-api-access-tfp9z\") pod \"nova-cell0-conductor-0\" (UID: \"2ef4e01f-b481-4b66-9032-794f0179ce67\") " pod="openstack/nova-cell0-conductor-0" Nov 28 09:42:08 crc kubenswrapper[4946]: I1128 09:42:08.228064 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ef4e01f-b481-4b66-9032-794f0179ce67-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"2ef4e01f-b481-4b66-9032-794f0179ce67\") " pod="openstack/nova-cell0-conductor-0" Nov 28 09:42:08 crc kubenswrapper[4946]: I1128 09:42:08.228126 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfp9z\" (UniqueName: \"kubernetes.io/projected/2ef4e01f-b481-4b66-9032-794f0179ce67-kube-api-access-tfp9z\") pod \"nova-cell0-conductor-0\" (UID: \"2ef4e01f-b481-4b66-9032-794f0179ce67\") " pod="openstack/nova-cell0-conductor-0" Nov 28 09:42:08 crc kubenswrapper[4946]: I1128 09:42:08.228236 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ef4e01f-b481-4b66-9032-794f0179ce67-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"2ef4e01f-b481-4b66-9032-794f0179ce67\") " pod="openstack/nova-cell0-conductor-0" Nov 28 09:42:08 crc kubenswrapper[4946]: I1128 09:42:08.235992 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ef4e01f-b481-4b66-9032-794f0179ce67-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"2ef4e01f-b481-4b66-9032-794f0179ce67\") " pod="openstack/nova-cell0-conductor-0" Nov 28 09:42:08 crc kubenswrapper[4946]: I1128 09:42:08.248204 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ef4e01f-b481-4b66-9032-794f0179ce67-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"2ef4e01f-b481-4b66-9032-794f0179ce67\") " pod="openstack/nova-cell0-conductor-0" Nov 28 09:42:08 crc kubenswrapper[4946]: I1128 09:42:08.253834 4946 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfp9z\" (UniqueName: \"kubernetes.io/projected/2ef4e01f-b481-4b66-9032-794f0179ce67-kube-api-access-tfp9z\") pod \"nova-cell0-conductor-0\" (UID: \"2ef4e01f-b481-4b66-9032-794f0179ce67\") " pod="openstack/nova-cell0-conductor-0" Nov 28 09:42:08 crc kubenswrapper[4946]: I1128 09:42:08.274258 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 09:42:08 crc kubenswrapper[4946]: W1128 09:42:08.294260 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfcdfa2ec_3907_4ddd_be31_e87f640bf0d1.slice/crio-4a75f93e30c1a4bd669409fd391a4811f546eb8fef72a003c3a924d7aad49f02 WatchSource:0}: Error finding container 4a75f93e30c1a4bd669409fd391a4811f546eb8fef72a003c3a924d7aad49f02: Status 404 returned error can't find the container with id 4a75f93e30c1a4bd669409fd391a4811f546eb8fef72a003c3a924d7aad49f02 Nov 28 09:42:08 crc kubenswrapper[4946]: I1128 09:42:08.317408 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"713eb85e-5541-4b11-aa8f-c0eb39a3596c","Type":"ContainerStarted","Data":"c5a6a07a7083ec7dcdb734241b2e85492255e6a686fd4a827f8790981e7b6d33"} Nov 28 09:42:08 crc kubenswrapper[4946]: I1128 09:42:08.317452 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"713eb85e-5541-4b11-aa8f-c0eb39a3596c","Type":"ContainerStarted","Data":"55309bf7db020ae64b5f901cfc9f79662c6497a6c0567a61ac26e673e7c8423c"} Nov 28 09:42:08 crc kubenswrapper[4946]: I1128 09:42:08.325409 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fcdfa2ec-3907-4ddd-be31-e87f640bf0d1","Type":"ContainerStarted","Data":"4a75f93e30c1a4bd669409fd391a4811f546eb8fef72a003c3a924d7aad49f02"} Nov 28 09:42:08 crc kubenswrapper[4946]: I1128 09:42:08.352006 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 09:42:08 crc kubenswrapper[4946]: I1128 09:42:08.512309 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 28 09:42:09 crc kubenswrapper[4946]: I1128 09:42:09.029229 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 28 09:42:09 crc kubenswrapper[4946]: I1128 09:42:09.344156 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fcdfa2ec-3907-4ddd-be31-e87f640bf0d1","Type":"ContainerStarted","Data":"cfa5e2d96c0a942ee070afcb6f1914f377461fc7cad24c12b8b811a50f42a698"} Nov 28 09:42:09 crc kubenswrapper[4946]: I1128 09:42:09.346930 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fcdfa2ec-3907-4ddd-be31-e87f640bf0d1","Type":"ContainerStarted","Data":"a4d1e52ec50baf0654bf8e6e999e6b5a08510c6746237bab2f6deff38eff7250"} Nov 28 09:42:09 crc kubenswrapper[4946]: I1128 09:42:09.347805 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"2ef4e01f-b481-4b66-9032-794f0179ce67","Type":"ContainerStarted","Data":"0be738471f0f6ec70157bbee5bb659ceb7d64090282c9900b707f96488a85c1d"} Nov 28 09:42:09 crc kubenswrapper[4946]: I1128 09:42:09.351645 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"713eb85e-5541-4b11-aa8f-c0eb39a3596c","Type":"ContainerStarted","Data":"115a34aaa196dceff3f7528ae4d2701036bb03097e879e7699851a0d9cf14640"} Nov 28 09:42:09 crc kubenswrapper[4946]: I1128 09:42:09.363813 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fc39fcf1-2b4b-4630-9bf1-7482ffc4a262","Type":"ContainerStarted","Data":"b132ab09228a31468bcb7edd9861c377e26ee1b4af0e15dfbd7c0095f81108fc"} Nov 28 09:42:09 crc kubenswrapper[4946]: I1128 09:42:09.363871 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fc39fcf1-2b4b-4630-9bf1-7482ffc4a262","Type":"ContainerStarted","Data":"b182323caf4a2ba2f32e694b8b569bf9ec4095321fb6ab2bdc19943152a214f1"} Nov 28 09:42:09 crc kubenswrapper[4946]: I1128 09:42:09.376936 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.376908291 podStartE2EDuration="2.376908291s" podCreationTimestamp="2025-11-28 09:42:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 09:42:09.36713977 +0000 UTC m=+10183.745204881" watchObservedRunningTime="2025-11-28 09:42:09.376908291 +0000 UTC m=+10183.754973432" Nov 28 09:42:09 crc kubenswrapper[4946]: I1128 09:42:09.399577 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.39955092 podStartE2EDuration="2.39955092s" podCreationTimestamp="2025-11-28 09:42:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 09:42:09.391616544 +0000 UTC m=+10183.769681685" watchObservedRunningTime="2025-11-28 09:42:09.39955092 +0000 UTC m=+10183.777616061" Nov 28 09:42:09 crc kubenswrapper[4946]: I1128 09:42:09.425277 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.425254905 podStartE2EDuration="3.425254905s" podCreationTimestamp="2025-11-28 09:42:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-28 09:42:09.411732941 +0000 UTC m=+10183.789798092" watchObservedRunningTime="2025-11-28 09:42:09.425254905 +0000 UTC m=+10183.803320026" Nov 28 09:42:10 crc kubenswrapper[4946]: I1128 09:42:10.006623 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5a57460-cf67-4cef-80a0-bb873598b9ed" path="/var/lib/kubelet/pods/e5a57460-cf67-4cef-80a0-bb873598b9ed/volumes" Nov 28 09:42:10 crc kubenswrapper[4946]: I1128 09:42:10.386402 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"2ef4e01f-b481-4b66-9032-794f0179ce67","Type":"ContainerStarted","Data":"2378ac173443095d530537c9e23ca968559ad2e9ab105e8eb06399368989cd0c"} Nov 28 09:42:10 crc kubenswrapper[4946]: I1128 09:42:10.413100 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=3.413077235 podStartE2EDuration="3.413077235s" podCreationTimestamp="2025-11-28 09:42:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 09:42:10.402408101 +0000 UTC m=+10184.780473222" watchObservedRunningTime="2025-11-28 09:42:10.413077235 +0000 UTC m=+10184.791142356" Nov 28 09:42:11 crc kubenswrapper[4946]: I1128 09:42:11.396071 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 28 09:42:12 crc kubenswrapper[4946]: I1128 09:42:12.742623 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 28 09:42:12 crc kubenswrapper[4946]: I1128 09:42:12.743025 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 28 09:42:12 crc kubenswrapper[4946]: I1128 09:42:12.761881 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 28 09:42:14 crc kubenswrapper[4946]: I1128 09:42:14.610978 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 28 09:42:16 crc kubenswrapper[4946]: I1128 09:42:16.967754 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 28 09:42:16 crc kubenswrapper[4946]: I1128 09:42:16.968118 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 28 09:42:17 crc kubenswrapper[4946]: I1128 09:42:17.742654 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 28 09:42:17 crc kubenswrapper[4946]: I1128 09:42:17.743058 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 28 09:42:17 crc kubenswrapper[4946]: I1128 09:42:17.761888 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 28 09:42:17 crc kubenswrapper[4946]: I1128 09:42:17.817847 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 28 09:42:18 crc kubenswrapper[4946]: I1128 09:42:18.049721 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="713eb85e-5541-4b11-aa8f-c0eb39a3596c" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.192:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 28 09:42:18 crc kubenswrapper[4946]: 
I1128 09:42:18.049723 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="713eb85e-5541-4b11-aa8f-c0eb39a3596c" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.192:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 28 09:42:18 crc kubenswrapper[4946]: I1128 09:42:18.514799 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 28 09:42:18 crc kubenswrapper[4946]: I1128 09:42:18.557135 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 28 09:42:18 crc kubenswrapper[4946]: I1128 09:42:18.824632 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="fcdfa2ec-3907-4ddd-be31-e87f640bf0d1" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.193:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 28 09:42:18 crc kubenswrapper[4946]: I1128 09:42:18.824785 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="fcdfa2ec-3907-4ddd-be31-e87f640bf0d1" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.193:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 28 09:42:26 crc kubenswrapper[4946]: I1128 09:42:26.972134 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 28 09:42:26 crc kubenswrapper[4946]: I1128 09:42:26.973207 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 28 09:42:26 crc kubenswrapper[4946]: I1128 09:42:26.973750 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 28 09:42:26 crc kubenswrapper[4946]: I1128 09:42:26.973829 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 28 09:42:26 crc kubenswrapper[4946]: I1128 09:42:26.976610 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 28 09:42:26 crc kubenswrapper[4946]: I1128 09:42:26.978713 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 28 09:42:27 crc kubenswrapper[4946]: I1128 09:42:27.746648 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 28 09:42:27 crc kubenswrapper[4946]: I1128 09:42:27.747604 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 28 09:42:27 crc kubenswrapper[4946]: I1128 09:42:27.750852 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 28 09:42:28 crc kubenswrapper[4946]: I1128 09:42:28.614959 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 28 09:42:29 crc kubenswrapper[4946]: I1128 09:42:29.779892 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl"] Nov 28 09:42:29 crc kubenswrapper[4946]: I1128 09:42:29.781996 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl" Nov 28 09:42:29 crc kubenswrapper[4946]: I1128 09:42:29.785038 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 28 09:42:29 crc kubenswrapper[4946]: I1128 09:42:29.785038 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-psw2j" Nov 28 09:42:29 crc kubenswrapper[4946]: I1128 09:42:29.785095 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 28 09:42:29 crc kubenswrapper[4946]: I1128 09:42:29.785184 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 09:42:29 crc kubenswrapper[4946]: I1128 09:42:29.785256 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Nov 28 09:42:29 crc kubenswrapper[4946]: I1128 09:42:29.787297 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-cells-global-config" Nov 28 09:42:29 crc kubenswrapper[4946]: I1128 09:42:29.787449 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Nov 28 09:42:29 crc kubenswrapper[4946]: I1128 09:42:29.805866 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl"] Nov 28 09:42:29 crc kubenswrapper[4946]: I1128 09:42:29.854105 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/84956b58-bb77-4d59-9bab-5112c6660a05-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl\" (UID: \"84956b58-bb77-4d59-9bab-5112c6660a05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl" Nov 28 09:42:29 crc kubenswrapper[4946]: I1128 09:42:29.854179 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/84956b58-bb77-4d59-9bab-5112c6660a05-nova-cells-global-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl\" (UID: \"84956b58-bb77-4d59-9bab-5112c6660a05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl" Nov 28 09:42:29 crc kubenswrapper[4946]: I1128 09:42:29.854226 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/84956b58-bb77-4d59-9bab-5112c6660a05-ceph\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl\" (UID: \"84956b58-bb77-4d59-9bab-5112c6660a05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl" Nov 28 09:42:29 crc kubenswrapper[4946]: I1128 09:42:29.854274 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/84956b58-bb77-4d59-9bab-5112c6660a05-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl\" (UID: \"84956b58-bb77-4d59-9bab-5112c6660a05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl" Nov 28 09:42:29 crc kubenswrapper[4946]: I1128 09:42:29.854308 4946 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84956b58-bb77-4d59-9bab-5112c6660a05-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl\" (UID: \"84956b58-bb77-4d59-9bab-5112c6660a05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl" Nov 28 09:42:29 crc kubenswrapper[4946]: I1128 09:42:29.854339 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/84956b58-bb77-4d59-9bab-5112c6660a05-ssh-key\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl\" (UID: \"84956b58-bb77-4d59-9bab-5112c6660a05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl" Nov 28 09:42:29 crc kubenswrapper[4946]: I1128 09:42:29.854396 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/84956b58-bb77-4d59-9bab-5112c6660a05-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl\" (UID: \"84956b58-bb77-4d59-9bab-5112c6660a05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl" Nov 28 09:42:29 crc kubenswrapper[4946]: I1128 09:42:29.854450 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mcks\" (UniqueName: \"kubernetes.io/projected/84956b58-bb77-4d59-9bab-5112c6660a05-kube-api-access-4mcks\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl\" (UID: \"84956b58-bb77-4d59-9bab-5112c6660a05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl" Nov 28 09:42:29 crc kubenswrapper[4946]: I1128 09:42:29.854499 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/84956b58-bb77-4d59-9bab-5112c6660a05-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl\" (UID: \"84956b58-bb77-4d59-9bab-5112c6660a05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl" Nov 28 09:42:29 crc kubenswrapper[4946]: I1128 09:42:29.854538 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/84956b58-bb77-4d59-9bab-5112c6660a05-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl\" (UID: \"84956b58-bb77-4d59-9bab-5112c6660a05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl" Nov 28 09:42:29 crc kubenswrapper[4946]: I1128 09:42:29.854564 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/84956b58-bb77-4d59-9bab-5112c6660a05-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl\" (UID: \"84956b58-bb77-4d59-9bab-5112c6660a05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl" Nov 28 09:42:29 crc kubenswrapper[4946]: I1128 09:42:29.955630 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-1\" (UniqueName: 
\"kubernetes.io/configmap/84956b58-bb77-4d59-9bab-5112c6660a05-nova-cells-global-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl\" (UID: \"84956b58-bb77-4d59-9bab-5112c6660a05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl" Nov 28 09:42:29 crc kubenswrapper[4946]: I1128 09:42:29.955739 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/84956b58-bb77-4d59-9bab-5112c6660a05-ceph\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl\" (UID: \"84956b58-bb77-4d59-9bab-5112c6660a05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl" Nov 28 09:42:29 crc kubenswrapper[4946]: I1128 09:42:29.955818 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/84956b58-bb77-4d59-9bab-5112c6660a05-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl\" (UID: \"84956b58-bb77-4d59-9bab-5112c6660a05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl" Nov 28 09:42:29 crc kubenswrapper[4946]: I1128 09:42:29.955868 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84956b58-bb77-4d59-9bab-5112c6660a05-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl\" (UID: \"84956b58-bb77-4d59-9bab-5112c6660a05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl" Nov 28 09:42:29 crc kubenswrapper[4946]: I1128 09:42:29.955916 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/84956b58-bb77-4d59-9bab-5112c6660a05-ssh-key\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl\" (UID: \"84956b58-bb77-4d59-9bab-5112c6660a05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl" Nov 28 09:42:29 crc kubenswrapper[4946]: I1128 09:42:29.956008 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/84956b58-bb77-4d59-9bab-5112c6660a05-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl\" (UID: \"84956b58-bb77-4d59-9bab-5112c6660a05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl" Nov 28 09:42:29 crc kubenswrapper[4946]: I1128 09:42:29.956081 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mcks\" (UniqueName: \"kubernetes.io/projected/84956b58-bb77-4d59-9bab-5112c6660a05-kube-api-access-4mcks\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl\" (UID: \"84956b58-bb77-4d59-9bab-5112c6660a05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl" Nov 28 09:42:29 crc kubenswrapper[4946]: I1128 09:42:29.956119 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/84956b58-bb77-4d59-9bab-5112c6660a05-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl\" (UID: \"84956b58-bb77-4d59-9bab-5112c6660a05\") " 
pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl" Nov 28 09:42:29 crc kubenswrapper[4946]: I1128 09:42:29.956168 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/84956b58-bb77-4d59-9bab-5112c6660a05-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl\" (UID: \"84956b58-bb77-4d59-9bab-5112c6660a05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl" Nov 28 09:42:29 crc kubenswrapper[4946]: I1128 09:42:29.956203 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/84956b58-bb77-4d59-9bab-5112c6660a05-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl\" (UID: \"84956b58-bb77-4d59-9bab-5112c6660a05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl" Nov 28 09:42:29 crc kubenswrapper[4946]: I1128 09:42:29.956271 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/84956b58-bb77-4d59-9bab-5112c6660a05-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl\" (UID: \"84956b58-bb77-4d59-9bab-5112c6660a05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl" Nov 28 09:42:29 crc kubenswrapper[4946]: I1128 09:42:29.958666 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/84956b58-bb77-4d59-9bab-5112c6660a05-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl\" (UID: \"84956b58-bb77-4d59-9bab-5112c6660a05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl" Nov 28 09:42:29 crc kubenswrapper[4946]: I1128 09:42:29.959087 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/84956b58-bb77-4d59-9bab-5112c6660a05-nova-cells-global-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl\" (UID: \"84956b58-bb77-4d59-9bab-5112c6660a05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl" Nov 28 09:42:29 crc kubenswrapper[4946]: I1128 09:42:29.962870 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/84956b58-bb77-4d59-9bab-5112c6660a05-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl\" (UID: \"84956b58-bb77-4d59-9bab-5112c6660a05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl" Nov 28 09:42:29 crc kubenswrapper[4946]: I1128 09:42:29.963582 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/84956b58-bb77-4d59-9bab-5112c6660a05-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl\" (UID: \"84956b58-bb77-4d59-9bab-5112c6660a05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl" Nov 28 09:42:29 crc kubenswrapper[4946]: I1128 09:42:29.963921 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/84956b58-bb77-4d59-9bab-5112c6660a05-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl\" (UID: \"84956b58-bb77-4d59-9bab-5112c6660a05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl" Nov 28 09:42:29 crc kubenswrapper[4946]: I1128 09:42:29.965305 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84956b58-bb77-4d59-9bab-5112c6660a05-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl\" (UID: \"84956b58-bb77-4d59-9bab-5112c6660a05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl" Nov 28 09:42:29 crc kubenswrapper[4946]: I1128 09:42:29.966353 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/84956b58-bb77-4d59-9bab-5112c6660a05-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl\" (UID: \"84956b58-bb77-4d59-9bab-5112c6660a05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl" Nov 28 09:42:29 crc kubenswrapper[4946]: I1128 09:42:29.968340 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/84956b58-bb77-4d59-9bab-5112c6660a05-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl\" (UID: \"84956b58-bb77-4d59-9bab-5112c6660a05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl" Nov 28 09:42:29 crc kubenswrapper[4946]: I1128 09:42:29.977185 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/84956b58-bb77-4d59-9bab-5112c6660a05-ceph\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl\" (UID: \"84956b58-bb77-4d59-9bab-5112c6660a05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl" Nov 28 09:42:29 crc kubenswrapper[4946]: I1128 09:42:29.984719 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mcks\" (UniqueName: \"kubernetes.io/projected/84956b58-bb77-4d59-9bab-5112c6660a05-kube-api-access-4mcks\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl\" (UID: \"84956b58-bb77-4d59-9bab-5112c6660a05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl" Nov 28 09:42:29 crc kubenswrapper[4946]: I1128 09:42:29.989071 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/84956b58-bb77-4d59-9bab-5112c6660a05-ssh-key\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl\" (UID: \"84956b58-bb77-4d59-9bab-5112c6660a05\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl" Nov 28 09:42:30 crc kubenswrapper[4946]: I1128 09:42:30.111802 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl" Nov 28 09:42:30 crc kubenswrapper[4946]: I1128 09:42:30.696541 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl"] Nov 28 09:42:30 crc kubenswrapper[4946]: W1128 09:42:30.700682 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84956b58_bb77_4d59_9bab_5112c6660a05.slice/crio-73560e8342a417bb2bcef19bceddb0f077dfa6dadf95778f77bb45a2e4d9dbbc WatchSource:0}: Error finding container 73560e8342a417bb2bcef19bceddb0f077dfa6dadf95778f77bb45a2e4d9dbbc: Status 404 returned error can't find the container with id 73560e8342a417bb2bcef19bceddb0f077dfa6dadf95778f77bb45a2e4d9dbbc Nov 28 09:42:31 crc kubenswrapper[4946]: I1128 09:42:31.644780 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl" event={"ID":"84956b58-bb77-4d59-9bab-5112c6660a05","Type":"ContainerStarted","Data":"73560e8342a417bb2bcef19bceddb0f077dfa6dadf95778f77bb45a2e4d9dbbc"} Nov 28 09:42:32 crc kubenswrapper[4946]: I1128 09:42:32.661411 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl" event={"ID":"84956b58-bb77-4d59-9bab-5112c6660a05","Type":"ContainerStarted","Data":"4e319fdf3f7f922d55f20d9242ed363d35c584c190fd3eaf2963bfce119657ca"} Nov 28 09:42:32 crc kubenswrapper[4946]: I1128 09:42:32.693131 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl" podStartSLOduration=2.53928098 podStartE2EDuration="3.69310291s" podCreationTimestamp="2025-11-28 09:42:29 +0000 UTC" firstStartedPulling="2025-11-28 09:42:30.70486409 +0000 UTC m=+10205.082929221" lastFinishedPulling="2025-11-28 09:42:31.858686 +0000 UTC m=+10206.236751151" observedRunningTime="2025-11-28 09:42:32.683709998 +0000 UTC m=+10207.061775149" watchObservedRunningTime="2025-11-28 09:42:32.69310291 +0000 UTC m=+10207.071168061" Nov 28 09:42:46 crc kubenswrapper[4946]: I1128 09:42:46.481751 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-chmpn"] Nov 28 09:42:46 crc kubenswrapper[4946]: I1128 09:42:46.489269 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-chmpn" Nov 28 09:42:46 crc kubenswrapper[4946]: I1128 09:42:46.504476 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-chmpn"] Nov 28 09:42:46 crc kubenswrapper[4946]: I1128 09:42:46.662991 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad932fde-2de8-404c-8290-5933aad8ad35-utilities\") pod \"certified-operators-chmpn\" (UID: \"ad932fde-2de8-404c-8290-5933aad8ad35\") " pod="openshift-marketplace/certified-operators-chmpn" Nov 28 09:42:46 crc kubenswrapper[4946]: I1128 09:42:46.663351 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad932fde-2de8-404c-8290-5933aad8ad35-catalog-content\") pod \"certified-operators-chmpn\" (UID: \"ad932fde-2de8-404c-8290-5933aad8ad35\") " pod="openshift-marketplace/certified-operators-chmpn" Nov 28 09:42:46 crc kubenswrapper[4946]: I1128 09:42:46.663520 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qmqr\" (UniqueName: \"kubernetes.io/projected/ad932fde-2de8-404c-8290-5933aad8ad35-kube-api-access-9qmqr\") pod \"certified-operators-chmpn\" (UID: \"ad932fde-2de8-404c-8290-5933aad8ad35\") " pod="openshift-marketplace/certified-operators-chmpn" Nov 28 09:42:46 crc kubenswrapper[4946]: I1128 09:42:46.765626 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad932fde-2de8-404c-8290-5933aad8ad35-catalog-content\") pod \"certified-operators-chmpn\" (UID: \"ad932fde-2de8-404c-8290-5933aad8ad35\") " pod="openshift-marketplace/certified-operators-chmpn" Nov 28 09:42:46 crc kubenswrapper[4946]: I1128 09:42:46.765694 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qmqr\" (UniqueName: \"kubernetes.io/projected/ad932fde-2de8-404c-8290-5933aad8ad35-kube-api-access-9qmqr\") pod \"certified-operators-chmpn\" (UID: \"ad932fde-2de8-404c-8290-5933aad8ad35\") " pod="openshift-marketplace/certified-operators-chmpn" Nov 28 09:42:46 crc kubenswrapper[4946]: I1128 09:42:46.765847 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad932fde-2de8-404c-8290-5933aad8ad35-utilities\") pod \"certified-operators-chmpn\" (UID: \"ad932fde-2de8-404c-8290-5933aad8ad35\") " pod="openshift-marketplace/certified-operators-chmpn" Nov 28 09:42:46 crc kubenswrapper[4946]: I1128 09:42:46.766506 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad932fde-2de8-404c-8290-5933aad8ad35-catalog-content\") pod \"certified-operators-chmpn\" (UID: \"ad932fde-2de8-404c-8290-5933aad8ad35\") " pod="openshift-marketplace/certified-operators-chmpn" Nov 28 09:42:46 crc kubenswrapper[4946]: I1128 09:42:46.766763 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad932fde-2de8-404c-8290-5933aad8ad35-utilities\") pod \"certified-operators-chmpn\" (UID: \"ad932fde-2de8-404c-8290-5933aad8ad35\") " pod="openshift-marketplace/certified-operators-chmpn" Nov 28 09:42:46 crc kubenswrapper[4946]: I1128 09:42:46.791874 4946 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-9qmqr\" (UniqueName: \"kubernetes.io/projected/ad932fde-2de8-404c-8290-5933aad8ad35-kube-api-access-9qmqr\") pod \"certified-operators-chmpn\" (UID: \"ad932fde-2de8-404c-8290-5933aad8ad35\") " pod="openshift-marketplace/certified-operators-chmpn" Nov 28 09:42:46 crc kubenswrapper[4946]: I1128 09:42:46.820662 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-chmpn" Nov 28 09:42:47 crc kubenswrapper[4946]: I1128 09:42:47.355764 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-chmpn"] Nov 28 09:42:47 crc kubenswrapper[4946]: E1128 09:42:47.732533 4946 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad932fde_2de8_404c_8290_5933aad8ad35.slice/crio-conmon-fb83ed6d26c438d99c4395cab3894675397c8013e7c27a912d334cdc654b6b56.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad932fde_2de8_404c_8290_5933aad8ad35.slice/crio-fb83ed6d26c438d99c4395cab3894675397c8013e7c27a912d334cdc654b6b56.scope\": RecentStats: unable to find data in memory cache]" Nov 28 09:42:47 crc kubenswrapper[4946]: I1128 09:42:47.857625 4946 generic.go:334] "Generic (PLEG): container finished" podID="ad932fde-2de8-404c-8290-5933aad8ad35" containerID="fb83ed6d26c438d99c4395cab3894675397c8013e7c27a912d334cdc654b6b56" exitCode=0 Nov 28 09:42:47 crc kubenswrapper[4946]: I1128 09:42:47.858011 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-chmpn" event={"ID":"ad932fde-2de8-404c-8290-5933aad8ad35","Type":"ContainerDied","Data":"fb83ed6d26c438d99c4395cab3894675397c8013e7c27a912d334cdc654b6b56"} Nov 28 09:42:47 crc kubenswrapper[4946]: I1128 09:42:47.858042 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-chmpn" event={"ID":"ad932fde-2de8-404c-8290-5933aad8ad35","Type":"ContainerStarted","Data":"2a14eb4befa258af28d428696cb6545ec3dbcded5b170b25163791b625ce1457"} Nov 28 09:42:48 crc kubenswrapper[4946]: I1128 09:42:48.831477 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-h56jl"] Nov 28 09:42:48 crc kubenswrapper[4946]: I1128 09:42:48.833983 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h56jl" Nov 28 09:42:48 crc kubenswrapper[4946]: I1128 09:42:48.845318 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h56jl"] Nov 28 09:42:49 crc kubenswrapper[4946]: I1128 09:42:49.017667 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6429\" (UniqueName: \"kubernetes.io/projected/ffe455da-e83d-4ba4-bccb-c7cfe3a0beef-kube-api-access-g6429\") pod \"community-operators-h56jl\" (UID: \"ffe455da-e83d-4ba4-bccb-c7cfe3a0beef\") " pod="openshift-marketplace/community-operators-h56jl" Nov 28 09:42:49 crc kubenswrapper[4946]: I1128 09:42:49.017963 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffe455da-e83d-4ba4-bccb-c7cfe3a0beef-utilities\") pod \"community-operators-h56jl\" (UID: \"ffe455da-e83d-4ba4-bccb-c7cfe3a0beef\") " pod="openshift-marketplace/community-operators-h56jl" Nov 28 09:42:49 crc kubenswrapper[4946]: I1128 09:42:49.017989 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffe455da-e83d-4ba4-bccb-c7cfe3a0beef-catalog-content\") pod \"community-operators-h56jl\" (UID: \"ffe455da-e83d-4ba4-bccb-c7cfe3a0beef\") " pod="openshift-marketplace/community-operators-h56jl" Nov 28 09:42:49 crc kubenswrapper[4946]: I1128 09:42:49.120776 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6429\" (UniqueName: \"kubernetes.io/projected/ffe455da-e83d-4ba4-bccb-c7cfe3a0beef-kube-api-access-g6429\") pod \"community-operators-h56jl\" (UID: \"ffe455da-e83d-4ba4-bccb-c7cfe3a0beef\") " pod="openshift-marketplace/community-operators-h56jl" Nov 28 09:42:49 crc kubenswrapper[4946]: I1128 09:42:49.120846 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffe455da-e83d-4ba4-bccb-c7cfe3a0beef-utilities\") pod \"community-operators-h56jl\" (UID: \"ffe455da-e83d-4ba4-bccb-c7cfe3a0beef\") " pod="openshift-marketplace/community-operators-h56jl" Nov 28 09:42:49 crc kubenswrapper[4946]: I1128 09:42:49.120886 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffe455da-e83d-4ba4-bccb-c7cfe3a0beef-catalog-content\") pod \"community-operators-h56jl\" (UID: \"ffe455da-e83d-4ba4-bccb-c7cfe3a0beef\") " pod="openshift-marketplace/community-operators-h56jl" Nov 28 09:42:49 crc kubenswrapper[4946]: I1128 09:42:49.121217 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffe455da-e83d-4ba4-bccb-c7cfe3a0beef-utilities\") pod \"community-operators-h56jl\" (UID: \"ffe455da-e83d-4ba4-bccb-c7cfe3a0beef\") " pod="openshift-marketplace/community-operators-h56jl" Nov 28 09:42:49 crc kubenswrapper[4946]: I1128 09:42:49.121250 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffe455da-e83d-4ba4-bccb-c7cfe3a0beef-catalog-content\") pod \"community-operators-h56jl\" (UID: \"ffe455da-e83d-4ba4-bccb-c7cfe3a0beef\") " pod="openshift-marketplace/community-operators-h56jl" Nov 28 09:42:49 crc kubenswrapper[4946]: I1128 09:42:49.142057 4946 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-g6429\" (UniqueName: \"kubernetes.io/projected/ffe455da-e83d-4ba4-bccb-c7cfe3a0beef-kube-api-access-g6429\") pod \"community-operators-h56jl\" (UID: \"ffe455da-e83d-4ba4-bccb-c7cfe3a0beef\") " pod="openshift-marketplace/community-operators-h56jl" Nov 28 09:42:49 crc kubenswrapper[4946]: I1128 09:42:49.177814 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h56jl" Nov 28 09:42:49 crc kubenswrapper[4946]: W1128 09:42:49.801866 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podffe455da_e83d_4ba4_bccb_c7cfe3a0beef.slice/crio-ba2837b1eff12e6bdfe83a322b67d6855a40ffd2f79910ae03d7f5a9b13e14dd WatchSource:0}: Error finding container ba2837b1eff12e6bdfe83a322b67d6855a40ffd2f79910ae03d7f5a9b13e14dd: Status 404 returned error can't find the container with id ba2837b1eff12e6bdfe83a322b67d6855a40ffd2f79910ae03d7f5a9b13e14dd Nov 28 09:42:49 crc kubenswrapper[4946]: I1128 09:42:49.811983 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h56jl"] Nov 28 09:42:49 crc kubenswrapper[4946]: I1128 09:42:49.894128 4946 generic.go:334] "Generic (PLEG): container finished" podID="ad932fde-2de8-404c-8290-5933aad8ad35" containerID="2345f98c7488107940f3dfbc3ea93e96bf00f80d59e32c214a31582883b836de" exitCode=0 Nov 28 09:42:49 crc kubenswrapper[4946]: I1128 09:42:49.895086 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-chmpn" event={"ID":"ad932fde-2de8-404c-8290-5933aad8ad35","Type":"ContainerDied","Data":"2345f98c7488107940f3dfbc3ea93e96bf00f80d59e32c214a31582883b836de"} Nov 28 09:42:49 crc kubenswrapper[4946]: I1128 09:42:49.897045 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h56jl" event={"ID":"ffe455da-e83d-4ba4-bccb-c7cfe3a0beef","Type":"ContainerStarted","Data":"ba2837b1eff12e6bdfe83a322b67d6855a40ffd2f79910ae03d7f5a9b13e14dd"} Nov 28 09:42:50 crc kubenswrapper[4946]: I1128 09:42:50.908709 4946 generic.go:334] "Generic (PLEG): container finished" podID="ffe455da-e83d-4ba4-bccb-c7cfe3a0beef" containerID="bae65919f3c9900001d85f4e68be19b98fa67ac1cb7b55be639b1c3b92941102" exitCode=0 Nov 28 09:42:50 crc kubenswrapper[4946]: I1128 09:42:50.908917 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h56jl" event={"ID":"ffe455da-e83d-4ba4-bccb-c7cfe3a0beef","Type":"ContainerDied","Data":"bae65919f3c9900001d85f4e68be19b98fa67ac1cb7b55be639b1c3b92941102"} Nov 28 09:42:51 crc kubenswrapper[4946]: I1128 09:42:51.921066 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-chmpn" event={"ID":"ad932fde-2de8-404c-8290-5933aad8ad35","Type":"ContainerStarted","Data":"293c896459351b946ac150edd81fbbece7b7c0f58fd978192374adfaf7030060"} Nov 28 09:42:51 crc kubenswrapper[4946]: I1128 09:42:51.924860 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h56jl" event={"ID":"ffe455da-e83d-4ba4-bccb-c7cfe3a0beef","Type":"ContainerStarted","Data":"9951613204a482aaef077820ba491ead0a732f55db17528f44c7671cd7a62f3b"} Nov 28 09:42:51 crc kubenswrapper[4946]: I1128 09:42:51.941364 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-chmpn" 
podStartSLOduration=2.892572653 podStartE2EDuration="5.941347909s" podCreationTimestamp="2025-11-28 09:42:46 +0000 UTC" firstStartedPulling="2025-11-28 09:42:47.86008162 +0000 UTC m=+10222.238146731" lastFinishedPulling="2025-11-28 09:42:50.908856886 +0000 UTC m=+10225.286921987" observedRunningTime="2025-11-28 09:42:51.938744185 +0000 UTC m=+10226.316809296" watchObservedRunningTime="2025-11-28 09:42:51.941347909 +0000 UTC m=+10226.319413020" Nov 28 09:42:52 crc kubenswrapper[4946]: I1128 09:42:52.936963 4946 generic.go:334] "Generic (PLEG): container finished" podID="ffe455da-e83d-4ba4-bccb-c7cfe3a0beef" containerID="9951613204a482aaef077820ba491ead0a732f55db17528f44c7671cd7a62f3b" exitCode=0 Nov 28 09:42:52 crc kubenswrapper[4946]: I1128 09:42:52.938703 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h56jl" event={"ID":"ffe455da-e83d-4ba4-bccb-c7cfe3a0beef","Type":"ContainerDied","Data":"9951613204a482aaef077820ba491ead0a732f55db17528f44c7671cd7a62f3b"} Nov 28 09:42:52 crc kubenswrapper[4946]: I1128 09:42:52.938730 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h56jl" event={"ID":"ffe455da-e83d-4ba4-bccb-c7cfe3a0beef","Type":"ContainerStarted","Data":"3d9c7ef64581266860c92e76b7915d01e4003bb50791fb88d31c91b42f162fb8"} Nov 28 09:42:52 crc kubenswrapper[4946]: I1128 09:42:52.959146 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-h56jl" podStartSLOduration=3.519022947 podStartE2EDuration="4.959126559s" podCreationTimestamp="2025-11-28 09:42:48 +0000 UTC" firstStartedPulling="2025-11-28 09:42:50.911335827 +0000 UTC m=+10225.289400938" lastFinishedPulling="2025-11-28 09:42:52.351439439 +0000 UTC m=+10226.729504550" observedRunningTime="2025-11-28 09:42:52.95673387 +0000 UTC m=+10227.334798981" watchObservedRunningTime="2025-11-28 09:42:52.959126559 +0000 UTC m=+10227.337191670" Nov 28 09:42:54 crc kubenswrapper[4946]: I1128 09:42:54.730557 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 09:42:54 crc kubenswrapper[4946]: I1128 09:42:54.730963 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 09:42:56 crc kubenswrapper[4946]: I1128 09:42:56.821823 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-chmpn" Nov 28 09:42:56 crc kubenswrapper[4946]: I1128 09:42:56.822312 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-chmpn" Nov 28 09:42:56 crc kubenswrapper[4946]: I1128 09:42:56.921361 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-chmpn" Nov 28 09:42:57 crc kubenswrapper[4946]: I1128 09:42:57.060327 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-chmpn" Nov 28 09:42:57 crc kubenswrapper[4946]: I1128 
09:42:57.236310 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-chmpn"] Nov 28 09:42:59 crc kubenswrapper[4946]: I1128 09:42:59.032044 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-chmpn" podUID="ad932fde-2de8-404c-8290-5933aad8ad35" containerName="registry-server" containerID="cri-o://293c896459351b946ac150edd81fbbece7b7c0f58fd978192374adfaf7030060" gracePeriod=2 Nov 28 09:42:59 crc kubenswrapper[4946]: I1128 09:42:59.178098 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-h56jl" Nov 28 09:42:59 crc kubenswrapper[4946]: I1128 09:42:59.178269 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-h56jl" Nov 28 09:42:59 crc kubenswrapper[4946]: I1128 09:42:59.251699 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-h56jl" Nov 28 09:42:59 crc kubenswrapper[4946]: I1128 09:42:59.575260 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-chmpn" Nov 28 09:42:59 crc kubenswrapper[4946]: I1128 09:42:59.698334 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad932fde-2de8-404c-8290-5933aad8ad35-utilities\") pod \"ad932fde-2de8-404c-8290-5933aad8ad35\" (UID: \"ad932fde-2de8-404c-8290-5933aad8ad35\") " Nov 28 09:42:59 crc kubenswrapper[4946]: I1128 09:42:59.698772 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qmqr\" (UniqueName: \"kubernetes.io/projected/ad932fde-2de8-404c-8290-5933aad8ad35-kube-api-access-9qmqr\") pod \"ad932fde-2de8-404c-8290-5933aad8ad35\" (UID: \"ad932fde-2de8-404c-8290-5933aad8ad35\") " Nov 28 09:42:59 crc kubenswrapper[4946]: I1128 09:42:59.699112 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad932fde-2de8-404c-8290-5933aad8ad35-catalog-content\") pod \"ad932fde-2de8-404c-8290-5933aad8ad35\" (UID: \"ad932fde-2de8-404c-8290-5933aad8ad35\") " Nov 28 09:42:59 crc kubenswrapper[4946]: I1128 09:42:59.700344 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad932fde-2de8-404c-8290-5933aad8ad35-utilities" (OuterVolumeSpecName: "utilities") pod "ad932fde-2de8-404c-8290-5933aad8ad35" (UID: "ad932fde-2de8-404c-8290-5933aad8ad35"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 09:42:59 crc kubenswrapper[4946]: I1128 09:42:59.700816 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad932fde-2de8-404c-8290-5933aad8ad35-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 09:42:59 crc kubenswrapper[4946]: I1128 09:42:59.709685 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad932fde-2de8-404c-8290-5933aad8ad35-kube-api-access-9qmqr" (OuterVolumeSpecName: "kube-api-access-9qmqr") pod "ad932fde-2de8-404c-8290-5933aad8ad35" (UID: "ad932fde-2de8-404c-8290-5933aad8ad35"). InnerVolumeSpecName "kube-api-access-9qmqr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:42:59 crc kubenswrapper[4946]: I1128 09:42:59.741380 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad932fde-2de8-404c-8290-5933aad8ad35-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ad932fde-2de8-404c-8290-5933aad8ad35" (UID: "ad932fde-2de8-404c-8290-5933aad8ad35"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 09:42:59 crc kubenswrapper[4946]: I1128 09:42:59.802881 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qmqr\" (UniqueName: \"kubernetes.io/projected/ad932fde-2de8-404c-8290-5933aad8ad35-kube-api-access-9qmqr\") on node \"crc\" DevicePath \"\"" Nov 28 09:42:59 crc kubenswrapper[4946]: I1128 09:42:59.802921 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad932fde-2de8-404c-8290-5933aad8ad35-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 09:43:00 crc kubenswrapper[4946]: I1128 09:43:00.050996 4946 generic.go:334] "Generic (PLEG): container finished" podID="ad932fde-2de8-404c-8290-5933aad8ad35" containerID="293c896459351b946ac150edd81fbbece7b7c0f58fd978192374adfaf7030060" exitCode=0 Nov 28 09:43:00 crc kubenswrapper[4946]: I1128 09:43:00.051085 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-chmpn" Nov 28 09:43:00 crc kubenswrapper[4946]: I1128 09:43:00.051161 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-chmpn" event={"ID":"ad932fde-2de8-404c-8290-5933aad8ad35","Type":"ContainerDied","Data":"293c896459351b946ac150edd81fbbece7b7c0f58fd978192374adfaf7030060"} Nov 28 09:43:00 crc kubenswrapper[4946]: I1128 09:43:00.051235 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-chmpn" event={"ID":"ad932fde-2de8-404c-8290-5933aad8ad35","Type":"ContainerDied","Data":"2a14eb4befa258af28d428696cb6545ec3dbcded5b170b25163791b625ce1457"} Nov 28 09:43:00 crc kubenswrapper[4946]: I1128 09:43:00.051260 4946 scope.go:117] "RemoveContainer" containerID="293c896459351b946ac150edd81fbbece7b7c0f58fd978192374adfaf7030060" Nov 28 09:43:00 crc kubenswrapper[4946]: I1128 09:43:00.082079 4946 scope.go:117] "RemoveContainer" containerID="2345f98c7488107940f3dfbc3ea93e96bf00f80d59e32c214a31582883b836de" Nov 28 09:43:00 crc kubenswrapper[4946]: I1128 09:43:00.086431 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-chmpn"] Nov 28 09:43:00 crc kubenswrapper[4946]: I1128 09:43:00.100500 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-chmpn"] Nov 28 09:43:00 crc kubenswrapper[4946]: I1128 09:43:00.105841 4946 scope.go:117] "RemoveContainer" containerID="fb83ed6d26c438d99c4395cab3894675397c8013e7c27a912d334cdc654b6b56" Nov 28 09:43:00 crc kubenswrapper[4946]: I1128 09:43:00.179830 4946 scope.go:117] "RemoveContainer" containerID="293c896459351b946ac150edd81fbbece7b7c0f58fd978192374adfaf7030060" Nov 28 09:43:00 crc kubenswrapper[4946]: E1128 09:43:00.180408 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"293c896459351b946ac150edd81fbbece7b7c0f58fd978192374adfaf7030060\": container with ID starting with 
293c896459351b946ac150edd81fbbece7b7c0f58fd978192374adfaf7030060 not found: ID does not exist" containerID="293c896459351b946ac150edd81fbbece7b7c0f58fd978192374adfaf7030060" Nov 28 09:43:00 crc kubenswrapper[4946]: I1128 09:43:00.180487 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"293c896459351b946ac150edd81fbbece7b7c0f58fd978192374adfaf7030060"} err="failed to get container status \"293c896459351b946ac150edd81fbbece7b7c0f58fd978192374adfaf7030060\": rpc error: code = NotFound desc = could not find container \"293c896459351b946ac150edd81fbbece7b7c0f58fd978192374adfaf7030060\": container with ID starting with 293c896459351b946ac150edd81fbbece7b7c0f58fd978192374adfaf7030060 not found: ID does not exist" Nov 28 09:43:00 crc kubenswrapper[4946]: I1128 09:43:00.180521 4946 scope.go:117] "RemoveContainer" containerID="2345f98c7488107940f3dfbc3ea93e96bf00f80d59e32c214a31582883b836de" Nov 28 09:43:00 crc kubenswrapper[4946]: E1128 09:43:00.180946 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2345f98c7488107940f3dfbc3ea93e96bf00f80d59e32c214a31582883b836de\": container with ID starting with 2345f98c7488107940f3dfbc3ea93e96bf00f80d59e32c214a31582883b836de not found: ID does not exist" containerID="2345f98c7488107940f3dfbc3ea93e96bf00f80d59e32c214a31582883b836de" Nov 28 09:43:00 crc kubenswrapper[4946]: I1128 09:43:00.181058 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2345f98c7488107940f3dfbc3ea93e96bf00f80d59e32c214a31582883b836de"} err="failed to get container status \"2345f98c7488107940f3dfbc3ea93e96bf00f80d59e32c214a31582883b836de\": rpc error: code = NotFound desc = could not find container \"2345f98c7488107940f3dfbc3ea93e96bf00f80d59e32c214a31582883b836de\": container with ID starting with 2345f98c7488107940f3dfbc3ea93e96bf00f80d59e32c214a31582883b836de not found: ID does not exist" Nov 28 09:43:00 crc kubenswrapper[4946]: I1128 09:43:00.181161 4946 scope.go:117] "RemoveContainer" containerID="fb83ed6d26c438d99c4395cab3894675397c8013e7c27a912d334cdc654b6b56" Nov 28 09:43:00 crc kubenswrapper[4946]: E1128 09:43:00.181792 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb83ed6d26c438d99c4395cab3894675397c8013e7c27a912d334cdc654b6b56\": container with ID starting with fb83ed6d26c438d99c4395cab3894675397c8013e7c27a912d334cdc654b6b56 not found: ID does not exist" containerID="fb83ed6d26c438d99c4395cab3894675397c8013e7c27a912d334cdc654b6b56" Nov 28 09:43:00 crc kubenswrapper[4946]: I1128 09:43:00.181821 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb83ed6d26c438d99c4395cab3894675397c8013e7c27a912d334cdc654b6b56"} err="failed to get container status \"fb83ed6d26c438d99c4395cab3894675397c8013e7c27a912d334cdc654b6b56\": rpc error: code = NotFound desc = could not find container \"fb83ed6d26c438d99c4395cab3894675397c8013e7c27a912d334cdc654b6b56\": container with ID starting with fb83ed6d26c438d99c4395cab3894675397c8013e7c27a912d334cdc654b6b56 not found: ID does not exist" Nov 28 09:43:00 crc kubenswrapper[4946]: I1128 09:43:00.185282 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-h56jl" Nov 28 09:43:02 crc kubenswrapper[4946]: I1128 09:43:02.006874 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="ad932fde-2de8-404c-8290-5933aad8ad35" path="/var/lib/kubelet/pods/ad932fde-2de8-404c-8290-5933aad8ad35/volumes" Nov 28 09:43:02 crc kubenswrapper[4946]: I1128 09:43:02.620632 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h56jl"] Nov 28 09:43:03 crc kubenswrapper[4946]: I1128 09:43:03.086691 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-h56jl" podUID="ffe455da-e83d-4ba4-bccb-c7cfe3a0beef" containerName="registry-server" containerID="cri-o://3d9c7ef64581266860c92e76b7915d01e4003bb50791fb88d31c91b42f162fb8" gracePeriod=2 Nov 28 09:43:04 crc kubenswrapper[4946]: I1128 09:43:04.099244 4946 generic.go:334] "Generic (PLEG): container finished" podID="ffe455da-e83d-4ba4-bccb-c7cfe3a0beef" containerID="3d9c7ef64581266860c92e76b7915d01e4003bb50791fb88d31c91b42f162fb8" exitCode=0 Nov 28 09:43:04 crc kubenswrapper[4946]: I1128 09:43:04.099485 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h56jl" event={"ID":"ffe455da-e83d-4ba4-bccb-c7cfe3a0beef","Type":"ContainerDied","Data":"3d9c7ef64581266860c92e76b7915d01e4003bb50791fb88d31c91b42f162fb8"} Nov 28 09:43:04 crc kubenswrapper[4946]: I1128 09:43:04.293128 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h56jl" Nov 28 09:43:04 crc kubenswrapper[4946]: I1128 09:43:04.403519 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffe455da-e83d-4ba4-bccb-c7cfe3a0beef-utilities\") pod \"ffe455da-e83d-4ba4-bccb-c7cfe3a0beef\" (UID: \"ffe455da-e83d-4ba4-bccb-c7cfe3a0beef\") " Nov 28 09:43:04 crc kubenswrapper[4946]: I1128 09:43:04.403691 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffe455da-e83d-4ba4-bccb-c7cfe3a0beef-catalog-content\") pod \"ffe455da-e83d-4ba4-bccb-c7cfe3a0beef\" (UID: \"ffe455da-e83d-4ba4-bccb-c7cfe3a0beef\") " Nov 28 09:43:04 crc kubenswrapper[4946]: I1128 09:43:04.403731 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6429\" (UniqueName: \"kubernetes.io/projected/ffe455da-e83d-4ba4-bccb-c7cfe3a0beef-kube-api-access-g6429\") pod \"ffe455da-e83d-4ba4-bccb-c7cfe3a0beef\" (UID: \"ffe455da-e83d-4ba4-bccb-c7cfe3a0beef\") " Nov 28 09:43:04 crc kubenswrapper[4946]: I1128 09:43:04.404858 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffe455da-e83d-4ba4-bccb-c7cfe3a0beef-utilities" (OuterVolumeSpecName: "utilities") pod "ffe455da-e83d-4ba4-bccb-c7cfe3a0beef" (UID: "ffe455da-e83d-4ba4-bccb-c7cfe3a0beef"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 09:43:04 crc kubenswrapper[4946]: I1128 09:43:04.410072 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffe455da-e83d-4ba4-bccb-c7cfe3a0beef-kube-api-access-g6429" (OuterVolumeSpecName: "kube-api-access-g6429") pod "ffe455da-e83d-4ba4-bccb-c7cfe3a0beef" (UID: "ffe455da-e83d-4ba4-bccb-c7cfe3a0beef"). InnerVolumeSpecName "kube-api-access-g6429". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:43:04 crc kubenswrapper[4946]: I1128 09:43:04.458433 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffe455da-e83d-4ba4-bccb-c7cfe3a0beef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ffe455da-e83d-4ba4-bccb-c7cfe3a0beef" (UID: "ffe455da-e83d-4ba4-bccb-c7cfe3a0beef"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 09:43:04 crc kubenswrapper[4946]: I1128 09:43:04.506797 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffe455da-e83d-4ba4-bccb-c7cfe3a0beef-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 09:43:04 crc kubenswrapper[4946]: I1128 09:43:04.506890 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffe455da-e83d-4ba4-bccb-c7cfe3a0beef-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 09:43:04 crc kubenswrapper[4946]: I1128 09:43:04.506918 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6429\" (UniqueName: \"kubernetes.io/projected/ffe455da-e83d-4ba4-bccb-c7cfe3a0beef-kube-api-access-g6429\") on node \"crc\" DevicePath \"\"" Nov 28 09:43:05 crc kubenswrapper[4946]: I1128 09:43:05.149852 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h56jl" event={"ID":"ffe455da-e83d-4ba4-bccb-c7cfe3a0beef","Type":"ContainerDied","Data":"ba2837b1eff12e6bdfe83a322b67d6855a40ffd2f79910ae03d7f5a9b13e14dd"} Nov 28 09:43:05 crc kubenswrapper[4946]: I1128 09:43:05.149907 4946 scope.go:117] "RemoveContainer" containerID="3d9c7ef64581266860c92e76b7915d01e4003bb50791fb88d31c91b42f162fb8" Nov 28 09:43:05 crc kubenswrapper[4946]: I1128 09:43:05.150043 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h56jl" Nov 28 09:43:05 crc kubenswrapper[4946]: I1128 09:43:05.185414 4946 scope.go:117] "RemoveContainer" containerID="9951613204a482aaef077820ba491ead0a732f55db17528f44c7671cd7a62f3b" Nov 28 09:43:05 crc kubenswrapper[4946]: I1128 09:43:05.189639 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h56jl"] Nov 28 09:43:05 crc kubenswrapper[4946]: I1128 09:43:05.203840 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-h56jl"] Nov 28 09:43:05 crc kubenswrapper[4946]: I1128 09:43:05.212347 4946 scope.go:117] "RemoveContainer" containerID="bae65919f3c9900001d85f4e68be19b98fa67ac1cb7b55be639b1c3b92941102" Nov 28 09:43:06 crc kubenswrapper[4946]: I1128 09:43:06.003779 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffe455da-e83d-4ba4-bccb-c7cfe3a0beef" path="/var/lib/kubelet/pods/ffe455da-e83d-4ba4-bccb-c7cfe3a0beef/volumes" Nov 28 09:43:24 crc kubenswrapper[4946]: I1128 09:43:24.731000 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 09:43:24 crc kubenswrapper[4946]: I1128 09:43:24.731655 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 09:43:54 crc kubenswrapper[4946]: I1128 09:43:54.730942 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 09:43:54 crc kubenswrapper[4946]: I1128 09:43:54.731709 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 09:43:54 crc kubenswrapper[4946]: I1128 09:43:54.731781 4946 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" Nov 28 09:43:54 crc kubenswrapper[4946]: I1128 09:43:54.733233 4946 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8f83fc9ef9651ce9a741597d9dbec67f4360270c35ac4dbbeeea2e39c064bf29"} pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 09:43:54 crc kubenswrapper[4946]: I1128 09:43:54.733372 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" containerID="cri-o://8f83fc9ef9651ce9a741597d9dbec67f4360270c35ac4dbbeeea2e39c064bf29" 
gracePeriod=600 Nov 28 09:43:54 crc kubenswrapper[4946]: E1128 09:43:54.864871 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:43:55 crc kubenswrapper[4946]: I1128 09:43:55.792017 4946 generic.go:334] "Generic (PLEG): container finished" podID="7450befc-262f-45d1-a5f4-f445e540185b" containerID="8f83fc9ef9651ce9a741597d9dbec67f4360270c35ac4dbbeeea2e39c064bf29" exitCode=0 Nov 28 09:43:55 crc kubenswrapper[4946]: I1128 09:43:55.792120 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerDied","Data":"8f83fc9ef9651ce9a741597d9dbec67f4360270c35ac4dbbeeea2e39c064bf29"} Nov 28 09:43:55 crc kubenswrapper[4946]: I1128 09:43:55.792421 4946 scope.go:117] "RemoveContainer" containerID="0ec5e3eb54e35f954dba98a8c8522fadf3fba9bfecbc9e8dcd7fcd5944c599d0" Nov 28 09:43:55 crc kubenswrapper[4946]: I1128 09:43:55.793286 4946 scope.go:117] "RemoveContainer" containerID="8f83fc9ef9651ce9a741597d9dbec67f4360270c35ac4dbbeeea2e39c064bf29" Nov 28 09:43:55 crc kubenswrapper[4946]: E1128 09:43:55.793699 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:44:06 crc kubenswrapper[4946]: I1128 09:44:06.990362 4946 scope.go:117] "RemoveContainer" containerID="8f83fc9ef9651ce9a741597d9dbec67f4360270c35ac4dbbeeea2e39c064bf29" Nov 28 09:44:06 crc kubenswrapper[4946]: E1128 09:44:06.991719 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:44:18 crc kubenswrapper[4946]: I1128 09:44:18.991323 4946 scope.go:117] "RemoveContainer" containerID="8f83fc9ef9651ce9a741597d9dbec67f4360270c35ac4dbbeeea2e39c064bf29" Nov 28 09:44:18 crc kubenswrapper[4946]: E1128 09:44:18.992177 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:44:33 crc kubenswrapper[4946]: I1128 09:44:33.990732 4946 scope.go:117] "RemoveContainer" containerID="8f83fc9ef9651ce9a741597d9dbec67f4360270c35ac4dbbeeea2e39c064bf29" Nov 28 09:44:33 crc kubenswrapper[4946]: E1128 09:44:33.991850 4946 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:44:47 crc kubenswrapper[4946]: I1128 09:44:47.990196 4946 scope.go:117] "RemoveContainer" containerID="8f83fc9ef9651ce9a741597d9dbec67f4360270c35ac4dbbeeea2e39c064bf29" Nov 28 09:44:47 crc kubenswrapper[4946]: E1128 09:44:47.991040 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:44:58 crc kubenswrapper[4946]: I1128 09:44:58.990280 4946 scope.go:117] "RemoveContainer" containerID="8f83fc9ef9651ce9a741597d9dbec67f4360270c35ac4dbbeeea2e39c064bf29" Nov 28 09:44:58 crc kubenswrapper[4946]: E1128 09:44:58.991387 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:45:00 crc kubenswrapper[4946]: I1128 09:45:00.166320 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405385-x2g6l"] Nov 28 09:45:00 crc kubenswrapper[4946]: E1128 09:45:00.168138 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffe455da-e83d-4ba4-bccb-c7cfe3a0beef" containerName="extract-utilities" Nov 28 09:45:00 crc kubenswrapper[4946]: I1128 09:45:00.168248 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffe455da-e83d-4ba4-bccb-c7cfe3a0beef" containerName="extract-utilities" Nov 28 09:45:00 crc kubenswrapper[4946]: E1128 09:45:00.168332 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad932fde-2de8-404c-8290-5933aad8ad35" containerName="registry-server" Nov 28 09:45:00 crc kubenswrapper[4946]: I1128 09:45:00.168415 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad932fde-2de8-404c-8290-5933aad8ad35" containerName="registry-server" Nov 28 09:45:00 crc kubenswrapper[4946]: E1128 09:45:00.168508 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad932fde-2de8-404c-8290-5933aad8ad35" containerName="extract-utilities" Nov 28 09:45:00 crc kubenswrapper[4946]: I1128 09:45:00.168594 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad932fde-2de8-404c-8290-5933aad8ad35" containerName="extract-utilities" Nov 28 09:45:00 crc kubenswrapper[4946]: E1128 09:45:00.168683 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffe455da-e83d-4ba4-bccb-c7cfe3a0beef" containerName="registry-server" Nov 28 09:45:00 crc kubenswrapper[4946]: I1128 09:45:00.168767 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffe455da-e83d-4ba4-bccb-c7cfe3a0beef" 
containerName="registry-server" Nov 28 09:45:00 crc kubenswrapper[4946]: E1128 09:45:00.168845 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffe455da-e83d-4ba4-bccb-c7cfe3a0beef" containerName="extract-content" Nov 28 09:45:00 crc kubenswrapper[4946]: I1128 09:45:00.168917 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffe455da-e83d-4ba4-bccb-c7cfe3a0beef" containerName="extract-content" Nov 28 09:45:00 crc kubenswrapper[4946]: E1128 09:45:00.169022 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad932fde-2de8-404c-8290-5933aad8ad35" containerName="extract-content" Nov 28 09:45:00 crc kubenswrapper[4946]: I1128 09:45:00.169096 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad932fde-2de8-404c-8290-5933aad8ad35" containerName="extract-content" Nov 28 09:45:00 crc kubenswrapper[4946]: I1128 09:45:00.169431 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffe455da-e83d-4ba4-bccb-c7cfe3a0beef" containerName="registry-server" Nov 28 09:45:00 crc kubenswrapper[4946]: I1128 09:45:00.169574 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad932fde-2de8-404c-8290-5933aad8ad35" containerName="registry-server" Nov 28 09:45:00 crc kubenswrapper[4946]: I1128 09:45:00.172098 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405385-x2g6l" Nov 28 09:45:00 crc kubenswrapper[4946]: I1128 09:45:00.176658 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 28 09:45:00 crc kubenswrapper[4946]: I1128 09:45:00.178405 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 28 09:45:00 crc kubenswrapper[4946]: I1128 09:45:00.201799 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405385-x2g6l"] Nov 28 09:45:00 crc kubenswrapper[4946]: I1128 09:45:00.273518 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7hlv\" (UniqueName: \"kubernetes.io/projected/d11c7c32-24ab-4e4b-be99-c3851c45a894-kube-api-access-f7hlv\") pod \"collect-profiles-29405385-x2g6l\" (UID: \"d11c7c32-24ab-4e4b-be99-c3851c45a894\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405385-x2g6l" Nov 28 09:45:00 crc kubenswrapper[4946]: I1128 09:45:00.273606 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d11c7c32-24ab-4e4b-be99-c3851c45a894-config-volume\") pod \"collect-profiles-29405385-x2g6l\" (UID: \"d11c7c32-24ab-4e4b-be99-c3851c45a894\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405385-x2g6l" Nov 28 09:45:00 crc kubenswrapper[4946]: I1128 09:45:00.273843 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d11c7c32-24ab-4e4b-be99-c3851c45a894-secret-volume\") pod \"collect-profiles-29405385-x2g6l\" (UID: \"d11c7c32-24ab-4e4b-be99-c3851c45a894\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405385-x2g6l" Nov 28 09:45:00 crc kubenswrapper[4946]: I1128 09:45:00.376782 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7hlv\" (UniqueName: 
\"kubernetes.io/projected/d11c7c32-24ab-4e4b-be99-c3851c45a894-kube-api-access-f7hlv\") pod \"collect-profiles-29405385-x2g6l\" (UID: \"d11c7c32-24ab-4e4b-be99-c3851c45a894\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405385-x2g6l" Nov 28 09:45:00 crc kubenswrapper[4946]: I1128 09:45:00.376986 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d11c7c32-24ab-4e4b-be99-c3851c45a894-config-volume\") pod \"collect-profiles-29405385-x2g6l\" (UID: \"d11c7c32-24ab-4e4b-be99-c3851c45a894\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405385-x2g6l" Nov 28 09:45:00 crc kubenswrapper[4946]: I1128 09:45:00.377108 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d11c7c32-24ab-4e4b-be99-c3851c45a894-secret-volume\") pod \"collect-profiles-29405385-x2g6l\" (UID: \"d11c7c32-24ab-4e4b-be99-c3851c45a894\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405385-x2g6l" Nov 28 09:45:00 crc kubenswrapper[4946]: I1128 09:45:00.379002 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d11c7c32-24ab-4e4b-be99-c3851c45a894-config-volume\") pod \"collect-profiles-29405385-x2g6l\" (UID: \"d11c7c32-24ab-4e4b-be99-c3851c45a894\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405385-x2g6l" Nov 28 09:45:00 crc kubenswrapper[4946]: I1128 09:45:00.386605 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d11c7c32-24ab-4e4b-be99-c3851c45a894-secret-volume\") pod \"collect-profiles-29405385-x2g6l\" (UID: \"d11c7c32-24ab-4e4b-be99-c3851c45a894\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405385-x2g6l" Nov 28 09:45:00 crc kubenswrapper[4946]: I1128 09:45:00.407082 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7hlv\" (UniqueName: \"kubernetes.io/projected/d11c7c32-24ab-4e4b-be99-c3851c45a894-kube-api-access-f7hlv\") pod \"collect-profiles-29405385-x2g6l\" (UID: \"d11c7c32-24ab-4e4b-be99-c3851c45a894\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405385-x2g6l" Nov 28 09:45:00 crc kubenswrapper[4946]: I1128 09:45:00.505979 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405385-x2g6l" Nov 28 09:45:01 crc kubenswrapper[4946]: I1128 09:45:01.026071 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405385-x2g6l"] Nov 28 09:45:01 crc kubenswrapper[4946]: E1128 09:45:01.514905 4946 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd11c7c32_24ab_4e4b_be99_c3851c45a894.slice/crio-5e0cbd9db30f17c3af1b7c3212f468240f63c4b319e1bc90af04083a5066ba26.scope\": RecentStats: unable to find data in memory cache]" Nov 28 09:45:01 crc kubenswrapper[4946]: I1128 09:45:01.612923 4946 generic.go:334] "Generic (PLEG): container finished" podID="d11c7c32-24ab-4e4b-be99-c3851c45a894" containerID="5e0cbd9db30f17c3af1b7c3212f468240f63c4b319e1bc90af04083a5066ba26" exitCode=0 Nov 28 09:45:01 crc kubenswrapper[4946]: I1128 09:45:01.612983 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405385-x2g6l" event={"ID":"d11c7c32-24ab-4e4b-be99-c3851c45a894","Type":"ContainerDied","Data":"5e0cbd9db30f17c3af1b7c3212f468240f63c4b319e1bc90af04083a5066ba26"} Nov 28 09:45:01 crc kubenswrapper[4946]: I1128 09:45:01.613013 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405385-x2g6l" event={"ID":"d11c7c32-24ab-4e4b-be99-c3851c45a894","Type":"ContainerStarted","Data":"4874af2f788d4610248cdd8abdfb82ee26c915299185d4b9d4815d27e5cfc9af"} Nov 28 09:45:03 crc kubenswrapper[4946]: I1128 09:45:03.072388 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405385-x2g6l" Nov 28 09:45:03 crc kubenswrapper[4946]: I1128 09:45:03.244346 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d11c7c32-24ab-4e4b-be99-c3851c45a894-config-volume\") pod \"d11c7c32-24ab-4e4b-be99-c3851c45a894\" (UID: \"d11c7c32-24ab-4e4b-be99-c3851c45a894\") " Nov 28 09:45:03 crc kubenswrapper[4946]: I1128 09:45:03.244571 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d11c7c32-24ab-4e4b-be99-c3851c45a894-secret-volume\") pod \"d11c7c32-24ab-4e4b-be99-c3851c45a894\" (UID: \"d11c7c32-24ab-4e4b-be99-c3851c45a894\") " Nov 28 09:45:03 crc kubenswrapper[4946]: I1128 09:45:03.244626 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7hlv\" (UniqueName: \"kubernetes.io/projected/d11c7c32-24ab-4e4b-be99-c3851c45a894-kube-api-access-f7hlv\") pod \"d11c7c32-24ab-4e4b-be99-c3851c45a894\" (UID: \"d11c7c32-24ab-4e4b-be99-c3851c45a894\") " Nov 28 09:45:03 crc kubenswrapper[4946]: I1128 09:45:03.246004 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d11c7c32-24ab-4e4b-be99-c3851c45a894-config-volume" (OuterVolumeSpecName: "config-volume") pod "d11c7c32-24ab-4e4b-be99-c3851c45a894" (UID: "d11c7c32-24ab-4e4b-be99-c3851c45a894"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 09:45:03 crc kubenswrapper[4946]: I1128 09:45:03.254707 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d11c7c32-24ab-4e4b-be99-c3851c45a894-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d11c7c32-24ab-4e4b-be99-c3851c45a894" (UID: "d11c7c32-24ab-4e4b-be99-c3851c45a894"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:45:03 crc kubenswrapper[4946]: I1128 09:45:03.254816 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d11c7c32-24ab-4e4b-be99-c3851c45a894-kube-api-access-f7hlv" (OuterVolumeSpecName: "kube-api-access-f7hlv") pod "d11c7c32-24ab-4e4b-be99-c3851c45a894" (UID: "d11c7c32-24ab-4e4b-be99-c3851c45a894"). InnerVolumeSpecName "kube-api-access-f7hlv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:45:03 crc kubenswrapper[4946]: I1128 09:45:03.347056 4946 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d11c7c32-24ab-4e4b-be99-c3851c45a894-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 28 09:45:03 crc kubenswrapper[4946]: I1128 09:45:03.347111 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7hlv\" (UniqueName: \"kubernetes.io/projected/d11c7c32-24ab-4e4b-be99-c3851c45a894-kube-api-access-f7hlv\") on node \"crc\" DevicePath \"\"" Nov 28 09:45:03 crc kubenswrapper[4946]: I1128 09:45:03.347131 4946 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d11c7c32-24ab-4e4b-be99-c3851c45a894-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 09:45:03 crc kubenswrapper[4946]: I1128 09:45:03.640354 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405385-x2g6l" event={"ID":"d11c7c32-24ab-4e4b-be99-c3851c45a894","Type":"ContainerDied","Data":"4874af2f788d4610248cdd8abdfb82ee26c915299185d4b9d4815d27e5cfc9af"} Nov 28 09:45:03 crc kubenswrapper[4946]: I1128 09:45:03.640393 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4874af2f788d4610248cdd8abdfb82ee26c915299185d4b9d4815d27e5cfc9af" Nov 28 09:45:03 crc kubenswrapper[4946]: I1128 09:45:03.640483 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405385-x2g6l" Nov 28 09:45:04 crc kubenswrapper[4946]: I1128 09:45:04.167248 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405340-4qmvl"] Nov 28 09:45:04 crc kubenswrapper[4946]: I1128 09:45:04.182226 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405340-4qmvl"] Nov 28 09:45:06 crc kubenswrapper[4946]: I1128 09:45:06.009905 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0282a26a-f451-4607-9fb7-35a20456c493" path="/var/lib/kubelet/pods/0282a26a-f451-4607-9fb7-35a20456c493/volumes" Nov 28 09:45:11 crc kubenswrapper[4946]: I1128 09:45:11.990768 4946 scope.go:117] "RemoveContainer" containerID="8f83fc9ef9651ce9a741597d9dbec67f4360270c35ac4dbbeeea2e39c064bf29" Nov 28 09:45:11 crc kubenswrapper[4946]: E1128 09:45:11.992190 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:45:26 crc kubenswrapper[4946]: I1128 09:45:25.999533 4946 scope.go:117] "RemoveContainer" containerID="8f83fc9ef9651ce9a741597d9dbec67f4360270c35ac4dbbeeea2e39c064bf29" Nov 28 09:45:26 crc kubenswrapper[4946]: E1128 09:45:26.000860 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:45:30 crc kubenswrapper[4946]: I1128 09:45:30.969966 4946 generic.go:334] "Generic (PLEG): container finished" podID="84956b58-bb77-4d59-9bab-5112c6660a05" containerID="4e319fdf3f7f922d55f20d9242ed363d35c584c190fd3eaf2963bfce119657ca" exitCode=0 Nov 28 09:45:30 crc kubenswrapper[4946]: I1128 09:45:30.970063 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl" event={"ID":"84956b58-bb77-4d59-9bab-5112c6660a05","Type":"ContainerDied","Data":"4e319fdf3f7f922d55f20d9242ed363d35c584c190fd3eaf2963bfce119657ca"} Nov 28 09:45:32 crc kubenswrapper[4946]: I1128 09:45:32.414363 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl" Nov 28 09:45:32 crc kubenswrapper[4946]: I1128 09:45:32.554491 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/84956b58-bb77-4d59-9bab-5112c6660a05-inventory\") pod \"84956b58-bb77-4d59-9bab-5112c6660a05\" (UID: \"84956b58-bb77-4d59-9bab-5112c6660a05\") " Nov 28 09:45:32 crc kubenswrapper[4946]: I1128 09:45:32.554754 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/84956b58-bb77-4d59-9bab-5112c6660a05-nova-cells-global-config-0\") pod \"84956b58-bb77-4d59-9bab-5112c6660a05\" (UID: \"84956b58-bb77-4d59-9bab-5112c6660a05\") " Nov 28 09:45:32 crc kubenswrapper[4946]: I1128 09:45:32.554803 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mcks\" (UniqueName: \"kubernetes.io/projected/84956b58-bb77-4d59-9bab-5112c6660a05-kube-api-access-4mcks\") pod \"84956b58-bb77-4d59-9bab-5112c6660a05\" (UID: \"84956b58-bb77-4d59-9bab-5112c6660a05\") " Nov 28 09:45:32 crc kubenswrapper[4946]: I1128 09:45:32.554829 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/84956b58-bb77-4d59-9bab-5112c6660a05-nova-cells-global-config-1\") pod \"84956b58-bb77-4d59-9bab-5112c6660a05\" (UID: \"84956b58-bb77-4d59-9bab-5112c6660a05\") " Nov 28 09:45:32 crc kubenswrapper[4946]: I1128 09:45:32.554871 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/84956b58-bb77-4d59-9bab-5112c6660a05-nova-migration-ssh-key-1\") pod \"84956b58-bb77-4d59-9bab-5112c6660a05\" (UID: \"84956b58-bb77-4d59-9bab-5112c6660a05\") " Nov 28 09:45:32 crc kubenswrapper[4946]: I1128 09:45:32.554953 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/84956b58-bb77-4d59-9bab-5112c6660a05-ssh-key\") pod \"84956b58-bb77-4d59-9bab-5112c6660a05\" (UID: \"84956b58-bb77-4d59-9bab-5112c6660a05\") " Nov 28 09:45:32 crc kubenswrapper[4946]: I1128 09:45:32.554973 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/84956b58-bb77-4d59-9bab-5112c6660a05-nova-cell1-compute-config-1\") pod \"84956b58-bb77-4d59-9bab-5112c6660a05\" (UID: \"84956b58-bb77-4d59-9bab-5112c6660a05\") " Nov 28 09:45:32 crc kubenswrapper[4946]: I1128 09:45:32.555015 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/84956b58-bb77-4d59-9bab-5112c6660a05-nova-cell1-compute-config-0\") pod \"84956b58-bb77-4d59-9bab-5112c6660a05\" (UID: \"84956b58-bb77-4d59-9bab-5112c6660a05\") " Nov 28 09:45:32 crc kubenswrapper[4946]: I1128 09:45:32.555041 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84956b58-bb77-4d59-9bab-5112c6660a05-nova-cell1-combined-ca-bundle\") pod \"84956b58-bb77-4d59-9bab-5112c6660a05\" (UID: \"84956b58-bb77-4d59-9bab-5112c6660a05\") " Nov 28 09:45:32 crc kubenswrapper[4946]: I1128 09:45:32.555061 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/84956b58-bb77-4d59-9bab-5112c6660a05-nova-migration-ssh-key-0\") pod \"84956b58-bb77-4d59-9bab-5112c6660a05\" (UID: \"84956b58-bb77-4d59-9bab-5112c6660a05\") " Nov 28 09:45:32 crc kubenswrapper[4946]: I1128 09:45:32.555208 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/84956b58-bb77-4d59-9bab-5112c6660a05-ceph\") pod \"84956b58-bb77-4d59-9bab-5112c6660a05\" (UID: \"84956b58-bb77-4d59-9bab-5112c6660a05\") " Nov 28 09:45:32 crc kubenswrapper[4946]: I1128 09:45:32.560031 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84956b58-bb77-4d59-9bab-5112c6660a05-ceph" (OuterVolumeSpecName: "ceph") pod "84956b58-bb77-4d59-9bab-5112c6660a05" (UID: "84956b58-bb77-4d59-9bab-5112c6660a05"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:45:32 crc kubenswrapper[4946]: I1128 09:45:32.560171 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84956b58-bb77-4d59-9bab-5112c6660a05-kube-api-access-4mcks" (OuterVolumeSpecName: "kube-api-access-4mcks") pod "84956b58-bb77-4d59-9bab-5112c6660a05" (UID: "84956b58-bb77-4d59-9bab-5112c6660a05"). InnerVolumeSpecName "kube-api-access-4mcks". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:45:32 crc kubenswrapper[4946]: I1128 09:45:32.560720 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84956b58-bb77-4d59-9bab-5112c6660a05-nova-cell1-combined-ca-bundle" (OuterVolumeSpecName: "nova-cell1-combined-ca-bundle") pod "84956b58-bb77-4d59-9bab-5112c6660a05" (UID: "84956b58-bb77-4d59-9bab-5112c6660a05"). InnerVolumeSpecName "nova-cell1-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:45:32 crc kubenswrapper[4946]: I1128 09:45:32.580865 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84956b58-bb77-4d59-9bab-5112c6660a05-nova-cells-global-config-1" (OuterVolumeSpecName: "nova-cells-global-config-1") pod "84956b58-bb77-4d59-9bab-5112c6660a05" (UID: "84956b58-bb77-4d59-9bab-5112c6660a05"). InnerVolumeSpecName "nova-cells-global-config-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 09:45:32 crc kubenswrapper[4946]: I1128 09:45:32.582100 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84956b58-bb77-4d59-9bab-5112c6660a05-nova-cells-global-config-0" (OuterVolumeSpecName: "nova-cells-global-config-0") pod "84956b58-bb77-4d59-9bab-5112c6660a05" (UID: "84956b58-bb77-4d59-9bab-5112c6660a05"). InnerVolumeSpecName "nova-cells-global-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 09:45:32 crc kubenswrapper[4946]: I1128 09:45:32.589805 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84956b58-bb77-4d59-9bab-5112c6660a05-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "84956b58-bb77-4d59-9bab-5112c6660a05" (UID: "84956b58-bb77-4d59-9bab-5112c6660a05"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:45:32 crc kubenswrapper[4946]: I1128 09:45:32.589831 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84956b58-bb77-4d59-9bab-5112c6660a05-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "84956b58-bb77-4d59-9bab-5112c6660a05" (UID: "84956b58-bb77-4d59-9bab-5112c6660a05"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:45:32 crc kubenswrapper[4946]: I1128 09:45:32.589887 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84956b58-bb77-4d59-9bab-5112c6660a05-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "84956b58-bb77-4d59-9bab-5112c6660a05" (UID: "84956b58-bb77-4d59-9bab-5112c6660a05"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:45:32 crc kubenswrapper[4946]: I1128 09:45:32.598165 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84956b58-bb77-4d59-9bab-5112c6660a05-inventory" (OuterVolumeSpecName: "inventory") pod "84956b58-bb77-4d59-9bab-5112c6660a05" (UID: "84956b58-bb77-4d59-9bab-5112c6660a05"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:45:32 crc kubenswrapper[4946]: I1128 09:45:32.613400 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84956b58-bb77-4d59-9bab-5112c6660a05-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "84956b58-bb77-4d59-9bab-5112c6660a05" (UID: "84956b58-bb77-4d59-9bab-5112c6660a05"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:45:32 crc kubenswrapper[4946]: I1128 09:45:32.618372 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84956b58-bb77-4d59-9bab-5112c6660a05-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "84956b58-bb77-4d59-9bab-5112c6660a05" (UID: "84956b58-bb77-4d59-9bab-5112c6660a05"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:45:32 crc kubenswrapper[4946]: I1128 09:45:32.657852 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mcks\" (UniqueName: \"kubernetes.io/projected/84956b58-bb77-4d59-9bab-5112c6660a05-kube-api-access-4mcks\") on node \"crc\" DevicePath \"\"" Nov 28 09:45:32 crc kubenswrapper[4946]: I1128 09:45:32.657960 4946 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/84956b58-bb77-4d59-9bab-5112c6660a05-nova-cells-global-config-1\") on node \"crc\" DevicePath \"\"" Nov 28 09:45:32 crc kubenswrapper[4946]: I1128 09:45:32.657983 4946 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/84956b58-bb77-4d59-9bab-5112c6660a05-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Nov 28 09:45:32 crc kubenswrapper[4946]: I1128 09:45:32.657997 4946 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/84956b58-bb77-4d59-9bab-5112c6660a05-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 09:45:32 crc kubenswrapper[4946]: I1128 09:45:32.658011 4946 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/84956b58-bb77-4d59-9bab-5112c6660a05-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Nov 28 09:45:32 crc kubenswrapper[4946]: I1128 09:45:32.658025 4946 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/84956b58-bb77-4d59-9bab-5112c6660a05-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Nov 28 09:45:32 crc kubenswrapper[4946]: I1128 09:45:32.658036 4946 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84956b58-bb77-4d59-9bab-5112c6660a05-nova-cell1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 09:45:32 crc kubenswrapper[4946]: I1128 09:45:32.658052 4946 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/84956b58-bb77-4d59-9bab-5112c6660a05-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Nov 28 09:45:32 crc kubenswrapper[4946]: I1128 09:45:32.658065 4946 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/84956b58-bb77-4d59-9bab-5112c6660a05-ceph\") on node \"crc\" DevicePath \"\"" Nov 28 09:45:32 crc kubenswrapper[4946]: I1128 09:45:32.658098 4946 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/84956b58-bb77-4d59-9bab-5112c6660a05-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 09:45:32 crc kubenswrapper[4946]: I1128 09:45:32.658110 4946 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/84956b58-bb77-4d59-9bab-5112c6660a05-nova-cells-global-config-0\") on node \"crc\" DevicePath \"\"" Nov 28 09:45:32 crc kubenswrapper[4946]: I1128 09:45:32.999314 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl" Nov 28 09:45:32 crc kubenswrapper[4946]: I1128 09:45:32.999516 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl" event={"ID":"84956b58-bb77-4d59-9bab-5112c6660a05","Type":"ContainerDied","Data":"73560e8342a417bb2bcef19bceddb0f077dfa6dadf95778f77bb45a2e4d9dbbc"} Nov 28 09:45:32 crc kubenswrapper[4946]: I1128 09:45:32.999577 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73560e8342a417bb2bcef19bceddb0f077dfa6dadf95778f77bb45a2e4d9dbbc" Nov 28 09:45:37 crc kubenswrapper[4946]: I1128 09:45:37.990160 4946 scope.go:117] "RemoveContainer" containerID="8f83fc9ef9651ce9a741597d9dbec67f4360270c35ac4dbbeeea2e39c064bf29" Nov 28 09:45:37 crc kubenswrapper[4946]: E1128 09:45:37.991456 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:45:51 crc kubenswrapper[4946]: I1128 09:45:51.990581 4946 scope.go:117] "RemoveContainer" containerID="8f83fc9ef9651ce9a741597d9dbec67f4360270c35ac4dbbeeea2e39c064bf29" Nov 28 09:45:51 crc kubenswrapper[4946]: E1128 09:45:51.993354 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:45:53 crc kubenswrapper[4946]: I1128 09:45:53.605297 4946 scope.go:117] "RemoveContainer" containerID="dccc546b1b091d76212c3293e5da353c92143e3a1d1d0b80f95be1072e8bfb30" Nov 28 09:45:57 crc kubenswrapper[4946]: I1128 09:45:57.531096 4946 trace.go:236] Trace[389871645]: "Calculate volume metrics of ovndbcluster-nb-etc-ovn for pod openstack/ovsdbserver-nb-2" (28-Nov-2025 09:45:56.401) (total time: 1129ms): Nov 28 09:45:57 crc kubenswrapper[4946]: Trace[389871645]: [1.129103459s] [1.129103459s] END Nov 28 09:46:04 crc kubenswrapper[4946]: I1128 09:46:04.991206 4946 scope.go:117] "RemoveContainer" containerID="8f83fc9ef9651ce9a741597d9dbec67f4360270c35ac4dbbeeea2e39c064bf29" Nov 28 09:46:04 crc kubenswrapper[4946]: E1128 09:46:04.992673 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:46:18 crc kubenswrapper[4946]: I1128 09:46:18.991071 4946 scope.go:117] "RemoveContainer" containerID="8f83fc9ef9651ce9a741597d9dbec67f4360270c35ac4dbbeeea2e39c064bf29" Nov 28 09:46:18 crc kubenswrapper[4946]: E1128 09:46:18.991986 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:46:30 crc kubenswrapper[4946]: I1128 09:46:30.990779 4946 scope.go:117] "RemoveContainer" containerID="8f83fc9ef9651ce9a741597d9dbec67f4360270c35ac4dbbeeea2e39c064bf29" Nov 28 09:46:30 crc kubenswrapper[4946]: E1128 09:46:30.991719 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:46:43 crc kubenswrapper[4946]: I1128 09:46:43.990650 4946 scope.go:117] "RemoveContainer" containerID="8f83fc9ef9651ce9a741597d9dbec67f4360270c35ac4dbbeeea2e39c064bf29" Nov 28 09:46:43 crc kubenswrapper[4946]: E1128 09:46:43.991714 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:46:56 crc kubenswrapper[4946]: I1128 09:46:56.057781 4946 scope.go:117] "RemoveContainer" containerID="8f83fc9ef9651ce9a741597d9dbec67f4360270c35ac4dbbeeea2e39c064bf29" Nov 28 09:46:56 crc kubenswrapper[4946]: E1128 09:46:56.058914 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:47:05 crc kubenswrapper[4946]: I1128 09:47:05.756891 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-copy-data"] Nov 28 09:47:05 crc kubenswrapper[4946]: I1128 09:47:05.757660 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mariadb-copy-data" podUID="f79fc6d4-617a-4ee4-95a3-9bfb249b7327" containerName="adoption" containerID="cri-o://4169463718a6ee9ad980ddf0369d0b77d0bc7ebf0b0843e67b272706799061d9" gracePeriod=30 Nov 28 09:47:07 crc kubenswrapper[4946]: I1128 09:47:07.990504 4946 scope.go:117] "RemoveContainer" containerID="8f83fc9ef9651ce9a741597d9dbec67f4360270c35ac4dbbeeea2e39c064bf29" Nov 28 09:47:07 crc kubenswrapper[4946]: E1128 09:47:07.991343 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:47:18 crc 
kubenswrapper[4946]: I1128 09:47:18.992439 4946 scope.go:117] "RemoveContainer" containerID="8f83fc9ef9651ce9a741597d9dbec67f4360270c35ac4dbbeeea2e39c064bf29" Nov 28 09:47:18 crc kubenswrapper[4946]: E1128 09:47:18.993569 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:47:31 crc kubenswrapper[4946]: I1128 09:47:31.991794 4946 scope.go:117] "RemoveContainer" containerID="8f83fc9ef9651ce9a741597d9dbec67f4360270c35ac4dbbeeea2e39c064bf29" Nov 28 09:47:31 crc kubenswrapper[4946]: E1128 09:47:31.992929 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:47:36 crc kubenswrapper[4946]: I1128 09:47:36.118185 4946 generic.go:334] "Generic (PLEG): container finished" podID="f79fc6d4-617a-4ee4-95a3-9bfb249b7327" containerID="4169463718a6ee9ad980ddf0369d0b77d0bc7ebf0b0843e67b272706799061d9" exitCode=137 Nov 28 09:47:36 crc kubenswrapper[4946]: I1128 09:47:36.118725 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"f79fc6d4-617a-4ee4-95a3-9bfb249b7327","Type":"ContainerDied","Data":"4169463718a6ee9ad980ddf0369d0b77d0bc7ebf0b0843e67b272706799061d9"} Nov 28 09:47:36 crc kubenswrapper[4946]: I1128 09:47:36.436735 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Nov 28 09:47:36 crc kubenswrapper[4946]: I1128 09:47:36.562499 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nj5br\" (UniqueName: \"kubernetes.io/projected/f79fc6d4-617a-4ee4-95a3-9bfb249b7327-kube-api-access-nj5br\") pod \"f79fc6d4-617a-4ee4-95a3-9bfb249b7327\" (UID: \"f79fc6d4-617a-4ee4-95a3-9bfb249b7327\") " Nov 28 09:47:36 crc kubenswrapper[4946]: I1128 09:47:36.563394 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mariadb-data\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6c931a98-c5e9-4bb5-adef-ed47e9be370d\") pod \"f79fc6d4-617a-4ee4-95a3-9bfb249b7327\" (UID: \"f79fc6d4-617a-4ee4-95a3-9bfb249b7327\") " Nov 28 09:47:36 crc kubenswrapper[4946]: I1128 09:47:36.569110 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f79fc6d4-617a-4ee4-95a3-9bfb249b7327-kube-api-access-nj5br" (OuterVolumeSpecName: "kube-api-access-nj5br") pod "f79fc6d4-617a-4ee4-95a3-9bfb249b7327" (UID: "f79fc6d4-617a-4ee4-95a3-9bfb249b7327"). InnerVolumeSpecName "kube-api-access-nj5br". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:47:36 crc kubenswrapper[4946]: I1128 09:47:36.581143 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6c931a98-c5e9-4bb5-adef-ed47e9be370d" (OuterVolumeSpecName: "mariadb-data") pod "f79fc6d4-617a-4ee4-95a3-9bfb249b7327" (UID: "f79fc6d4-617a-4ee4-95a3-9bfb249b7327"). InnerVolumeSpecName "pvc-6c931a98-c5e9-4bb5-adef-ed47e9be370d". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 28 09:47:36 crc kubenswrapper[4946]: I1128 09:47:36.667520 4946 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-6c931a98-c5e9-4bb5-adef-ed47e9be370d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6c931a98-c5e9-4bb5-adef-ed47e9be370d\") on node \"crc\" " Nov 28 09:47:36 crc kubenswrapper[4946]: I1128 09:47:36.667582 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nj5br\" (UniqueName: \"kubernetes.io/projected/f79fc6d4-617a-4ee4-95a3-9bfb249b7327-kube-api-access-nj5br\") on node \"crc\" DevicePath \"\"" Nov 28 09:47:36 crc kubenswrapper[4946]: I1128 09:47:36.697725 4946 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Nov 28 09:47:36 crc kubenswrapper[4946]: I1128 09:47:36.698255 4946 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-6c931a98-c5e9-4bb5-adef-ed47e9be370d" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6c931a98-c5e9-4bb5-adef-ed47e9be370d") on node "crc" Nov 28 09:47:36 crc kubenswrapper[4946]: I1128 09:47:36.769957 4946 reconciler_common.go:293] "Volume detached for volume \"pvc-6c931a98-c5e9-4bb5-adef-ed47e9be370d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6c931a98-c5e9-4bb5-adef-ed47e9be370d\") on node \"crc\" DevicePath \"\"" Nov 28 09:47:37 crc kubenswrapper[4946]: I1128 09:47:37.139822 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"f79fc6d4-617a-4ee4-95a3-9bfb249b7327","Type":"ContainerDied","Data":"c56886783167515324d73c2d8fa9b3334f2cd10d3d163284f4d46704f7f11631"} Nov 28 09:47:37 crc kubenswrapper[4946]: I1128 09:47:37.139892 4946 scope.go:117] "RemoveContainer" containerID="4169463718a6ee9ad980ddf0369d0b77d0bc7ebf0b0843e67b272706799061d9" Nov 28 09:47:37 crc kubenswrapper[4946]: I1128 09:47:37.139960 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-copy-data" Nov 28 09:47:37 crc kubenswrapper[4946]: I1128 09:47:37.208341 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-copy-data"] Nov 28 09:47:37 crc kubenswrapper[4946]: I1128 09:47:37.223393 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-copy-data"] Nov 28 09:47:37 crc kubenswrapper[4946]: I1128 09:47:37.982825 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-copy-data"] Nov 28 09:47:37 crc kubenswrapper[4946]: I1128 09:47:37.983180 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-copy-data" podUID="bb6ec39d-f695-4672-97ab-1abee5a16d04" containerName="adoption" containerID="cri-o://9fa555db011d896b07aaffd467b20fb27b12b1a8643760a3bc263d025a1c69cd" gracePeriod=30 Nov 28 09:47:38 crc kubenswrapper[4946]: I1128 09:47:38.014354 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f79fc6d4-617a-4ee4-95a3-9bfb249b7327" path="/var/lib/kubelet/pods/f79fc6d4-617a-4ee4-95a3-9bfb249b7327/volumes" Nov 28 09:47:42 crc kubenswrapper[4946]: I1128 09:47:42.991378 4946 scope.go:117] "RemoveContainer" containerID="8f83fc9ef9651ce9a741597d9dbec67f4360270c35ac4dbbeeea2e39c064bf29" Nov 28 09:47:42 crc kubenswrapper[4946]: E1128 09:47:42.992237 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:47:56 crc kubenswrapper[4946]: I1128 09:47:56.990761 4946 scope.go:117] "RemoveContainer" containerID="8f83fc9ef9651ce9a741597d9dbec67f4360270c35ac4dbbeeea2e39c064bf29" Nov 28 09:47:56 crc kubenswrapper[4946]: E1128 09:47:56.991647 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:48:08 crc kubenswrapper[4946]: I1128 09:48:08.601197 4946 generic.go:334] "Generic (PLEG): container finished" podID="bb6ec39d-f695-4672-97ab-1abee5a16d04" containerID="9fa555db011d896b07aaffd467b20fb27b12b1a8643760a3bc263d025a1c69cd" exitCode=137 Nov 28 09:48:08 crc kubenswrapper[4946]: I1128 09:48:08.601830 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"bb6ec39d-f695-4672-97ab-1abee5a16d04","Type":"ContainerDied","Data":"9fa555db011d896b07aaffd467b20fb27b12b1a8643760a3bc263d025a1c69cd"} Nov 28 09:48:08 crc kubenswrapper[4946]: I1128 09:48:08.601865 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"bb6ec39d-f695-4672-97ab-1abee5a16d04","Type":"ContainerDied","Data":"f6c57f2facb95861a040de435da0d11d52e0c33d572712af56f615c61474d54a"} Nov 28 09:48:08 crc kubenswrapper[4946]: I1128 09:48:08.601881 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6c57f2facb95861a040de435da0d11d52e0c33d572712af56f615c61474d54a" Nov 28 09:48:08 crc 
kubenswrapper[4946]: I1128 09:48:08.606191 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data" Nov 28 09:48:08 crc kubenswrapper[4946]: I1128 09:48:08.685210 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/bb6ec39d-f695-4672-97ab-1abee5a16d04-ovn-data-cert\") pod \"bb6ec39d-f695-4672-97ab-1abee5a16d04\" (UID: \"bb6ec39d-f695-4672-97ab-1abee5a16d04\") " Nov 28 09:48:08 crc kubenswrapper[4946]: I1128 09:48:08.685384 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbf5q\" (UniqueName: \"kubernetes.io/projected/bb6ec39d-f695-4672-97ab-1abee5a16d04-kube-api-access-bbf5q\") pod \"bb6ec39d-f695-4672-97ab-1abee5a16d04\" (UID: \"bb6ec39d-f695-4672-97ab-1abee5a16d04\") " Nov 28 09:48:08 crc kubenswrapper[4946]: I1128 09:48:08.686258 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-data\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56d395d5-8cb9-4311-ba02-1f17954cb811\") pod \"bb6ec39d-f695-4672-97ab-1abee5a16d04\" (UID: \"bb6ec39d-f695-4672-97ab-1abee5a16d04\") " Nov 28 09:48:08 crc kubenswrapper[4946]: I1128 09:48:08.700940 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb6ec39d-f695-4672-97ab-1abee5a16d04-ovn-data-cert" (OuterVolumeSpecName: "ovn-data-cert") pod "bb6ec39d-f695-4672-97ab-1abee5a16d04" (UID: "bb6ec39d-f695-4672-97ab-1abee5a16d04"). InnerVolumeSpecName "ovn-data-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 09:48:08 crc kubenswrapper[4946]: I1128 09:48:08.701226 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb6ec39d-f695-4672-97ab-1abee5a16d04-kube-api-access-bbf5q" (OuterVolumeSpecName: "kube-api-access-bbf5q") pod "bb6ec39d-f695-4672-97ab-1abee5a16d04" (UID: "bb6ec39d-f695-4672-97ab-1abee5a16d04"). InnerVolumeSpecName "kube-api-access-bbf5q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:48:08 crc kubenswrapper[4946]: I1128 09:48:08.708691 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56d395d5-8cb9-4311-ba02-1f17954cb811" (OuterVolumeSpecName: "ovn-data") pod "bb6ec39d-f695-4672-97ab-1abee5a16d04" (UID: "bb6ec39d-f695-4672-97ab-1abee5a16d04"). InnerVolumeSpecName "pvc-56d395d5-8cb9-4311-ba02-1f17954cb811". 
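
Both copy-data pods end the same way: "Killing container with a grace period" (gracePeriod=30), then roughly 30 seconds later the container finishes with exitCode=137, i.e. 128+9, SIGKILL after the grace period expired. A self-contained sketch of that two-step stop; the shell child trapping SIGTERM stands in for a container that ignores it (POSIX-only demo):

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// killWithGrace mirrors the two-step stop in the log: SIGTERM first,
// SIGKILL only if the process is still alive when the grace period ends.
func killWithGrace(cmd *exec.Cmd, grace time.Duration) {
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	cmd.Process.Signal(syscall.SIGTERM)
	select {
	case <-done:
		fmt.Println("exited within grace period")
	case <-time.After(grace):
		cmd.Process.Kill() // delivers SIGKILL; exit status becomes 128+9=137
		<-done
		fmt.Println("grace period expired; killed (exit 137)")
	}
}

func main() {
	// A POSIX shell child that ignores SIGTERM, standing in for the
	// "adoption" containers above that ran out their 30s grace.
	cmd := exec.Command("sh", "-c", `trap "" TERM; sleep 30`)
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	killWithGrace(cmd, 2*time.Second) // 2s grace for the demo instead of 30s
}
```
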
PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 28 09:48:08 crc kubenswrapper[4946]: I1128 09:48:08.789420 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbf5q\" (UniqueName: \"kubernetes.io/projected/bb6ec39d-f695-4672-97ab-1abee5a16d04-kube-api-access-bbf5q\") on node \"crc\" DevicePath \"\"" Nov 28 09:48:08 crc kubenswrapper[4946]: I1128 09:48:08.789495 4946 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-56d395d5-8cb9-4311-ba02-1f17954cb811\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56d395d5-8cb9-4311-ba02-1f17954cb811\") on node \"crc\" " Nov 28 09:48:08 crc kubenswrapper[4946]: I1128 09:48:08.789513 4946 reconciler_common.go:293] "Volume detached for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/bb6ec39d-f695-4672-97ab-1abee5a16d04-ovn-data-cert\") on node \"crc\" DevicePath \"\"" Nov 28 09:48:08 crc kubenswrapper[4946]: I1128 09:48:08.823724 4946 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Nov 28 09:48:08 crc kubenswrapper[4946]: I1128 09:48:08.823860 4946 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-56d395d5-8cb9-4311-ba02-1f17954cb811" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56d395d5-8cb9-4311-ba02-1f17954cb811") on node "crc" Nov 28 09:48:08 crc kubenswrapper[4946]: I1128 09:48:08.892101 4946 reconciler_common.go:293] "Volume detached for volume \"pvc-56d395d5-8cb9-4311-ba02-1f17954cb811\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56d395d5-8cb9-4311-ba02-1f17954cb811\") on node \"crc\" DevicePath \"\"" Nov 28 09:48:08 crc kubenswrapper[4946]: I1128 09:48:08.990309 4946 scope.go:117] "RemoveContainer" containerID="8f83fc9ef9651ce9a741597d9dbec67f4360270c35ac4dbbeeea2e39c064bf29" Nov 28 09:48:08 crc kubenswrapper[4946]: E1128 09:48:08.990826 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:48:09 crc kubenswrapper[4946]: I1128 09:48:09.614695 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Nov 28 09:48:09 crc kubenswrapper[4946]: I1128 09:48:09.659875 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-copy-data"] Nov 28 09:48:09 crc kubenswrapper[4946]: I1128 09:48:09.671816 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-copy-data"] Nov 28 09:48:10 crc kubenswrapper[4946]: I1128 09:48:10.008723 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb6ec39d-f695-4672-97ab-1abee5a16d04" path="/var/lib/kubelet/pods/bb6ec39d-f695-4672-97ab-1abee5a16d04/volumes" Nov 28 09:48:19 crc kubenswrapper[4946]: I1128 09:48:19.990808 4946 scope.go:117] "RemoveContainer" containerID="8f83fc9ef9651ce9a741597d9dbec67f4360270c35ac4dbbeeea2e39c064bf29" Nov 28 09:48:19 crc kubenswrapper[4946]: E1128 09:48:19.992078 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:48:32 crc kubenswrapper[4946]: I1128 09:48:32.990563 4946 scope.go:117] "RemoveContainer" containerID="8f83fc9ef9651ce9a741597d9dbec67f4360270c35ac4dbbeeea2e39c064bf29" Nov 28 09:48:32 crc kubenswrapper[4946]: E1128 09:48:32.992025 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:48:44 crc kubenswrapper[4946]: I1128 09:48:44.990352 4946 scope.go:117] "RemoveContainer" containerID="8f83fc9ef9651ce9a741597d9dbec67f4360270c35ac4dbbeeea2e39c064bf29" Nov 28 09:48:44 crc kubenswrapper[4946]: E1128 09:48:44.991576 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:48:53 crc kubenswrapper[4946]: I1128 09:48:53.775498 4946 scope.go:117] "RemoveContainer" containerID="9fa555db011d896b07aaffd467b20fb27b12b1a8643760a3bc263d025a1c69cd" Nov 28 09:48:59 crc kubenswrapper[4946]: I1128 09:48:59.989990 4946 scope.go:117] "RemoveContainer" containerID="8f83fc9ef9651ce9a741597d9dbec67f4360270c35ac4dbbeeea2e39c064bf29" Nov 28 09:49:01 crc kubenswrapper[4946]: I1128 09:49:01.253860 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerStarted","Data":"9a62cfe45010406e7171d6e8faf0db01357bdb37025af68919c69344f90739c1"} Nov 28 09:49:57 crc kubenswrapper[4946]: I1128 09:49:57.166692 4946 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="a5bdfa1c-6408-40f5-a9db-4b991fd2b022" containerName="galera" 
probeResult="failure" output="command timed out" Nov 28 09:50:53 crc kubenswrapper[4946]: I1128 09:50:53.116403 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mdjb5"] Nov 28 09:50:53 crc kubenswrapper[4946]: E1128 09:50:53.118169 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f79fc6d4-617a-4ee4-95a3-9bfb249b7327" containerName="adoption" Nov 28 09:50:53 crc kubenswrapper[4946]: I1128 09:50:53.118208 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="f79fc6d4-617a-4ee4-95a3-9bfb249b7327" containerName="adoption" Nov 28 09:50:53 crc kubenswrapper[4946]: E1128 09:50:53.118260 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84956b58-bb77-4d59-9bab-5112c6660a05" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1" Nov 28 09:50:53 crc kubenswrapper[4946]: I1128 09:50:53.118281 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="84956b58-bb77-4d59-9bab-5112c6660a05" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1" Nov 28 09:50:53 crc kubenswrapper[4946]: E1128 09:50:53.118307 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb6ec39d-f695-4672-97ab-1abee5a16d04" containerName="adoption" Nov 28 09:50:53 crc kubenswrapper[4946]: I1128 09:50:53.118322 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb6ec39d-f695-4672-97ab-1abee5a16d04" containerName="adoption" Nov 28 09:50:53 crc kubenswrapper[4946]: E1128 09:50:53.118365 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d11c7c32-24ab-4e4b-be99-c3851c45a894" containerName="collect-profiles" Nov 28 09:50:53 crc kubenswrapper[4946]: I1128 09:50:53.118381 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="d11c7c32-24ab-4e4b-be99-c3851c45a894" containerName="collect-profiles" Nov 28 09:50:53 crc kubenswrapper[4946]: I1128 09:50:53.119023 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb6ec39d-f695-4672-97ab-1abee5a16d04" containerName="adoption" Nov 28 09:50:53 crc kubenswrapper[4946]: I1128 09:50:53.119071 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="84956b58-bb77-4d59-9bab-5112c6660a05" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1" Nov 28 09:50:53 crc kubenswrapper[4946]: I1128 09:50:53.119107 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="f79fc6d4-617a-4ee4-95a3-9bfb249b7327" containerName="adoption" Nov 28 09:50:53 crc kubenswrapper[4946]: I1128 09:50:53.119147 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="d11c7c32-24ab-4e4b-be99-c3851c45a894" containerName="collect-profiles" Nov 28 09:50:53 crc kubenswrapper[4946]: I1128 09:50:53.124541 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mdjb5" Nov 28 09:50:53 crc kubenswrapper[4946]: I1128 09:50:53.134653 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mdjb5"] Nov 28 09:50:53 crc kubenswrapper[4946]: I1128 09:50:53.178681 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cda7564c-7a7e-4d25-ade4-cd8b4d0602a2-catalog-content\") pod \"redhat-marketplace-mdjb5\" (UID: \"cda7564c-7a7e-4d25-ade4-cd8b4d0602a2\") " pod="openshift-marketplace/redhat-marketplace-mdjb5" Nov 28 09:50:53 crc kubenswrapper[4946]: I1128 09:50:53.178759 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc8hp\" (UniqueName: \"kubernetes.io/projected/cda7564c-7a7e-4d25-ade4-cd8b4d0602a2-kube-api-access-rc8hp\") pod \"redhat-marketplace-mdjb5\" (UID: \"cda7564c-7a7e-4d25-ade4-cd8b4d0602a2\") " pod="openshift-marketplace/redhat-marketplace-mdjb5" Nov 28 09:50:53 crc kubenswrapper[4946]: I1128 09:50:53.178901 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cda7564c-7a7e-4d25-ade4-cd8b4d0602a2-utilities\") pod \"redhat-marketplace-mdjb5\" (UID: \"cda7564c-7a7e-4d25-ade4-cd8b4d0602a2\") " pod="openshift-marketplace/redhat-marketplace-mdjb5" Nov 28 09:50:53 crc kubenswrapper[4946]: I1128 09:50:53.280563 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cda7564c-7a7e-4d25-ade4-cd8b4d0602a2-utilities\") pod \"redhat-marketplace-mdjb5\" (UID: \"cda7564c-7a7e-4d25-ade4-cd8b4d0602a2\") " pod="openshift-marketplace/redhat-marketplace-mdjb5" Nov 28 09:50:53 crc kubenswrapper[4946]: I1128 09:50:53.280829 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cda7564c-7a7e-4d25-ade4-cd8b4d0602a2-catalog-content\") pod \"redhat-marketplace-mdjb5\" (UID: \"cda7564c-7a7e-4d25-ade4-cd8b4d0602a2\") " pod="openshift-marketplace/redhat-marketplace-mdjb5" Nov 28 09:50:53 crc kubenswrapper[4946]: I1128 09:50:53.280902 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc8hp\" (UniqueName: \"kubernetes.io/projected/cda7564c-7a7e-4d25-ade4-cd8b4d0602a2-kube-api-access-rc8hp\") pod \"redhat-marketplace-mdjb5\" (UID: \"cda7564c-7a7e-4d25-ade4-cd8b4d0602a2\") " pod="openshift-marketplace/redhat-marketplace-mdjb5" Nov 28 09:50:53 crc kubenswrapper[4946]: I1128 09:50:53.281362 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cda7564c-7a7e-4d25-ade4-cd8b4d0602a2-utilities\") pod \"redhat-marketplace-mdjb5\" (UID: \"cda7564c-7a7e-4d25-ade4-cd8b4d0602a2\") " pod="openshift-marketplace/redhat-marketplace-mdjb5" Nov 28 09:50:53 crc kubenswrapper[4946]: I1128 09:50:53.281885 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cda7564c-7a7e-4d25-ade4-cd8b4d0602a2-catalog-content\") pod \"redhat-marketplace-mdjb5\" (UID: \"cda7564c-7a7e-4d25-ade4-cd8b4d0602a2\") " pod="openshift-marketplace/redhat-marketplace-mdjb5" Nov 28 09:50:53 crc kubenswrapper[4946]: I1128 09:50:53.300983 4946 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-flcl7"] Nov 28 09:50:53 crc kubenswrapper[4946]: I1128 09:50:53.304396 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-flcl7" Nov 28 09:50:53 crc kubenswrapper[4946]: I1128 09:50:53.332768 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-flcl7"] Nov 28 09:50:53 crc kubenswrapper[4946]: I1128 09:50:53.376289 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc8hp\" (UniqueName: \"kubernetes.io/projected/cda7564c-7a7e-4d25-ade4-cd8b4d0602a2-kube-api-access-rc8hp\") pod \"redhat-marketplace-mdjb5\" (UID: \"cda7564c-7a7e-4d25-ade4-cd8b4d0602a2\") " pod="openshift-marketplace/redhat-marketplace-mdjb5" Nov 28 09:50:53 crc kubenswrapper[4946]: I1128 09:50:53.385770 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qns9\" (UniqueName: \"kubernetes.io/projected/10eab113-a372-4e4f-86f9-ca468122a925-kube-api-access-2qns9\") pod \"redhat-operators-flcl7\" (UID: \"10eab113-a372-4e4f-86f9-ca468122a925\") " pod="openshift-marketplace/redhat-operators-flcl7" Nov 28 09:50:53 crc kubenswrapper[4946]: I1128 09:50:53.385872 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10eab113-a372-4e4f-86f9-ca468122a925-utilities\") pod \"redhat-operators-flcl7\" (UID: \"10eab113-a372-4e4f-86f9-ca468122a925\") " pod="openshift-marketplace/redhat-operators-flcl7" Nov 28 09:50:53 crc kubenswrapper[4946]: I1128 09:50:53.386010 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10eab113-a372-4e4f-86f9-ca468122a925-catalog-content\") pod \"redhat-operators-flcl7\" (UID: \"10eab113-a372-4e4f-86f9-ca468122a925\") " pod="openshift-marketplace/redhat-operators-flcl7" Nov 28 09:50:53 crc kubenswrapper[4946]: I1128 09:50:53.470020 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mdjb5" Nov 28 09:50:53 crc kubenswrapper[4946]: I1128 09:50:53.487305 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10eab113-a372-4e4f-86f9-ca468122a925-catalog-content\") pod \"redhat-operators-flcl7\" (UID: \"10eab113-a372-4e4f-86f9-ca468122a925\") " pod="openshift-marketplace/redhat-operators-flcl7" Nov 28 09:50:53 crc kubenswrapper[4946]: I1128 09:50:53.487405 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qns9\" (UniqueName: \"kubernetes.io/projected/10eab113-a372-4e4f-86f9-ca468122a925-kube-api-access-2qns9\") pod \"redhat-operators-flcl7\" (UID: \"10eab113-a372-4e4f-86f9-ca468122a925\") " pod="openshift-marketplace/redhat-operators-flcl7" Nov 28 09:50:53 crc kubenswrapper[4946]: I1128 09:50:53.487458 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10eab113-a372-4e4f-86f9-ca468122a925-utilities\") pod \"redhat-operators-flcl7\" (UID: \"10eab113-a372-4e4f-86f9-ca468122a925\") " pod="openshift-marketplace/redhat-operators-flcl7" Nov 28 09:50:53 crc kubenswrapper[4946]: I1128 09:50:53.487889 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10eab113-a372-4e4f-86f9-ca468122a925-utilities\") pod \"redhat-operators-flcl7\" (UID: \"10eab113-a372-4e4f-86f9-ca468122a925\") " pod="openshift-marketplace/redhat-operators-flcl7" Nov 28 09:50:53 crc kubenswrapper[4946]: I1128 09:50:53.488486 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10eab113-a372-4e4f-86f9-ca468122a925-catalog-content\") pod \"redhat-operators-flcl7\" (UID: \"10eab113-a372-4e4f-86f9-ca468122a925\") " pod="openshift-marketplace/redhat-operators-flcl7" Nov 28 09:50:53 crc kubenswrapper[4946]: I1128 09:50:53.518908 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qns9\" (UniqueName: \"kubernetes.io/projected/10eab113-a372-4e4f-86f9-ca468122a925-kube-api-access-2qns9\") pod \"redhat-operators-flcl7\" (UID: \"10eab113-a372-4e4f-86f9-ca468122a925\") " pod="openshift-marketplace/redhat-operators-flcl7" Nov 28 09:50:53 crc kubenswrapper[4946]: I1128 09:50:53.743143 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-flcl7" Nov 28 09:50:53 crc kubenswrapper[4946]: I1128 09:50:53.962390 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mdjb5"] Nov 28 09:50:54 crc kubenswrapper[4946]: I1128 09:50:54.221044 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-flcl7"] Nov 28 09:50:54 crc kubenswrapper[4946]: W1128 09:50:54.230508 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10eab113_a372_4e4f_86f9_ca468122a925.slice/crio-445a35ff2a5d8a79308aafdf8d93b6c8906ff2a0cb186d56cd4767aa73dbaa6b WatchSource:0}: Error finding container 445a35ff2a5d8a79308aafdf8d93b6c8906ff2a0cb186d56cd4767aa73dbaa6b: Status 404 returned error can't find the container with id 445a35ff2a5d8a79308aafdf8d93b6c8906ff2a0cb186d56cd4767aa73dbaa6b Nov 28 09:50:54 crc kubenswrapper[4946]: I1128 09:50:54.965714 4946 generic.go:334] "Generic (PLEG): container finished" podID="10eab113-a372-4e4f-86f9-ca468122a925" containerID="114e4901520332a45c63363457527a1af22578310a421a621f12f2843714c409" exitCode=0 Nov 28 09:50:54 crc kubenswrapper[4946]: I1128 09:50:54.965805 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-flcl7" event={"ID":"10eab113-a372-4e4f-86f9-ca468122a925","Type":"ContainerDied","Data":"114e4901520332a45c63363457527a1af22578310a421a621f12f2843714c409"} Nov 28 09:50:54 crc kubenswrapper[4946]: I1128 09:50:54.965844 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-flcl7" event={"ID":"10eab113-a372-4e4f-86f9-ca468122a925","Type":"ContainerStarted","Data":"445a35ff2a5d8a79308aafdf8d93b6c8906ff2a0cb186d56cd4767aa73dbaa6b"} Nov 28 09:50:54 crc kubenswrapper[4946]: I1128 09:50:54.969513 4946 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 09:50:54 crc kubenswrapper[4946]: I1128 09:50:54.970799 4946 generic.go:334] "Generic (PLEG): container finished" podID="cda7564c-7a7e-4d25-ade4-cd8b4d0602a2" containerID="0509496f1dcd8067fd88981eb708cd0a7e3c317fcbd17db58a4adac577e755b7" exitCode=0 Nov 28 09:50:54 crc kubenswrapper[4946]: I1128 09:50:54.970856 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mdjb5" event={"ID":"cda7564c-7a7e-4d25-ade4-cd8b4d0602a2","Type":"ContainerDied","Data":"0509496f1dcd8067fd88981eb708cd0a7e3c317fcbd17db58a4adac577e755b7"} Nov 28 09:50:54 crc kubenswrapper[4946]: I1128 09:50:54.970893 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mdjb5" event={"ID":"cda7564c-7a7e-4d25-ade4-cd8b4d0602a2","Type":"ContainerStarted","Data":"579e89513e77036c63972326828a843cf6f4d39428d3746d0eee72136ffc4dd6"} Nov 28 09:50:55 crc kubenswrapper[4946]: I1128 09:50:55.987690 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mdjb5" event={"ID":"cda7564c-7a7e-4d25-ade4-cd8b4d0602a2","Type":"ContainerStarted","Data":"788002876b3cc846e7658ec4429e1d5a9b0f02f263c96bbc10ffa92b361aa127"} Nov 28 09:50:57 crc kubenswrapper[4946]: I1128 09:50:57.005508 4946 generic.go:334] "Generic (PLEG): container finished" podID="cda7564c-7a7e-4d25-ade4-cd8b4d0602a2" containerID="788002876b3cc846e7658ec4429e1d5a9b0f02f263c96bbc10ffa92b361aa127" exitCode=0 Nov 28 09:50:57 crc kubenswrapper[4946]: I1128 
09:50:57.005596 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mdjb5" event={"ID":"cda7564c-7a7e-4d25-ade4-cd8b4d0602a2","Type":"ContainerDied","Data":"788002876b3cc846e7658ec4429e1d5a9b0f02f263c96bbc10ffa92b361aa127"} Nov 28 09:50:57 crc kubenswrapper[4946]: I1128 09:50:57.010736 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-flcl7" event={"ID":"10eab113-a372-4e4f-86f9-ca468122a925","Type":"ContainerStarted","Data":"bc65923409dad00164eddc6606d73f68915c8c88ec3e0c58f0eb4e9e49e8a1fd"} Nov 28 09:50:59 crc kubenswrapper[4946]: I1128 09:50:59.042225 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mdjb5" event={"ID":"cda7564c-7a7e-4d25-ade4-cd8b4d0602a2","Type":"ContainerStarted","Data":"dcd783e946c64dd4bf80f5c203977dd35257575b33ff0f288c12c9b45069a070"} Nov 28 09:51:00 crc kubenswrapper[4946]: I1128 09:51:00.062822 4946 generic.go:334] "Generic (PLEG): container finished" podID="10eab113-a372-4e4f-86f9-ca468122a925" containerID="bc65923409dad00164eddc6606d73f68915c8c88ec3e0c58f0eb4e9e49e8a1fd" exitCode=0 Nov 28 09:51:00 crc kubenswrapper[4946]: I1128 09:51:00.062910 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-flcl7" event={"ID":"10eab113-a372-4e4f-86f9-ca468122a925","Type":"ContainerDied","Data":"bc65923409dad00164eddc6606d73f68915c8c88ec3e0c58f0eb4e9e49e8a1fd"} Nov 28 09:51:00 crc kubenswrapper[4946]: I1128 09:51:00.137925 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mdjb5" podStartSLOduration=3.848594141 podStartE2EDuration="7.137890537s" podCreationTimestamp="2025-11-28 09:50:53 +0000 UTC" firstStartedPulling="2025-11-28 09:50:54.972609942 +0000 UTC m=+10709.350675083" lastFinishedPulling="2025-11-28 09:50:58.261906358 +0000 UTC m=+10712.639971479" observedRunningTime="2025-11-28 09:51:00.122065406 +0000 UTC m=+10714.500130557" watchObservedRunningTime="2025-11-28 09:51:00.137890537 +0000 UTC m=+10714.515955678" Nov 28 09:51:01 crc kubenswrapper[4946]: I1128 09:51:01.076415 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-flcl7" event={"ID":"10eab113-a372-4e4f-86f9-ca468122a925","Type":"ContainerStarted","Data":"85608dd9fb246dca7e09b47e08ee80534b94523ea9977ae759b17e0c312ca5cd"} Nov 28 09:51:03 crc kubenswrapper[4946]: I1128 09:51:03.471552 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mdjb5" Nov 28 09:51:03 crc kubenswrapper[4946]: I1128 09:51:03.472228 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mdjb5" Nov 28 09:51:03 crc kubenswrapper[4946]: I1128 09:51:03.548280 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mdjb5" Nov 28 09:51:03 crc kubenswrapper[4946]: I1128 09:51:03.583667 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-flcl7" podStartSLOduration=5.013337478 podStartE2EDuration="10.583638858s" podCreationTimestamp="2025-11-28 09:50:53 +0000 UTC" firstStartedPulling="2025-11-28 09:50:54.968550331 +0000 UTC m=+10709.346615472" lastFinishedPulling="2025-11-28 09:51:00.538851731 +0000 UTC m=+10714.916916852" observedRunningTime="2025-11-28 09:51:01.117688187 +0000 UTC 
m=+10715.495753338" watchObservedRunningTime="2025-11-28 09:51:03.583638858 +0000 UTC m=+10717.961704009" Nov 28 09:51:03 crc kubenswrapper[4946]: I1128 09:51:03.744567 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-flcl7" Nov 28 09:51:03 crc kubenswrapper[4946]: I1128 09:51:03.744623 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-flcl7" Nov 28 09:51:04 crc kubenswrapper[4946]: I1128 09:51:04.197316 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mdjb5" Nov 28 09:51:04 crc kubenswrapper[4946]: I1128 09:51:04.816243 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-flcl7" podUID="10eab113-a372-4e4f-86f9-ca468122a925" containerName="registry-server" probeResult="failure" output=< Nov 28 09:51:04 crc kubenswrapper[4946]: timeout: failed to connect service ":50051" within 1s Nov 28 09:51:04 crc kubenswrapper[4946]: > Nov 28 09:51:06 crc kubenswrapper[4946]: I1128 09:51:06.911444 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mdjb5"] Nov 28 09:51:06 crc kubenswrapper[4946]: I1128 09:51:06.912076 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mdjb5" podUID="cda7564c-7a7e-4d25-ade4-cd8b4d0602a2" containerName="registry-server" containerID="cri-o://dcd783e946c64dd4bf80f5c203977dd35257575b33ff0f288c12c9b45069a070" gracePeriod=2 Nov 28 09:51:07 crc kubenswrapper[4946]: I1128 09:51:07.153418 4946 generic.go:334] "Generic (PLEG): container finished" podID="cda7564c-7a7e-4d25-ade4-cd8b4d0602a2" containerID="dcd783e946c64dd4bf80f5c203977dd35257575b33ff0f288c12c9b45069a070" exitCode=0 Nov 28 09:51:07 crc kubenswrapper[4946]: I1128 09:51:07.153477 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mdjb5" event={"ID":"cda7564c-7a7e-4d25-ade4-cd8b4d0602a2","Type":"ContainerDied","Data":"dcd783e946c64dd4bf80f5c203977dd35257575b33ff0f288c12c9b45069a070"} Nov 28 09:51:07 crc kubenswrapper[4946]: I1128 09:51:07.522449 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mdjb5" Nov 28 09:51:07 crc kubenswrapper[4946]: I1128 09:51:07.655537 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cda7564c-7a7e-4d25-ade4-cd8b4d0602a2-catalog-content\") pod \"cda7564c-7a7e-4d25-ade4-cd8b4d0602a2\" (UID: \"cda7564c-7a7e-4d25-ade4-cd8b4d0602a2\") " Nov 28 09:51:07 crc kubenswrapper[4946]: I1128 09:51:07.655641 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rc8hp\" (UniqueName: \"kubernetes.io/projected/cda7564c-7a7e-4d25-ade4-cd8b4d0602a2-kube-api-access-rc8hp\") pod \"cda7564c-7a7e-4d25-ade4-cd8b4d0602a2\" (UID: \"cda7564c-7a7e-4d25-ade4-cd8b4d0602a2\") " Nov 28 09:51:07 crc kubenswrapper[4946]: I1128 09:51:07.655841 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cda7564c-7a7e-4d25-ade4-cd8b4d0602a2-utilities\") pod \"cda7564c-7a7e-4d25-ade4-cd8b4d0602a2\" (UID: \"cda7564c-7a7e-4d25-ade4-cd8b4d0602a2\") " Nov 28 09:51:07 crc kubenswrapper[4946]: I1128 09:51:07.656395 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cda7564c-7a7e-4d25-ade4-cd8b4d0602a2-utilities" (OuterVolumeSpecName: "utilities") pod "cda7564c-7a7e-4d25-ade4-cd8b4d0602a2" (UID: "cda7564c-7a7e-4d25-ade4-cd8b4d0602a2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 09:51:07 crc kubenswrapper[4946]: I1128 09:51:07.663876 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cda7564c-7a7e-4d25-ade4-cd8b4d0602a2-kube-api-access-rc8hp" (OuterVolumeSpecName: "kube-api-access-rc8hp") pod "cda7564c-7a7e-4d25-ade4-cd8b4d0602a2" (UID: "cda7564c-7a7e-4d25-ade4-cd8b4d0602a2"). InnerVolumeSpecName "kube-api-access-rc8hp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:51:07 crc kubenswrapper[4946]: I1128 09:51:07.678330 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cda7564c-7a7e-4d25-ade4-cd8b4d0602a2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cda7564c-7a7e-4d25-ade4-cd8b4d0602a2" (UID: "cda7564c-7a7e-4d25-ade4-cd8b4d0602a2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 09:51:07 crc kubenswrapper[4946]: I1128 09:51:07.759809 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cda7564c-7a7e-4d25-ade4-cd8b4d0602a2-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 09:51:07 crc kubenswrapper[4946]: I1128 09:51:07.759860 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cda7564c-7a7e-4d25-ade4-cd8b4d0602a2-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 09:51:07 crc kubenswrapper[4946]: I1128 09:51:07.759885 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rc8hp\" (UniqueName: \"kubernetes.io/projected/cda7564c-7a7e-4d25-ade4-cd8b4d0602a2-kube-api-access-rc8hp\") on node \"crc\" DevicePath \"\"" Nov 28 09:51:08 crc kubenswrapper[4946]: I1128 09:51:08.172229 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mdjb5" event={"ID":"cda7564c-7a7e-4d25-ade4-cd8b4d0602a2","Type":"ContainerDied","Data":"579e89513e77036c63972326828a843cf6f4d39428d3746d0eee72136ffc4dd6"} Nov 28 09:51:08 crc kubenswrapper[4946]: I1128 09:51:08.172311 4946 scope.go:117] "RemoveContainer" containerID="dcd783e946c64dd4bf80f5c203977dd35257575b33ff0f288c12c9b45069a070" Nov 28 09:51:08 crc kubenswrapper[4946]: I1128 09:51:08.172317 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mdjb5" Nov 28 09:51:08 crc kubenswrapper[4946]: I1128 09:51:08.219589 4946 scope.go:117] "RemoveContainer" containerID="788002876b3cc846e7658ec4429e1d5a9b0f02f263c96bbc10ffa92b361aa127" Nov 28 09:51:08 crc kubenswrapper[4946]: I1128 09:51:08.226691 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mdjb5"] Nov 28 09:51:08 crc kubenswrapper[4946]: I1128 09:51:08.242081 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mdjb5"] Nov 28 09:51:08 crc kubenswrapper[4946]: I1128 09:51:08.265301 4946 scope.go:117] "RemoveContainer" containerID="0509496f1dcd8067fd88981eb708cd0a7e3c317fcbd17db58a4adac577e755b7" Nov 28 09:51:10 crc kubenswrapper[4946]: I1128 09:51:10.011200 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cda7564c-7a7e-4d25-ade4-cd8b4d0602a2" path="/var/lib/kubelet/pods/cda7564c-7a7e-4d25-ade4-cd8b4d0602a2/volumes" Nov 28 09:51:13 crc kubenswrapper[4946]: I1128 09:51:13.833930 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-flcl7" Nov 28 09:51:13 crc kubenswrapper[4946]: I1128 09:51:13.937610 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-flcl7" Nov 28 09:51:14 crc kubenswrapper[4946]: I1128 09:51:14.489559 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-flcl7"] Nov 28 09:51:15 crc kubenswrapper[4946]: I1128 09:51:15.346570 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-flcl7" podUID="10eab113-a372-4e4f-86f9-ca468122a925" containerName="registry-server" containerID="cri-o://85608dd9fb246dca7e09b47e08ee80534b94523ea9977ae759b17e0c312ca5cd" gracePeriod=2 Nov 28 09:51:16 crc kubenswrapper[4946]: I1128 09:51:16.737041 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-flcl7" Nov 28 09:51:16 crc kubenswrapper[4946]: I1128 09:51:16.817383 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10eab113-a372-4e4f-86f9-ca468122a925-utilities\") pod \"10eab113-a372-4e4f-86f9-ca468122a925\" (UID: \"10eab113-a372-4e4f-86f9-ca468122a925\") " Nov 28 09:51:16 crc kubenswrapper[4946]: I1128 09:51:16.817636 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qns9\" (UniqueName: \"kubernetes.io/projected/10eab113-a372-4e4f-86f9-ca468122a925-kube-api-access-2qns9\") pod \"10eab113-a372-4e4f-86f9-ca468122a925\" (UID: \"10eab113-a372-4e4f-86f9-ca468122a925\") " Nov 28 09:51:16 crc kubenswrapper[4946]: I1128 09:51:16.817661 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10eab113-a372-4e4f-86f9-ca468122a925-catalog-content\") pod \"10eab113-a372-4e4f-86f9-ca468122a925\" (UID: \"10eab113-a372-4e4f-86f9-ca468122a925\") " Nov 28 09:51:16 crc kubenswrapper[4946]: I1128 09:51:16.818922 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10eab113-a372-4e4f-86f9-ca468122a925-utilities" (OuterVolumeSpecName: "utilities") pod "10eab113-a372-4e4f-86f9-ca468122a925" (UID: "10eab113-a372-4e4f-86f9-ca468122a925"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 09:51:16 crc kubenswrapper[4946]: I1128 09:51:16.825695 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10eab113-a372-4e4f-86f9-ca468122a925-kube-api-access-2qns9" (OuterVolumeSpecName: "kube-api-access-2qns9") pod "10eab113-a372-4e4f-86f9-ca468122a925" (UID: "10eab113-a372-4e4f-86f9-ca468122a925"). InnerVolumeSpecName "kube-api-access-2qns9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:51:16 crc kubenswrapper[4946]: I1128 09:51:16.920121 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qns9\" (UniqueName: \"kubernetes.io/projected/10eab113-a372-4e4f-86f9-ca468122a925-kube-api-access-2qns9\") on node \"crc\" DevicePath \"\"" Nov 28 09:51:16 crc kubenswrapper[4946]: I1128 09:51:16.920155 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10eab113-a372-4e4f-86f9-ca468122a925-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 09:51:16 crc kubenswrapper[4946]: I1128 09:51:16.931357 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10eab113-a372-4e4f-86f9-ca468122a925-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "10eab113-a372-4e4f-86f9-ca468122a925" (UID: "10eab113-a372-4e4f-86f9-ca468122a925"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 09:51:17 crc kubenswrapper[4946]: I1128 09:51:17.022923 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10eab113-a372-4e4f-86f9-ca468122a925-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 09:51:17 crc kubenswrapper[4946]: I1128 09:51:17.380075 4946 generic.go:334] "Generic (PLEG): container finished" podID="10eab113-a372-4e4f-86f9-ca468122a925" containerID="85608dd9fb246dca7e09b47e08ee80534b94523ea9977ae759b17e0c312ca5cd" exitCode=0 Nov 28 09:51:17 crc kubenswrapper[4946]: I1128 09:51:17.380145 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-flcl7" event={"ID":"10eab113-a372-4e4f-86f9-ca468122a925","Type":"ContainerDied","Data":"85608dd9fb246dca7e09b47e08ee80534b94523ea9977ae759b17e0c312ca5cd"} Nov 28 09:51:17 crc kubenswrapper[4946]: I1128 09:51:17.380239 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-flcl7" event={"ID":"10eab113-a372-4e4f-86f9-ca468122a925","Type":"ContainerDied","Data":"445a35ff2a5d8a79308aafdf8d93b6c8906ff2a0cb186d56cd4767aa73dbaa6b"} Nov 28 09:51:17 crc kubenswrapper[4946]: I1128 09:51:17.380291 4946 scope.go:117] "RemoveContainer" containerID="85608dd9fb246dca7e09b47e08ee80534b94523ea9977ae759b17e0c312ca5cd" Nov 28 09:51:17 crc kubenswrapper[4946]: I1128 09:51:17.380602 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-flcl7" Nov 28 09:51:17 crc kubenswrapper[4946]: I1128 09:51:17.419381 4946 scope.go:117] "RemoveContainer" containerID="bc65923409dad00164eddc6606d73f68915c8c88ec3e0c58f0eb4e9e49e8a1fd" Nov 28 09:51:17 crc kubenswrapper[4946]: I1128 09:51:17.433190 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-flcl7"] Nov 28 09:51:17 crc kubenswrapper[4946]: I1128 09:51:17.447165 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-flcl7"] Nov 28 09:51:17 crc kubenswrapper[4946]: I1128 09:51:17.462368 4946 scope.go:117] "RemoveContainer" containerID="114e4901520332a45c63363457527a1af22578310a421a621f12f2843714c409" Nov 28 09:51:17 crc kubenswrapper[4946]: I1128 09:51:17.524453 4946 scope.go:117] "RemoveContainer" containerID="85608dd9fb246dca7e09b47e08ee80534b94523ea9977ae759b17e0c312ca5cd" Nov 28 09:51:17 crc kubenswrapper[4946]: E1128 09:51:17.525330 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85608dd9fb246dca7e09b47e08ee80534b94523ea9977ae759b17e0c312ca5cd\": container with ID starting with 85608dd9fb246dca7e09b47e08ee80534b94523ea9977ae759b17e0c312ca5cd not found: ID does not exist" containerID="85608dd9fb246dca7e09b47e08ee80534b94523ea9977ae759b17e0c312ca5cd" Nov 28 09:51:17 crc kubenswrapper[4946]: I1128 09:51:17.525421 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85608dd9fb246dca7e09b47e08ee80534b94523ea9977ae759b17e0c312ca5cd"} err="failed to get container status \"85608dd9fb246dca7e09b47e08ee80534b94523ea9977ae759b17e0c312ca5cd\": rpc error: code = NotFound desc = could not find container \"85608dd9fb246dca7e09b47e08ee80534b94523ea9977ae759b17e0c312ca5cd\": container with ID starting with 85608dd9fb246dca7e09b47e08ee80534b94523ea9977ae759b17e0c312ca5cd not found: ID does not exist" Nov 28 09:51:17 crc 
kubenswrapper[4946]: I1128 09:51:17.525487 4946 scope.go:117] "RemoveContainer" containerID="bc65923409dad00164eddc6606d73f68915c8c88ec3e0c58f0eb4e9e49e8a1fd" Nov 28 09:51:17 crc kubenswrapper[4946]: E1128 09:51:17.526024 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc65923409dad00164eddc6606d73f68915c8c88ec3e0c58f0eb4e9e49e8a1fd\": container with ID starting with bc65923409dad00164eddc6606d73f68915c8c88ec3e0c58f0eb4e9e49e8a1fd not found: ID does not exist" containerID="bc65923409dad00164eddc6606d73f68915c8c88ec3e0c58f0eb4e9e49e8a1fd" Nov 28 09:51:17 crc kubenswrapper[4946]: I1128 09:51:17.526063 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc65923409dad00164eddc6606d73f68915c8c88ec3e0c58f0eb4e9e49e8a1fd"} err="failed to get container status \"bc65923409dad00164eddc6606d73f68915c8c88ec3e0c58f0eb4e9e49e8a1fd\": rpc error: code = NotFound desc = could not find container \"bc65923409dad00164eddc6606d73f68915c8c88ec3e0c58f0eb4e9e49e8a1fd\": container with ID starting with bc65923409dad00164eddc6606d73f68915c8c88ec3e0c58f0eb4e9e49e8a1fd not found: ID does not exist" Nov 28 09:51:17 crc kubenswrapper[4946]: I1128 09:51:17.526087 4946 scope.go:117] "RemoveContainer" containerID="114e4901520332a45c63363457527a1af22578310a421a621f12f2843714c409" Nov 28 09:51:17 crc kubenswrapper[4946]: E1128 09:51:17.526592 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"114e4901520332a45c63363457527a1af22578310a421a621f12f2843714c409\": container with ID starting with 114e4901520332a45c63363457527a1af22578310a421a621f12f2843714c409 not found: ID does not exist" containerID="114e4901520332a45c63363457527a1af22578310a421a621f12f2843714c409" Nov 28 09:51:17 crc kubenswrapper[4946]: I1128 09:51:17.526703 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"114e4901520332a45c63363457527a1af22578310a421a621f12f2843714c409"} err="failed to get container status \"114e4901520332a45c63363457527a1af22578310a421a621f12f2843714c409\": rpc error: code = NotFound desc = could not find container \"114e4901520332a45c63363457527a1af22578310a421a621f12f2843714c409\": container with ID starting with 114e4901520332a45c63363457527a1af22578310a421a621f12f2843714c409 not found: ID does not exist" Nov 28 09:51:18 crc kubenswrapper[4946]: I1128 09:51:18.012312 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10eab113-a372-4e4f-86f9-ca468122a925" path="/var/lib/kubelet/pods/10eab113-a372-4e4f-86f9-ca468122a925/volumes" Nov 28 09:51:24 crc kubenswrapper[4946]: I1128 09:51:24.731373 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 09:51:24 crc kubenswrapper[4946]: I1128 09:51:24.732068 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 09:51:54 crc kubenswrapper[4946]: I1128 09:51:54.730651 4946 patch_prober.go:28] interesting 
pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 09:51:54 crc kubenswrapper[4946]: I1128 09:51:54.731749 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 09:52:24 crc kubenswrapper[4946]: I1128 09:52:24.730127 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 09:52:24 crc kubenswrapper[4946]: I1128 09:52:24.730584 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 09:52:24 crc kubenswrapper[4946]: I1128 09:52:24.730636 4946 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" Nov 28 09:52:24 crc kubenswrapper[4946]: I1128 09:52:24.731334 4946 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9a62cfe45010406e7171d6e8faf0db01357bdb37025af68919c69344f90739c1"} pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 09:52:24 crc kubenswrapper[4946]: I1128 09:52:24.731372 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" containerID="cri-o://9a62cfe45010406e7171d6e8faf0db01357bdb37025af68919c69344f90739c1" gracePeriod=600 Nov 28 09:52:25 crc kubenswrapper[4946]: I1128 09:52:25.280455 4946 generic.go:334] "Generic (PLEG): container finished" podID="7450befc-262f-45d1-a5f4-f445e540185b" containerID="9a62cfe45010406e7171d6e8faf0db01357bdb37025af68919c69344f90739c1" exitCode=0 Nov 28 09:52:25 crc kubenswrapper[4946]: I1128 09:52:25.280509 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerDied","Data":"9a62cfe45010406e7171d6e8faf0db01357bdb37025af68919c69344f90739c1"} Nov 28 09:52:25 crc kubenswrapper[4946]: I1128 09:52:25.280924 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerStarted","Data":"ccd8a815e9e04d19954df00f8767b62b750ccc8573bf748b49fd656899ed7700"} Nov 28 09:52:25 crc kubenswrapper[4946]: I1128 09:52:25.280971 4946 scope.go:117] "RemoveContainer" containerID="8f83fc9ef9651ce9a741597d9dbec67f4360270c35ac4dbbeeea2e39c064bf29" Nov 28 09:53:12 
crc kubenswrapper[4946]: I1128 09:53:12.194049 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-chjvs"] Nov 28 09:53:12 crc kubenswrapper[4946]: E1128 09:53:12.195483 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cda7564c-7a7e-4d25-ade4-cd8b4d0602a2" containerName="registry-server" Nov 28 09:53:12 crc kubenswrapper[4946]: I1128 09:53:12.195507 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="cda7564c-7a7e-4d25-ade4-cd8b4d0602a2" containerName="registry-server" Nov 28 09:53:12 crc kubenswrapper[4946]: E1128 09:53:12.195545 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10eab113-a372-4e4f-86f9-ca468122a925" containerName="extract-utilities" Nov 28 09:53:12 crc kubenswrapper[4946]: I1128 09:53:12.195560 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="10eab113-a372-4e4f-86f9-ca468122a925" containerName="extract-utilities" Nov 28 09:53:12 crc kubenswrapper[4946]: E1128 09:53:12.195580 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cda7564c-7a7e-4d25-ade4-cd8b4d0602a2" containerName="extract-content" Nov 28 09:53:12 crc kubenswrapper[4946]: I1128 09:53:12.195595 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="cda7564c-7a7e-4d25-ade4-cd8b4d0602a2" containerName="extract-content" Nov 28 09:53:12 crc kubenswrapper[4946]: E1128 09:53:12.195620 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10eab113-a372-4e4f-86f9-ca468122a925" containerName="extract-content" Nov 28 09:53:12 crc kubenswrapper[4946]: I1128 09:53:12.195635 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="10eab113-a372-4e4f-86f9-ca468122a925" containerName="extract-content" Nov 28 09:53:12 crc kubenswrapper[4946]: E1128 09:53:12.195669 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cda7564c-7a7e-4d25-ade4-cd8b4d0602a2" containerName="extract-utilities" Nov 28 09:53:12 crc kubenswrapper[4946]: I1128 09:53:12.195682 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="cda7564c-7a7e-4d25-ade4-cd8b4d0602a2" containerName="extract-utilities" Nov 28 09:53:12 crc kubenswrapper[4946]: E1128 09:53:12.195713 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10eab113-a372-4e4f-86f9-ca468122a925" containerName="registry-server" Nov 28 09:53:12 crc kubenswrapper[4946]: I1128 09:53:12.195726 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="10eab113-a372-4e4f-86f9-ca468122a925" containerName="registry-server" Nov 28 09:53:12 crc kubenswrapper[4946]: I1128 09:53:12.196106 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="10eab113-a372-4e4f-86f9-ca468122a925" containerName="registry-server" Nov 28 09:53:12 crc kubenswrapper[4946]: I1128 09:53:12.196180 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="cda7564c-7a7e-4d25-ade4-cd8b4d0602a2" containerName="registry-server" Nov 28 09:53:12 crc kubenswrapper[4946]: I1128 09:53:12.199543 4946 util.go:30] "No sandbox for pod can be found. 
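
The machine-config-daemon liveness checks above are plain HTTP GETs against 127.0.0.1:8798/health; after the third consecutive "connection refused" (09:51:24, 09:51:54, 09:52:24) the kubelet declares the probe unhealthy and restarts the container. Three consecutive failures matches the default failureThreshold, though the pod's actual spec is not in the log. A minimal sketch of one check plus the threshold loop:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// httpLive is the core of an HTTP liveness check: GET the URL,
// treat any 2xx/3xx status as healthy.
func httpLive(url string, timeout time.Duration) error {
	client := &http.Client{Timeout: timeout}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "connect: connection refused", as in the log
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		return nil
	}
	return fmt.Errorf("unexpected status %d", resp.StatusCode)
}

func main() {
	const failureThreshold = 3 // Kubernetes default (assumption; not in the log)
	for i := 0; i < failureThreshold; i++ {
		if err := httpLive("http://127.0.0.1:8798/health", time.Second); err != nil {
			fmt.Println("Probe failed:", err)
			time.Sleep(time.Second) // stand-in for periodSeconds between checks
			continue
		}
		fmt.Println("healthy") // a success would reset the kubelet's failure count
		return
	}
	fmt.Println("threshold reached: kubelet would kill and restart the container")
}
```
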
Need to start a new one" pod="openshift-marketplace/community-operators-chjvs" Nov 28 09:53:12 crc kubenswrapper[4946]: I1128 09:53:12.206177 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-chjvs"] Nov 28 09:53:12 crc kubenswrapper[4946]: I1128 09:53:12.223635 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a55f92f-68a0-43df-aec1-78bd4dd964e8-catalog-content\") pod \"community-operators-chjvs\" (UID: \"3a55f92f-68a0-43df-aec1-78bd4dd964e8\") " pod="openshift-marketplace/community-operators-chjvs" Nov 28 09:53:12 crc kubenswrapper[4946]: I1128 09:53:12.223835 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a55f92f-68a0-43df-aec1-78bd4dd964e8-utilities\") pod \"community-operators-chjvs\" (UID: \"3a55f92f-68a0-43df-aec1-78bd4dd964e8\") " pod="openshift-marketplace/community-operators-chjvs" Nov 28 09:53:12 crc kubenswrapper[4946]: I1128 09:53:12.223894 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhqxh\" (UniqueName: \"kubernetes.io/projected/3a55f92f-68a0-43df-aec1-78bd4dd964e8-kube-api-access-jhqxh\") pod \"community-operators-chjvs\" (UID: \"3a55f92f-68a0-43df-aec1-78bd4dd964e8\") " pod="openshift-marketplace/community-operators-chjvs" Nov 28 09:53:12 crc kubenswrapper[4946]: I1128 09:53:12.326274 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a55f92f-68a0-43df-aec1-78bd4dd964e8-utilities\") pod \"community-operators-chjvs\" (UID: \"3a55f92f-68a0-43df-aec1-78bd4dd964e8\") " pod="openshift-marketplace/community-operators-chjvs" Nov 28 09:53:12 crc kubenswrapper[4946]: I1128 09:53:12.332623 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhqxh\" (UniqueName: \"kubernetes.io/projected/3a55f92f-68a0-43df-aec1-78bd4dd964e8-kube-api-access-jhqxh\") pod \"community-operators-chjvs\" (UID: \"3a55f92f-68a0-43df-aec1-78bd4dd964e8\") " pod="openshift-marketplace/community-operators-chjvs" Nov 28 09:53:12 crc kubenswrapper[4946]: I1128 09:53:12.332712 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a55f92f-68a0-43df-aec1-78bd4dd964e8-utilities\") pod \"community-operators-chjvs\" (UID: \"3a55f92f-68a0-43df-aec1-78bd4dd964e8\") " pod="openshift-marketplace/community-operators-chjvs" Nov 28 09:53:12 crc kubenswrapper[4946]: I1128 09:53:12.332854 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a55f92f-68a0-43df-aec1-78bd4dd964e8-catalog-content\") pod \"community-operators-chjvs\" (UID: \"3a55f92f-68a0-43df-aec1-78bd4dd964e8\") " pod="openshift-marketplace/community-operators-chjvs" Nov 28 09:53:12 crc kubenswrapper[4946]: I1128 09:53:12.333654 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a55f92f-68a0-43df-aec1-78bd4dd964e8-catalog-content\") pod \"community-operators-chjvs\" (UID: \"3a55f92f-68a0-43df-aec1-78bd4dd964e8\") " pod="openshift-marketplace/community-operators-chjvs" Nov 28 09:53:12 crc kubenswrapper[4946]: I1128 09:53:12.365165 4946 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jhqxh\" (UniqueName: \"kubernetes.io/projected/3a55f92f-68a0-43df-aec1-78bd4dd964e8-kube-api-access-jhqxh\") pod \"community-operators-chjvs\" (UID: \"3a55f92f-68a0-43df-aec1-78bd4dd964e8\") " pod="openshift-marketplace/community-operators-chjvs" Nov 28 09:53:12 crc kubenswrapper[4946]: I1128 09:53:12.530149 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-chjvs" Nov 28 09:53:13 crc kubenswrapper[4946]: I1128 09:53:13.059738 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-chjvs"] Nov 28 09:53:13 crc kubenswrapper[4946]: W1128 09:53:13.320640 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a55f92f_68a0_43df_aec1_78bd4dd964e8.slice/crio-32661197d065833d4f0c3bc53ac78cb4de50b8cb2801ac83c4e24f128a44ea79 WatchSource:0}: Error finding container 32661197d065833d4f0c3bc53ac78cb4de50b8cb2801ac83c4e24f128a44ea79: Status 404 returned error can't find the container with id 32661197d065833d4f0c3bc53ac78cb4de50b8cb2801ac83c4e24f128a44ea79 Nov 28 09:53:14 crc kubenswrapper[4946]: I1128 09:53:14.090677 4946 generic.go:334] "Generic (PLEG): container finished" podID="3a55f92f-68a0-43df-aec1-78bd4dd964e8" containerID="93f448434fd2f8214a01e1a7b40407344897325b8c465f7c396add40b0eb8f53" exitCode=0 Nov 28 09:53:14 crc kubenswrapper[4946]: I1128 09:53:14.090804 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chjvs" event={"ID":"3a55f92f-68a0-43df-aec1-78bd4dd964e8","Type":"ContainerDied","Data":"93f448434fd2f8214a01e1a7b40407344897325b8c465f7c396add40b0eb8f53"} Nov 28 09:53:14 crc kubenswrapper[4946]: I1128 09:53:14.091513 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chjvs" event={"ID":"3a55f92f-68a0-43df-aec1-78bd4dd964e8","Type":"ContainerStarted","Data":"32661197d065833d4f0c3bc53ac78cb4de50b8cb2801ac83c4e24f128a44ea79"} Nov 28 09:53:16 crc kubenswrapper[4946]: I1128 09:53:16.114578 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chjvs" event={"ID":"3a55f92f-68a0-43df-aec1-78bd4dd964e8","Type":"ContainerStarted","Data":"96ccc01a1aa6fbef746f58bba40ab4b12218169820bb5f8679592913f82aa8f8"} Nov 28 09:53:17 crc kubenswrapper[4946]: I1128 09:53:17.134095 4946 generic.go:334] "Generic (PLEG): container finished" podID="3a55f92f-68a0-43df-aec1-78bd4dd964e8" containerID="96ccc01a1aa6fbef746f58bba40ab4b12218169820bb5f8679592913f82aa8f8" exitCode=0 Nov 28 09:53:17 crc kubenswrapper[4946]: I1128 09:53:17.134231 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chjvs" event={"ID":"3a55f92f-68a0-43df-aec1-78bd4dd964e8","Type":"ContainerDied","Data":"96ccc01a1aa6fbef746f58bba40ab4b12218169820bb5f8679592913f82aa8f8"} Nov 28 09:53:18 crc kubenswrapper[4946]: I1128 09:53:18.147723 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chjvs" event={"ID":"3a55f92f-68a0-43df-aec1-78bd4dd964e8","Type":"ContainerStarted","Data":"49cd6cde230fccdf9f9b872aa93af2e8f59f70a1a93d376e569b8e4c6186678a"} Nov 28 09:53:18 crc kubenswrapper[4946]: I1128 09:53:18.176228 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-chjvs" 
Nov 28 09:53:22 crc kubenswrapper[4946]: I1128 09:53:22.530498 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-chjvs"
Nov 28 09:53:22 crc kubenswrapper[4946]: I1128 09:53:22.531130 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-chjvs"
Nov 28 09:53:22 crc kubenswrapper[4946]: I1128 09:53:22.604142 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-chjvs"
Nov 28 09:53:23 crc kubenswrapper[4946]: I1128 09:53:23.286338 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-chjvs"
Nov 28 09:53:23 crc kubenswrapper[4946]: I1128 09:53:23.355771 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-chjvs"]
Nov 28 09:53:25 crc kubenswrapper[4946]: I1128 09:53:25.243015 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-chjvs" podUID="3a55f92f-68a0-43df-aec1-78bd4dd964e8" containerName="registry-server" containerID="cri-o://49cd6cde230fccdf9f9b872aa93af2e8f59f70a1a93d376e569b8e4c6186678a" gracePeriod=2
Nov 28 09:53:25 crc kubenswrapper[4946]: I1128 09:53:25.866776 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-chjvs"
Nov 28 09:53:25 crc kubenswrapper[4946]: I1128 09:53:25.980700 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a55f92f-68a0-43df-aec1-78bd4dd964e8-catalog-content\") pod \"3a55f92f-68a0-43df-aec1-78bd4dd964e8\" (UID: \"3a55f92f-68a0-43df-aec1-78bd4dd964e8\") "
Nov 28 09:53:25 crc kubenswrapper[4946]: I1128 09:53:25.981303 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhqxh\" (UniqueName: \"kubernetes.io/projected/3a55f92f-68a0-43df-aec1-78bd4dd964e8-kube-api-access-jhqxh\") pod \"3a55f92f-68a0-43df-aec1-78bd4dd964e8\" (UID: \"3a55f92f-68a0-43df-aec1-78bd4dd964e8\") "
Nov 28 09:53:25 crc kubenswrapper[4946]: I1128 09:53:25.981347 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a55f92f-68a0-43df-aec1-78bd4dd964e8-utilities\") pod \"3a55f92f-68a0-43df-aec1-78bd4dd964e8\" (UID: \"3a55f92f-68a0-43df-aec1-78bd4dd964e8\") "
Nov 28 09:53:25 crc kubenswrapper[4946]: I1128 09:53:25.982215 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a55f92f-68a0-43df-aec1-78bd4dd964e8-utilities" (OuterVolumeSpecName: "utilities") pod "3a55f92f-68a0-43df-aec1-78bd4dd964e8" (UID: "3a55f92f-68a0-43df-aec1-78bd4dd964e8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 09:53:26 crc kubenswrapper[4946]: I1128 09:53:26.041440 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a55f92f-68a0-43df-aec1-78bd4dd964e8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3a55f92f-68a0-43df-aec1-78bd4dd964e8" (UID: "3a55f92f-68a0-43df-aec1-78bd4dd964e8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 09:53:26 crc kubenswrapper[4946]: I1128 09:53:26.085637 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a55f92f-68a0-43df-aec1-78bd4dd964e8-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 09:53:26 crc kubenswrapper[4946]: I1128 09:53:26.085870 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a55f92f-68a0-43df-aec1-78bd4dd964e8-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 09:53:26 crc kubenswrapper[4946]: I1128 09:53:26.256679 4946 generic.go:334] "Generic (PLEG): container finished" podID="3a55f92f-68a0-43df-aec1-78bd4dd964e8" containerID="49cd6cde230fccdf9f9b872aa93af2e8f59f70a1a93d376e569b8e4c6186678a" exitCode=0
Nov 28 09:53:26 crc kubenswrapper[4946]: I1128 09:53:26.256725 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chjvs" event={"ID":"3a55f92f-68a0-43df-aec1-78bd4dd964e8","Type":"ContainerDied","Data":"49cd6cde230fccdf9f9b872aa93af2e8f59f70a1a93d376e569b8e4c6186678a"}
Nov 28 09:53:26 crc kubenswrapper[4946]: I1128 09:53:26.256760 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chjvs" event={"ID":"3a55f92f-68a0-43df-aec1-78bd4dd964e8","Type":"ContainerDied","Data":"32661197d065833d4f0c3bc53ac78cb4de50b8cb2801ac83c4e24f128a44ea79"}
Nov 28 09:53:26 crc kubenswrapper[4946]: I1128 09:53:26.256765 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-chjvs"
Nov 28 09:53:26 crc kubenswrapper[4946]: I1128 09:53:26.256777 4946 scope.go:117] "RemoveContainer" containerID="49cd6cde230fccdf9f9b872aa93af2e8f59f70a1a93d376e569b8e4c6186678a"
Nov 28 09:53:26 crc kubenswrapper[4946]: I1128 09:53:26.280250 4946 scope.go:117] "RemoveContainer" containerID="96ccc01a1aa6fbef746f58bba40ab4b12218169820bb5f8679592913f82aa8f8"
Nov 28 09:53:26 crc kubenswrapper[4946]: I1128 09:53:26.667271 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a55f92f-68a0-43df-aec1-78bd4dd964e8-kube-api-access-jhqxh" (OuterVolumeSpecName: "kube-api-access-jhqxh") pod "3a55f92f-68a0-43df-aec1-78bd4dd964e8" (UID: "3a55f92f-68a0-43df-aec1-78bd4dd964e8"). InnerVolumeSpecName "kube-api-access-jhqxh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 09:53:26 crc kubenswrapper[4946]: I1128 09:53:26.683862 4946 scope.go:117] "RemoveContainer" containerID="93f448434fd2f8214a01e1a7b40407344897325b8c465f7c396add40b0eb8f53"
Nov 28 09:53:26 crc kubenswrapper[4946]: I1128 09:53:26.705721 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhqxh\" (UniqueName: \"kubernetes.io/projected/3a55f92f-68a0-43df-aec1-78bd4dd964e8-kube-api-access-jhqxh\") on node \"crc\" DevicePath \"\""
Nov 28 09:53:26 crc kubenswrapper[4946]: I1128 09:53:26.902693 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-chjvs"]
Nov 28 09:53:26 crc kubenswrapper[4946]: I1128 09:53:26.915310 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-chjvs"]
Nov 28 09:53:26 crc kubenswrapper[4946]: I1128 09:53:26.919475 4946 scope.go:117] "RemoveContainer" containerID="49cd6cde230fccdf9f9b872aa93af2e8f59f70a1a93d376e569b8e4c6186678a"
Nov 28 09:53:26 crc kubenswrapper[4946]: E1128 09:53:26.919866 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49cd6cde230fccdf9f9b872aa93af2e8f59f70a1a93d376e569b8e4c6186678a\": container with ID starting with 49cd6cde230fccdf9f9b872aa93af2e8f59f70a1a93d376e569b8e4c6186678a not found: ID does not exist" containerID="49cd6cde230fccdf9f9b872aa93af2e8f59f70a1a93d376e569b8e4c6186678a"
Nov 28 09:53:26 crc kubenswrapper[4946]: I1128 09:53:26.919958 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49cd6cde230fccdf9f9b872aa93af2e8f59f70a1a93d376e569b8e4c6186678a"} err="failed to get container status \"49cd6cde230fccdf9f9b872aa93af2e8f59f70a1a93d376e569b8e4c6186678a\": rpc error: code = NotFound desc = could not find container \"49cd6cde230fccdf9f9b872aa93af2e8f59f70a1a93d376e569b8e4c6186678a\": container with ID starting with 49cd6cde230fccdf9f9b872aa93af2e8f59f70a1a93d376e569b8e4c6186678a not found: ID does not exist"
Nov 28 09:53:26 crc kubenswrapper[4946]: I1128 09:53:26.920048 4946 scope.go:117] "RemoveContainer" containerID="96ccc01a1aa6fbef746f58bba40ab4b12218169820bb5f8679592913f82aa8f8"
Nov 28 09:53:26 crc kubenswrapper[4946]: E1128 09:53:26.920458 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96ccc01a1aa6fbef746f58bba40ab4b12218169820bb5f8679592913f82aa8f8\": container with ID starting with 96ccc01a1aa6fbef746f58bba40ab4b12218169820bb5f8679592913f82aa8f8 not found: ID does not exist" containerID="96ccc01a1aa6fbef746f58bba40ab4b12218169820bb5f8679592913f82aa8f8"
Nov 28 09:53:26 crc kubenswrapper[4946]: I1128 09:53:26.920555 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96ccc01a1aa6fbef746f58bba40ab4b12218169820bb5f8679592913f82aa8f8"} err="failed to get container status \"96ccc01a1aa6fbef746f58bba40ab4b12218169820bb5f8679592913f82aa8f8\": rpc error: code = NotFound desc = could not find container \"96ccc01a1aa6fbef746f58bba40ab4b12218169820bb5f8679592913f82aa8f8\": container with ID starting with 96ccc01a1aa6fbef746f58bba40ab4b12218169820bb5f8679592913f82aa8f8 not found: ID does not exist"
Nov 28 09:53:26 crc kubenswrapper[4946]: I1128 09:53:26.920621 4946 scope.go:117] "RemoveContainer" containerID="93f448434fd2f8214a01e1a7b40407344897325b8c465f7c396add40b0eb8f53"
Nov 28 09:53:26 crc kubenswrapper[4946]: E1128 09:53:26.921061 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93f448434fd2f8214a01e1a7b40407344897325b8c465f7c396add40b0eb8f53\": container with ID starting with 93f448434fd2f8214a01e1a7b40407344897325b8c465f7c396add40b0eb8f53 not found: ID does not exist" containerID="93f448434fd2f8214a01e1a7b40407344897325b8c465f7c396add40b0eb8f53"
Nov 28 09:53:26 crc kubenswrapper[4946]: I1128 09:53:26.921148 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93f448434fd2f8214a01e1a7b40407344897325b8c465f7c396add40b0eb8f53"} err="failed to get container status \"93f448434fd2f8214a01e1a7b40407344897325b8c465f7c396add40b0eb8f53\": rpc error: code = NotFound desc = could not find container \"93f448434fd2f8214a01e1a7b40407344897325b8c465f7c396add40b0eb8f53\": container with ID starting with 93f448434fd2f8214a01e1a7b40407344897325b8c465f7c396add40b0eb8f53 not found: ID does not exist"
Nov 28 09:53:28 crc kubenswrapper[4946]: I1128 09:53:28.030051 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a55f92f-68a0-43df-aec1-78bd4dd964e8" path="/var/lib/kubelet/pods/3a55f92f-68a0-43df-aec1-78bd4dd964e8/volumes"
Nov 28 09:54:54 crc kubenswrapper[4946]: I1128 09:54:54.731145 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 09:54:54 crc kubenswrapper[4946]: I1128 09:54:54.732058 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 09:55:24 crc kubenswrapper[4946]: I1128 09:55:24.730506 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 09:55:24 crc kubenswrapper[4946]: I1128 09:55:24.731222 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 09:55:54 crc kubenswrapper[4946]: I1128 09:55:54.730840 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 09:55:54 crc kubenswrapper[4946]: I1128 09:55:54.731867 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 09:55:54 crc kubenswrapper[4946]: I1128 09:55:54.731960 4946 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr"
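
The liveness failures above land at exactly 30-second intervals (09:54:54, 09:55:24, 09:55:54), and the "unhealthy" verdict follows the third, consistent with a probe configured with periodSeconds=30 and failureThreshold=3 (an assumption; the pod spec is not in this log). The failure is at the TCP layer: nothing is listening on 127.0.0.1:8798. A minimal sketch of the same HTTP GET health check, not kubelet's actual prober code:

    // Sketch: HTTP liveness check against the endpoint seen in the log.
    // A refused TCP connection surfaces as a transport error -> "failure";
    // kubelet's HTTP probes otherwise treat 2xx/3xx status codes as success.
    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func probe(url string) error {
        client := &http.Client{Timeout: time.Second}
        resp, err := client.Get(url)
        if err != nil {
            return err // e.g. "dial tcp 127.0.0.1:8798: connect: connection refused"
        }
        defer resp.Body.Close()
        if resp.StatusCode < 200 || resp.StatusCode >= 400 {
            return fmt.Errorf("unexpected status %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        if err := probe("http://127.0.0.1:8798/health"); err != nil {
            fmt.Println("Probe failed:", err)
        }
    }
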
Nov 28 09:55:54 crc kubenswrapper[4946]: I1128 09:55:54.733596 4946 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ccd8a815e9e04d19954df00f8767b62b750ccc8573bf748b49fd656899ed7700"} pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 28 09:55:54 crc kubenswrapper[4946]: I1128 09:55:54.733734 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" containerID="cri-o://ccd8a815e9e04d19954df00f8767b62b750ccc8573bf748b49fd656899ed7700" gracePeriod=600
Nov 28 09:55:55 crc kubenswrapper[4946]: I1128 09:55:55.121876 4946 generic.go:334] "Generic (PLEG): container finished" podID="7450befc-262f-45d1-a5f4-f445e540185b" containerID="ccd8a815e9e04d19954df00f8767b62b750ccc8573bf748b49fd656899ed7700" exitCode=0
Nov 28 09:55:55 crc kubenswrapper[4946]: I1128 09:55:55.122014 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerDied","Data":"ccd8a815e9e04d19954df00f8767b62b750ccc8573bf748b49fd656899ed7700"}
Nov 28 09:55:55 crc kubenswrapper[4946]: I1128 09:55:55.122079 4946 scope.go:117] "RemoveContainer" containerID="9a62cfe45010406e7171d6e8faf0db01357bdb37025af68919c69344f90739c1"
Nov 28 09:55:55 crc kubenswrapper[4946]: E1128 09:55:55.465641 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
Nov 28 09:55:56 crc kubenswrapper[4946]: I1128 09:55:56.135227 4946 scope.go:117] "RemoveContainer" containerID="ccd8a815e9e04d19954df00f8767b62b750ccc8573bf748b49fd656899ed7700"
Nov 28 09:55:56 crc kubenswrapper[4946]: E1128 09:55:56.135522 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
Nov 28 09:56:10 crc kubenswrapper[4946]: I1128 09:56:10.991897 4946 scope.go:117] "RemoveContainer" containerID="ccd8a815e9e04d19954df00f8767b62b750ccc8573bf748b49fd656899ed7700"
Nov 28 09:56:10 crc kubenswrapper[4946]: E1128 09:56:10.993692 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"
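
After the probe-triggered kill, the kubelet refuses every restart attempt with CrashLoopBackOff until the backoff timer expires; "back-off 5m0s" means this container has already been restarted often enough to hit the cap. The repeated RemoveContainer / "Error syncing pod" pairs that follow are periodic pod syncs bouncing off that gate, not new restarts. A sketch of the schedule, assuming the upstream kubelet defaults of a 10s base doubling to a 5m cap:

    // Sketch: kubelet-style container restart backoff (assumed defaults:
    // 10s base, doubled per restart, capped at 5m -> "back-off 5m0s").
    package main

    import (
        "fmt"
        "time"
    )

    func backoff(restarts int) time.Duration {
        d := 10 * time.Second
        for i := 0; i < restarts; i++ {
            d *= 2
            if d >= 5*time.Minute {
                return 5 * time.Minute
            }
        }
        return d
    }

    func main() {
        for r := 0; r <= 5; r++ {
            fmt.Printf("restart %d -> wait %v\n", r, backoff(r))
        }
        // restart 5 -> wait 5m0s: the cap is reached after a handful of restarts
    }
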
pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:56:23 crc kubenswrapper[4946]: I1128 09:56:23.991580 4946 scope.go:117] "RemoveContainer" containerID="ccd8a815e9e04d19954df00f8767b62b750ccc8573bf748b49fd656899ed7700" Nov 28 09:56:23 crc kubenswrapper[4946]: E1128 09:56:23.992609 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:56:32 crc kubenswrapper[4946]: I1128 09:56:32.044931 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gxkjh"] Nov 28 09:56:32 crc kubenswrapper[4946]: E1128 09:56:32.046351 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a55f92f-68a0-43df-aec1-78bd4dd964e8" containerName="extract-utilities" Nov 28 09:56:32 crc kubenswrapper[4946]: I1128 09:56:32.046379 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a55f92f-68a0-43df-aec1-78bd4dd964e8" containerName="extract-utilities" Nov 28 09:56:32 crc kubenswrapper[4946]: E1128 09:56:32.046438 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a55f92f-68a0-43df-aec1-78bd4dd964e8" containerName="registry-server" Nov 28 09:56:32 crc kubenswrapper[4946]: I1128 09:56:32.046452 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a55f92f-68a0-43df-aec1-78bd4dd964e8" containerName="registry-server" Nov 28 09:56:32 crc kubenswrapper[4946]: E1128 09:56:32.046503 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a55f92f-68a0-43df-aec1-78bd4dd964e8" containerName="extract-content" Nov 28 09:56:32 crc kubenswrapper[4946]: I1128 09:56:32.046516 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a55f92f-68a0-43df-aec1-78bd4dd964e8" containerName="extract-content" Nov 28 09:56:32 crc kubenswrapper[4946]: I1128 09:56:32.046880 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a55f92f-68a0-43df-aec1-78bd4dd964e8" containerName="registry-server" Nov 28 09:56:32 crc kubenswrapper[4946]: I1128 09:56:32.049641 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gxkjh" Nov 28 09:56:32 crc kubenswrapper[4946]: I1128 09:56:32.059807 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gxkjh"] Nov 28 09:56:32 crc kubenswrapper[4946]: I1128 09:56:32.148604 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9164bedb-5437-4108-b615-c3f937cd939d-catalog-content\") pod \"certified-operators-gxkjh\" (UID: \"9164bedb-5437-4108-b615-c3f937cd939d\") " pod="openshift-marketplace/certified-operators-gxkjh" Nov 28 09:56:32 crc kubenswrapper[4946]: I1128 09:56:32.148682 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhxl2\" (UniqueName: \"kubernetes.io/projected/9164bedb-5437-4108-b615-c3f937cd939d-kube-api-access-fhxl2\") pod \"certified-operators-gxkjh\" (UID: \"9164bedb-5437-4108-b615-c3f937cd939d\") " pod="openshift-marketplace/certified-operators-gxkjh" Nov 28 09:56:32 crc kubenswrapper[4946]: I1128 09:56:32.149446 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9164bedb-5437-4108-b615-c3f937cd939d-utilities\") pod \"certified-operators-gxkjh\" (UID: \"9164bedb-5437-4108-b615-c3f937cd939d\") " pod="openshift-marketplace/certified-operators-gxkjh" Nov 28 09:56:32 crc kubenswrapper[4946]: I1128 09:56:32.251579 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9164bedb-5437-4108-b615-c3f937cd939d-utilities\") pod \"certified-operators-gxkjh\" (UID: \"9164bedb-5437-4108-b615-c3f937cd939d\") " pod="openshift-marketplace/certified-operators-gxkjh" Nov 28 09:56:32 crc kubenswrapper[4946]: I1128 09:56:32.252008 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9164bedb-5437-4108-b615-c3f937cd939d-catalog-content\") pod \"certified-operators-gxkjh\" (UID: \"9164bedb-5437-4108-b615-c3f937cd939d\") " pod="openshift-marketplace/certified-operators-gxkjh" Nov 28 09:56:32 crc kubenswrapper[4946]: I1128 09:56:32.252049 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhxl2\" (UniqueName: \"kubernetes.io/projected/9164bedb-5437-4108-b615-c3f937cd939d-kube-api-access-fhxl2\") pod \"certified-operators-gxkjh\" (UID: \"9164bedb-5437-4108-b615-c3f937cd939d\") " pod="openshift-marketplace/certified-operators-gxkjh" Nov 28 09:56:32 crc kubenswrapper[4946]: I1128 09:56:32.252153 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9164bedb-5437-4108-b615-c3f937cd939d-utilities\") pod \"certified-operators-gxkjh\" (UID: \"9164bedb-5437-4108-b615-c3f937cd939d\") " pod="openshift-marketplace/certified-operators-gxkjh" Nov 28 09:56:32 crc kubenswrapper[4946]: I1128 09:56:32.252498 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9164bedb-5437-4108-b615-c3f937cd939d-catalog-content\") pod \"certified-operators-gxkjh\" (UID: \"9164bedb-5437-4108-b615-c3f937cd939d\") " pod="openshift-marketplace/certified-operators-gxkjh" Nov 28 09:56:32 crc kubenswrapper[4946]: I1128 09:56:32.275647 4946 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-fhxl2\" (UniqueName: \"kubernetes.io/projected/9164bedb-5437-4108-b615-c3f937cd939d-kube-api-access-fhxl2\") pod \"certified-operators-gxkjh\" (UID: \"9164bedb-5437-4108-b615-c3f937cd939d\") " pod="openshift-marketplace/certified-operators-gxkjh" Nov 28 09:56:32 crc kubenswrapper[4946]: I1128 09:56:32.403911 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gxkjh" Nov 28 09:56:33 crc kubenswrapper[4946]: I1128 09:56:33.056240 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gxkjh"] Nov 28 09:56:33 crc kubenswrapper[4946]: I1128 09:56:33.681543 4946 generic.go:334] "Generic (PLEG): container finished" podID="9164bedb-5437-4108-b615-c3f937cd939d" containerID="3620a36bb61cf63800a53fb3a78afdca32dfc9e1116abf2915cc50716d44ae02" exitCode=0 Nov 28 09:56:33 crc kubenswrapper[4946]: I1128 09:56:33.681679 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gxkjh" event={"ID":"9164bedb-5437-4108-b615-c3f937cd939d","Type":"ContainerDied","Data":"3620a36bb61cf63800a53fb3a78afdca32dfc9e1116abf2915cc50716d44ae02"} Nov 28 09:56:33 crc kubenswrapper[4946]: I1128 09:56:33.682250 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gxkjh" event={"ID":"9164bedb-5437-4108-b615-c3f937cd939d","Type":"ContainerStarted","Data":"5905ce3efb1974ead80d3e71ca4ee2acc9533f157a7a60a9323a6a5ecbe96465"} Nov 28 09:56:34 crc kubenswrapper[4946]: I1128 09:56:34.695604 4946 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 09:56:34 crc kubenswrapper[4946]: I1128 09:56:34.990317 4946 scope.go:117] "RemoveContainer" containerID="ccd8a815e9e04d19954df00f8767b62b750ccc8573bf748b49fd656899ed7700" Nov 28 09:56:34 crc kubenswrapper[4946]: E1128 09:56:34.991074 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:56:36 crc kubenswrapper[4946]: I1128 09:56:36.715768 4946 generic.go:334] "Generic (PLEG): container finished" podID="9164bedb-5437-4108-b615-c3f937cd939d" containerID="0abcf649e7accf85f2cd2ff5d7be822416d0536da268d9ea9956038832c78a41" exitCode=0 Nov 28 09:56:36 crc kubenswrapper[4946]: I1128 09:56:36.715855 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gxkjh" event={"ID":"9164bedb-5437-4108-b615-c3f937cd939d","Type":"ContainerDied","Data":"0abcf649e7accf85f2cd2ff5d7be822416d0536da268d9ea9956038832c78a41"} Nov 28 09:56:37 crc kubenswrapper[4946]: I1128 09:56:37.727327 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gxkjh" event={"ID":"9164bedb-5437-4108-b615-c3f937cd939d","Type":"ContainerStarted","Data":"b3d929ea99414bdb1d066bb870074d6ab3ee791bf28985143f4c451562871aca"} Nov 28 09:56:37 crc kubenswrapper[4946]: I1128 09:56:37.746572 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gxkjh" podStartSLOduration=3.240795703 
podStartE2EDuration="5.746554466s" podCreationTimestamp="2025-11-28 09:56:32 +0000 UTC" firstStartedPulling="2025-11-28 09:56:34.695151136 +0000 UTC m=+11049.073216287" lastFinishedPulling="2025-11-28 09:56:37.200909939 +0000 UTC m=+11051.578975050" observedRunningTime="2025-11-28 09:56:37.744253359 +0000 UTC m=+11052.122318470" watchObservedRunningTime="2025-11-28 09:56:37.746554466 +0000 UTC m=+11052.124619577" Nov 28 09:56:42 crc kubenswrapper[4946]: I1128 09:56:42.404849 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gxkjh" Nov 28 09:56:42 crc kubenswrapper[4946]: I1128 09:56:42.405705 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gxkjh" Nov 28 09:56:42 crc kubenswrapper[4946]: I1128 09:56:42.468604 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gxkjh" Nov 28 09:56:42 crc kubenswrapper[4946]: I1128 09:56:42.883558 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gxkjh" Nov 28 09:56:42 crc kubenswrapper[4946]: I1128 09:56:42.950575 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gxkjh"] Nov 28 09:56:44 crc kubenswrapper[4946]: I1128 09:56:44.824222 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gxkjh" podUID="9164bedb-5437-4108-b615-c3f937cd939d" containerName="registry-server" containerID="cri-o://b3d929ea99414bdb1d066bb870074d6ab3ee791bf28985143f4c451562871aca" gracePeriod=2 Nov 28 09:56:45 crc kubenswrapper[4946]: I1128 09:56:45.423866 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gxkjh" Nov 28 09:56:45 crc kubenswrapper[4946]: I1128 09:56:45.508910 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9164bedb-5437-4108-b615-c3f937cd939d-catalog-content\") pod \"9164bedb-5437-4108-b615-c3f937cd939d\" (UID: \"9164bedb-5437-4108-b615-c3f937cd939d\") " Nov 28 09:56:45 crc kubenswrapper[4946]: I1128 09:56:45.509032 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9164bedb-5437-4108-b615-c3f937cd939d-utilities\") pod \"9164bedb-5437-4108-b615-c3f937cd939d\" (UID: \"9164bedb-5437-4108-b615-c3f937cd939d\") " Nov 28 09:56:45 crc kubenswrapper[4946]: I1128 09:56:45.509232 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhxl2\" (UniqueName: \"kubernetes.io/projected/9164bedb-5437-4108-b615-c3f937cd939d-kube-api-access-fhxl2\") pod \"9164bedb-5437-4108-b615-c3f937cd939d\" (UID: \"9164bedb-5437-4108-b615-c3f937cd939d\") " Nov 28 09:56:45 crc kubenswrapper[4946]: I1128 09:56:45.510297 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9164bedb-5437-4108-b615-c3f937cd939d-utilities" (OuterVolumeSpecName: "utilities") pod "9164bedb-5437-4108-b615-c3f937cd939d" (UID: "9164bedb-5437-4108-b615-c3f937cd939d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 09:56:45 crc kubenswrapper[4946]: I1128 09:56:45.523059 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9164bedb-5437-4108-b615-c3f937cd939d-kube-api-access-fhxl2" (OuterVolumeSpecName: "kube-api-access-fhxl2") pod "9164bedb-5437-4108-b615-c3f937cd939d" (UID: "9164bedb-5437-4108-b615-c3f937cd939d"). InnerVolumeSpecName "kube-api-access-fhxl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 09:56:45 crc kubenswrapper[4946]: I1128 09:56:45.573140 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9164bedb-5437-4108-b615-c3f937cd939d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9164bedb-5437-4108-b615-c3f937cd939d" (UID: "9164bedb-5437-4108-b615-c3f937cd939d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 09:56:45 crc kubenswrapper[4946]: I1128 09:56:45.612845 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhxl2\" (UniqueName: \"kubernetes.io/projected/9164bedb-5437-4108-b615-c3f937cd939d-kube-api-access-fhxl2\") on node \"crc\" DevicePath \"\"" Nov 28 09:56:45 crc kubenswrapper[4946]: I1128 09:56:45.612878 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9164bedb-5437-4108-b615-c3f937cd939d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 09:56:45 crc kubenswrapper[4946]: I1128 09:56:45.612889 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9164bedb-5437-4108-b615-c3f937cd939d-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 09:56:45 crc kubenswrapper[4946]: I1128 09:56:45.838339 4946 generic.go:334] "Generic (PLEG): container finished" podID="9164bedb-5437-4108-b615-c3f937cd939d" containerID="b3d929ea99414bdb1d066bb870074d6ab3ee791bf28985143f4c451562871aca" exitCode=0 Nov 28 09:56:45 crc kubenswrapper[4946]: I1128 09:56:45.838406 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gxkjh" event={"ID":"9164bedb-5437-4108-b615-c3f937cd939d","Type":"ContainerDied","Data":"b3d929ea99414bdb1d066bb870074d6ab3ee791bf28985143f4c451562871aca"} Nov 28 09:56:45 crc kubenswrapper[4946]: I1128 09:56:45.839515 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gxkjh" event={"ID":"9164bedb-5437-4108-b615-c3f937cd939d","Type":"ContainerDied","Data":"5905ce3efb1974ead80d3e71ca4ee2acc9533f157a7a60a9323a6a5ecbe96465"} Nov 28 09:56:45 crc kubenswrapper[4946]: I1128 09:56:45.839591 4946 scope.go:117] "RemoveContainer" containerID="b3d929ea99414bdb1d066bb870074d6ab3ee791bf28985143f4c451562871aca" Nov 28 09:56:45 crc kubenswrapper[4946]: I1128 09:56:45.838438 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gxkjh" Nov 28 09:56:45 crc kubenswrapper[4946]: I1128 09:56:45.891040 4946 scope.go:117] "RemoveContainer" containerID="0abcf649e7accf85f2cd2ff5d7be822416d0536da268d9ea9956038832c78a41" Nov 28 09:56:45 crc kubenswrapper[4946]: I1128 09:56:45.897554 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gxkjh"] Nov 28 09:56:45 crc kubenswrapper[4946]: I1128 09:56:45.917069 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gxkjh"] Nov 28 09:56:45 crc kubenswrapper[4946]: I1128 09:56:45.922247 4946 scope.go:117] "RemoveContainer" containerID="3620a36bb61cf63800a53fb3a78afdca32dfc9e1116abf2915cc50716d44ae02" Nov 28 09:56:45 crc kubenswrapper[4946]: I1128 09:56:45.966951 4946 scope.go:117] "RemoveContainer" containerID="b3d929ea99414bdb1d066bb870074d6ab3ee791bf28985143f4c451562871aca" Nov 28 09:56:45 crc kubenswrapper[4946]: E1128 09:56:45.967366 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3d929ea99414bdb1d066bb870074d6ab3ee791bf28985143f4c451562871aca\": container with ID starting with b3d929ea99414bdb1d066bb870074d6ab3ee791bf28985143f4c451562871aca not found: ID does not exist" containerID="b3d929ea99414bdb1d066bb870074d6ab3ee791bf28985143f4c451562871aca" Nov 28 09:56:45 crc kubenswrapper[4946]: I1128 09:56:45.967407 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3d929ea99414bdb1d066bb870074d6ab3ee791bf28985143f4c451562871aca"} err="failed to get container status \"b3d929ea99414bdb1d066bb870074d6ab3ee791bf28985143f4c451562871aca\": rpc error: code = NotFound desc = could not find container \"b3d929ea99414bdb1d066bb870074d6ab3ee791bf28985143f4c451562871aca\": container with ID starting with b3d929ea99414bdb1d066bb870074d6ab3ee791bf28985143f4c451562871aca not found: ID does not exist" Nov 28 09:56:45 crc kubenswrapper[4946]: I1128 09:56:45.967430 4946 scope.go:117] "RemoveContainer" containerID="0abcf649e7accf85f2cd2ff5d7be822416d0536da268d9ea9956038832c78a41" Nov 28 09:56:45 crc kubenswrapper[4946]: E1128 09:56:45.967664 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0abcf649e7accf85f2cd2ff5d7be822416d0536da268d9ea9956038832c78a41\": container with ID starting with 0abcf649e7accf85f2cd2ff5d7be822416d0536da268d9ea9956038832c78a41 not found: ID does not exist" containerID="0abcf649e7accf85f2cd2ff5d7be822416d0536da268d9ea9956038832c78a41" Nov 28 09:56:45 crc kubenswrapper[4946]: I1128 09:56:45.967694 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0abcf649e7accf85f2cd2ff5d7be822416d0536da268d9ea9956038832c78a41"} err="failed to get container status \"0abcf649e7accf85f2cd2ff5d7be822416d0536da268d9ea9956038832c78a41\": rpc error: code = NotFound desc = could not find container \"0abcf649e7accf85f2cd2ff5d7be822416d0536da268d9ea9956038832c78a41\": container with ID starting with 0abcf649e7accf85f2cd2ff5d7be822416d0536da268d9ea9956038832c78a41 not found: ID does not exist" Nov 28 09:56:45 crc kubenswrapper[4946]: I1128 09:56:45.967714 4946 scope.go:117] "RemoveContainer" containerID="3620a36bb61cf63800a53fb3a78afdca32dfc9e1116abf2915cc50716d44ae02" Nov 28 09:56:45 crc kubenswrapper[4946]: E1128 09:56:45.967959 4946 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"3620a36bb61cf63800a53fb3a78afdca32dfc9e1116abf2915cc50716d44ae02\": container with ID starting with 3620a36bb61cf63800a53fb3a78afdca32dfc9e1116abf2915cc50716d44ae02 not found: ID does not exist" containerID="3620a36bb61cf63800a53fb3a78afdca32dfc9e1116abf2915cc50716d44ae02" Nov 28 09:56:45 crc kubenswrapper[4946]: I1128 09:56:45.967997 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3620a36bb61cf63800a53fb3a78afdca32dfc9e1116abf2915cc50716d44ae02"} err="failed to get container status \"3620a36bb61cf63800a53fb3a78afdca32dfc9e1116abf2915cc50716d44ae02\": rpc error: code = NotFound desc = could not find container \"3620a36bb61cf63800a53fb3a78afdca32dfc9e1116abf2915cc50716d44ae02\": container with ID starting with 3620a36bb61cf63800a53fb3a78afdca32dfc9e1116abf2915cc50716d44ae02 not found: ID does not exist" Nov 28 09:56:46 crc kubenswrapper[4946]: I1128 09:56:46.002934 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9164bedb-5437-4108-b615-c3f937cd939d" path="/var/lib/kubelet/pods/9164bedb-5437-4108-b615-c3f937cd939d/volumes" Nov 28 09:56:48 crc kubenswrapper[4946]: I1128 09:56:48.991048 4946 scope.go:117] "RemoveContainer" containerID="ccd8a815e9e04d19954df00f8767b62b750ccc8573bf748b49fd656899ed7700" Nov 28 09:56:48 crc kubenswrapper[4946]: E1128 09:56:48.992082 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:57:02 crc kubenswrapper[4946]: I1128 09:57:02.989717 4946 scope.go:117] "RemoveContainer" containerID="ccd8a815e9e04d19954df00f8767b62b750ccc8573bf748b49fd656899ed7700" Nov 28 09:57:02 crc kubenswrapper[4946]: E1128 09:57:02.990760 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:57:13 crc kubenswrapper[4946]: I1128 09:57:13.990100 4946 scope.go:117] "RemoveContainer" containerID="ccd8a815e9e04d19954df00f8767b62b750ccc8573bf748b49fd656899ed7700" Nov 28 09:57:13 crc kubenswrapper[4946]: E1128 09:57:13.990868 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:57:26 crc kubenswrapper[4946]: I1128 09:57:26.990649 4946 scope.go:117] "RemoveContainer" containerID="ccd8a815e9e04d19954df00f8767b62b750ccc8573bf748b49fd656899ed7700" Nov 28 09:57:26 crc kubenswrapper[4946]: E1128 09:57:26.992175 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:57:41 crc kubenswrapper[4946]: I1128 09:57:41.996669 4946 scope.go:117] "RemoveContainer" containerID="ccd8a815e9e04d19954df00f8767b62b750ccc8573bf748b49fd656899ed7700" Nov 28 09:57:41 crc kubenswrapper[4946]: E1128 09:57:41.998105 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:57:52 crc kubenswrapper[4946]: I1128 09:57:52.991159 4946 scope.go:117] "RemoveContainer" containerID="ccd8a815e9e04d19954df00f8767b62b750ccc8573bf748b49fd656899ed7700" Nov 28 09:57:52 crc kubenswrapper[4946]: E1128 09:57:52.992555 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:58:04 crc kubenswrapper[4946]: I1128 09:58:04.989604 4946 scope.go:117] "RemoveContainer" containerID="ccd8a815e9e04d19954df00f8767b62b750ccc8573bf748b49fd656899ed7700" Nov 28 09:58:04 crc kubenswrapper[4946]: E1128 09:58:04.990394 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:58:16 crc kubenswrapper[4946]: I1128 09:58:16.990975 4946 scope.go:117] "RemoveContainer" containerID="ccd8a815e9e04d19954df00f8767b62b750ccc8573bf748b49fd656899ed7700" Nov 28 09:58:16 crc kubenswrapper[4946]: E1128 09:58:16.992381 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:58:19 crc kubenswrapper[4946]: I1128 09:58:19.826204 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Nov 28 09:58:19 crc kubenswrapper[4946]: E1128 09:58:19.827450 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9164bedb-5437-4108-b615-c3f937cd939d" containerName="extract-content" Nov 28 09:58:19 crc kubenswrapper[4946]: I1128 09:58:19.827502 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="9164bedb-5437-4108-b615-c3f937cd939d" 
containerName="extract-content" Nov 28 09:58:19 crc kubenswrapper[4946]: E1128 09:58:19.827522 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9164bedb-5437-4108-b615-c3f937cd939d" containerName="registry-server" Nov 28 09:58:19 crc kubenswrapper[4946]: I1128 09:58:19.827534 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="9164bedb-5437-4108-b615-c3f937cd939d" containerName="registry-server" Nov 28 09:58:19 crc kubenswrapper[4946]: E1128 09:58:19.827602 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9164bedb-5437-4108-b615-c3f937cd939d" containerName="extract-utilities" Nov 28 09:58:19 crc kubenswrapper[4946]: I1128 09:58:19.827618 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="9164bedb-5437-4108-b615-c3f937cd939d" containerName="extract-utilities" Nov 28 09:58:19 crc kubenswrapper[4946]: I1128 09:58:19.827959 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="9164bedb-5437-4108-b615-c3f937cd939d" containerName="registry-server" Nov 28 09:58:19 crc kubenswrapper[4946]: I1128 09:58:19.829180 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 28 09:58:19 crc kubenswrapper[4946]: I1128 09:58:19.842891 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Nov 28 09:58:19 crc kubenswrapper[4946]: I1128 09:58:19.844624 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Nov 28 09:58:19 crc kubenswrapper[4946]: I1128 09:58:19.845156 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-zsz6p" Nov 28 09:58:19 crc kubenswrapper[4946]: I1128 09:58:19.854026 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 28 09:58:19 crc kubenswrapper[4946]: I1128 09:58:19.866145 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 28 09:58:19 crc kubenswrapper[4946]: I1128 09:58:19.877510 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"1e51973b-27b2-4f5f-9073-0ba9d14f9593\") " pod="openstack/tempest-tests-tempest" Nov 28 09:58:19 crc kubenswrapper[4946]: I1128 09:58:19.877571 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/1e51973b-27b2-4f5f-9073-0ba9d14f9593-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"1e51973b-27b2-4f5f-9073-0ba9d14f9593\") " pod="openstack/tempest-tests-tempest" Nov 28 09:58:19 crc kubenswrapper[4946]: I1128 09:58:19.877630 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1e51973b-27b2-4f5f-9073-0ba9d14f9593-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"1e51973b-27b2-4f5f-9073-0ba9d14f9593\") " pod="openstack/tempest-tests-tempest" Nov 28 09:58:19 crc kubenswrapper[4946]: I1128 09:58:19.877669 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1e51973b-27b2-4f5f-9073-0ba9d14f9593-config-data\") 
pod \"tempest-tests-tempest\" (UID: \"1e51973b-27b2-4f5f-9073-0ba9d14f9593\") " pod="openstack/tempest-tests-tempest" Nov 28 09:58:19 crc kubenswrapper[4946]: I1128 09:58:19.877749 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1e51973b-27b2-4f5f-9073-0ba9d14f9593-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"1e51973b-27b2-4f5f-9073-0ba9d14f9593\") " pod="openstack/tempest-tests-tempest" Nov 28 09:58:19 crc kubenswrapper[4946]: I1128 09:58:19.877815 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/1e51973b-27b2-4f5f-9073-0ba9d14f9593-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"1e51973b-27b2-4f5f-9073-0ba9d14f9593\") " pod="openstack/tempest-tests-tempest" Nov 28 09:58:19 crc kubenswrapper[4946]: I1128 09:58:19.877860 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1e51973b-27b2-4f5f-9073-0ba9d14f9593-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"1e51973b-27b2-4f5f-9073-0ba9d14f9593\") " pod="openstack/tempest-tests-tempest" Nov 28 09:58:19 crc kubenswrapper[4946]: I1128 09:58:19.877892 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/1e51973b-27b2-4f5f-9073-0ba9d14f9593-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"1e51973b-27b2-4f5f-9073-0ba9d14f9593\") " pod="openstack/tempest-tests-tempest" Nov 28 09:58:19 crc kubenswrapper[4946]: I1128 09:58:19.877913 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96xpn\" (UniqueName: \"kubernetes.io/projected/1e51973b-27b2-4f5f-9073-0ba9d14f9593-kube-api-access-96xpn\") pod \"tempest-tests-tempest\" (UID: \"1e51973b-27b2-4f5f-9073-0ba9d14f9593\") " pod="openstack/tempest-tests-tempest" Nov 28 09:58:19 crc kubenswrapper[4946]: I1128 09:58:19.980159 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"1e51973b-27b2-4f5f-9073-0ba9d14f9593\") " pod="openstack/tempest-tests-tempest" Nov 28 09:58:19 crc kubenswrapper[4946]: I1128 09:58:19.980201 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/1e51973b-27b2-4f5f-9073-0ba9d14f9593-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"1e51973b-27b2-4f5f-9073-0ba9d14f9593\") " pod="openstack/tempest-tests-tempest" Nov 28 09:58:19 crc kubenswrapper[4946]: I1128 09:58:19.980565 4946 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"1e51973b-27b2-4f5f-9073-0ba9d14f9593\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/tempest-tests-tempest" Nov 28 09:58:19 crc kubenswrapper[4946]: I1128 09:58:19.980261 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1e51973b-27b2-4f5f-9073-0ba9d14f9593-openstack-config-secret\") pod 
\"tempest-tests-tempest\" (UID: \"1e51973b-27b2-4f5f-9073-0ba9d14f9593\") " pod="openstack/tempest-tests-tempest" Nov 28 09:58:19 crc kubenswrapper[4946]: I1128 09:58:19.980701 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1e51973b-27b2-4f5f-9073-0ba9d14f9593-config-data\") pod \"tempest-tests-tempest\" (UID: \"1e51973b-27b2-4f5f-9073-0ba9d14f9593\") " pod="openstack/tempest-tests-tempest" Nov 28 09:58:19 crc kubenswrapper[4946]: I1128 09:58:19.980789 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1e51973b-27b2-4f5f-9073-0ba9d14f9593-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"1e51973b-27b2-4f5f-9073-0ba9d14f9593\") " pod="openstack/tempest-tests-tempest" Nov 28 09:58:19 crc kubenswrapper[4946]: I1128 09:58:19.980809 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/1e51973b-27b2-4f5f-9073-0ba9d14f9593-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"1e51973b-27b2-4f5f-9073-0ba9d14f9593\") " pod="openstack/tempest-tests-tempest" Nov 28 09:58:19 crc kubenswrapper[4946]: I1128 09:58:19.980875 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/1e51973b-27b2-4f5f-9073-0ba9d14f9593-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"1e51973b-27b2-4f5f-9073-0ba9d14f9593\") " pod="openstack/tempest-tests-tempest" Nov 28 09:58:19 crc kubenswrapper[4946]: I1128 09:58:19.980924 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1e51973b-27b2-4f5f-9073-0ba9d14f9593-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"1e51973b-27b2-4f5f-9073-0ba9d14f9593\") " pod="openstack/tempest-tests-tempest" Nov 28 09:58:19 crc kubenswrapper[4946]: I1128 09:58:19.980958 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/1e51973b-27b2-4f5f-9073-0ba9d14f9593-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"1e51973b-27b2-4f5f-9073-0ba9d14f9593\") " pod="openstack/tempest-tests-tempest" Nov 28 09:58:19 crc kubenswrapper[4946]: I1128 09:58:19.980982 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96xpn\" (UniqueName: \"kubernetes.io/projected/1e51973b-27b2-4f5f-9073-0ba9d14f9593-kube-api-access-96xpn\") pod \"tempest-tests-tempest\" (UID: \"1e51973b-27b2-4f5f-9073-0ba9d14f9593\") " pod="openstack/tempest-tests-tempest" Nov 28 09:58:19 crc kubenswrapper[4946]: I1128 09:58:19.981587 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/1e51973b-27b2-4f5f-9073-0ba9d14f9593-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"1e51973b-27b2-4f5f-9073-0ba9d14f9593\") " pod="openstack/tempest-tests-tempest" Nov 28 09:58:19 crc kubenswrapper[4946]: I1128 09:58:19.982324 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1e51973b-27b2-4f5f-9073-0ba9d14f9593-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"1e51973b-27b2-4f5f-9073-0ba9d14f9593\") " 
pod="openstack/tempest-tests-tempest" Nov 28 09:58:19 crc kubenswrapper[4946]: I1128 09:58:19.985103 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1e51973b-27b2-4f5f-9073-0ba9d14f9593-config-data\") pod \"tempest-tests-tempest\" (UID: \"1e51973b-27b2-4f5f-9073-0ba9d14f9593\") " pod="openstack/tempest-tests-tempest" Nov 28 09:58:19 crc kubenswrapper[4946]: I1128 09:58:19.992716 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1e51973b-27b2-4f5f-9073-0ba9d14f9593-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"1e51973b-27b2-4f5f-9073-0ba9d14f9593\") " pod="openstack/tempest-tests-tempest" Nov 28 09:58:19 crc kubenswrapper[4946]: I1128 09:58:19.994690 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/1e51973b-27b2-4f5f-9073-0ba9d14f9593-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"1e51973b-27b2-4f5f-9073-0ba9d14f9593\") " pod="openstack/tempest-tests-tempest" Nov 28 09:58:19 crc kubenswrapper[4946]: I1128 09:58:19.997377 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96xpn\" (UniqueName: \"kubernetes.io/projected/1e51973b-27b2-4f5f-9073-0ba9d14f9593-kube-api-access-96xpn\") pod \"tempest-tests-tempest\" (UID: \"1e51973b-27b2-4f5f-9073-0ba9d14f9593\") " pod="openstack/tempest-tests-tempest" Nov 28 09:58:19 crc kubenswrapper[4946]: I1128 09:58:19.997879 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1e51973b-27b2-4f5f-9073-0ba9d14f9593-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"1e51973b-27b2-4f5f-9073-0ba9d14f9593\") " pod="openstack/tempest-tests-tempest" Nov 28 09:58:20 crc kubenswrapper[4946]: I1128 09:58:20.011644 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"1e51973b-27b2-4f5f-9073-0ba9d14f9593\") " pod="openstack/tempest-tests-tempest" Nov 28 09:58:20 crc kubenswrapper[4946]: I1128 09:58:20.176916 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 28 09:58:20 crc kubenswrapper[4946]: W1128 09:58:20.751273 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e51973b_27b2_4f5f_9073_0ba9d14f9593.slice/crio-69d9fe38429938a27fe27f0bf54af1649fca649f3000311b09eaec283d370049 WatchSource:0}: Error finding container 69d9fe38429938a27fe27f0bf54af1649fca649f3000311b09eaec283d370049: Status 404 returned error can't find the container with id 69d9fe38429938a27fe27f0bf54af1649fca649f3000311b09eaec283d370049 Nov 28 09:58:20 crc kubenswrapper[4946]: I1128 09:58:20.763573 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 28 09:58:21 crc kubenswrapper[4946]: I1128 09:58:21.092609 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"1e51973b-27b2-4f5f-9073-0ba9d14f9593","Type":"ContainerStarted","Data":"69d9fe38429938a27fe27f0bf54af1649fca649f3000311b09eaec283d370049"} Nov 28 09:58:27 crc kubenswrapper[4946]: I1128 09:58:27.990510 4946 scope.go:117] "RemoveContainer" containerID="ccd8a815e9e04d19954df00f8767b62b750ccc8573bf748b49fd656899ed7700" Nov 28 09:58:27 crc kubenswrapper[4946]: E1128 09:58:27.991305 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:58:39 crc kubenswrapper[4946]: I1128 09:58:39.990542 4946 scope.go:117] "RemoveContainer" containerID="ccd8a815e9e04d19954df00f8767b62b750ccc8573bf748b49fd656899ed7700" Nov 28 09:58:39 crc kubenswrapper[4946]: E1128 09:58:39.991498 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:58:51 crc kubenswrapper[4946]: I1128 09:58:51.990728 4946 scope.go:117] "RemoveContainer" containerID="ccd8a815e9e04d19954df00f8767b62b750ccc8573bf748b49fd656899ed7700" Nov 28 09:58:51 crc kubenswrapper[4946]: E1128 09:58:51.991492 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:59:06 crc kubenswrapper[4946]: I1128 09:59:06.992739 4946 scope.go:117] "RemoveContainer" containerID="ccd8a815e9e04d19954df00f8767b62b750ccc8573bf748b49fd656899ed7700" Nov 28 09:59:06 crc kubenswrapper[4946]: E1128 09:59:06.993926 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:59:17 crc kubenswrapper[4946]: I1128 09:59:17.993907 4946 scope.go:117] "RemoveContainer" containerID="ccd8a815e9e04d19954df00f8767b62b750ccc8573bf748b49fd656899ed7700" Nov 28 09:59:17 crc kubenswrapper[4946]: E1128 09:59:17.994644 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:59:21 crc kubenswrapper[4946]: E1128 09:59:21.267407 4946 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:ec6795435865dd5348b4ff40c6a1ad98" Nov 28 09:59:21 crc kubenswrapper[4946]: E1128 09:59:21.267782 4946 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:ec6795435865dd5348b4ff40c6a1ad98" Nov 28 09:59:21 crc kubenswrapper[4946]: E1128 09:59:21.267933 4946 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:ec6795435865dd5348b4ff40c6a1ad98,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-96xpn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,M
ountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(1e51973b-27b2-4f5f-9073-0ba9d14f9593): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 09:59:21 crc kubenswrapper[4946]: E1128 09:59:21.269109 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="1e51973b-27b2-4f5f-9073-0ba9d14f9593" Nov 28 09:59:21 crc kubenswrapper[4946]: E1128 09:59:21.860952 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:ec6795435865dd5348b4ff40c6a1ad98\\\"\"" pod="openstack/tempest-tests-tempest" podUID="1e51973b-27b2-4f5f-9073-0ba9d14f9593" Nov 28 09:59:29 crc kubenswrapper[4946]: I1128 09:59:29.991177 4946 scope.go:117] "RemoveContainer" containerID="ccd8a815e9e04d19954df00f8767b62b750ccc8573bf748b49fd656899ed7700" Nov 28 09:59:29 crc kubenswrapper[4946]: E1128 09:59:29.992320 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:59:34 crc kubenswrapper[4946]: I1128 09:59:34.214824 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 28 09:59:36 crc kubenswrapper[4946]: I1128 09:59:36.072744 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"1e51973b-27b2-4f5f-9073-0ba9d14f9593","Type":"ContainerStarted","Data":"e4cb4a317a4a9017de161711a2626e068b88bca5b234366c7d8648fd06bca872"} Nov 28 09:59:36 crc kubenswrapper[4946]: I1128 09:59:36.095909 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.637298075 podStartE2EDuration="1m18.095889114s" podCreationTimestamp="2025-11-28 09:58:18 +0000 UTC" 
firstStartedPulling="2025-11-28 09:58:20.753954581 +0000 UTC m=+11155.132019692" lastFinishedPulling="2025-11-28 09:59:34.21254562 +0000 UTC m=+11228.590610731" observedRunningTime="2025-11-28 09:59:36.092112 +0000 UTC m=+11230.470177121" watchObservedRunningTime="2025-11-28 09:59:36.095889114 +0000 UTC m=+11230.473954235" Nov 28 09:59:42 crc kubenswrapper[4946]: I1128 09:59:42.990044 4946 scope.go:117] "RemoveContainer" containerID="ccd8a815e9e04d19954df00f8767b62b750ccc8573bf748b49fd656899ed7700" Nov 28 09:59:42 crc kubenswrapper[4946]: E1128 09:59:42.990908 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 09:59:57 crc kubenswrapper[4946]: I1128 09:59:57.992081 4946 scope.go:117] "RemoveContainer" containerID="ccd8a815e9e04d19954df00f8767b62b750ccc8573bf748b49fd656899ed7700" Nov 28 09:59:57 crc kubenswrapper[4946]: E1128 09:59:57.993439 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 10:00:00 crc kubenswrapper[4946]: I1128 10:00:00.154129 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405400-4qf9w"] Nov 28 10:00:00 crc kubenswrapper[4946]: I1128 10:00:00.156186 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405400-4qf9w" Nov 28 10:00:00 crc kubenswrapper[4946]: I1128 10:00:00.161303 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 28 10:00:00 crc kubenswrapper[4946]: I1128 10:00:00.161342 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 28 10:00:00 crc kubenswrapper[4946]: I1128 10:00:00.168560 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405400-4qf9w"] Nov 28 10:00:00 crc kubenswrapper[4946]: I1128 10:00:00.305158 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/271b97ef-e121-4efd-88d3-0f41e7bfa89a-secret-volume\") pod \"collect-profiles-29405400-4qf9w\" (UID: \"271b97ef-e121-4efd-88d3-0f41e7bfa89a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405400-4qf9w" Nov 28 10:00:00 crc kubenswrapper[4946]: I1128 10:00:00.305266 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kg86\" (UniqueName: \"kubernetes.io/projected/271b97ef-e121-4efd-88d3-0f41e7bfa89a-kube-api-access-6kg86\") pod \"collect-profiles-29405400-4qf9w\" (UID: \"271b97ef-e121-4efd-88d3-0f41e7bfa89a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405400-4qf9w" Nov 28 10:00:00 crc kubenswrapper[4946]: I1128 10:00:00.305424 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/271b97ef-e121-4efd-88d3-0f41e7bfa89a-config-volume\") pod \"collect-profiles-29405400-4qf9w\" (UID: \"271b97ef-e121-4efd-88d3-0f41e7bfa89a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405400-4qf9w" Nov 28 10:00:00 crc kubenswrapper[4946]: I1128 10:00:00.409697 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/271b97ef-e121-4efd-88d3-0f41e7bfa89a-config-volume\") pod \"collect-profiles-29405400-4qf9w\" (UID: \"271b97ef-e121-4efd-88d3-0f41e7bfa89a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405400-4qf9w" Nov 28 10:00:00 crc kubenswrapper[4946]: I1128 10:00:00.409779 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/271b97ef-e121-4efd-88d3-0f41e7bfa89a-secret-volume\") pod \"collect-profiles-29405400-4qf9w\" (UID: \"271b97ef-e121-4efd-88d3-0f41e7bfa89a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405400-4qf9w" Nov 28 10:00:00 crc kubenswrapper[4946]: I1128 10:00:00.409836 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kg86\" (UniqueName: \"kubernetes.io/projected/271b97ef-e121-4efd-88d3-0f41e7bfa89a-kube-api-access-6kg86\") pod \"collect-profiles-29405400-4qf9w\" (UID: \"271b97ef-e121-4efd-88d3-0f41e7bfa89a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405400-4qf9w" Nov 28 10:00:00 crc kubenswrapper[4946]: I1128 10:00:00.410989 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/271b97ef-e121-4efd-88d3-0f41e7bfa89a-config-volume\") pod 
\"collect-profiles-29405400-4qf9w\" (UID: \"271b97ef-e121-4efd-88d3-0f41e7bfa89a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405400-4qf9w" Nov 28 10:00:00 crc kubenswrapper[4946]: I1128 10:00:00.416979 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/271b97ef-e121-4efd-88d3-0f41e7bfa89a-secret-volume\") pod \"collect-profiles-29405400-4qf9w\" (UID: \"271b97ef-e121-4efd-88d3-0f41e7bfa89a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405400-4qf9w" Nov 28 10:00:00 crc kubenswrapper[4946]: I1128 10:00:00.435288 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kg86\" (UniqueName: \"kubernetes.io/projected/271b97ef-e121-4efd-88d3-0f41e7bfa89a-kube-api-access-6kg86\") pod \"collect-profiles-29405400-4qf9w\" (UID: \"271b97ef-e121-4efd-88d3-0f41e7bfa89a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405400-4qf9w" Nov 28 10:00:00 crc kubenswrapper[4946]: I1128 10:00:00.481813 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405400-4qf9w" Nov 28 10:00:01 crc kubenswrapper[4946]: W1128 10:00:01.040361 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod271b97ef_e121_4efd_88d3_0f41e7bfa89a.slice/crio-51ed79c9b2bc7c031f54f5fcea68363ca1836772cb66c2f940b7238da6a72f9d WatchSource:0}: Error finding container 51ed79c9b2bc7c031f54f5fcea68363ca1836772cb66c2f940b7238da6a72f9d: Status 404 returned error can't find the container with id 51ed79c9b2bc7c031f54f5fcea68363ca1836772cb66c2f940b7238da6a72f9d Nov 28 10:00:01 crc kubenswrapper[4946]: I1128 10:00:01.067280 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405400-4qf9w"] Nov 28 10:00:01 crc kubenswrapper[4946]: I1128 10:00:01.377046 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405400-4qf9w" event={"ID":"271b97ef-e121-4efd-88d3-0f41e7bfa89a","Type":"ContainerStarted","Data":"d121bcf7b4fa058a8942dcc3561a0ae1d2be3d4781515ccca57741a01f7de9d4"} Nov 28 10:00:01 crc kubenswrapper[4946]: I1128 10:00:01.378631 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405400-4qf9w" event={"ID":"271b97ef-e121-4efd-88d3-0f41e7bfa89a","Type":"ContainerStarted","Data":"51ed79c9b2bc7c031f54f5fcea68363ca1836772cb66c2f940b7238da6a72f9d"} Nov 28 10:00:01 crc kubenswrapper[4946]: I1128 10:00:01.392837 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29405400-4qf9w" podStartSLOduration=1.3928087439999999 podStartE2EDuration="1.392808744s" podCreationTimestamp="2025-11-28 10:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 10:00:01.391229145 +0000 UTC m=+11255.769294256" watchObservedRunningTime="2025-11-28 10:00:01.392808744 +0000 UTC m=+11255.770873855" Nov 28 10:00:02 crc kubenswrapper[4946]: I1128 10:00:02.388898 4946 generic.go:334] "Generic (PLEG): container finished" podID="271b97ef-e121-4efd-88d3-0f41e7bfa89a" containerID="d121bcf7b4fa058a8942dcc3561a0ae1d2be3d4781515ccca57741a01f7de9d4" exitCode=0 Nov 28 10:00:02 crc kubenswrapper[4946]: I1128 
10:00:02.389993 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405400-4qf9w" event={"ID":"271b97ef-e121-4efd-88d3-0f41e7bfa89a","Type":"ContainerDied","Data":"d121bcf7b4fa058a8942dcc3561a0ae1d2be3d4781515ccca57741a01f7de9d4"} Nov 28 10:00:03 crc kubenswrapper[4946]: I1128 10:00:03.899271 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405400-4qf9w" Nov 28 10:00:03 crc kubenswrapper[4946]: I1128 10:00:03.995421 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/271b97ef-e121-4efd-88d3-0f41e7bfa89a-config-volume\") pod \"271b97ef-e121-4efd-88d3-0f41e7bfa89a\" (UID: \"271b97ef-e121-4efd-88d3-0f41e7bfa89a\") " Nov 28 10:00:03 crc kubenswrapper[4946]: I1128 10:00:03.995558 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kg86\" (UniqueName: \"kubernetes.io/projected/271b97ef-e121-4efd-88d3-0f41e7bfa89a-kube-api-access-6kg86\") pod \"271b97ef-e121-4efd-88d3-0f41e7bfa89a\" (UID: \"271b97ef-e121-4efd-88d3-0f41e7bfa89a\") " Nov 28 10:00:03 crc kubenswrapper[4946]: I1128 10:00:03.995748 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/271b97ef-e121-4efd-88d3-0f41e7bfa89a-secret-volume\") pod \"271b97ef-e121-4efd-88d3-0f41e7bfa89a\" (UID: \"271b97ef-e121-4efd-88d3-0f41e7bfa89a\") " Nov 28 10:00:04 crc kubenswrapper[4946]: I1128 10:00:04.001761 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/271b97ef-e121-4efd-88d3-0f41e7bfa89a-config-volume" (OuterVolumeSpecName: "config-volume") pod "271b97ef-e121-4efd-88d3-0f41e7bfa89a" (UID: "271b97ef-e121-4efd-88d3-0f41e7bfa89a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 10:00:04 crc kubenswrapper[4946]: I1128 10:00:04.002564 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/271b97ef-e121-4efd-88d3-0f41e7bfa89a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "271b97ef-e121-4efd-88d3-0f41e7bfa89a" (UID: "271b97ef-e121-4efd-88d3-0f41e7bfa89a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 10:00:04 crc kubenswrapper[4946]: I1128 10:00:04.037324 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/271b97ef-e121-4efd-88d3-0f41e7bfa89a-kube-api-access-6kg86" (OuterVolumeSpecName: "kube-api-access-6kg86") pod "271b97ef-e121-4efd-88d3-0f41e7bfa89a" (UID: "271b97ef-e121-4efd-88d3-0f41e7bfa89a"). InnerVolumeSpecName "kube-api-access-6kg86". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 10:00:04 crc kubenswrapper[4946]: I1128 10:00:04.099603 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kg86\" (UniqueName: \"kubernetes.io/projected/271b97ef-e121-4efd-88d3-0f41e7bfa89a-kube-api-access-6kg86\") on node \"crc\" DevicePath \"\"" Nov 28 10:00:04 crc kubenswrapper[4946]: I1128 10:00:04.099653 4946 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/271b97ef-e121-4efd-88d3-0f41e7bfa89a-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 28 10:00:04 crc kubenswrapper[4946]: I1128 10:00:04.099666 4946 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/271b97ef-e121-4efd-88d3-0f41e7bfa89a-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 10:00:04 crc kubenswrapper[4946]: I1128 10:00:04.414464 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405400-4qf9w" event={"ID":"271b97ef-e121-4efd-88d3-0f41e7bfa89a","Type":"ContainerDied","Data":"51ed79c9b2bc7c031f54f5fcea68363ca1836772cb66c2f940b7238da6a72f9d"} Nov 28 10:00:04 crc kubenswrapper[4946]: I1128 10:00:04.414522 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51ed79c9b2bc7c031f54f5fcea68363ca1836772cb66c2f940b7238da6a72f9d" Nov 28 10:00:04 crc kubenswrapper[4946]: I1128 10:00:04.414572 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405400-4qf9w" Nov 28 10:00:04 crc kubenswrapper[4946]: I1128 10:00:04.467605 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405355-l7czx"] Nov 28 10:00:04 crc kubenswrapper[4946]: I1128 10:00:04.482011 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405355-l7czx"] Nov 28 10:00:06 crc kubenswrapper[4946]: I1128 10:00:06.007378 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe928ed1-b263-4791-b82c-4f64a33ae019" path="/var/lib/kubelet/pods/fe928ed1-b263-4791-b82c-4f64a33ae019/volumes" Nov 28 10:00:12 crc kubenswrapper[4946]: I1128 10:00:12.990645 4946 scope.go:117] "RemoveContainer" containerID="ccd8a815e9e04d19954df00f8767b62b750ccc8573bf748b49fd656899ed7700" Nov 28 10:00:12 crc kubenswrapper[4946]: E1128 10:00:12.991525 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 10:00:26 crc kubenswrapper[4946]: I1128 10:00:26.005242 4946 scope.go:117] "RemoveContainer" containerID="ccd8a815e9e04d19954df00f8767b62b750ccc8573bf748b49fd656899ed7700" Nov 28 10:00:26 crc kubenswrapper[4946]: E1128 10:00:26.006526 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 10:00:39 crc kubenswrapper[4946]: I1128 10:00:39.990928 4946 scope.go:117] "RemoveContainer" containerID="ccd8a815e9e04d19954df00f8767b62b750ccc8573bf748b49fd656899ed7700" Nov 28 10:00:39 crc kubenswrapper[4946]: E1128 10:00:39.993187 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 10:00:54 crc kubenswrapper[4946]: I1128 10:00:54.215107 4946 scope.go:117] "RemoveContainer" containerID="529c3e529d9511872e5a58898f9f735421fb51b3513de54545f6333f6e097b6b" Nov 28 10:00:54 crc kubenswrapper[4946]: I1128 10:00:54.990093 4946 scope.go:117] "RemoveContainer" containerID="ccd8a815e9e04d19954df00f8767b62b750ccc8573bf748b49fd656899ed7700" Nov 28 10:00:55 crc kubenswrapper[4946]: I1128 10:00:55.956939 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerStarted","Data":"c1affc8b427685239fe541e69eac3ff653f1f5e77fbd18f4ae8409b7412b67d1"} Nov 28 10:01:00 crc kubenswrapper[4946]: I1128 10:01:00.161942 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29405401-xhftw"] Nov 28 10:01:00 crc kubenswrapper[4946]: E1128 10:01:00.162825 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="271b97ef-e121-4efd-88d3-0f41e7bfa89a" containerName="collect-profiles" Nov 28 10:01:00 crc kubenswrapper[4946]: I1128 10:01:00.162838 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="271b97ef-e121-4efd-88d3-0f41e7bfa89a" containerName="collect-profiles" Nov 28 10:01:00 crc kubenswrapper[4946]: I1128 10:01:00.163045 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="271b97ef-e121-4efd-88d3-0f41e7bfa89a" containerName="collect-profiles" Nov 28 10:01:00 crc kubenswrapper[4946]: I1128 10:01:00.163773 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29405401-xhftw" Nov 28 10:01:00 crc kubenswrapper[4946]: I1128 10:01:00.177982 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29405401-xhftw"] Nov 28 10:01:00 crc kubenswrapper[4946]: I1128 10:01:00.301089 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ff17d683-e6a0-4b28-97e2-256e71ee0a7e-fernet-keys\") pod \"keystone-cron-29405401-xhftw\" (UID: \"ff17d683-e6a0-4b28-97e2-256e71ee0a7e\") " pod="openstack/keystone-cron-29405401-xhftw" Nov 28 10:01:00 crc kubenswrapper[4946]: I1128 10:01:00.301142 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff17d683-e6a0-4b28-97e2-256e71ee0a7e-combined-ca-bundle\") pod \"keystone-cron-29405401-xhftw\" (UID: \"ff17d683-e6a0-4b28-97e2-256e71ee0a7e\") " pod="openstack/keystone-cron-29405401-xhftw" Nov 28 10:01:00 crc kubenswrapper[4946]: I1128 10:01:00.301253 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwg5w\" (UniqueName: \"kubernetes.io/projected/ff17d683-e6a0-4b28-97e2-256e71ee0a7e-kube-api-access-fwg5w\") pod \"keystone-cron-29405401-xhftw\" (UID: \"ff17d683-e6a0-4b28-97e2-256e71ee0a7e\") " pod="openstack/keystone-cron-29405401-xhftw" Nov 28 10:01:00 crc kubenswrapper[4946]: I1128 10:01:00.301297 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff17d683-e6a0-4b28-97e2-256e71ee0a7e-config-data\") pod \"keystone-cron-29405401-xhftw\" (UID: \"ff17d683-e6a0-4b28-97e2-256e71ee0a7e\") " pod="openstack/keystone-cron-29405401-xhftw" Nov 28 10:01:00 crc kubenswrapper[4946]: I1128 10:01:00.403594 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwg5w\" (UniqueName: \"kubernetes.io/projected/ff17d683-e6a0-4b28-97e2-256e71ee0a7e-kube-api-access-fwg5w\") pod \"keystone-cron-29405401-xhftw\" (UID: \"ff17d683-e6a0-4b28-97e2-256e71ee0a7e\") " pod="openstack/keystone-cron-29405401-xhftw" Nov 28 10:01:00 crc kubenswrapper[4946]: I1128 10:01:00.403663 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff17d683-e6a0-4b28-97e2-256e71ee0a7e-config-data\") pod \"keystone-cron-29405401-xhftw\" (UID: \"ff17d683-e6a0-4b28-97e2-256e71ee0a7e\") " pod="openstack/keystone-cron-29405401-xhftw" Nov 28 10:01:00 crc kubenswrapper[4946]: I1128 10:01:00.404023 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ff17d683-e6a0-4b28-97e2-256e71ee0a7e-fernet-keys\") pod \"keystone-cron-29405401-xhftw\" (UID: \"ff17d683-e6a0-4b28-97e2-256e71ee0a7e\") " pod="openstack/keystone-cron-29405401-xhftw" Nov 28 10:01:00 crc kubenswrapper[4946]: I1128 10:01:00.404050 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff17d683-e6a0-4b28-97e2-256e71ee0a7e-combined-ca-bundle\") pod \"keystone-cron-29405401-xhftw\" (UID: \"ff17d683-e6a0-4b28-97e2-256e71ee0a7e\") " pod="openstack/keystone-cron-29405401-xhftw" Nov 28 10:01:00 crc kubenswrapper[4946]: I1128 10:01:00.410113 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff17d683-e6a0-4b28-97e2-256e71ee0a7e-combined-ca-bundle\") pod \"keystone-cron-29405401-xhftw\" (UID: \"ff17d683-e6a0-4b28-97e2-256e71ee0a7e\") " pod="openstack/keystone-cron-29405401-xhftw" Nov 28 10:01:00 crc kubenswrapper[4946]: I1128 10:01:00.410174 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff17d683-e6a0-4b28-97e2-256e71ee0a7e-config-data\") pod \"keystone-cron-29405401-xhftw\" (UID: \"ff17d683-e6a0-4b28-97e2-256e71ee0a7e\") " pod="openstack/keystone-cron-29405401-xhftw" Nov 28 10:01:00 crc kubenswrapper[4946]: I1128 10:01:00.421719 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ff17d683-e6a0-4b28-97e2-256e71ee0a7e-fernet-keys\") pod \"keystone-cron-29405401-xhftw\" (UID: \"ff17d683-e6a0-4b28-97e2-256e71ee0a7e\") " pod="openstack/keystone-cron-29405401-xhftw" Nov 28 10:01:00 crc kubenswrapper[4946]: I1128 10:01:00.435979 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwg5w\" (UniqueName: \"kubernetes.io/projected/ff17d683-e6a0-4b28-97e2-256e71ee0a7e-kube-api-access-fwg5w\") pod \"keystone-cron-29405401-xhftw\" (UID: \"ff17d683-e6a0-4b28-97e2-256e71ee0a7e\") " pod="openstack/keystone-cron-29405401-xhftw" Nov 28 10:01:00 crc kubenswrapper[4946]: I1128 10:01:00.504505 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29405401-xhftw" Nov 28 10:01:01 crc kubenswrapper[4946]: I1128 10:01:01.088816 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29405401-xhftw"] Nov 28 10:01:01 crc kubenswrapper[4946]: I1128 10:01:01.388405 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29405401-xhftw" event={"ID":"ff17d683-e6a0-4b28-97e2-256e71ee0a7e","Type":"ContainerStarted","Data":"4541babe3cdfdf49cf32ed3df0a5506ba57c11b12796e91df634b5d945d4bb2b"} Nov 28 10:01:01 crc kubenswrapper[4946]: I1128 10:01:01.388504 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29405401-xhftw" event={"ID":"ff17d683-e6a0-4b28-97e2-256e71ee0a7e","Type":"ContainerStarted","Data":"541da5c121091ca2571b73e450f4c37c78b0f06f7f400b4abe10c305920bbfc4"} Nov 28 10:01:01 crc kubenswrapper[4946]: I1128 10:01:01.408442 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29405401-xhftw" podStartSLOduration=1.408425409 podStartE2EDuration="1.408425409s" podCreationTimestamp="2025-11-28 10:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 10:01:01.406669116 +0000 UTC m=+11315.784734227" watchObservedRunningTime="2025-11-28 10:01:01.408425409 +0000 UTC m=+11315.786490520" Nov 28 10:01:03 crc kubenswrapper[4946]: I1128 10:01:03.405992 4946 generic.go:334] "Generic (PLEG): container finished" podID="ff17d683-e6a0-4b28-97e2-256e71ee0a7e" containerID="4541babe3cdfdf49cf32ed3df0a5506ba57c11b12796e91df634b5d945d4bb2b" exitCode=0 Nov 28 10:01:03 crc kubenswrapper[4946]: I1128 10:01:03.406042 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29405401-xhftw" event={"ID":"ff17d683-e6a0-4b28-97e2-256e71ee0a7e","Type":"ContainerDied","Data":"4541babe3cdfdf49cf32ed3df0a5506ba57c11b12796e91df634b5d945d4bb2b"} Nov 28 10:01:04 crc kubenswrapper[4946]: 
I1128 10:01:04.880917 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29405401-xhftw" Nov 28 10:01:05 crc kubenswrapper[4946]: I1128 10:01:05.001284 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwg5w\" (UniqueName: \"kubernetes.io/projected/ff17d683-e6a0-4b28-97e2-256e71ee0a7e-kube-api-access-fwg5w\") pod \"ff17d683-e6a0-4b28-97e2-256e71ee0a7e\" (UID: \"ff17d683-e6a0-4b28-97e2-256e71ee0a7e\") " Nov 28 10:01:05 crc kubenswrapper[4946]: I1128 10:01:05.001350 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff17d683-e6a0-4b28-97e2-256e71ee0a7e-combined-ca-bundle\") pod \"ff17d683-e6a0-4b28-97e2-256e71ee0a7e\" (UID: \"ff17d683-e6a0-4b28-97e2-256e71ee0a7e\") " Nov 28 10:01:05 crc kubenswrapper[4946]: I1128 10:01:05.001414 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff17d683-e6a0-4b28-97e2-256e71ee0a7e-config-data\") pod \"ff17d683-e6a0-4b28-97e2-256e71ee0a7e\" (UID: \"ff17d683-e6a0-4b28-97e2-256e71ee0a7e\") " Nov 28 10:01:05 crc kubenswrapper[4946]: I1128 10:01:05.001512 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ff17d683-e6a0-4b28-97e2-256e71ee0a7e-fernet-keys\") pod \"ff17d683-e6a0-4b28-97e2-256e71ee0a7e\" (UID: \"ff17d683-e6a0-4b28-97e2-256e71ee0a7e\") " Nov 28 10:01:05 crc kubenswrapper[4946]: I1128 10:01:05.017662 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff17d683-e6a0-4b28-97e2-256e71ee0a7e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "ff17d683-e6a0-4b28-97e2-256e71ee0a7e" (UID: "ff17d683-e6a0-4b28-97e2-256e71ee0a7e"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 10:01:05 crc kubenswrapper[4946]: I1128 10:01:05.017723 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff17d683-e6a0-4b28-97e2-256e71ee0a7e-kube-api-access-fwg5w" (OuterVolumeSpecName: "kube-api-access-fwg5w") pod "ff17d683-e6a0-4b28-97e2-256e71ee0a7e" (UID: "ff17d683-e6a0-4b28-97e2-256e71ee0a7e"). InnerVolumeSpecName "kube-api-access-fwg5w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 10:01:05 crc kubenswrapper[4946]: I1128 10:01:05.041555 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff17d683-e6a0-4b28-97e2-256e71ee0a7e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff17d683-e6a0-4b28-97e2-256e71ee0a7e" (UID: "ff17d683-e6a0-4b28-97e2-256e71ee0a7e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 10:01:05 crc kubenswrapper[4946]: I1128 10:01:05.094529 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff17d683-e6a0-4b28-97e2-256e71ee0a7e-config-data" (OuterVolumeSpecName: "config-data") pod "ff17d683-e6a0-4b28-97e2-256e71ee0a7e" (UID: "ff17d683-e6a0-4b28-97e2-256e71ee0a7e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 10:01:05 crc kubenswrapper[4946]: I1128 10:01:05.104973 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwg5w\" (UniqueName: \"kubernetes.io/projected/ff17d683-e6a0-4b28-97e2-256e71ee0a7e-kube-api-access-fwg5w\") on node \"crc\" DevicePath \"\"" Nov 28 10:01:05 crc kubenswrapper[4946]: I1128 10:01:05.105189 4946 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff17d683-e6a0-4b28-97e2-256e71ee0a7e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 10:01:05 crc kubenswrapper[4946]: I1128 10:01:05.105200 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff17d683-e6a0-4b28-97e2-256e71ee0a7e-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 10:01:05 crc kubenswrapper[4946]: I1128 10:01:05.105208 4946 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ff17d683-e6a0-4b28-97e2-256e71ee0a7e-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 28 10:01:05 crc kubenswrapper[4946]: I1128 10:01:05.448851 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29405401-xhftw" event={"ID":"ff17d683-e6a0-4b28-97e2-256e71ee0a7e","Type":"ContainerDied","Data":"541da5c121091ca2571b73e450f4c37c78b0f06f7f400b4abe10c305920bbfc4"} Nov 28 10:01:05 crc kubenswrapper[4946]: I1128 10:01:05.449226 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="541da5c121091ca2571b73e450f4c37c78b0f06f7f400b4abe10c305920bbfc4" Nov 28 10:01:05 crc kubenswrapper[4946]: I1128 10:01:05.448950 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29405401-xhftw" Nov 28 10:01:36 crc kubenswrapper[4946]: I1128 10:01:36.313958 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fts45"] Nov 28 10:01:36 crc kubenswrapper[4946]: E1128 10:01:36.316383 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff17d683-e6a0-4b28-97e2-256e71ee0a7e" containerName="keystone-cron" Nov 28 10:01:36 crc kubenswrapper[4946]: I1128 10:01:36.316407 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff17d683-e6a0-4b28-97e2-256e71ee0a7e" containerName="keystone-cron" Nov 28 10:01:36 crc kubenswrapper[4946]: I1128 10:01:36.316697 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff17d683-e6a0-4b28-97e2-256e71ee0a7e" containerName="keystone-cron" Nov 28 10:01:36 crc kubenswrapper[4946]: I1128 10:01:36.318740 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fts45" Nov 28 10:01:36 crc kubenswrapper[4946]: I1128 10:01:36.329347 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fts45"] Nov 28 10:01:36 crc kubenswrapper[4946]: I1128 10:01:36.412827 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1266533f-f63c-49ce-bdbf-ea5b76fedecf-utilities\") pod \"redhat-operators-fts45\" (UID: \"1266533f-f63c-49ce-bdbf-ea5b76fedecf\") " pod="openshift-marketplace/redhat-operators-fts45" Nov 28 10:01:36 crc kubenswrapper[4946]: I1128 10:01:36.412914 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qgxs\" (UniqueName: \"kubernetes.io/projected/1266533f-f63c-49ce-bdbf-ea5b76fedecf-kube-api-access-4qgxs\") pod \"redhat-operators-fts45\" (UID: \"1266533f-f63c-49ce-bdbf-ea5b76fedecf\") " pod="openshift-marketplace/redhat-operators-fts45" Nov 28 10:01:36 crc kubenswrapper[4946]: I1128 10:01:36.412943 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1266533f-f63c-49ce-bdbf-ea5b76fedecf-catalog-content\") pod \"redhat-operators-fts45\" (UID: \"1266533f-f63c-49ce-bdbf-ea5b76fedecf\") " pod="openshift-marketplace/redhat-operators-fts45" Nov 28 10:01:36 crc kubenswrapper[4946]: I1128 10:01:36.514784 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1266533f-f63c-49ce-bdbf-ea5b76fedecf-utilities\") pod \"redhat-operators-fts45\" (UID: \"1266533f-f63c-49ce-bdbf-ea5b76fedecf\") " pod="openshift-marketplace/redhat-operators-fts45" Nov 28 10:01:36 crc kubenswrapper[4946]: I1128 10:01:36.514923 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qgxs\" (UniqueName: \"kubernetes.io/projected/1266533f-f63c-49ce-bdbf-ea5b76fedecf-kube-api-access-4qgxs\") pod \"redhat-operators-fts45\" (UID: \"1266533f-f63c-49ce-bdbf-ea5b76fedecf\") " pod="openshift-marketplace/redhat-operators-fts45" Nov 28 10:01:36 crc kubenswrapper[4946]: I1128 10:01:36.514955 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1266533f-f63c-49ce-bdbf-ea5b76fedecf-catalog-content\") pod \"redhat-operators-fts45\" (UID: \"1266533f-f63c-49ce-bdbf-ea5b76fedecf\") " pod="openshift-marketplace/redhat-operators-fts45" Nov 28 10:01:36 crc kubenswrapper[4946]: I1128 10:01:36.515326 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1266533f-f63c-49ce-bdbf-ea5b76fedecf-utilities\") pod \"redhat-operators-fts45\" (UID: \"1266533f-f63c-49ce-bdbf-ea5b76fedecf\") " pod="openshift-marketplace/redhat-operators-fts45" Nov 28 10:01:36 crc kubenswrapper[4946]: I1128 10:01:36.515599 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1266533f-f63c-49ce-bdbf-ea5b76fedecf-catalog-content\") pod \"redhat-operators-fts45\" (UID: \"1266533f-f63c-49ce-bdbf-ea5b76fedecf\") " pod="openshift-marketplace/redhat-operators-fts45" Nov 28 10:01:36 crc kubenswrapper[4946]: I1128 10:01:36.532358 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4qgxs\" (UniqueName: \"kubernetes.io/projected/1266533f-f63c-49ce-bdbf-ea5b76fedecf-kube-api-access-4qgxs\") pod \"redhat-operators-fts45\" (UID: \"1266533f-f63c-49ce-bdbf-ea5b76fedecf\") " pod="openshift-marketplace/redhat-operators-fts45" Nov 28 10:01:36 crc kubenswrapper[4946]: I1128 10:01:36.699492 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fts45" Nov 28 10:01:37 crc kubenswrapper[4946]: I1128 10:01:37.213278 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fts45"] Nov 28 10:01:37 crc kubenswrapper[4946]: I1128 10:01:37.831117 4946 generic.go:334] "Generic (PLEG): container finished" podID="1266533f-f63c-49ce-bdbf-ea5b76fedecf" containerID="ed7b3009e3e5abaae4fe7b2858a7f9ba8d829e084e08d079659346bd504f88b7" exitCode=0 Nov 28 10:01:37 crc kubenswrapper[4946]: I1128 10:01:37.831197 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fts45" event={"ID":"1266533f-f63c-49ce-bdbf-ea5b76fedecf","Type":"ContainerDied","Data":"ed7b3009e3e5abaae4fe7b2858a7f9ba8d829e084e08d079659346bd504f88b7"} Nov 28 10:01:37 crc kubenswrapper[4946]: I1128 10:01:37.831323 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fts45" event={"ID":"1266533f-f63c-49ce-bdbf-ea5b76fedecf","Type":"ContainerStarted","Data":"c1a2770f23106084ff4ae11405d63e246a0af62ac0033aad09c42d402a2c6e92"} Nov 28 10:01:37 crc kubenswrapper[4946]: I1128 10:01:37.833084 4946 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 10:01:38 crc kubenswrapper[4946]: I1128 10:01:38.842906 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fts45" event={"ID":"1266533f-f63c-49ce-bdbf-ea5b76fedecf","Type":"ContainerStarted","Data":"2feb7f97814a6a88b6488cfd5612ed4075644952909abb4c3ec1f5bd508697d2"} Nov 28 10:01:42 crc kubenswrapper[4946]: I1128 10:01:42.228277 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rvnmp"] Nov 28 10:01:42 crc kubenswrapper[4946]: I1128 10:01:42.232276 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rvnmp" Nov 28 10:01:42 crc kubenswrapper[4946]: I1128 10:01:42.252370 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rvnmp"] Nov 28 10:01:42 crc kubenswrapper[4946]: I1128 10:01:42.344800 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/945a385c-8dcc-456e-9d2d-36d2675b8f43-utilities\") pod \"redhat-marketplace-rvnmp\" (UID: \"945a385c-8dcc-456e-9d2d-36d2675b8f43\") " pod="openshift-marketplace/redhat-marketplace-rvnmp" Nov 28 10:01:42 crc kubenswrapper[4946]: I1128 10:01:42.345193 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/945a385c-8dcc-456e-9d2d-36d2675b8f43-catalog-content\") pod \"redhat-marketplace-rvnmp\" (UID: \"945a385c-8dcc-456e-9d2d-36d2675b8f43\") " pod="openshift-marketplace/redhat-marketplace-rvnmp" Nov 28 10:01:42 crc kubenswrapper[4946]: I1128 10:01:42.345436 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nx7l\" (UniqueName: \"kubernetes.io/projected/945a385c-8dcc-456e-9d2d-36d2675b8f43-kube-api-access-9nx7l\") pod \"redhat-marketplace-rvnmp\" (UID: \"945a385c-8dcc-456e-9d2d-36d2675b8f43\") " pod="openshift-marketplace/redhat-marketplace-rvnmp" Nov 28 10:01:42 crc kubenswrapper[4946]: I1128 10:01:42.447812 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/945a385c-8dcc-456e-9d2d-36d2675b8f43-catalog-content\") pod \"redhat-marketplace-rvnmp\" (UID: \"945a385c-8dcc-456e-9d2d-36d2675b8f43\") " pod="openshift-marketplace/redhat-marketplace-rvnmp" Nov 28 10:01:42 crc kubenswrapper[4946]: I1128 10:01:42.447947 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nx7l\" (UniqueName: \"kubernetes.io/projected/945a385c-8dcc-456e-9d2d-36d2675b8f43-kube-api-access-9nx7l\") pod \"redhat-marketplace-rvnmp\" (UID: \"945a385c-8dcc-456e-9d2d-36d2675b8f43\") " pod="openshift-marketplace/redhat-marketplace-rvnmp" Nov 28 10:01:42 crc kubenswrapper[4946]: I1128 10:01:42.448106 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/945a385c-8dcc-456e-9d2d-36d2675b8f43-utilities\") pod \"redhat-marketplace-rvnmp\" (UID: \"945a385c-8dcc-456e-9d2d-36d2675b8f43\") " pod="openshift-marketplace/redhat-marketplace-rvnmp" Nov 28 10:01:42 crc kubenswrapper[4946]: I1128 10:01:42.448856 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/945a385c-8dcc-456e-9d2d-36d2675b8f43-utilities\") pod \"redhat-marketplace-rvnmp\" (UID: \"945a385c-8dcc-456e-9d2d-36d2675b8f43\") " pod="openshift-marketplace/redhat-marketplace-rvnmp" Nov 28 10:01:42 crc kubenswrapper[4946]: I1128 10:01:42.449211 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/945a385c-8dcc-456e-9d2d-36d2675b8f43-catalog-content\") pod \"redhat-marketplace-rvnmp\" (UID: \"945a385c-8dcc-456e-9d2d-36d2675b8f43\") " pod="openshift-marketplace/redhat-marketplace-rvnmp" Nov 28 10:01:42 crc kubenswrapper[4946]: I1128 10:01:42.474279 4946 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-9nx7l\" (UniqueName: \"kubernetes.io/projected/945a385c-8dcc-456e-9d2d-36d2675b8f43-kube-api-access-9nx7l\") pod \"redhat-marketplace-rvnmp\" (UID: \"945a385c-8dcc-456e-9d2d-36d2675b8f43\") " pod="openshift-marketplace/redhat-marketplace-rvnmp" Nov 28 10:01:42 crc kubenswrapper[4946]: I1128 10:01:42.573259 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rvnmp" Nov 28 10:01:42 crc kubenswrapper[4946]: I1128 10:01:42.916876 4946 generic.go:334] "Generic (PLEG): container finished" podID="1266533f-f63c-49ce-bdbf-ea5b76fedecf" containerID="2feb7f97814a6a88b6488cfd5612ed4075644952909abb4c3ec1f5bd508697d2" exitCode=0 Nov 28 10:01:42 crc kubenswrapper[4946]: I1128 10:01:42.917183 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fts45" event={"ID":"1266533f-f63c-49ce-bdbf-ea5b76fedecf","Type":"ContainerDied","Data":"2feb7f97814a6a88b6488cfd5612ed4075644952909abb4c3ec1f5bd508697d2"} Nov 28 10:01:43 crc kubenswrapper[4946]: W1128 10:01:43.083511 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod945a385c_8dcc_456e_9d2d_36d2675b8f43.slice/crio-5630f7549e953d126624785e3b9f0a8fbc5eb455733c6b733de8574799238de6 WatchSource:0}: Error finding container 5630f7549e953d126624785e3b9f0a8fbc5eb455733c6b733de8574799238de6: Status 404 returned error can't find the container with id 5630f7549e953d126624785e3b9f0a8fbc5eb455733c6b733de8574799238de6 Nov 28 10:01:43 crc kubenswrapper[4946]: I1128 10:01:43.098052 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rvnmp"] Nov 28 10:01:43 crc kubenswrapper[4946]: I1128 10:01:43.931540 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fts45" event={"ID":"1266533f-f63c-49ce-bdbf-ea5b76fedecf","Type":"ContainerStarted","Data":"ded4cd47451841a04c233c248f68eb06f7c97b0e70d939660d073097e843809e"} Nov 28 10:01:43 crc kubenswrapper[4946]: I1128 10:01:43.933735 4946 generic.go:334] "Generic (PLEG): container finished" podID="945a385c-8dcc-456e-9d2d-36d2675b8f43" containerID="d06dc5edb3936cba788c51c01ceb5058a79f8fb7da245575d46392af26213e81" exitCode=0 Nov 28 10:01:43 crc kubenswrapper[4946]: I1128 10:01:43.933793 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rvnmp" event={"ID":"945a385c-8dcc-456e-9d2d-36d2675b8f43","Type":"ContainerDied","Data":"d06dc5edb3936cba788c51c01ceb5058a79f8fb7da245575d46392af26213e81"} Nov 28 10:01:43 crc kubenswrapper[4946]: I1128 10:01:43.933822 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rvnmp" event={"ID":"945a385c-8dcc-456e-9d2d-36d2675b8f43","Type":"ContainerStarted","Data":"5630f7549e953d126624785e3b9f0a8fbc5eb455733c6b733de8574799238de6"} Nov 28 10:01:43 crc kubenswrapper[4946]: I1128 10:01:43.958106 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fts45" podStartSLOduration=2.266232038 podStartE2EDuration="7.958087762s" podCreationTimestamp="2025-11-28 10:01:36 +0000 UTC" firstStartedPulling="2025-11-28 10:01:37.832854527 +0000 UTC m=+11352.210919628" lastFinishedPulling="2025-11-28 10:01:43.524710231 +0000 UTC m=+11357.902775352" observedRunningTime="2025-11-28 10:01:43.953196331 +0000 UTC m=+11358.331261462" 
watchObservedRunningTime="2025-11-28 10:01:43.958087762 +0000 UTC m=+11358.336152883" Nov 28 10:01:44 crc kubenswrapper[4946]: I1128 10:01:44.947484 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rvnmp" event={"ID":"945a385c-8dcc-456e-9d2d-36d2675b8f43","Type":"ContainerStarted","Data":"23b43a6050fb995c5d57c90c409c3832a53c2a60d07d718711fcc475ce91b74d"} Nov 28 10:01:45 crc kubenswrapper[4946]: I1128 10:01:45.957245 4946 generic.go:334] "Generic (PLEG): container finished" podID="945a385c-8dcc-456e-9d2d-36d2675b8f43" containerID="23b43a6050fb995c5d57c90c409c3832a53c2a60d07d718711fcc475ce91b74d" exitCode=0 Nov 28 10:01:45 crc kubenswrapper[4946]: I1128 10:01:45.957288 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rvnmp" event={"ID":"945a385c-8dcc-456e-9d2d-36d2675b8f43","Type":"ContainerDied","Data":"23b43a6050fb995c5d57c90c409c3832a53c2a60d07d718711fcc475ce91b74d"} Nov 28 10:01:46 crc kubenswrapper[4946]: I1128 10:01:46.699897 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fts45" Nov 28 10:01:46 crc kubenswrapper[4946]: I1128 10:01:46.700223 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fts45" Nov 28 10:01:46 crc kubenswrapper[4946]: I1128 10:01:46.968788 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rvnmp" event={"ID":"945a385c-8dcc-456e-9d2d-36d2675b8f43","Type":"ContainerStarted","Data":"adab64441a40c8136c2577fa65232c7486142faf117e7494c4e6a8f2ad225098"} Nov 28 10:01:46 crc kubenswrapper[4946]: I1128 10:01:46.988127 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rvnmp" podStartSLOduration=2.307528987 podStartE2EDuration="4.988108173s" podCreationTimestamp="2025-11-28 10:01:42 +0000 UTC" firstStartedPulling="2025-11-28 10:01:43.93659621 +0000 UTC m=+11358.314661341" lastFinishedPulling="2025-11-28 10:01:46.617175396 +0000 UTC m=+11360.995240527" observedRunningTime="2025-11-28 10:01:46.983614532 +0000 UTC m=+11361.361679643" watchObservedRunningTime="2025-11-28 10:01:46.988108173 +0000 UTC m=+11361.366173284" Nov 28 10:01:47 crc kubenswrapper[4946]: I1128 10:01:47.772125 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fts45" podUID="1266533f-f63c-49ce-bdbf-ea5b76fedecf" containerName="registry-server" probeResult="failure" output=< Nov 28 10:01:47 crc kubenswrapper[4946]: timeout: failed to connect service ":50051" within 1s Nov 28 10:01:47 crc kubenswrapper[4946]: > Nov 28 10:01:52 crc kubenswrapper[4946]: I1128 10:01:52.573417 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rvnmp" Nov 28 10:01:52 crc kubenswrapper[4946]: I1128 10:01:52.573907 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rvnmp" Nov 28 10:01:52 crc kubenswrapper[4946]: I1128 10:01:52.626931 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rvnmp" Nov 28 10:01:53 crc kubenswrapper[4946]: I1128 10:01:53.069997 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rvnmp" Nov 28 10:01:53 crc kubenswrapper[4946]: I1128 10:01:53.125396 
Nov 28 10:01:53 crc kubenswrapper[4946]: I1128 10:01:53.125396 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rvnmp"]
Nov 28 10:01:55 crc kubenswrapper[4946]: I1128 10:01:55.045059 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rvnmp" podUID="945a385c-8dcc-456e-9d2d-36d2675b8f43" containerName="registry-server" containerID="cri-o://adab64441a40c8136c2577fa65232c7486142faf117e7494c4e6a8f2ad225098" gracePeriod=2
Nov 28 10:01:55 crc kubenswrapper[4946]: I1128 10:01:55.664760 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rvnmp"
Nov 28 10:01:55 crc kubenswrapper[4946]: I1128 10:01:55.782153 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/945a385c-8dcc-456e-9d2d-36d2675b8f43-catalog-content\") pod \"945a385c-8dcc-456e-9d2d-36d2675b8f43\" (UID: \"945a385c-8dcc-456e-9d2d-36d2675b8f43\") "
Nov 28 10:01:55 crc kubenswrapper[4946]: I1128 10:01:55.782253 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nx7l\" (UniqueName: \"kubernetes.io/projected/945a385c-8dcc-456e-9d2d-36d2675b8f43-kube-api-access-9nx7l\") pod \"945a385c-8dcc-456e-9d2d-36d2675b8f43\" (UID: \"945a385c-8dcc-456e-9d2d-36d2675b8f43\") "
Nov 28 10:01:55 crc kubenswrapper[4946]: I1128 10:01:55.783123 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/945a385c-8dcc-456e-9d2d-36d2675b8f43-utilities\") pod \"945a385c-8dcc-456e-9d2d-36d2675b8f43\" (UID: \"945a385c-8dcc-456e-9d2d-36d2675b8f43\") "
Nov 28 10:01:55 crc kubenswrapper[4946]: I1128 10:01:55.783834 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/945a385c-8dcc-456e-9d2d-36d2675b8f43-utilities" (OuterVolumeSpecName: "utilities") pod "945a385c-8dcc-456e-9d2d-36d2675b8f43" (UID: "945a385c-8dcc-456e-9d2d-36d2675b8f43"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 10:01:55 crc kubenswrapper[4946]: I1128 10:01:55.787499 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/945a385c-8dcc-456e-9d2d-36d2675b8f43-kube-api-access-9nx7l" (OuterVolumeSpecName: "kube-api-access-9nx7l") pod "945a385c-8dcc-456e-9d2d-36d2675b8f43" (UID: "945a385c-8dcc-456e-9d2d-36d2675b8f43"). InnerVolumeSpecName "kube-api-access-9nx7l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 10:01:55 crc kubenswrapper[4946]: I1128 10:01:55.797684 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/945a385c-8dcc-456e-9d2d-36d2675b8f43-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "945a385c-8dcc-456e-9d2d-36d2675b8f43" (UID: "945a385c-8dcc-456e-9d2d-36d2675b8f43"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 10:01:55 crc kubenswrapper[4946]: I1128 10:01:55.886012 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/945a385c-8dcc-456e-9d2d-36d2675b8f43-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 10:01:55 crc kubenswrapper[4946]: I1128 10:01:55.886047 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nx7l\" (UniqueName: \"kubernetes.io/projected/945a385c-8dcc-456e-9d2d-36d2675b8f43-kube-api-access-9nx7l\") on node \"crc\" DevicePath \"\""
Nov 28 10:01:55 crc kubenswrapper[4946]: I1128 10:01:55.886064 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/945a385c-8dcc-456e-9d2d-36d2675b8f43-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 10:01:56 crc kubenswrapper[4946]: I1128 10:01:56.063306 4946 generic.go:334] "Generic (PLEG): container finished" podID="945a385c-8dcc-456e-9d2d-36d2675b8f43" containerID="adab64441a40c8136c2577fa65232c7486142faf117e7494c4e6a8f2ad225098" exitCode=0
Nov 28 10:01:56 crc kubenswrapper[4946]: I1128 10:01:56.063357 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rvnmp" event={"ID":"945a385c-8dcc-456e-9d2d-36d2675b8f43","Type":"ContainerDied","Data":"adab64441a40c8136c2577fa65232c7486142faf117e7494c4e6a8f2ad225098"}
Nov 28 10:01:56 crc kubenswrapper[4946]: I1128 10:01:56.063383 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rvnmp" event={"ID":"945a385c-8dcc-456e-9d2d-36d2675b8f43","Type":"ContainerDied","Data":"5630f7549e953d126624785e3b9f0a8fbc5eb455733c6b733de8574799238de6"}
Nov 28 10:01:56 crc kubenswrapper[4946]: I1128 10:01:56.063399 4946 scope.go:117] "RemoveContainer" containerID="adab64441a40c8136c2577fa65232c7486142faf117e7494c4e6a8f2ad225098"
Nov 28 10:01:56 crc kubenswrapper[4946]: I1128 10:01:56.063537 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rvnmp"
Nov 28 10:01:56 crc kubenswrapper[4946]: I1128 10:01:56.089024 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rvnmp"]
Nov 28 10:01:56 crc kubenswrapper[4946]: I1128 10:01:56.091400 4946 scope.go:117] "RemoveContainer" containerID="23b43a6050fb995c5d57c90c409c3832a53c2a60d07d718711fcc475ce91b74d"
Nov 28 10:01:56 crc kubenswrapper[4946]: I1128 10:01:56.106170 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rvnmp"]
Nov 28 10:01:56 crc kubenswrapper[4946]: I1128 10:01:56.130166 4946 scope.go:117] "RemoveContainer" containerID="d06dc5edb3936cba788c51c01ceb5058a79f8fb7da245575d46392af26213e81"
Nov 28 10:01:56 crc kubenswrapper[4946]: I1128 10:01:56.183213 4946 scope.go:117] "RemoveContainer" containerID="adab64441a40c8136c2577fa65232c7486142faf117e7494c4e6a8f2ad225098"
Nov 28 10:01:56 crc kubenswrapper[4946]: E1128 10:01:56.183670 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adab64441a40c8136c2577fa65232c7486142faf117e7494c4e6a8f2ad225098\": container with ID starting with adab64441a40c8136c2577fa65232c7486142faf117e7494c4e6a8f2ad225098 not found: ID does not exist" containerID="adab64441a40c8136c2577fa65232c7486142faf117e7494c4e6a8f2ad225098"
Nov 28 10:01:56 crc kubenswrapper[4946]: I1128 10:01:56.183702 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adab64441a40c8136c2577fa65232c7486142faf117e7494c4e6a8f2ad225098"} err="failed to get container status \"adab64441a40c8136c2577fa65232c7486142faf117e7494c4e6a8f2ad225098\": rpc error: code = NotFound desc = could not find container \"adab64441a40c8136c2577fa65232c7486142faf117e7494c4e6a8f2ad225098\": container with ID starting with adab64441a40c8136c2577fa65232c7486142faf117e7494c4e6a8f2ad225098 not found: ID does not exist"
Nov 28 10:01:56 crc kubenswrapper[4946]: I1128 10:01:56.183721 4946 scope.go:117] "RemoveContainer" containerID="23b43a6050fb995c5d57c90c409c3832a53c2a60d07d718711fcc475ce91b74d"
Nov 28 10:01:56 crc kubenswrapper[4946]: E1128 10:01:56.184063 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23b43a6050fb995c5d57c90c409c3832a53c2a60d07d718711fcc475ce91b74d\": container with ID starting with 23b43a6050fb995c5d57c90c409c3832a53c2a60d07d718711fcc475ce91b74d not found: ID does not exist" containerID="23b43a6050fb995c5d57c90c409c3832a53c2a60d07d718711fcc475ce91b74d"
Nov 28 10:01:56 crc kubenswrapper[4946]: I1128 10:01:56.184108 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23b43a6050fb995c5d57c90c409c3832a53c2a60d07d718711fcc475ce91b74d"} err="failed to get container status \"23b43a6050fb995c5d57c90c409c3832a53c2a60d07d718711fcc475ce91b74d\": rpc error: code = NotFound desc = could not find container \"23b43a6050fb995c5d57c90c409c3832a53c2a60d07d718711fcc475ce91b74d\": container with ID starting with 23b43a6050fb995c5d57c90c409c3832a53c2a60d07d718711fcc475ce91b74d not found: ID does not exist"
Nov 28 10:01:56 crc kubenswrapper[4946]: I1128 10:01:56.184134 4946 scope.go:117] "RemoveContainer" containerID="d06dc5edb3936cba788c51c01ceb5058a79f8fb7da245575d46392af26213e81"
Nov 28 10:01:56 crc kubenswrapper[4946]: E1128 10:01:56.186436 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d06dc5edb3936cba788c51c01ceb5058a79f8fb7da245575d46392af26213e81\": container with ID starting with d06dc5edb3936cba788c51c01ceb5058a79f8fb7da245575d46392af26213e81 not found: ID does not exist" containerID="d06dc5edb3936cba788c51c01ceb5058a79f8fb7da245575d46392af26213e81"
Nov 28 10:01:56 crc kubenswrapper[4946]: I1128 10:01:56.186486 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d06dc5edb3936cba788c51c01ceb5058a79f8fb7da245575d46392af26213e81"} err="failed to get container status \"d06dc5edb3936cba788c51c01ceb5058a79f8fb7da245575d46392af26213e81\": rpc error: code = NotFound desc = could not find container \"d06dc5edb3936cba788c51c01ceb5058a79f8fb7da245575d46392af26213e81\": container with ID starting with d06dc5edb3936cba788c51c01ceb5058a79f8fb7da245575d46392af26213e81 not found: ID does not exist"
Nov 28 10:01:57 crc kubenswrapper[4946]: I1128 10:01:57.746024 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fts45" podUID="1266533f-f63c-49ce-bdbf-ea5b76fedecf" containerName="registry-server" probeResult="failure" output=<
Nov 28 10:01:57 crc kubenswrapper[4946]: timeout: failed to connect service ":50051" within 1s
Nov 28 10:01:57 crc kubenswrapper[4946]: >
Nov 28 10:01:58 crc kubenswrapper[4946]: I1128 10:01:58.004307 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="945a385c-8dcc-456e-9d2d-36d2675b8f43" path="/var/lib/kubelet/pods/945a385c-8dcc-456e-9d2d-36d2675b8f43/volumes"
Nov 28 10:02:06 crc kubenswrapper[4946]: I1128 10:02:06.763196 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fts45"
Nov 28 10:02:06 crc kubenswrapper[4946]: I1128 10:02:06.850277 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fts45"
Nov 28 10:02:07 crc kubenswrapper[4946]: I1128 10:02:07.521760 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fts45"]
Nov 28 10:02:08 crc kubenswrapper[4946]: I1128 10:02:08.185242 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fts45" podUID="1266533f-f63c-49ce-bdbf-ea5b76fedecf" containerName="registry-server" containerID="cri-o://ded4cd47451841a04c233c248f68eb06f7c97b0e70d939660d073097e843809e" gracePeriod=2
Nov 28 10:02:08 crc kubenswrapper[4946]: I1128 10:02:08.891884 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fts45"
Nov 28 10:02:08 crc kubenswrapper[4946]: I1128 10:02:08.970122 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1266533f-f63c-49ce-bdbf-ea5b76fedecf-catalog-content\") pod \"1266533f-f63c-49ce-bdbf-ea5b76fedecf\" (UID: \"1266533f-f63c-49ce-bdbf-ea5b76fedecf\") "
Nov 28 10:02:08 crc kubenswrapper[4946]: I1128 10:02:08.970593 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1266533f-f63c-49ce-bdbf-ea5b76fedecf-utilities\") pod \"1266533f-f63c-49ce-bdbf-ea5b76fedecf\" (UID: \"1266533f-f63c-49ce-bdbf-ea5b76fedecf\") "
Nov 28 10:02:08 crc kubenswrapper[4946]: I1128 10:02:08.970672 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qgxs\" (UniqueName: \"kubernetes.io/projected/1266533f-f63c-49ce-bdbf-ea5b76fedecf-kube-api-access-4qgxs\") pod \"1266533f-f63c-49ce-bdbf-ea5b76fedecf\" (UID: \"1266533f-f63c-49ce-bdbf-ea5b76fedecf\") "
Nov 28 10:02:08 crc kubenswrapper[4946]: I1128 10:02:08.971382 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1266533f-f63c-49ce-bdbf-ea5b76fedecf-utilities" (OuterVolumeSpecName: "utilities") pod "1266533f-f63c-49ce-bdbf-ea5b76fedecf" (UID: "1266533f-f63c-49ce-bdbf-ea5b76fedecf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 10:02:08 crc kubenswrapper[4946]: I1128 10:02:08.989714 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1266533f-f63c-49ce-bdbf-ea5b76fedecf-kube-api-access-4qgxs" (OuterVolumeSpecName: "kube-api-access-4qgxs") pod "1266533f-f63c-49ce-bdbf-ea5b76fedecf" (UID: "1266533f-f63c-49ce-bdbf-ea5b76fedecf"). InnerVolumeSpecName "kube-api-access-4qgxs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 10:02:09 crc kubenswrapper[4946]: I1128 10:02:09.065383 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1266533f-f63c-49ce-bdbf-ea5b76fedecf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1266533f-f63c-49ce-bdbf-ea5b76fedecf" (UID: "1266533f-f63c-49ce-bdbf-ea5b76fedecf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 10:02:09 crc kubenswrapper[4946]: I1128 10:02:09.073637 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1266533f-f63c-49ce-bdbf-ea5b76fedecf-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 10:02:09 crc kubenswrapper[4946]: I1128 10:02:09.073671 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qgxs\" (UniqueName: \"kubernetes.io/projected/1266533f-f63c-49ce-bdbf-ea5b76fedecf-kube-api-access-4qgxs\") on node \"crc\" DevicePath \"\""
Nov 28 10:02:09 crc kubenswrapper[4946]: I1128 10:02:09.073683 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1266533f-f63c-49ce-bdbf-ea5b76fedecf-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 10:02:09 crc kubenswrapper[4946]: I1128 10:02:09.198924 4946 generic.go:334] "Generic (PLEG): container finished" podID="1266533f-f63c-49ce-bdbf-ea5b76fedecf" containerID="ded4cd47451841a04c233c248f68eb06f7c97b0e70d939660d073097e843809e" exitCode=0
Nov 28 10:02:09 crc kubenswrapper[4946]: I1128 10:02:09.198990 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fts45" event={"ID":"1266533f-f63c-49ce-bdbf-ea5b76fedecf","Type":"ContainerDied","Data":"ded4cd47451841a04c233c248f68eb06f7c97b0e70d939660d073097e843809e"}
Nov 28 10:02:09 crc kubenswrapper[4946]: I1128 10:02:09.198999 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fts45"
Nov 28 10:02:09 crc kubenswrapper[4946]: I1128 10:02:09.199040 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fts45" event={"ID":"1266533f-f63c-49ce-bdbf-ea5b76fedecf","Type":"ContainerDied","Data":"c1a2770f23106084ff4ae11405d63e246a0af62ac0033aad09c42d402a2c6e92"}
Nov 28 10:02:09 crc kubenswrapper[4946]: I1128 10:02:09.199069 4946 scope.go:117] "RemoveContainer" containerID="ded4cd47451841a04c233c248f68eb06f7c97b0e70d939660d073097e843809e"
Nov 28 10:02:09 crc kubenswrapper[4946]: I1128 10:02:09.228330 4946 scope.go:117] "RemoveContainer" containerID="2feb7f97814a6a88b6488cfd5612ed4075644952909abb4c3ec1f5bd508697d2"
Nov 28 10:02:09 crc kubenswrapper[4946]: I1128 10:02:09.242744 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fts45"]
Nov 28 10:02:09 crc kubenswrapper[4946]: I1128 10:02:09.251725 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fts45"]
Nov 28 10:02:09 crc kubenswrapper[4946]: I1128 10:02:09.254801 4946 scope.go:117] "RemoveContainer" containerID="ed7b3009e3e5abaae4fe7b2858a7f9ba8d829e084e08d079659346bd504f88b7"
Nov 28 10:02:09 crc kubenswrapper[4946]: I1128 10:02:09.309959 4946 scope.go:117] "RemoveContainer" containerID="ded4cd47451841a04c233c248f68eb06f7c97b0e70d939660d073097e843809e"
Nov 28 10:02:09 crc kubenswrapper[4946]: E1128 10:02:09.311496 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ded4cd47451841a04c233c248f68eb06f7c97b0e70d939660d073097e843809e\": container with ID starting with ded4cd47451841a04c233c248f68eb06f7c97b0e70d939660d073097e843809e not found: ID does not exist" containerID="ded4cd47451841a04c233c248f68eb06f7c97b0e70d939660d073097e843809e"
Nov 28 10:02:09 crc kubenswrapper[4946]: I1128 10:02:09.311535 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ded4cd47451841a04c233c248f68eb06f7c97b0e70d939660d073097e843809e"} err="failed to get container status \"ded4cd47451841a04c233c248f68eb06f7c97b0e70d939660d073097e843809e\": rpc error: code = NotFound desc = could not find container \"ded4cd47451841a04c233c248f68eb06f7c97b0e70d939660d073097e843809e\": container with ID starting with ded4cd47451841a04c233c248f68eb06f7c97b0e70d939660d073097e843809e not found: ID does not exist"
Nov 28 10:02:09 crc kubenswrapper[4946]: I1128 10:02:09.311557 4946 scope.go:117] "RemoveContainer" containerID="2feb7f97814a6a88b6488cfd5612ed4075644952909abb4c3ec1f5bd508697d2"
Nov 28 10:02:09 crc kubenswrapper[4946]: E1128 10:02:09.311895 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2feb7f97814a6a88b6488cfd5612ed4075644952909abb4c3ec1f5bd508697d2\": container with ID starting with 2feb7f97814a6a88b6488cfd5612ed4075644952909abb4c3ec1f5bd508697d2 not found: ID does not exist" containerID="2feb7f97814a6a88b6488cfd5612ed4075644952909abb4c3ec1f5bd508697d2"
Nov 28 10:02:09 crc kubenswrapper[4946]: I1128 10:02:09.311915 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2feb7f97814a6a88b6488cfd5612ed4075644952909abb4c3ec1f5bd508697d2"} err="failed to get container status \"2feb7f97814a6a88b6488cfd5612ed4075644952909abb4c3ec1f5bd508697d2\": rpc error: code = NotFound desc = could not find container \"2feb7f97814a6a88b6488cfd5612ed4075644952909abb4c3ec1f5bd508697d2\": container with ID starting with 2feb7f97814a6a88b6488cfd5612ed4075644952909abb4c3ec1f5bd508697d2 not found: ID does not exist"
Nov 28 10:02:09 crc kubenswrapper[4946]: I1128 10:02:09.311930 4946 scope.go:117] "RemoveContainer" containerID="ed7b3009e3e5abaae4fe7b2858a7f9ba8d829e084e08d079659346bd504f88b7"
Nov 28 10:02:09 crc kubenswrapper[4946]: E1128 10:02:09.312203 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed7b3009e3e5abaae4fe7b2858a7f9ba8d829e084e08d079659346bd504f88b7\": container with ID starting with ed7b3009e3e5abaae4fe7b2858a7f9ba8d829e084e08d079659346bd504f88b7 not found: ID does not exist" containerID="ed7b3009e3e5abaae4fe7b2858a7f9ba8d829e084e08d079659346bd504f88b7"
Nov 28 10:02:09 crc kubenswrapper[4946]: I1128 10:02:09.312249 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed7b3009e3e5abaae4fe7b2858a7f9ba8d829e084e08d079659346bd504f88b7"} err="failed to get container status \"ed7b3009e3e5abaae4fe7b2858a7f9ba8d829e084e08d079659346bd504f88b7\": rpc error: code = NotFound desc = could not find container \"ed7b3009e3e5abaae4fe7b2858a7f9ba8d829e084e08d079659346bd504f88b7\": container with ID starting with ed7b3009e3e5abaae4fe7b2858a7f9ba8d829e084e08d079659346bd504f88b7 not found: ID does not exist"
Nov 28 10:02:10 crc kubenswrapper[4946]: I1128 10:02:10.008033 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1266533f-f63c-49ce-bdbf-ea5b76fedecf" path="/var/lib/kubelet/pods/1266533f-f63c-49ce-bdbf-ea5b76fedecf/volumes"
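[Annotation] The repeating RemoveContainer / "NotFound" / "DeleteContainer returned error" triplets above appear to be a benign race rather than a real failure: by the time the deletor re-queries ContainerStatus, the container has already been removed (an earlier RemoveContainer or the runtime's own sandbox cleanup won), so the kubelet logs the NotFound and moves on. A sketch of the same idempotent-delete pattern in Python (CRIRuntime and NotFoundError are stand-ins, not a real client library):

```python
class NotFoundError(Exception):
    """Stand-in for a gRPC NotFound status from the container runtime."""

def remove_container(runtime, container_id):
    # Idempotent delete: "already gone" is treated as success and merely
    # logged, which is how the NotFound entries above should be read.
    try:
        runtime.remove(container_id)
    except NotFoundError:
        print(f"container {container_id[:12]} already removed; nothing to do")
```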
podUID="945a385c-8dcc-456e-9d2d-36d2675b8f43" containerName="extract-content" Nov 28 10:03:14 crc kubenswrapper[4946]: I1128 10:03:14.046236 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="945a385c-8dcc-456e-9d2d-36d2675b8f43" containerName="extract-content" Nov 28 10:03:14 crc kubenswrapper[4946]: E1128 10:03:14.046255 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="945a385c-8dcc-456e-9d2d-36d2675b8f43" containerName="extract-utilities" Nov 28 10:03:14 crc kubenswrapper[4946]: I1128 10:03:14.046262 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="945a385c-8dcc-456e-9d2d-36d2675b8f43" containerName="extract-utilities" Nov 28 10:03:14 crc kubenswrapper[4946]: E1128 10:03:14.046275 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1266533f-f63c-49ce-bdbf-ea5b76fedecf" containerName="extract-utilities" Nov 28 10:03:14 crc kubenswrapper[4946]: I1128 10:03:14.046281 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="1266533f-f63c-49ce-bdbf-ea5b76fedecf" containerName="extract-utilities" Nov 28 10:03:14 crc kubenswrapper[4946]: E1128 10:03:14.046308 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1266533f-f63c-49ce-bdbf-ea5b76fedecf" containerName="registry-server" Nov 28 10:03:14 crc kubenswrapper[4946]: I1128 10:03:14.046315 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="1266533f-f63c-49ce-bdbf-ea5b76fedecf" containerName="registry-server" Nov 28 10:03:14 crc kubenswrapper[4946]: E1128 10:03:14.046333 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1266533f-f63c-49ce-bdbf-ea5b76fedecf" containerName="extract-content" Nov 28 10:03:14 crc kubenswrapper[4946]: I1128 10:03:14.046339 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="1266533f-f63c-49ce-bdbf-ea5b76fedecf" containerName="extract-content" Nov 28 10:03:14 crc kubenswrapper[4946]: E1128 10:03:14.046357 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="945a385c-8dcc-456e-9d2d-36d2675b8f43" containerName="registry-server" Nov 28 10:03:14 crc kubenswrapper[4946]: I1128 10:03:14.046362 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="945a385c-8dcc-456e-9d2d-36d2675b8f43" containerName="registry-server" Nov 28 10:03:14 crc kubenswrapper[4946]: I1128 10:03:14.046578 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="1266533f-f63c-49ce-bdbf-ea5b76fedecf" containerName="registry-server" Nov 28 10:03:14 crc kubenswrapper[4946]: I1128 10:03:14.046597 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="945a385c-8dcc-456e-9d2d-36d2675b8f43" containerName="registry-server" Nov 28 10:03:14 crc kubenswrapper[4946]: I1128 10:03:14.048028 4946 util.go:30] "No sandbox for pod can be found. 
Nov 28 10:03:14 crc kubenswrapper[4946]: I1128 10:03:14.048028 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vftrt"
Nov 28 10:03:14 crc kubenswrapper[4946]: I1128 10:03:14.071836 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vftrt"]
Nov 28 10:03:14 crc kubenswrapper[4946]: I1128 10:03:14.238936 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc199ece-4792-42a8-8f4a-adcd3e9f81cd-utilities\") pod \"community-operators-vftrt\" (UID: \"fc199ece-4792-42a8-8f4a-adcd3e9f81cd\") " pod="openshift-marketplace/community-operators-vftrt"
Nov 28 10:03:14 crc kubenswrapper[4946]: I1128 10:03:14.239397 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqbpm\" (UniqueName: \"kubernetes.io/projected/fc199ece-4792-42a8-8f4a-adcd3e9f81cd-kube-api-access-xqbpm\") pod \"community-operators-vftrt\" (UID: \"fc199ece-4792-42a8-8f4a-adcd3e9f81cd\") " pod="openshift-marketplace/community-operators-vftrt"
Nov 28 10:03:14 crc kubenswrapper[4946]: I1128 10:03:14.239585 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc199ece-4792-42a8-8f4a-adcd3e9f81cd-catalog-content\") pod \"community-operators-vftrt\" (UID: \"fc199ece-4792-42a8-8f4a-adcd3e9f81cd\") " pod="openshift-marketplace/community-operators-vftrt"
Nov 28 10:03:14 crc kubenswrapper[4946]: I1128 10:03:14.341621 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc199ece-4792-42a8-8f4a-adcd3e9f81cd-catalog-content\") pod \"community-operators-vftrt\" (UID: \"fc199ece-4792-42a8-8f4a-adcd3e9f81cd\") " pod="openshift-marketplace/community-operators-vftrt"
Nov 28 10:03:14 crc kubenswrapper[4946]: I1128 10:03:14.341727 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc199ece-4792-42a8-8f4a-adcd3e9f81cd-utilities\") pod \"community-operators-vftrt\" (UID: \"fc199ece-4792-42a8-8f4a-adcd3e9f81cd\") " pod="openshift-marketplace/community-operators-vftrt"
Nov 28 10:03:14 crc kubenswrapper[4946]: I1128 10:03:14.341751 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqbpm\" (UniqueName: \"kubernetes.io/projected/fc199ece-4792-42a8-8f4a-adcd3e9f81cd-kube-api-access-xqbpm\") pod \"community-operators-vftrt\" (UID: \"fc199ece-4792-42a8-8f4a-adcd3e9f81cd\") " pod="openshift-marketplace/community-operators-vftrt"
Nov 28 10:03:14 crc kubenswrapper[4946]: I1128 10:03:14.342416 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc199ece-4792-42a8-8f4a-adcd3e9f81cd-catalog-content\") pod \"community-operators-vftrt\" (UID: \"fc199ece-4792-42a8-8f4a-adcd3e9f81cd\") " pod="openshift-marketplace/community-operators-vftrt"
Nov 28 10:03:14 crc kubenswrapper[4946]: I1128 10:03:14.342426 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc199ece-4792-42a8-8f4a-adcd3e9f81cd-utilities\") pod \"community-operators-vftrt\" (UID: \"fc199ece-4792-42a8-8f4a-adcd3e9f81cd\") " pod="openshift-marketplace/community-operators-vftrt"
Nov 28 10:03:14 crc kubenswrapper[4946]: I1128 10:03:14.368449 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqbpm\" (UniqueName: \"kubernetes.io/projected/fc199ece-4792-42a8-8f4a-adcd3e9f81cd-kube-api-access-xqbpm\") pod \"community-operators-vftrt\" (UID: \"fc199ece-4792-42a8-8f4a-adcd3e9f81cd\") " pod="openshift-marketplace/community-operators-vftrt"
Nov 28 10:03:14 crc kubenswrapper[4946]: I1128 10:03:14.373945 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vftrt"
Nov 28 10:03:14 crc kubenswrapper[4946]: I1128 10:03:14.952677 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vftrt"]
Nov 28 10:03:15 crc kubenswrapper[4946]: I1128 10:03:15.933058 4946 generic.go:334] "Generic (PLEG): container finished" podID="fc199ece-4792-42a8-8f4a-adcd3e9f81cd" containerID="36a91a00ae5d48225ca20a5176c47e633eae074cb782242b0a13fe86fd0f66b0" exitCode=0
Nov 28 10:03:15 crc kubenswrapper[4946]: I1128 10:03:15.933162 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vftrt" event={"ID":"fc199ece-4792-42a8-8f4a-adcd3e9f81cd","Type":"ContainerDied","Data":"36a91a00ae5d48225ca20a5176c47e633eae074cb782242b0a13fe86fd0f66b0"}
Nov 28 10:03:15 crc kubenswrapper[4946]: I1128 10:03:15.934296 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vftrt" event={"ID":"fc199ece-4792-42a8-8f4a-adcd3e9f81cd","Type":"ContainerStarted","Data":"52622abf3683004237e087d2c46529dc3b5f985d0dc515df35aac037ad5f8ee8"}
Nov 28 10:03:16 crc kubenswrapper[4946]: I1128 10:03:16.947399 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vftrt" event={"ID":"fc199ece-4792-42a8-8f4a-adcd3e9f81cd","Type":"ContainerStarted","Data":"7a52f0b51297b0546a1aeff7fd5fefbc18f03019de1911c9fe4da20773d262fc"}
Nov 28 10:03:17 crc kubenswrapper[4946]: I1128 10:03:17.961253 4946 generic.go:334] "Generic (PLEG): container finished" podID="fc199ece-4792-42a8-8f4a-adcd3e9f81cd" containerID="7a52f0b51297b0546a1aeff7fd5fefbc18f03019de1911c9fe4da20773d262fc" exitCode=0
Nov 28 10:03:17 crc kubenswrapper[4946]: I1128 10:03:17.961312 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vftrt" event={"ID":"fc199ece-4792-42a8-8f4a-adcd3e9f81cd","Type":"ContainerDied","Data":"7a52f0b51297b0546a1aeff7fd5fefbc18f03019de1911c9fe4da20773d262fc"}
Nov 28 10:03:19 crc kubenswrapper[4946]: I1128 10:03:19.984537 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vftrt" event={"ID":"fc199ece-4792-42a8-8f4a-adcd3e9f81cd","Type":"ContainerStarted","Data":"b1b4ae5ae3357cef2eb8d79069d754fb1180b6257a3c8a9d46861b008d454114"}
Nov 28 10:03:20 crc kubenswrapper[4946]: I1128 10:03:20.022952 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vftrt" podStartSLOduration=3.12233619 podStartE2EDuration="6.022934093s" podCreationTimestamp="2025-11-28 10:03:14 +0000 UTC" firstStartedPulling="2025-11-28 10:03:15.934832153 +0000 UTC m=+11450.312897254" lastFinishedPulling="2025-11-28 10:03:18.835429986 +0000 UTC m=+11453.213495157" observedRunningTime="2025-11-28 10:03:20.003505023 +0000 UTC m=+11454.381570174" watchObservedRunningTime="2025-11-28 10:03:20.022934093 +0000 UTC m=+11454.400999194"
probe="readiness" status="" pod="openshift-marketplace/community-operators-vftrt" Nov 28 10:03:24 crc kubenswrapper[4946]: I1128 10:03:24.375398 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vftrt" Nov 28 10:03:24 crc kubenswrapper[4946]: I1128 10:03:24.425803 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vftrt" Nov 28 10:03:24 crc kubenswrapper[4946]: I1128 10:03:24.730884 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 10:03:24 crc kubenswrapper[4946]: I1128 10:03:24.730937 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 10:03:25 crc kubenswrapper[4946]: I1128 10:03:25.091589 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vftrt" Nov 28 10:03:25 crc kubenswrapper[4946]: I1128 10:03:25.159351 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vftrt"] Nov 28 10:03:27 crc kubenswrapper[4946]: I1128 10:03:27.063506 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vftrt" podUID="fc199ece-4792-42a8-8f4a-adcd3e9f81cd" containerName="registry-server" containerID="cri-o://b1b4ae5ae3357cef2eb8d79069d754fb1180b6257a3c8a9d46861b008d454114" gracePeriod=2 Nov 28 10:03:27 crc kubenswrapper[4946]: I1128 10:03:27.720208 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vftrt" Nov 28 10:03:27 crc kubenswrapper[4946]: I1128 10:03:27.849902 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc199ece-4792-42a8-8f4a-adcd3e9f81cd-catalog-content\") pod \"fc199ece-4792-42a8-8f4a-adcd3e9f81cd\" (UID: \"fc199ece-4792-42a8-8f4a-adcd3e9f81cd\") " Nov 28 10:03:27 crc kubenswrapper[4946]: I1128 10:03:27.849976 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqbpm\" (UniqueName: \"kubernetes.io/projected/fc199ece-4792-42a8-8f4a-adcd3e9f81cd-kube-api-access-xqbpm\") pod \"fc199ece-4792-42a8-8f4a-adcd3e9f81cd\" (UID: \"fc199ece-4792-42a8-8f4a-adcd3e9f81cd\") " Nov 28 10:03:27 crc kubenswrapper[4946]: I1128 10:03:27.850017 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc199ece-4792-42a8-8f4a-adcd3e9f81cd-utilities\") pod \"fc199ece-4792-42a8-8f4a-adcd3e9f81cd\" (UID: \"fc199ece-4792-42a8-8f4a-adcd3e9f81cd\") " Nov 28 10:03:27 crc kubenswrapper[4946]: I1128 10:03:27.853684 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc199ece-4792-42a8-8f4a-adcd3e9f81cd-utilities" (OuterVolumeSpecName: "utilities") pod "fc199ece-4792-42a8-8f4a-adcd3e9f81cd" (UID: "fc199ece-4792-42a8-8f4a-adcd3e9f81cd"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 10:03:27 crc kubenswrapper[4946]: I1128 10:03:27.861249 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc199ece-4792-42a8-8f4a-adcd3e9f81cd-kube-api-access-xqbpm" (OuterVolumeSpecName: "kube-api-access-xqbpm") pod "fc199ece-4792-42a8-8f4a-adcd3e9f81cd" (UID: "fc199ece-4792-42a8-8f4a-adcd3e9f81cd"). InnerVolumeSpecName "kube-api-access-xqbpm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 10:03:27 crc kubenswrapper[4946]: I1128 10:03:27.911991 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc199ece-4792-42a8-8f4a-adcd3e9f81cd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fc199ece-4792-42a8-8f4a-adcd3e9f81cd" (UID: "fc199ece-4792-42a8-8f4a-adcd3e9f81cd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 10:03:27 crc kubenswrapper[4946]: I1128 10:03:27.952919 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc199ece-4792-42a8-8f4a-adcd3e9f81cd-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 10:03:27 crc kubenswrapper[4946]: I1128 10:03:27.953165 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc199ece-4792-42a8-8f4a-adcd3e9f81cd-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 10:03:27 crc kubenswrapper[4946]: I1128 10:03:27.953242 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqbpm\" (UniqueName: \"kubernetes.io/projected/fc199ece-4792-42a8-8f4a-adcd3e9f81cd-kube-api-access-xqbpm\") on node \"crc\" DevicePath \"\"" Nov 28 10:03:28 crc kubenswrapper[4946]: I1128 10:03:28.074992 4946 generic.go:334] "Generic (PLEG): container finished" podID="fc199ece-4792-42a8-8f4a-adcd3e9f81cd" containerID="b1b4ae5ae3357cef2eb8d79069d754fb1180b6257a3c8a9d46861b008d454114" exitCode=0 Nov 28 10:03:28 crc kubenswrapper[4946]: I1128 10:03:28.075035 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vftrt" event={"ID":"fc199ece-4792-42a8-8f4a-adcd3e9f81cd","Type":"ContainerDied","Data":"b1b4ae5ae3357cef2eb8d79069d754fb1180b6257a3c8a9d46861b008d454114"} Nov 28 10:03:28 crc kubenswrapper[4946]: I1128 10:03:28.075061 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vftrt" event={"ID":"fc199ece-4792-42a8-8f4a-adcd3e9f81cd","Type":"ContainerDied","Data":"52622abf3683004237e087d2c46529dc3b5f985d0dc515df35aac037ad5f8ee8"} Nov 28 10:03:28 crc kubenswrapper[4946]: I1128 10:03:28.075078 4946 scope.go:117] "RemoveContainer" containerID="b1b4ae5ae3357cef2eb8d79069d754fb1180b6257a3c8a9d46861b008d454114" Nov 28 10:03:28 crc kubenswrapper[4946]: I1128 10:03:28.075212 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vftrt" Nov 28 10:03:28 crc kubenswrapper[4946]: I1128 10:03:28.107649 4946 scope.go:117] "RemoveContainer" containerID="7a52f0b51297b0546a1aeff7fd5fefbc18f03019de1911c9fe4da20773d262fc" Nov 28 10:03:28 crc kubenswrapper[4946]: I1128 10:03:28.113104 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vftrt"] Nov 28 10:03:28 crc kubenswrapper[4946]: I1128 10:03:28.186666 4946 scope.go:117] "RemoveContainer" containerID="36a91a00ae5d48225ca20a5176c47e633eae074cb782242b0a13fe86fd0f66b0" Nov 28 10:03:28 crc kubenswrapper[4946]: I1128 10:03:28.189673 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vftrt"] Nov 28 10:03:28 crc kubenswrapper[4946]: I1128 10:03:28.244834 4946 scope.go:117] "RemoveContainer" containerID="b1b4ae5ae3357cef2eb8d79069d754fb1180b6257a3c8a9d46861b008d454114" Nov 28 10:03:28 crc kubenswrapper[4946]: E1128 10:03:28.245187 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1b4ae5ae3357cef2eb8d79069d754fb1180b6257a3c8a9d46861b008d454114\": container with ID starting with b1b4ae5ae3357cef2eb8d79069d754fb1180b6257a3c8a9d46861b008d454114 not found: ID does not exist" containerID="b1b4ae5ae3357cef2eb8d79069d754fb1180b6257a3c8a9d46861b008d454114" Nov 28 10:03:28 crc kubenswrapper[4946]: I1128 10:03:28.245218 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1b4ae5ae3357cef2eb8d79069d754fb1180b6257a3c8a9d46861b008d454114"} err="failed to get container status \"b1b4ae5ae3357cef2eb8d79069d754fb1180b6257a3c8a9d46861b008d454114\": rpc error: code = NotFound desc = could not find container \"b1b4ae5ae3357cef2eb8d79069d754fb1180b6257a3c8a9d46861b008d454114\": container with ID starting with b1b4ae5ae3357cef2eb8d79069d754fb1180b6257a3c8a9d46861b008d454114 not found: ID does not exist" Nov 28 10:03:28 crc kubenswrapper[4946]: I1128 10:03:28.245236 4946 scope.go:117] "RemoveContainer" containerID="7a52f0b51297b0546a1aeff7fd5fefbc18f03019de1911c9fe4da20773d262fc" Nov 28 10:03:28 crc kubenswrapper[4946]: E1128 10:03:28.249208 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a52f0b51297b0546a1aeff7fd5fefbc18f03019de1911c9fe4da20773d262fc\": container with ID starting with 7a52f0b51297b0546a1aeff7fd5fefbc18f03019de1911c9fe4da20773d262fc not found: ID does not exist" containerID="7a52f0b51297b0546a1aeff7fd5fefbc18f03019de1911c9fe4da20773d262fc" Nov 28 10:03:28 crc kubenswrapper[4946]: I1128 10:03:28.249245 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a52f0b51297b0546a1aeff7fd5fefbc18f03019de1911c9fe4da20773d262fc"} err="failed to get container status \"7a52f0b51297b0546a1aeff7fd5fefbc18f03019de1911c9fe4da20773d262fc\": rpc error: code = NotFound desc = could not find container \"7a52f0b51297b0546a1aeff7fd5fefbc18f03019de1911c9fe4da20773d262fc\": container with ID starting with 7a52f0b51297b0546a1aeff7fd5fefbc18f03019de1911c9fe4da20773d262fc not found: ID does not exist" Nov 28 10:03:28 crc kubenswrapper[4946]: I1128 10:03:28.249269 4946 scope.go:117] "RemoveContainer" containerID="36a91a00ae5d48225ca20a5176c47e633eae074cb782242b0a13fe86fd0f66b0" Nov 28 10:03:28 crc kubenswrapper[4946]: E1128 10:03:28.252833 4946 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"36a91a00ae5d48225ca20a5176c47e633eae074cb782242b0a13fe86fd0f66b0\": container with ID starting with 36a91a00ae5d48225ca20a5176c47e633eae074cb782242b0a13fe86fd0f66b0 not found: ID does not exist" containerID="36a91a00ae5d48225ca20a5176c47e633eae074cb782242b0a13fe86fd0f66b0" Nov 28 10:03:28 crc kubenswrapper[4946]: I1128 10:03:28.252862 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36a91a00ae5d48225ca20a5176c47e633eae074cb782242b0a13fe86fd0f66b0"} err="failed to get container status \"36a91a00ae5d48225ca20a5176c47e633eae074cb782242b0a13fe86fd0f66b0\": rpc error: code = NotFound desc = could not find container \"36a91a00ae5d48225ca20a5176c47e633eae074cb782242b0a13fe86fd0f66b0\": container with ID starting with 36a91a00ae5d48225ca20a5176c47e633eae074cb782242b0a13fe86fd0f66b0 not found: ID does not exist" Nov 28 10:03:30 crc kubenswrapper[4946]: I1128 10:03:30.004385 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc199ece-4792-42a8-8f4a-adcd3e9f81cd" path="/var/lib/kubelet/pods/fc199ece-4792-42a8-8f4a-adcd3e9f81cd/volumes" Nov 28 10:03:54 crc kubenswrapper[4946]: I1128 10:03:54.731022 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 10:03:54 crc kubenswrapper[4946]: I1128 10:03:54.731591 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 10:04:24 crc kubenswrapper[4946]: I1128 10:04:24.731363 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 10:04:24 crc kubenswrapper[4946]: I1128 10:04:24.732284 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 10:04:24 crc kubenswrapper[4946]: I1128 10:04:24.732415 4946 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" Nov 28 10:04:24 crc kubenswrapper[4946]: I1128 10:04:24.733344 4946 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c1affc8b427685239fe541e69eac3ff653f1f5e77fbd18f4ae8409b7412b67d1"} pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 10:04:24 crc kubenswrapper[4946]: I1128 10:04:24.733405 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" 
podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" containerID="cri-o://c1affc8b427685239fe541e69eac3ff653f1f5e77fbd18f4ae8409b7412b67d1" gracePeriod=600 Nov 28 10:04:25 crc kubenswrapper[4946]: I1128 10:04:25.660107 4946 generic.go:334] "Generic (PLEG): container finished" podID="7450befc-262f-45d1-a5f4-f445e540185b" containerID="c1affc8b427685239fe541e69eac3ff653f1f5e77fbd18f4ae8409b7412b67d1" exitCode=0 Nov 28 10:04:25 crc kubenswrapper[4946]: I1128 10:04:25.660211 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerDied","Data":"c1affc8b427685239fe541e69eac3ff653f1f5e77fbd18f4ae8409b7412b67d1"} Nov 28 10:04:25 crc kubenswrapper[4946]: I1128 10:04:25.660572 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerStarted","Data":"f8d1b92f8b1f52e3ae14a53874f6bb68cdfa85ec1d4f802788d99ca25a4c54a9"} Nov 28 10:04:25 crc kubenswrapper[4946]: I1128 10:04:25.660592 4946 scope.go:117] "RemoveContainer" containerID="ccd8a815e9e04d19954df00f8767b62b750ccc8573bf748b49fd656899ed7700" Nov 28 10:06:52 crc kubenswrapper[4946]: I1128 10:06:52.388328 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vxszb"] Nov 28 10:06:52 crc kubenswrapper[4946]: E1128 10:06:52.390641 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc199ece-4792-42a8-8f4a-adcd3e9f81cd" containerName="extract-utilities" Nov 28 10:06:52 crc kubenswrapper[4946]: I1128 10:06:52.390664 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc199ece-4792-42a8-8f4a-adcd3e9f81cd" containerName="extract-utilities" Nov 28 10:06:52 crc kubenswrapper[4946]: E1128 10:06:52.390689 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc199ece-4792-42a8-8f4a-adcd3e9f81cd" containerName="registry-server" Nov 28 10:06:52 crc kubenswrapper[4946]: I1128 10:06:52.390719 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc199ece-4792-42a8-8f4a-adcd3e9f81cd" containerName="registry-server" Nov 28 10:06:52 crc kubenswrapper[4946]: E1128 10:06:52.390736 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc199ece-4792-42a8-8f4a-adcd3e9f81cd" containerName="extract-content" Nov 28 10:06:52 crc kubenswrapper[4946]: I1128 10:06:52.390743 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc199ece-4792-42a8-8f4a-adcd3e9f81cd" containerName="extract-content" Nov 28 10:06:52 crc kubenswrapper[4946]: I1128 10:06:52.391380 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc199ece-4792-42a8-8f4a-adcd3e9f81cd" containerName="registry-server" Nov 28 10:06:52 crc kubenswrapper[4946]: I1128 10:06:52.396825 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vxszb" Nov 28 10:06:52 crc kubenswrapper[4946]: I1128 10:06:52.446160 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vxszb"] Nov 28 10:06:52 crc kubenswrapper[4946]: I1128 10:06:52.554852 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0869dfb0-405e-46a7-ab67-083695ce298f-catalog-content\") pod \"certified-operators-vxszb\" (UID: \"0869dfb0-405e-46a7-ab67-083695ce298f\") " pod="openshift-marketplace/certified-operators-vxszb" Nov 28 10:06:52 crc kubenswrapper[4946]: I1128 10:06:52.555219 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0869dfb0-405e-46a7-ab67-083695ce298f-utilities\") pod \"certified-operators-vxszb\" (UID: \"0869dfb0-405e-46a7-ab67-083695ce298f\") " pod="openshift-marketplace/certified-operators-vxszb" Nov 28 10:06:52 crc kubenswrapper[4946]: I1128 10:06:52.555353 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq6h8\" (UniqueName: \"kubernetes.io/projected/0869dfb0-405e-46a7-ab67-083695ce298f-kube-api-access-hq6h8\") pod \"certified-operators-vxszb\" (UID: \"0869dfb0-405e-46a7-ab67-083695ce298f\") " pod="openshift-marketplace/certified-operators-vxszb" Nov 28 10:06:52 crc kubenswrapper[4946]: I1128 10:06:52.656814 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0869dfb0-405e-46a7-ab67-083695ce298f-catalog-content\") pod \"certified-operators-vxszb\" (UID: \"0869dfb0-405e-46a7-ab67-083695ce298f\") " pod="openshift-marketplace/certified-operators-vxszb" Nov 28 10:06:52 crc kubenswrapper[4946]: I1128 10:06:52.656880 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0869dfb0-405e-46a7-ab67-083695ce298f-utilities\") pod \"certified-operators-vxszb\" (UID: \"0869dfb0-405e-46a7-ab67-083695ce298f\") " pod="openshift-marketplace/certified-operators-vxszb" Nov 28 10:06:52 crc kubenswrapper[4946]: I1128 10:06:52.657055 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hq6h8\" (UniqueName: \"kubernetes.io/projected/0869dfb0-405e-46a7-ab67-083695ce298f-kube-api-access-hq6h8\") pod \"certified-operators-vxszb\" (UID: \"0869dfb0-405e-46a7-ab67-083695ce298f\") " pod="openshift-marketplace/certified-operators-vxszb" Nov 28 10:06:52 crc kubenswrapper[4946]: I1128 10:06:52.657353 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0869dfb0-405e-46a7-ab67-083695ce298f-catalog-content\") pod \"certified-operators-vxszb\" (UID: \"0869dfb0-405e-46a7-ab67-083695ce298f\") " pod="openshift-marketplace/certified-operators-vxszb" Nov 28 10:06:52 crc kubenswrapper[4946]: I1128 10:06:52.657487 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0869dfb0-405e-46a7-ab67-083695ce298f-utilities\") pod \"certified-operators-vxszb\" (UID: \"0869dfb0-405e-46a7-ab67-083695ce298f\") " pod="openshift-marketplace/certified-operators-vxszb" Nov 28 10:06:52 crc kubenswrapper[4946]: I1128 10:06:52.687429 4946 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-hq6h8\" (UniqueName: \"kubernetes.io/projected/0869dfb0-405e-46a7-ab67-083695ce298f-kube-api-access-hq6h8\") pod \"certified-operators-vxszb\" (UID: \"0869dfb0-405e-46a7-ab67-083695ce298f\") " pod="openshift-marketplace/certified-operators-vxszb" Nov 28 10:06:52 crc kubenswrapper[4946]: I1128 10:06:52.738636 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vxszb" Nov 28 10:06:53 crc kubenswrapper[4946]: I1128 10:06:53.440872 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vxszb"] Nov 28 10:06:54 crc kubenswrapper[4946]: I1128 10:06:54.329327 4946 generic.go:334] "Generic (PLEG): container finished" podID="0869dfb0-405e-46a7-ab67-083695ce298f" containerID="c30f34417ba0b23b43eef19f8605bece310c0d090031fb01084b102c5a97b376" exitCode=0 Nov 28 10:06:54 crc kubenswrapper[4946]: I1128 10:06:54.329388 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vxszb" event={"ID":"0869dfb0-405e-46a7-ab67-083695ce298f","Type":"ContainerDied","Data":"c30f34417ba0b23b43eef19f8605bece310c0d090031fb01084b102c5a97b376"} Nov 28 10:06:54 crc kubenswrapper[4946]: I1128 10:06:54.329971 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vxszb" event={"ID":"0869dfb0-405e-46a7-ab67-083695ce298f","Type":"ContainerStarted","Data":"723a8758ed67d1860e11b674e821e7d522930e18cee0d70ad57841bb0b582b8f"} Nov 28 10:06:54 crc kubenswrapper[4946]: I1128 10:06:54.331680 4946 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 10:06:54 crc kubenswrapper[4946]: I1128 10:06:54.730938 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 10:06:54 crc kubenswrapper[4946]: I1128 10:06:54.731000 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 10:06:56 crc kubenswrapper[4946]: I1128 10:06:56.352138 4946 generic.go:334] "Generic (PLEG): container finished" podID="0869dfb0-405e-46a7-ab67-083695ce298f" containerID="fe68cd9c2c37a7315d8f6d6584201d5ebd654357fb4bc04b4e256c29afafaabb" exitCode=0 Nov 28 10:06:56 crc kubenswrapper[4946]: I1128 10:06:56.352194 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vxszb" event={"ID":"0869dfb0-405e-46a7-ab67-083695ce298f","Type":"ContainerDied","Data":"fe68cd9c2c37a7315d8f6d6584201d5ebd654357fb4bc04b4e256c29afafaabb"} Nov 28 10:06:59 crc kubenswrapper[4946]: I1128 10:06:59.399123 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vxszb" event={"ID":"0869dfb0-405e-46a7-ab67-083695ce298f","Type":"ContainerStarted","Data":"f66fff2514c18018923bb1ed0a18d2f2f6c2c2155e18774e9a5c0baccd26a241"} Nov 28 10:06:59 crc kubenswrapper[4946]: I1128 10:06:59.428238 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-vxszb" podStartSLOduration=3.500980975 podStartE2EDuration="7.428215622s" podCreationTimestamp="2025-11-28 10:06:52 +0000 UTC" firstStartedPulling="2025-11-28 10:06:54.331422952 +0000 UTC m=+11668.709488063" lastFinishedPulling="2025-11-28 10:06:58.258657559 +0000 UTC m=+11672.636722710" observedRunningTime="2025-11-28 10:06:59.417521068 +0000 UTC m=+11673.795586209" watchObservedRunningTime="2025-11-28 10:06:59.428215622 +0000 UTC m=+11673.806280733" Nov 28 10:07:02 crc kubenswrapper[4946]: I1128 10:07:02.738770 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vxszb" Nov 28 10:07:02 crc kubenswrapper[4946]: I1128 10:07:02.739426 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vxszb" Nov 28 10:07:02 crc kubenswrapper[4946]: I1128 10:07:02.808328 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vxszb" Nov 28 10:07:03 crc kubenswrapper[4946]: I1128 10:07:03.522544 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vxszb" Nov 28 10:07:03 crc kubenswrapper[4946]: I1128 10:07:03.606738 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vxszb"] Nov 28 10:07:05 crc kubenswrapper[4946]: I1128 10:07:05.472895 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vxszb" podUID="0869dfb0-405e-46a7-ab67-083695ce298f" containerName="registry-server" containerID="cri-o://f66fff2514c18018923bb1ed0a18d2f2f6c2c2155e18774e9a5c0baccd26a241" gracePeriod=2 Nov 28 10:07:06 crc kubenswrapper[4946]: I1128 10:07:06.389434 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vxszb" Nov 28 10:07:06 crc kubenswrapper[4946]: I1128 10:07:06.482877 4946 generic.go:334] "Generic (PLEG): container finished" podID="0869dfb0-405e-46a7-ab67-083695ce298f" containerID="f66fff2514c18018923bb1ed0a18d2f2f6c2c2155e18774e9a5c0baccd26a241" exitCode=0 Nov 28 10:07:06 crc kubenswrapper[4946]: I1128 10:07:06.482918 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vxszb" event={"ID":"0869dfb0-405e-46a7-ab67-083695ce298f","Type":"ContainerDied","Data":"f66fff2514c18018923bb1ed0a18d2f2f6c2c2155e18774e9a5c0baccd26a241"} Nov 28 10:07:06 crc kubenswrapper[4946]: I1128 10:07:06.482942 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vxszb" Nov 28 10:07:06 crc kubenswrapper[4946]: I1128 10:07:06.482949 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vxszb" event={"ID":"0869dfb0-405e-46a7-ab67-083695ce298f","Type":"ContainerDied","Data":"723a8758ed67d1860e11b674e821e7d522930e18cee0d70ad57841bb0b582b8f"} Nov 28 10:07:06 crc kubenswrapper[4946]: I1128 10:07:06.482970 4946 scope.go:117] "RemoveContainer" containerID="f66fff2514c18018923bb1ed0a18d2f2f6c2c2155e18774e9a5c0baccd26a241" Nov 28 10:07:06 crc kubenswrapper[4946]: I1128 10:07:06.502481 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0869dfb0-405e-46a7-ab67-083695ce298f-utilities\") pod \"0869dfb0-405e-46a7-ab67-083695ce298f\" (UID: \"0869dfb0-405e-46a7-ab67-083695ce298f\") " Nov 28 10:07:06 crc kubenswrapper[4946]: I1128 10:07:06.502579 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hq6h8\" (UniqueName: \"kubernetes.io/projected/0869dfb0-405e-46a7-ab67-083695ce298f-kube-api-access-hq6h8\") pod \"0869dfb0-405e-46a7-ab67-083695ce298f\" (UID: \"0869dfb0-405e-46a7-ab67-083695ce298f\") " Nov 28 10:07:06 crc kubenswrapper[4946]: I1128 10:07:06.502934 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0869dfb0-405e-46a7-ab67-083695ce298f-catalog-content\") pod \"0869dfb0-405e-46a7-ab67-083695ce298f\" (UID: \"0869dfb0-405e-46a7-ab67-083695ce298f\") " Nov 28 10:07:06 crc kubenswrapper[4946]: I1128 10:07:06.503296 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0869dfb0-405e-46a7-ab67-083695ce298f-utilities" (OuterVolumeSpecName: "utilities") pod "0869dfb0-405e-46a7-ab67-083695ce298f" (UID: "0869dfb0-405e-46a7-ab67-083695ce298f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 10:07:06 crc kubenswrapper[4946]: I1128 10:07:06.503663 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0869dfb0-405e-46a7-ab67-083695ce298f-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 10:07:06 crc kubenswrapper[4946]: I1128 10:07:06.508333 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0869dfb0-405e-46a7-ab67-083695ce298f-kube-api-access-hq6h8" (OuterVolumeSpecName: "kube-api-access-hq6h8") pod "0869dfb0-405e-46a7-ab67-083695ce298f" (UID: "0869dfb0-405e-46a7-ab67-083695ce298f"). InnerVolumeSpecName "kube-api-access-hq6h8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 10:07:06 crc kubenswrapper[4946]: I1128 10:07:06.515433 4946 scope.go:117] "RemoveContainer" containerID="fe68cd9c2c37a7315d8f6d6584201d5ebd654357fb4bc04b4e256c29afafaabb" Nov 28 10:07:06 crc kubenswrapper[4946]: I1128 10:07:06.577800 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0869dfb0-405e-46a7-ab67-083695ce298f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0869dfb0-405e-46a7-ab67-083695ce298f" (UID: "0869dfb0-405e-46a7-ab67-083695ce298f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 10:07:06 crc kubenswrapper[4946]: I1128 10:07:06.602808 4946 scope.go:117] "RemoveContainer" containerID="c30f34417ba0b23b43eef19f8605bece310c0d090031fb01084b102c5a97b376" Nov 28 10:07:06 crc kubenswrapper[4946]: I1128 10:07:06.605391 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0869dfb0-405e-46a7-ab67-083695ce298f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 10:07:06 crc kubenswrapper[4946]: I1128 10:07:06.605436 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hq6h8\" (UniqueName: \"kubernetes.io/projected/0869dfb0-405e-46a7-ab67-083695ce298f-kube-api-access-hq6h8\") on node \"crc\" DevicePath \"\"" Nov 28 10:07:06 crc kubenswrapper[4946]: I1128 10:07:06.631677 4946 scope.go:117] "RemoveContainer" containerID="f66fff2514c18018923bb1ed0a18d2f2f6c2c2155e18774e9a5c0baccd26a241" Nov 28 10:07:06 crc kubenswrapper[4946]: E1128 10:07:06.632164 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f66fff2514c18018923bb1ed0a18d2f2f6c2c2155e18774e9a5c0baccd26a241\": container with ID starting with f66fff2514c18018923bb1ed0a18d2f2f6c2c2155e18774e9a5c0baccd26a241 not found: ID does not exist" containerID="f66fff2514c18018923bb1ed0a18d2f2f6c2c2155e18774e9a5c0baccd26a241" Nov 28 10:07:06 crc kubenswrapper[4946]: I1128 10:07:06.632286 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f66fff2514c18018923bb1ed0a18d2f2f6c2c2155e18774e9a5c0baccd26a241"} err="failed to get container status \"f66fff2514c18018923bb1ed0a18d2f2f6c2c2155e18774e9a5c0baccd26a241\": rpc error: code = NotFound desc = could not find container \"f66fff2514c18018923bb1ed0a18d2f2f6c2c2155e18774e9a5c0baccd26a241\": container with ID starting with f66fff2514c18018923bb1ed0a18d2f2f6c2c2155e18774e9a5c0baccd26a241 not found: ID does not exist" Nov 28 10:07:06 crc kubenswrapper[4946]: I1128 10:07:06.632385 4946 scope.go:117] "RemoveContainer" containerID="fe68cd9c2c37a7315d8f6d6584201d5ebd654357fb4bc04b4e256c29afafaabb" Nov 28 10:07:06 crc kubenswrapper[4946]: E1128 10:07:06.632807 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe68cd9c2c37a7315d8f6d6584201d5ebd654357fb4bc04b4e256c29afafaabb\": container with ID starting with fe68cd9c2c37a7315d8f6d6584201d5ebd654357fb4bc04b4e256c29afafaabb not found: ID does not exist" containerID="fe68cd9c2c37a7315d8f6d6584201d5ebd654357fb4bc04b4e256c29afafaabb" Nov 28 10:07:06 crc kubenswrapper[4946]: I1128 10:07:06.632901 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe68cd9c2c37a7315d8f6d6584201d5ebd654357fb4bc04b4e256c29afafaabb"} err="failed to get container status \"fe68cd9c2c37a7315d8f6d6584201d5ebd654357fb4bc04b4e256c29afafaabb\": rpc error: code = NotFound desc = could not find container \"fe68cd9c2c37a7315d8f6d6584201d5ebd654357fb4bc04b4e256c29afafaabb\": container with ID starting with fe68cd9c2c37a7315d8f6d6584201d5ebd654357fb4bc04b4e256c29afafaabb not found: ID does not exist" Nov 28 10:07:06 crc kubenswrapper[4946]: I1128 10:07:06.632978 4946 scope.go:117] "RemoveContainer" containerID="c30f34417ba0b23b43eef19f8605bece310c0d090031fb01084b102c5a97b376" Nov 28 10:07:06 crc kubenswrapper[4946]: E1128 10:07:06.633687 4946 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"c30f34417ba0b23b43eef19f8605bece310c0d090031fb01084b102c5a97b376\": container with ID starting with c30f34417ba0b23b43eef19f8605bece310c0d090031fb01084b102c5a97b376 not found: ID does not exist" containerID="c30f34417ba0b23b43eef19f8605bece310c0d090031fb01084b102c5a97b376" Nov 28 10:07:06 crc kubenswrapper[4946]: I1128 10:07:06.633728 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c30f34417ba0b23b43eef19f8605bece310c0d090031fb01084b102c5a97b376"} err="failed to get container status \"c30f34417ba0b23b43eef19f8605bece310c0d090031fb01084b102c5a97b376\": rpc error: code = NotFound desc = could not find container \"c30f34417ba0b23b43eef19f8605bece310c0d090031fb01084b102c5a97b376\": container with ID starting with c30f34417ba0b23b43eef19f8605bece310c0d090031fb01084b102c5a97b376 not found: ID does not exist" Nov 28 10:07:06 crc kubenswrapper[4946]: I1128 10:07:06.820586 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vxszb"] Nov 28 10:07:06 crc kubenswrapper[4946]: I1128 10:07:06.832099 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vxszb"] Nov 28 10:07:08 crc kubenswrapper[4946]: I1128 10:07:08.004553 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0869dfb0-405e-46a7-ab67-083695ce298f" path="/var/lib/kubelet/pods/0869dfb0-405e-46a7-ab67-083695ce298f/volumes" Nov 28 10:07:24 crc kubenswrapper[4946]: I1128 10:07:24.730732 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 10:07:24 crc kubenswrapper[4946]: I1128 10:07:24.731497 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 10:07:54 crc kubenswrapper[4946]: I1128 10:07:54.731302 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 10:07:54 crc kubenswrapper[4946]: I1128 10:07:54.732007 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 10:07:54 crc kubenswrapper[4946]: I1128 10:07:54.732088 4946 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" Nov 28 10:07:54 crc kubenswrapper[4946]: I1128 10:07:54.732944 4946 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f8d1b92f8b1f52e3ae14a53874f6bb68cdfa85ec1d4f802788d99ca25a4c54a9"} 
pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 10:07:54 crc kubenswrapper[4946]: I1128 10:07:54.733002 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" containerID="cri-o://f8d1b92f8b1f52e3ae14a53874f6bb68cdfa85ec1d4f802788d99ca25a4c54a9" gracePeriod=600 Nov 28 10:07:54 crc kubenswrapper[4946]: E1128 10:07:54.870033 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 10:07:55 crc kubenswrapper[4946]: I1128 10:07:55.030002 4946 generic.go:334] "Generic (PLEG): container finished" podID="7450befc-262f-45d1-a5f4-f445e540185b" containerID="f8d1b92f8b1f52e3ae14a53874f6bb68cdfa85ec1d4f802788d99ca25a4c54a9" exitCode=0 Nov 28 10:07:55 crc kubenswrapper[4946]: I1128 10:07:55.030054 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerDied","Data":"f8d1b92f8b1f52e3ae14a53874f6bb68cdfa85ec1d4f802788d99ca25a4c54a9"} Nov 28 10:07:55 crc kubenswrapper[4946]: I1128 10:07:55.030094 4946 scope.go:117] "RemoveContainer" containerID="c1affc8b427685239fe541e69eac3ff653f1f5e77fbd18f4ae8409b7412b67d1" Nov 28 10:07:55 crc kubenswrapper[4946]: I1128 10:07:55.031405 4946 scope.go:117] "RemoveContainer" containerID="f8d1b92f8b1f52e3ae14a53874f6bb68cdfa85ec1d4f802788d99ca25a4c54a9" Nov 28 10:07:55 crc kubenswrapper[4946]: E1128 10:07:55.032232 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 10:08:06 crc kubenswrapper[4946]: I1128 10:08:06.990037 4946 scope.go:117] "RemoveContainer" containerID="f8d1b92f8b1f52e3ae14a53874f6bb68cdfa85ec1d4f802788d99ca25a4c54a9" Nov 28 10:08:06 crc kubenswrapper[4946]: E1128 10:08:06.990822 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 10:08:21 crc kubenswrapper[4946]: I1128 10:08:21.990304 4946 scope.go:117] "RemoveContainer" containerID="f8d1b92f8b1f52e3ae14a53874f6bb68cdfa85ec1d4f802788d99ca25a4c54a9" Nov 28 10:08:21 crc kubenswrapper[4946]: E1128 10:08:21.991810 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 10:08:32 crc kubenswrapper[4946]: I1128 10:08:32.989778 4946 scope.go:117] "RemoveContainer" containerID="f8d1b92f8b1f52e3ae14a53874f6bb68cdfa85ec1d4f802788d99ca25a4c54a9" Nov 28 10:08:32 crc kubenswrapper[4946]: E1128 10:08:32.990364 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 10:08:46 crc kubenswrapper[4946]: I1128 10:08:46.989922 4946 scope.go:117] "RemoveContainer" containerID="f8d1b92f8b1f52e3ae14a53874f6bb68cdfa85ec1d4f802788d99ca25a4c54a9" Nov 28 10:08:46 crc kubenswrapper[4946]: E1128 10:08:46.991082 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 10:08:57 crc kubenswrapper[4946]: I1128 10:08:57.990793 4946 scope.go:117] "RemoveContainer" containerID="f8d1b92f8b1f52e3ae14a53874f6bb68cdfa85ec1d4f802788d99ca25a4c54a9" Nov 28 10:08:57 crc kubenswrapper[4946]: E1128 10:08:57.991633 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 10:09:09 crc kubenswrapper[4946]: I1128 10:09:09.990241 4946 scope.go:117] "RemoveContainer" containerID="f8d1b92f8b1f52e3ae14a53874f6bb68cdfa85ec1d4f802788d99ca25a4c54a9" Nov 28 10:09:09 crc kubenswrapper[4946]: E1128 10:09:09.991192 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 10:09:24 crc kubenswrapper[4946]: I1128 10:09:24.990725 4946 scope.go:117] "RemoveContainer" containerID="f8d1b92f8b1f52e3ae14a53874f6bb68cdfa85ec1d4f802788d99ca25a4c54a9" Nov 28 10:09:24 crc kubenswrapper[4946]: E1128 10:09:24.991614 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 10:09:37 crc kubenswrapper[4946]: I1128 10:09:37.990736 4946 scope.go:117] "RemoveContainer" containerID="f8d1b92f8b1f52e3ae14a53874f6bb68cdfa85ec1d4f802788d99ca25a4c54a9" Nov 28 10:09:37 crc kubenswrapper[4946]: E1128 10:09:37.991363 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 10:09:49 crc kubenswrapper[4946]: I1128 10:09:49.990655 4946 scope.go:117] "RemoveContainer" containerID="f8d1b92f8b1f52e3ae14a53874f6bb68cdfa85ec1d4f802788d99ca25a4c54a9" Nov 28 10:09:49 crc kubenswrapper[4946]: E1128 10:09:49.991412 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 10:10:01 crc kubenswrapper[4946]: I1128 10:10:01.991695 4946 scope.go:117] "RemoveContainer" containerID="f8d1b92f8b1f52e3ae14a53874f6bb68cdfa85ec1d4f802788d99ca25a4c54a9" Nov 28 10:10:01 crc kubenswrapper[4946]: E1128 10:10:01.993096 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 10:10:13 crc kubenswrapper[4946]: I1128 10:10:13.989946 4946 scope.go:117] "RemoveContainer" containerID="f8d1b92f8b1f52e3ae14a53874f6bb68cdfa85ec1d4f802788d99ca25a4c54a9" Nov 28 10:10:13 crc kubenswrapper[4946]: E1128 10:10:13.990692 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 10:10:27 crc kubenswrapper[4946]: I1128 10:10:27.990644 4946 scope.go:117] "RemoveContainer" containerID="f8d1b92f8b1f52e3ae14a53874f6bb68cdfa85ec1d4f802788d99ca25a4c54a9" Nov 28 10:10:27 crc kubenswrapper[4946]: E1128 10:10:27.991515 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 10:10:39 crc kubenswrapper[4946]: I1128 10:10:39.990600 4946 
scope.go:117] "RemoveContainer" containerID="f8d1b92f8b1f52e3ae14a53874f6bb68cdfa85ec1d4f802788d99ca25a4c54a9" Nov 28 10:10:39 crc kubenswrapper[4946]: E1128 10:10:39.991488 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 10:10:53 crc kubenswrapper[4946]: I1128 10:10:53.990366 4946 scope.go:117] "RemoveContainer" containerID="f8d1b92f8b1f52e3ae14a53874f6bb68cdfa85ec1d4f802788d99ca25a4c54a9" Nov 28 10:10:53 crc kubenswrapper[4946]: E1128 10:10:53.991255 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 10:11:07 crc kubenswrapper[4946]: I1128 10:11:07.990610 4946 scope.go:117] "RemoveContainer" containerID="f8d1b92f8b1f52e3ae14a53874f6bb68cdfa85ec1d4f802788d99ca25a4c54a9" Nov 28 10:11:07 crc kubenswrapper[4946]: E1128 10:11:07.991878 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 10:11:18 crc kubenswrapper[4946]: I1128 10:11:18.990430 4946 scope.go:117] "RemoveContainer" containerID="f8d1b92f8b1f52e3ae14a53874f6bb68cdfa85ec1d4f802788d99ca25a4c54a9" Nov 28 10:11:18 crc kubenswrapper[4946]: E1128 10:11:18.991275 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 10:11:29 crc kubenswrapper[4946]: I1128 10:11:29.990556 4946 scope.go:117] "RemoveContainer" containerID="f8d1b92f8b1f52e3ae14a53874f6bb68cdfa85ec1d4f802788d99ca25a4c54a9" Nov 28 10:11:29 crc kubenswrapper[4946]: E1128 10:11:29.991254 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 10:11:40 crc kubenswrapper[4946]: I1128 10:11:40.989979 4946 scope.go:117] "RemoveContainer" containerID="f8d1b92f8b1f52e3ae14a53874f6bb68cdfa85ec1d4f802788d99ca25a4c54a9" Nov 28 10:11:40 crc kubenswrapper[4946]: E1128 10:11:40.990770 4946 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 10:11:49 crc kubenswrapper[4946]: I1128 10:11:49.379577 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9whvt"] Nov 28 10:11:49 crc kubenswrapper[4946]: E1128 10:11:49.381951 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0869dfb0-405e-46a7-ab67-083695ce298f" containerName="extract-content" Nov 28 10:11:49 crc kubenswrapper[4946]: I1128 10:11:49.381985 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="0869dfb0-405e-46a7-ab67-083695ce298f" containerName="extract-content" Nov 28 10:11:49 crc kubenswrapper[4946]: E1128 10:11:49.382054 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0869dfb0-405e-46a7-ab67-083695ce298f" containerName="extract-utilities" Nov 28 10:11:49 crc kubenswrapper[4946]: I1128 10:11:49.382068 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="0869dfb0-405e-46a7-ab67-083695ce298f" containerName="extract-utilities" Nov 28 10:11:49 crc kubenswrapper[4946]: E1128 10:11:49.382091 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0869dfb0-405e-46a7-ab67-083695ce298f" containerName="registry-server" Nov 28 10:11:49 crc kubenswrapper[4946]: I1128 10:11:49.382104 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="0869dfb0-405e-46a7-ab67-083695ce298f" containerName="registry-server" Nov 28 10:11:49 crc kubenswrapper[4946]: I1128 10:11:49.382637 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="0869dfb0-405e-46a7-ab67-083695ce298f" containerName="registry-server" Nov 28 10:11:49 crc kubenswrapper[4946]: I1128 10:11:49.386163 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9whvt" Nov 28 10:11:49 crc kubenswrapper[4946]: I1128 10:11:49.411515 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9whvt"] Nov 28 10:11:49 crc kubenswrapper[4946]: I1128 10:11:49.442626 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w27vl\" (UniqueName: \"kubernetes.io/projected/347f7978-e4b2-4473-8384-ee9be067a689-kube-api-access-w27vl\") pod \"redhat-marketplace-9whvt\" (UID: \"347f7978-e4b2-4473-8384-ee9be067a689\") " pod="openshift-marketplace/redhat-marketplace-9whvt" Nov 28 10:11:49 crc kubenswrapper[4946]: I1128 10:11:49.442735 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/347f7978-e4b2-4473-8384-ee9be067a689-catalog-content\") pod \"redhat-marketplace-9whvt\" (UID: \"347f7978-e4b2-4473-8384-ee9be067a689\") " pod="openshift-marketplace/redhat-marketplace-9whvt" Nov 28 10:11:49 crc kubenswrapper[4946]: I1128 10:11:49.443045 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/347f7978-e4b2-4473-8384-ee9be067a689-utilities\") pod \"redhat-marketplace-9whvt\" (UID: \"347f7978-e4b2-4473-8384-ee9be067a689\") " pod="openshift-marketplace/redhat-marketplace-9whvt" Nov 28 10:11:49 crc kubenswrapper[4946]: I1128 10:11:49.544808 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w27vl\" (UniqueName: \"kubernetes.io/projected/347f7978-e4b2-4473-8384-ee9be067a689-kube-api-access-w27vl\") pod \"redhat-marketplace-9whvt\" (UID: \"347f7978-e4b2-4473-8384-ee9be067a689\") " pod="openshift-marketplace/redhat-marketplace-9whvt" Nov 28 10:11:49 crc kubenswrapper[4946]: I1128 10:11:49.544961 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/347f7978-e4b2-4473-8384-ee9be067a689-catalog-content\") pod \"redhat-marketplace-9whvt\" (UID: \"347f7978-e4b2-4473-8384-ee9be067a689\") " pod="openshift-marketplace/redhat-marketplace-9whvt" Nov 28 10:11:49 crc kubenswrapper[4946]: I1128 10:11:49.545158 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/347f7978-e4b2-4473-8384-ee9be067a689-utilities\") pod \"redhat-marketplace-9whvt\" (UID: \"347f7978-e4b2-4473-8384-ee9be067a689\") " pod="openshift-marketplace/redhat-marketplace-9whvt" Nov 28 10:11:49 crc kubenswrapper[4946]: I1128 10:11:49.545657 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/347f7978-e4b2-4473-8384-ee9be067a689-catalog-content\") pod \"redhat-marketplace-9whvt\" (UID: \"347f7978-e4b2-4473-8384-ee9be067a689\") " pod="openshift-marketplace/redhat-marketplace-9whvt" Nov 28 10:11:49 crc kubenswrapper[4946]: I1128 10:11:49.545743 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/347f7978-e4b2-4473-8384-ee9be067a689-utilities\") pod \"redhat-marketplace-9whvt\" (UID: \"347f7978-e4b2-4473-8384-ee9be067a689\") " pod="openshift-marketplace/redhat-marketplace-9whvt" Nov 28 10:11:49 crc kubenswrapper[4946]: I1128 10:11:49.577967 4946 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-w27vl\" (UniqueName: \"kubernetes.io/projected/347f7978-e4b2-4473-8384-ee9be067a689-kube-api-access-w27vl\") pod \"redhat-marketplace-9whvt\" (UID: \"347f7978-e4b2-4473-8384-ee9be067a689\") " pod="openshift-marketplace/redhat-marketplace-9whvt" Nov 28 10:11:49 crc kubenswrapper[4946]: I1128 10:11:49.711703 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9whvt" Nov 28 10:11:50 crc kubenswrapper[4946]: I1128 10:11:50.258729 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9whvt"] Nov 28 10:11:50 crc kubenswrapper[4946]: I1128 10:11:50.615918 4946 generic.go:334] "Generic (PLEG): container finished" podID="347f7978-e4b2-4473-8384-ee9be067a689" containerID="3d3b686b08e7d72626bc43413d1fb75e77dca4b7a2126c5cf9007e4ddd755e08" exitCode=0 Nov 28 10:11:50 crc kubenswrapper[4946]: I1128 10:11:50.616017 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9whvt" event={"ID":"347f7978-e4b2-4473-8384-ee9be067a689","Type":"ContainerDied","Data":"3d3b686b08e7d72626bc43413d1fb75e77dca4b7a2126c5cf9007e4ddd755e08"} Nov 28 10:11:50 crc kubenswrapper[4946]: I1128 10:11:50.616244 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9whvt" event={"ID":"347f7978-e4b2-4473-8384-ee9be067a689","Type":"ContainerStarted","Data":"66eed3990f021747de00c8169114f9834b46d2a43e9b9cd4badba934a8484948"} Nov 28 10:11:51 crc kubenswrapper[4946]: I1128 10:11:51.628195 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9whvt" event={"ID":"347f7978-e4b2-4473-8384-ee9be067a689","Type":"ContainerStarted","Data":"7cf19b65dd799fef0884651c6ae9e0e5eb5ead30f4f9a85948ca51e43136a0fb"} Nov 28 10:11:52 crc kubenswrapper[4946]: I1128 10:11:52.640647 4946 generic.go:334] "Generic (PLEG): container finished" podID="347f7978-e4b2-4473-8384-ee9be067a689" containerID="7cf19b65dd799fef0884651c6ae9e0e5eb5ead30f4f9a85948ca51e43136a0fb" exitCode=0 Nov 28 10:11:52 crc kubenswrapper[4946]: I1128 10:11:52.640885 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9whvt" event={"ID":"347f7978-e4b2-4473-8384-ee9be067a689","Type":"ContainerDied","Data":"7cf19b65dd799fef0884651c6ae9e0e5eb5ead30f4f9a85948ca51e43136a0fb"} Nov 28 10:11:52 crc kubenswrapper[4946]: I1128 10:11:52.989604 4946 scope.go:117] "RemoveContainer" containerID="f8d1b92f8b1f52e3ae14a53874f6bb68cdfa85ec1d4f802788d99ca25a4c54a9" Nov 28 10:11:52 crc kubenswrapper[4946]: E1128 10:11:52.989846 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 10:11:53 crc kubenswrapper[4946]: I1128 10:11:53.652605 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9whvt" event={"ID":"347f7978-e4b2-4473-8384-ee9be067a689","Type":"ContainerStarted","Data":"9ae8c70110165cacacf7f31398861378a42a8f02a1162a4691b84086675f23ae"} Nov 28 10:11:53 crc kubenswrapper[4946]: I1128 10:11:53.681296 4946 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-marketplace/redhat-marketplace-9whvt" podStartSLOduration=2.22915747 podStartE2EDuration="4.68127882s" podCreationTimestamp="2025-11-28 10:11:49 +0000 UTC" firstStartedPulling="2025-11-28 10:11:50.617574285 +0000 UTC m=+11964.995639396" lastFinishedPulling="2025-11-28 10:11:53.069695635 +0000 UTC m=+11967.447760746" observedRunningTime="2025-11-28 10:11:53.670290778 +0000 UTC m=+11968.048355939" watchObservedRunningTime="2025-11-28 10:11:53.68127882 +0000 UTC m=+11968.059343931" Nov 28 10:11:59 crc kubenswrapper[4946]: I1128 10:11:59.712781 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9whvt" Nov 28 10:11:59 crc kubenswrapper[4946]: I1128 10:11:59.713251 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9whvt" Nov 28 10:11:59 crc kubenswrapper[4946]: I1128 10:11:59.776664 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9whvt" Nov 28 10:11:59 crc kubenswrapper[4946]: I1128 10:11:59.923128 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9whvt" Nov 28 10:12:00 crc kubenswrapper[4946]: I1128 10:12:00.018489 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9whvt"] Nov 28 10:12:01 crc kubenswrapper[4946]: I1128 10:12:01.872039 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9whvt" podUID="347f7978-e4b2-4473-8384-ee9be067a689" containerName="registry-server" containerID="cri-o://9ae8c70110165cacacf7f31398861378a42a8f02a1162a4691b84086675f23ae" gracePeriod=2 Nov 28 10:12:02 crc kubenswrapper[4946]: I1128 10:12:02.561488 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9whvt" Nov 28 10:12:02 crc kubenswrapper[4946]: I1128 10:12:02.680206 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/347f7978-e4b2-4473-8384-ee9be067a689-utilities\") pod \"347f7978-e4b2-4473-8384-ee9be067a689\" (UID: \"347f7978-e4b2-4473-8384-ee9be067a689\") " Nov 28 10:12:02 crc kubenswrapper[4946]: I1128 10:12:02.680298 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/347f7978-e4b2-4473-8384-ee9be067a689-catalog-content\") pod \"347f7978-e4b2-4473-8384-ee9be067a689\" (UID: \"347f7978-e4b2-4473-8384-ee9be067a689\") " Nov 28 10:12:02 crc kubenswrapper[4946]: I1128 10:12:02.680485 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w27vl\" (UniqueName: \"kubernetes.io/projected/347f7978-e4b2-4473-8384-ee9be067a689-kube-api-access-w27vl\") pod \"347f7978-e4b2-4473-8384-ee9be067a689\" (UID: \"347f7978-e4b2-4473-8384-ee9be067a689\") " Nov 28 10:12:02 crc kubenswrapper[4946]: I1128 10:12:02.681139 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/347f7978-e4b2-4473-8384-ee9be067a689-utilities" (OuterVolumeSpecName: "utilities") pod "347f7978-e4b2-4473-8384-ee9be067a689" (UID: "347f7978-e4b2-4473-8384-ee9be067a689"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 10:12:02 crc kubenswrapper[4946]: I1128 10:12:02.691751 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/347f7978-e4b2-4473-8384-ee9be067a689-kube-api-access-w27vl" (OuterVolumeSpecName: "kube-api-access-w27vl") pod "347f7978-e4b2-4473-8384-ee9be067a689" (UID: "347f7978-e4b2-4473-8384-ee9be067a689"). InnerVolumeSpecName "kube-api-access-w27vl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 10:12:02 crc kubenswrapper[4946]: I1128 10:12:02.707606 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/347f7978-e4b2-4473-8384-ee9be067a689-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "347f7978-e4b2-4473-8384-ee9be067a689" (UID: "347f7978-e4b2-4473-8384-ee9be067a689"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 10:12:02 crc kubenswrapper[4946]: I1128 10:12:02.782362 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/347f7978-e4b2-4473-8384-ee9be067a689-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 10:12:02 crc kubenswrapper[4946]: I1128 10:12:02.782401 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/347f7978-e4b2-4473-8384-ee9be067a689-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 10:12:02 crc kubenswrapper[4946]: I1128 10:12:02.782414 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w27vl\" (UniqueName: \"kubernetes.io/projected/347f7978-e4b2-4473-8384-ee9be067a689-kube-api-access-w27vl\") on node \"crc\" DevicePath \"\"" Nov 28 10:12:02 crc kubenswrapper[4946]: I1128 10:12:02.888362 4946 generic.go:334] "Generic (PLEG): container finished" podID="347f7978-e4b2-4473-8384-ee9be067a689" containerID="9ae8c70110165cacacf7f31398861378a42a8f02a1162a4691b84086675f23ae" exitCode=0 Nov 28 10:12:02 crc kubenswrapper[4946]: I1128 10:12:02.888433 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9whvt" event={"ID":"347f7978-e4b2-4473-8384-ee9be067a689","Type":"ContainerDied","Data":"9ae8c70110165cacacf7f31398861378a42a8f02a1162a4691b84086675f23ae"} Nov 28 10:12:02 crc kubenswrapper[4946]: I1128 10:12:02.888453 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9whvt" Nov 28 10:12:02 crc kubenswrapper[4946]: I1128 10:12:02.888511 4946 scope.go:117] "RemoveContainer" containerID="9ae8c70110165cacacf7f31398861378a42a8f02a1162a4691b84086675f23ae" Nov 28 10:12:02 crc kubenswrapper[4946]: I1128 10:12:02.888489 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9whvt" event={"ID":"347f7978-e4b2-4473-8384-ee9be067a689","Type":"ContainerDied","Data":"66eed3990f021747de00c8169114f9834b46d2a43e9b9cd4badba934a8484948"} Nov 28 10:12:02 crc kubenswrapper[4946]: I1128 10:12:02.928665 4946 scope.go:117] "RemoveContainer" containerID="7cf19b65dd799fef0884651c6ae9e0e5eb5ead30f4f9a85948ca51e43136a0fb" Nov 28 10:12:02 crc kubenswrapper[4946]: I1128 10:12:02.955088 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9whvt"] Nov 28 10:12:02 crc kubenswrapper[4946]: I1128 10:12:02.972446 4946 scope.go:117] "RemoveContainer" containerID="3d3b686b08e7d72626bc43413d1fb75e77dca4b7a2126c5cf9007e4ddd755e08" Nov 28 10:12:02 crc kubenswrapper[4946]: I1128 10:12:02.974059 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9whvt"] Nov 28 10:12:03 crc kubenswrapper[4946]: I1128 10:12:03.002080 4946 scope.go:117] "RemoveContainer" containerID="9ae8c70110165cacacf7f31398861378a42a8f02a1162a4691b84086675f23ae" Nov 28 10:12:03 crc kubenswrapper[4946]: E1128 10:12:03.002794 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ae8c70110165cacacf7f31398861378a42a8f02a1162a4691b84086675f23ae\": container with ID starting with 9ae8c70110165cacacf7f31398861378a42a8f02a1162a4691b84086675f23ae not found: ID does not exist" containerID="9ae8c70110165cacacf7f31398861378a42a8f02a1162a4691b84086675f23ae" Nov 28 10:12:03 crc kubenswrapper[4946]: I1128 10:12:03.002867 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ae8c70110165cacacf7f31398861378a42a8f02a1162a4691b84086675f23ae"} err="failed to get container status \"9ae8c70110165cacacf7f31398861378a42a8f02a1162a4691b84086675f23ae\": rpc error: code = NotFound desc = could not find container \"9ae8c70110165cacacf7f31398861378a42a8f02a1162a4691b84086675f23ae\": container with ID starting with 9ae8c70110165cacacf7f31398861378a42a8f02a1162a4691b84086675f23ae not found: ID does not exist" Nov 28 10:12:03 crc kubenswrapper[4946]: I1128 10:12:03.002889 4946 scope.go:117] "RemoveContainer" containerID="7cf19b65dd799fef0884651c6ae9e0e5eb5ead30f4f9a85948ca51e43136a0fb" Nov 28 10:12:03 crc kubenswrapper[4946]: E1128 10:12:03.003402 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cf19b65dd799fef0884651c6ae9e0e5eb5ead30f4f9a85948ca51e43136a0fb\": container with ID starting with 7cf19b65dd799fef0884651c6ae9e0e5eb5ead30f4f9a85948ca51e43136a0fb not found: ID does not exist" containerID="7cf19b65dd799fef0884651c6ae9e0e5eb5ead30f4f9a85948ca51e43136a0fb" Nov 28 10:12:03 crc kubenswrapper[4946]: I1128 10:12:03.003440 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cf19b65dd799fef0884651c6ae9e0e5eb5ead30f4f9a85948ca51e43136a0fb"} err="failed to get container status \"7cf19b65dd799fef0884651c6ae9e0e5eb5ead30f4f9a85948ca51e43136a0fb\": rpc error: code = NotFound desc = could not find 
container \"7cf19b65dd799fef0884651c6ae9e0e5eb5ead30f4f9a85948ca51e43136a0fb\": container with ID starting with 7cf19b65dd799fef0884651c6ae9e0e5eb5ead30f4f9a85948ca51e43136a0fb not found: ID does not exist" Nov 28 10:12:03 crc kubenswrapper[4946]: I1128 10:12:03.003483 4946 scope.go:117] "RemoveContainer" containerID="3d3b686b08e7d72626bc43413d1fb75e77dca4b7a2126c5cf9007e4ddd755e08" Nov 28 10:12:03 crc kubenswrapper[4946]: E1128 10:12:03.004080 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d3b686b08e7d72626bc43413d1fb75e77dca4b7a2126c5cf9007e4ddd755e08\": container with ID starting with 3d3b686b08e7d72626bc43413d1fb75e77dca4b7a2126c5cf9007e4ddd755e08 not found: ID does not exist" containerID="3d3b686b08e7d72626bc43413d1fb75e77dca4b7a2126c5cf9007e4ddd755e08" Nov 28 10:12:03 crc kubenswrapper[4946]: I1128 10:12:03.004098 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d3b686b08e7d72626bc43413d1fb75e77dca4b7a2126c5cf9007e4ddd755e08"} err="failed to get container status \"3d3b686b08e7d72626bc43413d1fb75e77dca4b7a2126c5cf9007e4ddd755e08\": rpc error: code = NotFound desc = could not find container \"3d3b686b08e7d72626bc43413d1fb75e77dca4b7a2126c5cf9007e4ddd755e08\": container with ID starting with 3d3b686b08e7d72626bc43413d1fb75e77dca4b7a2126c5cf9007e4ddd755e08 not found: ID does not exist" Nov 28 10:12:04 crc kubenswrapper[4946]: I1128 10:12:04.002827 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="347f7978-e4b2-4473-8384-ee9be067a689" path="/var/lib/kubelet/pods/347f7978-e4b2-4473-8384-ee9be067a689/volumes" Nov 28 10:12:05 crc kubenswrapper[4946]: I1128 10:12:05.996767 4946 scope.go:117] "RemoveContainer" containerID="f8d1b92f8b1f52e3ae14a53874f6bb68cdfa85ec1d4f802788d99ca25a4c54a9" Nov 28 10:12:05 crc kubenswrapper[4946]: E1128 10:12:05.997129 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 10:12:19 crc kubenswrapper[4946]: I1128 10:12:19.990327 4946 scope.go:117] "RemoveContainer" containerID="f8d1b92f8b1f52e3ae14a53874f6bb68cdfa85ec1d4f802788d99ca25a4c54a9" Nov 28 10:12:19 crc kubenswrapper[4946]: E1128 10:12:19.991093 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 10:12:30 crc kubenswrapper[4946]: I1128 10:12:30.990419 4946 scope.go:117] "RemoveContainer" containerID="f8d1b92f8b1f52e3ae14a53874f6bb68cdfa85ec1d4f802788d99ca25a4c54a9" Nov 28 10:12:30 crc kubenswrapper[4946]: E1128 10:12:30.991781 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 10:12:44 crc kubenswrapper[4946]: I1128 10:12:44.990170 4946 scope.go:117] "RemoveContainer" containerID="f8d1b92f8b1f52e3ae14a53874f6bb68cdfa85ec1d4f802788d99ca25a4c54a9" Nov 28 10:12:44 crc kubenswrapper[4946]: E1128 10:12:44.990804 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 10:12:55 crc kubenswrapper[4946]: I1128 10:12:55.998563 4946 scope.go:117] "RemoveContainer" containerID="f8d1b92f8b1f52e3ae14a53874f6bb68cdfa85ec1d4f802788d99ca25a4c54a9" Nov 28 10:12:56 crc kubenswrapper[4946]: I1128 10:12:56.451036 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerStarted","Data":"ffde5dbc4752bb52de3ffee0d7f0302a34a211bf129849b6cf44262925feacbd"} Nov 28 10:12:58 crc kubenswrapper[4946]: I1128 10:12:58.626685 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7rsfr"] Nov 28 10:12:58 crc kubenswrapper[4946]: E1128 10:12:58.628951 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="347f7978-e4b2-4473-8384-ee9be067a689" containerName="extract-utilities" Nov 28 10:12:58 crc kubenswrapper[4946]: I1128 10:12:58.629042 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="347f7978-e4b2-4473-8384-ee9be067a689" containerName="extract-utilities" Nov 28 10:12:58 crc kubenswrapper[4946]: E1128 10:12:58.629165 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="347f7978-e4b2-4473-8384-ee9be067a689" containerName="registry-server" Nov 28 10:12:58 crc kubenswrapper[4946]: I1128 10:12:58.629251 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="347f7978-e4b2-4473-8384-ee9be067a689" containerName="registry-server" Nov 28 10:12:58 crc kubenswrapper[4946]: E1128 10:12:58.629342 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="347f7978-e4b2-4473-8384-ee9be067a689" containerName="extract-content" Nov 28 10:12:58 crc kubenswrapper[4946]: I1128 10:12:58.629405 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="347f7978-e4b2-4473-8384-ee9be067a689" containerName="extract-content" Nov 28 10:12:58 crc kubenswrapper[4946]: I1128 10:12:58.629687 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="347f7978-e4b2-4473-8384-ee9be067a689" containerName="registry-server" Nov 28 10:12:58 crc kubenswrapper[4946]: I1128 10:12:58.631217 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7rsfr" Nov 28 10:12:58 crc kubenswrapper[4946]: I1128 10:12:58.638960 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7rsfr"] Nov 28 10:12:58 crc kubenswrapper[4946]: I1128 10:12:58.725494 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtw7b\" (UniqueName: \"kubernetes.io/projected/0a4ab934-755d-45d5-a0f2-083e612faa8c-kube-api-access-gtw7b\") pod \"redhat-operators-7rsfr\" (UID: \"0a4ab934-755d-45d5-a0f2-083e612faa8c\") " pod="openshift-marketplace/redhat-operators-7rsfr" Nov 28 10:12:58 crc kubenswrapper[4946]: I1128 10:12:58.725573 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a4ab934-755d-45d5-a0f2-083e612faa8c-utilities\") pod \"redhat-operators-7rsfr\" (UID: \"0a4ab934-755d-45d5-a0f2-083e612faa8c\") " pod="openshift-marketplace/redhat-operators-7rsfr" Nov 28 10:12:58 crc kubenswrapper[4946]: I1128 10:12:58.725607 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a4ab934-755d-45d5-a0f2-083e612faa8c-catalog-content\") pod \"redhat-operators-7rsfr\" (UID: \"0a4ab934-755d-45d5-a0f2-083e612faa8c\") " pod="openshift-marketplace/redhat-operators-7rsfr" Nov 28 10:12:58 crc kubenswrapper[4946]: I1128 10:12:58.827877 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtw7b\" (UniqueName: \"kubernetes.io/projected/0a4ab934-755d-45d5-a0f2-083e612faa8c-kube-api-access-gtw7b\") pod \"redhat-operators-7rsfr\" (UID: \"0a4ab934-755d-45d5-a0f2-083e612faa8c\") " pod="openshift-marketplace/redhat-operators-7rsfr" Nov 28 10:12:58 crc kubenswrapper[4946]: I1128 10:12:58.828224 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a4ab934-755d-45d5-a0f2-083e612faa8c-utilities\") pod \"redhat-operators-7rsfr\" (UID: \"0a4ab934-755d-45d5-a0f2-083e612faa8c\") " pod="openshift-marketplace/redhat-operators-7rsfr" Nov 28 10:12:58 crc kubenswrapper[4946]: I1128 10:12:58.828361 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a4ab934-755d-45d5-a0f2-083e612faa8c-catalog-content\") pod \"redhat-operators-7rsfr\" (UID: \"0a4ab934-755d-45d5-a0f2-083e612faa8c\") " pod="openshift-marketplace/redhat-operators-7rsfr" Nov 28 10:12:58 crc kubenswrapper[4946]: I1128 10:12:58.828853 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a4ab934-755d-45d5-a0f2-083e612faa8c-utilities\") pod \"redhat-operators-7rsfr\" (UID: \"0a4ab934-755d-45d5-a0f2-083e612faa8c\") " pod="openshift-marketplace/redhat-operators-7rsfr" Nov 28 10:12:58 crc kubenswrapper[4946]: I1128 10:12:58.828874 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a4ab934-755d-45d5-a0f2-083e612faa8c-catalog-content\") pod \"redhat-operators-7rsfr\" (UID: \"0a4ab934-755d-45d5-a0f2-083e612faa8c\") " pod="openshift-marketplace/redhat-operators-7rsfr" Nov 28 10:12:58 crc kubenswrapper[4946]: I1128 10:12:58.851734 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-gtw7b\" (UniqueName: \"kubernetes.io/projected/0a4ab934-755d-45d5-a0f2-083e612faa8c-kube-api-access-gtw7b\") pod \"redhat-operators-7rsfr\" (UID: \"0a4ab934-755d-45d5-a0f2-083e612faa8c\") " pod="openshift-marketplace/redhat-operators-7rsfr" Nov 28 10:12:58 crc kubenswrapper[4946]: I1128 10:12:58.957719 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7rsfr" Nov 28 10:12:59 crc kubenswrapper[4946]: I1128 10:12:59.434287 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7rsfr"] Nov 28 10:12:59 crc kubenswrapper[4946]: W1128 10:12:59.454170 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a4ab934_755d_45d5_a0f2_083e612faa8c.slice/crio-5517ac498617c6f4a4831c632d6fd66378397504972310856ab111529ed9577b WatchSource:0}: Error finding container 5517ac498617c6f4a4831c632d6fd66378397504972310856ab111529ed9577b: Status 404 returned error can't find the container with id 5517ac498617c6f4a4831c632d6fd66378397504972310856ab111529ed9577b Nov 28 10:12:59 crc kubenswrapper[4946]: I1128 10:12:59.480544 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7rsfr" event={"ID":"0a4ab934-755d-45d5-a0f2-083e612faa8c","Type":"ContainerStarted","Data":"5517ac498617c6f4a4831c632d6fd66378397504972310856ab111529ed9577b"} Nov 28 10:13:00 crc kubenswrapper[4946]: I1128 10:13:00.492225 4946 generic.go:334] "Generic (PLEG): container finished" podID="0a4ab934-755d-45d5-a0f2-083e612faa8c" containerID="521b133b55afef6f55fe2d5880962ff51223de65bb004cff5a178c7026e5c4f8" exitCode=0 Nov 28 10:13:00 crc kubenswrapper[4946]: I1128 10:13:00.492590 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7rsfr" event={"ID":"0a4ab934-755d-45d5-a0f2-083e612faa8c","Type":"ContainerDied","Data":"521b133b55afef6f55fe2d5880962ff51223de65bb004cff5a178c7026e5c4f8"} Nov 28 10:13:00 crc kubenswrapper[4946]: I1128 10:13:00.494777 4946 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 10:13:02 crc kubenswrapper[4946]: I1128 10:13:02.515224 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7rsfr" event={"ID":"0a4ab934-755d-45d5-a0f2-083e612faa8c","Type":"ContainerStarted","Data":"9765b63c97c10620e067cc8ce5897ce90aeeccf2dfac76642b6163dd7517ad57"} Nov 28 10:13:04 crc kubenswrapper[4946]: I1128 10:13:04.543449 4946 generic.go:334] "Generic (PLEG): container finished" podID="0a4ab934-755d-45d5-a0f2-083e612faa8c" containerID="9765b63c97c10620e067cc8ce5897ce90aeeccf2dfac76642b6163dd7517ad57" exitCode=0 Nov 28 10:13:04 crc kubenswrapper[4946]: I1128 10:13:04.543559 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7rsfr" event={"ID":"0a4ab934-755d-45d5-a0f2-083e612faa8c","Type":"ContainerDied","Data":"9765b63c97c10620e067cc8ce5897ce90aeeccf2dfac76642b6163dd7517ad57"} Nov 28 10:13:06 crc kubenswrapper[4946]: I1128 10:13:06.562168 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7rsfr" event={"ID":"0a4ab934-755d-45d5-a0f2-083e612faa8c","Type":"ContainerStarted","Data":"056fe01688bf644d9871d70cb5d3c9c2e37c57a50bb1ce7828fda61ca094802f"} Nov 28 10:13:06 crc kubenswrapper[4946]: I1128 10:13:06.592676 4946 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-marketplace/redhat-operators-7rsfr" podStartSLOduration=3.81623893 podStartE2EDuration="8.592655852s" podCreationTimestamp="2025-11-28 10:12:58 +0000 UTC" firstStartedPulling="2025-11-28 10:13:00.494415744 +0000 UTC m=+12034.872480855" lastFinishedPulling="2025-11-28 10:13:05.270832656 +0000 UTC m=+12039.648897777" observedRunningTime="2025-11-28 10:13:06.580366348 +0000 UTC m=+12040.958431459" watchObservedRunningTime="2025-11-28 10:13:06.592655852 +0000 UTC m=+12040.970720963" Nov 28 10:13:08 crc kubenswrapper[4946]: I1128 10:13:08.958373 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7rsfr" Nov 28 10:13:08 crc kubenswrapper[4946]: I1128 10:13:08.959620 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7rsfr" Nov 28 10:13:10 crc kubenswrapper[4946]: I1128 10:13:10.007937 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7rsfr" podUID="0a4ab934-755d-45d5-a0f2-083e612faa8c" containerName="registry-server" probeResult="failure" output=< Nov 28 10:13:10 crc kubenswrapper[4946]: timeout: failed to connect service ":50051" within 1s Nov 28 10:13:10 crc kubenswrapper[4946]: > Nov 28 10:13:19 crc kubenswrapper[4946]: I1128 10:13:19.026570 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7rsfr" Nov 28 10:13:19 crc kubenswrapper[4946]: I1128 10:13:19.100700 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7rsfr" Nov 28 10:13:19 crc kubenswrapper[4946]: I1128 10:13:19.275333 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7rsfr"] Nov 28 10:13:20 crc kubenswrapper[4946]: I1128 10:13:20.714049 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7rsfr" podUID="0a4ab934-755d-45d5-a0f2-083e612faa8c" containerName="registry-server" containerID="cri-o://056fe01688bf644d9871d70cb5d3c9c2e37c57a50bb1ce7828fda61ca094802f" gracePeriod=2 Nov 28 10:13:21 crc kubenswrapper[4946]: I1128 10:13:21.303768 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7rsfr" Nov 28 10:13:21 crc kubenswrapper[4946]: I1128 10:13:21.408724 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a4ab934-755d-45d5-a0f2-083e612faa8c-utilities\") pod \"0a4ab934-755d-45d5-a0f2-083e612faa8c\" (UID: \"0a4ab934-755d-45d5-a0f2-083e612faa8c\") " Nov 28 10:13:21 crc kubenswrapper[4946]: I1128 10:13:21.408875 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a4ab934-755d-45d5-a0f2-083e612faa8c-catalog-content\") pod \"0a4ab934-755d-45d5-a0f2-083e612faa8c\" (UID: \"0a4ab934-755d-45d5-a0f2-083e612faa8c\") " Nov 28 10:13:21 crc kubenswrapper[4946]: I1128 10:13:21.408938 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtw7b\" (UniqueName: \"kubernetes.io/projected/0a4ab934-755d-45d5-a0f2-083e612faa8c-kube-api-access-gtw7b\") pod \"0a4ab934-755d-45d5-a0f2-083e612faa8c\" (UID: \"0a4ab934-755d-45d5-a0f2-083e612faa8c\") " Nov 28 10:13:21 crc kubenswrapper[4946]: I1128 10:13:21.409619 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a4ab934-755d-45d5-a0f2-083e612faa8c-utilities" (OuterVolumeSpecName: "utilities") pod "0a4ab934-755d-45d5-a0f2-083e612faa8c" (UID: "0a4ab934-755d-45d5-a0f2-083e612faa8c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 10:13:21 crc kubenswrapper[4946]: I1128 10:13:21.413992 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a4ab934-755d-45d5-a0f2-083e612faa8c-kube-api-access-gtw7b" (OuterVolumeSpecName: "kube-api-access-gtw7b") pod "0a4ab934-755d-45d5-a0f2-083e612faa8c" (UID: "0a4ab934-755d-45d5-a0f2-083e612faa8c"). InnerVolumeSpecName "kube-api-access-gtw7b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 10:13:21 crc kubenswrapper[4946]: I1128 10:13:21.511840 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a4ab934-755d-45d5-a0f2-083e612faa8c-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 10:13:21 crc kubenswrapper[4946]: I1128 10:13:21.511876 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtw7b\" (UniqueName: \"kubernetes.io/projected/0a4ab934-755d-45d5-a0f2-083e612faa8c-kube-api-access-gtw7b\") on node \"crc\" DevicePath \"\"" Nov 28 10:13:21 crc kubenswrapper[4946]: I1128 10:13:21.521859 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a4ab934-755d-45d5-a0f2-083e612faa8c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0a4ab934-755d-45d5-a0f2-083e612faa8c" (UID: "0a4ab934-755d-45d5-a0f2-083e612faa8c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 10:13:21 crc kubenswrapper[4946]: I1128 10:13:21.613281 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a4ab934-755d-45d5-a0f2-083e612faa8c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 10:13:21 crc kubenswrapper[4946]: I1128 10:13:21.727529 4946 generic.go:334] "Generic (PLEG): container finished" podID="0a4ab934-755d-45d5-a0f2-083e612faa8c" containerID="056fe01688bf644d9871d70cb5d3c9c2e37c57a50bb1ce7828fda61ca094802f" exitCode=0 Nov 28 10:13:21 crc kubenswrapper[4946]: I1128 10:13:21.727594 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7rsfr" event={"ID":"0a4ab934-755d-45d5-a0f2-083e612faa8c","Type":"ContainerDied","Data":"056fe01688bf644d9871d70cb5d3c9c2e37c57a50bb1ce7828fda61ca094802f"} Nov 28 10:13:21 crc kubenswrapper[4946]: I1128 10:13:21.727625 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7rsfr" event={"ID":"0a4ab934-755d-45d5-a0f2-083e612faa8c","Type":"ContainerDied","Data":"5517ac498617c6f4a4831c632d6fd66378397504972310856ab111529ed9577b"} Nov 28 10:13:21 crc kubenswrapper[4946]: I1128 10:13:21.727744 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7rsfr" Nov 28 10:13:21 crc kubenswrapper[4946]: I1128 10:13:21.727748 4946 scope.go:117] "RemoveContainer" containerID="056fe01688bf644d9871d70cb5d3c9c2e37c57a50bb1ce7828fda61ca094802f" Nov 28 10:13:21 crc kubenswrapper[4946]: I1128 10:13:21.764917 4946 scope.go:117] "RemoveContainer" containerID="9765b63c97c10620e067cc8ce5897ce90aeeccf2dfac76642b6163dd7517ad57" Nov 28 10:13:21 crc kubenswrapper[4946]: I1128 10:13:21.774613 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7rsfr"] Nov 28 10:13:21 crc kubenswrapper[4946]: I1128 10:13:21.787424 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7rsfr"] Nov 28 10:13:21 crc kubenswrapper[4946]: I1128 10:13:21.793378 4946 scope.go:117] "RemoveContainer" containerID="521b133b55afef6f55fe2d5880962ff51223de65bb004cff5a178c7026e5c4f8" Nov 28 10:13:21 crc kubenswrapper[4946]: I1128 10:13:21.850180 4946 scope.go:117] "RemoveContainer" containerID="056fe01688bf644d9871d70cb5d3c9c2e37c57a50bb1ce7828fda61ca094802f" Nov 28 10:13:21 crc kubenswrapper[4946]: E1128 10:13:21.853037 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"056fe01688bf644d9871d70cb5d3c9c2e37c57a50bb1ce7828fda61ca094802f\": container with ID starting with 056fe01688bf644d9871d70cb5d3c9c2e37c57a50bb1ce7828fda61ca094802f not found: ID does not exist" containerID="056fe01688bf644d9871d70cb5d3c9c2e37c57a50bb1ce7828fda61ca094802f" Nov 28 10:13:21 crc kubenswrapper[4946]: I1128 10:13:21.853071 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"056fe01688bf644d9871d70cb5d3c9c2e37c57a50bb1ce7828fda61ca094802f"} err="failed to get container status \"056fe01688bf644d9871d70cb5d3c9c2e37c57a50bb1ce7828fda61ca094802f\": rpc error: code = NotFound desc = could not find container \"056fe01688bf644d9871d70cb5d3c9c2e37c57a50bb1ce7828fda61ca094802f\": container with ID starting with 056fe01688bf644d9871d70cb5d3c9c2e37c57a50bb1ce7828fda61ca094802f not found: ID does not exist" Nov 28 10:13:21 crc 
kubenswrapper[4946]: I1128 10:13:21.853093 4946 scope.go:117] "RemoveContainer" containerID="9765b63c97c10620e067cc8ce5897ce90aeeccf2dfac76642b6163dd7517ad57" Nov 28 10:13:21 crc kubenswrapper[4946]: E1128 10:13:21.853844 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9765b63c97c10620e067cc8ce5897ce90aeeccf2dfac76642b6163dd7517ad57\": container with ID starting with 9765b63c97c10620e067cc8ce5897ce90aeeccf2dfac76642b6163dd7517ad57 not found: ID does not exist" containerID="9765b63c97c10620e067cc8ce5897ce90aeeccf2dfac76642b6163dd7517ad57" Nov 28 10:13:21 crc kubenswrapper[4946]: I1128 10:13:21.853899 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9765b63c97c10620e067cc8ce5897ce90aeeccf2dfac76642b6163dd7517ad57"} err="failed to get container status \"9765b63c97c10620e067cc8ce5897ce90aeeccf2dfac76642b6163dd7517ad57\": rpc error: code = NotFound desc = could not find container \"9765b63c97c10620e067cc8ce5897ce90aeeccf2dfac76642b6163dd7517ad57\": container with ID starting with 9765b63c97c10620e067cc8ce5897ce90aeeccf2dfac76642b6163dd7517ad57 not found: ID does not exist" Nov 28 10:13:21 crc kubenswrapper[4946]: I1128 10:13:21.853937 4946 scope.go:117] "RemoveContainer" containerID="521b133b55afef6f55fe2d5880962ff51223de65bb004cff5a178c7026e5c4f8" Nov 28 10:13:21 crc kubenswrapper[4946]: E1128 10:13:21.854427 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"521b133b55afef6f55fe2d5880962ff51223de65bb004cff5a178c7026e5c4f8\": container with ID starting with 521b133b55afef6f55fe2d5880962ff51223de65bb004cff5a178c7026e5c4f8 not found: ID does not exist" containerID="521b133b55afef6f55fe2d5880962ff51223de65bb004cff5a178c7026e5c4f8" Nov 28 10:13:21 crc kubenswrapper[4946]: I1128 10:13:21.854456 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"521b133b55afef6f55fe2d5880962ff51223de65bb004cff5a178c7026e5c4f8"} err="failed to get container status \"521b133b55afef6f55fe2d5880962ff51223de65bb004cff5a178c7026e5c4f8\": rpc error: code = NotFound desc = could not find container \"521b133b55afef6f55fe2d5880962ff51223de65bb004cff5a178c7026e5c4f8\": container with ID starting with 521b133b55afef6f55fe2d5880962ff51223de65bb004cff5a178c7026e5c4f8 not found: ID does not exist" Nov 28 10:13:21 crc kubenswrapper[4946]: I1128 10:13:21.885975 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8lmjl"] Nov 28 10:13:21 crc kubenswrapper[4946]: E1128 10:13:21.886546 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a4ab934-755d-45d5-a0f2-083e612faa8c" containerName="extract-content" Nov 28 10:13:21 crc kubenswrapper[4946]: I1128 10:13:21.886566 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a4ab934-755d-45d5-a0f2-083e612faa8c" containerName="extract-content" Nov 28 10:13:21 crc kubenswrapper[4946]: E1128 10:13:21.886585 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a4ab934-755d-45d5-a0f2-083e612faa8c" containerName="registry-server" Nov 28 10:13:21 crc kubenswrapper[4946]: I1128 10:13:21.886592 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a4ab934-755d-45d5-a0f2-083e612faa8c" containerName="registry-server" Nov 28 10:13:21 crc kubenswrapper[4946]: E1128 10:13:21.886621 4946 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0a4ab934-755d-45d5-a0f2-083e612faa8c" containerName="extract-utilities" Nov 28 10:13:21 crc kubenswrapper[4946]: I1128 10:13:21.886630 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a4ab934-755d-45d5-a0f2-083e612faa8c" containerName="extract-utilities" Nov 28 10:13:21 crc kubenswrapper[4946]: I1128 10:13:21.886871 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a4ab934-755d-45d5-a0f2-083e612faa8c" containerName="registry-server" Nov 28 10:13:21 crc kubenswrapper[4946]: I1128 10:13:21.890796 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8lmjl" Nov 28 10:13:21 crc kubenswrapper[4946]: I1128 10:13:21.900778 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8lmjl"] Nov 28 10:13:22 crc kubenswrapper[4946]: I1128 10:13:22.003182 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a4ab934-755d-45d5-a0f2-083e612faa8c" path="/var/lib/kubelet/pods/0a4ab934-755d-45d5-a0f2-083e612faa8c/volumes" Nov 28 10:13:22 crc kubenswrapper[4946]: I1128 10:13:22.030676 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16ef5f68-6a6e-4bb4-b379-c1246c255151-utilities\") pod \"community-operators-8lmjl\" (UID: \"16ef5f68-6a6e-4bb4-b379-c1246c255151\") " pod="openshift-marketplace/community-operators-8lmjl" Nov 28 10:13:22 crc kubenswrapper[4946]: I1128 10:13:22.032656 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16ef5f68-6a6e-4bb4-b379-c1246c255151-catalog-content\") pod \"community-operators-8lmjl\" (UID: \"16ef5f68-6a6e-4bb4-b379-c1246c255151\") " pod="openshift-marketplace/community-operators-8lmjl" Nov 28 10:13:22 crc kubenswrapper[4946]: I1128 10:13:22.032860 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98x7d\" (UniqueName: \"kubernetes.io/projected/16ef5f68-6a6e-4bb4-b379-c1246c255151-kube-api-access-98x7d\") pod \"community-operators-8lmjl\" (UID: \"16ef5f68-6a6e-4bb4-b379-c1246c255151\") " pod="openshift-marketplace/community-operators-8lmjl" Nov 28 10:13:22 crc kubenswrapper[4946]: I1128 10:13:22.134480 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98x7d\" (UniqueName: \"kubernetes.io/projected/16ef5f68-6a6e-4bb4-b379-c1246c255151-kube-api-access-98x7d\") pod \"community-operators-8lmjl\" (UID: \"16ef5f68-6a6e-4bb4-b379-c1246c255151\") " pod="openshift-marketplace/community-operators-8lmjl" Nov 28 10:13:22 crc kubenswrapper[4946]: I1128 10:13:22.134627 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16ef5f68-6a6e-4bb4-b379-c1246c255151-utilities\") pod \"community-operators-8lmjl\" (UID: \"16ef5f68-6a6e-4bb4-b379-c1246c255151\") " pod="openshift-marketplace/community-operators-8lmjl" Nov 28 10:13:22 crc kubenswrapper[4946]: I1128 10:13:22.134754 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16ef5f68-6a6e-4bb4-b379-c1246c255151-catalog-content\") pod \"community-operators-8lmjl\" (UID: \"16ef5f68-6a6e-4bb4-b379-c1246c255151\") " pod="openshift-marketplace/community-operators-8lmjl" Nov 28 10:13:22 crc 
kubenswrapper[4946]: I1128 10:13:22.135264 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16ef5f68-6a6e-4bb4-b379-c1246c255151-utilities\") pod \"community-operators-8lmjl\" (UID: \"16ef5f68-6a6e-4bb4-b379-c1246c255151\") " pod="openshift-marketplace/community-operators-8lmjl" Nov 28 10:13:22 crc kubenswrapper[4946]: I1128 10:13:22.135289 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16ef5f68-6a6e-4bb4-b379-c1246c255151-catalog-content\") pod \"community-operators-8lmjl\" (UID: \"16ef5f68-6a6e-4bb4-b379-c1246c255151\") " pod="openshift-marketplace/community-operators-8lmjl" Nov 28 10:13:22 crc kubenswrapper[4946]: I1128 10:13:22.170502 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98x7d\" (UniqueName: \"kubernetes.io/projected/16ef5f68-6a6e-4bb4-b379-c1246c255151-kube-api-access-98x7d\") pod \"community-operators-8lmjl\" (UID: \"16ef5f68-6a6e-4bb4-b379-c1246c255151\") " pod="openshift-marketplace/community-operators-8lmjl" Nov 28 10:13:22 crc kubenswrapper[4946]: I1128 10:13:22.207536 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8lmjl" Nov 28 10:13:22 crc kubenswrapper[4946]: I1128 10:13:22.769605 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8lmjl"] Nov 28 10:13:23 crc kubenswrapper[4946]: I1128 10:13:23.758598 4946 generic.go:334] "Generic (PLEG): container finished" podID="16ef5f68-6a6e-4bb4-b379-c1246c255151" containerID="ac93afc9c9ad7e7263afbe46b242b3d1bd7e4c78ec2861b58e28603ccfef6f7e" exitCode=0 Nov 28 10:13:23 crc kubenswrapper[4946]: I1128 10:13:23.758694 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8lmjl" event={"ID":"16ef5f68-6a6e-4bb4-b379-c1246c255151","Type":"ContainerDied","Data":"ac93afc9c9ad7e7263afbe46b242b3d1bd7e4c78ec2861b58e28603ccfef6f7e"} Nov 28 10:13:23 crc kubenswrapper[4946]: I1128 10:13:23.758933 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8lmjl" event={"ID":"16ef5f68-6a6e-4bb4-b379-c1246c255151","Type":"ContainerStarted","Data":"55bdf270e5169022973306e652c60f6145bacace41bf65202612557dbb51155a"} Nov 28 10:13:25 crc kubenswrapper[4946]: I1128 10:13:25.785108 4946 generic.go:334] "Generic (PLEG): container finished" podID="16ef5f68-6a6e-4bb4-b379-c1246c255151" containerID="99fca58c75ce42ee462ed96d82c80fc8b43bc1c623bdd21044f1755ed5d684f3" exitCode=0 Nov 28 10:13:25 crc kubenswrapper[4946]: I1128 10:13:25.785488 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8lmjl" event={"ID":"16ef5f68-6a6e-4bb4-b379-c1246c255151","Type":"ContainerDied","Data":"99fca58c75ce42ee462ed96d82c80fc8b43bc1c623bdd21044f1755ed5d684f3"} Nov 28 10:13:26 crc kubenswrapper[4946]: I1128 10:13:26.800747 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8lmjl" event={"ID":"16ef5f68-6a6e-4bb4-b379-c1246c255151","Type":"ContainerStarted","Data":"806745224bee16074a507b70178389a87041cd0878d91a4829ccde83671f4d84"} Nov 28 10:13:26 crc kubenswrapper[4946]: I1128 10:13:26.842155 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8lmjl" podStartSLOduration=3.192913331 
podStartE2EDuration="5.842128803s" podCreationTimestamp="2025-11-28 10:13:21 +0000 UTC" firstStartedPulling="2025-11-28 10:13:23.761278454 +0000 UTC m=+12058.139343575" lastFinishedPulling="2025-11-28 10:13:26.410493916 +0000 UTC m=+12060.788559047" observedRunningTime="2025-11-28 10:13:26.82422258 +0000 UTC m=+12061.202287751" watchObservedRunningTime="2025-11-28 10:13:26.842128803 +0000 UTC m=+12061.220193944" Nov 28 10:13:32 crc kubenswrapper[4946]: I1128 10:13:32.208397 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8lmjl" Nov 28 10:13:32 crc kubenswrapper[4946]: I1128 10:13:32.209278 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8lmjl" Nov 28 10:13:32 crc kubenswrapper[4946]: I1128 10:13:32.284175 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8lmjl" Nov 28 10:13:32 crc kubenswrapper[4946]: I1128 10:13:32.941875 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8lmjl" Nov 28 10:13:33 crc kubenswrapper[4946]: I1128 10:13:33.011208 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8lmjl"] Nov 28 10:13:34 crc kubenswrapper[4946]: I1128 10:13:34.907495 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8lmjl" podUID="16ef5f68-6a6e-4bb4-b379-c1246c255151" containerName="registry-server" containerID="cri-o://806745224bee16074a507b70178389a87041cd0878d91a4829ccde83671f4d84" gracePeriod=2 Nov 28 10:13:35 crc kubenswrapper[4946]: I1128 10:13:35.503175 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8lmjl" Nov 28 10:13:35 crc kubenswrapper[4946]: I1128 10:13:35.656580 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98x7d\" (UniqueName: \"kubernetes.io/projected/16ef5f68-6a6e-4bb4-b379-c1246c255151-kube-api-access-98x7d\") pod \"16ef5f68-6a6e-4bb4-b379-c1246c255151\" (UID: \"16ef5f68-6a6e-4bb4-b379-c1246c255151\") " Nov 28 10:13:35 crc kubenswrapper[4946]: I1128 10:13:35.656639 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16ef5f68-6a6e-4bb4-b379-c1246c255151-catalog-content\") pod \"16ef5f68-6a6e-4bb4-b379-c1246c255151\" (UID: \"16ef5f68-6a6e-4bb4-b379-c1246c255151\") " Nov 28 10:13:35 crc kubenswrapper[4946]: I1128 10:13:35.656748 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16ef5f68-6a6e-4bb4-b379-c1246c255151-utilities\") pod \"16ef5f68-6a6e-4bb4-b379-c1246c255151\" (UID: \"16ef5f68-6a6e-4bb4-b379-c1246c255151\") " Nov 28 10:13:35 crc kubenswrapper[4946]: I1128 10:13:35.658097 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16ef5f68-6a6e-4bb4-b379-c1246c255151-utilities" (OuterVolumeSpecName: "utilities") pod "16ef5f68-6a6e-4bb4-b379-c1246c255151" (UID: "16ef5f68-6a6e-4bb4-b379-c1246c255151"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 10:13:35 crc kubenswrapper[4946]: I1128 10:13:35.658554 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16ef5f68-6a6e-4bb4-b379-c1246c255151-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 10:13:35 crc kubenswrapper[4946]: I1128 10:13:35.664200 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16ef5f68-6a6e-4bb4-b379-c1246c255151-kube-api-access-98x7d" (OuterVolumeSpecName: "kube-api-access-98x7d") pod "16ef5f68-6a6e-4bb4-b379-c1246c255151" (UID: "16ef5f68-6a6e-4bb4-b379-c1246c255151"). InnerVolumeSpecName "kube-api-access-98x7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 10:13:35 crc kubenswrapper[4946]: I1128 10:13:35.724658 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16ef5f68-6a6e-4bb4-b379-c1246c255151-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "16ef5f68-6a6e-4bb4-b379-c1246c255151" (UID: "16ef5f68-6a6e-4bb4-b379-c1246c255151"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 10:13:35 crc kubenswrapper[4946]: I1128 10:13:35.760194 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98x7d\" (UniqueName: \"kubernetes.io/projected/16ef5f68-6a6e-4bb4-b379-c1246c255151-kube-api-access-98x7d\") on node \"crc\" DevicePath \"\"" Nov 28 10:13:35 crc kubenswrapper[4946]: I1128 10:13:35.760343 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16ef5f68-6a6e-4bb4-b379-c1246c255151-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 10:13:35 crc kubenswrapper[4946]: I1128 10:13:35.920640 4946 generic.go:334] "Generic (PLEG): container finished" podID="16ef5f68-6a6e-4bb4-b379-c1246c255151" containerID="806745224bee16074a507b70178389a87041cd0878d91a4829ccde83671f4d84" exitCode=0 Nov 28 10:13:35 crc kubenswrapper[4946]: I1128 10:13:35.920702 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8lmjl" event={"ID":"16ef5f68-6a6e-4bb4-b379-c1246c255151","Type":"ContainerDied","Data":"806745224bee16074a507b70178389a87041cd0878d91a4829ccde83671f4d84"} Nov 28 10:13:35 crc kubenswrapper[4946]: I1128 10:13:35.920729 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8lmjl" Nov 28 10:13:35 crc kubenswrapper[4946]: I1128 10:13:35.920751 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8lmjl" event={"ID":"16ef5f68-6a6e-4bb4-b379-c1246c255151","Type":"ContainerDied","Data":"55bdf270e5169022973306e652c60f6145bacace41bf65202612557dbb51155a"} Nov 28 10:13:35 crc kubenswrapper[4946]: I1128 10:13:35.920779 4946 scope.go:117] "RemoveContainer" containerID="806745224bee16074a507b70178389a87041cd0878d91a4829ccde83671f4d84" Nov 28 10:13:35 crc kubenswrapper[4946]: I1128 10:13:35.957083 4946 scope.go:117] "RemoveContainer" containerID="99fca58c75ce42ee462ed96d82c80fc8b43bc1c623bdd21044f1755ed5d684f3" Nov 28 10:13:35 crc kubenswrapper[4946]: I1128 10:13:35.964454 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8lmjl"] Nov 28 10:13:35 crc kubenswrapper[4946]: I1128 10:13:35.975663 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8lmjl"] Nov 28 10:13:36 crc kubenswrapper[4946]: I1128 10:13:36.007444 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16ef5f68-6a6e-4bb4-b379-c1246c255151" path="/var/lib/kubelet/pods/16ef5f68-6a6e-4bb4-b379-c1246c255151/volumes" Nov 28 10:13:36 crc kubenswrapper[4946]: I1128 10:13:36.030146 4946 scope.go:117] "RemoveContainer" containerID="ac93afc9c9ad7e7263afbe46b242b3d1bd7e4c78ec2861b58e28603ccfef6f7e" Nov 28 10:13:36 crc kubenswrapper[4946]: I1128 10:13:36.055053 4946 scope.go:117] "RemoveContainer" containerID="806745224bee16074a507b70178389a87041cd0878d91a4829ccde83671f4d84" Nov 28 10:13:36 crc kubenswrapper[4946]: E1128 10:13:36.055493 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"806745224bee16074a507b70178389a87041cd0878d91a4829ccde83671f4d84\": container with ID starting with 806745224bee16074a507b70178389a87041cd0878d91a4829ccde83671f4d84 not found: ID does not exist" containerID="806745224bee16074a507b70178389a87041cd0878d91a4829ccde83671f4d84" Nov 28 10:13:36 crc kubenswrapper[4946]: I1128 10:13:36.055532 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"806745224bee16074a507b70178389a87041cd0878d91a4829ccde83671f4d84"} err="failed to get container status \"806745224bee16074a507b70178389a87041cd0878d91a4829ccde83671f4d84\": rpc error: code = NotFound desc = could not find container \"806745224bee16074a507b70178389a87041cd0878d91a4829ccde83671f4d84\": container with ID starting with 806745224bee16074a507b70178389a87041cd0878d91a4829ccde83671f4d84 not found: ID does not exist" Nov 28 10:13:36 crc kubenswrapper[4946]: I1128 10:13:36.055558 4946 scope.go:117] "RemoveContainer" containerID="99fca58c75ce42ee462ed96d82c80fc8b43bc1c623bdd21044f1755ed5d684f3" Nov 28 10:13:36 crc kubenswrapper[4946]: E1128 10:13:36.055994 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99fca58c75ce42ee462ed96d82c80fc8b43bc1c623bdd21044f1755ed5d684f3\": container with ID starting with 99fca58c75ce42ee462ed96d82c80fc8b43bc1c623bdd21044f1755ed5d684f3 not found: ID does not exist" containerID="99fca58c75ce42ee462ed96d82c80fc8b43bc1c623bdd21044f1755ed5d684f3" Nov 28 10:13:36 crc kubenswrapper[4946]: I1128 10:13:36.056047 4946 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"99fca58c75ce42ee462ed96d82c80fc8b43bc1c623bdd21044f1755ed5d684f3"} err="failed to get container status \"99fca58c75ce42ee462ed96d82c80fc8b43bc1c623bdd21044f1755ed5d684f3\": rpc error: code = NotFound desc = could not find container \"99fca58c75ce42ee462ed96d82c80fc8b43bc1c623bdd21044f1755ed5d684f3\": container with ID starting with 99fca58c75ce42ee462ed96d82c80fc8b43bc1c623bdd21044f1755ed5d684f3 not found: ID does not exist" Nov 28 10:13:36 crc kubenswrapper[4946]: I1128 10:13:36.056084 4946 scope.go:117] "RemoveContainer" containerID="ac93afc9c9ad7e7263afbe46b242b3d1bd7e4c78ec2861b58e28603ccfef6f7e" Nov 28 10:13:36 crc kubenswrapper[4946]: E1128 10:13:36.056384 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac93afc9c9ad7e7263afbe46b242b3d1bd7e4c78ec2861b58e28603ccfef6f7e\": container with ID starting with ac93afc9c9ad7e7263afbe46b242b3d1bd7e4c78ec2861b58e28603ccfef6f7e not found: ID does not exist" containerID="ac93afc9c9ad7e7263afbe46b242b3d1bd7e4c78ec2861b58e28603ccfef6f7e" Nov 28 10:13:36 crc kubenswrapper[4946]: I1128 10:13:36.056451 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac93afc9c9ad7e7263afbe46b242b3d1bd7e4c78ec2861b58e28603ccfef6f7e"} err="failed to get container status \"ac93afc9c9ad7e7263afbe46b242b3d1bd7e4c78ec2861b58e28603ccfef6f7e\": rpc error: code = NotFound desc = could not find container \"ac93afc9c9ad7e7263afbe46b242b3d1bd7e4c78ec2861b58e28603ccfef6f7e\": container with ID starting with ac93afc9c9ad7e7263afbe46b242b3d1bd7e4c78ec2861b58e28603ccfef6f7e not found: ID does not exist" Nov 28 10:15:00 crc kubenswrapper[4946]: I1128 10:15:00.162553 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405415-qz5j4"] Nov 28 10:15:00 crc kubenswrapper[4946]: E1128 10:15:00.164401 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16ef5f68-6a6e-4bb4-b379-c1246c255151" containerName="registry-server" Nov 28 10:15:00 crc kubenswrapper[4946]: I1128 10:15:00.164489 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="16ef5f68-6a6e-4bb4-b379-c1246c255151" containerName="registry-server" Nov 28 10:15:00 crc kubenswrapper[4946]: E1128 10:15:00.164552 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16ef5f68-6a6e-4bb4-b379-c1246c255151" containerName="extract-utilities" Nov 28 10:15:00 crc kubenswrapper[4946]: I1128 10:15:00.164607 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="16ef5f68-6a6e-4bb4-b379-c1246c255151" containerName="extract-utilities" Nov 28 10:15:00 crc kubenswrapper[4946]: E1128 10:15:00.164672 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16ef5f68-6a6e-4bb4-b379-c1246c255151" containerName="extract-content" Nov 28 10:15:00 crc kubenswrapper[4946]: I1128 10:15:00.164722 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="16ef5f68-6a6e-4bb4-b379-c1246c255151" containerName="extract-content" Nov 28 10:15:00 crc kubenswrapper[4946]: I1128 10:15:00.164996 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="16ef5f68-6a6e-4bb4-b379-c1246c255151" containerName="registry-server" Nov 28 10:15:00 crc kubenswrapper[4946]: I1128 10:15:00.165959 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405415-qz5j4" Nov 28 10:15:00 crc kubenswrapper[4946]: I1128 10:15:00.168248 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 28 10:15:00 crc kubenswrapper[4946]: I1128 10:15:00.168257 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 28 10:15:00 crc kubenswrapper[4946]: I1128 10:15:00.178322 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405415-qz5j4"] Nov 28 10:15:00 crc kubenswrapper[4946]: I1128 10:15:00.218692 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mmcm\" (UniqueName: \"kubernetes.io/projected/b7e81dbd-0d99-434a-8d4f-f4dda08c5b04-kube-api-access-5mmcm\") pod \"collect-profiles-29405415-qz5j4\" (UID: \"b7e81dbd-0d99-434a-8d4f-f4dda08c5b04\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405415-qz5j4" Nov 28 10:15:00 crc kubenswrapper[4946]: I1128 10:15:00.218780 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7e81dbd-0d99-434a-8d4f-f4dda08c5b04-config-volume\") pod \"collect-profiles-29405415-qz5j4\" (UID: \"b7e81dbd-0d99-434a-8d4f-f4dda08c5b04\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405415-qz5j4" Nov 28 10:15:00 crc kubenswrapper[4946]: I1128 10:15:00.218900 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b7e81dbd-0d99-434a-8d4f-f4dda08c5b04-secret-volume\") pod \"collect-profiles-29405415-qz5j4\" (UID: \"b7e81dbd-0d99-434a-8d4f-f4dda08c5b04\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405415-qz5j4" Nov 28 10:15:00 crc kubenswrapper[4946]: I1128 10:15:00.320457 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mmcm\" (UniqueName: \"kubernetes.io/projected/b7e81dbd-0d99-434a-8d4f-f4dda08c5b04-kube-api-access-5mmcm\") pod \"collect-profiles-29405415-qz5j4\" (UID: \"b7e81dbd-0d99-434a-8d4f-f4dda08c5b04\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405415-qz5j4" Nov 28 10:15:00 crc kubenswrapper[4946]: I1128 10:15:00.320553 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7e81dbd-0d99-434a-8d4f-f4dda08c5b04-config-volume\") pod \"collect-profiles-29405415-qz5j4\" (UID: \"b7e81dbd-0d99-434a-8d4f-f4dda08c5b04\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405415-qz5j4" Nov 28 10:15:00 crc kubenswrapper[4946]: I1128 10:15:00.320603 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b7e81dbd-0d99-434a-8d4f-f4dda08c5b04-secret-volume\") pod \"collect-profiles-29405415-qz5j4\" (UID: \"b7e81dbd-0d99-434a-8d4f-f4dda08c5b04\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405415-qz5j4" Nov 28 10:15:00 crc kubenswrapper[4946]: I1128 10:15:00.321897 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7e81dbd-0d99-434a-8d4f-f4dda08c5b04-config-volume\") pod 
\"collect-profiles-29405415-qz5j4\" (UID: \"b7e81dbd-0d99-434a-8d4f-f4dda08c5b04\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405415-qz5j4" Nov 28 10:15:00 crc kubenswrapper[4946]: I1128 10:15:00.326897 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b7e81dbd-0d99-434a-8d4f-f4dda08c5b04-secret-volume\") pod \"collect-profiles-29405415-qz5j4\" (UID: \"b7e81dbd-0d99-434a-8d4f-f4dda08c5b04\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405415-qz5j4" Nov 28 10:15:00 crc kubenswrapper[4946]: I1128 10:15:00.338409 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mmcm\" (UniqueName: \"kubernetes.io/projected/b7e81dbd-0d99-434a-8d4f-f4dda08c5b04-kube-api-access-5mmcm\") pod \"collect-profiles-29405415-qz5j4\" (UID: \"b7e81dbd-0d99-434a-8d4f-f4dda08c5b04\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405415-qz5j4" Nov 28 10:15:00 crc kubenswrapper[4946]: I1128 10:15:00.490633 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405415-qz5j4" Nov 28 10:15:00 crc kubenswrapper[4946]: I1128 10:15:00.958694 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405415-qz5j4"] Nov 28 10:15:00 crc kubenswrapper[4946]: I1128 10:15:00.992293 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405415-qz5j4" event={"ID":"b7e81dbd-0d99-434a-8d4f-f4dda08c5b04","Type":"ContainerStarted","Data":"412331941b25b82d260da07101276e6d01bdc4ba1e0c2b109e7a75c0cd83352d"} Nov 28 10:15:02 crc kubenswrapper[4946]: I1128 10:15:02.009964 4946 generic.go:334] "Generic (PLEG): container finished" podID="b7e81dbd-0d99-434a-8d4f-f4dda08c5b04" containerID="95f0dec296cbeb3f7aa8b2ac8678aa7e8952023624878ccd2dd5dfb7dd200468" exitCode=0 Nov 28 10:15:02 crc kubenswrapper[4946]: I1128 10:15:02.010195 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405415-qz5j4" event={"ID":"b7e81dbd-0d99-434a-8d4f-f4dda08c5b04","Type":"ContainerDied","Data":"95f0dec296cbeb3f7aa8b2ac8678aa7e8952023624878ccd2dd5dfb7dd200468"} Nov 28 10:15:03 crc kubenswrapper[4946]: I1128 10:15:03.448482 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405415-qz5j4" Nov 28 10:15:03 crc kubenswrapper[4946]: I1128 10:15:03.591842 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mmcm\" (UniqueName: \"kubernetes.io/projected/b7e81dbd-0d99-434a-8d4f-f4dda08c5b04-kube-api-access-5mmcm\") pod \"b7e81dbd-0d99-434a-8d4f-f4dda08c5b04\" (UID: \"b7e81dbd-0d99-434a-8d4f-f4dda08c5b04\") " Nov 28 10:15:03 crc kubenswrapper[4946]: I1128 10:15:03.591972 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7e81dbd-0d99-434a-8d4f-f4dda08c5b04-config-volume\") pod \"b7e81dbd-0d99-434a-8d4f-f4dda08c5b04\" (UID: \"b7e81dbd-0d99-434a-8d4f-f4dda08c5b04\") " Nov 28 10:15:03 crc kubenswrapper[4946]: I1128 10:15:03.592805 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7e81dbd-0d99-434a-8d4f-f4dda08c5b04-config-volume" (OuterVolumeSpecName: "config-volume") pod "b7e81dbd-0d99-434a-8d4f-f4dda08c5b04" (UID: "b7e81dbd-0d99-434a-8d4f-f4dda08c5b04"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 10:15:03 crc kubenswrapper[4946]: I1128 10:15:03.592871 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b7e81dbd-0d99-434a-8d4f-f4dda08c5b04-secret-volume\") pod \"b7e81dbd-0d99-434a-8d4f-f4dda08c5b04\" (UID: \"b7e81dbd-0d99-434a-8d4f-f4dda08c5b04\") " Nov 28 10:15:03 crc kubenswrapper[4946]: I1128 10:15:03.593960 4946 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7e81dbd-0d99-434a-8d4f-f4dda08c5b04-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 10:15:03 crc kubenswrapper[4946]: I1128 10:15:03.608877 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7e81dbd-0d99-434a-8d4f-f4dda08c5b04-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b7e81dbd-0d99-434a-8d4f-f4dda08c5b04" (UID: "b7e81dbd-0d99-434a-8d4f-f4dda08c5b04"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 10:15:03 crc kubenswrapper[4946]: I1128 10:15:03.613191 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7e81dbd-0d99-434a-8d4f-f4dda08c5b04-kube-api-access-5mmcm" (OuterVolumeSpecName: "kube-api-access-5mmcm") pod "b7e81dbd-0d99-434a-8d4f-f4dda08c5b04" (UID: "b7e81dbd-0d99-434a-8d4f-f4dda08c5b04"). InnerVolumeSpecName "kube-api-access-5mmcm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 10:15:03 crc kubenswrapper[4946]: I1128 10:15:03.696434 4946 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b7e81dbd-0d99-434a-8d4f-f4dda08c5b04-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 28 10:15:03 crc kubenswrapper[4946]: I1128 10:15:03.696482 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mmcm\" (UniqueName: \"kubernetes.io/projected/b7e81dbd-0d99-434a-8d4f-f4dda08c5b04-kube-api-access-5mmcm\") on node \"crc\" DevicePath \"\"" Nov 28 10:15:04 crc kubenswrapper[4946]: I1128 10:15:04.033695 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405415-qz5j4" event={"ID":"b7e81dbd-0d99-434a-8d4f-f4dda08c5b04","Type":"ContainerDied","Data":"412331941b25b82d260da07101276e6d01bdc4ba1e0c2b109e7a75c0cd83352d"} Nov 28 10:15:04 crc kubenswrapper[4946]: I1128 10:15:04.034129 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="412331941b25b82d260da07101276e6d01bdc4ba1e0c2b109e7a75c0cd83352d" Nov 28 10:15:04 crc kubenswrapper[4946]: I1128 10:15:04.033891 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405415-qz5j4" Nov 28 10:15:04 crc kubenswrapper[4946]: I1128 10:15:04.528532 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405370-tmw2v"] Nov 28 10:15:04 crc kubenswrapper[4946]: I1128 10:15:04.550621 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405370-tmw2v"] Nov 28 10:15:06 crc kubenswrapper[4946]: I1128 10:15:06.003668 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a37f46c-f843-41b7-a9e1-6fc7f7201e45" path="/var/lib/kubelet/pods/1a37f46c-f843-41b7-a9e1-6fc7f7201e45/volumes" Nov 28 10:15:24 crc kubenswrapper[4946]: I1128 10:15:24.730766 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 10:15:24 crc kubenswrapper[4946]: I1128 10:15:24.731278 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 10:15:54 crc kubenswrapper[4946]: I1128 10:15:54.730721 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 10:15:54 crc kubenswrapper[4946]: I1128 10:15:54.731659 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 10:15:55 crc kubenswrapper[4946]: 
I1128 10:15:55.004412 4946 scope.go:117] "RemoveContainer" containerID="d176177fb65d51e29eb0e3e84bf8f10e3a8e36dea7ff15f153a6ee7f2ceebdd3" Nov 28 10:16:24 crc kubenswrapper[4946]: I1128 10:16:24.730997 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 10:16:24 crc kubenswrapper[4946]: I1128 10:16:24.731597 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 10:16:24 crc kubenswrapper[4946]: I1128 10:16:24.731652 4946 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" Nov 28 10:16:24 crc kubenswrapper[4946]: I1128 10:16:24.732579 4946 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ffde5dbc4752bb52de3ffee0d7f0302a34a211bf129849b6cf44262925feacbd"} pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 10:16:24 crc kubenswrapper[4946]: I1128 10:16:24.732648 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" containerID="cri-o://ffde5dbc4752bb52de3ffee0d7f0302a34a211bf129849b6cf44262925feacbd" gracePeriod=600 Nov 28 10:16:25 crc kubenswrapper[4946]: I1128 10:16:25.040111 4946 generic.go:334] "Generic (PLEG): container finished" podID="7450befc-262f-45d1-a5f4-f445e540185b" containerID="ffde5dbc4752bb52de3ffee0d7f0302a34a211bf129849b6cf44262925feacbd" exitCode=0 Nov 28 10:16:25 crc kubenswrapper[4946]: I1128 10:16:25.040183 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerDied","Data":"ffde5dbc4752bb52de3ffee0d7f0302a34a211bf129849b6cf44262925feacbd"} Nov 28 10:16:25 crc kubenswrapper[4946]: I1128 10:16:25.040270 4946 scope.go:117] "RemoveContainer" containerID="f8d1b92f8b1f52e3ae14a53874f6bb68cdfa85ec1d4f802788d99ca25a4c54a9" Nov 28 10:16:26 crc kubenswrapper[4946]: I1128 10:16:26.078000 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerStarted","Data":"38a9f5347a2073fa4297c759e731174d04a4e640bf09f416b3f764353bbe2b18"} Nov 28 10:16:34 crc kubenswrapper[4946]: I1128 10:16:34.165227 4946 generic.go:334] "Generic (PLEG): container finished" podID="1e51973b-27b2-4f5f-9073-0ba9d14f9593" containerID="e4cb4a317a4a9017de161711a2626e068b88bca5b234366c7d8648fd06bca872" exitCode=0 Nov 28 10:16:34 crc kubenswrapper[4946]: I1128 10:16:34.165458 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" 
event={"ID":"1e51973b-27b2-4f5f-9073-0ba9d14f9593","Type":"ContainerDied","Data":"e4cb4a317a4a9017de161711a2626e068b88bca5b234366c7d8648fd06bca872"} Nov 28 10:16:35 crc kubenswrapper[4946]: I1128 10:16:35.676931 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 28 10:16:35 crc kubenswrapper[4946]: I1128 10:16:35.788945 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/1e51973b-27b2-4f5f-9073-0ba9d14f9593-test-operator-ephemeral-temporary\") pod \"1e51973b-27b2-4f5f-9073-0ba9d14f9593\" (UID: \"1e51973b-27b2-4f5f-9073-0ba9d14f9593\") " Nov 28 10:16:35 crc kubenswrapper[4946]: I1128 10:16:35.789199 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1e51973b-27b2-4f5f-9073-0ba9d14f9593-openstack-config\") pod \"1e51973b-27b2-4f5f-9073-0ba9d14f9593\" (UID: \"1e51973b-27b2-4f5f-9073-0ba9d14f9593\") " Nov 28 10:16:35 crc kubenswrapper[4946]: I1128 10:16:35.789246 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96xpn\" (UniqueName: \"kubernetes.io/projected/1e51973b-27b2-4f5f-9073-0ba9d14f9593-kube-api-access-96xpn\") pod \"1e51973b-27b2-4f5f-9073-0ba9d14f9593\" (UID: \"1e51973b-27b2-4f5f-9073-0ba9d14f9593\") " Nov 28 10:16:35 crc kubenswrapper[4946]: I1128 10:16:35.789300 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1e51973b-27b2-4f5f-9073-0ba9d14f9593-openstack-config-secret\") pod \"1e51973b-27b2-4f5f-9073-0ba9d14f9593\" (UID: \"1e51973b-27b2-4f5f-9073-0ba9d14f9593\") " Nov 28 10:16:35 crc kubenswrapper[4946]: I1128 10:16:35.789353 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/1e51973b-27b2-4f5f-9073-0ba9d14f9593-test-operator-ephemeral-workdir\") pod \"1e51973b-27b2-4f5f-9073-0ba9d14f9593\" (UID: \"1e51973b-27b2-4f5f-9073-0ba9d14f9593\") " Nov 28 10:16:35 crc kubenswrapper[4946]: I1128 10:16:35.789381 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1e51973b-27b2-4f5f-9073-0ba9d14f9593-ssh-key\") pod \"1e51973b-27b2-4f5f-9073-0ba9d14f9593\" (UID: \"1e51973b-27b2-4f5f-9073-0ba9d14f9593\") " Nov 28 10:16:35 crc kubenswrapper[4946]: I1128 10:16:35.789401 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1e51973b-27b2-4f5f-9073-0ba9d14f9593-config-data\") pod \"1e51973b-27b2-4f5f-9073-0ba9d14f9593\" (UID: \"1e51973b-27b2-4f5f-9073-0ba9d14f9593\") " Nov 28 10:16:35 crc kubenswrapper[4946]: I1128 10:16:35.789488 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/1e51973b-27b2-4f5f-9073-0ba9d14f9593-ca-certs\") pod \"1e51973b-27b2-4f5f-9073-0ba9d14f9593\" (UID: \"1e51973b-27b2-4f5f-9073-0ba9d14f9593\") " Nov 28 10:16:35 crc kubenswrapper[4946]: I1128 10:16:35.789572 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"1e51973b-27b2-4f5f-9073-0ba9d14f9593\" (UID: 
\"1e51973b-27b2-4f5f-9073-0ba9d14f9593\") " Nov 28 10:16:35 crc kubenswrapper[4946]: I1128 10:16:35.790143 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e51973b-27b2-4f5f-9073-0ba9d14f9593-config-data" (OuterVolumeSpecName: "config-data") pod "1e51973b-27b2-4f5f-9073-0ba9d14f9593" (UID: "1e51973b-27b2-4f5f-9073-0ba9d14f9593"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 10:16:35 crc kubenswrapper[4946]: I1128 10:16:35.790997 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e51973b-27b2-4f5f-9073-0ba9d14f9593-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "1e51973b-27b2-4f5f-9073-0ba9d14f9593" (UID: "1e51973b-27b2-4f5f-9073-0ba9d14f9593"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 10:16:35 crc kubenswrapper[4946]: I1128 10:16:35.795197 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "test-operator-logs") pod "1e51973b-27b2-4f5f-9073-0ba9d14f9593" (UID: "1e51973b-27b2-4f5f-9073-0ba9d14f9593"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 10:16:35 crc kubenswrapper[4946]: I1128 10:16:35.796263 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e51973b-27b2-4f5f-9073-0ba9d14f9593-kube-api-access-96xpn" (OuterVolumeSpecName: "kube-api-access-96xpn") pod "1e51973b-27b2-4f5f-9073-0ba9d14f9593" (UID: "1e51973b-27b2-4f5f-9073-0ba9d14f9593"). InnerVolumeSpecName "kube-api-access-96xpn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 10:16:35 crc kubenswrapper[4946]: I1128 10:16:35.801565 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e51973b-27b2-4f5f-9073-0ba9d14f9593-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "1e51973b-27b2-4f5f-9073-0ba9d14f9593" (UID: "1e51973b-27b2-4f5f-9073-0ba9d14f9593"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 10:16:35 crc kubenswrapper[4946]: I1128 10:16:35.818695 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e51973b-27b2-4f5f-9073-0ba9d14f9593-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1e51973b-27b2-4f5f-9073-0ba9d14f9593" (UID: "1e51973b-27b2-4f5f-9073-0ba9d14f9593"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 10:16:35 crc kubenswrapper[4946]: I1128 10:16:35.820665 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e51973b-27b2-4f5f-9073-0ba9d14f9593-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "1e51973b-27b2-4f5f-9073-0ba9d14f9593" (UID: "1e51973b-27b2-4f5f-9073-0ba9d14f9593"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 10:16:35 crc kubenswrapper[4946]: I1128 10:16:35.829737 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e51973b-27b2-4f5f-9073-0ba9d14f9593-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "1e51973b-27b2-4f5f-9073-0ba9d14f9593" (UID: "1e51973b-27b2-4f5f-9073-0ba9d14f9593"). 
InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 10:16:35 crc kubenswrapper[4946]: I1128 10:16:35.844225 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e51973b-27b2-4f5f-9073-0ba9d14f9593-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "1e51973b-27b2-4f5f-9073-0ba9d14f9593" (UID: "1e51973b-27b2-4f5f-9073-0ba9d14f9593"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 10:16:35 crc kubenswrapper[4946]: I1128 10:16:35.892765 4946 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Nov 28 10:16:35 crc kubenswrapper[4946]: I1128 10:16:35.892897 4946 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/1e51973b-27b2-4f5f-9073-0ba9d14f9593-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Nov 28 10:16:35 crc kubenswrapper[4946]: I1128 10:16:35.892954 4946 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1e51973b-27b2-4f5f-9073-0ba9d14f9593-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 28 10:16:35 crc kubenswrapper[4946]: I1128 10:16:35.893027 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96xpn\" (UniqueName: \"kubernetes.io/projected/1e51973b-27b2-4f5f-9073-0ba9d14f9593-kube-api-access-96xpn\") on node \"crc\" DevicePath \"\"" Nov 28 10:16:35 crc kubenswrapper[4946]: I1128 10:16:35.893080 4946 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1e51973b-27b2-4f5f-9073-0ba9d14f9593-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 28 10:16:35 crc kubenswrapper[4946]: I1128 10:16:35.893164 4946 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/1e51973b-27b2-4f5f-9073-0ba9d14f9593-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Nov 28 10:16:35 crc kubenswrapper[4946]: I1128 10:16:35.893218 4946 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1e51973b-27b2-4f5f-9073-0ba9d14f9593-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 10:16:35 crc kubenswrapper[4946]: I1128 10:16:35.893267 4946 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1e51973b-27b2-4f5f-9073-0ba9d14f9593-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 10:16:35 crc kubenswrapper[4946]: I1128 10:16:35.893314 4946 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/1e51973b-27b2-4f5f-9073-0ba9d14f9593-ca-certs\") on node \"crc\" DevicePath \"\"" Nov 28 10:16:35 crc kubenswrapper[4946]: I1128 10:16:35.929697 4946 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Nov 28 10:16:35 crc kubenswrapper[4946]: I1128 10:16:35.994642 4946 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Nov 28 10:16:36 crc kubenswrapper[4946]: I1128 10:16:36.192249 4946 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"1e51973b-27b2-4f5f-9073-0ba9d14f9593","Type":"ContainerDied","Data":"69d9fe38429938a27fe27f0bf54af1649fca649f3000311b09eaec283d370049"} Nov 28 10:16:36 crc kubenswrapper[4946]: I1128 10:16:36.192633 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69d9fe38429938a27fe27f0bf54af1649fca649f3000311b09eaec283d370049" Nov 28 10:16:36 crc kubenswrapper[4946]: I1128 10:16:36.192283 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 28 10:16:46 crc kubenswrapper[4946]: I1128 10:16:46.507549 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 28 10:16:46 crc kubenswrapper[4946]: E1128 10:16:46.509575 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e51973b-27b2-4f5f-9073-0ba9d14f9593" containerName="tempest-tests-tempest-tests-runner" Nov 28 10:16:46 crc kubenswrapper[4946]: I1128 10:16:46.509601 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e51973b-27b2-4f5f-9073-0ba9d14f9593" containerName="tempest-tests-tempest-tests-runner" Nov 28 10:16:46 crc kubenswrapper[4946]: E1128 10:16:46.509686 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7e81dbd-0d99-434a-8d4f-f4dda08c5b04" containerName="collect-profiles" Nov 28 10:16:46 crc kubenswrapper[4946]: I1128 10:16:46.509699 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7e81dbd-0d99-434a-8d4f-f4dda08c5b04" containerName="collect-profiles" Nov 28 10:16:46 crc kubenswrapper[4946]: I1128 10:16:46.510530 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e51973b-27b2-4f5f-9073-0ba9d14f9593" containerName="tempest-tests-tempest-tests-runner" Nov 28 10:16:46 crc kubenswrapper[4946]: I1128 10:16:46.510633 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7e81dbd-0d99-434a-8d4f-f4dda08c5b04" containerName="collect-profiles" Nov 28 10:16:46 crc kubenswrapper[4946]: I1128 10:16:46.522511 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 28 10:16:46 crc kubenswrapper[4946]: I1128 10:16:46.530104 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-zsz6p" Nov 28 10:16:46 crc kubenswrapper[4946]: I1128 10:16:46.547272 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 28 10:16:46 crc kubenswrapper[4946]: I1128 10:16:46.646357 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"176939d9-3e4b-4bb6-8e2a-a8a4de32d7a6\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 28 10:16:46 crc kubenswrapper[4946]: I1128 10:16:46.646528 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rbvn\" (UniqueName: \"kubernetes.io/projected/176939d9-3e4b-4bb6-8e2a-a8a4de32d7a6-kube-api-access-5rbvn\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"176939d9-3e4b-4bb6-8e2a-a8a4de32d7a6\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 28 10:16:46 crc kubenswrapper[4946]: I1128 10:16:46.748032 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"176939d9-3e4b-4bb6-8e2a-a8a4de32d7a6\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 28 10:16:46 crc kubenswrapper[4946]: I1128 10:16:46.748188 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rbvn\" (UniqueName: \"kubernetes.io/projected/176939d9-3e4b-4bb6-8e2a-a8a4de32d7a6-kube-api-access-5rbvn\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"176939d9-3e4b-4bb6-8e2a-a8a4de32d7a6\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 28 10:16:46 crc kubenswrapper[4946]: I1128 10:16:46.748875 4946 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"176939d9-3e4b-4bb6-8e2a-a8a4de32d7a6\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 28 10:16:46 crc kubenswrapper[4946]: I1128 10:16:46.767067 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rbvn\" (UniqueName: \"kubernetes.io/projected/176939d9-3e4b-4bb6-8e2a-a8a4de32d7a6-kube-api-access-5rbvn\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"176939d9-3e4b-4bb6-8e2a-a8a4de32d7a6\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 28 10:16:46 crc kubenswrapper[4946]: I1128 10:16:46.804992 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"176939d9-3e4b-4bb6-8e2a-a8a4de32d7a6\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 28 10:16:46 crc 
kubenswrapper[4946]: I1128 10:16:46.850096 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 28 10:16:47 crc kubenswrapper[4946]: I1128 10:16:47.328549 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 28 10:16:48 crc kubenswrapper[4946]: I1128 10:16:48.322942 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"176939d9-3e4b-4bb6-8e2a-a8a4de32d7a6","Type":"ContainerStarted","Data":"c56365545033fb0bafc1362135df450799d8caa93266b651c82c1691b9ef17e9"} Nov 28 10:16:49 crc kubenswrapper[4946]: I1128 10:16:49.339783 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"176939d9-3e4b-4bb6-8e2a-a8a4de32d7a6","Type":"ContainerStarted","Data":"38d34fdb95a59e0483df3c19bff6be6eb38f5a2c69b859420d20d4da3aa0bc2e"} Nov 28 10:16:49 crc kubenswrapper[4946]: I1128 10:16:49.372099 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.253929442 podStartE2EDuration="3.37207792s" podCreationTimestamp="2025-11-28 10:16:46 +0000 UTC" firstStartedPulling="2025-11-28 10:16:47.335202955 +0000 UTC m=+12261.713268106" lastFinishedPulling="2025-11-28 10:16:48.453351473 +0000 UTC m=+12262.831416584" observedRunningTime="2025-11-28 10:16:49.358556766 +0000 UTC m=+12263.736621937" watchObservedRunningTime="2025-11-28 10:16:49.37207792 +0000 UTC m=+12263.750143041" Nov 28 10:17:46 crc kubenswrapper[4946]: I1128 10:17:46.903741 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hx24b"] Nov 28 10:17:46 crc kubenswrapper[4946]: I1128 10:17:46.908700 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hx24b" Nov 28 10:17:46 crc kubenswrapper[4946]: I1128 10:17:46.923867 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hx24b"] Nov 28 10:17:47 crc kubenswrapper[4946]: I1128 10:17:47.097544 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe0e0ace-0690-4c29-ae72-877939ee2f22-utilities\") pod \"certified-operators-hx24b\" (UID: \"fe0e0ace-0690-4c29-ae72-877939ee2f22\") " pod="openshift-marketplace/certified-operators-hx24b" Nov 28 10:17:47 crc kubenswrapper[4946]: I1128 10:17:47.097629 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe0e0ace-0690-4c29-ae72-877939ee2f22-catalog-content\") pod \"certified-operators-hx24b\" (UID: \"fe0e0ace-0690-4c29-ae72-877939ee2f22\") " pod="openshift-marketplace/certified-operators-hx24b" Nov 28 10:17:47 crc kubenswrapper[4946]: I1128 10:17:47.097662 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cggh\" (UniqueName: \"kubernetes.io/projected/fe0e0ace-0690-4c29-ae72-877939ee2f22-kube-api-access-7cggh\") pod \"certified-operators-hx24b\" (UID: \"fe0e0ace-0690-4c29-ae72-877939ee2f22\") " pod="openshift-marketplace/certified-operators-hx24b" Nov 28 10:17:47 crc kubenswrapper[4946]: I1128 10:17:47.199199 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe0e0ace-0690-4c29-ae72-877939ee2f22-utilities\") pod \"certified-operators-hx24b\" (UID: \"fe0e0ace-0690-4c29-ae72-877939ee2f22\") " pod="openshift-marketplace/certified-operators-hx24b" Nov 28 10:17:47 crc kubenswrapper[4946]: I1128 10:17:47.199293 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe0e0ace-0690-4c29-ae72-877939ee2f22-catalog-content\") pod \"certified-operators-hx24b\" (UID: \"fe0e0ace-0690-4c29-ae72-877939ee2f22\") " pod="openshift-marketplace/certified-operators-hx24b" Nov 28 10:17:47 crc kubenswrapper[4946]: I1128 10:17:47.199338 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cggh\" (UniqueName: \"kubernetes.io/projected/fe0e0ace-0690-4c29-ae72-877939ee2f22-kube-api-access-7cggh\") pod \"certified-operators-hx24b\" (UID: \"fe0e0ace-0690-4c29-ae72-877939ee2f22\") " pod="openshift-marketplace/certified-operators-hx24b" Nov 28 10:17:47 crc kubenswrapper[4946]: I1128 10:17:47.199857 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe0e0ace-0690-4c29-ae72-877939ee2f22-utilities\") pod \"certified-operators-hx24b\" (UID: \"fe0e0ace-0690-4c29-ae72-877939ee2f22\") " pod="openshift-marketplace/certified-operators-hx24b" Nov 28 10:17:47 crc kubenswrapper[4946]: I1128 10:17:47.200051 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe0e0ace-0690-4c29-ae72-877939ee2f22-catalog-content\") pod \"certified-operators-hx24b\" (UID: \"fe0e0ace-0690-4c29-ae72-877939ee2f22\") " pod="openshift-marketplace/certified-operators-hx24b" Nov 28 10:17:47 crc kubenswrapper[4946]: I1128 10:17:47.235096 4946 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7cggh\" (UniqueName: \"kubernetes.io/projected/fe0e0ace-0690-4c29-ae72-877939ee2f22-kube-api-access-7cggh\") pod \"certified-operators-hx24b\" (UID: \"fe0e0ace-0690-4c29-ae72-877939ee2f22\") " pod="openshift-marketplace/certified-operators-hx24b" Nov 28 10:17:47 crc kubenswrapper[4946]: I1128 10:17:47.250611 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hx24b" Nov 28 10:17:47 crc kubenswrapper[4946]: W1128 10:17:47.889160 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe0e0ace_0690_4c29_ae72_877939ee2f22.slice/crio-e23f57fa19d88adf2c66726646985ccfabb25756ad54c78db8f2e8bcd3269c8b WatchSource:0}: Error finding container e23f57fa19d88adf2c66726646985ccfabb25756ad54c78db8f2e8bcd3269c8b: Status 404 returned error can't find the container with id e23f57fa19d88adf2c66726646985ccfabb25756ad54c78db8f2e8bcd3269c8b Nov 28 10:17:47 crc kubenswrapper[4946]: I1128 10:17:47.897595 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hx24b"] Nov 28 10:17:48 crc kubenswrapper[4946]: I1128 10:17:48.063687 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hx24b" event={"ID":"fe0e0ace-0690-4c29-ae72-877939ee2f22","Type":"ContainerStarted","Data":"e23f57fa19d88adf2c66726646985ccfabb25756ad54c78db8f2e8bcd3269c8b"} Nov 28 10:17:49 crc kubenswrapper[4946]: I1128 10:17:49.081553 4946 generic.go:334] "Generic (PLEG): container finished" podID="fe0e0ace-0690-4c29-ae72-877939ee2f22" containerID="81a6e569c2d3834683557fb2f379fbbb325b97a15d02d0213318d2eda0dcbcc6" exitCode=0 Nov 28 10:17:49 crc kubenswrapper[4946]: I1128 10:17:49.081671 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hx24b" event={"ID":"fe0e0ace-0690-4c29-ae72-877939ee2f22","Type":"ContainerDied","Data":"81a6e569c2d3834683557fb2f379fbbb325b97a15d02d0213318d2eda0dcbcc6"} Nov 28 10:17:50 crc kubenswrapper[4946]: I1128 10:17:50.109049 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hx24b" event={"ID":"fe0e0ace-0690-4c29-ae72-877939ee2f22","Type":"ContainerStarted","Data":"f5ec19ffe33ef83a00eb438686cdfc92458e4625b004386a564beb5ee8e2b11e"} Nov 28 10:17:51 crc kubenswrapper[4946]: I1128 10:17:51.125543 4946 generic.go:334] "Generic (PLEG): container finished" podID="fe0e0ace-0690-4c29-ae72-877939ee2f22" containerID="f5ec19ffe33ef83a00eb438686cdfc92458e4625b004386a564beb5ee8e2b11e" exitCode=0 Nov 28 10:17:51 crc kubenswrapper[4946]: I1128 10:17:51.125612 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hx24b" event={"ID":"fe0e0ace-0690-4c29-ae72-877939ee2f22","Type":"ContainerDied","Data":"f5ec19ffe33ef83a00eb438686cdfc92458e4625b004386a564beb5ee8e2b11e"} Nov 28 10:17:52 crc kubenswrapper[4946]: I1128 10:17:52.140719 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hx24b" event={"ID":"fe0e0ace-0690-4c29-ae72-877939ee2f22","Type":"ContainerStarted","Data":"08d9425f212fec449eee9c3f29b51eac759b7b4c2f52650bb28ef5a91656863a"} Nov 28 10:17:52 crc kubenswrapper[4946]: I1128 10:17:52.180373 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hx24b" 
podStartSLOduration=3.697970263 podStartE2EDuration="6.1803373s" podCreationTimestamp="2025-11-28 10:17:46 +0000 UTC" firstStartedPulling="2025-11-28 10:17:49.085593406 +0000 UTC m=+12323.463658517" lastFinishedPulling="2025-11-28 10:17:51.567960413 +0000 UTC m=+12325.946025554" observedRunningTime="2025-11-28 10:17:52.165866992 +0000 UTC m=+12326.543932143" watchObservedRunningTime="2025-11-28 10:17:52.1803373 +0000 UTC m=+12326.558402411" Nov 28 10:17:57 crc kubenswrapper[4946]: I1128 10:17:57.251112 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hx24b" Nov 28 10:17:57 crc kubenswrapper[4946]: I1128 10:17:57.252862 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hx24b" Nov 28 10:17:57 crc kubenswrapper[4946]: I1128 10:17:57.341976 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hx24b" Nov 28 10:17:57 crc kubenswrapper[4946]: I1128 10:17:57.875768 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hx24b" Nov 28 10:17:57 crc kubenswrapper[4946]: I1128 10:17:57.942851 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hx24b"] Nov 28 10:17:59 crc kubenswrapper[4946]: I1128 10:17:59.820065 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hx24b" podUID="fe0e0ace-0690-4c29-ae72-877939ee2f22" containerName="registry-server" containerID="cri-o://08d9425f212fec449eee9c3f29b51eac759b7b4c2f52650bb28ef5a91656863a" gracePeriod=2 Nov 28 10:18:00 crc kubenswrapper[4946]: I1128 10:18:00.397036 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hx24b" Nov 28 10:18:00 crc kubenswrapper[4946]: I1128 10:18:00.488910 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7cggh\" (UniqueName: \"kubernetes.io/projected/fe0e0ace-0690-4c29-ae72-877939ee2f22-kube-api-access-7cggh\") pod \"fe0e0ace-0690-4c29-ae72-877939ee2f22\" (UID: \"fe0e0ace-0690-4c29-ae72-877939ee2f22\") " Nov 28 10:18:00 crc kubenswrapper[4946]: I1128 10:18:00.489103 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe0e0ace-0690-4c29-ae72-877939ee2f22-utilities\") pod \"fe0e0ace-0690-4c29-ae72-877939ee2f22\" (UID: \"fe0e0ace-0690-4c29-ae72-877939ee2f22\") " Nov 28 10:18:00 crc kubenswrapper[4946]: I1128 10:18:00.489214 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe0e0ace-0690-4c29-ae72-877939ee2f22-catalog-content\") pod \"fe0e0ace-0690-4c29-ae72-877939ee2f22\" (UID: \"fe0e0ace-0690-4c29-ae72-877939ee2f22\") " Nov 28 10:18:00 crc kubenswrapper[4946]: I1128 10:18:00.489970 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe0e0ace-0690-4c29-ae72-877939ee2f22-utilities" (OuterVolumeSpecName: "utilities") pod "fe0e0ace-0690-4c29-ae72-877939ee2f22" (UID: "fe0e0ace-0690-4c29-ae72-877939ee2f22"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 10:18:00 crc kubenswrapper[4946]: I1128 10:18:00.495550 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe0e0ace-0690-4c29-ae72-877939ee2f22-kube-api-access-7cggh" (OuterVolumeSpecName: "kube-api-access-7cggh") pod "fe0e0ace-0690-4c29-ae72-877939ee2f22" (UID: "fe0e0ace-0690-4c29-ae72-877939ee2f22"). InnerVolumeSpecName "kube-api-access-7cggh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 10:18:00 crc kubenswrapper[4946]: I1128 10:18:00.551517 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe0e0ace-0690-4c29-ae72-877939ee2f22-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fe0e0ace-0690-4c29-ae72-877939ee2f22" (UID: "fe0e0ace-0690-4c29-ae72-877939ee2f22"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 10:18:00 crc kubenswrapper[4946]: I1128 10:18:00.591810 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe0e0ace-0690-4c29-ae72-877939ee2f22-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 10:18:00 crc kubenswrapper[4946]: I1128 10:18:00.591840 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe0e0ace-0690-4c29-ae72-877939ee2f22-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 10:18:00 crc kubenswrapper[4946]: I1128 10:18:00.591853 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7cggh\" (UniqueName: \"kubernetes.io/projected/fe0e0ace-0690-4c29-ae72-877939ee2f22-kube-api-access-7cggh\") on node \"crc\" DevicePath \"\"" Nov 28 10:18:00 crc kubenswrapper[4946]: I1128 10:18:00.834864 4946 generic.go:334] "Generic (PLEG): container finished" podID="fe0e0ace-0690-4c29-ae72-877939ee2f22" containerID="08d9425f212fec449eee9c3f29b51eac759b7b4c2f52650bb28ef5a91656863a" exitCode=0 Nov 28 10:18:00 crc kubenswrapper[4946]: I1128 10:18:00.834911 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hx24b" event={"ID":"fe0e0ace-0690-4c29-ae72-877939ee2f22","Type":"ContainerDied","Data":"08d9425f212fec449eee9c3f29b51eac759b7b4c2f52650bb28ef5a91656863a"} Nov 28 10:18:00 crc kubenswrapper[4946]: I1128 10:18:00.834938 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hx24b" event={"ID":"fe0e0ace-0690-4c29-ae72-877939ee2f22","Type":"ContainerDied","Data":"e23f57fa19d88adf2c66726646985ccfabb25756ad54c78db8f2e8bcd3269c8b"} Nov 28 10:18:00 crc kubenswrapper[4946]: I1128 10:18:00.834955 4946 scope.go:117] "RemoveContainer" containerID="08d9425f212fec449eee9c3f29b51eac759b7b4c2f52650bb28ef5a91656863a" Nov 28 10:18:00 crc kubenswrapper[4946]: I1128 10:18:00.835031 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hx24b" Nov 28 10:18:00 crc kubenswrapper[4946]: I1128 10:18:00.864660 4946 scope.go:117] "RemoveContainer" containerID="f5ec19ffe33ef83a00eb438686cdfc92458e4625b004386a564beb5ee8e2b11e" Nov 28 10:18:00 crc kubenswrapper[4946]: I1128 10:18:00.887585 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hx24b"] Nov 28 10:18:00 crc kubenswrapper[4946]: I1128 10:18:00.899009 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hx24b"] Nov 28 10:18:00 crc kubenswrapper[4946]: I1128 10:18:00.913541 4946 scope.go:117] "RemoveContainer" containerID="81a6e569c2d3834683557fb2f379fbbb325b97a15d02d0213318d2eda0dcbcc6" Nov 28 10:18:00 crc kubenswrapper[4946]: I1128 10:18:00.975892 4946 scope.go:117] "RemoveContainer" containerID="08d9425f212fec449eee9c3f29b51eac759b7b4c2f52650bb28ef5a91656863a" Nov 28 10:18:00 crc kubenswrapper[4946]: E1128 10:18:00.976381 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08d9425f212fec449eee9c3f29b51eac759b7b4c2f52650bb28ef5a91656863a\": container with ID starting with 08d9425f212fec449eee9c3f29b51eac759b7b4c2f52650bb28ef5a91656863a not found: ID does not exist" containerID="08d9425f212fec449eee9c3f29b51eac759b7b4c2f52650bb28ef5a91656863a" Nov 28 10:18:00 crc kubenswrapper[4946]: I1128 10:18:00.976436 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08d9425f212fec449eee9c3f29b51eac759b7b4c2f52650bb28ef5a91656863a"} err="failed to get container status \"08d9425f212fec449eee9c3f29b51eac759b7b4c2f52650bb28ef5a91656863a\": rpc error: code = NotFound desc = could not find container \"08d9425f212fec449eee9c3f29b51eac759b7b4c2f52650bb28ef5a91656863a\": container with ID starting with 08d9425f212fec449eee9c3f29b51eac759b7b4c2f52650bb28ef5a91656863a not found: ID does not exist" Nov 28 10:18:00 crc kubenswrapper[4946]: I1128 10:18:00.976547 4946 scope.go:117] "RemoveContainer" containerID="f5ec19ffe33ef83a00eb438686cdfc92458e4625b004386a564beb5ee8e2b11e" Nov 28 10:18:00 crc kubenswrapper[4946]: E1128 10:18:00.976881 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5ec19ffe33ef83a00eb438686cdfc92458e4625b004386a564beb5ee8e2b11e\": container with ID starting with f5ec19ffe33ef83a00eb438686cdfc92458e4625b004386a564beb5ee8e2b11e not found: ID does not exist" containerID="f5ec19ffe33ef83a00eb438686cdfc92458e4625b004386a564beb5ee8e2b11e" Nov 28 10:18:00 crc kubenswrapper[4946]: I1128 10:18:00.976918 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5ec19ffe33ef83a00eb438686cdfc92458e4625b004386a564beb5ee8e2b11e"} err="failed to get container status \"f5ec19ffe33ef83a00eb438686cdfc92458e4625b004386a564beb5ee8e2b11e\": rpc error: code = NotFound desc = could not find container \"f5ec19ffe33ef83a00eb438686cdfc92458e4625b004386a564beb5ee8e2b11e\": container with ID starting with f5ec19ffe33ef83a00eb438686cdfc92458e4625b004386a564beb5ee8e2b11e not found: ID does not exist" Nov 28 10:18:00 crc kubenswrapper[4946]: I1128 10:18:00.976943 4946 scope.go:117] "RemoveContainer" containerID="81a6e569c2d3834683557fb2f379fbbb325b97a15d02d0213318d2eda0dcbcc6" Nov 28 10:18:00 crc kubenswrapper[4946]: E1128 10:18:00.977172 4946 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"81a6e569c2d3834683557fb2f379fbbb325b97a15d02d0213318d2eda0dcbcc6\": container with ID starting with 81a6e569c2d3834683557fb2f379fbbb325b97a15d02d0213318d2eda0dcbcc6 not found: ID does not exist" containerID="81a6e569c2d3834683557fb2f379fbbb325b97a15d02d0213318d2eda0dcbcc6" Nov 28 10:18:00 crc kubenswrapper[4946]: I1128 10:18:00.977196 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81a6e569c2d3834683557fb2f379fbbb325b97a15d02d0213318d2eda0dcbcc6"} err="failed to get container status \"81a6e569c2d3834683557fb2f379fbbb325b97a15d02d0213318d2eda0dcbcc6\": rpc error: code = NotFound desc = could not find container \"81a6e569c2d3834683557fb2f379fbbb325b97a15d02d0213318d2eda0dcbcc6\": container with ID starting with 81a6e569c2d3834683557fb2f379fbbb325b97a15d02d0213318d2eda0dcbcc6 not found: ID does not exist" Nov 28 10:18:02 crc kubenswrapper[4946]: I1128 10:18:02.008316 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe0e0ace-0690-4c29-ae72-877939ee2f22" path="/var/lib/kubelet/pods/fe0e0ace-0690-4c29-ae72-877939ee2f22/volumes" Nov 28 10:18:08 crc kubenswrapper[4946]: I1128 10:18:08.007846 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-2bvwr/must-gather-nkw64"] Nov 28 10:18:08 crc kubenswrapper[4946]: E1128 10:18:08.008839 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe0e0ace-0690-4c29-ae72-877939ee2f22" containerName="registry-server" Nov 28 10:18:08 crc kubenswrapper[4946]: I1128 10:18:08.008858 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe0e0ace-0690-4c29-ae72-877939ee2f22" containerName="registry-server" Nov 28 10:18:08 crc kubenswrapper[4946]: E1128 10:18:08.008884 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe0e0ace-0690-4c29-ae72-877939ee2f22" containerName="extract-utilities" Nov 28 10:18:08 crc kubenswrapper[4946]: I1128 10:18:08.008890 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe0e0ace-0690-4c29-ae72-877939ee2f22" containerName="extract-utilities" Nov 28 10:18:08 crc kubenswrapper[4946]: E1128 10:18:08.008908 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe0e0ace-0690-4c29-ae72-877939ee2f22" containerName="extract-content" Nov 28 10:18:08 crc kubenswrapper[4946]: I1128 10:18:08.008914 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe0e0ace-0690-4c29-ae72-877939ee2f22" containerName="extract-content" Nov 28 10:18:08 crc kubenswrapper[4946]: I1128 10:18:08.009138 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe0e0ace-0690-4c29-ae72-877939ee2f22" containerName="registry-server" Nov 28 10:18:08 crc kubenswrapper[4946]: I1128 10:18:08.010552 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2bvwr/must-gather-nkw64" Nov 28 10:18:08 crc kubenswrapper[4946]: I1128 10:18:08.012974 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-2bvwr"/"kube-root-ca.crt" Nov 28 10:18:08 crc kubenswrapper[4946]: I1128 10:18:08.013179 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-2bvwr"/"default-dockercfg-6bdtm" Nov 28 10:18:08 crc kubenswrapper[4946]: I1128 10:18:08.013405 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-2bvwr"/"openshift-service-ca.crt" Nov 28 10:18:08 crc kubenswrapper[4946]: I1128 10:18:08.023266 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-2bvwr/must-gather-nkw64"] Nov 28 10:18:08 crc kubenswrapper[4946]: I1128 10:18:08.072090 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxbc4\" (UniqueName: \"kubernetes.io/projected/131948ed-47c4-49e1-837c-d24eab4a8123-kube-api-access-xxbc4\") pod \"must-gather-nkw64\" (UID: \"131948ed-47c4-49e1-837c-d24eab4a8123\") " pod="openshift-must-gather-2bvwr/must-gather-nkw64" Nov 28 10:18:08 crc kubenswrapper[4946]: I1128 10:18:08.073065 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/131948ed-47c4-49e1-837c-d24eab4a8123-must-gather-output\") pod \"must-gather-nkw64\" (UID: \"131948ed-47c4-49e1-837c-d24eab4a8123\") " pod="openshift-must-gather-2bvwr/must-gather-nkw64" Nov 28 10:18:08 crc kubenswrapper[4946]: I1128 10:18:08.174868 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxbc4\" (UniqueName: \"kubernetes.io/projected/131948ed-47c4-49e1-837c-d24eab4a8123-kube-api-access-xxbc4\") pod \"must-gather-nkw64\" (UID: \"131948ed-47c4-49e1-837c-d24eab4a8123\") " pod="openshift-must-gather-2bvwr/must-gather-nkw64" Nov 28 10:18:08 crc kubenswrapper[4946]: I1128 10:18:08.174965 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/131948ed-47c4-49e1-837c-d24eab4a8123-must-gather-output\") pod \"must-gather-nkw64\" (UID: \"131948ed-47c4-49e1-837c-d24eab4a8123\") " pod="openshift-must-gather-2bvwr/must-gather-nkw64" Nov 28 10:18:08 crc kubenswrapper[4946]: I1128 10:18:08.175421 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/131948ed-47c4-49e1-837c-d24eab4a8123-must-gather-output\") pod \"must-gather-nkw64\" (UID: \"131948ed-47c4-49e1-837c-d24eab4a8123\") " pod="openshift-must-gather-2bvwr/must-gather-nkw64" Nov 28 10:18:08 crc kubenswrapper[4946]: I1128 10:18:08.192234 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxbc4\" (UniqueName: \"kubernetes.io/projected/131948ed-47c4-49e1-837c-d24eab4a8123-kube-api-access-xxbc4\") pod \"must-gather-nkw64\" (UID: \"131948ed-47c4-49e1-837c-d24eab4a8123\") " pod="openshift-must-gather-2bvwr/must-gather-nkw64" Nov 28 10:18:08 crc kubenswrapper[4946]: I1128 10:18:08.329285 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2bvwr/must-gather-nkw64" Nov 28 10:18:08 crc kubenswrapper[4946]: I1128 10:18:08.862751 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-2bvwr/must-gather-nkw64"] Nov 28 10:18:08 crc kubenswrapper[4946]: I1128 10:18:08.864891 4946 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 10:18:08 crc kubenswrapper[4946]: I1128 10:18:08.931965 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2bvwr/must-gather-nkw64" event={"ID":"131948ed-47c4-49e1-837c-d24eab4a8123","Type":"ContainerStarted","Data":"62ae5a80d1d34c5107f8918b35c980b4134d35476a97d0dc420dfe25858c85df"} Nov 28 10:18:16 crc kubenswrapper[4946]: I1128 10:18:16.006904 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2bvwr/must-gather-nkw64" event={"ID":"131948ed-47c4-49e1-837c-d24eab4a8123","Type":"ContainerStarted","Data":"603eea7566ea14411e133c4bc603052fb727c53073fc2353e90a949d2fd6b58d"} Nov 28 10:18:16 crc kubenswrapper[4946]: I1128 10:18:16.007398 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2bvwr/must-gather-nkw64" event={"ID":"131948ed-47c4-49e1-837c-d24eab4a8123","Type":"ContainerStarted","Data":"9e7094be3a39a0af544b0cd47937ea7a0321c8ce4ed9069e8d2fe774a6d38966"} Nov 28 10:18:16 crc kubenswrapper[4946]: I1128 10:18:16.061302 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-2bvwr/must-gather-nkw64" podStartSLOduration=2.8276635690000003 podStartE2EDuration="9.061274108s" podCreationTimestamp="2025-11-28 10:18:07 +0000 UTC" firstStartedPulling="2025-11-28 10:18:08.864709908 +0000 UTC m=+12343.242775019" lastFinishedPulling="2025-11-28 10:18:15.098320447 +0000 UTC m=+12349.476385558" observedRunningTime="2025-11-28 10:18:16.03870757 +0000 UTC m=+12350.416772681" watchObservedRunningTime="2025-11-28 10:18:16.061274108 +0000 UTC m=+12350.439339219" Nov 28 10:18:20 crc kubenswrapper[4946]: I1128 10:18:20.010625 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-2bvwr/crc-debug-jdxg2"] Nov 28 10:18:20 crc kubenswrapper[4946]: I1128 10:18:20.012353 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2bvwr/crc-debug-jdxg2" Nov 28 10:18:20 crc kubenswrapper[4946]: I1128 10:18:20.119710 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/db12e2d5-d930-4308-9353-a0aaecb1dcbc-host\") pod \"crc-debug-jdxg2\" (UID: \"db12e2d5-d930-4308-9353-a0aaecb1dcbc\") " pod="openshift-must-gather-2bvwr/crc-debug-jdxg2" Nov 28 10:18:20 crc kubenswrapper[4946]: I1128 10:18:20.119801 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl7pk\" (UniqueName: \"kubernetes.io/projected/db12e2d5-d930-4308-9353-a0aaecb1dcbc-kube-api-access-wl7pk\") pod \"crc-debug-jdxg2\" (UID: \"db12e2d5-d930-4308-9353-a0aaecb1dcbc\") " pod="openshift-must-gather-2bvwr/crc-debug-jdxg2" Nov 28 10:18:20 crc kubenswrapper[4946]: I1128 10:18:20.224856 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/db12e2d5-d930-4308-9353-a0aaecb1dcbc-host\") pod \"crc-debug-jdxg2\" (UID: \"db12e2d5-d930-4308-9353-a0aaecb1dcbc\") " pod="openshift-must-gather-2bvwr/crc-debug-jdxg2" Nov 28 10:18:20 crc kubenswrapper[4946]: I1128 10:18:20.224920 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl7pk\" (UniqueName: \"kubernetes.io/projected/db12e2d5-d930-4308-9353-a0aaecb1dcbc-kube-api-access-wl7pk\") pod \"crc-debug-jdxg2\" (UID: \"db12e2d5-d930-4308-9353-a0aaecb1dcbc\") " pod="openshift-must-gather-2bvwr/crc-debug-jdxg2" Nov 28 10:18:20 crc kubenswrapper[4946]: I1128 10:18:20.225057 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/db12e2d5-d930-4308-9353-a0aaecb1dcbc-host\") pod \"crc-debug-jdxg2\" (UID: \"db12e2d5-d930-4308-9353-a0aaecb1dcbc\") " pod="openshift-must-gather-2bvwr/crc-debug-jdxg2" Nov 28 10:18:20 crc kubenswrapper[4946]: I1128 10:18:20.265416 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl7pk\" (UniqueName: \"kubernetes.io/projected/db12e2d5-d930-4308-9353-a0aaecb1dcbc-kube-api-access-wl7pk\") pod \"crc-debug-jdxg2\" (UID: \"db12e2d5-d930-4308-9353-a0aaecb1dcbc\") " pod="openshift-must-gather-2bvwr/crc-debug-jdxg2" Nov 28 10:18:20 crc kubenswrapper[4946]: I1128 10:18:20.332218 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2bvwr/crc-debug-jdxg2" Nov 28 10:18:20 crc kubenswrapper[4946]: W1128 10:18:20.366970 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb12e2d5_d930_4308_9353_a0aaecb1dcbc.slice/crio-5659f3935b6344208284d6e8b88154425005d4ebb5f6b6fa626240ca6b2e8954 WatchSource:0}: Error finding container 5659f3935b6344208284d6e8b88154425005d4ebb5f6b6fa626240ca6b2e8954: Status 404 returned error can't find the container with id 5659f3935b6344208284d6e8b88154425005d4ebb5f6b6fa626240ca6b2e8954 Nov 28 10:18:21 crc kubenswrapper[4946]: I1128 10:18:21.078284 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2bvwr/crc-debug-jdxg2" event={"ID":"db12e2d5-d930-4308-9353-a0aaecb1dcbc","Type":"ContainerStarted","Data":"5659f3935b6344208284d6e8b88154425005d4ebb5f6b6fa626240ca6b2e8954"} Nov 28 10:18:31 crc kubenswrapper[4946]: I1128 10:18:31.234683 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2bvwr/crc-debug-jdxg2" event={"ID":"db12e2d5-d930-4308-9353-a0aaecb1dcbc","Type":"ContainerStarted","Data":"afac034632d7e5d3d297993c452230ddbdf3c54c831004e32b7bc07ac061cc70"} Nov 28 10:18:31 crc kubenswrapper[4946]: I1128 10:18:31.261680 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-2bvwr/crc-debug-jdxg2" podStartSLOduration=2.253400464 podStartE2EDuration="12.261653524s" podCreationTimestamp="2025-11-28 10:18:19 +0000 UTC" firstStartedPulling="2025-11-28 10:18:20.370268729 +0000 UTC m=+12354.748333840" lastFinishedPulling="2025-11-28 10:18:30.378521789 +0000 UTC m=+12364.756586900" observedRunningTime="2025-11-28 10:18:31.252552799 +0000 UTC m=+12365.630617920" watchObservedRunningTime="2025-11-28 10:18:31.261653524 +0000 UTC m=+12365.639718675" Nov 28 10:18:54 crc kubenswrapper[4946]: I1128 10:18:54.730866 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 10:18:54 crc kubenswrapper[4946]: I1128 10:18:54.731430 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 10:19:12 crc kubenswrapper[4946]: I1128 10:19:12.753660 4946 generic.go:334] "Generic (PLEG): container finished" podID="db12e2d5-d930-4308-9353-a0aaecb1dcbc" containerID="afac034632d7e5d3d297993c452230ddbdf3c54c831004e32b7bc07ac061cc70" exitCode=0 Nov 28 10:19:12 crc kubenswrapper[4946]: I1128 10:19:12.753739 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2bvwr/crc-debug-jdxg2" event={"ID":"db12e2d5-d930-4308-9353-a0aaecb1dcbc","Type":"ContainerDied","Data":"afac034632d7e5d3d297993c452230ddbdf3c54c831004e32b7bc07ac061cc70"} Nov 28 10:19:13 crc kubenswrapper[4946]: I1128 10:19:13.870320 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2bvwr/crc-debug-jdxg2" Nov 28 10:19:13 crc kubenswrapper[4946]: I1128 10:19:13.911180 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-2bvwr/crc-debug-jdxg2"] Nov 28 10:19:13 crc kubenswrapper[4946]: I1128 10:19:13.922318 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-2bvwr/crc-debug-jdxg2"] Nov 28 10:19:13 crc kubenswrapper[4946]: I1128 10:19:13.966891 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/db12e2d5-d930-4308-9353-a0aaecb1dcbc-host\") pod \"db12e2d5-d930-4308-9353-a0aaecb1dcbc\" (UID: \"db12e2d5-d930-4308-9353-a0aaecb1dcbc\") " Nov 28 10:19:13 crc kubenswrapper[4946]: I1128 10:19:13.967042 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db12e2d5-d930-4308-9353-a0aaecb1dcbc-host" (OuterVolumeSpecName: "host") pod "db12e2d5-d930-4308-9353-a0aaecb1dcbc" (UID: "db12e2d5-d930-4308-9353-a0aaecb1dcbc"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 10:19:13 crc kubenswrapper[4946]: I1128 10:19:13.967119 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wl7pk\" (UniqueName: \"kubernetes.io/projected/db12e2d5-d930-4308-9353-a0aaecb1dcbc-kube-api-access-wl7pk\") pod \"db12e2d5-d930-4308-9353-a0aaecb1dcbc\" (UID: \"db12e2d5-d930-4308-9353-a0aaecb1dcbc\") " Nov 28 10:19:13 crc kubenswrapper[4946]: I1128 10:19:13.968373 4946 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/db12e2d5-d930-4308-9353-a0aaecb1dcbc-host\") on node \"crc\" DevicePath \"\"" Nov 28 10:19:13 crc kubenswrapper[4946]: I1128 10:19:13.976242 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db12e2d5-d930-4308-9353-a0aaecb1dcbc-kube-api-access-wl7pk" (OuterVolumeSpecName: "kube-api-access-wl7pk") pod "db12e2d5-d930-4308-9353-a0aaecb1dcbc" (UID: "db12e2d5-d930-4308-9353-a0aaecb1dcbc"). InnerVolumeSpecName "kube-api-access-wl7pk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 10:19:14 crc kubenswrapper[4946]: I1128 10:19:14.010606 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db12e2d5-d930-4308-9353-a0aaecb1dcbc" path="/var/lib/kubelet/pods/db12e2d5-d930-4308-9353-a0aaecb1dcbc/volumes" Nov 28 10:19:14 crc kubenswrapper[4946]: I1128 10:19:14.070969 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wl7pk\" (UniqueName: \"kubernetes.io/projected/db12e2d5-d930-4308-9353-a0aaecb1dcbc-kube-api-access-wl7pk\") on node \"crc\" DevicePath \"\"" Nov 28 10:19:14 crc kubenswrapper[4946]: I1128 10:19:14.774671 4946 scope.go:117] "RemoveContainer" containerID="afac034632d7e5d3d297993c452230ddbdf3c54c831004e32b7bc07ac061cc70" Nov 28 10:19:14 crc kubenswrapper[4946]: I1128 10:19:14.774725 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2bvwr/crc-debug-jdxg2" Nov 28 10:19:15 crc kubenswrapper[4946]: I1128 10:19:15.095317 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-2bvwr/crc-debug-qxtss"] Nov 28 10:19:15 crc kubenswrapper[4946]: E1128 10:19:15.095999 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db12e2d5-d930-4308-9353-a0aaecb1dcbc" containerName="container-00" Nov 28 10:19:15 crc kubenswrapper[4946]: I1128 10:19:15.096012 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="db12e2d5-d930-4308-9353-a0aaecb1dcbc" containerName="container-00" Nov 28 10:19:15 crc kubenswrapper[4946]: I1128 10:19:15.096235 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="db12e2d5-d930-4308-9353-a0aaecb1dcbc" containerName="container-00" Nov 28 10:19:15 crc kubenswrapper[4946]: I1128 10:19:15.096907 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2bvwr/crc-debug-qxtss" Nov 28 10:19:15 crc kubenswrapper[4946]: I1128 10:19:15.193939 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkpn9\" (UniqueName: \"kubernetes.io/projected/a4b364ef-5b94-49ad-a40a-bcc49370ebe2-kube-api-access-zkpn9\") pod \"crc-debug-qxtss\" (UID: \"a4b364ef-5b94-49ad-a40a-bcc49370ebe2\") " pod="openshift-must-gather-2bvwr/crc-debug-qxtss" Nov 28 10:19:15 crc kubenswrapper[4946]: I1128 10:19:15.194166 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a4b364ef-5b94-49ad-a40a-bcc49370ebe2-host\") pod \"crc-debug-qxtss\" (UID: \"a4b364ef-5b94-49ad-a40a-bcc49370ebe2\") " pod="openshift-must-gather-2bvwr/crc-debug-qxtss" Nov 28 10:19:15 crc kubenswrapper[4946]: I1128 10:19:15.295850 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkpn9\" (UniqueName: \"kubernetes.io/projected/a4b364ef-5b94-49ad-a40a-bcc49370ebe2-kube-api-access-zkpn9\") pod \"crc-debug-qxtss\" (UID: \"a4b364ef-5b94-49ad-a40a-bcc49370ebe2\") " pod="openshift-must-gather-2bvwr/crc-debug-qxtss" Nov 28 10:19:15 crc kubenswrapper[4946]: I1128 10:19:15.296618 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a4b364ef-5b94-49ad-a40a-bcc49370ebe2-host\") pod \"crc-debug-qxtss\" (UID: \"a4b364ef-5b94-49ad-a40a-bcc49370ebe2\") " pod="openshift-must-gather-2bvwr/crc-debug-qxtss" Nov 28 10:19:15 crc kubenswrapper[4946]: I1128 10:19:15.296728 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a4b364ef-5b94-49ad-a40a-bcc49370ebe2-host\") pod \"crc-debug-qxtss\" (UID: \"a4b364ef-5b94-49ad-a40a-bcc49370ebe2\") " pod="openshift-must-gather-2bvwr/crc-debug-qxtss" Nov 28 10:19:15 crc kubenswrapper[4946]: I1128 10:19:15.318202 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkpn9\" (UniqueName: \"kubernetes.io/projected/a4b364ef-5b94-49ad-a40a-bcc49370ebe2-kube-api-access-zkpn9\") pod \"crc-debug-qxtss\" (UID: \"a4b364ef-5b94-49ad-a40a-bcc49370ebe2\") " pod="openshift-must-gather-2bvwr/crc-debug-qxtss" Nov 28 10:19:15 crc kubenswrapper[4946]: I1128 10:19:15.423306 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2bvwr/crc-debug-qxtss" Nov 28 10:19:15 crc kubenswrapper[4946]: I1128 10:19:15.784991 4946 generic.go:334] "Generic (PLEG): container finished" podID="a4b364ef-5b94-49ad-a40a-bcc49370ebe2" containerID="be2affc3b48e7fc34bab159cff08fec8ae65bbc29ae0a6d6bc728bb0c8a001cf" exitCode=0 Nov 28 10:19:15 crc kubenswrapper[4946]: I1128 10:19:15.785071 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2bvwr/crc-debug-qxtss" event={"ID":"a4b364ef-5b94-49ad-a40a-bcc49370ebe2","Type":"ContainerDied","Data":"be2affc3b48e7fc34bab159cff08fec8ae65bbc29ae0a6d6bc728bb0c8a001cf"} Nov 28 10:19:15 crc kubenswrapper[4946]: I1128 10:19:15.785283 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2bvwr/crc-debug-qxtss" event={"ID":"a4b364ef-5b94-49ad-a40a-bcc49370ebe2","Type":"ContainerStarted","Data":"4a2ae66421f1d535e85969102981a1f33827445ba06a5a60776dca4d3e62bf29"} Nov 28 10:19:16 crc kubenswrapper[4946]: I1128 10:19:16.499440 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-2bvwr/crc-debug-qxtss"] Nov 28 10:19:16 crc kubenswrapper[4946]: I1128 10:19:16.508265 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-2bvwr/crc-debug-qxtss"] Nov 28 10:19:16 crc kubenswrapper[4946]: I1128 10:19:16.907019 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2bvwr/crc-debug-qxtss" Nov 28 10:19:17 crc kubenswrapper[4946]: I1128 10:19:17.031324 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkpn9\" (UniqueName: \"kubernetes.io/projected/a4b364ef-5b94-49ad-a40a-bcc49370ebe2-kube-api-access-zkpn9\") pod \"a4b364ef-5b94-49ad-a40a-bcc49370ebe2\" (UID: \"a4b364ef-5b94-49ad-a40a-bcc49370ebe2\") " Nov 28 10:19:17 crc kubenswrapper[4946]: I1128 10:19:17.031746 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a4b364ef-5b94-49ad-a40a-bcc49370ebe2-host\") pod \"a4b364ef-5b94-49ad-a40a-bcc49370ebe2\" (UID: \"a4b364ef-5b94-49ad-a40a-bcc49370ebe2\") " Nov 28 10:19:17 crc kubenswrapper[4946]: I1128 10:19:17.031868 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4b364ef-5b94-49ad-a40a-bcc49370ebe2-host" (OuterVolumeSpecName: "host") pod "a4b364ef-5b94-49ad-a40a-bcc49370ebe2" (UID: "a4b364ef-5b94-49ad-a40a-bcc49370ebe2"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 10:19:17 crc kubenswrapper[4946]: I1128 10:19:17.032631 4946 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a4b364ef-5b94-49ad-a40a-bcc49370ebe2-host\") on node \"crc\" DevicePath \"\"" Nov 28 10:19:17 crc kubenswrapper[4946]: I1128 10:19:17.044779 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4b364ef-5b94-49ad-a40a-bcc49370ebe2-kube-api-access-zkpn9" (OuterVolumeSpecName: "kube-api-access-zkpn9") pod "a4b364ef-5b94-49ad-a40a-bcc49370ebe2" (UID: "a4b364ef-5b94-49ad-a40a-bcc49370ebe2"). InnerVolumeSpecName "kube-api-access-zkpn9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 10:19:17 crc kubenswrapper[4946]: I1128 10:19:17.134110 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkpn9\" (UniqueName: \"kubernetes.io/projected/a4b364ef-5b94-49ad-a40a-bcc49370ebe2-kube-api-access-zkpn9\") on node \"crc\" DevicePath \"\"" Nov 28 10:19:17 crc kubenswrapper[4946]: I1128 10:19:17.724110 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-2bvwr/crc-debug-hslbr"] Nov 28 10:19:17 crc kubenswrapper[4946]: E1128 10:19:17.724888 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4b364ef-5b94-49ad-a40a-bcc49370ebe2" containerName="container-00" Nov 28 10:19:17 crc kubenswrapper[4946]: I1128 10:19:17.724906 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4b364ef-5b94-49ad-a40a-bcc49370ebe2" containerName="container-00" Nov 28 10:19:17 crc kubenswrapper[4946]: I1128 10:19:17.725196 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4b364ef-5b94-49ad-a40a-bcc49370ebe2" containerName="container-00" Nov 28 10:19:17 crc kubenswrapper[4946]: I1128 10:19:17.726137 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2bvwr/crc-debug-hslbr" Nov 28 10:19:17 crc kubenswrapper[4946]: I1128 10:19:17.814625 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a2ae66421f1d535e85969102981a1f33827445ba06a5a60776dca4d3e62bf29" Nov 28 10:19:17 crc kubenswrapper[4946]: I1128 10:19:17.814994 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2bvwr/crc-debug-qxtss" Nov 28 10:19:17 crc kubenswrapper[4946]: I1128 10:19:17.848945 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6wlp\" (UniqueName: \"kubernetes.io/projected/6654607a-6f62-44ba-a4a8-e59152a57571-kube-api-access-k6wlp\") pod \"crc-debug-hslbr\" (UID: \"6654607a-6f62-44ba-a4a8-e59152a57571\") " pod="openshift-must-gather-2bvwr/crc-debug-hslbr" Nov 28 10:19:17 crc kubenswrapper[4946]: I1128 10:19:17.849255 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6654607a-6f62-44ba-a4a8-e59152a57571-host\") pod \"crc-debug-hslbr\" (UID: \"6654607a-6f62-44ba-a4a8-e59152a57571\") " pod="openshift-must-gather-2bvwr/crc-debug-hslbr" Nov 28 10:19:17 crc kubenswrapper[4946]: I1128 10:19:17.951267 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6wlp\" (UniqueName: \"kubernetes.io/projected/6654607a-6f62-44ba-a4a8-e59152a57571-kube-api-access-k6wlp\") pod \"crc-debug-hslbr\" (UID: \"6654607a-6f62-44ba-a4a8-e59152a57571\") " pod="openshift-must-gather-2bvwr/crc-debug-hslbr" Nov 28 10:19:17 crc kubenswrapper[4946]: I1128 10:19:17.951919 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6654607a-6f62-44ba-a4a8-e59152a57571-host\") pod \"crc-debug-hslbr\" (UID: \"6654607a-6f62-44ba-a4a8-e59152a57571\") " pod="openshift-must-gather-2bvwr/crc-debug-hslbr" Nov 28 10:19:17 crc kubenswrapper[4946]: I1128 10:19:17.952082 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6654607a-6f62-44ba-a4a8-e59152a57571-host\") pod \"crc-debug-hslbr\" (UID: \"6654607a-6f62-44ba-a4a8-e59152a57571\") " 
pod="openshift-must-gather-2bvwr/crc-debug-hslbr" Nov 28 10:19:17 crc kubenswrapper[4946]: I1128 10:19:17.973611 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6wlp\" (UniqueName: \"kubernetes.io/projected/6654607a-6f62-44ba-a4a8-e59152a57571-kube-api-access-k6wlp\") pod \"crc-debug-hslbr\" (UID: \"6654607a-6f62-44ba-a4a8-e59152a57571\") " pod="openshift-must-gather-2bvwr/crc-debug-hslbr" Nov 28 10:19:18 crc kubenswrapper[4946]: I1128 10:19:18.002448 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4b364ef-5b94-49ad-a40a-bcc49370ebe2" path="/var/lib/kubelet/pods/a4b364ef-5b94-49ad-a40a-bcc49370ebe2/volumes" Nov 28 10:19:18 crc kubenswrapper[4946]: I1128 10:19:18.059221 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2bvwr/crc-debug-hslbr" Nov 28 10:19:18 crc kubenswrapper[4946]: W1128 10:19:18.100029 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6654607a_6f62_44ba_a4a8_e59152a57571.slice/crio-d85b6e2a94caf9ecca17c1ee27d3e44e445b8a0dfe57f9418ead1860cd36bdb7 WatchSource:0}: Error finding container d85b6e2a94caf9ecca17c1ee27d3e44e445b8a0dfe57f9418ead1860cd36bdb7: Status 404 returned error can't find the container with id d85b6e2a94caf9ecca17c1ee27d3e44e445b8a0dfe57f9418ead1860cd36bdb7 Nov 28 10:19:18 crc kubenswrapper[4946]: I1128 10:19:18.830747 4946 generic.go:334] "Generic (PLEG): container finished" podID="6654607a-6f62-44ba-a4a8-e59152a57571" containerID="7c5d2660ed8c20895bc6890d73fac38ef421c0ea355fbd72a47b7afd194d44cb" exitCode=0 Nov 28 10:19:18 crc kubenswrapper[4946]: I1128 10:19:18.830814 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2bvwr/crc-debug-hslbr" event={"ID":"6654607a-6f62-44ba-a4a8-e59152a57571","Type":"ContainerDied","Data":"7c5d2660ed8c20895bc6890d73fac38ef421c0ea355fbd72a47b7afd194d44cb"} Nov 28 10:19:18 crc kubenswrapper[4946]: I1128 10:19:18.831085 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2bvwr/crc-debug-hslbr" event={"ID":"6654607a-6f62-44ba-a4a8-e59152a57571","Type":"ContainerStarted","Data":"d85b6e2a94caf9ecca17c1ee27d3e44e445b8a0dfe57f9418ead1860cd36bdb7"} Nov 28 10:19:18 crc kubenswrapper[4946]: I1128 10:19:18.874848 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-2bvwr/crc-debug-hslbr"] Nov 28 10:19:18 crc kubenswrapper[4946]: I1128 10:19:18.888231 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-2bvwr/crc-debug-hslbr"] Nov 28 10:19:19 crc kubenswrapper[4946]: I1128 10:19:19.974529 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2bvwr/crc-debug-hslbr" Nov 28 10:19:20 crc kubenswrapper[4946]: I1128 10:19:20.099957 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6wlp\" (UniqueName: \"kubernetes.io/projected/6654607a-6f62-44ba-a4a8-e59152a57571-kube-api-access-k6wlp\") pod \"6654607a-6f62-44ba-a4a8-e59152a57571\" (UID: \"6654607a-6f62-44ba-a4a8-e59152a57571\") " Nov 28 10:19:20 crc kubenswrapper[4946]: I1128 10:19:20.100030 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6654607a-6f62-44ba-a4a8-e59152a57571-host\") pod \"6654607a-6f62-44ba-a4a8-e59152a57571\" (UID: \"6654607a-6f62-44ba-a4a8-e59152a57571\") " Nov 28 10:19:20 crc kubenswrapper[4946]: I1128 10:19:20.100734 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6654607a-6f62-44ba-a4a8-e59152a57571-host" (OuterVolumeSpecName: "host") pod "6654607a-6f62-44ba-a4a8-e59152a57571" (UID: "6654607a-6f62-44ba-a4a8-e59152a57571"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 10:19:20 crc kubenswrapper[4946]: I1128 10:19:20.105117 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6654607a-6f62-44ba-a4a8-e59152a57571-kube-api-access-k6wlp" (OuterVolumeSpecName: "kube-api-access-k6wlp") pod "6654607a-6f62-44ba-a4a8-e59152a57571" (UID: "6654607a-6f62-44ba-a4a8-e59152a57571"). InnerVolumeSpecName "kube-api-access-k6wlp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 10:19:20 crc kubenswrapper[4946]: I1128 10:19:20.202812 4946 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6654607a-6f62-44ba-a4a8-e59152a57571-host\") on node \"crc\" DevicePath \"\"" Nov 28 10:19:20 crc kubenswrapper[4946]: I1128 10:19:20.202853 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6wlp\" (UniqueName: \"kubernetes.io/projected/6654607a-6f62-44ba-a4a8-e59152a57571-kube-api-access-k6wlp\") on node \"crc\" DevicePath \"\"" Nov 28 10:19:20 crc kubenswrapper[4946]: I1128 10:19:20.858611 4946 scope.go:117] "RemoveContainer" containerID="7c5d2660ed8c20895bc6890d73fac38ef421c0ea355fbd72a47b7afd194d44cb" Nov 28 10:19:20 crc kubenswrapper[4946]: I1128 10:19:20.858666 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2bvwr/crc-debug-hslbr" Nov 28 10:19:22 crc kubenswrapper[4946]: I1128 10:19:22.012816 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6654607a-6f62-44ba-a4a8-e59152a57571" path="/var/lib/kubelet/pods/6654607a-6f62-44ba-a4a8-e59152a57571/volumes" Nov 28 10:19:24 crc kubenswrapper[4946]: I1128 10:19:24.730452 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 10:19:24 crc kubenswrapper[4946]: I1128 10:19:24.732322 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 10:19:54 crc kubenswrapper[4946]: I1128 10:19:54.730558 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 10:19:54 crc kubenswrapper[4946]: I1128 10:19:54.731143 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 10:19:54 crc kubenswrapper[4946]: I1128 10:19:54.731222 4946 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" Nov 28 10:19:54 crc kubenswrapper[4946]: I1128 10:19:54.732382 4946 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"38a9f5347a2073fa4297c759e731174d04a4e640bf09f416b3f764353bbe2b18"} pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 10:19:54 crc kubenswrapper[4946]: I1128 10:19:54.732528 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" containerID="cri-o://38a9f5347a2073fa4297c759e731174d04a4e640bf09f416b3f764353bbe2b18" gracePeriod=600 Nov 28 10:19:54 crc kubenswrapper[4946]: E1128 10:19:54.865521 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 10:19:55 crc kubenswrapper[4946]: I1128 10:19:55.352113 4946 generic.go:334] "Generic (PLEG): container finished" podID="7450befc-262f-45d1-a5f4-f445e540185b" 
containerID="38a9f5347a2073fa4297c759e731174d04a4e640bf09f416b3f764353bbe2b18" exitCode=0 Nov 28 10:19:55 crc kubenswrapper[4946]: I1128 10:19:55.352221 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerDied","Data":"38a9f5347a2073fa4297c759e731174d04a4e640bf09f416b3f764353bbe2b18"} Nov 28 10:19:55 crc kubenswrapper[4946]: I1128 10:19:55.352334 4946 scope.go:117] "RemoveContainer" containerID="ffde5dbc4752bb52de3ffee0d7f0302a34a211bf129849b6cf44262925feacbd" Nov 28 10:19:55 crc kubenswrapper[4946]: I1128 10:19:55.353363 4946 scope.go:117] "RemoveContainer" containerID="38a9f5347a2073fa4297c759e731174d04a4e640bf09f416b3f764353bbe2b18" Nov 28 10:19:55 crc kubenswrapper[4946]: E1128 10:19:55.353828 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 10:20:10 crc kubenswrapper[4946]: I1128 10:20:10.990206 4946 scope.go:117] "RemoveContainer" containerID="38a9f5347a2073fa4297c759e731174d04a4e640bf09f416b3f764353bbe2b18" Nov 28 10:20:10 crc kubenswrapper[4946]: E1128 10:20:10.991146 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 10:20:26 crc kubenswrapper[4946]: I1128 10:20:26.001044 4946 scope.go:117] "RemoveContainer" containerID="38a9f5347a2073fa4297c759e731174d04a4e640bf09f416b3f764353bbe2b18" Nov 28 10:20:26 crc kubenswrapper[4946]: E1128 10:20:26.002269 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 10:20:38 crc kubenswrapper[4946]: I1128 10:20:38.990289 4946 scope.go:117] "RemoveContainer" containerID="38a9f5347a2073fa4297c759e731174d04a4e640bf09f416b3f764353bbe2b18" Nov 28 10:20:38 crc kubenswrapper[4946]: E1128 10:20:38.991006 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 10:20:51 crc kubenswrapper[4946]: I1128 10:20:51.989807 4946 scope.go:117] "RemoveContainer" containerID="38a9f5347a2073fa4297c759e731174d04a4e640bf09f416b3f764353bbe2b18" Nov 28 10:20:51 crc kubenswrapper[4946]: E1128 10:20:51.990512 4946 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 10:21:06 crc kubenswrapper[4946]: I1128 10:21:06.000176 4946 scope.go:117] "RemoveContainer" containerID="38a9f5347a2073fa4297c759e731174d04a4e640bf09f416b3f764353bbe2b18" Nov 28 10:21:06 crc kubenswrapper[4946]: E1128 10:21:06.002098 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 10:21:20 crc kubenswrapper[4946]: I1128 10:21:20.990141 4946 scope.go:117] "RemoveContainer" containerID="38a9f5347a2073fa4297c759e731174d04a4e640bf09f416b3f764353bbe2b18" Nov 28 10:21:20 crc kubenswrapper[4946]: E1128 10:21:20.990944 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 10:21:33 crc kubenswrapper[4946]: I1128 10:21:33.991036 4946 scope.go:117] "RemoveContainer" containerID="38a9f5347a2073fa4297c759e731174d04a4e640bf09f416b3f764353bbe2b18" Nov 28 10:21:33 crc kubenswrapper[4946]: E1128 10:21:33.992049 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 10:21:44 crc kubenswrapper[4946]: I1128 10:21:44.989984 4946 scope.go:117] "RemoveContainer" containerID="38a9f5347a2073fa4297c759e731174d04a4e640bf09f416b3f764353bbe2b18" Nov 28 10:21:44 crc kubenswrapper[4946]: E1128 10:21:44.991034 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 10:21:56 crc kubenswrapper[4946]: I1128 10:21:56.990851 4946 scope.go:117] "RemoveContainer" containerID="38a9f5347a2073fa4297c759e731174d04a4e640bf09f416b3f764353bbe2b18" Nov 28 10:21:56 crc kubenswrapper[4946]: E1128 10:21:56.992124 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 10:22:08 crc kubenswrapper[4946]: I1128 10:22:08.989978 4946 scope.go:117] "RemoveContainer" containerID="38a9f5347a2073fa4297c759e731174d04a4e640bf09f416b3f764353bbe2b18" Nov 28 10:22:08 crc kubenswrapper[4946]: E1128 10:22:08.991025 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 10:22:22 crc kubenswrapper[4946]: I1128 10:22:22.990182 4946 scope.go:117] "RemoveContainer" containerID="38a9f5347a2073fa4297c759e731174d04a4e640bf09f416b3f764353bbe2b18" Nov 28 10:22:22 crc kubenswrapper[4946]: E1128 10:22:22.991415 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 10:22:36 crc kubenswrapper[4946]: I1128 10:22:36.992042 4946 scope.go:117] "RemoveContainer" containerID="38a9f5347a2073fa4297c759e731174d04a4e640bf09f416b3f764353bbe2b18" Nov 28 10:22:36 crc kubenswrapper[4946]: E1128 10:22:36.995340 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 10:22:44 crc kubenswrapper[4946]: I1128 10:22:44.321369 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_d418295b-f6d4-4ff4-9245-09df12057df7/init-config-reloader/0.log" Nov 28 10:22:44 crc kubenswrapper[4946]: I1128 10:22:44.549894 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_d418295b-f6d4-4ff4-9245-09df12057df7/config-reloader/0.log" Nov 28 10:22:44 crc kubenswrapper[4946]: I1128 10:22:44.567681 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_d418295b-f6d4-4ff4-9245-09df12057df7/init-config-reloader/0.log" Nov 28 10:22:44 crc kubenswrapper[4946]: I1128 10:22:44.577518 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_d418295b-f6d4-4ff4-9245-09df12057df7/alertmanager/0.log" Nov 28 10:22:44 crc kubenswrapper[4946]: I1128 10:22:44.766189 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_c4fafb80-f462-4000-a1c1-ba8405b342cc/aodh-api/0.log" Nov 28 10:22:44 crc kubenswrapper[4946]: I1128 10:22:44.771283 4946 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_aodh-0_c4fafb80-f462-4000-a1c1-ba8405b342cc/aodh-listener/0.log" Nov 28 10:22:44 crc kubenswrapper[4946]: I1128 10:22:44.772707 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_c4fafb80-f462-4000-a1c1-ba8405b342cc/aodh-evaluator/0.log" Nov 28 10:22:44 crc kubenswrapper[4946]: I1128 10:22:44.845073 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_c4fafb80-f462-4000-a1c1-ba8405b342cc/aodh-notifier/0.log" Nov 28 10:22:44 crc kubenswrapper[4946]: I1128 10:22:44.957235 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5dfc584b48-xv5w9_409ff561-0177-4450-b505-7225027f0b06/barbican-api/0.log" Nov 28 10:22:44 crc kubenswrapper[4946]: I1128 10:22:44.985540 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5dfc584b48-xv5w9_409ff561-0177-4450-b505-7225027f0b06/barbican-api-log/0.log" Nov 28 10:22:45 crc kubenswrapper[4946]: I1128 10:22:45.168190 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6968f5d7bb-x689b_04af3580-115f-4b0e-a549-b66d28ccce66/barbican-keystone-listener/0.log" Nov 28 10:22:45 crc kubenswrapper[4946]: I1128 10:22:45.258941 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-77b5dd78d9-268ll_f5194021-7a69-42d3-8a19-8bfb471434db/barbican-worker/0.log" Nov 28 10:22:45 crc kubenswrapper[4946]: I1128 10:22:45.454979 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-77b5dd78d9-268ll_f5194021-7a69-42d3-8a19-8bfb471434db/barbican-worker-log/0.log" Nov 28 10:22:45 crc kubenswrapper[4946]: I1128 10:22:45.577730 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-openstack-openstack-cell1-hrvm8_d67930c7-a5cb-4b90-a00f-f2494590d922/bootstrap-openstack-openstack-cell1/0.log" Nov 28 10:22:45 crc kubenswrapper[4946]: I1128 10:22:45.773675 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-openstack-openstack-networker-vf8r5_bdc6a7ec-cc6e-4f35-a669-c9a058e4cc74/bootstrap-openstack-openstack-networker/0.log" Nov 28 10:22:45 crc kubenswrapper[4946]: I1128 10:22:45.853135 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6968f5d7bb-x689b_04af3580-115f-4b0e-a549-b66d28ccce66/barbican-keystone-listener-log/0.log" Nov 28 10:22:45 crc kubenswrapper[4946]: I1128 10:22:45.944523 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_7353f743-3daf-4cfd-bbbd-c3ac503c1161/ceilometer-central-agent/0.log" Nov 28 10:22:46 crc kubenswrapper[4946]: I1128 10:22:46.018144 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_7353f743-3daf-4cfd-bbbd-c3ac503c1161/ceilometer-notification-agent/0.log" Nov 28 10:22:46 crc kubenswrapper[4946]: I1128 10:22:46.050072 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_7353f743-3daf-4cfd-bbbd-c3ac503c1161/proxy-httpd/0.log" Nov 28 10:22:46 crc kubenswrapper[4946]: I1128 10:22:46.075491 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_7353f743-3daf-4cfd-bbbd-c3ac503c1161/sg-core/0.log" Nov 28 10:22:46 crc kubenswrapper[4946]: I1128 10:22:46.202205 4946 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceph-client-openstack-openstack-cell1-4sfrp_8b27ef5d-f120-4a55-85ab-1299d601e069/ceph-client-openstack-openstack-cell1/0.log" Nov 28 10:22:46 crc kubenswrapper[4946]: I1128 10:22:46.470638 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_425d34b3-faf7-4523-99e6-88dcf41ed3c6/cinder-api/0.log" Nov 28 10:22:46 crc kubenswrapper[4946]: I1128 10:22:46.782010 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_e1ca140e-917e-4491-89e9-b5d62997fb8f/probe/0.log" Nov 28 10:22:46 crc kubenswrapper[4946]: I1128 10:22:46.790620 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_425d34b3-faf7-4523-99e6-88dcf41ed3c6/cinder-api-log/0.log" Nov 28 10:22:46 crc kubenswrapper[4946]: I1128 10:22:46.963554 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_5e89ce1f-b6c8-4cb9-ad32-ded744cb7f2d/cinder-scheduler/0.log" Nov 28 10:22:47 crc kubenswrapper[4946]: I1128 10:22:47.088132 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_5e89ce1f-b6c8-4cb9-ad32-ded744cb7f2d/probe/0.log" Nov 28 10:22:47 crc kubenswrapper[4946]: I1128 10:22:47.452936 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_56eca230-e485-44c4-85df-36d401200197/probe/0.log" Nov 28 10:22:47 crc kubenswrapper[4946]: I1128 10:22:47.539621 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-openstack-openstack-cell1-hp2dx_18ddb878-91b4-42ad-b516-fb436b5ecf2a/configure-network-openstack-openstack-cell1/0.log" Nov 28 10:22:47 crc kubenswrapper[4946]: I1128 10:22:47.545940 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_e1ca140e-917e-4491-89e9-b5d62997fb8f/cinder-backup/0.log" Nov 28 10:22:47 crc kubenswrapper[4946]: I1128 10:22:47.807567 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-openstack-openstack-networker-h7dbw_a8f17537-bfd9-4581-a146-8582ca24195c/configure-network-openstack-openstack-networker/0.log" Nov 28 10:22:47 crc kubenswrapper[4946]: I1128 10:22:47.957658 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-openstack-openstack-cell1-gcsc9_ec131430-238f-4a5c-8c02-f05d2b07e60e/configure-os-openstack-openstack-cell1/0.log" Nov 28 10:22:47 crc kubenswrapper[4946]: I1128 10:22:47.990173 4946 scope.go:117] "RemoveContainer" containerID="38a9f5347a2073fa4297c759e731174d04a4e640bf09f416b3f764353bbe2b18" Nov 28 10:22:47 crc kubenswrapper[4946]: E1128 10:22:47.990433 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 10:22:48 crc kubenswrapper[4946]: I1128 10:22:48.081629 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-openstack-openstack-networker-kmdc6_bde3f71a-f3d1-4dff-80fd-844623944482/configure-os-openstack-openstack-networker/0.log" Nov 28 10:22:48 crc kubenswrapper[4946]: I1128 10:22:48.222046 4946 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-675bd7f5bc-6xj4d_af25ba3e-ed08-4184-baee-857db122755c/init/0.log" Nov 28 10:22:48 crc kubenswrapper[4946]: I1128 10:22:48.678413 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-675bd7f5bc-6xj4d_af25ba3e-ed08-4184-baee-857db122755c/init/0.log" Nov 28 10:22:48 crc kubenswrapper[4946]: I1128 10:22:48.814914 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-openstack-openstack-cell1-vdqgc_82f88c64-bdae-4b41-a21a-e9bcae8aac99/download-cache-openstack-openstack-cell1/0.log" Nov 28 10:22:48 crc kubenswrapper[4946]: I1128 10:22:48.962652 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-675bd7f5bc-6xj4d_af25ba3e-ed08-4184-baee-857db122755c/dnsmasq-dns/0.log" Nov 28 10:22:49 crc kubenswrapper[4946]: I1128 10:22:49.020763 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_56eca230-e485-44c4-85df-36d401200197/cinder-volume/0.log" Nov 28 10:22:49 crc kubenswrapper[4946]: I1128 10:22:49.080625 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-openstack-openstack-networker-b5nt7_bed38fc8-f019-4416-b82d-f86696d869be/download-cache-openstack-openstack-networker/0.log" Nov 28 10:22:49 crc kubenswrapper[4946]: I1128 10:22:49.186595 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_cbc7140a-4615-45a9-9f5d-6df5637a64cb/glance-httpd/0.log" Nov 28 10:22:49 crc kubenswrapper[4946]: I1128 10:22:49.230053 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_cbc7140a-4615-45a9-9f5d-6df5637a64cb/glance-log/0.log" Nov 28 10:22:49 crc kubenswrapper[4946]: I1128 10:22:49.358118 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_ded096e4-45dd-4d70-adf7-4beff00d6662/glance-log/0.log" Nov 28 10:22:49 crc kubenswrapper[4946]: I1128 10:22:49.493532 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_ded096e4-45dd-4d70-adf7-4beff00d6662/glance-httpd/0.log" Nov 28 10:22:49 crc kubenswrapper[4946]: I1128 10:22:49.558809 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-6789f5c976-5lgn4_f3961bc3-826e-42e6-96cf-4d6c454dde1e/heat-api/0.log" Nov 28 10:22:49 crc kubenswrapper[4946]: I1128 10:22:49.695919 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-646c654b76-5l7fn_dc1c1a72-5fe7-467b-8309-61664e19d710/heat-cfnapi/0.log" Nov 28 10:22:49 crc kubenswrapper[4946]: I1128 10:22:49.721247 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-5b8b49c7cc-tdcbd_81f69514-2093-4cb7-a7aa-62ba6da948fa/heat-engine/0.log" Nov 28 10:22:49 crc kubenswrapper[4946]: I1128 10:22:49.904372 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-f6dfdcfff-hv9rg_245cf95b-ff97-4971-bc3b-5bd178679d4a/horizon/0.log" Nov 28 10:22:49 crc kubenswrapper[4946]: I1128 10:22:49.970736 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-openstack-openstack-cell1-qkmnt_72059c4b-03bd-4ec9-ba8a-5bca8ee58f68/install-certs-openstack-openstack-cell1/0.log" Nov 28 10:22:50 crc kubenswrapper[4946]: I1128 10:22:50.024649 4946 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_horizon-f6dfdcfff-hv9rg_245cf95b-ff97-4971-bc3b-5bd178679d4a/horizon-log/0.log" Nov 28 10:22:50 crc kubenswrapper[4946]: I1128 10:22:50.142766 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-openstack-openstack-networker-b8hjj_ca68a9c2-7810-4c5c-9614-3aa70b0dff7f/install-certs-openstack-openstack-networker/0.log" Nov 28 10:22:50 crc kubenswrapper[4946]: I1128 10:22:50.212483 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-openstack-openstack-cell1-5vm8c_78f78de8-2e26-47fe-b56f-4c4b8ee76058/install-os-openstack-openstack-cell1/0.log" Nov 28 10:22:50 crc kubenswrapper[4946]: I1128 10:22:50.329286 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-openstack-openstack-networker-jkjpm_01c5528b-23a0-42cb-b0d8-37daec0b4ccf/install-os-openstack-openstack-networker/0.log" Nov 28 10:22:50 crc kubenswrapper[4946]: I1128 10:22:50.541811 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29405341-p65t2_ce89c0ed-ea08-4de6-a7d5-4cc1887aeb95/keystone-cron/0.log" Nov 28 10:22:50 crc kubenswrapper[4946]: I1128 10:22:50.549495 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29405401-xhftw_ff17d683-e6a0-4b28-97e2-256e71ee0a7e/keystone-cron/0.log" Nov 28 10:22:50 crc kubenswrapper[4946]: I1128 10:22:50.746776 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_6fa41e94-689e-4339-ad9c-43e4a91cddc2/kube-state-metrics/0.log" Nov 28 10:22:50 crc kubenswrapper[4946]: I1128 10:22:50.917517 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-openstack-openstack-cell1-5qp5n_3820a072-910f-4f3c-a69c-7c178e101ed3/libvirt-openstack-openstack-cell1/0.log" Nov 28 10:22:51 crc kubenswrapper[4946]: I1128 10:22:51.302842 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_59e5bd7b-a965-4219-8ae9-b5a4c48f2c0b/manila-api/0.log" Nov 28 10:22:51 crc kubenswrapper[4946]: I1128 10:22:51.346136 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_a9b648b9-5f57-40e5-a36f-3a4e23a21dd1/probe/0.log" Nov 28 10:22:51 crc kubenswrapper[4946]: I1128 10:22:51.360391 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_a9b648b9-5f57-40e5-a36f-3a4e23a21dd1/manila-scheduler/0.log" Nov 28 10:22:51 crc kubenswrapper[4946]: I1128 10:22:51.573321 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_59e5bd7b-a965-4219-8ae9-b5a4c48f2c0b/manila-api-log/0.log" Nov 28 10:22:51 crc kubenswrapper[4946]: I1128 10:22:51.580638 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-c6c9f9658-b6wk7_5a0ae4a4-be53-42c7-a400-7d49ea62d95d/keystone-api/0.log" Nov 28 10:22:51 crc kubenswrapper[4946]: I1128 10:22:51.590739 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_c8ecc468-f0db-41b1-a847-7ac2fdf26b37/manila-share/0.log" Nov 28 10:22:51 crc kubenswrapper[4946]: I1128 10:22:51.617030 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_c8ecc468-f0db-41b1-a847-7ac2fdf26b37/probe/0.log" Nov 28 10:22:52 crc kubenswrapper[4946]: I1128 10:22:52.076278 4946 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-dhcp-openstack-openstack-cell1-8m7gr_a9c2a77c-92e6-4a14-886a-feaeaa5db35f/neutron-dhcp-openstack-openstack-cell1/0.log" Nov 28 10:22:52 crc kubenswrapper[4946]: I1128 10:22:52.119749 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-66757bf5bf-g455z_69c4c930-04a2-4502-8802-01d04186e378/neutron-httpd/0.log" Nov 28 10:22:52 crc kubenswrapper[4946]: I1128 10:22:52.334641 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-openstack-openstack-cell1-npk87_e2ad6585-a804-4048-b736-57d443b5d6cc/neutron-metadata-openstack-openstack-cell1/0.log" Nov 28 10:22:52 crc kubenswrapper[4946]: I1128 10:22:52.542039 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-openstack-openstack-networker-qjh49_9cc5f29c-b756-4566-b796-a63b368f7578/neutron-metadata-openstack-openstack-networker/0.log" Nov 28 10:22:52 crc kubenswrapper[4946]: I1128 10:22:52.593212 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-66757bf5bf-g455z_69c4c930-04a2-4502-8802-01d04186e378/neutron-api/0.log" Nov 28 10:22:52 crc kubenswrapper[4946]: I1128 10:22:52.645184 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-sriov-openstack-openstack-cell1-wgh6r_9caf1aa9-e1aa-406f-a3a3-15e60bb53fb5/neutron-sriov-openstack-openstack-cell1/0.log" Nov 28 10:22:52 crc kubenswrapper[4946]: I1128 10:22:52.984310 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_713eb85e-5541-4b11-aa8f-c0eb39a3596c/nova-api-api/0.log" Nov 28 10:22:53 crc kubenswrapper[4946]: I1128 10:22:53.106426 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_2ef4e01f-b481-4b66-9032-794f0179ce67/nova-cell0-conductor-conductor/0.log" Nov 28 10:22:53 crc kubenswrapper[4946]: I1128 10:22:53.209681 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_713eb85e-5541-4b11-aa8f-c0eb39a3596c/nova-api-log/0.log" Nov 28 10:22:53 crc kubenswrapper[4946]: I1128 10:22:53.346313 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_86aef0b3-1873-47bf-aab0-257c77b2bbe6/nova-cell1-conductor-conductor/0.log" Nov 28 10:22:53 crc kubenswrapper[4946]: I1128 10:22:53.521690 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_0ab31999-7efc-4936-97c5-bb8592f61595/nova-cell1-novncproxy-novncproxy/0.log" Nov 28 10:22:53 crc kubenswrapper[4946]: I1128 10:22:53.651992 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellbcspl_84956b58-bb77-4d59-9bab-5112c6660a05/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1/0.log" Nov 28 10:22:53 crc kubenswrapper[4946]: I1128 10:22:53.789027 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-openstack-openstack-cell1-tdfft_6b73ea39-1838-4f8b-b183-041d61c8c457/nova-cell1-openstack-openstack-cell1/0.log" Nov 28 10:22:53 crc kubenswrapper[4946]: I1128 10:22:53.863519 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_fcdfa2ec-3907-4ddd-be31-e87f640bf0d1/nova-metadata-log/0.log" Nov 28 10:22:54 crc kubenswrapper[4946]: I1128 10:22:54.019788 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_fcdfa2ec-3907-4ddd-be31-e87f640bf0d1/nova-metadata-metadata/0.log" Nov 28 10:22:54 
crc kubenswrapper[4946]: I1128 10:22:54.175169 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_fc39fcf1-2b4b-4630-9bf1-7482ffc4a262/nova-scheduler-scheduler/0.log" Nov 28 10:22:54 crc kubenswrapper[4946]: I1128 10:22:54.242788 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a5bdfa1c-6408-40f5-a9db-4b991fd2b022/mysql-bootstrap/0.log" Nov 28 10:22:54 crc kubenswrapper[4946]: I1128 10:22:54.422804 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a5bdfa1c-6408-40f5-a9db-4b991fd2b022/mysql-bootstrap/0.log" Nov 28 10:22:54 crc kubenswrapper[4946]: I1128 10:22:54.452104 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_1a323bad-127f-44c5-8d32-9a7c53deceea/mysql-bootstrap/0.log" Nov 28 10:22:54 crc kubenswrapper[4946]: I1128 10:22:54.479771 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a5bdfa1c-6408-40f5-a9db-4b991fd2b022/galera/0.log" Nov 28 10:22:54 crc kubenswrapper[4946]: I1128 10:22:54.684652 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_1a323bad-127f-44c5-8d32-9a7c53deceea/mysql-bootstrap/0.log" Nov 28 10:22:54 crc kubenswrapper[4946]: I1128 10:22:54.742567 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_771bf4b7-953b-4d1d-9afe-fc93ff4f2341/openstackclient/0.log" Nov 28 10:22:54 crc kubenswrapper[4946]: I1128 10:22:54.790953 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_1a323bad-127f-44c5-8d32-9a7c53deceea/galera/0.log" Nov 28 10:22:54 crc kubenswrapper[4946]: I1128 10:22:54.890103 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_602749e9-4345-4dcc-a573-f6fe0101fcba/openstack-network-exporter/0.log" Nov 28 10:22:54 crc kubenswrapper[4946]: I1128 10:22:54.932540 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_602749e9-4345-4dcc-a573-f6fe0101fcba/ovn-northd/0.log" Nov 28 10:22:55 crc kubenswrapper[4946]: I1128 10:22:55.215643 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-openstack-openstack-cell1-79jwx_86c84ce0-94fa-40be-8bcc-e6408a9e4411/ovn-openstack-openstack-cell1/0.log" Nov 28 10:22:55 crc kubenswrapper[4946]: I1128 10:22:55.326029 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-openstack-openstack-networker-s2jsm_507d2a47-1976-44de-b9d7-ba27223d3441/ovn-openstack-openstack-networker/0.log" Nov 28 10:22:55 crc kubenswrapper[4946]: I1128 10:22:55.345750 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_ed69d76d-986f-4ef9-b0a2-920c71c2b72a/openstack-network-exporter/0.log" Nov 28 10:22:55 crc kubenswrapper[4946]: I1128 10:22:55.470303 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_ed69d76d-986f-4ef9-b0a2-920c71c2b72a/ovsdbserver-nb/0.log" Nov 28 10:22:55 crc kubenswrapper[4946]: I1128 10:22:55.561854 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_76686c34-5f17-4139-aed5-05658e66a812/openstack-network-exporter/0.log" Nov 28 10:22:55 crc kubenswrapper[4946]: I1128 10:22:55.579386 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_76686c34-5f17-4139-aed5-05658e66a812/ovsdbserver-nb/0.log" Nov 28 10:22:55 crc 
kubenswrapper[4946]: I1128 10:22:55.681493 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_10a36adb-4aa2-49a7-a5b2-8903d8089a26/openstack-network-exporter/0.log" Nov 28 10:22:55 crc kubenswrapper[4946]: I1128 10:22:55.761427 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_10a36adb-4aa2-49a7-a5b2-8903d8089a26/ovsdbserver-nb/0.log" Nov 28 10:22:55 crc kubenswrapper[4946]: I1128 10:22:55.902807 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_ccf5d58f-8529-453c-b840-d5839d294d38/openstack-network-exporter/0.log" Nov 28 10:22:55 crc kubenswrapper[4946]: I1128 10:22:55.916911 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_ccf5d58f-8529-453c-b840-d5839d294d38/ovsdbserver-sb/0.log" Nov 28 10:22:56 crc kubenswrapper[4946]: I1128 10:22:56.037374 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_c283f69f-56de-48e3-a4f5-c1fc7f8497ec/openstack-network-exporter/0.log" Nov 28 10:22:56 crc kubenswrapper[4946]: I1128 10:22:56.076505 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_c283f69f-56de-48e3-a4f5-c1fc7f8497ec/ovsdbserver-sb/0.log" Nov 28 10:22:56 crc kubenswrapper[4946]: I1128 10:22:56.259121 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_8b3da77b-9e83-49fe-a2de-d7e88f24cc7e/ovsdbserver-sb/0.log" Nov 28 10:22:56 crc kubenswrapper[4946]: I1128 10:22:56.272327 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_8b3da77b-9e83-49fe-a2de-d7e88f24cc7e/openstack-network-exporter/0.log" Nov 28 10:22:56 crc kubenswrapper[4946]: I1128 10:22:56.589635 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_pre-adoption-validation-openstack-pre-adoption-openstack-c4lsdg_d9fbbfa5-fb4b-4302-9d24-909f0eda1732/pre-adoption-validation-openstack-pre-adoption-openstack-cell1/0.log" Nov 28 10:22:56 crc kubenswrapper[4946]: I1128 10:22:56.622207 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-77d5c54d5b-8wbwm_6f01cd93-53ff-47a5-89e3-d9354feeb065/placement-api/0.log" Nov 28 10:22:56 crc kubenswrapper[4946]: I1128 10:22:56.689508 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-77d5c54d5b-8wbwm_6f01cd93-53ff-47a5-89e3-d9354feeb065/placement-log/0.log" Nov 28 10:22:56 crc kubenswrapper[4946]: I1128 10:22:56.785967 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_pre-adoption-validation-openstack-pre-adoption-openstack-n6ggmp_f293b1f4-ce1d-4962-8528-cc59f1a70093/pre-adoption-validation-openstack-pre-adoption-openstack-networ/0.log" Nov 28 10:22:56 crc kubenswrapper[4946]: I1128 10:22:56.870617 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_a56c363b-0694-4f6b-882a-adb92c93e6e5/init-config-reloader/0.log" Nov 28 10:22:57 crc kubenswrapper[4946]: I1128 10:22:57.057070 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_a56c363b-0694-4f6b-882a-adb92c93e6e5/config-reloader/0.log" Nov 28 10:22:57 crc kubenswrapper[4946]: I1128 10:22:57.058564 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_a56c363b-0694-4f6b-882a-adb92c93e6e5/prometheus/0.log" Nov 28 10:22:57 crc kubenswrapper[4946]: I1128 10:22:57.112833 4946 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_a56c363b-0694-4f6b-882a-adb92c93e6e5/init-config-reloader/0.log" Nov 28 10:22:57 crc kubenswrapper[4946]: I1128 10:22:57.131619 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_a56c363b-0694-4f6b-882a-adb92c93e6e5/thanos-sidecar/0.log" Nov 28 10:22:57 crc kubenswrapper[4946]: I1128 10:22:57.277614 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d/setup-container/0.log" Nov 28 10:22:57 crc kubenswrapper[4946]: I1128 10:22:57.458416 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d/setup-container/0.log" Nov 28 10:22:57 crc kubenswrapper[4946]: I1128 10:22:57.486791 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_148d9ff9-fee6-4cc3-ba3f-2357ae5fe59d/rabbitmq/0.log" Nov 28 10:22:57 crc kubenswrapper[4946]: I1128 10:22:57.558687 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_d90b1a28-1f52-437a-96b4-069f14a01e01/setup-container/0.log" Nov 28 10:22:57 crc kubenswrapper[4946]: I1128 10:22:57.773931 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_d90b1a28-1f52-437a-96b4-069f14a01e01/setup-container/0.log" Nov 28 10:22:57 crc kubenswrapper[4946]: I1128 10:22:57.816109 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-openstack-openstack-cell1-bwtxd_4a5f9cfa-ebf0-4fd9-820a-ed7dd65ed518/reboot-os-openstack-openstack-cell1/0.log" Nov 28 10:22:57 crc kubenswrapper[4946]: I1128 10:22:57.874428 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_d90b1a28-1f52-437a-96b4-069f14a01e01/rabbitmq/0.log" Nov 28 10:22:58 crc kubenswrapper[4946]: I1128 10:22:58.008753 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-openstack-openstack-networker-7k778_febda310-f4a5-44c9-99b3-d7c31f0496a3/reboot-os-openstack-openstack-networker/0.log" Nov 28 10:22:58 crc kubenswrapper[4946]: I1128 10:22:58.137146 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-openstack-openstack-cell1-tv77g_4d4e1e94-5489-464d-9a89-38c5931beaa6/run-os-openstack-openstack-cell1/0.log" Nov 28 10:22:58 crc kubenswrapper[4946]: I1128 10:22:58.213036 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-openstack-openstack-networker-qtbl9_e967dc40-00bf-470c-9d3d-3d57af669882/run-os-openstack-openstack-networker/0.log" Nov 28 10:22:58 crc kubenswrapper[4946]: I1128 10:22:58.357886 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-openstack-2jvc9_49f9db9c-f1e9-48d1-80fe-e77b98c6271b/ssh-known-hosts-openstack/0.log" Nov 28 10:22:58 crc kubenswrapper[4946]: I1128 10:22:58.634939 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-openstack-openstack-cell1-8pscp_c4819513-a41f-4a93-b8b5-9587c67a0832/telemetry-openstack-openstack-cell1/0.log" Nov 28 10:22:58 crc kubenswrapper[4946]: I1128 10:22:58.728881 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_1e51973b-27b2-4f5f-9073-0ba9d14f9593/tempest-tests-tempest-tests-runner/0.log" Nov 28 10:22:58 crc kubenswrapper[4946]: I1128 10:22:58.772117 4946 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_176939d9-3e4b-4bb6-8e2a-a8a4de32d7a6/test-operator-logs-container/0.log" Nov 28 10:22:58 crc kubenswrapper[4946]: I1128 10:22:58.953863 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tripleo-cleanup-tripleo-cleanup-openstack-cell1-6tcp2_041f147a-ac40-4d08-8953-4ba399c7159c/tripleo-cleanup-tripleo-cleanup-openstack-cell1/0.log" Nov 28 10:22:59 crc kubenswrapper[4946]: I1128 10:22:59.065210 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tripleo-cleanup-tripleo-cleanup-openstack-networker-tq28p_e36e55e7-6d10-4f5d-bc7b-75c4b164e3b7/tripleo-cleanup-tripleo-cleanup-openstack-networker/0.log" Nov 28 10:22:59 crc kubenswrapper[4946]: I1128 10:22:59.131561 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-openstack-openstack-cell1-jx5qh_81d6f1ef-6551-4711-b21f-6af3aabc8d83/validate-network-openstack-openstack-cell1/0.log" Nov 28 10:22:59 crc kubenswrapper[4946]: I1128 10:22:59.253450 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-openstack-openstack-networker-pdfg5_03b3ceb1-7fc7-4224-8dd4-cff5c557f005/validate-network-openstack-openstack-networker/0.log" Nov 28 10:22:59 crc kubenswrapper[4946]: I1128 10:22:59.989605 4946 scope.go:117] "RemoveContainer" containerID="38a9f5347a2073fa4297c759e731174d04a4e640bf09f416b3f764353bbe2b18" Nov 28 10:22:59 crc kubenswrapper[4946]: E1128 10:22:59.989953 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 10:23:12 crc kubenswrapper[4946]: I1128 10:23:12.989541 4946 scope.go:117] "RemoveContainer" containerID="38a9f5347a2073fa4297c759e731174d04a4e640bf09f416b3f764353bbe2b18" Nov 28 10:23:12 crc kubenswrapper[4946]: E1128 10:23:12.990192 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 10:23:14 crc kubenswrapper[4946]: I1128 10:23:14.794904 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_90119a9d-5f8c-4da1-8695-89da8ff356e9/memcached/0.log" Nov 28 10:23:18 crc kubenswrapper[4946]: I1128 10:23:18.432592 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8jxpw"] Nov 28 10:23:18 crc kubenswrapper[4946]: E1128 10:23:18.433731 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6654607a-6f62-44ba-a4a8-e59152a57571" containerName="container-00" Nov 28 10:23:18 crc kubenswrapper[4946]: I1128 10:23:18.433750 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="6654607a-6f62-44ba-a4a8-e59152a57571" containerName="container-00" Nov 28 10:23:18 crc kubenswrapper[4946]: I1128 10:23:18.434015 4946 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="6654607a-6f62-44ba-a4a8-e59152a57571" containerName="container-00" Nov 28 10:23:18 crc kubenswrapper[4946]: I1128 10:23:18.437158 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8jxpw" Nov 28 10:23:18 crc kubenswrapper[4946]: I1128 10:23:18.445658 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8jxpw"] Nov 28 10:23:18 crc kubenswrapper[4946]: I1128 10:23:18.543244 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvpnq\" (UniqueName: \"kubernetes.io/projected/8f5db34d-f19e-4a6b-87fd-49e34a121f86-kube-api-access-zvpnq\") pod \"redhat-operators-8jxpw\" (UID: \"8f5db34d-f19e-4a6b-87fd-49e34a121f86\") " pod="openshift-marketplace/redhat-operators-8jxpw" Nov 28 10:23:18 crc kubenswrapper[4946]: I1128 10:23:18.543553 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f5db34d-f19e-4a6b-87fd-49e34a121f86-catalog-content\") pod \"redhat-operators-8jxpw\" (UID: \"8f5db34d-f19e-4a6b-87fd-49e34a121f86\") " pod="openshift-marketplace/redhat-operators-8jxpw" Nov 28 10:23:18 crc kubenswrapper[4946]: I1128 10:23:18.543650 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f5db34d-f19e-4a6b-87fd-49e34a121f86-utilities\") pod \"redhat-operators-8jxpw\" (UID: \"8f5db34d-f19e-4a6b-87fd-49e34a121f86\") " pod="openshift-marketplace/redhat-operators-8jxpw" Nov 28 10:23:18 crc kubenswrapper[4946]: I1128 10:23:18.645648 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f5db34d-f19e-4a6b-87fd-49e34a121f86-catalog-content\") pod \"redhat-operators-8jxpw\" (UID: \"8f5db34d-f19e-4a6b-87fd-49e34a121f86\") " pod="openshift-marketplace/redhat-operators-8jxpw" Nov 28 10:23:18 crc kubenswrapper[4946]: I1128 10:23:18.645733 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f5db34d-f19e-4a6b-87fd-49e34a121f86-utilities\") pod \"redhat-operators-8jxpw\" (UID: \"8f5db34d-f19e-4a6b-87fd-49e34a121f86\") " pod="openshift-marketplace/redhat-operators-8jxpw" Nov 28 10:23:18 crc kubenswrapper[4946]: I1128 10:23:18.645912 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvpnq\" (UniqueName: \"kubernetes.io/projected/8f5db34d-f19e-4a6b-87fd-49e34a121f86-kube-api-access-zvpnq\") pod \"redhat-operators-8jxpw\" (UID: \"8f5db34d-f19e-4a6b-87fd-49e34a121f86\") " pod="openshift-marketplace/redhat-operators-8jxpw" Nov 28 10:23:18 crc kubenswrapper[4946]: I1128 10:23:18.646398 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f5db34d-f19e-4a6b-87fd-49e34a121f86-utilities\") pod \"redhat-operators-8jxpw\" (UID: \"8f5db34d-f19e-4a6b-87fd-49e34a121f86\") " pod="openshift-marketplace/redhat-operators-8jxpw" Nov 28 10:23:18 crc kubenswrapper[4946]: I1128 10:23:18.646726 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f5db34d-f19e-4a6b-87fd-49e34a121f86-catalog-content\") pod \"redhat-operators-8jxpw\" (UID: \"8f5db34d-f19e-4a6b-87fd-49e34a121f86\") " 
pod="openshift-marketplace/redhat-operators-8jxpw" Nov 28 10:23:18 crc kubenswrapper[4946]: I1128 10:23:18.676665 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvpnq\" (UniqueName: \"kubernetes.io/projected/8f5db34d-f19e-4a6b-87fd-49e34a121f86-kube-api-access-zvpnq\") pod \"redhat-operators-8jxpw\" (UID: \"8f5db34d-f19e-4a6b-87fd-49e34a121f86\") " pod="openshift-marketplace/redhat-operators-8jxpw" Nov 28 10:23:18 crc kubenswrapper[4946]: I1128 10:23:18.767053 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8jxpw" Nov 28 10:23:19 crc kubenswrapper[4946]: I1128 10:23:19.233755 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8jxpw"] Nov 28 10:23:19 crc kubenswrapper[4946]: I1128 10:23:19.857207 4946 generic.go:334] "Generic (PLEG): container finished" podID="8f5db34d-f19e-4a6b-87fd-49e34a121f86" containerID="0d20a0941ca84d83def7b59a48a01f8ba34631c85c4d4b9fcdb1e0b24d086264" exitCode=0 Nov 28 10:23:19 crc kubenswrapper[4946]: I1128 10:23:19.857303 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8jxpw" event={"ID":"8f5db34d-f19e-4a6b-87fd-49e34a121f86","Type":"ContainerDied","Data":"0d20a0941ca84d83def7b59a48a01f8ba34631c85c4d4b9fcdb1e0b24d086264"} Nov 28 10:23:19 crc kubenswrapper[4946]: I1128 10:23:19.857500 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8jxpw" event={"ID":"8f5db34d-f19e-4a6b-87fd-49e34a121f86","Type":"ContainerStarted","Data":"be6d9eca1c3ae5d6134b4c3c02c01ffc8d4e742c18b4dabcdb6f2d57a7920f85"} Nov 28 10:23:19 crc kubenswrapper[4946]: I1128 10:23:19.859193 4946 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 10:23:21 crc kubenswrapper[4946]: I1128 10:23:21.885841 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8jxpw" event={"ID":"8f5db34d-f19e-4a6b-87fd-49e34a121f86","Type":"ContainerStarted","Data":"e7e86897c2b3699b5416b04fcc16f21e9e4b32b09253fbf74c853fa169b20325"} Nov 28 10:23:22 crc kubenswrapper[4946]: I1128 10:23:22.672642 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbeptt2h_dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8/util/0.log" Nov 28 10:23:22 crc kubenswrapper[4946]: I1128 10:23:22.846145 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbeptt2h_dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8/pull/0.log" Nov 28 10:23:22 crc kubenswrapper[4946]: I1128 10:23:22.880801 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbeptt2h_dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8/pull/0.log" Nov 28 10:23:23 crc kubenswrapper[4946]: I1128 10:23:23.194805 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbeptt2h_dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8/util/0.log" Nov 28 10:23:23 crc kubenswrapper[4946]: I1128 10:23:23.213967 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbeptt2h_dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8/pull/0.log" Nov 28 10:23:23 crc kubenswrapper[4946]: I1128 10:23:23.221373 4946 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbeptt2h_dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8/util/0.log" Nov 28 10:23:23 crc kubenswrapper[4946]: I1128 10:23:23.366852 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b45623fa79426ee6d52ccc2f61ed894b37aa2fb70e5ce0cf390950ffbeptt2h_dd4b3b0d-443a-4fb4-b7e3-3a013f946ab8/extract/0.log" Nov 28 10:23:23 crc kubenswrapper[4946]: I1128 10:23:23.416558 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b64f4fb85-2l7gl_353bb4c2-bff5-4749-a149-8a856803b84b/kube-rbac-proxy/0.log" Nov 28 10:23:23 crc kubenswrapper[4946]: I1128 10:23:23.585320 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b64f4fb85-2l7gl_353bb4c2-bff5-4749-a149-8a856803b84b/manager/0.log" Nov 28 10:23:23 crc kubenswrapper[4946]: I1128 10:23:23.706975 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6b7f75547b-g6gbz_d4188f46-2979-4b90-bfa0-37962da6e3c7/kube-rbac-proxy/0.log" Nov 28 10:23:23 crc kubenswrapper[4946]: I1128 10:23:23.779127 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-955677c94-hk9d2_3fe2b1f1-7ef0-4cc3-9cf1-adefe5dead46/kube-rbac-proxy/0.log" Nov 28 10:23:23 crc kubenswrapper[4946]: I1128 10:23:23.804375 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6b7f75547b-g6gbz_d4188f46-2979-4b90-bfa0-37962da6e3c7/manager/0.log" Nov 28 10:23:23 crc kubenswrapper[4946]: I1128 10:23:23.910856 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-955677c94-hk9d2_3fe2b1f1-7ef0-4cc3-9cf1-adefe5dead46/manager/0.log" Nov 28 10:23:23 crc kubenswrapper[4946]: I1128 10:23:23.917529 4946 generic.go:334] "Generic (PLEG): container finished" podID="8f5db34d-f19e-4a6b-87fd-49e34a121f86" containerID="e7e86897c2b3699b5416b04fcc16f21e9e4b32b09253fbf74c853fa169b20325" exitCode=0 Nov 28 10:23:23 crc kubenswrapper[4946]: I1128 10:23:23.917596 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8jxpw" event={"ID":"8f5db34d-f19e-4a6b-87fd-49e34a121f86","Type":"ContainerDied","Data":"e7e86897c2b3699b5416b04fcc16f21e9e4b32b09253fbf74c853fa169b20325"} Nov 28 10:23:23 crc kubenswrapper[4946]: I1128 10:23:23.984526 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-589cbd6b5b-7p86n_dc11bd96-48e8-4613-80e6-ce3b518cea8d/kube-rbac-proxy/0.log" Nov 28 10:23:23 crc kubenswrapper[4946]: I1128 10:23:23.990691 4946 scope.go:117] "RemoveContainer" containerID="38a9f5347a2073fa4297c759e731174d04a4e640bf09f416b3f764353bbe2b18" Nov 28 10:23:23 crc kubenswrapper[4946]: E1128 10:23:23.991112 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 10:23:24 crc kubenswrapper[4946]: I1128 
10:23:24.182580 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-589cbd6b5b-7p86n_dc11bd96-48e8-4613-80e6-ce3b518cea8d/manager/0.log" Nov 28 10:23:24 crc kubenswrapper[4946]: I1128 10:23:24.205939 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5b77f656f-zwd8k_1b3d3306-e302-4fe9-b50c-8295275ed28c/kube-rbac-proxy/0.log" Nov 28 10:23:24 crc kubenswrapper[4946]: I1128 10:23:24.251218 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5b77f656f-zwd8k_1b3d3306-e302-4fe9-b50c-8295275ed28c/manager/0.log" Nov 28 10:23:24 crc kubenswrapper[4946]: I1128 10:23:24.405227 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5d494799bf-5hbz6_f1ac7d28-f59d-44e5-aa3e-c6da338fca84/kube-rbac-proxy/0.log" Nov 28 10:23:24 crc kubenswrapper[4946]: I1128 10:23:24.422410 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5d494799bf-5hbz6_f1ac7d28-f59d-44e5-aa3e-c6da338fca84/manager/0.log" Nov 28 10:23:24 crc kubenswrapper[4946]: I1128 10:23:24.579607 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-dhvm2_1c7d18a6-2067-4736-a42f-074f2672a841/kube-rbac-proxy/0.log" Nov 28 10:23:24 crc kubenswrapper[4946]: I1128 10:23:24.689101 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-67cb4dc6d4-pvrsb_e3d1cda6-c2ac-40fc-8f40-2e4b89dbca81/kube-rbac-proxy/0.log" Nov 28 10:23:24 crc kubenswrapper[4946]: I1128 10:23:24.843692 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-67cb4dc6d4-pvrsb_e3d1cda6-c2ac-40fc-8f40-2e4b89dbca81/manager/0.log" Nov 28 10:23:24 crc kubenswrapper[4946]: I1128 10:23:24.896978 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-dhvm2_1c7d18a6-2067-4736-a42f-074f2672a841/manager/0.log" Nov 28 10:23:24 crc kubenswrapper[4946]: I1128 10:23:24.928157 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8jxpw" event={"ID":"8f5db34d-f19e-4a6b-87fd-49e34a121f86","Type":"ContainerStarted","Data":"60a1fb6dae633bbcdde49ccbbc95f9125a9b933d5f2ef6490e72fa99a74fbb5a"} Nov 28 10:23:24 crc kubenswrapper[4946]: I1128 10:23:24.948888 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8jxpw" podStartSLOduration=2.453678869 podStartE2EDuration="6.948872936s" podCreationTimestamp="2025-11-28 10:23:18 +0000 UTC" firstStartedPulling="2025-11-28 10:23:19.858962638 +0000 UTC m=+12654.237027749" lastFinishedPulling="2025-11-28 10:23:24.354156705 +0000 UTC m=+12658.732221816" observedRunningTime="2025-11-28 10:23:24.945760179 +0000 UTC m=+12659.323825290" watchObservedRunningTime="2025-11-28 10:23:24.948872936 +0000 UTC m=+12659.326938047" Nov 28 10:23:24 crc kubenswrapper[4946]: I1128 10:23:24.972624 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7b4567c7cf-sqh4v_2f759a24-d58b-4aed-8e14-71dec2ff2df6/kube-rbac-proxy/0.log" Nov 28 10:23:25 crc kubenswrapper[4946]: I1128 10:23:25.215031 4946 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5d499bf58b-2f2dz_cb30728b-6cd4-4d80-8e1a-bc979410fad6/kube-rbac-proxy/0.log" Nov 28 10:23:25 crc kubenswrapper[4946]: I1128 10:23:25.240694 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7b4567c7cf-sqh4v_2f759a24-d58b-4aed-8e14-71dec2ff2df6/manager/0.log" Nov 28 10:23:25 crc kubenswrapper[4946]: I1128 10:23:25.308219 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5d499bf58b-2f2dz_cb30728b-6cd4-4d80-8e1a-bc979410fad6/manager/0.log" Nov 28 10:23:25 crc kubenswrapper[4946]: I1128 10:23:25.474724 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-66f4dd4bc7-9b7s9_8d84e62b-b1cf-4238-b78b-f47a9f2df3ef/kube-rbac-proxy/0.log" Nov 28 10:23:25 crc kubenswrapper[4946]: I1128 10:23:25.496707 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-66f4dd4bc7-9b7s9_8d84e62b-b1cf-4238-b78b-f47a9f2df3ef/manager/0.log" Nov 28 10:23:25 crc kubenswrapper[4946]: I1128 10:23:25.655347 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6fdcddb789-nl9r5_7aeae67c-c1be-4d23-bb62-c8798d9fe052/kube-rbac-proxy/0.log" Nov 28 10:23:25 crc kubenswrapper[4946]: I1128 10:23:25.782779 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6fdcddb789-nl9r5_7aeae67c-c1be-4d23-bb62-c8798d9fe052/manager/0.log" Nov 28 10:23:25 crc kubenswrapper[4946]: I1128 10:23:25.808835 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-mwdxn_9a41bf27-5186-4bfe-b722-87a604d851c3/kube-rbac-proxy/0.log" Nov 28 10:23:26 crc kubenswrapper[4946]: I1128 10:23:26.096085 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-64cdc6ff96-r94kz_d37965ca-e557-483c-b195-310b690d9101/kube-rbac-proxy/0.log" Nov 28 10:23:26 crc kubenswrapper[4946]: I1128 10:23:26.103333 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-mwdxn_9a41bf27-5186-4bfe-b722-87a604d851c3/manager/0.log" Nov 28 10:23:26 crc kubenswrapper[4946]: I1128 10:23:26.104275 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-64cdc6ff96-r94kz_d37965ca-e557-483c-b195-310b690d9101/manager/0.log" Nov 28 10:23:26 crc kubenswrapper[4946]: I1128 10:23:26.338838 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5d9f9695dbgtt8n_aef07b0c-aae8-48fd-a246-8b5669cccbce/manager/0.log" Nov 28 10:23:26 crc kubenswrapper[4946]: I1128 10:23:26.357082 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5d9f9695dbgtt8n_aef07b0c-aae8-48fd-a246-8b5669cccbce/kube-rbac-proxy/0.log" Nov 28 10:23:27 crc kubenswrapper[4946]: I1128 10:23:27.066156 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-56897c768d-9nzjs_b3227d8a-06db-4f44-ae26-f173c27fd3e1/kube-rbac-proxy/0.log" Nov 28 10:23:27 crc kubenswrapper[4946]: I1128 
10:23:27.102608 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-67d8f6cc56-mmr5q_a7ce130a-7c66-4b3c-9ea1-aca15bb025c0/operator/0.log" Nov 28 10:23:27 crc kubenswrapper[4946]: I1128 10:23:27.140106 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-8shgs_bb89732c-7c39-48b2-896b-bb18dc44839c/registry-server/0.log" Nov 28 10:23:27 crc kubenswrapper[4946]: I1128 10:23:27.321564 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-56897c768d-9nzjs_b3227d8a-06db-4f44-ae26-f173c27fd3e1/manager/0.log" Nov 28 10:23:27 crc kubenswrapper[4946]: I1128 10:23:27.382939 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-57988cc5b5-4mqd6_710cbde0-3645-452d-8cda-d4165c8fdd32/manager/0.log" Nov 28 10:23:27 crc kubenswrapper[4946]: I1128 10:23:27.418431 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-57988cc5b5-4mqd6_710cbde0-3645-452d-8cda-d4165c8fdd32/kube-rbac-proxy/0.log" Nov 28 10:23:27 crc kubenswrapper[4946]: I1128 10:23:27.574876 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-t5qsk_520a099d-3fd5-42f5-b883-c7a1b94dcb70/operator/0.log" Nov 28 10:23:27 crc kubenswrapper[4946]: I1128 10:23:27.634576 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-d77b94747-2nfmg_3f402388-33d3-4c7b-a1b7-26241b6de58c/kube-rbac-proxy/0.log" Nov 28 10:23:27 crc kubenswrapper[4946]: I1128 10:23:27.844196 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-d77b94747-2nfmg_3f402388-33d3-4c7b-a1b7-26241b6de58c/manager/0.log" Nov 28 10:23:27 crc kubenswrapper[4946]: I1128 10:23:27.861070 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-qfvfk_989a6399-e7f7-4b7d-bd68-4d44531b3a8e/kube-rbac-proxy/0.log" Nov 28 10:23:28 crc kubenswrapper[4946]: I1128 10:23:28.118977 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cd6c7f4c8-nqqq9_fda79d01-4f7c-4c88-8567-6c9543ec8b51/kube-rbac-proxy/0.log" Nov 28 10:23:28 crc kubenswrapper[4946]: I1128 10:23:28.133716 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-qfvfk_989a6399-e7f7-4b7d-bd68-4d44531b3a8e/manager/0.log" Nov 28 10:23:28 crc kubenswrapper[4946]: I1128 10:23:28.269144 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cd6c7f4c8-nqqq9_fda79d01-4f7c-4c88-8567-6c9543ec8b51/manager/0.log" Nov 28 10:23:28 crc kubenswrapper[4946]: I1128 10:23:28.376307 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-656dcb59d4-blpm6_836f3766-a6eb-447f-9337-fa9082bcb62b/kube-rbac-proxy/0.log" Nov 28 10:23:28 crc kubenswrapper[4946]: I1128 10:23:28.472726 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-656dcb59d4-blpm6_836f3766-a6eb-447f-9337-fa9082bcb62b/manager/0.log" Nov 28 10:23:28 crc kubenswrapper[4946]: I1128 
10:23:28.767613 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8jxpw" Nov 28 10:23:28 crc kubenswrapper[4946]: I1128 10:23:28.771339 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8jxpw" Nov 28 10:23:28 crc kubenswrapper[4946]: I1128 10:23:28.817323 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-66f75ddbcc-l7w7r_5f4eb990-cf89-4f9a-8d22-f016b8894f4f/manager/0.log" Nov 28 10:23:29 crc kubenswrapper[4946]: I1128 10:23:29.823074 4946 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8jxpw" podUID="8f5db34d-f19e-4a6b-87fd-49e34a121f86" containerName="registry-server" probeResult="failure" output=< Nov 28 10:23:29 crc kubenswrapper[4946]: timeout: failed to connect service ":50051" within 1s Nov 28 10:23:29 crc kubenswrapper[4946]: > Nov 28 10:23:36 crc kubenswrapper[4946]: I1128 10:23:36.990755 4946 scope.go:117] "RemoveContainer" containerID="38a9f5347a2073fa4297c759e731174d04a4e640bf09f416b3f764353bbe2b18" Nov 28 10:23:36 crc kubenswrapper[4946]: E1128 10:23:36.992179 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 10:23:38 crc kubenswrapper[4946]: I1128 10:23:38.823897 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8jxpw" Nov 28 10:23:38 crc kubenswrapper[4946]: I1128 10:23:38.883607 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8jxpw" Nov 28 10:23:39 crc kubenswrapper[4946]: I1128 10:23:39.061670 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8jxpw"] Nov 28 10:23:40 crc kubenswrapper[4946]: I1128 10:23:40.071801 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8jxpw" podUID="8f5db34d-f19e-4a6b-87fd-49e34a121f86" containerName="registry-server" containerID="cri-o://60a1fb6dae633bbcdde49ccbbc95f9125a9b933d5f2ef6490e72fa99a74fbb5a" gracePeriod=2 Nov 28 10:23:40 crc kubenswrapper[4946]: I1128 10:23:40.625200 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8jxpw" Nov 28 10:23:40 crc kubenswrapper[4946]: I1128 10:23:40.785319 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f5db34d-f19e-4a6b-87fd-49e34a121f86-catalog-content\") pod \"8f5db34d-f19e-4a6b-87fd-49e34a121f86\" (UID: \"8f5db34d-f19e-4a6b-87fd-49e34a121f86\") " Nov 28 10:23:40 crc kubenswrapper[4946]: I1128 10:23:40.785528 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f5db34d-f19e-4a6b-87fd-49e34a121f86-utilities\") pod \"8f5db34d-f19e-4a6b-87fd-49e34a121f86\" (UID: \"8f5db34d-f19e-4a6b-87fd-49e34a121f86\") " Nov 28 10:23:40 crc kubenswrapper[4946]: I1128 10:23:40.785560 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvpnq\" (UniqueName: \"kubernetes.io/projected/8f5db34d-f19e-4a6b-87fd-49e34a121f86-kube-api-access-zvpnq\") pod \"8f5db34d-f19e-4a6b-87fd-49e34a121f86\" (UID: \"8f5db34d-f19e-4a6b-87fd-49e34a121f86\") " Nov 28 10:23:40 crc kubenswrapper[4946]: I1128 10:23:40.786091 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f5db34d-f19e-4a6b-87fd-49e34a121f86-utilities" (OuterVolumeSpecName: "utilities") pod "8f5db34d-f19e-4a6b-87fd-49e34a121f86" (UID: "8f5db34d-f19e-4a6b-87fd-49e34a121f86"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 10:23:40 crc kubenswrapper[4946]: I1128 10:23:40.786995 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f5db34d-f19e-4a6b-87fd-49e34a121f86-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 10:23:40 crc kubenswrapper[4946]: I1128 10:23:40.790738 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f5db34d-f19e-4a6b-87fd-49e34a121f86-kube-api-access-zvpnq" (OuterVolumeSpecName: "kube-api-access-zvpnq") pod "8f5db34d-f19e-4a6b-87fd-49e34a121f86" (UID: "8f5db34d-f19e-4a6b-87fd-49e34a121f86"). InnerVolumeSpecName "kube-api-access-zvpnq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 10:23:40 crc kubenswrapper[4946]: I1128 10:23:40.888388 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvpnq\" (UniqueName: \"kubernetes.io/projected/8f5db34d-f19e-4a6b-87fd-49e34a121f86-kube-api-access-zvpnq\") on node \"crc\" DevicePath \"\"" Nov 28 10:23:40 crc kubenswrapper[4946]: I1128 10:23:40.913150 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f5db34d-f19e-4a6b-87fd-49e34a121f86-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8f5db34d-f19e-4a6b-87fd-49e34a121f86" (UID: "8f5db34d-f19e-4a6b-87fd-49e34a121f86"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 10:23:40 crc kubenswrapper[4946]: I1128 10:23:40.992180 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f5db34d-f19e-4a6b-87fd-49e34a121f86-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 10:23:41 crc kubenswrapper[4946]: I1128 10:23:41.082700 4946 generic.go:334] "Generic (PLEG): container finished" podID="8f5db34d-f19e-4a6b-87fd-49e34a121f86" containerID="60a1fb6dae633bbcdde49ccbbc95f9125a9b933d5f2ef6490e72fa99a74fbb5a" exitCode=0 Nov 28 10:23:41 crc kubenswrapper[4946]: I1128 10:23:41.082740 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8jxpw" event={"ID":"8f5db34d-f19e-4a6b-87fd-49e34a121f86","Type":"ContainerDied","Data":"60a1fb6dae633bbcdde49ccbbc95f9125a9b933d5f2ef6490e72fa99a74fbb5a"} Nov 28 10:23:41 crc kubenswrapper[4946]: I1128 10:23:41.082772 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8jxpw" event={"ID":"8f5db34d-f19e-4a6b-87fd-49e34a121f86","Type":"ContainerDied","Data":"be6d9eca1c3ae5d6134b4c3c02c01ffc8d4e742c18b4dabcdb6f2d57a7920f85"} Nov 28 10:23:41 crc kubenswrapper[4946]: I1128 10:23:41.082793 4946 scope.go:117] "RemoveContainer" containerID="60a1fb6dae633bbcdde49ccbbc95f9125a9b933d5f2ef6490e72fa99a74fbb5a" Nov 28 10:23:41 crc kubenswrapper[4946]: I1128 10:23:41.082949 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8jxpw" Nov 28 10:23:41 crc kubenswrapper[4946]: I1128 10:23:41.114028 4946 scope.go:117] "RemoveContainer" containerID="e7e86897c2b3699b5416b04fcc16f21e9e4b32b09253fbf74c853fa169b20325" Nov 28 10:23:41 crc kubenswrapper[4946]: I1128 10:23:41.115308 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8jxpw"] Nov 28 10:23:41 crc kubenswrapper[4946]: I1128 10:23:41.160498 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8jxpw"] Nov 28 10:23:41 crc kubenswrapper[4946]: I1128 10:23:41.167351 4946 scope.go:117] "RemoveContainer" containerID="0d20a0941ca84d83def7b59a48a01f8ba34631c85c4d4b9fcdb1e0b24d086264" Nov 28 10:23:41 crc kubenswrapper[4946]: I1128 10:23:41.199445 4946 scope.go:117] "RemoveContainer" containerID="60a1fb6dae633bbcdde49ccbbc95f9125a9b933d5f2ef6490e72fa99a74fbb5a" Nov 28 10:23:41 crc kubenswrapper[4946]: E1128 10:23:41.199919 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60a1fb6dae633bbcdde49ccbbc95f9125a9b933d5f2ef6490e72fa99a74fbb5a\": container with ID starting with 60a1fb6dae633bbcdde49ccbbc95f9125a9b933d5f2ef6490e72fa99a74fbb5a not found: ID does not exist" containerID="60a1fb6dae633bbcdde49ccbbc95f9125a9b933d5f2ef6490e72fa99a74fbb5a" Nov 28 10:23:41 crc kubenswrapper[4946]: I1128 10:23:41.199960 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60a1fb6dae633bbcdde49ccbbc95f9125a9b933d5f2ef6490e72fa99a74fbb5a"} err="failed to get container status \"60a1fb6dae633bbcdde49ccbbc95f9125a9b933d5f2ef6490e72fa99a74fbb5a\": rpc error: code = NotFound desc = could not find container \"60a1fb6dae633bbcdde49ccbbc95f9125a9b933d5f2ef6490e72fa99a74fbb5a\": container with ID starting with 60a1fb6dae633bbcdde49ccbbc95f9125a9b933d5f2ef6490e72fa99a74fbb5a not found: ID does not exist" Nov 28 10:23:41 crc 
kubenswrapper[4946]: I1128 10:23:41.199984 4946 scope.go:117] "RemoveContainer" containerID="e7e86897c2b3699b5416b04fcc16f21e9e4b32b09253fbf74c853fa169b20325" Nov 28 10:23:41 crc kubenswrapper[4946]: E1128 10:23:41.200293 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7e86897c2b3699b5416b04fcc16f21e9e4b32b09253fbf74c853fa169b20325\": container with ID starting with e7e86897c2b3699b5416b04fcc16f21e9e4b32b09253fbf74c853fa169b20325 not found: ID does not exist" containerID="e7e86897c2b3699b5416b04fcc16f21e9e4b32b09253fbf74c853fa169b20325" Nov 28 10:23:41 crc kubenswrapper[4946]: I1128 10:23:41.200343 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7e86897c2b3699b5416b04fcc16f21e9e4b32b09253fbf74c853fa169b20325"} err="failed to get container status \"e7e86897c2b3699b5416b04fcc16f21e9e4b32b09253fbf74c853fa169b20325\": rpc error: code = NotFound desc = could not find container \"e7e86897c2b3699b5416b04fcc16f21e9e4b32b09253fbf74c853fa169b20325\": container with ID starting with e7e86897c2b3699b5416b04fcc16f21e9e4b32b09253fbf74c853fa169b20325 not found: ID does not exist" Nov 28 10:23:41 crc kubenswrapper[4946]: I1128 10:23:41.200373 4946 scope.go:117] "RemoveContainer" containerID="0d20a0941ca84d83def7b59a48a01f8ba34631c85c4d4b9fcdb1e0b24d086264" Nov 28 10:23:41 crc kubenswrapper[4946]: E1128 10:23:41.200668 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d20a0941ca84d83def7b59a48a01f8ba34631c85c4d4b9fcdb1e0b24d086264\": container with ID starting with 0d20a0941ca84d83def7b59a48a01f8ba34631c85c4d4b9fcdb1e0b24d086264 not found: ID does not exist" containerID="0d20a0941ca84d83def7b59a48a01f8ba34631c85c4d4b9fcdb1e0b24d086264" Nov 28 10:23:41 crc kubenswrapper[4946]: I1128 10:23:41.200705 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d20a0941ca84d83def7b59a48a01f8ba34631c85c4d4b9fcdb1e0b24d086264"} err="failed to get container status \"0d20a0941ca84d83def7b59a48a01f8ba34631c85c4d4b9fcdb1e0b24d086264\": rpc error: code = NotFound desc = could not find container \"0d20a0941ca84d83def7b59a48a01f8ba34631c85c4d4b9fcdb1e0b24d086264\": container with ID starting with 0d20a0941ca84d83def7b59a48a01f8ba34631c85c4d4b9fcdb1e0b24d086264 not found: ID does not exist" Nov 28 10:23:42 crc kubenswrapper[4946]: I1128 10:23:42.006129 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f5db34d-f19e-4a6b-87fd-49e34a121f86" path="/var/lib/kubelet/pods/8f5db34d-f19e-4a6b-87fd-49e34a121f86/volumes" Nov 28 10:23:47 crc kubenswrapper[4946]: I1128 10:23:47.490770 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-5tnsl_6a09131a-f0b1-48e9-9b10-8e75e3344d3c/control-plane-machine-set-operator/0.log" Nov 28 10:23:47 crc kubenswrapper[4946]: I1128 10:23:47.663575 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-hnzrs_ee803337-53f2-4467-8f6c-602a16bda8e5/kube-rbac-proxy/0.log" Nov 28 10:23:47 crc kubenswrapper[4946]: I1128 10:23:47.686099 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-hnzrs_ee803337-53f2-4467-8f6c-602a16bda8e5/machine-api-operator/0.log" Nov 28 10:23:49 crc kubenswrapper[4946]: I1128 10:23:49.990009 4946 
scope.go:117] "RemoveContainer" containerID="38a9f5347a2073fa4297c759e731174d04a4e640bf09f416b3f764353bbe2b18" Nov 28 10:23:49 crc kubenswrapper[4946]: E1128 10:23:49.990703 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 10:24:01 crc kubenswrapper[4946]: I1128 10:24:01.242794 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-c25j6_50b43344-5fec-4bba-b8a0-298c20b951c9/cert-manager-controller/0.log" Nov 28 10:24:01 crc kubenswrapper[4946]: I1128 10:24:01.429176 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-wz2tz_4e12249a-3983-4d04-a19d-57ada923666f/cert-manager-webhook/0.log" Nov 28 10:24:01 crc kubenswrapper[4946]: I1128 10:24:01.437932 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-zr7xs_0695ed81-d836-4e66-9b94-8335a37e9d46/cert-manager-cainjector/0.log" Nov 28 10:24:01 crc kubenswrapper[4946]: I1128 10:24:01.589699 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5xg2f"] Nov 28 10:24:01 crc kubenswrapper[4946]: E1128 10:24:01.590153 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f5db34d-f19e-4a6b-87fd-49e34a121f86" containerName="extract-utilities" Nov 28 10:24:01 crc kubenswrapper[4946]: I1128 10:24:01.590169 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f5db34d-f19e-4a6b-87fd-49e34a121f86" containerName="extract-utilities" Nov 28 10:24:01 crc kubenswrapper[4946]: E1128 10:24:01.590198 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f5db34d-f19e-4a6b-87fd-49e34a121f86" containerName="registry-server" Nov 28 10:24:01 crc kubenswrapper[4946]: I1128 10:24:01.590205 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f5db34d-f19e-4a6b-87fd-49e34a121f86" containerName="registry-server" Nov 28 10:24:01 crc kubenswrapper[4946]: E1128 10:24:01.590218 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f5db34d-f19e-4a6b-87fd-49e34a121f86" containerName="extract-content" Nov 28 10:24:01 crc kubenswrapper[4946]: I1128 10:24:01.590224 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f5db34d-f19e-4a6b-87fd-49e34a121f86" containerName="extract-content" Nov 28 10:24:01 crc kubenswrapper[4946]: I1128 10:24:01.590451 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f5db34d-f19e-4a6b-87fd-49e34a121f86" containerName="registry-server" Nov 28 10:24:01 crc kubenswrapper[4946]: I1128 10:24:01.592021 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5xg2f" Nov 28 10:24:01 crc kubenswrapper[4946]: I1128 10:24:01.607734 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5xg2f"] Nov 28 10:24:01 crc kubenswrapper[4946]: I1128 10:24:01.649766 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ebed15f-8435-4c30-bfb2-f8cc7014f9ed-utilities\") pod \"community-operators-5xg2f\" (UID: \"7ebed15f-8435-4c30-bfb2-f8cc7014f9ed\") " pod="openshift-marketplace/community-operators-5xg2f" Nov 28 10:24:01 crc kubenswrapper[4946]: I1128 10:24:01.650061 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn78j\" (UniqueName: \"kubernetes.io/projected/7ebed15f-8435-4c30-bfb2-f8cc7014f9ed-kube-api-access-pn78j\") pod \"community-operators-5xg2f\" (UID: \"7ebed15f-8435-4c30-bfb2-f8cc7014f9ed\") " pod="openshift-marketplace/community-operators-5xg2f" Nov 28 10:24:01 crc kubenswrapper[4946]: I1128 10:24:01.650100 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ebed15f-8435-4c30-bfb2-f8cc7014f9ed-catalog-content\") pod \"community-operators-5xg2f\" (UID: \"7ebed15f-8435-4c30-bfb2-f8cc7014f9ed\") " pod="openshift-marketplace/community-operators-5xg2f" Nov 28 10:24:01 crc kubenswrapper[4946]: I1128 10:24:01.751684 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ebed15f-8435-4c30-bfb2-f8cc7014f9ed-utilities\") pod \"community-operators-5xg2f\" (UID: \"7ebed15f-8435-4c30-bfb2-f8cc7014f9ed\") " pod="openshift-marketplace/community-operators-5xg2f" Nov 28 10:24:01 crc kubenswrapper[4946]: I1128 10:24:01.751789 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pn78j\" (UniqueName: \"kubernetes.io/projected/7ebed15f-8435-4c30-bfb2-f8cc7014f9ed-kube-api-access-pn78j\") pod \"community-operators-5xg2f\" (UID: \"7ebed15f-8435-4c30-bfb2-f8cc7014f9ed\") " pod="openshift-marketplace/community-operators-5xg2f" Nov 28 10:24:01 crc kubenswrapper[4946]: I1128 10:24:01.751834 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ebed15f-8435-4c30-bfb2-f8cc7014f9ed-catalog-content\") pod \"community-operators-5xg2f\" (UID: \"7ebed15f-8435-4c30-bfb2-f8cc7014f9ed\") " pod="openshift-marketplace/community-operators-5xg2f" Nov 28 10:24:01 crc kubenswrapper[4946]: I1128 10:24:01.752098 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ebed15f-8435-4c30-bfb2-f8cc7014f9ed-utilities\") pod \"community-operators-5xg2f\" (UID: \"7ebed15f-8435-4c30-bfb2-f8cc7014f9ed\") " pod="openshift-marketplace/community-operators-5xg2f" Nov 28 10:24:01 crc kubenswrapper[4946]: I1128 10:24:01.752352 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ebed15f-8435-4c30-bfb2-f8cc7014f9ed-catalog-content\") pod \"community-operators-5xg2f\" (UID: \"7ebed15f-8435-4c30-bfb2-f8cc7014f9ed\") " pod="openshift-marketplace/community-operators-5xg2f" Nov 28 10:24:01 crc kubenswrapper[4946]: I1128 10:24:01.771765 4946 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-pn78j\" (UniqueName: \"kubernetes.io/projected/7ebed15f-8435-4c30-bfb2-f8cc7014f9ed-kube-api-access-pn78j\") pod \"community-operators-5xg2f\" (UID: \"7ebed15f-8435-4c30-bfb2-f8cc7014f9ed\") " pod="openshift-marketplace/community-operators-5xg2f" Nov 28 10:24:01 crc kubenswrapper[4946]: I1128 10:24:01.916989 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5xg2f" Nov 28 10:24:02 crc kubenswrapper[4946]: I1128 10:24:02.487894 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5xg2f"] Nov 28 10:24:03 crc kubenswrapper[4946]: I1128 10:24:03.362322 4946 generic.go:334] "Generic (PLEG): container finished" podID="7ebed15f-8435-4c30-bfb2-f8cc7014f9ed" containerID="093edc29a9cecf50f2bec94218c6eb84c86763b1caa4a8744d7b3cd4f3e1476b" exitCode=0 Nov 28 10:24:03 crc kubenswrapper[4946]: I1128 10:24:03.362401 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5xg2f" event={"ID":"7ebed15f-8435-4c30-bfb2-f8cc7014f9ed","Type":"ContainerDied","Data":"093edc29a9cecf50f2bec94218c6eb84c86763b1caa4a8744d7b3cd4f3e1476b"} Nov 28 10:24:03 crc kubenswrapper[4946]: I1128 10:24:03.363321 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5xg2f" event={"ID":"7ebed15f-8435-4c30-bfb2-f8cc7014f9ed","Type":"ContainerStarted","Data":"650df9f380e4f0121d32ebc25411115653a3043dc96825064d2cba24748e7286"} Nov 28 10:24:04 crc kubenswrapper[4946]: I1128 10:24:04.025115 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mvsvd"] Nov 28 10:24:04 crc kubenswrapper[4946]: I1128 10:24:04.037871 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mvsvd" Nov 28 10:24:04 crc kubenswrapper[4946]: I1128 10:24:04.041023 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mvsvd"] Nov 28 10:24:04 crc kubenswrapper[4946]: I1128 10:24:04.115055 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwq6t\" (UniqueName: \"kubernetes.io/projected/e181daa6-ce5f-4e44-85b6-e19fa4ac3d13-kube-api-access-qwq6t\") pod \"redhat-marketplace-mvsvd\" (UID: \"e181daa6-ce5f-4e44-85b6-e19fa4ac3d13\") " pod="openshift-marketplace/redhat-marketplace-mvsvd" Nov 28 10:24:04 crc kubenswrapper[4946]: I1128 10:24:04.115130 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e181daa6-ce5f-4e44-85b6-e19fa4ac3d13-catalog-content\") pod \"redhat-marketplace-mvsvd\" (UID: \"e181daa6-ce5f-4e44-85b6-e19fa4ac3d13\") " pod="openshift-marketplace/redhat-marketplace-mvsvd" Nov 28 10:24:04 crc kubenswrapper[4946]: I1128 10:24:04.115154 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e181daa6-ce5f-4e44-85b6-e19fa4ac3d13-utilities\") pod \"redhat-marketplace-mvsvd\" (UID: \"e181daa6-ce5f-4e44-85b6-e19fa4ac3d13\") " pod="openshift-marketplace/redhat-marketplace-mvsvd" Nov 28 10:24:04 crc kubenswrapper[4946]: I1128 10:24:04.217337 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwq6t\" (UniqueName: \"kubernetes.io/projected/e181daa6-ce5f-4e44-85b6-e19fa4ac3d13-kube-api-access-qwq6t\") pod \"redhat-marketplace-mvsvd\" (UID: \"e181daa6-ce5f-4e44-85b6-e19fa4ac3d13\") " pod="openshift-marketplace/redhat-marketplace-mvsvd" Nov 28 10:24:04 crc kubenswrapper[4946]: I1128 10:24:04.217445 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e181daa6-ce5f-4e44-85b6-e19fa4ac3d13-catalog-content\") pod \"redhat-marketplace-mvsvd\" (UID: \"e181daa6-ce5f-4e44-85b6-e19fa4ac3d13\") " pod="openshift-marketplace/redhat-marketplace-mvsvd" Nov 28 10:24:04 crc kubenswrapper[4946]: I1128 10:24:04.217489 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e181daa6-ce5f-4e44-85b6-e19fa4ac3d13-utilities\") pod \"redhat-marketplace-mvsvd\" (UID: \"e181daa6-ce5f-4e44-85b6-e19fa4ac3d13\") " pod="openshift-marketplace/redhat-marketplace-mvsvd" Nov 28 10:24:04 crc kubenswrapper[4946]: I1128 10:24:04.218091 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e181daa6-ce5f-4e44-85b6-e19fa4ac3d13-utilities\") pod \"redhat-marketplace-mvsvd\" (UID: \"e181daa6-ce5f-4e44-85b6-e19fa4ac3d13\") " pod="openshift-marketplace/redhat-marketplace-mvsvd" Nov 28 10:24:04 crc kubenswrapper[4946]: I1128 10:24:04.218784 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e181daa6-ce5f-4e44-85b6-e19fa4ac3d13-catalog-content\") pod \"redhat-marketplace-mvsvd\" (UID: \"e181daa6-ce5f-4e44-85b6-e19fa4ac3d13\") " pod="openshift-marketplace/redhat-marketplace-mvsvd" Nov 28 10:24:04 crc kubenswrapper[4946]: I1128 10:24:04.237135 4946 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-qwq6t\" (UniqueName: \"kubernetes.io/projected/e181daa6-ce5f-4e44-85b6-e19fa4ac3d13-kube-api-access-qwq6t\") pod \"redhat-marketplace-mvsvd\" (UID: \"e181daa6-ce5f-4e44-85b6-e19fa4ac3d13\") " pod="openshift-marketplace/redhat-marketplace-mvsvd" Nov 28 10:24:04 crc kubenswrapper[4946]: I1128 10:24:04.361945 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mvsvd" Nov 28 10:24:04 crc kubenswrapper[4946]: I1128 10:24:04.942963 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mvsvd"] Nov 28 10:24:04 crc kubenswrapper[4946]: I1128 10:24:04.990852 4946 scope.go:117] "RemoveContainer" containerID="38a9f5347a2073fa4297c759e731174d04a4e640bf09f416b3f764353bbe2b18" Nov 28 10:24:04 crc kubenswrapper[4946]: E1128 10:24:04.991178 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 10:24:05 crc kubenswrapper[4946]: I1128 10:24:05.391572 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5xg2f" event={"ID":"7ebed15f-8435-4c30-bfb2-f8cc7014f9ed","Type":"ContainerStarted","Data":"cc60dcd005d69c1fe23bdda8448e09047589431a4a56d1ed26f76a9581be6edc"} Nov 28 10:24:05 crc kubenswrapper[4946]: I1128 10:24:05.396746 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mvsvd" event={"ID":"e181daa6-ce5f-4e44-85b6-e19fa4ac3d13","Type":"ContainerStarted","Data":"98bcaf31840aeb9d39536ec4c097c649d0218f036b81771fe7f8cfdc74acb0d2"} Nov 28 10:24:05 crc kubenswrapper[4946]: I1128 10:24:05.396802 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mvsvd" event={"ID":"e181daa6-ce5f-4e44-85b6-e19fa4ac3d13","Type":"ContainerStarted","Data":"106a61aa0d3546b1b55273c348a724e8c54603a17c580b79e54911f1c126d4fc"} Nov 28 10:24:06 crc kubenswrapper[4946]: I1128 10:24:06.428815 4946 generic.go:334] "Generic (PLEG): container finished" podID="e181daa6-ce5f-4e44-85b6-e19fa4ac3d13" containerID="98bcaf31840aeb9d39536ec4c097c649d0218f036b81771fe7f8cfdc74acb0d2" exitCode=0 Nov 28 10:24:06 crc kubenswrapper[4946]: I1128 10:24:06.429893 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mvsvd" event={"ID":"e181daa6-ce5f-4e44-85b6-e19fa4ac3d13","Type":"ContainerDied","Data":"98bcaf31840aeb9d39536ec4c097c649d0218f036b81771fe7f8cfdc74acb0d2"} Nov 28 10:24:06 crc kubenswrapper[4946]: I1128 10:24:06.438134 4946 generic.go:334] "Generic (PLEG): container finished" podID="7ebed15f-8435-4c30-bfb2-f8cc7014f9ed" containerID="cc60dcd005d69c1fe23bdda8448e09047589431a4a56d1ed26f76a9581be6edc" exitCode=0 Nov 28 10:24:06 crc kubenswrapper[4946]: I1128 10:24:06.438182 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5xg2f" event={"ID":"7ebed15f-8435-4c30-bfb2-f8cc7014f9ed","Type":"ContainerDied","Data":"cc60dcd005d69c1fe23bdda8448e09047589431a4a56d1ed26f76a9581be6edc"} Nov 28 10:24:07 crc kubenswrapper[4946]: I1128 10:24:07.450635 4946 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-marketplace-mvsvd" event={"ID":"e181daa6-ce5f-4e44-85b6-e19fa4ac3d13","Type":"ContainerStarted","Data":"753e2573d48e155a91ab6c65cc7910e789dea9b11cece897d4b7c83fc9d3f788"} Nov 28 10:24:07 crc kubenswrapper[4946]: I1128 10:24:07.453482 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5xg2f" event={"ID":"7ebed15f-8435-4c30-bfb2-f8cc7014f9ed","Type":"ContainerStarted","Data":"383f7f933ed2d84542b9f2fd514604d8f1ad121b1540bc267638db1fa480d039"} Nov 28 10:24:07 crc kubenswrapper[4946]: I1128 10:24:07.483295 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5xg2f" podStartSLOduration=2.617322088 podStartE2EDuration="6.483274489s" podCreationTimestamp="2025-11-28 10:24:01 +0000 UTC" firstStartedPulling="2025-11-28 10:24:03.36474078 +0000 UTC m=+12697.742805901" lastFinishedPulling="2025-11-28 10:24:07.230693191 +0000 UTC m=+12701.608758302" observedRunningTime="2025-11-28 10:24:07.48170359 +0000 UTC m=+12701.859768701" watchObservedRunningTime="2025-11-28 10:24:07.483274489 +0000 UTC m=+12701.861339600" Nov 28 10:24:08 crc kubenswrapper[4946]: I1128 10:24:08.464659 4946 generic.go:334] "Generic (PLEG): container finished" podID="e181daa6-ce5f-4e44-85b6-e19fa4ac3d13" containerID="753e2573d48e155a91ab6c65cc7910e789dea9b11cece897d4b7c83fc9d3f788" exitCode=0 Nov 28 10:24:08 crc kubenswrapper[4946]: I1128 10:24:08.464841 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mvsvd" event={"ID":"e181daa6-ce5f-4e44-85b6-e19fa4ac3d13","Type":"ContainerDied","Data":"753e2573d48e155a91ab6c65cc7910e789dea9b11cece897d4b7c83fc9d3f788"} Nov 28 10:24:10 crc kubenswrapper[4946]: I1128 10:24:10.502828 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mvsvd" event={"ID":"e181daa6-ce5f-4e44-85b6-e19fa4ac3d13","Type":"ContainerStarted","Data":"b3a9a3516d722cbe8cfb586312f7c080a41dd59f897d31b8a7e933596fe21b63"} Nov 28 10:24:10 crc kubenswrapper[4946]: I1128 10:24:10.532501 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mvsvd" podStartSLOduration=4.660926964 podStartE2EDuration="7.532483677s" podCreationTimestamp="2025-11-28 10:24:03 +0000 UTC" firstStartedPulling="2025-11-28 10:24:06.435001308 +0000 UTC m=+12700.813066419" lastFinishedPulling="2025-11-28 10:24:09.306558021 +0000 UTC m=+12703.684623132" observedRunningTime="2025-11-28 10:24:10.521897265 +0000 UTC m=+12704.899962376" watchObservedRunningTime="2025-11-28 10:24:10.532483677 +0000 UTC m=+12704.910548788" Nov 28 10:24:11 crc kubenswrapper[4946]: I1128 10:24:11.917306 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5xg2f" Nov 28 10:24:11 crc kubenswrapper[4946]: I1128 10:24:11.917576 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5xg2f" Nov 28 10:24:12 crc kubenswrapper[4946]: I1128 10:24:12.017429 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5xg2f" Nov 28 10:24:12 crc kubenswrapper[4946]: I1128 10:24:12.579639 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5xg2f" Nov 28 10:24:13 crc kubenswrapper[4946]: I1128 10:24:13.582599 4946 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5xg2f"] Nov 28 10:24:14 crc kubenswrapper[4946]: I1128 10:24:14.362520 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mvsvd" Nov 28 10:24:14 crc kubenswrapper[4946]: I1128 10:24:14.362898 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mvsvd" Nov 28 10:24:14 crc kubenswrapper[4946]: I1128 10:24:14.425417 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mvsvd" Nov 28 10:24:14 crc kubenswrapper[4946]: I1128 10:24:14.557988 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5xg2f" podUID="7ebed15f-8435-4c30-bfb2-f8cc7014f9ed" containerName="registry-server" containerID="cri-o://383f7f933ed2d84542b9f2fd514604d8f1ad121b1540bc267638db1fa480d039" gracePeriod=2 Nov 28 10:24:14 crc kubenswrapper[4946]: I1128 10:24:14.621071 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mvsvd" Nov 28 10:24:15 crc kubenswrapper[4946]: I1128 10:24:15.149662 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5xg2f" Nov 28 10:24:15 crc kubenswrapper[4946]: I1128 10:24:15.276641 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ebed15f-8435-4c30-bfb2-f8cc7014f9ed-utilities\") pod \"7ebed15f-8435-4c30-bfb2-f8cc7014f9ed\" (UID: \"7ebed15f-8435-4c30-bfb2-f8cc7014f9ed\") " Nov 28 10:24:15 crc kubenswrapper[4946]: I1128 10:24:15.277047 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pn78j\" (UniqueName: \"kubernetes.io/projected/7ebed15f-8435-4c30-bfb2-f8cc7014f9ed-kube-api-access-pn78j\") pod \"7ebed15f-8435-4c30-bfb2-f8cc7014f9ed\" (UID: \"7ebed15f-8435-4c30-bfb2-f8cc7014f9ed\") " Nov 28 10:24:15 crc kubenswrapper[4946]: I1128 10:24:15.278091 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ebed15f-8435-4c30-bfb2-f8cc7014f9ed-catalog-content\") pod \"7ebed15f-8435-4c30-bfb2-f8cc7014f9ed\" (UID: \"7ebed15f-8435-4c30-bfb2-f8cc7014f9ed\") " Nov 28 10:24:15 crc kubenswrapper[4946]: I1128 10:24:15.277722 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ebed15f-8435-4c30-bfb2-f8cc7014f9ed-utilities" (OuterVolumeSpecName: "utilities") pod "7ebed15f-8435-4c30-bfb2-f8cc7014f9ed" (UID: "7ebed15f-8435-4c30-bfb2-f8cc7014f9ed"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 10:24:15 crc kubenswrapper[4946]: I1128 10:24:15.278706 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ebed15f-8435-4c30-bfb2-f8cc7014f9ed-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 10:24:15 crc kubenswrapper[4946]: I1128 10:24:15.284384 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ebed15f-8435-4c30-bfb2-f8cc7014f9ed-kube-api-access-pn78j" (OuterVolumeSpecName: "kube-api-access-pn78j") pod "7ebed15f-8435-4c30-bfb2-f8cc7014f9ed" (UID: "7ebed15f-8435-4c30-bfb2-f8cc7014f9ed"). 
InnerVolumeSpecName "kube-api-access-pn78j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 10:24:15 crc kubenswrapper[4946]: I1128 10:24:15.322733 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ebed15f-8435-4c30-bfb2-f8cc7014f9ed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7ebed15f-8435-4c30-bfb2-f8cc7014f9ed" (UID: "7ebed15f-8435-4c30-bfb2-f8cc7014f9ed"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 10:24:15 crc kubenswrapper[4946]: I1128 10:24:15.380483 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ebed15f-8435-4c30-bfb2-f8cc7014f9ed-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 10:24:15 crc kubenswrapper[4946]: I1128 10:24:15.380515 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pn78j\" (UniqueName: \"kubernetes.io/projected/7ebed15f-8435-4c30-bfb2-f8cc7014f9ed-kube-api-access-pn78j\") on node \"crc\" DevicePath \"\"" Nov 28 10:24:15 crc kubenswrapper[4946]: I1128 10:24:15.558525 4946 generic.go:334] "Generic (PLEG): container finished" podID="7ebed15f-8435-4c30-bfb2-f8cc7014f9ed" containerID="383f7f933ed2d84542b9f2fd514604d8f1ad121b1540bc267638db1fa480d039" exitCode=0 Nov 28 10:24:15 crc kubenswrapper[4946]: I1128 10:24:15.558605 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5xg2f" Nov 28 10:24:15 crc kubenswrapper[4946]: I1128 10:24:15.558626 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5xg2f" event={"ID":"7ebed15f-8435-4c30-bfb2-f8cc7014f9ed","Type":"ContainerDied","Data":"383f7f933ed2d84542b9f2fd514604d8f1ad121b1540bc267638db1fa480d039"} Nov 28 10:24:15 crc kubenswrapper[4946]: I1128 10:24:15.559806 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5xg2f" event={"ID":"7ebed15f-8435-4c30-bfb2-f8cc7014f9ed","Type":"ContainerDied","Data":"650df9f380e4f0121d32ebc25411115653a3043dc96825064d2cba24748e7286"} Nov 28 10:24:15 crc kubenswrapper[4946]: I1128 10:24:15.559828 4946 scope.go:117] "RemoveContainer" containerID="383f7f933ed2d84542b9f2fd514604d8f1ad121b1540bc267638db1fa480d039" Nov 28 10:24:15 crc kubenswrapper[4946]: I1128 10:24:15.580414 4946 scope.go:117] "RemoveContainer" containerID="cc60dcd005d69c1fe23bdda8448e09047589431a4a56d1ed26f76a9581be6edc" Nov 28 10:24:15 crc kubenswrapper[4946]: I1128 10:24:15.596516 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5xg2f"] Nov 28 10:24:15 crc kubenswrapper[4946]: I1128 10:24:15.604666 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5xg2f"] Nov 28 10:24:15 crc kubenswrapper[4946]: I1128 10:24:15.630139 4946 scope.go:117] "RemoveContainer" containerID="093edc29a9cecf50f2bec94218c6eb84c86763b1caa4a8744d7b3cd4f3e1476b" Nov 28 10:24:15 crc kubenswrapper[4946]: I1128 10:24:15.667877 4946 scope.go:117] "RemoveContainer" containerID="383f7f933ed2d84542b9f2fd514604d8f1ad121b1540bc267638db1fa480d039" Nov 28 10:24:15 crc kubenswrapper[4946]: E1128 10:24:15.668166 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"383f7f933ed2d84542b9f2fd514604d8f1ad121b1540bc267638db1fa480d039\": container with ID starting with 
383f7f933ed2d84542b9f2fd514604d8f1ad121b1540bc267638db1fa480d039 not found: ID does not exist" containerID="383f7f933ed2d84542b9f2fd514604d8f1ad121b1540bc267638db1fa480d039" Nov 28 10:24:15 crc kubenswrapper[4946]: I1128 10:24:15.668196 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"383f7f933ed2d84542b9f2fd514604d8f1ad121b1540bc267638db1fa480d039"} err="failed to get container status \"383f7f933ed2d84542b9f2fd514604d8f1ad121b1540bc267638db1fa480d039\": rpc error: code = NotFound desc = could not find container \"383f7f933ed2d84542b9f2fd514604d8f1ad121b1540bc267638db1fa480d039\": container with ID starting with 383f7f933ed2d84542b9f2fd514604d8f1ad121b1540bc267638db1fa480d039 not found: ID does not exist" Nov 28 10:24:15 crc kubenswrapper[4946]: I1128 10:24:15.668215 4946 scope.go:117] "RemoveContainer" containerID="cc60dcd005d69c1fe23bdda8448e09047589431a4a56d1ed26f76a9581be6edc" Nov 28 10:24:15 crc kubenswrapper[4946]: E1128 10:24:15.668385 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc60dcd005d69c1fe23bdda8448e09047589431a4a56d1ed26f76a9581be6edc\": container with ID starting with cc60dcd005d69c1fe23bdda8448e09047589431a4a56d1ed26f76a9581be6edc not found: ID does not exist" containerID="cc60dcd005d69c1fe23bdda8448e09047589431a4a56d1ed26f76a9581be6edc" Nov 28 10:24:15 crc kubenswrapper[4946]: I1128 10:24:15.668410 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc60dcd005d69c1fe23bdda8448e09047589431a4a56d1ed26f76a9581be6edc"} err="failed to get container status \"cc60dcd005d69c1fe23bdda8448e09047589431a4a56d1ed26f76a9581be6edc\": rpc error: code = NotFound desc = could not find container \"cc60dcd005d69c1fe23bdda8448e09047589431a4a56d1ed26f76a9581be6edc\": container with ID starting with cc60dcd005d69c1fe23bdda8448e09047589431a4a56d1ed26f76a9581be6edc not found: ID does not exist" Nov 28 10:24:15 crc kubenswrapper[4946]: I1128 10:24:15.668428 4946 scope.go:117] "RemoveContainer" containerID="093edc29a9cecf50f2bec94218c6eb84c86763b1caa4a8744d7b3cd4f3e1476b" Nov 28 10:24:15 crc kubenswrapper[4946]: E1128 10:24:15.668646 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"093edc29a9cecf50f2bec94218c6eb84c86763b1caa4a8744d7b3cd4f3e1476b\": container with ID starting with 093edc29a9cecf50f2bec94218c6eb84c86763b1caa4a8744d7b3cd4f3e1476b not found: ID does not exist" containerID="093edc29a9cecf50f2bec94218c6eb84c86763b1caa4a8744d7b3cd4f3e1476b" Nov 28 10:24:15 crc kubenswrapper[4946]: I1128 10:24:15.668671 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"093edc29a9cecf50f2bec94218c6eb84c86763b1caa4a8744d7b3cd4f3e1476b"} err="failed to get container status \"093edc29a9cecf50f2bec94218c6eb84c86763b1caa4a8744d7b3cd4f3e1476b\": rpc error: code = NotFound desc = could not find container \"093edc29a9cecf50f2bec94218c6eb84c86763b1caa4a8744d7b3cd4f3e1476b\": container with ID starting with 093edc29a9cecf50f2bec94218c6eb84c86763b1caa4a8744d7b3cd4f3e1476b not found: ID does not exist" Nov 28 10:24:15 crc kubenswrapper[4946]: I1128 10:24:15.947470 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-bzbsm_a7636476-2648-4dcb-8349-15d10c0f5664/nmstate-console-plugin/0.log" Nov 28 10:24:16 crc kubenswrapper[4946]: I1128 
10:24:16.003632 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ebed15f-8435-4c30-bfb2-f8cc7014f9ed" path="/var/lib/kubelet/pods/7ebed15f-8435-4c30-bfb2-f8cc7014f9ed/volumes" Nov 28 10:24:16 crc kubenswrapper[4946]: I1128 10:24:16.125390 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-pbwhg_cda20c55-deee-4488-af64-283f20a5679f/nmstate-handler/0.log" Nov 28 10:24:16 crc kubenswrapper[4946]: I1128 10:24:16.152760 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-w7xkb_da83efd2-a141-4a4d-86c6-cbf48bbc47d9/kube-rbac-proxy/0.log" Nov 28 10:24:16 crc kubenswrapper[4946]: I1128 10:24:16.173150 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-w7xkb_da83efd2-a141-4a4d-86c6-cbf48bbc47d9/nmstate-metrics/0.log" Nov 28 10:24:16 crc kubenswrapper[4946]: I1128 10:24:16.388756 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-xnjdw_4c8ab8df-c0e1-476a-a5aa-edc322f5c62c/nmstate-webhook/0.log" Nov 28 10:24:16 crc kubenswrapper[4946]: I1128 10:24:16.428924 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-586jl_3c0f21c1-4c34-4a8a-9533-2fe283ff8d16/nmstate-operator/0.log" Nov 28 10:24:16 crc kubenswrapper[4946]: I1128 10:24:16.576229 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mvsvd"] Nov 28 10:24:16 crc kubenswrapper[4946]: I1128 10:24:16.576481 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mvsvd" podUID="e181daa6-ce5f-4e44-85b6-e19fa4ac3d13" containerName="registry-server" containerID="cri-o://b3a9a3516d722cbe8cfb586312f7c080a41dd59f897d31b8a7e933596fe21b63" gracePeriod=2 Nov 28 10:24:17 crc kubenswrapper[4946]: I1128 10:24:17.102860 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mvsvd" Nov 28 10:24:17 crc kubenswrapper[4946]: I1128 10:24:17.111615 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwq6t\" (UniqueName: \"kubernetes.io/projected/e181daa6-ce5f-4e44-85b6-e19fa4ac3d13-kube-api-access-qwq6t\") pod \"e181daa6-ce5f-4e44-85b6-e19fa4ac3d13\" (UID: \"e181daa6-ce5f-4e44-85b6-e19fa4ac3d13\") " Nov 28 10:24:17 crc kubenswrapper[4946]: I1128 10:24:17.111802 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e181daa6-ce5f-4e44-85b6-e19fa4ac3d13-catalog-content\") pod \"e181daa6-ce5f-4e44-85b6-e19fa4ac3d13\" (UID: \"e181daa6-ce5f-4e44-85b6-e19fa4ac3d13\") " Nov 28 10:24:17 crc kubenswrapper[4946]: I1128 10:24:17.111911 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e181daa6-ce5f-4e44-85b6-e19fa4ac3d13-utilities\") pod \"e181daa6-ce5f-4e44-85b6-e19fa4ac3d13\" (UID: \"e181daa6-ce5f-4e44-85b6-e19fa4ac3d13\") " Nov 28 10:24:17 crc kubenswrapper[4946]: I1128 10:24:17.112624 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e181daa6-ce5f-4e44-85b6-e19fa4ac3d13-utilities" (OuterVolumeSpecName: "utilities") pod "e181daa6-ce5f-4e44-85b6-e19fa4ac3d13" (UID: "e181daa6-ce5f-4e44-85b6-e19fa4ac3d13"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 10:24:17 crc kubenswrapper[4946]: I1128 10:24:17.120694 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e181daa6-ce5f-4e44-85b6-e19fa4ac3d13-kube-api-access-qwq6t" (OuterVolumeSpecName: "kube-api-access-qwq6t") pod "e181daa6-ce5f-4e44-85b6-e19fa4ac3d13" (UID: "e181daa6-ce5f-4e44-85b6-e19fa4ac3d13"). InnerVolumeSpecName "kube-api-access-qwq6t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 10:24:17 crc kubenswrapper[4946]: I1128 10:24:17.160799 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e181daa6-ce5f-4e44-85b6-e19fa4ac3d13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e181daa6-ce5f-4e44-85b6-e19fa4ac3d13" (UID: "e181daa6-ce5f-4e44-85b6-e19fa4ac3d13"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 10:24:17 crc kubenswrapper[4946]: I1128 10:24:17.221068 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e181daa6-ce5f-4e44-85b6-e19fa4ac3d13-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 10:24:17 crc kubenswrapper[4946]: I1128 10:24:17.221760 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e181daa6-ce5f-4e44-85b6-e19fa4ac3d13-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 10:24:17 crc kubenswrapper[4946]: I1128 10:24:17.221850 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwq6t\" (UniqueName: \"kubernetes.io/projected/e181daa6-ce5f-4e44-85b6-e19fa4ac3d13-kube-api-access-qwq6t\") on node \"crc\" DevicePath \"\"" Nov 28 10:24:17 crc kubenswrapper[4946]: I1128 10:24:17.577609 4946 generic.go:334] "Generic (PLEG): container finished" podID="e181daa6-ce5f-4e44-85b6-e19fa4ac3d13" containerID="b3a9a3516d722cbe8cfb586312f7c080a41dd59f897d31b8a7e933596fe21b63" exitCode=0 Nov 28 10:24:17 crc kubenswrapper[4946]: I1128 10:24:17.577646 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mvsvd" event={"ID":"e181daa6-ce5f-4e44-85b6-e19fa4ac3d13","Type":"ContainerDied","Data":"b3a9a3516d722cbe8cfb586312f7c080a41dd59f897d31b8a7e933596fe21b63"} Nov 28 10:24:17 crc kubenswrapper[4946]: I1128 10:24:17.577669 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mvsvd" event={"ID":"e181daa6-ce5f-4e44-85b6-e19fa4ac3d13","Type":"ContainerDied","Data":"106a61aa0d3546b1b55273c348a724e8c54603a17c580b79e54911f1c126d4fc"} Nov 28 10:24:17 crc kubenswrapper[4946]: I1128 10:24:17.577684 4946 scope.go:117] "RemoveContainer" containerID="b3a9a3516d722cbe8cfb586312f7c080a41dd59f897d31b8a7e933596fe21b63" Nov 28 10:24:17 crc kubenswrapper[4946]: I1128 10:24:17.578018 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mvsvd" Nov 28 10:24:17 crc kubenswrapper[4946]: I1128 10:24:17.595735 4946 scope.go:117] "RemoveContainer" containerID="753e2573d48e155a91ab6c65cc7910e789dea9b11cece897d4b7c83fc9d3f788" Nov 28 10:24:17 crc kubenswrapper[4946]: I1128 10:24:17.615174 4946 scope.go:117] "RemoveContainer" containerID="98bcaf31840aeb9d39536ec4c097c649d0218f036b81771fe7f8cfdc74acb0d2" Nov 28 10:24:17 crc kubenswrapper[4946]: I1128 10:24:17.636261 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mvsvd"] Nov 28 10:24:17 crc kubenswrapper[4946]: I1128 10:24:17.646079 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mvsvd"] Nov 28 10:24:17 crc kubenswrapper[4946]: I1128 10:24:17.672052 4946 scope.go:117] "RemoveContainer" containerID="b3a9a3516d722cbe8cfb586312f7c080a41dd59f897d31b8a7e933596fe21b63" Nov 28 10:24:17 crc kubenswrapper[4946]: E1128 10:24:17.672488 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3a9a3516d722cbe8cfb586312f7c080a41dd59f897d31b8a7e933596fe21b63\": container with ID starting with b3a9a3516d722cbe8cfb586312f7c080a41dd59f897d31b8a7e933596fe21b63 not found: ID does not exist" containerID="b3a9a3516d722cbe8cfb586312f7c080a41dd59f897d31b8a7e933596fe21b63" Nov 28 10:24:17 crc kubenswrapper[4946]: I1128 10:24:17.672516 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3a9a3516d722cbe8cfb586312f7c080a41dd59f897d31b8a7e933596fe21b63"} err="failed to get container status \"b3a9a3516d722cbe8cfb586312f7c080a41dd59f897d31b8a7e933596fe21b63\": rpc error: code = NotFound desc = could not find container \"b3a9a3516d722cbe8cfb586312f7c080a41dd59f897d31b8a7e933596fe21b63\": container with ID starting with b3a9a3516d722cbe8cfb586312f7c080a41dd59f897d31b8a7e933596fe21b63 not found: ID does not exist" Nov 28 10:24:17 crc kubenswrapper[4946]: I1128 10:24:17.672537 4946 scope.go:117] "RemoveContainer" containerID="753e2573d48e155a91ab6c65cc7910e789dea9b11cece897d4b7c83fc9d3f788" Nov 28 10:24:17 crc kubenswrapper[4946]: E1128 10:24:17.672767 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"753e2573d48e155a91ab6c65cc7910e789dea9b11cece897d4b7c83fc9d3f788\": container with ID starting with 753e2573d48e155a91ab6c65cc7910e789dea9b11cece897d4b7c83fc9d3f788 not found: ID does not exist" containerID="753e2573d48e155a91ab6c65cc7910e789dea9b11cece897d4b7c83fc9d3f788" Nov 28 10:24:17 crc kubenswrapper[4946]: I1128 10:24:17.672788 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"753e2573d48e155a91ab6c65cc7910e789dea9b11cece897d4b7c83fc9d3f788"} err="failed to get container status \"753e2573d48e155a91ab6c65cc7910e789dea9b11cece897d4b7c83fc9d3f788\": rpc error: code = NotFound desc = could not find container \"753e2573d48e155a91ab6c65cc7910e789dea9b11cece897d4b7c83fc9d3f788\": container with ID starting with 753e2573d48e155a91ab6c65cc7910e789dea9b11cece897d4b7c83fc9d3f788 not found: ID does not exist" Nov 28 10:24:17 crc kubenswrapper[4946]: I1128 10:24:17.672802 4946 scope.go:117] "RemoveContainer" containerID="98bcaf31840aeb9d39536ec4c097c649d0218f036b81771fe7f8cfdc74acb0d2" Nov 28 10:24:17 crc kubenswrapper[4946]: E1128 10:24:17.673002 4946 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"98bcaf31840aeb9d39536ec4c097c649d0218f036b81771fe7f8cfdc74acb0d2\": container with ID starting with 98bcaf31840aeb9d39536ec4c097c649d0218f036b81771fe7f8cfdc74acb0d2 not found: ID does not exist" containerID="98bcaf31840aeb9d39536ec4c097c649d0218f036b81771fe7f8cfdc74acb0d2" Nov 28 10:24:17 crc kubenswrapper[4946]: I1128 10:24:17.673024 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98bcaf31840aeb9d39536ec4c097c649d0218f036b81771fe7f8cfdc74acb0d2"} err="failed to get container status \"98bcaf31840aeb9d39536ec4c097c649d0218f036b81771fe7f8cfdc74acb0d2\": rpc error: code = NotFound desc = could not find container \"98bcaf31840aeb9d39536ec4c097c649d0218f036b81771fe7f8cfdc74acb0d2\": container with ID starting with 98bcaf31840aeb9d39536ec4c097c649d0218f036b81771fe7f8cfdc74acb0d2 not found: ID does not exist" Nov 28 10:24:18 crc kubenswrapper[4946]: I1128 10:24:18.007721 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e181daa6-ce5f-4e44-85b6-e19fa4ac3d13" path="/var/lib/kubelet/pods/e181daa6-ce5f-4e44-85b6-e19fa4ac3d13/volumes" Nov 28 10:24:19 crc kubenswrapper[4946]: I1128 10:24:19.990394 4946 scope.go:117] "RemoveContainer" containerID="38a9f5347a2073fa4297c759e731174d04a4e640bf09f416b3f764353bbe2b18" Nov 28 10:24:19 crc kubenswrapper[4946]: E1128 10:24:19.990847 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 10:24:33 crc kubenswrapper[4946]: I1128 10:24:33.628728 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-z8z62_115181ac-3a10-44b9-b7b1-998e1fe24938/kube-rbac-proxy/0.log" Nov 28 10:24:33 crc kubenswrapper[4946]: I1128 10:24:33.851263 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kj7q9_8327637b-5c53-4bd6-b8bc-fbe3516bc4fc/cp-frr-files/0.log" Nov 28 10:24:33 crc kubenswrapper[4946]: I1128 10:24:33.960007 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-z8z62_115181ac-3a10-44b9-b7b1-998e1fe24938/controller/0.log" Nov 28 10:24:33 crc kubenswrapper[4946]: I1128 10:24:33.989702 4946 scope.go:117] "RemoveContainer" containerID="38a9f5347a2073fa4297c759e731174d04a4e640bf09f416b3f764353bbe2b18" Nov 28 10:24:33 crc kubenswrapper[4946]: E1128 10:24:33.990107 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 10:24:34 crc kubenswrapper[4946]: I1128 10:24:34.091141 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kj7q9_8327637b-5c53-4bd6-b8bc-fbe3516bc4fc/cp-frr-files/0.log" Nov 28 10:24:34 crc kubenswrapper[4946]: I1128 10:24:34.143514 4946 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-kj7q9_8327637b-5c53-4bd6-b8bc-fbe3516bc4fc/cp-metrics/0.log" Nov 28 10:24:34 crc kubenswrapper[4946]: I1128 10:24:34.150692 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kj7q9_8327637b-5c53-4bd6-b8bc-fbe3516bc4fc/cp-reloader/0.log" Nov 28 10:24:34 crc kubenswrapper[4946]: I1128 10:24:34.163361 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kj7q9_8327637b-5c53-4bd6-b8bc-fbe3516bc4fc/cp-reloader/0.log" Nov 28 10:24:34 crc kubenswrapper[4946]: I1128 10:24:34.360377 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kj7q9_8327637b-5c53-4bd6-b8bc-fbe3516bc4fc/cp-frr-files/0.log" Nov 28 10:24:34 crc kubenswrapper[4946]: I1128 10:24:34.399774 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kj7q9_8327637b-5c53-4bd6-b8bc-fbe3516bc4fc/cp-reloader/0.log" Nov 28 10:24:34 crc kubenswrapper[4946]: I1128 10:24:34.421598 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kj7q9_8327637b-5c53-4bd6-b8bc-fbe3516bc4fc/cp-metrics/0.log" Nov 28 10:24:34 crc kubenswrapper[4946]: I1128 10:24:34.428935 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kj7q9_8327637b-5c53-4bd6-b8bc-fbe3516bc4fc/cp-metrics/0.log" Nov 28 10:24:34 crc kubenswrapper[4946]: I1128 10:24:34.575510 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kj7q9_8327637b-5c53-4bd6-b8bc-fbe3516bc4fc/cp-frr-files/0.log" Nov 28 10:24:34 crc kubenswrapper[4946]: I1128 10:24:34.610705 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kj7q9_8327637b-5c53-4bd6-b8bc-fbe3516bc4fc/cp-reloader/0.log" Nov 28 10:24:34 crc kubenswrapper[4946]: I1128 10:24:34.617841 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kj7q9_8327637b-5c53-4bd6-b8bc-fbe3516bc4fc/cp-metrics/0.log" Nov 28 10:24:34 crc kubenswrapper[4946]: I1128 10:24:34.627479 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kj7q9_8327637b-5c53-4bd6-b8bc-fbe3516bc4fc/controller/0.log" Nov 28 10:24:34 crc kubenswrapper[4946]: I1128 10:24:34.793829 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kj7q9_8327637b-5c53-4bd6-b8bc-fbe3516bc4fc/kube-rbac-proxy/0.log" Nov 28 10:24:34 crc kubenswrapper[4946]: I1128 10:24:34.853683 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kj7q9_8327637b-5c53-4bd6-b8bc-fbe3516bc4fc/kube-rbac-proxy-frr/0.log" Nov 28 10:24:34 crc kubenswrapper[4946]: I1128 10:24:34.864725 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kj7q9_8327637b-5c53-4bd6-b8bc-fbe3516bc4fc/frr-metrics/0.log" Nov 28 10:24:35 crc kubenswrapper[4946]: I1128 10:24:35.120009 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kj7q9_8327637b-5c53-4bd6-b8bc-fbe3516bc4fc/reloader/0.log" Nov 28 10:24:35 crc kubenswrapper[4946]: I1128 10:24:35.366651 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-mwq2z_f6c5f04a-e283-4188-afe9-5ff2c46aba47/frr-k8s-webhook-server/0.log" Nov 28 10:24:35 crc kubenswrapper[4946]: I1128 10:24:35.555249 4946 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-controller-manager-8fc8b48b5-g6c68_5492c299-4681-44a7-ba80-3d4fed664c4a/manager/0.log" Nov 28 10:24:35 crc kubenswrapper[4946]: I1128 10:24:35.699726 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5d845b7ff6-rnvwq_c896810c-7445-4b8a-96a3-7be0f84f3c65/webhook-server/0.log" Nov 28 10:24:35 crc kubenswrapper[4946]: I1128 10:24:35.987165 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-fcw5r_0999fee9-3799-4c5f-8338-9ea0b670bed5/kube-rbac-proxy/0.log" Nov 28 10:24:36 crc kubenswrapper[4946]: I1128 10:24:36.772893 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-fcw5r_0999fee9-3799-4c5f-8338-9ea0b670bed5/speaker/0.log" Nov 28 10:24:38 crc kubenswrapper[4946]: I1128 10:24:38.126522 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kj7q9_8327637b-5c53-4bd6-b8bc-fbe3516bc4fc/frr/0.log" Nov 28 10:24:48 crc kubenswrapper[4946]: I1128 10:24:48.990219 4946 scope.go:117] "RemoveContainer" containerID="38a9f5347a2073fa4297c759e731174d04a4e640bf09f416b3f764353bbe2b18" Nov 28 10:24:48 crc kubenswrapper[4946]: E1128 10:24:48.991203 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 10:24:50 crc kubenswrapper[4946]: I1128 10:24:50.481894 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am794n_2857b07d-5417-4e58-9de9-cdfd55125727/util/0.log" Nov 28 10:24:50 crc kubenswrapper[4946]: I1128 10:24:50.666882 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am794n_2857b07d-5417-4e58-9de9-cdfd55125727/util/0.log" Nov 28 10:24:50 crc kubenswrapper[4946]: I1128 10:24:50.697566 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am794n_2857b07d-5417-4e58-9de9-cdfd55125727/pull/0.log" Nov 28 10:24:50 crc kubenswrapper[4946]: I1128 10:24:50.722325 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am794n_2857b07d-5417-4e58-9de9-cdfd55125727/pull/0.log" Nov 28 10:24:50 crc kubenswrapper[4946]: I1128 10:24:50.907130 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am794n_2857b07d-5417-4e58-9de9-cdfd55125727/extract/0.log" Nov 28 10:24:50 crc kubenswrapper[4946]: I1128 10:24:50.980740 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am794n_2857b07d-5417-4e58-9de9-cdfd55125727/pull/0.log" Nov 28 10:24:51 crc kubenswrapper[4946]: I1128 10:24:50.999498 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931am794n_2857b07d-5417-4e58-9de9-cdfd55125727/util/0.log" Nov 28 10:24:51 crc 
kubenswrapper[4946]: I1128 10:24:51.157547 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7sg4h_79d1ba63-f8be-449c-bd90-3e9eb0276d8e/util/0.log" Nov 28 10:24:51 crc kubenswrapper[4946]: I1128 10:24:51.288341 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7sg4h_79d1ba63-f8be-449c-bd90-3e9eb0276d8e/pull/0.log" Nov 28 10:24:51 crc kubenswrapper[4946]: I1128 10:24:51.293509 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7sg4h_79d1ba63-f8be-449c-bd90-3e9eb0276d8e/util/0.log" Nov 28 10:24:51 crc kubenswrapper[4946]: I1128 10:24:51.376315 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7sg4h_79d1ba63-f8be-449c-bd90-3e9eb0276d8e/pull/0.log" Nov 28 10:24:51 crc kubenswrapper[4946]: I1128 10:24:51.580333 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7sg4h_79d1ba63-f8be-449c-bd90-3e9eb0276d8e/pull/0.log" Nov 28 10:24:51 crc kubenswrapper[4946]: I1128 10:24:51.642041 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7sg4h_79d1ba63-f8be-449c-bd90-3e9eb0276d8e/util/0.log" Nov 28 10:24:51 crc kubenswrapper[4946]: I1128 10:24:51.670516 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7sg4h_79d1ba63-f8be-449c-bd90-3e9eb0276d8e/extract/0.log" Nov 28 10:24:51 crc kubenswrapper[4946]: I1128 10:24:51.774599 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210z28nq_ab2ed536-9dcd-49fc-be72-0bc6232e2bdb/util/0.log" Nov 28 10:24:51 crc kubenswrapper[4946]: I1128 10:24:51.969017 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210z28nq_ab2ed536-9dcd-49fc-be72-0bc6232e2bdb/pull/0.log" Nov 28 10:24:52 crc kubenswrapper[4946]: I1128 10:24:52.012223 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210z28nq_ab2ed536-9dcd-49fc-be72-0bc6232e2bdb/util/0.log" Nov 28 10:24:52 crc kubenswrapper[4946]: I1128 10:24:52.027045 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210z28nq_ab2ed536-9dcd-49fc-be72-0bc6232e2bdb/pull/0.log" Nov 28 10:24:52 crc kubenswrapper[4946]: I1128 10:24:52.223034 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210z28nq_ab2ed536-9dcd-49fc-be72-0bc6232e2bdb/pull/0.log" Nov 28 10:24:52 crc kubenswrapper[4946]: I1128 10:24:52.223426 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210z28nq_ab2ed536-9dcd-49fc-be72-0bc6232e2bdb/extract/0.log" Nov 28 10:24:52 crc kubenswrapper[4946]: I1128 10:24:52.268114 4946 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210z28nq_ab2ed536-9dcd-49fc-be72-0bc6232e2bdb/util/0.log" Nov 28 10:24:52 crc kubenswrapper[4946]: I1128 10:24:52.396165 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nbfds_0fb75378-f653-400a-8738-81376998a521/util/0.log" Nov 28 10:24:52 crc kubenswrapper[4946]: I1128 10:24:52.567179 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nbfds_0fb75378-f653-400a-8738-81376998a521/pull/0.log" Nov 28 10:24:52 crc kubenswrapper[4946]: I1128 10:24:52.586741 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nbfds_0fb75378-f653-400a-8738-81376998a521/util/0.log" Nov 28 10:24:52 crc kubenswrapper[4946]: I1128 10:24:52.613523 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nbfds_0fb75378-f653-400a-8738-81376998a521/pull/0.log" Nov 28 10:24:52 crc kubenswrapper[4946]: I1128 10:24:52.779899 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nbfds_0fb75378-f653-400a-8738-81376998a521/pull/0.log" Nov 28 10:24:52 crc kubenswrapper[4946]: I1128 10:24:52.784712 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nbfds_0fb75378-f653-400a-8738-81376998a521/util/0.log" Nov 28 10:24:52 crc kubenswrapper[4946]: I1128 10:24:52.845123 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nbfds_0fb75378-f653-400a-8738-81376998a521/extract/0.log" Nov 28 10:24:53 crc kubenswrapper[4946]: I1128 10:24:53.102004 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rx8sd_296f2f04-739f-4ba5-be15-902ec62a6d38/extract-utilities/0.log" Nov 28 10:24:53 crc kubenswrapper[4946]: I1128 10:24:53.270734 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rx8sd_296f2f04-739f-4ba5-be15-902ec62a6d38/extract-utilities/0.log" Nov 28 10:24:53 crc kubenswrapper[4946]: I1128 10:24:53.338454 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rx8sd_296f2f04-739f-4ba5-be15-902ec62a6d38/extract-content/0.log" Nov 28 10:24:53 crc kubenswrapper[4946]: I1128 10:24:53.341240 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rx8sd_296f2f04-739f-4ba5-be15-902ec62a6d38/extract-content/0.log" Nov 28 10:24:53 crc kubenswrapper[4946]: I1128 10:24:53.515540 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rx8sd_296f2f04-739f-4ba5-be15-902ec62a6d38/extract-content/0.log" Nov 28 10:24:53 crc kubenswrapper[4946]: I1128 10:24:53.520123 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rx8sd_296f2f04-739f-4ba5-be15-902ec62a6d38/extract-utilities/0.log" Nov 28 10:24:53 crc kubenswrapper[4946]: I1128 10:24:53.742841 4946 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-bd5ld_28162389-233d-44e7-a81c-c92f186e9f94/extract-utilities/0.log" Nov 28 10:24:53 crc kubenswrapper[4946]: I1128 10:24:53.944040 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bd5ld_28162389-233d-44e7-a81c-c92f186e9f94/extract-content/0.log" Nov 28 10:24:53 crc kubenswrapper[4946]: I1128 10:24:53.955656 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bd5ld_28162389-233d-44e7-a81c-c92f186e9f94/extract-utilities/0.log" Nov 28 10:24:54 crc kubenswrapper[4946]: I1128 10:24:54.034802 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bd5ld_28162389-233d-44e7-a81c-c92f186e9f94/extract-content/0.log" Nov 28 10:24:54 crc kubenswrapper[4946]: I1128 10:24:54.193437 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rx8sd_296f2f04-739f-4ba5-be15-902ec62a6d38/registry-server/0.log" Nov 28 10:24:54 crc kubenswrapper[4946]: I1128 10:24:54.229827 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bd5ld_28162389-233d-44e7-a81c-c92f186e9f94/extract-content/0.log" Nov 28 10:24:54 crc kubenswrapper[4946]: I1128 10:24:54.240300 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bd5ld_28162389-233d-44e7-a81c-c92f186e9f94/extract-utilities/0.log" Nov 28 10:24:54 crc kubenswrapper[4946]: I1128 10:24:54.466409 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-lr6sl_29fb1915-7644-4b45-87f3-ad6ffc6b289e/marketplace-operator/0.log" Nov 28 10:24:54 crc kubenswrapper[4946]: I1128 10:24:54.602020 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-h2hpw_e8d51c16-8bde-4b97-9ce2-501ddfdaf893/extract-utilities/0.log" Nov 28 10:24:54 crc kubenswrapper[4946]: I1128 10:24:54.795721 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-h2hpw_e8d51c16-8bde-4b97-9ce2-501ddfdaf893/extract-content/0.log" Nov 28 10:24:54 crc kubenswrapper[4946]: I1128 10:24:54.847281 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-h2hpw_e8d51c16-8bde-4b97-9ce2-501ddfdaf893/extract-utilities/0.log" Nov 28 10:24:54 crc kubenswrapper[4946]: I1128 10:24:54.903062 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-h2hpw_e8d51c16-8bde-4b97-9ce2-501ddfdaf893/extract-content/0.log" Nov 28 10:24:55 crc kubenswrapper[4946]: I1128 10:24:55.095160 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-h2hpw_e8d51c16-8bde-4b97-9ce2-501ddfdaf893/extract-utilities/0.log" Nov 28 10:24:55 crc kubenswrapper[4946]: I1128 10:24:55.195392 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-h2hpw_e8d51c16-8bde-4b97-9ce2-501ddfdaf893/extract-content/0.log" Nov 28 10:24:55 crc kubenswrapper[4946]: I1128 10:24:55.385203 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9s587_0c331d4e-3f84-4908-8dc4-71701be41cc0/extract-utilities/0.log" Nov 28 10:24:55 crc kubenswrapper[4946]: I1128 10:24:55.524835 4946 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-9s587_0c331d4e-3f84-4908-8dc4-71701be41cc0/extract-utilities/0.log" Nov 28 10:24:55 crc kubenswrapper[4946]: I1128 10:24:55.556486 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9s587_0c331d4e-3f84-4908-8dc4-71701be41cc0/extract-content/0.log" Nov 28 10:24:55 crc kubenswrapper[4946]: I1128 10:24:55.608937 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9s587_0c331d4e-3f84-4908-8dc4-71701be41cc0/extract-content/0.log" Nov 28 10:24:55 crc kubenswrapper[4946]: I1128 10:24:55.784298 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-h2hpw_e8d51c16-8bde-4b97-9ce2-501ddfdaf893/registry-server/0.log" Nov 28 10:24:55 crc kubenswrapper[4946]: I1128 10:24:55.812567 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9s587_0c331d4e-3f84-4908-8dc4-71701be41cc0/extract-utilities/0.log" Nov 28 10:24:55 crc kubenswrapper[4946]: I1128 10:24:55.866615 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9s587_0c331d4e-3f84-4908-8dc4-71701be41cc0/extract-content/0.log" Nov 28 10:24:56 crc kubenswrapper[4946]: I1128 10:24:56.483338 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bd5ld_28162389-233d-44e7-a81c-c92f186e9f94/registry-server/0.log" Nov 28 10:24:57 crc kubenswrapper[4946]: I1128 10:24:57.421238 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9s587_0c331d4e-3f84-4908-8dc4-71701be41cc0/registry-server/0.log" Nov 28 10:25:01 crc kubenswrapper[4946]: I1128 10:25:01.990718 4946 scope.go:117] "RemoveContainer" containerID="38a9f5347a2073fa4297c759e731174d04a4e640bf09f416b3f764353bbe2b18" Nov 28 10:25:03 crc kubenswrapper[4946]: I1128 10:25:03.049367 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerStarted","Data":"db49a87b1329c0d6fd9e6b54d7f6136f6144ec6d1c3b21297d459baf4a53d21f"} Nov 28 10:25:09 crc kubenswrapper[4946]: I1128 10:25:09.294371 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-nwc4q_e1430df3-899d-4393-8356-76efd1a21981/prometheus-operator/0.log" Nov 28 10:25:09 crc kubenswrapper[4946]: I1128 10:25:09.434890 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-66bb8447f-2vvdq_ea0a1b65-05a9-4a34-866a-5c22e1ee9235/prometheus-operator-admission-webhook/0.log" Nov 28 10:25:09 crc kubenswrapper[4946]: I1128 10:25:09.496531 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-66bb8447f-b5sqs_cebbc948-553e-4951-a3bc-b5c83e87f9ca/prometheus-operator-admission-webhook/0.log" Nov 28 10:25:09 crc kubenswrapper[4946]: I1128 10:25:09.696860 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-d8bb48f5d-v9829_38d84819-1411-47fa-92da-d212f932f6ed/operator/0.log" Nov 28 10:25:09 crc kubenswrapper[4946]: I1128 10:25:09.700065 4946 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_perses-operator-5446b9c989-tplpq_d50c9b34-ab7c-4f34-ab0a-04149dad3959/perses-operator/0.log" Nov 28 10:25:56 crc kubenswrapper[4946]: I1128 10:25:56.116951 4946 scope.go:117] "RemoveContainer" containerID="be2affc3b48e7fc34bab159cff08fec8ae65bbc29ae0a6d6bc728bb0c8a001cf" Nov 28 10:27:24 crc kubenswrapper[4946]: I1128 10:27:24.730836 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 10:27:24 crc kubenswrapper[4946]: I1128 10:27:24.731595 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 10:27:46 crc kubenswrapper[4946]: I1128 10:27:46.106096 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sm6lt"] Nov 28 10:27:46 crc kubenswrapper[4946]: E1128 10:27:46.107248 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ebed15f-8435-4c30-bfb2-f8cc7014f9ed" containerName="extract-content" Nov 28 10:27:46 crc kubenswrapper[4946]: I1128 10:27:46.107267 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ebed15f-8435-4c30-bfb2-f8cc7014f9ed" containerName="extract-content" Nov 28 10:27:46 crc kubenswrapper[4946]: E1128 10:27:46.107287 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e181daa6-ce5f-4e44-85b6-e19fa4ac3d13" containerName="extract-utilities" Nov 28 10:27:46 crc kubenswrapper[4946]: I1128 10:27:46.107297 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="e181daa6-ce5f-4e44-85b6-e19fa4ac3d13" containerName="extract-utilities" Nov 28 10:27:46 crc kubenswrapper[4946]: E1128 10:27:46.107321 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e181daa6-ce5f-4e44-85b6-e19fa4ac3d13" containerName="registry-server" Nov 28 10:27:46 crc kubenswrapper[4946]: I1128 10:27:46.107330 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="e181daa6-ce5f-4e44-85b6-e19fa4ac3d13" containerName="registry-server" Nov 28 10:27:46 crc kubenswrapper[4946]: E1128 10:27:46.107345 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ebed15f-8435-4c30-bfb2-f8cc7014f9ed" containerName="extract-utilities" Nov 28 10:27:46 crc kubenswrapper[4946]: I1128 10:27:46.107352 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ebed15f-8435-4c30-bfb2-f8cc7014f9ed" containerName="extract-utilities" Nov 28 10:27:46 crc kubenswrapper[4946]: E1128 10:27:46.107373 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ebed15f-8435-4c30-bfb2-f8cc7014f9ed" containerName="registry-server" Nov 28 10:27:46 crc kubenswrapper[4946]: I1128 10:27:46.107381 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ebed15f-8435-4c30-bfb2-f8cc7014f9ed" containerName="registry-server" Nov 28 10:27:46 crc kubenswrapper[4946]: E1128 10:27:46.107399 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e181daa6-ce5f-4e44-85b6-e19fa4ac3d13" containerName="extract-content" Nov 28 10:27:46 crc kubenswrapper[4946]: I1128 10:27:46.107406 4946 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e181daa6-ce5f-4e44-85b6-e19fa4ac3d13" containerName="extract-content" Nov 28 10:27:46 crc kubenswrapper[4946]: I1128 10:27:46.107888 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="e181daa6-ce5f-4e44-85b6-e19fa4ac3d13" containerName="registry-server" Nov 28 10:27:46 crc kubenswrapper[4946]: I1128 10:27:46.107908 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ebed15f-8435-4c30-bfb2-f8cc7014f9ed" containerName="registry-server" Nov 28 10:27:46 crc kubenswrapper[4946]: I1128 10:27:46.109840 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sm6lt" Nov 28 10:27:46 crc kubenswrapper[4946]: I1128 10:27:46.120939 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sm6lt"] Nov 28 10:27:46 crc kubenswrapper[4946]: I1128 10:27:46.157864 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ca97690-b498-404a-86af-4286224fb374-catalog-content\") pod \"certified-operators-sm6lt\" (UID: \"9ca97690-b498-404a-86af-4286224fb374\") " pod="openshift-marketplace/certified-operators-sm6lt" Nov 28 10:27:46 crc kubenswrapper[4946]: I1128 10:27:46.158020 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ca97690-b498-404a-86af-4286224fb374-utilities\") pod \"certified-operators-sm6lt\" (UID: \"9ca97690-b498-404a-86af-4286224fb374\") " pod="openshift-marketplace/certified-operators-sm6lt" Nov 28 10:27:46 crc kubenswrapper[4946]: I1128 10:27:46.158104 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n574\" (UniqueName: \"kubernetes.io/projected/9ca97690-b498-404a-86af-4286224fb374-kube-api-access-9n574\") pod \"certified-operators-sm6lt\" (UID: \"9ca97690-b498-404a-86af-4286224fb374\") " pod="openshift-marketplace/certified-operators-sm6lt" Nov 28 10:27:46 crc kubenswrapper[4946]: I1128 10:27:46.259779 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ca97690-b498-404a-86af-4286224fb374-catalog-content\") pod \"certified-operators-sm6lt\" (UID: \"9ca97690-b498-404a-86af-4286224fb374\") " pod="openshift-marketplace/certified-operators-sm6lt" Nov 28 10:27:46 crc kubenswrapper[4946]: I1128 10:27:46.259926 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ca97690-b498-404a-86af-4286224fb374-utilities\") pod \"certified-operators-sm6lt\" (UID: \"9ca97690-b498-404a-86af-4286224fb374\") " pod="openshift-marketplace/certified-operators-sm6lt" Nov 28 10:27:46 crc kubenswrapper[4946]: I1128 10:27:46.260012 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n574\" (UniqueName: \"kubernetes.io/projected/9ca97690-b498-404a-86af-4286224fb374-kube-api-access-9n574\") pod \"certified-operators-sm6lt\" (UID: \"9ca97690-b498-404a-86af-4286224fb374\") " pod="openshift-marketplace/certified-operators-sm6lt" Nov 28 10:27:46 crc kubenswrapper[4946]: I1128 10:27:46.260831 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ca97690-b498-404a-86af-4286224fb374-catalog-content\") pod 
\"certified-operators-sm6lt\" (UID: \"9ca97690-b498-404a-86af-4286224fb374\") " pod="openshift-marketplace/certified-operators-sm6lt" Nov 28 10:27:46 crc kubenswrapper[4946]: I1128 10:27:46.261091 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ca97690-b498-404a-86af-4286224fb374-utilities\") pod \"certified-operators-sm6lt\" (UID: \"9ca97690-b498-404a-86af-4286224fb374\") " pod="openshift-marketplace/certified-operators-sm6lt" Nov 28 10:27:46 crc kubenswrapper[4946]: I1128 10:27:46.282814 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n574\" (UniqueName: \"kubernetes.io/projected/9ca97690-b498-404a-86af-4286224fb374-kube-api-access-9n574\") pod \"certified-operators-sm6lt\" (UID: \"9ca97690-b498-404a-86af-4286224fb374\") " pod="openshift-marketplace/certified-operators-sm6lt" Nov 28 10:27:46 crc kubenswrapper[4946]: I1128 10:27:46.434191 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sm6lt" Nov 28 10:27:46 crc kubenswrapper[4946]: W1128 10:27:46.944642 4946 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ca97690_b498_404a_86af_4286224fb374.slice/crio-dee597d98fa2a1a21c43c4033a7989a6c1a9b174a44f6845a9716e12666c3d0c WatchSource:0}: Error finding container dee597d98fa2a1a21c43c4033a7989a6c1a9b174a44f6845a9716e12666c3d0c: Status 404 returned error can't find the container with id dee597d98fa2a1a21c43c4033a7989a6c1a9b174a44f6845a9716e12666c3d0c Nov 28 10:27:46 crc kubenswrapper[4946]: I1128 10:27:46.955735 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sm6lt"] Nov 28 10:27:47 crc kubenswrapper[4946]: I1128 10:27:47.227787 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sm6lt" event={"ID":"9ca97690-b498-404a-86af-4286224fb374","Type":"ContainerStarted","Data":"813cb988f56a54ba3aa5833fcd265daf3bc828cebb9a536eeaa33ff2db835172"} Nov 28 10:27:47 crc kubenswrapper[4946]: I1128 10:27:47.229255 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sm6lt" event={"ID":"9ca97690-b498-404a-86af-4286224fb374","Type":"ContainerStarted","Data":"dee597d98fa2a1a21c43c4033a7989a6c1a9b174a44f6845a9716e12666c3d0c"} Nov 28 10:27:48 crc kubenswrapper[4946]: I1128 10:27:48.239167 4946 generic.go:334] "Generic (PLEG): container finished" podID="9ca97690-b498-404a-86af-4286224fb374" containerID="813cb988f56a54ba3aa5833fcd265daf3bc828cebb9a536eeaa33ff2db835172" exitCode=0 Nov 28 10:27:48 crc kubenswrapper[4946]: I1128 10:27:48.239255 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sm6lt" event={"ID":"9ca97690-b498-404a-86af-4286224fb374","Type":"ContainerDied","Data":"813cb988f56a54ba3aa5833fcd265daf3bc828cebb9a536eeaa33ff2db835172"} Nov 28 10:27:48 crc kubenswrapper[4946]: I1128 10:27:48.239627 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sm6lt" event={"ID":"9ca97690-b498-404a-86af-4286224fb374","Type":"ContainerStarted","Data":"2fa88740e0689c5f394aeb5d6266a707ee61edb9ed588ac001720d87efe91676"} Nov 28 10:27:49 crc kubenswrapper[4946]: I1128 10:27:49.253538 4946 generic.go:334] "Generic (PLEG): container finished" podID="9ca97690-b498-404a-86af-4286224fb374" 
containerID="2fa88740e0689c5f394aeb5d6266a707ee61edb9ed588ac001720d87efe91676" exitCode=0 Nov 28 10:27:49 crc kubenswrapper[4946]: I1128 10:27:49.253641 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sm6lt" event={"ID":"9ca97690-b498-404a-86af-4286224fb374","Type":"ContainerDied","Data":"2fa88740e0689c5f394aeb5d6266a707ee61edb9ed588ac001720d87efe91676"} Nov 28 10:27:50 crc kubenswrapper[4946]: I1128 10:27:50.270394 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sm6lt" event={"ID":"9ca97690-b498-404a-86af-4286224fb374","Type":"ContainerStarted","Data":"49fcce84a9517b04bbe73101380e69366ba9f7e63991da9958d3efd49e366e4c"} Nov 28 10:27:50 crc kubenswrapper[4946]: I1128 10:27:50.312239 4946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sm6lt" podStartSLOduration=1.699089634 podStartE2EDuration="4.312217825s" podCreationTimestamp="2025-11-28 10:27:46 +0000 UTC" firstStartedPulling="2025-11-28 10:27:47.231978489 +0000 UTC m=+12921.610043640" lastFinishedPulling="2025-11-28 10:27:49.84510668 +0000 UTC m=+12924.223171831" observedRunningTime="2025-11-28 10:27:50.29625573 +0000 UTC m=+12924.674320881" watchObservedRunningTime="2025-11-28 10:27:50.312217825 +0000 UTC m=+12924.690282946" Nov 28 10:27:54 crc kubenswrapper[4946]: I1128 10:27:54.731517 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 10:27:54 crc kubenswrapper[4946]: I1128 10:27:54.732488 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 10:27:56 crc kubenswrapper[4946]: I1128 10:27:56.434277 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sm6lt" Nov 28 10:27:56 crc kubenswrapper[4946]: I1128 10:27:56.434633 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sm6lt" Nov 28 10:27:56 crc kubenswrapper[4946]: I1128 10:27:56.489919 4946 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sm6lt" Nov 28 10:27:57 crc kubenswrapper[4946]: I1128 10:27:57.439615 4946 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sm6lt" Nov 28 10:27:57 crc kubenswrapper[4946]: I1128 10:27:57.521160 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sm6lt"] Nov 28 10:27:59 crc kubenswrapper[4946]: I1128 10:27:59.427981 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sm6lt" podUID="9ca97690-b498-404a-86af-4286224fb374" containerName="registry-server" containerID="cri-o://49fcce84a9517b04bbe73101380e69366ba9f7e63991da9958d3efd49e366e4c" gracePeriod=2 Nov 28 10:28:00 crc kubenswrapper[4946]: I1128 10:28:00.437963 4946 generic.go:334] "Generic (PLEG): container finished" 
podID="9ca97690-b498-404a-86af-4286224fb374" containerID="49fcce84a9517b04bbe73101380e69366ba9f7e63991da9958d3efd49e366e4c" exitCode=0 Nov 28 10:28:00 crc kubenswrapper[4946]: I1128 10:28:00.438056 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sm6lt" event={"ID":"9ca97690-b498-404a-86af-4286224fb374","Type":"ContainerDied","Data":"49fcce84a9517b04bbe73101380e69366ba9f7e63991da9958d3efd49e366e4c"} Nov 28 10:28:00 crc kubenswrapper[4946]: I1128 10:28:00.873514 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sm6lt" Nov 28 10:28:01 crc kubenswrapper[4946]: I1128 10:28:01.015169 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ca97690-b498-404a-86af-4286224fb374-utilities\") pod \"9ca97690-b498-404a-86af-4286224fb374\" (UID: \"9ca97690-b498-404a-86af-4286224fb374\") " Nov 28 10:28:01 crc kubenswrapper[4946]: I1128 10:28:01.015263 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9n574\" (UniqueName: \"kubernetes.io/projected/9ca97690-b498-404a-86af-4286224fb374-kube-api-access-9n574\") pod \"9ca97690-b498-404a-86af-4286224fb374\" (UID: \"9ca97690-b498-404a-86af-4286224fb374\") " Nov 28 10:28:01 crc kubenswrapper[4946]: I1128 10:28:01.015369 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ca97690-b498-404a-86af-4286224fb374-catalog-content\") pod \"9ca97690-b498-404a-86af-4286224fb374\" (UID: \"9ca97690-b498-404a-86af-4286224fb374\") " Nov 28 10:28:01 crc kubenswrapper[4946]: I1128 10:28:01.033778 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ca97690-b498-404a-86af-4286224fb374-kube-api-access-9n574" (OuterVolumeSpecName: "kube-api-access-9n574") pod "9ca97690-b498-404a-86af-4286224fb374" (UID: "9ca97690-b498-404a-86af-4286224fb374"). InnerVolumeSpecName "kube-api-access-9n574". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 10:28:01 crc kubenswrapper[4946]: I1128 10:28:01.035797 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ca97690-b498-404a-86af-4286224fb374-utilities" (OuterVolumeSpecName: "utilities") pod "9ca97690-b498-404a-86af-4286224fb374" (UID: "9ca97690-b498-404a-86af-4286224fb374"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 10:28:01 crc kubenswrapper[4946]: I1128 10:28:01.059974 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ca97690-b498-404a-86af-4286224fb374-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9ca97690-b498-404a-86af-4286224fb374" (UID: "9ca97690-b498-404a-86af-4286224fb374"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 10:28:01 crc kubenswrapper[4946]: I1128 10:28:01.117871 4946 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ca97690-b498-404a-86af-4286224fb374-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 10:28:01 crc kubenswrapper[4946]: I1128 10:28:01.117902 4946 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ca97690-b498-404a-86af-4286224fb374-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 10:28:01 crc kubenswrapper[4946]: I1128 10:28:01.117919 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9n574\" (UniqueName: \"kubernetes.io/projected/9ca97690-b498-404a-86af-4286224fb374-kube-api-access-9n574\") on node \"crc\" DevicePath \"\"" Nov 28 10:28:01 crc kubenswrapper[4946]: I1128 10:28:01.453133 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sm6lt" event={"ID":"9ca97690-b498-404a-86af-4286224fb374","Type":"ContainerDied","Data":"dee597d98fa2a1a21c43c4033a7989a6c1a9b174a44f6845a9716e12666c3d0c"} Nov 28 10:28:01 crc kubenswrapper[4946]: I1128 10:28:01.453206 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sm6lt" Nov 28 10:28:01 crc kubenswrapper[4946]: I1128 10:28:01.453591 4946 scope.go:117] "RemoveContainer" containerID="49fcce84a9517b04bbe73101380e69366ba9f7e63991da9958d3efd49e366e4c" Nov 28 10:28:01 crc kubenswrapper[4946]: I1128 10:28:01.504777 4946 scope.go:117] "RemoveContainer" containerID="2fa88740e0689c5f394aeb5d6266a707ee61edb9ed588ac001720d87efe91676" Nov 28 10:28:01 crc kubenswrapper[4946]: I1128 10:28:01.506849 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sm6lt"] Nov 28 10:28:01 crc kubenswrapper[4946]: I1128 10:28:01.517930 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sm6lt"] Nov 28 10:28:01 crc kubenswrapper[4946]: I1128 10:28:01.542632 4946 scope.go:117] "RemoveContainer" containerID="813cb988f56a54ba3aa5833fcd265daf3bc828cebb9a536eeaa33ff2db835172" Nov 28 10:28:02 crc kubenswrapper[4946]: I1128 10:28:02.004852 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ca97690-b498-404a-86af-4286224fb374" path="/var/lib/kubelet/pods/9ca97690-b498-404a-86af-4286224fb374/volumes" Nov 28 10:28:24 crc kubenswrapper[4946]: I1128 10:28:24.731234 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 10:28:24 crc kubenswrapper[4946]: I1128 10:28:24.731909 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 10:28:24 crc kubenswrapper[4946]: I1128 10:28:24.731964 4946 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" Nov 28 10:28:24 crc kubenswrapper[4946]: I1128 10:28:24.732984 4946 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"db49a87b1329c0d6fd9e6b54d7f6136f6144ec6d1c3b21297d459baf4a53d21f"} pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 10:28:24 crc kubenswrapper[4946]: I1128 10:28:24.733055 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" containerID="cri-o://db49a87b1329c0d6fd9e6b54d7f6136f6144ec6d1c3b21297d459baf4a53d21f" gracePeriod=600 Nov 28 10:28:25 crc kubenswrapper[4946]: I1128 10:28:25.738404 4946 generic.go:334] "Generic (PLEG): container finished" podID="7450befc-262f-45d1-a5f4-f445e540185b" containerID="db49a87b1329c0d6fd9e6b54d7f6136f6144ec6d1c3b21297d459baf4a53d21f" exitCode=0 Nov 28 10:28:25 crc kubenswrapper[4946]: I1128 10:28:25.738486 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerDied","Data":"db49a87b1329c0d6fd9e6b54d7f6136f6144ec6d1c3b21297d459baf4a53d21f"} Nov 28 10:28:25 crc kubenswrapper[4946]: I1128 10:28:25.739108 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerStarted","Data":"a575fea3224a3d9b6e1a17580b120eb2a3f6c201caacdabe59bfdd8991d33212"} Nov 28 10:28:25 crc kubenswrapper[4946]: I1128 10:28:25.739146 4946 scope.go:117] "RemoveContainer" containerID="38a9f5347a2073fa4297c759e731174d04a4e640bf09f416b3f764353bbe2b18" Nov 28 10:28:55 crc kubenswrapper[4946]: I1128 10:28:55.159882 4946 generic.go:334] "Generic (PLEG): container finished" podID="131948ed-47c4-49e1-837c-d24eab4a8123" containerID="9e7094be3a39a0af544b0cd47937ea7a0321c8ce4ed9069e8d2fe774a6d38966" exitCode=0 Nov 28 10:28:55 crc kubenswrapper[4946]: I1128 10:28:55.160033 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2bvwr/must-gather-nkw64" event={"ID":"131948ed-47c4-49e1-837c-d24eab4a8123","Type":"ContainerDied","Data":"9e7094be3a39a0af544b0cd47937ea7a0321c8ce4ed9069e8d2fe774a6d38966"} Nov 28 10:28:55 crc kubenswrapper[4946]: I1128 10:28:55.162825 4946 scope.go:117] "RemoveContainer" containerID="9e7094be3a39a0af544b0cd47937ea7a0321c8ce4ed9069e8d2fe774a6d38966" Nov 28 10:28:55 crc kubenswrapper[4946]: I1128 10:28:55.477975 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2bvwr_must-gather-nkw64_131948ed-47c4-49e1-837c-d24eab4a8123/gather/0.log" Nov 28 10:29:06 crc kubenswrapper[4946]: I1128 10:29:06.520316 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-2bvwr/must-gather-nkw64"] Nov 28 10:29:06 crc kubenswrapper[4946]: I1128 10:29:06.521428 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-2bvwr/must-gather-nkw64" podUID="131948ed-47c4-49e1-837c-d24eab4a8123" containerName="copy" containerID="cri-o://603eea7566ea14411e133c4bc603052fb727c53073fc2353e90a949d2fd6b58d" gracePeriod=2 Nov 28 10:29:06 crc kubenswrapper[4946]: I1128 10:29:06.530612 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-2bvwr/must-gather-nkw64"] Nov 28 
10:29:07 crc kubenswrapper[4946]: I1128 10:29:07.034003 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2bvwr_must-gather-nkw64_131948ed-47c4-49e1-837c-d24eab4a8123/copy/0.log" Nov 28 10:29:07 crc kubenswrapper[4946]: I1128 10:29:07.034829 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2bvwr/must-gather-nkw64" Nov 28 10:29:07 crc kubenswrapper[4946]: I1128 10:29:07.056203 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxbc4\" (UniqueName: \"kubernetes.io/projected/131948ed-47c4-49e1-837c-d24eab4a8123-kube-api-access-xxbc4\") pod \"131948ed-47c4-49e1-837c-d24eab4a8123\" (UID: \"131948ed-47c4-49e1-837c-d24eab4a8123\") " Nov 28 10:29:07 crc kubenswrapper[4946]: I1128 10:29:07.056303 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/131948ed-47c4-49e1-837c-d24eab4a8123-must-gather-output\") pod \"131948ed-47c4-49e1-837c-d24eab4a8123\" (UID: \"131948ed-47c4-49e1-837c-d24eab4a8123\") " Nov 28 10:29:07 crc kubenswrapper[4946]: I1128 10:29:07.067765 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/131948ed-47c4-49e1-837c-d24eab4a8123-kube-api-access-xxbc4" (OuterVolumeSpecName: "kube-api-access-xxbc4") pod "131948ed-47c4-49e1-837c-d24eab4a8123" (UID: "131948ed-47c4-49e1-837c-d24eab4a8123"). InnerVolumeSpecName "kube-api-access-xxbc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 10:29:07 crc kubenswrapper[4946]: I1128 10:29:07.158735 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxbc4\" (UniqueName: \"kubernetes.io/projected/131948ed-47c4-49e1-837c-d24eab4a8123-kube-api-access-xxbc4\") on node \"crc\" DevicePath \"\"" Nov 28 10:29:07 crc kubenswrapper[4946]: I1128 10:29:07.307564 4946 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2bvwr_must-gather-nkw64_131948ed-47c4-49e1-837c-d24eab4a8123/copy/0.log" Nov 28 10:29:07 crc kubenswrapper[4946]: I1128 10:29:07.308004 4946 generic.go:334] "Generic (PLEG): container finished" podID="131948ed-47c4-49e1-837c-d24eab4a8123" containerID="603eea7566ea14411e133c4bc603052fb727c53073fc2353e90a949d2fd6b58d" exitCode=143 Nov 28 10:29:07 crc kubenswrapper[4946]: I1128 10:29:07.308072 4946 scope.go:117] "RemoveContainer" containerID="603eea7566ea14411e133c4bc603052fb727c53073fc2353e90a949d2fd6b58d" Nov 28 10:29:07 crc kubenswrapper[4946]: I1128 10:29:07.308279 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2bvwr/must-gather-nkw64" Nov 28 10:29:07 crc kubenswrapper[4946]: I1128 10:29:07.347306 4946 scope.go:117] "RemoveContainer" containerID="9e7094be3a39a0af544b0cd47937ea7a0321c8ce4ed9069e8d2fe774a6d38966" Nov 28 10:29:07 crc kubenswrapper[4946]: I1128 10:29:07.418841 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/131948ed-47c4-49e1-837c-d24eab4a8123-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "131948ed-47c4-49e1-837c-d24eab4a8123" (UID: "131948ed-47c4-49e1-837c-d24eab4a8123"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 10:29:07 crc kubenswrapper[4946]: I1128 10:29:07.463625 4946 scope.go:117] "RemoveContainer" containerID="603eea7566ea14411e133c4bc603052fb727c53073fc2353e90a949d2fd6b58d" Nov 28 10:29:07 crc kubenswrapper[4946]: E1128 10:29:07.467613 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"603eea7566ea14411e133c4bc603052fb727c53073fc2353e90a949d2fd6b58d\": container with ID starting with 603eea7566ea14411e133c4bc603052fb727c53073fc2353e90a949d2fd6b58d not found: ID does not exist" containerID="603eea7566ea14411e133c4bc603052fb727c53073fc2353e90a949d2fd6b58d" Nov 28 10:29:07 crc kubenswrapper[4946]: I1128 10:29:07.467664 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"603eea7566ea14411e133c4bc603052fb727c53073fc2353e90a949d2fd6b58d"} err="failed to get container status \"603eea7566ea14411e133c4bc603052fb727c53073fc2353e90a949d2fd6b58d\": rpc error: code = NotFound desc = could not find container \"603eea7566ea14411e133c4bc603052fb727c53073fc2353e90a949d2fd6b58d\": container with ID starting with 603eea7566ea14411e133c4bc603052fb727c53073fc2353e90a949d2fd6b58d not found: ID does not exist" Nov 28 10:29:07 crc kubenswrapper[4946]: I1128 10:29:07.467689 4946 scope.go:117] "RemoveContainer" containerID="9e7094be3a39a0af544b0cd47937ea7a0321c8ce4ed9069e8d2fe774a6d38966" Nov 28 10:29:07 crc kubenswrapper[4946]: E1128 10:29:07.471561 4946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e7094be3a39a0af544b0cd47937ea7a0321c8ce4ed9069e8d2fe774a6d38966\": container with ID starting with 9e7094be3a39a0af544b0cd47937ea7a0321c8ce4ed9069e8d2fe774a6d38966 not found: ID does not exist" containerID="9e7094be3a39a0af544b0cd47937ea7a0321c8ce4ed9069e8d2fe774a6d38966" Nov 28 10:29:07 crc kubenswrapper[4946]: I1128 10:29:07.471598 4946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e7094be3a39a0af544b0cd47937ea7a0321c8ce4ed9069e8d2fe774a6d38966"} err="failed to get container status \"9e7094be3a39a0af544b0cd47937ea7a0321c8ce4ed9069e8d2fe774a6d38966\": rpc error: code = NotFound desc = could not find container \"9e7094be3a39a0af544b0cd47937ea7a0321c8ce4ed9069e8d2fe774a6d38966\": container with ID starting with 9e7094be3a39a0af544b0cd47937ea7a0321c8ce4ed9069e8d2fe774a6d38966 not found: ID does not exist" Nov 28 10:29:07 crc kubenswrapper[4946]: I1128 10:29:07.473704 4946 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/131948ed-47c4-49e1-837c-d24eab4a8123-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 28 10:29:08 crc kubenswrapper[4946]: I1128 10:29:08.002596 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="131948ed-47c4-49e1-837c-d24eab4a8123" path="/var/lib/kubelet/pods/131948ed-47c4-49e1-837c-d24eab4a8123/volumes" Nov 28 10:30:00 crc kubenswrapper[4946]: I1128 10:30:00.208581 4946 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405430-dzvlm"] Nov 28 10:30:00 crc kubenswrapper[4946]: E1128 10:30:00.209781 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="131948ed-47c4-49e1-837c-d24eab4a8123" containerName="gather" Nov 28 10:30:00 crc kubenswrapper[4946]: I1128 10:30:00.209803 4946 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="131948ed-47c4-49e1-837c-d24eab4a8123" containerName="gather" Nov 28 10:30:00 crc kubenswrapper[4946]: E1128 10:30:00.209830 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ca97690-b498-404a-86af-4286224fb374" containerName="registry-server" Nov 28 10:30:00 crc kubenswrapper[4946]: I1128 10:30:00.209843 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ca97690-b498-404a-86af-4286224fb374" containerName="registry-server" Nov 28 10:30:00 crc kubenswrapper[4946]: E1128 10:30:00.209884 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ca97690-b498-404a-86af-4286224fb374" containerName="extract-content" Nov 28 10:30:00 crc kubenswrapper[4946]: I1128 10:30:00.209897 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ca97690-b498-404a-86af-4286224fb374" containerName="extract-content" Nov 28 10:30:00 crc kubenswrapper[4946]: E1128 10:30:00.209929 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ca97690-b498-404a-86af-4286224fb374" containerName="extract-utilities" Nov 28 10:30:00 crc kubenswrapper[4946]: I1128 10:30:00.209941 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ca97690-b498-404a-86af-4286224fb374" containerName="extract-utilities" Nov 28 10:30:00 crc kubenswrapper[4946]: E1128 10:30:00.209976 4946 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="131948ed-47c4-49e1-837c-d24eab4a8123" containerName="copy" Nov 28 10:30:00 crc kubenswrapper[4946]: I1128 10:30:00.209987 4946 state_mem.go:107] "Deleted CPUSet assignment" podUID="131948ed-47c4-49e1-837c-d24eab4a8123" containerName="copy" Nov 28 10:30:00 crc kubenswrapper[4946]: I1128 10:30:00.210328 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="131948ed-47c4-49e1-837c-d24eab4a8123" containerName="gather" Nov 28 10:30:00 crc kubenswrapper[4946]: I1128 10:30:00.210373 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="131948ed-47c4-49e1-837c-d24eab4a8123" containerName="copy" Nov 28 10:30:00 crc kubenswrapper[4946]: I1128 10:30:00.210397 4946 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ca97690-b498-404a-86af-4286224fb374" containerName="registry-server" Nov 28 10:30:00 crc kubenswrapper[4946]: I1128 10:30:00.212413 4946 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405430-dzvlm" Nov 28 10:30:00 crc kubenswrapper[4946]: I1128 10:30:00.215529 4946 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 28 10:30:00 crc kubenswrapper[4946]: I1128 10:30:00.216212 4946 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 28 10:30:00 crc kubenswrapper[4946]: I1128 10:30:00.231156 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405430-dzvlm"] Nov 28 10:30:00 crc kubenswrapper[4946]: I1128 10:30:00.284887 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1acd7129-1c03-4ced-aa23-d0c33d252170-config-volume\") pod \"collect-profiles-29405430-dzvlm\" (UID: \"1acd7129-1c03-4ced-aa23-d0c33d252170\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405430-dzvlm" Nov 28 10:30:00 crc kubenswrapper[4946]: I1128 10:30:00.284940 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27nss\" (UniqueName: \"kubernetes.io/projected/1acd7129-1c03-4ced-aa23-d0c33d252170-kube-api-access-27nss\") pod \"collect-profiles-29405430-dzvlm\" (UID: \"1acd7129-1c03-4ced-aa23-d0c33d252170\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405430-dzvlm" Nov 28 10:30:00 crc kubenswrapper[4946]: I1128 10:30:00.285118 4946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1acd7129-1c03-4ced-aa23-d0c33d252170-secret-volume\") pod \"collect-profiles-29405430-dzvlm\" (UID: \"1acd7129-1c03-4ced-aa23-d0c33d252170\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405430-dzvlm" Nov 28 10:30:00 crc kubenswrapper[4946]: I1128 10:30:00.387808 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1acd7129-1c03-4ced-aa23-d0c33d252170-config-volume\") pod \"collect-profiles-29405430-dzvlm\" (UID: \"1acd7129-1c03-4ced-aa23-d0c33d252170\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405430-dzvlm" Nov 28 10:30:00 crc kubenswrapper[4946]: I1128 10:30:00.387875 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27nss\" (UniqueName: \"kubernetes.io/projected/1acd7129-1c03-4ced-aa23-d0c33d252170-kube-api-access-27nss\") pod \"collect-profiles-29405430-dzvlm\" (UID: \"1acd7129-1c03-4ced-aa23-d0c33d252170\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405430-dzvlm" Nov 28 10:30:00 crc kubenswrapper[4946]: I1128 10:30:00.388424 4946 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1acd7129-1c03-4ced-aa23-d0c33d252170-secret-volume\") pod \"collect-profiles-29405430-dzvlm\" (UID: \"1acd7129-1c03-4ced-aa23-d0c33d252170\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405430-dzvlm" Nov 28 10:30:00 crc kubenswrapper[4946]: I1128 10:30:00.389068 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1acd7129-1c03-4ced-aa23-d0c33d252170-config-volume\") pod 
\"collect-profiles-29405430-dzvlm\" (UID: \"1acd7129-1c03-4ced-aa23-d0c33d252170\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405430-dzvlm" Nov 28 10:30:00 crc kubenswrapper[4946]: I1128 10:30:00.400345 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1acd7129-1c03-4ced-aa23-d0c33d252170-secret-volume\") pod \"collect-profiles-29405430-dzvlm\" (UID: \"1acd7129-1c03-4ced-aa23-d0c33d252170\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405430-dzvlm" Nov 28 10:30:00 crc kubenswrapper[4946]: I1128 10:30:00.421385 4946 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27nss\" (UniqueName: \"kubernetes.io/projected/1acd7129-1c03-4ced-aa23-d0c33d252170-kube-api-access-27nss\") pod \"collect-profiles-29405430-dzvlm\" (UID: \"1acd7129-1c03-4ced-aa23-d0c33d252170\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405430-dzvlm" Nov 28 10:30:00 crc kubenswrapper[4946]: I1128 10:30:00.545553 4946 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405430-dzvlm" Nov 28 10:30:01 crc kubenswrapper[4946]: I1128 10:30:01.082374 4946 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405430-dzvlm"] Nov 28 10:30:01 crc kubenswrapper[4946]: I1128 10:30:01.931348 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405430-dzvlm" event={"ID":"1acd7129-1c03-4ced-aa23-d0c33d252170","Type":"ContainerStarted","Data":"8a44cc15dd7eb6bbc0ada8b2ce0e8b9cd56d4f1a699b4684f3fa7a25f6057437"} Nov 28 10:30:01 crc kubenswrapper[4946]: I1128 10:30:01.931753 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405430-dzvlm" event={"ID":"1acd7129-1c03-4ced-aa23-d0c33d252170","Type":"ContainerStarted","Data":"e1b821a1b2c32295c602cbc6e3546aa6580248cb6c298468f53a581e999e50c4"} Nov 28 10:30:02 crc kubenswrapper[4946]: I1128 10:30:02.943213 4946 generic.go:334] "Generic (PLEG): container finished" podID="1acd7129-1c03-4ced-aa23-d0c33d252170" containerID="8a44cc15dd7eb6bbc0ada8b2ce0e8b9cd56d4f1a699b4684f3fa7a25f6057437" exitCode=0 Nov 28 10:30:02 crc kubenswrapper[4946]: I1128 10:30:02.943305 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405430-dzvlm" event={"ID":"1acd7129-1c03-4ced-aa23-d0c33d252170","Type":"ContainerDied","Data":"8a44cc15dd7eb6bbc0ada8b2ce0e8b9cd56d4f1a699b4684f3fa7a25f6057437"} Nov 28 10:30:04 crc kubenswrapper[4946]: I1128 10:30:04.354826 4946 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405430-dzvlm" Nov 28 10:30:04 crc kubenswrapper[4946]: I1128 10:30:04.479409 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1acd7129-1c03-4ced-aa23-d0c33d252170-secret-volume\") pod \"1acd7129-1c03-4ced-aa23-d0c33d252170\" (UID: \"1acd7129-1c03-4ced-aa23-d0c33d252170\") " Nov 28 10:30:04 crc kubenswrapper[4946]: I1128 10:30:04.479522 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27nss\" (UniqueName: \"kubernetes.io/projected/1acd7129-1c03-4ced-aa23-d0c33d252170-kube-api-access-27nss\") pod \"1acd7129-1c03-4ced-aa23-d0c33d252170\" (UID: \"1acd7129-1c03-4ced-aa23-d0c33d252170\") " Nov 28 10:30:04 crc kubenswrapper[4946]: I1128 10:30:04.479679 4946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1acd7129-1c03-4ced-aa23-d0c33d252170-config-volume\") pod \"1acd7129-1c03-4ced-aa23-d0c33d252170\" (UID: \"1acd7129-1c03-4ced-aa23-d0c33d252170\") " Nov 28 10:30:04 crc kubenswrapper[4946]: I1128 10:30:04.480067 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1acd7129-1c03-4ced-aa23-d0c33d252170-config-volume" (OuterVolumeSpecName: "config-volume") pod "1acd7129-1c03-4ced-aa23-d0c33d252170" (UID: "1acd7129-1c03-4ced-aa23-d0c33d252170"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 10:30:04 crc kubenswrapper[4946]: I1128 10:30:04.480261 4946 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1acd7129-1c03-4ced-aa23-d0c33d252170-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 10:30:04 crc kubenswrapper[4946]: I1128 10:30:04.484671 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1acd7129-1c03-4ced-aa23-d0c33d252170-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1acd7129-1c03-4ced-aa23-d0c33d252170" (UID: "1acd7129-1c03-4ced-aa23-d0c33d252170"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 10:30:04 crc kubenswrapper[4946]: I1128 10:30:04.485537 4946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1acd7129-1c03-4ced-aa23-d0c33d252170-kube-api-access-27nss" (OuterVolumeSpecName: "kube-api-access-27nss") pod "1acd7129-1c03-4ced-aa23-d0c33d252170" (UID: "1acd7129-1c03-4ced-aa23-d0c33d252170"). InnerVolumeSpecName "kube-api-access-27nss". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 10:30:04 crc kubenswrapper[4946]: I1128 10:30:04.581874 4946 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1acd7129-1c03-4ced-aa23-d0c33d252170-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 28 10:30:04 crc kubenswrapper[4946]: I1128 10:30:04.581909 4946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27nss\" (UniqueName: \"kubernetes.io/projected/1acd7129-1c03-4ced-aa23-d0c33d252170-kube-api-access-27nss\") on node \"crc\" DevicePath \"\"" Nov 28 10:30:04 crc kubenswrapper[4946]: I1128 10:30:04.969739 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405430-dzvlm" event={"ID":"1acd7129-1c03-4ced-aa23-d0c33d252170","Type":"ContainerDied","Data":"e1b821a1b2c32295c602cbc6e3546aa6580248cb6c298468f53a581e999e50c4"} Nov 28 10:30:04 crc kubenswrapper[4946]: I1128 10:30:04.969786 4946 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1b821a1b2c32295c602cbc6e3546aa6580248cb6c298468f53a581e999e50c4" Nov 28 10:30:04 crc kubenswrapper[4946]: I1128 10:30:04.969844 4946 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405430-dzvlm" Nov 28 10:30:05 crc kubenswrapper[4946]: I1128 10:30:05.423046 4946 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405385-x2g6l"] Nov 28 10:30:05 crc kubenswrapper[4946]: I1128 10:30:05.432443 4946 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405385-x2g6l"] Nov 28 10:30:06 crc kubenswrapper[4946]: I1128 10:30:06.011368 4946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d11c7c32-24ab-4e4b-be99-c3851c45a894" path="/var/lib/kubelet/pods/d11c7c32-24ab-4e4b-be99-c3851c45a894/volumes" Nov 28 10:30:54 crc kubenswrapper[4946]: I1128 10:30:54.731094 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 10:30:54 crc kubenswrapper[4946]: I1128 10:30:54.731929 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 10:30:56 crc kubenswrapper[4946]: I1128 10:30:56.388126 4946 scope.go:117] "RemoveContainer" containerID="5e0cbd9db30f17c3af1b7c3212f468240f63c4b319e1bc90af04083a5066ba26" Nov 28 10:31:24 crc kubenswrapper[4946]: I1128 10:31:24.731622 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 10:31:24 crc kubenswrapper[4946]: I1128 10:31:24.732575 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 10:31:54 crc kubenswrapper[4946]: I1128 10:31:54.730574 4946 patch_prober.go:28] interesting pod/machine-config-daemon-g2vhr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 10:31:54 crc kubenswrapper[4946]: I1128 10:31:54.731495 4946 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 10:31:54 crc kubenswrapper[4946]: I1128 10:31:54.731584 4946 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" Nov 28 10:31:54 crc kubenswrapper[4946]: I1128 10:31:54.732839 4946 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a575fea3224a3d9b6e1a17580b120eb2a3f6c201caacdabe59bfdd8991d33212"} pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 10:31:54 crc kubenswrapper[4946]: I1128 10:31:54.732985 4946 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" containerName="machine-config-daemon" containerID="cri-o://a575fea3224a3d9b6e1a17580b120eb2a3f6c201caacdabe59bfdd8991d33212" gracePeriod=600 Nov 28 10:31:54 crc kubenswrapper[4946]: E1128 10:31:54.891420 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b" Nov 28 10:31:55 crc kubenswrapper[4946]: I1128 10:31:55.332191 4946 generic.go:334] "Generic (PLEG): container finished" podID="7450befc-262f-45d1-a5f4-f445e540185b" containerID="a575fea3224a3d9b6e1a17580b120eb2a3f6c201caacdabe59bfdd8991d33212" exitCode=0 Nov 28 10:31:55 crc kubenswrapper[4946]: I1128 10:31:55.332258 4946 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" event={"ID":"7450befc-262f-45d1-a5f4-f445e540185b","Type":"ContainerDied","Data":"a575fea3224a3d9b6e1a17580b120eb2a3f6c201caacdabe59bfdd8991d33212"} Nov 28 10:31:55 crc kubenswrapper[4946]: I1128 10:31:55.332308 4946 scope.go:117] "RemoveContainer" containerID="db49a87b1329c0d6fd9e6b54d7f6136f6144ec6d1c3b21297d459baf4a53d21f" Nov 28 10:31:55 crc kubenswrapper[4946]: I1128 10:31:55.333868 4946 scope.go:117] "RemoveContainer" containerID="a575fea3224a3d9b6e1a17580b120eb2a3f6c201caacdabe59bfdd8991d33212" Nov 28 10:31:55 crc kubenswrapper[4946]: E1128 10:31:55.334980 4946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g2vhr_openshift-machine-config-operator(7450befc-262f-45d1-a5f4-f445e540185b)\"" pod="openshift-machine-config-operator/machine-config-daemon-g2vhr" podUID="7450befc-262f-45d1-a5f4-f445e540185b"